
Playing audio is an important part of many apps. One common trick is to fade in the volume of audio playback so we don't surprise or startle the user. This year, Apple has made this much simpler to implement using AVAudioPlayer. Let's take a look.

First we'll set up a standard AVAudioPlayer, and begin playing it at a volume of 0:

guard let asset = NSDataAsset(name: "alarm") else { print("Error Loading Audio."); return }

let player: AVAudioPlayer

do {
  player = try AVAudioPlayer(data: asset.data)
} catch { print("Error Playing."); return }

player.volume = 0
player.numberOfLoops = -1
player.play()

At this point the audio is playing but we can't hear it.

Before we check out the new feature, let's review the "old" way we might do this.

Before iOS 10, macOS 10.12, and tvOS 10, fading this audio in was, well, let's just call it "verbose":

func fadeInPlayer() {
  if player.volume <= 1 - fadeVolumeStep {
    player.volume += fadeVolumeStep
    dispatchAfterDelayHelper(time: fadeVolumeStepTime) { fadeInPlayer() }
  } else {
    player.volume = 1
  }
}


Recursive functions, GCD delays, manually managing state. Yuck.
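To put some numbers on that: suppose (hypothetically, since the constants aren't shown above) fadeVolumeStep is 0.05 and fadeVolumeStepTime is 0.075 seconds. The recursive version then steps 20 times and takes 1.5 seconds overall:

```swift
import Foundation

// Hypothetical values for the constants used by fadeInPlayer() above:
let fadeVolumeStep: Float = 0.05   // volume added per step (assumed)
let fadeVolumeStepTime = 0.075     // seconds between steps (assumed)

// The volume climbs from 0 to 1, so the fade takes (1 / step) * stepTime:
let stepCount = Int(round(1.0 / Double(fadeVolumeStep)))
let totalFadeDuration = Double(stepCount) * fadeVolumeStepTime

print(stepCount)          // 20
print(totalFadeDuration)  // 1.5
```

Keeping those constants in sync by hand is exactly the kind of bookkeeping the new API takes off our plate.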

Thankfully, there's now a better way.

Here's all it takes:

player.setVolume(1, fadeDuration: 1.5)

This single line of code will fade in the audio from our initial volume of 0 up to 1 over a period of 1.5 seconds.



Ever since 1983 when Matthew Broderick's IMSAI 8080 began speaking out loud, we've dreamed of computers that can have conversations with us.

In iOS 9, Apple added the ability to synthesize speech using the high-quality 'Alex' voice. Sadly it's only available on US devices for now, but that's sure to change. Let's try it out:

guard let voice = AVSpeechSynthesisVoice(identifier: AVSpeechSynthesisVoiceIdentifierAlex) else { return }

let synth = AVSpeechSynthesizer()
synth.delegate = self

let utter = AVSpeechUtterance(string: "Would you like to play a game?")
utter.voice = voice

synth.speakUtterance(utter)

We start by making sure 'Alex' is available, then we make a new synthesizer. Next, we create an AVSpeechUtterance and set its voice. Then, we simply tell the synthesizer to speak! Very cool.

Even cooler, we can implement one of the optional functions of AVSpeechSynthesizerDelegate to get live progress callbacks as each word is spoken. Neat!

func speechSynthesizer(synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {
  let word = (utterance.speechString as NSString).substringWithRange(characterRange)
  print("About to speak: \(word)")
}


#123: Playing Audio in the Background 🔊


Sometimes we want to play audio in our apps. It might be a podcast, a song, or a voice memo. Usually, our users will expect this audio to keep playing if they press the home button on their device. Today we'll look at how to get this working. Let's get started:

First, let's set up the basic boilerplate audio playback code:

func playAudioWithData(audioData: NSData) {
  do {
    self.player = try AVAudioPlayer(data: audioData)
  } catch {
    self.showGenericErrorAlert("Playback Failed.")
    self.player = nil
  }

  guard let player = player else {
    self.showGenericErrorAlert("Playback Failed."); return
  }

  player.delegate = self

  guard player.prepareToPlay() && player.play() else {
    self.showGenericErrorAlert("Playback Failed."); return
  }
}

func audioPlayerDidFinishPlaying(player: AVAudioPlayer, successfully flag: Bool) {
  do { try AVAudioSession.sharedInstance().setActive(false) } catch { }
  self.player = nil
}

(Making this code "safe" in Swift can get a little ugly. 😕)

Next, we'll add a function that we'll call before we begin playback. It configures our app's shared AVAudioSession to be in the "Playback" category, and then sets the audio session to be active.

func prepareForPlaybackWithData(audioData: NSData) {
  do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)

    do {
      try AVAudioSession.sharedInstance().setActive(true)
      playAudioWithData(audioData)
    } catch let error as NSError {
      showGenericErrorAlert(error.localizedDescription)
    }
  } catch let error as NSError {
    showGenericErrorAlert(error.localizedDescription)
  }
}
Finally, we'll head over to our project's Capabilities tab in Xcode and enable the "Audio, AirPlay and Picture in Picture" background mode.

Success! Now when we send our app to the background, its audio continues playing.
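If you prefer editing Info.plist directly, checking that box in the Capabilities tab is equivalent to adding the audio value to the UIBackgroundModes array:

```xml
<key>UIBackgroundModes</key>
<array>
  <string>audio</string>
</array>
```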

Today we'll take a look at how to record audio from the microphone on a user's device. Let's get started.

The first thing we'll need is an Audio Session. This will be a singleton (Bite #4), and we'll also create a property to hold our recorder:

import AVFoundation

class RecordViewController : UIViewController {
  let session = AVAudioSession.sharedInstance()
  var recorder: AVAudioRecorder?

Next, we'll create a function to start recording:

func beginRecording() {
  session.requestRecordPermission { granted in
    guard granted else { return }

    do {
      try self.session.setCategory(AVAudioSessionCategoryPlayAndRecord)
      try self.session.setActive(true)

      let recordingFileName = "recording.caf"

      guard let recordingURL = documentsDirectoryURL()?
        .URLByAppendingPathComponent(recordingFileName) else { return }

      let settings: [String : AnyObject] = [
        AVEncoderAudioQualityKey: AVAudioQuality.High.rawValue,
        AVSampleRateKey: 12000.0,
        AVNumberOfChannelsKey: 1,
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC)
      ]

      self.recorder = try AVAudioRecorder(
        URL: recordingURL,
        settings: settings
      )

      self.recorder?.record()
    } catch { }
  }
}

We'll first need to request permission from the user to record audio. Once granted, we'll try to set our Audio Session's category to PlayAndRecord, the category Apple suggests for apps that simultaneously record and play back audio.

We'll create a place for our recording to live, then assemble the settings dictionary for our recorder. We instantiate and store our AVAudioRecorder object, then tell it to start recording. Later, we'll call .stop() on it to stop recording. We can also optionally wire up a delegate to get completion callbacks.
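As a rough sketch of that teardown (stopRecording is a hypothetical name, built on the session and recorder properties from above, plus AVAudioRecorderDelegate's optional completion callback):

```swift
// Sketch only, not from the original: stop recording and release the session.
func stopRecording() {
  recorder?.stop()
  do { try session.setActive(false) } catch { }
}

// Optional AVAudioRecorderDelegate callback, fired when recording finishes:
func audioRecorderDidFinishRecording(recorder: AVAudioRecorder, successfully flag: Bool) {
  print("Finished recording, success: \(flag)")
  self.recorder = nil
}
```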

Finally, we can play back our file using AVAudioPlayer:

let audioPlayer = try AVAudioPlayer(contentsOfURL: recordingURL)