We looked at allowing our users to capture photos/videos using UIImagePickerController in Bite #83. Now we'll take things to the next level by starting to build our own custom camera view controller. Today we'll wire up all the plumbing and get a live preview on the screen. Let's get started.

func setupSession() {
  // The session is the central hub, coordinating data from input to output.
  session = AVCaptureSession()
  session.sessionPreset = AVCaptureSessionPresetPhoto

  // Grab the default hardware device for capturing video
  // (usually the back camera).
  let camera = AVCaptureDevice
    .defaultDeviceWithMediaType(AVMediaTypeVideo)

  // Wrap the device in an input, bailing out if it's unavailable.
  do { input = try AVCaptureDeviceInput(device: camera) } catch { return }

  // Ask for still images, compressed as JPEG.
  output = AVCaptureStillImageOutput()
  output.outputSettings = [ AVVideoCodecKey: AVVideoCodecJPEG ]

  // Make sure the session will actually accept our input and output.
  guard session.canAddInput(input) && session.canAddOutput(output) else { return }

  session.addInput(input)
  session.addOutput(output)

  // A preview layer renders the live camera feed into our view.
  previewLayer = AVCaptureVideoPreviewLayer(session: session)
  previewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
  previewLayer!.frame = view.bounds
  previewLayer!.connection?.videoOrientation = .Portrait

  view.layer.addSublayer(previewLayer!)

  // Start the flow of data through the session.
  session.startRunning()
}

We'll start with the "Single View Application" template in Xcode. There are a number of different objects we'll need to set up and glue together, so we'll go into our view controller and add a function called setupSession. We'll call this in viewWillAppear(animated:).
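Here's a rough sketch of how those pieces might hang off our view controller. The property names mirror the ones setupSession uses above; stopping the session in viewWillDisappear is our own addition, not part of the walkthrough:

import UIKit
import AVFoundation

class CameraViewController: UIViewController {
  // The objects setupSession will create and wire together:
  var session: AVCaptureSession!
  var input: AVCaptureDeviceInput!
  var output: AVCaptureStillImageOutput!
  var previewLayer: AVCaptureVideoPreviewLayer?

  override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    setupSession()
  }

  override func viewWillDisappear(animated: Bool) {
    super.viewWillDisappear(animated)
    // Good practice: stop the session when the camera goes offscreen.
    session?.stopRunning()
  }
}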

First we'll instantiate an AVCaptureSession. It's the central hub of all this, coordinating the flow of data from our inputs to our outputs. We can configure it with a number of different presets; here we'll use the one for taking high-quality still photos.
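For reference, a few of the other presets we could have picked (just a sampling of what AVFoundation offers):

session.sessionPreset = AVCaptureSessionPresetHigh      // best quality the device supports
session.sessionPreset = AVCaptureSessionPreset1280x720  // fixed 720p frames
session.sessionPreset = AVCaptureSessionPresetLow       // lower quality, lighter weight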

Now, our session needs some inputs and outputs. We'll use defaultDeviceWithMediaType and pass in AVMediaTypeVideo to get the default hardware device for capturing and recording images on the user's device (usually the back camera). Then we'll try to create an AVCaptureDeviceInput from that device. Next up, an output.
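One thing worth noting before any of this works on a real device: the user has to grant camera access. A minimal sketch of how we might gate setupSession on that permission (error handling kept deliberately thin):

AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo) { granted in
  dispatch_async(dispatch_get_main_queue()) {
    if granted {
      self.setupSession()
    } else {
      // Access denied; we could show an alert here pointing
      // the user at Settings to enable the camera.
    }
  }
}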

Capture sessions can return data to us in all sorts of interesting ways: still images, videos, raw pixel data, and more. Here we'll set up an AVCaptureStillImageOutput and ask it for JPEG photos. We'll do one more safety check, then add both our input and our output to our session.
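To illustrate that variety: if we wanted to record video to disk instead of grabbing stills, we could swap in an AVCaptureMovieFileOutput. Just a sketch, we won't need it today:

// Records captured video (and audio, with an audio input) to a file.
let movieOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(movieOutput) {
  session.addOutput(movieOutput)
}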

Finally, let's display our camera so the user can see what they're photographing.

We'll pass our session into a new AVCaptureVideoPreviewLayer and add it to our view. Then we just need to start the session. If we run the app, we'll see it's starting to look like a camera, neat!
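One wrinkle: since we sized the layer to view.bounds just once inside setupSession, it won't track rotation or other size changes on its own. A small sketch of one way to keep it in sync (our own addition, not part of this Bite):

override func viewDidLayoutSubviews() {
  super.viewDidLayoutSubviews()
  // Keep the preview filling the screen whenever our view's size changes.
  previewLayer?.frame = view.bounds
}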

Tomorrow, we'll finish up by adding the ability to actually capture some photos.

Update: Part 2 is right here