Active Filters: Core Image

#87: CIDetector Basics 📷🔍

We first covered Core Image back in Bite #32. It's the full-featured image processing framework that ships with iOS and OS X. Today we'll be taking a look at one of the neatest features of Core Image: Detectors.

Detectors allow us to ask the system if it can find any special features in an image. These features include things like faces, rectangles, and even text.

A detected feature can also carry extra metadata. For example, a CIFaceFeature can report whether the face appears to be smiling, whether one of its eyes is closed, and much more. Let's dive in.

We'll start by asking the user for a photo using UIImagePickerController (like we covered in Bite #83). Then we'll convert the image to a CIImage and create our CIDetector. We'll configure it to look for faces and use high accuracy. Then we'll ask it for the features (in this case faces) it can find in our image.
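
Here's a minimal sketch of that first step, assuming a view controller that conforms to UIImagePickerControllerDelegate and UINavigationControllerDelegate, plus a hypothetical imageToDetect property we'll hand to the detector:

func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
  picker.dismissViewControllerAnimated(true, completion: nil)

  // Picked images are CGImage-backed, so we build a CIImage from the
  // CGImage rather than reading the image's (nil) CIImage property.
  if let image = info[UIImagePickerControllerOriginalImage] as? UIImage,
     cgImage = image.CGImage {
    imageToDetect = CIImage(CGImage: cgImage)
  }
}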

// A face detector. High accuracy trades a bit of speed for better results.
let detector = CIDetector(
  ofType: CIDetectorTypeFace,
  context: nil,
  options: [ CIDetectorAccuracy: CIDetectorAccuracyHigh ]
)

// Passing CIDetectorSmile asks Core Image to also classify smiles.
let faces = detector.featuresInImage(
  imageToDetect,
  options: [ CIDetectorSmile: true ]
) as! [CIFaceFeature]

When asking for the features in our image, we make sure to pass the CIDetectorSmile option as true so Core Image will let us know who needs to turn their frown upside down. We'll access the properties of each detected face and use them to add some fun debug views:

for face in faces {
  // Note: face.bounds is in Core Image's coordinate space (origin at the
  // bottom-left, measured in image pixels), not UIKit's.
  let calloutView = FaceCalloutView(frame: face.bounds)
  calloutView.emoji = face.hasSmile ? "😀" : "😐"

  imageView.addSubview(calloutView)
}
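
If the callouts land in the wrong spots, that coordinate mismatch is usually why. A hypothetical helper like this one (assuming the image is displayed at 1:1 scale) flips a feature's bounds into UIKit's top-left-origin space:

func flipRectForUIKit(rect: CGRect, imageHeight: CGFloat) -> CGRect {
  var flipped = rect
  // Core Image measures y from the bottom edge; UIKit measures from the top.
  flipped.origin.y = imageHeight - rect.origin.y - rect.height
  return flipped
}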

This is just the beginning of what's possible with Detectors. In the future we'll look at taking them even further by wiring them up to a live camera feed.

#32: Core Image Basics 🌄

Core Image has long been a staple on OS X, and was added to iOS a few years ago. It's an incredibly feature-packed image processing API that can apply just about any type of filter or image manipulation you can dream of. Let's take a look at applying a simple color tint to an image:

func applyFilterToImage(image: UIImage) -> UIImage {
  guard let inputImage = image.CIImage else { return image }

  let tintColor = CIColor(red: 0.55, green: 0.33, blue: 0.22)

  let filter = CIFilter(
    name: "CIColorMonochrome",
    withInputParameters: [
      "inputImage" : inputImage,
      "inputColor" : tintColor,
      "inputIntensity" : 1.0
    ]
  )

  // Both the filter lookup and its outputImage are optional, so we fall
  // back to the original image if either is missing.
  guard let outputImage = filter?.outputImage else { return image }

  return UIImage(CIImage: outputImage)
}

Core Image works on CIImages, not UIImages, so we grab our image's CIImage. That property is optional (it's only populated when the UIImage was created from a CIImage in the first place), so we use guard and bail out if it's missing. Core Image also doesn't use UIColor, so we create a CIColor instead.
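
That guard means images from a file or the photo library will pass through unfiltered. One possible fallback (a sketch, not part of the original function) is to build the CIImage from the image's CGImage backing at the top of applyFilterToImage:

let inputImage: CIImage

if let ciImage = image.CIImage {
  inputImage = ciImage
} else if let cgImage = image.CGImage {
  // Most UIImages (from files, the camera, etc.) are CGImage-backed.
  inputImage = CIImage(CGImage: cgImage)
} else {
  return image // no usable backing at all, give up
}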

Then the fun part: we create our filter by name. Core Image ships with hundreds of different filters, and rather than exposing a subclass for each one, it has you instantiate them by name.
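
Curious what's out there? We can ask Core Image for the full list of built-in filter names (a quick sketch):

let filterNames = CIFilter.filterNamesInCategory(kCICategoryBuiltIn)
print(filterNames) // ["CIAccordionFoldTransition", "CIAdditionCompositing", ...]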

CIFilters also don't have explicit properties for their settings; instead, you supply values for "input parameters" on your filter.
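
Under the hood those parameters are set via key-value coding, so the same filter could also be configured one value at a time (a sketch equivalent to the dictionary above, using Core Image's input key constants):

let filter = CIFilter(name: "CIColorMonochrome")
filter?.setValue(inputImage, forKey: kCIInputImageKey)     // "inputImage"
filter?.setValue(tintColor, forKey: kCIInputColorKey)      // "inputColor"
filter?.setValue(1.0, forKey: kCIInputIntensityKey)        // "inputIntensity"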

Lastly, we apply the filter by simply asking for its outputImage.
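
One last note: UIImage(CIImage:) defers the actual filtering until the image is drawn. If we need a fully rendered bitmap (to save to disk, say), we can render the output through a CIContext ourselves; a sketch:

let context = CIContext(options: nil)

if let output = filter?.outputImage {
  // Render the filter's output into a plain CGImage-backed UIImage.
  let cgImage = context.createCGImage(output, fromRect: output.extent)
  let renderedImage = UIImage(CGImage: cgImage)
}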