We first covered Core Image back in Bite #32. It's the full-featured image processing framework that ships with iOS and OS X. Today we'll be taking a look at one of the neatest features of Core Image: Detectors.
Detectors allow us to ask the system if it can find any special features in an image. These features include things like faces, rectangles, and even text.
A detected feature of an image may also carry extra metadata. For example, a CIFaceFeature can report whether the face appears to be smiling, whether one of the eyes is closed, and much more. Let's dive in.
We'll start by asking the user for a photo using UIImagePickerController (like we covered in Bite #83). Then we'll convert the image to a CIImage, and create our CIDetector. We'll configure it to look for faces and to use high accuracy. Then we'll ask it for the features (in this case faces) it can find in our image.
When asking for the features in our image, we make sure to pass the CIDetectorSmile option as true so Core Image will let us know who needs to turn their frown upside down. We'll access the properties of each detected face and use them to add some fun debug views:
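Here's a rough sketch of the idea, logging the results instead of adding the debug views (detectFacesInImage is just a made-up helper):

import UIKit
import CoreImage

func detectFacesInImage(image: UIImage) {
    guard let ciImage = CIImage(image: image) else { return }

    // Configure a face detector with high accuracy.
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    // Ask for faces, and also ask Core Image to classify smiles.
    let features = detector.featuresInImage(ciImage, options: [CIDetectorSmile: true])

    for case let face as CIFaceFeature in features {
        // Note: Core Image bounds have their origin in the bottom-left, so
        // they'd need flipping before being used to position UIKit debug views.
        print("Face at \(face.bounds)")
        print("Smiling: \(face.hasSmile)")
        print("Left eye closed: \(face.leftEyeClosed), right eye closed: \(face.rightEyeClosed)")
    }
}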
This is just the beginning of what's possible with Detectors. In the future we'll look at taking them even further by wiring them up to a live camera feed.
We continue our look at frameworks that map JSON into model types today with Decodable by Johannes Lund. Decodable is another fantastic solution for this task. It takes advantage of Swift 2's new error handling functionality, and unlike ObjectMapper (covered in Bite #84), the properties on your models don't need to be optionals. Let's take a closer look:
To get things wired up we just need to implement the Decodable protocol on our model types, like so:
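The original snippet isn't reproduced here, but with a simple Spaceship model it might look roughly like this (the property names and JSON keys are just illustrative):

import Decodable

struct Spaceship {
    let name: String
    let topSpeed: Double
}

extension Spaceship: Decodable {
    static func decode(json: AnyObject) throws -> Spaceship {
        return try Spaceship(
            name: json => "name",
            topSpeed: json => "top_speed"
        )
    }
}

// Later, once we have some parsed JSON (e.g. from NSJSONSerialization):
// let ship = try Spaceship.decode(json)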
Decodable also handles nested types and is quite flexible. For example, we aren't even forced to use Decodable on every type. Here we bypass Decodable a bit and instantiate a value for our rank property manually:
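Something along these lines, perhaps (the Rank enum and its raw values are made up for the example):

enum Rank: String {
    case Ensign = "ensign"
    case Captain = "captain"
    case Admiral = "admiral"
}

struct User {
    let name: String
    let rank: Rank
}

extension User: Decodable {
    static func decode(json: AnyObject) throws -> User {
        return try User(
            name: json => "name",
            // Rank doesn't conform to Decodable itself. We pull the raw string
            // out ourselves and build the value manually, falling back to
            // .Ensign if the server sends something we don't recognize.
            rank: Rank(rawValue: json => "rank") ?? .Ensign
        )
    }
}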
Some other noteworthy Decodable features are its wonderful printable errors, and how easy it is to add custom decode functions for things like parsing custom date formats, etc.
My continued thanks to the folks over at Hired.com for sponsoring this week's bites.
Finding a good job is tough. Hired can help. Here's how:
Software Engineers and Designers on Hired can get 5+ job offers in a single week.
Each offer includes salary and equity details upfront.
Full-time and contract opportunities are available.
You can view the offers and accept or reject them without talking to anyone.
2,500 pre-screened companies (both big and small) are participating.
Employers come from 12 major metro areas: SF, LA, Seattle, NYC, Boston, Austin, Chicago, Atlanta, San Diego, London, Toronto, and DC.
It's completely FREE and there are no obligations, ever.
If you get a job through Hired, they'll give you a $2,000 "thank you" bonus!
Use the link littlebitesofcocoa.com/hired and they'll double that, giving you a $4,000 bonus when you accept a job!
Getting started with Hired is easy:
Answer a few questions and you'll be getting offers in no time!
Hired is the real deal. They have fixed this historically messy process. If you're looking for a job, they should be your first stop.
Grand Central Dispatch is the name of Apple's collection of task concurrency functions in libdispatch. GCD (as it's often called) is packed full of interesting features involving concurrency, queuing, timers, and more. Let's dive in and take a look at some simple and common use cases:
Common Usage
One of the most common ways to use GCD is to hop to a background queue to do some work, then hop back to the main queue to update the UI:
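Here's a sketch of that pattern from inside a view controller (doSomeExpensiveWork and updateUI are hypothetical helpers):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    // Off the main queue: do the slow work here (parsing, image processing, etc.).
    let results = doSomeExpensiveWork()   // hypothetical helper

    dispatch_async(dispatch_get_main_queue()) {
        // Back on the main queue, it's safe to touch UIKit again.
        self.updateUI(results)            // hypothetical helper
    }
}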
If we wanted to run some code in viewDidAppear: but only wanted it to run the first time, we could store some flag in a Bool property. Or just use GCD:
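A sketch of the GCD approach (the once token needs static storage, so it lives outside the class; showWelcomeMessage is a made-up helper):

import UIKit

private var welcomeToken: dispatch_once_t = 0

class WelcomeViewController: UIViewController {
    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)

        dispatch_once(&welcomeToken) {
            // Runs exactly once, no matter how many times the view appears.
            self.showWelcomeMessage()
        }
    }

    func showWelcomeMessage() { /* ... */ }
}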
We can also easily wait a specified amount of time, then run some code. Here we'll use this technique to pop back to the previous view controller a quick "beat" after the user selects an option in a table view controller (as seen in many apps, including Settings.app):
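Roughly like this, from within the table view controller (the 0.3-second delay is just a plausible value):

override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
    // ... record the user's selection ...

    // Wait a short beat, then pop back, like Settings.app does.
    let delay = dispatch_time(DISPATCH_TIME_NOW, Int64(0.3 * Double(NSEC_PER_SEC)))

    dispatch_after(delay, dispatch_get_main_queue()) {
        self.navigationController?.popViewControllerAnimated(true)
    }
}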
There are plenty (no really, plenty) of options when it comes to parsing JSON into model objects on iOS. We'll be taking a look at some of them from time to time over the coming weeks. First up is ObjectMapper by Hearst. Let's take a look at how it works with a fictional set of types:
With ObjectMapper, we implement the Mappable protocol on our types to support converting to and from JSON:
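The snippet isn't shown here, but with the fictional Spaceship and User types it could look something like this (ObjectMapper's exact protocol requirements have shifted a bit between releases; this sketch assumes a Swift 2-era version):

import ObjectMapper

struct User: Mappable {
    var name: String?

    init?(_ map: Map) { }

    mutating func mapping(map: Map) {
        name <- map["name"]
    }
}

struct Spaceship: Mappable {
    var name: String?
    var topSpeed: Double?
    var captain: User?          // a nested object, mapped automatically

    init?(_ map: Map) { }

    mutating func mapping(map: Map) {
        name     <- map["name"]
        topSpeed <- map["top_speed"]
        captain  <- map["captain"]
    }
}

// From JSON, and back again:
if let ship = Mapper<Spaceship>().map("{\"name\": \"Outrider\", \"top_speed\": 1100}") {
    print(Mapper<Spaceship>().toJSON(ship))
}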
ObjectMapper can easily handle nested objects. Here, on our Spaceship model, we've got an optional User property for the captain of the ship.
It also supports subclasses and custom transforms when serializing/deserializing properties. One of the best things about ObjectMapper is the set of extensions available for other great iOS libraries like Alamofire and Realm (covered in Bite #49). Here's AlamofireObjectMapper in action:
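The original example isn't included here; a rough approximation follows (the URL is made up, and responseObject's exact signature has varied across AlamofireObjectMapper releases, so treat this as a sketch):

import Alamofire
import AlamofireObjectMapper

Alamofire.request(.GET, "https://api.example.com/spaceships/42")
    .responseObject { (response: Response<Spaceship, NSError>) in
        if let ship = response.result.value {
            // `ship` is already a fully mapped Spaceship model.
            print("Loaded \(ship.name ?? "an unnamed ship")")
        }
    }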
UIImagePickerController has been part of iOS since its first release, and it's evolved quite a bit over the years. Let's take a look at what it can do:
Capture images and videos
Choose images and videos from the Photos library
Crop images after choosing/capturing
Trim videos after choosing/capturing
Whew! That's quite a bit of functionality packed into this one class.
We can't cover all of that in this Bite, instead let's look at a simple example use case. We'll be letting our users take a photo, crop it, and then show how to access it for use in our app.
The first step is to find out whether the device we're running on has a camera and can take photos. Then we'll configure the UIImagePickerController and present it.
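A sketch of that setup, from inside our view controller (takePhoto is just a made-up action method; allowsEditing is what enables the crop step after capture):

func takePhoto() {
    // Bail out early if this device can't actually take photos.
    guard UIImagePickerController.isSourceTypeAvailable(.Camera) else {
        print("No camera available.")
        return
    }

    let picker = UIImagePickerController()
    picker.sourceType = .Camera
    picker.allowsEditing = true   // let the user crop after capturing
    picker.delegate = self        // see the note about delegate protocols below

    presentViewController(picker, animated: true, completion: nil)
}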
Then we'll add a function from UIImagePickerControllerDelegate where we'll get a userInfo dictionary. We'll use the values inside to extract the captured image in either its original or cropped form. We can also access a few other details, like the cropped rect as a CGRect or the image's metadata as a dictionary.
Note that we'll need to declare conformance to the UINavigationControllerDelegate protocol since UIImagePickerController is actually a subclass of UINavigationController under the hood.
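Putting that together, a sketch of the delegate side might look like this (PhotoViewController is a stand-in for whichever view controller presented the picker):

import UIKit

class PhotoViewController: UIViewController { }   // stand-in for the presenting controller

extension PhotoViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
        // The cropped image if the user edited, otherwise the original capture.
        let image = info[UIImagePickerControllerEditedImage] as? UIImage
                 ?? info[UIImagePickerControllerOriginalImage] as? UIImage

        // A few of the other goodies available in the dictionary:
        let cropRect = (info[UIImagePickerControllerCropRect] as? NSValue)?.CGRectValue()
        let metadata = info[UIImagePickerControllerMediaMetadata] as? [NSObject : AnyObject]

        // ...do something useful with image, cropRect, and metadata...
        print(image, cropRect, metadata)

        picker.dismissViewControllerAnimated(true, completion: nil)
    }

    func imagePickerControllerDidCancel(picker: UIImagePickerController) {
        picker.dismissViewControllerAnimated(true, completion: nil)
    }
}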
iOS's software keyboard has evolved quite a bit over the years. Just last week we saw a new variant of it on the new iPad Pro. The user can also connect a hardware keyboard at any time, hiding the software one. It's important, now more than ever, that we not make any assumptions about how and when the keyboard will transition on and off the screen.
It'd be great if the system could tell us all the details of how and where the keyboard is about to animate. Then we could use that info to move our own controls and views right alongside it. For this we can listen for some keyboard notifications, and react appropriately. Let's take a look.
We'll start by registering for two notifications in viewDidLoad:
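Here's one way that registration might look (assuming the two notifications are the will-show and will-hide ones, and routing both to a single handler):

override func viewDidLoad() {
    super.viewDidLoad()

    let center = NSNotificationCenter.defaultCenter()

    center.addObserver(self, selector: "keyboardWillChangeFrame:",
        name: UIKeyboardWillShowNotification, object: nil)
    center.addObserver(self, selector: "keyboardWillChangeFrame:",
        name: UIKeyboardWillHideNotification, object: nil)
}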
Next, we'll grab all the necessary values out of the notification's userInfo, and use them to animate our own views exactly alongside the keyboard as it slides on or off the screen:
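A sketch of that handler (bottomConstraint here is an assumed NSLayoutConstraint outlet pinning our content to the bottom of the screen):

func keyboardWillChangeFrame(notification: NSNotification) {
    guard let info = notification.userInfo,
        beginValue = info[UIKeyboardFrameBeginUserInfoKey] as? NSValue,
        endValue = info[UIKeyboardFrameEndUserInfoKey] as? NSValue,
        duration = info[UIKeyboardAnimationDurationUserInfoKey] as? NSTimeInterval,
        curve = info[UIKeyboardAnimationCurveUserInfoKey] as? UInt else { return }

    // Convert both keyboard frames into our view's coordinate space.
    let beginFrame = view.convertRect(beginValue.CGRectValue(), fromView: nil)
    let endFrame = view.convertRect(endValue.CGRectValue(), fromView: nil)

    // Move our bottom constraint by however far the keyboard's top edge moved.
    bottomConstraint.constant += beginFrame.minY - endFrame.minY

    // Animate with the same duration and curve as the keyboard itself.
    UIView.animateWithDuration(duration, delay: 0,
        options: UIViewAnimationOptions(rawValue: curve << 16),
        animations: { self.view.layoutIfNeeded() },
        completion: nil)
}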
We grab the start and end frames, convert them to our view controller's view's coordinate space, and use the difference to move a constraint. Then we animate the constraint like we covered in Bite #9.
RateLimit is a great library from Sam Soffes. Its purpose is extremely simple, but powerful.
Let's dive in.
It's quite common to want to limit certain tasks by frequency. For instance, consider a feature that loads search suggestions as a user types characters into a text field. We'd probably want to throttle that to make sure we're not calling some HTTP API too often. Let's take a look at how to do this with RateLimit:
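Here's a sketch of that idea (fetchSuggestionsForQuery is a made-up helper; the half-second limit is just a plausible value):

import RateLimit

func textFieldDidChange(textField: UITextField) {
    // Only actually hit the search API at most once every half second,
    // no matter how quickly the user types.
    RateLimit.execute(name: "FetchSearchSuggestions", limit: 0.5) {
        self.fetchSuggestionsForQuery(textField.text ?? "")
    }
}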
We give our rate-limited task a name and a limit, then pass in a closure to be limited. Neat.
Sam offers another great example: Loading new data for a view controller inside viewDidAppear. Imagine if that view controller lived inside a tab bar controller and the user decided to switch back and forth between tabs rapidly.
There'd be no reason to check for new data each time the view was shown, but we would like to check again once enough time has passed since the last check.
One last example: Performance. Imagine we were performing some complicated task in a loop, and wanted to update a UIProgressBar to reflect its progress.
We might not want to update the UIProgressBar every single iteration. To make our UI nice and responsive, we might limit the updates to only once every 500 milliseconds or so.
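That might look something like this (items, processItem, and progressView are all placeholders, and we assume the loop itself is running off the main queue):

for (index, item) in items.enumerate() {
    processItem(item)   // the expensive part of each iteration

    // Only repaint the progress bar at most twice a second.
    RateLimit.execute(name: "UpdateProgressBar", limit: 0.5) {
        let progress = Float(index + 1) / Float(items.count)
        dispatch_async(dispatch_get_main_queue()) {
            self.progressView.setProgress(progress, animated: true)
        }
    }
}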
Note: By default RateLimit doesn't persist our limits across app launches. If we do need this behavior, we can just replace RateLimit with PersistentRateLimit.
My sincere thanks to the folks over at Hired.com for sponsoring this week's bites.
View Controller Previews are another iOS SDK feature announced with 3D Touch on the iPhone 6S and iPhone 6S Plus. They allow users to easily "peek" into a piece of content by pressing lightly, then pressing a little more firmly to actually open it. Let's take a look.
First we'll make sure 3D Touch is available, and register for previewing:
if traitCollection.forceTouchCapability == .Available {
    registerForPreviewingWithDelegate(self, sourceView: view)
} else {
    print("3D Touch is not available on this device.")
}
Then we need to conform to UIViewControllerPreviewingDelegate:
In the first function we'll need to return a fully configured UIViewController that will serve as the preview. We're coming from a table view controller, so we'll use the passed in location to grab the cell that was pressed, and configure our preview view controller.
We also set the cell's frame to be our previewContext's sourceRect. This is what keeps the pressed cell in focus while the rest of the screen blurs during the preview.
We finish things out by implementing the second function, called when the user decides to "pop" into the content. We could use a different view controller but we'll just reuse our existing preview one and show it:
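Put together, the conformance might look roughly like this (ShipsViewController, ShipDetailViewController, and the ships array are all made up for the example):

extension ShipsViewController: UIViewControllerPreviewingDelegate {
    func previewingContext(previewingContext: UIViewControllerPreviewing,
        viewControllerForLocation location: CGPoint) -> UIViewController? {

        // Figure out which cell was pressed.
        guard let indexPath = tableView.indexPathForRowAtPoint(location),
            cell = tableView.cellForRowAtIndexPath(indexPath) else { return nil }

        // Build and configure the view controller that will serve as the preview.
        let detailVC = ShipDetailViewController()
        detailVC.ship = ships[indexPath.row]

        // Keep the pressed cell in focus while the rest of the screen blurs.
        previewingContext.sourceRect = cell.frame

        return detailVC
    }

    func previewingContext(previewingContext: UIViewControllerPreviewing,
        commitViewController viewControllerToCommit: UIViewController) {

        // The user pressed a little harder, so "pop" into the full view controller.
        showViewController(viewControllerToCommit, sender: self)
    }
}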