It's quite common for an app to need to display one or more thumbnails (small still-image previews) of what's in a video. However, depending on where the video is coming from, we might not have easy access to pre-made thumbnail(s) for it. Let's look at how we can use AVAssetImageGenerator to grab our own.
We start with a simple NSURL for the video; this can be local or remote. We'll create an AVAsset with it and pass that to a new AVAssetImageGenerator object. We'll configure the generator to apply the preferred track transform so our thumbnails come out in the correct orientation.
Then we just create a new CMTime that describes the point in the video at which we'd like to capture a thumbnail image (we use 1/3 of the way through here). Finally, we'll kick off the actual thumbnail generation process and wait for it to complete. We only requested a single frame here, but we can ask for as many as our app needs. The closure will be called once per generated thumbnail.
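Here's a rough sketch of how that flow might look. The function name, the videoURL parameter, and the completion closure are just placeholders for whatever our app actually needs:

```swift
import AVFoundation
import UIKit

func generateThumbnail(videoURL: NSURL, completion: (UIImage?) -> Void) {
    let asset = AVAsset(URL: videoURL)

    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true

    // 1/3 of the way through the video
    let time = CMTimeMultiplyByFloat64(asset.duration, 1.0 / 3.0)

    generator.generateCGImagesAsynchronouslyForTimes([NSValue(CMTime: time)]) {
        requestedTime, cgImage, actualTime, result, error in

        guard let cgImage = cgImage where result == .Succeeded else {
            completion(nil)
            return
        }

        completion(UIImage(CGImage: cgImage))
    }
}
```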
We'll first need to request permission from the user to record audio. Once granted, we'll try to set our Audio Session's category to PlayAndRecord, the category Apple suggests for apps that both record and play back audio.
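Something like this should do the trick (error handling kept minimal for brevity):

```swift
let session = AVAudioSession.sharedInstance()

session.requestRecordPermission { granted in
    guard granted else { return }

    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setActive(true)
    } catch {
        print("Couldn't configure the audio session: \(error)")
    }
}
```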
We'll create a place for our recording to live, then assemble the settings dictionary for our recorder. We instantiate and store our AVAudioRecorder object, then tell it to start recording. Later, we'll call .stop() on it to stop recording. We can also optionally wire up a delegate to get completion callbacks.
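A sketch of that setup, assuming recorder is a property on our view controller (so it sticks around while recording) and that we're fine writing to a temporary file:

```swift
let fileURL = NSURL(fileURLWithPath: NSTemporaryDirectory())
    .URLByAppendingPathComponent("recording.m4a")

let settings: [String: AnyObject] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 44100.0,
    AVNumberOfChannelsKey: 2,
    AVEncoderAudioQualityKey: AVAudioQuality.High.rawValue
]

do {
    recorder = try AVAudioRecorder(URL: fileURL, settings: settings)
    recorder.delegate = self // optional: requires AVAudioRecorderDelegate conformance
    recorder.record()

    // later: recorder.stop()
} catch {
    print("Couldn't start recording: \(error)")
}
```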
Finally, we can play back our file using AVAudioPlayer:
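A minimal version could look like this, assuming fileURL is the same URL we recorded to and player is stored as a property:

```swift
do {
    player = try AVAudioPlayer(contentsOfURL: fileURL)
    player.prepareToPlay()
    player.play()
} catch {
    print("Couldn't play recording: \(error)")
}
```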
The Media Player Framework is how we can interact with the media library on a user's device. Today we'll look at the basics and build a fun little personal-use app to play some of our own favorite albums. Let's get started.
We'll start with a simple UICollectionViewController. It'll have a flow layout and we'll size the items so they make 2 equal columns:
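One way to do that sizing might look like the following; the exact spacing value and square item height are just examples, not necessarily what the sample project uses:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    guard let layout = collectionViewLayout as? UICollectionViewFlowLayout else { return }

    let spacing: CGFloat = 1.0
    layout.minimumInteritemSpacing = spacing
    layout.minimumLineSpacing = spacing
    layout.sectionInset = UIEdgeInsetsZero

    // Two equal columns separated by a single spacing gap
    let itemWidth = (view.bounds.width - spacing) / 2.0
    layout.itemSize = CGSize(width: itemWidth, height: itemWidth)
}
```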
This is just the beginning of what's possible with the Media Player Framework. Be sure to download the project at j.mp/bite105 and try it for yourself!
In Bite #101 we started working on a custom camera view controller.
Today we'll complete it by adding a way for users to capture a photo and do something with it. We'll start by making it easy to use. We'll make the whole screen a capture button by adding a tap gesture recognizer to our view:
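A sketch of that wiring, assuming the stillImageOutput property we set up in Bite #101:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // The whole screen becomes the shutter button
    let tapGR = UITapGestureRecognizer(target: self, action: "capturePhoto")
    view.addGestureRecognizer(tapGR)
}

func capturePhoto() {
    guard let connection = stillImageOutput
        .connectionWithMediaType(AVMediaTypeVideo) else { return }

    stillImageOutput.captureStillImageAsynchronouslyFromConnection(connection) {
        sampleBuffer, error in

        guard let sampleBuffer = sampleBuffer where error == nil else { return }

        let imageData = AVCaptureStillImageOutput
            .jpegStillImageNSDataRepresentation(sampleBuffer)
        let photo = UIImage(data: imageData)

        // hand `photo` off to the rest of the app
    }
}
```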
We looked at allowing our users to capture photos/videos using UIImagePickerController in Bite #83. Now we'll take things to the next level by starting to create our own custom camera view controller. Today we'll get all the plumbing wired up and get the preview on the screen. Let's get started.
We'll start with the "single view" template in Xcode. There are a number of different objects we'll need to set up and glue together, so we'll go into our view controller and add a function called setupSession. We'll call this in viewWillAppear(animated:).
First we'll instantiate an AVCaptureSession. It's sort of the central hub of all this. We can configure it with a number of different presets. We'll use a preset for taking high quality still photos. Now, our session needs some inputs and outputs.
We'll use defaultDeviceWithMediaType and pass in AVMediaTypeVideo to get the default hardware device for capturing and recording images on the user's device (usually the back camera). Then we'll try to create an AVCaptureDeviceInput from the device. Next up, an output.
Capture sessions can return us data in all sorts of interesting ways: Still images, videos, raw pixel data, and more. Here we'll set up an AVCaptureStillImageOutput and ask it for JPEG photos. We'll do one more safety check then add both our input and output to our session.
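Putting those pieces together, our setupSession might look roughly like this (session and stillImageOutput are assumed to be properties on the view controller):

```swift
func setupSession() {
    session = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPresetPhoto

    // Input: the default video-capturing device (usually the back camera)
    let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    guard let input = try? AVCaptureDeviceInput(device: device) else { return }

    // Output: still JPEG photos
    stillImageOutput = AVCaptureStillImageOutput()
    stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

    // One more safety check before wiring everything up
    guard session.canAddInput(input) && session.canAddOutput(stillImageOutput) else { return }

    session.addInput(input)
    session.addOutput(stillImageOutput)
}
```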
Finally, let's display our camera so the user can see what they're photographing.
We'll pass our session into a new AVCaptureVideoPreviewLayer and add it to our view. Then we just need to start the session. If we run the app, we'll see it's starting to look like a camera. Neat!
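For example:

```swift
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.frame = view.bounds
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
view.layer.addSublayer(previewLayer)

session.startRunning()
```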
Tomorrow, we'll finish up by adding the ability to actually capture some photos.
UIImagePickerController has been part of iOS since its first release, and it's evolved quite a bit over the years. Let's take a look at what it can do:
Capture images and videos
Choose images and videos from the Photos library
Crop images after choosing/capturing
Trim videos after choosing/capturing
Whew! That's quite a bit of functionality packed into this one class.
We can't cover all of that in this Bite; instead, let's look at a simple example use case. We'll let our users take a photo, crop it, and then show how to access it for use in our app.
The first step is to find out whether the device we're running on has a camera that can take photos. Then we'll configure the UIImagePickerController and present it.
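That could look something like this (takePhoto here is just a hypothetical action our UI calls):

```swift
func takePhoto() {
    guard UIImagePickerController.isSourceTypeAvailable(.Camera) else { return }

    let picker = UIImagePickerController()
    picker.sourceType = .Camera
    picker.allowsEditing = true // lets the user crop after capturing
    picker.delegate = self

    presentViewController(picker, animated: true, completion: nil)
}
```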
Then we'll add a function from UIImagePickerControllerDelegate where we'll get a userInfo dictionary. We'll use the values inside to extract the captured image in either its original or cropped form. We can also access a few other details, like the crop rectangle as a CGRect or the image's metadata as a dictionary.
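A sketch of that delegate method:

```swift
func imagePickerController(picker: UIImagePickerController,
    didFinishPickingMediaWithInfo info: [String : AnyObject]) {

    let original = info[UIImagePickerControllerOriginalImage] as? UIImage
    let cropped = info[UIImagePickerControllerEditedImage] as? UIImage
    let cropRect = (info[UIImagePickerControllerCropRect] as? NSValue)?.CGRectValue()

    let photo = cropped ?? original
    // do something with `photo` (and `cropRect` if we need it), then dismiss
    picker.dismissViewControllerAnimated(true, completion: nil)
}
```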
Note that we'll need to declare conformance to the UINavigationControllerDelegate protocol since UIImagePickerController is actually a subclass of UINavigationController under the hood.
One of the best parts of CloudKit is how great it is at handling not just our models, but also larger assets like images, audio, or video.
Assets are saved just like any other property. Here we'll attach an image captured from the user's camera to a new record. Then we'll upload it to CloudKit using a CKModifyRecordsOperation (covered in more detail in Bite #31). In our case we're only saving a single record, but we're using an operation anyway, so we can take advantage of its perRecordProgressBlock, and track the upload progress of our asset.
```swift
func imagePickerController(picker: UIImagePickerController,
    didFinishPickingMediaWithInfo info: [String : AnyObject]) {

    guard let mediaURL = info[UIImagePickerControllerMediaURL] as? NSURL else { return }

    let spaceshipRecord = CKRecord(recordType: "Spaceship")

    spaceshipRecord["model"] = "Tantive IV"
    spaceshipRecord["maxSpeed"] = 950 // in km
    spaceshipRecord["image"] = CKAsset(fileURL: mediaURL)

    let operation = CKModifyRecordsOperation(recordsToSave: [spaceshipRecord],
        recordIDsToDelete: nil)

    operation.perRecordProgressBlock = { self.progressView.progress = Float($1) }
    operation.completionBlock = { self.progressView.hidden = true }

    progressView.hidden = false

    CKContainer.defaultContainer().publicCloudDatabase.addOperation(operation)
}
```
It's worth noting that CloudKit doesn't seem to report progress as continuously as we might expect. It seems to report between 0 and 3 times, depending on the size of the asset we're uploading.
After fetching a record containing an asset, we can grab the downloaded file from disk using the fileURL property of the CKAsset:
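For example, assuming we already have the record's CKRecordID and the "image" key from the upload example above:

```swift
let publicDB = CKContainer.defaultContainer().publicCloudDatabase

publicDB.fetchRecordWithID(spaceshipRecordID) { record, error in
    guard let record = record,
        asset = record["image"] as? CKAsset,
        data = NSData(contentsOfURL: asset.fileURL) else { return }

    let image = UIImage(data: data)
    // hop back to the main queue before updating the UI with `image`
}
```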
One of the coolest new features in iOS 9 is the new Picture in Picture functionality on iPad. This lets users watch video content from an app even while it's in the background.
To support it in our app, we'll first make sure we set the Playback audio category in our application(application:didFinishLaunchingWithOptions:) function:
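In our app delegate, that might look like this (with AVFoundation imported):

```swift
func application(application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {

    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    } catch {
        print("Couldn't set the Playback category: \(error)")
    }

    return true
}
```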
Then we'll use AVPlayerViewController to play video content. Picture in Picture mode will automatically kick in when our app enters the background, but only if: 1.) our player is full screen, 2.) video content is playing in it, and 3.) Picture in Picture is supported on the device.
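A bare-bones setup might look like this (AVKit imported, and videoURL standing in for wherever our content lives):

```swift
let playerVC = AVPlayerViewController()
playerVC.player = AVPlayer(URL: videoURL)
playerVC.delegate = self // needed for the restore delegate call below

presentViewController(playerVC, animated: true) {
    playerVC.player?.play()
}
```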
Next we'll implement this wonderfully long delegate method to restore our player UI when the user returns from Picture in Picture mode:
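Here's a sketch of it:

```swift
func playerViewController(playerViewController: AVPlayerViewController,
    restoreUserInterfaceForPictureInPictureStopWithCompletionHandler
    completionHandler: (Bool) -> Void) {

    // Re-present the player, then tell AVKit the restore finished
    presentViewController(playerViewController, animated: true) {
        completionHandler(true)
    }
}
```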