Episode #98

Camera Capture

21 minutes
Published on December 5, 2013

In this episode we grab image data live from the camera on an iPhone 5. We discuss inputs and outputs, image formats, and finally (you guessed it) put a mustache live on each face in the video frame using the face detection techniques demonstrated in Episode 96.

Preparing the Capture Session

First we need to create and hold on to an AVCaptureSession. We then add inputs to the session, using the front-facing camera if one is available.

- (void)viewDidLoad {
    [super viewDidLoad];

    self.session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *device = [self frontCamera];

    if (!device) {
        NSLog(@"Couldn't get a camera.");
        return;
    }

    NSError *error;
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:device
                                                                         error:&error];

    if (input) {
        [self.session addInput:input];

        // ...
    } else {
        NSLog(@"Couldn't initialize device input: %@", error);
    }

}

- (AVCaptureDevice *)frontCamera {
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if ([device position] == AVCaptureDevicePositionFront) {
            return device;
        }
    }

    return [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}

Once the input is set up, we can add our outputs. Here we also configure a preview layer so we can see the live video on screen:

        // Deliver sample buffers on a dedicated serial queue so frame
        // processing doesn't block the main queue.
        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        self.sampleQueue = dispatch_queue_create("VideoSampleQueue", DISPATCH_QUEUE_SERIAL);

        [output setSampleBufferDelegate:self queue:self.sampleQueue];
        [self.session addOutput:output];

        // The preview layer displays the live camera feed on screen.
        self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
        self.previewLayer.frame = self.view.layer.bounds;
        self.view.layer.masksToBounds = YES;
        self.view.layer.backgroundColor = [[UIColor blackColor] CGColor];
        [self.view.layer addSublayer:self.previewLayer];
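
The episode also discusses image formats. One common configuration, shown here as an assumption rather than the episode's exact code, is to ask the output for BGRA pixel buffers (convenient for Core Image) and to drop late frames instead of letting them queue up:

        // Request BGRA pixel buffers and discard frames that arrive while
        // we're still busy processing the previous one.
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        output.alwaysDiscardsLateVideoFrames = YES;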

Finally, we start the flow of data by calling startRunning:

        [self.session startRunning];
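
As a usage note (not shown above), it's also worth stopping the session when the view goes away so the camera isn't left running; a minimal sketch:

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];

    // Stop the flow of data once the view is off screen.
    [self.session stopRunning];
}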

The AVCaptureVideoDataOutputSampleBufferDelegate Protocol

To grab frames from the live camera feed using the output defined above, we implement two delegate methods:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

  // ...

}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    // Called when a frame had to be dropped, typically because processing
    // of the previous frame took too long.

}
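
Both callbacks arrive on the sampleQueue we passed to setSampleBufferDelegate:queue:, not on the main queue, so any UIKit or layer updates have to be dispatched back to the main queue, for example:

    dispatch_async(dispatch_get_main_queue(), ^{
        // Touch UIKit and CALayer state only on the main queue, e.g. when
        // positioning mustache layers over detected faces.
    });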

To do anything with the data, you can convert the sample buffer into a Core Image object:

    CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:cvImage];
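
From here the episode applies the face detection techniques from Episode 96 to each frame. A minimal sketch of that step (illustrative only, not the episode's exact code):

    // For illustration only: in practice the CIDetector should be created
    // once and reused, not allocated for every frame.
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];

    for (CIFaceFeature *face in [detector featuresInImage:ciImage]) {
        // face.bounds is in Core Image coordinates (origin at the bottom-left),
        // so it needs converting before positioning a mustache layer over it.
        NSLog(@"Found a face at %@", NSStringFromCGRect(face.bounds));
    }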

Note that you'll have to compensate for a number of factors, such as the initial rotation, the camera's clean aperture, and mirroring, in order to get an image that's easier to process. Take a look at the iCapp demo project for an example of this.
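
For example, frames from the front camera arrive mirrored. A minimal sketch of flipping the image horizontally with Core Image (an illustration, not the episode's exact code):

    // Mirror the image around its own extent so it matches what the user
    // sees in the preview layer; rotation can be handled the same way with
    // a rotation transform.
    CGRect extent = [ciImage extent];
    CGAffineTransform mirror = CGAffineTransformMakeTranslation(extent.size.width, 0);
    mirror = CGAffineTransformScale(mirror, -1.0, 1.0);
    CIImage *mirroredImage = [ciImage imageByApplyingTransform:mirror];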