Episode #96

Face Detection

16 minutes
Published on November 21, 2013


In this episode we dive into CoreImage with a fun feature: detecting faces in photos! We also find the eyes & mouth positions and use Core Graphics to draw on our photo.

Detecting features in an image

Detection can take time, so we run it on a background queue. Note that higher accuracy requires more processing time.

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];
        CIImage *ciImage = [CIImage imageWithCGImage:[self.imageView.image CGImage]];
        NSArray *features = [detector featuresInImage:ciImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            for (CIFeature *feature in features) {
                if ([feature isKindOfClass:[CIFaceFeature class]]) {
                    CIFaceFeature *faceFeature = (CIFaceFeature *)feature;
                    FaceView *face = [[FaceView alloc] init];
                    face.feature = faceFeature;
                    [self.faces addObject:face];
                }
            }
            [self drawFaces];
        });
    });

Here we're creating a new FaceView for each detected face and adding it to a set. Then we call drawFaces.

The first step to drawing the faces is determining where our actual image is. Surprisingly, UIImageView does not provide this value. But it's fairly easy to compute ourselves:

    CGSize imageSize = self.imageView.image.size;
    CGFloat imageScale = fminf(self.imageView.bounds.size.width / imageSize.width,
                               self.imageView.bounds.size.height / imageSize.height);
    CGSize scaledImageSize = CGSizeMake(imageSize.width * imageScale, imageSize.height * imageScale);
    CGRect imageFrame = CGRectMake(
                                   roundf(0.5f * (self.imageView.bounds.size.width - scaledImageSize.width)),
                                   roundf(0.5f * (self.imageView.bounds.size.height - scaledImageSize.height)),
                                   roundf(scaledImageSize.width),
                                   roundf(scaledImageSize.height));
    NSLog(@"Scale: %g", imageScale);
    NSLog(@"Image frame: %@", NSStringFromCGRect(imageFrame));

Once we have the scaled image size, we know where to draw. Since our FaceView does all the drawing, we just pass these values over to the view...

    for (FaceView *face in self.faces) {
        face.hidden = NO;
        face.scale = imageScale;
        face.imageSize = scaledImageSize;
        face.frame = imageFrame;

        if (!face.superview) {
            [self.imageView addSubview:face];
        }
    }

Implementing the FaceView class

Our face view now has everything it needs to do the drawing: the frame (which is the same as the frame of the image), the scale, and the feature itself.

- (void)drawRect:(CGRect)rect {
    if (self.feature) {
        CGContextRef context = UIGraphicsGetCurrentContext();

        CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
        CGContextStrokeRect(context, self.bounds);

        CGRect faceRect = self.feature.bounds;
        CGContextScaleCTM(context, self.scale, self.scale);
        CGContextSetStrokeColorWithColor(context, [[UIColor orangeColor] CGColor]);
        CGContextSetLineWidth(context, 3);
        CGContextStrokeRect(context, faceRect);

        if ([self.feature hasLeftEyePosition]) {
            [self drawEyeAtPosition:self.feature.leftEyePosition inContext:context];
        }

        if ([self.feature hasRightEyePosition]) {
            [self drawEyeAtPosition:self.feature.rightEyePosition inContext:context];
        }

        if ([self.feature hasMouthPosition]) {
            [self drawMouthAtPosition:self.feature.mouthPosition inContext:context];
        }
    }
}
Drawing the Eyes

- (void)drawEyeAtPosition:(CGPoint)position inContext:(CGContextRef)context {
    position = CGPointMake(position.x, self.imageSize.height - position.y);

    const CGFloat SIZE = 20;

    CGContextSetFillColorWithColor(context, [[UIColor blueColor] CGColor]);

    CGRect eyeRect = CGRectMake(position.x - SIZE / 2, position.y - SIZE / 2, SIZE, SIZE);
    CGContextFillEllipseInRect(context, eyeRect);
}


Note that we have to invert the position's coordinate system in the y direction; otherwise our drawing will sit too low.

Drawing the Mouth

As with the eye drawing, we have to invert the position's y value. Then we compute a rect centered on that position, sized proportionally to the face bounds. This way the mouths on smaller faces won't be drawn too large.

- (void)drawMouthAtPosition:(CGPoint)position inContext:(CGContextRef)context {
    position = CGPointMake(position.x, self.imageSize.height - position.y);

    CGSize mouthSize = CGSizeMake(self.feature.bounds.size.width / 3,
                                  self.feature.bounds.size.height / 8);
    CGRect mouthRect = CGRectMake(roundf(position.x - mouthSize.width / 2),
                                  roundf(position.y - mouthSize.height / 2),
                                  mouthSize.width,
                                  mouthSize.height);

    CGContextSetFillColorWithColor(context, [[UIColor greenColor] CGColor]);
    CGContextFillEllipseInRect(context, mouthRect);
}


Dealing with Rotation

Right now if we rotate, we'll see the old drawing during the rotation, then it updates. This is somewhat jarring, so an easy way to deal with this is just to hide the face views during rotation:

- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration {
    // hide faces
    for (FaceView *face in self.faces) {
        face.hidden = YES;
    }
}

- (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation {
    // redraw faces
    [self drawFaces];
}


In this sample I have not done anything to account for scale. Both of the provided images are exactly 320 pixels wide. When dealing with scaled images, you'll have to apply the scale factor when drawing, since the provided positions are in the original image's pixel coordinates, not the coordinates of the rendered image inside the image view.