Handling Saved Photos in your iPhone App

Post date: January 16, 2009
Posted in categories: Cocoa

I’ve had sporadic reports of flipped and distorted images within TalkBubbles. I thought I had these fixed, but, thanks to one persistent customer, I discovered otherwise.

Now, with version 1.2.2 submitted to the App Store, I can say it is fixed. Read on to see how to overcome this in your own code.


The Problem

The iPhone coordinate system is flipped, and (to me, anyway) the way it stores images taken with the camera is relatively arbitrary. For instance, holding the camera normally (in what I would call portrait orientation), the images taken are tagged as having UIImageOrientationRight and being flipped. I’m sure it makes sense to someone, but to me this seems like a straight-up image. However, if you assume that and retrieve the image from the iPhone’s library, it will not be what you expect.

The Solution

The generalized solution is to:

  1. Determine an appropriate transform to orient the image,
  2. Determine the appropriate scale/rect for the image to be rendered into,
  3. Create a context for rendering the image into,
  4. Draw the image into the scaled and transformed rectangle, and
  5. Extract the image from the graphics context (oriented and scaled appropriately).

Step 1

Thanks to Vicente Sosa on the Apple developer forums for his example, which I built on for my needs.

This code creates an appropriate transform for the context based on the orientation of the image. Additionally, it updates the size of the image to reflect rotations that exchange the width and height elements.

CGAffineTransform orientationTransformForDimensions(UIImageOrientation orient, CGSize *origSize)
{
    CGFloat width = origSize->width;
    CGFloat height = origSize->height;
    CGSize size = CGSizeMake(width, height);
    CGAffineTransform transform = CGAffineTransformIdentity;

    CGFloat origHeight = size.height;
    switch(orient) { /* EXIF 1 to 8 */
    case UIImageOrientationUp:
        break;

    case UIImageOrientationUpMirrored:
        transform = CGAffineTransformMakeTranslation(width, 0.0f);
        transform = CGAffineTransformScale(transform, -1.0f, 1.0f);

        break;

    case UIImageOrientationDown:
        transform = CGAffineTransformMakeTranslation(width, height);
        transform = CGAffineTransformRotate(transform, M_PI);
        break;

    case UIImageOrientationDownMirrored:
        transform = CGAffineTransformMakeTranslation(0.0f, height);
        transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
        break;

    case UIImageOrientationLeftMirrored:
        size.height = size.width;
        size.width = origHeight;
        transform = CGAffineTransformMakeTranslation(height, width);
        transform = CGAffineTransformScale(transform, -1.0f, 1.0f);
        transform = CGAffineTransformRotate(transform, 3.0f * M_PI / 2.0f);
        break;

    case UIImageOrientationLeft:
        size.height = size.width;
        size.width = origHeight;
        transform = CGAffineTransformMakeTranslation(0.0f, width);
        transform = CGAffineTransformRotate(transform, 3.0f * M_PI / 2.0f);
        break;

    case UIImageOrientationRightMirrored:
        size.height = size.width;
        size.width = origHeight;
        transform = CGAffineTransformMakeScale(-1.0f, 1.0f);
        transform = CGAffineTransformRotate(transform, M_PI / 2.0f);
        break;

    case UIImageOrientationRight:
        size.height = size.width;
        size.width = origHeight;
        transform = CGAffineTransformMakeTranslation(height, 0.0f);
        transform = CGAffineTransformRotate(transform, M_PI / 2.0f);
        break;

    default:
        break;
    }

    *origSize = size;
    return transform;
}

This code is invoked from within the orientImage() function:

    CGImageRef img = [image CGImage];
    CGSize origSize = CGSizeMake(CGImageGetWidth(img), CGImageGetHeight(img));
    UIImageOrientation orientation = [image imageOrientation];
    CGAffineTransform transform = orientationTransformForDimensions(orientation, &origSize);

This alone, however, will not handle the reversed (flipped) nature of the iPhone graphics space, so a second transform is used to adjust for it:


    CGFloat iWidth = image.size.width;
    CGFloat iHeight = image.size.height;
    CGAffineTransform flipTransform = CGAffineTransformIdentity;
    if (orientation == UIImageOrientationRight || orientation == UIImageOrientationLeft) {
        flipTransform = CGAffineTransformScale(flipTransform, -1.0f, 1.0f);
        flipTransform = CGAffineTransformTranslate(flipTransform, -iWidth, 0.0f);
    } else {
        flipTransform = CGAffineTransformScale(flipTransform, 1.0f, -1.0f);
        flipTransform = CGAffineTransformTranslate(flipTransform, 0.0f, -iHeight);
    }

Finally, the two transforms are concatenated so the transform is complete:


    transform = CGAffineTransformConcat(transform, flipTransform);

Step 2

UIImageViews are backed by CALayers. At WWDC we were cautioned not to exceed 1024 pixels in any dimension of a single layer object. Since I didn’t want to deal with tiled layers and breaking the image up into tiles, I decided to scale the largest dimension of the image down to at most 1024 using this code:


    CGFloat xScale, yScale, finalScale, newW, newH;
    if ((origSize.width > 1024.0) || (origSize.height > 1024.0)) {
        xScale = 1024.0f / origSize.width;
        yScale = 1024.0f / origSize.height;
    } else {
        xScale = 1.0f;
        yScale = 1.0f;
    }
    if (xScale > yScale) {
        finalScale = yScale;
    } else {
        finalScale = xScale;
    }
    newW = origSize.width * finalScale;
    newH = origSize.height * finalScale;

These new dimensions will be used as the drawing rectangle when I finally draw the image into the context. (It should be straightforward to arbitrarily scale an image down using this approach, but that was not my need, so it is left to the reader.)

Steps 3-5

The next bit of code creates a context, applies the transform to handle orientation to it, adjusts the drawing rectangle by the same transform, draws the image, and extracts it back to a new UIImage:


    UIGraphicsBeginImageContext(CGSizeMake(newW, newH));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
    CGContextFillRect(context, CGRectMake(0.0, 0.0, newW, newH));      //blacken background, just in case
    CGRect clipRect = CGRectMake(0, 0, newW, newH);
    CGRect tClip = CGRectApplyAffineTransform(clipRect, transform);    //apply transform to drawing rectangle
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, tClip, img);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();   // autoreleased object according to docs
    UIGraphicsEndImageContext();

The new image is then returned from this function.

A note

Testing in Instruments has led me to believe there might be an error in the UIGraphicsGetImageFromCurrentImageContext() call. When I run this code, I see that method grow memory by approximately 2 MB on each call, even though the image being returned is supposed to be autoreleased according to the documentation.


I hope this helps anyone else struggling with image orientation and scaling problems when pulling an image in from the photo library or camera.