Downscaling Huge ALAssets Without Fear of SIGKILL

December 18th, 2012

With the latest release of Etchings, we wanted to support high resolution output. This means reading high-res versions of images from the camera roll, but not blowing our memory limits if the user selects a 30MP monstrosity. We came up with a way to get a smaller version of any ALAsset without having to first uncompress the whole image into memory, and since we couldn’t find this technique anywhere online[0], we’re sharing it here.

By default, the UIImagePickerController hands you a UIImage, but since we want to control the size more closely, we have to use the UIImagePickerControllerReferenceURL it provides to get at the underlying ALAsset (sketched just after the list below). The asset already provides several versions of the original image:

  • A thumbnail: [asset thumbnail];
  • An aspect-correct thumbnail: [asset aspectRatioThumbnail];
  • A full-resolution image: [[asset defaultRepresentation] fullResolutionImage];
  • An image suitable for displaying fullscreen: [[asset defaultRepresentation] fullScreenImage];
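
Here’s roughly what that reference-URL lookup looks like in the picker delegate. This is a sketch: the assetsLibrary property is our own assumption, and the ALAssetsLibrary instance needs to outlive the returned asset.

    - (void)imagePickerController:(UIImagePickerController *)picker
    didFinishPickingMediaWithInfo:(NSDictionary *)info {
        NSURL *referenceURL = [info objectForKey:UIImagePickerControllerReferenceURL];
        [self.assetsLibrary assetForURL:referenceURL
                            resultBlock:^(ALAsset *asset) {
                                // Hand the asset to the downscaling code below.
                            }
                           failureBlock:^(NSError *error) {
                               NSLog(@"Couldn't load asset: %@", error);
                           }];
    }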

But there’s no obvious way to get an arbitrary size. There is a suggestive method named CGImageWithOptions:, which looks like it takes flags related to the desired size of the image, but if you read the docs carefully, those particular values (kCGImageSourceCreateThumbnailFromImageAlways and kCGImageSourceThumbnailMaxPixelSize) can only be passed to CGImageSourceCreateThumbnailAtIndex, not CGImageSourceCreateWith[Data|URL], which is what CGImageWithOptions: uses.

OK, so, how about dropping down a level? The aforementioned CGImageSourceCreateThumbnailAtIndex function looks like it will do exactly what we want. (Don’t let the word “thumbnail” distract you; here it just means “smaller than original resolution.”) To use it, we just need to get a CGImageSourceRef for the asset. Normally, you’d create one from a file URL or a block of raw data, but what we have is an ALAssetRepresentation.

To connect these things together, all it takes is a bit of glue code to wrap the ALAssetRepresentation up as a CGDataProviderRef, and wrap that into a CGImageSourceRef. We use CGDataProviderCreateDirect, passing a small set of functions used to retrieve the image data[1]. Like so:

(This is designed to live in an existing class; you’ll also need to add the AssetsLibrary and ImageIO frameworks to your project. The code uses ARC; if you need non-ARC code, just remove the two __bridge annotations.)
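
A minimal sketch of that glue follows; the helper name thumbnailForAsset:maxPixelSize: and its logging are our own choices, but the callback wiring is the heart of the technique:

    static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
        ALAssetRepresentation *rep = (__bridge id)info;
        NSError *error = nil;
        size_t countRead = [rep getBytes:(uint8_t *)buffer fromOffset:position length:count error:&error];
        if (countRead == 0 && error) {
            // We have no way of passing this info back to the caller, so we log it, at least.
            NSLog(@"thumbnailForAsset:maxPixelSize: got an error reading an asset: %@", error);
        }
        return countRead;
    }

    static void releaseAssetCallback(void *info) {
        // The info is the ALAssetRepresentation we retained below; this release balances that retain.
        CFRelease(info);
    }

    // Returns a UIImage for the given asset, with its longest side at most `size` pixels.
    // This runs synchronously, so call it from a background queue.
    - (UIImage *)thumbnailForAsset:(ALAsset *)asset maxPixelSize:(NSUInteger)size {
        NSParameterAssert(asset != nil);
        NSParameterAssert(size > 0);

        ALAssetRepresentation *rep = [asset defaultRepresentation];

        CGDataProviderDirectCallbacks callbacks = {
            .version = 0,
            .getBytePointer = NULL,
            .releaseBytePointer = NULL,
            .getBytesAtPosition = getAssetBytesCallback,
            .releaseInfo = releaseAssetCallback,
        };

        CGDataProviderRef provider = CGDataProviderCreateDirect((void *)CFBridgingRetain(rep), [rep size], &callbacks);
        CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

        CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef) @{
            (NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
            (NSString *)kCGImageSourceThumbnailMaxPixelSize : @(size),
            (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES,
        });
        CFRelease(source);
        CFRelease(provider);

        if (!imageRef) {
            return nil;
        }

        UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
        CFRelease(imageRef);
        return toReturn;
    }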

To test this out, we ran some experiments with a 6019×6019 image from NASA. (You know this is serious stuff because it’s from NASA.) Fully decompressed at four bytes per pixel, this image uses 138 MB, which is plenty to get your app killed by the system on older devices. We ran a simple test app under the Allocations instrument and compared the dirty memory size[2] when loading the full-size image versus loading a thumbnailed version with the above code.

On an iPhone 5, when we load the above image at full resolution, we see a jump in our dirty memory of 138 MB, just as we’d expect. When we load the same image, requesting a size of at most 2500×2500, we see only a 24 MB bump, which is exactly what we were hoping for.
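
(Both numbers are just width × height × 4 bytes of RGBA: 6019 × 6019 × 4 ≈ 138 MB, and 2500 × 2500 × 4 ≈ 24 MB.)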

On an iPhone 3GS, the app is immediately killed in the first case, but works just fine in the second case. Core Graphics (ImageIO in particular) is doing what we want it to do; it’s downscaling the image without first uncompressing the whole thing.

So, if you need to get an image from the Assets Library at a particular resolution, don’t load the original image first; use this code instead to avoid crashing and leaving your users wondering what happened.

[0] Though people have certainly asked.

[1] We could create an NSData from the ALAssetRepresentation’s getBytes:fromOffset:length:error: method and wrap a CGDataProviderRef around that, but using a callback as we do in our sample ensures that, if ImageIO is smart enough to decompress the image piece by piece, we never load the entire compressed version into memory at once.
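
For contrast, a sketch of that NSData route (simpler, but it fronts the whole compressed file in memory):

    // Reads the entire compressed file into memory up front.
    NSMutableData *data = [NSMutableData dataWithLength:(NSUInteger)[rep size]];
    NSError *error = nil;
    [rep getBytes:(uint8_t *)data.mutableBytes fromOffset:0 length:data.length error:&error];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);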

[2] I highly recommend the iOS Application Performance: Memory video from WWDC 2012 for more about dirty memory and memory usage in general.


11 Responses to “Downscaling Huge ALAssets Without Fear of SIGKILL”

  1. Arno Willig says:

    Thanks for sharing; this is really useful information.

  2. Xiaochao says:

    Is there a way to get JPEG NSData directly from the scaled-down CGImageRef without expanding it into a UIImage?

    I’m asking because I want to put the JPEG thumbnail data directly into Core Data.

  3. Jesse says:

    UIImage is just a thin wrapper around CGImageRef, so I don’t think there’s much cost to wrapping it in a UIImage so you can use UIImageJPEGRepresentation.

    If you find too much overhead there, you could use CGImageDestinationCreateWithData instead, which would let you get JPEG data directly from the CGImageSourceRef via CGImageDestinationAddImageFromSource without even making a CGImageRef.
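
    Something like this sketch (untested; source is the CGImageSourceRef from the post’s code, and kUTTypeJPEG needs the MobileCoreServices framework):

    CFMutableDataRef jpegData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef dest = CGImageDestinationCreateWithData(jpegData, kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImageFromSource(dest, source, 0, (__bridge CFDictionaryRef)@{
        (NSString *)kCGImageDestinationLossyCompressionQuality : @0.8,
    });
    CGImageDestinationFinalize(dest);
    CFRelease(dest);
    NSData *jpeg = CFBridgingRelease(jpegData); // ready for Core Data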

  4. Mike says:

    Jesse – Thanks so much for sharing this. Brilliant.
    Saved me quite a few hours, I’m guessing. Much obliged.

  5. Dan Wexler says:

    Great post!

    Another trick is to use your own image-reading libraries, which support streaming. However, to get those to work, you first need to copy the raw ALAsset data from the original file into your local sandbox filesystem. Once it is in the local filesystem, libjpeg or libpng can open these files and stream the input.

    To copy the file locally from an ALAsset without loading it into memory, the trick is to use ALAssetRepresentation’s getBytes:fromOffset:length:error: and stream the data to the local file. You can copy a 24-megapixel image from the ALAsset to a local file in about 0.7 s on an iPad 3. So there is more overhead with this method than with the call to CGImageSourceCreateThumbnailAtIndex, but it gives you a bit more control if you want to, say, stream the image in, work on it at full resolution, and stream it out to the destination rather than downsampling to a smaller format.
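
    The copy loop is roughly this sketch (the buffer size and localPath are arbitrary):

    ALAssetRepresentation *rep = [asset defaultRepresentation];
    NSOutputStream *stream = [NSOutputStream outputStreamToFileAtPath:localPath append:NO];
    [stream open];
    uint8_t buffer[64 * 1024];
    long long offset = 0;
    NSError *error = nil;
    while (offset < [rep size]) {
        NSUInteger bytesRead = [rep getBytes:buffer fromOffset:offset length:sizeof(buffer) error:&error];
        if (bytesRead == 0) break; // read error or end of data
        [stream write:buffer maxLength:bytesRead];
        offset += bytesRead;
    }
    [stream close];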

    And, BTW, Etchings is a great app!

  6. Diogo says:

    Great! Thanks for sharing! Works flawlessly!

  7. Bryn Bodayle says:

    This is extremely helpful, thank you!

    Is there any way to put this into a category? I’m unable to get the callbacks to work there.

  8. Bryn Bodayle says:

    Actually, I’ve been able to put it in a category. Thanks so much!

  9. Taylan Pince says:

    This works beautifully for JPEG images, but I am still seeing a huge memory spike when loading large PNGs (~40 MB). Here is what I am seeing:

    Loading your example JPEG from NASA (3.1 MB): 18 MB memory usage, no spike

    Loading a transparent PNG (39.9 MB): 190 MB memory spike, dropping back down to ~20 MB after load

    I am using your method almost exactly, with a max pixel size of 1200.

    I guess this can be attributed to the 10× file size difference between the two image files, but I still don’t get why it drops down to 20 MB only after the image is loaded. Can the spike be tamed? Any tips?

  10. Rich Chang says:

    FYI, your code worked well for me. One issue I had was that this does not preserve iOS 7 photo filters on the loaded photo. I added the following to fix that:


    // Apply iOS 7 photo filters, if present, from the AdjustmentXMP metadata.
    NSString *adjustment = [[rep metadata] objectForKey:@"AdjustmentXMP"];
    if (adjustment) {
        NSData *xmpData = [adjustment dataUsingEncoding:NSUTF8StringEncoding];
        CIImage *image = [CIImage imageWithCGImage:imageRef];

        NSError *error = nil;
        NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                                     inputImageExtent:image.extent
                                                                error:&error];
        CIContext *context = [CIContext contextWithOptions:nil];
        if (filterArray && !error) {
            for (CIFilter *filter in filterArray) {
                [filter setValue:image forKey:kCIInputImageKey];
                image = [filter outputImage];
            }
            imageRef = [context createCGImage:image fromRect:[image extent]];
        }
    }

    Thanks,
    Rich

  11. Pavel Osipov says:

    The idea is great, but there are some problems with the provided implementation. Here is a description and my fixes: http://stackoverflow.com/a/22558040/1059494
