Downscaling Huge ALAssets Without Fear of SIGKILL

With the latest release of Etchings, we wanted to support high-resolution output. This means reading high-res versions of images from the camera roll, but not blowing our memory limits if the user selects a 30 MP monstrosity. We came up with a way to get a smaller version of any ALAsset without having to first uncompress the whole image into memory, and since we couldn’t find this technique anywhere online[0], we’re sharing it here.

By default, the UIImagePickerController hands you a UIImage, but since we want to control the size more closely, we have to make use of the UIImagePickerControllerReferenceURL it provides to get access to the underlying ALAsset. The asset already provides several versions of the original image:
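Getting at the ALAsset takes a quick round trip through ALAssetsLibrary. Here’s a sketch of what that looks like in the picker delegate (note: the ALAssetsLibrary instance must stay alive for as long as you use the asset, so keep it in a property rather than a local as shown here):

```objc
#import <AssetsLibrary/AssetsLibrary.h>

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSURL *referenceURL = info[UIImagePickerControllerReferenceURL];

    // In real code, retain this library for the lifetime of the asset.
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library assetForURL:referenceURL resultBlock:^(ALAsset *asset) {
        // Use the asset here, e.g. hand it to the downscaling code
        // described below.
    } failureBlock:^(NSError *error) {
        NSLog(@"Couldn't load asset: %@", error);
    }];

    [picker dismissViewControllerAnimated:YES completion:nil];
}
```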

  • A thumbnail: [asset thumbnail];
  • An aspect-correct thumbnail: [asset aspectCorrectThumbnail];
  • A full-resolution image: [[asset defaultRepresentation] fullResolutionImage];
  • An image suitable for displaying fullscreen: [[asset defaultRepresentation] fullscreenImage];

But there’s no obvious way to get an arbitrary size. There is a suggestive method named CGImageWithOptions:, which looks like it takes flags related to the desired size of the image, but if you read the docs carefully, those particular values (kCGImageSourceCreateThumbnailFromImageAlways and kCGImageSourceThumbnailMaxPixelSize) can only be passed to CGImageSourceCreateThumbnailAtIndex, not CGImageSourceCreateWith[Data|URL], which is what CGImageWithOptions: uses.

OK, so, how about dropping down a level? The aforementioned CGImageSourceCreateThumbnailAtIndex method looks like it will do exactly what we want. (Don’t let the word “thumbnail” distract you; here it just means “smaller than original resolution.”) To use this method, we just need to get a CGImageSourceRef for the asset. Normally, you’d create these from a file URL or block of raw data, but what we have is an ALAssetRepresentation.
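For a plain file on disk, the lower-level call is straightforward (a minimal sketch; the path and 2500-pixel limit are just for illustration):

```objc
#import <ImageIO/ImageIO.h>

// ImageIO decodes a downscaled image directly from the source,
// without materializing the full-resolution bitmap first.
NSURL *url = [NSURL fileURLWithPath:@"/path/to/huge.jpg"]; // hypothetical path
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
NSDictionary *options = @{
    (NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (NSString *)kCGImageSourceThumbnailMaxPixelSize : @2500,
};
CGImageRef scaled =
    CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
CFRelease(source);
UIImage *image = [UIImage imageWithCGImage:scaled];
CGImageRelease(scaled);
```

The trick, then, is getting a CGImageSourceRef out of an ALAssetRepresentation rather than a file.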

To connect these pieces, all it takes is a bit of glue code to wrap the ALAssetRepresentation in a CGDataProviderRef, and wrap that in a CGImageSourceRef. We use CGDataProviderCreateDirect, passing a small set of callbacks used to retrieve the image data[1]. Like so:

(This is designed to live in an existing class; you’ll also need to add the AssetsLibrary and ImageIO frameworks to your project. The code uses ARC; if you need non-ARC code, just remove the two __bridge annotations.)
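A sketch of the glue code described above (the helper name thumbnailForAsset:maxPixelSize: is our own; the callbacks match CGDataProviderDirectCallbacks’ getBytesAtPosition and releaseInfo signatures):

```objc
#import <AssetsLibrary/AssetsLibrary.h>
#import <ImageIO/ImageIO.h>

// Called by ImageIO whenever it wants more of the compressed image data.
static size_t getAssetBytesCallback(void *info, void *buffer,
                                    off_t position, size_t count)
{
    ALAssetRepresentation *rep = (__bridge ALAssetRepresentation *)info;
    NSError *error = nil;
    size_t countRead = [rep getBytes:(uint8_t *)buffer
                          fromOffset:position
                              length:count
                               error:&error];
    if (countRead == 0 && error) {
        NSLog(@"thumbnailForAsset: error reading asset: %@", error);
    }
    return countRead;
}

// Balances the CFBridgingRetain below when the provider is destroyed.
static void releaseAssetCallback(void *info)
{
    CFRelease(info);
}

- (UIImage *)thumbnailForAsset:(ALAsset *)asset maxPixelSize:(NSUInteger)size
{
    NSParameterAssert(asset != nil);
    NSParameterAssert(size > 0);

    ALAssetRepresentation *rep = [asset defaultRepresentation];

    CGDataProviderDirectCallbacks callbacks = {
        .version = 0,
        .getBytePointer = NULL,
        .releaseBytePointer = NULL,
        .getBytesAtPosition = getAssetBytesCallback,
        .releaseInfo = releaseAssetCallback,
    };

    // The provider retains the representation; releaseAssetCallback frees it.
    CGDataProviderRef provider =
        CGDataProviderCreateDirect((void *)CFBridgingRetain(rep),
                                   [rep size], &callbacks);
    CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

    CGImageRef imageRef =
        CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef) @{
            (NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
            (NSString *)kCGImageSourceThumbnailMaxPixelSize : @(size),
        });

    CFRelease(source);
    CFRelease(provider);

    UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    return toReturn;
}
```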

To test this out, we ran some experiments with a 6019×6019 image from NASA. (You know this is serious stuff because it’s from NASA.) Fully decompressed, this image uses 138 MB (6019 × 6019 pixels × 4 bytes per pixel), which is plenty to get your app killed by the system on older devices. We ran a simple test app under the Allocations instrument and compared the dirty memory size[2] when loading the full-size image versus loading a downscaled version with the above code.

On an iPhone 5, when we load the above image at full resolution, we see a jump in our dirty memory of 138 MB, just as we’d expect. When we instead request an image of size at most 2500×2500, we see only a 24 MB bump (2500 × 2500 × 4 bytes ≈ 24 MB), which is exactly what we were hoping for.

On an iPhone 3GS, the app is immediately killed in the first case, but works just fine in the second case. Core Graphics (ImageIO in particular) is doing what we want it to do; it’s downscaling the image without first uncompressing the whole thing.

So, if you need to get an image from the Assets Library at a particular resolution, don’t load the original image first; use this code instead to avoid crashing and leaving your users wondering what happened.

[0] Though people have certainly asked.

[1] We could create an NSData from the ALAssetRepresentation’s getBytes:fromOffset:length:error: method and create a CGDataProviderRef around that, but using callbacks as we do in our sample ensures that if ImageIO is smart enough to decompress the image piece by piece, we don’t even load the entire compressed version into memory at once.

[2] I highly recommend the iOS Application Performance: Memory video from WWDC 2012 for more about dirty memory and memory usage in general.
