With the latest release of Etchings, we wanted to support high resolution output. This means reading high-res versions of images from the camera roll, but not blowing our memory limits if the user selects a 30MP monstrosity. We came up with a way to get a smaller version of any ALAsset without having to first uncompress the whole image into memory, and since we couldn’t find this technique anywhere online, we’re sharing it here.
By default, the UIImagePickerController hands you a UIImage, but since we want to control the size more closely, we have to make use of the UIImagePickerControllerReferenceURL it provides to get access to the underlying ALAsset. The asset already provides several versions of the original image:
- A thumbnail:
[asset thumbnail];
- An aspect-correct thumbnail:
[asset aspectRatioThumbnail];
- A full-resolution image:
[[asset defaultRepresentation] fullResolutionImage];
- An image suitable for displaying fullscreen:
[[asset defaultRepresentation] fullscreenImage];
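As a refresher, here is roughly how the reference URL gets you the ALAsset in the first place. This is a sketch: assetForURL:resultBlock:failureBlock: is the real AssetsLibrary call, but the assetsLibrary property and the minimal error handling are our own scaffolding.

```objc
// Sketch: resolving UIImagePickerControllerReferenceURL into an ALAsset.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSURL *referenceURL = info[UIImagePickerControllerReferenceURL];

    // Keep the library alive: an ALAsset is only valid as long as the
    // ALAssetsLibrary that vended it still exists.
    self.assetsLibrary = [[ALAssetsLibrary alloc] init];
    [self.assetsLibrary assetForURL:referenceURL
                        resultBlock:^(ALAsset *asset) {
                            // asset is now ready for any of the versions above.
                        }
                       failureBlock:^(NSError *error) {
                           NSLog(@"Couldn't load asset: %@", error);
                       }];
    [picker dismissViewControllerAnimated:YES completion:nil];
}
```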
But there’s no obvious way to get an arbitrary size. There is a suggestive method named CGImageWithOptions:, which looks like it takes flags related to the desired size of the image, but if you read the docs carefully, those particular values (kCGImageSourceThumbnailMaxPixelSize) are only honored by the lower-level CGImageSource functions, namely CGImageSourceCreateThumbnailAtIndex operating on a source created with CGImageSourceCreateWith[Data|URL].
OK, so, how about dropping down a level? The aforementioned CGImageSourceCreateThumbnailAtIndex function looks like it will do exactly what we want. (Don’t let the word “thumbnail” distract you; here it just means “smaller than original resolution.”) To use this function, we just need to get a CGImageSourceRef for the asset. Normally, you’d create one from a file URL or a block of raw data, but what we have is an ALAssetRepresentation.
To connect these things together, all it takes is a bit of glue code to wrap the ALAssetRepresentation as a CGDataProviderRef, and wrap that into a CGImageSourceRef. We use CGDataProviderCreateDirect, passing a small set of callback functions used to retrieve the image data. Like so:
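Here’s a sketch of that glue code. The helper name thumbnailForAsset:maxPixelSize: is our own; CGDataProviderCreateDirect, CGImageSourceCreateWithDataProvider, and CGImageSourceCreateThumbnailAtIndex are the real CoreGraphics and ImageIO calls.

```objc
#import <AssetsLibrary/AssetsLibrary.h>
#import <ImageIO/ImageIO.h>
#import <UIKit/UIKit.h>

// Feeds compressed bytes from the ALAssetRepresentation to ImageIO on
// demand, so the full image is never read into memory all at once.
static size_t getAssetBytesCallback(void *info, void *buffer,
                                    off_t position, size_t count)
{
    ALAssetRepresentation *rep = (__bridge ALAssetRepresentation *)info;
    NSError *error = nil;
    size_t countRead = [rep getBytes:(uint8_t *)buffer
                          fromOffset:position
                              length:count
                               error:&error];
    if (countRead == 0 && error) {
        // The asset was deleted, or the library changed underneath us.
        NSLog(@"error reading asset bytes: %@", error);
    }
    return countRead;
}

static void releaseAssetCallback(void *info)
{
    // Balances the CFBridgingRetain below.
    CFRelease(info);
}

// Hypothetical helper; drop it into whatever class handles your images.
- (UIImage *)thumbnailForAsset:(ALAsset *)asset maxPixelSize:(NSUInteger)size
{
    NSParameterAssert(asset != nil);
    NSParameterAssert(size > 0);

    ALAssetRepresentation *rep = [asset defaultRepresentation];

    CGDataProviderDirectCallbacks callbacks = {
        .version = 0,
        .getBytePointer = NULL,   // we only support positioned reads
        .releaseBytePointer = NULL,
        .getBytesAtPosition = getAssetBytesCallback,
        .releaseInfo = releaseAssetCallback,
    };

    // The provider holds a retain on the representation until releaseInfo.
    CGDataProviderRef provider =
        CGDataProviderCreateDirect((void *)CFBridgingRetain(rep),
                                   [rep size], &callbacks);
    CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

    CGImageRef imageRef = NULL;
    if (source) {
        NSDictionary *options = @{
            (__bridge NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
            (__bridge NSString *)kCGImageSourceThumbnailMaxPixelSize : @(size),
            (__bridge NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES,
        };
        imageRef = CGImageSourceCreateThumbnailAtIndex(source, 0,
                       (__bridge CFDictionaryRef)options);
        CFRelease(source);
    }
    CGDataProviderRelease(provider);

    UIImage *result = nil;
    if (imageRef) {
        result = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
    }
    return result;
}
```

A call like [self thumbnailForAsset:asset maxPixelSize:2500] then hands back a UIImage no larger than 2500 pixels on its longest side, decoded without ever materializing the full-resolution bitmap. (kCGImageSourceCreateThumbnailWithTransform also applies the photo’s EXIF rotation for you.)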
(This is designed to live in an existing class; you’ll also need to add the AssetsLibrary and ImageIO frameworks to your project. This code is ARC; if you need non-ARC code, just remove the two __bridge casts.)
To test this out, we ran some experiments with a 6019×6019 image from NASA. (You know this is serious stuff because it’s from NASA.) Fully decompressed, this image uses 138 MB (6019 × 6019 pixels at 4 bytes per pixel), which is plenty to get your app killed by the system on older devices. We ran a simple test app under the Allocations instrument and compared the app’s dirty memory size when loading the full-size image versus a thumbnailed version produced with the above code.
On an iPhone 5, when we load the above image at full resolution, we see a jump in our dirty memory of 138 MB, just as we’d expect. When we instead request an image of size at most 2500×2500, we see only a 24 MB bump (2500 × 2500 pixels at 4 bytes per pixel), which is what we were hoping for.
On an iPhone 3GS, the app is immediately killed in the first case, but works just fine in the second case. Core Graphics (ImageIO in particular) is doing what we want it to do; it’s downscaling the image without first uncompressing the whole thing.
So, if you need to get an image from the Assets Library at a particular resolution, don’t load the original image first; use this code instead to avoid crashing and leaving your users wondering what happened.
 Though people have certainly asked. (back)
 We could create an NSData from the getBytes:fromOffset:length:error: method and create a CGDataProviderRef around that, but using a callback as we do in our sample ensures that, if ImageIO is smart and can decompress the image piece by piece, we don’t even load the entire compressed version into memory at once. (back)
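For comparison, the buffer-everything route that footnote describes would look roughly like this sketch. It works, but the entire compressed file lands in memory up front.

```objc
// The simpler alternative: copy all the compressed bytes into an NSData
// first, then hand that to CoreGraphics. Fine for small assets, wasteful
// for big ones.
ALAssetRepresentation *rep = [asset defaultRepresentation];
NSMutableData *data = [NSMutableData dataWithLength:(NSUInteger)[rep size]];
NSError *error = nil;
[rep getBytes:(uint8_t *)data.mutableBytes fromOffset:0
       length:data.length error:&error];
CGDataProviderRef provider =
    CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
```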
 I highly recommend the iOS Application Performance: Memory video from WWDC 2012 for more about dirty memory and memory usage in general. (back)