Using iPhone-optimized images
Learn how to read and create iPhone-optimized PNG files, premultiply and straighten alpha, and access packed image data.
Key terms
- iPhone-optimized image
A PNG file that uses the BGR/BGRA color formats, and omits the modular redundancy check from the compressed image data stream. iPhone-optimized images are designed to be computationally efficient for iPhone hardware, and are sometimes (rarely) more space-efficient than standard PNG images.
- modular redundancy check
A checksum algorithm used to detect errors in data transmission. The modular redundancy check is omitted from iPhone-optimized images. See Adler-32.
- BGR/BGRA color format
The native color format of an iPhone. It is used to blit image data to the iPhone’s graphics hardware without having to do as much post-processing on it.
- premultiplied alpha
A pixel encoding where the color samples are scaled by the alpha sample. This improves compression by zeroing-out all color channels in fully-transparent pixels. (A short sketch of this scaling follows this list.)
- straight alpha
A pixel encoding where the color samples are not scaled by the alpha sample. This is the normal PNG pixel encoding.
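For concreteness, the sketch below shows roughly how an 8-bit color sample is scaled by its alpha sample. The rounding is illustrative only and may differ from the library’s exact arithmetic.

```swift
// illustrative only: scale an 8-bit color sample by an 8-bit alpha sample.
// the exact rounding Swift PNG uses may differ slightly.
func premultiply(_ color:UInt8, alpha:UInt8) -> UInt8
{
    .init((UInt16(color) * UInt16(alpha) + 127) / 255)
}

print(premultiply(200, alpha: 255)) // 200: opaque pixels are unchanged
print(premultiply(200, alpha:   0)) //   0: fully-transparent pixels zero out
```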
Worked example
As of version 4.0, this library has first-class support for iPhone-optimized images. iPhone-optimized images are a proprietary Apple extension to the PNG standard. Sometimes people refer to them as CgBI images. This name comes from the CgBI application chunk present at the beginning of such files, whose name in turn comes from the CGBitmapInfo option set in the Apple Core Graphics framework.
iPhone-optimized images are occasionally more space-efficient than standard PNG images, because the color model they use (discussed shortly) quantizes away color information that the user will never see. It is a common misconception that iPhone-optimized images are optimized for file size. They are mainly optimized for computational efficiency, by omitting the modular redundancy check from the compressed image data stream. (Some authors erroneously refer to it as the cyclic redundancy check, which is a distinct concept, and completely unaffected by iPhone optimizations.) iPhone-optimized images also use the BGR/BGRA color formats, the latter of which is the native color format of an iPhone. This makes it possible to blit image data to an iDevice’s graphics hardware without having to do as much post-processing on it.
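As a quick illustration of the quantization (the pixel value below is arbitrary): a nearly-transparent pixel loses most of its color precision when premultiplied, and straightening it afterwards, using the RGBA.premultiplied and RGBA.straightened properties discussed later in this tutorial, does not recover the original samples.

```swift
import PNG

// an arbitrary, nearly-transparent pixel: most of its color precision is
// discarded by premultiplication, and straightening cannot recover it
let faint:PNG.RGBA<UInt8> = .init(200, 100, 50, 2)
print(faint.premultiplied)              // color samples collapse toward zero
print(faint.premultiplied.straightened) // not equal to the original pixel
```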
First-class support means that the library supports iPhone-optimized images out of the box. Most PNG libraries such as libpng require third-party plugins to handle them, since there is some debate in the open source community over whether such images should be considered real PNG files. Swift PNG is, of course, a Swift library, so it supports them anyway, on all platforms, including non-Apple platforms. A possible use case is to have a Linux server serve iPhone-optimized images to an iOS client, thus reducing battery consumption on users’ devices.
In this tutorial, we will convert the following iPhone-optimized image to a standard PNG file, and then convert it back into an iPhone-optimized image.
You don’t need any special settings to handle iPhone-optimized images. You can decode them as you would any other PNG file.
```swift
import PNG

let path:String = "Sources/PNG/docs.docc/iPhoneOptimized/iPhoneOptimized"

guard
var image:PNG.Image = try .decompress(path: "\(path).png")
else
{
    fatalError("failed to open file '\(path).png'")
}
```
We can check if a file is an iPhone-optimized image by inspecting its color format.
```swift
print(image.layout.format)
```
```
bgra8(palette: [], fill: nil)
```
The bgra8(palette:fill:) format is one of two iPhone-optimized color formats. It is analogous to the rgba8(palette:fill:) format. The other iPhone-optimized color format is bgr8(palette:fill:key:), which lacks an alpha channel, and is analogous to rgb8(palette:fill:key:).
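For example, one way to detect an iPhone-optimized image after decoding is to switch over its color format. The snippet below is just a sketch using the two format cases named above; image is the PNG.Image decoded earlier.

```swift
// a sketch: bgr8 and bgra8 are the two iPhone-optimized color formats
switch image.layout.format
{
case .bgr8, .bgra8:
    print("iPhone-optimized image")
default:
    print("standard PNG image")
}
```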
We can unpack iPhone-optimized images to any color target. iPhone-optimized images use premultiplied alpha. We can convert the pixels back to straight alpha using the RGBA.straightened or VA.straightened computed properties.
```swift
let rgba:[PNG.RGBA<UInt8>] = image.unpack(as: PNG.RGBA<UInt8>.self)
    .map(\.straightened)
```
It is often convenient to work in the premultiplied color space, so the library does not straighten the alpha automatically. Of course, it’s also unnecessary to straighten the alpha if you know the image has no transparency.
Depending on your use case, you may not be getting the most out of iPhone-optimized images by unpacking them to a color target. As mentioned previously, the iPhone-optimized format is designed such that the raw, packed image data can be uploaded directly to the graphics hardware. You can access the packed data buffer through the Image.storage property.
```swift
print(image.storage[..<16])
```
```
[25, 0, 1, 255, 16, 8, 8, 255, 8, 0, 16, 255, 32, 13, 0, 255]
```
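For instance, on Apple platforms the packed BGRA storage can be copied straight into a Metal texture. The helper below is a hypothetical sketch, not part of Swift PNG; it assumes the bgra8 format shown above, whose packed byte order matches Metal’s .bgra8Unorm pixel format.

```swift
import Metal
import PNG

// hypothetical helper (not part of Swift PNG): copy the packed storage of a
// bgra8 image into a Metal texture without unpacking it to a color target
func makeTexture(for image:PNG.Image, on device:MTLDevice) -> MTLTexture?
{
    let descriptor:MTLTextureDescriptor = .texture2DDescriptor(
        pixelFormat: .bgra8Unorm, // matches the packed b, g, r, a byte order
        width: image.size.x,
        height: image.size.y,
        mipmapped: false)
    guard let texture:MTLTexture = device.makeTexture(descriptor: descriptor)
    else
    {
        return nil
    }
    image.storage.withUnsafeBytes
    {
        texture.replace(
            region: MTLRegionMake2D(0, 0, image.size.x, image.size.y),
            mipmapLevel: 0,
            withBytes: $0.baseAddress!,
            bytesPerRow: 4 * image.size.x)
    }
    return texture
}
```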
We can convert the iPhone-optimized example image to a standard PNG file by re-encoding it as any of the standard color formats.
```swift
let standard:PNG.Image = .init(
    packing: rgba,
    size: image.size,
    layout: .init(format: .rgb8(palette: [], fill: nil, key: nil)))
try standard.compress(path: "\(path)-rgb8.png")
```
We can convert it back into an iPhone-optimized image by specifying one of the iPhone-optimized color formats. The RGBA.premultiplied property converts the pixels to the premultiplied color space. Again, this step is unnecessary if you know the image contains no transparency.
```swift
let apple:PNG.Image = .init(
    packing: standard.unpack(as: PNG.RGBA<UInt8>.self).map(\.premultiplied),
    size: standard.size,
    layout: .init(format: .bgr8(palette: [], fill: nil, key: nil)))
try apple.compress(path: "\(path)-bgr8.png")
```
The RGBA.premultiplied and RGBA.straightened properties satisfy the condition that x.premultiplied == x.premultiplied.straightened.premultiplied for all x.
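You can check this property directly; the pixel value below is arbitrary.

```swift
// an arbitrary straight-alpha pixel: re-premultiplying a premultiplied value
// after straightening it leaves it unchanged
let x:PNG.RGBA<UInt8> = .init(200, 100, 50, 63)
print(x.premultiplied == x.premultiplied.straightened.premultiplied) // true
```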
See also
- Basic decoding
Learn how to decompress a PNG file to its rectangular image representation, and unpack rectangular image data to the built-in RGBA, grayscale-alpha, and scalar color targets.
- Basic encoding
Learn how to define an image layout, understand the relationship between color formats and color targets, create a rectangular image data instance from a pixel array, and compress images at different compression levels.
- Indexing
Learn how to define a color palette, encode an image from an index array, decode an image to an index array, and use custom indexing and deindexing functions.
- Image metadata
Learn how to inspect and edit image metadata.
- In-memory images
Learn how to decode an image from a memory blob, encode an image into a memory blob, and implement a custom data source or destination.
- Online decoding
Learn how to use the contextual API to manually manage decoder state, display partially-downloaded images, display previews of partially-downloaded interlaced images with overdrawing, rebind image data to a different image layout, and customize the chunk granularity in emitted PNG files.
- Custom color
Learn how to define a custom color target, understand and use the library’s convolution and deconvolution helper functions, implement pixel packing and unpacking for a custom HSVA color target, and apply chroma keys from applicable color formats.