I'm being a bit picky here, but IOS is an operating system that runs on Cisco networking equipment. "Images" is how one refers to the software that is loaded on the switch/router/etc. For example, I can buy a "layer 3 enhanced image" for a Cisco switch that will give me limited routing capabilities.
For that reason, one should be careful about the case of the letters when referring to IOS and iOS, as getting the case wrong can lead to confusion.
Given the audience here, I would imagine anyone writing about Cisco IOS would naturally write the title as "Cisco IOS Image Tricks", but as pointed out, HN would be better off not trying to second-guess and proper-case titles.
Using 16-bit images won't save any memory, and may actually use more memory. The iOS graphics pipeline requires 32-bit images for rendering, so all other types of images are converted to a 32-bit texture.
If you are trying to optimize memory consumption, you need to measure memory consumption! You can't just change some code and assume your memory consumption has gone down.
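If you want a number you can log between changes, here is a rough sketch of reading the app's resident memory with Mach's task_info (the helper name is mine; Instruments still gives the fuller picture):

```swift
import Foundation

// Rough sketch: read the app's resident memory footprint via task_info so you
// can log a before/after number while testing a change. Instruments
// (Allocations / VM Tracker) gives a far more detailed breakdown.
func residentMemoryBytes() -> UInt64? {
    var info = mach_task_basic_info()
    var count = mach_msg_type_number_t(
        MemoryLayout<mach_task_basic_info>.size / MemoryLayout<natural_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) {
        $0.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
            task_info(mach_task_self_, task_flavor_t(MACH_TASK_BASIC_INFO), $0, &count)
        }
    }
    return kr == KERN_SUCCESS ? info.resident_size : nil
}
```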
Like gurgeous, I also measure memory consumption almost to a fault. Even though it appears iOS loads JPEGs directly into a 32bpp RGBA bitmap, you can afterwards trim down the size of the image resident in memory. It makes a significant difference in memory usage according to Instruments, but slows down the loading of an image from disk. Unfortunately, as noted elsewhere in this thread, it doesn't seem possible to load a JPEG directly into a 16bpp in-memory representation and skip the intermediate 32bpp format.
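A minimal sketch of that kind of trimming, assuming the approach is to redraw the decoded image into a 16bpp (RGB555) Core Graphics context and keep that copy (the function name is made up):

```swift
import UIKit

// Sketch: after UIImage has decoded the JPEG, redraw it into a 16bpp
// (RGB555, no alpha) bitmap and keep that copy instead, roughly halving the
// resident size versus the default 32bpp RGBA. The 5-bits-per-component /
// 16-bits-per-pixel / noneSkipFirst combination is one of the few 16bpp
// formats Core Graphics bitmap contexts accept on iOS.
func trimmedTo16bpp(_ image: UIImage) -> UIImage? {
    guard let source = image.cgImage else { return nil }
    let bitmapInfo = CGImageAlphaInfo.noneSkipFirst.rawValue |
                     CGBitmapInfo.byteOrder16Little.rawValue
    guard let context = CGContext(data: nil,
                                  width: source.width,
                                  height: source.height,
                                  bitsPerComponent: 5,
                                  bytesPerRow: 0,   // let Core Graphics pick row alignment
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo) else { return nil }

    // This redraw is the extra work that slows down loading from disk.
    context.draw(source, in: CGRect(x: 0, y: 0, width: source.width, height: source.height))
    guard let trimmed = context.makeImage() else { return nil }
    return UIImage(cgImage: trimmed, scale: image.scale, orientation: image.imageOrientation)
}
```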
You could compress those images down to 4bpp or even 2bpp by encoding to PVRTC. Since they're photorealistic images you'll retain image quality despite being 8x or even 16x smaller, and the graphics chip supports the format natively so you don't need to decompress it.
I use it in games whenever a photorealistic image is required.
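For what it's worth, a minimal sketch of loading one with GLKTextureLoader, which understands PVR containers; it assumes an EAGLContext is already current, and the file name is made up:

```swift
import GLKit

// Sketch: GLKTextureLoader can hand a PVRTC-compressed .pvr file straight to
// the GPU, so the data stays at 2bpp/4bpp and is never expanded to 32bpp.
// Assumes an EAGLContext is already current; "house_atlas.pvr" is a made-up name.
func loadHouseAtlas() -> GLKTextureInfo? {
    guard let path = Bundle.main.path(forResource: "house_atlas", ofType: "pvr") else {
        return nil
    }
    return try? GLKTextureLoader.texture(withContentsOfFile: path, options: nil)
}
```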
Some more info:
- "Avoid using PVRTC when you have static images that are the focus of your content"
- "images are 2-4x larger, but memory use is less"
- "amazing memory improvement, but no real change in performance"
- "textures must be power of two"
It probably doesn't fit my use case, but it's fascinating.
Right now you are using 32bpp images, and then falling back to 16bpp for slower devices. Because PNG uses zip-style compression, pictures of houses won't compress that well, so your size on disk and size in memory will be similar.
PVRTC allows you to save at 4bpp or 2bpp, so you'll get sizes 8x or even 16x SMALLER than a full-color image, both on disk and while loaded into memory (loading images is a major source of lag on iPhone, so a smaller on-disk footprint is always welcome). You can even zip-compress it for further on-disk savings, though YMMV.
The quality degradation is similar to what you'll get in JPEG, in that there will be some artifacting depending on the quality of the encoder and the source image (hard edges don't compress well, nor does text, but photorealistic images retain a lot of quality).
The power-of-2 restriction is the only real pain point of PVRTC. But if you're making sprite sheets, that's probably not going to be a big issue for you anyway. Once it's in memory it's just a texture like any other, and you can use it sprite sheet style like you're already doing, chopping up the contents any way you choose.
I suggest you try cropping one of your house images to a power-of-2 square as a quick test and converting it to PVRTC so you can see for yourself what the quality is like. It may yet be usable for your purposes, and the potential payoff is huge.
Yeah. When you load a JPEG image, it gets decoded to a 32bpp image in memory on the device. PVRTC images, on the other hand, stay the same size (2bpp or 4bpp) when loaded into memory. Plus they don't need to be decoded.
So a 512x512 image would take 64KB of RAM using 2bpp PVRTC, 128KB using 4bpp PVRTC, and a whopping 1MB once a JPEG or PNG is decoded.
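The arithmetic, for anyone who wants to plug in their own dimensions:

```swift
// Back-of-the-envelope texture sizes: width * height * bitsPerPixel / 8.
func textureBytes(width: Int, height: Int, bitsPerPixel: Int) -> Int {
    return width * height * bitsPerPixel / 8
}

textureBytes(width: 512, height: 512, bitsPerPixel: 2)   //    65,536 bytes (64KB,  2bpp PVRTC)
textureBytes(width: 512, height: 512, bitsPerPixel: 4)   //   131,072 bytes (128KB, 4bpp PVRTC)
textureBytes(width: 512, height: 512, bitsPerPixel: 32)  // 1,048,576 bytes (1MB, decoded JPEG/PNG)
```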
Another thing to look out for with PVRTC compression is that you tend to get compression artifacts along the edges of the texture. If you attempt to tile multiple textures to form a larger image, the artifacts will show up as very distracting lines.
It can be fixed by discarding the last few pixels of each edge.
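Concretely, assuming you control the texture coordinates when you tile, something like this sketch (the two-texel default is just a starting value to tune):

```swift
// Sketch of the fix: when tiling, sample an inset region of each texture so
// the artifact-prone border texels never show. edgeTexels is a tuning value.
func insetTexCoordRange(textureSize: Int, edgeTexels: Int = 2) -> ClosedRange<Float> {
    let inset = Float(edgeTexels) / Float(textureSize)
    return inset...(1.0 - inset)
}

// e.g. for a 512-texel-wide tile, use UVs in roughly 0.0039...0.9961 instead of 0...1.
```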
I remember back in the days of 60 MHz PowerPCs with 16 MB of RAM, I used software that would decode JPEG files in blocks, so you could look at high-res photos without holding the whole uncompressed image in RAM. In fact, I have an app like that on my Android phone for viewing a very high-res bike route map I rendered out of a PDF.
Are there any libraries like that floating around that would work in this situation?
So, it sounds like 'spriting' in this context is a way of sending a larger, stitched-together image and then showing a portion of it in each UIImage. I work on Android, and I'm interested in the technique but was just looking for some validation before I go off doing research.
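If I'm reading it right, the iOS side looks roughly like this sketch (the atlas name and frame rect are made up, and I gather cropping(to:) references the atlas's pixels rather than copying them):

```swift
import UIKit

// Sketch of the spriting technique as I understand it: load one big atlas,
// then hand each UIImageView a crop of it. Names and rects are made up.
func spriteImage(fromAtlas atlas: UIImage, frame: CGRect) -> UIImage? {
    // cropping(to:) references the atlas's pixel data rather than copying it,
    // so many sprites can share one decoded bitmap.
    guard let cgAtlas = atlas.cgImage,
          let cropped = cgAtlas.cropping(to: frame) else { return nil }
    return UIImage(cgImage: cropped, scale: atlas.scale, orientation: atlas.imageOrientation)
}

// let house = spriteImage(fromAtlas: UIImage(named: "house_atlas")!,
//                         frame: CGRect(x: 0, y: 0, width: 256, height: 256))
```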