I don't mean this as a dig at the developers of this particular site (my annoyance is cumulative rather than about this site specifically), but IMO this is yet another website that does one simple thing that should be a library (or at the very least a documented web API) and not a website.
What? The designers who will be using this to convert PNGs won't have the slightest idea how to access an API. I, as a programmer, don't want to write code to do this either. I think creating a site for this is awesome.
It would be cool if they created an API as well, and a library (but for what language? so many compatibility issues...), but to say this should "not be a website" is entirely backwards. A website is the most user-friendly and accessible platform possible, and it makes perfect sense as the first step.
I have to agree with the OP here. Making this a webpage is silly.
If it were a library, the program that created the PNG could have saved it in this compressed form in the first place.
Choosing the language isn't an issue, either. The obvious choice is C because that's what regular libpng is written in. And every language in existence provides a way to interface with C.
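For what it's worth, pulling a C library like that into another language is only a few lines. Here's a rough sketch in Python with ctypes; the library name and the quantize_png() signature are made up purely to show the pattern:

```python
import ctypes

# Hypothetical shared library exposing, say:
#   int quantize_png(const char *in_path, const char *out_path, int max_colors);
lib = ctypes.CDLL("libtinypng.so")
lib.quantize_png.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_int]
lib.quantize_png.restype = ctypes.c_int

rc = lib.quantize_png(b"input.png", b"input-tiny.png", 256)
if rc != 0:
    raise RuntimeError("quantization failed with code %d" % rc)
```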
"What? The designers who will be using this to convert PNG's won't have the slightest idea how to access an API."
For designers it should be a Photoshop plugin (built on the theoretical library) or just a desktop app that can bulk convert entire directories.
While dealing with 20 images at a time is certainly nicer than one at a time, a drag & drop website UI like this one is an absolute workflow killer on the design side.
The problem is that websites are the most accessible way to deliver an interface for humans. But nobody else can do anything with the tech, and you're limited to the one workflow the creator thought of.
If this were a library or a command line app like pngcrush, web services could be spawned from it in days, easily. Or desktop apps. Or editor plugins. But in the current format, it's inextensible.
(disclaimer: this isn't always true, but it's true in this case)
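To illustrate how quickly other frontends could be built on a command line core: the whole integration is basically one function that shells out to the tool. A minimal sketch, assuming pngquant is installed and that I'm remembering its flags right (check `pngquant --help` for your version):

```python
import subprocess

def tinify(in_path, out_path, colors=256):
    """Shell out to pngquant. A web endpoint, a build step or an editor
    plugin could all call this same function."""
    subprocess.run(
        ["pngquant", "--force", "--output", out_path, str(colors), in_path],
        check=True,
    )

tinify("hero.png", "hero.tiny.png")
```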
You're right, proper transparency support is step 1 -- but that isn't to say that GIMP couldn't have some sort of automatic lossy compression tool (akin to this) on top of that tech.
EDIT: I suppose this software is equivalent to GIMP's "Automatically select palette" option when converting images to indexed. Perhaps that option could be replaced with a list of algorithms to choose from, like how the Size dialog lets you choose your own scaling algorithms.
Yes, exactly. This tool exists simply because "Automatically select palette" doesn't support transparency. If it did, there would be no need for this.
I'm not sure you really need multiple algorithms; the algorithm itself isn't anything special as far as I know, it just supports transparency, that's all.
You can get somewhat similar results by selecting all the transparent and partially transparent parts and saving the selection. Then flatten the image (i.e. mix the transparency into the background color), reduce the color depth, and use the Color to Alpha option on the saved selection only (not the whole image) to subtract that color and bring back the transparency.
Then count how many colors you have; if there are too many, undo everything, choose a lower color depth, and try again. It helps to pick a background mixing color that doesn't otherwise appear in the image.
I believe this website uses pngquant to perform the optimization. You can get the tool from http://pngquant.org (it's open-source and pretty easy to use as a library too).
Thanks for all the feedback, everyone! We're still working on making TinyPNG easier to use and ironing out some of the kinks. Keep the comments coming!
We created this service because we (our web agency) were building a couple of sites that used very large transparent images. Unfortunately the file sizes were also massive, so we went looking for compression mechanisms beyond traditional lossless optimisation tools, which simply did not reduce our files enough. We built TinyPNG around a couple of existing open source quantising and compression tools.
Initially it was an internal tool, but we were extremely surprised by the consistently good quality of the results. So we decided to share it as an online service so that it is as easy as possible for everyone to reduce file sizes. An API will be coming soon!
Is it always a good idea to use lossy compression? No, certainly not. There are some edge cases that perform poorly. But we think the results are impressive. Use your own judgment! :-)
We actually went looking for PNG files on very popular sites (Facebook, Google, Github, Duckduckgo, many others) and almost all of them could benefit from TinyPNG's file size reduction without noticeable quality loss.
Mostly pngquant (the new version), optipng and advpng. We are still tuning parameters and swapping in and out tools based on our benchmarks and testing suite.
I wouldn't say they look awful; there's a difference in the background of the image, but the main focus shows little change.
It's a trade-off, but I would imagine that with thousands of files it could mean significant savings on bandwidth, and you could always keep the original for download but not for display purposes.
Here are the results of pngnq, which does exactly the same thing as this tool. It produced a 46K file, which I was then able to reduce to 41K with pngout followed by advpng.
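For reference, that chain is easy to script; a rough sketch (tool flags from memory, so double-check the man pages):

```python
import subprocess

# Quantize with pngnq (-n = number of colours; it writes photo-nq8.png by
# default), then squeeze the result losslessly with pngout and advpng.
subprocess.run(["pngnq", "-n", "256", "photo.png"], check=True)
# No check=True here: pngout can exit non-zero when it fails to shrink a file.
subprocess.run(["pngout", "photo-nq8.png", "photo-small.png"])
subprocess.run(["advpng", "-z", "-4", "photo-small.png"], check=True)
```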
Riot[1] (Windows) does as well as or better than pnggauntlet and lets you preview things. It supports the same backends I think (pngout, optipng, advpng, etc.) and some extra dithering options... perhaps worth a look; I switched from pnggauntlet. :)
There are _slight_ artifacts on the defocused green die. At typical Wikipedia in-article display sizes (400x300, in this case), this is acceptable. Viewing in Win Firefox 13.01, Windows gamma. I'd use tinypng on large files before downscaling for web use.
If you have a good downscaling tool, it will create a larger number of intermediate colors when you downscale, and you'll actually end up with a larger file!
PNG8 (which IE6 supports) has had support for alpha transparency for a long time, but only Fireworks was able to create those files. I'm guessing that these guys are making PNG8s? I haven't tested the site myself.
Fireworks is not the only program that can create those files. I've never even heard of Fireworks, and I create indexed PNGs with transparency all the time.
Although IE6 has its share of problems, it's actually fairly simple to develop for a single version of one browser. Once you get used to the quirks, they are constants.
(Note: I'm not saying I like it, just that it's become "normal")
I'd be curious to know if they have plans to release the source for this. It'd be much more useful to use this in a build step, rather than drag-and-drop into a browser window.
It must be a start-up with a freemium model. Soon they will reveal a monthly subscription option with a professional bulk uploader in Flash or even a desktop client. Just wait and see.
Ditto. I'd like to see some side-by-side comparisons with PNGOUT, PNGCrush, OptiPNG, and AdvPNG. I use ImageOptim on my Mac to automate my PNG optimization—and it can get some impressive gains.
On the Mac I use ImageAlpha to reduce the colours to 256 (it has a preview window so you can see if it's OK), then when saving the file I choose the option to send it to ImageOptim.
On Linux you can get the same result by using pngquant, then trimage. AFAIK there's no GUI frontend for pngquant, so I guess at how many colours will be OK and check the result before running trimage.
The reason why I use 2 tools is that ImageAlpha/pngquant is lossy and ImageOptim/trimage is lossless.
The tools you mention use lossless compression. For 24-bit PNG files, that means they probably won't come close to the savings of the lossy compression used by TinyPNG.
Can't you achieve the same results with ImageMagick? For example, `mogrify -format png8 *.png` converts all png images in the current directory to indexed versions.
Then I opened the original panda PNG with GIMP, clicked on Image -> Mode -> Indexed -> Optimized palette -> 255 colors, and obtained an even smaller file size.
nQuant is a similar (C#, Apache 2.0 licensed) re-compressor; it was created by Matt Wrock as an adjustment of Xiaolin Wu’s algorithm to work with the alpha channel.
I typically just apply a Posterize adjustment layer in Photoshop and still save out to 24-bit. More often than not I can get pretty good compression, and I have the ability to mask out different areas of the posterization to achieve varying levels of compression interactively. Kudos for making this website, though; it seems like a great way to quickly compress images without having to think about it too much.
Also, the original has values in the RGB channels of fully transparent pixels; they aren't visible (because of the alpha) but still have to be compressed, so it's not an entirely fair test (although both files have this to some degree, e.g. the turquoise area around the bear), and the new one could be compressed even more.
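If you want to blank out those hidden RGB values before comparing, something like this does it (a quick sketch assuming Pillow and numpy; the filename is just a placeholder):

```python
from PIL import Image
import numpy as np

rgba = np.array(Image.open("bear.png").convert("RGBA"))
# Zero the RGB channels wherever the pixel is fully transparent; the image
# looks identical but the data deflates better.
rgba[rgba[..., 3] == 0, :3] = 0
Image.fromarray(rgba).save("bear-clean.png", optimize=True)
```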
I've only tried one image so far, but the one I tried showed impressive results. I've always used http://punypng.com, which has always done a great job, but I just ran a file that PunyPNG had compressed down to 131kb through TinyPNG, and it took it down to 42kb!
PNG compresses "exact" gradients pretty well. I'd be interested in a tool that not only reduces colors (what this one is doing), but also smooths gradient-like segments of the image so they compress better.
Yeah, I am curious how much you can improve compression by keeping everything 24 bit, but slightly tweaking the color values to make the result more compressible. (PNG applies one of five predictors to each line, takes the difference from the prediction, and deflates the result.)
For example, as a greedy algorithm: find the best predictor for a line, then tweak the line to reduce the alphabet (round off the difference from predicted values) or to get longer deflate matches (substitute values if they're close and match a previous run).
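Just to make the idea concrete, here's a toy sketch of my own (not how libpng or TinyPNG actually works): it picks a filter per scanline and then rounds the residuals before deflating, assuming numpy and a plain 8-bit grayscale image for simplicity:

```python
import zlib
import numpy as np

def paeth(left, up, upleft):
    # Standard PNG Paeth predictor: pick whichever neighbour is closest
    # to left + up - upleft.
    p = left.astype(np.int16) + up - upleft
    pa, pb, pc = np.abs(p - left), np.abs(p - up), np.abs(p - upleft)
    return np.where((pa <= pb) & (pa <= pc), left,
                    np.where(pb <= pc, up, upleft)).astype(np.uint8)

def score(res):
    # Treat residual bytes as signed (255 means -1); smaller = more compressible.
    s = res.astype(np.int16)
    return np.minimum(s, 256 - s).sum()

def residuals(img):
    # Per scanline, try Sub/Up/Paeth and keep the filter with the lowest
    # score, roughly what real encoders do. Toy version: one byte per pixel.
    rows, prev = [], np.zeros_like(img[0])
    for line in img:
        left = np.roll(line, 1); left[0] = 0
        upleft = np.roll(prev, 1); upleft[0] = 0
        cands = [line - left, line - prev, line - paeth(left, prev, upleft)]
        rows.append(min(cands, key=score))
        prev = line
    return np.concatenate(rows)

def deflated_size(img, quantum=1):
    # The greedy "tweak": round residuals to multiples of `quantum` so deflate
    # sees a smaller alphabet and longer matches. quantum=1 is lossless.
    # A real encoder would reconstruct each line from the rounded residuals
    # and predict the next line from that reconstruction.
    r = residuals(img).astype(np.int16)
    r = ((r // quantum) * quantum).astype(np.uint8)
    return len(zlib.compress(r.tobytes(), 9))

# e.g. img = np.array(Image.open("photo.png").convert("L"))
# print(deflated_size(img), deflated_size(img, quantum=4))
```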
PNG is no longer the only widely supported format with alpha channels. You can use WebP (which has JavaScript and Flash polyfills). You can fall back to a plain PNG for those with JavaScript disabled.
We plan to release an API. The compression code is a combination of a couple of open source tools with some careful tuning and heuristics for optimal results.
To compare yourself:
I get one byte of difference between what pngnq produces and their example shrunk version. BTW, I use pngnq all the time in instances where losing some information is okay. It's a great tool.