
The files in the before/after preview slider are exactly the same file size, even though they're supposed to show how the compressed image looks as good as the original. They're also not the same files you get when you click the download button.



Those are just preview files used for fast page load, but they are based on resized versions of the original and the JPEGmini version. You are welcome to download the original and JPEGmini full resolution files, and compare them at "Actual Size" (100% zoom).


If you take two large images and resize them down to a small preview, they'll look the same. Showing those as "examples" is complete bullshit.


I downloaded the pic of the dog, opened the 'Original' in Paint.net, re-saved it at the same file size as the JPEGmini version, and it's indistinguishable from the other two versions. JPEGmini doesn't seem to do anything that I can't already do in any image editor.

Looks like snake oil to me.


Note my previous reply on this issue: JPEGmini adaptively encodes each JPEG file to the minimum file size possible without affecting its original quality (the output size is of course different for each input file). You can take a specific file and tune a regular JPEG encoder to reach the same size as JPEGmini did on that specific file. And you can also manually tune the quality for each image using a regular JPEG encoder by viewing the photo before and after compression. But there is no other technology that can do this automatically (and adaptively) for millions of photos.
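For the curious, here is a rough sketch of what "adaptive" per-image encoding can mean in general terms. It is emphatically not JPEGmini's algorithm or quality metric (those are proprietary); it just binary-searches the JPEG quality setting until a crude PSNR threshold is met, using Pillow and NumPy and a made-up 42 dB target:

    # Hedged sketch of adaptive per-image encoding in general -- NOT
    # JPEGmini's algorithm or metric, just the basic idea: binary-search
    # the JPEG quality setting until a crude fidelity threshold is met.
    import io

    import numpy as np
    from PIL import Image


    def psnr(a, b):
        """Peak signal-to-noise ratio between two same-sized images, in dB."""
        x = np.asarray(a.convert("RGB"), dtype=np.float64)
        y = np.asarray(b.convert("RGB"), dtype=np.float64)
        mse = np.mean((x - y) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)


    def smallest_jpeg(original, min_psnr=42.0):
        """Return the smallest JPEG bytes that still meet the PSNR threshold."""
        lo, hi, best = 1, 95, None
        while lo <= hi:
            q = (lo + hi) // 2
            buf = io.BytesIO()
            original.convert("RGB").save(buf, format="JPEG", quality=q)
            candidate = Image.open(io.BytesIO(buf.getvalue()))
            if psnr(original, candidate) >= min_psnr:
                best = buf.getvalue()   # good enough -- try a lower quality
                hi = q - 1
            else:
                lo = q + 1              # too degraded -- raise the quality
        return best if best is not None else buf.getvalue()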


>JPEGmini adaptively encodes each JPEG file to the minimum file size possible without affecting its original quality

That is indeed what the FAQ says, but if that's the claim, the tool does not actually deliver on it, and the presentation is incredibly dishonest. The fact is that the JPEGmini versions of these images do lose noticeable quality, but Beamr hides this by presenting a demo where the images are shown at 25% scale.

Take the dog image, for example. Using the slider, you'd think that JPEGmini nailed it; no visible artifacts whatsoever. But let's look at a section of the image at 100% scale, and see if this tool is really that impressive: http://imgur.com/z12mHnd

...holy block artifacts, Batman!


I hope you're aware imgur compresses images that you upload to it regardless of whether they were already compressed.


Maybe they do that for lossy images, but I just compared what's on imgur to the original on my computer, and it is pixel-for-pixel identical. They did shave about 7KB off of it though, so maybe they push PNGs through pngout or similar.

I mean, as far as I know, there aren't even any practically useful algorithms out there for doing lossy compression on PNGs (although there should be).


Fireworks, pngquant, and png-nq have quantization algorithms that will dither a 32-bit PNG with alpha down to an 8-bit paletted PNG with alpha. The palette-selection algorithms the free tools use (I haven't used Fireworks) sometimes drop important colors that appear in only a small section of an image, resulting in, say, a blue power LED losing its blue color.
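For reference, the same kind of 32-bit RGBA to 8-bit palette quantization can be sketched with Pillow's fast-octree quantizer (my substitution; the tools named above are Fireworks, pngquant, and png-nq), assuming Pillow 9.1+ for the Quantize enum and a hypothetical screenshot.png:

    # Hedged sketch: 32-bit RGBA -> 8-bit paletted PNG via Pillow's
    # fast-octree quantizer (a stand-in for pngquant/png-nq above).
    from PIL import Image

    img = Image.open("screenshot.png").convert("RGBA")   # hypothetical file

    # 256-color palette with alpha; like the tools above, the palette picker
    # can drop colors confined to a small region (e.g. a blue power LED).
    paletted = img.quantize(colors=256, method=Image.Quantize.FASTOCTREE)
    paletted.save("screenshot-8bit.png", optimize=True)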


Yeah, technically that's lossy compression, but what I meant was lossy 32-bit PNG; that is, a preprocessing step before the prediction (filtering) step that makes the result more compressible by the final DEFLATE stage while having minimal impact on quality.


That sounds very interesting. I wonder what kinds of transforms would improve compressibility by DEFLATE. I know a bit about PNG's predictors, but not enough about DEFLATE to confidently guess. If you ever work on this, please let me know. I'd like to collaborate.
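One trivially simple transform of that kind (my own illustration, not something anyone in this thread has built) is to zero the low-order bits of each color channel before encoding: the PNG filter residuals then take on far fewer distinct values, so the final DEFLATE stage finds much more redundancy, at the cost of slight posterization.

    # Hedged sketch of a crude "lossy 32-bit PNG" preprocessing step:
    # quantize the low bits so filtered rows become more repetitive and
    # DEFLATE compresses better. An illustration only, not a real tool.
    import numpy as np
    from PIL import Image

    img = Image.open("photo.png").convert("RGBA")        # hypothetical file
    arr = np.asarray(img).copy()

    arr[..., :3] &= 0b11111000   # keep the top 5 bits of R, G, B; leave alpha

    Image.fromarray(arr, "RGBA").save("photo-lossy.png", optimize=True)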


Even if imgur had recompressed it with a lossy format (which it didn't), both halves of that single image would have been degraded equally, and you can still see a difference between the top and bottom halves.


>But there is no other technology that can do this automatically (and adaptively) for millions of photos.

Excluding, of course, the for loop or the while loop, particularly when used in a shell/Python/Perl/etc. script. MATLAB is also particularly well suited for this. Then there are the visual macro thingamajigs: iMacros for browser-based repetitive tasks, which could then be used with an online image editor. IrfanView, I believe, has batch image processing, as do many other popular photo/image editors. And last, but not least, ImageJ.

You may have added feedback to the loop where many would have had none, but feedback is not a new or novel concept. The only thing I can see that is possibly non-trivial is your method for assessing the quality of the output. Given the many high-quality image-processing libraries and well-documented techniques available, and the subjective nature of assessing "quality" with respect to how an image "looks", I doubt there is anything original there. You've enhanced the workflow for casual users, the uninformed, and those who prefer to spend their time on something else; that arguably has value. It seems a bit of a stretch to call it "technology" to this audience, but that is what the word means.
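To make the point concrete, the for-loop version of that workflow really is only a few lines of script; here is a sketch that reuses the hypothetical smallest_jpeg() search from the earlier example and assumes a made-up photos/ directory:

    # Hedged sketch: batch re-encoding with a simple quality feedback loop,
    # reusing the hypothetical smallest_jpeg() helper sketched earlier.
    from pathlib import Path

    from PIL import Image

    out_dir = Path("photos-mini")               # made-up output directory
    out_dir.mkdir(exist_ok=True)

    for path in Path("photos").glob("*.jpg"):   # made-up input directory
        with Image.open(path) as original:
            data = smallest_jpeg(original, min_psnr=42.0)
        (out_dir / path.name).write_bytes(data)
        print(f"{path.name}: {path.stat().st_size} -> {len(data)} bytes")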

IMO, you'd receive a "warmer" welcome from the more technically minded folks here if you'd dispense with the marketing hype (and definitely stop making impossible claims) and show some real evidence of just how much "better" your output is than some reasonable defaults, including cases where your system fails to meet your stated goals (even a random quality assessment will get it right sometimes). Nobody is ever going to believe that a system like the one you've described works for every case, every time (simply impossible). In other words, you aren't going to sell any ice to these Eskimos.

edit: accidentally posted comment before I was finished blathering.


I would appreciate an explanation of what you mean by "without affecting its original quality." We're talking about lossy compression, so whether that goal is achieved is purely subjective, isn't it?



