Google offers JPEG alternative for faster Web (cnet.com)
204 points by ukdm on Sept 30, 2010 | 110 comments



Wait, so Google took a bunch of JPGs, re-compressed them with a lossy format, and claims to have gotten them 40% smaller with no loss in quality. Either the lossiness in the format is entirely in aspects of the image that the JPG compression already removed and they're just better at compressing the residue (possible, but I am skeptical), or else the image is altered, but in a way that Google claims is subjectively a sidegrade in quality. I'm not putting much faith in their quality claims until I see side-by-side comparisons of a JPG and WebP compression of the same TIFF (or other uncompressed) image at the same compression ratio. A double-blind study is probably too much to ask, but it would be nice.


That's what the article says, but they're probably restating something and got it horribly wrong.

What probably happened was that they took images and compressed them to WebP and JPEG and compared the sizes.


No, I would believe Google recompressed a whole ton of JPEGs as a test. Not the most scientific test of a codec, but as a Web company they are interested in improving image serving more than image authoring.

As for the recompression, you can losslessly squeeze a JPEG by at least 15% (e.g. StuffIt came out with this a few years back, claiming higher numbers but mainly at low bitrates I think). In the lossy camp, H264's intra-frame encoding significantly outdoes both JPEG and JPEG2000.

WebP is probably similar to H264 intra. As for recompression vs. straight compression, this probably has little effect to the extent JPEG and WebP are "compatible", e.g. both being block-based transforms. It would be unfair, on the other hand, to run a very different codec after JPEG and compare it to WebP after JPEG, because the other codec might be working hard to encode the JPEG artifacts.


> It would be unfair, on the other hand, to run a very different codec after JPEG

Exactly. I found Google’s “study” pretty sketchy, given the lack of concrete detail about this. http://code.google.com/speed/webp/docs/c_study.html

Here’s some of what I wrote in an email to a friend:

Since they’re dealing with an arbitrary collection of already encoded images, there are likely e.g. artifacts along JPEG block boundaries that take up extra space in JPEG 2000. While they have a big sample size, they don’t compare the metric used (PSNR) with noticeable quality degradation.

There's a graph of the size distribution (which would be a lot more readable if they binned the sizes and showed a histogram instead of a sea of overlapping red plus signs), but the compression percentages aren't related to those sizes in any way: did big images compress better relative to JPG/JP2 than small ones? Looking at the size distribution, a large percentage of these images are absolutely tiny, the kind of image that, as far as I know, JPEG 2000 was never intended for. For very tiny images the overhead of the storage container ends up dominating the file size. I don't know the relative overhead of JPG/JP2/etc. images, but it would be good to include that in any discussion.

It seems to me like the WebP images have their color profiles stripped. Is that an inherent part of WebP? If so, I hope Google doesn’t encourage people dealing with photographs to adopt it in large numbers. Browsers are just finally getting to the point where proper color management of images is expected; no need to regress there.


WebP is lossy. Google provides a gallery of images here: http://code.google.com/speed/webp/gallery.html

Basically, they are using predictive coding to achieve good lossy compression. However, I agree, I would also like to see double blind studies on the quality degradation.


It's great that they have the sample gallery, but without a lossless source (i.e. a control) to compare against, the gallery they show counts for nothing. I grabbed the source for it; I'll see if I can do a jpg/webp/png side-by-side with some analytical data as well.


Update: Here's some very basic file comparisons. Special thanks to Erik Anderson of http://divineerror.deviantart.com for the lossless images.

In the folder are the original image, the compressed version of the image in both jpg and webp, and an enhanced difference map between the compressed version and the lossless image.

http://jjcm.org:8081/webp

Basic analysis shows that right now webp has better preservation of luminance, but at the expense of hue/color. I'll have a blog post up in a bit with a myriad of file tests, difference maps, percentage differences, and hue offsets.
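
If anyone wants to reproduce this kind of check, the difference maps and mean deltas are simple to compute; here's a rough sketch of the idea (assuming Pillow and NumPy; the filenames are placeholders, not the exact script I ran):

    # Sketch only: assumes Pillow + NumPy; "original.png" / "compressed.jpg"
    # are placeholder filenames for a lossless source and a decoded candidate.
    import numpy as np
    from PIL import Image

    orig = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.int16)
    comp = np.asarray(Image.open("compressed.jpg").convert("RGB"), dtype=np.int16)

    diff = np.abs(orig - comp)                # per-pixel, per-channel error
    mean_rgb_delta = diff.mean(axis=(0, 1))   # mean delta for R, G, B

    # Rec. 601 luma weights, to separate luminance error from color error
    weights = np.array([0.299, 0.587, 0.114])
    mean_luma_delta = np.abs(orig @ weights - comp @ weights).mean()

    # "Enhanced" difference map: scale errors up so small deviations are visible
    enhanced = np.clip(diff * 8, 0, 255).astype(np.uint8)
    Image.fromarray(enhanced).save("diffmap.png")

    print("mean RGB delta:", mean_rgb_delta, "mean luma delta:", mean_luma_delta)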


Awesome set of pictures. I can live with the squares of WebP better than the hugely-visible gradients of JPEG, methinks. Subjectively, I'd say those images show it to be definitely superior, by a pretty large margin.

Very impressive. I can easily live with the 2x decode / 8x encode with those results.

edit: though, if there's no alpha capabilities, count me out. Yes, that's my deciding factor.


"We plan to add support for a transparency layer, also known as alpha channel in a future update."

http://blog.chromium.org/2010/09/webp-new-image-format-for-w...


Don't jump on it yet; alpha capabilities aren't the only thing it's missing. After some analysis, it doesn't appear to support color profiles - greyscale being the biggest factor here. About to upload a sample black-and-white image, and you'll see the issues there.


I was wondering if profiles might have been the reason for some of the larger hue/sat differences (over the whole image).

Will gladly keep looking, it's interesting either way :) Thanks!


Update 2: Here's the blog post along with mean delta values for both RGB and Luminance from the source image: http://news.ycombinator.com/item?id=1746621


You shouldn't get any kind of percentage differences in the color channels at any size above 8x8 blocks in the image. If the entire image is offset, that most likely indicates a bug in your conversion process.


They are claiming better compression at constant PSNR: http://code.google.com/speed/webp/docs/c_study.html


But they do not provide SSIM comparisons, they compare PSNR.

Here is "how to cheat with codec comparisons", http://x264dev.multimedia.cx/?p=472 which exaplains (partially) why PSNR isn't that great when judging image quality.

SSIM: http://en.wikipedia.org/wiki/Structural_similarity
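
If you want to compare the two metrics on your own image pairs, something like this works (a sketch assuming NumPy and scikit-image; filenames are placeholders):

    # Sketch: PSNR computed by hand, SSIM via scikit-image.
    import numpy as np
    from skimage import io
    from skimage.metrics import structural_similarity

    ref = io.imread("lossless_original.png").astype(np.float64)
    cand = io.imread("decoded_candidate.png").astype(np.float64)

    mse = np.mean((ref - cand) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)     # peak signal-to-noise ratio, in dB

    # SSIM models local structure and contrast, which tracks perception better
    ssim = structural_similarity(ref, cand, channel_axis=-1, data_range=255)

    print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")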


I agree that we should expect side-by-side examples to support the claims, but my first thought is that they're analyzing the JPEG compression and using improvements in computing power to encode the information more efficiently. Consider that JPEG uses one (or more) Huffman tables and one (or more) quantization tables. (I'm getting this from Wikipedia; I am not a compression expert.)

What if you analyzed all those images and came up with a composite Huffman compression that was more efficient than the best guess in the '70s? Then you did some magic on the quantization table to make the most common values correspond to the lowest numbers, relying on processing power to decode the compressed quantization table before you started?


Why bother wasting time analyzing Huffman tables when you own a video codec? JPEG is mostly an MPEG-1 keyframe. WebP is exactly a VP8 keyframe. VP8 is better than MPEG1, so there's no need to change anything when you can just use that decoder.

There are many inefficiencies left in VP8, though, and to a lesser extent in H.264, when dealing with very large images. One is that the same texture can be repeated in different areas of the same image, but prediction only happens from neighboring pixels, so the codec can't reuse the same texture elsewhere. Some solutions are in the JVTVC/H.265 proposals and are usually called "extended texture prediction".


I note that WinZip 14 does decode JPGs and reencode them smaller, but without further loss of quality:

"The main trick those three programs use is (partially) decode the image back to the DCT coefficients and recompress them with a much better algorithm then default Huffman coding." - http://www.maximumcompression.com/data/jpg.php


Most lossy image compression systems use a transform that does not reduce the data size but produces transformed data that is more compressible. This means the final layer of compression is a traditional compressor. These, like the transforms themselves, get improved upon over time. JPEG is old.

When I did some experiments with various compression techniques, I found that DCT with an LZMA back end compared quite well to newer compression systems.
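
A toy version of that kind of experiment might look roughly like this (a sketch only, assuming SciPy, NumPy and Pillow; the quantization step and filename are made up, and this is nowhere near a real codec):

    # Toy sketch: block DCT + coarse quantization, then a general-purpose
    # compressor (LZMA) as the final entropy-coding stage.
    import lzma
    import numpy as np
    from scipy.fftpack import dct
    from PIL import Image

    img = np.asarray(Image.open("test.png").convert("L"), dtype=np.float64)
    h, w = img.shape
    h, w = h - h % 8, w - w % 8            # crop to a multiple of the block size
    img = img[:h, :w]

    Q = 16                                  # crude uniform quantization step (made up)
    coeffs = np.empty_like(img, dtype=np.int16)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y+8, x:x+8]
            d = dct(dct(block.T, norm="ortho").T, norm="ortho")   # 2-D DCT-II
            coeffs[y:y+8, x:x+8] = np.round(d / Q)

    compressed = lzma.compress(coeffs.tobytes())
    print("pixels:", img.size, " compressed bytes:", len(compressed))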


If you take jpegs from old cameras and recompress them into jpeg, you can get a remarkable reduction in storage. ... But I can't tell whether they even used such a "recompression to jpeg" control case in coming up with the 39% figure.


Have they explained their reasons for not backing JPEG-XR instead? This seems to be a step back from JPEG-XR:

- No alpha support

- No lossless support (useful for the alpha channel, and it could encourage more cameras to support lossless images)

- No HDR support

JPEG-XR also allows different regions of the image to be encoded independently. See wikipedia for more features: http://en.wikipedia.org/wiki/JPEG_XR

I have no idea what the patent landscape for JPEG-XR is, but I'd be disappointed if we replaced JPEG and didn't get some of these features.

The lack of alpha support in JPEG is especially a pain for web developers. PNG does not do photos well.


Gah! No alpha? Count me out. The web NEEDS alpha channels. I can't count the number of hacks I've seen to get around the lack of an alpha channel in preferred-format-X.


I've seen the 'no alpha' repeated a few times, but can you give your use cases for alpha in jpeg? I understand the need for alpha channels in gif/png, and have fought the lack of alpha support for years, but for jpg's I don't remember ever needing it, nor can I easily imagine a situation where it would be needed. So I'm interested to learn about where you'd use transparency in jpgs.


Why not? I mean, why should I be forced to use a lossless image format just because I want it to blend with the background (if the image compresses better with jpeg and the image quality is fine)?

Are there technical reasons to not implement an alpha in jpeg-like compressions?


Features shouldn't be there "just because it's possible", at least not if they may impact other features(1). If greater compression can be reached by leaving out the alpha, and if there's no compelling reason to put it in, it should be left out. "Why not" is not a reason; there needs to be a business case for each feature, in everything.

In my experience, and this seems to be a widely held position, the main use case for jpgs is in pictures, as in photographs. The main use case for png (gif) is for graphical elements: borders, menus, etc. Those last ones you want to compress with a lossless format anyway - you need to be sure that a flat menu background isn't dithered and doesn't have other artifacts. I understand the question mostly as "do you need transparency in photographs" and "do you need non-rectangular photographs where the non-rectangular nature is encoded in the photograph itself, and not part of another rendering stage in the presentation layer".

Thinking about it more, maybe things like drop shadows or other fancy borders could be a case where you need transparency in photos. Otherwise you have to work around it by having the picture as a jpg and the border as a separate (or several separate) png's. More requests, trickier layout, etc. I'm not convinced yet that this use case alone is a compelling argument.

As for technical reasons to not implement it, I don't know - I'm assuming there are, because I'm quite sure that someone at Google must've thought about it and decided against it; they must have had their reasons.

(1) I'm reasoning from the assumption that including transparency has adverse effects on file size and/or decompression complexity. Maybe it doesn't, in which case balancing features becomes a different matter and most of my argument is moot.


Lossless never took off in any variation of JPEG (it has been there since the original ITU spec). People don't need it.


Compressed raw images would be nice; raw files are huge.


I was hoping Jpeg2000 would take over because it is so flexible. It encodes really well at the low end and really well at the high end. You can target a specific file size and have it produce it. Instead of the blocking artifacts caused by the 8x8 DCT grid, you get smooth blurring.


http://upload.wikimedia.org/wikipedia/commons/5/51/JPEG_JFIF...

JPEG looks better to me. It's only a data point of 1, but it's the most important data point to me. ;)


I've seen that picture before, and it's pretty indicative of its lower settings, yes. And I agree - the sharper & more accurate edges of 2000 make the less detailed / smoother textured interiors look disproportionately worse.


Both of those compressed versions have (different kinds of) unacceptable artifacts, IMO. It'd be interesting to see comparisons at a few other levels of compression.


Sadly, jpeg2000 may never catch on until the patents encumbering it expire.


Given that the JPEG 2000 patents are licensed royalty-free, I think this says something about open source patent nitpicking.

"All patents that WG1 believes to be essential to practice JPEG 2000 Part 1 are available on ITU-T’s patent policy 2.1, which is fee-free (see the ITU-T web site for their interpretation of this)." http://www.jpeg.org/faq.phtml?action=show_answer&questio...


Two things after a quick scan of the page:

1) "Again, at every JPEG meeting the following resolution is also passed unanimously" appears to be saying each year they vote whether to continue or wether to reel in the bait. Tasty free morsels given by fisherman often turn out to have hooks attached.

2) It appears that one has to give up any patents that read on a baseline spec for JPEG. This sounds like it could be bad for large multimedia companies who want to retain rights to an image format - what's the baseline? If the baseline is something like "uses Huffman codes to compress image data", then this is potentially a huge cost in IPR given up.

These are possibly unfounded, just first thoughts at 2am.


This hasn't stopped nearly every codec popular today.


It has stopped a number of codecs that you most likely never heard about, since they, well, did not become popular.

Take for instance the arithmetic coding option in the original JPEG, the only patent-encumbered part of it. Just no one ever uses it.


The smooth blurring is because wavelet compression (which JPEG2K uses) naturally resizes the image as it gets smaller. Resize your JPEGs smaller, compress them, then upscale them again, and they might look better than JPEG2K does.
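
That's easy to try at home; a rough sketch of the downscale/compress/upscale trick (assuming Pillow; filenames and the quality setting are placeholders):

    # Sketch: shrink, JPEG-encode, then upscale back, to mimic the "natural
    # resizing" behaviour attributed to wavelet codecs.
    from PIL import Image

    src = Image.open("original.png").convert("RGB")
    w, h = src.size

    small = src.resize((w // 2, h // 2), Image.LANCZOS)
    small.save("small.jpg", quality=60)        # pick quality to hit a target size

    roundtrip = Image.open("small.jpg").resize((w, h), Image.LANCZOS)
    roundtrip.save("jpeg_down_up.png")         # compare against a JPEG 2000 encode
                                               # of the same source at the same size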


This has other implications.

* A saving in bandwidth is huge for mobile speed; Google believes that speed affects web use, and web use affects revenue.

* A saving in bandwidth is cheaper for Google.

* AT&T, Verizon, Sprint, and T-Mobile are limiting mobile data plans; smaller images mean more web page loads.

* Net neutrality might fail, and you might have to pay for data.

* Google runs a lot of content via App Engine, Gmail, and Chrome; Google should be able to make the switch for the stacks it owns to develop an advantage.

* Others, like Facebook, will follow in adoption if it saves them on one of their largest costs: CDNs.

* Openness: an open format can go on more devices.

* Open devices might appear faster on the web.


JPEG (like many image/video coding algorithms) is really made up of a few pieces -- transform, modeling, and entropy coding. In the case of JPEG, the transform is handled by breaking the image up into eight-by-eight blocks that are then run through the DCT and quantized. This is where the loss comes from.

Modeling and entropy coding are handled on the coefficients generated above. However, this is done on each 8x8 block (note: I am making a slight simplification by ignoring the differential compression of the DC coefficients between blocks). Since the algorithm is restricted to encoding at most 64 coefficients at a time, there isn't much "modeling" that can be done.

If one reorders the coefficients of the 8x8 blocks to resemble a non-block-based transform, you can perform better modeling and get much better compression with exactly the same image quality as the original JPEG image. However, in this case you lose JPEG compatibility, since the format of the coefficients is no longer JPEG.
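
As a crude illustration of the reordering idea, regrouping already-quantized coefficients by frequency position before handing them to a general-purpose compressor usually helps (a sketch assuming NumPy; the input file of quantized block-DCT coefficients is hypothetical):

    # Sketch: compare compressing quantized DCT coefficients in block order vs.
    # regrouped into 64 per-frequency planes. 'coeffs' is an (H, W) int16 array
    # of already-quantized coefficients, H and W multiples of 8.
    import zlib
    import numpy as np

    def block_order(coeffs):
        return coeffs.tobytes()

    def plane_order(coeffs):
        h, w = coeffs.shape
        blocks = coeffs.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
        # planes[i, j] collects the (i, j) frequency coefficient from every
        # block, so coefficients with similar statistics end up adjacent
        planes = blocks.transpose(2, 3, 0, 1)
        return planes.copy().tobytes()

    coeffs = np.load("quantized_coeffs.npy")   # hypothetical input
    print("block order:", len(zlib.compress(block_order(coeffs), 9)))
    print("plane order:", len(zlib.compress(plane_order(coeffs), 9)))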


One simple way to throw more CPU at image compression/decompression, and thus save space, would be to use blocks bigger than 8x8.


Won't the on-line Porn Industry have to adopt this for large scale adoption to take place?

I'd think that the size decrease alone would sell the Porn Industry.


Are galleries that important? I thought right now it's all about videos.


Soft / "art" sites focus much more on stills, and headline pixel counts, e.g. met-art etc. I would expect them to prefer large file sizes, to aggravate people who like to scrape content.


I would expect them to prefer smaller file sizes, especially in public preview galleries, to lower their bandwidth costs. Someone scraping high-quality images incurs a small one-time bandwidth cost, and they can re-encode the images at a lower quality if they want to save bandwidth.


As a past member of Met-Art and current member of Digital Desire, I would think that this has the potential to save lots of bandwidth for those types of services. And even the services that do specialize in videos usually offer photos sets of shoots, so it would still benefit them as well.

The trick will be to build enough mass to make it common enough for all the browsers to support WebP. I can see Firefox and WebKit adding it fairly quickly, so the real question is if and when Microsoft will support it.


I would expect the large file sizes to hurt the host at least as much as the scrapers.

Better to insert random delays in responses to HTTP requests from suspected scrapers (or outright reject them).


What about all the "video preview" images? Surely porn sites would benefit from serving fewer bytes there.


I don't think those are important enough to risk browser compatibility issues, and I don't see a reason why the porn industry would throw its weight into the ring over this.

And that's if this new format actually has advantages at these sizes. The compression factor doesn't need to remain constant across different resolutions. (And even if it did, we're talking about small pictures anyway, so if you save 1 kB per preview pic, would that matter compared to the video stream bandwidth?)


There are no browser compatibility issues if you use conservative HTTP content negotiation. Browsers which support WebP can say so in their HTTP Accept headers, and the server can serve WebP to those browsers, falling back on ordinary JPEG for older browsers. This was also used back when PNG support was spotty, by sites which wanted to combine the goodness of PNG with the ubiquity of GIF. It's actually pretty simple:

http://en.wikipedia.org/wiki/Content_negotiation
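
For illustration, the negotiation can be as small as this (a WSGI-style sketch with hypothetical file names; in practice you'd more likely do it in the web server or CDN config):

    # Sketch: serve WebP only to clients that advertise it, fall back to JPEG.
    def pick_image(environ):
        accept = environ.get("HTTP_ACCEPT", "")
        if "image/webp" in accept:
            return "photo.webp", "image/webp"    # hypothetical file names
        return "photo.jpg", "image/jpeg"

    def app(environ, start_response):
        path, content_type = pick_image(environ)
        body = open(path, "rb").read()
        # Vary tells caches that the response depends on the Accept header
        start_response("200 OK", [("Content-Type", content_type),
                                  ("Vary", "Accept")])
        return [body]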

Now that I think about it, there's probably a business opportunity for someone who handles such hosting issues for image-heavy web sites. The porn industry may seem sketchy, but they have a lot of money and a pressing need to keep overhead down.


Yes, but it has to have some significant effect. For that to happen, the gain from the format conversion has to matter for the porn sites' use case, image bandwidth has to be a meaningful percentage of total bandwidth (I wonder how that looks for video sites), and the share of browsers that support it (not just advertise it, cf. IE and its PNG "support") has to be high enough.

Anybody got some statistics on browser distribution on a major *tube site? I guess we'll see a lot of Internet Explorer there, although Chrome's "incognito" mode may have found some fans.


The cost of format conversion can be made very low by having it done automatically by the web server. It'll just take some time for the software to do that.


Just a note to anyone using Adobe software to produce their web PNGs: Make sure to run PNGCrush to remove all extraneous information in them! http://pmt.sourceforge.net/pngcrush/index.html


Just be sure to test your crushed pngs under all the situations in which they'll be used. I've had a few situations where PNGCrushed files couldn't be opened by certain programs, but the uncrushed files could. In particular, the Python Imaging Library tends to choke on files that have been run through PNGCrush.


Does this apply when using the Export options in Fireworks, or is it just for saving as a Fireworks PNG?


I'd be very curious in seeing a comparison with JPEG-XR, which has a number of nice advantages for photography.


I've been googling for examples of that format to no avail. I just wanted to see if Google's browser supports it.


Not sure about this one, Google. Firstly: Worst. Name. Ever! Secondly, what kind of browser support do they think they'll get? I know IE is covered via Google Chrome Frame, but will Apple and Mozilla jump on this? Both issues have soured me on this one.


Chrome is a given, and Firefox is open source (as is WebKit). Google has the devs to write the needed code/extension and give it to those projects.


Okay, I'll buy that Chrome and IE are a shoo-in, and that FF is probably easy. So that leaves Apple and Opera. Not bad, not bad...


Chrome and Safari both use WebKit. If Google releases an implementation, it shouldn't be hard for Apple to adopt it. That just leaves Opera.


AFAIK Opera supports WebM, so I assume they wouldn't have a problem doing the same with WebP. IE and Safari might be a different story, though; they wouldn't be so keen to support a format from their big competitor. If I were in MS's shoes, I would use this as an occasion to put support for JPEG XR into Chrome/WebKit.


http://code.google.com/speed/webp/

> The WebP team is developing a patch to WebKit to provide native support for WebP in an upcoming release of Google Chrome.

So it will be available to all webkit browsers willing to accept the patch.


Easy if Apple wants to.


Well, the name hints at a close relationship to WebM so I guess they target the same progressive browsers.


My boss walked past and I called him over to tell him about WebP, and he literally laughed out loud at the name. I said it without a hint of sarcasm, and it was the first thing he noticed. Anecdotal, I know; but still...


This reminds me of the internet's reaction to the name "iPad" - although I haven't heard anything along those lines since the first week it was announced.


As a regular person, I really can't see a 40% decrease in size (of which I'm skeptical) for just jpeg images (not nearly the full "65% of the bytes on the web") being worth the huge switch-over costs. The ubiquity of jpeg is just too valuable.


If you understand how to use content negotiation (which basically no one outside Google does) then there's virtually no switching cost.


I mean cost beyond just web development. As mentioned in the article, there is a massive range of consumer and non-consumer products which have adopted jpeg as a universal format.


So those devices just keep on using JPEG and thus bear no additional cost. Even if WebP is only used between Google and Chrome, it will be worth it for Google.


Given that most JPEG images are generated by digital cameras, I don't think WebP will get any traction until Canon, Nikon et al support WebP natively.

And hopefully attaching metadata to WebP images will be saner than it is for JPEGs.


It shouldn't be too long before Android phones start supporting WebP natively, and image-hosting sites like Picasa and Flickr already do re-encoding to lower the image size; adding WebP to that shouldn't be a serious problem. I can see this getting traction even without support from most digital cameras.

As for metadata, the container format is based on RIFF, which consists of tagged binary chunks. Metadata chunks follow the image data, and consist of null-terminated strings. No word yet on whether or not you can use Unicode.

http://code.google.com/speed/webp/docs/riff_container.html
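
The container is simple enough that a chunk walker fits in a few lines; a sketch (assuming a well-formed file, and treating the chunk layout generically rather than following any WebP-specific rules):

    # Sketch: walk the top-level chunks of a RIFF container (as WebP uses).
    import struct

    def riff_chunks(path):
        with open(path, "rb") as f:
            magic, size, form = struct.unpack("<4sI4s", f.read(12))
            assert magic == b"RIFF"            # e.g. form == b"WEBP"
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                fourcc, length = struct.unpack("<4sI", header)
                yield fourcc, f.read(length)
                if length % 2:                 # chunks are padded to even sizes
                    f.read(1)

    for fourcc, payload in riff_chunks("image.webp"):   # hypothetical file
        print(fourcc, len(payload))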


> No word yet on whether or not you can use Unicode.

You should be able to use UTF-8 since it doesn't include embedded nulls.
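
A quick way to convince yourself (UTF-8 encodes every non-NUL code point using bytes in 0x01-0x7F or 0x80-0xFF, so 0x00 can only come from U+0000):

    # Sketch: UTF-8 only produces a 0x00 byte for the NUL code point itself.
    sample = "Caf\u00e9 \u2013 \u5199\u771f"
    encoded = sample.encode("utf-8")
    assert b"\x00" not in encoded
    print(encoded)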


Sure, but I'm worried about the guy who decides that his decoder will use ISO-8859-1 by default, or UTF-16 if that's specified in some meta-metadata chunk somewhere. If this were written down in a spec, we wouldn't have to fret about it.


Looks like they didn't bother to seize the opportunity to add an alpha channel. Being stuck with PNG for transparent/translucent images sucks.


Lossy compression for the alpha channel isn't that good an idea, I think. Brightness and color compression artefacts will be multiplied by alpha compression artefacts.


Then provide baseline support for an uncompressed alpha channel. Not being able to store one at all in an image limits the usefulness of the format.

Artifacts for lossy alpha compression are already, to some extent, well understood and dealt with, since compressing video game textures that contain an alpha channel is already done lossily using the DXTC compression formats.


You could choose alpha compression quality separately from luma/chroma. Alternatively, either compress the premultiplied colour values directly, or compress raw colour but optimise the premultiplied error.
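
For anyone unfamiliar with the premultiplied-alpha idea, a tiny sketch (NumPy, with a made-up image; the lossy stage itself is elided):

    # Sketch: premultiply colour by alpha before lossy coding, so errors in
    # fully transparent regions cost nothing and edge fringing is reduced.
    import numpy as np

    rgba = np.random.randint(0, 256, (64, 64, 4)).astype(np.float64)  # made-up image
    rgb, alpha = rgba[..., :3], rgba[..., 3:4] / 255.0

    premultiplied = rgb * alpha          # what the lossy stage would see
    # ... lossy-code 'premultiplied' and 'alpha' here ...

    # On decode, un-premultiply where alpha > 0 to recover straight colour
    recovered = np.where(alpha > 0, premultiplied / np.maximum(alpha, 1e-6), 0)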


There's no way this will catch on; the slight image quality per bit improvement is not nearly enough to counteract the huge momentum of existing JPEG use. Certainly JPEG isn't the best possible image codec or even all that good, but it's good enough, and it works everywhere.


But there is a loss in quality! The dithering is noticeably worse. Notice the light outline on that red-ish thing in the top middle of the image. I bet I could've gotten the 10kb decrease by just lowering the JPG quality.


Umm...how can you claim a loss in quality without looking at the original? The linked page shows two lossily compressed versions of an unknown original image. You're commenting on which image subjectively LOOKS better to you, not on compression quality.


Take a look at those red and orange "things" in the center of the image in the PNG/WebP version. My bet is on a loss in quality.


My point was just that they're showing a lower-quality image that is lower in size. Great, ice is cold.


Our CPUs are so much faster than even a few years ago, but bandwidth hasn't increased that much (at least not here in Canada). I'd gladly trade some CPU cycles for bandwidth (or same bandwidth, but better quality).


The article is quite light so far. I am sure it brings other improvements beyond just a reduced file size. I would hope that at least some features of JPEG2000 would make it into this format. Maybe also a convenient way to pack several images into a single file without resorting to CSS clipping tricks.


At 8 times longer to compress, this reminds me of Iterated Systems' FIF, but the article's claim that it's based on WebM suggests it's still DCT compression.

Adoption by just Flickr and Facebook could push a new image format fast.

Google has much to gain since they archive a copy of indexed images. Hence their interest in "recompression".


With OLED/IPS etc. taking over in the next few years, will this really support >8bpp or HDR?

Interesting move, Google


WebP uses the same color model as WebM, which "works exclusively with an 8-bit YUV 4:2:0 image format", so it seems WebP will not be HDR-capable, which is a pity.


Considering it basically is one frame of WebM, that isn't exactly a surprise either.


I love this site! Reading the comments here has taught me more about image compression in 45 minutes than I've learned from reading random articles on the web for the last few years...


There's a conversion tool available for WebP now: http://code.google.com/speed/webp/download.html


They only have a Linux binary. I've compiled one for Mac OS X: http://pornel.net/webp


Why is the performance of PNG so disappointing? The WebP sample image from the article (top image) is shown here as a PNG of 234kB...


Partly because PNG is lossless, but also largely because that's not the kind of image that PNG compresses well.

JPEG is a better format (than PNG) to use for photos and similar types of images, because it does an okay job of compressing them and because the compression artifacts are a lot less noticeable. However, JPEG is a really bad format for things like graphics with lots of flat colors or text, because there the compression artifacts are very easy to see. This image from Wikipedia comparing the two shows the difference: http://en.wikipedia.org/wiki/File:Comparison_of_JPEG_and_PNG...

PNG has the advantage of being lossless, so it performs very well in some situations but very poorly in others. Photos and images like them are an example of where PNG compresses poorly, because that kind of continuous-tone, noisy data doesn't give lossless filtering much to work with. PNG actually does outperform JPEG (compression-wise) on some kinds of images, though: those with lots of flat colors/text/gradients/etc., and those conveniently end up being the kinds of images where compression artifacts are really visible.

tl;dr: Use the right format for the job. PNG is not good at compressing photo-like images, but does really well at things like diagrams and vector graphics. JPEG is best-used for things like photos where compression artifacts aren't a huge deal.

If you're looking to learn more about how JPEG and PNG compress images, I recommend checking out Smashing Magazine's guides to JPEG and PNG optimization techniques. (I don't usually recommend Smashing Mag articles, but these are both good.) Both will give you some insight into how these things work.

http://www.smashingmagazine.com/2009/07/01/clever-jpeg-optim... http://www.smashingmagazine.com/2009/07/15/clever-png-optimi...


JPEGs do bad things to your color space, though. I certainly welcome a replacement.

For example, see: http://www.hackerfactor.com/blog/index.php?/archives/250-Sho...

and

http://www.hackerfactor.com/blog/index.php?/archives/355-How...

It would be nice to see a more modern replacement, assuming the technology is better. That said, I haven't read enough about WebP yet to know if it actually fixes any of those problems with the JPEG format.


That's a rounding loss, not a change to "your color space". In any case, JPEG supports storing RGB instead of YUV 4:2:0 (or even YUV 4:4:4).

WebP _doesn't_ support this, but it really doesn't matter. The point of image files is for you to look at the image, not at the pixel values.


From the article I cited:

"In fact, of the 16 million colors in the 24-bit true-color pallet, JPEG can only store about 2.3 million colors. That's about 14% of the available color space."

and

"If JPEG stored images using RGB, then the Q tables would cause colors to diverge. For example, blue would have the most loss due to compression so images would appear more reddish and greenish. Instead, images are converted to a different color representation: luminance, chrominance red, and chrominance blue (YCrCb). Changing any one of these values alters the red, green, and blue components concurrently and prevents color divergence."


The article you cited is nonsense. Quantization which causes one of the color channels to get darker is called a "DC shift" and encoders try very hard not to introduce it.


There are limits to non-lossy compression…


Also, PNG isn't very good for photos and other "natural" images; JPEG-LS (which nothing supports) is much better.


I'm surprised that in 2010 people still don't understand the difference between PNG and JPG and that each is better for different kinds of images.


There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.

It is always best to think of reality as perfectly normal. Since the beginning, not one unusual thing has ever happened.

The goal is to become completely at home with [a world where people don't understand the difference between PNG and JPG in 2010]. Like a native. Because, in fact, that is where you live. - (paraphrased) http://lesswrong.com/lw/pc/quantum_explanations/

--

Calling reality "weird" keeps you inside a viewpoint already proven erroneous. Probability theory tells us that surprise is the measure of a poor hypothesis; if a model is consistently stupid - consistently hits on events the model assigns tiny probabilities - then it's time to discard that model. A good model makes reality look normal, not weird; a good model assigns high probability to that which is actually the case. Intuition is only a model by another name: poor intuitions are shocked by reality, good intuitions make reality feel natural. You want to reshape your intuitions so that the universe looks normal. You want to think like reality.

This end state cannot be forced. [..] But it will also hinder you to keep thinking How bizarre! Spending emotional energy on incredulity wastes time you could be using to update. It repeatedly throws you back into the frame of the old, wrong viewpoint. It feeds your sense of righteous indignation at reality daring to contradict you. - http://lesswrong.com/lw/hs/think_like_reality/


Whenever I hear someone describe quantum physics as "weird" - whenever I hear someone bewailing the mysterious effects of observation on the observed, or the bizarre existence of nonlocal correlations, or the incredible impossibility of knowing position and momentum at the same time - then I think to myself: This person will never understand physics no matter how many books they read.

Well, that rules out Einstein.


No it doesn't, Einstein died years before that author was even born. He will never hear Einstein bewailing anything.


Point being, Einstein spent a large part of his career completely unable to deal with the sheer weirdness of quantum physics. ("God does not play dice," "spooky action at a distance," and so forth.)

Ultimately he was able to adapt his worldview to include the implications of quantum theory, but until then he was most certainly not in a state where he would "never understand physics."

It was a great essay, actually, just a terrible lede, as EY himself acknowledged in the comments.


Really? I still know people that don't know how to delete files, because they're not in MS Word's File menu, after new, open, and save.


Please, if anyone can figure out a format so my mom doesn't attach 10 mb files that should be 750k, I'm all for it.


An actually helpful suggestion (I hope): help her change the settings on her camera. Explain that she'll be able to take more pictures without filling up the memory.


The Jpeg shown is of a higher fidelity than the WebP image; look around the top edge of the red quadrilateral. In the Jpeg image, the edge is sharp, in the WebP image it looks a bit like the coloured sprinkles you might put on ice-cream. It would be better to show two versions of an image, having the same file size, so that we can look for any difference in quality.

Read more: http://news.cnet.com/8618-30685_3-20018146.html?communityId=...


I suspect that the improvement is not good enough that this image format would be adopted, when jpeg is already established.



