
Which actually makes me more sympathetic to Chrome not (yet) adopting JPEG-XL.

Don't get me wrong, I think JPEG-XL is a great idea, but to everyone saying "how can supporting another image format possibly do any harm", this is the answer.




Why not implement all image codecs in a safer language instead?

That would seem to tackle the problem at its root rather than relying on an implementation's age as a proxy for safety, given that that clearly isn't a good measure.


There are efforts to do that, notably https://github.com/google/wuffs

RLBox is another interesting option that lets you sandbox C/C++ code.

I think the main reason is that security is one of those things that people don't care about until it is too late to change. They get to the point of having a fast PDF library in C++ that has all the features. Then they realise that they should have written it in a safer language but by that point it means a complete rewrite.

The same reason not enough people use Bazel. By the time most people realise they need it, you've already implemented a huge build system using Make or whatever.


Mozilla, Firefox's maker, lent a hand in creating Rust, so I imagine that if any browser can make for a more secure browsing experience by writing its media decoders in Rust, it would be Firefox.


They also have an interesting sandbox thing using WebAssembly: https://blog.mozilla.org/attack-and-defense/2021/12/06/webas...


I think they are working on that slowly. Lots of stuff is moving to Swift, including bits of iMessage.


Almost all people don't want to or aren't capable of implementing image codecs, the safer languages aren't fast enough to do it in, and the people who are capable of it don't want to learn them.


All good points, but hopefully Google would be able to find the resources to overcome these?


Google encourages all new native code in Android to be written in Rust. Rust-based codecs can certainly reach the speeds of C++, and Rust rules out most memory-safety bugs. https://security.googleblog.com/2022/12/memory-safe-language...

Of course as the blog post says, just because memory safety bugs are overcome doesn't mean vulnerabilities have stopped; people find other kinds of vulnerability now.
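
On the performance point, here's a rough sketch of the kind of hot loop where safe Rust doesn't have to pay per-pixel bounds checks (illustrative only, not from any real codec): iterating with chunks_exact lets the optimizer prove every access is in range.

    // Illustrative only: convert interleaved RGB bytes to 8-bit luma.
    // `chunks_exact(3)` yields slices of exactly three bytes, so the
    // optimizer can drop the per-pixel bounds checks inside the loop.
    fn rgb_to_luma(rgb: &[u8]) -> Vec<u8> {
        let mut luma = Vec::with_capacity(rgb.len() / 3);
        for px in rgb.chunks_exact(3) {
            // Integer BT.601-ish weights; the sum of 77 + 150 + 29 = 256,
            // so shifting right by 8 brings the result back to 0..=255.
            let y = (77 * px[0] as u32 + 150 * px[1] as u32 + 29 * px[2] as u32) >> 8;
            luma.push(y as u8);
        }
        luma
    }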


It can be overcome with time, and it is getting better; those are just the historical reasons it's not already better.

Google has already contributed lots of fuzzing time and security improvements to e.g. ffmpeg.


Definitely, but GP was specifically using this as an argument for Google not supporting a codec in Chrome. If anybody can spare the effort to do it safely, it’s them.


I call bullshit on this one.

I don't buy that being able to manually copy data into a memory buffer is critical for performance when implementing image codecs. Nor do I accept that, even if we do want to manually copy data into memory, a bounds check at runtime would degrade performance to a noticeable extent.
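
To put it concretely, here's what a safe copy into a frame buffer can look like in Rust (a hypothetical sketch, not taken from libwebp or any real decoder): the bounds check is one slice-range comparison per row, and the copy itself lowers to memcpy.

    // Illustrative only: write one decoded row into an output frame buffer.
    // The range check on `frame[start..start + row.len()]` is a single
    // comparison per row, not per byte; if the row doesn't fit, the program
    // panics instead of writing out of bounds. `copy_from_slice` is a memcpy.
    fn write_row(frame: &mut [u8], stride: usize, y: usize, row: &[u8]) {
        let start = y * stride;
        frame[start..start + row.len()].copy_from_slice(row);
    }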


"Manually copy data into a memory buffer" is pretty vague… try "writing a DSP function that does qpel motion compensation without having to calculate and bounds check each source memory access from the start of the image because you're on x86-32 and you only have like six GPRs".

Though that one's for video; images are simpler but you also have to deploy the code to a lot more platforms.
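
For a feel of what that looks like in safe code, here's a simplified Rust sketch, with a plain bilinear half-pel filter standing in for real qpel (actual H.264/VP9 filters are 6- to 8-tap and far hairier): an up-front assert pins down the slice lengths once, so the optimizer can usually drop the per-read checks inside the loop.

    // Illustrative only: horizontal half-pel interpolation of one row.
    // The assert fixes the relationship between the slice lengths, so the
    // bounds checks on `src[i]` / `src[i + 1]` can typically be elided.
    fn hpel_row(dst: &mut [u8], src: &[u8]) {
        assert_eq!(src.len(), dst.len() + 1);
        for (i, d) in dst.iter_mut().enumerate() {
            *d = ((src[i] as u16 + src[i + 1] as u16 + 1) / 2) as u8;
        }
    }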


Why would an iPhone be running x86-specific code?

I don't dispute that these optimizations may have been necessary on older hardware, but I think the current generation of Apple CPUs should have plenty of power to not need these micro optimizations (and the hardware video decoder would take care of this anyway).


> Why would an iPhone be running x86-specific code?

The same codebase has to support that (since there are Intel Macs and the Intel iOS Simulator), and in this case Apple didn't write the decoder (it's Google's libwebp). I was thinking of an example from ffmpeg in that case.

> and the hardware video decoder would take care of this anyway

…actually, considering that a hardware decoder has to do all the same memory accesses and is written in a combination of C and Verilog, I'm not at all sure it's more secure.


I'm sure that given a middling-to-Google (say, 30k) bounty it would be done. I'd give it a shot, anyways.


Give it a try, it's fun. But between writing Huffman decoders and IDCTs and reading the specs in a 1000-page Word document, it's a lot to learn.
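
The entropy-coding side, at least, is small enough to sketch. Here's a toy MSB-first bit reader of the sort a Huffman decoder is built on (illustrative only; real decoders buffer 32 or 64 bits at a time and use lookup tables rather than reading bit by bit):

    // Illustrative only: a toy MSB-first bit reader.
    struct BitReader<'a> {
        data: &'a [u8],
        pos: usize, // position in bits
    }

    impl<'a> BitReader<'a> {
        fn read_bit(&mut self) -> Option<u8> {
            let byte = *self.data.get(self.pos / 8)?; // None = ran out of input
            let bit = (byte >> (7 - (self.pos % 8))) & 1;
            self.pos += 1;
            Some(bit)
        }

        // Read `n` bits MSB-first; a canonical Huffman decoder would compare
        // the running code against per-length first-code tables as it goes.
        fn read_bits(&mut self, n: u32) -> Option<u32> {
            let mut v = 0u32;
            for _ in 0..n {
                v = (v << 1) | u32::from(self.read_bit()?);
            }
            Some(v)
        }
    }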


When it comes down to the metal, benchmarks become the important thing -- and you rarely stop until the innermost routines are in assembly.
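
In Rust terms the usual shape is safe code around an unsafe SIMD (or asm) core for just that innermost routine. A sketch, not from any real codec:

    // Illustrative only: average two rows of pixels, SSE2 for the bulk,
    // scalar code for the tail. SSE2 is baseline on x86_64.
    #[cfg(target_arch = "x86_64")]
    fn avg_rows(a: &[u8], b: &[u8], out: &mut [u8]) {
        use std::arch::x86_64::*;
        assert!(a.len() == b.len() && a.len() == out.len());
        let mut i = 0;
        unsafe {
            while i + 16 <= a.len() {
                let va = _mm_loadu_si128(a.as_ptr().add(i) as *const __m128i);
                let vb = _mm_loadu_si128(b.as_ptr().add(i) as *const __m128i);
                let avg = _mm_avg_epu8(va, vb); // rounded (a + b + 1) / 2 per byte
                _mm_storeu_si128(out.as_mut_ptr().add(i) as *mut __m128i, avg);
                i += 16;
            }
        }
        // Scalar tail (or the whole loop, for very short rows).
        for j in i..a.len() {
            out[j] = ((a[j] as u16 + b[j] as u16 + 1) / 2) as u8;
        }
    }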


I'd guess it's a combination of the labor required to rewrite them and the fact that you'd more or less have to use a safe systems language to avoid a performance regression.


That's a great question, and I'd love to know the answer.


Often it's infeasible to justify rewriting a lot of existing code, but my point is that these days this concern shouldn't really be an obstacle to integrating a new codec.


It should certainly lower the bar of adopting a new codec if the implementation is in a memory-safe language.

Even so, it is more code, and somewhat more risk. A lack of safety elsewhere might still turn otherwise-safe code into part of an exploit (by sending it something invalid that breaks an invariant, by building gadgets out of it, etc.).


Adding something written in Rust to a browser means you now need to bundle all of the crates it depends on, and that the browser now also needs rustc to build… at a minimum.

You also potentially need to audit all of those crates and keep them up to date, and so on… without crates you can't do much.


I can see that for components heavily interfacing with high surface area things like encryption, hardware interfacing etc., but why would that be true for a relatively “pure” computational problem like an image codec? Bytes in, bytes out.
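
The whole surface can be about as small as the following (hypothetical names, not any real crate's API): the decoder sees bytes and hands back owned pixels, with no access to the network, filesystem, or anything else in the browser.

    // Hypothetical interface, for illustration: bytes in, pixels out.
    pub struct DecodedImage {
        pub width: u32,
        pub height: u32,
        pub rgba: Vec<u8>, // width * height * 4 bytes
    }

    pub enum DecodeError {
        Truncated,
        Unsupported,
        Malformed(&'static str),
    }

    pub trait ImageDecoder {
        fn decode(&self, bytes: &[u8]) -> Result<DecodedImage, DecodeError>;
    }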



