Which actually makes me more sympathetic to Chrome not (yet) adopting JPEG-XL.
Don't get me wrong, I think JPEG-XL is a great idea, but to everyone saying "how can supporting another image format possibly do any harm", this is the answer.
Why not implement all image codecs in a safer language instead?
That would seem to tackle the problem at its root, rather than relying on an implementation's age as a proxy for safety, which clearly isn't a good measure.
RLBox is another interesting option that lets you sandbox C/C++ code.
I think the main reason is that security is one of those things that people don't care about until it is too late to change. They get to the point of having a fast PDF library in C++ that has all the features. Then they realise that they should have written it in a safer language but by that point it means a complete rewrite.
The same reason not enough people use Bazel. By the time most people realise they need it, you've already implemented a huge build system using Make or whatever.
Mozilla (Firefox's maker) had a hand in making Rust, so I imagine that if any browser can offer a more secure browsing experience by writing its media decoders in Rust, it would be Firefox.
Almost all people don't want to, or aren't able to, implement image codecs; the safer languages aren't fast enough to do it in; and the people who are capable of it don't want to learn them.
Of course as the blog post says, just because memory safety bugs are overcome doesn't mean vulnerabilities have stopped; people find other kinds of vulnerability now.
Definitely, but GP was specifically using this as an argument for Google not supporting a codec in Chrome. If anybody can spare the effort to do it safely, it’s them.
I don't buy that being able to manually copy data into a memory buffer is critical for performance when implementing image codecs. Nor do I accept that, even if we do want to manually copy data into memory, a bounds check at runtime would degrade performance to a noticeable extent.
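For what it's worth, a minimal Rust sketch (the function and names here are invented, not from any real codec) of why that intuition holds: explicit slicing turns many per-pixel checks into a single per-row check, which the optimizer can then treat as a plain memcpy.

    // Copy one decoded row into the output frame buffer.
    // The explicit slice ranges mean one bounds check per row
    // (panicking on bad input) rather than one per pixel; the
    // compiler is then free to memcpy/vectorize the inner copy.
    fn copy_row(dst: &mut [u8], src: &[u8], row: usize, width: usize) {
        let start = row * width;
        dst[start..start + width].copy_from_slice(&src[..width]);
    }

    fn main() {
        let src = vec![0x7Fu8; 16];
        let mut frame = vec![0u8; 64]; // 4 rows of 16 "pixels"
        copy_row(&mut frame, &src, 2, 16);
        assert_eq!(frame[32], 0x7F);
    }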
"Manually copy data into a memory buffer" is pretty vague… try "writing a DSP function that does qpel motion compensation without having to calculate and bounds check each source memory access from the start of the image because you're on x86-32 and you only have like six GPRs".
Though that one's for video; images are simpler but you also have to deploy the code to a lot more platforms.
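To make the qpel point concrete, here's a hedged Rust sketch of that kind of inner loop (the 2-tap filter and all names are invented for illustration; real quarter-pel filters are wider and messier):

    // Each `src[...]` access implies an index computation plus a bounds
    // check against the buffer start/length. A C decoder would instead
    // walk raw pointers and keep the stride and filter taps pinned in
    // registers -- painful when x86-32 gives you only ~6 usable GPRs.
    fn hpel_h(dst: &mut [u8], src: &[u8], stride: usize, w: usize, h: usize) {
        for y in 0..h {
            for x in 0..w {
                let i = y * stride + x;
                // 2-tap average for brevity; real codecs use 6- or
                // 8-tap filters, multiplying the checked loads per pixel.
                let a = src[i] as u16;
                let b = src[i + 1] as u16; // relies on edge padding in src
                dst[i] = ((a + b + 1) >> 1) as u8;
            }
        }
    }

    fn main() {
        let (w, h, stride) = (4usize, 2usize, 8usize); // rows padded to stride
        let src = vec![100u8; stride * h];
        let mut dst = vec![0u8; stride * h];
        hpel_h(&mut dst, &src, stride, w, h);
        assert_eq!(dst[0], 100);
    }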
I don't dispute that these optimizations may have been necessary on older hardware, but I think the current generation of Apple CPUs has plenty of power to do without these micro-optimizations (and the hardware video decoder would take care of this anyway).
> Why would an iPhone be running x86-specific code?
The same codebase has to support that (there are Intel Macs and an Intel iOS Simulator), and in this case Apple didn't write the decoder (it's Google's libwebp). I was thinking of an example from ffmpeg there.
> and the hardware video decoder would take care of this anyway
…actually, considering that a hardware decoder has to do all the same memory accesses and is written in a combination of C and Verilog, I'm not at all sure it's more secure.
I'd guess it's a combination of the labor required to rewrite them and the fact that you'd more or less have to use a safe systems language to avoid a performance regression.
Often it's infeasible to justify rewriting a lot of existing code, but my point is that these days this concern shouldn't really be an obstacle to integrating a new codec.
It should certainly lower the bar of adopting a new codec if the implementation is in a memory-safe language.
Even so, it is more code, and somewhat more risk. A lack of safety elsewhere might let an attacker use code that is otherwise safe to build an exploit (by sending it something invalid that breaks an invariant, by building gadgets out of it, etc.).
Adding something written in Rust to a browser means you now need to bundle all of the needed crates, and your browser now also needs rustc to build… at a minimum.
You also potentially need to audit all of those crates and keep them up to date, and so on… and without crates you can't do much.
I can see that for components that interface with high-surface-area things like encryption or hardware, but why would that be true for a relatively “pure” computational problem like an image codec? Bytes in, bytes out.
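A hypothetical Rust interface (all names invented) illustrating how narrow that boundary could be: no I/O, no callbacks, no shared mutable state.

    // Placeholder sketch: untrusted bytes in, owned pixels or an
    // error out. The decoder touches nothing but `data` and its
    // own allocations.
    pub struct Image {
        pub width: u32,
        pub height: u32,
        pub rgba: Vec<u8>,
    }

    #[derive(Debug)]
    pub enum DecodeError {
        Truncated,
        Corrupt,
    }

    pub fn decode(data: &[u8]) -> Result<Image, DecodeError> {
        // A real decoder parses headers, entropy-decodes, and runs
        // inverse transforms here; this stub only length-checks.
        if data.len() < 8 {
            return Err(DecodeError::Truncated);
        }
        Err(DecodeError::Corrupt)
    }

    fn main() {
        assert!(matches!(decode(&[0u8; 4]), Err(DecodeError::Truncated)));
    }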