Almost nobody wants to implement image codecs or is capable of it; the safer languages aren't fast enough for the job; and the people who are capable of it don't want to learn those languages.


All good points, but hopefully Google would be able to find the resources to overcome these?


Google encourages all new native code in Android to be written in Rust. Rust-based codecs can certainly reach the speed of C++, and safe Rust does rule out memory safety bugs. https://security.googleblog.com/2022/12/memory-safe-language...
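
As a toy illustration of what that buys you (my own sketch, not from the linked post): an out-of-bounds write in safe Rust is a deterministic panic rather than silent memory corruption.

    // Toy example: an out-of-bounds write in safe Rust aborts with a panic
    // instead of scribbling over adjacent memory.
    fn write_pixel(row: &mut [u8], x: usize, value: u8) {
        row[x] = value; // bounds-checked: panics if x >= row.len()
    }

    fn main() {
        let mut row = vec![0u8; 16];
        write_pixel(&mut row, 3, 255);
        // write_pixel(&mut row, 99, 255); // would panic, not corrupt the heap
        println!("{:?}", &row[..4]);
    }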

Of course, as the blog post says, eliminating memory safety bugs doesn't mean vulnerabilities have stopped; people just find other kinds of vulnerabilities now.


It can be overcome with time, and it is getting better; those are just the historical reasons it's not already better.

Google has already contributed lots of fuzzing time and security improvements to e.g. ffmpeg.


Definitely, but GP was specifically using this as an argument for Google not supporting a codec in Chrome. If anybody can spare the effort to do it safely, it’s them.


I call bullshit on this one.

I don't buy that being able to manually copy data into a memory buffer is critical for performance when implementing image codecs. Nor do I accept that, even if we do want to manually copy data into memory, a bounds check at runtime would degrade performance to a noticeable extent.
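
To make that concrete, here's a rough sketch in safe Rust (a hypothetical helper, nothing to do with libwebp's internals): copying a decoded row into an output buffer costs one length check up front and then lowers to a plain memcpy, not a check per byte.

    // Sketch: copy one decoded row into the output frame. The only runtime
    // cost over raw C is the single length check; copy_from_slice compiles
    // down to memcpy.
    fn blit_row(dst: &mut [u8], src: &[u8]) {
        let n = src.len().min(dst.len());
        dst[..n].copy_from_slice(&src[..n]); // one check per row, not per byte
    }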


"Manually copy data into a memory buffer" is pretty vague… try "writing a DSP function that does qpel motion compensation without having to calculate and bounds check each source memory access from the start of the image because you're on x86-32 and you only have like six GPRs".

Though that one's for video; images are simpler but you also have to deploy the code to a lot more platforms.
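
Roughly the shape I mean, as a scalar sketch (half-pel bilinear rather than a real qpel filter, and a made-up function name): you validate the source block once per row, and the inner loop's indices are in range by construction, instead of recomputing and checking every access against the whole frame.

    // Scalar sketch of half-pel horizontal interpolation (bilinear, not a
    // real H.264/VP9 filter). The point is the shape: one up-front sub-slice
    // per row bounds the accesses, so the inner loop does no per-pixel
    // frame-offset arithmetic.
    fn hpel_h(dst: &mut [u8], dst_stride: usize,
              src: &[u8], src_stride: usize,
              width: usize, height: usize) {
        for y in 0..height {
            // One range check per row: we need width + 1 source pixels.
            let s = &src[y * src_stride..y * src_stride + width + 1];
            let d = &mut dst[y * dst_stride..y * dst_stride + width];
            for x in 0..width {
                // (a + b + 1) >> 1 rounding average; indices fit by construction.
                d[x] = ((s[x] as u16 + s[x + 1] as u16 + 1) >> 1) as u8;
            }
        }
    }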


Why would an iPhone be running x86-specific code?

I don't dispute that these optimizations may have been necessary on older hardware, but I think the current generation of Apple CPUs should have plenty of power to not need these micro optimizations (and the hardware video decoder would take care of this anyway).


> Why would an iPhone be running x86-specific code?

The same codebase has to support that (since there are Intel Macs and the Intel iOS Simulator), and in this case Apple didn't write the decoder (it's Google's libwebp). I was thinking of an example from ffmpeg there.

> and the hardware video decoder would take care of this anyway

…actually, considering that a hardware decoder has to do all the same memory accesses and is written in a combination of C and Verilog, I'm not at all sure it's more secure.


I'm sure that given a bounty that's middling by Google standards (say, $30k) it would get done. I'd give it a shot, anyway.


Give it a try; it's fun. But between writing Huffman decoders and IDCTs and reading the specs in a 1000-page Word document, it's a lot to learn.
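
For a taste of the IDCT side, here's the naive textbook 8-point row transform (my reference sketch; real decoders replace this with fast integer butterflies):

    use std::f64::consts::PI;

    // Naive 1-D 8-point IDCT:
    //   x[n] = 1/2 * sum_k c_k * X[k] * cos((2n + 1) * k * pi / 16),
    // with c_0 = 1/sqrt(2) and c_k = 1 otherwise.
    fn idct8(coeffs: &[f64; 8]) -> [f64; 8] {
        let mut out = [0.0f64; 8];
        for n in 0..8 {
            let mut acc = 0.0;
            for k in 0..8 {
                let c = if k == 0 { 1.0 / 2f64.sqrt() } else { 1.0 };
                acc += c * coeffs[k] * ((2 * n + 1) as f64 * k as f64 * PI / 16.0).cos();
            }
            out[n] = 0.5 * acc;
        }
        out
    }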


When it comes down to the metal, benchmarks become the important thing -- and you rarely stop until the innermost routines are in assembly.
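
For example, a sum-of-absolute-differences kernel like the generic sketch below is trivial in a high-level language, but it's exactly the kind of routine that ends up as hand-written SIMD/asm in ffmpeg- or x264-style codebases once the benchmarks start driving (my sketch, not any particular project's code):

    // Generic sketch of a sum-of-absolute-differences inner loop over an
    // 8x8 block -- the classic candidate for hand-written SIMD/assembly
    // once the profiler points at it.
    fn sad_8x8(a: &[u8], a_stride: usize, b: &[u8], b_stride: usize) -> u32 {
        let mut sum = 0u32;
        for y in 0..8 {
            let ra = &a[y * a_stride..y * a_stride + 8];
            let rb = &b[y * b_stride..y * b_stride + 8];
            for x in 0..8 {
                sum += (ra[x] as i32 - rb[x] as i32).unsigned_abs();
            }
        }
        sum
    }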



