
> It's better that the codec is 100% safe but there's a tiny amount of unsafe glue code, [...]

I meant that many codecs cannot easily be written that way, because memory allocation can occur here and there.




> many codecs cannot easily be written that way, because memory allocation can occur here and there.

This is an extremely vague complaint and thus suspicious. Of course we could imagine an image format which decides to allow you to declaratively construct cloud servers which transmit XML, and so it needs DRM to protect your cloud service credentials - but I claim (and I feel like most people will agree) that the fact WUFFS can't do that is a good thing, and we should not use this hypothetical "image" format, a.k.a. massive security hole.

Try specifics. This is a WebP bug. For a WebP codec, where does it need memory allocation? My guess is it does allocations only during a table creation step, and after it figures out how big the final image is. So, twice, in specific parts of the code, like JPEG.


WebP Lossless is a comparatively simple codec; it is basically PNG with more transformation methods and custom entropy coding. So I believe it is probably more doable to rewrite in Wuffs than other codecs. The entirety of WebP has a lot more allocations, though; a brief skim through libwebp shows various calls to malloc/calloc for, e.g., I/O buffering (src/dec/io_dec.c among others), the alpha plane (src/dec/alpha_dec.c), color map and transformation (src/dec/vp8l_dec.c), gradient smoothing (src/utils/quant_levels_dec_utils.c) and so on.


I've written a WebP Lossy decoder in Go and I'm confident that I can write one in Wuffs too.


If you want to open an image file of unknown resolution, you might want the library to allocate and return a byte array matching the resolution of the image.

Oh, you could design a more complex API to avoid that - but I'm a programmer of very average ability; ask me to juggle too many buffers and structs and I'll probably introduce a buffer overflow by mistake.


> This is an extremely vague complaint and thus suspicious.

Look I have nothing against using WUFFS but I don’t think you’re in a strong position to determine what’s suspicious: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


I mean, sure, I think you should solve this class of problems with WUFFS. I wrote that for Apple a day or two ago, I wrote it here about Google, I'd say it to Microsoft, I'd say it on a train, or a boat, I'd say it in the rain or with a goat.

It's not going to stop being true, and I don't get the impression most HN readers already knew this sort of work should be WUFFS, so I'm going to keep saying it.

You can pair me with lots of things, "tialaramex vaccine" works, "tialaramex oil", "tialaramex UTF8", "tialaramex Minecraft".


> memory allocation can occur here and there

What about HW decoders? For video formats (which are much more complex than image formats) such as MPEG, H.264, H.265, VP9, AV1 and similar. If memory allocation is needed here and there, I'd guess the maximum size of each such allocation is always known in advance - written into the spec, and therefore known at chip design time, or at software decoder compile time. How else would HW decoders even be possible?

Also: Hey Google, do you even fuzz-test? Your own stuff?



