“Land initial Rust MP4 parser and unit tests” (bugzilla.mozilla.org)
217 points by steveklabnik on June 18, 2015 | 86 comments



Audio, video, and image codecs written in Rust seem like a fantastic early opportunity to investigate Rust's utility for closing down related security vulnerabilities.

Is anyone aware of work on fuzz testing an interesting Rust-based "attack surface"(1)? I'm very interested to see what kinds of issues are/aren't turned up in Rust vs. the usual C/C++ code for these libraries.

[1] By "attack surface", I'm thinking of traditional bits of code with high exposure to untrusted data: codecs, HTTP parsers, etc.


I know we've run the Rust URL parser through AFL. IIRC, it found one panic (brackets in URLs) and nothing else. Nothing security-sensitive was found.

(Don't take the statement about security too strongly, however; AFL will not detect logic problems that could result in security issues, such as interactions with TLS hostname validation or whatnot.)


I don't think that production-quality codecs will be written in Rust. As far as I'm aware, codecs are usually very complex pieces of software that employ a lot of hand-written assembly and very low-level C. Rust just doesn't offer anything valuable in this area.


A very complex piece of software is exactly the sort of thing you want to avoid writing in C or assembly. The last person I know who wrote a codec implementation did it by machine-translating the spec into a functional program that output assembler, C, Python, Verilog, etc. as targets. That did require handcrafting "leaf nodes" for things like matrix multiplication, but they were small enough to verify.


Yes, I think of Rust as a promising target for parser generators. Thinking about text parsing: if Bison generated Rust, then you'd have a memory-safe parser that should be about as efficient as C. Something like this probably already exists or is being worked on. Ideally, existing code generators could also target Rust to get memory safety.

There are some SIMD optimizations in parsers, though, that I don't know how easily you could express in Rust. The quintessential example of this for text parsing is Clang's optimization that uses SSE to skip over C++ comments 16 bytes at a time:

https://github.com/llvm-mirror/clang/blob/61f6bf2c8a8e94c4fa...


I am far from an expert in this area, but you can do SIMD with Rust: https://github.com/huonw/simd


This looks like a good start (though experimental). If it supported something like AltiVec's vec_any_eq() intrinsic on its u8x16 type, that would do the trick. vec_any_eq() takes a vector and a value and returns true if any element of the vector equals the value.

On x86 with SSE, this could generate a sequence of two instructions: pcmpeqb (do 16 byte-wise compares) followed by pmovmskb (collect the 16 comparison results into a single byte). Then you'd get the same efficiency as what Clang does (Clang searches 8 bytes at a time for a '/' character when skipping over comments).
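
To illustrate, here's a rough sketch of that two-instruction sequence in Rust, using the std::arch intrinsics as they exist in today's Rust rather than the experimental simd crate (the function name is made up):

  #[cfg(target_arch = "x86_64")]
  fn find_byte_sse2(haystack: &[u8], needle: u8) -> Option<usize> {
      use std::arch::x86_64::*;
      let mut i = 0;
      unsafe {
          let needles = _mm_set1_epi8(needle as i8);
          while i + 16 <= haystack.len() {
              let chunk = _mm_loadu_si128(haystack.as_ptr().add(i) as *const __m128i);
              // pcmpeqb: 16 byte-wise compares; pmovmskb: collect into one mask.
              let mask = _mm_movemask_epi8(_mm_cmpeq_epi8(chunk, needles));
              if mask != 0 {
                  return Some(i + mask.trailing_zeros() as usize);
              }
              i += 16;
          }
      }
      // Scalar fallback for the final partial chunk.
      haystack[i..].iter().position(|&b| b == needle).map(|p| i + p)
  }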


Huon is working on SIMD full time at Mozilla this summer as far as I know. We should see more mature support materialize in the next couple of months for sure.


I have been working with https://github.com/kevinmehall/rust-peg for a while. Also there is https://github.com/Geal/nom
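
As a taste of what parsing with nom looks like (this sketch uses nom's later function-based API; in 2015 it was macro-based, and the box_header parser here is hypothetical):

  use nom::{bytes::complete::take, number::complete::be_u32, IResult};

  // Parse the start of an ISO BMFF/MP4 box: a big-endian 32-bit size
  // followed by a four-character type code like "ftyp" or "moov".
  fn box_header(input: &[u8]) -> IResult<&[u8], (u32, &[u8])> {
      let (input, size) = be_u32(input)?;
      let (input, kind) = take(4usize)(input)?;
      Ok((input, (size, kind)))
  }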


A while back there was a blog post (I think by Dark Shikari) about the use of assembly in x264. I couldn't find it by searching, but basically the very best C versions of essential image processing algorithms were orders of magnitude slower than the hand-crafted assembly versions, especially with SIMD instructions.

To create a viable codec, a functional-spec compiler would have to close that performance gap. Edit: it might be possible to create a set of SIMD algorithmic building blocks (e.g., liboil [0]) that could be verified for correctness and then incorporated into the compiler.

[0] https://wiki.freedesktop.org/liboil/


There has been work on typed assembly that could also apply: http://research.microsoft.com/en-us/projects/talproj/


Conveniently, computers are getting orders of magnitude faster and cheaper, while the software on them, by virtue of its complexity, has an ever-increasing attack surface.

I will happily trade a video player that uses 0.4% of my CPU for one that uses 4% but has fewer potential vulnerabilities.


When it comes to video encoding, the tradeoff is often between being able to encode live video in real time and not being able to encode it at all.

I think there's also an important tradeoff to be made in how much of a carbon footprint we dedicate to software. I really don't think we should be trying to make software expand to consume all available resources. We have to be both efficient and secure.


Even when you are on a mobile device constrained by battery?


Very interesting. How can we get more info on this? Is there any public code or product out there?


The resulting commercial product is not the codec itself, but a set of test signals for codec verification for use by hardware vendors.

http://www.argondesign.com/products/argon-streams-hevc/


Thanks! I'm contributing to an open-source project where we do a lot of standardization work (including MP4). We're trying to improve parser generation by using a model derived from the specification. We even have some funding for this. I'd be happy if we could discuss it (contact@gpac.io). Better standards mean a better world for everyone :)


Can you imagine writing a Bluetooth spec (and profiles) in a high-level language and letting Rust (or something else) produce the native driver code?


Yes, that's exactly what this is about! Unfortunately, our current results show that it still takes a lot of time to model the spec correctly. Any help appreciated; my contact info is in the message above :)


We'll see. For inline assembly, sure, it doesn't, but you can do some pretty intense stuff with Rust's type system to guarantee certain static properties. For example, earlier today this comment appeared on Reddit, talking about how to use the type system to guarantee that you're not doing out-of-bounds array accesses: http://www.reddit.com/r/rust/comments/3aahl1/outside_of_clos...

It is true that the lower you go, the less we currently offer, but over time I expect library authors will use this and other future features to check even more properties without runtime overhead.
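
As a toy illustration of the flavor of that idea (the linked Reddit comment uses a cleverer lifetime-branding trick; this sketch only shows validate-once-then-index):

  // Validate an index once; afterwards, lookups need no new checks.
  struct CheckedIdx(usize);

  fn check(idx: usize, len: usize) -> Option<CheckedIdx> {
      if idx < len { Some(CheckedIdx(idx)) } else { None }
  }

  fn get(xs: &[u32], idx: &CheckedIdx) -> u32 {
      // Still a checked index here; tying CheckedIdx to one specific
      // slice (the "branding" trick) is what lets you drop the check.
      xs[idx.0]
  }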


A more modern language (modules, closures, pattern matching, package management, type inference) and memory safety are nice. If I were writing a new codec, I'd seriously consider writing the lion's share of it in Rust, to save time and to avoid security problems.


I think the sentiment of the parent was that we should avoid (re)writing things in the pet language of the week.

Sure, Rust has a great following, but so does Go. Which is the "right" choice for a new codec? I don't think there is a clear answer.


Rust does not require a runtime / garbage collector and offers a programming model that interacts very precisely with external threading requirements. It supports being called from the native platform ABI without initialization, and it supports writing libraries to match an existing, native ("C") ABI and function as a drop-in replacement. Go does not do these things.
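
As a sketch of that last point, with an illustrative (not real) function name:

  // A Rust function exported with a C ABI: callable from C with no
  // runtime setup, so it can slot into an existing C codebase.
  #[no_mangle]
  pub extern "C" fn parse_len(buf: *const u8, len: usize) -> i32 {
      if buf.is_null() {
          return -1; // C-style error code
      }
      let bytes = unsafe { std::slice::from_raw_parts(buf, len) };
      // From here on, everything is bounds-checked safe Rust.
      bytes.len() as i32
  }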

The suggestion to use Rust here is because this is specifically a thing Rust was designed to do, not because it has a huge following. Go is a great language, but it is designed for different things (and has a following of people who want it to do those different things). If those two languages are the two choices for replacing a codec that functions as a library, there is an objectively correct answer.


For a codec there is probably a clearer answer than for some other applications: codecs are very performance-sensitive (you probably don't want a GC) and also very security-sensitive (you're decoding completely untrusted data). Previously they've always been written in C or C++ because of the performance requirements, and Rust keeps the same performance while adding more safety checks. Go doesn't.


>Sure, Rust has a great following, but so does Go. Which is the "right" choice for a new codec? I don't think there is a clear answer.

Sure there is. Go would be an absolutely horrible choice for writing a video codec -- and wasn't even designed for that kind of work in the first place.


> Sure, Rust has a great following, but so does Go. Which is the "right" choice for a new codec? I don't think there is a clear answer.

Actually, there is quite a clear right answer for a hardware codec.

You have 3 choices: C, C++, or Rust.

While C or C++ is probably better for the core of the codec, that's probably not where your security and concurrency issues are.

Most of your issues are in unpacking headers, validating packets, getting decode parameters correct, etc. Rust is WAY better for this task than C/C++ from a security or concurrency point of view.
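
For example, here's a minimal sketch of what bounds-checked header unpacking looks like in safe Rust (a hypothetical helper, not code from the actual patch):

  use std::convert::TryInto;

  // Read a big-endian u32 at `offset`; any out-of-range access yields
  // None instead of silently reading past the buffer.
  fn read_be_u32(data: &[u8], offset: usize) -> Option<u32> {
      let end = offset.checked_add(4)?; // no integer overflow
      let bytes: [u8; 4] = data.get(offset..end)?.try_into().ok()?;
      Some(u32::from_be_bytes(bytes))
  }

Every failure mode becomes a recoverable None rather than the silent out-of-bounds read you'd risk with C pointer arithmetic.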


There is also Ada. :)


How easy is it to inter-link C and Ada?



As codec operations move into dedicated hardware blocks on silicon, their code will be more about pointing registers at blocks of data and less about performing the mathematics on the CPU.


Rust code can incorporate assembly just like C can, and low-level Rust should be just as fast as low-level C, so it sounds like Rust will be a great language for writing production-quality codecs.
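
For instance, inline assembly in Rust looks roughly like this (using the asm! syntax as later stabilized; in 2015 it was still an unstable feature):

  use std::arch::asm;

  // Add two u64s via an inline x86-64 `add`; a real codec would hide
  // hand-tuned SIMD kernels behind an interface like this instead.
  #[cfg(target_arch = "x86_64")]
  fn add_asm(a: u64, b: u64) -> u64 {
      let out: u64;
      unsafe {
          asm!("add {0}, {1}", inout(reg) a => out, in(reg) b);
      }
      out
  }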


I use downvotes pretty sparingly on HN, but I'm afraid you're completely wrong. This is the kind of thing that Rust excels at.


What about (de)muxers?


Container formats are much, much, much simpler and demand much less CPU than sample data formats. One could (and I have) write a perfectly usable ISO parser in Emacs Lisp, of all things. The reason they tend to be written in C is that a container parser is not very useful outside the context of handling the sample data; the overwhelming majority of code that understands containers exists to pump out pointers to each individual sample. Doing this across process boundaries would be ... unpleasant.
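
To make that concrete, here's a rough sketch of a top-level box walk in Rust (hypothetical, not the parser from this patch):

  use std::convert::TryInto;

  // Walk top-level ISO BMFF boxes, yielding (offset, size, fourcc).
  // A real parser must also handle size == 0 (box runs to end of file)
  // and size == 1 (a 64-bit largesize follows); this sketch skips both.
  fn walk_boxes(data: &[u8]) -> Vec<(usize, u32, [u8; 4])> {
      let mut boxes = Vec::new();
      let mut pos = 0;
      while pos + 8 <= data.len() {
          let size = u32::from_be_bytes(data[pos..pos + 4].try_into().unwrap());
          let kind: [u8; 4] = data[pos + 4..pos + 8].try_into().unwrap();
          if size < 8 {
              break; // malformed or unsupported
          }
          boxes.push((pos, size, kind));
          pos += size as usize;
      }
      boxes
  }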


Can anyone explain why an MP4 parsing lib in a specific language is #1 on HN? MP4 is just the container format (not the video stream). I'm not trying to belittle it; I'm just curious whether I'm missing context.


This is the first intrusion of Rust into the Firefox source tree.

The whole point of Rust and Servo development was that they would eventually lead to safer, cleaner code in Mozilla's shipping web browsers. This code itself doesn't seem significant, but it's a milestone in that it marks the beginning of the payoff from that effort.


It's also vindication for making Rust easy to combine with other languages. If you tried to do this with Go, with its own custom stdlib, main(), GC, threads, and stacks, it would be a nightmare (not to mention an extra 1 MiB in the binary).


Go and Rust have different purposes. You can think of Go as a leaner Java/C#, and Rust as a safer C/C++. Although both Go and Rust are labeled "systems programming languages" by some, there are different understandings of what "systems" means in that context.


Go 1.5 adds a c-archive build mode, which takes care of one of your complaints (no "extra main()" requirement).


I believe it's because this is the first piece of Rust code landing in Firefox.


It's not the first patch, but it's the first patch that's actually landed, yes.


Bear in mind that #1 on HN isn't necessarily the most upvoted; look at the number of points on the submission.


Well, yes. Otherwise the same stories from when Jobs died would still be on the front page.


Is it possible to query the Mozilla bug tracker to see how many security issues were fixed in the pre-Rust MP4 parser before it was replaced?


Most security bugs are private. Even after a bug has been fixed, users running older versions of Firefox may still be affected, so Mozilla doesn't want to expose too much information about how to exploit it. The fix, of course, is open source for everyone to examine.


Mozilla doesn't open-source their bug metadata?!

Ah, it's cool; they're rational actors who don't crucify people for private interests.


Mozilla security bugs do get opened up eventually. Specifically, once not only Mozilla but also various downstream distributors (Linux distributions, etc.) have shipped a fix. Release cycles vary, so there is typically a gap of a month to a bit over a year (depending on whether the fix could be backported to the previous ESR) between the fix shipping in Firefox and the bug being fully disclosed.

That said, even after a security bug is open some information in it may remain hidden. For example, weaponized exploits attached to bugs are generally kept hidden even after the bug is opened.


Amazing, the future is now! How does it perform?


I imagine that using rust for this is going to mainly provide security benefits. Parsing the MP4 file isn't very CPU intensive; it's a straightforward binary format. (Decoding the video stream is what is CPU intensive, and this code doesn't do that.)


It would be pretty exciting to see an audio or even a video decoder/encoder written in Rust.


Some VLC committers have been experimenting with Rust. Mostly parsing, see https://github.com/geal/nom and http://spw15.langsec.org/papers/couprie-nom.pdf


Oh yes, I remember it; it was posted on HN some time ago (by you, I see :-) ): https://news.ycombinator.com/item?id=9602055 I still have to read the paper; I'd forgotten about it.


I came across a FLAC decoder on HN yesterday [0]. It's pretty new and I haven't looked at it in any depth.

[0] https://github.com/ruud-v-a/claxon (via https://news.ycombinator.com/item?id=9731249)


Perhaps, but software codecs aren't nearly as interesting as hardware codecs. Efficient high-resolution video decoding uses dedicated hardware these days.

It'd be interesting to see portions of libva rewritten using Rust, though.


The only reason to implement a codec in hardware is speed or power consumption. Software codecs are much more flexible in the parameters you can tweak, and more tolerant of variations in the input such as buffer under/overflows.

In that respect, software codecs are much more interesting, but hardware does allow you to get cutting-edge compression to market more quickly.


> The only reason to implement a codec in hardware is speed or power consumption.

Which are two of the most critical properties of a codec. When people look at the battery life of a new platform, one of the common questions is "how many hours of continuous video playback?". Or "how many hours of screen-off audio playback?".


> Which are two of the most critical properties of a codec.

If your target is mobile or embedded, yes. Fortunately, that's not the only use case. Software codecs are a lot more interesting from the standpoint of not being fixed function.


I have to agree... for example, when looking for a micro HTPC to replace a full desktop CPU/board, I tried several, none of which could handle 1080p video well without proprietary drivers, which often didn't work with the OS/kernel image I happened to be trying. My last attempt was a quad-core CuBox i4Pro, which actually didn't do too badly in software, but it would overheat and lock up in use.

In the end I'm using a Core i3-5010U based box that runs well enough... but I would really love to see something almost as powerful with lower power use and better support. On the one hand, I want a stable, flexible system... on the other, I want the ability to tinker.

It seems a lot of the time vendor interests are at odds with a tinkering consumer.

On the same note, I've been thinking that something akin to Kodi/XBMC with a web-based control interface that works on mobile devices, combined with an output interface that can render to something like a Google Chromecast, would be awesome. It could run on a desktop in another room and simply display on the TV, or multiple TVs for that matter.

Looking at some of the streaming gaming options from Nvidia and Steam, I'm hoping to see this become a home media option that actually works nicely in the future.


FPGAs could change that equation :P


Unlikely. They haven't in the 20 years they've been around so far. And the (painful) processing time required for "place & route" massively limits the degree to which their dynamic nature can be exploited.


Interesting set of priorities there. Hardware codecs are much slower to market than software due to fixed costs, decoding speed is completely critical to codec performance, and lots of people care about power.


Software implementations of a new codec are generally available earlier, but the first implementations shipping to consumers (in STBs, phones, etc.) are all going to be in hardware, because the computational cost of a new codec is typically high enough to justify that lead time. That hardware will be limited to a specific set of run-time levels and profiles, VBV sizes, etc.

Of course hardware implementations have their benefits. But the knobs that software codecs offer are much more interesting.


Keep in mind that the vast majority of "hardware codecs" are actually tiny embedded DSPs running software (well, firmware) that nobody generally gets to see.


Sadly, with the rise of VP8/VP9/WebM, this got lost.


Those have been implemented in hardware too.


H.264 exists in hardware everywhere now. VP8/VP9, not really.

And VP8/VP9 have the same patent issues as H.264 all over again.


The vast majority of phones shipping today include VP8 in hardware. VP9 is a bit rarer at the moment.

While it's up to you and your lawyer to decide whether you can ship a codec, many companies feel perfectly comfortable shipping VP8 and VP9, including Mozilla, Google, Samsung, most SoC vendors, etc.


> And VP8/VP9 have the same patent issues as H.264 all over again.

No, not the same patent issues. Google obtained a blanket global license for VP8 and VP9 from MPEG LA.


Wrong. Using VP8/VP9 means you have to accept the Google license, which says it becomes void as soon as you claim software patents are invalid or go to court against any of Google's claims.


Sounds like it's still a work in progress:

  This isn't functional yet, but I'd like to get
  the code in tree as a base to build upon.


Indeed, the amount of parsing code that was actually imported here is tiny:

https://bug1175322.bugzilla.mozilla.org/attachment.cgi?id=86...


It seems to be the very beginning of the project; I guess talking about benchmarks is premature.


Sorry, I was mostly joking


So in the future Firefox / Servo will implement decoders in Rust too?


The decoders in Firefox now are third-party libraries (like Google's VP9), so I don't think Mozilla will rewrite them in Rust. Perhaps Mozilla will write the official Daala decoder in Rust?


One of the Daala developers here. The official reference Daala codebase is implemented in C89: this is to ensure the widest possible compatibility with all sorts of weird platforms. In addition, the tooling for assembly and intrinsics is mature.

Of course, this is only the reference implementation - once the bitstream is stable, it'd be great to try writing a decoder in Rust.

An easier starting point might be audio or image codecs, where speed is not as critical and the formats are well defined. For example, here is a pure Rust image codec library: https://github.com/PistonDevelopers/image


By the way, do you know whether Daala will be renamed to NetVC? I don't really like names like NetVC/HEVC, which sound like infections. Maybe it can be kept as Daala? After all, Opus wasn't named "NetAC".


Opus's contributing codecs were called CELT and SILK. The IETF working group name was just "CODEC". The name "Opus" was chosen at the end. I imagine a similar procedure will happen in the NETVC working group.


You mean you expect other major codecs to be merged with Daala, and out of respect for the contributions there will be a change of name?


That's some of it, but also it's just to see if we can come up with a better name than Daala. We also have to check trademarks, etc. There hasn't really been any major discussion around the name yet on the netvc mailing list [1]. The first netvc working group meeting will be in July at IETF 93.

[1] https://mailarchive.ietf.org/arch/search/?email_list=video-c...


Thanks for the pointer, I'll keep an eye on it. I think Daala is a good sounding name, but if trademarks are a problem, that's a different story.


Daala decoder in Rust would be neat.


So Firefox is completely rewritten in Rust? Seriously?

Yes / No / Maybe / Don't know?


The current plan is to write a new browser called Servo in Rust, which is not intended to supplant Firefox. Servo is currently classified as a research project, not product development, with the goal of exploring parallelism and safety in browser engines. Small, isolated utilities written in Rust may end up in Firefox.

(I didn't downvote you, btw. I'm guessing someone thought you were being needlessly sensationalist.)


I think the plan is that FirefoxOS and Firefox for Android will transition over to Servo, presumably because of the major benefits of parallelism on mobile. Desktop, as you say, it's just limited modules.


No.



