Not patching SwiftShader bugs in SwiftShader feels inexcusable and makes that whole component look like a huge security vulnerability. It's weird that this post mentions it multiple times but never explains why it's not getting fixed.
Is it really true that those bugs aren't getting fixed? There's a commit on July 18th to SwiftShader that seems to be addressing exactly the bug described in this report:
It's short-staffed and there are few community contributors. That's really unfortunate for such a foundational piece of infrastructure for Chrome, Android, and more.
I read the Android team is trying to secure Android by writing new code in Rust. I am wondering how many of these issues would go away by using Rust, or at least be minimized to easier-to-audit unsafe blocks. Obviously it might take them 10x the effort to write the code in Rust and to integrate that into Chrome, but the security effort shown here has a large cost as well.
Seems like the first two issues would be harder to encounter in Rust (raw pointers and double mutable borrows), but rewriting a huge project is a bad idea because of the massive amount of work involved.
The maintainers could make a crate for some tiny, crucial, vulnerable subsystem and call it from C++. That might be a more reasonable baby step and get them rolling with Rust inside Chrome (as opposed to Chrome inside Rust, which would possibly take forever; some dork can crank out a little Rust program in a weekend).
Automatically rewriting the whole codebase could also work, but that's just a research direction at this point, even for Google as far as I know, and Chrome probably isn't the kind of thing you just auto-rewrite and it works right away (maybe I'm pessimistic).
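The "tiny crate called from C++" baby step above might look something like this. It's a minimal sketch with hypothetical names: a bounds check written once in safe Rust and exposed over the C ABI so C++ can call it as `extern "C" bool validate_len(size_t, size_t);`.

```rust
// Hedged sketch of the FFI baby step: a small, auditable piece of
// logic lives in safe Rust, and C++ calls it through a C ABI.
// `validate_len` is a made-up example name, not anything in Chrome.
#[no_mangle]
pub extern "C" fn validate_len(len: usize, max: usize) -> bool {
    // The bounds check is written once, in safe Rust; there is no
    // pointer arithmetic here for a fuzzer to find.
    len <= max
}
```

In practice you'd generate the C++ header with a tool like cbindgen rather than writing the declaration by hand.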
Initially the Chrome team was of the opinion that adding Rust to Chrome would be too big an effort, and thus that focusing on better practices for C++ would be a better approach; hence Oilpan, the C++ GC now introduced in V8.
Eventually, about a year later, they changed their mind and are now allowing Rust for new code in Chrome, as a kind of baby step.
Most likely, because as the report shows, no matter how carefully the code is written and reviewed, there are security flaws due to memory corruption sneaking in.
There is at least already a WebGPU implementation in Rust (the one that Firefox uses). So they could use that if they wanted to. I guess it's probably better for the overall health of the ecosystem if there are multiple implementations though.
Yup, wgpu is already a thing. While it's, ironically, widely used on desktop, it's less mature in the browser context. There's an open-world 3D MMORPG using wgpu for its graphics, yet it's not yet enabled in stable Firefox.
I'm not sure whether many different implementations is inherently good, though.
You can say the same thing about something as simple as "shared memory" -- normal multiprocessing computers have had shared memory since time immemorial, but browsers literally disabled SharedArrayBuffer from 2018 to 2020 and anyone using them to communicate with Web Workers had to find another way. Browsers run a 24/7 onslaught of extremely untrustworthy code, whereas games only run themselves.
Firefox has not enabled WebGPU via wgpu for the same reasons Chrome Security has done an in-depth review of Dawn. It is a component that must be hardened. For anyone out there trying it out by enabling config flags, remember to disable it once you are done. It will be ready in time.
I would love to hear about an implementation of multiplayer that receives code from hostile opponents and executes it, but I do not anticipate you'll find many examples.
> SV_SteamAuthClient in various Activision Infinity Ward Call of Duty games before 2015-08-11 is missing a size check when reading authBlob data into a buffer, which allows one to execute code on the remote target machine when sending a steam authentication request. This affects Call of Duty: Modern Warfare 2, Call of Duty: Modern Warfare 3, Call of Duty: Ghosts, Call of Duty: Advanced Warfare, Call of Duty: Black Ops 1, and Call of Duty: Black Ops 2.
In case this needs to be pointed out, an RCE in a game is an accident, not the way they designed their multiplayer to work. I was describing why the Firefox team might wait for a feature to be security-hardened before releasing it. The answer remains the same -- they design and market the thing to be secure even when it executes untrusted code. Activision does not advertise their games as able to "securely execute RCE gadgets from maliciously crafted steam authentication packets". This part may be surprising: the Chrome and Firefox teams do, in fact, try to ensure that even when someone gains RCE, it executes in a sandbox and can't get very far.
I am not attempting to claim that games do not have security issues or cannot experience remote code execution, just that this is not a normal pattern of behaviour that they plan for, so it is normal that a game author would deploy wgpu long before Firefox does (while Firefox spends a lot of effort on fuzzing, etc.). If anything, a terrible CVE that Activision has apparently expended zero resources fixing is a very good example of what I'm talking about.
https://veloren.net/
I'm a bit impartial since I'm a former contributor, but I think it's super cool.
Aside from that, the Bevy game engine also uses wgpu on non-web, but afaik no game of particular significance or player base has shipped with it yet. I think the biggest user of it is actually a software tool for mining (the hardhat kind), but it's a "call us for a quote" kind of thing so hard to tell how big it is.
That's kind of irrelevant to the adoption potential of WebGPU.
Those examples you gave are not comparable at all; ActiveX and Flash are way, way higher level and don't operate at all like graphics-API middleware.
PNaCl was an alternative design to WASM, which thankfully lost, as WASM is much more flexible.
My point is that WebGPU is way better positioned than any middleware, it has industry backing and official support from all relevant platforms (or plans for it).
It's also benefitted a huge amount from hindsight.
Just wanted to point out that wgpu has both WebGPU and WebGL2 backends. So currently, most projects use the WebGL2 backend via wgpu for any Rust app running in Firefox.
>Also automatic rewriting the whole codebase could work but that’s just a research direction at this point even for google as far as I know, and chrome probably isn’t the kind of thing you just auto-rewrite and it works right away (maybe I’m pessimistic)
GPT4 is extremely good at translating between programming languages without error. The hard part is that there's way too much code to feed to it all at once, so the naive approach of just feeding it all in wouldn't work, and doing it file-by-file would cause problems as GPT wouldn't understand the functions defined in other files.
I doubt this would work (at least with the current version of GPT4). I tried the same thing last week (Swift to Kotlin) and there were so many errors the whole exercise was pointless. Claude fared better, but I wouldn’t bother doing it again.
I think you are correct to suggest starting from small important pieces. I'd propose starting from the attack surfaces and working out from them. The DOM, and the sandbox as a whole, seem like prime candidates.
Wouldn't help in this case; the bugs are in things that Rust's type system doesn't help you with, like references being held across processes that are sharing memory segments, connected to GC'd JavaScript heaps. Affine types aren't able to express that stuff.
At least some of the issues here appear to be UAFs where raw pointers are taken to RC'd objects under the assumption that the object will not be freed while the raw pointer is held. That is not possible to express in Rust (without unsafe).
Some other issues are definitely going to be trickier -- for example, passing a raw pointer to another context and then crashing that context while it's still held. This has come up within Rust before: how do you handle a `&str` backed by mmap when another process could write to those bytes?
And then some are... maybes. Integer underflow panics in debug builds (and thus typically in tests) but not in release -- when mixing it with something inherently unsafe like alloca, would Rust have helped? IDK. Certainly I think integer overflows are less likely to make it to release in Rust thanks to the default behavior, but it's also absolutely an area where I expect Rust to do only a bit better than other languages.
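The point about defaults can be made concrete. A sketch (the function name is made up): because release builds wrap silently, size computations that feed an allocation are exactly where you'd reach for the explicit `checked_*` arithmetic instead of relying on the debug-only panic.

```rust
// Rust panics on overflow in debug builds but wraps in release, so an
// overflow bug can still ship. Explicit checked arithmetic makes the
// failure mode visible in both profiles.
fn alloc_size(count: usize, elem_size: usize) -> Option<usize> {
    // checked_mul returns None instead of silently wrapping -- the
    // wrap is precisely what makes a subsequent stack/heap allocation
    // dangerously undersized.
    count.checked_mul(elem_size)
}
```

A caller then has to handle the `None` case rather than allocating a wrapped-around size.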
Cross-process shared memory doesn't really make a difference here. Rust's support for multithreading works the same way regardless, there's no expressivity problem.
x-process shared memory is definitely a problem. This has come up before -- e.g., is it safe to hold a `&str` into an mmap'd page? Another process could modify the data so that it is no longer valid UTF-8, even though you hold a reference to it.
This is just out of scope for the borrow checker. Instead you'll have to roll your own abstraction that tries to maintain its own invariants, such as a `&MmapStr` that doesn't perform any `unsafe` operations that assume UTF-8 validity or immutability.
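A roll-your-own abstraction like that might look like the following sketch. `SharedBytes` is a hypothetical stand-in (a plain `Vec<u8>` here rather than a real mmap'd region, to keep it self-contained): the key design choice is that it never hands out a long-lived `&str`, but re-validates the bytes on every access.

```rust
// Hypothetical sketch: a wrapper over bytes that some other party
// might mutate. It never caches a `&str` (whose UTF-8 invariant
// could be broken externally); it validates at each read.
struct SharedBytes {
    buf: Vec<u8>, // stand-in for an mmap'd region in this sketch
}

impl SharedBytes {
    // Returns an owned copy validated at the moment of the call.
    // If another writer has corrupted the bytes, we observe None
    // instead of UB from a dangling assumption.
    fn read_str(&self) -> Option<String> {
        std::str::from_utf8(&self.buf).ok().map(|s| s.to_owned())
    }
}
```

For a real shared mapping you'd additionally need volatile or atomic reads to copy the bytes out before validating, since the compiler may otherwise assume the memory doesn't change under it.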
You can cause the same problems from another thread in the same process, or even from the same thread. The fact that an OS process boundary is involved isn't relevant- you simply don't form the &str if some other part of the system has that kind of access to it, whether or not the OS is involved.
The point of Rust's type system here is that it makes that distinction at all, not that the one side of that distinction has additional invariants.
> You can cause the same problems from another thread in the same process, or even from the same thread. The fact that an OS process boundary is involved isn't relevant- you simply don't form the &str if some other part of the system has that kind of access to it, whether or not the OS is involved.
No you can't, not without unsafe. Rust will not let you.
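Concretely, within one process the borrow checker statically rules this out. A tiny demo (names are made up): while `view` borrows `buf`, any attempt to mutate `buf` is a compile error, so the "bytes changed under my `&str`" situation cannot be constructed in safe code.

```rust
// The single-process case the parent comment describes: safe Rust
// simply refuses to let a &str and a mutator coexist.
fn borrow_rules_demo() -> String {
    let mut buf = String::from("hello");
    let view: &str = &buf; // shared borrow of buf's bytes
    // buf.push('!');      // rejected at compile time (E0502): cannot
    //                     // borrow `buf` as mutable while `view` lives
    let snapshot = view.to_owned();
    buf.push('!'); // fine here: the borrow ended at its last use
    debug_assert_eq!(buf, "hello!");
    snapshot
}
```

It's only once another process (or `unsafe` code) can reach the same bytes that this guarantee stops applying, which is the inter-process gap discussed above.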
Again, this is sort of an up in the air discussion. In most cases saying "your type system can't handle someone else ptrace'ing you" is obvious and fine, but when you're talking about certain abstractions like mmap, where multi-processing and shared contexts are inherent, it's important to note that you can't just rely on normal aliasing rules.
My point is that you can solve the inter-process case with precisely the same tools that Rust already uses to solve the intra-process cases.
Outside of a pure-Rust single-threaded process that uses only stack allocation, those cases are already using unsafe- for heap allocation, or to launch a thread, or whatever. Or if Rust is a guest in some other language's process, you don't even need unsafe! And the type system features used to contain that unsafe are just as useful when you're sharing memory with another process as when you're sharing memory with another thread.
I'm obviously not saying you can just take a shared reference to an arbitrary mmapped buffer, or whatever. I'm saying that nvm0n2's claim is misleading, because Rust's type system does give you the tools for cases like this, at least to a greater degree than C++'s type system!