I think the authors should mention the background story of how this project originated at Google Research (UK). I tried browsing the GitHub project page and didn't see any obvious references, aside from the committers list.
AFAIK, the first time I heard about "Project Oak" was about four or five years ago.
A canonical use of Oak is to build privacy-preserving sealed computing applications.
In a sealed computing application, a node (usually a client device) sends data to an enclave application (usually a server), which processes data without the service provider hosting the enclave application being able to see the inputs, outputs, or side effects of the computation.
It both predates the Apple approach and is more thorough. I cannot inspect or verify the software BOM of my image with Apple’s approach; I just have to trust them. With Oak you have trust down to the hardware.
> When on-device computation with Apple devices such as iPhone and Mac is possible, the security and privacy advantages are clear: users control their own devices, researchers can inspect both hardware and software, runtime transparency is cryptographically assured through Secure Boot, and Apple retains no privileged access
Waitasec - ZOOM AND ENHANCE!
> users control their own devices
I’ll believe that when Apple lets me downgrade my iOS version.
Honestly, I think it will be used for the reverse (and unfortunately more evil) purpose - Google wants to be able to control YOUR machine's compute environment for things like playback of DRM'd content. They want a chain of trust that ensures your browser cannot be modified to do things like block ads.
From a service owner perspective, if I offer content and want to enforce strong identity from the user then this seems like a win. I may lose eyeballs but will gain higher confidence that my content is being consumed as intended.
I'm fine with more controls in place; a safer internet is clearly a social win that would reduce life-altering fraud, scams, etc. If power users want to go to their peer-to-peer cesspool, then go for it.
A safer internet does not necessarily follow from having this system in place. I'd like to point out that this is an opinion that you have which I and others disagree with.
I also don't believe that content creators have any kind of legal or moral right to force the general public to "consume as intended". For instance, I've got a shelf in my office that's built with supports that are designed for plumbing. I have not consumed these pipes as intended.
How does enforcing strong attestation from the user result in a safer internet or reduce life-altering fraud and scams? It's not users injecting those onto pages; it's the ad networks that operators choose to use.
I've been fortunate to be paid by Google to hide user data from Google since 2016. Not many companies would shell out anything for this sort of privacy feature.
As for the Oak stack, they win the race: it is the only stack that currently provides full hardware attestation covering 100% of the code running in the enclave, and 100% of it is open source. There are other good efforts, such as Confidential Containers (CoCo) with their Key Broker, but so far they only cover the initial boot firmware, not the full set of software running inside the enclave.
It’s really apples and oranges. Oak is about being able to execute code without side effects, even when it’s running in an environment you don’t provide. If it gets extended to the phone you can snark about ads, but really it would only be able to address whether any data associated with your viewing an ad escapes to a third party. So it would largely make ads more like a billboard vs the way they work today. But that’s speculation: Oak isn’t trying to make the world safe from advertisers, it’s trying to make your data safe from being used in ways you didn’t permit, even when it’s being operated on in an environment you didn’t provide.
So, something that can be used to run Tor relays that provably don't intentionally misbehave? Or hidden services that the hosting provider has no way to give other people access to?
Attestation is to homomorphic encryption as storing something in a bank safe is to burying it out in the woods. There’s an entity providing you service, and they’re trying their best to guarantee that they’re not going to decrypt your stuff, but there’s usually some sort of collusion that will make it possible.
I was curious if someone would build something that allows the DCAP datacenter attestation to be exposed to applications, e.g. "prove via intel that the SHA of the software running on the machine is XYZ"
>"prove via intel that the SHA of the software running on the machine is XYZ"
This is exactly the purpose of MRENCLAVE in Intel SGX remote attestation quotes (and similar fields in other TEE platforms), and proving the software identity to remote clients is a common use case.
Maybe I misunderstand - is that what you mean, or is there another use case you are looking for?
Super cool. I did some reading about secure enclaves when I was dreaming up ways to democratize compute; very cool to see a project like this making it a reality.
This reminds me of Spritely Goblins from the Spritely Institute, which has "vats" where you can run code in a distributed manner using object capabilities.
Nitro Enclaves is a lot less ambitious than this. This is a full-blown microkernel, whereas a Nitro Enclave is a Linux kernel with just the virtio drivers enabled plus a small initrd containing your Linux application. The trusted computing base of Nitro Enclaves is therefore larger.
Nitro Enclaves also doesn't have all the high-level infrastructure for composing microservices that this does.
I think (but somebody smarter might correct me) that with Nitro Enclaves you also need to trust Amazon, whilst with this you need to trust AMD but don't need to trust GCP.
Nice thing about Nitro Enclaves is that the Linux bits aren't tied to OCI. E.g. Monzo uses Nix to build their enclave images: https://github.com/monzo/aws-nitro-util
Maybe I’m just paranoid, but isn’t the (possibly unwritten) intent of this project to be able to flip the client and server around and run code in your browser and phone?
I don’t understand their incentive to work on this unless they can use it to gatekeep “official” youtube clients (for example).
Incentive is that there is a small market segment that wants "actual privacy" and a concern that this segment could become very large at any moment due to publicity/awareness. Nobody wants to be caught with their pants down in that event.
A bit surprised that it’s written in Rust, rather than Go. I suppose Rust can take advantage of more low-level APIs, plus no overhead of garbage collection.
edit: love that the community is not silo’d into a proprietary chat platform as well:
> We welcome contributors! To join our community, we recommend joining the mailing list.
I really wish more open source projects used mailing lists.
1) decentralized means of communication
2) you can join these communities from any type of environment (i.e., a corporate hellhole) without much friction. With Discord or Slack (especially at Fortune 500s), it usually involves a whole approval process to get the thing installed and a hole punched through the firewall to reach the service.
No, using a personal email and device for what I consider contributing in a work capacity (i.e., submitting a patch to an OSS project to solve a specific problem) is not acceptable.
The hardware features used for this are Intel and AMD CPU extensions: they're writing a microvm to run inside special "enclave" virtual machines. Go is a fine language but it's not really intended for this sort of work. Rust is a natural fit for this work: you can write low level drivers and also ensure a number of safety properties.
> A bit surprised that it’s written in rust, rather than Go. I suppose rust can take advantage of more low level apis, plus no overhead of garbage collection.
It’s security-focused technology. Rust has huge advantages over Go in this area.
Could you name some advantages? I would agree Rust has huge advantages compared to C/C++, and Rust also has a much bigger presence in the "security space". But I would say that's more because of Rust's lack of GC and smaller footprint, which work well in embedded systems, etc.
I guess you could say that Rust's type system being more expressive might eliminate certain classes of bugs, which have security implications. But "huge advantages"?
(Honestly I'm not flame baiting, I'm genuinely curious if my worldview is wrong)
I think Go's lack of union/sum types, i.e. the lack of compiler exhaustiveness checks on cases, is pretty relevant here. For security applications the goal is maximum stringency w.r.t. correctness, so I think "huge advantage" isn't an exaggeration regarding sum types. It's not like "have you checked all cases" is an unimportant question when trying to prove correctness.
Does it really? Aside from a handful of crates and the default std HashMap being slow but cryptographically sound, I would not have assumed so.
Go usage inside Google is actually quite low; people talk a lot about Go being a Google project, but in reality it's a project made by some people who work at Google.
When I last checked, it was a bronze-supported language (with C++, Python, and Java being gold).
I think using Go would not be a popular decision in the Android ecosystem, especially for this kind of systems programming. It's very likely that the project needs tight client-side integration, so they probably wanted a language with wider support, especially with possible iOS support in mind.
> AFAIK, the first time I heard about "Project Oak" was about four or five years ago.
This predates Apple's Private Cloud Compute.