> Often, security experts argue that the web isn’t suited for E2EE applications because of the vast attack surface for code injection – abusable by the developer or by an external attacker. For many web applications, a web browser receives and runs code from a zillion servers, retrieved over a zillion TLS connections.
This feels like the wrong argument. I don't think this has anything to do with the suitability of the web for end-to-end encryption. It is easily worked around with e.g. subresource integrity, or by rolling your own signing scheme.
The real problem with end-to-end security on the web is that you don't have a trusted base. You have to bootstrap your application from some sort of trusted base, and on a website you are re-downloading the whole application every time.
The entire point of end-to-end encryption is that the service provider should not be able to intercept messages. The service provider is the attacker. This is impossible to prevent if you re-download the app every time you open the page†. You have to build trust from some starting point. If you have some trusted bootstrap code you can build from there with signatures, but you can't build trust out of thin air while running code supplied directly by the party you are trying to protect against.
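To make that concrete, here is a rough sketch of what such a trusted bootstrap could do, assuming the bootstrap page itself was obtained over some trusted channel; the pinned key, URLs and file names are all made up:

```html
<!-- Hypothetical trusted bootstrap page. The pinned key, URLs and file
     names below are placeholders, not a real deployment. -->
<script type="module">
  // Public verification key baked into the trusted bootstrap itself.
  const PINNED_PUBKEY_JWK = { kty: "EC", crv: "P-256",
                              x: "REPLACE_X", y: "REPLACE_Y" };

  const key = await crypto.subtle.importKey(
    "jwk", PINNED_PUBKEY_JWK,
    { name: "ECDSA", namedCurve: "P-256" },
    false, ["verify"]);

  // The app bundle and a detached signature come from the untrusted server.
  // The signature is assumed to be in WebCrypto's raw (r || s) ECDSA format.
  const bundle = await (await fetch("/app-bundle.js")).arrayBuffer();
  const sig    = await (await fetch("/app-bundle.sig")).arrayBuffer();

  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" }, key, sig, bundle);
  if (!ok) throw new Error("signature check failed, refusing to run the app");

  // Only execute the bundle once it verifies against the pinned key.
  const blobUrl = URL.createObjectURL(
    new Blob([bundle], { type: "text/javascript" }));
  await import(blobUrl);
</script>
```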
The problem isn't the zillion TLS servers. The problem is the first TLS server, which in the e2ee threat model we have to assume is evil.
† I guess service workers can extend that to once every 24 hours. Still not super compelling.
Subresource integrity hash checks are supposed to let you pin a particular version of a webpage / webapp, but the W3C managed to not let SRI work on bookmarks. If you could bookmark a specific version of a url with SRI, that would make so many problems go away.
You can have a personal startpage saved on your device's local storage, with an SRI link to a webapp, but that takes a bit of fiddling.
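For what it's worth, such a startpage can be as small as this (the URL and hash are placeholders, and the hash has to be updated by hand whenever the app legitimately changes, which is the fiddly part):

```html
<!-- start.html, kept locally and opened from disk (or bookmarked).
     The URL and hash below are placeholders. -->
<!DOCTYPE html>
<meta charset="utf-8">
<title>pinned launcher</title>
<!-- The browser refuses to execute app.js unless it hashes to this value.
     The serving host must send CORS headers for a cross-origin SRI check. -->
<script src="https://app.example.com/app.js"
        integrity="sha384-REPLACE_WITH_REAL_HASH"
        crossorigin="anonymous" defer></script>
```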
> Subresource integrity hash checks are supposed to let you pin a particular version of a webpage / webapp
I don't think that is true. It's called subresource integrity, not resource integrity. I don't think pinning a specific top-level resource was ever part of the goal.
> but the W3C managed to not let SRI work on bookmarks. If you could bookmark a specific version of a url with SRI, that would make so many problems go away.
I don't really think this makes sense in the context of how web browsers currently work. Are you proposing that users just unilaterally pin versions of websites?
The idea is that users could bookmark reviewed and vetted versions of websites / apps.
It won't work "unilaterally" for obvious reasons, but there are lots of p2p, e2e, security and crypto-type apps whose creators would love to be able to reduce the degree to which infra like CDNs and hosting needs to be trusted.
The "page level SRI" mechanism wouldn't even need to work that well (eg limited support for web features). If you could get a trusted "bootstrapper" to work with no software installation then you could do the rest of the trust chain in js "userland".
> It won't work "unilaterally" for obvious reasons, but there are lots of p2p, e2e, security and crypto-type apps whose creators would love to be able to reduce the degree to which infra like CDNs and hosting needs to be trusted.
If we are moving out of the HTTP/web space into some sort of distributed protocol, just use magnet links; problem solved. Or some equivalent hash-based content-addressable scheme. (How applicable that is depends on what space we are talking about.)
I agree but now you are asking your users to either install an extension or download something.
What many people want is the convenience of HTTP/the web with the immutability of content addressing. The frustration of the OP at the start of our thread is that SRI feels so close to being able to provide that, but stops short.
This has been a really interesting discussion for me, because I think you understand the use cases and alternatives, but it seems like you don't agree that hash-pinned / immutable websites natively in the browser would enable many interesting use cases.
I think I don't like it if it's implemented as metadata stored separately from the url. As a separate url scheme where the scheme includes both the hash and the document location, I think it would be cool. Really, that essentially comes down to supporting magnet links with the webseed (ws) parameter in the browser, which would be really cool.
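For reference, a magnet URI already packs both halves together: the content hash plus an ordinary HTTP location via the webseed parameter (the infohash and URL below are made up, and the line breaks are only for readability):

```
magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567
       &dn=myapp
       &ws=https%3A%2F%2Fexample.com%2Fmyapp%2F
```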
Actually, a browser COULD implement subresource integrity for bookmarked urls - it would just require another field in the bookmark metadata. They could arrange for 'right-click, save as bookmark' to pull the SRI hash from the link and copy it into the bookmark automatically.
#featurerequest
[pinning a specific top level resource] - You can include an SRI link on one of your pages that points to the top level index.html page on someone else's domain, and the browser will verify the pinned hash of the index.html file. Try it and see. If their index page also uses SRI to pin its own resources, the entire site is pinned.
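(If you'd rather poke at this from script than from markup, the Fetch API also takes an integrity option and fails the request when the body doesn't match; the URL and hash here are placeholders:)

```html
<script type="module">
  // The promise rejects if the fetched document no longer matches the pinned
  // hash. The other site has to allow CORS for a cross-origin integrity check.
  const res = await fetch("https://other.example.com/index.html", {
    mode: "cors",
    integrity: "sha384-REPLACE_WITH_PINNED_HASH",
  });
  console.log("their index.html still matches the pinned hash:", res.ok);
</script>
```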
[just unilaterally pin versions of websites?] - If the other website is advertising itself as 'not supposed to change', then yes, this is a way of confirming that it has not changed.
Perhaps bookmarklets could make that possible? A bookmarklet could load the resources using SRI hashes hardcoded directly in the JS contained in the bookmark itself.
Or, even simpler, make the initial page itself a data: URI and bookmark that.
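Roughly, the bookmark itself would then be something like this (shown URL-decoded for readability; the app URL and hash are placeholders, and it assumes the browser will open a data: bookmark in the top frame and that the host allows CORS, since a data: page has an opaque origin):

```
data:text/html,<script type="module"
  src="https://app.example.com/boot.js"
  integrity="sha384-REPLACE_WITH_HASH"
  crossorigin="anonymous"></script>
```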
The data: url to bootstrap trust is a really cool idea. I like that.
Of course, then you are starting from a null origin, which makes certain traditional web architecture things hard. Maybe that doesn't matter if you design for it from the get-go. Too bad <iframe> doesn't support integrity. I guess you also sacrifice features that require a secure origin with this method.
This is where IPFS can shine, as (if you don't use the very-optional IPNS) you are browsing directly to a fixed hash that your browser could (it doesn't, but this is trivially fixable) verify; it thereby exists in a space between pre-downloaded and web-requested software (with different tradeoffs, some positive and some negative; but like, all three of these options have negatives).