Declarative code, such as HTML or CSS, which describes particular rendering behaviors from a broad but limited palette, is a different severity from imperative code that can interact with various features of your host platform.
As a user, for the web execution trust model to work, you need to know that the code you're about to execute was vetted by the originating site and not altered in transit. TLS provides this. It won't ease the cognitive load of making that decision, or extend your trust model to the third-party origins referred to by the site you visit, but it does provide baseline assurance that the content wasn't tampered with by anyone who wasn't a party known to you or your origin ahead of time.
As a side-effect, this move further segregates the document-based 'legacy' web from the new web that's an application delivery platform. In my opinion, any move that sets these two use cases further apart, without necessarily impacting the expected usability of either, is a welcome step.
The post states that any new feature, including something as simple as a CSS property, will now require a secure context, regardless of whether the new feature exposes more security risk. This is a marked departure from the declarative-markup vs. imperative-script distinction you make, or from any other risk analysis that has guided which features require a secure context in the past. It doesn't distinguish between simple web pages and web applications. Instead it is a blanket policy: if you don't encrypt, you will not be able to use any modern web standards, period.
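For what it's worth, you can already see how this gating manifests in shipping browsers: outside a secure context, gated APIs are simply not exposed. A minimal sketch (the specific APIs probed here are just examples of features browsers currently restrict):

```js
// window.isSecureContext is true on HTTPS and localhost, false on plain HTTP.
if (!window.isSecureContext) {
  console.warn("Insecure context: gated features won't be exposed at all.");
}

// These are typically missing outside secure contexts in current browsers:
console.log("SubtleCrypto:", typeof crypto.subtle !== "undefined");
console.log("Service workers:", "serviceWorker" in navigator);
console.log("getUserMedia:", "mediaDevices" in navigator);
```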
Correct me here if I'm wrong, but the linked article actually uses a new CSS property as an example of something that would _not_ require a secure context.
> you need to know that the code you're about to execute was vetted by originating site and not altered in transit. TLS provides this.
TLS provides the latter, but certainly not the former. Many sites are serving JS that they cloned from some GitHub repo and have never looked at beyond that.
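For third-party scripts, the platform's partial answer to the "vetted" half is Subresource Integrity: the embedding site pins a hash of the exact file it reviewed, and the browser refuses to execute anything else. A sketch (the URL and hash value here are placeholders, not real values):

```js
// SRI on a dynamically injected script: the browser hashes the fetched file
// and refuses to execute it unless the digest matches the pinned value.
const s = document.createElement("script");
s.src = "https://cdn.example.com/vendor/lib.js"; // hypothetical URL
s.integrity = "sha384-PLACEHOLDER";               // real deployments pin the actual digest
s.crossOrigin = "anonymous";                      // needed for cross-origin SRI checks
document.head.appendChild(s);
```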
I actually quite like the security model of the web.
All code is considered untrusted except for the OS (browser) itself. Permissions are fine-grained, explicit, optional, and enabled on a site-by-site basis. Even basic things, like an application's ability to play audio or execute JavaScript, aren't entirely taken for granted and can be controlled by the user.
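And those grants are queryable from script, too. A small sketch using the Permissions API, run from an async context; "geolocation" is one of the more widely supported permission names:

```js
// Ask the browser what the current per-site grant is, without prompting.
const status = await navigator.permissions.query({ name: "geolocation" });
console.log(status.state); // "granted", "denied", or "prompt"

// React if the user flips the toggle in the browser UI.
status.onchange = () => console.log("grant changed:", status.state);
```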
Overall, I think the web does a pretty good job of balancing security with user convenience. Certainly better than any other mainstream platform I'm aware of.
But by doing that you're saying you're OK with running untrusted code, which could easily exploit the JIT. Whereas with native code you have to trust it, so you'll only ever run good, trusted code.
There is some truth to the fact that until we have support for signed code on the web, and a way to check that whoever signed it can be trusted, we only have "level 1" security.
In FxOS we used code signing to grant access to more powerful APIs. I think that something like what the Dat project is doing could be interesting in this regard, or web packages as described in https://github.com/WICG/webpackage/blob/master/explainer.md
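The verification primitive already exists in browsers via Web Crypto; what's missing is the packaging and trust-distribution story. A minimal sketch of the check itself, with an invented envelope (real proposals like web packages define their own formats):

```js
// Hypothetical: verify a detached ECDSA P-256 signature over package bytes
// before handing them to an app loader. Key distribution and trust are the
// hard parts and are not addressed here.
async function verifyPackage(bytes, signature, publicKey) {
  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    publicKey,   // a CryptoKey imported out-of-band
    signature,   // ArrayBuffer from the package envelope
    bytes        // the signed payload
  );
  if (!ok) throw new Error("package signature check failed");
  return bytes; // only now is the payload eligible for elevated APIs
}
```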
Secure Contexts:
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
Features Restricted to Secure Contexts:
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
Chrome's Secure Origins seem to be the same thing:
https://www.chromium.org/Home/chromium-security/prefer-secur...