Requiring secure contexts for all new features (blog.mozilla.org)
159 points by jwarren on Jan 15, 2018 | 17 comments



They probably should've linked to a better intro to Secure Contexts than a standards document. Here's a simpler explanation from Mozilla:

https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

Features Restricted to Secure Contexts:

https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

Chrome's Secure Origins seem to be the same thing:

https://www.chromium.org/Home/chromium-security/prefer-secur...


I adjusted the post to take your feedback into account. Thanks!


This seems reasonable.

Declarative code such as HTML or CSS, which describes particular rendering behaviors from a broad but limited palette, is of a different severity than imperative code that can interact with various features of your host platform.

As a user, for the web execution trust model to work, you need to know that the code you're about to execute was vetted by the originating site and not altered in transit. TLS provides this. It won't help you with easing the cognitive load of making that decision, or with extending your trust model to third-party origins referred to by the site you visit, but it does provide baseline assurance that the content wasn't tampered with by an agent that wasn't a party known to you or your origin ahead of time.

As a side-effect, this move serves to further segregate the document-based 'legacy' web and the new web that's an application delivery platform. In my opinion, any move that sets these two use-cases further apart, without necessarily impacting the nameplate usability expectation of either, is a welcome step.


The post states that any new feature, including something as simple as a CSS property, will now require a secure context, regardless of whether it exposes additional security risk. This is a marked departure from the declarative-markup-vs-imperative-scripts distinction that you make, or any other risk analysis that has guided which features require a secure context in the past. It doesn't distinguish between simple web pages and web applications. Instead it is a blanket policy: if you don't encrypt, you will not be able to use any modern web standards, period.


Correct me here if I'm wrong, but the linked article actually uses the example of a new CSS property as an instance of something that would _not_ require a secure context.


No. It says a new CSS color keyword would not require a secure context, but that a new CSS property likely would.


> you need to know that the code you're about to execute was vetted by originating site and not altered in transit. TLS provides this.

TLS provides the latter, but certainly not the former. Many sites are serving JS that they cloned from some GitHub repo and have never looked at beyond that.


My eye got caught by the author's signature on the side: "Standards hacker. Mozillian. Loves talking about turning the web into an OS."

I'm personally going in the opposite direction: I started browsing with JS disabled a while ago and found my browsing experience improved.

With the recent security issues, is that really the way we want to go?


I actually quite like the security model of the web.

All code is considered untrusted except for the OS (browser) itself. Permissions are fine-grained, explicit, optional, and enabled on a site-by-site basis. Even basic things, like an application's ability to play audio or execute JavaScript aren't entirely taken for granted and can be controlled by the user.

Overall, I think the web does a pretty good job of balancing security with user convenience. Certainly better than any other mainstream platform I'm aware of.


Probably, you’re putting an extra layer of protection (the browser sandbox) between the attack vector and your system.


But by doing that you're feeling ok with running untrusted code which could easily exploit the JIT. Whereas with native code you have to trust it so you'll only run good trusted code.


> with native code you have to trust it

I consider that a drawback of native code. Not an advantage.

> so you'll only run good trusted code

In practice this isn't a very safe assumption to make.


There is some truth to the fact that until we have support for signed code on the web, and a way to check that whoever signed it can be trusted, we only have "level 1" security.

In FxOS we used code signing to grant access to more powerful APIs. I think that something like what the Dat project is doing could be interesting in this regard, or web packages as described in https://github.com/WICG/webpackage/blob/master/explainer.md


Turn on Tracking Protection in Firefox and add a blocker such as Ghostery, and you will have both performance/security and enhanced functionality.


New CSS properties are only going to work in secure contexts?

How is local development supposed to occur?

E: there's going to be some flag to enable it for development
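For testing a feature on a non-localhost insecure origin, Chromium already has a command-line flag that treats a given origin as secure (the flag name is Chromium's; the origin and profile path below are placeholders you'd substitute). Firefox has historically exposed a similar about:config pref for allowlisting origins, though the exact pref name has changed between releases, so check current docs:

```shell
# Chromium: treat a specific insecure origin as secure for development.
# A fresh --user-data-dir is required for the flag to take effect.
chromium --unsafely-treat-insecure-origin-as-secure="http://dev.example.test:8080" \
         --user-data-dir=/tmp/chrome-dev-profile
```

This is strictly a development convenience; the origin is only trusted inside that throwaway profile.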


http://localhost and file:// are considered secure contexts according to https://developer.mozilla.org/en-US/docs/Web/Security/Secure...


file://, http://localhost, and http://127.0.0.1 (as well as some other defined-to-be-localhost IPs) are all considered "secure contexts".

There's still a problem here, of course: things can work locally but fail when deployed to an http:// site.
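For anyone curious what actually counts as "secure", the spec's origin rules boil down to a short check: https/wss and file: schemes qualify, as do localhost names and loopback addresses. A minimal sketch in JavaScript (the helper name is mine; in a real page you would just read `window.isSecureContext` instead):

```javascript
// Sketch of the "potentially trustworthy origin" rules from the
// Secure Contexts spec. Not a browser API -- browsers expose the
// result of this kind of check as window.isSecureContext.
function isPotentiallyTrustworthy(urlString) {
  const url = new URL(urlString);

  // Secure schemes are trustworthy regardless of host.
  if (["https:", "wss:", "file:"].includes(url.protocol)) {
    return true;
  }

  // localhost names and loopback IPs are treated as secure
  // even over plain http://.
  const host = url.hostname;
  if (host === "localhost" || host.endsWith(".localhost")) return true;
  if (host === "[::1]") return true;                  // IPv6 loopback
  if (/^127(\.\d{1,3}){3}$/.test(host)) return true;  // 127.0.0.0/8

  return false;
}
```

Which illustrates the commenter's point: `http://localhost:8000/` passes but `http://example.com/` does not, so a page can work locally and still lose features once deployed without TLS.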



