
I said something similar in a reply below, but I find it interesting that this amounts to a .gov-wide decision that availability is always less important than confidentiality and integrity.

While that's probably valid in the main, is that always true? FEMA/NOAA spring to mind. As does IRS guidance, especially since those documents should have digital signatures themselves for an additional layer of integrity.

Was this idea part of the discussion?




Bear in mind that when it comes to plain HTTP, it's not just the system's confidentiality and integrity that you need to weigh: it's the user's confidentiality and integrity. That's a larger moral responsibility, in my opinion.

These issues were already worked through for the executive branch as part of the White House HTTPS policy published in June 2015:

https://https.cio.gov/

Some rationale for "Why everything?" can be found here:

https://https.cio.gov/everything/

Personally, I'd say that plain HTTP is insecure enough, and today's internet is hostile enough, that plain HTTP provides only a very weak form of "availability". It's on site operators to ensure that when their services and information are available, they're available in a manner that doesn't subject the user to risk.


How is the user's C/I adversely affected by allowing FEMA to have a non-HTTPS subdomain for emergency alerts? That's a super easy addition to the policy: HTTPS everywhere except for GET requests to alerts.fema.gov.

I assume you know, but in case you don't, hostnames are typically outside the envelope for HTTPS: the SNI field in the ClientHello is sent in the clear. So this hypothetical GET already leaks that it's going to alerts.fema.gov. Then realize that the offered cipher suites positively identify the TLS library being used, and packet details + origin IP leak the OS.
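
To make that concrete, here's a minimal Go sketch (the cert/key paths are placeholders, not any real service's): the hostname a client asks for arrives in the plaintext SNI field of the ClientHello, before any encryption is negotiated, which is why the server -- and any passive observer on the path -- can read it.

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        // Placeholder cert/key pair; any valid one works for the demo.
        cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
        if err != nil {
            log.Fatal(err)
        }
        srv := &http.Server{
            Addr: ":443",
            TLSConfig: &tls.Config{
                // GetCertificate runs mid-handshake, before any keys exist.
                // The ServerName it sees is the plaintext SNI field, equally
                // visible to anyone watching the connection.
                GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
                    log.Printf("ClientHello asked for %q (sent in the clear)", hello.ServerName)
                    return &cert, nil
                },
            },
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("ok\n"))
            }),
        }
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }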

Edit: I'll even do you one better. Have the policy be HTTPS everywhere and HSTS everywhere but alerts.fema.gov
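
For what that carve-out might look like mechanically, here's a rough Go sketch (the hostname and max-age are illustrative only): send Strict-Transport-Security on every response except the one host you want reachable over plain HTTP. For the exception to survive, fema.gov itself couldn't be preloaded with includeSubDomains, and browsers only honor HSTS headers received over HTTPS in the first place.

    package main

    import (
        "log"
        "net/http"
        "strings"
    )

    // withHSTS sets an HSTS header on every response except the one host we
    // want to keep reachable over plain HTTP. Purely illustrative; in practice
    // this sits behind TLS, since browsers ignore HSTS sent over plain HTTP.
    func withHSTS(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if !strings.EqualFold(r.Host, "alerts.fema.gov") {
                w.Header().Set("Strict-Transport-Security", "max-age=31536000")
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        })
        log.Fatal(http.ListenAndServe(":8080", withHSTS(mux)))
    }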


Yes, I do know that hostnames are typically outside the HTTPS envelope. However, the User-Agent header is not, and would be exposed (and could then potentially be correlated with other HTTPS traffic from the same IP address). Also, cookies from a previous session -- even a previous HTTPS session -- could potentially be exposed, depending on how careless the server operator is. (You can set flags to make sure cookies only go over HTTPS, but that doesn't always happen.)
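
For reference, the flag in question is the cookie's Secure attribute (usually paired with HttpOnly). A minimal Go sketch, with a made-up cookie name:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
            http.SetCookie(w, &http.Cookie{
                Name:     "session",           // made-up name for illustration
                Value:    "opaque-session-id", // placeholder value
                Path:     "/",
                Secure:   true, // browser only sends this cookie over HTTPS
                HttpOnly: true, // and never exposes it to page scripts
            })
            w.Write([]byte("logged in\n"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Without Secure set, a later plain-HTTP request to the same host carries the cookie in cleartext, which is the exposure described above.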

From an integrity perspective, connecting to alerts.fema.gov over HTTP does potentially subject the user to code injection attacks. Those do happen:

* https://arxiv.org/abs/1602.07128

* https://citizenlab.org/2015/04/chinas-great-cannon/

* http://www.forbes.com/sites/kashmirhill/2014/10/28/find-out-...

Now, are any of these likely to happen on an arbitrary request to alerts.fema.gov? Maybe not. (Especially since Verizon has since been fined by the FCC.) But I'm trying to point out that it's not just the service owner whose safety has to be weighed in policies like this.

FWIW, the GSA plan announced in this post is intentionally crafted to be gradual and to avoid breaking things. It only affects future domains, not present ones, and so we'll have plenty of time to see whether being a total hardass about HSTS causes negative effects. Agencies can still do specialized services on their existing domains.

There's also going to have to be some carveout somewhere for specialized services like OCSP/CRL, which are already exempted from the policy mandate that came out in June 2015:

https://https.cio.gov/guide/#are-federally-operated-certific...

But in any case, the push should be, clearly and loudly, towards changing the defaults that browsers and users accept, and I think GSA's change weighs the tradeoffs appropriately in making such a push.



