Please consider the impacts of banning HTTP (github.com/whitehouse)
257 points by v4n4d1s on April 20, 2015 | 259 comments



Until there's a free, easy, maintainable, and actually existent solution to SSL certs, enforcing HTTPS-only is just downright extortion.

Referring to solutions that are under construction doesn't cut it. If you're that passionate about it, contribute to the SSL cert solution yourself instead of to the endless calls for HTTPS-only.

The 'Semantic Web' movement promises that if everyone would just publish their websites in XHTML with RDF annotations, we'd magically achieve world peace and end hunger. (I exaggerate slightly.) Should we ban non-semantic websites?

Physical snailmail and the spoken word are unencrypted. Both are frequently used to transfer data more sensitive than cat pics. If I'm surfing a website in a coffee shop, yes, there's a danger someone could intercept the data to spy on me. But they could just as well look over my shoulder, and HTTPS-everywhere isn't gonna do anything about that.


I agree wholeheartedly with all of the points you're trying to make but the spoken word / snailmail comparison is broken and detracts from your message.

Saying that spoken word and snail mail are unencrypted, and thereby implying that we should be no more worried about unencrypted digital traffic than about our real-world conversations, is a harmful analogy. It's superficially true, but it implies that accessing your bank account over HTTP is no more dangerous than saying your credit card number in the street. That analogy would only hold if every conversation we have (or letter we send) could be recorded, filtered and searched at scale by anyone who cared enough to do so. In reality, encrypted web communication is the closer analogue to real-world conversation: if somebody is particularly interested in what you, individually, are doing or saying, then barring significant effort on your behalf they can probably get access to it. But encryption, like the security afforded by the difficulty of overhearing people's conversations in real life, prevents the widespread, untargeted surveillance that would make, for example, automated harvesting of bank details, or the identification of people interested in a particular political movement, trivial.


> Until there's a free, easy, maintainable, and actually existent solution to SSL certs, enforcing HTTPS-only is just downright extortion.

Note that this comment is in regard to a proposal to ban HTTP for websites of the federal government. It does not mean you have to stop offering HTTP for your own service.

I think the government can afford a couple of SSL certificates, and the privacy and authenticity improvements are well worth it when communicating with government institutions.


You can do two things at the same time. Advocate for a better cert solution while advocating for HTTPS everywhere.

The truth is that currently, RIGHT NOW, there are dozens of government agencies monitoring HTTP traffic that includes things like Bing searches (!), which millions of people perform without realizing they are not private. Browsers need to be designed to aid laymen when things are insecure.

Lastly, I don't know if it is you specifically, xamuel, but there is a history[1] of government agents infiltrating influential organizations and communities in order to slow down movements, change prevailing attitudes, or discredit the members there. I think in cases like this it is important to remember how influential Hacker News is, since it feeds publications that set public perceptions about technology, like Wired and the New York Times.

[1] Operation CHAOS, Project MERRIMAC, Project RESISTANCE, Operation Mockingbird, GATEWAY, CLEAN SWEEP, UNDERPASS, and many others.


What's the difference between infiltration and legitimately voicing an opinion? Should government agents not have a seat at the table in an open forum like HN?


> What's the difference between infiltration and legitimately voicing an opinion?

Visible affiliation.


Excellent point. What does that mean for civilians on that side of the argument- do they have to prove their status? Do the government's actions mean we can't assume good faith anymore?


Anonymous/pseudonymous speech is a long standing tradition of free speech, which many of us are enjoying right now right here. However, there is a difference between private conduct and conduct as a government agent. Government is an agency owned (ultimately) by the people and created by them to achieve certain purposes. To further those purposes, people can institute rules of conduct for the agents of the government. Not using anonymous/pseudonymous speech while performing government duties may very well be one of these rules. Not because government is always evil, but because we think our goals will be achieved better if the government acts openly and identifiably, and because the reasons why we value anonymous/pseudonymous speech largely do not apply to government actions. The government as such does not have the inherent rights that people have (its agents have them as people, but when working for hire as agents they may be bound by stricter rules than in their private life). That is the difference.


The difference is that they are not voicing individual opinions, they are running operations for the government to quash or change opinion.

Government agents who come to speak and identify themselves as such are obviously always welcome to have an opinion and voice it in public.

I have replied to several sock puppet accounts on HN that were clearly foreign governments trying to influence the discussion; this is not acceptable.


Infiltrators are paid to participate and push opinions that are unrelated to their own (although they may match by coincidence).


Given that the context of this discussion is using HTTPS on .gov websites, isn't it reasonable to assume that other government departments will provide their private keys to the relevant SIGINT agencies?

I don't see that HTTPS-encrypting .gov websites provides much security advantage.


Arguably there is a solution: just use self-signed certs and/or your own CA. And have browsers implement some form of trust-on-first-use and/or some DNS/web-of-trust way of avoiding a big scary warning message. This won't fix everything, but it is more secure than HTTP and more honest than the idea that you should trust all the CAs browsers bundle.

Ideally browsers should just bundle their own CA certs, and implement some form of semi-formal wot/have a sane UI for the rest. After all we trust our browsers implicitly - but why should we elevate them to do transitive trust for us?

Let's just build on X.509, and get some kind of meaningful trust.

Let's say that Apple, Microsoft, Debian, and Red Hat each distribute their own trusted (self-signed) CA cert, and also work with Mozilla and Google to trust (sign) their certs.

Then let trust-on-first-use or some other distributed method take care of the rest. When Let's Encrypt works, let distributions trust that too.

The resulting system would not be perfect - but I still think it would have a better trust model than our current mess.
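To make the trust-on-first-use part concrete, here is a rough sketch in Python of what a client could do (the pin-store filename and helper names are made up, not any real browser's behaviour): remember the server's certificate fingerprint the first time it connects, and refuse to proceed if it ever changes.

    import hashlib, json, socket, ssl
    from pathlib import Path

    PIN_FILE = Path("known_hosts_https.json")  # hypothetical local pin store

    def cert_fingerprint(host, port=443):
        """Fetch the server's certificate and return its SHA-256 fingerprint."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False   # TOFU: deliberately not relying on a CA here
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_tofu(host):
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        fp = cert_fingerprint(host)
        if host not in pins:
            pins[host] = fp          # first use: trust it and remember it
            PIN_FILE.write_text(json.dumps(pins, indent=2))
            return "pinned on first use"
        if pins[host] != fp:
            raise RuntimeError("certificate for %s changed - possible MITM" % host)
        return "fingerprint matches stored pin"

The obvious weak spots are the very first connection and legitimate key rotation, which is exactly where the distribution-signed CA certs or a web-of-trust layer would have to come in.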


Could not agree more with the first paragraph.

As it stands, we punish people who want "half security" much more than people who go for "no security"


> have a sane UI for the rest

If any solution to this mess exists, it is going to require this UI. A fundamental problem with PKI is that the trust decisions are not being made by the people who rely on that trust for protection.

What we need is pluggable trust, where it is easy to indicate that I trust the shared-by-hand-only cert a friend made for chats between a few friends, a different cert for communications with my bank that I got a copy of by walking into a local branch, and some well-known CA for everything else. This is not "web of trust", though the concepts may overlap; this is about having an easy way to plug in whatever trust model you care to use and allowing different trust models for different endpoints.
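A rough sketch of what that could look like in Python, just to illustrate the idea (the hostnames and cert file paths are invented): each endpoint gets its own trust anchor, and everything else falls back to the bundled CA store.

    import ssl
    import urllib.request
    from urllib.parse import urlparse

    # Hypothetical per-endpoint trust policy: a hand-delivered self-signed cert
    # for a friend's chat server, a cert copy picked up at a bank branch, and
    # the regular bundled CAs for everything else.
    TRUST = {
        "chat.friends.example":  "certs/friends-selfsigned.pem",
        "online.mybank.example": "certs/mybank-branch-copy.pem",
    }

    def context_for(host):
        pem = TRUST.get(host)
        if pem is None:
            return ssl.create_default_context()        # default: bundled CA store
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # per-host trust anchor only
        ctx.load_verify_locations(cafile=pem)
        return ctx

    def fetch(url):
        ctx = context_for(urlparse(url).hostname)
        return urllib.request.urlopen(url, context=ctx).read()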


What stops the browser from automatically trusting a self-signed cert for PayPal or your bank?


What stops the browser from automatically trusting a forged certificate signed by a bundled CA? That's not a hypothetical question. It's happened before - either through incompetent CAs, or malignant ones (see: Google/Mozilla vs China).

The problem with the current trust model is that it's unclear who we trust -- or put another way, who we empower to betray us. No trust without the possibility of betrayal; no betrayal without trust.

With the current model, the trust the user places in a few parties (e.g. Mozilla, Google and the OS vendor) is abused and extended to way too many CAs. So many that the user either gives up (i.e. uses the browser and trusts the green bar) or gets a crippled experience, because the model assumes that you trust all bundled CAs. Sure, power users can in theory remove CAs from the store (and add ones, like I do for cacert.org, as I use them for my domains).

The fact that I add cacert.org reminds me of another thing: there should probably not be any CAs that can sign arbitrary subdomain.TLD. Since I add cacert.org, they can empower someone to MITM all my TLS connections. But that is a separate issue - this issue already exists.

Trust decisions are all about meaningful choice -- and right now the choice is between not using the web at all, and trusting Chinese (and every other) intelligence agencies, along with various foreign corporations (they're all foreign to someone), not to enable, or be tricked into, MITMing my email, my web browsing, etc. That is not a meaningful choice.


Certificate pinning, among other things.


> Until there's a free, easy, maintainable, and actually existent solution to SSL certs, enforcing HTTPS-only is just downright extortion.

True - but the second that solution exists, I can't think of anything that should stick with unsecured HTTP, and this article didn't change my mind. I don't think we need a "ban", though. Just flip the way browsers show secured vs unsecured: instead of the reassuring green lock for HTTPS, secure becomes the expected default and HTTP gets a scary red plaintext indicator.


> Until there's a free, easy, maintainable, and actually existent solution to SSL certs, enforcing HTTPS-only is just downright extortion.

> Referring to solutions that are under construction doesn't cut it. If you're that passionate about it, contribute to the SSL cert solution yourself instead of to the endless calls for HTTPS-only

Right. In similar threads, I've seen a lot of people linking to Let's Encrypt[0]. The idea of that project is great but, at best, all we can do now is discuss how to enforce HTTPS once Let's Encrypt (or something comparable) is available. Anybody who runs a small, personal website that generates no revenue would essentially be screwed if it were enforced before then, as (and someone can correct me if I'm wrong here) there aren't really any affordable options at this point for people who don't have much money to throw towards their site.

[0] https://letsencrypt.org/


> essentially be screwed if it were enforced before then

Let's Encrypt, if it goes according to plan, is only a few months away. It's not just some fairytale that we're hoping will come true someday. It could be reality very soon! It's got some pretty big names behind it, including one of the Big Three browsers, so I'd say that it has a pretty good chance of success.

And if Let's Encrypt fails, surely someone else will try something similar in the near future. Some registrars are already handing out a free certificate with every domain. I got ~10 certificates in the last year alone, half of them for free (StartSSL) and the other half for $5/yr (PositiveSSL). The momentum is there, it's irreversible. Even if we don't hit $0, we're asymptotically headed toward it.

Moreover, given the pace at which governments and other large organizations move, I have zero worry that HTTPS-only will be "enforced" before free certificates become widely available. Ditto for browser vendors. Chrome will not risk blocking non-HTTPS websites before the time is ripe, because if it did, people would just delete Chrome and move to another browser.

This whole debate is just a bunch of FUD concerning entirely unrealistic scenarios. Why are we spreading this sickening FUD instead of, say, supporting the two well-known organizations (EFF & Mozilla) that are trying to bring free SSL to everyone?


StartSSL has been around for a long time, and issues free certs (with a $20 fee iff you need to revoke it before expiration).


I actually used StartSSL. It took, literally, days to figure out how to create the certificate. The next year I renewed it; it took, literally, days to figure out how to renew the certificate.

The next year I just paid someone to do it for me.

I would never recommend StartSSL to anybody.


It's true that the UI is obtuse, but there are step-by-step guides with pictures like [1][2].

[1] https://www.digitalocean.com/community/tutorials/how-to-set-...

[2] http://www.troyhunt.com/2013/09/the-complete-guide-to-loadin...


I renewed three certificates with StartSSL yesterday. It took me literally a couple of minutes to do so.

I have also written up how the entire process works (from generating the key to creating the CSR and getting the thing signed) for a specific (non-webserver) use case and while I don't claim my writeup is perfect, several people have had no difficulty following it in under 15 minutes, even though it was the first TLS certificate they ever installed.
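For reference, the part that isn't StartSSL's UI really is tiny. A sketch using the Python cryptography package (the domain and filenames are just examples); the only remaining steps are pasting the CSR into the CA's form and installing the cert it returns:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # 1. Generate a private key (keep this file secret, never send it anywhere).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    with open("example.org.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))

    # 2. Build a certificate signing request for the domain.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.org")]))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("www.example.org")]), critical=False)
        .sign(key, hashes.SHA256())
    )
    with open("example.org.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))

    # 3. Paste example.org.csr into the CA's web form, then install the signed
    #    cert it returns next to the key in your web server's TLS config.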


So your point is both that it is so easy it took "literally a couple of minutes" but so difficult "you've written up how the entire process works" that you've had to share with "several people" so they could repeat the same exercise...

Seems like your narrative is contradictory.


Agreed. As an expert in various things, I have learned to try to shut my mouth when the topic is how easy those things are for novices.

A young friend is learning to program, so I set up a virtual server as a place for them to upload things. It was only when I went to give them the account information that I had to stop and think about how complicated the "easy" act of uploading files via SSH is. Shell commands, directory trees, working directories, the fact that the web site is in /var/www and what that means, why index.html is special, what ssh keys and asymmetric encryption are, what a bastion host is, etc, etc.


A part of this problem could be solved with sshfs


That's not contradictory at all. Plenty of system administration tasks fall into the category of "easy to perform but not self-evident to someone with no experience".


No contradiction; something being easy does not imply it being self-evident or obvious. There are many simple things that are only obvious in retrospect.


You can't get wildcard certs, so you'd have to do this for every subdomain you want to use.


They're still free, whereas no other company (at this time) will give you a free certificate for a single subdomain year after year. If you are using so many subdomains that generating the CSRs and pasting them in StartSSL's CSR field is too much work, then maybe a paid wildcard certificate is the better solution for you. But as long as you can count the subdomains that need a certificate on one hand, I'm not going to pay some other company $50 (maybe more? not sure) for something that takes me less than half an hour per year.


One company where it's kinda sorta not-so-hard to do SSL is not good enough. If we're going to go HTTPS-only, getting a cert needs to be nearly as easy as installing Apache on your old laptop and serving some HTML you wrote.

What irks me is that the biggest HTTPS-only advocates (mostly Google employees) simply do not care about this problem. They do not address it.


> But they could just as well look over my shoulder

That's not a good argument. "Over my shoulder" surveillance is very hard to achieve covertly and consistently, and is much easier to detect than digital surveillance. It also scales poorly compared to the digital kind, at least until we get cameras that can record in every direction with enough resolution to read a computer display from a dozen meters away, and put them literally on every wall.


> Until there's a free, easy, maintainable, and actually existent solution to SSL certs, enforcing HTTPS-only is just downright extortion.

> Referring to solutions that are under construction doesn't cut it. If you're that passionate about it, contribute to the SSL cert solution yourself instead of to the endless calls for HTTPS-only.

So you want a free, easy, and maintainable solution, but you don't want to talk about solutions that are currently under development? What kind of argument is that? It's as if you specifically added that condition to preempt any discussion about Let's Encrypt. Guess what, a lot of us actually are passionate about this solution, so we're actually contributing to that project by supporting EFF and Mozilla.

The call for HTTPS-only does not ring in a vacuum. Context and timing are critical for any plan that might involve a chicken-and-egg problem. But that's not an insurmountable problem. The proposed enforcement will come into effect some time in the future (if ever). So we still have a few months, maybe a couple of years to prepare for it. That's enough time to build a free CA that can disrupt the shit out of the extortionist market.

Opposing a plan for the future just because a prerequisite does not exist right now is gratuitous negativity, especially if you're deliberately ignoring "actually existent" efforts to build that prerequisite.


Yes, I intentionally wrote that to preempt discussion of Let's Encrypt because Let's Encrypt isn't a working solution yet. If/when Let's Encrypt is a working solution then let's continue the discussion / send "the boys" to break kneecaps wherever someone's using HTTP / etc.

If Let's Encrypt is as close to completion as HTTPS-only advocates claim, then you have very little to lose by simply waiting until it's finished, and then start the evangelism. It isn't like we're standing at a cusp and tomorrow some committee's going to vote whether to permanently ban HTTP or permanently keep it. Right now it's like you're a doctor yanking a patient's access to an important medicine because "there's a better medicine coming along, it's in the last stages of clinical trials, it should be ready any month now".


No, we're like a doctor who is threatening to yank it, with the explicit goal of pressuring others to bring forth a better medicine sooner. Nobody's actually yanking anything yet, and since this is the federal government (of healthcare.gov fame), I don't expect them to yank anything effectively anytime soon.


A doctor using prescriptions to pressure clinical trials into going the way she wants them to. What could possibly, possibly go wrong. (What does that even mean, nobody's actually yanking anything, they're just threatening to do so to coerce people? It sounds creepy and manipulative.)


The analogy breaks if you take it too far. This is politics, not medical science. Good clinical trials will discover facts about the world. Good politics will change the world, making previously discovered facts irrelevant.

Threatening one another into taking action is exactly how progress happens in a market economy. Somebody is always threatening to drive somebody else out of business. Either you disrupt or you are disrupted. And since we probably won't be getting rid of this ruthless system anytime soon, I'd rather want to see the good guys drive the bad extortionists out of business rather than the other way around.


It feels to me that this is a very similar stunt to encrypting smartphones. After Snowden published details about various spying programs, people became more cautious when communicating.

This is just to convince general population that now it is safe again and you don't have to worry about someone snooping on you, because the communication is encrypted.

Yes, your communication with, let's say, Google is encrypted, but that doesn't mean Google won't share your data with the three-letter agencies. And that is just the simplest way, ignoring the NSA's various efforts to place vulnerabilities in encryption software.

As with phone encryption, it doesn't really help that much. For example, when was the last time the NSA actually needed physical access to your phone?


IMO, this is where benevolent dictators come in.

Banning, discouraging, creating a market for 'solutions to SSL' and evolving the whole thing together is a challenging proposition.


FUD

Snailmail and spoken word are much harder to snoop en masse. Apples, oranges.


Snailmail and spoken word are also much harder to use legitimately en masse. Consider the cost of an email campaign vs. a snailmail campaign.


I can easily intercept all snailmail my neighbour gets and peek through it. I cannot do that for his http-traffic.


You can if you hack that internet box on the corner of your street. Also, your local internet provider can.


It's only extortion if we ignore little things like the definition of the word and the reality of the situation. I guess it would weaken your argument's emotional impact to drop the hyperbole, but I feel no particular obligation to accept the way you desire to frame the debate. I find your shallow, obvious attempts at emotionally manipulating people into your point of view rather gross.


This issue was raised on a proposal that "would require the use of HTTPS on all publicly accessible Federal websites and web services"; presumably the government would simply become a certificate authority to make the transition smooth.


Physical snail mail and spoken word are far, far more difficult to do bulk captures and data mining against. That's the point.


This is interesting.

I've always been okay with dropping HTTP for HTTPS-only as a long-term goal, as long as we get rid of the SSL cert racket first.

As far as MITM and identity go, could we at least modify the protocol to allow fingerprint caching, as opposed to certs, as a fallback? ...SSH does this and it is arguably more important to secure than HTTPS.

Fingerprint caching seems insecure when you think about it, yet we're all okay with maintaining our servers with this in place.

Furthermore, X.509/ASN.1 is the worst thing to happen, ever. I know this because I damn near tore my hair out trying to implement X.509 certificate validation.


> as long as we get rid of the SSL cert racket first.

"It is really a messed up situation to have to pay to not have your website marked as dangerous"


Which suits some people just fine


It's also messed up that we have to pay for domain names. Why can't we have them for free? I want Google.com please. You can get free certs now, and more ways to obtain them are coming this summer. In either case, let's solve our immediate problem now, then add different authentication methods to browsers after.


This is an incorrect analogy. Domain names are a finite resource; the cost of signing a certificate approaches zero.


You don't actually pay for the signing of the certificate; you pay for the trust that the CA lends you.


Then why does VeriSign charge more than Gandi for the same thing (domain control validation)?

Why do we treat ID-verified certificates (i.e. ones with your name on them) as somehow "better" than the former, when the browser doesn't care? It just cares that the cert was signed.

Why do certificates expire, but not require new keys? (And why does this expiration cause a scary warning akin to a self signed cert?) There is no practical reason for the expiration, save to line the pockets of the CAs.

None of this crap makes any sense, unless you view the CA system as exploitative and broken by design, in which case the answer to most of these things is "because greed".


I mostly agree with you, and the cynic in me says it's all about greed, except regarding expiration: nothing is eternal, especially in crypto, so it's safe to assume that nothing can be guaranteed for more than a given number of years. If you don't put a limit, you stall development of new primitives, because deployment is more expensive than deprecation of what already exists. Putting a "best before" date keeps everyone's head up.

The CA system looks like a good idea on paper if you keep it technical; if you look at it from a more widespread angle there's little surprise that it turned out to be like it is. But the idea remains good.


It is zero. StartSSL has had free certs for years. Let's Encrypt will give another source of free certs. I expect shortly after that basic DV certs will be given out for free by most CAs. But most people don't care about finite or non-finite resources. I am starting to suspect that the issue is mostly just the cost (with not everyone yet knowing how to get free certs). If your main objection is that you don't like the StartSSL UI, you can get a paid DV cert for $5. That is about half the price of your domain.


The cost of the actual signing is pretty much zero, but there were definitely paid devs who built these commercial solutions.


Indeed.

But couldn't it be ad-supported, given how little it costs? Just watch this 5-minute video, then get a free domain certificate.


I see the point you're trying to make.

I think we need an NFP org to become a trusted authority for these types of things. No way is GoDaddy going to help us here.


You want the future of the Web to be Web developers watching insipid 5-minute videos before they can get anything done?


I'd take that over what we have now in a heartbeat.

My time isn't worthless, but for a personal web-site I am already "wasting" tons of time on it, and what is 5 minutes more? Better than a $60/year certificate.

StartSSL is too terrible. Let's Encrypt doesn't exist. And CloudFlare requires you to use their entire service for the free certificates.


Not for proving identity - how do you prove that eBay is eBay, or that a bank site really is Barclays and not some scammer?


Not with SSL certs anyway.

At least not as long as your browser trusts hundreds of CA's, including shady ones such as Comodo[1] who will issue fake certs to any name (Google, Skype, etc.).

[1] https://www.schneier.com/blog/archives/2011/03/comodo_group_...


The fact that Comodo is occasionally scammed (a headline-generating event) does not prove that they add no level of identity authentication.


With an email to hostmaster/webmaster@example.com and/or a DNS record. EV certs require more, but DV certs have their identities verified automatically in seconds.


> Fingerprint caching seems insecure when you think about it, yet we're all okay with maintaining our servers with this in place.

You're supposed to use out of band methods to verify the host key before approving it.


Yes, you are supposed to. But, do you know anyone/any organization that actually does this? I don't.


I'm well aware people tend not to practice security hygiene when it comes to SSH. I do, I have friends that do and friends that don't.

While I acknowledge it's burdensome and the process could probably be streamlined, it's not my problem.


>As far as MITM and identity go, could we at least modify the protocol to allow fingerprint caching, as opposed to certs, as a fallback?

We already have that. It's called HTTP Public Key Pinning (HPKP). But not "opposed to certs". In addition. Which is the right thing to do: Don't use it as a replacement for existing security, but add it on top of what we already have.
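For the curious, the pin value HPKP uses is just the base64 of the SHA-256 of the certificate's SubjectPublicKeyInfo. A sketch of computing it from a PEM file with the Python cryptography package (the file path is only an example):

    import base64, hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def spki_pin(pem_path):
        """Return the pin-sha256 value (RFC 7469) for a PEM certificate."""
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    # The server then sends something like (second pin is a backup key kept offline):
    # Public-Key-Pins: pin-sha256="..."; pin-sha256="..."; max-age=5184000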


Yeah, I agree with you. I suppose I worded that strangely. I didn't mean cut certs out of the equation, just use fingerprints as fallback, if possible. I wasn't aware of HPKP. According to mozilla docs, it's lacking some browser support. If it got implemented across the board, this would be great. I suppose it also depends on whether or not the browser screams at the user if the fingerprint fallback is necessary.


> Fingerprint caching seems insecure when you think about it, yet we're all okay with maintaining our servers with this in place.

SSH's fingerprint system is intended to be used by people who understand the basics of it. HTTPS is intended to be used by the average web user, who will click "yes" on any prompt that comes up.


so who is going to pay for it?


This seems to make two wrong assumptions:

1. HTTPS does not only guarantee that data is secret, it also guarantees that data is not manipulated. And in this sense scientific data is very sensitive - it matters that you know your data is the correct data.

2. As so many do, it vastly exaggerates the performance costs of HTTPS. They are really minimal. If you really care, I suggest you benchmark before you make any claims that your servers can't handle it.


Your second point is valid only for x86 machines (and modern ones).

AES instruction sets implemented in hardware are the main factor in making HTTPS viable. But my toaster (which speaks HTTP), my ARM-based router, and even my high-end server's iDRAC/iLO do not have dedicated AES hardware instructions.

So the point stands, even if it's not relevant to modern laptop users.


This point is important, because there is a lot more non-AESNI hardware than the converse. But luckily, it's possible (and likely, I hope) that TLS 1.3 will include the ChaCha20-Poly1305 AEAD ciphersuite, which should improve this matter quite a bit for the users who need software implementations - it's _much_ simpler to implement, and even in software a heavily optimized implementation can get within the ballpark of AESNI.

Google is already using ChaCha20-Poly1305 inside Chrome to talk to Google servers, if your hardware doesn't support AESNI. It's been doing this since early last year, and Adam reported at that time nearly 40% of all traffic to Google was going through it (including mobile devices IIRC): https://www.imperialviolet.org/2014/02/27/tlssymmetriccrypto...
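You can check what your own client negotiates with a given server in a couple of lines of Python (the hostname is just an example; whether ChaCha20 gets picked depends on what your TLS library and the server both prefer):

    import socket, ssl

    host = "www.google.com"                 # any TLS host you want to inspect
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # cipher() returns (name, protocol, secret_bits), e.g.
            # ('ECDHE-RSA-CHACHA20-POLY1305', 'TLSv1.2', 256) when ChaCha20 is chosen.
            print(tls.cipher())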


It won't just include it, at this stage it seems set to become mandatory-to-implement (not necessarily mandatory to deploy). We'll see.


ARMv8 comes with the AES instruction set as well, and Linux supports it. The newer Android phones like the Nexus 9 are already using it.


Of all the things that should be using strong encryption by default, router and server management interfaces are at the top of that list. Those interfaces _do not_ have the kinds of high-performance transfer requirements like that of bulk data transfers or streaming video services.

As for bulk data transfers for government-funded scientific research, data integrity for anything having to do with clinical trials or personally-identifiable information is among my highest concerns. And unless we're pumping that data across private fiber, the latency and bandwidth limits of the global Internet far outweigh the overhead caused by encryption/authentication algorithms.

(I would add that I think bulk data transfers ought to be encrypted over private/internal networks as well, but that's an argument for when I'm not on mobile.)


> AES instruction sets implemented in hardware are the main factor in making HTTPS viable.

Not true. It's something like 30-40 cycles per byte to do AES without specialized instructions. That makes saturating 100mbit trivial, and saturating gigabit one of the easier problems. How much traffic can your toaster and router possibly be terminating?
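Back-of-the-envelope, taking ~35 cycles/byte from that range and a single 1 GHz core (numbers picked purely for illustration):

    cycles_per_byte = 35                            # software AES, middle of the 30-40 range
    core_hz = 1_000_000_000                         # one 1 GHz core
    throughput_bytes = core_hz / cycles_per_byte    # ~28.6 MB/s
    print(throughput_bytes * 8 / 1e6)               # ~229 Mbit/s, so 100 Mbit is easy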


Power usage is very important for low-end devices, especially smartphones which will be talking to lots of HTTPS resources. This is the complaint on the other side of the pond, away from the people who need ASIC accelerators for whatever reason to keep up with bandwidth/latency/cpu concerns.

Just as an example of the difference - I wrote a naive implementation of ChaCha20 in C with zero optimization effort and it does 5cpb out of the gate (Sandy Bridge). Just using vector-types and letting GCC/Clang vectorize brings that down to ~3cpb on Sandy Bridge - no effort. The Krovetz implementation of ChaCha20 is closer to 1.2cpb on my machine, with AES-256 doing 1.0cpb using AESNI (again, my own naive implementation). All software.

Even the most hand optimized, secure AES software implementations are still in the realm of ~15-20cpb (IIRC), three-to-four times worse than the unoptimized competitor. As linked elsewhere in this thread by me, non-scientific tests show it 3x faster in software on some mobile phones. That's a lot of extra cycles-per-byte for your battery to chew through using AES-256, and I'd guess I easily churn through a low number of gigs of HTTPS data every month..


I'm not sure about phones. Active cellular data demolishes my battery; I don't think the extra CPU use would matter terribly much.

And let's say you use 6-8GB of HTTPS data. 3 minutes of full use of a single 1GHz core per month is not a lot of battery.


Could you not solve 1 by having an HTTPS index page with content hashes?


If they can modify the content on a website, they can modify the content hashes on the index page. I suppose you could put the content hashes on an https page...


>> Could you not solve 1 by having an HTTPS index page with content hashes?

> I suppose you could put the content hashes on an https page...

Aren't these the same thing?


Yes, I meant using HTTPS for a subsection instead of the entire page as a solution; my comment was reluctantly recognising the parent's solution as valid.
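Concretely, something along these lines (Python sketch; the URLs and manifest format are made up): serve a small manifest of SHA-256 hashes over HTTPS, fetch the bulk files over plain HTTP, and verify each download locally. That gives you integrity for the data without paying the HTTPS cost on the big transfers, though of course it gives no confidentiality.

    import hashlib, json
    import urllib.request

    MANIFEST_URL = "https://data.example.gov/manifest.json"   # small, served over HTTPS
    # assumed manifest format: { "dataset1.csv": "<hex sha256>", ... }

    def fetch_verified(name):
        manifest = json.loads(urllib.request.urlopen(MANIFEST_URL).read())
        blob = urllib.request.urlopen("http://data.example.gov/" + name).read()  # bulk over HTTP
        if hashlib.sha256(blob).hexdigest() != manifest[name]:
            raise ValueError(name + " failed its integrity check - possibly tampered with in transit")
        return blob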


If they can modify the content on a web site, they can modify the data hosted on the site, in place.


> The effect on those without computers at home

The additional bandwidth is not really that big. Yes, you cannot cache the big newspapers etc. when everything is HTTPS, but I think most stuff people do nowadays isn't cache-friendly anyway (like your private e-mail, Facebook etc.). If they are afraid of some people using all the bandwidth because they cannot block YouTube, Netflix etc., they can divide the traffic in better ways, limiting each client.

> Restricting the use of HTTP will require changing a number of scientific analysis tools.

A non-argument. One cannot let things remain the same for ages just because of backwards compatibility.

> HTTPS is not a security improvement for the hosts.

So what?

> HTTPS is frequently implemented improperly

Non-argument. Https not being perfect doesn't mean it's useless.

I'm not necessarily saying https-everywhere is a good idea, just that these arguments add nothing to the discussion.


> I think most stuff people do nowadays is things that's not cache-friendly anyway

So you're OK with throwing away a perfectly fine and proven internet protocol which has survived for several decades, on a vague notion you have that "it's probably not that cache-friendly anyway".

That sounds solid.

> HTTPS is not a security improvement for the hosts. So what?

Reduced performance. Increased latency. Increased complexity. Increased attack-vector size. But indeed: so what?

> HTTPS is frequently implemented improperly. Non-argument.

Yes. Who cares about the real world, anyway?

I'm awfully sorry, but I honestly don't think your post counts as a very good counter-argument to those very valid points which were raised in the reported issue.


>So you're OK with throwing away a perfectly fine and proven internet protocol which has survived for several decades, on a vague notion you have that "it's probably not that cache-friendly anyway".

HTTPS is not throwing away HTTP. It just protects it with TLS.

>Increased attack-vector size. But indeed: so what?

I think you meant 'decreased'. By not being able to modify the payloads or steal cookies, attackers are only left with the TLS protocol to try to mess with, which is a much smaller attack vector than being able to tweak HTTP headers and so-on.


> I think you meant 'decreased'.

It could have been a sideways glance at flaws in SSL/TLS that have rendered servers, data, or both compromised. Basically, a straw-man claim that plain text is actually more secure, since a flaw in the crypto stack could exist.


> HTTPS is not throwing away HTTP. It just protects it with TLS.

Actually, in the issue that is linked, the real problem is that HTTP is to be discarded across the government.


Just how much cacheable content do you suppose lies in RESTful messages? I think you have significantly misunderstood the use case for REST. All that performance stuff you bring up applies to CDNs, but not API requests.

It doesn't seem to me like you grasp the real world. There is a bit of a disparity between how we handle static content vs. dynamic API calls, and you demonstrate a willingness to conflate them repeatedly in this thread. It's baffling.


> So you're OK with throwing away a perfectly fine and proven internet protocol which has survived for several decades, on a vague notion you have that "it's probably not that cache-friendly anyway".

Don't do straw men, please. I said that it's not as big an issue as the author said it was. Nothing more, nothing less.

> Reduced performance. Increased latency. Increased complexity. Increased attack-vector size. But indeed: so what?

Not that much increased complexity in most cases. But the performance is a potentially good argument. The author should have used that instead.


Additionally,

> HTTPS would hide attacks from existing intrusion detection systems.

is completely wrong. Intrusion detection systems are usually checking traffic between the reverse proxies and the servers where traffic is not encrypted.

There are quite a few problems with his post, and it makes it appear that he has an agenda rather than genuine concern.


There's also a danger in users who will think that because the web is all "encrypted" now that must mean it's "safe", which will simply not be true.

Additionally, it makes website owners beholden to the CAs (and I know they're already beholden to domain registrars, but that's not really a good argument for adding an additional, arguably unnecessary cost to someone who just wants to run a hobby website).


> There's also a danger in users who will think that because the web is all "encrypted" now that must mean it's "safe", which will simply not be true.

As I remember it, the proposal for mandatory opportunistic encryption was that it'd still be the http:// scheme with no padlock, so no expectations are set but some benefit is there.


> and I know they're already beholden to domain registrars, but that's not really a good argument for adding an additional, arguably unnecessary cost to someone who just wants to run a hobby website

You are not required to have a domain to have a website; just using an IP address works fine too, and - more importantly - unlike what would happen if the proposal to ban plaintext goes through, browsers don't give big scary warnings if you use an IP instead.


First, remember that the ban is only for .gov sites, so the hobby argument is irrelevant here.

That said, Firefox does give a "scary warning" if you use an IP.


Don't get me started.

We had an internal dev server. We generated a certificate for the IP address using our internal CA. We installed the internal CA on client machines and inside Firefox. Chrome and IE worked perfectly, but Firefox bounced the IP-address "domain" certificate even though it was valid by every metric we could measure. It just claimed that the IP address of the host and the certificate's CN didn't match, even though they did. We tried it as the alternative name too; no dice.

We finally had to assign the machine an internal domain name using our DNS servers and redirect from the IP address just to fix Firefox's HTTPS issues with IP addresses. It is damn stupid and no other browser acts this way.


>A non-argument. One cannot let things remain the same for ages just because of backwards compatibility.

A non-answer. We have done just that for ages, which is why dual-core gigahertz processors with gigabytes of RAM started up in 16-bit real mode (I don't recall if they still do) long after nobody used 16-bit real mode.


> > The effect on those without computers at home

> The additional bandwidth is not really that big.

It's actually not so much about bandwidth, it's more that if they don't have a computer at home they will have to use a computer at a library that forbids all HTTPS, making the resources unavailable.

> A non-argument. One cannot let things remain the same for ages just because of backwards compatibility.

So you volunteer to migrate and maintain all the software we're talking about here, or to pay the people that will do it over the decades to come?

Migration doesn't come out of thin air by simply snapping fingers. Yes, the software is old. Yes, it's ugly to see. Yes, it's hard to maintain. But, if you read the article, you would know that there is no funding to maintain it, so they're doing what they can with what they have. Everyone would love to use the cleanest, leanest, UNIXest tech stack, but the realities of the world make it far more difficult than what we're used to seeing on HN.


> It's actually not so much about bandwidth, it's more that if they don't have a computer at home they will have to use a computer at a library that forbids all HTTPS, making the resources unavailable.

That's a weak argument. If places like Libraries continue to block all HTTPS content, before long they would have blocked so much of the internet that the entire service would lack a coherent point.

Most sites in the top 50 are HTTPS only now. How long until they all are?


Regarding old software, I'd say the problem is being overblown. You don't need to touch any of it. If the service is HTTPS-only and the client is HTTP-only, the obvious solution is to have a proxy in the middle, converting between the two.

For example: "Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs' code."

https://www.stunnel.org/
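The client-side config for that is only a few lines; roughly something like the following (the service name and host are made up; check the stunnel docs for your version). The legacy tool keeps talking plain HTTP to localhost, and stunnel wraps it in TLS on the way out:

    ; stunnel client mode: accept plain HTTP locally, speak TLS to the real service
    [wrap-legacy-client]
    client = yes
    accept = 127.0.0.1:8080
    connect = data.example.gov:443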


The kinds of projects and organizations that will suffer the most from forcing HTTPS are ones like those under government research umbrellas which require vast amounts of paperwork to do something as simple as setup an SSH key so rsync can be used. That pales in comparison to the amount of paperwork required to do something like install a new piece of software or setup a new server. On top of that, such changes are not budgeted for, so you end up with highly educated and trained people, who ought to be doing their research or actually making progress in their work, spending time fixing artificial problems that ought to be handled by dedicated staff. This wastes taxpayer dollars and leads to personnel burnout.

Why? All because some memo-writer in Washington said, "There's no such thing as insensitive web traffic!"

It's easy to make these proclamations when you have no part in fixing the fallout.


That's what I was thinking too while reading the article. At least half of his points are non-arguments and the other half is stuff like "but we've always done it this way! why change that?!"

Please. Don't forget you can actually be infected with malware through plain-text connections. It's not just a privacy thing.


I'm aware that there is a proposed service to simplify the acquisition of SSL certificates for websites but at this point getting SSL and HTTPS ready is both a costly and complex exercise for many webmasters. Managing several websites for generally small audiences (1000-5000 people) and with limited revenue to pay the costs associated with getting a cert makes me reluctant to support an outright ban on the use of plain HTTP especially when the content is not of a personal or indeed personally identifiable nature. How does the proposal address the ease of access to certificates and the current state of affairs in certifying authorities?


> In summary: Non-sensitive web traffic does exist.

A million times this. I cannot believe that the current HTTPS-nazis totally fail to see something this obvious.

There's countless use-cases where plain HTTP is not only OK, but probably the simplest and best option.

Trying to ban it to address some vague possibly-maybe security-related theoretical issues seems asinine.


Non-sensitive web traffic does not exist. Every request is sensitive in the sense that it's another piece of data that can be used to build a tracking profile of the computer that sent it. You might not care that a particular request is tracked, but that doesn't change anything because you may decide you do care at some unknown point in the future. Using HTTP takes that choice away.

> There's countless use-cases where plain HTTP is not only OK, but probably the simplest and best option.

That doesn't mean HTTPS everywhere is a bad idea. There definitely are use-cases where HTTP would be faster/cheaper/easier/simpler/etc, but the move to ban HTTP takes all that into account, and argues that banning it is still a good idea because the implications for privacy are simply more important than any of those issues.


I'm concerned about building profiles of internet users, probably more than the average person is, but I see the HTTP/HTTPS question as fairly marginal for that particular issue. Building profiles of users' browsing behavior is primarily done via methods that HTTPS lets pass through, such as the Facebook "like" button, Google AdSense ads, etc. Much of this data (though probably not Google's) can then be bought and cross-correlated into databases.


> Building profiles of users' browsing behavior is primarily done via methods that HTTPS lets pass through

What do you think is the purpose of the NSA hardware that is deployed[1] at the Tier1 internet exchanges[2]?

[1] http://en.wikipedia.org/wiki/Room_641A

[2] http://en.wikipedia.org/wiki/Tier_1_network#List_of_tier_1_n...


Something that concerns me is that the major browser manufacturers seem to be able to dictate what HTTPS certificates are OK, and therefore what sites non-technical people will have access to.

Surely a move towards centralised control of the web is not good from the tracking/privacy point of view.

The short-term benefits might look nice, but this looks to me like a long-term play. The fact that this movement has been led by Google - who value tracking - would seem to suggest that their tracking is not harmed.


> Something that concerns me is that the major browser manufacturers seem to be able to dictate what HTTPS certificates are OK, and therefore what sites non-technical people will have access to.

I'd say that's more about censorship than tracking/privacy, which is also a very important issue to consider. For things like banking (this has to be the most widely mentioned use-case for SSL) it is arguably a centralised entity we're interacting with so it somewhat makes sense, but the Internet is more than that - much more.


> Every request is sensitive in the sense that it's another piece of data that can be used to build a tracking profile of the computer that sent it.

HTTPS does not solve this problem. SNI leaks the hostname of the requested resource, and the connection itself still reveals the timestamp of the request and, if not used in conjunction with an anonymizing proxy/VPN/Tor, the identity and probable location of the requesting machine. In the case of static IPs, even HTTPS-only allows a fair share of tracking and profile-building by ISPs or other snoops.


> There definitely are use-cases where HTTP would be faster/cheaper/easier/simpler/etc, but the move to ban HTTP takes all that in to account, and argues that banning it is still a good idea because the implications for privacy are simply more important than any of those issues.

Where are these arguments? I haven't seen any argument that acknowledges that we are losing a big part of what made the web initially great; that it was dead simple to create your own website.

I'd respect the https-only crowd if they would acknowledge this as a legitimate big loss. But instead they seem to ignore it. I think they are just ignorant to the needs of anyone but their large corporate employers.


How much more difficult is it to get and set up a cert, compared to getting and setting up a domain? It's a hurdle, but not large compared to the existing system, as far as I see it.

But that's only if you want your own domain. Back then, people used Angelfire and Geocities; nowadays you can use Neocities (great project!) and get HTTPS without doing anything.


As you pointed out in another thread, there is only 1 place to get free ssl certs, so I would say it is infinitely more difficult.

If self-signed certs were not looked down upon I would consider that to be a reasonable compromise.


Why does it need to be free, though? Domains aren't, and you can get a PositiveSSL for $5/year.


Domains are far more cost effective. I can buy 1 domain and put up as many websites as I want. Certs have a 1-per-site cost that is too high.


I am looking at their site and failing to find that =P

Can you point me towards that ?

(not for arguments sake. I really want a certificate)


Don't go through the main website, they're much more expensive, find a cheaper reseller.

Here: https://www.ssls.com/comodo-ssl-certificates/positivessl.htm...

(SSLs is part of the Namecheap group, btw)


You should check out https://www.cheapsslshop.com where I got mine. Really cheap.


When any society (and the web is a society) starts to sacrifice the freedom for its citizens to act in public if they so choose in the name of "protecting them" from the poor behaviour of a few bad actors, it becomes awfully difficult to characterise that society as "free".


...especially if that protection involves authorisation by centralised entities. I would be far more supportive of ubiquitous encryption if it was controlled by the users.

Notice how encryption which is under control of the user (mobile device encryption, cryptocurrencies, full-disk encryption, self-signed certificates) is seen as a threat, while systems like the SSL CA model where governments could more easily obtain keys if they wanted to, are not?

"Those who give up freedom for security deserve neither."


How is deprecating a protocol "sacrificing freedom"? Such hyperbole.


"My choice to do something is a freedom that cannot be taken from me" is the same sort of argument people use to carry assault rifles around shopping centres. Sometimes you should be willing to give up a freedom if the result is a benefit for the whole of society, even if you happen to like or benefit from the status quo. That's part of being a member of humanity (or at least, it should be).


Surely such an argument is true for any network communication. My (admittedly vague) understanding of such monitoring is that it happens at the TCP/IP level. Therefore, should we outlaw all 'insecure' TCP/IP communication?

I can see good arguments for _encouraging_ HTTPS where necessary but not the blanket prohibition of HTTP.


> Every request is sensitive in the sense that it's another piece of data that can be used to build a tracking profile of the computer that sent it.

HTTPS does not protect you from third parties profiling what sites you visit, only the specific URL paths you access (since who you connect to is still in the clear).

In a big organization, a caching HTTP proxy server would provide the caching benefit, where an HTTPS-only Internet cannot. But while a home user has a single IP, all users behind a caching HTTP proxy server effectively share the same IP (or cluster of IPs), and are mix-anonymized (unless the proxy is configured otherwise).

So sure - HTTPS ensures that URL paths accessed are private. But there are cases where there is a caching benefit and the real-world privacy-loss isn't as big as you make out in those situations.

If you're bothered about organizations spying on their users, then sure, those users can use HTTPS. My argument is about the cases where users are doing things where they want the caching though.

> Using HTTP takes that choice away.

Users and websites can still have the choice of using HTTPS. If we were talking about banning HTTPS or not requiring HTTPS to be available, then you'd have a point. But we're talking about banning HTTP. That takes choice away by definition, so don't pretend this is about taking choice away from users. In reality what you're arguing for here is to foist your choices about trading cacheability for privacy onto others by taking their choices away. There's nothing wrong with this position, but let's not pretend that it is something that it isn't.

I'm certainly in favor of sites being HTTPS only when they involve directly private information. I also accept that access to anything involves a tracking trail which in aggregate is privacy-sensitive. But where there is a caching benefit (or some other benefit) of not using HTTPS, perhaps users should have the choice to do so. Maybe clients shouldn't do it by default, and maybe it's reasonable for sites to always offer HTTPS as an option. Still, "don't ban HTTP" does have a case in such scenarios.

I think part of the problem here is that HTTP has spawned a number of use cases larger than any of us can contemplate at once. This is much broader than the general "web app" or "information browsing" use cases for which the HTTPS case is really important. "Ban HTTP" means banning all of these cases. It's very difficult to convince anyone that all use cases have really been genuinely considered on neutral grounds, and much easier to make the decision first and then pretend that workarounds are enough when a new use case is presented.


> In a big organization, a caching HTTP proxy server would provide the caching benefit, where an HTTPS-only Internet cannot.

Why? HTTPS proxies aren't exactly unheard of, and organizations are already rolling out their own CAs anyway.


Implementing your suggestion would make things very convoluted when it all works perfectly already.

When I'm at work and want to access my online banking securely, I don't want my organization's proxy cache to MITM me. So I choose not to use my organization's MITM CA.

When I'm at work and am working on testing my reproducible server deployment, I really want to use my organization's caching proxy server so that I can download packages at 1 or 10 Gbit instead of the much lower speed of my organization's shared Internet connection.

This is very easily done right now: I use HTTPS for my online banking, and HTTP for my distribution package downloads. Everything points at my organization's proxy server, but no other special arrangements are required.

You're telling me that my organization has to roll out its own CA, implement HTTPS MITM caching, and I have to add the CA in my test deployment, all for what? So my organization can MITM my online banking, or that my organization's MITM proxy is now an additional attack surface for my online banking?

Sure, there are technical ways round this, but it involves jumping through hoops for no privacy gain. As I've pointed out elsewhere, downloading distribution packages over HTTPS offers no real privacy benefit since what users are downloading can be inferred from download sizes anyway.

Again: all I'm saying is that there are valid use cases for the current status quo.


This might sound like a stupid question, but unless I am grossly under-informed, with HTTPS, all a proxy server can do is forward the connection and the data. The proxy could not see the URL(s) being requested, so caching is not going to work. Or is it?


The proxy can MITM the connection, as long as the browser accepts the proxy's certificate. Corporate environments tend to have their own CA infrastructure, so this isn't much of a problem for them.


> Corporate environments tend to have their own CA infrastructure, so this isn't much of a problem for them.

That's a very broad statement. Some corporate environments certainly do, and so this isn't much of a problem for them. But some corporate environments certainly do not, and so this is a problem for them. What you're essentially doing here is requiring that all corporate environments that don't currently have a CA infrastructure deploy one instead of using HTTP caching that works today. You're positively encouraging CA MITMing here.


>You're positively encouraging CA MITMing here.

Internal to a business that is already MitMing everything, yes. Is that strange?


So, some researcher sitting at his computer in his lab in his university, connecting to another university's or NASA's server, downloading freely available, government-funded research data for his publicly funded research grant, the results of which will be published in a journal for the whole world to see--

This HTTP request is sensitive? It's secret? A profile is going to be built from it? A profile of what? Bob Roberts, Ph.D., University of Studyton, downloading NASA solar data to his lab computer? Oh no, poor Dr. Roberts, now the NSA will know what he downloads...

Come on. Insensitive web traffic certainly does exist. If you think it doesn't, you must have no imagination.


Most arguments I've seen against HTTPS everywhere seem to fail to realise that it's not just the data confidentiality that's important, it's the integrity too.

If you don't care about the data you're getting back from a server being wrong, why bother requesting it in the first place? If you do care about it being wrong, HTTPS isn't a bad way to help ensure that what you're getting is what the server is giving out. It's not perfect, but nothing is.


> If you don't care about the data you're getting back from a server being wrong, why bother requesting it in the first place

Perhaps I'm not overly concerned with the prospect of someone altering a gif of a cat falling over because someone put ham on its face.

People can send me fake letters and my post can be intercepted, but the postal system is still useful.

I'm not saying that security isn't important or that we shouldn't be securing more than we are, but the argument that it's vital for everything to be secured is clearly false. There is traffic that I simply do not care about being public, and the risk of it being modified (and the consequences if it is) is so low that I think the added time and stress of worrying about it would be more detrimental to me.


So if I intercepted kittens.gif and altered the content type to be 'text/javascript' and the body to be a javascript payload that tries to exploit your browser, you're not overly concerned?


Not really, since javascript loaded into an image tag shouldn't run and I was already following an unknown link to an unknown location coming from an unknown server. The risk that it was intercepted by you is low, and the consequences seem to be low too.

You're also skipping straight over the risk that it happens at all, which is a fairly important part of a risk assessment.


I'd be more worried about my UA not being able to handle that scenario by just refusing to run that javascript payload.


A more realistic situation could be a malicious attacker intercepting the cat picture and replacing it with an image containing something that could land the viewer in jail for having any trace of it on his computer (child pornography), and then sending an anonymous tip to law enforcement...


Not really, assuming said gif is inside an img tag.


What if they replaced the cat gif with some objectionable content, e.g. a beheading? Surely this could scar you for life?


Is that particularly likely? That someone will MITM my connection to show me a shock image? Is that a realistic threat you're trying to guard against?

What makes that much more likely than someone just emailing it to me? Or tricking me into it by putting different text in the link?


> If you don't care about the data you're getting back from a server being wrong, why bother requesting it in the first place?

If that's the argument, it reasonably applies to everything, everywhere, and we should scrap TCP altogether.

Instead we need to implement a replacement which ensures everything everywhere at every level of every stack does crypto as well. Let's call it STCP!

Or does that suddenly sound zealous? It just follows from your base argument.


Google calls it QUIC [1]. And it's exactly that: a replacement for TCP with TLS built in.

[1] http://blog.chromium.org/2013/06/experimenting-with-quic.htm...


This counter-argument fails because HTTPS is much easier to implement and web servers already use it. Whereas TCP is much lower level and to make "STCP" we would have to recode servers entirely to handle both TCP and STCP requests from clients.


TCP is a transport-level protocol. Its job is to get messages from A to B in a way that tries to guarantee delivery (amongst other properties). HTTP is an application-level protocol; its job is to encapsulate messages and commands between two applications that may be remote. We have already built mechanisms that secure traffic on the transport layer (see IPSec) and they actually work well when they're needed/useful. It turns out that right now, securing data on the application layer is much easier / cheaper than trying to do IPSec everywhere.

Finally, HTTP is a bit of a special case, because much of the time data delivered over HTTP is code that's going to be executed on a client's machine. If anything, this makes it much more important that the integrity is preserved.


There's actually a crypto IP replacement out there: https://github.com/hyperboria/cjdns



Cryptography cannot be separated from authentication, and only the application can know how to do authentication. E.g. emails are designed to be forwarded between servers and not care about the exact path they take, so it would be foolish to apply hostname-based encryption to emails; instead emails are (or should be) encrypted using S/MIME or OpenPGP, to the specific intended recipient. It's then fine, indeed desirable, for these encrypted emails to be passed around over cleartext transports such as TCP.


> Cryptography cannot be separated from authentication

Frequently repeated but still wrong. Cryptography requires one of the following: 1. Two key pairs, or 2. A shared secret.

The shared secret implies authenticity. But there are entire classes of cryptosystems based on not knowing with whom you are communicating. Crypto establishes a channel through which Alice and Bob can then negotiate authenticity. (To put it in simple terms: it's better to be phished over secure transport than to be phished over plaintext.)

For some reason, a large number of people seem to have completely skipped over this basic advantage of unauthenticated channels: you have now isolated the communication to you and your prospective phisher. This is a gain, this is an advantage, and it borders on absurd that people go to such lengths to deny this.
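
A minimal sketch of the "encryption without authentication" point, using an ephemeral X25519 exchange from the third-party cryptography package (an assumption; it stands in for whatever key agreement a real protocol would use): neither side proves its identity, yet both end up with a shared key that a purely passive eavesdropper cannot recover. Only an active MITM defeats it.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Ephemeral, anonymous key pairs; nobody authenticates anybody.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Each side combines its private key with the other's public key.
    alice_shared = alice_priv.exchange(bob_priv.public_key())
    bob_shared = bob_priv.exchange(alice_priv.public_key())
    assert alice_shared == bob_shared

    # Derive a symmetric key from the shared secret; a passive listener
    # who only saw the public keys cannot compute it.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"anonymous channel").derive(alice_shared)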


>> Cryptography cannot be separated from authentication

> Frequently repeated but still wrong.

> The shared secret implies authenticity.

Just checking: are you aware of the concept of authenticated encryption (https://en.wikipedia.org/wiki/Authenticated_encryption) and its importance?


Cryptography without authentication still provides protection against (non-MITM) eavesdroppers, which is very important with public wifi networks nowadays.

Which is why it's strange that self-signed connections are represented to the user as dangerous, while unencrypted connections do not have such a warning even though the former is strictly better.


Scenarios where the attacker is restricted to being a non-MITM eavesdropper are pretty rare; public wifi networks aren't an example of one.


Attackers will always prefer passive attacks over active attacks, though. There is no reason to give them that convenience.


Public WiFi is the anti-example: if you can read the WiFi traffic, you can write to it and MITM connections. Guarding against purely passive attacks is probably most useful, right now, against large-scale fiber taps. And only as a stopgap.


Why would that sound zealous? It really would be nice if crypto were part of the foundations of the Internet, rather than applied as window-dressing.

It'd require some hard thought about what guarantees are possible and desirable, but that'd be a good thing.


jhourcle has addressed this for his type of data with an https side channel for checksums of files available for download. See the response to the comments (also explains why rsync is not appropriate in their system for a specific class of file).

jhourcle is not against https, but he/she wants the ability to use http where an exception is needed. The cost of the certificate is negligible, but the ability to keep running a fragmented, heterogeneous (age-wise) environment is part and parcel of the job, and NO additional funding is being provided to accomplish or operate this under https.

From reading the original and the response to other comments, it is apparent to me that jhourcle has thought this through and is only asking for an exception process that they can file for. Just as they do today for systems that are not in compliance (not related to https) for various reasons (ref: comment response).


HTTP is very useful for caching and mirrors for Linux distributions because signatures are checked by package managers outside of the transport protocol.


And additionally signature verification provides far better security than HTTPS on its own because a compromised mirror cannot compromise the signature.
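
A minimal sketch of why the transport stops mattering once the content is signed (Ed25519 here is only a stand-in for the GPG signatures real package managers use; keys and data are made up): verification fails no matter which mirror, cache, or protocol delivered the bytes, provided the publisher's public key was obtained out of band.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Publisher signs the package once, at build time.
    signing_key = Ed25519PrivateKey.generate()
    package = b"contents of foo_1.0.deb"
    signature = signing_key.sign(package)

    # Client ships the publisher's public key (e.g. in the OS image) and
    # verifies whatever a mirror hands it, over HTTP, HTTPS, or a cache.
    public_key = signing_key.public_key()
    try:
        public_key.verify(signature, package)          # fine
        public_key.verify(signature, package + b"!")   # raises
    except InvalidSignature:
        print("tampered or corrupted download, refusing to install")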


What about confidentiality of the package downloads though? e.g. Snoop the traffic, notice they are downloading a fix, they probably don't have it, automatically attack them before they apply it.


HTTPS doesn't protect much from that either. A third party can figure out what you just downloaded based on the download size, since the sizes of package files are well known. Even if there is some uncertainty between sets of packages, it could be inferred from your other downloads, since decisions on what to download follow known logic (dependency tree, what the victim likely had installed before, etc).

I suppose in your case the window would be smaller using HTTPS, as the package downloaded could probably be inferred only after the download is complete. But your method would need automation anyway, so I don't think the difference in timing would really matter in practice.

But if you are in the position to automatically attack someone, why wait to see if they are downloading a fix? Why not just try to attack with known recent vulnerabilities anyway?
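
As a toy illustration of the size-inference point (package names and sizes are invented): even with the payload encrypted, an observer who knows the byte counts of files on a public mirror can often guess what was fetched.

    # Hypothetical table of well-known package sizes on a public mirror.
    known_sizes = {
        "openssl_1.0.1k.deb": 1270412,
        "bash_4.3-11.deb": 1114880,
        "linux-image-3.16.deb": 33554432,
    }

    def guess_package(observed_bytes, tolerance=4096):
        """Match an observed (TLS-padded) transfer size against known files."""
        return [name for name, size in known_sizes.items()
                if abs(observed_bytes - size) <= tolerance]

    print(guess_package(1271000))  # ['openssl_1.0.1k.deb']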


If you had the capability to deliver a prepared attack that quickly, you could probably already just attack them outright.


Why is HTTP very useful? You can still mirror using HTTPS, even locally (using apt-mirror).

That said, Linux distributions don't usually use browsers to pull packages, so the point is moot.


Imagine you've MITM'd funnycatpictures.com. Why, as an end user, would I care about the integrity of the funny cat picture you maliciously send me? Provided it is still a funny picture of a cat, your evil http version of the website is still just as useful as the original.

There are a lot more funny cat picture sites than banking websites. They are fine using the same protocol they have been since 1995.


Deliver porn, spam, political messages, or illegal content.

Or MITM a recipes site and deliver incorrect dietary advice.

MITM someone's personal site and add small comments that imply they wish the KKK was a stronger political force, thus discrediting them.


Maybe someone even MITMed the comments on this page I'm reading right now, but I'm not going to be awfully worried about that.


We could really do with an HTTP subresource-integrity specification. Just as you specify the size of a linked image, you could add a hash to specify the expected content of a resource. Particularly useful with large linked non-browser resources.

http://www.w3.org/TR/SRI/

https://wiki.whatwg.org/wiki/Link_Hashes
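
The integrity values in the SRI draft are just base64-encoded digests of the resource, so generating one is a few lines (the local file name here is hypothetical):

    import base64, hashlib

    # Compute an SRI-style integrity value, e.g. for an
    # integrity="sha384-..." attribute per the W3C SRI draft.
    data = open("jquery.min.js", "rb").read()   # hypothetical local copy
    digest = hashlib.sha384(data).digest()
    print("sha384-" + base64.b64encode(digest).decode())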


HTTP: here is the length (response header) and here is the payload, transferred with checksums (TCP). HTTP has quite good integrity for the data you requested.

If you mean HTTPs is great to check if it's coming from the right server (identity/authority) then well, see how well the CA system works and how weak certificates really are (e.g. "Ron was wrong, Whit is right"). You can work around that by doing certificate pinning (see google chrome for example) and stuff like that, but in that case I'd already be better off signing the payload and sending that as part of the response.

The scientific institutes of Germany worked around all the CA headaches by becoming an intermediate CA (see who signed https://www.pki.dfn.de/ ). I'd just expect more intermediate CAs popping up as a result of HTTPS-only, which will weaken HTTPS even more.

(And yes, I'm using https everywhere and would enable https for just about every site, except e.g. for linux iso downloads and alike)


HTTP has quite good integrity of the data you requested

Not for the values of "integrity" that include the possibility of intermediate tampering. You have no indication that the request received by the server is the one you sent, nor the response that you received is the one that the server sent.


Except HTTPS does not (currently) solve data integrity issues in real-life use cases; getting a certificate issued for someone else's domain is far from difficult... especially since DNS is still plaintext.


getting a certificate issued for someone else's domains is far from difficult

Is it? Please provide a cert for ycombinator.com. Thanks.



From what I can tell, they're only resellers, the companies that actually issue the certs are GeoTrust and Symantec.

In any case, I know it can and does happen, I just don't agree that it is "far from difficult". Having to hack an SSL provider is out of reach for the vast majority of online thieves and casual snoopers.


Well yes, none of them would actually have their own name on said certs. But all of them had API access to make symantec or geotrust issue certs. End result is the same.

Having been involved in all of the hacks I linked, I wouldn't describe them as anything extraordinary.


Well yes, none of them would actually have their own name on said certs. But all of them had API access to make symantec or geotrust issue certs. End result is the same.

Last time I used a RapidSSL reseller, I had to authenticate my domain against a GeoTrust controlled page before the cert was issued. Is this not the case with WebNIC?


DNSSEC is meant to solve the integrity issue. Now, whether you feel it actually does the job is another matter.

Personally, I use it on three domains, one which was done for testing purposes, and the other two because there are DNS records I would like to keep secure from manipulation. It'd be nice if TLSA/DANE was more widely supported, if only to be an additional bar against certificate forgery, but unfortunately it's not.

It'd be good if DNS servers other than Knot had decent native support for signing, but they don't in general.


Sorry, but this is just ignorant. If you request a carrot cake recipe, you could receive a modified recipe that uses arsenic instead of carrots; without https, this kind of thing can easily happen. You think you are downloading poison control instructions, but instead the instructions have been modified, causing death or injury. HTTPS is not for "sensitive" data but for the integrity of all data. Imagine going to a voter registration page in Africa; you show up at the address and it's an ambush by anti-democracy militants, because they were able to hijack the information on the official website.

The 1990s bullshit opinion that HTTPS is somehow only for 'sensitive' data is very destructive to a safe Internet.


> If you request a carrot cake recipe but receive a modified recipe that uses arsenic instead of carrots.. Without https, this kind of thing can easily happen

I can easily get struck by lightning when I go outside in the rain. I can easily get eaten by mountain lions who break out of their cages when I go to the zoo.

There being a one in ten million chance of something happening doesn't dismiss all of the valid concerns the linked article raises.

The importance of HTTPS' added security is something for both the host and user to consider. The host can choose not to implement it, and the user can choose not to trust/view the data if they have even the slightest possible reason to believe their data might be modified maliciously in transit; or tracked for some sort of profiling that they want to avoid.

As it stands, I'm wary about accessing any Chinese servers without HTTPS; but I'm not at all concerned about my ISP or government MITM'ing my connections to Ars Technica for nefarious purposes. Nor do I care in the slightest if my ISP knows I read a story there about Norway planning to drop FM radio transmissions.


> the user can choose not to trust/view the data if they have even the slightest possible reason to believe their data might be modified maliciously in transit; or tracked for some sort of profiling that they want to avoid.

If every internet user were educated enough to behave in such a way, then I'd say that would be a fair suggestion, but I don't think it's reasonable to expect that the average user is capable of making such judgements.


Then do you support dismantling tech companies who exploit tracking?

If people are too stupid to understand taking an HTTP risk, aren't they also too stupid to risk using Gmail?


+1 And the premise that some commenter on HN should even enter into an argument about which of YOUR data is sensitive is destructive. It's your security. You should decide what imperils it.


> HTTPS-nazis

> vague possibly-maybe security-related theoretical issues seems asinine

This is basically gaslighting. Network security is real and people really need it. Please refrain from shaming people for talking about the things they need.


You know, in this case, there'd be nothing wrong with making checksums available via https (to verify integrity) and leaving the bulk data available over http. Mind you, 95% of people would never verify the checksums, but at least in theory it keeps their bulk data distribution easy.
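
A minimal sketch of that split (URLs hypothetical): fetch the bulk file over plain, cacheable HTTP, fetch a small checksum over HTTPS from the authoritative host, and refuse the data if they disagree.

    import hashlib
    import urllib.request

    # Bulk data over cache-friendly HTTP (hypothetical mirror).
    data = urllib.request.urlopen(
        "http://data.example.gov/solar/2015-04.fits").read()

    # Tiny checksum file over HTTPS from the authoritative host.
    expected = urllib.request.urlopen(
        "https://data.example.gov/solar/2015-04.fits.sha256"
    ).read().split()[0].decode()

    if hashlib.sha256(data).hexdigest() != expected:
        raise ValueError("download corrupted or tampered with in transit")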


In theory if you got a good original OS download the package managers would be doing all this verification automatically.

Maybe that's actually a good idea to consider as some sort of specification. Large file download via HTTP with some HTTPS checksum available via a single click.


Who determines what data is "sensitive"?

You might consider abortion non-sensitive and be comfortable reading legal (or whatever) information about it over HTTP. Someone else is not comfortable with that, and would want to use HTTPS.


It's not just about sensitive. It's also about correct.


> "HTTP has become the de facto standard in sharing scientific data. [...] HTTPS would cause an undue strain on limited resources that have become even more constrained over the past few years.

Interesting that this keeps coming up. I saw the same thing at the top of the comments on the Python 2/3 article here yesterday [1]: that academia is full of people trying to do technical things and doing them poorly, so everyone else should hold back progress because they're barely managing as-is, with no engineering training or staffing, and if you change anything it's all coming down.

Why is this a problem with Python or HTTPs and not a problem with the priorities of academic departments?

[1] https://news.ycombinator.com/item?id=9397320


Read the GitHub bug. We're talking about missions started decades ago. There is no budget and no staff to maintain the existing code, much less rewrite parts of it.

But because some wonk thousands of miles away in Washington made a "HTTPS only from now on!" proclamation, all these existing projects which are working fine should be canceled? Or the staff should put in hours of unpaid overtime to learn and rewrite and test decades-old codebases?

How about you volunteer to do some of that work, and then you can criticize "priorities" and "people trying to do technical things and doing them poorly."


Conventions, not rules; better every time. Though a rule that public services should at least state that the data is not sensitive and does not need https is probably a good idea, to force people to at least think about it.

But we should encourage more public data not less, not add another potential step that might delay/discourage people from making data available. Making https/ssl/tls easier and easier will make this a non issue eventually.

(I had to stop myself from spamming the github issue with a link to the seven red lines video.... :) https://www.youtube.com/watch?v=BKorP55Aqvg )


Commenting on the argument rather than the conclusion, it does an odd bait-and-switch.

> there is a statement that 'there is no such thing as insensitive web traffic' -- yet there is. [...] Forcing these transfers to go to HTTPS would cause an undue strain on limited resources that have become even more constrained over the past few years.

That HTTPS adds additional strain on resources says nothing on whether the data is sensitive or not. The entire post leaves "Non-sensitive web traffic does exist" as an assertion while going on to provide arguments around resources.

Not that "HTTPS-Only Standard" makes a particularly coherent argument in the other direction.


Title is misleading and the conversation reflects it: the request is for reconsideration of the US federal government's move toward HTTPS-only on its own websites. Nobody is trying to ban non-HTTPS websites in general.


HTTPS certainly is not cache friendly. Is there an easy way to extend https to handle heavy cache use cases (to improve caching behavior across clients)?

For example:

Assume: file.data exists and is already encrypted with enckey=K, deckey=D, algo=X, etc

Client -> https://www.example.com/file.data, plus a protocol exchange between server and client to obtain the deckey, algo, etc. needed to decrypt.

Server -> transfer of file.data (from cache) without further encryption, but still within the scope of the outer https session. The response would carry appropriate headers to indicate where the "plain" data starts and how long it is. At this point, a packet filter would see the encrypted body of the file but not the https headers or anything else.

Client -> recognizes this as a valid https session response but takes the inner section without further decryption. The inner section would need to be marked (as in a multi-part message), and the https response headers would need to indicate which byte range of the body should be read as-is and decoded with key deckey.

Again, I am hoping for some sort of extension to https to make it cache friendly.

Advantages: File is encrypted once (or as needed if policy requires it) and caching proxies can serve the file without re-encryption per user session response.

Disadvantage: Likely need to change http/s in some way to wrap and mark plain data.
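
A rough sketch of the encrypt-once idea, using Fernet from the third-party cryptography package as a stand-in for whatever algo/key negotiation such an extension would define (file name, URLs, and endpoint are all hypothetical): the ciphertext is byte-identical for every client, so any dumb HTTP cache can serve it, and only the small key exchange needs a real HTTPS round trip.

    from cryptography.fernet import Fernet

    # --- Done once on the origin server ---
    content_key = Fernet.generate_key()
    ciphertext = Fernet(content_key).encrypt(open("file.data", "rb").read())
    # 'ciphertext' is published at http://www.example.com/file.data.enc
    # and is identical for all users, hence cacheable by any proxy.

    # --- Per client ---
    # deckey would be fetched over a small HTTPS request, e.g. from
    # https://www.example.com/keys/file.data (hypothetical endpoint).
    deckey = content_key
    # The bulk body comes from the nearest HTTP cache, decrypted locally.
    plaintext = Fernet(deckey).decrypt(ciphertext)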


There is a spec for it already: http://www.w3.org/TR/SRI/


This adds an integrity (checksum) attribute to a resource and I think that is fine. It can also be used via https or http but they indicate http is not safe (3.3.2).

The problem is that the data is going to be encrypted per user https session and this adds a load to the server (say if you are streaming a movie or downloading a large file). With https the data has to be encrypted per user so there really is no caching on the output. Sure, the original data file can be in a cache, but what goes out (payload sans header) is the encrypted-per-user byte set. Unlike http where the payload across all users is the same.


The way I see it there are 2 problems with HTTP

1. Allows passive snooping (not always a problem)

2. No way of knowing whether data has been tampered with (always a problem)

Would be nice if there were a way to stop 2 (e.g. a checksum sent over HTTPS) while still allowing the caching benefits that come with 1.


TLS/SSL actually supports authentication/signing only, it's just that most browsers don't accept it by default: http://www.hamwan.org/t/SSL+without+Encryption


This seems to be the feature we need if browsers would support it.

I can see how this would cause issues in the "presentation", i.e., how could a browser warn a user that for this session your data is not encrypted (given expectations for pages served via https have already been defaulted to 'you are at the right site + data transfers are encrypted').


I thought that only protected against modification in transit though. Often, what you really want is to verify the content is what the original source intended it to be, rather than what the server sent. In that case, you want a mechanism for attaching a GPG-like signature to the content. It doesn't matter then if the content is delivered via mirrors, caches and HTTP. This would be especially useful for things like jQuery mirrors, where a compromise of the server would affect many sites at once.


>> Due to many institutions having policies against FTP and peer-to-peer protocols,

This is the problem. Your organization needs to stop cargoculting IT policy. Perhaps banning HTTP will be enough pain to make that change possible.


> This paperwork has restricted our ability to openly share scientific data with international partners, and has impacted our sharing with an NSF partner that had a foreign-born system administrator.

I'd say their IT policy is not the only broken one...


I think schools banning HTTPS has a lot more to do with being able to censor what the students are visiting than saving bandwidth.


Maybe schools should just stop censoring students? We should be setting examples for children, and set the example of mutual trust and responsibility, not "how to spy on other people below you".


That would require legal changes. At the moment many schools are legally required to censor students.

This is also in a country which still doesn't teach comprehensive sex ed' in many places and for which many adults don't want kids to have access to that information.


They can't buy and configure an HTTPS proxy? Those things do exist after all.


HTTPS proxies work less well today than they did even five years ago. Certificate pinning and other security improvements have broken a lot of things (by design).

So the question is: What is more important, allowing HTTPS proxies, or stopping governments with a CA from MitM-ing traffic (e.g. Iran, China, etc).

A nice compromise might be to inform users that they're being MitM-ed by an installed CA, but only once and subtly so.


Which browsers enforce pinning when faced with a CA proxy? Chrome explicitly overrides pinning in this case, so they don't break half of all corporate usage.


Even with HTTPS enabled, institutions can censor based on IP or DNS. Blocking based on deep packet inspection is a bit complex to set up for normal schools, I think.


I'm pretty sure DPI-based filtering can be purchased as an off-the-shelf service.

And that's one of the core reasons I dislike HTTPS everywhere. It will lead to organizations like schools installing their own root certificates to MITM traffic, lessening the security of those things where encryption is the most important, like online banking.


Many universities already require the installation of a root certificate. The primary purpose is to avoid buying commercial certificates for every last internal university site, but it also has the effect you've mentioned.


Nope. Pretty much 100% of schools in the U.S. (with internet access) operate a transparent or explicit HTTP(s) proxy as the only way out to the internet from student-accessible vlans. Websense is probably the most popular commercial solution. My school district ran a Squid extension called Squidguard for a long time before switching to the filtering solution bundled with its new wireless controller system.

Implementing web filtering is a condition of receiving federal subsidies; they take it seriously. A school with only a handful of PCs might use a host-based solution but proxy-as-the-only-way-out network design is very, very common.


Plenty of vendors are ready to sell turnkey censorware to them. It's already a thing.


"If the schools, libraries, and other filtered access points own the client machines they can install their own CA into those machines and have the proxy software intercept and generate certificates." (From the comments.)

That's the first time I've seen a man-in-the-middle attack described as a technique for improving security.


The benefit of ubiquitous encryption outweighs this small list of minor drawbacks a million times over.

Even if you could convince me that it's ok to send some traffic in the clear, that wouldn't make any difference. You're just going to have to suck it up, for the benefit of the web and humanity in general.


Why does Yahoo's news need to be sent over an encrypted channel? Since bandwidth is still expensive, why take away basic caching? I don't want to set up reverse proxies, because I don't want the massive headache of separating Yahoo from people's banks or medical records.


I'd have a lot of fun and probably make a lot of money, if I could control Yahoo News for a week.


That is the single most unlikely thing that could happen. It is also just a dumb argument to require people to do a lot of bookkeeping for every printer or other device on a network.

I am starting to think this is some weird plot to make entry into any web software harder and protect those already here because the arguments for this don't make sense. I assume that caching is going to end.


Really? 4chan got Apple stock to plummet 5% just by announcing stuff about Steve Jobs. Yahoo News publishing news like "Elon Musk dies" or "3 killed in Tesla car battery explosions" or just other simple "X misses Q3 by 80%" or "X to acquire Y" would do a pretty good job at changing prices, I'd guess. (Though if you went overboard they could roll trades back, maybe.)

As far as the fun part. Just change people's quotes, slightly. Twist words around. Make it seem like Obama really regrets the healthcare act. Or something funnier. Done well, it'd be a fantastic piece of trolling. Done really well, you could send people into a panic.


Has this actually happened, or is this just a fever dream? We are talking actual dollars and pain-in-the-butt work versus some weird hijack that looks like it would get caught in very short order. I am unwilling to trade caching and bandwidth for this.


http://www.cnet.com/news/whos-to-blame-for-spreading-phony-j...

Even easier - someone just posted to CNN iReport and AAPL fell 10%. Awesome.

You asked what the benefit of encrypting news is. Well one benefit is that you restrict who can modify stories. Instead of just compromising the network, you've gotta compromise the box. There's value in making sure all data people receive comes from the source they believe it does. (Now if they decide to make trading decisions on CNN iReport or Yahoo News, well that's another issue.)


You keep giving examples that do not involve anything like a Man In The Middle attack. The CNN iReport was a regular posting. All of your examples have nothing to do with the proposal and would have happened if the site was using https.


"HTTPS-only" goes directly against the architectural principles laid out in "REST", where intermediaries should be able to understand (in a limited sense) the request and responses that pass through, do caching, differentiate idempotent from non-idempotent actions etc.

The ability for intermediaries to see what goes through is in large part why "REST" is said to aid scalability, the same point this article seems to address.

Now, both movements, "HTTPS-only" and "REST" are widely popular in dev communities. Yet I never see one acknowledge the existence of the other, which threatens it. In fact, I'd see people religiously support both, unaware of their cognitive dissonance.

Curious, I think.


Because your initial premise is flawed. Equal GET requests will often have different results based on the user doing them. Either because they are requesting their "own" data or because they have different privileges and see different results. While not perfect, it's the reality.

This throws out all possibilities of caching. And I cannot see why intermediaries should differentiate more than that. So https is in no way limiting REST.


My premise is that HTTPs-only and REST have opposing constraints.

You have not demonstrated any flaws in it. REST says communication is stateless and cacheable, while acknowledging some select minority cases where that is not the case.

Turning the minority cases into the only way of communication nullifies most of the benefits of REST, because the whole rationale of the paper is lost. I.e. intelligent shared processing and caching by intermediaries.

I'm taking no stance on what "the reality is". I'm taking no side about which side is more correct. I'm stating what both sides want, and finding it curious they don't see the contradiction.


I think the description of REST you've outlined is not entirely right. The statelessness relates to client state, not the system state - i.e., POST/PUT/DELETE etc. can very well change the system state and that's the whole point of them - and also session state is allowed too, it's just not the part of REST architecture but is assumed to be implemented externally.

It is true that HTTPS may impede some cacheable resources. Maybe HTTPS may be improved to allow transparent caching of _some_ content, but the security implications may be hard to predict and will require very careful implementation to not introduce new security issues with attacks on caches themselves (DNS system still has this problem AFAIK).


The statelessness relates to communication state. A client can hold state and it most certainly will hold state (consider your browser: open tabs with URLs, bookmarks, local browser cache; form autocompletion; settings; all of this is "state").

Instead, REST talks about a request being stateless and a response being stateless (i.e. sufficient on its own and not dependent on preceding or future communication between that client and server).

This is, again, done for the benefit of intermediaries, because intermediaries should not be forced to hold state in order to interpret REST communication. Every request, response should be sufficient on its own to be understood.


Sorry, I was not clear - by "client state" I meant not "state kept on the client" but "state on the server that is kept different for every client".


Equal GET requests will often have different results based on the user doing them

Well, that's very non-REST.


No, it is exactly REST.

Section 5.2.2 of Fielding's thesis specifically says that per-user authenticated responses cannot be cached in a shared cache because they may vary per request. There are other cases mentioned too.


Let's quote from section 5.2.2:

"All REST interactions are stateless. That is, each request contains all of the information necessary for a connector to understand the request, independent of any requests that may have preceded it. This restriction accomplishes four functions: 1) it removes any need for the connectors to retain application state between requests, thus reducing consumption of physical resources and improving scalability; 2) it allows interactions to be processed in parallel without requiring that the processing mechanism understand the interaction semantics; 3) it allows an intermediary to view and understand a request in isolation, which may be necessary when services are dynamically rearranged; and, 4) it forces all of the information that might factor into the reusability of a cached response to be present in each request."

When the paper was written, the per-user requests were supposed to be an exception, a minority case.

HTTPS will effectively make everything opaque and "per-user", and hence everything I quoted above which refers to intermediaries will no longer matter.

Restrictions in 5.1.3 ("Stateless"), 5.1.4 ("Cache"), 5.1.5 ("Uniform interface") and 5.1.6 ("Layered") would no longer apply either. All intermediaries will see is encrypted data, so shared data and functionality as explained can no longer be moved to an intermediary.

BTW, parent, way to selectively refer to a phrase in Fielding's paper while missing the point of 99% of the rest of it.


Sorry for the short response, I was on mobile.

The part of 5.2.2 I was referring to is this:

If some form of user authentication is part of the request, or if the response indicates that it should not be shared, then the response is only cacheable by a non-shared cache.

This clearly indicates that returning different versions of the same resource on a per-user basis is valid REST architecture (and I was responding to the comment that "Equal GET requests will often have different results based on the user doing them" is very non-REST).

While it is only one phrase, it is the only phrase that deals with user authentication and matches the discussion very well. As such I think your comment about missing 99% of his thesis is incorrect - indeed, I think choosing the correct and relevant part is precisely what "getting the point" is about.

I agree with your points about HTTPS, but that is orthogonal to the user authentication discussion in the sense that transport-layer is separate to the API design.

However, I appreciate that your points about how the transport layer affects the assumptions around API design are correct, and that going to a HTTPS-only transport mechanism may have performance impacts in many cases (especially high-volume ones).


I know which phrase you're referring to, but if you read it in context, it's apparent this is an exception case, because the very same section talks about cacheable, stateless requests and responses.

All of REST's constraints are about encouraging cacheability and "visibility" to intermediaries. Intermediaries should in most cases be able to see which resource is being requested/returned, read the method, read the content-type and other headers.

All of this is not available during an HTTPS session. So "HTTP + a bit of HTTPs" is REST + a dose of realism.

But "HTTPs-only" is something else entirely.


I think we are in dramatic agreement?

HTTPS = breaking caching.

User authentication = returning different per-user results for the same resource query, which is RESTful.


I'm afraid we're not in a dramatic agreement. You point to an exception which REST allows to claim the exception is RESTful.

The exception is there for practical reasons and it doesn't satisfy REST's constraints nor benefits from REST's properties.

Either way, my point's been exhausted, so, I'll shut up now ;)


First, proxies are certainly not one of the necessary principles of REST. Even without proxies, there can be REST. More importantly, most REST APIs can't take advantage of proxies anyway, because most responses must not be cached.

Second, HTTPS is MITMed using self-signed certificates (signed by custom CAs which are installed in browsers on the network) by proxies all the time. This is very common in corporate networks. Therefore, HTTPS currently works with proxies.


> Yet I never see one acknowledge the existence of the other, which threatens it. In fact, I'd see people religiously support both, unaware of their cognitive dissonance.

Are you sure you're not just seeing different groups of people support one or the other? I support HTTPS-only and am more or less anti-REST.


Curious, what could you have against REST, and is it only the JSON incarnation?


I think the big advantage of "REST" was being easy to use from the browser, but modern REST (with e.g. content negotiation and HTTP verbs) actually goes against that. I think strict, automatically-checked schemata for APIs are very valuable, so I'd prefer to use something like thrift or even WS-* rather than REST.


> I think strict, automatically-checked schemata for APIs are very valuable, so I'd prefer to use something like thrift or even WS-* rather than REST.

Strict, automatically-checked schemata for APIs are perfectly doable with REST, JSON or whatever Protobuf flavor. OTOH automatically-generated schemata behemoths have been created with SOAP and WS-* that I have very creative ideas about how to deal with and dispose of.

As for the easy-in-the-browser part (whether it is for tests or implementation), it was merely a side effect of reusing the HTTP spec semantics as a common-ground general purpose vocabulary. REST in itself doesn't even mandate HTTP.


> Strict, automatically-checked schemata for APIs are perfectly doable with REST, JSON or whatever Protobuf flavor.

Up to a point, but having a single standard that's built into all the tooling is huge. Hopefully one or other approach will "win" in the REST world and we'll start to see some convergence.

> As for the easy-in-the-browser part (whether it is for tests or implementation), it was merely a side effect of reusing the HTTP spec semantics as a common-ground general purpose vocabulary.

Intended or otherwise, it was a big advantage, and I think it was the real reason for "REST"'s success.


So why not have TLS applied at the edge of your network of machines that provide the service, and plain comms between them? Or is it somehow important that everyone, everywhere be able to read the stuff?


So the author thinks that scientific data is non-sensitive data. He's an astronomer.

Perhaps he's not familiar with the story of another astronomer, Galileo, and what people thought of his data.

http://en.wikipedia.org/wiki/Galileo_Galilei

It's not always about what you think about your own data. It's also about what others think of your data... which is something beyond your control and sometimes beyond imagining.


Isn't HTTPS-only just a financial ploy of root certificate vendors? I'd welcome more security but it shouldn't cost an arm and a leg.


It looks like a mix of very good arguments (some traffic is not sensitive, and ensuring data integrity can be done much more cheaply than with HTTPS), iffy arguments (we must have bad security because some governments ban some people from having good security), and outright bad ones (since HTTPS can be implemented incorrectly or have bugs, it is not useful).


Trying to secure everything weakly leads to weaker security on important data. If you're using HTTPS for everything, it's so tempting to run everything through a CDN such as Cloudflare, which lets them look at your most critical data. This over-centralization creates a convenient point for central wiretapping. If you run the important stuff like credit card data through your own secured server, and serve the cat videos through the CDN unencrypted, you'd be more secure than if you run everything through the CDN. HTTPS Everywhere discourages this, which is why it's a form of security theater.

Then there's the EFF's own backdoor, the HTTPS Everywhere plug-in. Compromise the EFF's "rules" servers, and you can redirect user traffic anywhere. Their "rules" are regular expressions which can rewrite domain names. Here's an HTTPS Everywhere rule, from their examples:

    <rule from="^http://([\w-]+\.)?dezeen\.com/"
        to="https://$1dezeen.com/" />
That's a third party using a regular expression to rewrite a second-level domain. This rule always rewrites it to the same second-level domain. But do all of the thousands of rules in the EFF's database? Here's a dangerous-looking one that doesn't: [1]

    <rule from="^http://(?:g-images\.|(?:ec[5x]|g-ecx)\.images-)amazon\.com/"    
    to="https://d1ge0kk1l5kms0.cloudfront.net/"/>
 
That redirects some Amazon subdomains to the domain "d1ge0kk1l5kms0.cloudfront.net". Seems legit. The EFF wouldn't let someone redirect Amazon traffic to a hostile site hosted on Cloudfront, would they? If someone set up an instance on Cloudfront which faked the Amazon site, and got a rule like that into the EFF's database, they'd have a working MITM attack. That site is "secured" by a "*.cloudfront.net" wildcard SSL cert, so all we know is that it's hosted on Cloudfront. Does the EFF have some way to vet that "d1ge0kk1l5kms0.cloudfront.net" string? Nothing in their documentation indicates they do.
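
To make the mechanics concrete, here is the published rule applied with plain Python re, using the pattern and replacement exactly as quoted above; a hostile ruleset entry with a different "to" value would rewrite URLs just as silently.

    import re

    rule_from = r"^http://(?:g-images\.|(?:ec[5x]|g-ecx)\.images-)amazon\.com/"
    rule_to = "https://d1ge0kk1l5kms0.cloudfront.net/"

    url = "http://ec5.images-amazon.com/images/I/cat.jpg"
    print(re.sub(rule_from, rule_to, url))
    # -> https://d1ge0kk1l5kms0.cloudfront.net/images/I/cat.jpg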

Welcome to "EFF Backdoors Everywhere".

[1] https://www.eff.org/https-everywhere/atlas/domains/amazonaws...


Ahh, you're nuts. If it's HTTPS, it must be secure -- that's what the "S" stands for!


Cert racket and other attendant problems aside, there is little in the argument itself against banning HTTP. It is the same old argument - If we change things, things will break.

Yes, this is the nature of change. Not all change is good. But no good will ever be discovered without attempting change.


It comes up every time, but let's at the very least wait and see whether this is a viable way to easily implement HTTPS: https://letsencrypt.org/

I'd love to see such a thing built straight into the Nginx/Apache packages of distros to really make it straightforward.

Personally I have a mail.mydomain.nl, a mydomain.nl, and an ownCloud instance at cloud.mydomain.nl. It is such a pain to update every year: it requires at least two, or if you do it properly three, long sessions at StartSSL. If you by any chance don't have postmaster@mydomain.nl set up, you get to do that first too. This problem really, really needs solving.


Simply banning any kind of legacy protocol is not exactly in good spirit. People should have freedom of choice when it comes to running THEIR OWN infrastructure.


The ban is just for .gov sites.

"This proposed initiative, “The HTTPS-Only Standard,” would require the use of HTTPS on all publicly accessible Federal websites and web services."


I don't think anyone was trying to tell you how to build your core application network or your home LAN. SSL Everywhere is about critical connections subject to interception.


I was referring to public networks as well. I should be able to do HTTP GET to my server if I choose to do so. In the same way as I can open a socket to my server and write plain text to it.


I think you should be able to, in the sense that it should not be legally or technologically prohibited, nor prohibitive. I do think there is a line beyond which a service should be obligated to encrypt everything, and that line is somewhere around carrying others' messages, certainly around getting common carrier status for the same.

Edit: I must stress, social not legal consequences should apply. I'm in no hurry to invite government scrutiny of this line.


Poster could set up a reverse proxy to support apps that couldn't be updated to HTTPS in less time than it took to write his github issue post:

https://github.com/WhiteHouse/https/issues/107#issuecomment-...


Maybe because your tone was rude?


Are you trying to garner sympathy from the Hacker News crowd? Because you're not going to get it.


Not at all. Apologies for leaving that impression!

Want more people to call bullshit when people in government use lazy arguments as an excuse to compromise privacy of citizens.

"But if there's encryption, my job is more work" arguments from NSA, CIA, FBI, military, etc, exactly like this are a huge threat to civil liberties & freedom.


Sorry, downvoted you by mistake :|


The number of crypto-negative posts here makes me feel like HN SSL posts are being gamed. It's the same sorts of comments from multiple "people" across different threads. Anyone want to do the NLP to verify?



