Google Will Soon Shame All Websites That Are Unencrypted (vice.com)
448 points by devhxinc on Jan 27, 2016 | 356 comments



Which is hilarious, because the reason I can't switch The New Yorker website to HTTPS is ads - served to me by Google DFP, which allows non-secure ad assets.

In short: Google will penalize me because I use Google.

The universe has a sense of humor.


Similarly, Google claimed they would start penalizing websites that showed full-page ads for mobile apps instead of showing you the website. But every single time I try to get to Gmail, or Drive, or Calendar, or any Google service on the web using a mobile device, I'm shown a full-page ad for a mobile app. Google has been doing this for years, and it's also been about a year since they said they'd punish all sites that do it. But Gmail still turns up as #1 in search results for email, and so does Calendar, etc. It seems to me that they have whitelisted themselves and choose not to punish any Google property that breaks the Google rules, despite claiming to do so.

Edit: Typically, when a service tells me "no, you can't use this service until you view a full-page ad," I just give up and don't bother continuing to the service. But the same is not true for Google. I reluctantly click through the full-page ad every single time. It's incredibly annoying that I let them get away with this and still use the services. They are so outrageously arrogant about it and it bothers me greatly, but still, I don't change.

Edit 2:

Going to calendar.google.com: http://i.imgur.com/fNRhhYx.png

First results for searching 'calendar': http://i.imgur.com/l3A5Wlh.png



You are right. I think Google wants people to get annoyed by these vignette ads if they use Google Calendar. The user should install the Google Calendar app, to be fully controlled by Google. Then Google can send the user whatever ads Google likes...

They all prefer users to use apps rather than web pages. Google and the others want full control of users so they can make money...


> First results for searching 'calendar': http://i.imgur.com/l3A5Wlh.png

Well in your screenshot it seems like you scrolled down on the "calendar" search results. I get some other random thing ahead of Google Calendar, in incognito or not.

It is really annoying (I too hate those things, I would have installed the app if I wanted the app), but the click through thing only happens once in my testing. Are you clearing your cookies regularly?


I scrolled down because the top of the page was a Google Ad for Google Calendar. I chose to show the first organic result. Here's the top of the page: http://imgur.com/cO84ogZ.png


How about using software to get rid of Google ads?

I can erase Google ads at the DNS level. Users never reach any Google ad at all.

What do you think?
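Roughly like this - a dnsmasq rule (or a plain hosts entry) that resolves ad domains to nowhere; the domains here are just illustrative, not a complete list:

  # dnsmasq: answer every lookup under these ad domains with 0.0.0.0
  address=/doubleclick.net/0.0.0.0
  address=/googlesyndication.com/0.0.0.0

  # or, per machine, the blunt /etc/hosts approach
  0.0.0.0 pagead2.googlesyndication.com

Any client using that resolver never reaches the ad servers at all.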


AdAway was already invented ;)


The fact is, they did penalize themselves a couple of times in the past.


Mostly when obliged to do so.


They also said not to "penalize" websites based on user agent. Yet they do it and have been doing it for years.

They also said to use valid HTML etc., while they didn't do it themselves for cost-saving/performance reasons. Not sure if this one is still true.

My guess is that Google's list of preaching water while drinking wine is pretty long. I think their view is that they know what they are breaking, so it is OK in that particular case. The rest of us have to suck it up.


Yahoo shows YMail. Bing is the only one that shows Gmail first, although Yahoo technically uses Bing. You can pretty much say Yahoo actually "put herself above others" and is more guilty than Google. In fact, I don't think Google is doing anything wrong. After all, Gmail is popular, and if you are doing a Google search, you may be interested to know that Google also offers email, and most likely you are already a Google user.


> You can pretty much say Yahoo actually "put herself above others"

Do you mean `itself`? Since when are tech companies assigned genders?


Parent poster might not be from an English-speaking background.


I am not, sort of. You can refer to a country by "she", so why is it inappropriate for a company? I don't see any issues. You can view a company as a mother too.


That's an archaic and half-valid use, so stretching it to apply to a company makes it pretty much invalid.

You could try to convince people to use the word that way, but at present it's just not done. Companies are 'it' or you can talk about the people that make up the company as 'they'.


Given that they are not a native speaker, this seems over the top. Also, maybe consider Sapir–Whorf before stating universal rules.


What? I'm talking about English, not universal rules. The non-native speaker is the one that shouldn't be making assertions about what phrases do or do not have 'issues'.

Also Sapir-Whorf is dumb.


If you try to dictate how language must be used you're just being ignorant of how she constantly evolves through her use by different speakers.

So, there.


I'm not. I'm merely pointing out that such a use is going against the way English has been changing over time.


English as she is spoke does this?


One of the reasons a country has feminine gender is the association with the motherland (ie. one's native country).


Not all countries have feminine gender, just check https://en.wikipedia.org/wiki/Fatherland


I never thought about the question of whether, in languages that require nouns to have grammatical gender, particular countries may have a different grammatical gender from others, but on reflection I already know examples where they do in Portuguese: o Brasil, o Canadá (amusing to me because of the national anthem), but a Argentina, a Alemanha.

I wonder if this also happens in German; the only examples I'm thinking of offhand are feminine (die Schweiz, die Türkei) but now I'm not at all sure that there isn't a masculine one too!


Apparently Iraq, Iran, Yemen, the Congo, Lebanon, and Chad are masculine in German: https://german.yabla.com/lessons.php?lesson_id=409


Actually I can't think of many cases where German would use pronouns with countries. The reason these are masculine is that they are typically referred to with a definite article (literally "the Iraq", "the Iran", etc.). It's more common with names of regions -- which may indicate that these countries used to be mere geographical regions (rather than sovereign nations) when the names entered the German language.

It also happens with countries like the UK, the US, the Czech Republic and so on, but obviously for the same reasons as in English.

I can't actually think of a country that's feminine in German. The "die" you often see is actually indicating plural (e.g. "die vereinigten Staaten", the United States; or "die Niederlande", "the Netherlands").


When you use pronouns for anaphora, would you use "es" for all countries, or is it plausible to imagine "er" or "sie", as with common nouns?

For example: Vor drei Monaten waren meine Mutter und ich in der Schweiz; wir haben _____ wirklich schön gefunden.

Would you accept "sie" here as a reference to Switzerland (because it was referred to as "die Schweiz"), or "es", or both? My intuition is "es", but I'm not a native speaker, and non-native German speakers notoriously over-apply "es" to inanimate things.


I'd use "es" because it refers to the experience of being in Switzerland rather than the country itself.

But Switzerland is another example of a country that is typically used with an article. Consider the sentence "Ich fahre nach ____" with a country name. It doesn't work for countries like Switzerland ("nach Schweiz" sounds wrong, you'd instead say "in die Schweiz" -- same as "nach Kongo" vs "in den Kongo").


Thanks! Can we force the sentence to be about the country itself?

- Was meinen Sie über die Schweiz?

- ____ ist schön. / Ich finde ____ schön.


Several countries have articles in German, most don't. Plural countries (USA, UAE) take the plural article, which in its base form is the same as the feminine article ("die"), which makes for even more confusion, but it declines differently in other cases ("in den USA" vs "in der Schweiz") :)

Some feminine countries: Switzerland, the Dominican Republic, Mongolia, Slovakia, Turkey, Ukraine, the Central African Republic.

Masculine, in addition to your own list: Niger (!= Nigeria), Sudan, the Vatican.

Neuter: the UK (because "kingdom" is a neuter noun in German), potentially others.


In their native tongues, sure. But we're not talking about Afrikaans or French, we're talking about English. And since Britannia is feminine, English would have developed with the word "motherland" representing the native country.


But if you have a child company, wouldn't you expect to associate the parent company with a feminine gender before a masculine gender? That's what I am getting at. An organization has that "motherland" feel in some way.


Not really, no. Motherland is a very specific term that's been ingrained into English most likely because of the close personal relationship between people and their native countries, which would have been Britannia for many English speakers when the language was developing. There isn't really that same deep and universal connection when talking about organizations, so a similar term probably wouldn't develop anytime soon.


Sure, but what about ships? If you've read any sci-fi, the term "mothership" should spring to your mind. Or "motherboard" in hardware.

The concept of "some larger entity that spawns smaller entities" seems to generally lend itself to the mother/daughter terminology if you want to be poetic about it.

That said, whatever happened to artistic liberties?


In Portuguese we also say motherland. I doubt it's a Saxon thing, given Portuguese is a Latin language.


This. I would've written the same by mistake.


No offense meant, but why not get the app?

I understand not wanting an application for a news website or something like that, but for something you use often, like Google Calendar, it would seem like the app would be better than the mobile page.


I do have the app. And that fact makes this double-annoying. When trying to visit a website, I'm told not to do that. That would be annoying on its own, and in fact it was for the first few years that it happened. But that's not at all what is frustrating me right now. What's super annoying is that Google claimed last year that they would penalize websites that do this, because they find it annoying too. Except they have done no such thing. I'm calling out Google's hypocrisy on who gets to show full-page ads without being penalized - Google does and nobody else?

If they want to show full-page ads and be super annoying, then fine, I'll deal with it. But don't pretend to be against it when you do the same practice yourself.


> I do have the app. And that fact makes this double-annoying.

It really just shows the sad state of mobile advertising when they're showing you ads for an app you already have.


On the other hand, I like that websites are unable to query my phone to find out what apps I have installed.


Yes I agree with you, I don't want any random website to be able to query my phone to find out what apps I have installed either.

However, if I'm on a specific app publisher's website, I wouldn't mind letting them know (through some mechanism) that I've already installed their specific app.


Sad state?

How do you expect them to know all the apps installed on your phone? And if they DID know this information, people would be up in arms about privacy, or the lack thereof.


It's google. They know that you downloaded the app from the Play Store and used the app to connect to their servers directly. They already have plenty of information.


Yes, showing you an advertisement for an app you've already installed is bad UI/UX, regardless of the reason why. On the publisher side, it's also a wasted ad impression.

I don't expect them to know all the apps installed on my phone, nor do I think they need that much information to solve this particular problem.


If they don't know it, they shouldn't make assumptions.


Why do you assume they're not being penalized?

Showing that they rank above "timeanddate.com" doesn't mean a lot.


Being ranked #1 in a Google search for 'calendar' does mean a lot. Also, let's say they are penalizing themselves, but the penalty isn't enough to change their ranking. Why, then, would they claim that they are making this change because it's better for users to not have these ads but to still run these ads themselves?

> Our analysis shows that it is not a good search experience and can be frustrating for users because they are expecting to see the content of the Web page.

https://googlewebmastercentral.blogspot.com/2015/09/mobile-f...

This would imply that they know the experience is bad for users, they know that the penalty won't hurt their ranking, and so they will continue to show the bad experience regardless? That's just as bad as them not penalizing themselves for the full page ad.


A penalty that doesn't knock down one of the largest sites on the internet can still be a big deal to everyone else.

I assume different departments run calendar and search.


Not if you want to allow attendees to edit your event.

This is missing from the android app. So you have to browse to the calendar on the web and ... this.


Google DFP allows it because publishers (e.g. the New Yorker) aren't ready to switch all their traffic to HTTPS. If they wanted to, they could flip the switch, go HTTPS, and tell DFP to only serve secure creatives.

One of the larger difficulties for publishers is that many of the 3rd-party SSPs aren't ready to go full HTTPS, and so publishers are reluctant to make the switch because it reduces demand sources.

Disclaimer: Work for Google in advertising


  One of the larger difficulties for publishers is that
  many of the 3rd party SSPs aren't ready to go full HTTPS
Right, but Google can motivate (or help) the third-party advertisers to update much more effectively than publishers can - it's just that Google hasn't chosen to do that yet.

It would be easy to proxy HTTP-only ads through a CDN that adds encryption. Or to charge a premium to HTTP-only ad networks, and ramp the premium up over time.
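A rough sketch of what such a proxy could look like in nginx (the hostnames are placeholders, not a real setup):

  server {
      listen 443 ssl;
      server_name secure-ads.example.com;
      ssl_certificate     /etc/ssl/secure-ads.crt;
      ssl_certificate_key /etc/ssl/secure-ads.key;

      location / {
          # fetch the creative from the HTTP-only ad host and re-serve it over TLS
          proxy_pass http://ads.example.com;
          proxy_set_header Host ads.example.com;
      }
  }

The hop to the ad network stays plaintext, but the publisher's page stops triggering mixed-content warnings.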


How do you think Google can motivate 3rd-party advertisers? I'm talking about the Rubicons, Pubmatics, etc. Google doesn't have any real leverage over them, other than the fact that as publishers do move to SSL (because of the SEO penalty that non-SSL sites have), they won't use the SSPs that don't demand or can't ensure 100% SSL for their buyers. So in effect Google is putting pressure on them to do it.

I don't understand your next part at all. Who would charge a premium to http only networks? Those sites don't actually rely on Google for delivery.


This is also effectively true for the more broadly used Google AdSense (not just DFP). AdSense does support HTTPS pages, but then screens out all non-HTTPS ads, which, of course, results in a lower CPM.[1]

[1]https://support.google.com/adsense/answer/10528?hl=en

>> In short: Google will penalize me because I use Google

+++


To be fair, sites without ads are a better experience than sites with ads.


Sure. I like free things as well.


I dunno. Sites that can't pay the bills tend not to give good experiences, due to not being able to do things. Like exist.


Depends on what the bills are for. Most sites that don't have content-production staff need twenty bucks a month for hosting, plus an occasional prod from a sysadmin.


Are there some types of content where the author, photographer, etc will produce it without direct compensation? Sure.

Are there others where that's not practical? Also, yes. Maybe not things that you need, but this problem does exist.

At the moment, there's not a model, outside of ads, that works very well for that sort of thing. There are some subscription/micropayment schemes that seem promising, but nothing that works as well as ads do.


I didn't say it was supposed to work for all sites, just that st3v3r is ignoring a huge swath of the internet by implying that nonprofessional sites don't exist.


They've been trying to nudge their customers for a while. It's just a little difficult when that's one's biggest source of income.

For example, https://support.google.com/dfp_sb/answer/4515432?hl=en


Also funny, because for many sites that run DFP or Adsense...that's their biggest source of income.

So, G is rationalizing their slow pace with the same reason that's not good enough for others :)


Google is a large company, with multiple branches.

I can't remember which service it was now, but there was some Google service that was deranked because it broke a Google search ranking policy.

It shows some integrity for the company that they're (sort of) operating their search engine objectively.

I presume that google doesn't uprank sites that specifically use Adsense versus other competing ad services?



I have a page hosted on Google Sites. It seems that Google Sites doesn't support HTTPS on custom domains either.


That is interesting. The Ads team is the same group that recommended turning off App Transport Security in iOS 9 so you can run Google's unencrypted ads stack[1]. I'm sure these are two different departments fighting two totally separate wars. I've definitely seen this pattern in huge companies, where one team is trying to push an agenda that forces another team to reshuffle their priorities.

[1] - http://googleadsdeveloper.blogspot.com/2015/08/handling-app-...


It is really hard for DFP to not allow non-secure creatives as long as you can create 3rd-party creatives. They do try to detect non-secure assets though, so they won't run on secure pages. See: https://support.google.com/dfp_premium/answer/4515432?hl=en


I'm jealous that you get to work for the New Yorker website. Any openings?



Yes! Send me an email to discuss: donohoe@newyorker.com


Reminds me of PageSpeed Insights bitching about assets loaded from Google (fonts, scripts, css).


The article title really, really needs an extra word: "Chrome", between "Google" and "Will". At first glance I thought it would be about the search engine, which would be a very disturbing thought indeed; it's already hard enough to find the older, highly informative and friendly sites --- which often are plain HTTP.

Nevertheless, quite convincing security arguments aside, I feel this also has a very authoritarian side to it: they are effectively saying that your site, if it is not given a "stamp of approval" by having a certificate signed by some central group of authorities, is worthless. Since CAs also have the power to revoke certificates, enforced HTTPS makes it easier to censor, control, and manipulate what content on the Web users can access, which I certainly am highly opposed to. I can see the use case for sites like banks' and other institutions which are already centralised, but I don't think such control over the Web in general should be given to these certificate authorities.

With plain HTTP, content can be MITM'd and there won't be much privacy, but it seems to me that getting a CA to revoke a certificate is much easier than trying to block a site (or sites) by other means, and once HTTPS is strongly enforced by browsers, that would be a very effective means of censorship. Thus I find it ironic that the article mentions "repressive government" and "censor information" --- HTTPS conceivably gives more power to the former to do the latter, and this is very much not the "open web" but the centralised, closed web that needs approval from authorities for people to publish their content in.

There's a clear freedom/security tradeoff here, and given what CAs and other institutions in a position of trust have done in the past with their power, I'm not so convinced the security is worth giving up that freedom after all...


Repressive governments somehow convincing all CAs worldwide to refuse to issue certificates for your domain is a pretty distant hypothetical. Repressive governments monitoring and altering unencrypted communications in very sophisticated ways is a reality today. It's not a freedom/security tradeoff but a freedom/freedom tradeoff.

Not to mention that, of course, access to most websites is already gated by a central group of authorities - the domain registries - which can and do seize domains. Using raw IPs is one alternative, but if you're in that kind of position, chances are you want to be a Tor hidden service anyway.


Repressive governments monitoring and altering unencrypted communications in very sophisticated ways is a reality today

They could easily alter encrypted communications to effectively censor too, thanks to the all-or-nothing nature of encryption with authentication. Because by design, the certificate is presented in cleartext, it would be pretty easy to blacklist CAs and then cut off the connection if one of those is detected. Alternatively, whitelist CA(s) [1]. Analysing plaintext takes more computational resources, especially if things like steganography are used.

[1] Related article: https://news.ycombinator.com/item?id=10663843


Regarding the censorship:

It's obvious that censorship by western governments is never considered "censorship".

Only the evil enemy censors, we just have to enforce laws.

If one accepts this argument, it makes sense to argue that giving CAs more power is good — because, obviously, they don't censor, they just protect the interests of our economy.


Blockchain technology (which powers Bitcoin) can easily be used to replace CAs, or provide an alternative which browsers acknowledge, provided enough site owners use it.

And going by the high issuance/maintenance fee the CAs charge for issuing certificates, the industry is a sitting duck for disruption by a Blockchain DNS/CA app.

I, as a site owner, can just sign my 'certificate' myself and put it on the blockchain DNS/CA app. The certificate will have my domain name and public key, and also an additional field, 'ownership sign', which is something like https://<my domain>.com/ownership_sign.pem (which is signed by my private key).

So if I am the true owner, I can self issue as many certificates to myself as I please. Or there could be some forced limitation to prevent any scalability (cough) challenges.

So, the problem you have pointed out is not really with enforcing/encouraging HTTPS, but with the entrenched CA bureaucracy. And I am really surprised that it is not being disrupted already.
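The self-issuing step is already trivial with openssl; roughly this (the names are made up, and the blockchain publication is the part that doesn't exist yet):

  # self-signed certificate carrying the domain name and public key
  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout example.key -out example.crt -subj "/CN=example.com"

  # sign a token with the same private key, to be served at
  # https://example.com/ownership_sign as proof of control
  echo "example.com" | openssl dgst -sha256 -sign example.key -out ownership_sign.bin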


Tor is a tool for circumventing censorship. HTTPS is an important part of using Tor to surf the web: 1) it protects the user from bad exits that could inject malicious javascript into a page and 2) some exits refuse HTTP connections and only allow HTTPS.

Maybe HTTPS makes it easier to censor in theory, but in practice it helps fight censorship by enabling Tor.


There are already points of centralization at the domain registrar and DNS layers.

That was a major (really the major) basis of the fight against SOPA--it would have required ISPs to interfere with DNS resolution as a way of shutting down serial copyright infringers.

And the U.S. federal government can already seize domain names for some reasons.

So, the question is: does the value of pervasive over-the-wire encryption outweigh the risk of additional centralization via CAs? Right now I think it does, but that is in part because I believe that the CA infrastructure itself will improve over time.


What we really need is opportunistic unauthenticated encryption with key pinning as a fallback between CA-signed https and plain http. Beating mass passive snooping is worthwhile even if MITM is still a risk.


The Fenrir project does something like this. They first establish an encrypted connection, and then you can authenticate, or not. The authentication can also be federated.

It's pretty cool, but it's not production-ready.

GNUnet has multiple layers and does bottom-up encryption at the lower levels.


I agree! There is TCPCrypt, for example: http://www.tcpcrypt.org/


Consider this:

- Squarespace doesn't support SSL (other than on their ecommerce checkout pages) [1]

- Weebly only allows it on their $25/mo business plan [2]

- Wordpress.com doesn't support SSL for sites with custom domains [3]

- If you've never experienced the process of requesting, purchasing, and then installing an SSL certificate using a hosting control panel like Plesk or cPanel, let me tell you–it's a nightmare.

All that to say, this is an interesting development that will leave a large % of small business websites with a red mark in their browser.

[1] https://support.squarespace.com/hc/en-us/articles/205815898-...

[2] http://www.weebly.com/pricing

[3] https://en.forums.wordpress.com/topic/support-for-https-for-...


Then maybe those platforms will finally implement it. In any case, there's an alternative: putting Cloudflare in front of the site. In fact, Google shows me a guide to do so when I search for "squarespace ssl".

Of course, that's hardly as secure as end-to-end HTTPS, but still, I trust the path between CF and SquareSpace much more than between the user's browser and SquareSpace.


Please do not put Cloudflare in front of your site. It makes it impossible for tor and VPN users to view your site since they have to solve an impossible captcha to even see the static content.


It's possible to turn off security in the CloudFlare control panel. I think the bigger issue is that CloudFlare has become a single point of interception for MITM'ing huge portions of web traffic.


I'm not sure, but I think CloudFlare will still hit Tor users with (unsolvable) captchas even with the lowest security settings.

But yeah, this NSA slide is extremely relevant to cloudflare: http://cdn01.androidauthority.net/wp-content/uploads/2014/06...


> I think CloudFlare will still hit Tor users with (unsolvable) captchas even with the lowest security settings.

That is correct. I have not been able to get past a Cloudflare captcha over Tor for any website.


I wonder how it's even allowed for CF to do this. "your site" <--- http ---> CF <--- https ---> clients is a poor solution, and it only hides that the connection is actually not secure. Isn't this a misuse of many CAs' TOS, and shouldn't it result in certificate revocation? Maybe I'm wrong though.


Well, at least for several threat models, plaintext on the "your site to CF" hop is slightly better than plaintext on the "CF to clients" hop would be. It's not vulnerable to things like unsecured wi-fi sniffing. Then again, neither are self-signed certs, and for some reason all browsers consider them even worse than no cert.

On the other hand, it doesn't protect against government spying, but then again, I think some governments straight-up MitM HTTPS traffic anyway. For instance:

https://news.ycombinator.com/item?id=10663843


But putting CloudFlare in front of your site makes mass surveillance even easier than it would be for plaintext traffic without CloudFlare.


Yeah, I'm all for SSL shaming, but my personal site on SquareSpace is about to look like shit, and since I'm a web developer, a portfolio shown with a security warning is not a good look.

I wonder if SquareSpace is going to finally fix their shit, or if I'm going to have to move elsewhere, which is going to be a pain (I went with SquareSpace because I didn't want to be assed with dealing with much of anything for a personal site).


No offense intended with this, but, as a web developer, what the heck are you doing creating your site on SquareSpace? Shouldn't you...I dunno...develop your own web site?


No offense intended, but as a web developer, why the heck would I waste my time coding things from scratch, setting up tooling, deployment infrastructure and managing yet another server when I could use a service that does all of that for me? For certain contexts it is far superior. Right tool for the right job, and all that.


Because they don't get paid to build their own stuff. They get paid to do work for clients. A variation on the "shoemaker has no shoes."


With that argument, there should be no marketing departments.


...no, the marketing department's job is to market the company and keep it in the public eye. Perhaps you're thinking of an advertising agency. And I've observed they have the same problem.

If a web developer was large enough to have its own marketing department it could maintain its own site.

In any event, I'm busy developing apps for clients all day - I don't get paid to work on my own stuff so it has lower priority.


I would much rather spend my time creating cool shit to share on GitHub and my site rather than maintain and pay bandwidth, CPU usage, etc bills just so I have a very basic blog and portfolio. Any web developer can write their own blog and website so who cares about that?


Why not just put your SquareSpace site behind Cloudflare? Then you get free SSL.


Good point! I might look into doing just that.


DreamHost now supports Let's Encrypt through their admin panel. The only instructions, however, are a community-maintained wiki page that is already outdated, referring to panel menus that no longer exist. I successfully obtained my certificate, but it was not easy.


Based on your comment, I went over to see if I could obtain a certificate.

It took all of 5 seconds for me to do - it's all automated via the admin panel now. Just tick the box to make your site secure. Looks like DH has resolved any initial issues they had.


Plesk built a Let's Encrypt extension: https://devblog.plesk.com/2015/12/lets-encrypt-plesk/


I've managed to get Let's Encrypt working on a shared hosting environment using letsencrypt-nosudo in less than half an hour from the time I started cloning the repo to finally pasting the cert into cPanel. And every step in that process except the final one of installing the cert using cPanel can be automated.


The --webroot plugin for Let's Encrypt is as easy as pointing it at /var/www.
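Something along these lines (the client name and exact flags have shifted during the beta, but --webroot / -w / -d are the parts that matter):

  letsencrypt certonly --webroot -w /var/www/example.com \
      -d example.com -d www.example.com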


This is how it always should have been.

It was mind boggling that mixed content was "insecure" but HTTP was "secure." HTTP is and always has been insecure and should be marked as such.

I know there are a few people who will moan and groan about how overkill HTTPS is, but this isn't about banning HTTP; it is just about reminding users that they shouldn't be entering sensitive information into an HTTP site.

Even phishing sites should be DV secure.


Not only that, but encrypted content from an unverified source is a fucking sin for every browser out there, while unencrypted content from an unverified source is fine.

Go figure.


HTTP was never marked as secure.

Mixed content was marked insecure because there were assets on the page that might not be from where you think they were from. It was an indicator that the little https lock in the URL bar wasn't telling you the whole story.


I think this is at the core of Google's thinking on this: unless presented with a negative, users' assumptions are that they're secure.

Which is fair, given that I bet you'd get about a 5% or less recognition rate if you polled a random sampling of people on whether they could define "HTTPS" / "SSL" / "TLS" / "That lock thingie" to any degree of accuracy.

A server shouldn't have the opportunity to serve an insecure connection to the user without the user being made explicitly aware of that fact.


Mixed content is insecure mostly because of active content, to be honest. Most people don't care about passive mixed content, but of course you can inject a fake banner if you are able to MITM. You'd rather have JavaScript coming from HTTPS than from HTTP. HTTP itself is insecure, but that doesn't mean every website has to be over HTTPS. However, given that HTTPS is cheaper to deploy now, it should be encouraged. Do I really need HTTPS to show an album of cat photos I share with the world? No. But I use it anyway.

However, the biggest challenge is that internal traffic is almost always over HTTP, and the reason is almost always "because a self-signed cert is invalid." In some ways this is OK-ish, since internal traffic stays on a private network, but now that we have a proper toolset like Let's Encrypt available, more people should consider deploying full SSL for internal traffic as well. At this point, the toolchain to actually make Let's Encrypt simple and useful is still, ugh, a little hackish. A cron job here and there. A somewhat complicated process to get started...


>It was mind boggling that mixed content was "insecure" but HTTP was "secure." HTTP is and always has been insecure and should be marked as such.

Why is it mind boggling?

Content served over HTTP is obviously less sensitive than content served over HTTPS; mixed content breaks HTTPS.


No, "obviously less" is wrong. Here is an example: a login over HTTP, deliberate because the site doesn't support HTTPS, is definitely not less sensitive.


How is it so hard to understand that it's not just about your information? An attacker can easily inject new elements that incentivise users to enter information, or that help identify them.


Of course, but there's a general expectation that stuff served over HTTP isn't sensitive.

Breaking HTTPS where it's deliberately used is something that certainly deserves a warning.


That's true, but I think at some point HTTP should go away. The deprecation should happen. I think we need to get to a state where HTTPS is HTTP and there is no "HTTPS" at all. Everyone can easily get a free certificate, and commercial sites can spend hundreds if they want to "prove" more. Like I said in another comment, I don't see a problem with sharing cat photos over HTTP. But if possible, HTTPS is definitely not going to hurt. But given that most sites are HTTP, yes, it's probably going to hurt ranking. Old websites running on old CMSes won't be able to upgrade easily. Similarly, no one should be running FTP. It should be SFTP, but setting up SFTP is a pain in the ass with chroot and all that. Technology really needs to be made simpler. Speaking from an ops standpoint.


> Of course, but there's a general expectation that stuff served over HTTP isn't sensitive.

For us, sure. For the other 95% of the population, not really, which is why Google is doing this


Yeah, but I'm just explaining why warnings for mixed content are more important than for plain HTTP.

I'm absolutely not arguing against such warnings for HTTP.


Sounds good. I wonder when Google Cloud Storage will start supporting https on static websites hosted through them:

https://cloud.google.com/storage/docs/website-configuration?...

If they don't then they're not keeping up with hosting on Amazon's S3, which does support it.


Similar (very similar, since App Engine static files are served from GCS) is to write a "python" App Engine yaml file that serves only the static content with secure: always.
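A minimal sketch of that app.yaml, assuming the files live in a static/ directory (paths are illustrative):

  runtime: python27
  api_version: 1
  threadsafe: true

  handlers:
  - url: /(.*)
    static_files: static/\1
    upload: static/.*
    secure: always

The secure: always part is what forces every request onto HTTPS.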


Google should offer stupid SSL certificates either for free or for $1/yr.

Perhaps at least to customers of Google Domains. I wouldn't mind switching from Namecheap to Google Domains in the latter case.


They are a platinum sponsor of Letsencrypt, so...done?


That doesn't mean anything other than "we like the idea, you convinced us, we have some budget, we will sponsor it in some way with money and human resources."


It also means they quite literally at least assist with offering free SSL certificates.


If you're interested in more direct support, please star my ticket[1], it's likely that the same functionality would work for the https loadbalancer as well.

[1] https://code.google.com/p/googleappengine/issues/detail?id=1...


Money and human resources are what make up a company. They give that, and they put their name behind it in support. What else could they do?


Er, isn't that how it'd work internally too?


It means they are supporting a project that provides free SSL certificates. Which more than solves great-grandparent's quip.


Getting a Google domain means giving up getting new features from Google :( (pauses to clean up bitterness)


No, it does not. Signing up for google apps and choosing to use the google apps account as your primary google user account causes you to get new features on a delayed schedule.

You can get a domain through google without switching your google identity to it. You can also sign up for google apps on a non-google domain. google domains and google apps are not the same thing.


On Google Apps there are features that were deployed years ago for regular accounts and are still not available to Google Apps customers.

The one missing feature that was painful for me was Contacts photos with a resolution higher than 96x96 pixels. On a latest-generation Android with a good screen it sucks, and I would have preferred it if Contacts photos weren't synchronized at all. I ended up switching to a CardDAV provider, and in the end I gave up on Google Apps for other reasons as well. And for the record, Google accounts had this resolution increased in 2012 ;-)


Hold on. Are you saying that that's why Android's Contacts keeps resizing all the photos to like 4x4px? Wow, I've tried everything I could find on Google but it never crossed my mind that it would be related to Google Apps. Thanks so much!


TIL, though many people with a domain are going to want their accounts through it.

That said "delayed schedule" above means 1+ years.


Note that by default, Google apps stuff is on a delayed schedule compared to the general public, but you can go into your google apps profile and change to the "Rapid Release" feature deployment, which means you get stuff as soon as the general public does.


In the hopes that it will help spread adoption of HTTPS, I wrote a web server that serves your sites over HTTPS by default, using Let's Encrypt: https://caddyserver.com - It also redirects HTTP -> HTTPS.[1]

There's a lot of misinformation out there about certificates and HTTPS, but don't let it stop you from encrypting your site. Regardless of Google's move, there is no excuse for any site not to be served encrypted anymore.

[1] Here's a 30s demo: https://www.youtube.com/watch?v=nk4EWHvvZtI
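For reference, the entire Caddyfile for a basic static site is roughly this (domain and path are placeholders; given a real DNS name and ports 80/443 reachable, the Let's Encrypt certificate is obtained and renewed automatically):

  example.com {
      root /var/www/example.com
  }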


This is an awkward argument. One of my sites documents how to configure servers, for example. What excuse is there that something like that needs to be encrypted?

The most legitimate reason I've heard is for privacy. I don't believe the gov't is going to lock someone up for learning how to serve web pages.


Integrity protection. There are a lot of ways to instruct someone to configure their web server in a way that is subtly insecure, not to mention attacks like http://thejh.net/misc/website-terminal-copy-paste

It'd be slightly nice if we were able to have integrity-protected HTTP without encryption (lower overhead, easier debugging with packet dumps), but the advantages are minimal (ciphers are not really the overhead, SSLKEYLOGFILE is a thing) and it's a lot of complexity to the web platform, which is a downside for web developers like you and me: the rules for mixed content between HTTP, HTTPI, and HTTPS are going to be much more involved and confusing.


You can already send unencrypted authenticated data with HTTPS.


Via one of the NULL-cipher suites? That's a somewhat expansive definition of "can" and "HTTPS," since most if not all browsers are unwilling to negotiate any of those suites. Indeed, most SSL libraries make it hard to use those suites: for instance, OpenSSL says (`man ciphers`), "Because these offer no encryption at all and are a security risk they are disabled unless explicitly included."

Which makes sense, since they'd have the exact same problems as an explicit HTTPI protocol, just even more confusing: you'd want to not send things like secure cookies across those ciphers, you'd have to handle mixed content with actual-HTTPS carefully, etc.
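For the curious, you can list which suites those are; they only show up when asked for by name:

  # the authenticated-but-unencrypted (NULL) suites OpenSSL knows about
  openssl ciphers -v 'eNULL'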


Using HTTP instead of HTTPS allows an evil ISP to inject, for example, ads into your website, or to modify its content in any way while serving it.


Keep in mind you're also ensuring the integrity of the document, and the user has (to some degree) a good idea that the document is actually from you. Confidentiality is only one aspect. I think a couple of ISPs in the US were injecting ads/content into pages served over HTTP at one point.


Consider Tor: in this case, your "ISP" is a random server on the internet. Maybe your Comcast or TimeWarner ISPs will not be malicious, but with Tor, any one in the world can register to be an exit node/ISP. HTTPS helps protect you from attacks in this "random ISP" model.


>I don't believe the gov't is going to lock someone up for learning how to serve web pages.

That's essentially the same as not locking your car doors because you feel your car isn't worth breaking into.


Sorry, but I actually can't load the website because of an HTTPS error (Firefox 43/Linux) (Error code: sec_error_ocsp_old_response).


I just downloaded and installed Firefox 44 today and it works great. Clear your cache?


So I've updated to Firefox 44 and it does work, but it seems broken on Firefox 43.* on both of my computers (work and personal). You might want to have a look at it, since 43.* is quite a recent version. (I'm not the one who downvoted you.)


Here's a good excuse for not using https for everything: it breaks caching of files by proxies!


Right. So what's the solution? I run my wife's retail website. Am I supposed to just stop worrying about caching static assets like product images, scripts, etc.? Do I just throw my hands in the air and assume it evens out because I switched to HTTPS?

Serious question, what are my options?


Do you run the cache / contract with someone to run the cache, or are you worried about third parties who run caching servers out of your control (like mobile ISPs, corporate networks, etc.)? If the latter, I'm surprised/curious what the use case is.

If the former, you can stick those on HTTPS too just fine. CloudFlare will be an entire SSL-enabled CDN for you for free. Amazon Cloudfront will serve SSL for you for free (though you still have to pay for Cloudfront itself, and get a cert on your own, though you can do that for free).


Amazon Certificate Manager will issue certificates for CloudFront for free.


* Ensure your server is setting ETags correctly so the clients can determine which assets they need to re-request (rough nginx sketch below).

* Make use of edge CDNs with HTTPS termination.
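Rough nginx sketch of the first point (paths and lifetimes are only examples):

  location /static/ {
      etag on;              # on by default in recent nginx; shown for clarity
      expires 30d;          # long client-side cache for static assets
      add_header Cache-Control "public";
  }

Browsers, and any CDN you control, can still cache those responses over HTTPS; what you lose is only the transparent third-party proxies you never controlled in the first place.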


Turns out my CDN supports HTTPS (using cloudinary), so that's good. Thanks for the ETag reminder, I'm not doing that yet.


That's what CDNs are for. If you control your proxy, nothing prevents you from giving it access to your HTTPS traffic by setting up your private key on it. HTTPS is also tappable, but only by servers you trust.


The more privacy-conscious amongst us probably consider that a positive reason not a negative one...


I tried out caddyserver about an hour ago, and the ease of use is awesome. Had it serving my domain with a letsencrypt auto-generated cert in 2 minutes from never having looked at the caddy docs before.


> Regardless of Google's move, there is no excuse for any site not to be served encrypted anymore.

Honest question: are you willing to indemnify your users when the next Heartbleed-like attack comes out for the underlying SSL library you are using in your product?

If you are willing to do that, and will offer me a no-cost wildcard domain certificate, I will switch to your product and start using HTTPS.


Why do we have to go through this whole SSL certificates thing and can't just have a simple, automatically secure, I-do-nothing-and-my-website-is-secure protocol?

Seriously though. If secure is the default from now on, why can't it actually be the default?


Isn't that what Let's Encrypt is aiming for? Install a package, which configures a cronjob for you?

https://letsencrypt.org/howitworks/

Which could just even become a default but optional dependency of your distro's web server package, or part of your Docker container, or whatever.


Ok I'm new to this and I know it's still beta, but it seems:

1. Still WAY too complicated (look at all the stuff you have to know and type)

2. It doesn't seem to support my preferred OS (Windows) or web server (IIS) whatsoever. Which is strange since, in my experience, installing certs in IIS is already far easier than in Apache or Nginx. (Although maybe that's why they perceive it as less of a priority?)


Hi, I think the IIS support effort that's furthest along is described at https://community.letsencrypt.org/t/how-letsencrypt-work-for... ; maybe that will be useful for you if you want to try Let's Encrypt on your IIS system.

We've had hundreds of people remark that they found Let's Encrypt faster and easier to use than other CA offerings (though most of those people were using Apache on Debian-based systems), so I think we are getting somewhere. But we definitely hope that upstream web server projects and hosting environments will integrate ACME clients of their own, like Caddy has done, so that eventually most people won't need to run an external client at all and won't have to worry about compatibility or integration problems.


You have a ";" at the end of your URL which breaks it.

https://community.letsencrypt.org/t/how-letsencrypt-work-for...


Thanks, edited.


> 1. Still WAY too complicated (look at all the stuff you have to know and type)

The website mentions at the bottom that they're intending to get all of this automated, but they're not at that point yet; they're still in public beta. Certainly all those commands look automatable, just with enough integration with lots of distros / web servers, testing, and debugging. The Let's Encrypt protocol (ACME) is very much designed so that a web server can acquire a certificate with just about no human interaction besides telling it to do so, and keep it up-to-date with no human interaction.

I certainly agree that the instructions on that website are still way too complicated for general use, though far, far simpler than the status quo ante Let's Encrypt.


> 1. Still WAY too complicated (look at all the stuff you have to know and type)

I didn't realise that people getting SSL certs and administering servers don't know how to read a literally one-page rundown of what to run. They also have helper scripts to make it much simpler.

> Which is strange since, from my experience, installing certs in IIS is already far easier than in Apache and Nginx. (Although maybe that's why they perceive it as less of a priority?)

nginx literally takes less than 10 minutes to set up not only SSL, but also CSP and several other very important security features.


I tried to set up LE for my personal bunch of websites, but sadly the rate-limiting is still too strict for automation to be a viable option.


Huh, the rate limits look pretty generous (500 certs every 3 hours): https://community.letsencrypt.org/t/rate-limits-for-lets-enc...

Do you actually own hundreds of personal websites? (And you could still desync them, anyway.) Or is this a use case where wildcards would be useful. I sort of disagree with LE's decision to not care about wildcards for now, though I understand that it's simpler, at least while it's in beta.


That's per IP, you're also limited to 5 requests per domain name per week. In my case, I have a bunch of subdomains for various stuff that all counts against the limit for the main website. I suppose I ought to combine the CSRs, but implementing that makes it a bit more complex than just automatically requesting a certificate per nginx vhost.


Oh, that's pretty rough.

Still, with enough automation, you can request 5 per week in a cronjob, which will let you get at least 40-something websites, even with the recommended 60-day renewal cycle. :-P


>you're also limited to 5 requests per domain name per week

Huh, I'm pretty sure I used more than that when I was first setting it up with no problems.


To quote the website:

> Certificates/Domain you could run into through repeated re-issuance. This limit measures certificates issued for a given combination of Public Suffix + Domain (a "registered domain"). This is limited to 5 certificates per domain per week.


If SAN certificates make sense for your setup (i.e. all used on the same server or for the same service), you can have up to 100 (sub)domains on one certificate, or basically 500 per week.

Maybe that's how you managed to get more than 5.


I did a bunch of requests starting with one subdomain, then a second, adding SANs multiple times, setting a cron to do one request a month and testing it, then adding yet one more SAN to the list.


Let's Encrypt is awesome but you still need to have root access to the machine. I host my stuff on a shared 1&1 node and I can't seem to find any way to add SSL to my websites without having to pay them.

(Yes I should move to another host but that is too much hassle for me right now.)


The ACME protocol is open and as a result there are several alternative clients which do not require root. Here's a few:

https://github.com/diafygi/letsencrypt-nosudo

https://github.com/kuba/simp_le

https://github.com/lukas2511/letsencrypt.sh

Or you can go a more manual approach via https://gethttpsforfree.com/ but you will need to manually renew your certificate every 90 days.


If apache and nginx follow along the lines of Caddy[1], we might.

[1] https://caddyserver.com/


I tried Caddy the other day and was pretty impressed. It's a single binary, it automatically installed a Let's Encrypt cert for itself and it had a bunch of other nice features.

I'm not going to switch production to it yet, but it's looking like it'll go on my home server pretty soon.


That's... impressive. I'm going to mess around with this over my weekend - thanks for sharing!


Seriously this. I don't see why encryption and website verification have been wrapped up in the same thing (SSL certs). They're two different things. Encryption should be free, automatic and default.


If you don't have a way to confirm that the key you're seeing from the other site is right, you're inherently vulnerable to a man-in-the-middle attack which removes the benefits of the encryption against the attacker.

https://en.wikipedia.org/wiki/Man-in-the-middle_attack

httpS://en.wikipedia.org/wiki/Zooko's_triangle

It's not clear that the certificate authority system was or is the best solution to this problem, but it is a problem that calls for some solution. In the case of Domain Validation, we only try to confirm that the key is appropriate to use with the domain name, which is the smallest possible kind of confirmation that can be done to address the crypto problem. There's no attempt to validate or verify anything else about the site.


However, having one and not the other isn't totally useless.

Having the browser be able to track and tell me that "Though we aren't sure this is actually google.com, we do know that the exact same cert has been used the last 50 times you visited this website" is something I'd consider to be useful. (Actually, telling me if it changes would be the useful bit).

That would at least be useful for self-signed certs (though those aren't really needed in light of Let's Encrypt...)
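That kind of tracking is easy to prototype by hand; this is the sort of value such a client would remember (example.com as a stand-in):

  # grab the cert a site currently presents and take its SHA-256 fingerprint
  openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -fingerprint -sha256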


> (Actually, telling me if it changes would be the useful bit).

I'm curious. Has anyone ever encountered that scary warning you get when an SSH host key changes, and thought "oh man, I'm getting MITMed, I'd better not connect to this server!", instead of thinking "oh right, I guess they reconfigured the server, now what command do I type to make the warning go away"?


I have. Usually it's because I reconfigured the server, but I am ultra-paranoid. Most people don't care, but I would expect sysadmins to. And who else should be logging in with SSH?


I've never thought it was likely to be an attack, but I always thought it was my responsibility to check why it changed or at the very least confirm it looked the same via a separate network path.


Take a look at google's certs. They're only valid for a few months. A system that tracks certificates encourages site operators to share the same cert and key across many servers and to allow it to live for a long time. With the sorry state of certificate revocation this is not ideal.

On the server side it's better for each server to have its own private key and certificate which is valid for a short period of time and frequently renewed. So the compromise of one server does not compromise certificates on any other servers, and the useful lifetime of a compromised key is very limited.

I think DNSSEC and DANE are the best solution. Allow the certificate thumbprints to be published securely in DNS. At least then we reduce the number of trusted authorities to the TLDs, and the scope of authority for each one is automatically restricted to its own TLD.
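For reference, the DANE side is just a DNS record, roughly (the digest field is a placeholder):

  ; TLSA for HTTPS on example.com: 3 = DANE-EE (pin this end-entity key),
  ; 1 = match on SubjectPublicKeyInfo, 1 = SHA-256 digest
  _443._tcp.example.com. IN TLSA 3 1 1 <hex SHA-256 of the server's public key>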


> However, having one and not the other isn't totally useless.

> Having the browser be able to track and tell me that "Though we aren't sure this is actually google.com, we do know that the exact same cert has been used the last 50 times you visited this website" is something I'd consider to be useful. (Actually, telling me if it changes would be the useful bit).

Isn't that what you do when you make a security exception for a self-signed certificate? Having that enabled by default lulls people into a false sense of security.


It'd be nice if self-signed certs were compatible with vanilla HTTP. Then no warning or complaint from the browser, but a minimal security boost over naked transmission.


> Seriously this. I don't see why encryption and website verification have been wrapped up in the same thing (SSL certs). They're two different things. Encryption should be free, automatic and default.

Because you have to do DH and all of the key negotiation anyway (at which point you already have a key, so why not encrypt and HMAC at the same time?). If you had two systems for this, it would be pointlessly inefficient (why have two DH key exchanges for the same channel?).


Because without a trust anchor (a certificate), encryption is pretty much worthless against MitM attacks.

You need a way to verify that the site you're connecting to really is who it claims to be before you can trust even an encrypted connection to that site. Otherwise you don't know whether you just established an encrypted connection to the website, or an encrypted connection to a malicious attacker.


SSL/TLS actually does support unverified encryption, but browsers have decided to disable it because the UI for "encrypted but non-verified" is deemed too confusing for users.

See eg https://bugzilla.mozilla.org/show_bug.cgi?id=220240#c6


Because you need to create a public key for the browser to use.


SSH gets this right -- create a host key when the server is installed, and have the client check the key and only warn/error when it changes. Sure, this isn't super-secure for first time visitors to their banking website or whatever, but those websites can continue to use the current system.
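The out-of-band check SSH expects of you is a one-liner on each end, for what it's worth (the host key filename varies with the key type):

  # on the server: print the host key fingerprint so it can be published somewhere trusted
  ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub

  # on first connect, the client prompt shows a fingerprint; it should match the one above
  ssh user@server.example.com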


SSH doesn't get this right. It's no better than a (auto-pinned) self-signed cert, in our world.

I challenge everyone to find in their extended group of friends and colleagues, and their friends and colleagues, a single person who consistently checks the fingerprint* on every first SSH connection.

I'd personally have a hard time finding someone who even knows it matters.

And if you don't? A MITM can get your password, or tunnel your key to another host, barring some crazy ~/.ssh/config which nobody has.

WiFi's WPA2 actually does this better than SSH; the passphrase authenticates both parties to each other, not just one way. I can't set up a hotspot with your home SSID and intercept your PSK---even on the initial connection.

SSH: nice in a cryptographic utopia, not better than self signed SSL certs when applied to human beings.

SSH is just not suitable for humans. Apparently.

* a significant part of it, not just the security-through-obscurity random 2 letters in the middle and the last four.


Still, there's a difference between being less than 100% secure and being a totally useless feature.

Being able to make the statement "Either you've been consistently MitM'ed by the same entity for the past three years, or your little cloud-based Debian box is actually secure" is a lot more useful than not tracking SSH fingerprints at all. I certainly wish my browser would track my self-signed certs in this way.


-o VisualHostKey=yes


A band-aid, I'm afraid.

Without going into the question of how many bits of entropy that actually has when used with human beings in real settings, and just assuming it's a perfect check, my question stands: how many people can you find who use this?

Many SSH clients don't even support it, at all. PuTTY and almost anything that uses SSH for tunneling.

When they do: how many of your hosts do you know the image of?

Again: nice idea, but utterly impotent in our universe.

Compare to the efficiency of e.g. WPA2 keys: less theoretically beautiful, but much more effective with humans.


>Without going into the question of how many bits of entropy that actually has when used by human beings in real settings (let's just assume it's a perfect check), my question stands: how many people can you find who use this?

Probably not very many, but it's really only useful for people that ignore basic security features anyway. (Key auth)

>When they do: how many of your hosts do you know the image of?

None, I use key auth like any reasonable person would.


Does key auth protect you from a MITM on the first connection?

That is, key auth as reasonable people use it, as you said.

And this:

> but it's really only useful for people that ignore basic security features anyway. (Key auth)

is precisely the point: that's a lot of people. SSH doesn't work for those people. We can play the blame game, but at the end of the day, clearly something is "not right".

And these are people who use SSH to begin with. Not typically technologically illiterate, I would guess. If they can't even be arsed to use "basic security features", what good is this system, then?

Again: there is a way to use SSH properly, yes. But rare is the person who does this.

(But key auth is orthogonal to host fingerprinting anyway, this is kind of a red herring)


>Does key auth protect you from a MITM on the first connection?

Yes. Key auth will protect you from your SSH connection being listened to, and will make credential theft reliant on social engineering. However, someone could still pretend to be the server (potentially stealing your commands), but there really doesn't exist any way to solve that.

>is precisely the point: that's a lot of people. SSH doesn't work for those people. We can play the blame game, but at the end of the day, clearly something is "not right".

Nothing works for those people, at least generally with SSH users you can assume that they should know better.

>Again: there is a way to use SSH properly, yes. But rare is the person who does this.

I'd hardly consider SSH key auth users rare.

>(But key auth is orthogonal to host fingerprinting anyway, this is kind of a red herring)

But it almost completely fixes the main problem caused by MitM, someone gaining access to the server you're logging into.


SSH gets this right

No, it doesn't.

When was the last time you verified a host key out of band?

And if you're using SSH, you know well enough to know why you should do the damn legwork to verify the key. What do you expect for end users?

Furthermore, if nobody is doing out of band verification on the first pass, how do you expect users to distinguish between an attack and legit host key change?
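For reference, out-of-band verification just means getting the fingerprint over a channel you already trust (console, provider dashboard, config management) and comparing it to what the client prints on first connect. A rough sketch; the key path depends on the server's configuration:

    # run on the server via a trusted channel, then compare with the client's first-connect prompt
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub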


As I said above: Sure, this isn't super-secure for first time visitors to their banking website. But it's fine for the common case where someone tries to MITM you when you move from your home to a coffee shop or vice versa and you're just browsing sites that would otherwise be using http.


But the worst-case scenario with SSH MitM isn't someone being able to eavesdrop on your connection; it's someone pretending to be the server, which is hardly as serious. (Unless you're using password auth, in which case you deserve to get owned.)


If someone impersonates your server, it can then pass the authentication request to the original, and gain full MITM without your knowledge. Yes, even if you use public key auth.


I will personally pay you the sum of 500 Bitcoins if you can demonstrate a realistic active MitM attack on OpenSSH that allows an active network level attacker to "pass the authentication request to the original" and gain full MitM.

Conditions:

Public key authentication must be used for authentication.

If it's possible to perform the attack passively (e.g. on pcaps), it doesn't qualify.

This attack has to affect setups using both the latest OpenSSH client and server with default configuration.

This attack has to be able to be performed in realtime using the processing power of a 2015 macbook model of your choosing.

This attack cannot rely on attacker having any other access but the ability to tamper with the connection however much he wants.

This attack cannot rely on known flaws in the encryption algorithms.

With full MitM I am referring to the ability to at least access the plaintext communications between the client and server, e.g. if the user runs 'sudo', the ability to see the password entered.

Please consider this offer legally binding, if you have any questions I will answer them and you can consider the answers binding too.

Good luck.


You're exempting the obvious "MITM on initial connection" attack, right?


Is there a strategic business reason for this on Google's part other than a safer web is better for all? I don't doubt that a more secure web is better for everyone, I'm just more curious about the business drivers of this from their perspective.

The reason I'm wondering is because with AMP, there seems to be a clear strategic benefit from having all of that ad serving data running through them even if the advertisers and publishers are not using the DoubleClick stack or Google Analytics.

By bringing this to market from the standpoint of "improving" the mess publishers have brought upon themselves and speeding everything up, there's definitely a clear win for consumers here. That said, it leaves the door open for something similar to the mobilepocalypse, where Google updated their ranking signals on mobile to significantly favor mobile-friendly sites. I could easily see this going a similar route where it is a suggestion...until it's not, because if you don't implement it you'll lose rankings and revenue (and coincidentally feed Google all of your ad serving data in the process).

To be clear, I don't knock them for taking this approach, because if it works it is a very smart business move that will be beneficial to a lot of parties (not just Google). Just looking for other insights into the business strategy behind something like pushing for encryption, and AMP.


> Is there a strategic business reason for this on Google's part other than a safer web is better for all?

The two common reasons for MitM are spying and inserting/replacing advertisements. The latter is stealing from Google, so they want to stop it before it grows too common.


We can only wonder how long it will be until Google starts openly advertising and buying newspaper articles against that new ad-replacing browser.


For Google, it’s not just about providing a secure environment and secure websites. In fact, Google actually has a monetary incentive to get as many websites to move over to HTTPS as possible: convincing website owners to move to HTTPS will help get rid of competing ad networks.


How does it get rid of competing ad networks? Does Google have a monopoly on serving ads over HTTPS?


It means your internet provider can't inject ads or profile you based on the content of the sites that you visit. Comcast, AT&T, and Verizon have all done similar: https://certsimple.com/blog/ssl-why-do-i-need-it#4-not-havin...


Sure they can. Your ISP can easily MitM you.


Not without throwing cert errors on every site I visit.

The only way they can MITM me is if they compromise my PC as well and install their root CA.


To connect to the internet you must install comcast internet-enhancing-certificate. It's the only way to make all websites secure by default™

No reason to compromise when you can force the user.


Ah. True, my mistake.


... or rather get an intermediate certificate from one of the umpteen root CAs your operating system embeds by default.

Is VeriSign going to refuse a certificate to AT&T?


Verisign will happily issue a certificate to AT&T for a domain that AT&T controls.

Verisign will not issue a certificate to AT&T for google.com--no matter how nicely AT&T asks.


Yes, and furthermore there's a very good reason to believe that this claim is true: as soon as they do, every copy of Chrome behind AT&T's network will go and snitch to Google, who will promptly investigate and get Verisign in deep trouble.

Here's what happened when Symantec issued fake Google certificates last year:

https://googleonlinesecurity.blogspot.com/2015/09/improved-d...

https://googleonlinesecurity.blogspot.com/2015/10/sustaining...

"Therefore we are firstly going to require that as of June 1st, 2016, all certificates issued by Symantec itself will be required to support Certificate Transparency. After this date, certificates newly issued by Symantec that do not conform to the Chromium Certificate Transparency policy may result in [annoying certificate warnings, just like self-signed certs]."

And that was just the work of a couple of employees who were inappropriately testing their issuance system and weren't even intending to attack anything. They got fired, which I expect is also a big part of why Google's response was so light.

http://www.symantec.com/connect/blogs/tough-day-leaders


>Is VeriSign going to refuse a certificate to AT&T?

I certainly hope so.


For one, IIRC it kills referer headers, so search engines/ad networks can't build out a graph of where a user was prior. Google, OTOH, sends the majority of the traffic, and its reach in ads allows it to fill in the gaps better than any other network.


HTTPS does not kill referrer or referer headers. See https://referer.rustybrick.com/


..so why are all of the search terms suddenly gone from Google search referer headers? Which happened at the same time Google defaulted to HTTPS?


They stopped linking search results directly to the webpages. You have no Google search referrer headers in your logs/analytics any more.

When the SERP loads, all the results link to the real webpages, so that you see their address in the browser status bar when hovering over a link. Clicking any result link triggers a script that replaces the URL with https://google.com/url?url=the_real_webpage_url.

When you click through, you're clicking a link from google.com to another link on google.com, which redirects to the webpage you intended to visit. The referrer the webpage sees is the intermediate google.com/url page, instead of the search result page. This prevents websites from getting search term data from the SERP URL, if it was present, by removing that URL from referrer headers entirely.


> ..so why are all of the search terms suddenly gone from Google search referer headers? Which happened at the same time Google defaulted to HTTPS?

Not related to HTTPS at all. This happened completely independently. It happened because Google went from having search URLs like this

    https://google.com/?q=term
To

    https://google.com/#q=term
And anything after the anchor mark is never in your referer. Effectively this means that the only tool on the planet that knows what people searched for before entering your site is... Google Analytics.

As a website owner you're basically being coerced into letting Google snoop on your users, at least if you want to know how they entered your site. And the fact of the matter is that most (all?) companies are willing to make that trade-off.

All in all pretty sad and very creepy.


At that time they started using horribly annoying redirects.


Did you read the page I linked to? My referer was https://encrypted.google.com/search?hl=en&q=What%20Is%20My%2...


Uh, the referrer is the page you came from. So if he opened the page on HN, then he wouldn't get the Google referrer, he'd get a page off HN.


If you read the page I linked to, it instructs you to search to try it out for yourself to see that search results are not stripped from the referrer.


I think it's pretty funny that on the HN front page right now is a NYTimes article from the company's Google beat reporter about how trying to interview Larry Page is "emasculating" and then this announcement is accompanied by an image "shaming" the NYTimes web site for being unencrypted.

As to the feature itself, I don't think it's a big deal at all. We all know that the average internet denizen doesn't understand HTTPS at all and would just as likely ignore it as anything. The only people that would see and understand this new red X for what it represents would know that it doesn't really matter that the lolcat meme they just downloaded came through an unsecured channel.


I work for a SaaS company, we absolutely have customers who email us complaining about putting credit cards in a page served over http.


Certainly, and I would be one of them. I'm not saying nobody does care or that nobody should, only that enough people don't care enough to make this "red X of shame" that shameful, really.

Chrome and Firefox have both had to take extreme measures for very similar things, such as web sites using expired (or even unvalidated/spoofed) SSL certificates. Google even reported that using a giant red page with warning labels didn't stop people from clicking through!


Right, and I guess I meant to imply that it is some of the unwashed non-elite masses that notice that stuff. Our product is for people who are bad at software and want an easier way to do task X, but they still know to look for the green lock. I don't have strong data but I'd just say-- don't underestimate the web knowledge of people who are mostly making cat pictures.


Main problem still is Google: They consider HTTPS and HTTP links the same. When switching a site to HTTPS you lose all your incoming links. Redirects only transfer a small amount of juice. You're toast.

We tried migrating several times to HTTPS only, every time got a huge penalty from Google.

So Google is the main driver for HTTP websites.


> They consider HTTPS and HTTP links the same. When switching a site to HTTPS you lose all your incoming links.

Do you mean that they don't consider HTTPS and HTTP the same? Otherwise, I don't understand your point here.


You're right.


Just enabled Chrome to show the little crosses by default for http:// and I already like having this showing. If you wish to be an early adopter go to:

chrome://flags/#mark-non-secure-as

It is good to see how sites that matter are mostly https:// already for me. The http:// tabs I have open such as this article actually are insecure when you think about the amount of trackers on them, so the 'x' is very apt.


They should do something useful for the web and remove most if not all the current root certificates. There are so many places that have what is essentially a master key to the internet - and that master key is only going to be more important as more and more sites become SSL.


So is there already a solution for https on Github Pages with a custom domain?


Stumbled upon Kloudsec here on HN a couple of days ago [1] and gave it a go. The dashboard is a bit clunky, where you kinda have to figure out what to do, but HTTPS works without needing to move the DNS to them, as in the case of Cloudflare (which costs $20 when moving from Gandi).

Basically: register an account, enter your domain, and update your DNS records with an A record (replacing the Github Pages IP) and a TXT record (for verification).

While the change in DNS took a couple of minutes on Gandi, Kloudsec DNS took an hour or two to register the change. After that, you go into the "Security plugin" and enable it. If you're using an apex domain, you can remove the www. HTTPS request, since you won't get the cert for that (if you do have an apex domain then you probably know about the CNAME trick on Pages, unless your DNS provider supports ANAME or ALIAS records for the apex domain - Gandi doesn't). It took a couple of hours again to get the cert.

When it's done, click on the "Settings" cog icon for the desired HTTPS domain and enable HTTP -> HTTPS redirect and HTTPS rewrite, and then you're set.

[1] https://kloudsec.com


I'm not sure what you mean about Gandi charging $20 to move the DNS to Cloudflare? I'm using Cloudflare to add HTTPS to a website on a domain registered with Gandi, and it hasn't cost me anything above the usual domain registration fee.


Hm yeah, after reviewing the transfer policies, I guess I mistakenly thought the price for transfer TO Gandi stands for transfer FROM as well. It's still a bit more hassle than what Kloudsec offers.


Check out netlify (https://www.netlify.com) - we're like GitHub Pages on steroids (integrated continuous deployment, proxying, redirect and rewrite rules + lots of other features) and we launched free SSL on custom domains a couple of weeks ago :)


In addition to CloudFlare, you can also use AWS CloudFront for this. We just implemented this to get https working on our custom-domain Github Pages site [1] this week.

You first have to upload your SSL certificate to AWS IAM [2] (you only have to do this once, or you can just purchase your certificate from the AWS console now too). Then, all you have to do is create a new CloudFront distribution and point the origin to your subdomain.github.io URL and select your SSL certificate from the drop-down, then point your CNAME record to the CloudFront distribution.

[1] https://os.alfajango.com/

[2] https://bryce.fisher-fleig.org/blog/setting-up-ssl-on-aws-cl...
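If you prefer the CLI over the console, the IAM upload might look roughly like this; the certificate name and file paths are placeholders, and the /cloudfront/ path is what makes the cert visible to CloudFront:

    aws iam upload-server-certificate \
      --server-certificate-name example-com \
      --certificate-body file://example_com.crt \
      --private-key file://example_com.key \
      --certificate-chain file://chain.pem \
      --path /cloudfront/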


CloudFlare works best.


Yes! Funnily enough, the site this story is hosted on is using HTTP.



All of this raises the question: why does the new default state require action, while the non-default state requires none?

Is that more or less bass-ackwards?


Should static content be encrypted over HTTPS? I think it's fair for Chrome to call it out with an x, as I've literally seen local lunch joints take orders with credit card info over HTTP, but serving mostly static pages like The New Yorker over HTTP only means that the user's privacy is compromised in that people can see what you're reading - does that warrant down-ranking searches? I'm just curious - I work mostly on platforms, so I'm not too aware of all of the incentives for trying to move everyone to HTTPS, as it's not my problem domain.


One issue is content injection. You never know what transparent proxies are between you and the server, any one of them can add / remove content, scripts, tracking stuff etc to the static pages. You can't even be sure if your current DNS server resolved to the actual server and not some shady proxy.

I believe Comcast has been accused of doing something shady like that, but I don't live in the US and have no idea. Just read the news.


Because mobile carriers are given broad discretion to do whatever they want to do to your traffic.

They cheerfully modify content, and have built infrastructure to do it even more.


hmm ya this is a good point.


How will this impact page-speed?

I recall switching the product pages of an e-comm site, which had up to 50 small images per page, from https to http and the change very significantly increased page load speed for the end user


I'd guess that the browser opened several connections to fetch all of the content (to work around broken HTTP/1.1 pipelining) and needed to complete many TLS handshakes. HTTP/2 probably would have done a better job.


Several weeks ago I installed a certificate for my web site on NGINX and it wasn't hard. It was fun to do. Also I got an A+ from Qualys SSL Labs. What I mean is that it is easy to deploy an HTTPS site.


Deploying TLS in simple environments isn't overly complicated. It's just cost prohibitive.


HTTP + HTTPS is fine

HTTPS for SaaS and e-commerce web apps is fine

HTTP for normal websites is okay (for me, I have no hidden agenda)

HTTPS-only for normal websites is silly, why not offer HTTP too? Every request is unique, no internet anonymity.


HTTP for normal websites may allow your website visitors to participate in DDoS: https://blog.cloudflare.com/an-introduction-to-javascript-ba...

Happened to GitHub once.


Why is HTTPS-only for normal websites silly? It ensures the page was not tampered with enroute.


Why not HTTP + HTTPS? Let the user decide. HTTPS-only is silly for normal websites. Visitors from corporate networks or some countries would say your second sentence doesn't hold in the real world.


While I am a fan of HTTPS everywhere, there are just some use cases that are not feasible for HTTPS.

One edge case I am familiar with is a case where a webapp is used to setup a headless device (like a wireless repeater or hub for example). In this scenario, a user loads the page from a web server, the page instructs the user to put the device in a mode that it acts as a WiFi access point, the user then changes the access point of their machine, the page can now make AJAX requests to the device access point which is also acting as a server allowing the user to POST things like WiFi credentials for the device to use.

In this edge case one of two things needs to happen: either the original page must be served over HTTP, since no CA will issue you a cert that can be served by such a device, or the device must act as more than a simple API server and serve HTML pages which would get linked to by the original page. While the second solution is fine to get HTTPS everywhere (with the exception of while connected to the device), it means that the development and improvement of the setup UI is tied to device firmware updates, as opposed to the speed and flexibility of web development.


Surely it could be a lot easier...

Everyone already has a known public third party authority -- their domain registrar. Surely the browsers could come up with some protocol where you generate your own keypair, register the public key with the registrar and keep the private key on the server.

Then it could work pretty much like SSH, but with the browser doing out-of-band public key checking. Rather than needing a certificate chain, the browser just checks the server's public key matches the published one. If ok, happy to encrypt. If not, wave flags, and yell "spoof".

Verifying the registrar isn't a fly-by-night can be as complicated as needs be, but that way the registrar (who already gets paid to register the domain) does the complex hassle, while the ordinary domain-buyer just has to keep their server's public key up-to-date with the registrar (ie, fill in one web form at the time you register the domain or whenever you change the server key).

But alas it is not so at the moment. Instead, the current process puts hassle onto millions of individual domain-owners, while keeping life easy for the few (paid) certifying authorities. And then we wonder why so few people want to do it.


So the browsers need to do a secure request to a server of a registrar for each connection? Even assuming you cache the result, latency has just been shot. And what if those servers are down, or slow, or under DoS attack? And what about EV certs?


Even better.

Put the public key in DNS, auth the DNS.


This exists: https://en.wikipedia.org/wiki/Domain_Name_System_Security_Ex...

It gets some pushback from notable security experts (e.g. tptacek here) because the DNS system (notably the root) is largely state-owned, while registrars are largely privatized.
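Concretely, DANE pins a key in DNS via a TLSA record (which only really makes sense with DNSSEC); a rough sketch with a placeholder digest:

    ; usage 3 = pin the server's own cert, selector 1 = public key, matching 1 = SHA-256
    _443._tcp.example.com. IN TLSA 3 1 1 <hex-encoded SHA-256 of the server's public key>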


I thought about that, but there are so many DNS caches out there with potential bugs that make them insecure and poisonable.

But a set of public keys from the registrar is "content". It's small, so they could just have a server or two to handle it. (But if load/latency was an issue, well everyone's pretty good at content-distribution networks these days...)


Well, if someone can manipulate your DNS, you have other issues than your SSL certificate anyway.

Ideally, DNS should be signed by your registrar.


I feel a bit of dissonance on HN on the issue of HTTPS. On issues of surveillance and spying the responses are often measured and there is generally a balanced debate, and yet on SSL suddenly it's a matter of extreme urgency, with strident positions backed up by references to MitM, spying and ISP-injected ads.

This is not as urgent a matter as some here tend to make it; better to resolve it properly than rush into half-baked solutions like Google shaming websites. Why should a private company have the ability to shame websites and drive decisions in a certain direction without any consultative process and accountability? Surely these are decisions for industry groups and wide consensus, and not individual corporates driven by self-interest.

Corporates routinely MitM SSL traffic and no one is shaming them or the equipment makers for that, so SSL and MitM is hardly going to be a problem for state actors. For protection against less influential actors, banks and those who process sensitive data have been on HTTPS for a long time now, so where is this urgency and the need to take action coming from?

Everyone agrees security is good but the mechanism to enable this cannot be given up to browser makers and CAs. This is a complete loss of end user control and a significant step back from the open net that cannot just be brushed aside.

Not everyone needs HTTPS, and for ad injection the pressure should be on ISPs to stop the illegal behavior. Why can't we shame ISPs instead of forcing all websites to HTTPS?

Other solutions like signing content that empowers individuals rather than corporates and vested interests should be explored. The same browser makers went ahead and arbitrarily started flashing grave warnings on self signed certs without any consultative process or accountability.


Why don't I like this?

I don't think it's HTTPS.... I think I don't like that one company has this much power over the web.

This seems awfully familiar...


I don't like this because I've always thought that HTTPS shouldn't be a mandatory baseline. It doesn't make a whole lot of sense to me that a random website with no financial transactions or anything should require HTTPS. [Edit: And thus, it makes less sense to me that the site should be penalized by anyone for NOT having it.]

"Ah, yes. Bob's Trivia Emporium has HTTPS. I know this is really Bob's site and that the data is from his site."

If anyone has compelling arguments to the contrary, I'm open to hearing them.


HTTPS assures the integrity of the data transferred from the origin domain, so it prevents your ISP from injecting additional ads into your site, which some ISPs like to do [1].

[1]: http://arstechnica.com/tech-policy/2014/09/why-comcasts-java...


That doesn't convince me; it's like saying every Tom, Dick, and Harry should get encrypted phones because Verizon has the capability to tap into conversations. The onus should be on the provider, not the customer.


You don't think that normal users should use encrypted phone services? Are you serious?


Suppose you go to Bob's trivia emporium in your browser and a MITM inserts malicious javascript content into the response.

Looked at another way: Is there any reason http should not be secured with some sort of privacy and integrity check?


There are occasionally times when I want to suffer a MITM attack. For example, when I am on an airplane, at a hotel, or basically any other time I have to fill out a webform to get online. Perhaps those forms should not exist, but until they don't, I hope http://xkcd.com continues to work.



To clarify, you don't want to suffer an MITM attack; that's just the current standard way of finding the captive portal login page. I believe the Wi-Fi Alliance is working on Hotspot 2.0 (Passpoint) to fix this problem:

http://www.theruckusroom.net/2014/10/hotspots-get-hotter-wit...

Briefly, it looks like there's a secure (WPA2) hotspot for internet access and an associated open hotspot if you need to sign up for an account (with a published signup URL).


I wonder if this will reduce malware that injects advertising into users' browser sessions. Seems like a win-win, but I don't trust Google at all. They want all private data un-encrypted and available for their own analysis/mining/auction when it comes to their own servers and services.


So is there a lets encrypt solution for shared-hosting systems?


The solution for shared hosting environments is for your provider to integrate with Let's Encrypt (or any other free CA that might pop up in the future).

Once this change goes through, providers will be forced to either do that or (if they keep forcing users to pay for SSL, even though it's de-facto mandatory) watch their customers move somewhere else. There's plenty of competition out there, and a lot of them already support Let's Encrypt[1].

[1]: https://github.com/letsencrypt/letsencrypt/wiki/Web-Hosting-...


It looks like `letsencrypt-auto --webroot` does this: https://letsencrypt.org/howitworks/

If your shared host has a way to automate deployment of new SSL certificates, this should be easy. (Or if they're willing to manually configure a new cert every 3 months.)
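A rough sketch of what that invocation looks like; the webroot path and domains are placeholders:

    ./letsencrypt-auto certonly --webroot -w /var/www/example -d example.com -d www.example.com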


Domain validation is coming, which makes things easier, but some like Dreamhost are adding one click support to their hosting panels, and openly talk about making it default.


Yes, shame all libraries and swimming pools giving their schedules online without HTTPS. Shame the Gutenberg project, the documentation for your OS, your code, your washing machine.

Why should money from libraries, the Gutenberg project, or NGOs go to more expensive OPEX for web hosting when the information is clearly designed and OK to be public?

And does not require ads or payment.

Google has some Godwin-point, very authoritarian views on the internet and the protocols that make me dislike them.

Especially since their business model is to not pay transit for distributing their content. Basically every fucking internet user pays the 95th percentile transit to Google products even if they don't watch videos on YouTube, don't use Gmail or anything else.

These people are like ... Catholic priests. Do as I say, not as I do, and be good members of the community.


What additional money is needed to implement HTTPS? It's like an afternoon of a sysadmin's time; it doesn't require any more opex.

If you have a favorite library or NGO that doesn't support HTTPS for lack of funding, I am personally happy to donate an afternoon of a sysadmin's wages to them. (Or to set it up for them, honestly.)

Project Gutenberg is already over HTTPS, so I'm not sure what you mean by that. If you think they were strongarmed by Google into it, instead of having decided this long ago as a simple and obvious step for the good of their mission, a reference for that would help inform the discussion.


Libraries in particular can get help from the Library Freedom Project to set up HTTPS. Some librarians have come to appreciate its importance in protecting information about what library resources (like books) patrons are interested in, for library web sites that allow people to do catalogue searches online, for example.

https://libraryfreedomproject.org/ourwork/digitalprivacypled...

(It's true that that's not the original poster's exact example, which hypothesized a static site that just tells you the library's schedule. But I think library catalogues are a super-great example where information is completely public -- it's not secret what the library has in its collection -- but information about users' interest in that information is private and sensitive, and the people providing the information strongly agree with that concern when they stop to think about it; librarians care very much about not revealing who is interested in which books.)


Buying a certificate and installing it yourself: so it costs, at minimum, the price of a certificate, plus at least one person competent enough to do it.

And the competence, in terms of spending, costs way more than the certificate.

Outsourcing security without knowledge is praying for being abused.

So sometimes you are better off, in terms of costs and efficiency, without it.

And HTTPS costs more for rural users because you cannot cache SSL content.

So in Africa, Alaska, Yukon, or Petaouchnok, people with small providers have to pay a tax.

And it also increases the 95th percentile.

So basically everybody except Google will pay for this, but it will impact the poorest content providers & users the most.


Certificates are free. The required competence is only marginally more than the required competence of setting up a website, and is rapidly dropping toward zero. (And it is zero if your website is hosted by a third-party service, which is a pretty reasonable approach for the organizations you name.)

I don't understand what you mean by "Outsourcing security without knowledge is praying for being abused." From your original argument, the website admins don't care about security, right? So the worst that happens is their setup is insecure, but their setup was insecure to start with, and they were okay with that. You aren't more at risk from outsourcing your HTTPS versus just not doing HTTPS.

I'd be very surprised to hear that people in Alaska are getting their internet via an HTTP cache. Frankly, I'd also be surprised to hear that people in Africa are, except possibly for certain mobile internet. I'm curious, where does this happen?

Even so, and especially for mobile internet, HTTPS isn't a problem. The provider gives you the phone, so they can control the software, certs, etc. on it. They can run a caching proxy server that takes "HTTPS" requests in plaintext, and sends them out over HTTPS if they can't be satisfied from the cache. (It's pretty easy to configure an HTTP forward proxy this way.)


Certificates are free? Some are. Do you trust certificates given without checking the real identity of the user?

I do not. It has a cost. If you do not check you have the perfect tool for a MITM.

Then is https more expensive than http?

Of course you idiot. Have you never seen your CPU burn under HTTPS? With TLS the load is on the first connection, meaning it is around x% (x said to be 1 by Google) for long sessions IF and only IF you have a costly engineer optimizing your stack. And I know from experience that Google's cipher suites are sometimes close to ridiculously weak in order to achieve this (I saw RC4 while playing with Gmail). And that is the server side. Add the client side.

So what about having no "engineer"? What about an SSL 2.0 cipher suite with RC4, MD5 ... and all the weak default settings you can see recommended on the internet?

Well, the illusion of security is no security. The S of HTTPS is for security. Having a tag saying "I am secure" when any government or criminal organisation can break your ciphering in a matter of seconds is no actual security. And it defeats the purpose of the TRUST granted by security.

Then why pay the extra x% of HTTPS in this case?

Oh! I just read your last paragraph.

Can someone explain to this person why HTTPS cannot be cached? Oh! It can: it has been sold to Tunisia by MS in 2007, by France to Libya circa 2005, and China is doing it ... for intercepting and deciphering citizens' conversations.

Caching HTTPS is basically doing a Man In The Middle attack. It requires a "joker" certificate nicely given by a root authority. Doing so (as Microsoft did) normally induces "the death penalty" for security firms. MS is alive, hence we cannot trust HTTPS anyway ... since the Snowden revelations.

The proxy would still have to do the handshake anyway to have HTTPS => CPU load.

The premise of security are trust. https nowadays is a costly joke.

For the record, operators (especially when IP routing goes through a shared collection tunnel, as with 3G) have traditionally been using a protocol called WCCP to transparently cache your HTTP content. And 4G is deployed substantially in the USA & wealthy European countries, but not everywhere.

So even if you don't have a proxy, your operator may.

And if you think all of the USA is at 1 Mb/sec for $50/month, read this: http://seclists.org/nanog/2015/Oct/337

Last and least: I worked at an ISP 10 years ago. One of the datacenters carried 5% of France's total traffic; the electricity used was as much as a city of 40K inhabitants.

It makes the internet, as an industry, the biggest user of fossil energy. According to my approximation we should be around 2% [+1%?] of global national consumption. Very near the transport industry (planes + trucks). And HTTPS will not help.

So I see no other argument for HTTPS than being sheep.


> Certificates are free? Some are. Do you trust certificates given without checking the real identity of the user? I do not. It has a cost. If you do not check you have the perfect tool for a MITM.

But you just said that HTTP was fine, right? You can MITM plaintext HTTP too, so I don't understand why that's a problem for HTTPS to be (potentially) MITMable.

> Of course you idiot.

I have stopped reading here. Please read https://news.ycombinator.com/newsguidelines.html


> Yes, shame all libraries and swimming pools giving their schedules online without HTTPS. Shame the Gutenberg project, the documentation for your OS, your code, your washing machine.

So it's OK for someone to tamper with your documentation to trick you into doing something dangerous? Is it OK for a librarian to give out information on who looked at what?

> Basically every fucking internet user pays the 95th percentile transit to Google products even if they don't watch videos on YouTube, don't use Gmail or anything else.

Except Google does pay for their transit. They pay for bulk transit to go around the world. Just like everyday internet users pay their ISP to get data the rest of the way down the road to where they are.


Seriously. Who would care about a library?

Not everything on the internet is about privacy.

And even with https any proxy (WCCP) already knows which URL I went to.


So what would this mean for web development agencies and the likes, who've made hundreds of thousands of websites (most without SSL) and would now be forced to convert them all to use https?

How many clients would be willing to pay to cover some of the time wasted doing that?

And how about the potential SEO impact? I mean, Google says switching to https shouldn't affect things, but I've seen tons of sites lose rankings after a change.

Okay, using https is a good thing and all, but the inconvenience caused by having to make this change could be massive.


It's very surprising that Google doesn't sell certificates on Google Domains or offer any SSL support in the GCE storage stack, and yet they are going after web encryption so strongly. What about setting an example?


Sounds good to me, thanks to Let's Encrypt. All my personal sites are now on https and HTTP/2 (thanks to Go 1.6). I don't use ads so I have nothing blocking me from https.

The only situation where I still want to use HTTP is one page I serve, which uses websockets, and I don't want to use secure websockets, but that's the only exception.

I'm glad to be in this boat. Once again, it's mostly thanks to Let's Encrypt being awesome, and I'm thankful to it.


Cue the sound of 100,000 static-hosted S3 bloggers grabbing their free Amazon SSL cert and setting up CloudFront. And man that AWS console sure is wonky.


I tried setting up SSL with Cloudfront yesterday and it was a complete mess. The validation method is sending an email to the domain contacts as listed in whois. So if you have whois privacy enabled, you cannot receive the email and therefore cannot setup the cert.

This is definitely a bug, because the system is supposed to also send emails to admin@domain.com, hostmaster@domain.com, and a few others. With whois privacy enabled, I never received any of those emails.

Even with whois privacy, you are supposed to be able to receive an email via the privacy registrar's proxy email... but Amazon parses it incorrectly and ends up sending the email to legal@whoisproxy.com

I'm not the only one:

https://forums.aws.amazon.com/thread.jspa?messageID=698280&t...

https://forums.aws.amazon.com/ann.jspa?annID=3510


This is stupid. Making https a requirement will break most web pages on hardware older than 2005 or so. This sucks for anyone without money.


It doesn't make HTTPS a requirement; it just shows a red lock.


Are there hardware requirements for encryption? And, if there are, who are these people building web sites on hardware older than 2005?


Yes let's cripple the web by continuing to use unencrypted http, just because computers from 15 years ago won't be able to properly display some webpages...


Hopefully costs for certificates will come down to encourage it as well. Services like letsencrypt can help.


Supply and demand would dictate otherwise


Well it's not like certs are a limited quantity; they take no time to produce, no limited resources to produce, and no manpower to produce. Supply and demand works when demand outstrips supply, so the price goes up to put downward pressure on the demand. There's no possible way for demand to outstrip supply of certificates, so prices shouldn't go up.


As another pointed out, the supply curve for these certs is probably close to horizontal so we should expect the equilibrium quantity to increase but not the price.


I think one of the big problems with unencrypted websites is shared hosting providers who refuse to support SNI (often because it would require upgrading their infrastructure). So users have to pay for a static IP, which effectively doubles their hosting costs, so most don't bother.


If they don't bother why should people bother to pay for their hosting? There are many, many hosting companies that do bother and I am using one of them. Had no problem installing Let's Encrypt cert on shared hosting via cPanel there.


Who are you using?


I am using ASO.


My website runs on a VPS with Node.js as the backend. I could easily self-sign a certificate, but Chrome displays a huge warning to users. How about signing certificates for free, Google? HTTPS is easy to implement, but the fact that it costs money is BS.


LetsEncrypt offers free, automated certificates and is recognised in all major browsers (IE, Chrome, FF, Safari). https://letsencrypt.org/


Oh wow I've actually heard of this before thanks for the tip.


"mark non-secure origins as non-secure." The name of that option seems to be chosen to apply a bit of pressure to anyone who sees it. Nothing wrong with that of course - it's our existing habits of trusting HTTP that are strange.


I think 80% of web sites will be labelled as red-unsafe.

SSL-layer security is good, but sometimes a certificate is expensive and not free. Suppose that you have 10 domains and not all of them are for SNS, banks, etc.

At what minimum cost would you purchase an HTTPS certificate?



Most shared hosting accounts charge extra for a dedicated IP address, both for setup and on a monthly basis. Don't underestimate how many blogs, churches, small businesses, etc still use services like that.

To be fair, many of those sites probably ARE insecure, but it seems to be a little bit overkill to "shame" them for not implementing encryption.


You only need a dedicated IP address for clients that don't support SNI. If your hosting model supports it, you can also still support these clients with a single IP address with a SAN cert that includes all of the possible hostnames.
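For reference, SNI on a single IP is just multiple server blocks, each with its own cert; a minimal nginx sketch with placeholder names and paths:

    server {
        listen 443 ssl;
        server_name alpha.example.com;
        ssl_certificate     /etc/ssl/alpha.example.com.crt;
        ssl_certificate_key /etc/ssl/alpha.example.com.key;
    }
    server {
        listen 443 ssl;
        server_name beta.example.com;
        ssl_certificate     /etc/ssl/beta.example.com.crt;
        ssl_certificate_key /etc/ssl/beta.example.com.key;
    }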


SSL hasn't required a separate IP since Windows XP. And XP no longer has any security support, so anyone running it has bigger problems.


Guess you're right, fair enough. I still don't agree with putting a scarlet letter on these types of sites though.


Nothing short of that will get HTTPS adoption to approach 100%. Many people have commented that it seems odd to complain about broken HTTPS but not about HTTP; I agree with that. As long as browsers show unencrypted HTTP as "neutral" rather than "bad", far too many sites simply won't care. This has been a long and gradual process, but it needs to happen for HTTP to finally go away.


HTTPS is rather more secure than HTTP, because it creates a relatively secure tunnel between the client and host. But HTTPS does not mean 100% secure, it's easy to be hacked by MITM or traffic been spied.

I think that getting rid of HTTP should not be pushed by shaming in that way. But Google is planning on doing exactly that.

Just as someone said, MITM attackers can switch Google ads for others, and I think this is the reason why Google wants to shame those sites that use Google Ads and don't use HTTPS. Google can increase its revenue by this act.

And yet HTTP/2 is out; will Google shame those sites that only support HTTP/1.0 or HTTP/1.1? I don't think so, because that has almost nothing to do with revenue for Google.


> But HTTPS does not mean 100% secure, it's easy to be hacked by MITM or traffic been spied.

I don't know what properties you think HTTPS lacks here, but no, HTTPS doesn't allow "easy" MITM or eavesdropping. If you want to break HTTPS, you either need to compromise an endpoint, or pressure an accepted certificate authority to risk destroying their entire business by issuing a fraudulent certificate.


I have shared hosting at Dreamhost. Installing Let's Encrypt certs was a two click procedure. I guess more hosting companies will follow.


This looks cool. It seems dreamhost did a good job.


nginx + TLS, and TLS is many-certs-same-IP friendly from the start.

SSL is insecure already anyway.


Let's Encrypt is in beta testing for now. And they have not said whether they are going to keep this service free forever.

I tried Let's Encrypt, and it works like a charm. But the sad thing is that it needs you to renew every 3 months, at least for now. They do have an auto-renewal script for this, but it is not well supported yet: currently the scripts only work for Apache HTTP servers, not Nginx ones (that will work in the future). I renew certificates every 2 months by hand...
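If your server lets you run cron jobs, a hypothetical entry like this takes the manual step out of it (paths are placeholders, and it assumes your client version has a renew subcommand):

    # check weekly; renew anything close to expiry, then reload nginx
    0 4 * * 1 /opt/letsencrypt/letsencrypt-auto renew --quiet && service nginx reload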


https://news.ycombinator.com/item?id=5238164

Still not solved... calm down Google!


Use Let's Encrypt to automatically issue a cert valid for each domain?


Is there a good guide on making S3 sites work with SSL?


You can use the new AWS Certificate Manager[0] with CloudFront[1], which you can attach to your S3 bucket. The docs aren't brilliant, though.

[0]: https://aws.amazon.com/blogs/aws/new-aws-certificate-manager...

[1]: http://docs.aws.amazon.com/acm/latest/userguide/gs.html


Pretty easy. Use CloudFront instead, which has full SSL support, to serve up the content stored on S3.


So, can someone tell me: Will they be assholes and shame BlogSpot sites (i.e. Google's own service)? Or will it be upgraded or something?


I used Cloudflare's free SSL - is this enough?

https://www.soundshelter.net


Yes. Google can't know what happens between Cloudflare and your server. (Communication there is also less likely to be intercepted than communication between the user's browser and the internet.)


Good to know - thanks


This might have the unintended consequence of causing people to go numb to the "false positive".


Is there a checklist somewhere that specifies each of the things that Google requires / forbids?


So what about the overhead of https?


For the last few years, effectively zero.

https://istlsfastyet.com/

https://www.maxcdn.com/blog/ssl-performance-myth/

https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...

Not to mention that if you use CloudFlare just to get a free SSL certificate out of them, you're also getting a CDN, so the performance overhead is negative.


I thought the same, but reality isn't that nice. I got this response: https://news.ycombinator.com/item?id=10602621

I was able to replicate this on my own server too, but I have no immediate solution (all the obvious things like OCSP stapling were already configured, following common sense and various "best practices" guides) and I haven't had enough spare time to properly investigate why TLS takes longer.

If someone has encountered this or knows the possible culprits, I would be glad to hear suggestions.


I don't currently see a 500 ms difference, so maybe they figured something out. From my shell, I see about 35 ms to http://www.stavros.io/404 and about 85 ms to https://www.stavros.io/404 (the HTTPS site serves actual content and the HTTP a redirect, which confounds the numbers).

The HTTPS server is currently offering me a 4096-bit-RSA certificate, signed by the 2048-bit-RSA StartCom class 1 intermediate CA. There's no security benefit in a 4096-bit cert signed by a 2048-bit one, since any attacker capable of breaking 2048-bit RSA but not 4096-bit is just going to attack the CA cert and sign their own forged cert (and any attacker sorta capable of breaking 2048-bit RSA will dedicate their brute force effort to CA certs). And to my knowledge, all current CA intermediate certs are 2048-bit. Meanwhile, because of math, 4096-bit certs take a lot longer to handshake: see e.g. https://certsimple.com/blog/measuring-ssl-rsa-keys

CertSimple's data indicates a 25 ms difference between 2048-bit and 4096-bit keys on their server, so I'd expect that the 4096-bit key is responsible for at least most of the performance difference here. A few years ago I screwed this up on a production shared web host, and I believe we saw a greater than 50 ms difference. (While we're at it, that cert is SHA-1, so it's possible they can get a reissue for free.)

Were you able to replicate the 500 ms (!) performance difference on your own server? Are you using a 2048-bit cert and reasonable cipher suites?
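One quick way to see the raw cost difference on your own hardware (just the RSA private-key operations, not a full handshake):

    openssl speed rsa2048 rsa4096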


Currently I see ~200ms difference (repeated those tests a good number of times, of course, those are results closest to average):

    $ time curl -4 -s -o /dev/null https://drdaeman.pp.ru
    curl -4 -s -o /dev/null https://drdaeman.pp.ru  0.00s user 0.00s system 2% cpu 0.300 total

    $ time curl -4 -s -o /dev/null http://drdaeman.pp.ru
    curl -4 -s -o /dev/null http://drdaeman.pp.ru  0.00s user 0.00s system 7% cpu 0.107 total
The host isn't doing anything, although the server is an old, weak Atom machine, so it could take some time to do RSA. I followed some guides (say, used the Mozilla-recommended cipher list) to get an "A+" rating with SSL Labs. Currently it's just "A", I guess because of SHA-1 deprecation on StartCom intermediate certs. https://www.ssllabs.com/ssltest/analyze.html?d=drdaeman.pp.r...

I'm also using 4Kbit RSA keys, maybe that's the cause, especially given that the server is a tiny Atom HTPC sitting in the kitchen (the 100ms is because I'm accessing it from another country). I will try to find some time over the weekend and test with 2Kbit keys to see if this is indeed the cause.

--------------

Added: it seems that this worsens with latency, because I see an extra 200ms. Maybe the cause is extra network round-trips, not crypto overhead. Or maybe there's something wrong with my curl...

    $ time curl -4 -s -o /dev/null http://stavros.io/404
    curl -4 -s -o /dev/null http://stavros.io/404  0.00s user 0.00s system 4% cpu 0.180 total

    $ time curl -4 -s -o /dev/null https://stavros.io/404
    curl -4 -s -o /dev/null https://stavros.io/404  0.01s user 0.00s system 2% cpu 0.370 total
Unfortunately, don't have time to meditate on Wireshark output right now. :(
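Short of Wireshark, curl's write-out timers can at least split TCP connect from the TLS handshake, which helps tell round-trips apart from crypto cost:

    curl -4 -s -o /dev/null -w 'connect=%{time_connect} tls=%{time_appconnect} total=%{time_total}\n' https://drdaeman.pp.ru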


> I'm also using 4Kbit RSA keys, maybe that's the cause, especially given that the server is a tiny Atom HTPC sitting in the kitchen

Yeah, the combination of those two things is very likely to not do you any favors.

It is worth clarifying that Google et al.'s claim that SSL is essentially no overhead is conditioned on the assumption that you're using reasonably modern and full-featured processors, especially with AES-GCM in hardware. (Which is pretty common on laptop processors these days even without trying hard to find it, but probably won't be on an Atom HTPC.) I think that's reasonable, since if you're seriously worried about performance and latency, you're probably starting off with good hardware, and your worry is that investment will go to waste if you turn on SSL. At least for running a web server for fun on an old personal machine, the added latency is real and is unfortunate but I'd guess also not such a big deal. But maybe that's a bad assumption?


On Intel x86_64 platforms with AES-NI hardware accelerated AES, sure. On other platforms, not so much.



Yeah. Still not paying for a cert on my personal home pages just so I can have my own page come up first when people google my (worldwide unique) name.

That page contains static HTML and does not need SSL, and it's not "insecure" just because you may be on a network which MITMs traffic. That makes your network insecure, not my page.

So yeah. Not interesting. Not worth it.


> That makes your network insecure, not my page.

At which hop does it stop being "my" network and starts being "our" network? Your webhost? Your IX? Your country?

You can't shift the responsibility; only you can definitively secure the content coming out of your webpage.


I've just wired you the amount needed to buy an SSL certificate from any of multiple reputable and well-priced providers. You can use the money I sent you to buy a cert from https://letsencrypt.org/ , https://www.startssl.com/Support?v=1 , or https://www.cloudflare.com/plans/ .

If those options aren't enough for you, let me know why and also how to non-vacuously send you money, and I'm happy to buy you a $4.99/year certificate from https://www.ssls.com/ssl-certificates/comodo-positivessl .


Just set mine up for free, today. Letsencrypt.org works great. I recommend the simp_le client.


Yeah, there's really no excuse anymore. They've made it insanely easy to generate certs.


Please read the article, this isn't about google the search engine, it's about google the browser vendor. Firefox nightly is doing the same already by default.

I'm always wondering if there's a correlation between the relevance of integrity for a site and the relevance of the site itself.


> Still not paying for a cert on my personal home pages

lets encrypt?


That means moving to a webhost and plan which supports SSL. They are usually more expensive. It's not just getting the cert.


> That makes your network insecure, not my page.

Sometimes you NEED to use an insecure network due to censorship (example: Tor or VPN).


If they could just wait until cPanel has LetsEncrypt support that would be great...


It's funny that Vice itself doesn't automatically redirect to HTTPS.


And I always thought TLS is the virtual equivalent to those TSA locks.


What gave you this impression?


The uncontrollable number of masterkeys.


[flagged]


Sorry?


HN could do its part: for example, start marking all http:// links red. For our content sites, we can also announce to users this change and roll something out.


Basically, efficient, low-latency caching for HTML and CSS content is over unless there's SRI for them. It makes sense to have a mini webpage delivered securely that lists hashes for all static assets, and then serve some static assets insecurely to take advantage of CDNs, as long as they don't disclose individual app actions (assets everyone sees on many pages). The downside is balancing the risk of activity leakage from insecure assets. Of course, some dynamic content and sensitive state needs to remain secure. The issue is that securing everything depends on whether you're willing to trust your CDNs and caches with your certs and private keys (granted, you already trust them to display the correct content). That sort of technical risk management needs to be considered carefully if insecure assets can dramatically speed up UX (because TLS sessions take some or a lot more work... since how would the browser and backends do session caching or pipelining across infrastructures and providers that likely have multiple IPs? One connection per provider, each keeping their own cache for their HA boxes?)

Maybe there needs to be an insecure HEAD or CACHE open standard to check the content freshness of a secure page via a crypto hash (say, canonical URI, ETag and Last-Modified) to avoid building up a full TLS session just to see that nothing's changed?
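For reference, an SRI-protected asset reference looks roughly like this; the digest is a placeholder for the base64-encoded hash of the exact file being served:

    <script src="https://cdn.example.com/app.js"
            integrity="sha384-BASE64_DIGEST_OF_THE_FILE"
            crossorigin="anonymous"></script>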



