
TL;DR: Bing's SSL certificate is wrong.

https://bing.com: subject=/CN=*.bing.com

https://www.bing.com: subject=/C=US/O=Akamai Technologies, Inc./CN=a248.e.akamai.net




TL;DR: Bing doesn't support SSL on www.bing.com and has never publicized it as a supported feature. The submitter had to manually type https://www.bing.com into the address bar to generate this 'error'.

Bing does support SSL on ssl.bing.com and publishes various links on that sub-domain, such as https://ssl.bing.com/webmaster/home/mysites

The fact that https://www.bing.com redirects to the HTTP version should be enough to show that this is a known, unsupported case on the primary domain. The behavior has been like this for years.


>TL;DR: Bing doesn't support SSL on www.bing.com and has never publicized it as a supported feature. The submitter had to manually type https://www.bing.com into the address bar to generate this 'error'.

Or use HTTPS Everywhere. Personally, I'd also like it if, in the future, web browsers would try HTTPS first and HTTP second.


I don't think browsers should assume that http://blahblah and https://blahblah refer to the same resource.


Why not?


Because it's up to the individual website administrator to ensure both versions are published the same way.

I can construct a web server that sends version A of a site over normal HTTP, and version B of that same site over HTTPS.

In fact, sometimes I do that by accident. :)

It's a bad assumption.


[deleted]



The bing.com certificate says it is valid for the following names: ieonline.microsoft.com, *.bing.com, *.windowssearch.com. So why does https://www.bing.com not match *.bing.com?


Because www.bing.com is served by Akamai's CDN, which is separate from bing.com. Therefore the SSL certificate on the site is different.
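For anyone curious about the mechanics: here's a minimal Python sketch of RFC 6125-style wildcard matching (not the actual browser implementation; the function name is my own), which shows why a248.e.akamai.net can never satisfy *.bing.com.

```python
def matches_wildcard(pattern: str, hostname: str) -> bool:
    """Minimal sketch of RFC 6125 wildcard matching: a '*' may only
    stand in for the entire left-most label of the name."""
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h):          # "*.bing.com" has 3 labels; "bing.com" has 2
        return False
    if p[0] == "*":
        return p[1:] == h[1:]     # compare everything after the first label
    return p == h                 # no wildcard: exact (case-insensitive) match

assert matches_wildcard("*.bing.com", "www.bing.com")
assert not matches_wildcard("*.bing.com", "bing.com")          # label counts differ
assert not matches_wildcard("*.bing.com", "a248.e.akamai.net") # Akamai's cert name
```

Note this also explains why the bare https://bing.com works: its cert's subject is *.bing.com, but the SAN list (below in the thread) carries the names that actually get matched.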


Why don't they just support the thing that is most reasonably expected and not surprise n00bs like me?


Because it would change a massive amount of infrastructure and get about 10 hits a day.

(OK, 10 hits a day is probably an exaggeration but shockingly little of one.)


Wow, that's the first time I've ever seen a 0-word article get the TL;DR treatment.


Actually, the warning message from Chrome is pretty lengthy.


At first I thought it was a joke from Google. Something like: "No, you don't want to use Bing; here is Google."


...what kind of massage you received?

My browser only talk about the warning, but i only have old chromium... Not auto-updating g chrome


All you people downvoting this comment, I think you are being unfair to this person, who just seems to have bad English. He probably meant "message".



was typed on swype :) didn't notice the typo


lol


This is the latest message: http://i.imgur.com/ogDhMQp.png


"Back to safety" takes me to about:blank. Does it take you to https://www.google.com/?


TL;DR means "here's a very short explanation".


Who else remembers when it used to be an insult? I preferred it that way.


Yeah. I remember when TL;DR meant "wall of text", kind of the same way RTFM was used. It was a response to something, not a tag to indicate what you were doing. How quickly things change on the Internet.


The new meaning is a lot more useful (in the sense that I find lots of opportunities to use it).


Agreed, I still have insults aplenty in my quiver.


TL;DR: You prefer the old way of doing things.


Funny, I thought it meant...

Too Long; Didn't Read


I thought it stood for "too long; don't read"


Why not use the word "summary"?


I think it's the same reason people write "lol" instead of "hahaha" even though they are not actually laughing out loud.

It sounds like internet speak; it's cooler and more 1337.


To be fair, this is the way language has always evolved.

Interesting article on a related topic: http://www.bbc.co.uk/news/magazine-21956748


"Executive summary" is the original TL;DR.


Working in information security, I see this far, far too often in support tickets from employees who are unable to get to a site because our proxy is blocking misconfigured certificates. Usually we like to reach out to the owner of the site and have them update their configuration, and it gets quite frustrating when we find an unresponsive organization. Having to bypass cert checking for a site on our end is a huge security risk, and defeats the purpose of even having an SSL cert.

Companies! Make sure your certs are all in order! There's no reason to send a page to your users over HTTPS if they can't trust the certificate. Canonical has been a long-time offender here, with many of their pages sporting a certificate issued for canonical.com but served from ubuntu.com.


> There's no reason to send a page to your users over HTTPS if they can't trust the certificate.

There can be. HTTPS still gives you encryption over the wire. It still protects against a passive eavesdropper, like a casual packet sniffer on a public wi-fi network. The whole certificate deal protects against a Mallory with power to intercept and spoof messages. Of course nobody on the public internet can be sure there isn't a Mallory in or at the edge of the user's ISP, but in plenty of intranet or otherwise controlled networking scenarios, HTTPS with certificates lacking a trust chain can be reasonable.
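To make the trade-off concrete, here's a quick sketch using Python's stdlib ssl module: an "encrypt-only" client context that still negotiates TLS (defeating a passive sniffer) but performs no certificate validation, contrasted with the strict default.

```python
import ssl

# Encrypt-only context: the handshake still negotiates keys and encrypts
# traffic, but neither the certificate chain nor the hostname is checked,
# so an active man-in-the-middle could still impersonate the server.
opportunistic = ssl.create_default_context()
opportunistic.check_hostname = False   # must be off before dropping verify_mode
opportunistic.verify_mode = ssl.CERT_NONE

# The default context, by contrast, verifies both chain and hostname.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname
```

Wrapping a socket with `opportunistic` is roughly what "HTTPS without a trust chain" buys you: wire encryption against Eve, nothing against Mallory.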


Valid points, but in effect what you're doing is training users to believe that HTTPS means trusted. What happens if your site is compromised? The users will see the same untrusted SSL warning that they're seeing if your certs aren't in order. You're giving them security for your site, but removing their security awareness. This hurts them, this hurts the Internet, and this could come back to hurt you.

Training users to click through messages that are completely valid warnings is just shitty behavior.


Right. MitM attacks and variants are a proper subset of the mischief that can be achieved with an unsecured connection. Among other things, it's routine that people (hi!) use self-signed certs for personal or temporary TLS sites. I'd be very annoyed to pass a basic-auth-protected https git url to someone for quick-and-mostly-secure read-only access and have their access be denied by a local proxy...


>I'd be very annoyed to... have their access be denied by a local proxy

An untrusted certificate chain is a valid security risk. If it annoys you that users are blocked from viewing your site because you haven't gone through the established and accepted practice of actually completing the SSL trust chain, that's not the local proxy's fault.

My number one concern is the integrity and security of my company's data. Your personal or temporary TLS site is much lower down that list, especially if you're serving up untrusted certificates and expecting that users will completely ignore the warnings that try to keep them from falling victim to the kinds of attacks SSL is meant to avoid.


Arrgh. Sorry, but this is an absolutely classic example of why "security professionals" get laughed at by engineers. This requirements analysis is just completely backwards.

In the example in question (which frankly isn't very interesting, I can come up with hundreds of scenarios like this) I have a git archive I need to share with someone on an ad-hoc basis. For whatever reason, I'd like to do it securely, so it needs authentication and encryption and shouldn't go on anyone else's site. So the obvious solution is to throw it up on a static webserver somewhere and use TLS (or use ssh, of course, but that's subject to exactly the same root-of-trust problem as a self-signed cert -- surely you disallow outbound ssh access too, right? right?).

Your fantasy world wants to pretend requirements like this don't exist, and that you can simply refuse to support this kind of transfer via fiat. But it's not the real world. In the real world, this is what people have to do. Unless you break their systems, in which case they'll work around your silly rules in even less secure ways.


To answer your question, yes outbound and inbound SSH is blocked except through our secure gateways. Every security organization at every company will have procedures in place for bypassing the official policy on an ad-hoc basis. There's always going to be a one-off situation that requires a different set of rules on a limited time basis, and there are many ways of dealing with these exceptions.

On a broad basis, though, untrusted certificates are blocked outright. I'm sure you can understand that for every one-to-one or one-to-few valid communications like your case, there are dozens or hundreds of one-to-many not-so-valid communications. SSL has certificate trust built in for a reason. Engineers may laugh at security professionals, but we're here for a reason. Even though we're directly at odds with each other in approach, our goal is the same. You want to help further the business by pushing new software. We want to further the business by making sure that software is secure. No one benefits from the new e-commerce solution if the business bank account is completely drained the day after the product launches.

I can come up with a hundred good reasons why I should be able to enter a store after business hours, pick up a few things, and leave (I left the money on the counter!), but for some reason businesses still lock their doors. Do you laugh at them for having completely backwards requirements?


It's true that there's always a procedure in place for bypassing the official policy as needed. However, it often comes down to a question of whether it's faster to follow this procedure or to simply undermine the security. I've repeatedly seen people solve the problem of handling unsigned certificates by simply moving back to pure HTTP. This is a far greater security hole than the unsigned certs ever were, but it's perfectly valid under their security policy.

I can actually take your key analogy into true story territory. I worked at an office where a manager had to be gone for a month. He had the only key and the policy to get a new key wouldn't get one to us for weeks. It was also against policy for him to give the key to someone else while he was gone. The solution was to simply leave all the doors unlocked while he was gone.

I understand the need for security. I understand that security is your job and that you don't want to compromise for mediocre security. However, when you won't compromise, we don't get great security. We get no security.


Completely understood. I can't speak for other companies, but mine has a very strict policy and also many ways to accommodate exceptions to this policy provided the employee has a valid reason for needing an exception.

I agree that strict policies with no recourse for exceptions are a bad thing. I don't know of anyone who would stand up under that kind of draconian stance.


Every time a lazy and narrow-minded admin causes a browser user to click through a certificate error to access a site, God kills a RAID array.


Don't forget the X509v3 subject alternative names.

https://bing.com/

Subject: CN=*.bing.com

X509v3 Subject Alternative Name: DNS:ieonline.microsoft.com, DNS:*.bing.com, DNS:*.windowssearch.com

--

https://www.bing.com/

Subject: C=US, O=Akamai Technologies, Inc., CN=a248.e.akamai.net

X509v3 Subject Alternative Name: DNS:a248.e.akamai.net, DNS:*.akamaihd.net, DNS:*.akamaihd-staging.net


We were experiencing a similar issue with a third-party analytics solution where their SSL cert all of a sudden started to be served by Akamai instead of under the company's FQDN, as it was before. I am curious whether Akamai is at fault here.


Was this in the past week? Because just a few days ago I was experiencing a similar problem. I attempted to access https://rememberthemilk.com and Chrome complained about the SSL cert, as I was receiving the cert for their CDN and not for rememberthemilk itself. I couldn't find anyone who could reproduce the issue, but it occurred consistently on my computer regardless of browser. It resolved itself 20 min after I first noticed it, but I'm still really curious as to why it happened in the first place.


Usually the CDN will have a single certificate with multiple SAN entries. Depending on the size of the CDN, it can take a long time for a new name to be added to the SAN entries on all the edge servers. So if a site is just starting to move to a CDN, they sometimes jump the gun: instead of waiting for all the edge servers to have the certificate, they change the DNS records right away.

You then have a scenario where the DNS is still propagating and the certificate is still propagating around the CDN. Hence some users see no problem, some do, and it eventually all clears itself up.


Yeah - happened on Tuesday and was cleared up the following morning.


This is Akamai kind of trying to serve HTTPS on domains that don't have HTTPS set up. I'm not sure why they attempt it at all, but it's not really their fault. Same thing on my domain: https://www.theblaze.com/



