Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html highlights that many clients support standard SSL features without having to update to fix bugs. How much SSL you choose to allow, and in which configurations, is between you and your... I dunno, PCI-DSS auditor or something.
I'm not saying SSL isn't complicated, it absolutely is. And building on top of it for newer HTTP standards has its pros and cons. Arguably though, a "simple" checkbox is all you would need to support multiple types of SSL with a CDN. Picking how much security you need is then left as an exercise for the reader.
... that said, is weak SSL better than "no SSL"? The lock icon appearing on older clients that aren't up to date is misleading, but then many older clients didn't mark non-SSL pages as insecure either, so there are tradeoffs either way. But enabling SSL by default doesn't necessarily have to exclude older clients. As long as the client's clock is set correctly, of course.
I've intentionally not mentioned expiring root CAs, as that's definitely a problem inherent to the design of SSL and requires system or browser patching to fix. Likewise https://github.com/cabforum/servercert/pull/553 highlights that some browsers are very much encouraging frequent expiry and renewal of SSL certificates, but that's a system administration problem, not technically a client or server version problem.
As an end user who tries to stay up to date, I've just downloaded recent copies of Firefox on older devices to get an updated set of trusted root certificates.
My problem with older devices tends to be poor compatibility with IPv6 (an add-on in XP SP2/SP3 that isn't enabled by default), and that web developers tend to use very modern CSS and web graphics that aren't supported on legacy clients. On top of that, you've got HTML5 form elements, the question of what displays when responsive layouts aren't available (how big is the font?), etc.
Don't get me wrong, I love the idea of backwards compatibility, but it's a lot more work for website authors to test pages in older or obscure browsers and fix the issues they see. Likewise, with SSL you can test on a legacy system to see how it works or run the Qualys SSL checker, for example. Browsers maintain forwards-compatibility but only to a point (see ActiveX, Flash in some contexts, Java in many places, the <blink> tag, framesets, etc.)
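For the SSL side specifically, you don't even need a legacy machine for a first pass: you can probe which protocol versions a server will actually negotiate from anywhere. Here's a rough Python sketch (example.com is just a placeholder; note that your own OpenSSL build may refuse to offer TLS 1.0/1.1 at all, which is itself a data point about client compatibility):

    import socket
    import ssl

    HOST, PORT = "example.com", 443  # placeholder target

    for name, version in [("TLS 1.0", ssl.TLSVersion.TLSv1),
                          ("TLS 1.1", ssl.TLSVersion.TLSv1_1),
                          ("TLS 1.2", ssl.TLSVersion.TLSv1_2),
                          ("TLS 1.3", ssl.TLSVersion.TLSv1_3)]:
        try:
            # Pin the handshake to a single protocol version and see
            # whether the server (and the local TLS library) accepts it.
            ctx = ssl.create_default_context()
            ctx.minimum_version = version
            ctx.maximum_version = version
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                    print(name, "negotiated, cipher:", tls.cipher()[0])
        except (ssl.SSLError, OSError, ValueError) as exc:
            print(name, "failed:", exc.__class__.__name__)

It only tells you what a modern client sees, of course - it says nothing about whether a 2006-era browser trusts the chain - but it does catch the "we accidentally turned off everything below TLS 1.3" class of mistake.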
So ultimately compatibility is a choice authors make based on how much time they put into testing for it. It is not a given, even if you use a subset of features. Try using Unicode on an early browser, for example. I still remember the Rails snowman trick (a hidden utf8 parameter containing a checkmark, originally a snowman, in every form) to get IE to submit UTF-8 correctly.
People fork TLS libraries, make changes that are supposed to be transparent (well, they should be), and suddenly they don't have compatibility anymore. Any table with the actually relevant data would be huge.
One imagines though that with enough clients connecting to your site you’ll end up seeing every type of incompatible client eventually.
The point I was trying to make is that removing SSL doesn’t make your site compatible and the number of incompatible clients is small compared to the number of compatible ones.
Compatibility alone is arguably not a reason to avoid SSL. The list of incompatibilities doesn't stop at SSL; there's still DNS, IPv6 and so on.
SSL is usually compatible for most people - enough that it has basically become the de facto default for the web at large. There are still issues, though. A dead CMOS battery and a wrong client clock is the one that comes to mind first, and certificate chain problems too. SSL is complex, no doubt, especially when a server-side configuration has to stay compatible with the whole range of clients. That's why tools like Qualys' exist in the first place!
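The clock problem in particular is easy to demonstrate without digging out old hardware. As a rough sketch (again, example.com as a placeholder), grab the server's certificate and compare its validity window against a deliberately wrong date, the way a machine with a dead CMOS battery would see it:

    import socket
    import ssl
    from datetime import datetime, timezone

    HOST = "example.com"  # placeholder target

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    not_before = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notBefore"]), timezone.utc)
    not_after = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), timezone.utc)

    # A client whose clock reset to 2010 sits outside the validity window,
    # so it rejects a certificate that is perfectly fine today.
    dead_battery = datetime(2010, 1, 1, tzinfo=timezone.utc)
    print("valid from", not_before, "to", not_after)
    print("2010 clock falls inside window:",
          not_before <= dead_battery <= not_after)

The failure mode is symmetric, too: a clock set too far forward trips over the notAfter date instead.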
That's a fair point. HTTP changes more slowly. Makes sense for sites where you're aiming for longevity.