TLS 1.0 uses chained IVs, which is a protocol flaw. It also has an explicit protocol alert for decryption failures, which makes error oracle attacks simpler. TLS 1.0 is broken. It isn't catastrophically broken so far as we know now, but nobody should be deliberately preferring it.
That's rather unfortunate. I think you can use Homebrew to pull in newer stuff, but I haven't tried (I don't host stuff on my MBP; I use my Linux workstation for that).
I'm sure you could, but bear in mind that several OS X Server services are built on top of the system Apache and its configuration file structure, so you probably don't want to replace it with a package manager-built version if you rely on any of these services.
On the other hand, it wouldn't be too hard to build and install a version of the SSL module, compatible with the system Apache, linked against a newer OpenSSL, and I wouldn't expect this to break Apple's services, at least not until you install an update that either breaks binary module compatibility or clobbers your tweaked module configuration.
I don't use Homebrew, so I couldn't tell you if it's capable of building modules for the system Apache, but building the SSL module "by hand" for system Apache with Homebrew OpenSSL should be straightforward enough.
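If anyone wants to try it, here's roughly the shape of it. This is a sketch only: the httpd version, the Homebrew OpenSSL prefix, and the /usr/libexec/apache2 module path are assumptions you'd want to check against your own system (`httpd -v` tells you which source tarball to grab).

# Grab the httpd source matching what `httpd -v` reports (2.2.26 is just an example).
$ curl -O https://archive.apache.org/dist/httpd/httpd-2.2.26.tar.gz
$ tar xzf httpd-2.2.26.tar.gz && cd httpd-2.2.26

# Build a shared mod_ssl against the Homebrew OpenSSL.
$ ./configure --enable-ssl=shared --with-ssl="$(brew --prefix openssl)"
$ make

# Back up the stock module, swap in the new one, and restart.
$ sudo cp /usr/libexec/apache2/mod_ssl.so /usr/libexec/apache2/mod_ssl.so.orig
$ sudo cp modules/ssl/.libs/mod_ssl.so /usr/libexec/apache2/
$ sudo apachectl restart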
Server software as an add-on downloadable from the App Store on the Mac, not a physical server like the Xserve. They recommend a Mac Mini with two hard disks installed as a SOHO server, but they obviously aren't catering to large networks - no hot-swappable drives or fancy RAID, for example (unless you use an external Thunderbolt caddy). They're leaving that market to Windows AD and Linux (mostly Windows AD, I suspect).
Oddly, the server software on the Mac apparently has fewer and fewer features with each release since Snow Leopard. I think Ars Technica has a pretty in-depth review of the server software.
And, if you're curious about client-side SSL support in general, every server test page simulates about 20 of the most popular (or important) clients. Scroll down to the "Handshake Simulation" section. If you click on a client name you get the full-page client report. https://www.ssllabs.com/ssltest/analyze.html?d=www.ssllabs.c...
Some info on how to correct this in common browsers where it can be corrected would be a great improvement (I use FF 26, which is marked Bad, and comments in this thread say that some of the problems can be fixed).
You can "fix" the result on this webpage by flipping some settings in Firefox, but that will cause some websites to stop working (guess why those settings aren't the default?). Newer versions (27 and up) will detect this and fall back appropriately, and they ship with those settings enabled by default.
If you absolutely require this website to tell you you're safe so you get a warm fuzzy feeling and can sleep at night, update to Firefox Beta. Don't just randomly go and change the settings, then wonder 2 weeks later why your banking website no longer works.
Also note that the site marks Firefox as Bad with TLS 1.0 because it can't verify for sure whether you have BEAST mitigation. But Firefox has BEAST mitigation.
Correct. Firefox and all other modern browsers use the so-called 1/n-1 split technique to mitigate the BEAST attack. It's actually possible to test if the mitigation is present; it's just that this site has not implemented it.
The documentation of the site actually just changed to imply they test for record splitting (it said they didn't before). However, testing with Firefox 26 shows it's still incorrectly flagged.
There's a commit from about 2 hours ago that enables 1/n-1 detection. So perhaps that's what they are running now. But it does not seem to be implemented correctly: when I access the site with a Java client (which does implement the split), the site says that it's vulnerable.
Say I change security.tls.version.max to 3, which changes its status from 'default' to 'user set'. In the future, if the default for security.tls.version.max is changed to, say, 4, would the fact that my setting has the 'user set' status prevent it from incrementing to the better default?
I'm not proposing that this is a risk or that Firefox behaves this way - I have no idea. Does anyone else know?
The (non-about:config) UI for these settings was removed for a reason. Even though it makes sense for these to be configurable for testing, end users are more likely to break their browser (make it less secure or make it incompatible with real sites they need to use) by tweaking these settings.
Firefox developers have had to reset these settings in the past in order to save users from self-inflicted insecurity.
Without an explicit effort by Firefox developers to reset these prefs, the prefs won't automatically reset to make sense in the future if the value space of the prefs grows. There is no guarantee of what explicit effort might be taken to deal with non-default values of these prefs in the future.
In my opinion, anyone who wants https://www.howsmyssl.com/ to tell them they are probably okay today should install Firefox Beta (or Aurora or Nightly) instead of manually changing these settings.
(Disclosure: I'm a Gecko developer but I don't work on TLS. Disclaimer: The above is my personal understanding and opinion, not any sort of official statement.)
Often with settings like this, they will flip the preference name to something like `security.tls.max_version` so that user-set and extension-set overrides are invalidated. They've done this with other common, significant settings that users often overrode.
I believe it would, but I'm not certain. This is one of the reasons not to mess with it and to just wait for the next version, which will be out RSN (real soon now) and has a better default.
For whatever it's worth, though, while I'm not sure how they're doing their version numbers and it may be quite a while until this is relevant: you could probably just set the integer really high (99, say), which would effectively translate into "try the highest version you've got". That might break things sometimes, but at least it wouldn't leave you stuck on a lower version later.
TLS 1.2 will be enabled by default in the next release, Firefox 27, which will be released the week of February 4th. So, in less than 4 weeks, Firefox release will be "good".
Site doesn't load on IE 6. I wonder if you've configured the SSL certs with SNI? Would have been nice to see the page turn red, but I guess I know the answer without having to run it...
Edit: It's not an SNI issue; IE 8 on XP can load the site.
MSIE6 does not support new enough SSL to be considered secure. I wouldn't be surprised if a lot of sites no longer work with MSIE 6 over the next two years as sites transition to turning RC4 and SHA1 off.
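(If you're curious whether a particular site has already started down that road, a quick check with the openssl command line works; example.com is just a placeholder host here.)

# If the handshake succeeds when only RC4 suites are offered, the server still accepts RC4.
$ openssl s_client -connect example.com:443 -cipher RC4 < /dev/null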
We all know this. My point is, the site gives some indication as to how effective any particular browser's SSL is, yet the site doesn't function for the worst offenders.
Trying to take the word "router" away from the typical home all-in-one network appliance is a silly way to waste everyone's time.
Yes, the device on my shelf at home doesn't support BGP, doesn't have any TCAM, and probably falls over with more than 10 routes, and is unlikely to ever have more than the default one. But we call them routers.
The layer 3 switches I work on can do BGP and can do layer 3 routing at hundreds of gigabits per second, but they still aren't routers.
Language is flexible, terms aren't strictly used, and I don't think anyone was helped by your "correcting" the grandparent poster.
A big minus coming from the hijacking of the "router" term is that people will start thinking that's how internet routing works, and through this newspeak come to tolerate these NAT-"routers" and firewalls.
We badly need to get back to end-to-end, or we won't be able to deploy new protocols and apps in a few years. E.g., it's doubtful whether BitTorrent could take off if it were invented today.
The vast majority of the times I hear the word "router" nowadays, it refers to the box everyone with broadband internet has at home, connecting the internet (via DSL/cable modem) and the home network (via ethernet/wifi).
Are you saying that's not a router? I'm pretty sure there are more of those deployed than the big kind.
Like the RFC says, routing is forwarding IP packets unmodified. If you mess with the insides of the packets, you're just a no good packet munging middlebox.
I think that would depend on how curl is built, not on curl itself. The OpenSSL 0.9.8 versions (still widely deployed) did not support TLS 1.1 or 1.2; I think that support was added in 1.0.1.
You can check which version of OpenSSL curl is using by doing
$ curl -V
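And to see which TLS version your curl actually negotiates (rather than just which SSL library it's linked against), you can hit the site's JSON endpoint directly and look at the "tls_version" field; I believe the path is /a/check, but treat that and the field name as assumptions:

$ curl -s https://www.howsmyssl.com/a/check | python -m json.tool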
Quite a few servers (for example, any server using the version of OpenSSL in Debian Squeeze) do not support anything newer than TLS 1.0, so you'll get quite a bit of breakage if you disable it.
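If you want to check a specific server yourself, a rough way to do it with the openssl tool (example.com is a placeholder, and the -tls1_2 option is only there if your local OpenSSL is itself 1.0.1 or newer):

# The handshake fails if the server tops out at TLS 1.0.
$ openssl s_client -connect example.com:443 -tls1_2 < /dev/null

# For comparison, forcing TLS 1.0 should still succeed against an old Squeeze box.
$ openssl s_client -connect example.com:443 -tls1 < /dev/null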
TLS 1.0 in Firefox 26.0 should be secure; it implements 1/n-1 record splitting, so it's safe against BEAST even though this website reports otherwise.
The site is mostly OK. It just needs to properly test for the BEAST vulnerability (ideally it would check for a 1 byte record, but a whitelist of user agents known to implement 1/n-1 record splitting would suffice in the interim), instead of assuming anyone with TLSv1.0 is vulnerable. And it should rate TLSv1.0 (with record splitting) as "Improvable" rather than "Bad".
But yes, being faced with a huge "Your SSL client is Bad" banner when visiting from up-to-date Firefox is FUD.
> Bad: Your client is using TLS 1.0, which is very old, possibly susceptible to the BEAST attack, and doesn't have the best cipher suites available either.
Interestingly I get "Probably good" using the Chrome browser on the same phone.
Will it make my client more unique (and thus identifiable) to a third-party passive adversary (who can sniff traffic) if I fine-tune my browser's settings, for example to support only TLS 1.2 and to remove all the RC4 cipher suites?
Nice site, I've been looking for something simple like that. (It sure beats https://cc.dcsec.uni-hannover.de/ in niceness, the website I used before to "check" my browser.)
Anyway I'm getting a nice "Probably Okay" using the latest Firefox Nightly.
The compression-related issue in the TLS protocol is known as CRIME. BREACH actually applies to HTTP response body compression. So, chances are that you should continue to use the breach-mitigation-rails gem, even if your server does not support compression at the TLS level. (Disclaimer: I am not familiar with this gem; just inferring its purpose from the name.)
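If you want to check where a given server stands on both fronts, here's a rough sketch (example.com is a placeholder host):

# CRIME vector: the "Compression:" line in the handshake summary should say NONE.
$ openssl s_client -connect example.com:443 < /dev/null | grep -i compression

# BREACH territory: does the HTTP response body come back gzip-compressed?
$ curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" https://example.com/ | grep -i content-encoding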