Hacker News new | past | comments | ask | show | jobs | submit login

Snowden was a factor, but not the only one:

- CPUs didn't yet have hardware acceleration for encryption (AES-NI) as they do today, so enabling SSL on your web server could significantly reduce throughput

- Getting a certificate for your website was expensive and complicated; now Let's Encrypt provides them for free, with automated issuance




Wasn’t the server load for SSL something like 3-5%? That doesn’t strike me as much of a factor compared with the complexity involved, especially the confusion added by e.g. Thawte hawking its Extended Validation product.


"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load" according to Google back in January 2010 [1]. This was about the same time as Intel introduced AES instructions, but the post suggests that this wasn't a big factor in their conclusion that TLS simply isn't computationally expensive.

[1] https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...


> Wasn’t the server load for ssl something like 3-5%?

Depends on the data rate being handled. I pegged a CPU core doing encryption just a bit over a decade ago because of a high data rate. If you’re pushing >500 Mb/s without hardware-accelerated encryption (or NIC offload), it puts a pretty hefty strain on resources.
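A back-of-envelope check of that claim. The cycles-per-byte figures below are rough, commonly cited ballparks (roughly 25 cycles/byte for table-based software AES, around 1 cycle/byte with AES-NI), not measurements from this thread:

```python
def cpu_ghz_needed(throughput_mbit: float, cycles_per_byte: float) -> float:
    """GHz of CPU time consumed encrypting at the given line rate."""
    bytes_per_sec = throughput_mbit * 1e6 / 8
    return bytes_per_sec * cycles_per_byte / 1e9

# 500 Mb/s with unaccelerated software AES (~25 cycles/byte):
print(round(cpu_ghz_needed(500, 25), 2))  # -> 1.56 GHz, most of a mid-2000s core

# The same rate with AES-NI (~1 cycle/byte):
print(round(cpu_ghz_needed(500, 1), 2))   # -> 0.06 GHz, a rounding error
```

So at those assumed figures, bulk encryption at >500 Mb/s really could saturate a single core before hardware acceleration, which is consistent with the experience described above.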


My P4 could do >400 Mb/s AES-128 in 2003. (We tested encrypted connections over FireWire.)



