Hacker News
Tcpcrypt (tcpcrypt.org)
92 points by rnicholson on Aug 16, 2010 | 70 comments



I really like it when people acknowledge the difference between passive and active attackers. I really, really want there to be more passive protection by default, without requiring every computer/node/server to have a trusted certificate.

I'd really like to see better handling of this for HTTPS as well: a mechanism by which a site can offer passive protection of communications without guaranteeing its identity, meaning a MitM attack remains possible if you're willing to accept that risk.


I really dislike it when people distinguish between passive and active attackers. Modern attackers are active by default; they proxy connections from phishing sites or attack the DNS.

Moreover, even in the '90s, when people actually did stuff like attaching solsniff to the SprintNet backbone, being passive equated to being active, because the passive vantage point gives you TCP sequence numbers, which is all you need to hijack a connection from across the Internet.

Your browser already does the thing you're asking for with regards to SSL. Just click through the warnings, which are telling you "encryption without authentication offers very little security, but if you're sure, go ahead and pretend it's doing something for you".


> Modern attackers are active by default; they proxy connections from phishing sites or attack the DNS.

The biggest attackers by far are the NSA and similar agencies -- that's passive attack -- scanning your emails, VOIP calls, browsing habits, etc.


Even without signed certificates you can still protect against MitM attacks. Your browser just has to remember what the certificate was the first time, and alert you if it changes.

Sound familiar? It should, ssh(1) does this by default.
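The key-continuity ("trust on first use") scheme described here can be sketched in a few lines of Python. This is an illustration of the idea only; the store and function names are hypothetical, not any browser's actual API:

```python
import hashlib

# Hypothetical pin store mapping host -> SHA-256 fingerprint of the
# certificate seen on first contact (analogous to ssh's known_hosts).
pins = {}

def check_continuity(host, cert_der):
    """Trust-on-first-use check over a certificate's DER bytes."""
    fp = hashlib.sha256(cert_der).hexdigest()
    pinned = pins.get(host)
    if pinned is None:
        pins[host] = fp     # first contact: pin whatever we saw
        return "first-use"
    if pinned == fp:
        return "ok"         # same cert as before
    return "changed"        # alert: possible MitM (or a legitimate rotation)
```

As the thread goes on to discuss, the unavoidable weakness is the "first-use" branch: the very first contact is unauthenticated, so a MitM present at that moment gets pinned as the legitimate host.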


The first time you connect to any SSH server, the connection can be hijacked. People used to do this for sport at Usenix.

Why would anyone accept this weakness with their bank account? My mom barely understands the lock icon.


What sort of person goes to a conference like Usenix and connects to an external server for the first time? I've been to a dozen conferences, at all of which I've used ssh, and I've never connected to an external box for the first time.


I can't imagine what possible point you could be trying to make. Am I making this up or not? I doubt I am, but who cares? The issue with first-connection security in SSH is a fact, not an opinion.


Yes, there's no security on the first connection. But the vast majority of connections are not first connections.

If I ssh to a new box at home, then go to Usenix and ssh warns me that the host key changed, I'll have been protected against a MitM attack, even though the original connection wasn't authenticated.


That model works passably well for SSH connections. You make perhaps tens of those connections every day, from perhaps several devices.

It doesn't work at Internet scale. It's too insecure. It's very unlikely that key continuity is going to replace PKI in HTTPS.

Do I have strong opinions about PKI vs. key continuity? No. All I'm saying is that it's not a panacea. SSH-style key continuity is not the global solution for the certificate warning Firefox is annoying you with.

You realize that Firefox already does this, right? Just hit "add exception" when the dialog pops up. Look! It works just like SSH!


Arguably, the right way of doing it is to use both: PKI to authenticate the first connection, then remember the chain to prevent bad CAs from giving certs to people pretending to be BoA.


Why not abandon central key authorities and go distributed? Bring social networks and the web-of-trust together.

  "3 of your friends have said that they trust this
  certificate from Amazon.com. Do you want to accept it?"


If I had a dollar for every time a friend clicked a .exe email attachment, I'd be a very wealthy man. I damn sure don't trust my friends to verify the security of a cert.


I have some security-savvy friends (who I'd trust) and some not so smart friends (who I don't trust on this subject). So, the obvious idea is to put weight coefficients on WoT digraph edges. But I have a feeling that this would be too complicated to manage.


Your mom would trust you and then her friends would trust her, then they'd all get burned because she meant to click No one time and it would all be your fault.


I don't care how you set up the PKI. You can use a carrier pigeon trust network if you want. If you can beat SSL's PKI then brilliant. The important part is to use PKI to auth first contact and verify no unseemly changes happen during subsequent contacts by bothering to remember the previous cert chains.


considering about every third time i visit a high traffic site with ssl i get a new cert pushed to me, i don't think ssh is a good comparison.

also, perhaps i skipped over this, but tcpcrypt seems like a level of encryption well below application, thus if we did get a hijacker trying to 'change the self signed cert' after an initial connection (or just dropping the encryption) we wouldn't know, unlike ssl/browsers. am i incorrect?


This is basically useless as-is for things like banking security, because the security of SSL is premised on inherent resistance to "active" (MITM) attackers.


The stuff that matters has already been secured. It looks like tcpcrypt is designed to encrypt everything else.


Correction: the stuff that really matters (like banks) has already been secured.

The other stuff still matters a lot, especially when users often share passwords between secure and non-secure websites, or when companies offer a single sign-on service without mandatory encryption (I'm looking at you, Facebook and Twitter).

This would be better than nothing (if it were ubiquitous), but I'd rather everything use SSL.


> I'd rather everything use SSL

That's not going to happen as long as browsers treat the continuum of security like this:

    BEST: Oh nice, an SSL connection with a CA-signed cert
    COOL: A normal HTTP connection
    OMGWTFBBQ: ZOMG SOMEONE HAS A SELF-SIGNED SSL CERT THE
               SKY IS FALLING LET'S PRESENT 10 CLICK
               THROUGH DIALOGS BEFORE THE USER CAN OPEN
               THE BLOODY SITE
As opposed to:

    BEST: Oh nice, an SSL connection with a CA-signed cert
    COOL: Oh nice, an SSL connection, but no CA signing; treat like HTTP UI-wise
    COOL: A normal HTTP connection; show the user the traffic isn't encrypted.


The reason your browser goes "ZOMG" when it sees a self-signed certificate is that there is no way to distinguish the "'avar made himself a handy self-signed certificate" case from the "someone has substituted a random certificate into my Bank of America login". Think about it.

Long story short: don't hold your breath waiting for browsers to chill out about self-signed certs.


Yes there is, think about it :)

Bank of America has a CA signed cert, so when you go there you display a golden "YOU ARE SECURE, CITIZEN" banner at the top with a lock icon etc.

But if someone doesn't have a CA signed SSL cert you treat it no differently than normal HTTP traffic.

Then you train users not to submit their financial information to sites that don't have a CA signed SSL certificate (as indicated by the giant "YOU ARE SECURE, CITIZEN" banner).

You have to do that anyway: when I go to do my online banking, I visit a plain HTTP site and then click a private-banking link that takes me to an HTTPS site.

The easiest way for a MitM to spoof that would be to just direct the user to a plain HTTP site when he goes to do his private banking. So even today users have to understand that they must submit this sort of data over SSL and that the cert involved has to be CA-signed.


Think harder. How does Firefox know, when it sees a self-signed certificate, that the site wasn't supposed to have a certificate signed by a CA?

It doesn't, and can't.

If the "ZOMG" warning --- which, let's just keep calling it that, because I think it's funny that you think one of the most important pieces of UI in Internet security is a "ZOMG" measure --- didn't exist, attackers could substitute their own "self-signed" certificates for Bank of America's Verisign-signed cert when hijacking www.bankofamerica.com.


Yes, it doesn't, and can't. But I'm saying that it doesn't matter.

When people visit "foo-bank.com" the easiest way to do a MitM is to simply do a http MitM. Then you don't have to forge any SSL certificates, or rely on the browser not yelling enough.

The only way to guard against that is to train users to recognize that they shouldn't be sending money through a non-CA SSL connection, as indicated by their browser with a friendly "Secure" icon somewhere.

Thus, if browsers treated non-CA-signed SSL connections just like normal HTTP connections, i.e. used the encryption but didn't present any "secure" banners, you wouldn't make things any less secure.

Non-CA signed SSL traffic is just as secure as unencrypted HTTP traffic (i.e. not), and browsers should treat both of them equivalently.


This is why your bank doesn't let you make withdrawals from your account on port 80.

You need to keep re-reading this sentence: there is no difference to the browser between a TLS connection bearing a self-signed cert and a TLS connection that was supposed to bear a CA-signed cert but isn't. It can't tell the difference.

"But that untrusted connection is no less secure than an HTTP connection!", you retort. Wrong. The HTTP connection isn't lying to you about how secure it is. The HTTPS connection with the bogus cert is, as far as Firefox is concerned, lying. And for most users, most of the time, it's right. Something is wrong with the connection.

I understand that it drives you nuts that there is collateral damage to this security warning. Yes. There is collateral damage. You get a very noisy exception dialog even when you wanted a self-signed cert. But the alternative to that dialog allows attackers to spoof Citibank, and the browser vendors have all decided that your self-signed cert is less important than Citibank.


> This is why your bank doesn't let you make withdrawals from your account on port 80.

It doesn't matter that my bank doesn't allow withdrawals on port 80 as long as a MitM can offer them there, and you can't guard against that unless you train users to expect CA-signed SSL sites for things like that.

> You need to keep re-reading [...]

I've already read that and replied to it at: http://news.ycombinator.com/item?id=1609818

> The HTTP connection isn't lying to you about how secure it is.

A non-CA HTTPS connection isn't lying to me, it's purely a matter of implementation how you treat that sort of thing. You seem to think that certification and encryption are inseparable, they aren't.

You can have SSL encryption presuming that you're talking to a known-good party, while also having the ability to initiate connections to CA-signed parties.

> [...] There is collateral damage.

Yes, but there's no need for it. Browsers can decide how they present SSL; if they consistently present a big "You're secure" banner when talking to CA+SSL sites, users will get used to it when transferring money and will spot that something's up when it's missing.


> there is no difference to the browser between a TLS connection bearing a self-signed cert and a TLS connection that was supposed to bear a CA-signed cert but isn't.

As long as my browser can tell the difference between a Citibank with a CA-signed cert and a Citibank without one, why should I care about the exact manner in which the wrong Citibank doesn't have what it should? What makes encryption without authentication so much more dangerous than plaintext?

Edit: You talk about forged certificates a lot. Do you mean that browsers act as if certificates are for a secret club only, and anyone who is not a member (CA) but uses the technology is infringing? I could believe that, as it jibes with how I initially understood the business model. The problem, then, is that non-members like to authenticate too, and it is assumed they will do so via other channels and that their users are technologically sophisticated enough to import the certificates - when the key continuity thing is actually more popular.

Since the browsers check your credentials anyway, I see no point in getting paranoid about seeing an unfamiliar signature. That's only a deception if your guards consider the mere existence of a signature as proof of authorization - that is, if they're human. If they always check, there is no point in bluffing, and so an unrecognized signature should not be considered treachery.


I'm kind of confused by your questions, but: to be valid, a certificate needs to be signed by a CA for whom your browser holds a root CA certificate.

Without authentication, anybody who controls routing, ARP, or the DNS can break your encryption; they just stick themselves in the middle.


The original question wasn't "why is encryption without authentication a bad idea"; it was "why is it worse than nothing". The only thing I can come up with is that some users look for https instead of the newer security banners.

The additional question was "is a non-CA-signed certificate irrelevant to authentication, or is it a forgery" - my opinion would be the former, browsers seem to think the latter.

It is my understanding that adding trusted third parties is possible for the client, but considered to be only for advanced users, and that adding security exceptions for self-signed certificates is unexpectedly common. I further consider the terminology unfortunate (servers with valid certificates are not certified, they are authenticated).


How many people look at the 'https' part rather than the padlocks or large green bars telling you it's secure? I can count the number of those people on one hand.


There's a few problems with your hypothetical, from the browser implementor point of view:

1) "Then you train users..." -- if your security mechanism relies on this you have already failed.

2) In the attack case, the user would have to look for the absence of an indicator. It's very hard to notice the absence of an indicator as a sign of danger, especially if there are many normal situations where it's fine for the indicator not to be there. Imagine this design for a car engine warning light:

- Normally it's always off.
- Whenever you turn left, it goes on if everything is fine.
- However, if your engine needs service, then when you turn left, the warning light stays off.

Do you think this would do a good job of alerting people to engine trouble? Your suggestion is the same thing. It's hard to train yourself to notice the lock icon not appearing, even if you are very security-aware (which most users are not).

3) If the user has a bookmark, Top Sites/Speed Dial page, or URL autocomplete entry for the SSL URL to their bank, it's hard to redirect them to vanilla HTTP. So in fact, silently accepting self-signed certs creates more MITM opportunity.

4) Strict Transport Security will likely close the remaining gaps in #3 (e.g. typing the domain name in a fresh browser instance) over time.


That's not strictly true. It doesn't have to be like that. Why can't it be handled like SSH? You can cache the certificate and raise the ZOMG flags when a known site changes behavior.

As it is today, browsers scream bloody murder and then require you to store a permanent security exception for the site.


It is strictly true that in the HTTPS security model, there is no difference between a self-signed certificate and a forged certificate. The browser can't know which sites are supposed to have real certs and which are supposed to have fake ones. Secure channel protocols are tricky, and this is one of the reasons why.

You propose key continuity as a (drastic) extension to the HTTPS security model. I'm ambivalent about key continuity. It sort of works with SSH, but also used to be one of the more often-exploited weaknesses with that protocol, before we entered an 8-year span where memory corruption flaws in sshd were an even easier way to mess with it.

Key continuity works when you have long-term relationships between clients and servers. It doesn't work, at all, when multiple people share machines (as at kiosks), or when people change machines or reinstall. Also, unlike cookies, which gracefully repopulate themselves when you clean them up, the "HTTPS key continuity" model requires you to store certificates, for every website you ever talk to and every site those sites depend on, forever.

Instead of the (I think kind of hacky) key continuity idea, browsers should just come up with a way to allow users to opt-in easily to new CA's.

Meanwhile, it's pretty annoying when people talk as if the Firefox bogus-certificate UI is an affront to everyone's intelligence. No, without that dialog, HTTPS wouldn't work.


It's because all these self-taught PHP developers think it's some kind of affront to have to pay $25 for a CA-signed certificate for their shopping cart application.

Even though you can now get CA-signed certificates for free ( eg http://cert.startcom.org/ )!


A lot of the SSL vs. SSH discussion on here is just demonstrating the fact that there is no one-size-fits-all solution for authentication.

The point of tcpcrypt is to get the best security possible under any setting, so it can be used with both SSL- and SSH-like settings. See slide 5 of the talk on the web site:

http://tcpcrypt.org/tcpcrypt-slides.pdf

One thing I haven't seen discussed yet is the fact that come October, the EKE patent is going to expire, which means that all of a sudden it's going to be legal to do strong authentication using only human-chosen passwords. Strong password authentication is desperately needed, because people overwhelmingly both choose weak passwords and don't think carefully about where they send those passwords.

Somewhat independent of tcpcrypt, section 4.3 of the tcpcrypt Usenix paper suggests a nice and simple secure password-authentication protocol. Deploying such a protocol would make a huge difference, except... what are you authenticating? You can prove possession of a password, but this doesn't actually protect you unless the authentication is tied to session traffic, and you are authenticating communication endpoints. SSL, IPSec, and even SSH don't provide adequate hooks for doing this (though SSH would be easier to retrofit than the other two). Tcpcrypt does.

So the way to view this is that in the absence of authentication, tcpcrypt will be vulnerable to MITM. But as soon as you go to authenticate yourself to a server (by typing a password or verifying a certificate), the authentication will fail, and the MITM will no longer be able to deceive the user.


There's not much point in encryption unless you know who you're talking with.


Mark Handley makes that very point on page 3 of his paper here (sorry about the PDF):

http://www.cs.ucl.ac.uk/staff/m.handley/slides/tcpcrypt.pdf

"Encryption without authentication is like meeting a stranger in a dark alley."

However, on subsequent pages he writes about ways to do authentication over tcpcrypt using the session ID, including signing it with an SSL certificate or using HMAC with shared passwords. As he says, "many different authentication schemes [are] enabled by the session ID concept." The tcpcrypt.org site itself says "Tcpcrypt abstracts away authentication, allowing any mechanism to be used, whether PKI, passwords, or something else."

Perhaps these mechanisms can help us gradually avoid the problems with both central authority schemes and key continuity schemes, but I haven't thought about it or researched it enough yet to know. I have a love-hate relationship with SSL, and when I hear about hijacked CAs and even see root certs in the name of various governments in Firefox, I get the chills. But I can appreciate SSL's strengths. On the other hand, I really like SSH's key continuity, but I also appreciate some of its difficulties on the web. As an experiment, I recently deleted all CAs from Firefox and started approving web sites by exception one by one.

Does anyone here have a good handle on how tcpcrypt's "abstracted" authentication schemes might thread that needle, simultaneously avoiding the difficulties and pitfalls in both camps?


Sounds like the ubiquitous opportunistic encryption that was a goal of FreeS/WAN.

http://www.freeswan.org/ (2004/03/01):

FreeS/WAN is no longer in active development. Although we've created a solid IPsec implementation widely used to construct Virtual Private Networks, the project's major goal, ubiquitous Opportunistic Encryption, is unlikely to be reached given its current level of community support.


I looked into FreeS/WAN in the early days (1997?) and they made two architectural mistakes: You had to put some special records in your reverse DNS zone (which most people can't do) and it introduced a 30-second delay the first time you contacted each non-FreeS/WAN IP address (which made Web browsing completely unbearable). These misfeatures protected against MITM attacks, but they also ensured that FreeS/WAN would never be deployed.


I can't find this anywhere, and I looked in the source code as well, what is the license for this?

The Linux kernel module is GPL, however there is no license on the rest of the source code. I have no idea if I can use any part of this and port it over to FreeBSD as a kernel module for example.


> port it over to FreeBSD as a kernel module for example.

Just for the record: As long as I'm the FreeBSD security officer, this is not going to be in the FreeBSD source tree.

(I can't stop you from building and distributing a kernel yourself, of course.)


Thank you. My thoughts were something along the lines of: who do I trust more, the well-vetted kernel TCP stack, or this unknown modification to my TCP stack in the name of security? It's hard enough making sure the normal stack doesn't have any issues (which, in my understanding, is why Windows/OSX/Solaris have taken parts of the BSD TCP stack).


Oh no, it was merely an example. I would rather prefer my code not to be in the FreeBSD source tree, I don't think it is good enough yet.


Why reinvent the wheel when we already have IPsec?


Basically, IPSec requires configuration and tcpcrypt doesn't.

There's more in the Usenix Security paper: "A big challenge to IPSec is that it breaks middleboxes that require access to the transport layer. Given the increasing prevalence of NAT in particular, this excludes a large portion of the population from using IPSec. Tcpcrypt, by contrast, operates at the transport layer and so avoids these problems. Another challenge for IPSec is that it is hard to create a notion of a “session” in a connection-less environment (the network layer)."


Have you configured many IPSec connections? It's a fricking pain in the rear. Nothing is ever compatible "out of the box" and there are so many knobs to tweak and configure. Which is probably part of why everyone just uses SSL now, even for VPNs.


"Why introduce this HTML/HTTP thing when we have Gopher? Don't re-invent the wheel!"

Re-inventing the wheel is good. It's the only way we'll ever get better wheels.


Indeed; for all we know, our wheels are oblong if not triangular. Without reinventing the wheel sometimes, you'd never come up with stuff like this: http://en.wikipedia.org/wiki/Mecanum_wheel


There were potential licensing issues with Gopher; by the time the creators of Gopher realized that this had scared off a lot of people and that they had shot themselves in the foot, everyone had settled on using the Web, which had no such licensing issues.


IPSec doesn't support authentication well. For example, if you have a shared secret like a web cookie, how would you use this to authenticate one endpoint of an IPSec connection? It's hard, because the granularity of IPSec session keys is not the same as the granularity of tcp connections. Tcpcrypt, by contrast, makes it easy to do this--just hash the session ID together with the other authentication data.
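The "hash the session ID together with the other authentication data" trick can be sketched with an HMAC. This is an illustration of the concept only; the function below is hypothetical, not tcpcrypt's actual API:

```python
import hmac
import hashlib

def channel_bound_auth(shared_secret, session_id):
    """MAC the tcpcrypt session ID with a shared secret (e.g. a web
    cookie), binding the authentication to this particular session."""
    return hmac.new(shared_secret, session_id, hashlib.sha256).hexdigest()

# A MitM necessarily splits one connection into two tcpcrypt sessions
# with different session IDs, so the two endpoints compute different
# MACs, authentication fails, and the attack is exposed.
client_mac = channel_bound_auth(b"cookie-value", b"session-id-seen-by-client")
server_mac = channel_bound_auth(b"cookie-value", b"session-id-seen-by-server")
```

The key property is that the authenticator depends on the session ID each endpoint actually observes, which is exactly the granularity IPSec lacks.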


i really don't like any aspect of this at all. so, my encryption is not guaranteed, i don't know if it's working, it doesn't cover all network traffic, and it isn't shipped by default with any operating system. i'm never going to use this and i'm never going to recommend it to anyone for any purpose. (sorry, i drank a jug of haterade this morning)


Obviously the goal is to be built in to the OS. Flipping your objection around, you could say that with tcpcrypt more of your traffic will be encrypted than is now; it's no worse than the status quo.


what's the point of "more encryption than you have now" if it can be disabled by mitm (this includes the 'wireless networks' cited as a good reason to use it) and you don't even know if it's working? if i want to use encryption, i want to know it's encrypted. there is no point otherwise.

an example: i'm using an unencrypted network and plaintext protocol. if there's no attacker on the network, my data is "safe." if there is an attacker, my data is effectively compromised.

if i'm using this solution around the same network connection and there is no attacker on the network, my data is safe. if there is an attacker, they'll see it's this lame attempt at encryption, disable it through mitm, and my data is effectively compromised.

in a real life setting i can't depend on this to secure my data. however, ssh, openvpn, ipsec and other solutions provide assured encryption (unless you screw up the configuration). you don't even need a CA with ssh or openvpn - copy your keys or certs by a usb key once and you immediately have a strong defense against mitm.

i'd love to see sites provide a secure encryption alternative to SSL which allows for out-of-band transfer of secure keys and 1-to-1 authentication to prevent a nation's clandestine operations from peeking at my traffic. that probably won't happen though. in the meantime i protect my last-mile access with a secured tunnel; i just pray SSL is strong enough to not be attacked at a hop from my tunnel endpoint to the site.


The point is to upgrade the connections that are unencrypted now to use encryption. A MitM can only insert his own data into your dialog, he can't passively eavesdrop on an existing encrypted connection.


This is the waxed-mustache damsel-tied-up-on-the-railroad-tracks notion of Internet attackers, who, getting a sudden bee in their bonnet to torment 'avar from Hacker News, leap upon his connections only to find that, drat!, 'avar had enabled tcpcrypt and all his connections are safeguarded.

The real Internet attackers owned up your ISP 2 years ago, and on the real Internet you originate hundreds of new connections across that owned-up Internet every minute. The security negotiation on any "old" connection is irrelevant. All that matters is the security of each new connection. If they're safe, the protocol works. If they're not, the protocol is broken.


Nobody said anything about "old" connections in this thread. wmf mentioned how encrypting everything (even if you don't know who's at the other end) is "no worse than the status quo".

Protocols that don't use encryption now aren't broken, and they could be slightly improved by transparently using encryption, because at least you'd deter passive sniffing.


If you and a friend exchange certificates, TLS protects you from most governments, including US domestic law enforcement.

If you and a friend set it up, tcpcrypt protects you from people who don't know to MITM TCP connections.


Yes, and if we meet and exchange one-time pads we'll be even more secure.

However neither that nor TLS "work out of the box: require no configuration, [and] no changes to applications".


No, it's going to be about as hard for (say) the FBI to break TLS as it is for them to break an OTP; either way, it isn't happening.

Can you say that for tcpcrypt? Of course not.

I don't know what your second sentence even means. I didn't do anything to configure TLS. What did you have to do?


I'm citing the design aims of Tcpcrypt, from their website.

Not everything uses HTTP, a bunch of things like Torrent, IRC etc. use unencrypted protocols by default, and you have to explicitly add support to each one for encryption. Either by adding SSL support to the programs themselves, or by setting up a tunnel.

That's not the same as using opportunistic encryption that can happen at the kernel level. You could also do that with SSL to achieve the same aims, but the point is that it's opportunistic and happens without further userspace configuration.


Can you really imagine someone going to all the trouble to sniff your traffic - and then giving up at that point because you have tcpcrypt enabled, given that it's trivial for them to perform a downgrade attack?

So it ends up being opportunistic encryption, that's enabled when you don't need it and disabled when you do. It's just a security blanket.


Tcpcrypt is aimed at passive attacks. Not everyone who has the power to listen in and perform a passive attack can modify the traffic and execute an active attack.

Even when you can do an active attack, it's not trivial in practice. The user may get suspicious if they've previously been able to initiate a tcpcrypt connection but can't do so now, and if you do this on a big scale (e.g. government-sanctioned sniffing at an ISP) you'll probably be found out.


can you give me an example of a connection that can be passively sniffed but not injected into? unless you have some wacky physical media with no Tx, if you can see their traffic you can at the very least send spoofed packets to the destination or source. if you're on the source's LAN you just spoof DNS and/or ARP for either the default gateway or the destination. if you're on the destination's LAN you can spoof the destination or do packet injection. if you're in between either LAN you can arguably do any damn thing you like, depending on network topology and routes. but i don't see a case where i could see the traffic and not do something to either downgrade the connection, hijack it, or mitm it.

so they're risking their data on an uncertain possibility that a user might catch on that i've been downgrading their sessions and stealing everything? once you explain this to a user, do you think they'll really trust it?


Agreed - it's just a way of wasting CPU cycles on a false sense of security. Anyone who is going to the trouble of sniffing your traffic is going to go the extra yard to disable your "tcpcrypt".

Especially when an automated MITM tool to do so becomes available.


The performance comparison to SSL is not really fair, since tcpcrypt does not offer security against passive attackers.


You haven't read the site, have you?

> If, however, a Tcpcrypt connection is successful and any attackers that exist are passive, then Tcpcrypt guarantees privacy.


Good catch. I meant to say that tcpcrypt is vulnerable to active attacks, rather than passive.

The point is that it is not a useful comparison to say that tcpcrypt is 36x faster than SSL, when it offers a weaker level of security.


If you use X.509 server authentication with 2,048-bit RSA keys, tcpcrypt offers about a 25x speed-up over SSL for equivalent security. (Actually slightly better, since tcpcrypt offers forward secrecy while, in the benchmark, SSL does not.) The key optimization is batch signing, where a single RSA signature can authenticate a bunch of connections at once. There are graphs showing this in the paper and talk slides.
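The shape of the batch-signing optimization can be pictured as producing one digest that covers many session IDs, so a single RSA signature vouches for all of them. This sketch is a simplification under assumed structure (the paper's actual construction may differ, e.g. a Merkle tree so each client receives a short inclusion proof), and the signing step itself is omitted since it needs a real crypto library:

```python
import hashlib

def batch_digest(session_ids):
    """Fold many tcpcrypt session IDs into one digest; a single RSA
    signature over this digest can then authenticate every
    connection in the batch at once."""
    h = hashlib.sha256()
    for sid in session_ids:
        h.update(hashlib.sha256(sid).digest())
    return h.hexdigest()

# One signature amortized over N connections, instead of N signatures.
digest = batch_digest([b"sid-1", b"sid-2", b"sid-3"])
```

The amortization is where the benchmark speed-up comes from: the expensive private-key operation happens once per batch rather than once per handshake.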


How serious can they be about ubiquitous encryption, if they can't even be bothered enabling HTTPS on their site?


They don't look very trustworthy to me.



