
It is as simple as that. If you are not encrypting your email locally, you are relying on the service provider to be honest. Claiming that Lavabit could not access your email is silly; Lavabit could easily access it, at the very least when you next logged in (presumably this is why the service was shut down). There is no meaningful difference between Lavabit's design and Hushmail's design, and both services had responded to previous government requests.

If you want private email, you need to encrypt locally before the message is sent and decrypt locally after the message is received -- end of story.
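To spell out what "encrypt locally" means in practice, here is a minimal sketch in Python driving the GnuPG command line. It assumes gpg is installed, your secret key and the recipient's public key are already in your local keyring, and alice@example.org is a placeholder recipient:

    import subprocess

    RECIPIENT = "alice@example.org"  # placeholder key ID / address

    def encrypt_locally(plaintext: bytes) -> bytes:
        """Encrypt on your own machine, before any mail server sees the message."""
        return subprocess.run(
            ["gpg", "--encrypt", "--armor", "--recipient", RECIPIENT],
            input=plaintext, capture_output=True, check=True,
        ).stdout  # ASCII-armored ciphertext, safe to hand to any provider

    def decrypt_locally(ciphertext: bytes) -> bytes:
        """Decrypt on your own machine, after the message has been received."""
        return subprocess.run(
            ["gpg", "--decrypt"],
            input=ciphertext, capture_output=True, check=True,
        ).stdout

The point is that the provider only ever handles the armored ciphertext; the secret key never leaves your machine.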




You state a natural consequence of information theory. The original poster states that "non-NSL warrants conform to the Constitutional protections and will continue to be served and fulfilled." Both these statements are true.

But does that mean that, because you can compromise a system, the government should be allowed to force you to compromise it, utterly betraying the purpose for which it was designed, and then to lie about it?


This is vague and I apologize... but it has to be that way... for now.

I couldn't compromise the system... but as it turns out the feds have a few secret capabilities the public doesn't know about... they still need certain things in order to break the security, though, and that's what I'm fighting: both for the ability to tell people what those secret methods are and the right not to be forced into helping the feds compromise my system/service...


It is not about what the government is allowed to do, it is about what is technically possible. If eavesdropping is technically possible, it is bound to happen, and constitutional protections will not be much help. The NSA revelations should tell you as much: the constitution is only tangentially relevant to the day-to-day decisions at the NSA about who to spy on or how to conduct surveillance.

The reality is that laws can change, and that when we create systems that are technically easy to abuse, there will be people who push for the law to change so that the system is legally easy to abuse. The police are always looking for more power, and if you have something like Lavabit, the police will want on-demand access to it -- and they will want to reduce the barriers to getting a warrant, or even remove the requirement for a warrant in the first place (say, if the emails are older than 180 days). By having a back door inherent in its design, and by positioning itself as the gatekeeper for that back door, Lavabit invited abuse, and its users were lucky that the founder is a man of principle.


The system was only designed to protect data at rest. I followed the NIST secure coding guidelines when processing sensitive data. That should have made it difficult to compromise the system without changing the code.


...but it is easy to change the code, and easy to change it without alerting your users, and that is the point. I do not want to be rude to you, but the reality is that no matter how you designed the system there is a gaping and exploitable back door. The fact that secret keys are ever processed by your servers means that you have absolute power to decide if your users have any privacy at all.

To be clear, I think it is fantastic that you took a stand on this issue, and I wish more people had that kind of spine. The problem is that your system depends on you being a man who sticks to his principles.


Yeah.

Imagine the existence of a guy called Madar Mevinson, who runs a company called Mavabit... that publicly shuts down over privacy concerns, but then reopens in triumph after a court battle... but, little did we know, Madar was a government operative the whole time! (Or a non-state-actor criminal. Or just a creepy stalker. Whatever.)

And, because this is the internet, I'll mention that I am absolutely not suggesting that these things are true of Mr Levinson and Lavabit. But it's a bad security model to trust the ethics of a stranger, and from what I understand of Lavabit, that's required here. Maybe I misunderstood Lavabit?

PS: even so, 'mad props' to Mr Levinson, for taking a brave and productive stand


And the only problem with this is that unless you can unequivocally prove that the public key you are encrypting with belongs to the intended recipient, you're stuck. Until we have an infrastructure that allows truly secure proof of identity, you can be assured that email (and every other form of internet communication) is insecure. The only secure email is the one you never sent... or, if you're using webmail, the one you never wrote. If you want truly secure email, only encrypt with a public key if you can prove beyond a shadow of a doubt that it belongs to the person you intend to write to. The standard methods of delivering a public key over the internet aren't foolproof.
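To make the "prove the key belongs to the recipient" step concrete, here is a minimal sketch, assuming GnuPG is installed and the key is already in your local keyring. The key ID and expected fingerprint are placeholders; the fingerprint is something you would obtain out of band (in person, over the phone, printed on a card):

    import subprocess

    KEY_ID = "alice@example.org"  # placeholder key you intend to encrypt to
    EXPECTED_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder, verified out of band

    def local_fingerprint(key_id: str) -> str:
        """Read the full fingerprint of key_id from the local gpg keyring."""
        out = subprocess.run(
            ["gpg", "--with-colons", "--fingerprint", key_id],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            if line.startswith("fpr:"):
                return line.split(":")[9]  # field 10 of an fpr record is the fingerprint
        raise KeyError(f"no fingerprint found for {key_id}")

    # Only encrypt if the keyring's fingerprint matches the one you verified
    # independently; otherwise you may be encrypting to an impostor's key.
    assert local_fingerprint(KEY_ID) == EXPECTED_FPR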


Active MITM attacks are not easy by any stretch, certainly not against email. On top of that, if you do not begin the attack before the first messages are sent, you will not get another chance, at least not easily. It would also not be enough to control just the mail server; you need to control every communications channel available to the target, which is a substantial effort and far beyond the scope of what we are trying to achieve with email privacy. Frankly, anyone who can pull that off could more easily break into your home and install a keystroke logger somewhere.

PGP's model works pretty well. You get the key from a key server, you communicate through a (presumably different) mail server, and if you need more protection you use the web of trust. Imperfect, sure, but no security system is perfect, and at least with this the barrier to spying is high enough to stop mass surveillance (not true of Lavabit, whose users just have to be thankful that the service was shut down over such a request).
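As a rough illustration of that workflow (again assuming gpg is installed; the keyserver URL and fingerprint below are placeholders), fetching a correspondent's key and inspecting who has signed it looks something like:

    import subprocess

    KEYSERVER = "hkps://keys.openpgp.org"  # placeholder keyserver
    FINGERPRINT = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder

    # Fetch the key over a different channel than the mail server you will
    # send through, then list its signatures so the web of trust can inform
    # how much you trust the key-to-person binding.
    subprocess.run(
        ["gpg", "--keyserver", KEYSERVER, "--recv-keys", FINGERPRINT],
        check=True,
    )
    subprocess.run(["gpg", "--list-signatures", FINGERPRINT], check=True)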


The WoT concept is flawed but it is the lesser evil. Also it is not the critic's place to dictate what standard of assurance anyone else should accept.

All schemes to verify identification of an entity with a key are probabilistic and in some degree unreliable. Even if the correspondent is your best friend and you exchange keys in person, there is the possibility that one of you will fail to maintain exclusive control over his/her secret key. The question is which methods are best in a relative sense - and what qualifies as "good enough" is for each operator to decide.

Of the two major alternatives, the CA system (and other schemes of similar design, relying on trusting third parties) and the web of trust based on individuals' estimations - of these, the latter is clearly more reliable. It was hard to convince anyone of this years ago, but the tech world has (mostly) now recognized the folly of third-party systems after painful experience.

EDIT: Corrected "former" to "latter", per post below - thanks!


You wrote:

> Of the two major alternatives, the CA system ... and the web of trust ..., of these, the former is clearly more reliable.

That is, you wrote "the CA system is more reliable than the WoT".

I'm pretty sure you didn't mean what you wrote. It seems to contradict the rest of your post.

Elaborating for the benefit of other readers: we have lots of evidence that the Certificate Authority system has been repeatedly compromised, certainly by state actors and probably also by (other) criminals. There are semi-solutions, like certificate pinning. One alternative (the only alternative I know of) is not trusting any Authority to get good certs, but rather getting them yourself, or from people you trust, or from people trusted by people you trust, etc... thus the Web of Trust. This alternative is pretty poor, but it might be less broken than CAs.
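For readers unfamiliar with pinning, here is a minimal sketch of the idea using only the Python standard library; the host name is a placeholder and the pinned digest is assumed to have been recorded on an earlier connection you trusted:

    import hashlib
    import socket
    import ssl

    HOST = "mail.example.org"  # placeholder server
    PINNED_SHA256 = "<hex digest captured on a previously trusted connection>"

    def presented_cert_sha256(host: str, port: int = 443) -> str:
        """Hash the certificate the server actually presents right now."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False   # the pin, not the CA chain, is the check here
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    if presented_cert_sha256(HOST) != PINNED_SHA256:
        raise RuntimeError("certificate does not match the pinned fingerprint")

The trade-off is that a legitimate certificate rotation also trips the check, which is why pinning is only a semi-solution rather than a replacement for a trust model.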



