Here come the encryption apps (cryptographyengineering.com)
164 points by llambda on March 9, 2013 | 56 comments



I work on two of the apps Matthew reviews here (RedPhone and TextSecure).

What I didn't expect when I started working on these types of projects is that the cryptography is the easy part. I'm really honored to hear that my code has the ability to make Matthew Green drool, but that ZRTP stack was a two or three day project three years ago, and hasn't changed much since. The bulk of the work over the intervening period has been almost exclusively about improving call quality and user experience.

I think this increased emphasis on the user might be what distinguishes the "new wave" of crypto apps from the last. There seems to be a real consensus among those working in the space that this is what's important now.

The things that I'm most proud of about RedPhone are typically unrelated to the crypto, and are instead things like using push notifications for signaling instead of persistent connections, using a lightweight mobile-oriented signaling protocol instead of SIP, and building a low-latency calling network: http://www.whispersystems.org/blog/low-latency-switching/

I think we're all starting to realize that our "competition" is the "insecure" versions of what we're building, and security isn't an effective point of comparison. We have to build better products, which just happen to be really secure.


I don't understand why you'd concede security to your competition; it is a genuine distinction. Most security people I know, and all the crypto people I know, would choose your designs over those of the other designers in this review, not because of the quality of your application but because you're clearly a capable designer.

I worry about the message we send with this "the competition is insecure apps" stuff. Some of the tools in this review appear sound now, but started off in a state of profound unsoundness. Yours didn't. There's a reason for that which speaks to the underlying quality of your design.


What I really want is for everyone to be using secure communication tools all the time. Not just because it'd obviously be great if everyone were communicating securely day to day, but because in extreme events, people tend to use the tools they already have and are familiar with.

For the London riots, that was BlackBerry Messenger. For the Egypt riots, that was Facebook and Twitter. Maybe the next explosive event will feature WhatsApp? So my sense is that for tools to be attractive in moments where security is suddenly of really clear value, they have to first be attractive in moments where security is less clearly of immediate value.

You're right that security is always a genuine distinction, but it's perhaps not the most valuable one to most people. I certainly believe that if you say "here's an app that you can install on your phone, and it will make everything more secure without you ever having to look at it again or even know it's there," then people will install that app. But that's not what we're saying right now. I'd love it if people used RedPhone all the time for every call they made, but I know that it's still not on par with or better than a normal phone call, so I know that people will have a hard time putting up with its usability deficiencies if they haven't seen or felt the direct consequences of the NSA warrantless wiretapping program (or whatever) yet.

This isn't to say that there aren't plenty of people who find it valuable for their work right now, just that I'd like for us to be setting our sights on the common case and delivering a product that exceeds people's current expectations.

I think we're finally getting to the point with TextSecure where the normal day-to-day experience is as good as or better than the stock Android SMS application, but we should really be chasing WhatsApp or Snapchat or whatever.

And totally, good cryptography is extremely important, and I think we should continue to be rigorous there, but I'm not entirely sure how to approach that with people who aren't actually interested in building secure products. Right now if you search "secure sms" in the Android Play Store, the first results are not TextSecure but a hundred apps with names like "extreme SMS locker pro!" that "hide" your SMS messages by doing hand-wavey things. We might ignore them, but other people aren't -- they have hundreds of thousands of installs. I have no idea what to do about that.


...the next step would be to be able to share contacts (WoT) securely, like:

http://www.mightbeevil.com/contacts/

using something like

http://www.mightbeevil.com/framework/download.html


I think he maybe meant that the competition is the incumbent apps that don't have encryption as their selling point.


Yes, I'm clear on that. But maybe it shouldn't be. At least those unencrypted apps send a clear signal to their users about the level of assurance they provide.


He's just making an observation about people's behavior, not saying whether or not people should behave like that.


Thanks for pointing to your low-latency switching strategies; that was an interesting read! I can see how it also works as a high-availability strategy, but I wonder why you did not want to make use of location information on the phone. Having the user send his coordinates with his first request would be the obvious choice, though privacy considerations could speak against it. But if your returned list of available servers for a region included coordinates, the app would not have to establish connections to all the servers, just the closest X. That could speed up the whole process and save a bit of traffic if you had a lot of servers in every region.

And what about having a list of all your available servers cached with their coordinates in an encrypted sqlite database? Firewalls that filter the single domain that is queried for server information would not be as successful. On the other hand, old or compromised servers would have to be revoked regularly. Or have you had other problems with the idea of phone locations and local server lists?
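Something like this is what I have in mind, as a rough sketch (the Relay class and the numbers are made up, and this is obviously not your actual code):

  // Rough sketch: given a cached server list with coordinates, connect
  // only to the k nearest relays instead of racing connections to all
  // of them. Haversine is plenty accurate for "which region is closest".
  import java.util.ArrayList;
  import java.util.Comparator;
  import java.util.List;

  class Relay {
      final String host;
      final double lat, lon;
      Relay(String host, double lat, double lon) {
          this.host = host; this.lat = lat; this.lon = lon;
      }
  }

  class NearestRelays {
      // Great-circle distance in km via the haversine formula.
      static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
          double dLat = Math.toRadians(lat2 - lat1);
          double dLon = Math.toRadians(lon2 - lon1);
          double a = Math.pow(Math.sin(dLat / 2), 2)
                   + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.pow(Math.sin(dLon / 2), 2);
          return 6371.0 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
      }

      // The k relays closest to the phone's current position.
      static List<Relay> closest(List<Relay> cached, double lat, double lon, int k) {
          List<Relay> sorted = new ArrayList<>(cached);
          sorted.sort(Comparator.comparingDouble(
                  (Relay r) -> distanceKm(lat, lon, r.lat, r.lon)));
          return sorted.subList(0, Math.min(k, sorted.size()));
      }
  }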


Having tried to build a secure messaging platform, I can't agree more that, for the average user, security is so far below the bottom of the list of important things as to not exist. Unfortunately, for those of us who do care, that removes the network effects that make secure software usable.

It doesn't matter how many calls or emails I encrypt if nobody I talk to can receive them. It is encouraging to hear people 1) understand that security isn't really a differentiator and 2) try to solve the real problems securely.


I don't think the average user feels it is at the bottom of the list; rather, the average user trusts web services for various reasons without realizing how much insecurity and lack of privacy result from using common web services. I think that is slowly changing though.


The problem I've experienced is that whenever I talk about software security with non-technical people, I get a lot of "I thought this dude was normal until he put on the tinfoil hat" looks and an overall lack of interest.

The only time people become more aware of security is once they've suffered the consequences of a lack of security. To be fair, the only reason it concerns me is because I've been exposed to so many stories and somewhat have to deal with it in my field.

So it follows naturally that as incidents of security crimes and overreaching governments and corporations increase, so will the awareness surrounding information security. It has to actually be happening to most people before humans, at crowd scale, pay attention.


This is a genuine concern. "I don't have anything to hide" doesn't mean everyone's communication (and location) should be tracked at all times.


Hey moxie, your write-up about how you built your switching network was fascinating. Thank you for continuing to be an inspiration.

p.s. you have a broken link to https://github.com/WhisperSystems/RedPhone/wiki/Signaling-Pr... in the writeup for the text "write our own".


Thanks! (And thanks for the heads up on the broken link)


moxie, I'm curious how the "short authentication string" prevents MITM attacks — can you shed some light on this or point me to existing documentation? Thanks!


The SAS section of the ZRTP RFC has all the details: https://tools.ietf.org/html/rfc6189#section-7

But the short answer is that part of the negotiated key material for the call is used to derive two words from the PGP word list, which are displayed to the user.

If the users then have a conversation about those words and they are the same, then chances are that they have the same key material. In the case of a MITM, they would have different key material, and thus different short authentication strings.
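Mechanically it's something like this (a sketch, not the actual RedPhone source; the two 256-entry word tables are truncated here):

  // Per RFC 6189 and the PGP word list, the first two bytes of the
  // negotiated SAS value each index a 256-entry word table: one list
  // of two-syllable words, one of three-syllable words.
  class ShortAuthString {
      static final String[] EVEN = { "aardvark", "absurd" /* ...254 more */ };
      static final String[] ODD  = { "adroitness", "adviser" /* ...254 more */ };

      static String render(byte[] sasValue) {
          return EVEN[sasValue[0] & 0xff] + " " + ODD[sasValue[1] & 0xff];
      }
  }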


I am not moxie, but I'll reply. The key thing is that a short authentication string is shown to both parties. Then it is the job of the parties to also verify, by talking to each other, that they are talking to the right person (the authentication part). That is the key part -- using some other channel (not just the algorithms and the protocol) to authenticate the counter-party.

Then the boring part kicks in, and the way the algorithm works is that if you are convinced you are talking to the right person, then the secret key is generated in a way that is not inferable by an eavesdropping party.
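As a toy illustration of that property (Diffie-Hellman with insecurely tiny numbers, just to show the arithmetic):

  // Both sides compute the same secret, but an eavesdropper who sees
  // only the public values p, g, A, B cannot feasibly derive it at
  // real key sizes.
  import java.math.BigInteger;

  class ToyDh {
      public static void main(String[] args) {
          BigInteger p = BigInteger.valueOf(23); // public prime
          BigInteger g = BigInteger.valueOf(5);  // public generator
          BigInteger a = BigInteger.valueOf(6);  // Alice's secret
          BigInteger b = BigInteger.valueOf(15); // Bob's secret

          BigInteger A = g.modPow(a, p);         // Alice sends 8
          BigInteger B = g.modPow(b, p);         // Bob sends 19

          BigInteger aliceKey = B.modPow(a, p);  // 19^6 mod 23 = 2
          BigInteger bobKey   = A.modPow(b, p);  // 8^15 mod 23 = 2
          System.out.println(aliceKey.equals(bobKey)); // true
      }
  }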

Now, if I remember correctly, the NSA published a paper where they allegedly fooled this system by using voice disguise and a voice actor to mount a man-in-the-middle attack. This is from memory and context-dependent, so take it with a grain of salt.


It's some sort of key fingerprint. You know the key you sent to the other party, and the other party can compute the fingerprint of the key they received from "you." If they don't match, someone changed keys in transit.


Does this assume that the voice channel can't be doctored? But couldn't a MITM attacker do just that — compute the "correct" fingerprint and speak it to the recipient?


Yes, it assumes an attacker cannot imitate the other person's voice without detection. If you introduce enough latency or have a system fast enough to do on-the-fly voice changing you'd be set. It's such a great, simple idea, I felt stupid for not having thought of it.

To be sure, the voice channel is established before the verbal authentication. I see your fingerprint is "banana", you see "kitchen". We can chat and I can say "alright so banana, right?" and you say "yep, in the kitchen". An attacker would need to remove banana and kitchen from that voice conversation and put their own fingerprint words in.

I think that level of realtime audio modification is pretty out of reach for now, although I suppose if you introduced a lot of latency, you might be able to pull it off. It'd probably be noticeable, and you can keep chatting and confirming the phrase, so an attacker would have to be really on-the-ball.

Plus, this only happens the first time. After that, the client saves the keys, so you know you're good.
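Conceptually the key-caching part is just trust-on-first-use, something like this (a hypothetical sketch, not the actual client code):

  // Remember the peer's key fingerprint after the first verified call,
  // and only re-prompt the user if it later changes.
  import java.util.HashMap;
  import java.util.Map;

  class TofuStore {
      private final Map<String, String> known = new HashMap<>(); // number -> fingerprint

      // True if this fingerprint matches what we saw before, or if this
      // is the first contact (in which case we record it).
      boolean checkOrRecord(String number, String fingerprint) {
          String previous = known.putIfAbsent(number, fingerprint);
          return previous == null || previous.equals(fingerprint);
      }
  }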


Okay, cool. It sounds like this sort of attack is very difficult for now — but I would rather be aware of it than not.

It sounds like RedPhone doesn't actually save the keys yet — it's a work-in-progress. But that makes lots of sense.


I don't understand how Matthew can write a comparative review of encrypted chat clients and include one for which he has no technical information, not even a binary; particularly when it mixes number-theoretic and conventional block crypto, thus exposing itself to a maximal subset of possible implementation errors.

I'd also be interested in the kinds of flaws his class was able to generate for in-class discussion, in other projects. Assigning your class a code review of Moxie's code is somewhat sadistic, and, more importantly, doesn't really give us much of a benchmark.


> I don't understand how Matthew can write a comparative review of encrypted chat clients and include one for which he has no technical information, not even a binary

Well that's the point. If there's no source, don't trust that crypto app.


But that's not what he wrote. Read that part of the review again.


""" Overall code quality: Who knows

Should I use this to fight my oppressive regime? Yes -- if your fight consists of sending dirty self-portraits to your fellow comrades-at-arms. Otherwise, probably not. """

Am I missing something?


His answer to "should I use this to fight my oppressive regime" was the same for all the tools, just with different wording.


My answer to that question was the same everywhere, because I think you'd be crazy to use these tools to fight an oppressive regime. Just different levels of crazy.


Right! We agree strongly on that point. :)


True. Is he suggesting that the closed-source, implementation-unknown app is better than the others, somewhere? Or is the problem that he is failing to condemn it as any worse than the others?


Looks like they skipped over two open source XMPP+OTR clients for iPhone/Android: ChatSecure (https://chatsecure.org) and Gibberbot (https://guardianproject.info/apps/gibber/).

Disclosure: I am the original author of ChatSecure.


I guess the best point about these is that they are actually (or hopefully?) somewhat standardised - XMPP with OTR works perfectly fine on the desktop, albeit only for IM. Inventing a new protocol to provide essentially the same features appears a little unnecessary - at least the IM part could have reused OTR with voice calls using some other encryption (possibly based on the keys negotiated via OTR).


I've tried using Gibberbot with some friends. It is pretty unreliable sometimes. I think it may be an issue with OTR in general.

I feel like encrypted IM still isn't a fully solved problem on Android.

(Haven't checked out ChatSecure, looks good.)


What problems have you had with Gibberbot? It's been rock solid for me with a decent XMPP server, and even works reasonably well when proxied through Tor (Orbot). I've had no issues with OTR either.


Both sides occasionally got scrambled messages. Sometimes messages wouldn't show up at all and had to be resent.

I googled the issues a few times and there seem to be a lot of similar complaints from users of all the OTR apps (like Pidgin-OTR).

Some of it may have mistakenly come from another GTalk app running at the same time, or from older Android phones.

One big UX issue is making sure both people are always using it.


Ah, there are big issues with OTR for users who are logged in at more than one location. OTRv2 didn't work well with that, but I think OTRv3 may have fixes...


Self-promotion admitted, but the Bump app is interesting for IM/picture/file/etc. capabilities (though not voice) in this arena. A few points:

  * Everything is tunneled over OpenSSL and (increasingly) NaCl.
  * We do not use the CA infrastructure... we ship our server's public key
    in the app
  * Communication channels are established by a face-to-face "bump", and 
    the parties must mutually consent that the suggested "match" was their 
    intended counter party (they must validate name and mug).
  * Subsequent communications can happen asynchronously at a distance 
    on this channel
Basically, one of the things we've explored in house and with partners is that we've "accidentally" created something with properties (though not yet rigorously vetted enough to endorse for specific use cases) that lead everyday users to exercise practices normally reserved for the technical and paranoid, such as basing identity on an initial in-person exchange.
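For the second bullet above, the idea is roughly public-key pinning, along these lines (a sketch, not our actual code; PINNED_KEY stands in for the digest baked into the binary):

  // Trust the server only if its public key matches the pin we shipped,
  // regardless of what any CA says.
  import java.security.MessageDigest;
  import java.security.NoSuchAlgorithmException;
  import java.security.cert.CertificateException;
  import java.security.cert.X509Certificate;
  import java.util.Arrays;
  import javax.net.ssl.X509TrustManager;

  class PinningTrustManager implements X509TrustManager {
      // SHA-256 of the server's SubjectPublicKeyInfo, embedded in the app.
      private static final byte[] PINNED_KEY = { /* 32 bytes elided */ };

      public void checkServerTrusted(X509Certificate[] chain, String authType)
              throws CertificateException {
          try {
              byte[] spki = chain[0].getPublicKey().getEncoded();
              byte[] digest = MessageDigest.getInstance("SHA-256").digest(spki);
              if (!Arrays.equals(digest, PINNED_KEY)) {
                  throw new CertificateException("server key does not match pin");
              }
          } catch (NoSuchAlgorithmException e) {
              throw new CertificateException(e);
          }
      }

      public void checkClientTrusted(X509Certificate[] chain, String authType) {}

      public X509Certificate[] getAcceptedIssuers() {
          return new X509Certificate[0];
      }
  }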


What is the realistic risk of a hardware backdoor in mainstream smartphones? I assume that there's nothing apps like these could do against those. Is that a risk even worth worrying about at this point? If it is, is there anything that can be done about it? Is it safe to trust, say, Google or HTC and their manufacturing chain?

Further, if the handset is attacked and silently owned, all bets are off, too, right?


A minor point of curiosity: one of the captions says "Using SilentCircle on a Huawei completely negates the point of using SilentCircle."

I appreciate that it may be somewhat tongue in cheek, but is that a riff on the US accusing Huawei of being a national security threat[0], or do Huawei phones have a track record of known security vulnerabilities?

[0]http://www.nytimes.com/2012/10/09/us/us-panel-calls-huawei-a...


Well, it's not just that one US incident; the level of paranoia regarding Huawei is much wider than that. For example, IIRC the UK blocked a Huawei bid to provide cell phone service during the Olympics. I'm not sure why exactly they have such a shitty reputation compared to other Chinese companies. For example, has anyone ever said that running ssh on a ThinkPad completely negates the point? But the reputation is there, which is what makes it a decent joke.

(And yes, I chuckled at the image before reading the caption.)


At some point it is hard to say a system is secure if you cannot control the hardware. That is why, when secure systems are certified, it is not just a software library; it has to be a full hardware + software solution. If anything can insert itself between boot and loading the OS, then it could read the memory and just scan it for a key (by, say, emulating the memory inside a VM). A phone manufacturer could very simply add hardware memory-read access via a separate chip; the phone could then boot an arbitrarily 'secure' OS and application, but it would all be for naught as the key could still be read from memory.

Now there are these things: http://en.wikipedia.org/wiki/Trusted_Platform_Module which should help with the issue, but I am not sure if any phones ship with them.


How would a TPM do anything against malicious hardware? The hardware manufacturer could just as easily include a malicious TPM.


I don't know about Huawei, but backdoors were discovered in some ZTE phones:

http://www.zdnet.com/backdoor-found-in-zte-android-phones-13...


I don't think you can have security on a device you don't have root access to. And since most phones are locked down and untrusted, does it matter exactly how much?

And almost all really nasty regimes already have a somewhat liberal view of using thermorectal cryptoanalysis anyway. Tor-style obfuscation and retransmission with very low SNR, masked as a torrent client, could be a better way to go. If your government knows that you said something to someone, they can just "ask" you to find out exactly what. The trick is not to be found.


Root is not enough either. On phones, most drivers are closed source, so you have no idea what the device is actually doing.


For those interested in building encrypted/private cloud apps: Crypton.io (https://crypton.io), developed by the team behind SpiderOak Inc., launched a proof of concept for a zero-knowledge, private, and secure cloud API a few weeks ago. It's also 100% open source for those who are interested.


A `crypto app' I would like to have, and to see everyone use, is one that does the `...... beep ......' every 10 seconds.

Until there is a transparently layered, secure, and universally used protocol (RedPhone, etc.), this app assumes that this and every connection is being recorded.

beep ...... beep ...... beep ......


I just Googled for a Chrome extension to add message body encryption to gmail. I found SafeGmail [1].

Does anyone have experience using it/encouraging others to try it? Is it worth using? Is there a better alternative?

[1] http://safegmail.com/


[0] has some more information – it appears to be standard OpenPGP with a random encryption key stored on SafeGmail's server and then retrieved by the receiver, using a common secret as identification. Given that you have to install a (closed-source?) extension and that the key management happens on SafeGmail's servers, you naturally have to trust them not to do funny things with your data.

Years ago, there was a Firefox extension to do standard GnuPG in the browser, but I think it died down. If you want really secure email at the moment, I assume there to be no other way than to use a client with PGP support, such as, uh, basically every email client.

[0] http://safegmail.com/information.html


As Mr. Green points out, the key issue today is more about key distribution. And, in my opinion, that's an area where closed-source solutions are even scarier. (Without any form of transparency, why would we believe our keys aren't being archived to China?)


> While Cryptocat is written in Javascript (aaggh!)

Is there something inherently insecure about using JavaScript or is this not meant to actually be relevant?


The problem with JavaScript encryption is that the environment in which it runs is impossible to control... and also some features of the language. There's no strong random number generator, for instance.

http://www.matasano.com/articles/javascript-cryptography/


If it's served from a webpage, then it's insecure.

Either the publishing server can replace it with a malicious version, or another server might inject javascript that modifies or replaces it with a malicious version.

This particular point hilariously broke a crypto protocol project of mine, so I guess I'm touchy about it.

Shipping as a browser module is more secure.


That's not really a problem with the language.


Given that javascript is the language that can be embedded in any webpage ... yes it is.


These apps remind me of the guy who bought the first fax machine: nice, but you need a friend with one too for it to be of any use.


Installing an app is a lot less of an adoption hurdle than purchasing an expensive new piece of equipment and provisioning a dedicated landline was back in the 1960s. Back then there were even some legal hurdles to connecting non-telco-owned equipment to the phone system.

Cf. today's social networks.

http://en.wikipedia.org/wiki/Fax#Telephone_transmission



