I work on two of the apps Matthew reviews here (RedPhone and TextSecure).
What I didn't expect when I started working on these types of projects is that the cryptography is the easy part. I'm really honored to hear that my code has the ability to make Matthew Green drool, but that ZRTP stack was a two or three day project three years ago, and hasn't changed much since. The bulk of the work over the intervening period has been almost exclusively about improving call quality and user experience.
I think this increased emphasis on the user might be what distinguishes the "new wave" of crypto apps from the last. There seems to be a real consensus between those working in the space that this is what's important now.
The things that I'm most proud of about RedPhone are typically unrelated to the crypto, and are instead things like using push notifications for signaling instead of persistent connections, using a lightweight mobile-oriented signaling protocol instead of SIP, and building a low-latency calling network: http://www.whispersystems.org/blog/low-latency-switching/
I think we're all starting to realize that our "competition" is the "insecure" versions of what we're building, and security isn't an effective point of comparison. We have to build better products, which just happen to be really secure.
I don't understand why you'd concede security to your competition; it is a genuine distinction. Most security people I know, and all the crypto people I know would choose your designs over those of the other designers in this review, not because of the quality of your application but because you're clearly a capable designer.
I worry about the message we send with this "the competition is insecure apps" stuff. Some of the tools in this review appear sound now, but started off in a state of profound unsoundness. Yours didn't. There's a reason for that which speaks to the underlying quality of your design.
What I really want is for everyone to be using secure communication tools all the time. Not just because it'd obviously be great if everyone were communicating securely day to day, but because in extreme events, people tend to use the tools they already have and are familiar with.
For the London riots, that was BlackBerry Messenger. For the Egypt riots, that was Facebook and Twitter. Maybe the next explosive event will feature WhatsApp? So my sense is that for tools to be attractive in moments where security is suddenly of really clear value, they have to first be attractive in moments where security is less clearly of immediate value.
You're right that security is always a genuine distinction, but it's perhaps not the most valuable one to most people. I certainly believe that if you say "here's an app that you can install on your phone, and it will make everything more secure without you ever having to look at it again or even know it's there," then people will install that app. But that's not what we're saying right now. I'd love it if people used RedPhone all the time for every call they made, but I know that it's still not quite on par with or better than a normal phone call, so I know that people will have a hard time putting up with its usability deficiencies if they haven't seen or felt the direct consequences of the NSA warrantless wiretapping program (or whatever) yet.
This isn't to say that there aren't plenty of people who find it valuable for their work right now, just that I'd like for us to be setting our sights on the common case and delivering a product that exceeds people's current expectations.
I think we're finally getting to the point with TextSecure where the normal day to day experience is as good or better than the stock Android SMS application, but we should really be chasing WhatsApp or Snapchat or whatever.
And totally, good cryptography is extremely important, and I think we should continue to be rigorous there, but I'm not entirely sure how to approach that with people that aren't actually interested in building secure products. Right now if you search "secure sms" in the Android Play Store, the first result is not TextSecure, but 100 results for apps with names like "extreme SMS locker pro!" that "hide" your SMS messages by doing hand-wavey things. We might ignore them, but other people aren't -- they have hundreds of thousands of installs. I have no idea what to do about that.
Yes, I'm clear on that. But maybe it shouldn't be. At least those unencrypted apps send a clear signal to their users about the level of assurance they provide.
Thanks for pointing to your low-latency switching strategies; that was an interesting read!
Although I see how it's great for high availability as a side effect, I wonder why you chose not to make use of location information on the phone.
I mean, having the user send their coordinates with their first request would be the obvious choice, but privacy considerations could speak against it.
But if your returned list of available servers for a region would include coordinates, the app would not have to establish connections to all the servers, but maybe just the closest X. That could improve the whole process and save a tiny bit of traffic if you had a lot of servers in every region.
And what about having a list of all your available servers, with their coordinates, cached in an encrypted SQLite database? Firewalls that filter the single domain queried for server information would not be as successful. On the other hand, old or compromised servers would have to be revoked regularly.
Or have you had other problems with the idea of phone locations and local server lists?
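The closest-X idea above could be sketched like this. Note that geographic distance is only a rough proxy for network latency (presumably why the blog post's approach probes servers directly instead); the hostnames and coordinates here are made up for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def closest_servers(client, servers, x=3):
    """Return the x servers nearest the client by great-circle distance."""
    return sorted(servers,
                  key=lambda s: haversine_km(client[0], client[1],
                                             s["lat"], s["lon"]))[:x]

# Hypothetical cached server list shipped with coordinates:
servers = [
    {"host": "sf.example.org", "lat": 37.77, "lon": -122.42},
    {"host": "nyc.example.org", "lat": 40.71, "lon": -74.01},
    {"host": "fra.example.org", "lat": 50.11, "lon": 8.68},
]
# A client in Los Angeles would only probe the nearest two:
print([s["host"] for s in closest_servers((34.05, -118.24), servers, x=2)])
# → ['sf.example.org', 'nyc.example.org']
```

The app would then race connections only against that shortlist rather than every server in the region, saving a bit of traffic at the cost of trusting the cached coordinates.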
Having tried to build a secure messaging platform, I can't agree more that, for the average user, security is so far below the bottom of the list of important things as to not exist. Unfortunately, for those of us who do care, this removes the network effects that make secure software usable.
It doesn't matter how many calls or emails I encrypt if nobody I talk to can receive them. It is encouraging to hear people 1) understanding that security isn't really a differentiator and 2) trying to solve the real problems securely.
I don't think the average user feels it is at the bottom of the list; rather, the average user trusts web services for various reasons without realizing how much insecurity and lack of privacy results from using common web services. I think that is slowly changing, though.
The problem I've experienced is that whenever I talk about software security with non technical people I get a lot of "I thought this dude was normal until he put on the tinfoil hat" looks and overall lack of interest.
The only time people become more aware of security is once they've suffered the consequences of a lack of security. To be fair, the only reason it concerns me is because I've been exposed to so many stories and somewhat have to deal with it in my field.
So it follows naturally that as incidents of security crimes and overreaching governments and corporations increase, so will the awareness surrounding information security. It has to actually be happening to most people before we pay attention at crowd scale.
moxie, I'm curious how the "short authentication string" prevents MITM attacks — can you shed some light on this or point me to existing documentation? Thanks!
The full protocol is documented in the ZRTP spec (RFC 6189), but the short answer is that part of the negotiated key material for the call is used to derive two words from the PGP word list, which are displayed to the user.
If the users then have a conversation about those words and they are the same, then chances are that they have the same key material. In the case of a MITM, they would have different key material, and thus different short authentication strings.
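The derivation described above can be sketched roughly as follows. The 8-entry word lists and the hash construction here are purely illustrative; the real scheme uses the 256-entry PGP "even" and "odd" word lists (one word per byte) and the KDF specified by ZRTP.

```python
import hashlib

# Hypothetical 8-entry word lists for illustration only; the real PGP
# lists have 256 entries each so every byte maps to exactly one word.
EVEN_WORDS = ["aardvark", "absurd", "accrue", "acme",
              "adrift", "adult", "afflict", "ahead"]
ODD_WORDS = ["adroitness", "adviser", "aftermath", "aggregate",
             "alkali", "almighty", "amulet", "amusement"]

def short_auth_string(key_material: bytes) -> str:
    """Derive a two-word SAS from negotiated key material
    (a sketch, not the actual ZRTP KDF)."""
    digest = hashlib.sha256(b"SAS" + key_material).digest()
    return f"{EVEN_WORDS[digest[0] % 8]} {ODD_WORDS[digest[1] % 8]}"

# Both endpoints hold the same negotiated key material, so they derive
# (and can read each other) the same two words:
assert short_auth_string(b"negotiated secret") == short_auth_string(b"negotiated secret")
# A MITM negotiates a *different* key with each side, so the words the
# two victims read aloud will almost certainly disagree.
```

With the full 256-entry lists, two words give 65,536 possible strings, so an attacker who must commit to key material before the verbal comparison has only a tiny chance of producing a matching SAS.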
I am not moxie, but I'll reply. The key thing is that a short authentication string is shown to both parties. Then it is the job of the parties to verify, by talking to each other, that they are talking to the right person (the authentication part). That is the key part -- using some other channel (not just the algorithms and the protocol) to authenticate the counter-party.
Then the boring part kicks in: the way the algorithms work, if you are convinced that you are talking to the right person, the generated secret key won't be inferable by an eavesdropping party.
Now, if I remember correctly, the NSA published a paper where they allegedly fooled this system by using voice disguise and a voice actor to mount a man-in-the-middle attack. This is from memory and highly context-dependent, so take it with a grain of salt.
It's some sort of key fingerprint. You know the key you sent to the other party, and the other party can compute the fingerprint of the key they received from "you." If they don't match, someone changed keys in transit.
Does this assume that the voice channel can't be doctored? But couldn't a MITM attacker do just that — compute the "correct" fingerprint and speak it to the recipient?
Yes, it assumes an attacker cannot imitate the other person's voice without detection. If you introduce enough latency or have a system fast enough to do on-the-fly voice changing you'd be set. It's such a great, simple idea, I felt stupid for not having thought of it.
To be sure, the voice channel is established before the verbal authentication. I see your fingerprint is "banana", you see "kitchen". We can chat and I can say "alright, so banana, right?" and you say "yep, in the kitchen". An attacker would need to remove banana and kitchen from that voice conversation and put their own fingerprint words in.
I think that level of realtime audio modification is pretty out of reach for now, although I suppose if you introduced a lot of latency, you might be able to pull it off. It'd probably be noticeable, and you can keep chatting and confirming the phrase, so an attacker would have to be really on-the-ball.
Plus, this only happens the first time. After that, the client saves the keys, so you know you're good.
I don't understand how Matthew can write a comparative review of encrypted chat clients and include one for which he has no technical information, not even a binary; particularly when it mixes number theoretic and conventional block crypto, thus exposing itself to a maximal subset of possible implementation errors.
I'd also be interested in the kinds of flaws his class was able to turn up in the other projects for in-class discussion. Assigning your class a code review of Moxie's code is somewhat sadistic, and, more importantly, doesn't really give us much of a benchmark.
> I don't understand how Matthew can write a comparative review of encrypted chat clients and include one for which he has no technical information, not even a binary
Well that's the point. If there's no source, don't trust that crypto app.
Should I use this to fight my oppressive regime? Yes -- if your fight consists of sending dirty self-portraits to your fellow comrades-at-arms. Otherwise, probably not.
"""
My answer to that question was the same everywhere, because I think you'd be crazy to use these tools to fight an oppressive regime. Just different levels of crazy.
True. Is he suggesting that the closed-source, implementation-unknown app is better than the others, somewhere? Or is the problem that he is failing to condemn it as any worse than the others?
I guess the best point about these is that they are actually (or hopefully?) somewhat standardised -- XMPP with OTR works perfectly fine on the desktop, albeit only for IM. Inventing a new protocol to provide essentially the same features appears a little unnecessary; at least the IM part could have reused OTR, with voice calls using some other encryption (possibly based on the keys negotiated via OTR).
What problems have you had with Gibberbot? It's been rock solid for me with a decent XMPP server, and even works reasonably well when proxied through Tor (Orbot). I've had no issues with OTR either.
Ah, there are big issues with OTR for users who are logged in at more than one location. OTRv2 didn't handle that well, but I think OTRv3 may have fixes...
Self-promotion admitted, but the Bump app is interesting for IM/picture/file etc. capabilities (though not voice) in this arena. A few points:
* Everything is tunneled over OpenSSL and (increasingly) NaCl.
* We do not use the CA infrastructure... we ship our server's public key in the app.
* Communication channels are established by a face-to-face "bump", and the parties must mutually consent that the suggested "match" was their intended counterparty (they must validate name and mug).
* Subsequent communications can happen asynchronously, at a distance, on this channel.
Basically, one of the things we've explored in house and with partners is that we've "accidentally" created something with properties (though not yet rigorously vetted enough to endorse for specific use cases) that lead everyday users to exercise practices normally reserved for the technical and paranoid -- such as identity based on an initial in-person exchange.
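The "ship our server's public key in the app" point above is essentially certificate pinning. A minimal sketch of the idea, assuming a pin on the SHA-256 of the server's DER-encoded certificate (the pinned value is a placeholder, and Bump's actual implementation isn't public):

```python
import hashlib
import socket
import ssl

# Hypothetical pinned value: SHA-256 of the server's DER-encoded certificate,
# shipped inside the app so the CA infrastructure never enters the picture.
PINNED_SHA256 = "0" * 64  # placeholder, not a real pin

def cert_matches_pin(der_cert: bytes, pinned_hex: str) -> bool:
    """Compare the presented certificate against the shipped pin."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex

def connect_pinned(host: str, port: int, pinned_hex: str) -> ssl.SSLSocket:
    """Open a TLS connection whose trust comes from the pin alone."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # deliberately skip CA validation...
    ctx.verify_mode = ssl.CERT_NONE   # ...the pin is the whole trust root
    sock = socket.create_connection((host, port))
    tls = ctx.wrap_socket(sock, server_hostname=host)
    if not cert_matches_pin(tls.getpeercert(binary_form=True), pinned_hex):
        tls.close()
        raise ssl.SSLError("server certificate does not match the shipped pin")
    return tls
```

The tradeoff is that rotating the server key now requires shipping an app update (or pinning a backup key), which is presumably why most apps pin in addition to, rather than instead of, CA validation.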
What is the realistic risk of a hardware backdoor in mainstream smartphones? I assume that there's nothing apps like these could do against those. Is that a risk even worth worrying about at this point? If it is, is there anything that can be done about it? Is it safe to trust, say, Google or HTC and their manufacturing chains?
Further, if the handset is attacked and silently owned, all bets are off, too, right?
A minor point of curiosity: one of the captions says "Using SilentCircle on a Huawei completely negates the point of using SilentCircle."
I appreciate that it may be somewhat tongue in cheek, but is that a riff on the US accusing Huawei of being a national security threat[0], or do Huawei phones have a track record of known security vulnerabilities?
Well, it's not just that one US incident; the level of paranoia regarding Huawei is much wider than that. For example, IIRC the UK blocked a Huawei bid to provide cell phone service during the Olympics. I'm not sure why exactly they have such a shitty reputation compared to other Chinese companies. For example, has anyone ever said that running ssh on a ThinkPad completely negates the point? But the reputation is there, which is what makes it a decent joke.
(And yes, I chuckled at the image before reading the caption.)
At some point it is hard to say a system is secure if you cannot control the hardware. That is, when secure systems are certified, it is not just a software library that gets certified; it has to be a full hardware + software solution. If anything can insert itself between boot and loading the OS, it could read memory and simply scan it for a key (by, say, emulating the memory inside a VM). A phone manufacturer could very simply add hardware memory read access via a separate chip wired to the phone's memory. The phone could then boot into an arbitrarily 'secure' OS and application, but it would all be for naught, as the key could still be read from memory.
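The "scan the memory for a key" step above is less far-fetched than it sounds: key material tends to look random, so an attacker with raw memory access can flag high-entropy windows as candidates. A toy sketch (the window size and threshold are illustrative; real tools such as cold-boot key-recovery scanners also look for structure like AES key schedules):

```python
import math
import os
from collections import Counter

def shannon_entropy(buf: bytes) -> float:
    """Bits per byte of a buffer (0.0 for constant data, ~8.0 for random)."""
    counts = Counter(buf)
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_key_offsets(memory: bytes, key_len: int = 32,
                          threshold: float = 4.5) -> list:
    """Slide a window over a memory image and flag high-entropy regions
    as candidate key material (a toy cousin of cold-boot key scans)."""
    return [i for i in range(0, len(memory) - key_len + 1, key_len)
            if shannon_entropy(memory[i:i + key_len]) > threshold]

# A mostly-zeroed "memory image" with one random 32-byte key at offset 1024:
key = os.urandom(32)
image = bytes(1024) + key + bytes(3040)
print(candidate_key_offsets(image))  # the key's offset stands out
```

Real memory is of course noisier than a zeroed buffer, but the point stands: software-level security can't protect a key from hardware that can read RAM underneath it.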
I don't think you can have security on a device you don't have root access to. And since most phones are locked down and untrusted, does the rest matter all that much?
And almost all really nasty regimes already take a fairly liberal view of thermorectal cryptanalysis anyway. Tor-style obfuscation and retransmission with very low SNR, masked as a torrent client, could be a better way to go. If your government knows that you said something to someone, they can just "ask" you to find out exactly what. The trick is not to be found.
For those interested in building encrypted/private cloud apps: Crypton (https://crypton.io), developed by the team behind SpiderOak Inc, launched a proof of concept for a zero-knowledge, private, and secure cloud API a few weeks ago. It's also 100% open source for those who are interested.
[0] has some more information – it appears to be standard OpenPGP with a random encryption key stored on the server of safegmail and then retrieved by the receiver, using a common secret as identification. Given that you have to install a (closed-source?) extension and that the key management happens on the servers of safegmail, you naturally have to trust them not to do funny things with your data.
Years ago, there was a Firefox extension to do standard GnuPG in the browser, but I think it died off. If you want really secure email at the moment, I assume there is no other way than to use a client with PGP support, such as, uh, basically every email client.
As Mr. Green points out, the key issue today is more about key distribution. And, in my opinion, that's an area where closed-source solutions are even scarier. (Without any form of transparency, why would we believe our keys aren't being archived to China?)
The problem with JavaScript encryption is that the environment in which it runs is impossible to control... and also some features of the language. There's no strong random number generator, for instance.
If it's served from a webpage, then it's insecure.
Either the publishing server can replace it with a malicious version, or another server might inject javascript that modifies or replaces it with a malicious version.
This particular point hilariously broke a crypto protocol project of mine, so I guess I'm touchy about it.
Installing an app is a lot less of an adoption hurdle than purchasing a new piece of expensive equipment and provisioning a new dedicated landline was back in the 1960s. Back then there were even legal hurdles to connecting non-telco-owned equipment to the phone system.