The problem with in-browser crypto is way bigger than the feasibility of properly implementing crypto in the browser: you can never ensure the security of the network. As such, I think the prevailing argument against in-browser crypto is that it provides a false sense of security.
EDIT: Nevertheless, this seems like a neat project! I'm not anything resembling a pentester though, so I'll leave the challenge to the experts :)
Network security is a problem, but breaking HTTPS doesn't seem trivial in the wild. That's why I'm inviting people to try exploits on a real app, and I'm willing to use a malicious network to let them break it.
I think the idea behind crypto in the browser is cool, but it seems like anytime you're requesting the JS from the server, you need to trust the server -
Instead of serving crypto.js, you could serve plaintext.js, and I wouldn't ever notice the difference, would I?
So if we agree that I need to trust you to serve the JS crypto properly, then what's the difference between doing that and trusting you to encrypt the text for me?
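To make the swap concrete, a hypothetical sketch (the file names and the encrypt() API are invented for illustration):

    // crypto.js: actually encrypts the message with the user's key.
    // plaintext.js: identical signature, but only base64-encodes, so the
    // output still looks scrambled while the server receives something
    // trivially reversible.
    function encrypt(message, key) {
      return btoa(message); // the key is silently ignored
    }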
Yes, you need to trust me. I'd argue it's easier to trust me if the crypto is happening on the client since if I change it, I could be caught. The probability of being caught in any one instance is very low but over many instances is high.
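To put rough numbers on that (purely illustrative): if each malicious page load carried even a 0.1% chance of being noticed, then after 10,000 loads the chance of having been caught at least once is 1 - (1 - 0.001)^10000, or about 99.995%.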
Would you make the same argument for a native application? Unless you read the source code every time you upgrade, you're exposed to the same risk. cperciva did an inadvertent real-world experiment on this by making an error during a refactoring of Tarsnap, and it took 19 months for the subtle crypto bug to be noticed.
My point isn't that you should never trust webapps - it's that doing the processing in JS, or on the server, doesn't CHANGE the threat model.
In both scenarios, I need to trust you an identical amount, so there's no security advantage to doing the crypto on the client side.
Well, it's slightly different. Not every person would check every single time, but it would be possible to do something like write a browser plugin (or set up some sort of automated system) to check what was going on, since the execution is happening client-side.
I made sure to emphasize 'slightly' because this could just trigger an arms race around detection. You might be safe if you kept your detection methods a secret (and only personally used them). On the other hand, if the detection methods were used/deployed on a wide scale, then anyone trying to compromise you would just work around them. For example, such a plugin could:
- Check the signature of the incoming JavaScript against known good versions.
- Check the signature of the HTML page against known good versions.
- Check that the information posted back to the server 'looks encrypted' vs. plaintext[1].
- Check the external resources that the page is requesting. Is it grabbing JavaScript files that are unexpected (e.g. the server serving up a known-good version of crypto.js, but then overwriting its methods with another JavaScript file)?
I'm not sure if many of these things would be possible in Chrome/Chromium, but they probably would be in Firefox.
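As a rough sketch of the first check (hypothetical code; it assumes the Web Crypto API is available and that you have an out-of-band copy of the audited script's hash):

    // Pin the SHA-256 of the crypto.js you audited and compare every
    // served copy against it. Caveat: this re-fetches the script, so a
    // server could serve a clean copy here and a tampered one to the page.
    const KNOWN_GOOD_SHA256 = '...'; // hash of the audited version

    async function verifyScript(url) {
      const resp = await fetch(url, { cache: 'no-store' });
      const digest = await crypto.subtle.digest('SHA-256', await resp.arrayBuffer());
      const hex = Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, '0'))
        .join('');
      if (hex !== KNOWN_GOOD_SHA256) {
        throw new Error(url + ' does not match the known-good version');
      }
    }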
[1] Obviously 'looking encrypted' isn't some sort of binary decision, but I'm guessing there is some amount of checking you could do to see how closely the data resembles random noise. If you sent random noise and it wasn't encrypted, it would probably pass this check, but most people trying to protect something are probably sending something that won't trip this 'alarm.' This is not fool-proof, but it adds a layer of protection when used with other checks.
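And a minimal sketch of the 'resembles random noise' heuristic (the 7.5 threshold is an illustrative guess, not a tested value; note that base64-encoded ciphertext tops out near 6 bits per byte, so decode before measuring):

    // Shannon entropy in bits per byte: ciphertext should sit near 8,
    // while typical ASCII plaintext usually lands well under 5.
    function bitsPerByte(bytes) {
      const counts = new Array(256).fill(0);
      for (const b of bytes) counts[b]++;
      let entropy = 0;
      for (const c of counts) {
        if (c === 0) continue;
        const p = c / bytes.length;
        entropy -= p * Math.log2(p);
      }
      return entropy;
    }

    function looksEncrypted(bytes) {
      return bitsPerByte(bytes) > 7.5;
    }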
| "Looks encrypted" isn't useful; encryption
| under a known key or with unsafe parameters
| "looked encrypted" too.
Those go without saying. It would have to be part of a layered approach, and would catch stuff like plaintext going out.
| Are you sure you've captured every case that
| could influence what functions are bound to
| what symbols in the JS runtime?
I'm not. I wouldn't trust myself to implement such a thing (at least not without a lot of peer review from people I trust as knowledgeable), and even with such a 'detection' plugin, I would be wary of using in-browser crypto.
I'm curious what other inputs into the system you think there could be though. If you verify the HTML, and the external resources against 'known good' versions, then what else is there?
- Maybe there's malware already installed on the client system that's a threat, but that's a threat to everything, not something specific to in-browser crypto.
- A man-in-the-middle attack is mostly mitigated by using SSL (though not 100%).
- A compromised/malicious server will end up changing the JS and/or HTML, which would (hopefully, if you've done a good job) not pass your verification checks.
- The other possibility would be a browser exploit that somehow is triggered before the plugin can raise a red flag about unverified JS/HTML.
--
The entire point of my posts in this discussion thread was to say that crypto in the browser vs. crypto on the server may have the same threat model (trusting the server + SSL), but they are not exactly the same. With in-browser JS crypto, as the client you have full access to the environment where the crypto is running. If it's happening on the server, it's a black box to you. This opens up the possibility of running software on the client side to verify that things are kosher. In the end, by the time you're writing client-side software to verify things, you may as well just be doing the crypto in a browser plugin rather than in JS. I realize that it's mostly an academic argument.
Once Colin noticed he had the wrong nonce value for his CTR mode encryption, he was able to fix it once, tell users to upgrade, and nobody ever had to consider it again.
On the other hand, every single time someone loads your page, the server has the opportunity to surreptitiously add a tiny snippet of Javascript that would fixate the CCM IV your code generates. Nobody would ever notice that, and you could do it selectively (or, more likely, based on a court order).
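To illustrate how tiny that snippet could be, here's a hypothetical one-liner, assuming the page draws its IV bytes from crypto.getRandomValues:

    // Served only to a targeted user: every "random" IV generated after
    // this line is all zeros, i.e. catastrophic nonce reuse under CTR/CCM,
    // while the page looks and behaves exactly the same.
    crypto.getRandomValues = function (arr) { return arr.fill(0); };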
You're implicitly applying different threat models to browser-downloaded vs. alternatively downloaded code.
If the maintainer of the JS code makes a mistake and fixes it in future versions, there's no need to worry about it anymore either. However, every time I upgrade I'm just as much at risk from someone attacking either my connection or the server itself. So to believe the native application is more secure, you have to believe those attacks are easier to pull off in the context of a web application.
There are a few legal reasons: there is a difference in the expectation of privacy when you never actually hand the plaintext message to the server. This is the logical equivalent of the difference between letter mail and a telegraph. With a telegraph you have no expectation of privacy, and therefore no legal right to keep that information secret, as opposed to your letter mail. Furthermore, the government can't (at least as of yet) legally force a company to crack its own security, as opposed to inserting a backdoor; that is, it can't force a security company to hack its own customers to get their keys. I think that while these don't result from any specific crypto issue, they are very real legal issues.
A breach of trust would be detectable with client-side encryption, but not with server-side. An average user wouldn't notice a difference, but a security researcher could. Any high-profile service that systematically tampers with its own client-side encryption would very likely be caught.