Your argument is a bit disingenuous. You say you'd trust simple closed-source software more than complicated open-source counterparts. That's a false dichotomy. Open source X is always more easily auditable than closed source X.
I don't think many people are worried that BT will be evil. I think many more people are worried that they'll be incompetent, and it's very hard to be competent at cryptography. That's why we want the source.
From a "non US person" perspective, I go to BitTorrent's website and read:
Company Overview
BitTorrent Inc. is an Internet technology company based in San Francisco.
And immediately I've got _two_ things to worry about - 1) "will BT be evil/incompetent?", and 2) "will BT be leaned on by the NSA and be coerced into being evil?"
If you're a US company (or individual, or a company with US-based management, developers, investors, or infrastructure) promoting security-related products in the post-Snowden era, many of us outside the US now have very good reasons to apply extra scrutiny to those products. Opening your source will make a _big_ difference in how readily suspicions of evilness can be allayed. As saurik points out upthread, having the source available doesn't guarantee the rest of the open source community will find and fix any carefully-enough-crafted backdoors, but keeping the source closed sends a strong message…
From a US perspective: The NSA doesn't lean on non-US companies because it is fully authorized to just hack them directly. Data kept in foreign countries doesn't even have the nominal legal protection that US data does.
Unless you've got unbreakable security, the NSA is well funded enough that it's irrelevant where you do business.
Except that in the case of non-US companies the NSA has to do the actual hacking. In the case of US ones they simply need to "ask" a company to, say, hand over their master keys - much easier.
Ok, then I will say your argument is a bit disingenuous, as I did not compare "open source X" to "closed source X": I am trying to show that "is it open source" is just one of many variables that you might consider while evaluating the security of a program, and I am further attempting to claim that it is not even the most important of these variables. In the real world, we are choosing between programs that differ in many ways, not just one: all individual dichotomies are "false".
I mean, I could also say the argument "you can't trust closed source software for this stuff" is "a bit disingenuous" via the analogous argument that open vs. closed is a "false dichotomy": a single-threaded/type-safe/un-obfuscated X-source program "is always more easily auditable" than a multi-threaded/type-unsafe/obfuscated X-source program. Now the question becomes "what variables are more important to you, and will your reactions be 'knee jerk' or rational"?
Sure, it's only one variable. I disagree that it's not the most important one.
You can audit a multithreaded open source program much more easily than you can audit a single-threaded closed source program. Merely compiling it adds more obfuscation than making it needlessly complicated will, and making it needlessly complicated will immediately raise red flags.
You don't need to analyze an open-source program to see that it's been obfuscated, and, if it claims to do anything that requires security, that would probably be enough to make you suspicious.
My ultimate point is that compiling is a form of obfuscation that has extreme plausible deniability. There's no form of obfuscation that will complicate the code of an open source program as much as compiling it will, while still looking as innocuous as compiling does.
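To make that concrete with a made-up one-liner (the assembly below is roughly what an optimizing compiler might emit for x86-64; it's illustrative, not an exact listing):

    /* Source: the intent is obvious at a glance. */
    int is_admin(int uid) {
        return uid == 0;
    }

    /* Compiled, it becomes something like:
     *     is_admin:
     *         xorl  %eax, %eax
     *         testl %edi, %edi
     *         sete  %al
     *         ret
     * Nothing sinister happened, but now the reader needs calling
     * conventions and instruction semantics just to recover that one
     * line of intent - and nobody raises an eyebrow at a compiled
     * binary, which is exactly the point. */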
> You can audit a multithreaded open source program much more easily than you can audit a single-threaded closed source program.
I think this is our core disagreement, as I've been pretty clear about how I don't just accept this statement at face value given the large class of subtle bugs that can and do occur constantly in multi-threaded systems. I find reading through and finding bugs in complex binary-only buffer-management or even cryptographic systems "easy" (time consuming, but not requiring much brain function; even obfuscation just adds time and effort, it doesn't require greater intelligence); yet, I have never managed to remove every single concurrency bug from an open-source project I maintain that only actually uses threads to separate tasks like "searching" from "downloading update" (of course, I likely didn't put as much effort into it, so this doesn't prove anything: but I hope it makes you think), and I've seen people "much smarter than me" (working at Google and Apple) fail at doing so in systems that are "much more important" (working on Chromium and WebKit).
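To give a flavor of the class of bug I mean, here's a made-up toy in C (the scenario and names are invented for illustration): it compiles, it almost always works, and the race in it is exactly the kind of thing a reviewer reading the source will skim straight past.

    #include <pthread.h>
    #include <stdio.h>

    /* Toy example: a cached "is the session still valid?" flag shared
     * between a "searching" thread and an "updating" thread, with no
     * lock around the check-then-use sequence. Nothing here looks
     * malicious; it just loses a race once in a while. */
    static int session_valid = 1;
    static char session_token[64] = "secret-token";

    static void *updater(void *arg) {
        (void)arg;
        session_valid = 0;          /* invalidate... */
        session_token[0] = '\0';    /* ...then wipe the token */
        return NULL;
    }

    static void use_session(void) {
        if (session_valid) {
            /* By the time we get here the token may already be gone. */
            printf("using token: %s\n", session_token);
        }
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, updater, NULL);
        use_session();
        pthread_join(t, NULL);
        return 0;
    }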
You might then just say that I'm probably just stupid and that a more reasonable programmer wouldn't have the same kinds of issues, but "concurrency is hard" I had assumed was a well-known issue. (And again, I think that this all becomes much more interesting to think about once you realize that a "backdoor" left by an intelligent opponent is going to be nearly indistinguishable from a "bug". At least once a year there is some obscure privilege escalation bug found in the Linux kernel: I think it is interesting to at least consider momentarily that any of those might have been a backdoor, and not a bug; the concept of what constitutes purposely malicious code tends to be way too narrow in my view, and leads people to only consider backdoors that are easy to see when you print out the source code.)
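The often-cited example is the 2003 attempt to slip a change into the Linux kernel's wait4() via its CVS mirror: a single '=' where '==' belonged, buried in what read like a clumsy error check, that would have handed the caller root. A self-contained user-space analogue - all names invented for illustration - looks like this:

    #include <stdio.h>

    static int caller_uid = 1000;

    /* "=" where "==" belongs, hidden inside what reads as a validity
     * check: passing the "magic" flag combination silently makes the
     * caller root and then falls through as if nothing happened. */
    static int check_request(int options) {
        if ((options == 0x3) && (caller_uid = 0))
            return -1;
        return 0;
    }

    int main(void) {
        check_request(0x3);
        printf("caller_uid is now %d\n", caller_uid);   /* prints 0 */
        return 0;
    }

Printed out on paper, that is indistinguishable from a typo.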
> There's no form of obfuscation that will complicate the code of an open source program as much as compiling it will, while still looking as innocuous as compiling does.
Yeah, no: I seriously read through compiled code every day. I was doing a lot of reading through compiled code yesterday while working on figuring out why Substrate isn't working on iOS 7 for example. I am much much much more afraid of the bugs that are latent in a large multi-threaded project that has "lots of hands in the cookie jar" so-to-speak than of a simpler implementation distributed as a closed-source binary. I'm likewise more afraid of open-source projects that accept patches from large numbers of people (which means more people who might be actively /trying/ to add difficult-to-see exploitable bugs), or of projects that are implemented to run as native code versus ones that run inside of type-safe virtual machines (especially if the virtual machine is coded to a spec and I get to select the one it runs on).
I don't think we're going to agree on this, because at this point it is a matter of degree. I find it more likely that an error will not be malicious; it will just be an honest mistake that will make the entire system less secure.
Cryptosystem security isn't the same as binary security (i.e. against exploits). You can have a very insecure binary (in the exploit sense) but still have valid, strong cryptography (e.g. in its output).
Sure, with something like this, you want the binary to also not be easily exploitable, but I think that getting the cryptography right is more important.
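As a rough sketch of that distinction (assuming OpenSSL is available for its one-shot SHA256(); everything else here is made up): the digest this prints is perfectly good cryptography, while the binary itself is trivially exploitable.

    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>    /* build with: cc sketch.c -lcrypto */

    int main(int argc, char **argv) {
        char buf[64];
        unsigned char digest[SHA256_DIGEST_LENGTH];

        if (argc < 2)
            return 1;

        /* Classic stack overflow: no length check on the argument... */
        strcpy(buf, argv[1]);

        /* ...yet the cryptographic output is still a valid SHA-256. */
        SHA256((unsigned char *)buf, strlen(buf), digest);

        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }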
Given these two points (malice vs incompetence and cryptography vs security), I think it is more important for the program to be open source, even if it's complicated, than the other way around.
There aren't many expert cryptographers who are also expert reversers, sadly.
That doesn't sound reasonable. I don't care who you are, auditing the source version is orders of magnitude easier than auditing the binary version. I say this as someone who has been reverse engineering binary code for well over a decade. This doesn't even account for the fact that requiring reverse engineering skill already eliminates the majority of potential auditors, whether due to ability or due to lack of time.
Easy and time consuming are mutually exclusive in this context. It's about cost, and time is money. It's hard in the sense that the traveling salesman problem is hard, even if the logic for the naive solution is straightforward.
Look, I'm sorry, but let's take an extreme example here to demonstrate how you are arguing something different than I am: if you are seriously trying to tell me that you have an easier time analyzing the source code for "grep" vs the binary for "false", something is seriously seriously wrong; the binary for false can seriously be less than 50 bytes large. If you show me an open source system and a closed source system, they are not going to be identical but for that one variable: that is just one of many variables.
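(For the record, the entire logic of a "false"-style program is essentially the following; compiled and stripped there's almost nothing there, and nowhere for anything to hide.)

    int main(void) {
        return 1;
    }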
Again, you are falling into the same problem of looking at this as a "single issue voter": open-source X vs. closed-source X. My complaint is that people go "omg, no source code, I can't trust this" as a knee-jerk reaction, as if this is the only variable by which you should be evaluating your potential risks. In the real world, you are going to be comparing this to other solutions, some open source, some closed source, and attempting to decide which one is more or less secure. Does being closed source affect your guess as to its security? Sure. But does it affect your guess more than some other key variables? I argue not.
People who then outright dismiss something closed source with "lolololololo" are being ludicrously over-simplistic in their view of where security comes from and how people audit systems, and people like "Karunamon" who decide that it is "suspect" (which assigns direct motives, as if the developers were somehow attempting to hide something in their closed source binary) don't understand the threat model.
Other people on this thread, like "bigiain", are even talking about the NSA leaving some kind of detectable backdoor in this closed source binary: that's insane... if the NSA were actually going to leave a backdoor, it wouldn't be something you'd ever be able to look at, even with complete source code, and realize that it gives them complete control. At best, you'll find it as a "bug", assume it was a "mistake", and fix it, and they'll already have others as backup.
Oh, I agree with you on that; it's just that, when the program is closed source, it is already a big enough reason to dismiss it.
"It's open source, therefore trustworthy" is not valid, I agree. But "it is closed source, therefore untrustworthy" is valid, and that's what most people are saying.
The NSA is not omniscient any more than the security services of any other nation are.
Most of what they do probably is just exploiting known bugs, since they commit the resources to finding them as a basic part of their mission. You talk about a threat model, but you're proposing one which assigns a ludicrous amount of capability to an organization which, fundamentally, is still staffed by and draws upon the same pool of human talent that everyone else does (that is, graduates of universities, principally in the western world).
I think I agree with most of what you say, but the point is, not being open source is a non-starter right out the door. Open source doesn't give anything a pass, but without that at a bare minimum, we can't even begin to take it seriously.
But I wouldn't really be worried about backdoors for something like this, just incompetence. I don't think anyone is taking it seriously anyway.
If one were to put on the tinfoil cap to the degree many on HN are with the NSA story, it seems to me the NSA would assign good programmers to contribute quality code with really subtle exploitable flaws to all kinds of open source projects. This brings up a question I have had about open source: how much of it is audited by people skilled enough to notice these kinds of flaws? I expect most major projects are, but what about the incredible numbers of lesser projects?