I'm sorry, but if you look upthread, the comment I responded to not only didn't say that verifying open source was easier, but actually made the extreme claim that there was in principle no way to verify closed source software at all.
Meanwhile, addressing your (different) argument directly: sure, reading C code is easier than reading assembly code, and reading Python is easier than reading C. The easier it is to read a program the easier it is to reason about it.
But:
* It's not terribly difficult to reason about the functionality of messaging software in any language.
* WhatsApp is an extremely high-profile target; it would be weird if people hadn't reversed it by now, since less well-known programs that are much harder to reverse have been productively (as in: findings uncovered) reversed.
* The particular things we're looking for in a program like WhatsApp fall into two classes: (1) basic functional stuff like data flow that is even more straightforward to discern from control flow graphs than the kinds of things we routinely use reversing to find (like memory corruption flaws), and (2) cryptographic vulnerabilities that are difficult to spot even in source code, because they're implemented in the mathematical domain of the crypto primitives regardless of the language used to express them to computers.
Sure, though. It is easier to spot backdoors in open source software. It's just not capital-H Hard to do it in closed-source software, so this open vs. closed debate about backdoors is usually a red herring.
> less well-known programs that are much harder to reverse have been productively (as in: findings uncovered) reversed
> It's just not capital-H Hard to do it in closed-source software, so this open vs. closed debate about backdoors is usually a red herring.
No, you are oversimplifying the problem a lot.
In an Open Source project it is possible to create transparency in the development process by making every commit public and allowing third parties to mirror the source repositories, as well as to perform reproducible builds, sign the artifacts, and so on.
Once a project has been reviewed, it becomes pretty difficult to sneak in a backdoor later or deliver a backdoored build only to some specific targets.
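To make the reproducible-builds point concrete, here is a minimal sketch (the file paths are hypothetical; real projects additionally publish signed hashes) of how two independent builders can check that they produced byte-identical artifacts from the same tagged commit:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a build artifact so independent builders can compare results."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(my_artifact: str, mirror_artifact: str) -> bool:
    """If the build is reproducible, two independent builds of the same
    commit are byte-identical, so their hashes agree."""
    return sha256_of(my_artifact) == sha256_of(mirror_artifact)
```

If the hashes diverge, either the build is not reproducible or one artifact was tampered with, and that divergence is exactly the signal reviewers need.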
In the case of closed-source smartphone applications it's very Hard to reverse engineer every single release, simply because it takes a staggering amount of work.
It's also Hard to detect that some unsuspecting users are receiving a "custom" apk, and to block such an update automatically.
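As a rough sketch of what detecting a targeted build could look like (the report schema here is entirely hypothetical; app stores don't expose anything like this today), clients could report the hash of the build they actually received, and hashes that diverge from the majority for a given version would be flagged:

```python
from collections import Counter

def flag_targeted_builds(reports):
    """reports: list of (user_id, version, sha256) tuples (hypothetical schema).
    If most users on a version share one artifact hash, a divergent hash may
    indicate a targeted 'custom' build delivered only to specific users."""
    by_version = {}
    for user, version, digest in reports:
        by_version.setdefault(version, []).append((user, digest))
    suspicious = []
    for version, entries in by_version.items():
        counts = Counter(d for _, d in entries)
        majority_digest, _ = counts.most_common(1)[0]
        suspicious += [(u, version, d) for u, d in entries
                       if d != majority_digest]
    return suspicious
```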
>> it's very Hard to reverse engineer every single release
> Nope
Nope to what? Are you saying that binary diffing is possible or that the amount of effort required is (remotely) comparable with analyzing source code?
I would like to see evidence supporting the latter statement, if this is what you are saying.
Your argument is the same as "nuclear submarines are impossible to build because I thought about it for five minutes and can't build one." But Electric Boat Corporation of Groton, Connecticut delivers them regularly, on time and under budget (!). Googling around will tell you that these things exist and that people do build them.
You can use Google to prove to yourself that either the infosec industry really exists (including skilled full-time reverse engineers) or there is a vast conspiracy. Same as you would prove to yourself that nuclear submarines exist, without ever being allowed onboard one to inspect it.
Consider all the people who study closed source browsers (MSIE) and plugins (Flash) to write malware. Consider all the people who reverse engineer malware to write protections or ransomware decryptors.
The people who can do such work don't work exclusively for the NSA and Google, and you can probably hire them for $1000 a day, but none of them will do tricks for you for free just to prove that they exist. They're too busy making money.
I saw some of the work described in this [1] excellent paper on reverse engineering the NSA's crypto backdoor in Juniper equipment being done live on Twitter: people exchanging small pieces of code, piecing together all the changes that were made in order to allow passively decrypting VPN traffic.
Are you asking me to "back up" the claim that security researchers use BinDiff tools to reverse out vulnerabilities from vendor patches?
At one of the better-attended Black Hat USA talks last year, a team from Azimuth got up on stage and walked the audience through an IDA reverse of the iOS Secure Enclave firmware. Your argument is that it's somehow harder to reverse a simple iOS application?
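The patch-diffing idea itself is easy to sketch. This is a naive byte-level toy, not what BinDiff or Diaphora actually do (they match functions and control-flow graphs), but it shows why only the delta between two releases needs fresh analysis, rather than the whole binary every time:

```python
def changed_regions(old: bytes, new: bytes):
    """Report (start, end) offsets where two builds diverge.
    A reverser then focuses analysis on just these regions."""
    regions = []
    n = min(len(old), len(new))
    i = 0
    while i < n:
        if old[i] != new[i]:
            start = i
            while i < n and old[i] != new[i]:
                i += 1
            regions.append((start, i))
        else:
            i += 1
    if len(old) != len(new):
        # One build has trailing bytes the other lacks.
        regions.append((n, max(len(old), len(new))))
    return regions
```

Identical releases produce an empty list; a small targeted change produces a short list of offsets, which is a far smaller haystack than the full binary.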