Regardless of the merit of this specific accusation-and-denial cycle, the fact remains that WhatsApp is closed-source crypto, and there is no way in principle for the user to verify any security claims.

I happen to trust Moxie's principles, but not as much as I distrust the relationship-with-government imperatives implied by FB's vast business interests.




There's "no way in principle"? How is this whole story not evidence to the contrary? The person who found this didn't use WhatsApp source code.

Why do you feel that there's no way to verify closed-source software?


Whichever story you are talking about, this one or the Guardian one, it doesn't address the closed-source point.

There is theoretically a way to verify WhatsApp even though it's closed source, but it's practically impossible. It's hard enough to verify software even when the source is open, you built it yourself, and the whole platform and toolchain are trusted. A bunch of the potential NSA crypto backdoors were totally in the open.

The app could just be lying about resending old messages or even not encrypting them or any number of things much more subtle.


I'm sorry, but this simply isn't true. Software of far, far greater complexity than WhatsApp has been reverse engineered comprehensively by hobbyists and amateurs. Meanwhile, professionals have pretty sophisticated tools for doing this work at scale.


This seems to be the same angle played every time the analysis of crypto tools comes up on HN.

(Almost always) when someone mentions the 'impossibility of analysis' of closed-source programs they are actually referring to the difficulty in doing so -- not actually stating that it's impossible.

It is easier to look through source code.

Now, if we're progressing through this conversation according to script, it will be mentioned that open source projects have had tremendous security problems, too. (OpenSSL comes to mind...)

But that's beside the point expressed.

The only point, and it's the point that was originally expressed, is that open source code is easier to look through than a closed code base.

The hurdles posed by closed source, although not impossible to jump, significantly hinder the progress of analysis.

Is this not true?


Not necessarily. People who spend their days writing source code tend to think in terms of source code because that's what they know. People who spend their days analyzing binaries don't. Be mindful of the difference between difficult and unfamiliar.


I'm sorry, but if you look upthread, the comment I responded to not only didn't say that verifying open source was easier, but actually made the extreme claim that there was in principle no way to verify closed source software at all.

Meanwhile, addressing your (different) argument directly: sure, reading C code is easier than reading assembly code, and reading Python is easier than reading C. The easier it is to read a program the easier it is to reason about it.

But:

* It's not terribly difficult to reason about the functionality of messaging software in any language.

* WhatsApp is an extremely high-profile target; it would be weird if people hadn't reversed it by now, since less well-known programs that are much harder to reverse have been productively (as in: findings uncovered) reversed.

* The particular things we're looking for in a program like WhatsApp fall into two classes: (1) basic functional stuff like data flow that is even more straightforward to discern from control flow graphs than the kinds of things we routinely use reversing to find (like memory corruption flaws), and (2) cryptographic vulnerabilities that are difficult to spot even in source code, because they're implemented in the mathematical domain of the crypto primitives regardless of the language used to express them to computers.

Sure, though. It is easier to spot backdoors in open source software. It's just not capital-H Hard to do it in closed-source software, so this open vs. closed debate about backdoors is usually a red herring.


> less well-known programs that are much harder to reverse have been productively (as in: findings uncovered) reversed

> It's just not capital-H Hard to do it in closed-source software, so this open vs. closed debate about backdoors is usually a red herring.

No, you are oversimplifying the problem a lot.

In an open source project it is possible to create transparency in the development process by making every commit public and allowing third parties to mirror the source repositories, as well as perform reproducible builds, sign the artifacts, and so on.
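For illustration, the final check a third party performs with reproducible builds reduces to comparing digests of the artifact they built from the public source against the artifact actually being distributed. A minimal sketch in Python (the file names are hypothetical):

    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        # Stream the file through SHA-256 and return the hex digest.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical file names: one APK built locally from the public,
    # reviewed source; one downloaded from the official store.
    local = sha256_of("app-release-local.apk")
    official = sha256_of("app-release-official.apk")

    if local == official:
        print("OK: distributed binary matches the reviewed source build")
    else:
        sys.exit("MISMATCH: distributed binary was not built from the reviewed source")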

Once a project has been reviewed, it becomes pretty difficult to sneak in a backdoor later or deliver a backdoored build only to some specific targets.

In the case of closed-source smartphone applications it's very Hard to reverse engineer every single release, simply because it takes a staggering amount of work.

It's also Hard to detect that some unsuspecting users are receiving a "custom" APK, and to block that update automatically.


Nope. Professional security people have been using binary diffing tools to solve this problem since the early 2000s.


>> it's very Hard to reverse engineer every single release

> Nope

Nope to what? Are you saying that binary diffing is possible or that the amount of effort required is (remotely) comparable with analyzing source code?

I would like to see evidence supporting the latter statement, if this is what you are saying.


You keep saying this. Can you back your claims? In particular, the claim that Whatsapp has been extensively reverse-engineered?


Your argument is the same one as "nuclear submarines are impossible to build because I just thought about it for five minutes and can't build one". But Electric Boat Corporation from Groton, Connecticut delivers them regularly, on time and under budget (!). Googling around will tell you that these things exist and people do build them.


No, that's not at all my argument. I didn't ask you to show me a completed nuclear submarine. I asked you to show me a reverse engineering of WhatsApp.


You can use Google to prove to yourself that either the infosec industry really exists (including skilled full time reverse engineers) or there is a vast conspiracy. Same as you would prove to yourself that nuclear submarines exist, without ever being allowed onboard one to inspect it.

Consider all the people who study closed source browsers (MSIE) and plugins (Flash) to write malware. Consider all the people who reverse engineer malware to write protections or ransomware decryptors.

The people who can do such work don't work exclusively for the NSA and Google, and you can probably hire them for $1000 a day. But none of them will do tricks for you for free just to prove that they exist. They're too busy making money.

I saw some of the work described in this [1] excellent paper on reverse engineering NSA's crypto backdoor in Juniper equipment being done live on twitter. People exchanging small pieces of code, piecing together all the changes that were made in order to allow passively decrypting VPN traffic.

1 - https://eprint.iacr.org/2016/376.pdf


Are you asking me to "back up" the claim that security researchers use BinDiff tools to reverse out vulnerabilities from vendor patches?

At one of the better-attended Black Hat USA talks last year, a team from Azimuth got up on stage and walked the audience through an IDA reverse of the iOS Secure Enclave firmware. Your argument is that it's somehow harder to reverse a simple iOS application?


You keep saying it's easy, I keep saying, then do it and show me.


Have you ever heard of the Obfuscated C Contest? Even with the code right in front of your face, looking innocent, it's hard to see what it does.

When all you have is a disassembly, obfuscation is much easier.


At the binary level, obfuscation is powerful but obvious. Is iOS WhatsApp meaningfully obfuscated?


Considering iOS forbids self-modifying code, lots of tricks aren't even available.


You can demonstrate the presence of a vulnerability in closed source software but there's no way to demonstrate (or even provide evidence of) the absence of any vulnerabilities.


That's identically true of open-source software. To put it in the theoretical terms you're probably most comfortable with: the programming language used to represent a computer program has nothing fundamentally to do with whether it can be verified. Obviously some languages are easier to verify programs in than others, but the gap between assembly and C in ordinary compiled programs is surprisingly small.

Open vs. closed-source software is a concern orthogonal to verifiability.

I know this is a tough thing for people to get their heads around since it challenges a major open source orthodoxy. I like open source too. But the people who ratified it were not experts in this field, and this particular benefit of open source is overstated.


> the programming language used to represent a computer program has nothing fundamentally to do with whether it can be verified

That's not true. The design of a language can make it easier to verify with respect to certain properties. For example, it is much easier to verify that a typical Python program does not dereference dangling pointers than a typical C program.

It is true that open source does not help as much as some of its adherents like to think. But that doesn't mean that it doesn't help at all, and it is certainly not true that it cannot help substantially in principle even if it does not help much in current practice.


You're using a word, "easier", that is keeping us off the same page. I agree that Haskell programs are easier in many senses to verify than PHP programs. But our field does formal methods verification of assembly programs, for instance by lifting them to an IR.


That's news to me. At least it's news that this is actually practical for any interesting cases (i.e. real crypto code). Do you have a reference?


The Skype client was obfuscated, encrypted, and riddled with anti-debugging booby traps, none of which prevented people from figuring out exactly what it did. (Not exactly a formal analysis, but probably news to the people who think messaging apps have never been reversed before.)


I'm in a cab typing on my phone, but good Google searches are "llvm lifter" and "symbolic execution" or "SMT".
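To make those search terms concrete, here is a toy of the SMT step, using the Z3 Python bindings (an illustration of the technique, not of any production tool; real binary-analysis pipelines apply this to code lifted from machine instructions). It proves that a branchless absolute-value sequence, of the kind compilers emit, matches its specification for every 32-bit input:

    from z3 import BitVec, If, Solver, unsat

    x = BitVec("x", 32)

    # Branchless abs as a compiler might emit it: "mask" is all-ones
    # when x is negative (arithmetic shift right), all-zeros otherwise.
    mask = x >> 31
    branchless_abs = (x ^ mask) - mask

    # Specification: signed absolute value (both sides wrap on INT_MIN).
    spec = If(x < 0, -x, x)

    # Ask the solver for any input where the two disagree.
    s = Solver()
    s.add(branchless_abs != spec)
    assert s.check() == unsat  # no counterexample among all 2**32 inputs
    print("verified: branchless abs matches the spec for every 32-bit input")

The query shape, assert the negation of the property and check for unsat, is the same one SMT-based verification of lifted binaries builds on.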


AFAICT that's (at best) research level stuff. I'd love to be proven (heh) wrong, though. I think what lisper was after was actual practical applications, e.g. something along the lines of the CompCert C compiler[1].

[1] Which I'll note was written and verified in Coq, a high-level proof-oriented language.


I don't know what "(at best) research level stuff" means. Here's a well-regarded LLVM lifter by a very well-regarded vuln research team that's open source:

https://blog.trailofbits.com/2014/08/07/mcsema-is-officially...

There are a bunch of other lifters, not all of them to LLVM.

Already, with the idea of IR lifting, we're at a point where we're no longer talking about reading assembly but rather a higher-level language. But this leaves out tooling that can analyze IR (or, for that matter, assembly control flow blocks).

Someone upthread stridently declared that analyzing one version of a binary in isolation was hard enough, but that the work of looking at every version was "staggering", capital-H Hard. But that problem is in some ways easier than basic reverse engineering, which is why security product companies and malware research teams have teams of people using BinDiff-like tools to do it. "BinDiff" is a deceptive name: "Bin" refers to compiled binaries, but the tools don't diff raw bytes; they work on graph comparisons of program CFGs.
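As a toy of the underlying idea (a sketch only; real tools like BinDiff match functions by CFG structure, not just instruction text): summarize each function's basic blocks, then diff two releases to surface the handful of blocks a patch touched. The disassembly here is invented for illustration:

    def block_signatures(func_blocks):
        # Coarse per-block signature: the mnemonic sequence.
        # Coarse signatures survive register renaming and address changes.
        return {label: tuple(insn.split()[0] for insn in insns)
                for label, insns in func_blocks.items()}

    # Invented disassembly of one function in two consecutive releases.
    v1 = {"entry": ["mov eax, [rdi]", "cmp eax, 0", "jle exit"],
          "loop":  ["add eax, 1", "cmp eax, [rsi]", "jl loop"],
          "exit":  ["ret"]}
    v2 = {"entry": ["mov eax, [rdi]", "cmp eax, 0", "jle exit"],
          "loop":  ["add eax, 1", "call log_msg", "cmp eax, [rsi]", "jl loop"],
          "exit":  ["ret"]}

    s1, s2 = block_signatures(v1), block_signatures(v2)
    changed = sorted(b for b in s1 if s2.get(b) != s1[b])
    print("blocks to inspect by hand:", changed)  # -> ['loop']

The point being: a reviewer doesn't re-reverse the whole binary for each release; they triage only what changed.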

Part of the problem I have talking about this stuff is that this isn't really my area of expertise --- not in the sense that I can't reverse a binary or use a BinDiffing tool, because most software security people can, myself included, but in the sense that I'm describing the state of the art as of, like, 6 years ago. I'm sure the tooling I'm describing is embarrassing compared to what our field has now.

Open vs. closed source is an orthogonal concern to verifiability.


> Open vs. closed source is an orthogonal concern to verifiability.

The evidence you have presented does not support this conclusion. All you've shown is that it is possible to reverse-engineer object code, but this was never in doubt. It is still an open possibility (indeed it is overwhelmingly probable) that it is a hell of a lot easier to audit code if you have access to the source. All else being equal, more information is always better than less, and, as I pointed out earlier, the constraints imposed by some languages can often be leveraged to make the verification task easier.


I'm sorry, but once again: this thread began with a different claim. I'm not interested in debating whether it's easier to audit C code or assembly code: I've repeatedly acknowledged that it is.


Then what exactly are we arguing about? My original claim was:

"You can demonstrate the presence of a vulnerability in closed source software but there's no way to demonstrate (or even provide evidence of) the absence of any vulnerabilities." [Emphasis added] (Also please note that I very deliberately did not use the word "prove".)

Your response was:

"That's identically true of open-source software."

If you acknowledge that it is easier to audit source code then it cannot be the case that anything is "identically true" of open and closed source software (except for uninteresting things like that they are both subject to the halting problem). If it is easier to audit source code (and you just conceded that it is) then it is easier to find vulnerabilities, and so it is more likely that vulnerabilities will be found, and so (for example) the failure of an audit conducted by a competent and honest agent to find vulnerabilities is in fact evidence (not proof) of the absence of vulnerabilities.

But if you have source code written in a language designed to admit formal proofs then it is actually possible to demonstrate the absence of certain classes of vulnerabilities. For example, code written in Rust can be demonstrated (maybe even proven) to not be subject to buffer overflow attacks.


Look at the original claim you made. You said it's possible to provide evidence of the existence of vulnerabilities in closed source software, but not of their absence. To the extent that's true, it's true of open source software as well. The dichotomy you presented, about absence of evidence vs. evidence of absence, is not about open source software but about all software built without formal methods --- which, regardless of the language used, is almost all software.

The point you made is orthogonal to the question of whether we can understand and evaluate ("verify") closed-source software.


OK, I concede the point.

Let me try to advance a different thesis then: it is possible to write software in such a manner that the source code is amenable to methods of analysis that the object code is not. Accordingly, for software written in such manners, it is possible to provide certain guarantees if the source code is available for analysis, and those guarantees cannot be provided if the source code is not available. Would you agree with that?


I think it's possible that that's true, but am uncertain: the claim depends on formal methods for software construction that defy decompilation. But the higher-level the tools used for building software, the more effective decompilation tends to be. I also don't think we're even close to the apex of what decompilation (even of the clumsy, lossy compiled languages we have today) will be able to do.

So, it's an interesting question, and one I have much less of a strong opinion on.

If you can't tell, my real issue here is the idea that closed-source software is somehow unknowable. I know you're not claiming that it is. But I think if you look over these threads, you'll see that they tend to begin with people who do believe that, or claim to.


tptacek,

Over the years interacting with you here on HN, I think this basically sums up the worldview that puts you and me at odds:

> Open vs. closed-source software is a concern orthogonal to verifiability.

Is there a place where you have written at length, defending this assertion?

I am open to it. But it does not resonate with my understanding, nor my (substantial, I think) experience in deployments of open- and closed-source software with specific respect to verifiability.


What do you mean by verifiablity?

If you are using a casual, inspection = verification definition then I think most would agree that it is true that open source is easier to inspect.

But "verified" software often means formal mathematical verification, and that is orthogonal to if the source is open.


> But "verified" software often means formal mathematical verification, and that is orthogonal to if the source is open.

I think there may be cross-talk here related to who's doing the verifying too. I think the parent is assuming "verification" would imply that a 3rd party could verify the software in question. AFAIUI it's currently nowhere near practical for a 3rd party to verify closed-source software of any non-trivial size. (Correct me, if I'm wrong, obviously.)

It's still a research problem even for open-source unless the software is built with formal verification in mind (for example in Coq or Agda), but at least there's an existence proof that it's possible to do for non-trivial software (see CompCert C). That was still a multi-year effort and it's still a somewhat (architecturally) simple program as compilers tend to be.


Downthread tptacek talks about how his company does verification of binary images. I assume there are limits to what is possible, but that's really the same as any verification approach.

I do agree that 'who is verifying' is a valid way of looking at it too.

Regardless, it's pretty clear that tptacek means formal verification.

Edit: see http://arieg.bitbucket.org/pdf/seahorn.pdf


My company doesn't do this. But many companies specialize in it.


If WhatsApp allows a few experts or companies access to the code, they can always verify it. The real issue is that it's centrally hosted, not open vs. closed. If I can build the software myself and host it on my own server, that might be better.


Open vs. closed is obviously orthogonal to verifiability. Those who verify software have access to the source, open or not.

More parties have the opportunity to be verifiers of open software. However, a given OSS program might not attract skilled verifiers.


> Those who verify software have access to the source, open or not.

That's not true. Black hats are essentially verifiers, and they generally do not have access to closed source.


How do you look at the last 15 years worth of Microsoft Windows OS advisories and conclude that closed source has prevented hats of all colors from discovering vulnerabilities?


Why do you keep raising this straw man? It is obviously possible to reverse engineer object code and find vulnerabilities. But it is (equally obviously) easier to examine source code to find vulnerabilities.


This slippery slope goes in both directions. Software is also easier to audit when it's built in higher-level languages, but we don't have ideological objections about verifiability to software written in C.


I'm not sure I understand what you mean by that. Do you mean that people think that it doesn't matter what language code is written in as long as it is open source? I certainly don't believe that. It's pretty clear to me that C is a terrible language for writing secure code. (But coming up with something that is actually better than C is not so easy.)


All agree that higher level (e.g. C) is easier to reason about than lower level (assembly).

Now, you say that open source (e.g. C) is not only easier, but qualitatively different: open source good, closed source bad.

tptacek points out: higher level (e.g. Haskell) is easier to reason about than lower level (e.g. C) - maybe even qualitatively.

So, why are people only complaining about closed source, when they should (by analogous reasoning) be complaining about code written in C? Granted, it's possible to analyse, but it's obviously easier when it's written in Haskell!


For what it's worth, I complain about code written in C too.


Here might be something to look for among those advisories: how many of them were discovered in the source versus in the field.

If the vendor is lazy about verifying code, it being closed is a big disadvantage. "We're not combing the code for bugs, and neither is anyone else; if it's not reported to us, it doesn't exist."


They are verifiers who cannot, with straight faces, declare a body of code verified.


I'll bet some of them can.


> [T]he programming language used to represent a computer program has nothing fundamentally to do with whether it can be verified.

I would guess that programs in a Turing-incomplete language will be easier to verify than ones in a Turing complete one.


> no way to demonstrate (or even provide evidence of) the absence of any vulnerabilities

I'd say that's hard for open source software to do as well.


Sure, but so what? Hard != impossible.

EDIT: WTF people? Why is every response to this comment being downvoted into oblivion? The sibling comment to this one (https://news.ycombinator.com/item?id=13395657) was killed in a matter of minutes despite being (IMHO) a perfectly reasonable and constructive response.


Votes aren't why that comment is dead --- note that it doesn't say "flagged". There is stuff that happens behind the scenes that [deads] users, especially new accounts; I think some of it might be voting ring related but not sure.

(That comment is incorrect but I agree with you that it's constructive).


It seems to have come back from the dead too :-)


Reanimation powers are hidden behind the "vouch" link :)


Big Backdoor showing who's boss.


Yes, but it is trivial to backdoor closed-source software.

Introducing backdoors into open source software is also more difficult.


Open source is a red herring, as you say. Closed binary is the problem.

I need a way to verify that the binary I am installing is the same as the binary that has been thoroughly vetted by security researchers. In the modern mobile app ecosystem, on a major OS, running a major app, I can't carefully pick and choose which binary version to install. I get whatever the OS company's server pushes to me, and I can't downgrade to a known good version.


It's possible to discover this without looking at the code at all, possibly even by accident, because it's going to be obvious to anyone who changed their keys that messages were automatically re-encrypted to the new key when they received them. That doesn't mean that issues which aren't user-visible would be found, and it took a long time for anyone to spot this one.


> Why do you feel that there's no way to verify closed-source software?

Isn't that the very definition of security through obscurity?


No. The security of WhatsApp is not dependent on it being closed source.


[flagged]


Regardless of how the comment tries to work around it, suggesting a cognitive deficit is in fact rude.


[flagged]


The sense of decorum we all ought to have when participating in HN is called out in https://news.ycombinator.com/newsguidelines.html. For example: "Be civil. Don't say things you wouldn't say in a face-to-face conversation. Avoid gratuitous negativity."


Well, I don't know if they're cognitive ones, but you seem to have some deficits in "knowing how to engage people on forums" and "knowing what security by obscurity means". The content of your post that isn't rude is just inaccurate.



