derpleplex's comments | Hacker News

Perfect security does not currently exist. A trusted party must store the information somewhere in order to authorize and validate users without spreading it elsewhere.

You can't get around this problem unless you invent magic psychic computers. What is the point in finding every possible flaw with security here? There is a gradient of complexity: the time it takes to break these things. Currently, everything that exists is susceptible to being broken, misused, or modified.

If you assume that your attackers know everything, and have the ability to immediately find and apply that knowledge, then yes, it can seem scary. But I tend to think that the more capacity a person has to act on such knowledge, the bigger the intellectual burden they carry.


So it sounds like messages are associated with accounts, accounts are linked to phone numbers, and those phone numbers are easily recoverable from a centralized database. If that's accurate, then it's not simply imperfect anonymity. It's not even pseudonymous. It's about as not-anonymous as it gets, which is fine for a casual messaging/chat app where no anonymity is expected. But users should not be misled into thinking it's safe to use it for anything they wouldn't say to a government official's face.


You are right; I was reacting to a pattern of argument, which was not the point I should have been focused on. That just creates more noise around more important issues.


> What is the point in finding every possible flaw with security here?

Because security is only as good as its weakest link, and an adversary who wants your sensitive data won't choose not to break in because your security is "mostly OK".

At the risk of being overly negative, if you have this attitude, you're not building a secure system, and, for your users' sakes, you shouldn't say that you are.


That was not my point; I overreacted to a pattern of argument that bothers me when my own mind repeats it.

I agree that security should be taken seriously, with care and caution, and that users whose safety depends on knowing the potentially faulty assumptions behind a device's use should be able to find that information easily.


Obviously there exist people who believe that the things they say in this app, which claims to provide "anonymous" communication in a country with a history of repression of free speech, aren't traceable back to themselves. We've established here that this isn't true.

It's fine if you think this is an unreasonable conclusion to draw based on the evidence; you're entitled to your opinion. But the question is, do you feel comfortable with the possibility of people going to jail or dying because they didn't understand the security tradeoffs that this app makes? I think it's because some of us see this as an extremely real possibility -- particularly given the frankly ineffective security practices described here -- that you're seeing so much backlash.


The OP was probably responding to the fact that larger nations are able and willing to lean on service providers to extract personal information and deanonymize private conversations. This is a normal mode of analysis for citizens trying to secure their conversations from the US government, for example.

The Burmese government, however, is not likely to be able to pull off something like this, especially if the authors are not in Myanmar. In fact, there's a reasonable chance that the authors of the software are CSOs sponsored by other governments that would like to see changes in the regime in Myanmar (like the CIA's ZunZuneo app in Cuba); in that case it is not likely that the government there will be able to play effective whack-a-mole.

The danger here is that the government will ban phones, or create some sort of licensing scheme for them, if this becomes a large enough civil-unrest problem. But it's equally likely that other countries would then respond with sanctions and human-rights complaints against the regime in the name of 'freedom'.


I want to see you design a system so complex it cannot be understood.

Some people think cryptography is this utterly complex thing sitting on the edge of understanding. It isn't.

Evidence that you rolled your own crypto, authentication, or key-exchange mechanism is the first thing any attacker you'd actually want to worry about will look for.

Developers design systems that are easy to break out of ignorance and hubris. That's not to say you can't learn how to implement a secure system, just that if you did any research, you'd know that Rolling Your Own Is Bad: proper design is Hard, and people with far more experience than you know about weak points in your design that you don't.


Complexity = the time it takes to brute-force a crypto algorithm. I am using the word in the formal, traditional sense, where the cost of a brute-force solution (and of smarter heuristic attacks) is literally measured in terms of computational complexity (big-O) or probability.
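
To put numbers on that, here's a back-of-the-envelope sketch in Python. The 10^12 keys/second attacker speed is an assumption for illustration, not a measurement:

    # Rough brute-force cost for an exhaustive key search. The assumed
    # attacker speed of 1e12 keys/second is generous but illustrative.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def brute_force_years(key_bits, keys_per_second=1e12):
        """Expected years to search half of a 2**key_bits key space."""
        return (2 ** (key_bits - 1)) / keys_per_second / SECONDS_PER_YEAR

    for bits in (56, 80, 128, 256):
        print(f"{bits:>3}-bit key: ~{brute_force_years(bits):.3g} years")

A 56-bit (DES-era) key falls in hours at that rate; a 128-bit key takes on the order of 5e18 years. Which is exactly why real attackers go after everything except the exhaustive search.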


Try designing a system that can't be exploited, one whose security rests on its algorithm's correctness such that only a brute-force attack remains.

Few algorithms are correct in the sense that their computational complexity alone provides their security. Without a formal proof, an audit, and some expert testimony, I will not believe your hand-rolled algorithm is correct.

This is why you rely on proven algorithms and implementations and never roll your own. But you should not believe that using a correct algorithm alone is enough to deem a system secure.

Its implementation might be exploitable in a way that sidesteps the security provided by your algorithm.

The idea that it might take an attacker five billion years to brute-force your ciphertext is nice, but if you're exchanging keys in an insecure way, then that security goes right out the window.
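
For contrast, here's a minimal sketch of what leaning on a vetted primitive looks like, using the third-party Python `cryptography` package (the package choice and the HKDF `info` label are my assumptions, not anything a particular app is known to use):

    # X25519 key agreement via a vetted library instead of an ad-hoc scheme.
    # Note: raw Diffie-Hellman like this still needs authentication (e.g.
    # signed public keys), or a man-in-the-middle can sit in the exchange.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Each side combines its own private key with the peer's public key.
    alice_shared = alice_priv.exchange(bob_priv.public_key())
    bob_shared = bob_priv.exchange(alice_priv.public_key())
    assert alice_shared == bob_shared

    # Derive the session key through a KDF rather than using the raw secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"demo handshake").derive(alice_shared)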

Storing your salts in the same DB? The attacker's search time is cut down, since they can grab those as well.
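
For reference, the boring, well-trodden version of salted password hashing needs nothing beyond the Python standard library (the iteration count here is an assumed ballpark, to be tuned to your hardware):

    # Salted PBKDF2 password hashing from the standard library. A per-user
    # random salt forces the attacker to brute-force each hash separately
    # instead of reusing one precomputed table for the whole DB.
    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # assumed ballpark; benchmark on your own hardware

    def hash_password(password, salt=None):
        salt = salt if salt is not None else os.urandom(16)  # unique per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, expected):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, expected)  # constant-time compare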

This is why emphasis is placed on tearing apart a system that claims it is secure. Most of the time it isn't, and usually in ways that are easily identifiable.

Good faith in the developer isn't enough when the well-being of many people may be at stake over claims that cannot be backed up.


Perfect security may not exist, but far better approaches than storing encrypted phone numbers in a central database have been devised, and they are widely described in the literature on the subject.
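
To illustrate why: the phone-number space is small enough to simply enumerate, so hashing numbers in place buys almost nothing. A sketch, assuming a hypothetical leaked table of unsalted SHA-256 hashes:

    # Recovering "protected" phone numbers from a leaked table of unsalted
    # hashes by enumerating every possible number in one known area code.
    import hashlib

    AREA_CODE = "415"  # attackers narrow the space with known prefixes

    def recover(stored_hashes):
        hits = []
        for line in range(10_000_000):  # only 1e7 candidates per prefix
            number = AREA_CODE + f"{line:07d}"
            if hashlib.sha256(number.encode()).digest() in stored_hashes:
                hits.append(number)
        return hits

    table = {hashlib.sha256(b"4155551234").digest()}
    print(recover(table))  # ['4155551234'] -- recovered in seconds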


If people of this century continue to believe this kind of human-to-human domination is acceptable and desirable, I want off the planet, and possibly emancipation from the human race.


> I get that it's kind of scary, but it's not just Facebook,

It's scary because it is unregulated, automated harassment. What if you have compound PTSD? Does Google know what to remind me of and what not to?

> digital equivalent of reading body language on people in your store.

Yes, data is the body language of the digital world. Thank you, that is the perfect equivalent.

It's not 100% reliable, and an intelligent person has the capacity to subvert it in order to establish their own personal privacy.

I care more about intellectual privacy than I do about any other form of it. If people believe it is 100% reliable, then they have to prove telepathy exists.


> GM went so far as to argue locking people out helps innovation. That’s like saying locking up books will inspire kids to be innovative writers, because they won’t be tempted to copy passages from a Hemingway novel.

This is one of the best analogies I've heard.

