In any other context I would dismiss this comment as a troll, but yeah.. Any program that passes itself off as "secure" and is closed source, in this climate, is immediately suspect.
It is possible to analyze the behavior of a program without access to its source code; what source really buys you is the ability to easily recompile and distribute modified builds. I routinely open binary files in disassemblers to read through their behavior. Yes, it is possible to obfuscate the hell out of parts of binary code, and that would certainly throw up a red flag for me, but concentrating on the source code misses the point that source code can also be obfuscated: if the source looks "too weird" we probably aren't going to trust it either, and it is also possible to hide a bunch of behaviors across wide areas of "open", "clear" code, so that nothing looks "too weird" at any given place in the code.
To be very explicit about this, as I think this is a very subtle problem that people tend to totally misunderstand: if I wanted to distribute a chat program and have it be "evil" I would not distribute a binary with hidden behavior (if nothing else, when you find this code in my binary I'm pretty damn well screwed ;P): I'd instead distribute an open source program that involved a threaded work queue for handling multiple socket connections to peers and which had a few very subtle use-after-free race conditions that would only come up under nearly impossible timing scenarios that I knew how to trigger and exploit, giving me complete control of your client whenever I wanted.
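To make the shape of such a "bug" concrete, here is a minimal sketch. Python has no real use-after-free, and this toy forces the interleaving with events rather than relying on timing, but it shows the check-then-act window that a threaded work queue opens up: the check passes, another thread "frees" the resource, and then the use blows up (or, in C, hands an attacker a dangling pointer). All names here are hypothetical, not from any real chat client.

```python
import threading

class Connection:
    """A toy peer connection whose buffer can be 'freed' by another thread."""
    def __init__(self):
        self.buf = bytearray(b"incoming packet")

    def close(self):
        self.buf = None  # the "free"

def racy_read(conn, checked, closed, errors):
    # Worker thread: a classic check-then-use with a window in between.
    if conn.buf is not None:      # the check passes...
        checked.set()
        closed.wait()             # ...but the closer runs inside this window
        try:
            conn.buf.decode()     # use-after-"free"
        except AttributeError as e:
            errors.append(e)

conn = Connection()
checked, closed = threading.Event(), threading.Event()
errors = []
t = threading.Thread(target=racy_read, args=(conn, checked, closed, errors))
t.start()
checked.wait()
conn.close()                      # frees the buffer inside the window
closed.set()
t.join()
print(len(errors))  # 1: the "nearly impossible" interleaving, forced on demand
```

In real code the window is a few instructions wide and only an attacker who knows exactly where it is can hit it reliably, which is the whole point: to everyone else it looks like an honest concurrency mistake.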
These are the kinds of bugs people use to attack open source "secure" web browsers like Chrome year after year at Pwn2Own because people are simply bad at concurrency. In this sense, I'd thereby trust a closed source web browser that had no threads or which was implemented in a type-safe garbage collected language (executed on a simply-engineered runtime from someone separate that I trusted, which could also be closed source for all I care) a lot more than I'd trust Chrome. I'd probably even have an easier time understanding what it is doing by disassembling it than by reading Chrome's code. (To be clear, such a browser doesn't exist: probably you should use Chrome.)
Actually, there has been some recent research [0] in cryptography showing it is possible to produce binaries that are obfuscated in such a way that they are computationally infeasible to deobfuscate (see the linked reference for a formal definition of indistinguishability obfuscation).
Yes. Even more simply, it is impossible in the general case to determine, for a given x86 binary, which parts are code and which parts are data (there was someone at RV '04 who published a paper on that result while working on his CodeSurfer binary analysis tool).
This does not, however, contradict my argument: as we can take a binary and generate really horrible C code from it (by just emulating via C, unrolling the instructions) the same result is true of source code; however, we would find that block of code highly suspicious ;P.
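A sketch of what that "emulate via C, unrolling the instructions" output looks like (here in Python, with a made-up five-instruction basic block): every decoded instruction becomes one mechanical statement against a register/memory state. It is perfectly legal source code, and utterly unreadable at any real scale, which is exactly why we would find it suspicious.

```python
# Machine state: registers and memory; one assignment per decoded instruction.
regs = {"eax": 0, "ebx": 0, "ecx": 0, "eflags_zf": 0}
mem = bytearray(64)

# --- unrolled translation of a (hypothetical) basic block ---
regs["eax"] = 5                                          # mov eax, 5
regs["ebx"] = regs["eax"]                                # mov ebx, eax
regs["eax"] = (regs["eax"] + regs["ebx"]) & 0xFFFFFFFF   # add eax, ebx
regs["eflags_zf"] = int(regs["eax"] == 0)                # (flags update)
mem[0:4] = regs["eax"].to_bytes(4, "little")             # mov [0], eax

print(regs["eax"], mem[0])  # 10 10
```

Multiply this by a few hundred thousand instructions and you have "source code" that is, for audit purposes, no better than the binary it came from.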
Again: if I wanted to give myself a backdoor into a chat program, I wouldn't distribute a backdoor into a binary, I'd provide open source code with subtle bugs that would take people years to find and that when found would look like honest "concurrency is hard" mistakes.
I am not saying that whether something is open source or not is totally irrelevant, but the people I'm responding to seem to be having this gut reaction "if it isn't open source it can't be trusted", so I'm attempting to provide enough context to show that it isn't that simple.
I think there's also a bit of "if the crypto can be trusted, opening the source code doesn't weaken it and helps build trust".
Personally, I'm running BTSync - and even though I've got it syncing EncFS encrypted data, the app has enough privileges to read the unencrypted versions of those EncFS filesystems if it were instructed to.
I'd feel happier if a few trusted security experts from a few different countries/jurisdictions had blogged about their analysis of the source code and the likelihood of the binary produced from the source being either intentionally or unintentionally compromised.
Having said that, as you point out, we've got the Chrome source, lots of people look very hard at it, and it _still_ fails year after year… Hopefully, BTSync and BTChat (or my hypothetical open source reimplementations) are significantly less complex than a full-featured browser, and would not require nearly so much focus on performance, so "provably secure" (or perhaps just "significantly less likely to have obscure bugs") coding techniques could be used in spite of speed penalties. Browser vendors have significant motivation to optimise for speed above all else; hopefully the much smaller subject domains of sync or chat clients would allow security to sensibly be prioritised instead.
Honest question: if the source code is compiled with the same compiler (although on a different computer), will it give a different binary with a different hash?
Even compiling the source code with the same compiler, same build options, and same computer will generally generate a different binary with a different hash! Special effort is required to generate "deterministic builds", sometimes seen, e.g., for verifying the integrity of gambling software. Here's what one fellow has been going through in an effort to accomplish that:
Potentially yes, if the compiler is not configured (at build time) with the same options, or if the same optimization levels are not used when building the source.
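The nondeterminism can be sketched in a few lines. Real compilers leak build timestamps (`__DATE__`/`__TIME__`), absolute paths, and randomized symbol ordering into the artifact; this toy "build" stamps the artifact with a timestamp the same way, so two builds of identical source never hash the same. (The `build` function is a stand-in for illustration, not a real toolchain.)

```python
import hashlib
import time

def build(source: bytes) -> str:
    """'Compile' by stamping the artifact with a build timestamp,
    the way __DATE__/__TIME__ or debug-info paths leak into real binaries."""
    artifact = source + b"\n// built at %d" % time.time_ns()
    return hashlib.sha256(artifact).hexdigest()

src = b"int main() { return 0; }"
h1 = build(src)
time.sleep(0.01)  # a later build of the exact same source...
h2 = build(src)
print(h1 == h2)  # False: same source, same "compiler", different hash
```

Deterministic-build projects work by hunting down and pinning every such source of variation so that the hash becomes reproducible.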
Your argument is a bit disingenuous. You say you'd trust simple closed-source software more than complicated open-source counterparts. That's a false dichotomy. Open source X is always more easily auditable than closed source X.
I don't think many people are worried that BT will be evil. I think many more people are worried that they'll be incompetent, and it's very hard to be competent at cryptography. That's why we want the source.
From a "non US person" perspective, I go to BitTorrent's website and read:
Company Overview
BitTorrent Inc. is an Internet technology company based in San Francisco.
And immediately I've got _two_ things to worry about - 1) "will BT be evil/incompetent?", and 2) "will BT be leaned on by the NSA and be coerced into being evil?"
If you're a US company (or individual, or a company with US-based management, developers, investors, or infrastructure) promoting security-related products in the post-Snowden era – many of us outside the US now have very good reasons to apply extra scrutiny to those products. Opening your source will make a _big_ difference in how readily suspicions of evilness can be allayed. As saurik points out upthread, having the source available doesn't guarantee the rest of the open source community will find and fix any carefully-enough-crafted backdoors, but keeping the source closed sends a strong message…
From a US perspective: the NSA doesn't lean on non-US companies because it is fully authorized to just hack them directly. Your data kept in foreign countries doesn't even have the nominal legal protection that US data does.
Unless you've got unbreakable security the NSA is well funded enough that it's irrelevant where you do business.
Except that in the case of non-US companies the NSA has to do the actual hacking. In the case of US ones they simply need to "ask" a company to, say, hand over their master keys - much easier.
Ok, then I will say your argument is a bit disingenuous, as I did not compare "open source X" to "closed source X": I am trying to show that "is it open source" is just one of many variables that you might consider while evaluating the security of a program, and I am further attempting to claim that it is not even the most important of these variables. In the real world, we are choosing between software that differ in many ways, not just one: all individual dichotomies are "false".
I mean, I could also say the argument "you can't trust closed source software for this stuff" is also "a bit disingenuous" via the analogous argument that that open vs. closed is a "false dichotomy": a single-threaded/type-safe/un-obfuscated X-source program "is always more easily auditable" than a multi-threaded/type-unsafe/obfuscated X-source program. Now, the question becomes "what variables are more important to you, and will your reactions be 'knee jerk' or rational"?
Sure, it's only one variable. I disagree that it's not the most important one.
You can audit a multithreaded open source program much more easily than you can audit a single-threaded closed source program. Merely compiling it adds more obfuscation than making it needlessly complicated will, and making it needlessly complicated will immediately raise red flags.
You don't need to analyze an open-source program to see that it's been obfuscated, and, if it claims to do anything that requires security, that would probably be enough to make you suspicious.
My ultimate point is that compiling is a form of obfuscation that has extreme plausible deniability. There's no form of obfuscation that will complicate the code of an open source program as much as compiling it will, while still looking as innocuous as compiling does.
> You can audit a multithreaded open source program much more easily than you can audit a single-threaded closed source program.
I think this is our core disagreement, as I've been pretty clear about how I don't just accept this statement at face value given the large class of subtle bugs that can and do occur constantly in multi-threaded systems. I find reading through and finding bugs in complex binary-only buffer-management or even cryptographic systems "easy" (time consuming, but not requiring much brain function; even obfuscation just adds time and effort, it doesn't require greater intelligence); yet, I have never managed to remove every single concurrency bug from an open-source project I maintain that only actually uses threads to separate tasks like "searching" from "downloading update" (of course, I likely didn't put as much effort into it, so this doesn't prove anything; but I hope it makes you think), and I've seen people "much smarter than me" (working at Google and Apple) fail at doing so in systems that are "much more important" (working on Chromium and WebKit).
You might then just say that I'm probably just stupid and that a more reasonable programmer wouldn't have the same kinds of issues, but "concurrency is hard" I had assumed was a well-known issue. (And again, I think that this all becomes much more interesting to think about once you realize that a "backdoor" left by an intelligent opponent is going to be nearly indistinguishable from a "bug". At least once a year there is some obscure privilege escalation bug found in the Linux kernel: I think it is interesting to at least consider momentarily that any of those might have been a backdoor, and not a bug; the concept of what constitutes purposely malicious code updates tends to be way too narrow in my view, and leads people to only consider back doors that are easy to see when you print out the source code.)
> There's no form of obfuscation that will complicate the code of an open source program as much as compiling it will, while still looking as innocuous as compiling does.
Yeah, no: I seriously read through compiled code every day. I was doing a lot of reading through compiled code yesterday while working on figuring out why Substrate isn't working on iOS 7 for example. I am much much much more afraid of the bugs that are latent in a large multi-threaded project that has "lots of hands in the cookie jar" so-to-speak than of a simpler implementation distributed as a closed-source binary. I'm likewise more afraid of open-source projects that accept patches from large numbers of people (which means more people who might be actively /trying/ to add difficult-to-see exploitable bugs), or of projects that are implemented to run as native code versus ones that run inside of type-safe virtual machines (especially if the virtual machine is coded to a spec and I get to select the one it runs on).
I don't think we're going to agree on this, because at this point it is a matter of degree. I find it more likely that an error will not be malicious; it will just be an honest mistake that makes the entire system less secure.
Cryptosystem security isn't the same as binary security (i.e. against exploits). You can have a very insecure binary (in the exploit sense) but still have valid, strong cryptography (e.g. in its output).
Sure, with something like this, you want the binary to also not be easily exploitable, but I think that getting the cryptography right is more important.
Given these two points (malice vs incompetence and cryptography vs security), I think it is more important for the program to be open source, even if it's complicated, than the other way around.
There aren't many expert cryptographers who are also expert reversers, sadly.
That doesn't sound reasonable. I don't care who you are, auditing the source version is orders of magnitude easier than auditing the binary version. I say this as someone who has been reverse engineering binary code for well over a decade. This doesn't even account for the fact that requiring reverse engineering skill already eliminates the majority of potential auditors, whether due to ability or due to lack of time.
Easy and time-consuming are mutually exclusive in this context: it's about cost, and time is money. It's hard in the sense that the traveling salesman problem is hard, even if the logic for the naive solution is straightforward.
Look, I'm sorry, but let's take an extreme example here to demonstrate how you are arguing something different than I am: if you are seriously trying to tell me that you have an easier time analyzing the source code for "grep" vs the binary for "false", something is seriously seriously wrong; the binary for false can seriously be less than 50 bytes large. If you show me an open source system and a closed source system, they are not going to be identical but for that one variable: that is just one of many variables.
Again, differently, you are again falling into the same problem of looking at this as a "single issue voter": open-source X vs. closed-source X. My complaint is that people go "omg, no source code, I can't trust this" as this knee jerk reaction, as if this is the only variable by which you should be evaluating your potential risks. In the real world, you are going to be comparing using this to other solutions, some open source, some closed source, and attempting to decide which one is more or less secure. Does being closed source affect your guess as to its security? Sure. But does it affect your guess more than some other key variables? I argue not.
People who then outright dismiss something closed source with "lolololololo" are being ludicrously over-simplistic in their view of where security comes from and how people audit systems, and people like "Karunamon" who decide that it is "suspect" (which assigns direct motives: the idea that they are somehow attempting to hide something in their closed source binary) don't understand the threat model.
Other people on this thread, like "bigiain", are even talking about the NSA leaving some kind of detectable backdoor in this closed source binary: that's insane... if the NSA were actually going to leave a backdoor, it wouldn't be something you'd ever be able to look at, even with complete source code, and realize that it gives them complete control. At best, you'll find it as a "bug", assume it was a "mistake", and fix it, and they'll already have others as backup.
Oh, I agree with you on that; it's just that, when the program is closed source, it is already a big enough reason to dismiss it.
"It's open source, therefore trustworthy" is not valid, I agree. But "it is closed source, therefore untrustworthy" is valid, and that's what most people are saying.
The NSA is not omniscient any more than the security services of any other nation are.
Most of what they do probably is just exploiting known bugs since they commit the resources to finding them as a basic part of their mission. You talk about a threat model, but you're proposing one which assigns a ludicrous amount of capability to an organization which, fundamentally, is still staffed and draws upon the same pool of human-talent that everyone else does (that is, graduates of universities principally in the western world).
I think I agree with most of what you say, but the point is, not being open source is a non-starter right out the door. Open source doesn't give anything a pass, but without that at a bare minimum, we can't even begin to take it seriously.
But I wouldn't really be worried about backdoors for something like this, just incompetence. I don't think anyone is taking it seriously anyway.
If one were to put on the tinfoil cap to the degree many are on HN with the NSA story, seems to me like the NSA would assign good programmers to contribute quality code with really subtle exploitable flaws to all kinds of open source projects. This brings up a question I have had about open source: how much of it is audited by people skilled enough to notice these kinds of flaws? I expect most major projects are, but what about the incredible numbers of lesser projects?
I think that's missing a major point (I'm kind of agreeing with you/considering that required for 'secure', I'm not on Saurik's side of the debate):
Open source means it's not going away. I won't, ever again, buy into a network that I cannot keep alive. Which is one of the reasons I don't use Whatsapp and actively prevent my family from using it. No G+. Leaving GTalk (Sorry, 'Hangout').
BTSync? Nope, unusable. BTChat? Same. Even if some highly trusted party would explain to me that BTChat is the most secure network, period: As long as I don't see the means to keep that thing alive it is just another potential trap.
To be fair: charging for peer-to-peer software that is freely redistributable doesn't work as a business model. You make money in open source by selling related services (e.g. github, Android) or support (Red Hat). You can't do it by licensing the product.
That doesn't invalidate the point above though that in the modern world a tool like this can only be considered "secure" if the implementation(s) are completely open. It's just a poor product decision on the part of BitTorrent.
> You make money in open source by selling related services (e.g. github, Android) or support (Red Hat). You can't do it by licensing the product.
This is where the distinction between "free" (as in freedom) and "open source" is helpful.
You can, hypothetically, release the source code of a project under a license that prohibits compilation of that source code (or prohibits running anything other than the paid binary of the source code). This would allow people to view and theoretically vet the code; they just can't run it (legally) without paying for it.
Not that I would like to encourage such behavior, or think that it's valuable. But it's an important distinction to remember.
> release the source code of a project under a license that prohibits compilation of that source code
Such a license would qualify for neither "open source" nor "free software" under the relevant official definitions though.
Yes, it would be reviewable for bugs and probably preferable to a blob. But without the ability to verify the compilation you'd have no assurance that the proprietary binary was actually built from the reviewed source. Basically this would just be a stunt.
If the license said that you were in violation if you executed the built code, but there were instructions to build the exact version that is distributed, it would still allow people to verify that the binary was built from the provided source.
I recall Transgaming Wine had a model that was effectively this: it was difficult for a layman to build the source, and binaries couldn't be distributed freely, but the source was still available.
No, that's wrong. The term "open source" as commonly used has a formal definition (http://opensource.org/osd-annotated) and this violates the very first term.
That is "source visible", I guess. But please don't confuse terminology: it's neither "open source" nor "free software".
Sure, I guess. But if you don't trust your own system, it doesn't really matter whether you use their code or not. That said, "Reflections on Trusting Trust" is mentioned far too often, without people fully understanding the fact that it would be incredibly difficult, if not impossible, to pull something like that off.
If the UI module does encryption and decryption, and if the said encryption is good enough, why would you care if the transport layer steals your encrypted data?
The transport layer is running on the same computer at the same trust level as the encryption layer, which means it can intercept the unencrypted data. Even if the developer's 100% honest it's easy for them to accidentally create a remote code execution vulnerability that allows an attacker to do this.
Interestingly, you can. Limewire was open source, but charged for a "pro" version. (which was also open source, with the exception of some build files)
The biggest problem with all of it was that there were a bunch of scam sites that added malware, built binaries, and bought "lime wire" keywords on google.
On the other hand, I don't think OSS is to blame for that--the scam sites could have just as easily distributed any binary.
Only enterprise companies can survive on the support business model; it doesn't work for consumer or SMB because they just won't buy support contracts (I'm not counting scams).
BitTorrent was never designed with downloader privacy in mind (and not just because of the trackers, the DHT, PEX and the core protocol are all leaky as hell). Just because something is decentralised, it doesn't follow that privacy is somehow intrinsic.
Why do we have any reason to trust BitTorrent, Inc. over any other organisation? At best all these self-centered attempts are going to fragment the messaging market and make it even more unlikely that we'll see an open, federated chat protocol reach popular use.
There has been a lot of interesting development on the secure chat front lately (secure circle, textsecure, heml.is, cryptocat, etc).
Not sure if bittorrent chat will be very interesting. Most secure chat clients encrypt on the client side so the server can't read your messages, so I'm not sure not having a server is that big of a win here. I'm also guessing metadata would be exposed to various people on the bittorrent chat p2p net.
The one I'm most excited about right now is bitmessage. It is the only chat protocol that I feel is really revolutionary. It is also a p2p network, but the interesting thing about it is that everyone on the network gets every message (obviously you have to have the correct keys to decrypt the messages that were meant for you). So it's impossible for an observer to tell even who is talking to whom. Also they have the concept of public chans, which I think are a good mechanism to draw users. Bittorrent could do the same thing here.
In theory, bitmessage looks cool. But according to a review:

> Although it is very nice that people are working on creating secure and anonymous messaging systems, I am afraid that BitMessage is weak to a variety of attacks. I fear that the people working on it do not have sufficient expertise, in the fields of security and anonymity, to design and implement a proper cryptographic communications system + anonymity network. After reading the two design .pdf documents, I have identified a variety of weaknesses and overall poor design choices in the BitMessage protocol.
There was also someone who deanonymized some bitmessage users' disposable identities: users who copy-pasted a link (which the client prevented them from clicking) into a web browser that did not use any anonymizing method, where the link came from a random user they'd never communicated with and pointed to a website they'd never visited before.
I must say, I have very limited knowledge of encryption, but can't an observer possibly encrypt many possible and likely short messages (like "hey!" or "lol") with the public keys of some users of value and sniff the network for matches? I mean it would take a while, maybe a week, to get some results, but hey, I think it's a possibility.
> can't an observer possibly encrypt many possible and likely short messages (like, "hey!" or "lol") with the public keys of some users of value and sniff the network for matches?
No. The same message "never" encrypts to the same ciphertext:
It's a header with a version number and the ID of the receiver's key that the message was encrypted with. Base64-decode and hexdump those messages and look for 54483646 (one of the subkeys of F8669BB7). The encrypted message is after that and would look random.
The format is defined in http://tools.ietf.org/html/rfc4880
edit: It's not encrypted with the primary key, but one of the subkeys.
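The reason the dictionary attack above fails is that well-designed encryption is randomized: fresh randomness goes into every encryption, so the same plaintext under the same key yields a different ciphertext every time. A toy stdlib sketch (a SHA-256 counter-mode stream cipher with a random nonce; this is an illustration, NOT a vetted cipher and not what bitmessage actually uses):

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # SHA-256 in counter mode: expand (key, nonce) into n pseudorandom bytes.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    nonce = secrets.token_bytes(16)   # fresh randomness on EVERY call
    ks = keystream(key, nonce, len(msg))
    return nonce + bytes(a ^ b for a, b in zip(msg, ks))

def decrypt(key: bytes, ct: bytes) -> bytes:
    nonce, body = ct[:16], ct[16:]
    ks = keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))

key = secrets.token_bytes(32)
c1, c2 = encrypt(key, b"lol"), encrypt(key, b"lol")
print(c1 != c2, decrypt(key, c1) == b"lol")  # True True
```

Since an observer can never precompute what "lol" will encrypt to, sniffing for matches gets them nothing. Real public-key schemes (OAEP padding, hybrid encryption with a random session key) achieve the same property.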
Interesting, thanks for the overview. I'll have a poke around the doc :) I've been meaning to look into more about how these things work. I understand the very high level stuff and the very low level (how to use the tools roughly and some of the maths behind it all) but not so much in-between.
Cryptography does teach us that some methods are weak against such attacks, but since they are using asymmetric crypto, each of my messages would be encrypted with the RECIPIENT's public key; thus you really don't know WHICH messages are encrypted with the same key, and you can't apply such an attack.
PS: I think asymmetric crypto is secure from such attacks anyway; though isn't it way slower than symmetric crypto?
Bitmessage is awful for the user, and for security. Minutes to send a message due to the proof-of-work requirements, but a botnet can send as much spam as it wants. It's more an excuse to be associated with Bitcoin than to introduce any real security.
Correction: it's called Silent Circle, not secure circle.
I'm happy to see the surge of interest and new projects, but most of the offerings are between embarrassing and pathetic. Either the concept is being exploited for marketing purposes, the individuals involved just aren't appropriately skilled at what they are doing, or there are actual nefarious purposes. (I would agree, Bitmessage, and similar schemes could prove to be the best of the bunch.)
One could respond that this is just paranoia, that secure software doesn't really need to be open source, or that we should trust someone because they did something very good in their past. What the NSA leak showed us is that the paranoia is real.
Politics aside, and I've said this here before, this isn't just an issue of the NSA. For 99%+ of individuals, what the NSA is doing isn't going to damage them personally. However, those techniques damn well can. What the NSA is doing, other intelligence services are doing too. In some circumstances private companies are doing it as well. It doesn't matter if you aren't a terrorist, if you work on anything that could be very interesting or very profitable you are at a real risk of being targeted for electronic spying.
Standards need to be established:
a) If it's closed source, it cannot be audited and thus can be considered neither secure nor insecure.
b) If it forces automated updates, it can not be secure.
c) If it runs on a leaky platform (all mobile devices so far) it can not be secure.
That should tell us, in my opinion, that the number one goal of secure chat would be a secure mobile platform -- that includes both operating system and hardware. If you take a look at the fine print on Replicant, the fully free version of Android, you'll notice nearly every supported phone has major potential holes, save for one really ugly looking thing.
if everyone gets every message, people will store those. and at some point in the future it will be possible to decrypt them all. that makes it something I won't use.
A little disappointing that there's no Linux version and that clicking "Other Platforms + Betas" just takes you to the Windows download. Also it's closed source.
Just telling people this so they can save signing up if it's not for them.
I remember the old days when one person on a unix machine could chat with another person on a unix machine using the unix talk command (piped over ssh for security).
I had many a conversation with my thesis supervisor this way, once when I was in France and he was in Japan.
PS no third party server involved, obviously. Just my box and the recipient's box. I suppose one could do a man-in-the-middle attack, but we would always start the conversation with some pleasant banter anyway, so it's unlikely that a third party masquerading as one of us could last long before detection.
<sarcasm>Too bad this technology no longer works, it was so simple and useful</sarcasm>
Sounds similar to already existing Open-Source projects, like Tox[1] (which was discussed on HN a couple months ago[2].)
Questions for both projects still remain though: What sort of metadata can be collected from users of these programs? How can that metadata be used? Are there any security vulnerabilities that have been overlooked?
We are still a ways from having truly secure chat as the mainstream communication medium, but I'm glad BitTorrent and others are helping move it in the right direction.
The problem I have with Tox is that it's 90% hype and 10% product. They spent all their time working on a nice website and building hype. That "screenshot" on the front page is a mockup, the Tox project isn't even close to that degree of completion.
I'd love to see them succeed, but I doubt it will. Especially with the recent falling-out among their primary developers.
Development is actually going fairly well. It's just that there's a whole bunch of changes that haven't been pulled to the main repository, but there's definite progress with streaming media and other aspects.
Sure, they haven't polished the GUI to look like the mock-up yet. And they don't quite have video chat, though they are working on it. But what they do already have is what Bittorrent is promising to deliver "soon", and it is actually open-source!
I like that this and Tox are tackling this issue, but they seem to be missing a huge piece of the puzzle: the so-called metadata.
If you can hide who the messages are being sent to, you can protect yourself against them spying on who your friends are, which to me, is just as important. Also, if you don't know who the recipient of an encrypted block of text is, it makes it near-impossible to brute force the private key(s) of all encrypted text coming out of a single IP.
> they seem to be missing a huge piece of the puzzle: the so-called metadata.
I wonder if the broadcast approach would help there? Be constantly throwing out GPG encrypted data to the entire network, anyone with the private key can pick it up. No "to" or "from" headers, and traffic analysis is hard since the flow of traffic is constant:
I don't know how bad the bandwidth requirements would actually be. A few thousand bytes a second is an awful lot of text. Granted, you won't be able to do anything else like VOIP.
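A minimal sketch of that constant-rate idea (names and frame sizes are made up for illustration): every tick emits one fixed-size frame, carrying real data if any is queued and random padding otherwise, so an observer watching the wire sees the exact same traffic pattern whether or not you are talking.

```python
import queue
import secrets

FRAME_SIZE = 512  # every frame on the wire is exactly this long

def next_frame(outbox: "queue.Queue[bytes]") -> bytes:
    """Emit one fixed-size frame per tick: real data if queued, cover traffic if not."""
    try:
        payload = outbox.get_nowait()
    except queue.Empty:
        payload = b""                  # nothing to say: send pure padding
    assert len(payload) <= FRAME_SIZE - 2
    frame = len(payload).to_bytes(2, "big") + payload
    frame += secrets.token_bytes(FRAME_SIZE - len(frame))  # random padding
    return frame

outbox = queue.Queue()
outbox.put(b"hey!")
frames = [next_frame(outbox) for _ in range(3)]  # 1 real frame, 2 cover frames
print({len(f) for f in frames})  # every frame is the same size
```

In a real system the padding would be encrypted along with the payload so it is indistinguishable from ciphertext; the cost is exactly the constant bandwidth discussed above.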
I've been thinking of ways to combat this as well, and I admit it's an interesting problem. You either have to do some kind of Tor-like onion protocol (which has its own problems), or send every message to every client in the world. Sending your message to [your friend] + X random people would still allow an attacker to eventually gather a very detailed map of your friends by looking at which come up most often.
That's what I do, as I can't think of any alternative that is equally analysis-resistant
> A few thousand bytes a second is an awful lot of text
I was planning for ~500 bytes / sec so that traffic spikes wouldn't block up the send queue, but now that I think about it you're probably right -- even at 50 bytes/sec, the network speed cap would still be a fairly small factor compared to the amount of time spent typing...
Still, that's a far better outcome than exposing the content of your conversations! But if your design sacrifices performance in exchange for removing metadata, and nobody uses it as a result, it might as well not exist.
This seems pointless when we have things like Off-the-Record messaging, which for all intents and purposes solves the content and trust problem, and even includes a legal defence against encryption cracking (cracking a message gives you everything you would've needed to forge the message in the first place, meaning it can't be mathematically proven to have come from you - something GPG and X509 do not).
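The deniability trick OTR uses can be shown in a few stdlib lines. (The real protocol uses AES-CTR with SHA-1 HMACs and a key ratchet; this sketch uses SHA-256 and a made-up key purely to show the idea.) Messages are authenticated with a shared MAC key, which convinces your peer in the moment; but once the session ends and the MAC key is published, anyone can produce a valid tag for any message, so a logged transcript proves nothing about who wrote it:

```python
import hashlib
import hmac

# During the session, Alice authenticates her messages with a shared MAC key.
mac_key = b"per-session shared MAC key (hypothetical)"
msg = b"meet at noon"
tag = hmac.new(mac_key, msg, hashlib.sha256).digest()

# Bob verifies: right now, only someone holding mac_key could have made this.
assert hmac.compare_digest(tag, hmac.new(mac_key, msg, hashlib.sha256).digest())

# After the session, OTR publishes the MAC key. Now ANYONE can forge a
# valid-looking tag for any message, so a transcript can't prove authorship:
forged = b"something Alice never said"
forged_tag = hmac.new(mac_key, forged, hashlib.sha256).digest()
print(len(forged_tag))  # 32: a perfectly valid tag, made without Alice
```

This is the "cracking a message gives you everything needed to forge it" property: authentication is meaningful only inside the live conversation.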
Distributed chat systems are only advantageous because you get away from having centralized servers, but you still have a bootstrap problem to get everything up and running.
The only 'unique' thing that BitTorrent can provide, though, is execution, delivery, and a tendency to open source their work. They have shown they can deliver by implementing usable bittorrent sync clients for the 4 major OSs (Windows, OSX, iOS, Android). That usability alone would increase the amount of the internet using encrypted communications significantly, putting a major hindrance on dragnet surveillance.
Hell, we might find out that bittorrent chat uses libOTR once you're in an actual conversation, since they did all the hard crypto stuff already. They'll just be adding a P2P discovery layer, since that is what they are actually specialized in. That is what I would do if I were them.
There are no usable open source OTR or PGP clients for all 4 OSs still. ChatSecure for iOS still crashes a lot. Adium/Pidgin is pretty much the only usable OTR client I know of, and they are desktop clients.
What about RetroShare (retroshare.sourceforge.net)?

Another question: what's the point of a tool that none of your friends use, especially in social networking? They will not migrate until Facebook etc. shut down.
Retroshare has awful usability. I am a smart person (yeah yeah), a programmer and a hacker, and I ended up with two and a half accounts. Between two machines on the same local network it transferred files at 15 kilobytes/s. I had to find random posts online to figure out whether a blue nondescript icon meant I was sharing files with the world or with no one, or whether it was the green icon or something else. It looked so promising but turned out unusable.
AFAIK every successful IM platform out there is successful because of this effect. Users push their peers to adopt the new IM because without their friends it's not very useful to them.
Of course there should be a good reason for the "seed" users to get on the new platform in the first place.
the most difficult task in the whole world.
and they have to be excited... but where I live, nobody gives a sh.t what NSA does with all the private data.
Where is the detailed technical documentation? I would really like to read it. I've read all the documentation for the other projects; making false claims without any facts is way too easy.
A friend who was writing a bittorrent client in x86 asm was actually going to have something like this as one of its features. Hopefully this will motivate him to start working on it again since it was such a cool project...