I only feel safe using end-to-end encrypted chatrooms. Currently, niltalk can read every message. At the very least, AES-encrypting messages with a key derived from the chatroom's password would reduce reliance on SSL. But it really should use public-key crypto for a key exchange between users. This is what's done by other disposable chatrooms:
https://crypto.cat/
https://ephemeral.pw/chat/ (Also written in Go)
New keypairs would be generated on the client every time you join a chatroom. Another member of the chatroom sends you the shared key, encrypted with your public key. The server knows nothing and stores no keys; keys are exchanged only between users.
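Roughly, the exchange could look like this; a minimal sketch assuming Go's golang.org/x/crypto/nacl/box (the names and the relay step are illustrative, not how any of these sites actually implement it):

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// Each client generates a fresh keypair on join; only the public
	// halves ever cross the wire.
	newcomerPub, newcomerPriv, _ := box.GenerateKey(rand.Reader)
	memberPub, memberPriv, _ := box.GenerateKey(rand.Reader)

	// An existing member wraps the room's shared key for the newcomer.
	roomKey := []byte("the 32-byte shared room key.....")
	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		panic(err)
	}
	wrapped := box.Seal(nil, roomKey, &nonce, newcomerPub, memberPriv)

	// The newcomer unwraps it with their private key; the server only
	// ever relays public keys and this sealed blob.
	unwrapped, ok := box.Open(nil, wrapped, &nonce, memberPub, newcomerPriv)
	fmt.Println(ok, string(unwrapped))
}
```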
When you re-download the codebase on every use, there is no way to ensure the integrity of the code. This is why Cryptocat ships as a Chrome extension: it is downloaded once. Even with these issues, I'd take JavaScript crypto + open source over nothing (or just SSL).
> New keypairs would be generated on the client every time you join a chatroom. Another member of the chatroom sends you the shared key, encrypted with your public key. The server knows nothing and stores no keys; keys are exchanged only between users.
The question is - how does the first public key exchange happen? It has to be done outside of the site for it to be secure, and your private key must exist locally on your device - which is contradictory to the premise of these websites.
But all forms of exchange are potentially vulnerable; the point of using multiple channels for authentication is to increase the challenge-space for potential attackers. Indeed, the chief benefit of public-key encryption is that the public key can be exchanged over a multitude of channels, and a compromise of just some of them does not jeopardize the entire operation. Perhaps we need more authentication systems where this is built in, with trust based on the number of different mediums the key is transferred over (or the number of different third-party signers).
Good work. Though I have to say, I've seen so many of these web-based "secure, private, anonymous" chat services now that I've lost track.
What we need is end-to-end encryption with an open-source client that only has to be downloaded and built/installed once (and in such a way that it's verifiably secure; think reproducible builds).
Author here. This is meant to be something super simple and instantly accessible. Start and finish a conversation in mere seconds if need be with no traces.
I am sure downloadable clients with end-to-end encryption exist, but that's definitely outside the scope of something as simple as Niltalk.
Why does the client need to be built locally? Are you inherently suspicious of anything delivered over HTTPS?
I'm genuinely interested in why people feel local clients are more secure than something running in a browser. It's something I came across when writing an SSH client that runs in the browser (www.minaterm.com).
I guess it's the potential for an HTML page to be updated over time so that it no longer reflects an audited version. However, it seems that it's really a failing of our browsers that this is the case. Perhaps an external service that verifies the hash of a page would help? But this would need browser support, of course.
The only thing I could think of that could be implemented in current browsers is a small stub page which calculates and displays a hash of the HTML/JavaScript to be launched. The stub would need to be small enough that a user could manually check that nothing malicious has been added to it.
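The same check can also be done out of band today. A minimal sketch of such an external verifier (the URL is a placeholder, and this assumes you have a published, audited hash to compare against):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Fetch the page exactly as a browser would and hash what we got.
	resp, err := http.Get("https://example.com/app.js")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		panic(err)
	}
	// Compare this against the hash published for the audited version.
	fmt.Printf("sha256: %x\n", h.Sum(nil))
}
```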
"why does the client need to be built locally? Are you inherently suspicious of anything delivered over HTTPS?"
A good question.
In order to have end-to-end security, you need some sort of secret that is known only at the endpoints (possibly negotiated over some sort of key-exchange protocol), and it should be impossible for the server in the middle to obtain the secrets.
The core problem is that a webpage is really, really, really designed to be a representation of the server, sitting in a client sandbox. There is no built-in way for a web browser to inject anything into the connection that could be used for a secure connection in such a way that the server can't see it. All the local storage the page has access to, the server has access to. All the cookie data the page has access to, the server has access to. Anything else you can come up with that the page has access to, the server can either read or destructively set by sending down the correct HTTP or HTML. There's no independent client "context" that can be passively, safely used by the page, and in a world where the page is running JavaScript provided by the server, it's not even particularly clear what could be "used" by the page without the server also being able to "use" it, by having its JavaScript read it and send it back up.
There is, therefore, no way to use the web through a conventional browser to create an end-to-end connection that the server doesn't have full access to. Browsers just aren't designed for this use case.
Note that nothing stops you from providing an HTTPS REST interface that would allow full end-to-end encryption, used by a client that is capable of having local secrets and does not provide any way for the server to run code against it. It is specifically the browsers that make this impossible. I'd also observe this isn't necessarily fundamental; browsers could be changed to fix this, but I'm not sure it would be a good idea. Browsers are already insanely complicated security environments that just barely work on the best of days. I'm not sure I want to add "secure-from-the-server secret storage" to the list of things a browser is supposed to be able to do. (It is also possible certain extensions in the browser have already hacked together this ability, such as the video chat extensions - I haven't studied them to that detail - but AFAIK secure secret storage and key negotiation aren't generically and generally available.)
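To make that split concrete, a minimal sketch of such a native client, assuming golang.org/x/crypto/nacl/secretbox (the endpoint and the key handling are placeholders):

```go
package main

import (
	"bytes"
	"crypto/rand"
	"net/http"

	"golang.org/x/crypto/nacl/secretbox"
)

// send seals a message with a key that lives only in this process and
// POSTs the opaque blob; the server never holds anything it can decrypt.
func send(key *[32]byte, msg []byte) error {
	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		return err
	}
	// Conventional layout: nonce prepended to the ciphertext.
	sealed := secretbox.Seal(nonce[:], msg, &nonce, key)
	_, err := http.Post("https://example.com/rooms/42/messages",
		"application/octet-stream", bytes.NewReader(sealed))
	return err
}

func main() {
	var key [32]byte
	if _, err := rand.Read(key[:]); err != nil {
		panic(err)
	}
	if err := send(&key, []byte("hello")); err != nil {
		panic(err)
	}
}
```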
There are two kinds of users:
1) Those who don't really care about privacy. They might not want their chat on the front page of the papers, but they aren't going to go to great lengths to prevent that.
2) Those who actually care about privacy and are informed. There aren't many of these people, but they're trained to be wary of every outside dependency and opportunity for hostile code injection. Crypto running in the browser can be replaced any time you load it if the host is compromised - either in the technical sense or the legal sense. Yes, it could be hashed, but it isn't, and there's no mechanism for this nor plans to build one.
Not to mention that the browser itself presents a pretty large attack surface.
> That's kind of a shame. It would be nice if apps distributed over the web could be signed the same way they are from repositories.
This sounds like a theoretical impossibility. The server's source code is by nature closed, and while the server could provide you a copy of the source with a signature, there's really no way for you to verify that the code you've been promised is the code that is actually running.
A browser feature would be required that could calculate/display the hash of the delivered code and optionally verify it against a third-party server. Ideally you'd want particular versions signed as "audited", etc.
You're neglecting the server-side code. If you have access to the full source code to verify it, you're not describing a web service; you're describing a local application that happens to be implemented in a browser.
You already can distribute signed browser add-ons.
I was looking for something like minaterm the other day; the trouble is I'd be scared to put my credentials into it. And when I think about it logically, that isn't rational (PuTTY could grab my credentials just as easily), but still.
It's not entirely irrational. If PuTTY wants to grab your credentials, they have to ship a broken binary that, once downloaded, exists forever and can be examined and reverse-engineered in the wild. Someone running a web service (or someone who has compromised said service) can target a particular user for a single session, and the evidence that an attack occurred will only exist until a few caches get cleared.
Yes, and I would also be scared to. It's interesting thinking about why, though. I think there's a significant social/psychological component to the decision.
I'd also be less scared if it was running on my own server, but it's not clear to me that this is completely logical either.
If the code can't change, what's the point of having it be delivered through the browser each time? Aren't you better off saving the bandwidth by downloading it once?
I assumed it would. But I'm thinking socket.io would provide better client support, and I have been looking at Meteor as a way to get something built fast. Going with a prototype first, as I have a lot less time to do this stuff these days. I'm about equally proficient in Go and Node, which is to say I can stumble my way to a solution with both.
The number of bcrypt rounds is extremely low, too[1]. While the Go bcrypt lib will actually accept a cost of 5, that seems an unreasonably low value to me.
Coupled with absolutely no encryption of the messages in memory, I think "anonymous" would be a better term than "secure" for this.
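For context, the cost is a single parameter in Go's bcrypt library; a sketch (not niltalk's actual code):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// bcrypt.MinCost is 4 and bcrypt.DefaultCost is 10; a cost below
	// MinCost silently falls back to the default.
	hash, err := bcrypt.GenerateFromPassword([]byte("room password"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(hash))
}
```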
Your bcrypt complaint is pretty petty. They aren't storing the hash on disk at all, and the chat rooms are only temporary.
I do have privacy concerns about this and agree they can eavesdrop if they wish. Increasing the bcrypt rounds from 5 to 15 would in no way help with any of that.
It's in a redis database, so it's not all that hard to get at, either, should someone compromise the system. "Not on disk" stops being such a good defence when it's stored in a semi-persistent DB.
My main complaint, though, is that there's simply no reason to choose such a low number of rounds. Using the exact same code as in this app, 5 rounds takes 3551534 ns/op, 10 rounds takes 3583632 ns/op and 15 rounds takes 3623005 ns/op. In other words, it's only 2% slower to use 15 rounds than it is 5, and the default (10) is less than 1% slower.
If someone has an active compromise on a running machine, they can intercept network traffic and bypass bcrypt completely.
> In other words, it's only 2% slower to use 15 rounds than it is 5, and the default (10) is less than 1% slower.
So are you arguing that your complaint is petty or isn't? Because this isn't helping your case.
Overall, your attack scenario is one where an attacker has just enough access to the machine to read the redis database's memory, but not enough access to read the web server's memory, or to intercept the password before bcrypt has been run in the process.
If redis were stored to disk you might have a valid point. As it stands, your argument doesn't actually make sense: if they can access Redis they can access pre-bcrypt passwords, making bcrypt's rounds completely unimportant.
> If they can access Redis they can access pre-bcrypt passwords, making bcrypt's rounds completely unimportant.
No. The unhashed passwords are not stored in redis. What I think you're missing is that there's a significant difficulty gap between connecting to, and reading data from, redis compared to gaining root access and reading arbitrary memory on the server.
> So are you arguing that your complaint is petty or isn't? Because this isn't helping your case.
You make a good point - even if it's not the one you were trying to make - and it's that my benchmark was not particularly helpful, as it measured per operation, not per hash.
You missed the point I was really trying to make, though, which is that the difference between 5 rounds and 15 (your choice, not mine - I probably wouldn't choose 15) isn't that significant when you're doing legitimate stuff, like hashing chatroom passwords. It is significant if you're brute-forcing.
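Measured per hash, the comparison would look more like this (a sketch; it goes in a _test.go file, and note bcrypt's work factor scales as 2^cost, so each step up roughly doubles the time):

```go
package main

import (
	"testing"

	"golang.org/x/crypto/bcrypt"
)

// benchCost computes one full bcrypt hash per iteration, so ns/op
// reflects a single hash at the given cost.
func benchCost(b *testing.B, cost int) {
	for i := 0; i < b.N; i++ {
		bcrypt.GenerateFromPassword([]byte("hunter2"), cost)
	}
}

func BenchmarkBcryptCost5(b *testing.B)  { benchCost(b, 5) }
func BenchmarkBcryptCost10(b *testing.B) { benchCost(b, 10) }
func BenchmarkBcryptCost15(b *testing.B) { benchCost(b, 15) }
```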
Never claimed otherwise. They are stored in memory though. They're in the web-server process, and the process which actually conducts the bcrypt hashing.
> What I think you're missing is that there's a significant difficulty gap between connecting to, and reading data from, redis compared to gaining root access and reading arbitrary memory on the server.
You don't need to read arbitrary memory on the server, you only need to be in the same scope as the web app runs in.
> It is significant if you're brute-forcing.
If you're in a position to steal the bcrypt-ed passwords in this case, you're in a position to steal the plain text passwords (both in memory, both in the same scope, why waste time breaking bcrypt?).
If the author altered the code so it DID store to the file system for the medium to long term, sure, it might be worthwhile increasing bcrypt's rounds. In the meantime, bcrypt is almost pointless in this case, as plain text exists in the same execution scope and is accessible to processes with access to Redis.
How does one run this? I installed Go and Redis, then ran "go get github.com/goniltalk/niltalk", which installed it. That command created three directories under my $GOPATH, one of which has a 'niltalk' executable.
For someone who has never dabbled with Go, how do I run niltalk after all of the above is done?
As all clients need a password to enter a room, the messages could be encrypted with that password. There are a lot of JS libraries that could do this, e.g. Triplesec.
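The shape of it, sketched in Go for concreteness (a real client would do this in JS with such a library; the scrypt parameters and wire layout here are just illustrative):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/scrypt"
)

// encrypt derives an AES-256 key from the room password and seals the
// message with AES-GCM, so the server only ever relays opaque blobs.
func encrypt(password, plaintext []byte) ([]byte, error) {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		return nil, err
	}
	key, err := scrypt.Key(password, salt, 1<<15, 8, 1, 32)
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Wire format: salt || nonce || ciphertext.
	return gcm.Seal(append(salt, nonce...), nonce, plaintext, nil), nil
}

func main() {
	blob, _ := encrypt([]byte("room password"), []byte("hello"))
	fmt.Printf("%x\n", blob)
}
```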
This is an awesome service. Thanks for making it available to everyone. Can I ask what the use case is for this? I talk to my friends using FB messenger or Google chat and my customers using a chat widget on our site, so I'm curious when I would use this.
Thanks! Use cases could be: a quick private convo at the workplace, taking a discussion on a public forum private (like HN or Reddit), talking to strangers (e.g. Craigslist) without adding them to your FB or Google Talk, exchanging secrets with your friends without leaving logs on your Google Talk, etc. :)
The problem with taking a discussion from a public forum private is that, in the current state, everyone can choose to dispose of the room.
As proven in this very thread, it doesn't really work. The idea that everyone can dispose of the room is interesting, but there probably should be an option so that only the creator (or the first to join) can dispose of the room, for such public-forum cases.
It's actually meant for small groups of people to have private conversations, and it's not really ideal for taking a huge public discussion private. The idea of marking a peer as the creator or making the first peer an owner complicates the whole privacy and security aspect.
To make it even more instant (in terms of UX), I would display the message immediately so you don't get the little delay. From where I'm at, it's about 250 milliseconds from the point I hit ENTER to when I see the text displayed.
First off... this is great!
I wonder if you could make it so when you create a room, you can attach a message.
So for instance, I could generate a password, encrypt it with my partner's public key, then paste that in the message box, so theoretically only they could get access to the channel.
And, building on that: create rooms that are meant for someone, so their public key is the index and their private key decrypts the message to get the password to the channel.
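Roughly, that attachment could work like this; a minimal sketch assuming golang.org/x/crypto/nacl/box's anonymous sealed boxes (names illustrative):

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	partnerPub, partnerPriv, _ := box.GenerateKey(rand.Reader)

	// Room creator seals the room password to the partner's public key
	// and attaches the resulting blob to the room.
	sealed, err := box.SealAnonymous(nil, []byte("room password"), partnerPub, rand.Reader)
	if err != nil {
		panic(err)
	}

	// Only the holder of the matching private key can recover it.
	password, ok := box.OpenAnonymous(nil, sealed, partnerPub, partnerPriv)
	fmt.Println(ok, string(password))
}
```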
Disposing of the room does remove it and kick everyone out, indeed. And then the link is invalid. Neat. According to the privacy page, it all lives in RAM only, so in theory there is no logging (https://niltalk.com/pages/privacy). Guess we can check the code and see for ourselves, of course.
It doesn't need to be trust-based, and in fact shouldn't be trust-based, because even if I trust you, I also have to trust the people who could coerce or bypass you, or people who could maliciously access/modify your systems.
This is why end-to-end encryption is really the only way to make promises as a server about not reading / storing logs.
The dispose button is far too inviting where it is. I started to click it thinking it was the submit button, until I read the text. Perhaps put it in the top right or somewhere other than right beside the input box. Also, it's kind of weird that there is no admin for the forum, so any participant can delete it, I assume?
Yes, that's by design. These rooms are meant to be completely ephemeral and private amongst small groups of peers. It's critical to ensure that anyone who is connected is able to quickly dispose of the room for security / privacy reasons. Once a room is created, there is no "admin" or "host" per se, just a short lived private space.
> It's critical to ensure that anyone who is connected is able to quickly dispose of the room for security / privacy reasons.
And if you're not a small number of trusted participants (e.g. anonymous participants, not all of whom you trust, or enough people that one might make a mistake or delete before everyone is ready), that's not going to work. See the example forums being created and deleted above. They could easily allow two passwords at setup to avoid this: one for admins, one for posters.
On the button placement: it really would be better elsewhere. It is not related to the text-submission entry, so it belongs at the top somewhere, along with the sound toggle, which again is a forum-level setting.
Deciding on a pattern for writing HTTP APIs in Go was a bit of a chore. Ended up using the `pat` library for chaining middleware. Quite extensible and lightweight. Also, using context to pass objects through the request chain is a neat trick.
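The general shape of that pattern, with plain net/http for illustration (the key type and handler here are hypothetical, not niltalk's code):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

type ctxKey string

// withRoom is middleware: it resolves the room once and stashes it in
// the request context for everything further down the chain.
func withRoom(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), ctxKey("room"), "lobby")
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Downstream handlers pull the object back out of the context.
	room := r.Context().Value(ctxKey("room")).(string)
	fmt.Fprintf(w, "room: %s\n", room)
}

func main() {
	http.Handle("/", withRoom(http.HandlerFunc(handler)))
	http.ListenAndServe(":8080", nil)
}
```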
Why was this comment downvoted? The NSA has built custom hardware to crack 1024-bit DH in a few days[1], so the site owner really should regenerate the DH parameters and use 2048 bits.
It would also be nice to disable 3DES ciphers and only allow ciphers with forward secrecy.
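For reference, regenerating the parameters is a one-liner with standard OpenSSL:

```
openssl dhparam -out dhparams.pem 2048
```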