
From the white paper, it appears as if this system requires its users to trust the server. That's not end-to-end encryption. What do I have wrong here?

Further, it looks like the email encryption provided by this system only works between users of Skiff. At that point, why use email at all? Why not use a real secure messenger? Instead of building an "encrypted email service", you could literally just build an email-flavored frontend to Matrix; either way, you're proxying to SMTP, not speaking it directly.




Founding engineer at Skiff here.

>From the white paper, it appears as if this system requires its users to trust the server. That's not end-to-end encryption. What do I have wrong here?

It doesn't. All data is encrypted client-side across all apps - Skiff Mail, Drive, Pages, and Calendar. For sending external email, the whitepaper is very clear in section 8.2 about how this case is handled as securely as possible without PGP in place (though PGP support is something we are looking at based on community feedback).

>Further, it looks like the email encryption provided by this system only works between users of Skiff. At that point, why use email at all? Why not use a real secure messenger? Instead of building an "encrypted email service", you could literally just build an email-flavored frontend to Matrix; either way, you're proxying to SMTP, not speaking it directly.

Lots of folks are sick of having their data sold based on their email. Even corporations are sick of handing ever more information about their customers to Google - notoriously, Amazon stopped sending detailed purchase receipts via email for this reason.

So even if it's not end-to-end encrypted, we do encrypt emails with the recipient's keys, ensuring that only the recipient can access this data. This is a strong privacy guarantee backed not just by a flimsy privacy policy but by actual cryptography.


Section 8.2 seems to talk about how you send plaintext email via SMTP to users who aren't using Skiff.

But that's not what I'm talking about with respect to end-to-end encryption. The white paper refers repeatedly to "browser" users. Your server can feed arbitrary Javascript to browsers and subvert encryption in a variety of ways, can't it?

I'm still not clear why you designed a new, simplistic cryptosystem at all here; can't you do everything you're trying to do here on top of Matrix? Again: the cryptography promises you're making only work between users of your system.


> Your server can feed arbitrary Javascript to browsers and subvert encryption in a variety of ways, can't it?

That's how literally any website works. How do you encrypt in the browser if the server doesn't send JavaScript to encrypt data? You also trust Signal not to issue an update that sends data in plaintext over the network. Unless you're building an app from source, you implicitly trust the developer to some extent.


Signal doesn’t have a web client. Most of the stores it is distributed through have fairly strong resistance to compel orders.


Yep, but Signal is still a potential adversary, and could roll out a backdoor.

A couple of things that are easier in a web-delivered tool are delivering a backdoor to a single user or group of users (which Skiff can track), or delivering a backdoor over a particular window of time across many users to decrease the chance of detection.

I know Skiff uses IPFS in some parts of their solution, and there's something they could do with that for the first case -- essentially making visiting a particular version of the code part of how it is accessed -- but there are some real UI challenges, which maybe they're looking into (it's been a while since I checked them out; they have some great UX in other parts of their suite).

The other tactic I've seen is to bundle the page into a browser extension, which moves you closer to Signal's status.


Signal's servers can't backdoor the Signal Client. From what I understand of Skiff, Skiff's servers are the client.


Not directly. They would have to roll out an update with the backdoor to the App Store. But as a user I’d be none the wiser.

I wish there were some way on iOS to prove that a particular version of an app was built from a certain git hash. That way these sorts of attacks would be easier to detect.


That would be ideal. F-Droid on Android can do this: https://f-droid.org/en/docs/Reproducible_Builds/

However, there is still one advantage to even the app store model: they have to push the backdoored version to everyone (or a large set of users). So that drastically increases the risk of being caught. A website under their control can backdoor one specific user, or even just one session, making detection harder.


> So that drastically increases the risk of being caught.

Only if someone out there is extracting, decompiling, and auditing each version of the Signal iOS app in the App Store. But I doubt anyone is doing this. If a backdoor is ever snuck into the Signal iOS app for a few users for a few weeks, I highly doubt anybody would notice.


xkcd://386

Of course someone is doing this. I’m not sure they are the kind to tell it to the world, though.


Not just someONE - a whole bunch of researchers and automated scans.


You can change remote configuration flags post-release (e.g. to enable diagnostics).

Even "secure" software like Google Chrome can capture your whole browsing history if they suddenly decide to enable a flag for your IP address. No need for a conspiracy or an update, though Chrome is considered perfectly secure.

In Android you can also distribute updates to specific e-mail addresses, which is very convenient.


> In Android you can also distribute updates to specific e-mail addresses, which is very convenient.

Yes, but this requires the user to opt in; you can't do it silently:

> After clicking the opt-in link, your testers will get an explanation of what it means to be a tester and a link to opt in. Each tester needs to opt in using the link.

Source: https://support.google.com/googleplay/android-developer/answ...

As for the Chrome thing, I'm a Firefox user, but I would be surprised if it shipped with the option to remotely upload your whole history without the user's knowledge or consent. Do you have a source to back that up?


You can disable updates. As long as the servers (and the OS) don’t break compatibility, you can continue to use the same old likely-non-backdoored version.


At least when downloading from the Play store, Google requires the application be updated every 90 days or so. I've run into this multiple times where the application ceases to function after the 90 day window until I go update.


Not for Signal. I was periodically prompted to update with the warning that I would lose access to signal if I didn't. Then I upgraded and they robbed me of SMS features. I would have been happy to remain on a previous version.


Group invite links in Signal work similarly to Skiff: they reveal a key to the client-side JS so it can either present users with a Signal download link or open a URI, depending on whether they already have Signal. (Signal chooses not to use a URI directly in this case because it would break for anyone without Signal.)

So if a Signal group is using group invite links they have the same problem Skiff does.

What I would like to exist is something like Subresource Integrity [1] but for a URL itself, so that you could include a hash in a URL and let the browser warn you if the page source doesn't match the hash.

1. https://developer.mozilla.org/en-US/docs/Web/Security/Subres...
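
As a sketch of how that could work (the "#sri=" fragment convention here is hypothetical, not an existing standard): the fragment never leaves the browser, so a verifier could fetch the page and compare digests, reusing the SHA-384 construction SRI already uses.

  // Hypothetical "#sri=sha384-<base64>" check; sketch only.
  async function verifyPageAgainstUrlHash(url: string): Promise<boolean> {
    const match = new URL(url).hash.match(/^#sri=sha384-(.+)$/);
    if (!match) return false;
    const body = await (await fetch(url, { cache: "no-store" })).arrayBuffer();
    // Same digest SRI uses for <script integrity="sha384-...">.
    const digest = new Uint8Array(await crypto.subtle.digest("SHA-384", body));
    let binary = "";
    for (const byte of digest) binary += String.fromCharCode(byte);
    return btoa(binary) === match[1];
  }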


I built something that solves this problem. It uses an open source program installed on the client to encrypt and store data in a way that doesn't allow the website to access the plaintext data. https://redact.ws


> How do you encrypt in the browser if the server doesn't send JavaScript to encrypt data?

Meta has done some work on this, along with Cloudflare, specifically for WhatsApp Web (the Code Verify browser extension). In general, JS crypto is always going to be suspect if the threat model involves distrusting the server (as in E2EE protocols like Signal).


JS crypto has been able to call into native browser crypto functions for the last decade or so. https://developer.mozilla.org/en-US/docs/Web/API/Crypto
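
For reference, a minimal sketch of calling the native Web Crypto API directly (AES-GCM chosen purely for illustration):

  // Encrypt with a fresh AES-GCM key using the browser's native
  // implementation - no JS-implemented primitives involved.
  async function encryptDemo(plaintext: string) {
    const key = await crypto.subtle.generateKey(
      { name: "AES-GCM", length: 256 },
      false, // not extractable
      ["encrypt", "decrypt"],
    );
    const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit nonce
    const ciphertext = await crypto.subtle.encrypt(
      { name: "AES-GCM", iv },
      key,
      new TextEncoder().encode(plaintext),
    );
    return { key, iv, ciphertext };
  }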


What difference does this make to the threat model in the previous comment?


Before this, basic cryptography was downloaded via JS files, which yields no security and gave web cryptography a bad reputation. That is not true now.


Huh? It is very concerning to hear this from a founder. It is the exact same level of security, as it is executing the same code, just at a different level. It really does not matter what crypto lib you are using if, at the end of the day, the surrounding code dictates all the security.

Regardless of all this, your open-source crypto library doesn't even use the Web Crypto APIs at all, but rather the dreaded JS-based crypto you are badmouthing (tweetnacl, stablelib).


That's just false. Downloading crypto libraries over the web plagued JavaScript crypto for years. We use tweetnacl, stablelib, and WebCrypto - and tweetnacl also uses WebCrypto!


From your source, chacha20 is used, which is literally not supported by the Web Crypto APIs. Again, this makes zero difference, as the higher-level implementation makes or breaks your crypto.


There's literally nothing stopping the page from simply not encrypting the data.


It would be far more illogical to warp Matrix into an end-to-end encrypted calendar, note-taking product, file storage product, and email platform.


Why? You'd use Matrix to distribute keys for encrypted blobs. That's how systems like these work anyways.
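
The pattern being gestured at, in a generic sketch (Web Crypto here; the "send the key over an E2EE channel" step is a placeholder for whatever transport, e.g. a Matrix room, not any particular product's code):

  // Generic "encrypted blob + key distributed over an E2EE channel".
  async function shareBlob(blob: Uint8Array) {
    const key = await crypto.subtle.generateKey(
      { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]);
    const iv = crypto.getRandomValues(new Uint8Array(12));
    const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, blob);
    // Upload `ciphertext` anywhere untrusted; send the exported key + iv
    // to recipients over the E2EE channel (e.g. a Matrix room).
    const jwk = await crypto.subtle.exportKey("jwk", key);
    return { ciphertext, jwk, iv };
  }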


What do you think of Lavabit? I think they operated in the same way, but the US government forced them out of business for refusing to hand over their TLS keys to allow the US to spy on Snowden.

https://en.wikipedia.org/wiki/Lavabit


See below - Lavabit is not a good comparison, as it was not end-to-end encrypted. Also read https://arstechnica.com/information-technology/2013/11/op-ed...


Thanks for the information.

One similarity between Lavabit and Skiff is that tptacek calls them both non-end-to-end encrypted.


It's very simple. One of them had access to users' private keys (Lavabit).

One never has access to user private keys (Skiff).


I don't understand how you don't have exactly the same access they did. I feel like I've invested a fair bit of time to understanding how this stuff works, and the story you're telling doesn't make sense. What am I missing?


What amilich said is correct. However, what he is leaving out is that both have access to unencrypted email at send and receive time, so you are taking Skiff's word that they don't log emails - since you have to trust the server, this is not E2EE.


Lavabit is the one that used user passwords to encrypt the messages, thus ensuring that they had access to all the necessary secrets to decrypt user messages any time the user was viewing them?

And that had complied previously with US government subpoenas to provide metadata and data for users?


>And that had complied previously with US government subpoenas to provide metadata and data for users?

Interesting. Link? Are you talking about this article? https://www.forbes.com/sites/kashmirhill/2013/08/09/lavabits...

It seems to be talking about metadata, not data.


+1


Isn't that what Skiff is doing too? It seems like it's just the Lavabit design with some 2010 cryptography layered on top.


No. Lavabit had fundamental flaws where passwords were sent to the server, so anyone who could decrypt the HTTPS traffic could basically access the content [1].

Skiff's password mechanism actually solves this flaw cryptographically using known, established primitives.

We use argon2id to turn the password into two cryptographic keys. One key is used in an SRP scheme to prove you have the password, in a signing flow that bootstraps session management. The other key is the actual data encryption key. These keys are never sent to Skiff's servers, and this generation happens entirely in the browser.

Lavabit really failed fundamentally at actual end-to-end encryption because the password was sent to the server.

[1] https://arstechnica.com/information-technology/2013/11/op-ed....
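
A sketch of that kind of split derivation; the library (@noble/hashes) and the cost parameters here are illustrative assumptions, not necessarily Skiff's actual stack:

  import { argon2id } from "@noble/hashes/argon2";
  import { hkdf } from "@noble/hashes/hkdf";
  import { sha256 } from "@noble/hashes/sha256";

  // Derive two independent keys from one password, entirely client-side.
  function deriveKeys(password: string, salt: Uint8Array) {
    const master = argon2id(password, salt, { t: 3, m: 65536, p: 1, dkLen: 32 });
    // Domain separation: one subkey for the SRP login proof, one for
    // data encryption. Neither is ever sent to the server.
    const loginKey = hkdf(sha256, master, undefined, "login", 32);
    const dataKey = hkdf(sha256, master, undefined, "data-encryption", 32);
    return { loginKey, dataKey };
  }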


That's all moot, though, since your server has access to plaintext emails when sending and receiving (99.99999% of email addresses would be outside Skiff), which completely subverts the whole point of encryption. A vulnerability on the server could leak all user emails, without needing their keys. This solution is theoretically only as strong as encryption at rest.


It really is not backed by cryptographic security at all, since your server has access to plaintext emails when sending and receiving (99.99999% of email addresses would be outside Skiff), which completely subverts the whole point of encryption. A vulnerability on the server could leak all user emails, without needing their keys. This solution is theoretically only as strong as encryption at rest.


Bro, did we read the same comment? The encryption is handled client-side; the ciphertext is the only thing the server sees.


The encryption happens after the server receives the plaintext email and passes it to the client...


No, this is done with public-key encryption which does not require the client.


Again, only between users with Skiff emails, which is a negligible portion of all emails. This needs to be made clear, as it is very misleading to the average user, who probably thinks all emails are end-to-end encrypted.


No: Skiff does not have access to a single email stored on our platform, including ones received externally. All are public-key encrypted, including subjects and content.
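
For concreteness, a sketch of what encrypt-on-receipt with a recipient's public key can look like, using tweetnacl (mentioned elsewhere in this thread); the ephemeral-key nacl.box construction is an assumption, not Skiff's documented scheme:

  import nacl from "tweetnacl";

  // Seal an inbound email to the recipient's public key so that only
  // the recipient's secret key can open it later. Note that the server
  // still handles `plaintext` briefly before sealing it.
  function sealForRecipient(plaintext: Uint8Array, recipientPublicKey: Uint8Array) {
    const ephemeral = nacl.box.keyPair(); // one-time sender keypair
    const nonce = nacl.randomBytes(nacl.box.nonceLength); // 24 bytes
    const ciphertext = nacl.box(plaintext, nonce, recipientPublicKey, ephemeral.secretKey);
    // Store ciphertext + nonce + ephemeral public key; discard the
    // ephemeral secret key and the plaintext.
    return { ciphertext, nonce, ephemeralPublicKey: ephemeral.publicKey };
  }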


This is encryption at rest with the user holding the keys, not end-to-end encryption, since the server receives emails coming from outside Skiff in unencrypted form to begin with.

As a simple demonstration: even if the client-side code is perfectly secure, an adversary with server control can simply log all emails passing through the server and instantly gain access to all new user emails that way. This means users have to trust the server, contradicting any notion of E2EE.


Worth noting that Google does not do what you're describing. Google has never literally "sold" data from Gmail and stopped using it for their own ads ~6 years ago.



This is about displaying ads in the Gmail UI, not reading email content.


Those ads are still targeted at you, just maybe not off of your email content (now, versus 2017).


So Gmail was making money off of personal emails as recently as 2017. Why trust them? There is no good reason they shouldn't E2EE that data.


Except there is a reason. Encrypting email has very little to no benefit: since it is transmitted in plaintext and usually stored in plaintext on the recipient's side, your emails almost always exist in unencrypted form anyway. On top of that, it has major usability drawbacks; for example, you can't ask the server to search emails for you anymore - all emails have to be downloaded on all your devices to be able to search - which is what Skiff does. It will be okay at the start, and progressively get slower and use more space on your drive the more you use it.


> Except there is a reason. Encrypting email has very little to no benefit: since it is transmitted in plaintext and usually stored in plaintext on the recipient's side, your emails almost always exist in unencrypted form anyway. On top of that, it has major usability drawbacks; for example, you can't ask the server to search emails for you anymore - all emails have to be downloaded on all your devices to be able to search - which is what Skiff does. It will be okay at the start, and progressively get slower and use more space on your drive the more you use it.

These are just old technical problems which are already solved, for example by Tutanota. Mails are not sent in plaintext if they are encrypted, and the mailbox can be encrypted too. A plaintext version of your data exists only in the memory of your computer while your mail client's session is open.

As for search - it does not matter anymore. We have enough computational power and storage these days to hold emails on devices. The user experience is about the same. Increasingly, it is just an optimisation problem which can be done right. Just don't use Electron for your email app.
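
As a toy illustration of why on-device search is tractable once mail is local, a minimal inverted index over already-decrypted messages (illustrative only, not any provider's actual implementation):

  // Toy client-side inverted index over decrypted emails.
  type EmailId = string;

  class MailIndex {
    private index = new Map<string, Set<EmailId>>();

    add(id: EmailId, decryptedBody: string): void {
      for (const token of decryptedBody.toLowerCase().split(/\W+/)) {
        if (!token) continue;
        if (!this.index.has(token)) this.index.set(token, new Set());
        this.index.get(token)!.add(id);
      }
    }

    search(term: string): EmailId[] {
      return [...(this.index.get(term.toLowerCase()) ?? [])];
    }
  }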


Unfortunately, not even close. When the server gets the email, it is not encrypted (unless the sender has a Skiff address too, which is a very tiny portion). And when you send an email to anyone outside Skiff, it is the same problem: the email has to be unencrypted so the server can send it in a form readable by the recipient. Without anything like PGP, the server does not know the recipient's public key, so it is impossible to encrypt it.


We might be talking about different encryption, since that does not sound like E2EE at all. The point of the encryption is that the server never sees the content.

But it is true that if you use Skiff to send a message to someone who is not using Skiff, the message is unencrypted, because the receiver has no means to decrypt it.

That is a standardisation issue. Apparently PGP is not considered good enough.

But if we had standards, we have the technology to provide E2EE email.


Totally agree... it is just disappointing that services like Skiff advertise total E2EE to unsuspecting users with no mention of this in their marketing, luring users into a false sense of security.


European-based Tutanota offers the possibility of sending E2EE mail to non-Tutanota users. It works by setting a password for the content, which you need to deliver by other means. Usability suffers in this case, but at least the possibility exists.
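
The general shape of that password-based approach, sketched with the Web Crypto API (parameters illustrative; not Tutanota's actual implementation):

  // Encrypt a message under a shared password; the recipient gets a
  // link to the ciphertext and enters the password out-of-band.
  async function encryptForExternal(message: string, password: string) {
    const enc = new TextEncoder();
    const salt = crypto.getRandomValues(new Uint8Array(16));
    const baseKey = await crypto.subtle.importKey(
      "raw", enc.encode(password), "PBKDF2", false, ["deriveKey"]);
    const key = await crypto.subtle.deriveKey(
      { name: "PBKDF2", salt, iterations: 600000, hash: "SHA-256" },
      baseKey, { name: "AES-GCM", length: 256 }, false, ["encrypt"]);
    const iv = crypto.getRandomValues(new Uint8Array(12));
    const ciphertext = await crypto.subtle.encrypt(
      { name: "AES-GCM", iv }, key, enc.encode(message));
    return { salt, iv, ciphertext }; // hosted for the recipient to fetch
  }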


Skiff encrypts all received emails with user public keys immediately on receipt. This is quite clear in our security model page and whitepaper. Skiff does not have access to any user emails, including external received ones.


Unfortunately, this does not matter, since the trust model is the same as if it were not encrypted at all. We still need to trust the third party.

Somehow the infrastructure should be transparent, so that an outsider can verify at any time that you don't collect logs from that traffic and have no other means to inspect it even if you wanted to.

There are currently no other means than just to use E2E encryption.

There is also another way that gets almost there, but it would mean open-sourcing your whole infrastructure and using reproducible builds. Somehow outsiders would need a way to verify that you indeed run your infrastructure as described in your source code. But this is very complicated, and also changeable at any time, unlike E2EE.


We use an open-source mailserver (Haraka), but security audits are the most trustworthy way to do this. We've had 4: skiff.com/transparency. Audits cover infrastructure.


You can't audit a non-E2EE design into E2EE security!


I have 2.5 million emails in a 32GB datastore. No mail provider is going to allow me to store that much mail, and search is actually quite fast. If it isn't for you, then get a better mail client.


Google's cheapest paid plan ($6/month) gives you 30GB. The $12/month plan is 2TB of storage. I currently have over 30GB of email in Gmail and everything works fine.


I have around 17GB of pop3fetchallnokeep in the mail folder and the backup is somewhere in the btreeflatfile heap in that UtahDataCenter free of charge training darkgpt 5.8


> That's not end-to-end encryption. What do I have wrong here?

Apparently, email may not be their main E2EE use case. The CEO of Skiff wrote this on the PrivacyGuides forums:

  Our solution for external sharing was not intended for email. It is much more powerful to share E2EE real-time collaborative docs/files with subpages, embedded E2EE files, and so much more.

Curiously, in the same thread, there's a mention of Trail of Bits auditing their codebase twice.

https://discuss.privacyguides.net/t/skiff-mail-email-provide...


See https://skiff.com/transparency: Trail of Bits has performed 2 audits, Cure53 1 audit, and we had an additional audit 2.5 years ago.


You haven't published the reports, scope, and full findings. We don't even know what Trail was testing. I don't think the security audit stuff matters at all, and Trail is a fine firm, but you can't use the mere existence of a pentest project this way.


Any security engineer would have a heart attack if any employee, friend, or colleague said "security audit stuff [doesn't] matter." I wouldn't use software that doesn't undergo security audits.

Also, pentest ≠ audit. Completely different!


I am a security engineer. You can go reach out to whoever managed your assessment at Trail and ask them about me by name if you like. What you're saying doesn't make sense. Maybe you could make it make sense! But you'd need to start by disclosing what the actual project scopes for each of these projects was.


It is misleading to talk about audits in this context.

Your transparency statement clearly says security audits. That is different from privacy audits. You cannot audit privacy, since you can intentionally change the functionality of your software right after the audit.

For the same reason, you cannot share an open-source version of your software and say that it respects privacy. That can only be said if you use reproducible builds, and for client software only.

Both security audits and sharing your software openly are about security, not privacy. Open-source software and security audits help reduce unintentional issues. And in this context, that means a lot.


Actually, that's completely false. Security audits are a standard, reputable process for software. Trail of Bits is probably the best (or one of very few top) firms in this category. Check out: https://github.com/trailofbits


Is Trail of Bits doing random checks on your running infrastructure to verify that you are not changing your software against your users?

No. That is not what security audits are. Security audits ensure that the software safely does what you, as the party ordering the audit, claim it does, at a single moment in time, usually against a checklist.

But they cannot guarantee that you don't change the software between audits.

That is why E2EE exists: with it, none of this matters, and we don't need to trust you.

An open-source, security-audited client for E2EE communication, with reproducible builds, is the magical, correct combination to ensure both security and privacy.


That's why Skiff has had 4 security audits, not just one 3 years ago. And with multiple of the best auditors.


What exactly got tested in each of these assessments, and what conclusions did those assessments draw? I asked this upthread and I'm asking here again, because "we've had 4 audits" doesn't mean anything without that detail.



