
I'm curious -- Mozilla says it can't decrypt the file on their side:

    Mozilla does not have the ability to access the content of your encrypted file [...] 
    https://testpilot.firefox.com/experiments/send
How is the receiver able to decrypt the file -- i.e. what is the decryption key if not the URL slug, which presumably Mozilla has as well?



The key is the hash, which isn't sent over the wire when loading a page. Now granted it's accessible via location.hash in the client, but one has to trust Mozilla not to do that.
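To make that concrete, here is a small sketch using the standard URL API (the link shape matches the Send URLs discussed below in the thread; "xxxxx" and "yyyyy" are placeholders) of what the server sees versus what page JavaScript can read:

```javascript
// A Send-style link: the path identifies the file, the fragment carries the key.
const link = new URL("https://send.firefox.com/download/xxxxx/#yyyyy");

// The HTTP request only ever contains the path -- the browser strips
// the fragment before the request goes out over the wire.
console.log(link.pathname); // "/download/xxxxx/"

// But any script running on the page can read it, just like location.hash.
console.log(link.hash);     // "#yyyyy"
```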


> one has to trust Mozilla not to do that.

Exactly. One has to trust Mozilla every time one visits the page. They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?

Web-page-based crypto is fundamentally insecure, and Mozilla is committing an extremely grave error in encouraging users to trust it (as they also do with their Firefox Accounts). Security is important, and snake-oil solutions are worse than worthless.


Send is meant to be an improvement on Dropbox & co for a specific use case.

Is it perfect? No, it isn't. But it is still a considerable improvement.

If you have a better solution in mind for the average user crowd, feel free to suggest it, of course.


Spec out and implement resource pinning, already. Like RFC 8246, but authored more with the user's interests in mind, rather than the service's.

As a show of nothing-up-the-sleeve, a service asserts that it's in a stable state and will continue to serve exact copies of the resources as they exist now—that they will not change out from beneath the client in subsequent requests. When a user chooses to use resource pinning, the browser checks that this is true, and if it finds that a new deployment has occurred, it will refuse to accept it without deliberate consent and action from the user (something on par with the invalid-cert screen).

This means that for a subset of services (those whose business logic can run primarily on the client side), the users need not trust the server, they need only to trust the app, which can be audited.

When deploying updated resources, services SHOULD make the update well-advertised and accompany it with some notice out of band (such as a post about the new "release", its changelog, and a link to the repo), so the new deployment may be audited.

When new deployments occur, clients SHOULD allow the user to opt for continuing to use the pinned resources, and services SHOULD be implemented in such a way that this is handled gracefully. This gives the user continuity of access while the user (or their org) carries out the audit process.

Areas where this would be useful:

- HIPAA compliance

- Web crypto that isn't fundamentally broken

- Stronger guarantees for non-local resources used in Web Extensions—in fact, the entire extension update mechanism could probably be more or less rebased on top of the resource pinning policy


This sounds a lot like Beaker [1], a browser based on Dat [2]. It allows creation of shared resources (web pages) with immutable version tracking, among other things.

1: https://beakerbrowser.com/ 2: https://datproject.org/


This would also have the added bonus that you could reload such pinned resources from anywhere once you have the pin, even without a TLS setup or having to trust certificate chains.

Caching proxies would suddenly become viable again, because only the first download has to go through HTTPS, while "I don't have this in the local cache anymore, can you serve me this content?" requests could go through side channels or outside the TLS handshake or something like that. Caches could even perform downgrade non-attacks.


How many pins would you expect a browser instance to have? I feel like most of the time the pinned content could fit in the browser cache and make this variety of proxy-side caching pointless.


Immutable content is a prerequisite for pins. The caching benefits mostly fall out of the immutability, not the pinning. So as long as the hypothetical standard would allow one to be used without the other additional uses could fall out of that stack.


My point is, those particular benefits only exist in a narrow circumstance where the browser is half-caching.


> But it is still a considerable improvement.

When it's secure, it's an improvement; if Mozilla, a Mozilla employee or a government which can compel Mozilla employees chooses to make it insecure, then it's worse than useless. At least with something like Dropbox, users (should) know that they are insecure and should not transmit sensitive files.

> If you have a better solution in mind for the average user crowd, feel free to suggest it, of course.

The functionality should be built into Firefox, so that users can verify source code & protocols once and know that they are secure thereafter.


And re-check them after every update?

And trust that Mozilla won't randomly distribute a backdoor to 1/n of users?

The means you're suggesting aren't possible to implement for most people today. If you care about real-world impact I would recommend thinking of other strategies.


Reproducible builds effectively solve this problem, by making it possible to actually verify that you got the same binary (from the same source) as everyone else. Not to mention that if you're on Linux, distributions build Firefox themselves and so attacks against users need to be far larger in scale for 1/n attacks.


They're both important steps, but neither solves the problem. Most users don't verify the reproducibility of their builds and don't use Linux.


It solves the problem for users that use Linux. If other operating systems cared about making software distribution sane, they could also use package managers. It's a shame that they don't. As for verifying reproducibility, if you're using a package manager then this verification is quite trivial (and can be done automatically).

Solving the problem for proprietary operating systems that intentionally have horrific systems of managing the software on said operating system is harder due to an artificial, self-inflicted handicap. "Just" switching people to Linux is probably easier (hey, Google managed to get people to run Gentoo after all).


To repeat myself, for real world impact the only metric that counts is how many actual people benefited, not how many people benefit in ideal circumstances.

If your solution is to switch the entire world to Linux then you may want to figure out how to do that. Many have tried and failed before. Good luck.


Well there's reasoning which states that bad security is worse than no security.

The reason is that if you trust the site to be secure, it might be devastating once security is compromised, whereas with no security you're typically more careful.


We, as in the HN crowd, are more careful. The general public? I'd wager "not so much".


Sending the file over an end to end encrypted chat app.


Even then, you are trusting the app to do what it says it's going to do. The only way I feel 100% safe is encrypting the file manually before sending (through whatever platform), and sharing the key through some other medium (preferably word of mouth).

As a Windows user I mostly use 7-zip for this purpose, or the encryption plugin in Notepad++ for text.


If the app is free software it can be audited. The problem with web-based crypto is that you're downloading a new program every page refresh and executing it immediately. If you're worried about a free software app not encrypting things properly, I'd be worried about the tools you use manually doing the right thing as well.

While I agree that doing it manually is the only reliable way if you're going to send it over an insecure channel, if the channel is secure then it's much easier for an end-user to just send it in the app.


And that's still just a feeling. You just feel you can trust the encryption application. Which can be changed at any time in the future in a way that makes it no longer secure.

You just need to trust. So what's wrong with trusting Mozilla, if you can easily trust your encryption/decryption software?


Why would you trust 7-zip or Notepad++ more than Firefox?



The way this gets solved in the real world is through contracts - they have a certain contract with you, and if they break it, they can get sued and lose a lot of money.

This is one of the goals of the legal system - make it so we usually trust each other. There are no real long-term technical solutions to this problem.

So if you want to make sure you're safe, read their EULA or equivalent.


Or you can design a system (like any number of crypto systems) that isn't vulnerable to these sorts of attacks. A compromised CDN could cause this problem, which would mean that Mozilla is liable for things it doesn't administer. And resolving technical problems through the legal system never works well. If you can design something so users don't have to constantly trust you, that's a much better solution.

Not to mention that Send likely doesn't have a warranty (like most software under a free software license).


I don’t think the second paragraph is a logical extrapolation of the first. If this is using the WebCrypto API (which it appears to be doing), then trusting this browser-based solution isn’t fundamentally different from trusting an installed application that can update itself.


Using WebCrypto doesn't defend against the insecurity: their JavaScript code can send a copy of the file anywhere it likes. Mozilla can, if it wishes or if it is compelled to, deliver malicious JavaScript which does exactly that to a single targeted IP address, or just every once in a while in order to find potentially interesting files.

Using in-web-page crypto gives users a false sense of security. This is, I believe, a very real problem.


It's true; you're trusting Mozilla to deliver secure code. You'd be placing a similar amount of trust in Mozilla by using Firefox, since browsers automatically update themselves these days.

What WebCrypto guarantees is that it is truly Mozilla's code that you're trusting, since the WebCrypto APIs are only available in a secure context (HTTPS or localhost).


> You'd be placing a similar amount of trust in Mozilla by using Firefox, since browsers automatically update themselves these days.

No, because I use the Debian Firefox, which means that I'm trusting the Debian Mozilla team. I feel much better about that than about directly trusting Mozilla themselves.

I don't trust auto-updates.


Why would you trust Debian more than FireFox?

About auto-updates: CCleaner recently had an incident where version 5.33 had a backdoor injected by a third party. If you downloaded version 5.34 you were safe. If you had downloaded 5.32 with auto-update configured, you got the malicious update. But that didn't affect the auto-update setting as far as I know, so if you had it on you would, in about two weeks, have automatically received a fixed, clean version.

Point: the worst situation was if you did not have auto-updates on and downloaded v5.33. Then you were stuck with it until somebody told you there was malware on your machine.

You're damned if you do and damned if you don't.

https://www.piriform.com/news/blog/2017/9/18/security-notifi...


That's a very different position than the one you staked out above. It's not browser-based crypto you have a problem with, it's crypto performed by an application whose patching is done outside of your control.

That's reasonable for a technically savvy user, but the vast majority of users do not use Debian. They use Windows or OSX and rely on trusted corporations like Apple, MSFT, Google, and Mozilla to keep their systems patched.


Using any sort of application downloaded from the internet gives users a false sense of security.


> trusting this browser-based solution isn’t fundamentally different from trusting an installed application that can update itself.

Which is still a bad idea to trust, so I'd say that it is a logical extrapolation.


We trust applications like that all the time. Browsers update themselves, and we trust them to secure our communication with banks, governments, health providers, etc. Most browsers now also store passwords in an encrypted secret repository. If you're on Windows or OSX, your OS is also constantly updating itself with closed-source binary blobs.

I mean, sure, you can never use an auto-updating application again and always manually review system updates before installing them. But realistically, I don't see anyone besides Richard Stallman adopting that lifestyle.


> We trust applications like that all the time. Browsers update themselves,

Not if you're using a Linux distribution's browser packages (we patch out the auto-update code because you should always update through the package manager). And distributions build their own binaries, making attacks against builders much harder.

While people might trust auto-updating applications, they really shouldn't. And there are better solutions for the updating problem.


> We trust applications like that all the time

Sure, nobody ever said otherwise. But that doesn't mean it's a good idea.

The software I use is open-source, so I can see what I'm running, and what updates I get. I also don't use any auto-updates. The web is inherently different in that I can't really guarantee that the code I get is going to be the same code that you are getting.


I hope you've thought very carefully about your threat model when you say you don't autoupdate.

For most people your advice is quite plainly wrong. Most people should have everything on autoupdate.


Right. Point is there are two types of bad downloads. You can download a version which is not malicious but is insecure. Then auto-updating it makes it more secure.

Or you can auto-update to a version which is malicious. Then you are screwed. But the version you originally downloaded might have contained the threat in the first place, so just saying "don't auto-update" does not really protect you from malicious versions. Auto-updating does mean that you get security fixes, making you less vulnerable.

The original non-updated version can be malicious even with a vendor you think you should be able to trust because it is a popular product used by many others:

https://www.piriform.com/news/blog/2017/9/18/security-notifi...


> They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?

How could they do that easily? Their source code is public, and many third parties work on it and produce their own compiled versions - plus all the security people tracking unexpected connections would catch it.


Compiled versions of the JavaScript served by the site?


If you're using Firefox, you're already trusting Mozilla every time you visit any page.


> Exactly. One has to trust Mozilla every time one visits the page

This is where IPFS could be useful. It's content-addressed, so the address guarantees that you're getting the same, unmodified content.


>> one has to trust Mozilla not to do that.
>
> Exactly. One has to trust Mozilla every time one visits the page. They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?

Bear in mind they also make the web browser.


Sure, but that's open source and you can disable automatic updates, meaning they can't change the code whenever they feel like doing so. And if they do, the code will be kept in the source code control history, and will eventually be caught.

It's wildly different from a JS file that's loaded every time you visit the website.


It's pretty close to being the same thing. You're downloading Firefox at some point and not verifying the binaries you get match the source.

Unless Firefox provides fully reproducible builds on your platform from an open source compiler, you have no guarantee that the binary you have is built from the public source code. You have to trust Mozilla.

Without reproducible builds, compiling the source yourself would be the way to go.

Anyway, I agree that it should be clear that this file sharing service, while convenient, essentially requires you to trust Mozilla with your data. The claim "Mozilla does not have the ability to access the content of your encrypted file..." is fragile.


If you are running on Linux, then Firefox is built by your distribution. So attacks like that are much harder to accomplish, because the distribution of software like Firefox is (for lack of a better word) distributed. I'm not going to get into all of the techniques that distributions use to make these things safer; the crux of the point is that you should always use distribution packages, because we generally have much better release engineering than upstream (as we've been doing it for longer).


> one has to trust Mozilla not to do that

Well, the advantage of the client is that you can inspect the source, so you can verify that it doesn't actually access location.hash.


But software loaded from webpages is not immutable. It could be manipulated at any given time. So even if someone audits the code, you still cannot trust it, because all the audit would show is that the specific downloaded version the auditor saw was non-malicious.

As long as websites have no fingerprint that encompasses all loaded resources, and can't be pinned to that fingerprint, crypto in the web browser is not trustworthy.


https://developer.mozilla.org/en-US/docs/Web/Security/Subres... can help verify everything loaded is what it's supposed to be. However you'd still need to verify that the set of things loaded is the set of things you expect, not just that the things are the things, and that requires having an out-of-band checklist or extension...


SRI alone is insufficient. Maybe a strict CSP policy that exclusively allows hash sources and forbids eval could serve as an indicator that the scripts executed on a page are immutable. Probably needs some further conditions to make it airtight.
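A strict hash-source policy along those lines might look like the following (the digest value is illustrative):

```
Content-Security-Policy: default-src 'none'; connect-src 'self';
    script-src 'sha256-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQ='
```

With such a policy, changing any script means shipping a new header value, which is exactly the kind of deployment change a pin-aware client could detect.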


If you don't trust Mozilla, how do you trust them not to strip the CSP header whenever they want to serve malicious javascript?


You'd still have to implement a pinning feature. The hash-only CSP part would just be the foundation on which the pin check operates.


In that case, wouldn't an alternative like OnionShare always be safer? The code is open source, it's a desktop app, and the source code is obviously immutable on your computer.


Surely the safest option is to encrypt it yourself first?


You could upload it to Freenet; then there won't even be a record of what was uploaded and downloaded by whom: no one will have the metadata.

Info: http://www.draketo.de/light/english/freenet/effortless-passw...

Install: https://freenetproject.org


Thanks. I was just looking at the screenshots and didn't see any hashes, but when I tried it out and copied the link to the clipboard I saw the form "https://send.firefox.com/download/xxxxx/#yyyyy".


#yyyyy would be the encryption key. The webserver end never sees it.

It can, however, easily be read via JavaScript, so Mozilla needs to be trusted in any case.


If one were to build a marketing spyware add-on to analyse user traffic from within the browser and send all visited URLs to some remote server, would those sent URLs then possibly contain the anchor?


If it's running on the client, then yes it could read the anchor text.


Yes, but would that matter if these are one-time downloads? You couldn't go get the file even if you did grab the data needed to do so... or am I missing something?


The add-on could tell Mozilla which files to keep because it saved the decryption key for those files. Mozilla could then select those files to decrypt, e.g. to prove to authorities that its file-sharing service was not used for illicit purposes. Alternatively, the add-on could filter the IP addresses used to upload the files for potentially sensitive blocks and then tell Mozilla to decrypt the files uploaded by people from such blocks, e.g. in an attempt to engage in corporate espionage (of course one shouldn’t use third-party services for sensitive files in the first place, but if you have to use a third-party service, certainly an ‘open-source’, ‘private’ and ‘encrypted’ one from such a reputable company as Mozilla, right?)


Several of the alternatives linked in this post’s comments make no effort to encrypt at all; they simply try to con users into sharing files with their intermediate server in plaintext, as if somehow that’s an acceptable thing to do.

If you don’t trust Mozilla or you are sharing information that a nation-state attacker would coerce Mozilla into revealing, then you’re already set up to encrypt the file first yourself - at which point you can send it with any service, including Firefox Send.


At least you can check that it's not going back to the server.


How?


Inspect network traffic. Or more arduously, analyze the JS.


There are a lot of bytes moving around. I mean, I basically trust Mozilla, but if they were a bad actor, it could easily be hidden steganographically.


And if your client is compromised then you're basically hosed in any secure application.


Network traffic could be encrypted, so you'd have to analyze the JavaScript.


> It can, however, easily be read via javascript, so mozilla needs to be trusted in any case.

Or you (and some friends from organizations like the EFF and FSF) can read the source code to see what it does, and even compile it yourself. If you do that, you only need to trust the compiler.


No, you still need to trust that Mozilla's deployment corresponds to the publicly available release—that they aren't using a version with changes nor have they been breached by an attacker who can change it to sample 1/n transactions.


The key is in the hash. Check 0bin.net; we use the same trick to encrypt the pastebin content. The sources are available so you can see the gist of it. It's very simple code.


Curious why in your FAQ at 0bin.net you say:

    But JavaScript encryption is not secure!
Is there something inherently insecure about the JS crypto library you're using (https://github.com/bitwiseshiftleft/sjcl)?


"Javascript Cryptography Considered Harmful" has been posted on here and lays things out nicely, as e1ven mentions.

Here's the original discussion: https://news.ycombinator.com/item?id=2935220


I'd imagine it's similar to the arguments at https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...


JS crypto makes the client trust the server. As I host 0bin, I can inject code into the page whenever I want, if I change my mind.


Is this a common practice? It seems brilliantly simple and reproducible.


I stole the idea from PHP ZeroBin. I know Mega used to do it. Not common, but not new.


I would guess public key cryptography at the client-side.



