Firefox Send: Private, Encrypted File Sharing (firefox.com)
856 points by networked on Oct 11, 2017 | 275 comments



The last time this was posted I hacked up a quick and dirty Python client for Send: https://github.com/nneonneo/ffsend. I just updated it for recent Send changes, which streamlined the crypto and removed some redundancy.

Firefox’s JS client requires the whole file be in memory in order to perform the encryption, and has to decrypt the whole file in memory in order to download it. My client doesn’t have that limitation so it could theoretically upload much larger files (subject only to the server’s upload limit).
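Roughly, the streaming idea looks like this — a minimal sketch, not the actual Send protocol or the ffsend code; it assumes the third-party `cryptography` package and hypothetical file names, and it skips authentication (a real design would also MAC the ciphertext, e.g. with per-chunk AES-GCM):

    # Minimal sketch, NOT the real Send protocol: encrypt a file in fixed-size
    # chunks so the whole thing never has to sit in memory at once.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_stream(src_path, dst_path, key, chunk_size=1 << 20):
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            dst.write(nonce)                  # receiver needs the nonce to decrypt
            while True:
                chunk = src.read(chunk_size)  # one chunk in memory at a time
                if not chunk:
                    break
                dst.write(enc.update(chunk))
            dst.write(enc.finalize())

    encrypt_stream("big_file.bin", "big_file.enc", key=os.urandom(32))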


Have you already bumped into the server upload limit?


Would that not simply be your specific allotted disk space, and bandwidth?


I'm curious -- Mozilla says it can't decrypt the file on their side:

    Mozilla does not have the ability to access the content of your encrypted file [...] 
    https://testpilot.firefox.com/experiments/send
How is the receiver able to decrypt the file -- i.e. what is the decryption key if not the URL slug, which presumably Mozilla has as well?


The key is in the URL hash (fragment), which isn't sent over the wire when loading a page. Now granted, it's accessible via location.hash in the client, but one has to trust Mozilla not to do that.


> one has to trust Mozilla not to do that.

Exactly. One has to trust Mozilla every time one visits the page. They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?

Web-page-based crypto is fundamentally insecure, and Mozilla is committing an extremely grave error in encouraging users to trust it (as they also do with their Firefox Accounts). Security is important, and snake-oil solutions are worse than worthless.


Send is meant to be an improvement on Dropbox & co for a specific use case.

Is it perfect? No, it isn't. But it is still a considerable improvement.

If you have a better solution in mind for the average user crowd, feel free to suggest it, of course.


Spec out and implement resource pinning, already. Like RFC 8246, but authored more with the user's interests in mind, rather than the service's.

As a show of nothing-up-the-sleeve, a service asserts that it's in a stable state and will continue to serve exact copies of the resources as they exist now—that they will not change out from beneath the client in subsequent requests. When a user chooses to use resource pinning, the browser checks that this is true, and if it finds that a new deployment has occurred, the browser will refuse to accept it without deliberate consent and action from the user (something on par with the invalid-cert screen).

This means that for a subset of services (those whose business logic can run primarily on the client side), the users need not trust the server; they need only trust the app, which can be audited.

When deploying updated resources, services SHOULD make the update well-advertised and accompany it with some notice out of band (such as a post about the new "release", its changelog, and a link to the repo), so the new deployment may be audited.

When new deployments occur, clients SHOULD allow the user to opt to continue using the pinned resources, and services SHOULD be implemented in such a way that this is handled gracefully. This gives the user continuity of access while the user (or their org) carries out the audit process.
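To make the idea concrete, here's a purely hypothetical sketch (nothing like this exists as a standard; all names are made up) of the kind of check a pinning-aware client could run: fingerprint the full set of fetched resources and refuse to proceed when the fingerprint no longer matches the recorded pin.

    import hashlib

    def deployment_fingerprint(resources):
        """resources: {path: content_bytes} for everything the page loads."""
        h = hashlib.sha256()
        for path in sorted(resources):
            h.update(path.encode())
            h.update(hashlib.sha256(resources[path]).digest())
        return h.hexdigest()

    def check_pin(pinned_fingerprint, fetched_resources):
        if deployment_fingerprint(fetched_resources) != pinned_fingerprint:
            # analogous to the invalid-cert screen: stop and ask the user
            raise RuntimeError("new deployment detected; refusing without explicit consent")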

Areas where this would be useful:

- HIPAA compliance

- Web crypto that isn't fundamentally broken

- Stronger guarantees for non-local resources used in Web Extensions—in fact, the entire extension update mechanism could probably be more or less rebased on top of the resource pinning policy


This sounds a lot like Beaker [1], a browser based on Dat [2]. It allows creation of shared resources (web pages) with immutable version tracking, among other things.

1: https://beakerbrowser.com/ 2: https://datproject.org/


This would also have the added bonus that you could reload such pinned resources from anywhere once you have the pin. Even without a TLS setup or having to trust certificate chains.

Caching proxies would suddenly become viable again, because only the first download has to go through HTTPS, while "I don't have this in the local cache anymore, can you serve me this content?" requests could go through side channels or outside the TLS handshake or something like that. Caches could even perform downgrade non-attacks.


How many pins would you expect a browser instance to have? I feel like most of the time the pinned content could fit in the browser cache and make this variety of proxy-side caching pointless.


Immutable content is a prerequisite for pins. The caching benefits mostly fall out of the immutability, not the pinning. So as long as the hypothetical standard allows one to be used without the other, additional uses could fall out of that stack.


My point is, those particular benefits only exist in a narrow circumstance where the browser is half-caching.


> But it is still a considerable improvement.

When it's secure, it's an improvement; if Mozilla, a Mozilla employee or a government which can compel Mozilla employees chooses to make it insecure, then it's worse than useless. At least with something like Dropbox, users (should) know that they are insecure and should not transmit sensitive files.

> If you have a better solution in mind for the average user crowd, feel free to suggest it, of course.

The functionality should be built into Firefox, so that users can verify source code & protocols once and know that they are secure thereafter.


And re-check them after every update?

And trust that Mozilla won't randomly distribute a backdoor to 1/n of users?

The means you're suggesting aren't possible to implement for most people today. If you care about real-world impact I would recommend thinking of other strategies.


Reproducible builds effectively solve this problem, by making it possible to actually verify that you got the same binary (from the same source) as everyone else. Not to mention that if you're on Linux, distributions build Firefox themselves and so attacks against users need to be far larger in scale for 1/n attacks.


They're both important steps, but neither solves the problem. Most users don't verify the reproducibility of their builds and don't use Linux.


It solves the problem for users that use Linux. If other operating systems cared about making software distribution sane, they could also use package managers. It's a shame that they don't. As for verifying reproducibility, if you're using a package manager then this verification is quite trivial (and can be done automatically).

Solving the problem for proprietary operating systems that intentionally have horrific systems of managing the software on said operating system is harder due to an artificial, self-inflicted handicap. "Just" switching people to Linux is probably easier (hey, Google managed to get people to run Gentoo after all).


To repeat myself, for real world impact the only metric that counts is how many actual people benefited, not how many people benefit in ideal circumstances.

If your solution is to switch the entire world to Linux then you may want to figure out how to do that. Many have tried and failed before. Good luck.


Well there's reasoning which states that bad security is worse than no security.

The reason is that if you trust the site to be secure, it might be devastating once security is compromised, but with no security you're typically more careful.


We are more careful, as in the HN crowd. The general public? I'd wager "not so much".


Sending the file over an end to end encrypted chat app.


Even then, you are trusting the app to do what it says it's going to do. The only way I feel 100% safe is encrypting the file manually before sending (through whatever platform), and sharing the key through some other medium (preferably word of mouth).

As a Windows user I mostly use 7-zip for this purpose, or the encryption plugin in Notepad++ for text.


If the app is free software it can be audited. The problem with web-based crypto is that you're downloading a new program every page refresh and executing it immediately. If you're worried about a free software app not encrypting things properly, I'd be worried about the tools you use manually doing the right thing as well.

While I agree that doing it manually is the only reliable way if you're going to send it over an insecure channel, if the channel is secure then it's much easier for an end-user to just send it in the app.


And that's still just a feeling. You just feel you can trust the encryption application, which can be changed at any time in the future in a way that is no longer secure.

You just need to trust. So what's wrong with trusting Mozilla, if you can easily trust your encryption/decryption software?


Why would you trust 7-zip or Notepad++ more than Firefox?



The way this gets solved in the real world is through contracts - they have a certain contract with you, and if they break it, they can get sued and lose a lot of money.

This is one of the goals of the legal system - make it so we usually trust each other. There are no real long-term technical solutions to this problem.

So if you want to make sure you're safe, read their EULA or equivalent.


Or you can design a system (like any number of crypto systems) that isn't vulnerable to these sorts of attacks. A compromised CDN could cause this problem, which now means that Mozilla would be liable for things that they don't administer. And resolving technical problems through the legal system never works well. If you can design something so users don't have to constantly trust you, then that's a much better solution.

Not to mention that Send likely doesn't have a warranty (like most software under a free software license).


I don’t think the second paragraph is a logical extrapolation of the first. If this is using the WebCrypto API (which it appears to be doing), then trusting this browser-based solution isn’t fundamentally different from trusting an installed application that can update itself.


Using WebCrypto doesn't defend against the insecurity: their JavaScript code can send a copy of the file anywhere it likes. Mozilla can, if it wishes or if it is compelled to, deliver malicious JavaScript which does exactly that to a single targeted IP address, or just every once in a while in order to find potentially interesting files.

Using in-web-page crypto gives users a false sense of security. This is, I believe, a very real problem.


It's true; you're trusting Mozilla to deliver secure code. You'd be placing a similar amount of trust in Mozilla by using Firefox, since browsers automatically update themselves these days.

What WebCrypto guarantees is that it is truly Mozilla's code that you're trusting, since the WebCrypto APIs are only available in a secure context (HTTPS or localhost).


> You'd be placing a similar amount of trust in Mozilla by using Firefox, since browsers automatically update themselves these days.

No, because I use the Debian Firefox, which means that I'm trusting the Debian Mozilla team. I feel much better about that than about directly trusting Mozilla themselves.

I don't trust auto-updates.


Why would you trust Debian more than Firefox?

About the auto-updates: CCleaner recently had an incident where their version .33-something had a backdoor injected by some third party. If you downloaded version 34 you were safe. If you downloaded 32 and configured it to auto-update, you got the malicious update. But that didn't affect the auto-update setting as far as I know, so if you had it on, you would have gotten an automatically fixed clean version in about two weeks' time.

Point: the worst situation was if you did not have auto-updates on and downloaded v. 33. Then you were stuck with that until somebody told you there was malware on your machine.

You're damned if you do and damned if you don't.

https://www.piriform.com/news/blog/2017/9/18/security-notifi...


That's a very different position than the one you staked out above. It's not browser-based crypto you have a problem with, it's crypto performed by an application whose patching is done outside of your control.

That's reasonable for a technically savvy user, but the vast majority of users do not use Debian. They use Windows or OSX and rely on trusted corporations like Apple, MSFT, Google, and Mozilla to keep their systems patched.


Using any sort of application downloaded from the internet gives users a false sense of security.


> trusting this browser-based solution isn’t fundamentally different from trusting an installed application that can update itself.

Which is still a bad idea to trust, so I'd say that it is a logical extrapolation.


We trust applications like that all the time. Browsers update themselves, and we trust them to secure our communication with banks, governments, health providers, etc. Most browsers now also store passwords in an encrypted secret repository. If you're on Windows or OSX, your OS is also constantly updating itself with closed-source binary blobs.

I mean, sure, you can never use an auto-updating application again and always manually review system updates before installing them. But realistically, I don't see anyone besides Richard Stallman adopting that lifestyle.


> We trust applications like that all the time. Browsers update themselves,

Not if you're using a Linux distribution's browser packages (we patch out the auto-update code because you should always update through the package manager). And distributions build their own binaries, making attacks against builders much harder.

While people might trust auto-updating applications, they really shouldn't. And there are better solutions for the updating problem.


> We trust applications like that all the time

Sure, nobody ever said otherwise. But that doesn't mean it's a good idea.

The software I use is open-source, so I can see what I'm running, and what updates I get. I also don't use any auto-updates. The web is inherently different in that I can't really guarantee that the code I get is going to be the same code that you are getting.


I hope you've thought very carefully about your threat model when you say you don't autoupdate.

For most people your advice is quite plainly wrong. Most people should have everything on autoupdate.


Right. Point is there are two types of bad downloads. You can download a version which is not malicious but is insecure. Then auto-updating it makes it more secure.

Or you can auto-update to a version which is malicious. Then you are screwed. But the previous version you downloaded might have contained the threat to start with, so just saying "don't auto-update" does not really protect you from malicious versions. Auto-updating does mean that you get updated security fixes, making you less vulnerable.

The original non-updated version can be malicious even with a vendor you think you should be able to trust because it is a popular product used by many others:

https://www.piriform.com/news/blog/2017/9/18/security-notifi...


> They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?

How could they do that easily? Their source code is public, and many third parties work on it and produce their own compiled versions - plus any security person tracking unexpected connections would catch it.


Compiled versions of the JavaScript served by the site?


If you're using Firefox, you're already trusting Mozilla every time you visit any page.


> Exactly. One has to trust Mozilla every time one visits the page

This is where IPFS could be useful. It's content addressed, so the address guarantees that you're getting the same, unmodified content.


>> one has to trust Mozilla not to do that.

> Exactly. One has to trust Mozilla every time one visits the page. They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?

Bear in mind they also make the web browser.


Sure, but that's open source and you can disable automatic updates, meaning they can't change the code whenever they feel like doing so. And if they do, the code will be kept in the source code control history, and will eventually be caught.

It's wildly different from a JS file that's loaded every time you visit the website.


It's pretty close to being the same thing. You're downloading Firefox at some point and not verifying the binaries you get match the source.

Unless Firefox provides fully reproducible builds on your platform from an open source compiler, you have no guarantee that the binary you have is built from the public source code. You have to trust Mozilla.

Without reproducible builds, compiling the source yourself would be the way to go.

Anyway, I agree that it should be clear that this file sharing service, while convenient, essentially requires you to trust Mozilla with your data. The claim "Mozilla does not have the ability to access the content of your encrypted file..." is fragile.


If you are running on Linux, then Firefox is built by your distribution. So attacks like that are much harder to accomplish, because the distribution of software like Firefox is (for lack of a better word) distributed. I'm not going to get into all of the techniques that distributions use to make these things safer; the crux of the point is that you should always use distribution packages, because we generally have much better release engineering than upstream (as we've been doing it for longer).


> one has to trust Mozilla not to do that

Well, the advantage of the client is that you can inspect the source, so you can verify that it doesn't actually access location.hash.


But software loaded from webpages is not immutable. It could be manipulated at any given time. So even if someone audits the code, you still cannot trust it, because all the audit would show is that the specific downloaded version which the auditor saw was non-malicious.

As long as websites have no fingerprint that encompasses all loaded resources and can't be pinned to that fingerprint, crypto in the web browser is not trustworthy.


https://developer.mozilla.org/en-US/docs/Web/Security/Subres... can help verify that everything loaded is what it's supposed to be. However, you'd still need to verify that the set of things loaded is the set of things you expect, not just that the things are the things, and that requires having an out-of-band checklist or extension...
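For reference, an SRI integrity value is just a base64-encoded digest of the exact bytes served; e.g. computing a sha384 value for a (hypothetical) script file in Python:

    import base64, hashlib

    with open("app.js", "rb") as f:               # hypothetical script file
        digest = hashlib.sha384(f.read()).digest()
    print("sha384-" + base64.b64encode(digest).decode())
    # goes into <script src="app.js" integrity="sha384-...">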


SRI alone is insufficient. Maybe a strict CSP policy that exclusively allows hash sources and forbids eval could serve as an indicator that the scripts executed on a page are immutable. Probably needs some further conditions to make it airtight.


If you don't trust Mozilla, how do you trust them not to strip the CSP header whenever they want to serve malicious javascript?


You'd still have to implement a pinning feature. The hash-only-CSP part would just be the foundation that the pin checks against.


In that case, wouldn't an alternative like OnionShare always be safer? The code is open source, it's a desktop app, and the source code is obviously immutable on your computer.


Surely the safest option is to encrypt it yourself first?


You could upload it to Freenet; then there won't even be a record of what was uploaded and downloaded by whom: no one will have the metadata.

Info: http://www.draketo.de/light/english/freenet/effortless-passw...

Install: https://freenetproject.org


Thanks. I was just looking at the screenshots and didn't see any hashes, but when I tried it out and copied the link to the clipboard I saw the form "https://send.firefox.com/download/xxxxx/#yyyyy".


#yyyyy would be the encryption key. The webserver end never sees it.

It can, however, easily be read via JavaScript, so Mozilla needs to be trusted in any case.
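To see why the server never gets the key: everything after the '#' is a URL fragment, which HTTP clients keep locally and never include in the request they send. A quick Python check, using the example URL from above:

    from urllib.parse import urlsplit

    url = "https://send.firefox.com/download/xxxxx/#yyyyy"
    parts = urlsplit(url)
    print(parts.path)      # '/download/xxxxx/' -- this is what the request asks for
    print(parts.fragment)  # 'yyyyy'            -- stays client-side (location.hash)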


If one were to build a marketing spyware add-on to analyse user traffic from within the browser and send all visited URLs to some remote server, would those sent URLs then possibly contain the anchor?


If it's running on the client, then yes it could read the anchor text.


Yes, but would that matter if these are one-time downloads? You couldn't go get the file even if you did grab the data needed to do so... or am I missing something?


The add-on could tell Mozilla which files to keep because it saved the decryption key for those files. Mozilla could then select those files to decrypt, e.g. to prove to authorities that its file-sharing service was not used for illicit purposes. Alternatively, the add-on could filter the IP addresses used to upload the files for potentially sensitive blocks and then tell Mozilla to decrypt the files uploaded by people from such blocks, e.g. in an attempt to engage in corporate espionage (of course one shouldn’t use third-party services for sensitive files in the first place, but if you have to use a third-party service, certainly an ‘open-source’, ‘private’ and ‘encrypted’ one from such a reputable company as Mozilla, right?)


Several of the alternatives linked in this post’s comments make no effort to encrypt at all; they simply try to con users into sharing files with their intermediate server in plaintext, as if somehow that’s an acceptable thing to do.

If you don't trust Mozilla or you are sharing information that a nation-state attacker would coerce Mozilla into revealing, then you're already set up to encrypt the file first yourself - at which point you can send it with any service, including Firefox Send.


At least you can check that it's not going back to the server.


How?


Inspect network traffic. Or more arduously, analyze the JS.


There are a lot of bytes moving around. I mean, I basically trust Mozilla, but if they were a bad actor, it could easily be hidden steganographically.


And if your client is compromised then you're basically hosed in any secure application.


Network traffic could be encrypted, so you'd have to analyze the JavaScript.


> It can, however, easily be read via javascript, so mozilla needs to be trusted in any case.

Or you (and some friends from organizations like the EFF and FSF) can read the source code to see what it does, and even compile it yourself. If you do that, you only need to trust the compiler.


No, you still need to trust that Mozilla's deployment corresponds to the publicly available release—that they aren't using a version with changes nor have they been breached by an attacker who can change it to sample 1/n transactions.


The key is in the hash. Check 0bin.net. We use the same trick to encrypt the pastebin content. The sources are available so you can see the gist of it. It's very simple code.


Curious why in your FAQ at 0bin.net you say:

    But JavaScript encryption is not secure!
Is there something inherently insecure about the JS crypto library you're using (https://github.com/bitwiseshiftleft/sjcl)?


"Javascript Cryptography Considered Harmful" has been posted on here and lays things out nicely, as e1ven mentions.

Here's the original discussion: https://news.ycombinator.com/item?id=2935220


I'd imagine it's similar to the arguments at https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...


JS crypto makes the client trust the server. As I host 0bin, I can inject code into the page whenever I want if I change my mind.


Is this a common practice? It seems brilliantly simple and reproducible.


I stole the idea from PHP ZeroBin. I know Mega used to do it. Not common, but not new.


I would guess public key cryptography at the client-side.


As I recall, there was a Bitcoin wallet service which relied on securing the access key behind the '#' in the URL for its security -- turns out it's not perfectly reliable and shouldn't be used for protecting money. Likewise for files you really need to be secure.

Since boring crypto shouldn't have weird failure modes like this, I'm thinking this design is a big mistake?

EDIT: I think it was Instawallet, and apparently while they had robots.txt set to prevent crawling, the theory was that people typing their URL into Google (or the Omnibar) would alert Google to the URL, and it got into search results anyway.

I know that web-keys is based on the theory that since the fragment isn't sent by user agents in the Request-URI it's secure, but there are things that see the full URL which aren't conforming agents, and it just seems risky for any long-lived secret.


Given that it's explicitly a short-lived secret (under 24h) and single-use - so a Google crawl (or typing it into the URL bar, presumably followed immediately by Enter) would invalidate it on first access - I'm not convinced any of those concerns apply.


It really is a shame that there still isn't an easy way, that I know of, for a person whose computer knowledge extends to using Facebook to send arbitrarily large files - one that isn't tethered to a specific cloud service and is also reliable (can tolerate the connection dropping).

It seems that BitTorrent protocols are pretty close, but I don't think there is a seamless client that allows for "magical" point to point transactions.


Resilio Sync (formerly Bittorrent Sync) pretty much offers that. They use relays in case both peers cannot pierce NAT, but the relays only see ciphertext. They also have apps for Android and iOS, making transfers to mobile devices also possible. Since it is closed-source software, it's up to you to decide whether you trust the software.

SyncThing is a free software alternative, but I wouldn't say that it is usable (yet) for the average user.


Instead of Resilio Sync, I would suggest that people take a look at Syncthing. They have similar features but Syncthing is open source.


I haven't tried Syncthing in about a year, but to me Resilio and Syncthing look like they are from completely different classes of apps/they target different markets.

Syncthing is simply not designed for the mainstream. Resilio is. If Syncthing devs can fix that, I'll gladly start using it over Resilio with my non-technical friends.


How do you fix "not being mainstream"? What exactly is your issue with Syncthing?


There is no iOS app for Syncthing as far as I can tell


They're crowdfunding one: https://www.bountysource.com/issues/7699463-native-ios-port-...

(There's also a closed-source iOS app called fsync(), but I found it too slow/unstable to use.)


That's great to know, and I can see there has been quite a bit of discussion on the topic. I'm glad I'm not the only one that was missing an open implementation for iOS.


You can use fsync() [0] for that.

[0]: https://itunes.apple.com/us/app/fsync/id964427882?mt=8


Thank you! So as it is available on iOS, I need to adjust my comment to reflect what I mean. There isn’t a free, open-source, community-driven implementation for iOS, keeping with the nature of the project.


I used BitTorrent Sync for a couple of years and loved it. Has it evolved much under the Resilio banner?


Me too. I stopped using it after I read somewhere that it could be hacked to include new peers to your connection without your awareness, which is a serious concern. I don't know how legit the claim was... but I opted to avoid the risk.


I keep mine updated to the latest version and still love it. The 'Ignore' file that is in the top-level of each share alone is worth the admission (similar to .gitignore). Dropbox/OneDrive/etc. all lose their damn minds if you try to sync a folder with 'node_modules' or 'vendor' to the share.


Both parties would have to be online for any p2p solution to work. And if they are online, there are plenty of ways to create a p2p link. It is complicated by things like NAT though. However, that is the less common case. In most cases, the receiver isn't going to be online. So you need an intermediate server, and that's how you end up with third party solutions.


Smartphones in Europe are close to being online 24/7. Energy management by OSes might limit apps in what they can do, of course. In any case, the rise of 4G networks might lead to interesting new solutions.


Yeah, the "being online" detail seems like the problem. It would be best if the encrypted data were distributed (something like IPFS) but then why would peers host files that aren't for them.


You would need a seedbox to function as a cloud. Then when you send a file, send it to the seedbox simultaneously. This approach might also increase download speed for the receiver.


They don't currently, but Filecoin is supposed to create an incentive for people to host other people's files. Obviously, you'll have to pay some amount to have your files hosted.


Storj is based on a similar concept.


AFAIR, there's already some service on IPFS which will pin files for you for money. I.e.: why would someone? Because you'd pay them for that - simple :)


If you can assemble IKEA furniture you can also run a Linux server.


I've found https://github.com/warner/magic-wormhole really useful, although it only meets 1/3 of your requirements. I'm hoping someone will add a nice GUI and re-transmission.

Previously on HN: https://news.ycombinator.com/item?id=14649727


In principle it looks really nice, but with a quick glance at the code I'm not too sure I would trust it...

https://github.com/warner/magic-wormhole/blob/master/src/wor...


You're at a computer.

You need to connect to another computer.

First you need to know a route to that computer. If it has an externally reachable IP address and you know that, then great. If it has an externally reachable IP address and a DNS entry and you know _that_, then also great.

If you don't know the IP address or domain name of the other computer then you'll have to do some kind of lookup/exchange to find it. That means some kind of centralised service to provide the lookup functionality.

If the other computer doesn't even have an externally reachable IP address then a central service is going to have to act as a connection point which you can both connect to (or provide some other method of helping the two of you connect).

I'm not aware of any entirely decentralised system which would allow two computers which are behind NAT to find and then talk to each other. Or any obvious design which would work there.


WebRTC solves all the issues you're mentioning, and is available in Chrome and Firefox (in some form in Edge, and soon in Safari).

Three examples of this, implemented and working:

- https://www.sharedrop.io/

- https://www.justbeamit.com/

- https://file.pizza/


So, three systems which all require a central service?


Those services are for the initial peer discovery and SDP exchange, which can be done by other means (using any other side-channel).

Ongoing work is happening, for example by the webtorrent folks, to remove this constraint.


Yeah, it's initial peer discovery and connection setup (if behind NAT) I'm talking about. Once you've found the other PC and got a connection set up then all is well ;-)

Do these services also act as connectors in case of double NAT?


STUN allows all sorts of hole-punching for cases where both peers are behind NATs. For cases where the hole-punching doesn't work (a user behind two NATs might be one case), you can use TURN which is a dumb UDP or TCP relay.


They certainly can, I don't know if they do in practice. It always work for me, so I haven't investigated too much.


Which is a step backwards compared to bittorrent, since that can also provide decentralized hole punching, bootstrapping and peer discovery and not just the bulk data transfer. Plus being implemented at the TCP/UDP level allows more fine-grained control over the data flow, making it more friendly towards the network.


They're also needed for the actual javascript code that runs everything.


What about something like Tor. You can run a hidden service to get an onion address that any Tor client can access.

Would that meet the requirements without being a "central" service?


If privacy isn't your primary concern, and this is absolutely not private, you can also just use a redirection service and access .onion domains without installing Tor.

Just take any .onion domain and append it with .to and it should work. examplelettersblah.onion becomes examplelettersblah.onion.to and that works without needing to muck about with Tor.

This would be a horrible idea if you're concerned with privacy/anonymity. But, it'd make it easy for people to download your shared files.


That's exactly how Onionshare works. https://onionshare.org/


Never seen that before. But running a hidden service on Tor is absurdly easy, then you just shove a file server through whatever port Tor is forwarding.


> I'm not aware of any entirely decentralised system which would allow two computers which are behind NAT to find and then talk to each other. Or any obvious design which would work there.

then you're clearly not the person we should be asking to build this kind of thing are you? :-)

my proposal actually solves two important problems with peer-to-peer systems, and I barely have to write any new code to make it work! the solution? i2p! it's an anonymous mix network, much like Tor, but completely decentralized. using an intermediary mix network fixes the NAT issue and prevents you from leaking your IP address to other peers.

https://geti2p.net/en/comparison/tor


Assume I'm at my computer with i2p installed. How do I find your computer?

(Serious question - I've not used it myself, and I'm intrigued at the process.)


https://geti2p.net/en/about/intro

endpoints are identified by their public key hash. each endpoint maintains a set of anonymized routes. this routing information is stored in a Distributed Hash Table (DHT). if you want to connect to another endpoint, you lookup the route for the public key hash, build an outbound route, and you're good.

more concretely, an ipfs transfer would work by using public key hashes in place of IP addresses to identify peers and a known set of endpoint keys for bootstrapping.


So all you need is the hash (sent to you by your friend over IM or email or postcard) and you're good to go?

I like it.

Now, is it available today in a form (as the OP put it) which "A person whose computer knowledge extends to using facebook" can use?


if they have the ability to install software, they can use i2p

https://geti2p.net/en/download


Odd. I looked at the description on the main page, and it lists eDonkey, BitTorrent, and Gnutella, but not any kind of direct file transfer.

Am I missing something?


yes.

1. click bit torrent

2. click create torrent

3. type the path to your file

4. share magnet link

magnet:?xt=urn:btih:d0da0a2cac2bb3fd7ba6548edef12a24122ef481&tr=http://diftracker.i2p/announce.php


Does the magnet link work without DifTracker? Because that's still a centralized system.


yes, the tracker is optional. the built in torrent client supports BEP 5.

http://www.bittorrent.org/beps/bep_0005.html


Two thoughts:

- IPv6 kind of helps here, at least if we hope that no NAT standard ever makes it into IPv6. Crossing my fingers.

- There does exist at least one NAT hole-punching technique that can traverse two NATs with no central server using ICMP-based holepunching and UDP. Obviously, like all hole punching techniques, it only works on certain kinds of NATs, and firewalls can kill it.


> There does exist at least one NAT hole-punching technique that can traverse two NATs with no central server using ICMP-based holepunching and UDP. Obviously, like all hole punching techniques, it only works on certain kinds of NATs, and firewalls can kill it.

I think you're referring to this? Clever hack indeed. https://github.com/samyk/pwnat


IPv6 doesn't help in corporate environments, where it's often firewalled harder than IPv4.

The fact that it doesn't support NAT is seen as a risk in such environments (which rightly or wrongly consider it a second level of firewalling). Some deployments that I've seen use only site-scoped addresses with no globally-routed prefixes. Everything Internetty has to go over IPv4, which makes the security team's job easier: drop all IPv6 at the DMZ and drop all non-NAT IPv4.


> I'm not aware of any entirely decentralised system which would allow two computers which are behind NAT to find and then talk to each other. Or any obvious design which would work there.

I think GNUnet has some stuff in it which can even work across different protocols, e.g. hopping from TCP/IP over Ethernet to Bluetooth to packet radio. It has some neat stuff where, as I recall, it probes out a random path, then tries to get at its intended destination, remembering successful attempts. It definitely sounds cool, but it's not currently in an end-user-usable state.


Regarding this bit:

> I'm not aware of any entirely decentralised system [...]

What's the user value of having something entirely decentralized? I see the value in making sure the actual file transfer doesn't go through the central rendezvous. But I don't understand what's gained by eliminating the use of a little help setting up the connection.


The lack of a centralized entity collecting metadata. Who is sending files, when and how often.


The ISPs can do that, and they're pretty centralized.


https://upspin.io/ is in super early concept stage, but trying to solve file sharing issues. It allows for hosting on multiple platforms and trying to keep control in the hands of the users.

https://upspin.io/doc/overview.md


file.pizza is pretty solid, but uses WebRTC, so I've had failures when sending to some iOS users. Apparently WebRTC is included in Safari 11 though.


thanks for sharing this - very cool.


> It seems that bitorrent protocols are pretty close, but I don't think there is a seamless client that allows for "magical" point to point transactions.

instant.io works pretty well for me, it works on the bittorrent protocol, but over webrtc


Well, you can with FTP and UPnP. The client gets the external IP of the machine, then uses UPnP to open ports on the router/firewall, then uses PASV and connects to the destination. The destination runs an FTP server that uses UPnP to open up the incoming port, and gets its external IP and uses it for PORT mode. The end result is two individuals behind NAT sharing arbitrary files in bulk. You can even tie this into FXP to do syncing with the cloud.

Or run a local SFTP server with UPnP and avoid the extra complexity.

In terms of finding each other, the client could get its external IP, open up a UPnP port, and provide the user with a QR code or brief snippet to paste into a chat conversation, which the other user would feed to their client to initiate the file transfer.

You could do this commercially by providing an external website with which the clients could communicate out of band to set up their transfers (for free), and (for pay) optionally provide a cloud sync service (which would help person A "pre-send" a big file transfer until person B was available to receive it).

While some of this is available today, you're right, I don't think there are accessible clients. ICQ, AIM and IRC used to be the solution, as they all did P2P file transfer. But now everything is on the web, so everything sucks.


magic-wormhole ( https://github.com/warner/magic-wormhole ) is not so far from that, just add a simple GUI on top of it (or just use the terminal and convince the user that a terminal does not have to be complicated), and you're good to go. I don't know how reliable it is w.r.t. connection drops though.


This was once a common use of AOL Instant Messenger.


I just want to be able to right-click somewhere and have the system bring up a simple FTPS or SFTP daemon and have it try UPnP to open a firewall/NAT traversal. This daemon would spin up just for the transfer, then shut itself down once done.

Even if both parties are behind a firewall that won't respect UPnP, there's UDP hole punching which should usually work. Mozilla would just need to host a server to handle the port number handoffs.

I imagine this would handle 90% of the use cases. The rest would use traditional 3rd party hosting, IM transfers, email, etc.


With the recent shuttering of AIM, I'm reminded of the old client's Direct IM feature which would allow you to transfer arbitrarily large files through the chat session. That tool could not have been simpler to the user, as they just had to click "Direct IM" and could then drag and drop files into the chat. However, that system still relied on the cloud service (AIM) being a broker of connecting a handle to an IP address, and that there was little guarantee of the identity of the person on the other end of the IM.


Local-wise you can use Nitroshare. I use it all the time to quickly send files between my computers.

https://nitroshare.net/


And this isn't? (Honest question, haven't looked at it hard enough yet.)

I don't disagree with you but I kind of expected a note as to why this wasn't a solution in this context.


I think he considers this as "tethered to a cloud service".

I think a solution to this problem that wasn't tethered would be something like an IPv6 world where everyone has an IP address: I could just put in your address and send you a file, and as long as our computers were on and connected to the world wide web, it'd get to you.


Technically, a one-to-one connection to send files should be simple: just open a socket on both sides or use nc. Of course there is NAT, but it's not insurmountable. The sending or receiving party has to share their IP. And this can only be real time; there is no scope to store files anywhere.
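A minimal sketch of that "just open a socket" idea in Python (no NAT traversal, no encryption, no resume; host and file names are placeholders):

    import socket

    def receive(port, out_path):
        with socket.create_server(("", port)) as srv:   # listening side
            conn, _ = srv.accept()
            with conn, open(out_path, "wb") as out:
                while chunk := conn.recv(65536):
                    out.write(chunk)

    def send(host, port, in_path):
        with socket.create_connection((host, port)) as conn, open(in_path, "rb") as src:
            while chunk := src.read(65536):
                conn.sendall(chunk)

    # receiver: receive(9000, "incoming.bin")
    # sender:   send("receiver.example.org", 9000, "outgoing.bin")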

As an aside, I have recently noticed that visits to e-commerce sites sit near-permanently at the top of my Firefox history, more than other sites. Is Firefox doing deals with Amazon and others, and has this been disclosed?


File.pizza is very seamless point to point transfer of arbitrary file sizes.

Doesn't work in safari though (haven't tested ie) so maybe it is not general enough for your use case.


Does it work in Safari in High Sierra? I thought they had WebRTC now.


That's a good question. My box is too old for High Sierra, so I can't check, and I didn't consider that my browser is now outdated feature-wise before writing the comment.


There are lots of them with varying trade-offs between ease of use and power.

The problem is that there are none with enough of a user base to have a network effect, and users hate installing new apps.

It's 2017 and it's still hard and somewhat dangerous to install apps.


I'm not sure it tolerates connection dropping, but I've been using justbeamit for years and it has never let me down: https://www.justbeamit.com


Magic wormhole


IPFS


They should add this as a shortcut button in Firefox. I think it makes even more sense than having the Pocket icon there. Not everyone may want to save their articles on another service, but pretty much everyone needs to send a file privately to someone else every now and then. So something like this should be super-easy to access.

I learned about Firefox Send when it launched but completely forgot about it until now. I would definitely use it more if it had an easy-to-access (read: not buried in the Settings page) shortcut in the browser.


It's a Test Pilot project, so easy-to-access may be deferred until they choose to upgrade it from Test to Release, but it is certain they'll consider ease-of-use if/when they do decide to release it.


This is actually very useful. You can send files from and to mobile devices and desktop. Delete on download doesn't seem to be a problem, just adjust your storage strategy. Security could be acceptable for most use cases. I hope they add it to the browser.


I think Mozilla should create a more privacy friendly analytics service, since they experiment with new projects


Do you know Piwik? It's a really nice open source, self-hostable alternative to Google Analytics. I've been running it for years; very stable and great insight.

www.piwik.org


I don't think Mozilla should do one thing at a time.


I've just checked the code. It indeed deletes the file once a download has been completely consumed, so it can't really replace other file-sharing services. BUT I wonder how it deals with race conditions? My guess is that if many people start a download at the same time, they'll all be able to complete it before the file goes away. At least that's how I'd see it working with S3 or local storage.

You could abuse it by sending the same file many times in order to create lots of download links; but there's little to gain in bandwidth savings: you might as well run your own server. The only advantage I'd see is hiding your IP address, but then you could also run a tor hidden service. The other would be bandwidth amplification by synchronizing all the clients (for big files).


So, I just tested, and the race-condition method does work. I was able to download a 200MB encrypted file three times, and the copies match. You could imagine a site operator with hundreds of users uploading a file once every hour or so and lots of synchronized users downloading it for each generated link.
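Roughly how such a test can be run: kick off several downloads of the same one-time link at once and compare the results. The URL below is only a placeholder, and this fetches just the ciphertext; decrypting would still need the key from the fragment.

    from concurrent.futures import ThreadPoolExecutor
    import requests

    url = "https://send.firefox.com/..."        # placeholder for the real download URL

    with ThreadPoolExecutor(max_workers=3) as pool:
        blobs = list(pool.map(lambda _: requests.get(url).content, range(3)))

    print(all(b == blobs[0] for b in blobs))    # True if every copy matches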


The sender could open a download connection and stall it as far as possible (maybe keep the download rate shaped to some bytes/second, to prevent an inactivity timeout). That would open a large time window for further downloads.


I assume the file would be deleted as soon as any client finishes downloading. (If the files are backed by regular files on a unix-like system, then the existing open handles to the file will keep working, allowing currently-connected clients to finish downloading, but preventing any new clients from starting the download.)
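This is easy to see on a unix-like system: an already-open handle keeps working after the name is unlinked, while new opens fail. A quick Python demonstration with a throwaway file:

    import os

    with open("demo.bin", "wb") as f:
        f.write(b"payload")

    fh = open("demo.bin", "rb")
    os.unlink("demo.bin")          # name removed from the directory
    print(fh.read())               # b'payload' -- existing handle still reads fine
    try:
        open("demo.bin", "rb")     # a *new* download attempt would fail here
    except FileNotFoundError:
        print("gone for new readers")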


Indeed, I tested this too, but forgot to mention. That's how it works.


It seems nice, but I think it should be made more explicit upon downloading, that you can only do that once. I can see myself e.g. downloading a file I received on my phone to take a quick look, intending to then download it again on my computer later. I would be surprised to see it just gone.



One thing that could be improved with this is to have an option for a human readable/typeable link. I wanted to quickly transfer a file from my desktop to my phone. Used Send and realized I didn't want to type that cryptographic URL.

I ran the Send link through Typer.in (specializing in hand-typed urls) and it worked as I initially expected. However, it would be nice if Send had this functionality by default.


Typer.in stores the link you give it, and returns a human-readable lookup link. The whole point of Send is that the link includes the secret key to decrypt your file. Whoever you give that secret to, including Typer.in, can get the file.

The Send link must include the secret key, because no one else should get it, and that key must be of sufficient length to protect your file. Thus human-readable-izing it could do nothing to decrease its complexity and would just result in a huge string of words that is just as much of a pain to type in.


At least at the moment, downloading the file once removes it from Mozilla's servers. So in the event typer.in used the URL you provided to download the file, you would know as the file would no longer be accessible.


The private key is in the url, and mozilla says explicitly that they don't know this private key. They could shorten the url but then they would have the private key.


You could have also just generated a QR code of the URL and scanned it with your phone, if you didn't want to send it electronically. I'm sure they expect users to just email these links to their recipient.
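For the desktop-to-phone case, generating the QR code is a one-liner, e.g. with the third-party `qrcode` package (the link below is just the example format from this thread):

    import qrcode

    url = "https://send.firefox.com/download/xxxxx/#yyyyy"
    qrcode.make(url).save("send-link.png")   # scan with the phone's camera app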


Well, if you use Firefox on your phone you can leave the tab open on your desktop and choose this tab on your phone.

Or, if you are a Telegram user, you can send the link to yourself.

You can also use your mail inbox.


Yes. My workflow is to get the link from Firefox Send (which I love btw), email the link, open email on other device, click link.

There are, I suppose, two use-cases: 1) to send to yourself, 2) to send to others. The second use-case is fine under the current workflow, but for the first use-case the workflow is annoying.

If using Firefox Sync, perhaps the link could be synced over, maybe?


If you only need to send text, https://cryptopaste.com does everything client side and shows the ciphertext in a "staging area" before you commit it to the cloud.

You can instead save it as a self-decrypting document, attaching it to email, copy to thumbdrive, upload dropbox, etc.


Great implementation, but it doesn't work with Greek; it spews out garbage. Here it is against the original text in notepad.exe: https://i.imgur.com/n1WCsnZ.png


If you have a chance, please try again. UTF-8 support has been added.


It works, thanks!


Thanks for the report (especially the effort to take a screenshot!), I'll look into it!


This is technically a dupe of https://news.ycombinator.com/item?id=14901998 from a couple months ago.


The 1-download limit looks really problematic to me. Sometimes downloads just do not start or get aborted, and then everything needs to be done again.

Something like a 3-download limit would make this more usable!


No, the file is only deleted once all of it has been transferred.


These services get extremely abused; they'd be 99% piracy within minutes. But 3 is reasonable.


Why do the Mozilla people keep doing this sort of thing? Aren't they supposed to be making a good browser?

I remember them telling me they are now going back to their core competences. I think it was after Firefox OS failed.

Not trying to piss on anyone's parade here, just wondering how this kind of thing keeps happening. I was wondering the same thing when Mozilla added Pocket and now Cliqz to Firefox.

What is the rationale here? Do they have leftover money they need to spend before January 1st or something?


The engineers on the product are listed as two interns. There are eight people on it total, and I bet most of them aren't working on it full time. This hardly seems like a massive waste of resources.

https://testpilot.firefox.com/experiments/send


What you're telling me is that I'll see this at the top of HN in another year as the vector of some major security breach.


That would require people actually using it. /zing


Perhaps peruse this: https://www.mozilla.org/en-US/mission/

Firefox is Mozilla's flagship, and by far the largest way in which we achieve our mission, but our goal is a healthy and open internet.

Additionally, this is a great way to determine whether something like this would work well as an in-browser feature, and we've built it in such a way that it works in more browsers than just Firefox on day one.


> ...and we've built it in such a way that it works in more browsers than just Firefox on day one.

Sure wish other browser vendors would consider other browsers when releasing their products.


Google won't because the browser isn't their product. Your data and attention are their product, which they acquire in exchange for a free browser.


Unsure why this seems controversial. Google is an advertising company, not a Web browser company.


An advertising company with 60,000 employees whose interests overlap with this forum ;)

Most of the wealth in silicon valley comes from productizing eyeballs.

In related news, this just triggered someone into downvoting my entire post history!


just bring back my goddamn group tabs, and no I don't want to install an extension.


Because browsers have evolved beyond applications to serve up static web pages, and convenient features like this are a selling point. Google has created an entire "OS" ecosystem built on a port of their web browser, and they've been pushing people to use it for years. I think I'll give Firefox a pass at adding some super neat and useful features from time to time.


This isn't a browser feature, it's a webapp


Software development doesn't always (usually doesn't, in fact) work that way. After a point, throwing more money and engineers at Firefox is unlikely to speed up its pace of development, or improve its quality.


Exactly, a company should only do one thing, and just stick to it. Never try to expand. That worked for Apple and Microsoft and Google and Amazon.


It's possible this is a test for a feature they're considering adding to the browser.


I think this is for free advertisement.


I have to agree with your concerns. All they have to do is make a good web browser and they have the funds to do that. It concerns me that they are wasting time, energy, and money chasing rabbits.


I've just uploaded a 140MB file from my mountain cabin (in Viet Nam) and it was lightning fast. I then gave the link to a friend of mine residing in Melbourne, Australia and his download speed was blazing fast as well. Would be keen to learn more about the infrastructure setup behind this service to achieve such a good performance.


Probably Amazon.


This has been out for a while. I use it occasionally, but a single download is a narrow use case.


> a single download is a narrow use case.

Is it really? I mean, if I wanted to send a file to someone via Google's e-mail servers or a chat application like Facebook Messenger but wanted to make sure it didn't stick around for those two companies to data mine, this seems like it would do the trick.

From Mozilla's perspective, doing "one upload, one download" also kind of solves the problem of becoming a new "megaupload" for illegal content. Not the problem of the illegal content being there (since it's unsolvable), but the accountability of being the party responsible for spreading it around.


I've used this once or twice a week since it was announced a month or two back here on HN, and love it. It's my go-to way to send any file that's > 10MB.

I remember doing some mortgage applications last year and being frustrated that the built-in Zip encryption in Windows is weak/broken, and begging them to have a secure file upload feature.

With this, it's all set - as long as the recipient picks it up within 24 hours of course. 48 would be better but I'm not quibbling, I can always resend and if 24 is what they need to keep the service up, 24 it is.

This is simple and awesome and I appreciate everyone who worked on it.


For many users, a single download is the most common case.


What does "it's best to keep your file under 1GB" mean? Can I send 2GB+ files?


Can anyone confirm whether this works in Safari/iOS now? It seemed great, but when I tried to send some files to friends to promote it, it completely shat the bed and my mobile friends ended up getting nothing but garbled text when the (fairly large) downloads finished. It was quite frustrating since the upload was painfully slow.


Seems to work fine on my end.


How does this compare to Keybase folders?


Pushbullet offers a great service. I'm not related to the team, but really enjoy the product. It's a browser add-on, a desktop app, and a standalone mobile app. Easy as pie, and now they have 'Portal', which does local-network direct transfers.


I made a subreddit for anyone to upload any file and the first person who accesses the link gets to download it.

https://www.reddit.com/r/send/


Is this a Wetransfer killer?


No, since it's just downloadable once.


meh, Mega.nz is still better


Just my experience/test: it takes longer to upload the file to Mozilla than it would have taken to send it directly to a friend's DSL.

edit- Download tested now too, not bad, though nothing to write home about.


What software did you use to create a direct connection to your friend?


https


That's a protocol, not a piece of software.


You must be great at parties. The software is irrelevant imo. Indicating the same protocol as FF Send is imho more relevant.


The whole point of this discussion is that sending files to another computer is still cumbersome. Software is entirely relevant. So if you're complaining about speed of FF Send, you should specify what you're comparing it to.


lol no.. my point was that it wasn't as fast as I would expect a service from Mozilla to run, and broadband connections here are a lot faster. Second, it wasn't a complaint, just an observation.

And if you really think one 'https'-enabled webserver software on a home subscriber cable/DSL connection is much faster than the other for a single download, then I will have to inform you that you are mistaken.

And setting up a simple webserver to exchange files isn't that hard at all. To say file sharing is cumbersome is a bit stretching the truth.


> The whole point of this discussion is that sending files to another computer is still cumbersome.

No, the discussion is about Firefox Send generally. The cumbersomeness discussion is a specific subthread.


In order to take this further, this functionality needs to be part of the browser so that one can trust the page is not maliciously modified in transit or at the server.


Mandatory XKCD link: https://xkcd.com/949/


I need something like this to send myself links from my phone to my pc.


Chrome, Firefox, and the Samsung browser all have bookmark sync.


So, where is the money coming from? Who's the product here?


When thinking about a Mozilla offering, it is healthy to think beyond "money and product" as this type of analysis will usually leave you with a conclusion of "this doesn't make business sense".

Mozilla is in the game of keeping the internet healthy. Part of that involves products, such as Firefox and its quest to recover its user base. Part of it involves money, such as MOSS awarding grants or prizes for FOSS projects they use and see value in.

But most of Mozilla's actions can be judged by how close they bring Mozilla to achieving its mission as stated in the Mozilla Manifesto[1]. In the case of Firefox Send, it reduces the friction of sending files and protects user privacy, so IMHO it advances item #4 of the manifesto: "Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional." It also serves as a nice branding reminder: since it works with other browsers, it makes people using some other solution remember Firefox...

[1]: https://www.mozilla.org/en-US/about/manifesto/


Not sure how true that is anymore. They seem to be trying hard to squander their reputation. First their MITI officially signaled that they now consider themselves a political force; then the whole Cliqz debacle, following shortly after, showed their willingness to sacrifice their principles.


I dunno about all that :-)

MITI is Mozilla trying to keep the internet credible as a source of news and information. This is in keeping with Mozilla's core principles. Give the blog post a careful read, as it might change your mind: https://blog.mozilla.org/blog/2017/08/08/mozilla-information...

Re: Cliqz, working with a search-oriented company might be a way that Mozilla can help avoid monopolies in search. The goal is to find new ways to keep the web open, so that new challengers have a level playing field vs. dominant companies.

Andreas wrote a good blog post about the worrying monopolistic trends in search a couple years ago: https://andreasgal.com/2015/03/30/data-is-at-the-heart-of-se...

Edited to add: Cliqz is a very privacy-conscious company, and I think that respect for the user was a major factor in Mozilla partnering with Cliqz. This is just to say that, AIUI, the partnership was evaluated, again, in keeping with Mozilla's principles.

Also, here's the Mozilla manifesto, which documents those guiding principles: https://www.mozilla.org/about/manifesto/


How has Mozilla ever been anything but a political force? Some 12 years ago, when Firefox came out, it was accompanied by probably the largest marketing campaign for an open project, barring Wikipedia.

They followed that up with some ten years of "web standards! web standards! open web standards!". This is the game Mozilla plays. That's why they won on the web: everyone does it their way now, even if their browser isn't dominant.


What's MITI stand for here?


Mozilla Information Trust Initiative


Don't forget their switch to a walled garden for extensions in order to protect users from themselves.


That is FUD and also a lie. You can still distribute your extensions on whatever site you want, you just need to sign them on AMO.

The workflow is:

1 - Build the WebExtension
2 - Upload it to AMO
3 - Choose "distribute on AMO" or "sign and distribute on your site"


If I need to get Mozilla's approval before I can run code in the browser running on my machine, then I see a wall. That they've automated it so much as to mitigate the protection a walled garden normally gives almost makes it worse.


You can install Firefox Beta or Nightly and have unrestricted extension installation options (or use a Linux distro's packages: most such Firefox packages do not require signed extensions).

It's a measure to prevent casual trojans; there are many ways around it that power users and developers can employ.
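
For reference, the pref involved is xpinstall.signatures.required; it can be flipped in about:config or via a user.js line in the profile directory (IIRC only some builds honor it):

    // user.js — assumed to live in the Firefox profile directory.
    // Honored only on builds that permit unsigned add-ons (e.g. Nightly,
    // Developer Edition, unbranded builds); release builds ignore it.
    user_pref("xpinstall.signatures.required", false);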


>or use a Linux distro's packages: most such Firefox packages do not require signed extensions

That's actually pretty cool. Can you give me an example of a distro that does that with Firefox (with FF branding)?


Both Debian and Arch do this.


Power users can use unsigned add-ons in Dev Edition. Regular users would be exploited by any middle ground solution in the main release. It's unfortunate the web is a dangerous place for the average user.


Asking people to use the beta as their main driver is part of the reason why people think,

>They seem to be trying hard to squander their reputation.

The other part of it is Mozilla building the browser for the lowest common denominator at the price of user control.


Firefox Send is still an experiment at Mozilla (as noted on the site). The question now is whether it is popular and useful, and how much it actually costs to run. Those questions aren't answered yet, and there are no big schemes behind it.

If Firefox Send graduates from being an experiment, it can recoup its costs implicitly if it increases engagement with Firefox itself (which does produce revenue), or awareness of Firefox and Mozilla. Firefox's revenue comes primarily from search companies who pay for preferred placement in the browser. In that sense, people who search using Firefox are ultimately the product.

However, Mozilla is a mission-led company, and so profit maximization isn't ultimately the answer to why Mozilla does something. I think it's accurate to say that Mozilla is trying to maximize its positive impact. Revenue certainly enables that impact. But direct impact is also worth something, and it's not necessary for everything Mozilla does to involve profit maximization.


Good question.

In this instance, the product is Firefox. The money is coming from Mozilla's usual sources of revenue, i.e. Firefox.


How is this different from Mega.nz?


But why?


Because it's easier to do this than to give mobile Firefox users the option to pull down to refresh. So enjoy Brave or the Samsung browser and stay away from Firefox.


I don't understand why this is called Firefox Send. Shouldn't it be called Mozilla Send?


Probably for branding purposes. Even non-techies recognize the Firefox brand. Mozilla – not so much.


Eh, what's Firefox's current market share? I would not bet my money that many non-techies recognize Firefox at all. It's a pretty niche nerd product for people with an add-on fetish; the rest of the world just uses Chrome/IE/Edge.


Test Pilot experiments are by the Firefox team. They're usually more embedded in the browser, is all.


I wonder how many downloads can be done before it expires


Another commenter found that you can initiate multiple downloads and complete them all, as long as you start them all before any one finishes.


One download


[flagged]


Mozilla has always been primarily a political organization.

To answer your question: while I'm not a Mozilla employee or developer, I'd say yes, since the contents are encrypted end to end, so they can't actually tell whether the user has different political opinions.


Ok, since you've ignored our requests to stop breaking the site guidelines, we've banned this account. If you don't want to be banned on HN, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.


Let me guess, with free Pocket® integration?

Mozilla blew it for me recently in so many ways, I am taking them about as seriously as I would Facebook right now.


How can you compare the Pocket thing with Facebook? The good kid messed up this time. And you can simply ignore it; you don't have to use it. Yes, like you can decide not to use Facebook, but then all your friends and family use it, or use Instagram or WhatsApp, and force you to use it. That includes uploading your complete contact list two or three times.

Do you have to use Firefox because your family uses it? I don't think so.

How do those two compare?


They are repeating it with the Cliqz integration now. I love Firefox, but these decisions are baffling!


Months after the whole Pocket thing, they destroyed the Firefox on Android homepage to replace it with 60% "Recommended by Pocket" articles. I'm not sure how they determine what's recommended for me, but it seems an awful lot like sending browser history to this company. And even if not, I'm not interested in this bullshit; I just want my bookmarks home screen (with HN and some other sites I check regularly). Oh, and the WebExtensions thing, which breaks every single reason I'd use Firefox over Chrome. All the extensions that do not exist for Chrome because Firefox's model was more powerful can no longer exist in the new Firefox.

It's terrible decision after terrible decision. In another HN thread someone called these the intentional decisions of someone wishing Mozilla harm. I don't think there are bad intentions, but I can see why it seems like it.


I've learned not to trust companies when they tout "completely encrypted" and "totally private".

Show me the open-source code; otherwise, they're likely collecting data in some way to pay for hosting this web application.



Is there something about Mozilla that I'm missing? They're absolutely an OSS group in my mind, and the source is on GitHub, as someone else pointed out.


Do they share my files and "anonymized" upload history with a third party (edit: was "advertising company") for this service as well?


You can see exactly what they collect here https://github.com/mozilla/send/blob/master/docs/metrics.md and decide for yourself.

I think it's misleading to refer to Google as an advertising company when talking about Analytics since this data won't be used for targeting ads.


I was actually referring to this incident where Mozilla started shipping an opt-out "addon" in Firefox that automatically shares your browsing history with a third party:

https://www.reddit.com/r/firefox/comments/74n0b2/mozilla_shi...


OK, well this doesn't do that.


[flagged]


This type of comment, without any reference or source, is the kind of stuff that is destroying the internet and good journalism. It is users like you, spreading FUD and fake information, who make it really hard for people to discern signal from noise online.

First, you can read all about MOSS[1] which is a program inside Mozilla that gives grants to open source projects.

Part of Mozilla's current efforts, alongside many other foundations, is to provide and support means of safe internet communication and tools for people who are in danger of surveillance, such as political dissidents in dangerous countries, journalists doing investigative work, and many others. You can see a lot of the good stuff being worked on for journalism at the Coral Project[2] and the Knight-Mozilla fellowship[3].

So it is part of the Mozilla Foundation's mission and ethos to back projects that make the internet healthier for ordinary people. In an age of the state spying on everyone, rampant profiling in the name of advertising, and other dangers, giving a grant to a project providing "a coordination platform used by activists across the political spectrum, to improve the security of their email service" fits squarely within that mission. By the way, Mozilla also donates to Tor and other projects too...

Now, in all I have said above, do you know what is missing? Sharing your data... Yes, Firefox has telemetry and Mozilla sites have analytics scripts, but those are there to make Mozilla more efficient. Sharing your personal data is not. Mozilla is not in the business of profiling you or selling you out. STOP SPREADING FUD.

PS: I am a Mozilla community member; if I can read the source code and the discussions around policy, so can you. Make an informed decision before spreading FUD.

[1]: https://www.mozilla.org/en-US/moss/
[2]: http://coralproject.net/
[3]: https://opennews.org/what/fellowships/info/


What you're referring to is this: https://blog.mozilla.org/blog/2017/10/03/mozilla-awards-half...

RiseUp seems to be an organization providing VPNs, email service, Jabber, and general security advice.

Quoting from https://riseup.net/en/about-us (at least the part you seem to be worried about):

> The Riseup Collective is an autonomous body based in Seattle with collective members world wide. Our purpose is to aid in the creation of a free society, a world with freedom from want and freedom of expression, a world without oppression or hierarchy, where power is shared equally. We do this by providing communication and computer resources to allies engaged in struggles against capitalism and other forms of oppression.

I'm interested: what exactly do they do wrong, in your eyes, to deserve such passion?


I guess you’ve got nothing to back up that assertion?


In another comment you say they donated money to that organisation. Why do you think user data would be shared with them?


Because by donating to RiseUp, Mozilla have made themselves a political organisation, and political organisations do things for political ends, not privacy ends.


Defending privacy rights is a political goal.



