The last time this was posted I hacked up a quick and dirty Python client for Send: https://github.com/nneonneo/ffsend. I just updated it for recent Send changes, which streamlined the crypto and removed some redundancy.
Firefox’s JS client requires the whole file be in memory in order to perform the encryption, and has to decrypt the whole file in memory in order to download it. My client doesn’t have that limitation so it could theoretically upload much larger files (subject only to the server’s upload limit).
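For a rough sense of what the streaming approach looks like, here is a minimal sketch (not the actual ffsend code; it assumes Python's third-party cryptography package and AES-GCM, and simplifies away key derivation, framing and the upload protocol) of encrypting a file in chunks so it never has to fit in memory:

```python
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_stream(in_path, out_path, key, chunk_size=1 << 20):
    """Encrypt a file in 1 MiB chunks so the whole plaintext never sits in memory."""
    iv = os.urandom(12)  # 96-bit nonce for GCM
    encryptor = Cipher(algorithms.AES(key), modes.GCM(iv),
                       backend=default_backend()).encryptor()
    with open(in_path, "rb") as src, open(out_path, "wb") as dst:
        dst.write(iv)
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(encryptor.update(chunk))
        encryptor.finalize()
        dst.write(encryptor.tag)  # 16-byte GCM authentication tag appended at the end

key = os.urandom(16)  # 128-bit key; with Send, this is what ends up in the URL fragment
encrypt_stream("bigfile.iso", "bigfile.iso.enc", key)  # hypothetical file names
```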
The key is in the hash (the URL fragment), which isn't sent over the wire when loading a page. Now granted, it's accessible via location.hash in the client, but one has to trust Mozilla's JavaScript not to read it.
Exactly. One has to trust Mozilla every time one visits the page. They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?
Web-page-based crypto is fundamentally insecure, and Mozilla is committing an extremely grave error in encouraging users to trust it (as they also do with their Firefox Accounts). Security is important, and snake-oil solutions are worse than worthless.
Spec out and implement resource pinning, already. Like RFC 8246, but authored more with the user's interests in mind, rather than the service's.
As a show of nothing-up-the-sleeve, a service asserts that it's in a stable state and will continue to serve exact copies of the resources as they exist now—that they will not change out from beneath the client in subsequent requests. When a user chooses to use resource pinning, the browser checks that this is true, and if it finds that a new deployment has occurred, it refuses to accept it without deliberate consent and action from the user (something on par with the invalid-certificate screen).
This means that for a subset of services (those whose business logic can run primarily on the client side), the users need not trust the server, they need only to trust the app, which can be audited.
When deploying updated resources, services SHOULD make the update well-advertised and accompany it with some notice out of band (such as a post about the new "release", its changelog, and a link to the repo), so the new deployment may be audited.
When new deployments occur, clients SHOULD allow the user to opt for continuing to use the pinned resources, and services SHOULD be implemented in such a way that this is handled gracefully. This gives the user continuity of access while the user (or their org) is carrying out the audit process.
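Nothing like this exists in browsers today, but as a very rough sketch of the check being proposed (plain Python, with made-up resource hashes), the browser would fingerprint the whole set of served resources and refuse silent changes:

```python
import hashlib
import json

def deployment_fingerprint(resources):
    """Hash the entire set of (URL, content-hash) pairs a site serves, in canonical order."""
    canonical = json.dumps(sorted(resources.items())).encode()
    return hashlib.sha256(canonical).hexdigest()

# Resources the browser saw when the user pinned the site (content hashes are made up).
pinned = {"/index.html": "ab12...", "/app.js": "cd34...", "/app.css": "ef56..."}
# Resources served on a later visit; /app.js has changed.
current = {"/index.html": "ab12...", "/app.js": "99ff...", "/app.css": "ef56..."}

if deployment_fingerprint(current) != deployment_fingerprint(pinned):
    raise RuntimeError("new deployment detected: require deliberate consent before accepting it")
```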
Areas where this would be useful:
- HIPAA compliance
- Web crypto that isn't fundamentally broken
- Stronger guarantees for non-local resources used in Web Extensions—in fact, the entire extension update mechanism could probably be more or less rebased on top of the resource pinning policy
This sounds a lot like Beaker [1], a browser built on Dat [2]. It allows creation of shared resources (web pages) with immutable version tracking, among other things.
This would also have the added bonus that you could reload such pinned resources from anywhere once you have the pin, even without a TLS setup or having to trust certificate chains.
Caching proxies would suddenly become viable again, because only the first download has to go through HTTPS, while "I don't have this in the local cache anymore, can you serve me this content?" requests could go through sidechannels or outside the TLS handshake or something like that. Caches could even perform downgrade non-attacks.
How many pins would you expect a browser instance to have? I feel like most of the time the pinned content could fit in the browser cache and make this variety of proxy-side caching pointless.
Immutable content is a prerequisite for pins. The caching benefits mostly fall out of the immutability, not the pinning. So as long as the hypothetical standard allowed one to be used without the other, additional uses could fall out of that stack.
When it's secure, it's an improvement; if Mozilla, a Mozilla employee or a government which can compel Mozilla employees chooses to make it insecure, then it's worse than a service that is openly insecure. At least with something like Dropbox, users (should) know that they are insecure and should not transmit sensitive files.
> If you have a better solution in mind for the average user crowd, feel free to suggest it, of course.
The functionality should be built into Firefox, so that users can verify source code & protocols once and know that they are secure thereafter.
And trust that Mozilla won't randomly distribute a backdoor to 1/n of users?
The means you're suggesting aren't possible to implement for most people today. If you care about real-world impact I would recommend thinking of other strategies.
Reproducible builds effectively solve this problem, by making it possible to actually verify that you got the same binary (from the same source) as everyone else. Not to mention that if you're on Linux, distributions build Firefox themselves and so attacks against users need to be far larger in scale for 1/n attacks.
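Once a build really is reproducible, the verification step is nothing more exotic than comparing digests of your rebuilt artifact against the published one; a trivial sketch in Python (with hypothetical file names):

```python
import hashlib

def sha256(path):
    """Stream a file through SHA-256 so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical artifacts: your own rebuild from source vs. the vendor's published binary.
print(sha256("firefox-rebuilt.tar.bz2") == sha256("firefox-published.tar.bz2"))
```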
It solves the problem for users that use Linux. If other operating systems cared about making software distribution sane, they could also use package managers. It's a shame that they don't. As for verifying reproducibility, if you're using a package manager then this verification is quite trivial (and can be done automatically).
Solving the problem for proprietary operating systems that intentionally have horrific systems of managing the software on said operating system is harder due to an artificial, self-inflicted handicap. "Just" switching people to Linux is probably easier (hey, Google managed to get people to run Gentoo after all).
To repeat myself, for real world impact the only metric that counts is how many actual people benefited, not how many people benefit in ideal circumstances.
If your solution is to switch the entire world to Linux then you may want to figure out how to do that. Many have tried and failed before. Good luck.
Well, there's reasoning which states that bad security is worse than no security.
The reason is that if you trust the site to be secure, a compromise can be devastating, whereas with no security you're typically more careful.
Even then, you are trusting the app to do what it says it's going to do. The only way I feel 100% safe is encrypting the file manually before sending (through whatever platform), and sharing the key through some other medium (preferably word of mouth).
As a Windows user I mostly use 7-zip for this purpose, or the encryption plugin in Notepad++ for text.
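In the same spirit, a minimal sketch of encrypting a file yourself before it touches any sharing service (this assumes Python's third-party cryptography package rather than 7-zip, and the file name is hypothetical):

```python
from cryptography.fernet import Fernet

# Generate a key once and share it over some other medium (preferably word of mouth).
key = Fernet.generate_key()
print("key to share out of band:", key.decode())

# Encrypt locally; only the ciphertext is ever handed to the sharing platform.
with open("contract.pdf", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
with open("contract.pdf.enc", "wb") as f:
    f.write(ciphertext)

# The recipient recovers the file with Fernet(key).decrypt(ciphertext).
```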
If the app is free software it can be audited. The problem with web-based crypto is that you're downloading a new program every page refresh and executing it immediately. If you're worried about a free software app not encrypting things properly, I'd be worried about the tools you use manually doing the right thing as well.
While I agree that doing it manually is the only reliable way if you're going to send it over an insecure channel, if the channel is secure then it's much easier for an end-user to just send it in the app.
And that's still just a feeling. You just feel you can trust the encryption application, which can be changed at any time in the future in a way that is no longer secure.
You just need to trust. So what's wrong with trusting Mozilla, if you can so easily trust your encryption/decryption software?
The way this gets solved in the real world is through contracts - they have a certain contract with you, and if they break it, they can get sued and lose a lot of money.
This is one of the goals of the legal system - make it so we usually trust each other. There are no real long-term technical solutions to this problem.
So if you want to make sure you're safe, read their EULA or equivalent.
Or you can design a system (like any number of crypto systems) that aren't vulnerable to these sorts of attacks. A compromised CDN could cause this problem, which now means that Mozilla would be liable for things that they don't administer. And resolving technical problems through the legal system never works well. If you can design something so users don't have to constantly trust you, then that's a much better solution.
Not to mention that Send likely doesn't have a warranty (like most software under a free software license).
I don’t think the second paragraph is a logical extrapolation of the first. If this is using the WebCrypto API (which it appears to be doing), then trusting this browser-based solution isn’t fundamentally different from trusting an installed application that can update itself.
Using WebCrypto doesn't defend against the insecurity: their JavaScript code can send a copy of the file anywhere it likes. Mozilla can, if it wishes or if it is compelled to, deliver malicious JavaScript which does exactly that to a single targeted IP address, or just every once in a while in order to find potentially interesting files.
Using in-web-page crypto gives users a false sense of security. This is, I believe, a very real problem.
It's true; you're trusting Mozilla to deliver secure code. You'd be placing a similar amount of trust in Mozilla by using Firefox, since browsers automatically update themselves these days.
What WebCrypto guarantees is that it is truly Mozilla's code that you're trusting, since the WebCrypto APIs are only available in a secure context (HTTPS or localhost).
> You'd be placing a similar amount of trust in Mozilla by using Firefox, since browsers automatically update themselves these days.
No, because I use the Debian Firefox, which means that I'm trusting the Debian Mozilla team. I feel much better about that than about directly trusting Mozilla themselves.
About the auto-updates: CCleaner recently had an incident where their version .33 (or thereabouts) had a backdoor injected by some third party. If you downloaded version 34 you were safe. If you downloaded 32 and configured it to auto-update, you got the malicious update. But as far as I know that didn't affect the auto-update setting itself, so if you had it on, you would have automatically gotten a fixed, clean version within about two weeks.
Point: the worst situation was if you did not have auto-updates on and downloaded v. 33. Then you were stuck with that until somebody told you you had malware on your machine.
That's a very different position than the one you staked out above. It's not browser-based crypto you have a problem with, its crypto performed by an application whose patching is done outside of your control.
That's reasonable for a technically savvy user, but the vast majority of users do not use Debian. They use Windows or OSX and rely on trusted corporations like Apple, MSFT, Google, and Mozilla to keep their systems patched.
We trust applications like that all the time. Browsers update themselves, and we trust them to secure our communication with banks, governments, health providers, etc. Most browsers now also store passwords in an encrypted secret repository. If you're on Windows or OSX, your OS is also constantly updating itself with closed-source binary blobs.
I mean, sure, you can never use an auto-updating application again and always manually review system updates before installing them. But realistically, I don't see anyone besides Richard Stallman adopting that lifestyle.
> We trust applications like that all the time. Browsers update themselves,
Not if you're using a Linux distribution's browser packages (we patch out the auto-update code because you should always update through the package manager). And distributions build their own binaries, making attacks against builders much harder.
While people might trust auto-updating applications, they really shouldn't. And there are better solutions for the updating problem.
Sure, nobody ever said otherwise. But that doesn't mean it's a good idea.
The software I use is open-source, so I can see what I'm running, and what updates I get. I also don't use any auto-updates. The web is inherently different in that I can't really guarantee that the code I get is going to be the same code that you are getting.
Right. Point is there are two types of bad downloads. You can download a version which is not malicious but is insecure. Then auto-updating it makes it more secure.
Or you can auto-update to a version which is malicious. Then you are screwed. But the version you downloaded to start with might already have contained the threat. So just saying "don't auto-update" does not really protect you from malicious versions. Auto-updating does mean that you get updated security fixes, making you less vulnerable.
The original non-updated version can be malicious even with a vendor you think you should be able to trust because it is a popular product used by many others:
> They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?
How could they do that easily? Their source code is public, and many third parties work on it and produce their own compiled versions - plus any security people tracking unexpected connections would catch it.
>> one has to trust Mozilla not to do that.

> Exactly. One has to trust Mozilla every time one visits the page. They could easily configure it to be malicious one time out of a million (say); what are the odds that they would be caught?
Sure, but that's open source and you can disable automatic updates, meaning they can't change the code whenever they feel like doing so. And if they do, the code will be kept in the source code control history, and will eventually be caught.
It's wildly different from a JS file that's loaded every time you visit the website.
It's pretty close to being the same thing. You're downloading Firefox at some point and not verifying the binaries you get match the source.
Unless Firefox provides fully reproducible builds on your platform from an open source compiler, you have no guarantee that the binary you have is built from the public source code. You have to trust Mozilla.
Without reproducible builds, compiling the source yourself would be the way to go.
Anyway, I agree that it should be clear that this file sharing service, while convenient, essentially requires you to trust Mozilla with your data. The claim "Mozilla does not have the ability to access the content of your encrypted file..." is fragile.
If you are running on Linux, then Firefox is built by your distribution. So attacks like that are much harder to accomplish, because the distribution of software like Firefox is (for lack of a better word) distributed. I'm not going to get into all of the techniques that distributions use to make these things safer; the crux of the point is that you should always use distribution packages, because we generally have much better release engineering than upstream (as we've been doing it for longer).
But software loaded from web pages is not immutable. It could be manipulated at any given time. So even if someone audits the code you still cannot trust it, because all the audit would show is that the specific downloaded version the auditor saw was non-malicious.
As long as websites have no fingerprint that encompasses all loaded resources, and can't be pinned to that fingerprint, crypto in the web browser is not trustworthy.
Subresource Integrity (https://developer.mozilla.org/en-US/docs/Web/Security/Subres...) can help verify that everything loaded is what it's supposed to be. However, you'd still need to verify that the set of things loaded is the set of things you expect, not just that each individual resource is intact, and that requires an out-of-band checklist or extension...
SRI alone is insufficient. Maybe a strict CSP policy that exclusively allows hash sources and forbids eval could serve as an indicator that the scripts executed on a page are immutable. Probably needs some further conditions to make it airtight.
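For reference, the SRI integrity value is just a hash of the resource, base64-encoded with an algorithm prefix; a minimal sketch of generating one (plain Python, hypothetical bundle name):

```python
import base64
import hashlib

def sri_value(path):
    """Compute a Subresource Integrity value of the form sha384-<base64 digest>."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Paste the result into <script integrity="..."> or into a CSP hash source.
print(sri_value("send.bundle.js"))
```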
In that case, wouldn't an alternative like OnionShare always be safer? The code is open source, it's a desktop app, and the source code is obviously immutable on your computer.
Thanks. I was just looking at the screenshots and didn't see any hashes, but when I tried it out and copied the link to the clipboard I saw the form "https://send.firefox.com/download/xxxxx/#yyyyy".
If one were to build a marketing spyware add-on to analyse user traffic from within the browser and send all visited URLs to some remote server, would those sent URLs then possibly contain the anchor?
Yes, but would that matter if these are one-time downloads? You couldn't go get the file even if you did grab the data needed to do so... or am I missing something?
The add-on could tell Mozilla which files to keep because it saved the decryption key for those files. Mozilla could then select those files to decrypt, e.g. to prove to authorities that its file-sharing service was not used for illicit purposes. Alternatively, the add-on could filter the IP addresses used to upload the files for potentially sensitive blocks and then tell Mozilla to decrypt the files uploaded by people from such blocks, e.g. in an attempt to engage in corporate espionage (of course one shouldn’t use third-party services for sensitive files in the first place, but if you have to use a third-party service, certainly an ‘open-source’, ‘private’ and ‘encrypted’ one from such a reputable company as Mozilla, right?)
Several of the alternatives linked in this post’s comments make no effort to encrypt at all; they simply try to con users into sharing files with their intermediate server in plaintext, as if somehow that’s an acceptable thing to do.
If you don’t trust Mozilla or you are sharing information that a nation-state attacker would coerce Mozilla into revealing, then you’re already set up to encrypt the file first yourself - at which point you can send it with any service, include Firefox Send.
> It can, however, easily be read via javascript, so mozilla needs to be trusted in any case.
Or you (and some friends from organizations like the EFF and FSF) can read the source code to see what it does, and even compile it yourself. If you do that, you only need to trust the compiler.
No, you still need to trust that Mozilla's deployment corresponds to the publicly available release—that they aren't using a version with changes nor have they been breached by an attacker who can change it to sample 1/n transactions.
The key is in the hash. Check 0bin.net; we use the same trick to encrypt the pastebin content. The sources are available so you can see the gist of it. It's very simple code.
As I recall, there was a Bitcoin wallet service which relied on securing the access key behind the '#' in the URL for its security -- turns out it's not perfectly reliable and shouldn't be used for protecting money. Likewise for files you really need to be secure.
Since boring crypto shouldn't have weird failure modes like this, I'm thinking this design is a big mistake?
EDIT: I think it was Instawallet and apparently while they had robots.txt set to prevent crawling, the theory was people typing their URL into Google (or Omnibar) would alert Google to the URL and it got into search results anyway.
I know that web-keys is based on the theory that, since the fragment isn't sent by User-Agents in the Request-URI, it's secure; but there are things that see the full URL which aren't conforming agents, and it just seems risky for any long-lived secret.
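For what it's worth, the split is easy to see with a URL parser: a conforming client sends only the path (and query) to the server, while the fragment stays local. A tiny sketch in Python, reusing the placeholder link form from above:

```python
from urllib.parse import urlsplit

url = "https://send.firefox.com/download/xxxxx/#yyyyy"
parts = urlsplit(url)

print("sent to the server:", parts.path)      # /download/xxxxx/
print("kept client-side:  ", parts.fragment)  # yyyyy (the decryption key)
```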
Given that it's explicitly a short-lived secret (under 24h) and single-use - so a Google crawler (or typing it into the URL bar, presumably followed immediately by Enter) would invalidate it on first access - I'm not convinced any of those concerns apply.
It really is a shame that there still isn't an easy way, that I know of, for the average user (a person whose computer knowledge extends to using Facebook) to send arbitrarily large files - one that isn't tethered to a specific cloud service and is also reliable (can tolerate the connection dropping).
It seems that BitTorrent protocols are pretty close, but I don't think there is a seamless client that allows for "magical" point-to-point transactions.
Resilio Sync (formerly Bittorrent Sync) pretty much offers that. They use relays in case both peers cannot pierce NAT, but the relays only see ciphertext. They also have apps for Android and iOS, making transfers to mobile devices also possible. Since it is closed-source software, it's up to you to decide whether you trust the software.
SyncThing is a free software alternative, but I wouldn't say that it is usable (yet) for the average user.
I haven't tried Syncthing in about a year, but to me Resilio and Syncthing look like they are from completely different classes of apps / they target different markets.
Syncthing is simply not designed for the mainstream. Resilio is. If Syncthing devs can fix that, I'll gladly start using it over Resilio with my non-technical friends.
That's great to know, and I can see there has been quite a bit of discussion on the topic. I'm glad I'm not the only one that was missing an open implementation for iOS.
Thank you! So as it is available on iOS, I need to adjust my comment to reflect what I mean. There isn't a free, open-source, community-driven implementation for iOS, in keeping with the nature of the project.
Me too. I stopped using it after I read somewhere that it could be hacked to add new peers to your connection without your awareness, which is a serious concern. I don't know how legit the claim was... but I opted to avoid the risk.
I keep mine updated to the latest version and still love it. The 'Ignore' file that is in the top-level of each share alone is worth the admission (similar to .gitignore). Dropbox/OneDrive/etc. all lose their damn minds if you try to sync a folder with 'node_modules' or 'vendor' to the share.
Both parties would have to be online for any p2p solution to work. And if they are online, there are plenty of ways to create a p2p link. It is complicated by things like NAT though. However, that is the less common case. In most cases, the receiver isn't going to be online. So you need an intermediate server, and that's how you end up with third party solutions.
Smartphones in Europe are close to being online 24/7. Energy management by OSes might limit apps in what they can do, of course.
In any case, the rise of 4G networks might lead to interesting new solutions.
Yeah, the "being online" detail seems like the problem. It would be best if the encrypted data were distributed (something like IPFS) but then why would peers host files that aren't for them.
You would need a seedbox to function as a cloud. Then when you send a file, send it to the seedbox simultaneously. This approach might also increase download speed for the receiver.
They don't currently, but Filecoin is supposed to create an incentive for people to host other people's files. Obviously, you'll have to pay some amount to have your files hosted.
AFAIR, there's already some service on IPFS which will pin files for you for money. I.e.: why would someone? Because you'd pay them for that - simple :)
I've found https://github.com/warner/magic-wormhole really useful, although it only meets 1/3 of your requirements. I'm hoping someone will add a nice GUI and re-transmission.
First you need to know a route to that computer. If it has an externally reachable IP address and you know that, then great. If it has an externally reachable IP address and a DNS entry and you know _that_, then also great.
If you don't know the IP address or domain name of the other computer then you'll have to do some kind of lookup/exchange to find it. That means some kind of centralised service to provide the lookup functionality.
If the other computer doesn't even have an externally reachable IP address then a central service is going to have to act as a connection point which you can both connect to (or provide some other method of helping the two of you connect).
I'm not aware of any entirely decentralised system which would allow two computers which are behind NAT to find and then talk to each other. Or any obvious design which would work there.
Yeah, it's initial peer discovery and connection setup (if behind NAT) I'm talking about. Once you've found the other PC and got a connection set up then all is well ;-)
Do these services also act as connectors in case of double NAT?
STUN allows all sorts of hole-punching for cases where both peers are behind NATs. For cases where the hole-punching doesn't work (a user behind two NATs might be one case), you can use TURN which is a dumb UDP or TCP relay.
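As a rough picture of the hole-punching step itself, once both peers have learned each other's public address:port (via STUN or any out-of-band exchange), each side just keeps sending UDP packets to the other; a minimal sketch (Python, hypothetical addresses, and it only works for NAT types that allow this):

```python
import socket
import threading
import time

PEER_ADDR = ("203.0.113.5", 40000)  # the other peer's public endpoint, learned out of band
LOCAL_PORT = 40000                  # the local port that was used for the STUN query

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))

def keep_punching():
    # Outbound packets create a mapping in our own NAT for PEER_ADDR, so the
    # peer's packets (it is doing exactly the same thing) can start getting through.
    while True:
        sock.sendto(b"punch", PEER_ADDR)
        time.sleep(1)

threading.Thread(target=keep_punching, daemon=True).start()

while True:
    data, addr = sock.recvfrom(2048)
    print("got", data, "from", addr)  # once this fires on both sides, the hole is open
```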
Which is a step backwards compared to bittorrent, since that can also provide decentralized hole punching, bootstrapping and peer discovery and not just the bulk data transfer. Plus being implemented at the TCP/UDP level allows more fine-grained control over the data flow, making it more friendly towards the network.
If privacy isn't your primary concern, and this is absolutely not private, you can also just use a redirection service and access .onion domains without installing Tor.
Just take any .onion domain and append .to to it, and it should work. examplelettersblah.onion becomes examplelettersblah.onion.to, and that works without needing to muck about with Tor.
This would be a horrible idea if you're concerned with privacy/anonymity. But, it'd make it easy for people to download your shared files.
Never seen that before. But running a hidden service on Tor is absurdly easy, then you just shove a file server through whatever port Tor is forwarding.
> I'm not aware of any entirely decentralised system which would allow two computers which are behind NAT to find and then talk to each other. Or any obvious design which would work there.
then you're clearly not the person we should be asking to build this kind of thing are you? :-)
my proposal actually solves two important problems with peer-to-peer systems, and I barely have to write any new code to make it work! the solution? i2p! it's an anonymous mix network, much like Tor, but completely decentralized. using an intermediary mix network fixes the nat issue and prevents you from leaking your IP address to other peers.
endpoints are identified by their public key hash. each endpoint maintains a set of anonymized routes. this routing information is stored in a Distributed Hash Table (DHT). if you want to connect to another endpoint, you lookup the route for the public key hash, build an outbound route, and you're good.
more concretely, an ipfs transfer would work by using public key hashes in place of IP addresses to identify peers and a known set of endpoint keys for bootstrapping.
- IPv6 kind of helps here, at least if we hope that no NAT standard ever makes it into IPv6. Crossing my fingers.
- There does exist at least one NAT hole-punching technique that can traverse two NATs with no central server using ICMP-based holepunching and UDP. Obviously, like all hole punching techniques, it only works on certain kinds of NATs, and firewalls can kill it.
> There does exist at least one NAT hole-punching technique that can traverse two NATs with no central server using ICMP-based holepunching and UDP. Obviously, like all hole punching techniques, it only works on certain kinds of NATs, and firewalls can kill it.
IPv6 doesn't help in corporate environments, where it's often firewalled harder than IPv4.
The fact that it doesn't support NAT is seen as a risk in such environments (which, rightly or wrongly, consider NAT a second level of firewalling). Some deployments that I've seen use only site-scoped addresses with no globally-routed prefixes. Everything Internetty has to go over IPv4, which makes the Security team's job easier: drop all IPv6 at the DMZ and drop all non-NAT IPv4.
> I'm not aware of any entirely decentralised system which would allow two computers which are behind NAT to find and then talk to each other. Or any obvious design which would work there.
I think the GNUnet has some stuff in it which can even work across different protocols, e.g. hopping across TCP/IP over Ethernet to Bluetooth to packet radio. It has some neat stuff where, as I recall, it probes out a random path, then tries to get at its intended destination, remembering successful attempts. It definitely sounds cool, but it's not currently in an end-user-usable state.
> I'm not aware of any entirely decentralised system [...]
What's the user value of having something entirely decentralized? I see the value in making sure the actual file transfer doesn't go through the central rendezvous. But I don't understand what's gained by eliminating the use of a little help setting up the connection.
https://upspin.io/ is in a super early concept stage, but is trying to solve file sharing issues. It allows for hosting on multiple platforms and tries to keep control in the hands of the users.
> It seems that bitorrent protocols are pretty close, but I don't think there is a seamless client that allows for "magical" point to point transactions.
instant.io works pretty well for me, it works on the bittorrent protocol, but over webrtc
Well, you can with FTP and UPnP. The client gets the external IP of the machine, then uses UPnP to open ports on the router/firewall, then uses PASV and connects to the destination. The destination runs an FTP server that uses UPnP to open up the incoming port, and gets its external IP and uses it for PORT mode. The end result is two individuals behind NAT sharing arbitrary files in bulk. You can even tie this into FXP to do syncing with the cloud.
Or run a local SFTP server with UPnP and avoid the extra complexity.
In terms of finding each other, the client could get its external IP, open up a UPnP port, and provide the user with a QR code or brief snippet to paste into a chat conversation, which the other user would feed to their client to initiate the file transfer.
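A rough sketch of that UPnP step, assuming the third-party miniupnpc Python bindings, a router that supports UPnP IGD, and a hypothetical port for the temporary daemon:

```python
import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200   # milliseconds to wait for gateway discovery
upnp.discover()
upnp.selectigd()

external_ip = upnp.externalipaddress()
port = 2121                # hypothetical port the temporary FTPS/SFTP daemon listens on

# Map the external port to our LAN address so the other side can connect directly.
upnp.addportmapping(port, 'TCP', upnp.lanaddr, port, 'temporary file transfer', '')

print("tell the other side to connect to %s:%d" % (external_ip, port))
# ... run the transfer, then remove the mapping:
# upnp.deleteportmapping(port, 'TCP')
```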
You could do this commercially by providing an external website with which the clients could communicate out of band to set up their transfers (for free), and (for pay) optionally provide a cloud sync service (which would help person A "pre-send" a big file transfer until person B was available to receive it).
While some of this is available today, you're right, I don't think there are accessible clients. ICQ, AIM and IRC used to be the solution, as they all did P2P file transfer. But now everything is on the web, so everything sucks.
magic-wormhole ( https://github.com/warner/magic-wormhole ) is not so far from that, just add a simple GUI on top of it (or just use the terminal and convince the user that a terminal does not have to be complicated), and you're good to go. I don't know how reliable it is w.r.t. connection drops though.
I just want to be able to right-click somewhere and have the system bring up a simple FTPS or SFTP daemon and have it try UPnP to open a firewall/NAT traversal. This daemon would spin up just for the transfer and then shut itself down once done.
Even if both parties are behind a firewall that won't respect UPnP, there's UDP hole punching which should usually work. Mozilla would just need to host a server to handle the port number handoffs.
I imagine this would handle 90% of the use cases. The rest would use traditional 3rd party hosting, IM transfers, email, etc.
With the recent shuttering of AIM, I'm reminded of the old client's Direct IM feature which would allow you to transfer arbitrarily large files through the chat session. That tool could not have been simpler to the user, as they just had to click "Direct IM" and could then drag and drop files into the chat. However, that system still relied on the cloud service (AIM) being a broker of connecting a handle to an IP address, and that there was little guarantee of the identity of the person on the other end of the IM.
I think he considers this as "tethered to a cloud service".
I think a solution to this problem that wasn't tethered would be something like in an IPV6 world where everyone has an IP address, if I could just put in your address, send you a file, and as long as our computers were on/connected to the world wide web, it'd get to you.
Technically a one-to-one connection to send files should be simple: just open a socket on both sides or use nc. Of course there is NAT, but it's not insurmountable. The sending or receiving party has to share their IP. And this can only be real time; there is no scope to store files anywhere.
Aside: I have recently noticed that visits to e-commerce sites sit near-permanently on top of my Firefox history, ahead of other sites. Is Firefox doing deals with Amazon and others, and has this been disclosed?
That's a good question. My box is too old for High Sierra, so I can't check, and I didn't consider, before writing the comment, that my browser might now be outdated feature-wise.
They should add this as a shortcut button in Firefox. I think it makes even more sense than having the Pocket icon there. Not everyone may want to save their articles on another service, but pretty much everyone needs to send a file privately to someone else every now and then. So something like this should be super-easy to access.
I learned about Firefox Send when it launched but completely forgot about it until now. I would definitely use it more if it had an easy-to-access (read: not buried in the Settings page) shortcut in the browser.
It's a Test Pilot project, so easy-to-access may be deferred until they choose to upgrade it from Test to Release, but it is certain they'll consider ease-of-use if/when they do decide to release it.
This is actually very useful. You can send files from and to mobile devices and desktop. Delete on download doesn't seem to be a problem, just adjust your storage strategy. Security could be acceptable for most use cases. I hope they add it to the browser.
Do you know Piwik? It's a really nice open source, self-hostable alternative to Google Analytics. I've been running it for years; very stable and great insight.
I've just checked the code. It indeed deletes the file once a download has been completely consumed, so it can't really replace other file-sharing services. BUT I wonder how it deals with race conditions? My guess is that if many people start a download at the same time, they'll all be able to complete it before the file goes away. At least that's how I'd see it working with S3 or local storage.
You could abuse it by sending the same file many times in order to create lots of download links; but there's little to gain in bandwidth savings: you might as well run your own server. The only advantage I'd see is hiding your IP address, but then you could also run a tor hidden service. The other would be bandwidth amplification by synchronizing all the clients (for big files).
So, I just tested, and the race-condition method does work. I was able to download a 200MB encrypted file three times, and the copies match. You could imagine a site operator with hundreds of users uploading a file once every hour or so and lots of synchronized users downloading it for each generated link.
The sender could open a download connection and stall it as far as possible (maybe keep the download rate shaped to some bytes/second, to prevent an inactivity timeout). That would open a large time window for further downloads.
I assume the file would be deleted as soon as any client finishes downloading. (If the files are backed by regular files on a unix-like system, then the existing open handles to the file will keep working, allowing currently-connected clients to finish downloading, but preventing any new clients from starting the download.)
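That POSIX behavior is easy to demonstrate; a minimal sketch (plain Python on a Unix-like system, hypothetical file name) of how an in-progress download can survive the delete while new downloads cannot start:

```python
import os

path = "/tmp/upload-abc123.bin"            # hypothetical stored upload
with open(path, "wb") as f:
    f.write(b"encrypted payload " * 1000)

in_progress = open(path, "rb")             # an in-progress download holds an open handle

os.unlink(path)                            # "delete after first complete download"

try:
    open(path, "rb")                       # new downloads can no longer start
except FileNotFoundError:
    print("new downloads: file is gone")

# ...but the already-open handle still reads the full content:
print("in-progress download read", len(in_progress.read()), "bytes")
in_progress.close()                        # only now does the kernel free the blocks
```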
It seems nice, but I think it should be made more explicit upon downloading, that you can only do that once. I can see myself e.g. downloading a file I received on my phone to take a quick look, intending to then download it again on my computer later. I would be surprised to see it just gone.
One thing that could be improved with this is to have an option for a human readable/typeable link. I wanted to quickly transfer a file from my desktop to my phone. Used Send and realized I didn't want to type that cryptographic URL.
I ran the Send link through Typer.in (specializing in hand-typed urls) and it worked as I initially expected. However, it would be nice if Send had this functionality by default.
Typer.in stores the link you give it, and returns a human readable lookup link. The whole point of Send is that the link includes the secret key to decrypt your file. Whoever you give that secret to, including Typer.in can get the file.
The Send link must include the secret key, because no one else should get it, and that key must be of sufficient length to protect your file. Thus human-readable-izing it could do nothing to decrease its complexity and would just result in a huge string of words that were just as much of a pain to type in.
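A quick back-of-the-envelope check (plain Python, assuming a 128-bit key and a Diceware-style wordlist of 7776 words) of why word-encoding the key doesn't really help:

```python
import math

key_bits = 128                            # assumed symmetric key size
wordlist_size = 7776                      # Diceware-style wordlist
bits_per_word = math.log2(wordlist_size)  # ~12.9 bits of key material per word

print(math.ceil(key_bits / bits_per_word), "words needed")    # 10 words to type or dictate
print(math.ceil(key_bits / 6), "URL-safe base64 characters")  # ~22 characters, as in the link
```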
At least at the moment, downloading the file once removes it from Mozilla's servers. So in the event typer.in used the URL you provided to download the file, you would know as the file would no longer be accessible.
The private key is in the url, and mozilla says explicitly that they don't know this private key.
They could shorten the url but then they would have the private key.
You could have also just generated a QR code of the URL and scanned it with your phone, if you didn't want to send it electronically. I'm sure they expect users to just email these links to their recipient.
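For the send-to-yourself case, the QR step is a one-liner; a tiny sketch assuming the third-party qrcode package and the placeholder link form from earlier in the thread:

```python
import qrcode

# The fragment after '#' carries the decryption key, so the QR code contains the full secret.
url = "https://send.firefox.com/download/xxxxx/#yyyyy"

qrcode.make(url).save("send-link.png")  # scan this with the phone's camera instead of typing
```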
Yes. My workflow is to get the link from Firefox Send (which I love btw), email the link, open email on other device, click link.
There are, I suppose, two use-cases: 1) to send to yourself, and 2) to send to others. The second use-case is fine under the current workflow, but for the first use-case the workflow is annoying.
If using Firefox Sync, perhaps the link could be synced over, maybe?
If you only need to send text, https://cryptopaste.com does everything client side and shows the ciphertext in a "staging area" before you commit it to the cloud.
You can instead save it as a self-decrypting document and attach it to an email, copy it to a thumbdrive, upload it to Dropbox, etc.
Why do the Mozilla people keep doing this sort of thing?
Aren't they supposed to be making a good browser?
I remember them telling me they are now going back to their core competences. I think it was after Firefox OS failed.
Not trying to piss on anyone's parade here, just wondering how this kind of thing keeps happening. I was wondering the same thing when Mozilla added Pocket and now Cliqz to Firefox.
What is the rationale here? Do they have leftover money they need to spend before January 1st or something?
The engineers on the product are listed as two interns. There are eight people on it total, and I bet most of them aren't working on it full time. This hardly seems like a massive waste of resources.
Firefox is Mozilla's flagship, and by far the largest way in which we achieve our mission, but our goal is a healthy and open internet.
Additionally, this is a great way to determine whether something like this would work well as an in-browser feature, and we've built it in such a way that it works in more browsers than just Firefox on day one.
Because browsers have evolved beyond applications to serve up static web pages, and convenient features like this are a selling point. Google has created an entire "OS" ecosystem built on a port of their web browser, and they've been pushing people to use it for years. I think I'll give Firefox a pass at adding some super neat and useful features from time to time.
Software development doesn't always (usually doesn't, in fact) work that way. After a point, throwing more money and engineers at Firefox is unlikely to speed up its pace of development, or improve its quality.
I have to agree with your concerns. All they have to do is make a good web browser and they have the funds to do that. It concerns me that they are wasting time, energy, and money chasing rabbits.
I've just uploaded a 140MB file from my mountain cabin (in Viet Nam) and it was lightning fast. I then gave the link to a friend of mine residing in Melbourne, Australia and his download speed was blazing fast as well. Would be keen to learn more about the infrastructure setup behind this service to achieve such a good performance.
Is it really? I mean, if I wanted to send a file to someone via Google's e-mail servers or a chat application like Facebook Messenger but wanted to make sure it didn't stick around for those two companies to data mine, this seems like it would do the trick.
From Mozilla's perspective, doing "one upload, one download" also kind of solves the problem of becoming a new "megaupload" for illegal content. Not the problem of the illegal content being there (since it's unsolvable), but the accountability of being the party responsible for spreading it around.
I've used this once or twice a week since it was announced a month or two back here on HN, and love it. It's my go-to way to send any file that's > 10MB.
I remember doing some mortgage applications last year and being frustrated that the built-in Zip encryption in Windows is weak/broken, and begging them to have a secure file upload feature.
With this, it's all set - as long as the recipient picks it up within 24 hours of course. 48 would be better but I'm not quibbling, I can always resend and if 24 is what they need to keep the service up, 24 it is.
This is simple and awesome and I appreciate everyone who worked on it.
Can anyone confirm whether this works in Safari/iOS now? It seemed great but when I tried to send some files to friends to promote it it completely shat the bed and my mobile friends ended up getting nothing but garbled text when the (fairly large) downloads finished. It was quite frustrating since the upload was painfully slow.
Pushbullet offers a great service. I'm not related to the team, but really enjoy the product. It's a browser add-on, a desktop app, and a standalone mobile app. Easy as pie, and now they have 'Portal', which does local-network direct transfers.
The whole point of this discussion is that sending files to another computer is still cumbersome. Software is entirely relevant. So if you're complaining about speed of FF Send, you should specify what you're comparing it to.
lol no.. my point was that it wasn't as fast as i would expect a service from mozilla to run and broadband connections here are a lot faster. Second, it wasn't a complaint, just an observation.
And if you really think one 'https' enabled webserver software on a home subscriber cable/dsl connection is much faster than the other for a single download, then i will have to inform you that you are mistaken.
And setting up a simple webserver to exchange files isn't that hard at all. To say file sharing is cumbersome is a bit stretching the truth.
In order to take this further, this functionality needs to be part of the browser so that one can trust the page is not maliciously modified in transit or at the server.
When thinking about a Mozilla offering, it is healthy to think beyond "money and product" as this type of analysis will usually leave you with a conclusion of "this doesn't make business sense".
Mozilla is in the game of keeping the internet healthy. Part of it involves products, such as Firefox and its quest to recover userbase. Part of it involves money, such as MOSS awarding grants or prizes for FOSS stuff they use and see value in.
But most of Mozilla's actions can be thought of in terms of how close they get Mozilla to achieving its mission as stated in the Mozilla Manifesto[1]. In the case of Firefox Send, it enables sending files with less friction and protects the user's privacy, so IMHO it advances item #4 of the manifesto: "Individuals' security and privacy on the Internet are fundamental and must not be treated as optional." It also serves as a nice branding reminder: since it works with other browsers, it makes people who use some other solution remember Firefox...
Not sure how true that is anymore. They seem to by trying hard to squander their reputation. First their MITI officially signaled they now consider themselves a political force, then the whole Cliqz debacle following shortly after showed their willingness to sacrifice their principles.
MITI is Mozilla trying to keep the internet credible as a source of news and information. This is in keeping with Mozilla's core principles. Give the blog post a careful read, as it might change your mind: https://blog.mozilla.org/blog/2017/08/08/mozilla-information...
Re: Cliqz, working with a search-oriented company might be a way that Mozilla can help avoid monopolies in search. The goal is to find new ways to keep the web open, so that new challengers have a level playing field vs. dominant companies.
Edited to add: Cliqz is a very privacy-conscious company, and I think that respect for the user was a major factor in Mozilla partnering with Cliqz. This is just to say that, AIUI, the partnership was evaluated, again, in keeping with Mozilla's principles.
How has Mozilla ever been anything but a political force? Like 12 years ago, when Firefox came out, it was accompanied by probably the largest marketing campaign for an open project barring Wikipedia.
They followed that up with some ten years of "web standards! web standards! open web standards!". This is the game Mozilla plays. That's why they won on the web: everyone does it their way now, even if their browser isn't dominant.
If I need to get Mozilla's approval before I can run code in the browser running on my machine then I see a wall. That they've automated it so much as to mitigate the protection a walled garden normally gives almost makes it worse.
You can install Firefox Beta or Nightly and have unrestricted extension installation options (or use a Linux distro's packages: most of such firefox packages do not require signed extensions).
It's a measure to prevent casual trojans, there are many ways around it for non-casuals and developers to employ.
Power users can use unsigned add-ons in Dev Edition. Regular users would be exploited by any middle ground solution in the main release. It's unfortunate the web is a dangerous place for the average user.
Firefox Send is still an experiment at Mozilla (as noted on the site). The question now is whether it is popular and useful, and how much it actually costs to run. Those questions aren't answered yet, and there are no big schemes behind it.
If Firefox Send graduates from being an experiment, then it can recoup its costs implicitly if it increases engagement with Firefox itself (which does produce revenue), or awareness of Firefox and Mozilla. Firefox's revenue comes primarily from search companies who pay for preferred placement in the browser. In that sense, people who search using Firefox are ultimately the product.
However, Mozilla is a mission-led company, and so profit maximization isn't ultimately the answer to why Mozilla does something. I think it's accurate to say that Mozilla is trying to maximize its positive impact. Revenue certainly enables that impact. But direct impact is also worth something, and it's not necessary for everything Mozilla does to involve profit maximization.
because it's easier to do this than to give mobile Firefox users the option to pull down to refresh, so enjoy Brave or the Samsung browser and stay away from Firefox
eh, what's Firefox's current market share? I would not bet my money that many non-techies recognize Firefox at all; it's a pretty niche nerd product for people with an add-on fetish, and the rest of the world just uses Chrome/IE/Edge
Mozilla was always and primarily a political organization.
To answer your question, while I'm not a Mozilla employee or developer, I'd say yes since the contents are encrypted end-to-end, so they can't actually tell the user has different political opinions.
Ok, since you've ignored our requests to stop breaking the site guidelines, we've banned this account. If you don't want to be banned on HN, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.
How can you compare the Pocket thing with Facebook? The good kid messed up this time. And you can simply ignore it; you don't have to use it. Yes, like you can decide not to use Facebook, but then all your friends and family use it, or use Instagram or WhatsApp and force you to use it. That includes uploading your complete contact list two or three times.
Do you have to use Firefox because your family uses it? I don't think so.
Months after the whole Pocket thing, they destroyed the Firefox on Android homepage to replace it with 60% "Recommended by Pocket" articles. I'm not sure how they determine what's recommended for me, but it seems an awful lot like sending browser history to this company. And even if not, I'm not interested in this bullshit, I just want my bookmarks homescreen (with HN and some other sites I check regularly). Oh, and the WebExtensions thing, which breaks every single reason I'd use Firefox over Chrome. All the extensions that do not exist for Chrome, because Firefox's model was more powerful, can no longer exist in the new Firefox.
It's terrible decision after terrible decision. In another HN thread someone called it intentional decisions by someone wishing Mozilla harm. I don't think there's bad intentions, but I can see why it seems like it.
Is there something about Mozilla that I'm missing? They're absolutely an OSS group in my mind, and the source is on GitHub like someone else pointed out.
I was actually referring to this incident where Mozilla started shipping an opt-out "addon" in Firefox that automatically shares your browsing history with a third party:
This type of comment without any reference or source is the kind of stuff that is destroying the internet and good journalism. It is users like you, spreading FUD and fake information, who make it really hard for people to discern noise from signal online.
First, you can read all about MOSS[1] which is a program inside Mozilla that gives grants to open source projects.
Part of Mozilla's current efforts, alongside many other foundations, is to provide/help means of safe internet communication and tools for people who are in danger of surveillance, such as political dissidents in dangerous countries, journalists doing investigative work, and many others. You can see a lot of the good stuff that is being worked on for journalism at the Coral Project[2] and the Knight-Mozilla fellowship[3].
So it is a part of the Mozilla Foundation's mission and ethos to back projects that make the internet healthier for common people. In an age with the state spying on everyone, rampant profiling happening in the name of advertising, and other dangers, it makes sense to give a grant to a project providing "a coordination platform used by activists across the political spectrum, to improve the security of their email service". By the way, Mozilla also donates to Tor and other projects too...
Now, in all I have said above, do you know what is missing? Sharing your data... Yes, Firefox has telemetry, Mozilla sites have analytics scripts, but those two are necessary to make Mozilla more efficient. Sharing your personal data is not. Mozilla is not in the business of profiling you or selling you out. STOP SPREADING FUD
PS: I am a mozilla community member, if I can read the source code and the discussions around policy, so can you. Make an informed decision before spreading fud.
> The Riseup Collective is an autonomous body based in Seattle with collective members world wide. Our purpose is to aid in the creation of a free society, a world with freedom from want and freedom of expression, a world without oppression or hierarchy, where power is shared equally. We do this by providing communication and computer resources to allies engaged in struggles against capitalism and other forms of oppression.
I'm interested what exactly they do wrong in your eyes to deserve your passion?
Because by donating to RiseUp, Mozilla have made themselves a political organisation, and political organisations do things for political ends, not privacy ends.