Send: A Fork of Mozilla's Firefox Send (github.com/timvisee)
666 points by andredz on May 5, 2021 | 141 comments



Some other WebRTC file transfer options:

* https://wormhole.app/ (my recent fave, by creator of WebTorrent, holds for 24h, https://instant.io by same)

* https://file.pizza/ (p2p, nothing stored)

* https://webwormhole.io/ (same, but has a cli)

* https://www.sharedrop.io/ (same, does qr codes)

* https://justbeamit.com/ (same, expires in 10 minutes)

* https://send.vis.ee (hosted version of this code)

* https://send.tresorit.com/ (not p2p, 5 GB limit, encrypted)

I track these tools here: https://href.cool/Web/Participate/


Love Wormhole, but that animated background chews up an incredible amount of battery power on my M1 MacBook Air (even when the tab is in the background). Please, Wormhole crew, turn it off.


We recently made a one-line change that massively reduced GPU usage, specifically:

- Intel integrated graphics: 60% reduction

- AMD Radeon: 40% reduction

- Apple M1: 10% reduction

What was the change? We removed "opacity: 85%" from the <canvas> element. We were using opacity to slightly darken the animation but now we just darken the texture image directly.
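Roughly, in markup terms (an illustrative sketch, not Wormhole's actual code):

  <!-- before: the compositor has to blend the translucent canvas every frame -->
  <canvas style="opacity: 85%"></canvas>

  <!-- after: fully opaque canvas; the darkening is baked into the texture image -->
  <canvas></canvas>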


Would love to read more about this.


We just massively reduced GPU utilization on Wormhole for a second time (as of May 12). Give it a try and let me know if you still have issues.

GPU utilization when the window is full screen at 4K resolution:

  Chrome:
    - Radeon: 70% -> 15%
    - M1: 65% -> 29%

  Firefox:
    - Radeon: 60% -> 8%
    - M1: 55% -> 23%

  Safari:
    - 15% Radeon + 75% Intel -> 8% Radeon + 8% Intel
    - M1: 75% -> 35%

A bit more detail: We're running into the limits of WebGL. It seems just rendering a white screen at 60fps at 4K resolution is enough to make people's fans turn on.

So we reduce the frame-rate when the wormhole is not warping (render every 3rd frame). We also lower the resolution and scale it up.
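A rough sketch of the idea (illustrative only; `warping`, `renderer`, `scene`, `camera`, and `canvas` are stand-ins, not Wormhole's actual code):

  let frame = 0;
  function loop() {
    requestAnimationFrame(loop);
    // when idle, render only every 3rd frame (~20fps instead of 60)
    if (!warping && frame++ % 3 !== 0) return;
    renderer.render(scene, camera);
  }
  // render at half resolution and let the browser scale the canvas back up
  canvas.width = Math.floor(innerWidth / 2);
  canvas.height = Math.floor(innerHeight / 2);
  canvas.style.width = '100%';
  canvas.style.height = '100%';
  loop();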


You can "hide" it with uBlock Origin by blocking large media elements in the popup.


Thanks, much better experience


100% agree - I've used Wormhole a few times when I try to quickly get data into interactive VM sessions, because it works great otherwise, but the graphics consistently grind the whole thing to a halt.

I really don't get the reasoning. It looks kind of cool, but it makes it super unusable for a bunch of use cases, from very old computers to interactive sessions on Raspberry Pis to constrained VMs. And those are exactly the kind of places where I want friendly, easy tools to copy files across for quick system admin or to get logs back out! Doesn't seem like a good tradeoff.


I use rclone serve for this task:

  rclone serve http ./dir/or.file --addr :9000

Files can be grabbed with curl/wget etc. or a browser
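For example, from the receiving machine (host and path are placeholders; they depend on what you serve):

  curl -O http://192.168.1.5:9000/or.file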



Set webgl.disabled to true in about:config if you're using Firefox.


And if you have a sucky connection, it's slow as hell to load.


Sorry to hear the site was slow for you. We get pretty good scores on Lighthouse and from real-world data according to https://developers.google.com/speed/pagespeed/insights/?url=...

What OS, browser, and connection type/speed did you use?


Ubuntu, Firefox, sucky country ADSL with unknown speed.

I don't mind the upload being slow, I just let it run, but getting to the page should not be.

I mean, this galaxy.jpg file is a third of your webpack JS bundle size!

It has no added value for me. I get the cool factor, I know it's pretty, but still.


You could add a "fast" or "lite" or similar subdomain that avoids everything like the complaints that have been shared here. Not perfect (the default should be the best), but at least users could avoid these problems.


Turn on "reduced motion" setting on your Mac.


Wouldn’t it be easier for you to manage what you expend your battery on?


Great idea, do you have any suggestions that will help?


Hmmm, it seems like https://webwormhole.io/ has changed since I last used it. https://wormhole.app/ seems closer to the frontend I remember.

I remember trying many of these services, and I decided to use this one because I could send large files without any problem (I was trying to move SQLite DBs that were several GBs, and it seemed to stream the file instead of trying to store it in RAM first). Now I see wormhole.app allows up to 10 GB, and I don't remember it having any limit.

WebRTC services seem to have trouble getting up to speed, but for streaming files between devices they seem like the lowest-friction solution.


The speed problems interestingly seem to be caused by a bug in an implementation of SCTP used by most (all?) WebRTC browser implementations:

https://github.com/saljam/webwormhole/issues/55

The symptom seems to be that the SCTP data rate drops with increasing latency (which used to be a problem with very old TCP implementations too, but all modern ones handle high-latency networks much better).


I think SCTP has everything we need, just more knobs need to be exposed [0]

I have gotten feedback about the performance of Pion's SCTP implementation as well. It is a hard problem. The number of people who care about SCTP performance + WebRTC and are able to work on it is very small.

If anyone is interested in more detail, [1] is a fun issue to read.

libwebrtc is also planning to stop using usrsctp soon. That would mean all browsers (Chrome, Safari, Firefox) will be on something new. Could be good, no idea yet. The ongoing work is here [2]

[0] https://github.com/w3c/webrtc-extensions/issues/71

[1] https://github.com/pion/sctp/issues/62

[2] https://webrtc.googlesource.com/src/+log/refs/heads/master/n...


Thanks, this is great context! The very different focus of these two projects especially explains a lot (lightweight control channel for A/V [1] vs. the entire reason for using WebRTC in all of these file transfer projects).

Curious to see how the new implementation will play out for the browsers!

[1] https://github.com/w3c/webrtc-extensions/issues/71#issuecomm...




I've also built a file transfer tool (CLI) with emphasis on decentralization. It's a fully decentralized p2p file transfer tool based on libp2p:

https://github.com/dennis-tra/pcp

I'm currently trying to make it interoperable with https://share.ipfs.io/#/ which resembles the functionality of the posted tool.


Seems very similar to 'hyp beam': https://github.com/hypercore-protocol/cli


There's also https://snapdrop.net which seems extremely similar to sharedrop.io, but has an additional useful feature of letting you send messages, which I sometimes use to send links to devices that aren't logged into any service.


Ah neat! I've added all of the links in the comments here to my list - great to see what's out there and to combine our collections.


As I recall, there's a difference between file.pizza and webwormhole. file.pizza allows the sender to specify files and then generates a share url, whereas webwormhole creates a share link first. The latter can be useful if you're not sure exactly what you'll send before you share the link.


There's also https://github.com/schollz/croc which is a very simple P2P CLI transfer tool.



Is that using WebRTC, or are they hosting the files on their backend?


I’m a fan of Kipp! Not p2p, but has optional encryption

https://kipp.6f.io/



> I track these tools

How frequently do you validate that they are still functional?

I tried File Pizza several months ago, and neither I nor the recipient could get it to work.


Links are checked at least once daily - I do have a few that are recently broken that I need to address - but File.pizza is okay again. I have switched the main link to Wormhole, because it's now my preferred option - and because File.pizza has been up and down for me in the past as well.


The website is up - the actual transfer service no longer seems to function in modern browsers. Have you (or anyone) had a successful transfer via File Pizza recently?


Just tried with a tiny file; it tries to work, as I can see the filename/size on the receiver end. But it never downloads. It says "Peers: 0, Up: 0, Down: 0".


Ok - I see. I suppose I will need to find a way to verify some of these links a bit deeper. Thank you for doing the footwork on this!


I swear I've tried it several times over several years. I've never gotten it to work.


http://Gofile.io - file sharing and storage platform, unlimited and free



We built a web app for e2ee file transfer:

https://arcano.app


That website of yours should be its own HN post.


Thanks, useful site!


Maintainer here, thanks for posting!

Feel free to ask any questions.

Want to try it out? I've a public instance at: https://send.vis.ee/

Other instances: https://github.com/timvisee/send-instances/

A docker-compose template: https://github.com/timvisee/send-docker-compose


I was wondering why I recognised your name - you're the main developer of ffsend. Thanks for all the work! I really hope you get more people interested in maintaining and developing Send.


Yes! Thanks! :)


yes thanks a lot! also retrospectively for implementing basic auth in ffsend!


Hey - love this project! I was able to get an instance deployed with Nginx reverse proxy without too much trouble. Password encryption doesn't seem to be working, but that might be some weird header issue thing with the reverse proxy setup and I'm not too worried about it.

One thing I was wondering is if/how expired files are cleaned up. I uploaded a large file, set it to expire after 5 minutes, and although I can't download it anymore I see that it's still in the files directory on my server.

I glanced through the code, but I didn't see any mechanism for periodically purging expired files or anything like that. Is there something that I missed, or should I just set up a cron job or something to delete all files in that directory older than a week?


> but I didn't see any mechanism for periodically purging expired files or anything like that. (...) should I just set up a cron job or something to delete all files in that directory older than a week?

You're right. Expired files that don't reach their download limit are kept on the server. Due to implementation details, there is no 'nice' way to do this from Send itself. If using S3 as storage, you can configure a lifecycle policy; if using raw disk storage, you can set up a cron job.

See an example here: https://github.com/timvisee/send-docker-compose/blob/master/...

All uploaded files have a number prefix that defines the lifetime in days (e.g. `7-abcdefg` for 7-day expiry), so you can be a little smarter when cleaning up.
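A minimal sketch of such a cron job, just using the 7-day maximum (the uploads path is a placeholder):

  # crontab entry: nightly, purge anything older than the longest possible lifetime
  0 3 * * * find /srv/send/uploads -type f -mtime +7 -delete

A smarter version could parse the day prefix of each file instead of assuming the maximum.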

I should describe this clearly in the documentation.


Thanks for the quick reply! This makes sense and works fine. Thanks again for the great project


Why does it say "We don't recommend using docker-compose for production."?

I'd like to understand the reasoning behind this. Thanks.


Good question. I don't have very good reasoning for it, and I didn't put it there. I might need to remove it.

Someone asked this before, here is my answer (bottom quote): https://github.com/timvisee/send-docker-compose/issues/3#iss...


What level of logging/privacy can we expect from a self-hosted instance? I had faith in Mozilla's commitment to privacy, but I don't necessarily trust some random dude's AWS instance.


> What level of logging/privacy can we expect from a self-hosted instance?

It really depends on who is hosting it.

Send itself doesn't really log anything except for errors. A reverse proxy in front of it might be used for an access log, which is the default with the docker-compose template for it. Files are always encrypted on the client, and neither the files nor their keys are ever seen by the server.

If you're wondering for the instance I've linked: it runs on Digital Ocean. I have an access log (IP per request, for 24h), I can list encrypted blobs and their metadata (creation time, size), and that's pretty much it.


Naïve question here, but is there a config setting that would work without HTTPS?

I run a home server just for internal use, and it might be nice to send files via a link for memes, jokes, and quick one-shot uses rather than storing them on a Samba share, but it doesn't have a public-facing URL for completing a Let's Encrypt challenge.


If you really don't want to use a certificate, just configure the base URL to be a http: address. That should work fine! Feel free to open an issue otherwise.
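As a sketch (the image name, port, and BASE_URL variable are what I understand from the repo's docs; double-check there):

  docker run -p 1443:1443 \
    -e 'BASE_URL=http://192.168.1.10:1443' \
    registry.gitlab.com/timvisee/send:latest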


Note that if you do this rather than actually setting up HTTPS, a bunch of stuff outside of Send itself becomes impossible, because you lack a Secure Context (you presumably don't want to do any of that stuff now, but if you ever try, it just won't work), and over time you can expect more errors and problems.

Already, if you give me a plaintext HTTP link, I have to consciously decide that's fine and click past the interstitial warning me it couldn't be upgraded to HTTPS. And if you use it to embed an image somewhere that's otherwise HTTPS, the image just counts as broken unless I go out of my way to authorise it.


You could self-sign a certificate, or if your internal URLs use a subdomain of a public domain you control you could use DNS challenges for Let's Encrypt.
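For the DNS route, a sketch using certbot's manual DNS-01 challenge (the domain is a placeholder); no inbound HTTP is needed, you just publish the TXT record certbot asks for:

  certbot certonly --manual --preferred-challenges dns -d send.home.example.com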


Have you had many issues with abuse?

For private instances could there be an option for requiring a login before upload?


> Have you had many issues with abuse?

In the last year, I've had 1 DMCA request. And I've blocked one IP that was uploading half a terabyte.

> For private instances could there be an option for requiring a login before upload?

Not built-in, right now. But you can easily set up HTTP Basic Auth on a reverse proxy that you put in front of it.
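A minimal nginx sketch of that (the port assumes Send's default of 1443; adjust paths to your setup):

  location / {
    auth_basic "Send";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:1443;
  }

Create the credentials file with `htpasswd -c /etc/nginx/.htpasswd someuser`.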


Do you have any major new features in mind you would like to implement (assuming you had time + help)?


I don't have anything planned.

But with infinite time, I'd:

- add some form of authentication, to limit uploads for example

- add a way to preview files on the Send page itself

- provide integrations with other platforms

- resolve outstanding issues


Some simple authentication would be fantastic, so I can run my own instance and only allow myself to upload things.


This can already be done with a reverse proxy and HTTP Basic authentication, but having it built-in would be nicer.


But then I'd have to give the password to anyone I wanted to receive files. I want to be able to send files to people but not have them be able to send to others.

Maybe adding HTTP basic auth is fine, as I mainly want to keep random bots from finding the service. I'll try that, thanks!


You can configure it for the upload page exclusively. It's not super nice, but it works.

I have not seen any bots on my public instance by the way. It has been running for more than a year.


I was wondering where I had seen your name before, and then after scrolling through your GitHub, I realized it was your Advent of Code 2020 solutions in Rust. Those were absolutely beautiful.


Thanks a bunch! That's awesome to read!


How is end-to-end encryption achieved? By storing the password in the URL and not logging the URL when the file is fetched at the receiving end?


Encryption is done with JavaScript on the client. The decryption key is attached as the hash (fragment) of the download URL, also on the client side.

When visiting the URL, the key never reaches the server, because the hash part of a URL is never sent; it's a local-only thing. So there's no need to strip it from logs. The client downloads the encrypted blob and decrypts it locally.
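A sketch of the general pattern (not Send's exact code; `downloadUrl` is a stand-in):

  // sender: generate a key client-side and put it in the fragment
  const key = crypto.getRandomValues(new Uint8Array(16));
  const keyB64 = btoa(String.fromCharCode(...key));
  const shareUrl = `${downloadUrl}#${keyB64}`;

  // receiver: read the fragment locally; the browser never sends it
  const secret = location.hash.slice(1);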

More info: https://www.reddit.com/r/firefox/comments/lqegb5/reminder_th...

And: https://github.com/timvisee/ffsend#security


The #xxxx contents aren't sent to the server at all, if you trust the underlying JavaScript running in the browser.


You are welcome (poster here :-)). And thanks to you for maintaining a great, and useful, piece of software. I recently needed something like Firefox Send that could keep files uploaded for longer than 1 day but no more than 7 days, and Send (and your public instance) was perfect for such a task.


Thanks for maintaining this! I just upgraded our local mozilla copy to your version, works great and was seamless!


Thanks for doing this. I used Send regularly and miss it.


This is fantastic, well done! A very useful service, and I loved it when it was Firefox Send. I'll be sure to use this now.


@timvisee I have no questions, but just wanted to thank you for your work on this!!! Thank you, thank you, thank you!


:)


An observation. This is the second time today we've had a submission linking to GitHub even though the main repo is on GitLab (the first was https://news.ycombinator.com/item?id=27047243)


How is ClearURLs' main repo on GitLab when their main site[0] links to both and their docs[1] link *only* to GitHub?

[0]: https://clearurls.xyz

[1]: https://docs.clearurls.xyz/latest/


The README on the repo itself links to gitlab, including the "create an issue" link: https://github.com/ClearURLs/Addon/#contribute



This is excellent! I've been missing Firefox Send ever since they took it down.

However, it needs to be hosted somewhere.

...and if I'm going to be using a hosted service, I'd like the ability to easily pay for it (so that it doesn't eventually collapse or resort to shady things like ads), either though donations or microtransactions for bandwidth/storage.

Unfortunately, there's no good microtransaction service.

Wasn't Mozilla working on one? Where did that go?

...and thus, we've gone full circle.

And I'm typing this comment in a Chrome browser, because my company is migrating away from Firefox due to "security issues".


This is a comment for my instance specifically, but you might find it nice to know:

The https://send.vis.ee/ instance is mostly funded by donations right now. I do not plan to take it down, unless the cost becomes a problem. I'll never resort to ads.

If this ever happens, I'll likely show a warning beforehand. Some time later I'll disable the upload page, and will take the rest of it down the week after. Files have a maximum lifetime of a week anyway. So if you discover this when uploading, you can simply switch to some other service. Existing links should not break.

There's a donation link on the bottom of the page (https://vis.ee/donate). But feel free to use it without a contribution.


You can host your own Send, and the host need not exist when there's nothing you are sharing, which is right in line with utility computing as provided by "cloud" hosting companies like Amazon, Microsoft, Oracle, &c. A month of Send operation should be possible for under $5, provided low disk space suffices, say under 5 GB. Perhaps not micro enough, though.


Yes, the problem is that it's "not micro enough". The value of this service is low enough that it's not worth it for me to self-host, and the financial overhead of cloud providers is enough that it would cost far more for me to spin up a dedicated instance than pay someone for the fractional cost of usage of their instance.

More generally, I want the ability to make microtransactions (substitute "extremely low-friction donations" if you will) for everything that could be "free" but also costs money (bandwidth, compute, storage), because no matter how much free time I have, there will always be services that I could benefit from, but are low-enough-value that it's not worth it for me to self-host or get a cloud host myself.


Did your company actually give any details about those supposed security issues?


Probably more related to financial security...


Do you know what led Mozilla to stop this experiment (I'm assuming spam)? Will this not be an issue for your instances as well?


The announcement is here: https://support.mozilla.org/en-US/kb/what-happened-firefox-s...

Quick summary: it was being used for malware and phishing, aggravated by the trustworthy-seeming firefox.com URL.


I think the shifting "product focus" is probably the main factor here, simply because such a service being used for malware hosting was completely predictable from day 1. When they started they probably thought that it was worth it, then later on they changed their mind. That or they were incredibly naive.


In the context of a large corporation, incredibly naive is just a euphemism for bad management. They launched it. An internal security audit found that it was being used for phishing. They planned to fix it, but layoffs came along and they had to sunset Send.

So, yes, incredibly naive.


They said they stopped due to spam, but there might be more to it, because they also had quite a lot of layoffs in that period. I don't know.

I can imagine spam being a problem with such a service with a well recognized brand name.


This is self-hosted, so probably easier to apply security through obscurity.


That means no security at all. Without a way to link files being hosted to identity or inspecting the contents of the files, there is no barrier to prevent spam and illegal files from being hosted.


they have a public instance... and just like Mozilla's version you can self-host... but either way we need more services like this.


> we need more services like this

I don’t know. The internet had hundreds of file sharing sites at one point. They all suffered fates similar to that of the epic MegaUpload, though with less colorful founders than Kim Dotcom.

I don’t see how having them again would be different from last time?

https://en.m.wikipedia.org/wiki/Megaupload


Mediafire is still alive, and I think it's the last holdout from the "big" file sharing websites of the mid/late 2000s. Though honestly, I don't miss the download limits, timers, and adf.ly spam that came with them. Common cloud storage (gdrive, dropbox) is much easier to use and share files from, although it requires you to be logged in. Send seems to be the best of both worlds, though.


> They all suffered fates similar to the epic MegaUpload although with not as colorful founders as Kim DotCom

Well it doesn't matter as much in this case because "Send" is a temporary file host.


Storage in S3 is not cheap.


After trying all these WebRTC options, with the NAT traversal service (STUN, iirc) always being down, I ended up using IPFS instead. With public gateways from Cloudflare, it is very easy to effectively drag and drop files and have them accessible via the IPFS-to-HTTPS gateway.


For the rare times I have to do this, I run a local server and the free version of Cloudflare Argo Tunnel. It provides an HTTPS URL, so the upload/download is safe from ISP snooping. There are also no size limits; you can send a 30 GB file if you need to.

https://blog.cloudflare.com/a-free-argo-tunnel-for-your-next...
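The flow is roughly (port and directory are placeholders):

  # serve the files locally
  python3 -m http.server 8080

  # expose them; cloudflared prints a random trycloudflare.com HTTPS URL
  cloudflared tunnel --url http://localhost:8080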


https://share.ipfs.io/#/ is also very convenient for simple p2p file transfers.


How does this work? Does the file need to pass through a server before reaching the other end, or does it stream directly between sender and receiver?

Also, does it need to put the whole file in RAM first?


There is a server involved for mediating peer discovery but the file transfer itself is p2p.

Regarding your second point: I’m actually not sure if the file is copied into memory or if the browser just keeps a reference. I haven’t tried it with large files yet.


Doesn’t IPFS have problems with persistence? IOW you can’t guarantee a file will be available?


There are two persistence types: pinned and unpinned files. Pinned files persist, but someone needs to seed them at least occasionally. Unpinned files eventually get garbage collected. If you want to share a file, you don't need to pin it if the recipient, for example, tells you when the download is complete. In this sense, all these WebRTC examples are more equivalent to unpinned files.
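With the go-ipfs CLI, that flow looks roughly like this (file name and CID are placeholders):

  ipfs add backup.tar.gz   # adds and pins the file locally by default
  ipfs pin rm <cid>        # unpin once the recipient confirms the download
  ipfs repo gc             # unpinned blocks become eligible for collection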


As long as you keep your node up and running your content will never disappear. So if you just want to share files with friends I can see this working well - just keep your node available.


Semi-related, but is GitHub's search by programming language feature broken on this repo?

I'm curious about "FreeMarker" being the top language, so I clicked on it; surprisingly, it returns zero code: https://github.com/timvisee/send/search?l=freemarker

So does "javascript": https://github.com/timvisee/send/search?l=javascript


Search is disabled for forked repositories on GitHub. If you want search on a fork, it's better to create a new repo and push the code.

Search in original repo works: https://github.com/mozilla/send/search?l=javascript


The other option would be just to clone the repo locally and use grep, find, etc. That seems the simpler option if you just want to perform a search.
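For example (the search term is a placeholder):

  git clone https://github.com/timvisee/send.git
  cd send
  grep -rn --include='*.js' 'fileId' .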


Or open the repo in github1s so you don't have to clone anything: https://github1s.com/timvisee/send/

For more info see: https://github.com/conwnet/github1s


Naive question: The GitHub page says 62% of the code in this repo is FreeMarker. I checked the repo and every file I look at is JS. What and where is FreeMarker?


I can't find any FreeMarker templates in the project, but apparently it's using i18n files with an ".ftl" extension, which is the default extension for FreeMarker templates.

Ex: https://github.com/timvisee/send/blob/master/public/locales/...


Yeah, those files are for https://projectfluent.org/


I don't know. Have been asking myself the same question the past month.


I think it's the locale files, like this one: https://github.com/timvisee/send/blob/master/public/locales/...

If you look at github/linguist, that's what recognizes languages in repos. It has this rule for FreeMarker: https://github.com/github/linguist/blob/32ec19c013a7f81ffaee...

It seems a .ftl extension means FreeMarker to linguist, so those localizations show up as such.


It should be excluded from the stats as it's marked as documentation though: https://github.com/timvisee/send/blob/master/.gitattributes


It isn't excluded, because that pattern doesn't match any of the files.

If you run `git check-attr --all public/locales/foo/send.ftl` with the current .gitattributes file, you'll get no attributes.

If you update the attr match to `public/locales/**` or `public/locales/**/*.ftl`, then the `check-attr` command above will match it and show 'linguist-documentation'.
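I.e., something like this in .gitattributes:

  public/locales/** linguist-documentation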


I think this has something to do with the repo being a fork.

The parent repo doesn't have this issue. The "languages" list doesn't mention Freemarker on https://github.com/mozilla/send


GitHub's language "detection" is ridiculously naïve and basically limited to examining file names only, not content.


I'm liking croc with a CLI on each end.


croc just had multiple major vulnerabilities discovered that required protocol breaking changes to fix: https://redrocket.club/posts/croc/


So it's fixed, right?



You should be wary of projects that claim to be secure but have a history of game over vulnerabilities.


Crypto is hard; it doesn't wrongly claim it's secure. It's a one-man show. Isn't that where the beauty of open source lies? Some students were able to get a (purposeful) bug into Linux to show how easy it was. Or take the example of OpenSSL after Heartbleed. Some fresh eyes look into the code, things get fixed, we have a log of it, developers learn something, and the project moves ahead.


As I was saying... another vulnerability was found in croc's SPAKE implementation in the last day: https://mailarchive.ietf.org/arch/msg/cfrg/icl1AGo62iq8vQM3-...


I know everybody's posting tons of alternatives already, but I'm curious why https://transfer.sh isn't included. It has very simple instructions for encrypting against a recipient's Keybase GPG key, works from the site or command line, and has 14 days of retention.

Just curious, since I keep seeing Wormhole mentioned, but I never seem to see anyone mention Transfer (unless it's just a lesser known option and I happened to hear of it early).


There's also a CLI for Send (ffsend): https://github.com/timvisee/ffsend


The big advantage of Firefox Send was that it was hosted by Mozilla, and I could trust that Mozilla wouldn't have any backdoor in the service.

When the same project is hosted by someone that I don't know, I can't be sure that they won't modify it to peek at the files (I'm not going to perform a full code audit on every page load).


Is there a way to use ffsend if I drop some basic auth in front of the upload?


Yes! `ffsend upload --basic-auth USER:PASS FILE`


It would be nice to be able to set arbitrary (or at least Bearer) auth tokens too, in case you put it behind some OAuth-enforcing gateway like Authelia.



I'd like to set this up on one of my own servers...


Best HN post in months...


Oh, I forgot about that one. Yet another Mozilla project that worked well that was abandoned. (Remember Firefox OS? https://killedbymozilla.com/) I know what you're thinking: they did not abandon Rust! Well, I just learned from a post that recently made it to the HN front page that management was considering dropping Rust. The only reason they did not was because someone fought hard for it.


It wasn't "abandoned"; it was shut down because it was being used by malicious parties to deliver malware, and worse.


Can't any cloud storage service be used to deliver malware?


It depends. Some services are much more suitable than others.

Some services are 1:1 ratio. That is, uploading a file results in a download that only works once. So that makes them rubbish for malware; you have to be spear phishing somebody, and even then it buys you less than using Tor would.

Some services are only encrypted in transit. So bad guys can't intercept or alter the data, but at rest on your server it can be scanned for malware, copyright infringement, whatever the provider wants to scan for.

Some services cost money to use which is an obstacle to bad guys who most likely want more money and not to be paying money up front first.

Firefox Send was encrypted in situ (the keys live only in clients, so the server doesn't know your keys), it was free to use, and it allowed either unlimited or very large ratios.

So that makes it potentially very attractive. On top of which, it has this nice trustworthy Firefox name. Grandma Jenny's kids have told her not to go around installing stuff from just anywhere, but they did tell her _Firefox_ is trustworthy after she got flustered when it auto-updated. How is Jenny supposed to understand that this link to Firefox Send isn't Firefox?



