
The #1 most used file transfer tool seems to be Dropbox. I put stuff into mine, give you a link and everything magically works.


Yes, that's a nice workflow for non-technical users. However, if you're transferring 50GB files, this takes some time. UDP behind the scenes could fix that.

I guess consumers just aren't punting 50GB of data around all that much, so there's not enough wait time pain to justify it.


The biggest problem with UDP is NAT traversal, since UDP is connectionless and a NAT has no connection state to key on. Still, it is used extensively in all sorts of applications.
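A minimal sketch of what "connectionless" means in practice: a UDP sender just fires a datagram at an address, with no handshake for a NAT (or anything else) to track. Sockets and the loopback address here are illustrative, not anything from the thread:

```python
import socket

# UDP needs no connection setup: no connect(), no SYN/ACK handshake.
# A NAT in the path only sees individual datagrams and keeps a
# short-lived port mapping, not a connection.

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))     # OS picks a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", addr)     # sent immediately, no handshake

data, peer = recv_sock.recvfrom(1024)
print(data)
```

That lack of handshake is also why the mapping can silently expire mid-transfer, which is the NAT headache.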

The reason it's not used so much in file transfer is that, for reliable transfer, you basically need to re-implement everything TCP gives you for free. Secondly, one of the big things that made TCP slow for large file transfer was that the way most TCP congestion control algorithms worked meant transfer rates dropped quickly as latency increased. Google has come up with an algorithm (BBR) that drastically improves this [1], and contributed it to the Linux kernel. I expect their strategy will be adopted by most operating systems. I believe Google is already using it, and I read something on Netflix's tech blog that they are at least trialing it.
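Since BBR shipped in mainline Linux (kernel 4.9), switching to it is just a sysctl change. The `fq` qdisc line is the commonly recommended pairing, not something the [1] article mandates:

```shell
# Requires Linux 4.9+ with the tcp_bbr module available.
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr
```

You can check what your kernel offers with `sysctl net.ipv4.tcp_available_congestion_control`.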

For small files, TCP slow-start is an issue, but protocols like HTTP/2 work around it by multiplexing multiple downloads over one connection.

So I don't think there will be much advantage to UDP-based file transfer for things you want reliability for.
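To make the "re-implement everything TCP gives you" point concrete, here is a toy stop-and-wait sketch over loopback: sequence numbers, acks, timeouts, and retransmission all have to be built by hand, and this still lacks windowing, ordering, and congestion control. It's an illustration, not a real protocol:

```python
import socket

# Toy reliable-UDP sketch: one chunk, stop-and-wait.
# Everything below is machinery TCP provides for free.

TIMEOUT = 0.2    # seconds before assuming the packet or ack was lost
RETRIES = 5

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
dest = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.settimeout(TIMEOUT)

payload = b"\x00" + b"chunk-0-data"   # 1-byte sequence number + data

for attempt in range(RETRIES):
    sender.sendto(payload, dest)
    # Receiver side: deliver the chunk, echo the sequence byte as an ack.
    data, peer = receiver.recvfrom(2048)
    receiver.sendto(data[:1], peer)
    try:
        ack, _ = sender.recvfrom(16)
        if ack == payload[:1]:
            break                     # acked; a real sender moves on
    except socket.timeout:
        continue                      # lost packet or ack: retransmit
```

Scale that up to sliding windows, reordering, and congestion control and you've rebuilt most of TCP, which is the point.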

1. http://queue.acm.org/detail.cfm?id=3022184


I just realized I have no idea if Dropbox uses UDP behind the scenes. And I love it.

From a larger "tragedy of the commons" point of view, using UDP to blast packets at maximum rate down the shared pipes is unfair to everyone using TCP, whose congestion control backs off when the path is saturated. Congestion detection/avoidance exists to make traffic fairer for everyone.



