I have seen some people get performance gains from using UDP. TCP tends to back off rather aggressively after even a single dropped packet, and the connection is slow to get back up to speed. With UDP, the uploader can just stream packets with sequential IDs, and the receiver can respond with a negative ack if it misses a packet in the series.
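Roughly what I have in mind, as a minimal sketch (the 4-byte sequence header, port, and NACK format are all made up for illustration; note there is no congestion control here, which is exactly the objection raised below):

    # Sketch of a NAK-based UDP receiver. The 4-byte big-endian
    # sequence header and the b"N" NACK format are hypothetical.
    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))

    expected = 0     # next sequence number we expect
    missing = set()  # gaps we have NACKed and are still waiting on

    while True:
        packet, sender = sock.recvfrom(65535)
        (seq,) = struct.unpack("!I", packet[:4])
        if seq > expected:
            # Gap detected: request a retransmit of every missed id.
            for lost in range(expected, seq):
                missing.add(lost)
                sock.sendto(struct.pack("!cI", b"N", lost), sender)
        missing.discard(seq)
        expected = max(expected, seq + 1)
        # ... hand packet[4:] to reassembly here ...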
TCP backs off aggressively to avoid congestive collapse of the network. It is backing off in order to preserve shared infrastructure. If everybody used your UDP trick then the Internet would literally collapse. This is detailed at http://en.wikipedia.org/wiki/Network_congestion#Congestive_c... . You should respect the rules of the road.
Isn't this ultimately up to the congestion control algorithm that is used alongside UDP? Aspera and UDT, for example, have tunable congestion control that can be more or less greedy.
I'm not a protocol hacker, so hopefully one will weigh in here. But I do remember some Sky Is Falling discussion over uTorrent's use of UDP for transfer, and the uTorrent line was always that they were implementing UDP transfer in a way that played nice with TCP.
uTorrent's UDP congestion control algorithm (LEDBAT) goes further than playing nice with TCP. Unlike TCP, which only responds to packet loss, LEDBAT also responds to delay. This makes it yield remarkably quickly to anything else that might use the link.
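To give a feel for how delay-based control works, here is a rough sketch of a LEDBAT-style window update (the constants and the exact update rule are illustrative; the real algorithm is specified in RFC 6817, and uTorrent's implementation differs):

    # Sketch of a LEDBAT-style delay-based window update.
    # Illustrative constants; the real algorithm is RFC 6817.
    TARGET = 0.100   # target queuing delay, seconds
    GAIN = 1.0
    MSS = 1452.0     # payload bytes per packet (illustrative)

    base_delay = float("inf")  # lowest one-way delay seen so far
    cwnd = 2 * MSS             # congestion window, bytes

    def on_ack(one_way_delay):
        """Update the window from one acked packet's measured delay."""
        global base_delay, cwnd
        base_delay = min(base_delay, one_way_delay)
        queuing_delay = one_way_delay - base_delay
        # Positive while the queue is below target (grow the window),
        # negative once we add too much delay (shrink it). This is why
        # LEDBAT yields before competing TCP flows even see a loss.
        off_target = (TARGET - queuing_delay) / TARGET
        cwnd += GAIN * off_target * MSS * MSS / cwnd
        cwnd = max(cwnd, MSS)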
What do you set the limit to? What do you do when a router fails and your available bandwidth suddenly drops? All of the TCP end hosts will back off properly and avoid melting the network with endless retransmits, and all of the TCP flows will converge towards sharing the available bandwidth, even if that is a moving target. Applying a congestion control algorithm on top of UDP, as the other response says, is fine. Perhaps TCP's specific congestion control scheme isn't what you need (it sucks at live video, for example). But you really do need a congestion control algorithm.
I'd like to add that any new congestion control algorithm ought to be "friendly" to TCP to avoid congestion collapse. Backing off by a factor smaller than 2, or increasing additively by more than 1 MSS per RTT, are tricks that would harm normal TCP flows.
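For reference, here is the classic TCP-friendly AIMD rule those numbers come from, as a sketch (Reno-style, window in bytes; the MSS value is illustrative):

    # Sketch of the classic TCP-friendly AIMD rule (Reno-style).
    # Window measured in bytes; MSS value is illustrative.
    MSS = 1460.0

    def on_rtt_without_loss(cwnd):
        """One RTT passed with no loss: additive increase of 1 MSS."""
        return cwnd + MSS

    def on_loss(cwnd):
        """Loss detected: multiplicative decrease by a factor of 2."""
        return max(cwnd / 2.0, MSS)

    # Growing faster than +1 MSS per RTT, or halving by less than 2,
    # lets a flow take more than its share from competing TCP flows.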
Granted, TCP isn't really "fair" between flows with different RTTs, so one could justify tuning TCP's behaviour. But the challenge is to do this _automatically_, which is what TCP has done so successfully for the past few decades.
EDIT: DCCP decouples congestion control from reliable delivery, and its congestion control algorithms are TCP-friendly.
As seen in the tracerts that I posted, while S3 is not on the direct subnet of EC2, it is certainly much, much closer. However, without having tested it, I am rather certain that the added complexity and latency of putting an EC2-based UDP proxy between the colo servers and S3 wouldn't be worth it.
UDT (UDP-based Data Transfer, open source): http://udt.sourceforge.net
Bandwidth Challenge: http://www.hpcwire.com/hpcwire/2009-12-08/open_cloud_testbed...
Aspera: http://www.asperasoft.com/