One thing I always fail to see in these types of comparison articles is that BitTorrent will happily destroy any ability for those sharing your connection to use it if you let it.
Probably due to the sorry state of affairs that is the consumer router space, running BitTorrent renders even web browsing painfully slow and sometimes completely nonfunctional.
Why BitTorrent does this while normal http transfers do not is not clear to me. Perhaps due to the huge number of connections made.
Either way, when given a choice I'll always take a direct HTTP transfer over a torrent, for no other reason than the fact that I'd like to be able to watch cat videos while the download completes.
On your upstream side (i.e. data you send out to the Internet), you can control this by using a router that supports an AQM like fq_codel or cake. Rate limit your WAN interface outbound to whatever upstream speed your ISP provides. This will move the bottleneck buffer to your router (rather than your DSL modem, cable modem, etc., where it's usually uncontrolled and excessive) and the AQM will control it.
Controlling bufferbloat on the downstream side (i.e. data you receive from the Internet) is more difficult, but still possible. You can't directly control this buffering because it occurs on your ISP's DSLAM, CMTS, etc., but you can indirectly control it by rate limiting your WAN inbound to slightly below your downstream rate and using an AQM. This will cause some inbound packets to be dropped, which will then cause the sender (if TCP or another protocol with congestion control) to back off. The result will be a minor throughput decrease but a latency improvement, since the ISP-side buffer no longer gets saturated.
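The setup described above can be sketched concretely. This is a hedged illustration, not a drop-in script: the interface name (`eth0`), the IFB device name, and the rates are assumptions you'd replace with your own, and it only *prints* the `tc`/`ip` commands so you can review them before running anything as root.

```python
# Sketch of the shaping described above: egress CAKE on the WAN interface,
# plus ingress shaping via an IFB device. Interface name and rates are
# assumptions -- substitute your own. Generates the commands rather than
# running them, so you can inspect them first.

WAN_IF = "eth0"     # assumed WAN interface name
UP_KBIT = 950       # slightly below a 1 Mbit/s upstream sync rate
DOWN_KBIT = 18000   # ~90% of a 20 Mbit/s downstream

def shaping_commands(iface, up_kbit, down_kbit):
    """Return the tc/ip command lines for CAKE egress + ingress-via-IFB shaping."""
    return [
        # Outbound: replace the root qdisc with CAKE, rate-limited below the
        # modem's sync rate so the queue builds here, where CAKE controls it.
        f"tc qdisc replace dev {iface} root cake bandwidth {up_kbit}kbit",
        # Inbound: redirect ingress traffic through an IFB device so it can
        # be shaped; dropping early makes TCP senders back off.
        "ip link add name ifb0 type ifb",
        "ip link set ifb0 up",
        f"tc qdisc add dev {iface} handle ffff: ingress",
        f"tc filter add dev {iface} parent ffff: matchall "
        "action mirred egress redirect dev ifb0",
        f"tc qdisc replace dev ifb0 root cake bandwidth {down_kbit}kbit ingress",
    ]

if __name__ == "__main__":
    print("\n".join(shaping_commands(WAN_IF, UP_KBIT, DOWN_KBIT)))
```

The outbound rate sits just under the modem's rate and the inbound rate slightly under the provisioned downstream, for exactly the reasons given above: the bottleneck queue has to live on a box where the AQM can manage it.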
While bufferbloat is an issue, it's not the only one. Poorly designed NAT implementations easily suffer from connection tracking table saturation, due to the many socket endpoint pairs they have to track when you're using BitTorrent. Doubly so when using BitTorrent with DHT.
The best thing to do is avoid home networking equipment entirely. The Ubiquiti EdgeRouter products are cheap and good, as is building your own router and sticking Linux/*BSD/derivative distros on it.
@pktgen Thanks for the good writeup. You've nailed the science behind this and the proper fix. You also point out below that Ubiquiti router firmware includes fq_codel/cake.
I'd like to mention that both LEDE (www.lede-project.org) and OpenWrt (www.openwrt.org) were the platforms used for developing and testing fq_codel/cake. That means that people may be able to upgrade their existing router to eliminate bufferbloat.
My advice: if you're seeing bufferbloat (a great test is at www.dslreports.com/speedtest), then configuring fq_codel or cake in your router is the first step in fixing lag/latency problems.
I've been using it for 15 years and it's still working great. Even with multiple P2P clients running, HTTP, SSH, and gaming traffic all keep low latency. You also learn a lot about networking just by configuring it :-)
> Why BitTorrent does this while normal http transfers do not is not clear to me.
Two key reasons, usually, both related to congestion control (or practical lack thereof).
> Perhaps due to the huge number of connections made.
This is one of those reasons: unless the other end of your incoming connection is prioritising interactive traffic somehow, packets for each stream get through at more or less the same rate once the connection is saturated. So if you have an SSH session and are requesting an HTTP(S) stream (for a web page, or that cat video) while a torrent client has 98 connections receiving data, then for every 100 packets down the link, only two are for your interactive processes. On a fast enough link this isn't an issue, but "fast enough" here means "very fast", since it's relative to the combined speed of all the hosts sending you data. You can mitigate this by telling the torrent client to use a minimal number of incoming connections (limiting incoming bandwidth can have some effect, but is generally ineffective, because bandwidth limits like that really need to be applied on the other side of the link).
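The "two packets out of every hundred" argument above is just per-flow fair sharing; here's the back-of-envelope, assuming (simplistically) that the saturated link divides throughput roughly equally among flows. The 50 Mbit/s figure is an illustrative assumption.

```python
# Back-of-envelope for the per-flow fair share described above, assuming the
# saturated link divides throughput roughly equally among TCP flows.

def interactive_share(link_mbit, torrent_conns, interactive_conns):
    """Approximate bandwidth (Mbit/s) left for the interactive flows."""
    total = torrent_conns + interactive_conns
    return link_mbit * interactive_conns / total

# 98 torrent connections vs 2 interactive flows on an assumed 50 Mbit/s link:
# the interactive flows are left with about 1 Mbit/s between them.
share = interactive_share(50, 98, 2)
```

This also shows why capping the torrent client's connection count helps: cutting 98 connections to 10 raises the interactive share several-fold without touching bandwidth limits.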
The other problem is control packets, such as those for connection handshakes, fighting for space on the same link as those carrying data. As soon as the connection is saturated in either direction, so that packets sit queued for more than an instant, latency in both directions takes a massive hit. This is particularly noticeable on asymmetric links, like many residential connections.

You can mitigate this by throttling outgoing traffic, either within the torrent client or elsewhere on your network (assuming the traffic isn't hidden inside a VPN, which makes it impossible to reliably distinguish from other encrypted packets), and by reserving some bandwidth to give priority to interactive traffic and protocol-level control packets. But you have very little control (usually practically none) over traffic coming the other way, because those measures have to be taken before the packets hit the choke point, and you don't control those hosts: your ISP does. (ISPs will implement some generic QoS filtering/shaping, but anything more requires traffic inspection, which we don't want them to do, and which they don't want responsibility for either, legally or in terms of providing and managing the relevant computing capacity.)
(the above is a significant simplification - network congestion is one of those real world things that quickly gets very complicated/messy!)
BitTorrent is designed to be easy to implement, to favour adoption. They could have created a more performant DHT and added something like eMule's automatic upload speed sensing, but that would have been more complex.
Why should it? I like that the protocol itself favours performance. Traffic shaping/QoS should be done by the maintainers of the pipes (ie routers or OS).
The most common problem is with asymmetric network connections with limited upload bandwidth. If you don't limit your upload rate, BitTorrent will consume your upload bandwidth so thoroughly that even TCP ACKs for other applications aren't sent in a timely fashion.
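The ACK starvation described above is easy to quantify roughly. The numbers here are common-case assumptions (1460-byte segments, one ~40-byte ACK per two segments, no link-layer overhead), so treat the result as an order-of-magnitude estimate, not a precise figure.

```python
# Rough estimate of the upstream bandwidth that TCP ACKs alone consume for a
# given download rate. Assumes 1460-byte segments, a ~40-byte ACK for every
# two segments, and ignores link-layer overhead -- an illustration only.

def ack_upload_mbit(download_mbit, mss=1460, ack_bytes=40, segs_per_ack=2):
    """Approximate upstream Mbit/s consumed by ACKs for a download rate."""
    return download_mbit * ack_bytes / (mss * segs_per_ack)

# A 50 Mbit/s download needs roughly 0.7 Mbit/s of upstream just for ACKs.
# On a ~1 Mbit/s uplink saturated by seeding, those ACKs queue behind torrent
# data, arrive late, and the download rate collapses.
needed = ack_upload_mbit(50)
```

That's why the classic advice is to cap upload a little below the uplink rate: it leaves headroom for ACKs to go out promptly.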
As mentioned by someone else, yeah, this is almost certainly the TCP ACK thing. If you throttle the upload to about 10 KB/s under your max upload speed, it won't choke your download ability.
For a long time the answer was to throttle uploads, as others have said; however, today the solution is to use uTP (https://en.wikipedia.org/wiki/Micro_Transport_Protocol), which is specifically designed to yield to other applications that want better interactivity. Pretty much any recent BitTorrent client implements it.
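The yielding behaviour uTP gets comes from LEDBAT-style delay-based congestion control: instead of filling the queue until packets drop (as loss-based TCP does), it measures queuing delay and backs off as the delay approaches a fixed target. This is a simplified sketch of that window-update idea; the constants follow the shape of RFC 6817, but it's an illustration, not a protocol implementation.

```python
# Simplified sketch of LEDBAT-style window adjustment (the idea behind uTP):
# measure queuing delay and back off as it nears a fixed target, so the
# transfer yields before interactive traffic notices. Illustrative constants,
# not a real protocol implementation.

TARGET_MS = 100.0   # queuing-delay target (RFC 6817 uses 100 ms)
GAIN = 1.0
MSS = 1400.0        # assumed segment size in bytes

def ledbat_update(cwnd, queuing_delay_ms, bytes_acked):
    """Return the new congestion window (bytes) after one ACK."""
    # Positive when delay is under target (grow), negative when over (shrink).
    off_target = (TARGET_MS - queuing_delay_ms) / TARGET_MS
    cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
    return max(cwnd, MSS)  # never shrink below one segment

# Below the target delay the window grows; above it (a filling buffer) the
# window shrinks -- which is why uTP transfers get out of the way of browsing.
```

Loss-based TCP only reacts once the buffer overflows, i.e. after latency is already terrible; a delay-based sender like this reacts while the buffer is still mostly empty, which is the whole trick.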
Really? I'll take Bittorrent any time, because, no matter how large the transfer, I can just leave my client to download even after I close my browser. Since I have a home NAS with Deluge installed, I can even turn my PC off and the torrent will keep going (and will send a notification to my phone when done), etc.
>Some network operators do funny things, so a client can actually get higher transfer performance by connecting multiple times and transferring parts of the data in several simultaneous connections.
Actually, there is a Firefox extension called DownThemAll that multithreads HTTP downloads, which will in fact max out your inbound speed just like torrents do. As far as routers are concerned, if you give a particular device a higher QoS priority than the one that is torrenting, you should not have the problem.