
It sounds like a fun comp sci exercise to optimise the algo for randomised block download to reduce disk operations but maintain resilience. Presumably it would vary significantly by disk cache sizes.

It's not my field, but my impression is that it would be equally resilient to just randomise the start block (adjust spacing of start blocks according to user bandwidth?) then let users just run through the download serially; maybe stopping when they hit blocks that have multiple sources and then skipping to a new start block?
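
Something like this is the sort of thing I'm imagining (toy Python; the `sources` map and the stop-when-a-block-has-multiple-sources rule are just guesses at how it might work, not anything from a real client):

    import random

    def serial_from_random_starts(needed, sources):
        # Run serially from a random start block; when the next block is already
        # held by several peers (or isn't needed), skip to a fresh random start.
        order = []
        remaining = set(needed)
        pos = random.choice(sorted(remaining))
        while remaining:
            order.append(pos)
            remaining.discard(pos)
            nxt = pos + 1
            if nxt in remaining and len(sources.get(nxt, ())) <= 1:
                pos = nxt                                # keep running serially
            elif remaining:
                pos = random.choice(sorted(remaining))   # skip to a new start block
        return order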

It's kinda mind-boggling to me to think of all the processes that go into a 'simple' torrent download at the logical level.

If AIs get good enough before I die, then asking them to create simulations of silly things like this will probably keep me happy for all my spare time!




For the completely randomized algorithm, my initial prototype was to always download the first block if available. After that, if fewer than 4 extents (continuous ranges of available bytes) were downloaded locally, randomly choose any available block. (So, we first get the initial block, and 3 random blocks.) If 4 or more extents were available locally, then always try the block after the last downloaded block, if available. (This is to minimize disk seeks.) If the next block isn't available, then the first fallback was to check the list of available blocks against the list of next blocks for all extents available locally, and randomly choose one of those. (This is to choose a block that hopefully can be the start of a bunch of sequential downloads, again minimizing disk seeks.) If the first fallback wasn't available, then the second fallback was to compute the same thing, except for the blocks before the locally available extents rather than the blocks after. (This is to avoid increasing the number of locally available extents if possible.) If the second fallback wasn't available, then the final fallback was to uniformly randomly pick one of the available blocks.
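
A rough sketch of that prototype picker in Python (names and data structures are mine for illustration, not from the actual code; `available` is the set of block indices peers offer, `local_extents` is a list of (start, end) block ranges already on disk with `end` exclusive):

    import random

    def choose_block_prototype(available, local_extents, last_downloaded):
        # Always grab the very first block of the file if we still need it.
        if 0 in available:
            return 0
        # Initial phase: fewer than 4 local extents, pick purely at random.
        if len(local_extents) < 4:
            return random.choice(sorted(available))
        # Preferred case: continue right after the last downloaded block (no disk seek).
        if last_downloaded is not None and last_downloaded + 1 in available:
            return last_downloaded + 1
        # Fallback 1: a block just after some local extent (can start a sequential run).
        after = {end for _start, end in local_extents} & available
        if after:
            return random.choice(sorted(after))
        # Fallback 2: a block just before some local extent (doesn't add a new extent).
        before = {start - 1 for start, _end in local_extents} & available
        if before:
            return random.choice(sorted(before))
        # Final fallback: uniform random over whatever peers offer.
        return random.choice(sorted(available))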

Trying to extend locally available extents if possible was desirable because peers advertised block availability as pairs of <offset, length>, so minimizing the number of extents minimized network message sizes.
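
For example, three extents advertise as just three pairs, and extending an existing extent keeps the count at three (the field layout here is illustrative, not the real wire format):

    def extents_to_advertisement(extents, block_size):
        # Convert (start, end) block ranges into <offset, length> byte pairs.
        return [(start * block_size, (end - start) * block_size)
                for start, end in sorted(extents)]

    # Three extents -> three pairs on the wire.
    print(extents_to_advertisement([(0, 4), (10, 12), (20, 21)], block_size=256 * 1024))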

This initial prototype algorithm (1) minimized disk seeks (after the initial phase of getting the first block and 3 other random blocks) by always downloading the block after the previous download, if possible, and (2) minimized network message size for advertising available extents by extending existing extents if possible.

Unfortunately, in simulation this initial prototype algorithm biased the availability of blocks in rare files in favor of blocks toward the end of the file. Any bias is bad for rapidly spreading rare content, and bias in favor of the end of the file is particularly bad for audio and video file types, where people like to start listening/watching while the file is still being downloaded.

Instead, the algorithm in the initial production implementation was to first check the file extension against a list of extensions likely to be accessed by the user while still downloading (mp3, ogg, mpeg, avi, wma, asf, etc.).

For the case where the file extension indicates the user is unlikely to access the content until the download is finished (the general case algorithm), look at the number of extents (continuous ranges of bytes the user already has). If the number of extents is less than 4, pick any block randomly from the list of blocks that peers were offering for download. If there are 4 or more extents available locally, for each end of each extent available locally, check the block before it and the block after it to see if they're available for download from peers. If this list of available adjacent blocks is non-empty, then randomly choose one of those adjacent blocks for download. If the list of available adjacent blocks is empty, then uniformly randomly choose one of the blocks available from peers.
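
In sketch form (same illustrative data structures as the earlier snippet):

    import random

    def choose_block_general(available, local_extents):
        # Initial phase: fewer than 4 local extents, pick purely at random.
        if len(local_extents) < 4:
            return random.choice(sorted(available))
        # Otherwise prefer a block adjacent to either end of some local extent.
        adjacent = set()
        for start, end in local_extents:
            adjacent.add(start - 1)   # block just before the extent
            adjacent.add(end)         # block just after the extent
        adjacent &= available
        if adjacent:
            return random.choice(sorted(adjacent))
        # Nothing adjacent is on offer: fall back to uniform random.
        return random.choice(sorted(available))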

In the case of file types likely to be viewed while being downloaded, it would download from the front of the file until the download was 50% complete, and then randomly either download the first needed block, or else use the previously described algorithm, with the probability of using the previous (randomized) algorithm increasing as the percentage of the download completed increased. There was also some logic to get the last few chunks of files very early in the download for file formats that required information from a file footer in order to start using them (IIRC, ASF and/or WMA relied on footer information to start playing).
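
Roughly like this, reusing choose_block_general from the previous sketch (the linear ramp from 0 to 1 over the 50-100% range is my guess; all I remember for sure is that the probability of using the randomized picker grew as the download completed, and the extension set here is not the full list):

    import random

    STREAMABLE = {"mp3", "ogg", "mpeg", "avi", "wma", "asf"}   # not exhaustive

    def is_streamable(filename):
        # The extension check that decides which picker to use.
        return filename.rsplit(".", 1)[-1].lower() in STREAMABLE

    def choose_block_streaming(available, local_extents, fraction_done, first_needed):
        # Below 50% complete, always take the earliest block still needed.
        if fraction_done < 0.5:
            return first_needed
        # Above 50%, flip an increasingly biased coin toward the randomized picker.
        p_random = (fraction_done - 0.5) / 0.5    # 0.0 at 50% done, 1.0 at 100%
        if random.random() < p_random:
            return choose_block_general(available, local_extents)
        return first_needed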

Internally, there was also logic to check if a chunk was corrupted (using a Merkle tree built with the Tiger hash algorithm). We would ignore the corrupted chunks when calculating the percentage completed, but would remove corrupted chunks from the list of blocks we needed to download, unless such removal resulted in an empty list of blocks needed for download. In this way, we would avoid re-downloading corrupted blocks unless we had nothing else to do. This avoided the case where one peer had a corrupted block and we just kept re-requesting the same corrupted block from that peer as soon as we detected corruption.

There was some logic to alert the user if too many corrupted blocks were detected and give the user options to stop the download early and delete it, or else to keep downloading it and just live with a corrupted file. I felt there should have been a third option: keep downloading until we had a full-but-corrupt download, retry downloading every corrupt block once, and then re-prompt the user if the file was still corrupt. However, this option would have resulted in more wasted bandwidth and likely more user frustration, with some users hitting "keep trying" repeatedly instead of giving up as soon as it was statistically unlikely they were going to get a non-corrupted download. Indefinite retries without prompting the user were a non-starter due to the amount of bandwidth they would waste.
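
The bookkeeping for that was roughly this shape (sketch only; `verify_chunk` stands in for the check against the Tiger-hash Merkle tree, and the index/set plumbing is invented for illustration):

    def record_chunk(index, data, expected_hash, needed, corrupt, verify_chunk):
        # verify_chunk stands in for hashing the chunk and checking it against
        # the Tiger-hash Merkle tree; not implemented here.
        if verify_chunk(data, expected_hash):
            needed.discard(index)
            corrupt.discard(index)    # a good copy replaces any earlier bad one
        else:
            corrupt.add(index)
            needed.discard(index)     # don't immediately re-request the bad block
        # Only when there is nothing else left do corrupt blocks go back on the
        # needed list for another attempt.
        if not needed and corrupt:
            needed |= corrupt

    # Corrupt chunks also don't count toward the completion percentage, i.e.
    # fraction_done = good_chunks / total_chunks.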



