Snatch: A simple and fast download accelerator, in Rust (github.com/derniercri)
111 points by nukifw on Jan 21, 2017 | 29 comments



My biggest pet peeve with projects is when the feature list is actually a mix of features and things that are "upcoming"/"to be supported." This feature list isn't really a feature list either: it's just "simple" and "fast" (but fast is just 'written in a new exciting programming language'). Interrupting and resuming downloads is the first actual feature, but that's "soon."

Feature lists should be feature lists. On open source projects especially, I've seen "soon" take weeks or months. Move things that aren't features yet into an "upcoming" or "planned" block, or make an issue, but don't list something as a feature when it isn't implemented.


I think you're being overly critical of alpha software.

Whether they mix the feature list with the roadmap or keep them separate doesn't really matter. I assume the incomplete feature list is precisely the reason for the alpha label.

I may be reading too much between the lines, but your comment sounds like "I don't want to hear about software until it is finished. If it's not ready for prime time, keep it on your hard drive."


I'm sorry, but I think the parent is quite correct here. The headline is hype-based: the README shows no benchmarks or comparisons to back up its claimed speed. Mentioning an upcoming feature in the headline isn't ideal either. It's still okay, as it's open source and people can do what they want, but to be taken seriously it needs to be as realistic and truthful as possible.


If you are serious, you can alternatively use aria2: https://aria2.github.io/. Aria2 has been around for a long time and is quite robust and feature-complete. It makes a compelling replacement for curl and wget.


Another accelerator is Axel [0]. While aria2 supports all kinds of protocols, Axel supports HTTP, HTTPS, FTP, and FTPS. Both are excellent, and I've used aria2 for a very long time.

[0] https://wilmer.gaa.st/main.php/axel.html


I love axel. Whenever I have issues with video streaming because the website throttles each client connection, I extract the media src from the page, run axel -n 32 <url>, and saturate my downlink (200 Mbit/s+).
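
For anyone curious what tools like axel (and presumably Snatch) do under the hood: split the file into byte ranges and fetch them in parallel. Here is a minimal, hypothetical sketch in Rust. It assumes the reqwest crate (blocking feature) and a server that honours Range headers; the URL, file size, and connection count are placeholders, and a real tool would discover the size with a HEAD request and handle retries and streaming to disk.

    use std::thread;

    /// Fetch `total_len` bytes from `url` using `n` parallel HTTP range requests.
    fn fetch_segmented(url: &str, total_len: u64, n: u64) -> Vec<u8> {
        let chunk = total_len / n;
        let handles: Vec<_> = (0..n)
            .map(|i| {
                let url = url.to_string();
                let start = i * chunk;
                // The last segment absorbs the remainder of the division.
                let end = if i == n - 1 { total_len - 1 } else { start + chunk - 1 };
                thread::spawn(move || {
                    let client = reqwest::blocking::Client::new();
                    client
                        .get(url.as_str())
                        .header(reqwest::header::RANGE, format!("bytes={}-{}", start, end))
                        .send()
                        .and_then(|resp| resp.bytes())
                        .map(|bytes| bytes.to_vec())
                        .expect("range request failed")
                })
            })
            .collect();

        // Segments join back in order, so reassembly is just concatenation.
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    }

    fn main() {
        // Hypothetical usage: a 1 MiB file over 4 connections.
        let data = fetch_segmented("https://example.com/file.bin", 1 << 20, 4);
        println!("downloaded {} bytes", data.len());
    }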


Aria2 can't manage recursive downloading. Wget can, but wget does not come with a multithreaded option.

I'm still looking for something that can do both.


I found out about this a week ago, super useful for downloading from multiple very slow mirrors at the same time.


Downloading from several mirrors at once makes sense, but using "download accelerators" to cheat on TCP congestion control is just wrong. Some mirrors will even ban you for making more than 4 connections at once.


Hi! Actually, this is an idea we discussed a few days ago, and I agree with it. As I mention in another comment, Snatch is a side project that was created for a presentation on Rust. Thank you for this comment - we will work on this idea soon :-)


Have you considered adding support for download-accelerating strategies like https://news.ycombinator.com/item?id=11842517?


Indeed, but TCP's default congestion control might be suboptimal on some pipes.

Maybe an option to choose an alternative controller via socket options (Linux and Windows offer alternatives) would make sense in some situations.
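
On Linux that per-socket option already exists. A hypothetical sketch in Rust, assuming the libc crate: TCP_CONGESTION selects a controller by name, and the call fails unless the kernel has that controller (e.g. cubic, bbr) available.

    use std::net::TcpStream;
    use std::os::unix::io::AsRawFd;

    /// Select a TCP congestion-control algorithm for one socket (Linux only).
    fn set_congestion_control(stream: &TcpStream, algo: &str) -> std::io::Result<()> {
        let ret = unsafe {
            libc::setsockopt(
                stream.as_raw_fd(),
                libc::IPPROTO_TCP,
                libc::TCP_CONGESTION,
                algo.as_ptr() as *const libc::c_void,
                algo.len() as libc::socklen_t,
            )
        };
        if ret != 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        let stream = TcpStream::connect("example.com:80")?;
        // Fails unless the kernel has the requested controller loaded.
        set_congestion_control(&stream, "bbr")?;
        Ok(())
    }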


> TCP's default congestion control might be suboptimal on some pipes

The default TCP congestion control is Reno, but Linux uses CUBIC, an increased initial CWND, and all the other modern modifications. CDNs have their own proprietary congestion controls that are arguably better, but Google has already pushed its own congestion control (BBR) into Linux, so it is just a matter of time until it becomes the new default. There is no reason to tamper with these settings unless you are operating Netflix/YouTube, some sort of CDN, or an unusual (satellite) link.

> Maybe an option to choose an alternative controller via socket options (Linux and Windows offer alternatives) would make sense in some situations.

Only the server's congestion control matters. The client simply acknowledges whatever it receives as soon as possible and waits for more data.


> Only the server's congestion control matters. The client simply acknowledges whatever it receives as soon as possible and waits for more data.

To be clear, that's true when the server is the only one transmitting data (i.e., for a download).


Some users have more than one external IPv4/IPv6 address (one for each internet connection, if they have multiple). In order to use all internet connections for the download, they would have to initiate multiple TCP sockets.

What kind of logic should this app use to determine how many external addresses are actually available, in the case of IPv4? As you say, it is more "TCP-fair" to have only one TCP connection per client/server relationship in normal cases.

Just some random thoughts. Apologies if it's difficult to follow. I can reply to any questions if anything is confusing.
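
To make the multiple-address idea concrete: the usual trick is to bind each outgoing socket to a specific local address before connecting, one per uplink. A hypothetical Rust sketch using the socket2 crate - the local and remote addresses are placeholders, and enumerating which local addresses are actually routable is exactly the hard part raised above.

    use std::net::{SocketAddr, TcpStream};

    use socket2::{Domain, Socket, Type};

    /// Open a TCP connection to `remote`, forcing it out through the
    /// interface that owns `local` (port 0 lets the OS pick a source port).
    fn connect_from(local: SocketAddr, remote: SocketAddr) -> std::io::Result<TcpStream> {
        let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?;
        socket.bind(&local.into())?;     // choose the source address/interface
        socket.connect(&remote.into())?; // then connect as usual
        Ok(socket.into())
    }

    fn main() -> std::io::Result<()> {
        // Hypothetical: two uplinks with these local addresses.
        let remote: SocketAddr = "93.184.216.34:80".parse().unwrap();
        for local in ["192.0.2.10:0", "198.51.100.20:0"] {
            let stream = connect_from(local.parse().unwrap(), remote)?;
            println!("connected from {:?}", stream.local_addr()?);
        }
        Ok(())
    }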


> In order to use all internet connections for the download, they would have to initiate multiple TCP sockets.

Opening one connection per interface is still a violation of TCP design. Look into the MPTCP RFCs: MPTCP is specifically designed to use multiple connections as if you had only one, and the developers of the standard made sure you don't get any advantage over someone using a single connection.

Two connections over two different interfaces will meet somewhere at the bottleneck near the mirror, so you still have the advantage of multiple TCP connections.

Sure, MPTCP is not ready yet, but multipath QUIC will be deployed by the end of this year, so all your Chrome downloads from Google Drive will use multiple connections.


Google does do the banning, but they also limit downloads to some slow speed. I have a 1 Gbps server which I use to download things, so I often open up to 32 connections to their servers just to download at a normal rate. They have the bandwidth; why they limit the speed is beyond me. You can see this on the googlevideo.com domain for Google Drive transcoded videos, though not for YouTube.


> Fast: written in a new exciting programing language ;

IMO change this description, it's rather strange...


Because Rust!


Wow, do download accelerators still exist? I remember using them back in the day on my 56k modem. I guess they still make sense if your home internet is faster than what the server allows per connection. Usually you don't need one anyway, because most things are already fast enough today.


Hi, I am Antonin, a maintainer of Snatch at DernierCri. Snatch is actually a (cool, to me) side project that I started with a colleague for a presentation on Rust. After the presentation, we decided to continue the project just for fun. We know that we have a lot to improve in Snatch; we just wanted to share it here ;-)


Some video streaming sites do the whole per-connection limiting thing, which is annoying when you want to download multiple things to watch later (especially really long playlists).

Normally I'd say that accelerators are abusive, but being forced to download hours of video in near-realtime is such a pisstake that I'm willing to make an exception (the alternative is to download multiple videos in parallel, which is even more abusive, yet less frowned upon).


They're useful for adding resume support and some other features. However, they're also a huge target for malware: potential to tamper with the binaries, discovery of traffic patterns, etc. And they are absolutely clunky as third-party apps launched from the browser.

I do use a Chrome extension, though: Chrono. It's been very good.


I still keep a copy of FlashGet 1.73 around; it comes in handy in certain situations, like queuing up a bunch of downloads without hammering a server. It also comes with a site explorer that has helped out a few times.


I usually use it when I download large files from my server overseas.


I remember using GetRight back in the 90s.


Prozilla[1] is still one of the best and tiniest download managers I've ever seen. Blazingly fast. Squeezes as much speed as allowed by your ISP if throttled.

[1] https://github.com/totosugito/prozilla-2.0.4


Do you have a link to the original source repository/project page?


You could use prettier progress bar symbols in terminals that support Unicode.

See https://en.wikipedia.org/wiki/List_of_Unicode_characters#Blo...
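
For example, the Block Elements range gives eight partial-width cells, so a bar can advance in 1/8-character steps. A minimal sketch (not Snatch's actual progress code):

    /// Render `fraction` (0.0..=1.0) as a `width`-cell bar using Unicode
    /// Block Elements; the eight partial blocks give sub-cell resolution.
    fn render_bar(fraction: f64, width: usize) -> String {
        const PARTIAL: [char; 8] = ['▏', '▎', '▍', '▌', '▋', '▊', '▉', '█'];
        let cells = fraction.clamp(0.0, 1.0) * width as f64;
        let full = cells as usize;
        let eighths = ((cells - full as f64) * 8.0) as usize;
        let mut bar = "█".repeat(full);
        if eighths > 0 && full < width {
            bar.push(PARTIAL[eighths - 1]);
        }
        let pad = width - bar.chars().count();
        bar + &" ".repeat(pad)
    }

    fn main() {
        for pct in [0, 13, 42, 87, 100] {
            println!("[{}] {:>3}%", render_bar(pct as f64 / 100.0, 30), pct);
        }
    }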



