My two (possibly biased, much like the author's) cents.

- No network-based tests, e.g. a typical fast internet connection (say 100/40 or 50/20 Mbit/s) with a few dozen milliseconds of latency to some server or cloud service. This is of course difficult because such tests tend to be hard to reproduce. For a network-based test, not only wall-clock time is interesting, but total RX/TX traffic as well (see the measurement sketch after this list).

- I'm really surprised at restic's performance. It uses far more CPU than Borg in almost all tests... and Borg is already notoriously inefficient in its CPU usage when looking at object throughput (restic: "fast, efficient"?). I don't mean to bash, I'm just surprised.

- restic's deduplication performance might hint at Rabin fingerprints being a worse rolling hash for content-defined chunking than buzhash, but there might be other issue(s) leading to this result (see the rolling-hash sketch after this list).

- Besides CPU time, peak memory usage would be interesting.
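
On the measurement side, both of these are straightforward to capture on Linux. A minimal sketch (assumptions: Linux, a single active interface named eth0, and that the interface counters aren't polluted by unrelated traffic during the run; the borg invocation at the bottom is just a placeholder):

    import resource
    import subprocess
    import time

    IFACE = "eth0"  # assumed interface name, adjust as needed

    def read_counter(name):
        # Cumulative byte counter from sysfs (Linux only).
        with open(f"/sys/class/net/{IFACE}/statistics/{name}") as f:
            return int(f.read())

    def run_and_measure(cmd):
        # Run cmd; report wall time, children's peak RSS, RX/TX deltas.
        rx0, tx0 = read_counter("rx_bytes"), read_counter("tx_bytes")
        t0 = time.monotonic()
        subprocess.run(cmd, check=True)
        wall = time.monotonic() - t0
        rx = read_counter("rx_bytes") - rx0
        tx = read_counter("tx_bytes") - tx0
        # ru_maxrss is in KiB on Linux and covers waited-for children.
        peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
        print(f"wall {wall:.1f}s, peak RSS {peak} KiB, "
              f"RX {rx} B, TX {tx} B")

    # Placeholder invocation:
    # run_and_measure(["borg", "create", "/path/to/repo::test", "/data"])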
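
And on the chunker point above: both Rabin fingerprints and buzhash are rolling hashes used to find content-defined cut points; buzhash needs only a table lookup, a rotate and two XORs per byte, which is one plausible reason for a CPU-time gap. A toy buzhash chunker in Python (window size, mask and table are made up for illustration, not the parameters either tool actually uses):

    import random

    WINDOW = 4095           # assumed window length
    MASK = (1 << 21) - 1    # cut when hash & MASK == 0 (~2 MiB avg chunks)

    random.seed(0)          # real chunkers use a fixed (or keyed) table
    TABLE = [random.getrandbits(32) for _ in range(256)]

    def rol32(x, n):
        # Rotate a 32-bit value left by n bits.
        n %= 32
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    def cut_points(data):
        # Yield chunk boundaries found by the rolling hash.
        h = 0
        for i, b in enumerate(data):
            h = rol32(h, 1) ^ TABLE[b]
            if i >= WINDOW:
                # Cancel the byte that just left the window: it has
                # been rotated WINDOW times since it was mixed in.
                h ^= rol32(TABLE[data[i - WINDOW]], WINDOW)
            if i >= WINDOW and (h & MASK) == 0:
                yield i + 1

    # e.g. list(cut_points(open("some.file", "rb").read()))

(No minimum/maximum chunk size is enforced here; real chunkers do both.)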

> For instance, file hashes enable users to quickly identify which files in existing backups are changed. They also allow third-party tools to compare files on disks to those in the backups.

To be fair, Borg can calculate a variety of file hashes (MD5, SHA1, SHA2, ...) on the fly with "borg list". There are "borg diff" (to compare two archives) and "borg mount -o versions" as well, though the latter is generally impractical for looking at a large number of archives.
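
Concretely, something like this should pull per-file hashes out of an archive via "borg list" (a sketch; repository path and archive name are placeholders):

    import subprocess

    def file_hashes(repo, archive, algo="sha256"):
        # Ask borg to compute per-file hashes on the fly; the hashes
        # are not stored in the repository itself.
        fmt = "{" + algo + "} {path}{NL}"
        out = subprocess.run(
            ["borg", "list", f"{repo}::{archive}", "--format", fmt],
            check=True, capture_output=True, text=True,
        ).stdout
        hashes = {}
        for line in out.splitlines():
            digest, _, path = line.partition(" ")
            hashes[path] = digest  # empty digest for non-regular files
        return hashes

    # e.g. file_hashes("/path/to/repo", "my-archive")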

> Again, by not computing the file hash helped improve the performance, but at the risk of possible undetected data corruption.

I can't deduce how the last part follows (", but..."). Care to explain?


