
I have spent a lot of time trying out backup solutions, and I feel strongly enough about this one to warn others away from it. As other commenters have mentioned, Duplicati is pretty unstable. Over several years I was never even able to finish the initial backup (less than 2 TB) on my PC. And if you pause an ongoing backup, it never actually resumes.

I'd use restic or duplicacy if you need something that works well both on Linux and Windows.

Duplicati's advantage is that it has a nice web UI, but if the core features don't work, that's not very useful.




I also can't recommend Duplicati. I never got it to work, despite sinking many hours into it and trying different storage backends. Not even local disk worked.

Instead, I'd recommend Arq backup.


It seems hard to find a universal recommendation. I've heard good things about Arq, but it didn't work well for me personally, whereas ironically Duplicati did. I'm currently using Restic.


I've had a good experience with Kopia [0] for over a year. Linux and Windows boxes all writing to the same repository, which is synchronized to both B2 and a portable drive every night. The one thing it lacks that I'd like is error correction, so I store it on a RAID1 btrfs system. ECC is apparently being developed [1], but is currently experimental and I believe requires a new repository.

[0] https://github.com/kopia/kopia

[1] https://github.com/kopia/kopia/pull/2308
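For reference, the nightly sync can be a couple of lines using kopia's "repository sync-to" subcommand; a sketch, assuming a B2 bucket and a mounted portable drive (bucket name, credentials, and paths are placeholders):

  # mirror the connected repository to B2 and to a portable drive
  kopia repository sync-to b2 --bucket=my-backups --key-id="$B2_KEY_ID" --key="$B2_KEY"
  kopia repository sync-to filesystem --path=/mnt/portable/kopia-repo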


I've had issues trying to use multiple different Kopia repos from one machine (basically a dedicated backup server).

With compression landing in the most recent Restic release, I'll probably switch back to that for my servers. Though I'm still keeping Kopia for my clients, where I like having a GUI once in a while.


After hearing a lot of praise for Arq here, I tried it out hoping it would become my new Windows backup solution. (I'm looking for a Linux one too, but Arq doesn't do Linux.) But I was very underwhelmed. The user experience for browsing file versions over time was not really there; if I recall correctly, I could only browse by snapshot. And it was extremely slow for just a few gigabytes. The backup process didn't inspire confidence: I was never sure if something had interrupted it or what the status was.


I also recommend Arq, at least for Windows (I have not tried it on Mac). I'm using Arq 7 cloud (something like $60 a year) on a Windows desktop. The software is straightforward, generally stays out of your way, gives alerts when needed, is reliable, saves versions similar to Time Machine, and is fairly configurable. Backups are end-to-end encrypted and can be saved to Arq's own cloud service, any local media, or most other cloud services. When starting out I had lots of permission errors on a small bunch of files, but I was able to fix them by either resetting permissions or excluding the files (e.g., caches). I think these are the kinds of problems you can expect on Windows when using shadow copy; no reflection on Arq.


Arq on Windows just stalled forever for me and hadn't completed anything after 2-3 weeks.


Same for Duplicati running in Docker, as well as Time Machine on macOS. Due to this thread I've swapped to Restic/Rclone.


I have had similar experiences. I could not get a non-corrupt backup from one machine; it would repeatedly ask me to regenerate the local database from remote, which never succeeded. Oddly, another machine never seemed to have an issue, but that's not an argument in favor of using the software. It is possible there are "safe" versions, but I had no way to identify them (all the releases I used were linked from the homepage).


I had a similar experience with Duplicati. I attempted a 2TB backup of my NAS to cloud storage, and it got to ~500GB and then just hung there.

I switched to restic and recommend it over Duplicati.


Just another data point... I've been using it against 1TB, stored encrypted on Backblaze B2, for about a year and a half. I've tested restoring, and so far it's been very stable.


Just to balance this: I use Duplicati for both my web server, where I host client websites, and my personal home NAS.

I've had to use it to restore multiple times and have never had an issue with it. It's saved my ass multiple times. It's always been set-it-and-forget-it until I remember I need it.


Never tried Duplicati, but restic + B2 has been great as "a different choice", and for my use case of backing up a variety of OSes (Windows, Mac, and different Linux distros, anyway), it's worked well.


Restic and B2 "just work". Works how I expect it to, and restores what I expect it to. Not amazingly fast in backups or restorations, but it works reliable for me. I have restic running on everything from workstations and laptops, (~200G each), to servers (500G-2TB) to a mini 'data hoard' (25TB+) level of backups, and its been doing great on each.

I did not like and could not trust duplicati to finish backups or restore from them.


I'll throw a +1 in for Duplicacy too. I think I'm backing up something like 8TB to Wasabi using it and it's excellent in terms of de-duplication.


I had a very similar experience with Duplicati on a backup set that was small disk-space-wise but had a very large number of files, which bloated the sqlite data store.

I use Urbackup to back up Windows and Linux hosts to a server on my home network and then use Borg to back that up for DR. I'm currently in the process of testing Restic now that it has compression and may switch Borg out for that.


What does restic offer that borg doesn't?

I've been using borg for a while (successfully, with Vorta as the UI on Mac) and I'm curious to learn if there is something I've been missing that restic provides.


You probably aren't missing anything unless you are doing ridiculously large amounts of backups. I'm using Borg as a disaster recovery backup of a backup server.

Borg has issues properly maintaining the size of its local cache, and that results in RIDICULOUS amounts of RAM being consumed at runtime unless I manually clear the cache out periodically. It also pulls in some Python package for something FUSE-related that constantly vomits a warning to the console on each run on Ubuntu.
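The manual clearing is nothing fancy; a sketch, assuming Borg's default cache location (overridable via BORG_CACHE_DIR):

  # drop borg's local cache; it is rebuilt (slowly) on the next run
  rm -rf ~/.cache/borg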

I'm still not 100% sold on migrating to Restic. It seems to not suffer the same cache or FUSE problem (since it isn't Python) so far but the overall archive sizes seem to be a bit larger than Borg and I have to pay for every byte of storage I consume.


At BorgBase.com the largest Borg repo we host is about 70 TB. Still manageable with server-side pruning. Mostly larger files from what the user told me.

We just added support for Restic too. Using Nginx with HTTP/2. Fastest combination I've seen so far. So very excited to offer two great options now.


The main thing I was going to mention was deletion but it looks like borg has that now.


How strange. I have been backing up my own computers (4) and those of my family (another 3) using Duplicati for over three years now, and aside from the very rare full-body derp that required a complete dump of the backup profile (once) and a rebuild of the remote backup (twice), it’s been working flawlessly. I do test restores of randomly chosen files at least once a year, and have never had an issue.

Granted, the backup itself errors out on in-use files (and just proceeds to the next file), but show me a backup program that doesn’t. Open file handles make backing up rather hard for anything that needs to obey the underlying operating system.


I've been trying out Duplicati 2 for about a month now, and it has been working flawlessly for me, except for occasional time-outs of the web UI. I only back up local directories, and the destinations I've tried include an external drive over USB, Google Drive, and an SSH connection.

I'm using it to back up a Firefox profile while I'm using Firefox, and it backed up active files even as they were being written to. I'm also using it to back up a VeraCrypt container file (a single 24GB file), and incremental backups worked quite well there too.

Thanks for the words of advice, I will keep testing longer before I make the switch.


Agreed, Duplicati is quite immature.

I've looked around quite a bit too, but did you actually use restic and duplicacy?

They both ate RAM quite heavily; they froze the machine by exhausting its RAM on data sets that weren't that huge, and I stopped using them a year or so ago.

I've come to the conclusion to use Borg and zfs as backup solutions (it's better to run multiple reliable, independent implementations). The latter is quite fast because, being the file system itself, it already knows what changed for each incremental backup, unlike other utilities that need to scan the entire data set to figure out what changed since the last run.

You can run a 1GB-memory instance and attach HDD-based block storage (far cheaper; available on Vultr or AWS, for example) for a cheap zfs remote target. Ubuntu gets zfs running easily by simply installing the zfsutils-linux package; see the sketch below.
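A rough sketch of that setup, with device, host, and dataset names as placeholders:

  # one-off, on the remote instance
  apt install zfsutils-linux
  zpool create backup /dev/vdb

  # incremental send from the source machine
  zfs snapshot tank/data@2022-10-08
  zfs send -i tank/data@2022-10-07 tank/data@2022-10-08 | ssh backup-host zfs receive backup/data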

If you need large space, rsync.net gives you a zfs target at $0.015/GB, but with a 4TB minimum commitment. It's also a good target for Borg at the same price, with a 100GB minimum yearly commitment. Hetzner Storage Box and BorgBase seem good for that too.


If you use restic/kopia, how are you managing scheduling and failure/success reporting together?

That's one thing I can't seem to quite figure out with those solutions. I know there are scripts out there (or I could try my own), but that seems error-prone and could result in failed backups.


You could use one of those services that expect a regular http heartbeat. I'm personally using uptimerobot for that. Within a .bat or .sh file, add a

  restic [...] && curl <heartbeat-url>
and you'll eventually get notified if the backup job fails too often.
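For healthchecks-style services, a slightly fuller sketch (the URL is a placeholder; appending /fail to the ping URL signals an explicit failure):

  #!/bin/sh
  HC_URL="https://hc-ping.com/your-uuid-here"
  if restic backup /home; then
    curl -fsS "$HC_URL"         # success ping
  else
    curl -fsS "$HC_URL/fail"    # explicit failure ping
  fi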


I've tinkered with that using healthchecks, but I don't really trust that I know what I'm doing when setting it up.

Restic is also confusing to me when it comes to forgetting snapshots and doing cleanup; I don't understand why that isn't run as part of the backup (or is it? The docs aren't clear).


No, you have to run "restic forget" with the policy you want (keep last X, keep monthly Y, etc.), followed by a "restic prune". Or you can pass "--prune" to the "forget" command.
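For example (the retention values here are arbitrary):

  # keep the last 7 snapshots plus 12 monthlies, then delete unreferenced data
  restic forget --keep-last 7 --keep-monthly 12 --prune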

You don't always want to forget/prune snapshots, especially if you're using a cloud service like B2. Pruning can easily cost you more than the actual storage does if you're not careful.

See here: https://www.seanh.cc/2022/04/03/restic/#maintaining-your-bac...

and

https://kdecherf.com/blog/2018/12/28/restic-and-backblaze-b2...


Thanks for the links! That's helpful, the part about B2 makes sense.


Yeah I had to invent my own.

On Linux I used cron + email. You can set up postfix so that it relays through your personal Gmail or whatever; then you can do echo "message" | mail -s "subject" you@example.com to send an email. The big email providers always allow you to send an email as yourself to yourself.
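Roughly, as a sketch (the script path and address are placeholders):

  # crontab entry: run the backup at 03:00 and mail me the output
  0 3 * * * /usr/local/bin/backup.sh 2>&1 | mail -s "nightly backup" you@example.com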

On Windows, I used the native task scheduler (with various triggers like time, lock workstation, idle, and so on) and sent an email using PowerShell, which can also send email over SMTP.


Same here. I have a wrapper script that runs restic commands. Whether I run it in a console or via crontab, stdout/stderr is logged to a file and (in the crontab case) emailed to me. Nothing fancy yet, but it works and I am satisfied. Still pretty new to restic though. In another life I had a disaster recovery role and was using DLT for backup/restore of all the things, so...


yeah, I scripted my backup jobs, and use good old email notifications to report.

I expect an email every day. If I don't receive one, I know there's a problem with email delivery.


I read that Duplicati has been in beta for years now, and that really seems discouraging. Restic looks great, but it's also only at 0.14 at the moment. Would you consider restic a stable product, despite the version number?


Restic's versioning doesn't denote that it's not production-ready: it absolutely is. Stable, reliable and developed thoughtfully, with data integrity and security in mind. I highly recommend it.


I've used restic for years now without issue. I'd definitely consider it stable.

I started with duplicacy and moved to restic.


Could you provide your reasoning for the switch? I've had good enough luck with duplicacy but I'm curious about it vs restic now that restic supports compression.


To me, it shows "beta" and "not supported" options... so it's hard to choose :)


Yes, it's stable. They even added compression this year. We just added support for Restic on BorgBase.com. Will have more user feedback in a few months, but first tests and benchmarks are pretty encouraging.


Restic is rock solid. I have backed up servers with TBs of data using it. It has never failed.

Encryption is properly implemented.


I've been using it since 2018, no issues so far.


Even this late, a warning has to be issued: restic still has serious problems writing to Samba shares. To the authors' credit, the manual clearly tells you about it:

On Linux, storing the backup repository on a CIFS (SMB) share is not recommended due to compatibility issues.

There seems to be some deeper system-level problem with Go concurrency:

https://github.com/restic/restic/issues/2659


I agree. I really liked the interface and gave it a go at least 3 or 4 times, and got burned every single time with errors or random issues.


Duplicacy seems to upload every chunk as a separate object/file, which is great for deduplication but bad for your cloud bill (S3 providers usually charge per PUT request). There's a reason everybody else packs chunks together.


I had a mixed experience. I've been able to successfully restore backups (the most important thing), but I frequently had to fix database issues, which makes the backup less seamless (perhaps the second most important thing).


Duplicacy has worked well for several years on both my wife's and mother's laptops. Doesn't require much work and just keeps operating.


Adding to the choir. I like the web UI of Duplicati but found it buggy and unstable, which are definitely not things you want in a backup system.


In my experience, Duplicacy is the most stable backup software of the Dupli* family. I won't say it's rock solid, but it mostly works.


Agree totally with this. It's a hot mess tbh, and very unreliable. As suggested, restic (with autorestic as a wrapper) is a great replacement.


It's hard to see restic as a Duplicati replacement when there's no official documentation about backing up data via SFTP on Windows.


What do you mean? It’s just “sftp” in front of the repository name!

And SFTP is SFTP, regardless of the OS.
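For reference, the repository syntax, with host and paths as placeholders (restic's sftp backend shells out to a local ssh binary, which recent Windows 10/11 installs include via the optional OpenSSH client):

  restic -r sftp:user@host:/srv/restic-repo init
  restic -r sftp:user@host:/srv/restic-repo backup C:\Users\me\Documents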


What "SFTP" do I have on Windows?


For a client, WinSCP, among others.


And how do I use it with restic? This is what I'm talking about.


I too had huge problems with Duplicati restoring. Switched to Borg, using Vorta as the GUI and am much happier.


I did use it. It worked 90% of the time. I backed up to OneDrive. I just ended up getting Veeam.



