Ending support for Dropbox syncing to drives with certain uncommon file systems (dropboxforum.com)
394 points by ronjouch on Aug 10, 2018 | 415 comments



It's not encrypted ext4, it's ecryptfs (which acts as a separate layer entirely, creating a virtual encrypted filesystem on top of ext4) - probably doesn't support some inotify feature or some extended attributes that aren't set correctly when used through ecryptfs.

If instead you used dmcrypt and encrypted the whole device or partition you'd probably have no issue as it looks just like any other EXT4 FS to the system.

Would be a lot better if they specified what features they need the underlying FS to support.
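
For reference, a minimal sketch of the dmcrypt/LUKS route (device name and mount point are placeholders; double-check the cryptsetup man page before running anything):

    # encrypt a spare partition and open it as a mapped device
    cryptsetup luksFormat /dev/sdXn
    cryptsetup open /dev/sdXn cryptdata

    # inside the container it's an ordinary ext4 filesystem
    mkfs.ext4 /dev/mapper/cryptdata
    mount /dev/mapper/cryptdata /mnt/data

Anything pointed at /mnt/data, Dropbox included, only ever sees plain ext4; the encryption happens a layer below.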


Home directory encryption via ecryptfs is now the default in most Ubuntu variants, and it would be insane not to attempt to support it.

That said, given that my Dropbox subscription just renewed (and that Linux is pretty much the only reason I'm still using it, since there is no OneDrive client I can rely on), I am really sad this seems to be a future direction for Dropbox.


On the contrary, Ubuntu 18.04 dropped the home directory encryption entirely and now only supports full disk encryption.


Indeed: https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes#Other_base...

> "The installer no longer offers the encrypted home option using ecryptfs-utils. It is recommended to use full-disk encryption instead for this release."


There was also a series of issues found with the encryption used by ecryptfs:

https://defuse.ca/audits/ecryptfs.htm

Full-disk, block-based encryption (as opposed to this file-based approach) is really the way to go if you want a good security and performance margin.


Author of eCryptfs and EXT4 encryption chiming in.

"Series of issues?" Really? There were 3, and they were all scored "Low" by the authors for exploitability and security impact.

That said, I generally agree FDE is the way to go if your platform's constraints allow for it -- but only for security. Native EXT4 encryption will give you equal or better performance than FDE primarily because the file system metadata isn't encrypted. Which isn't to say that's a good tradeoff -- it's just the nature of the beast.

Because of performance and functionality issues (file name length, possibility of page cache inconsistency) eCryptfs shouldn't be used for anything any more.


Ah, you're right, I had it confused with a similar analysis on EncFS which had more significantly damaging findings.

https://defuse.ca/audits/encfs.htm

I wasn't even aware Ext4 had native encryption support though! I'll definitely have to give that a go. Thanks for the tip.
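
For anyone else who wants to try it, the old-school e4crypt workflow is roughly the following (a sketch from memory; you need a reasonably recent kernel and e2fsprogs, and newer systems also ship a friendlier fscrypt tool):

    # enable the feature on the filesystem
    tune2fs -O encrypt /dev/sdXn

    # add a passphrase-derived key to the keyring; this prints a key descriptor
    e4crypt add_key

    # apply that key's policy to an (empty) directory; new files in it are encrypted
    e4crypt set_policy <descriptor> /path/to/dir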


According to Michael Halcrow's LinkedIn [1], he was heavily involved on the EXT4 encryption:

"I was also the project lead for encryption in EXT4, which is now available as the mechanism implementing file-based encryption on Android."

[1] https://www.linkedin.com/in/michael-halcrow-1880601


There is (or at least _was_ when I last tried FDE) one major fly in this ointment: you lose the ability to reboot your machine remotely. Upon restarting, it requires that you enter the password, which you can only do from a local keyboard.


On Debian-based systems including Ubuntu, they offer an initrd-based Dropbear (a lightweight SSH server) which can be used to connect and authenticate. It involved a bit of custom scripting last I checked, but it's a possible solution if that's a requirement for you.
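
In case it helps anyone, the rough shape of it on current Debian/Ubuntu is something like this (package and path names from memory, so treat it as approximate):

    # on the machine with the encrypted root
    sudo apt install dropbear-initramfs
    # add the public key you'll SSH in with
    echo 'ssh-ed25519 AAAA... you@laptop' | sudo tee -a /etc/dropbear-initramfs/authorized_keys
    sudo update-initramfs -u

    # after a reboot, from another machine
    ssh root@the-box
    cryptroot-unlock    # prompts for the LUKS passphrase, then boot continues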


For such systems I consider the primary root file-system to be part of the 'bootloader'. Everything that I want to keep actually secure is inside of the encrypted PVs backing LVM volumes.

Yes, this presents a security risk in that someone could (offhand I think the term is 'evil maid'?) attack the root filesystem, but they could still have done that to the bootloader anyway.

Remote interaction is then required to bring the VMs on that system up.


They could not have done that to the bootloader because of SecureBoot.


With secureboot on Linux you can secure as much or as little as you want. On my system, grub isn't even safe, only the shim that loads grub is secure. But I could set it up so the kernel is secure, have the kernel only load a verified initrd, and then have the initrd check the root filesystem.

I don't, but secureboot can detect changes to the root filesystem if you want it to. I think this generally requires setting the rootfs to mount readonly.


You could put a daemon on the initramfs (say, ssh) that allows you to remotely provide a key/password for decrypting the disk.

This would certainly be more work to set up than vanilla full-disk encryption, though.


Shit, so now I can't turn on my desktop remotely and supply the password through SSH later? That's a huge inconvenience.


> Shit, so now I can't turn on my desktop remotely and supply the password through SSH later? That's a huge inconvenience.

a) This hasn't ceased to work for your desktop - you can continue to do this through the upgrade cycle, as it would only affect new installs (safely converting native fs to dmcrypt isn't possible, AFAIK)

b) during an installation you could eschew FDE and opt for a PV, as I do, that you put your selected, secure LVs onto. I use Debian in preference to Ubuntu, but I'm sure they're materially identical in this case -- two physical partitions -- /boot and not-boot. /boot isn't encrypted in my case, but / and /home (and a couple of others, though not swap of course) are LVM2 volumes sitting on top of the encrypted 'rest of the disk' partition. If you have /home only, with the noauto option in fstab, pointing to an LV on the crypted partition, you could continue to do what you're doing, with the same reduced security confidence.
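
To make that layout concrete, it's roughly the following (device names are placeholders, and the installer can do the equivalent for you):

    # /dev/sda1 -> /boot, unencrypted
    # /dev/sda2 -> LUKS container holding an LVM physical volume
    cryptsetup luksFormat /dev/sda2
    cryptsetup open /dev/sda2 cryptlvm

    pvcreate /dev/mapper/cryptlvm
    vgcreate vg0 /dev/mapper/cryptlvm
    lvcreate -L 30G -n root vg0
    lvcreate -l 100%FREE -n home vg0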


Debian and Ubuntu allow running Dropbear in initramfs to prompt for the FDE password


That's fantastic to know, thank you. Do you know if it's complicated to set up?


Though unfortunately if you want to dual-boot, you have to manually set up the encryption :/ As in https://askubuntu.com/a/293029/25639 – I do wish the installer could do that instead of asking dual-booters to compromise.


If it just renewed and this change breaks your use case you can probably request a refund since your system is no longer supported.


What issues have you found with this Onedrive client[1]? I've been using it for over a year without any hiccups beyond the odd duplicate file

[1] https://skilion.github.io/onedrive/


I second this. I created my own systemd service so it runs when I’m not logged in and it’s been as reliable as Dropbox. On the one occasion (in 3 years) it did screw up, I simply ended up with two copies of the same file.
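
For anyone wanting to do the same, the unit is nothing fancy; roughly this (a sketch; the binary path and flags depend on how you installed the client, so adjust to taste):

    # /etc/systemd/system/onedrive@.service
    [Unit]
    Description=OneDrive sync for %i
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/onedrive --monitor
    User=%i
    Restart=on-failure
    RestartSec=30

    [Install]
    WantedBy=multi-user.target

Then enable it for your user with something like: systemctl enable --now onedrive@yourname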


I get a terabyte free with onedrive, because of the office subscription. I wish I had a good way to utilize it on Linux.


Out of curiosity, what happens if your office subscription ends? Obviously you won't be able to sync but does onedrive have some sort of grace period to let you download your stuff and at what point would it go and completely delete your data?


If it's the business version, you have full access for 30 days, admins have access for 90 days, then it's deprovisioned (source: https://support.office.com/en-us/article/What-happens-to-my-...)

On the O365 Personal/Home or direct OneDrive plans, it looks like your data remains available but read-only for 3 months, then your account is "frozen." You can do a one-time 30-day "unfreeze" to get access to download/delete (to get under your quota), then it gets "frozen" again. Eventually it'll be deleted, but I don't see documentation for how long that takes. (source: https://support.office.com/en-us/article/what-does-it-mean-w... linked from https://support.office.com/en-us/article/OneDrive-storage-pl...)


I believe the modus operandi for most personal cloud storage providers here is to make the entire dataset read- and delete-only until the user gets enough storage space (by upgrading or deleting files) to be able to write/upload again. That, plus some incessant push messages to your client devices about upgrading, which is annoying but expected.


Office stops working if your subscription ends so most likely they will renew their subscription.


Use a flow to replicate your onedrive in Azure Files, then mount it to your Linux box: https://flow.microsoft.com/en-us/galleries/public/templates/... AND https://docs.microsoft.com/en-us/azure/storage/files/storage...
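
For the second half, the Azure Files share mounts over SMB roughly like this (storage account, share name and key are placeholders; see the linked docs for the current option list):

    sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/onedrive-mirror \
        -o vers=3.0,username=mystorageacct,password=<storage-account-key>,serverino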


That would be very much not free


rclone mount
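
i.e. something along these lines, assuming you've already created a OneDrive remote named "onedrive" via rclone config:

    rclone mount onedrive: ~/OneDrive --vfs-cache-mode writes --daemon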


I use Google drive, but the Linux support was awful and I started using insync. It's about $25 for a lifetime license and I love it. I run Ubuntu with encrypted home and have for years. It works great and I highly recommend it, and I don't work there, just a happy user. The Nautilus integration is decent too.


Not trying to rain on your parade, I hope it fills your use case.

However, my IT company onboarded a media-intense client with insync and it has been nothing but a nightmare. There are hardcoded limits (like only syncing two files at a time, IIRC) that make it effectively useless for anything beyond small, personal use. It's cheap and you get what you pay for.


Not sure what your exact role is but is there a possibility to move this client from insync to rclone?

rclone is rsync for cloud storage services -- https://rclone.org/


I seldom use it, so I can't say how well it runs, but since GNOME version 3.22 (if I'm not wrong) Google Drive is integrated with Nautilus (the file manager) via GVfs/GIO, and the couple of times I used it to share a file it worked flawlessly.

YMMV ^__^;


You could use Nextcloud instead.


I recently switched to Nextcloud because Dropbox's Android app is terrible, and like it a lot. It does everything Dropbox does, I get as much space as my server has, and the apps are all better than Dropbox.

I also use Syncthing, which I am very impressed by, it works really well.


Security concerns aside, I found home directory encryption to have awful performance. It also caused issues with file name length.


If you feel comfortable with Google, Gdrive has a great client in the form of OverGrive.


And there’s always rclone. Combine that with whatever encryption you want and you’re set.
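
For example, rclone's crypt backend can wrap any other remote; the config ends up looking roughly like this (remote names are made up, and rclone config stores the passwords obscured):

    # ~/.config/rclone/rclone.conf
    [gdrive]
    type = drive
    ...

    [secret]
    type = crypt
    remote = gdrive:encrypted
    password = <obscured>

Then you sync through the encrypted remote, e.g. rclone sync ~/Documents secret:Documents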


Just put /home on a separate partition and please the BOFH gods.


I'm surprised to hear anyone is still mounting home and root on the same partition, frankly. It's made my life sooo much easier since I started doing it ages ago.


The (supposed) Dropbox engineer refers to Extended Attributes.

Maybe they’re doing uncommon stuff with ’em that uncommon filesystems cannot cope with... ;)


> "It's not encrypted ext4, it's ecryptfs (which acts as a separate layer entirely, creating a virtual encrypted filesystem on top of ext4) - probably doesn't support some inotify feature or some extended attributes that aren't set correctly when used through ecryptfs."

Thanks, I updated the title.


What do you think are the chances that Dropbox “just works” on a file system on which it isn’t regularly tested but which nominally has the right feature set? I’d go with “not good”. :-)


This whole article read (to me!) as them wanting to reduce test load, and probably workarounds in their codebase.


The backing files for your ecryptfs will still get synced though, correct?


If you point it at the backing files and they're stored on a supported FS, I see no reason why they wouldn't.


For those interested in open source alternatives, syncthing, NextCloud (fork of ownCloud), and Seafile are some of the big names in this space, as others have mentioned. Personally I'm on syncthing. It's one of the best pieces of software I've ever used. My only complaint is it's not really instantaneous for me. In theory it works with inotify, but I've never quite been able to get it to work. I'm confident it will eventually, though. It's a fairly new feature if I'm not mistaken. For now, if I really just need to get a file from point A to point B right now, I use File Pizza.
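
On the inotify point: if anyone else hits it, the usual culprit on Linux is the default inotify watch limit being too low for big folders; raising it is a one-liner, though I can't promise that's the issue in every case:

    echo fs.inotify.max_user_watches=204800 | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p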


I'd definitely throw Keybase in there as well. To me it has the easiest experience being cross platform, integrated with the filesystem, end-to-end encrypted, and having solved the identity proof piece.


Keybase is nice, but it isn't a Dropbox replacement. Your files aren't accessible offline; that's kind of a core part of a file sync instead of a file store.


I take your point that they use different mechanisms to provide access to files which are not fully interchangeable. With that said, I think many people probably don't really care or have a need for offline access, since nowadays people almost always have connectivity. Given this I would say in many (maybe most?) instances Keybase could replace Dropbox in practice.

Also, when you consider that being offline for an extended period of time means that your files are not being synced, then the purpose of Dropbox is somewhat defeated. Basically Dropbox can't sync without connectivity and if you have connectivity then you would have access to the Keybase "file store" anyway.


> I take your point that they use different mechanisms to provide access to files which are not fully interchangeable. With that said, I think many people probably don't really care or have a need for offline access, since nowadays people almost always have connectivity. Given this I would say in many (maybe most?) instances Keybase could replace Dropbox in practice.

Yeah, no, I don't think so. If my bus or train goes through a tunnel, losing my dotfiles or my OneNote notebooks or whatever else is pretty bad.

Forget first world problems--"people almost always have connectivity" is some...like...zeroth world stuff.


Really? If you lose access to your dotfiles and OneNote notebooks for 5 or 10 minutes (that would be one long tunnel) that's completely unacceptable? This also sounds like a daily commute that could have easily been planned around if that time is really that important. Besides, when I work with Keybase I copy the files to my machine ahead of time to a local directory, so I likely would never notice such a drop in connectivity.

I really doubt that in most modern environments (i.e. where smartphones are pervasive) that you are going to go very long without connectivity in an unplanned scenario.


When OneNote freaks out because it can't access its drive during an auto-save and hard-locks? When zsh can't start because it can't find files sourced in .zshrc? Yeah, that's kind of important and that should nev-er happen.

"I copy files out of it to get around that it's not good at what I'm trying to use it for" is not a super winning argument, either.

KBFS is fine. I use it for stuff like SSH keys. It's fine. It's not Dropbox and it isn't substitutable and it doesn't need capes.


You use the tool as it is intended. With Keybase if you are opening and working directly from the Keybase folder that is mounted then you are using it wrong, so your scenario wouldn't happen if you are using Keybase correctly.

The point I was trying to make is that for many people being able to copy back and forth between a local folder and a mounted directory is no different than what they do everyday at work with networked folders. Many people are just moving the occasional document, picture, or folder. For those people Keybase could definitely work as a Dropbox replacement. I don't think the typical Dropbox user is expecting their ZSH profile to be omnipresent on their multiple computers/laptops/tablets.


> You use the tool as it is intended. With Keybase if you are opening and working directly from the Keybase folder that is mounted then you are using it wrong, so your scenario wouldn't happen if you are using Keybase correctly.

You may be misunderstanding intent.

Consider https://keybase.io/blog/encrypted-git-for-everyone
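
For reference, the workflow that post describes is roughly this (from memory, so treat the exact syntax as approximate):

    keybase git create myrepo
    git clone keybase://private/yourusername/myrepo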


I don't think so. The encrypted git functionality in Keybase operates differently from kbfs. AFAIK the encrypted git ability just functions as a git remote helper that doesn't use the local kbfs mount on your system, so you wouldn't run into the same situation as say a file you are copying to your private keybase folder.

Also, to be clear, you could use the mounted files directly, but you just have to be aware that if you lose connectivity you may run into issues. If I was traveling somewhere and I'd like to work on a document in transit then I would recommend copying that to another folder on your local machine first.


Airplanes are a common case where people have long stretches without connectivity, either due to not paying for wifi or any of the other reasons it might be unavailable or unusable.

Being on a NYC subway train is an even more interesting case, because depending on the precise client setup there's often intermittent or bad connectivity but rarely a good sustained connection between stations.

As in most of the world, smartphones are pervasive in both of these contexts.

Mobile and computer apps used to support these cases a lot better. As Bay Area connectivity has gotten better in those areas where well-off techies congregate (e.g. not underground Muni Metro), apps have gotten much worse at this except for apps targeting developing countries like India and Africa.


Agreed, but these are planned events where you won't have connectivity, so you can copy the files to a local folder prior to the trip. I recognize that might not be as convenient as having the system perform a push to your machine to sync it. I think in those cases I'd probably just run rsync against the keybase folder.


I know that in a day where I have to produce a live show, write a raft of code, negotiate a new job offer, and take my dog to the vet, I want to remember to copy files to a local folder.

No, wait.

I want my computer to do scutwork for me instead because that's its job.


Planned events in one sense, but routine daily life for those in NYC, or unplanned events in cases where airplanes unexpectedly lose WiFi. Among other examples.

You're describing useful workarounds for the status quo when one can foresee dealing with these situations occasionally. Those are certainly good to document.

But they're workarounds and not true solutions, especially not for frequent needs.


Offtopic but Africa is not a country.


Good point. I realized at the time that it wasn't quite right, but didn't figure out how to fix it. I should have written "those in Africa," or even "most of those in Africa."

Sorry for letting a rushed comment perpetuate the common Western stereotype of conflating all of Africa as if one single thing.


I used to use ownCloud. Just curious - why is it not in your list of alternatives? Did the project die?


Nextcloud is a fork of Owncloud that's led by the original Owncloud developer, and it seems to be more popular with the free software and Linux community. It looks like both are still active.


Nextcloud is the successor of ownCloud.


I'm still running OwnCloud. I've tried twice to move to nextcloud but run into syncing issues from my clients with the same data (and same basic setup) that Owncloud is currently handling fine.

Long term I do still plan on moving to NextCloud, the core dev team from owncloud moved over and I've always been happy with OC, but I need the time to actually figure out why I'm getting sync errors with the suggested setup after following all the suggested fixes.


I haven't run it, but previous HN threads often have comments of ownCloud randomly deleting files, along with other weird bugs.


Owncloud still exists, but most original developers moved to its fork Nextcloud.


If you just want to get a file or files from A to B you can use rsync. It works great (although I don't know if it works on Windows)
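
e.g. something like (host and paths are placeholders):

    rsync -avz --progress ./some-folder/ user@host:/path/to/dest/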


rsync is a great tool, but I'm talking about cases involving NATs and smartphones (although I'm sure there's a reasonable android version of rsync available).


Syncthing is a great piece of software. I configured it on three of my machines in March and I've had zero issues since. In fact, it has worked so well that I don't even think about it.

One of the machines even runs an old version found in Debian stable repos, but there have been no issues syncing with machines running newer versions.

Only downside for me is that there's no iOS client.


There's a (closed source?) iOS client: fsync


Is Syncthing here to stay or will it morph into a nextcloud or a btsync or that ubuntu thing or be abandoned in 6 months?


My impression is it's here to stay. I think I've been using it for about 2 years now. The protocol is completely open as well.


Cool, are there third-party apps besides the official ones? That would be a strong indicator.


I've been meaning to try git-annex as well. Anyone have any experience/opinions on it?


The kicker is in the couching - dropping support for "uncommon filesystems" - a definition which apparently includes the default filesystem used by the most popular distro. Just dumb or wilfully ignorant?


I'd bet that by Dropbox's user metrics "the most popular distro" still qualifies as "uncommon" for a Dropbox client. Despite being the year of desktop Linux, non-server Linux boxes are comparatively rare.

That said, Ext4 is still supported, just not with encryption.


According to shittyadmin, it's not even encryption in general that's the problem, it's specifically encryption with ecryptfs.


No stacked encryption on top of ext4. LUKS+ext4 should still work (since it's visible to Dropbox as just ext4, with cryptsetup/LUKS handling the underlying container).


> Ext4 is still supported, just not with encryption.

It's a terrible idea to not turn on hard drive encryption. Anyone who gets their hands on the machine can read and write to the drive by booting it with a thumb drive.


Honest question, what do you believe is "the most popular distro"? Professionally, I have never seen a BTRFS deployment, nor any of the distros commonly at the top of Distrowatch (Manjaro, Mint, Elementary...).

With the fragmentation of Linux, even the "most popular" distro may hold a tiny market share. The guys at dropbox probably know far more about their users than what is reflected in some arbitrary popularity contest, so I doubt it's either ignorance or stupidity that led them to this decision.


Ubuntu uses ext4 and the default is to ecryptfs the user's homedir.

Edit: Actually I can't find that it was ever the default, but it was a pretty prominent option so I assume this decision is going to mess with a ton of customers.


I'd argue that the normal thing to do in the Ubuntu installer is full disk encryption, rather than homedir encryption.

Both are checkboxes, though, so one could select whichever they wanted. Full disk encryption is the most sensible one, though (better evil maid protection).


Encryption is opt-in, not the default.


It's not even opt-in on Bionic anymore (at least, that checkbox is gone from the installer where you enter your user info)


I'm pretty sure home-dir encryption was the default with Ubuntu 15.04.

I know that I installed Ubuntu with default settings, and was pleasantly surprised by that. Not sure about the exact version though.


I don't think it was the default. There was an "encrypt my home dir" checkbox which was unchecked by default.


I would never attempt to use btrfs again in any environment. But if it is used, I hope the person has extremely strong backup policies in place for when it eventually crashes.


I've used it for the past 4 years as my primary development machine. Never once had an issue. However, I used Arch Linux, which always has newer kernels. I know there have been some bugs on other distros which run older kernels.


I ran it for quite a while, 4 terabytes lost specifically due to btrfs, I will never trust it again.

I've heard several admins flat out scoff at the idea of using it.

Make sure you have backups with a reliable fs.


Fair point - I'd be fairly confident that it's Ubuntu at least on the desktop, based on experience and the data I've glanced at previously. But I could be mistaken.


What I am confused by is what they think will happen to revenue.

Step 1: "You can now use dropbox in fewer scenarios than before"

Step 2: ???

Step 3: More customers paying more money

Dropbox is paid for and used in various places I work and personally because of the Linux support. (Their competitors ignore Linux for some reason.) In my case step 2 is going to be them losing at least $1,000 in annual revenue. And they won't have that the next year either (ie recurring revenue). Nor will they be part of future options for work or personal.


Step 2: use the 20% developer manpower required to support filesystems used by 2% of the userbase to instead better support the other 98%.


Step 3: $13b company cries over $1000/yr loss of revenue.


This type of snarky dismissal gets pretty old.


What really gets old is seeing:

News headline - "Company removes X feature"

Hacker News comment - "Company is sure to lose customers over this!"

News headline - "Company reports record profits"

If Dropbox really wanted to save some money, they'd fire their accountants and just rely on armchair HN comments to tell them how well their financials are doing.


Not as old as someone publishing a blog with the title "This is the year of Desktop Linux".

Seriously. No company this large can justify supporting a fringe operating system to their investors and shareholders. In a perfect world they would open source what they did to support it so others could pick up the work and integrate it into other products. Oh well.


Of course my example is essentially irrelevant in isolation. But you should understand that Dropbox is different than their competitors. Dropbox are the only ones who support Linux. Google Drive, Box, OneDrive etc do not support Linux (random partially-featured third party clients do not count.)

In places (eg tech) where there is a Linux user base (eg the developers, devops etc) then Dropbox was the main realistic solution for the whole company. It is now just a random entry in that list, and there is no compelling reason to chose them over the others. Heck Google and Microsoft become the top choices simply because that is where the user accounts, email, calendars and then docs end up.

We don't know just how big this "Linux" group is and it certainly is small (your point). I believe the resulting effect will be larger than Dropbox expected. For example the Windows users I collaborate with do so using Dropbox because I use that for Linux. Dropbox might think they are losing me as one user, but they are also going to lose those Windows users too (they find OneDrive far more convenient as it is already there on Windows and in your Microsoft account).

I'd also argue that in aggregate Linux users are more technical and more likely to be influencers. So again that is more future revenue dropbox won't get, unless they get better than their competitors. Recurring revenue is hurt and helped much in the same way as compound interest works. As you add up the missed revenue over the years, it does get to be a big number. And that money likely went to someone else strengthening them. Remember that Dropbox grew essentially by word of mouth. They are going to lose some of that.

The open source approach would be nice, but I am skeptical. Why would a developer spend their time helping dropbox (the server side won't be open source), and not something completely open? The Linux clients done as 3rd party projects for their competitors seem to be far less complete and reliable compared to the vendor implementation.

TLDR: Dropbox did have a unique selling point in their Linux support. Without it they are indistinguishable from their competitors.


I'm going to make the argument that Dropbox doesn't support Windows, because they don't support FAT32. That's just as true as your argument that they don't support Linux, because they very much do still support Linux.


You can't install Windows on FAT32. I'm pretty sure your home directory (where the Dropbox folder is) can't be on FAT32 either. About the only thing that uses FAT32 are USB sticks smaller than 32GB. It is extremely unlikely anyone would want to run Dropbox on a FAT32 volume, and I suspect it has never worked due to the filesystem limitations. ie you would have to struggle to end up in this situation as a Windows user. And even if you did, I doubt any of their competition supports FAT32 either.

They are only partially supporting Linux already (eg no SmartSync). They are now removing support for the default configuration on many existing distros. They are removing support for setups that have worked for years, and earned them much revenue. It is their business and they can do what they want. Linux support is what distinguishes them from their competitors. And now they will lose future revenue from me and others who have posted here about it. Also note that their technical explanation is complete nonsense which is exacerbating the problem. Hopefully they will revisit the decision, or communicate in more detail what the problem actually is. I have no doubt that Linux will fix whatever it is.


I do want to clarify that FAT32 can be up to 2TB (or more if you push up the cluster size). Microsoft just decided they wanted to be pushy in their formatting tool.


Running Linux without encrypting the drive is a big security risk. Anyone who can get their hands on the machine has full read and write access to the hard drive by booting with a thumb drive. Any company that uses Dropbox for Business and has employees who use Linux will be putting a security hole in their business if they decrypt their hard drives in order to use Dropbox. (Possibly including Dropbox, if any of their employees use Linux.)


Maybe they mean uncommon amongst their userbase.


> the default filesystem used by the most popular distro.

Ubuntu doesn't use ecryptfs (or even support it afaik) by default.


Let's not forget Fedora or EL users, XFS has been the default for some time now in both.


Only Fedora Server. Fedora Workstation, Cloud, Atomic, and all the spins, are using ext4 on LVM.


However, Fedora Workstation is strongly considering moving to XFS, as are several of the other editions.


I was under the impression that xfs was the default file system on recent versions of Redhat / Centos; that's got to be a large share of the corporate Linux market.


Compared with all their Windows and OSX users, I guess these are less common filesystems.


Your two choices unnecessarily rule out a third: that the Dropbox client phones home what filesystem is in use, and so they have concrete data backing their assertion.


I basically only use Dropbox because it is supported on Linux. Also its syncing technology is sooo much better than Google Drive you would think Drive was built by interns. However, they are taking their sweet time with newer features like Smart Sync. It's disappointing because I'm paying for these features and yet they're not supported on one of the platforms my whole company uses. All in all I really hope Dropbox doesn't keep chipping away at Linux - I fought hard internally to use it and I don't want to be proven wrong.


There's a decent open source sync daemon for OneDrive on Linux - I'm the packager who looks after it on Fedora. With the odd exception when APIs change it just works.


OneDrive doesn't support dotfiles and thus can't sync Git repos.


> OneDrive doesn't support dotfiles and thus can't sync Git repos.

This does not appear to still be the case based on testing just now (creating a folder named ".foldername", containing a file named ".testfile" then verifying that they've synced up to OneDrive). Several years ago it didn't support syncing folders with a . in the name, but that was resolved 3 years ago.

With Microsoft's relatively new focus on Git, I can't imagine that problems with repositories stored in OneDrive folders would remain unfixed.

I will note that there are names that Windows Explorer won't let you create - notably, names beginning with a ".". That's an Explorer issue, not a OneDrive or filesystem issue - you can create such files and folders programmatically or from a command prompt.


I think you can do that in explorer if you change some of the options.


Err... I can’t imagine a scenario where one would want to use OneDrive/Dropbox to sync git repositories. It’s like having your local Dropbox folder inside your local OneDrive folder — only a lot worse.


I keep my .tex, .md, and .org files under source control, but I don't bother syncing to GitHub. I just back them up with the rest of my personal (non-code) files.


Personally I sync my git repos with git, everything else with OneDrive - that said dotfile support could be added to the client at least in theory…


Onedrive handles Git repos on Windows fine. It does balk sometimes if you feed it large repos, but most often it works fine.


This just sounds like a horrible scary idea. What happens if I change something in a repo on multiple machines - a OneDrive merge conflict inside .git sounds like a nightmare.


I hadn't thought of that, that indeed sounds torturous!


What daemon is that?


https://github.com/abraunegg/onedrive (currently I package the original project by @skilion, but this fork is better maintained...)


It appears this might be related to dropbox (mis)using statfs's f_fsid field as part of its authentication system. The dropbox devs apparently assumed that this field was stable, but on XFS (for instance) it can change.

dropboxforum thread here: https://www.dropboxforum.com/t5/Installation-and-desktop-app...
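
If you want to see the value in question for yourself, GNU stat exposes it (the path is just an example):

    # prints the statfs f_fsid and filesystem type for the given path
    stat -f -c 'fsid=%i type=%T' ~/Dropbox

If the hypothesis above is right, that's the number that isn't stable on some filesystems.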


That sounds to me like the most likely hypothesis in this thread.

I mean, the Linux manpage itself says it's stable and can be used, with the inode number, to uniquely identify a file. http://man7.org/linux/man-pages/man2/statfs.2.html

I'm not surprised, though. Fancy copy-on-write filesystems like btrfs have some other subtle gotchas. If you allocate (like, really, not as a hole) a big file on btrfs and mmap it for write, you might see a SIGBUS upon writing because btrfs needed to recompress a block and the new compressed block didn't fit where the old one did.

I've gained a new appreciation for the predictability and simplicity of ext4.


Oof, although not surprising.

Once saw a system that used inode numbers as "unique identifiers for files on local disk" and it turns out that on Linux / ext filesystems, they're not unique (they get reused if you delete a file and then create a new one). That team just decided to not support Linux at all.


Sometimes it's not about assuming something is stable, but rather finding a workaround you need and hoping it lasts.


Another alternative is nextcloud... It also has a desktop sync client and works great in my company for sharing folders with colleagues or keeping business documents in sync on desktop and laptop.

There are even third parties that install it and manage it for you.

[1] https://nextcloud.com/


Seeing how an encrypted home directory is as good as the default option in Ubuntu, and seeing as it already works now, this seems like a dumb move.

This is a shame because I just started using Dropbox significantly more to back up shared photos. Time to search for something else, I guess.


Not in the latest Ubuntu installs. They do FDE instead of home folder encryption.


Because LUKS > ecryptfs, even though Dustin Kirkland has done absolutely fantastic work on ecryptfs.


The option to encrypt your home folder is still there in Ubuntu 18.04 (which I set up only yesterday), and appears after you set up volumes upon initial user creation.

It is (funnily enough) even possible to enable _both_ kinds of encryption simultaneously.

I'd say that they don't use full-disk encryption _instead_ of home folder. They just prompt for it sooner (and it is not the same thing if you have a modestly old machine, or a machine you share with other users).


I just spun up an 18.04.1 install in a VM to check, and I don't see an option to encrypt the user's homedir during installation.

Edit: It's listed under "Other base system changes since 16.04 LTS" https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes


IIRC it does show if you use the advanced installer option (ie where you go manually through all steps).

At least the ubuntu server does that, though I wouldn't enable that either way since it's a pain to deal with from the outside.


You can manually encrypt your home folder after install, and perhaps they re-enabled the option to do it automatically in 18.04.1 but I haven't read that. It was definitely not there in 18.04.


Excluding XFS is an odd choice as it's very common in the server world.

I'm assuming ecryptfs-on-ext4 has some attribute issue that prevents them from doing efficient deltas. That aside, what's the use case for ecryptfs at all? Isn't it strictly worse than LUKS on the underlying volume or a volume file mounted via loopback?


I would hold my horses on the claim that XFS is unsupported. They explicitly said any modern file system with xattr is supported. XFS does fit that description. OP has drawn his own conclusions from the Dropbox reply.


The HN link is pg 2 of the forum thread; on page 1 a Dropbox employee states the following:

>The supported file systems are NTFS for Windows, HFS+ or APFS for Mac, and Ext4 for Linux.

>A supported file system is required as Dropbox relies on extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync. We will keep supporting only the most common file systems that support X-attrs, so we can ensure stability and a consistent experience. (emphasis mine)[1]

XFS may support xattrs but that doesn't mean that Dropbox officially supports it. They are pretty clear on ext4 being the only supported filesystem.

[1]https://www.dropboxforum.com/t5/Syncing-and-uploads/Linux-Dr...

Edit: added link to the proper comment in the Dropbox thread


They did not explicitly say they support "any modern filesystem with xattr". They said that they support EXT4 and nothing else on Linux.


My /home is XFS on my Arch system and I received the same popup from the Dropbox client. I can confirm that this does apply to XFS as well.


Same for Btrfs


It's the default filesystem on RedHat, CentOS and Fedora. Looks like they will be dropping that.


They probably don’t have a lot of users who install and use the Dropbox client on a server machine.


Ugh, I use Dropbox on several platforms, some NTFS, some APFS, some HFS and exFAT. The majority of my files are synced to exFAT because it's the only bloody filesystem that's officially supported across all major OS. No it doesn't support extended attributes and I don't care, 95% of my files are photos!

If Dropbox go ahead with this I'm bailing. The only reason I use a managed service like this is so I don't have to care about the technical details (filesystems etc.)


As a developer I understand that each supported filesystem adds development workload and potential for bugs, and at some point you need to weigh this cost against the benefit. The problem with this is that the people who use these filesystems are more likely to be opinion makers for other people in the choice of file syncing solution. Dropbox is big enough that they think they do not have to worry about it. But there is a real possibility that they will alienate just enough influential people that this will erode their user base more than they realize.


This is exactly why I stopped using Dropbox and moved everything I have to my own https://nextcloud.com instance. It's very easy to install and upgrade, it is stable, has way more features than Dropbox, and you have total control over your own data. Overall a superior experience. If they would do something I don't like, I would just never upgrade anymore and be done with it, or just copy the files out of the /data/Nextcloud folder and move to another open solution.


I set up a little Ubuntu Core server with Nextcloud for archives and sharing, and Syncthing for slightly cleverer sync of files I'm actively working with. Syncthing is decentralized, and it's designed so you can just pick any folder and sync it between your devices, which makes for a really handy workflow. But I like having a long-running server in the middle that I can treat as canonical. That way I have one place for backups, my devices can sync with the server over the Internet when they don't have each other, and I can access Syncthing files via Nextcloud.

I was expecting to still be using Dropbox for a while after that, but it's been surprisingly low maintenance after the initial setup, and it has replaced it perfectly for me :) Highly recommended for anyone bothered by this change.


This echoes my experience exactly. I use Syncthing for all the documents I don't need a web UI for or that I don't need accessible on mobile (only between computers), such as my ebooks, PDFs, things like that, and Nextcloud for everything else.


"If they would do something I don't like, I would just never upgrade anymore and be done with it." -- That sounds like a good strategy but in my experience, it never works because your technology environment is not a vacuum (I assume). At some point you'll need an upgrade for security, compatibility, etc.


It can run in a container, a virtual machine, hosted on a cloud instance or whatever. Nowadays, you have infinite choices for isolation. One of my colleagues runs it in a DigitalOcean droplet and nothing else.


A quiet media server at home and VPN is what does it for me. That solution is not for everyone but someone could start making pre-built images or media servers with nextcloud. I believe FreeNAS and the like already have nextcloud as an app option.


If you want to run your file sync client in a container, I think that limitation alone removes a huge amount of value from a low-friction file sync tool.


Why would you run the client in a container? I was talking about the server.


Because the client can be just as vulnerable to security issues as the server is.


Isolating for security is a totally different topic. We talked about pinning the application to a specific version, and I suggested that it can be done with various tools, isolating it from a bigger operating system where packages would be automatically updated and Nextcloud would break after a while. It has nothing to do with security.

Of course you can add isolation for security on top of it any time, and sure, you won't get security updates after a while, which would be nice, but there are tools to secure an outdated app in other ways too.


> but there are tools to secure an outdated app in other ways too.

Not really.


I switched to nextcloud a while ago. When Dropbox started sending me email after email telling me they were going to delete my account soon, I couldn't wait for it to happen. (OK, I could wait since I wasn't willing to spend time to log in and do it myself).

It is nice to know where my data is.


An easier option at the moment is probably SyncThing which just syncs directly between your devices. I use it for sending my KeePass DB and photos from my phone to my NAS.

If you're willing to invest setup effort it might also be worth considering Tahoe-LAFS's "GridSync" which seems to be making some progress as of late.


nextcloud is fine, but I have a multi-function printer with scan-to-cloud support (very insecure, but VERY useful and super easy) that supports only dropbox, box, microsoft and google. I'll try rclone :-)


Dropbox isn't even the cheapest or best file syncing service, they're basically all a commodity at this point and if they don't want you as customers... just go elsewhere.


Would you kindly provide recommendations? I'm looking for a sync service which can run on windows AND linux (which is why onedrive and google drive are out)...and i'm willing to pay a fair monthly fee.


rsync.net seems to get mentioned around here every now and again. I haven't used them though personally, so not sure of good/bad/etc.

Unlike what their name (rsync) suggests, they do seem to support Windows clients and aren't *nix only:

https://www.rsync.net/resources/howto/windows.html


The documentation for the Windows client (1) almost seems like a joke. In the screenshots the text gets cut off in the UI, there are sentences that say "don't use this versioning feature we're about to talk about" and tons of text that's been struck through. There's also zero mention of whether the client supports backing up open files via volume shadow snapshots, which is basically requirement number one for any Windows backup client.

1 - https://www.rsync.net/resources/howto/windows_backup_agent.h...


We're really all about the direct unix to unix connection.

The point of rsync.net is that you can log onto any unix system, anywhere, and interact with your cloud storage - with no software installation or configuration necessary.

The Windows Backup Agent works very well and has recently been updated but the documentation you are seeing reflects the fact that it is a secondary function here at rsync.net that is mostly provided for the convenience of customers in mixed environments.

We are not a dropbox alternative.


Same here, I had heard rsync.net mentioned previously...HOWEVER, i did NOT know they had a windows client. Thanks!


I had a pretty underwhelming experience with rsync.net and their cheaper version without snapshots a year ago. Speeds were initially 1MB/s on a gigabit connection between Sweden and their Switzerland location. After complaining they exempted me from traffic shaping and I got 2-4MB/s instead.


I know who you are and I am sorry to hear about this (again).

This is a weird, known issue that somehow keeps cropping up on our init7 network connection specifically to scandinavian countries.

I'm sorry we couldn't resolve it.

If you can stand your data being in the United States, our Denver location has a 10gb he.net fiber connection which is God's own Internet. Recommended.


Nice to hear that my experience was the exception and not the rule! I'm pretty happy with my current solution combining a cheap VPS in the Netherlands for the "I need a server to put some files on" usecase and (encrypted) backups to Backblaze B2, but I'll keep it in mind if I ever need some storage in the US :)


I would recommend Syncthing. Fully open source. It is however self-hosted. But they have clients for basically everything, including mobile phones.


It doesn't support symbolic links at all, deal breaker for me. I can't afford to start duplicating content just because it can't follow a symlink.


Have you tried hard links?


I used to run syncthing...but about 1 year ago (when i started with current employer) they blocked it. i forgot what port it uses, but my employer gave me the ol' wag of the finger for "using an unauthorized app". Funny how dropbox was not blocked but syncthing was.


Welcome to the world where everything except 80/443 is blocked. Sucks but you can still make things work.


Just use syncthing. It works better, is open source, and runs on everything.


I guess you could say it runs on everything unless you use one of the two most popular operating systems in the world: Android and iOS.


Not sure what you're on about, it's right in F-Droid [1]. What is or isn't on iOS is in Apple's control. I recommend taking a look at Nextcloud. You can use it to control your own files, calendar, etc without some third party using it for data mining.

[1] https://staging.f-droid.org/search?q=syncthing&lang=en


There's syncthing for Android: https://github.com/syncthing/syncthing-android


Only recently started playing with SyncThing (about the time SpiderOak's warrant canary expired).

To my mind, the Android app leaves a bit to be desired. Usually you don't want your phone to hold local copies of everything in the shared folder(s) as it is usually space constrained. You want to be able to view and pull files down as desired (possibly with the option to flag individual files to always be cached locally).


Whoops, the website could make that more clear. "Works on Mac OS X, Windows, Linux, FreeBSD, Solaris and OpenBSD."


It runs very well on android with termux.

Like most open source software: don't bother with the "apps" just install it in termux the same way you would on any other computer.


Doesn't that require you to play sysadmin running your own servers?


No. There's already a peer-to-peer relay infrastructure in place for when your devices can't make direct connections to each other.


That sounds like the answer is actually yes because you have to run the various computers, provision enough storage to hold everything, secure them, and make sure you have a backup plan.


> run the various computers

Are you saying dropbox works without computers O_o?

> provision enough storage to hold everything

It's true that on all your clients put together, you need enough space to have one copy of your data; where for dropbox you can have one single sparse checkout and let their servers hold the full data set. I've never seen that be an issue in practice though - on the contrary, my laptops have enough space to hold all my data, and my dropbox account doesn't, which is why I went to syncthing in the first place.

> secure them

What security do you need to add to syncthing that you don't need for dropbox?

> make sure you have a backup plan

I guess dropbox counts as having offsite backup implicitly; personally I'm using syncthing between a few laptops and desktops in different locations and counting that as the off-site backup plan.


> > run the various computers

> Are you saying dropbox works without computers O_o?

I'm saying that Dropbox does not require you to run every computer yourself. If your house burns, floods, gets burgled, etc. not losing every copy is a really nice benefit for the average person. Not needing to make sure that their laptop is running at the same time as a desktop computer is similarly nice for being able to depend on having n > 1 copies.

> > provision enough storage to hold everything

> It's true that on all your clients put together, you need enough space to have one copy of your data; where for dropbox you can have one single sparse checkout and let their servers hold the full data set. I've never seen that be an issue in practice though - on the contrary, my laptops have enough space to hold all my data, and my dropbox account doesn't, which is why I went to syncthing in the first place.

It definitely depends on the user and is less of a problem as SSDs get larger but I definitely know people who had things like large photo/video libraries which they didn't want to have taking up 80% of storage on every computer they own.

> > secure them

> What security do you need to add to syncthing that you don't need for dropbox?

Two aspects: one is relatively low-impact but still worth making sure you're comfortable with any risk of data loss. If you use FDE on your phone / laptop but someone steals the desktop which has all of your synced data on it, is that a problem? Do you forensically wipe drives before getting rid of them? Dropbox's data is encrypted at rest and since they're hosted in a proper data center you don't have to worry about someone stealing a copy as easily as breaking into someone's apartment.

The other is more important now that ransomware is an industry: if you get malware, how robust are your recovery options? Simple versioning doesn't help if, say, the malware touches a file multiple times or if the versioning system wasn't designed to handle malice and so e.g. an attacker can just empty the trash or overwrite the old versions too.

One nice thing about the hosted model is that it has a completely different trust chain so even if you're totally compromised it doesn't allow them infrastructure-level access. That's far from perfect but enough people have recovered deleted Gmail messages, Dropbox files, etc. that it's worth asking whether you're comfortable about your data recovery options in any comparison.

> > make sure you have a backup plan

> I guess dropbox counts as having offsite backup implicitly; personally I'm using syncthing between a few laptops and desktops in different locations and counting that as the off-site backup plan.

That probably works for most scenarios other than a bad security compromise but how frequently do you verify those copies? Does syncthing have checksums to ensure that the copy you think you have hasn't been corrupted?

Again, I'm not saying that syncthing is a bad choice, only that “It works better” is a very broad claim which is clearly not true as a general statement. Convenience and reliability have significant value to most people.


> run the various computers

That's a vague assertion that can apply to anything.

> provision enough storage to hold everything

Syncthing can ignore directories/patterns at the local level.

> secure them

And this notion somehow doesn't apply to your Dropbox credentials or shared folders? Furthermore, Dropbox has access to your data - with Syncthing that's limited only to the synchronized devices.

> have a backup plan

Another non-sequitur. Either tool can be part of a backup solution.


> > run the various computers

> That's a vague assertion that can apply to anything.

It's a very specific assertion which does not apply to every thing: with syncthing, you need to operate every piece of the system. With a cloud service, you are delegating that to other people, presumably professionals. That's a big difference for most people and it has significant implications for things like backups — e.g. if someone breaks into your house and steals two computers, did you just lose every copy of your data?

> > provision enough storage to hold everything

> Syncthing can ignore directories/patterns at the local level.

That's an unrelated topic. This is about the total size of your data and whether it fits on multiple devices without inconvenience. If you have a phone and a laptop, do you have enough storage for a full copy of everything? If not, you need to add a third computer, deal with external drives, etc. One appeal of cloud services for many people is that you can save your data without needing to have enough space to have a full local copy and still be able to access it.

> > secure them

> And this notion somehow doesn't apply to your Dropbox credentials or shared folders? Furthermore, Dropbox has access to your data - with Syncthing that's limited only to the synchronized devices.

Again, this is a comparison question. A service has a separate security trust boundary so someone who compromises your account doesn't get infrastructure-level access and cannot permanently delete things without you having a chance to recover. If you're doing it yourself, you're taking on that responsibility entirely yourself. Maybe you're confident with that, maybe you're not but it's something that you absolutely have to think about for a data storage system.

> > have a backup plan

> Another non-sequitur. Either tool can be part of a backup solution.

You might want to check the definition of non-sequitur – it's not a get-out-of-jail-free card for avoiding an answer. Just to reiterate, ask what happens if your hard drive starts corrupting blocks, someone steals your computer, your house burns down, you get malware which encrypts every file on your computer, etc. With Dropbox the answer is “I buy a new computer and restore my data. Since the malware couldn't overwrite the older copies, I lost nothing”. If you're self-hosting, that could have the same answer but it requires more skills and ongoing commitment to do things like off-site backups.

What I've found to be sadly common is that people do these comparisons without actually matching equivalent levels of service and then get a painful educational lesson when something goes wrong and they lose something they cared about.


> with syncthing, you need to operate every piece of the system.

Again, this is false. The P2P relay/discovery nodes are operated by third parties donating server time and bandwidth. The user is not required to operate those parts of the network.

> That's an unrelated topic.

Definitely related to storage provisioning.

> This is about the total size of your data and whether it fits on multiple devices without inconvenience.

Convenience is entirely dependent upon the user's requirements. As I said, each node is not required to maintain full replication.

> you can save your data without needing to have enough space to have a full local copy

I'm not sure that putting a significant fraction of one's proverbial eggs in one basket is a selling point.

> If you're doing it yourself, you're taking on that responsibility entirely yourself.

I would add that you're always responsible for your data, third parties or not.

> What I've found to be sadly common is that people do these comparisons without actually matching equivalent levels of service and then get a painful educational lesson when something goes wrong and they lose something they cared about.

You and I could tell those people until we are both blue in the face; they are not going to learn until they have experienced it themselves.


> > with syncthing, you need to operate every piece of the system.
>
> Again, this is false. The P2P relay/discovery nodes are operated by third parties donating server time and bandwidth. The user is not required to operate those parts of the network.

Okay, let's think about this a bit more in depth: who's operating the computer which stores the data? If I have a laptop and a desktop, can my laptop back up its data if my desktop is powered off or my cable modem is down? If those third parties decide to stop donating their time, or something breaks and they don't have time to fix it, does my data still sync?

I don't see how the answers to any of those questions are compatible with “this is false” being a correct statement.

> You and I could tell those people until we are both blue in the face, they are not going to learn until they have experienced it themselves.

… and that's why for most people it makes sense to outsource these tasks to professionals who specialize in that work, just as most people pay mechanics to work on cars and contractors to fix their houses.

Again, my point was not that syncthing is bad but that an open-source project is not the same thing as a supported service. I get that you like this and want to evangelize it but misrepresenting what it does is just asking for someone to be disappointed.


If you want absolute feature parity with dropbox, then yes I suppose you do. But you don't have to stand up a server just to sync files between your devices.


syncthing is p2p, you just run it on the devices that you want to sync with


What about Adrive? They don't have a Linux client, but they support WebDAV, rsync, and (S)FTP(S). I guess getting a pretty stable client for these protocols won't be a challenge.
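
For example (purely illustrative; the URL and mount point are placeholders), a WebDAV endpoint can be mounted with davfs2 and then used like any local folder:

    sudo mount -t davfs https://dav.example.com/ ~/mnt/adrive   # prompts for credentials
    cp ~/Documents/notes.txt ~/mnt/adrive/
    sudo umount ~/mnt/adrive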


Why do you say it isn’t the best?


I was gonna suggest onedrive but realized they don't have a linux client. Honestly for linux, dropbox might be your best bet...


What makes you say onedrive is better than Dropbox?


Because of its close integration with Office 365, and it's cheaper.


You said it wasn’t the best file syncing... office 365 integration doesn’t seem to suggest that one drive is a better file syncer... especially because Microsoft owns office. I’d expect that. Gdrive does Gsuite integration, but I wouldn’t knock other services for not having that.

I’ve seen many other services have major issues syncing files... dealing with conflicts, corruption, quirks with file types like Mac resource forks and file-folder bundles, etc. I’ve seen weeks of work get lost because gdrive stopped syncing, then, when resuming, overwrote all of a colleague's updated files.

Dropbox has solved all such problems... in part because they’ve been around forever, but also because they’ve hired great talent to do nothing but sync. I trust them way more on that core competency.


I continue to wonder about Dropbox's support for Linux. It's been the primary reason that I use them so heavily and recommend them so often.

If you look at the filename of the Linux download, it says 2015. Has the Linux client gone untouched for 3 years?


This is a tragic decision on Dropbox's part. The service's popularity is a product of it being stable and running everywhere. If they start pulling back compatibility, where does it end? Even if this change only impacts a small percentage of users, it threatens everyone that their compatibility may be revoked.

Dropbox is violating their philosophy as a universal solution and squandering their key selling point for some small cost savings. What a horrible decision.


Their reasoning of "you need a filesystem which supports extended attributes" sounds legit.

Time for some of us to start work on adding extended attributes to more filesystems?


I believe all 3 filesystems have support for extended attributes, so it doesn't seem legit.


XFS is well-known for its extremely good xattrs support, so that does not add up.

Source: Using XFS at work for OpenStack Swift, which makes extensive use of xattrs and thus prefers to run on XFS.


This seems like the equivalent of user-agent sniffing. So even though the problem is a feature (xattrs), they will try to detect whether or not the filesystem is ext4 and stop syncing otherwise? Is that correct?

So, if other programs follow suit, will we start seeing options to lie about the filesystem type?
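
To make the contrast concrete (the paths here are just examples): sniffing reads the filesystem type, while a feature check probes the one capability the client actually needs. setfattr comes from the attr package.

    # what "sniffing" would look at:
    stat -f -c %T ~/Dropbox        # prints e.g. "ext2/ext3" on ext4, or "xfs", "btrfs"
    # what a feature check would look at:
    setfattr -n user.probe -v 1 ~/Dropbox/some-file && echo "xattrs work here"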


This should probably link directly to the message from the community moderator: https://www.dropboxforum.com/t5/Syncing-and-uploads/Linux-Dr...

Right now it just links to the second page of the discussion; the community moderator's message is on the first page.


Self-hosted Nextcloud instance + AWS for storage puts you somewhere around $10 to $20 per month depending on how much space you need. Just throwing it out there.

The sync client has worked flawlessly for me so far. Plus you get CalDAV/CardDAV right out of the box.


Well I for one will be taking the $0 per month that I spend on their free tier elsewhere


This makes no sense. For example, XFS is a very robust and actively developed filesystem and is now the default on many distros. It also supports xattr.


Why not run a small set of tests after installing and see whether the required set of FS features is available?

Experienced users run a plethora of filesystems which support the needs of Dropbox.

IMHO, this is a lazy solution to a relatively simple problem.


Most likely Dropbox needs to set these limits so that they can allocate their quality assurance department to very thoroughly test what they claim they support.

I'm an architect for a Dropbox competitor. Sometimes we need to draw a line in the sand for what we support, and what we don't support. This is mostly due to balancing cost / benefit. A customer may do something strange that we don't support, and we have to weigh how many engineering resources it will take to support the customer. This can apply to unusual filesystems that we don't actively test our product with.

IMO, Dropbox did the right thing. It's only real technical users who get into different kinds of filesystems; and these are the same kind of users who can understand, "Dropbox only works on X, Y, and Z."


From a QA point of view you're actually right. I don't think Dropbox needs to support all filesystems, but as a programmer and an advocate of better user experience, I'd like to see a system which behaves a little differently:

a) FS is something we support, great! Go on...

b) FS is an unsupported one, so run the tests, and if they pass, warn the user: "Hey! We don't support this, but it looks like it's working. If it fails we cannot support you. Are you sure? (Y/N)"

c) FS is an unsupported one, so run the tests, and if they fail, tell the user: "Hey! We cannot work on this FS, sorry."

The good thing is you implement the tests once. Since POSIX is a standard, you can run the tests over that interface. You practically don't need to maintain anything about the tests. Maybe run a couple of unit tests in a simulated environment, and that's all.
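
A minimal sketch of such a probe, assuming the setfattr/getfattr tools from the attr package (the attribute name and probe file are made up):

    #!/bin/sh
    probe="$HOME/Dropbox/.fs-probe"
    touch "$probe" || exit 1
    if setfattr -n user.sync.probe -v ok "$probe" 2>/dev/null &&
       [ "$(getfattr --only-values -n user.sync.probe "$probe" 2>/dev/null)" = "ok" ]
    then
        echo "xattrs supported here: case (a) or (b)"
    else
        echo "xattrs not supported here: case (c)"
    fi
    rm -f "$probe"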


I wonder if I'll get a prorated refund for the time I pre-paid. This is really silly of them, the entire reason to use dropbox is because they have a client for everything.


Seems ludicrous when XFS is the default file system for RHEL.


Yep. Dropbox is going to lose some customers. Maybe it won't be too many (compared to Windows/Mac), but the customers they will lose will be their most technical.


How many customers do you think are running RHEL as a desktop workstation and allowed to use Dropbox?


If any, I bet they can be counted on one hand.


Syncthing will do the job. Yes, it doesn't implement a cloud store like Dropbox. You have a couple of alternatives: 1) get a cheap vps host (digitalocean, vultr, etc.) and make it part of your syncthing 'cluster'. 2) get a raspberry-pi running syncthing and a big external HD and put that somewhere else (parent's, brother's, office, etc.).


I gave up on Dropbox. Linux was always a second class citizen, and it only ever seems to get worse.

If you are going to pay for something, Insync and Google Drive make a decent Dropbox alternative. I'll be honest, though, I simply stopped using syncing solutions like these, so I really don't know if there are now better options across platforms.


Is there any good alternative that has Linux support and isn't Google Drive? And supports shared folders?


Nextcloud


Seafile.


I have been using Resilio Sync on Linux, Mac, Windows, iOS, and Android, and it works a charm for me. In case this turns out to be a problem for you, maybe give them a try. Never an issue on Linux with Resilio Sync. I run a copy on my VPS in the cloud for remote storage.


I'm about to cancel my Dropbox subscription and move to Resilio, so I wanted to chime in and say I've had a great experience with Resilio in the past and my friends are also using it with great results.

Btw: Resilio Sync used to be known as BitTorrent Sync.


Ecryptfs is the default for new Ubuntu installs, isn’t it? They might as well have just come out and said they were dropping Linux support going forward instead of dancing around it.

Their focus seems to be on cost cutting and better value extraction rather than growth and features these days.


Could people whose filesystem will no longer be supported work around this by making an ext4 filesystem in a file and mounting that file via the loop device, and moving their Dropbox directory to that?


Yes, but it's kinda ugly for the regular Joe user. You'll have to know how to manage it: fragmentation, setting the initial size and resizing, trimming, how to automate activating and mounting it, etc.
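
For the curious, a rough sketch of the workaround (sizes and paths are examples only):

    truncate -s 20G ~/dropbox.ext4     # sparse backing file
    mkfs.ext4 ~/dropbox.ext4           # answer "y" when warned it's not a block device
    mkdir -p ~/Dropbox
    sudo mount -o loop ~/dropbox.ext4 ~/Dropbox
    # growing it later means enlarging the file and running resize2fs,
    # and you still have to arrange the mount at every boot yourself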


TL;DR:

Dropbox tells you, via a very unhelpful popup: "Move Dropbox location - Dropbox will stop syncing in November".

And on the forums, Dropboxer Jay clarifies that starting Nov. 7, 2018, they're limiting support to Windows/NTFS, macOS/HFS+ & APFS, and Linux/ext4. They say missing X-attrs support is what's driving the decision to drop other file systems.

They haven't yet answered whether displaying this message to users of the (common through Ubuntu) ext4+ecryptfs setup is intentional.


Missing xattrs? What?


See original message from Dropbox support: https://www.dropboxforum.com/t5/Syncing-and-uploads/Linux-Dr...


That's cool dropbox, us Linux users prefer services like tarsnap anyways: http://www.tarsnap.com/

Colin is the founder, and also a HN celebrity because of this comment (which he regrets, but never gets old):

https://news.ycombinator.com/item?id=35079


As much as I like tarsnap, comparing it to Dropbox makes no sense at all. They do completely different things.


For people looking to move to a service that supports Linux as a first class citizen: Mega works well enough as far as syncing files goes.


I had used Mega since it was introduced, and I had recommended it before for its generous 50GB of free space and Linux support. However, Mega became unreliable about a year ago, when it started to delete files after bad syncs. Be very careful about putting important files there. Also, the upload speed is sometimes very slow (it took me over a minute to upload a 120k file), and the sync on iOS became really bad - it always gets stuck on "Downloading...", but fails to sync. I hope they fix the problems, but until then, I've decided not to use it.


Aside from the performance hit, can’t users of other file systems just add a loopback device and mount that for Dropbox?


I wonder if the xattr issue has anything to do with whether the VFS adequately abstracts the underlying file systems. Btrfs, ext4, and XFS all support xattrs, but I don't know if they all support them the same way via a single interface.

And then how does that compare on macOS where there's JHFS+/HFSX and APFS?


It's too bad there isn't a standard for Dropbox-like functionality yet. An RFC that everyone can write to.


Technically there is WebDAV. Yeah I know, but it's there and it's a spec. Another de facto spec is rsync; iirc tarsnap was basically built on that. At some point there might even have been a business or two that offered rsync access.


Commercial rsync... https://rsync.net/


You might be thinking of Duplicity -- it's based on rsync. Tarsnap works in a very different way.


Is this a new business opportunity?


Probably not, as such users are just a vocal minority, and probably not enough to sustain as a consumer group.


And they really hate paying for stuff. Doubly so for things they think are simple and can be done "by myself".


And you wonder why, if this is what they get when they pay.


You get to either pay to be a second class citizen at some place controlled by somebody else, or invest a few hours into getting a solution tailored to your use-case that won't bring bad surprises.

So, why would one pay?


Doesn’t mean the niche user base isn’t big enough to support a small scale operation

Think nerd friendly and minimal, like NearlyFreeSpeech is to domain registrars


Good example of a small scale niche service that seems to do well (I think): https://www.tarsnap.com/


That niche already has syncthing for sync and borg/duplicity for backup (with a backend of choice). Why would they use a service that locks them in to one provider?


Same reason they use Dropbox: they don’t want to manage all that


So, what's the reason? I can't see it on the linked page. I would have thought that dropbox functionality is totally filesystem independent. I can't think of a reason why they'd do this.

EDIT: Found it.


It's a shame that btrfs seems to be dead in the water. I guess nobody with the resources to build such a filesystem actually needs its snapshotting and integrity features at the filesystem level?


"dead in the water" is incorrect. SUSE's SLES (I believe OpenSUSE as well) uses btrfs on root and XFS on home for default install. And SUSE systems are very far from "dead in the water" as it's employed in lots of big servers around the globe.

Just because Red Hat marked btrfs deprecated on their systems, doesn't mean the technology itself is deprecated.

Also, from what I've read, they deprecated it because of resource issues rather than tech issues (seems like they had 0 btrfs devs and lots of XFS specialists on the team).


The story goes that most of the btrfs developers were hired by Facebook.

SUSE is pretty much dead. Anything still running on that should be planning budget and resources for a migration.


btrfs supports xattrs, and it's also actively developed - there are always improvements in each new kernel release. Personally I'm avoiding it because I've been burned enough times with btrfs, and the current >0.7.x ZFS on Linux versions with ABD work very well for me.


While I have been burned by btrfs and I'm using it very warily, I agree that stability has improved dramatically over the last year or two. Also they make it more clear which features are done and which are not: https://btrfs.wiki.kernel.org/index.php/Status


Does ZFS gaining steam on Linux in recent years have anything to do with the state of btrfs?


Is it really gaining steam? Last time I checked, the CDDL license's incompatibility with the GPL made it impossible to ship ZFS with Linux, and as such, distros have separate packages not maintained by the core team (I'm thinking of Arch right now).

For the record, I would be very glad if I could seamlessly use ZFS, but from my perspective it looks like a lot of work that can break in unexpected places.


NixOS has ZFS. I don't have long-term experience (switched to NixOS only two months ago or so), but the installation is pretty simple: (1) boot the installer CD, (2) add a line to your configuration to enable ZFS support, (3) switch to the new configuration. And then you are ready to install on a ZFS root. Encrypted root also works.

I used these instructions:

https://nixos.wiki/wiki/NixOS_on_ZFS

Ironically, I felt less of a need to use ZFS on NixOS. I don't have a large pool and snapshotting the system is not really necessary, since in NixOS you can always roll back to a non-GCed previous version on your system. But I use filesystem compression and might use snapshots on /home.


ZFS is also fully supported in Ubuntu 16: https://wiki.ubuntu.com/Kernel/Reference/ZFS


ZFS is owned by Oracle now. It's more or less suicidal to embed it into any Linux distribution.


LOL. I assume you're trolling. The official ZFS implementation has been Illumos for years now.


There is no trolling whatsoever. ZFS was originally made by Sun, and Sun was acquired by Oracle almost 10 years ago. ZFS is owned by Oracle and is a legal minefield.



This made me laugh. Maybe I'm missing something; I don't understand the downvotes.


For desktop use btrfs is still a better choice than zfs due to its ability to restripe almost arbitrarily and do on-demand defragmentation and deduplication.


This appears to include ReFS on Windows (although with the creation of ReFS volumes being limited to enterprise-targeting editions of Windows 10, perhaps this won't affect too many people).


No, it doesn’t. Only NTFS is supported: https://www.dropbox.com/help/desktop-web/system-requirements...


I clicked expecting btrfs or zfs or something. Nope, NTFS is apparently "uncommon". Fascinating perspective.

EDIT: Shit, misread -- those are the supported ones. Thanks for the headsmack. :-)


I think you misread. NTFS is one of the supported file systems.


NTFS, HFS+, APFS and Ext4 are the supported file systems, not the ones being dropped. Btrfs is being dropped.


What's the reason for the move? I am on LUKS+ext4 on my development workstation, as full disk encryption on all devices is mandated by corporate policy and our contracts with clients.


I'm fine with this cuz I'm using ext4, but why? As people in the thread say, Dropbox has been supporting those filesystems without any problems for ten years.


Why do they need to directly support the underlying filesystem? Can't they just support the high-level FS API? Sure, it's not as performant, but at least it will work.


read the post. they need xattrs support from the file system. now, why btrfs wouldn't work is beyond me.


Not really contributing anything useful. Just have to agree that this is a terrible move. Now I have to find another cloud solution for all of my linux computers.


Would this impact FUSE-based filesystems like EncFS?


Definitely EncFS (which Ubuntu used to use for /home encryption), most probably anything besides ext4.


Well, that's just stupid. Why not check for extended attributes, and if they aren't there, call it unsupported.


Maybe I can figure out a way to automount a file formatted as ext4 in my home directory using systemd mount units.
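
A minimal sketch of what that unit could look like (the image path and user name are hypothetical; note the unit file name has to match the mount point, per systemd-escape):

    # /etc/systemd/system/home-alice-Dropbox.mount
    [Unit]
    Description=Loop-mount an ext4 image for the Dropbox folder

    [Mount]
    What=/home/alice/dropbox.ext4
    Where=/home/alice/Dropbox
    Type=ext4
    Options=loop

    [Install]
    WantedBy=multi-user.target

    # then: systemctl enable --now home-alice-Dropbox.mount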


And then you'll have two problems...


Joke's on them because I already dropped Dropbox many moons ago due to poor performance and high price compared to the competition. Not to mention that they hadn't released a new feature in half a decade. I'm not sure they still have.

Also, I'm pretty sure Btrfs supports xattrs unless I'm missing something.


XFS and BTRFS are uncommon? Filesystem should be transparent to such things.


Wow, I'm glad I deleted my Dropbox account a while ago and now use *gasp* hard drives and USB sticks, plus GitHub to put up all my open-source code and Bitbucket for closed-source stuff.


> we’re ending support for Dropbox syncing to drives with certain uncommon file systems. The supported file systems... and Ext4 for Linux.

Yeah this is not how it works. This is not how any of this works


That statement is a bit dubious. The next line also says:

> A supported file system is required as Dropbox relies on extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync. We will keep supporting only the most common file systems that support X-attrs, so we can ensure stability and a consistent experience.

Certainly XFS supports xattr and hence ideally should be supported. I don't know why they singled out ext4. I am running Dropbox on ext4 LUKS encrypted partition and I haven't seen the warning yet.


I noticed Dropbox started setting their own xattrs on my files a year or two ago if I recall correctly.

My guess at the time was that they were using it for cheaper rename detection, using an xattr UUID, instead of using heuristics to compare {inode, btime, mtime, size, name, folder}.

In other words, they would match up the xattr UUID across CREATE and DELETE events to merge them into RENAME events.

Missing RENAME events is pretty bad for the user experience as it can lead to a loss of version history when the file ID changes through the DELETE, CREATE events.
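
To illustrate the idea (the attribute name and files here are invented, not Dropbox's actual scheme): xattrs live on the inode, so they survive a rename, which lets a watcher pair up the DELETE/CREATE events it observes.

    setfattr -n user.sync.fileid -v "$(uuidgen)" ~/Dropbox/report.odt
    mv ~/Dropbox/report.odt ~/Dropbox/report-final.odt
    getfattr --only-values -n user.sync.fileid ~/Dropbox/report-final.odt
    # same ID before and after, so DELETE + CREATE can be merged into RENAME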

Anyone have any other ideas as to why they would be reliant on xattrs?


Or you keep a local db of synced files and compare hashes


Not if you consider speed and CPU consumption.


Yes, on file hashes for rename detection:

It prevents a whole lot of pipeline and concurrency optimizations since it forces you to rehash every file in the change set, before your rename detection algorithm can kick in.

For example, if a user renames a root folder with tens of thousands of descendants, then the hash approach would have poor time-to-first-change-synced latency.

Furthermore, relying on file hashes won't actually work.

A user could have done both a RENAME and UPDATE on the file, since the sync app last ran (you need scanning logic for this, and can't rely solely on inotify). To survive a RENAME and UPDATE, you would then need to compare partial file hashes. But again, it affects latency too much.


And it getting out of sync with reality. Metadata on the files seems much better.


extN xattrs are fairly limited, because iirc all xattrs of a given inode must be stored in the same block. I don't think XFS has any serious limitations (count/size) for them, but the Linux kernel itself limits the size to 64K.


You are correct. "man 5 attr" says: In the current ext2, ext3, and ext4 filesystem implementations, the total bytes used by the names and values of all of a file's extended attributes must fit in a single filesystem block (1024, 2048 or 4096 bytes, depending on the block size specified when the filesystem was created).
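
If you want to see what that limit works out to on your own filesystem (device and path are examples):

    sudo tune2fs -l /dev/sda2 | grep '^Block size'   # e.g. "Block size:  4096"
    stat -f -c %S ~/Dropbox                          # fundamental block size, no root needed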


I would assume the block size is at least 4k for anything created in this era.


Btrfs also has supported xattr since the start. It might be that they're dropping ecryptfs support which is file level encryption (blockdev>filesystem>encryption), whereas you're doing block level encryption with LUKS (blockdev>LUKS>filesystem). Since 2016, ext4 crypto and f2fs crypto were moved to VFS, open question if Dropbox will support that setup which is very common on recent Android versions.


It's because Linux users know what they're getting and store more data than "regular" users. It's the same reason "unlimited" storage plans from Backblaze don't have a Linux client.


Do you have any support for that claim? It seems incredibly unlikely that, say, Windows/Mac users don't store pictures, video, or music in their Dropbox folders and those are by far the most common way people use space in the modern era.


Yeah, this is probably about cutting out the big data users. They don't even offer an unlimited plan, but they probably noticed that certain linux users were using most of their 2tb plans while everyone else wasn't.

Lose the most active users, keep the ones paying for something they don't need.


Are you and XorNot guessing, or are you basing this off a statement they made?

Because the alternate explanation I assumed is that they don't want to continue maintaining a bunch of different filesystems. That complicates development quite a bit (because you need to have developers who know the quirks of the file systems) and testing a lot (because all changes have to be tested on all variants, and it becomes multiplicative when a platform has multiple file systems).


> the alternate explanation I assumed is that they don't want to continue maintaining a bunch of different filesystems.

But they don't have to maintain a bunch of different filesystems. That's the OS's job. From the application's PoV there shouldn't be any difference between these filesystems, all they need to do is check if the required feature (xattrs) is enabled for a certain FS and that's all.


You still need to test. And I'm skeptical that there is no observable difference from the application level.


That certain subset seems the most likely to adapt their setups to just use ext4, if that's what's supported.


Dropbox for Business is unlimited, isn't it?


No service is "unlimited". It's a marketing lie.

Why? Because there are always exceptions for 'abuse of service' and other exemptions.

The only time something is unlimited is if you do it yourself. Then you only have yourself to lean on.


Then put a limit and don't claim unlimited


Bad for business. Better slap "unlimited" everywhere but just shoo away the hoarders.


Well assuming this paranoid reason for their move is true... (an assumption I don’t buy into) it would be more about churners than hoarders.


Those same Linux users know they can just download rclone, which is a client that supports Backblaze ;)


On their business plan, which bills for every GB (at a very good rate) but is priced at presumably a sustainable level right out of the gate.


Well, so long Dropbox.


Can anyone tldr the solution? I'm on Ubuntu and get the warning (stop syncing/move folder).

Move to where? How do I get it to work on Ubuntu again?


Tragic


Here is a direct link to the response from Dropbox: https://www.dropboxforum.com/t5/Syncing-and-uploads/Linux-Dr...



No problem, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem.


Just in case anyone doesn't recognize this quote, it is from the 2007 hacker news post about Dropbox: https://news.ycombinator.com/item?id=8863


It’s a really great example of worse is better.[1] We went from sophisticated network file systems to a daemon that destroys your battery life (at least on Mac) watching for file changes in a directory.

[1] https://www.dreamsongs.com/RiseOfWorseIsBetter.html


FTP is neither sophisticated nor a network file system. Frankly I'd take Dropbox over FTP any day of the week - FTP needs to die.

Thankfully your options are not limited to either Dropbox or FTP. Thus people who want simplicity can have Dropbox (or similar) and people who want control can have sshfs or any number of other tools out there that require some assembly but also don't suffer from the numerous problems that pre-TCP/IP protocols like FTP suffer from.


I’m not talking about FTP, which isn’t a network file system. I’m talking about NFS, AFS, and successors in that line of development. E.g. https://www.slideshare.net/mobile/snehcp/coda-file-system.

I’m literally talking about the theory of the paper by Richard Gabriel (https://www.dreamsongs.com/RiseOfWorseIsBetter.html) which is that worse solutions often win because it takes too much time to bring a good solution to market.

If you were trying to make a “good” solution to the problem addressed by Dropbox, it probably would not look like Dropbox. For example, you’d do a real network file system that wouldn’t need to do a binary diff of the file each time to see what changed, because it would have access to the block-level changes at the file system layer.[1] You'd have file locking in the protocol (like CFS), and could sync data from a locked file instead of waiting for the lock to be released.[2]

It also probably wouldn’t have made Houston a billionaire because who is going to install a kernel driver off the internet? But on the flip side, Dropbox almost certainly killed much of the interest in real network file systems, because it is good enough.

Which is why we’re all using an internet powered by Javascript, Electron apps on the desktop, etc. Worse is better.

[1] https://www.dropbox.com/help/syncing-uploads/upload-entire-f...

[2] https://www.dropbox.com/help/syncing-uploads/stuck-syncing


In that case it might have been helpful if you stated that you were shifting the context away from FTP; as was the context in the GPs comment. ;)

NFS and AFS (from what limited I know of it) are more designed for local networks thus to leverage NFS over a WAN you'd then need to tunnel your connection (eg via SSH or VPN). So while there is obviously overlap between them and Dropbox I wouldn't really say the two are all that comparable.

However, to answer your point, I don't think anyone would disagree with the specific part of it regarding how simplicity is often better than something arguably more powerful. But just because something is simple doesn't mean it isn't also good. "Good" is just a question of whether it meets requirements. If your requirement is that it can be installed and operated by a layman, then Dropbox is a far better solution than any of the other proposals you've mentioned.


>you'd then need to tunnel your connection (eg via SSH or VPN)

Well, there is an sshfs FUSE filesystem, for what it's worth
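
Typical usage, for anyone who hasn't tried it (host and paths are placeholders):

    mkdir -p ~/mnt/remote
    sshfs alice@example.com:/srv/shared ~/mnt/remote
    # ...work on the files as if they were local...
    fusermount -u ~/mnt/remote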


Which works on Windows via Dokan [1]

[1] https://github.com/dokan-dev/dokan-sshfs


Yup, mentioned that in my previous post. Awesome project. I remember the first time I discovered it - I felt like a caveman who'd just discovered fire.


AFS works fine over a WAN. Sometimes a little slow on metadata operations.


Network file systems are harder than they sound, and writing additional file systems for Windows is a huge pain. Even Microsoft's own solution - OneDrive - they've had to walk back slightly on their "placeholder" implementation for detached operation, which was great when it worked but occasionally managed to blow up badly.


> Which is why we’re all using an internet powered by Javascript, Electron apps on the desktop, etc. Worse is better.

That's being a bit alarmist, isn't it? It's not worse is better. It's the right amount of compromises is better. Or, "good enough is better".

Perfection usually has diminishing returns and is rarely obtainable. Worse is not better. Good enough is better; objectively so.


As my brother likes to say, "good enough is, otherwise they wouldn't call it that."


I think you mixed up worse is better. The idea behind the phrase is that quality does not increase with functionality. Functionality often means features along with bells and whistles. Sure that application looks pretty and offers tons of features but with that complexity and glut comes a steeper learning curve as well as possible security and stability issues in the code (more bugs).

A good comparison would be comparing using make files and a simple text editor like vim to visual studio. Is visual studio truly better because it offers more features when a make file and vim can do much the same? A programmer used to VS might call the make file method worse, but the reality is that it is the simpler path which makes it better. Realize that simple doesn't mean "simple to use" but simple in terms of complexity (philosophical simplicity is at play here).

"Worse is better" is better translated as "A simple design is better, but not from the user's perspective."


I'm talking about these two points from the paper, regarding the "Worse is Better" philosophy:

> Simplicity -- the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.

> Correctness -- the design must be correct in all observable aspects. It is slightly better to be simple than correct.

Dropbox is simple in implementation, at the cost of simplicity in the interface and correctness. E.g. it is simpler to simply punt on locked files in Windows. Tell the user to quit Word if they want their file to sync, instead of handling file locking at the protocol level. Likewise, it's simpler to detect changed files after the fact than write a filesystem driver to knows what blocks are changed. But it degrades the user experience (their computer burns clock cycles re-figuring-out information a real filesystem driver would've had).

The upside of all that is that Dropbox was simple to implement, simple to port, and simple to deploy, which made it popular:

> Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing.


NFS is probably the most unreliable network file system that is still in use.

Even SMB is better


Dropbox may come through for you in the end. They've got a feature called "Smart Sync" that is a network file system implemented as a kernel driver. See https://www.dropbox.com/smartsync and https://www.dropbox.com/help/desktop-web/filesystem-integrat.... Not sure if it's doing block level diff or not though.


Your situation is different than mine; this is the first I've heard of Dropbox even registering as a top use of battery.

How many files do you have? I've got 26000 files in my Mac's dropbox folder. (Granted, very few of them change more than once or twice a day; maybe 20 or so of those do.)


It definitely happens in some scenarios. Especially if you do fancy things like using links, etc, and especially when Apple released a new macOS version.

IIRC, there was a variable, hard limit for objects in a folder or locks in a folder where it would go wacky.


My top five in the 'Average Energy Impact' column of the Energy tab in Activity Monitor, as of this moment:

Docker (3.13)
Dropbox (2.08)
Outlook (2.03)
Safari (2.02)
Slack (1.57)

Total 131771 files currently on disk (I'm using Selective Sync, because SSD prices)


Interesting. Mine's quite far down on the list at < 1.


I last used Dropbox in 2011 or so (I stopped using it because it killed my battery life). It may have gotten better since then. (But my point is addressed to what it takes to get popular, i.e. that its easier to make a “dumb” tool popular, and Dropbox was popular back then.)

EDIT: Clearly not just me: https://news.ycombinator.com/item?id=12464901 (thread from 2016)


Going out on a limb here, but I think it's possible things have changed in 7 years (and that it was probably something specific to your setup, not something millions of Dropbox users with a Mac had to put up with).


> i.e. that its easier to make a “dumb” tool popular,

If this was your intent, then using the tired "worse is better" trope and claiming an issue from 7 years ago that no one else has reported seems a far cry from it.

Dropbox became popular because it was easy, it worked, and did exactly what it said it did. That might be "dumb" in that it's not feature packed, but you use a lot of negative connotations when none are required.


There has never been a network file system that really worked for this use case - detached operation by average users. The nearest we ever got was WebDAV.


I thought AFS (the Andrew File System) was supposed to cover that case. But I have to admit I never saw it in action, so this is hearsay at best.


Don't know what you're talking about. Dropbox doesn't ruin my battery on my Mac.


It really sums up nerds' lack of comprehension of the importance of user experience. It's like the UX version of the apocryphal "640k is all anyone will ever need."


The other famous one of those is "No wireless. Less space than a nomad. Lame." https://slashdot.org/story/01/10/23/1816257/apple-releases-i...


I like it. iTunes, for those that haven't used a Mac, is REALLY slick.

This has aged gloriously, thank you!


Back in the day, iTunes was the best music manager. It's still a great one now.


I used iTunes from about 2007 through 2014 and it was trash throughout that period. I doubt it's gotten any better.


Lol what.


Back in the day, it was pretty slick!


I know it's usually taken as an example, but I disagree. CmdrTaco was giving his opinion, not saying it would fail in the market. To him, it was lame.

This idea that we're always making sale predictions is a vice of the startup culture.


The first iPod was terrible though and sold poorly. The 3G was the first good model, and they didn’t start taking off until the 4G. History only proves that his analysis was completely correct.


3G was when they added Windows support. Probably coincidence :)


No, Windows support came with the 2G in 2002. iTunes for Windows wasn’t a thing and few Windows machines had FireWire, but the Windows-compatible iPod came out in 2002.

The 3G was the breakthrough model, but I lend that as much to the dock connector (which was available in USB and FireWire) and the growth of x-platform iTunes as anything.

The broader point that it required Windows support for the iPod to become mainstream is of course true. That said, the iPod was also the reason so many of us became Mac users in the early 00s because the “halo effect” was undeniable.


You're right. It really was the dock connector that made it a viable product for many people. The fact that the first iPods were Firewire (which I totally had forgotten about and I even had a 1G iPod) made it very difficult to make it work outside of the Apple ecosystem.


It was FireWire-only for a while too.


It's not merely UX. At the heart of utility, you'll find simplicity and efficiency of use.


I'd argue that "simplicity and efficiency of use" clearly falls under User eXperience.


So literally UX


Are you telling me that I could build a startup with 3 lines of bash? (EDIT: and a fancy easy windows version)


A great many successful tech companies can be described in terms of more consumer-friendly Unix tools.

Slack has the better part of a billion dollars in funding for what's essentially resource-hungry IRC with pictures. Dropbox does little you couldn't accomplish with a server and rsync.

What I'm really eager to see is git for everyone else.


>> git for everyone else

When you simplify and generalize git to the point where "everyone else" can use it, you get Apple's Time Machine and Windows' File History. I'm not that familiar with Time Machine, but if File History had a more visible interface that you could use to easily "checkpoint" individual documents or directories on demand, you'd pretty much be there.

Branching is too complicated for most people to work with and overkill for most scenarios.


What would "everyone else" do with Git? It's a Rube Goldberg solution implementing a highly specific set of ways to do file versioning.

Most cloud storage providers offer rudimentary versioning; are you referring to the idea of promoting commits to being first-class? It would need to be baked into Word, Excel and similar, and those tools already have builtin version tracking, as horrible(?) as it is, so... :/


People I know who definitely have this problem tend to work in creative fields - especially musicians and (digital) visual artists. They almost universally have a bunch of files with names like project_final2_revised_reallyfinal3_with_edits_from_sally.psd.

Of course, a solution that works well with all the different binary file formats people in those fields use wouldn't be easy.


Ah, that. Yes, absolutely I'd love for a solution that fixes that.

It would definitely need to work with large files though, which categorically precludes Git.

The first point, I think, would be building a delta engine with case-specific code for the most common ubiquitous file formats, like docx, xlsx, psd, etc. Of course it wouldn't be able to be perfect with everything but it would certainly be better than eg just recompressing each version or something equally naive.

The UX would be the next major hurdle. Time Machine is a good example of the kind of simplicity that would be needed, but it would need to be a have a bit more surface area to be applicable and useful to all scenarios.

One other feature that comes to mind, which would be incredibly difficult to get right but probably critical, would be useful version diffing. I think keeping this simple and just building something that can do $anything->SVG (with maybe cheats where bunches of the SVG is mostly just a bitmap in certain cases) and then doing something fast on the SVG (and/or its bitmap contents...) would probably be the most viable target.


I think it does need branching and collaboration. There needs to be a way for Sally to contribute those edits, and they might not actually make it into the final result.

No doubt there would be a learning curve. I think that's OK. The target market here is serious users who already dedicate time to learning professional tools like Photoshop.


Oh no. Now you need full file format parsing so that you can resolve merge conflicts.

:(

That would take years to get anywhere with ._.

The diffing idea I suggested above is just manageable, a la macOS Preview, with hacks. Branching requires folding-back-in, and that's not just a case of "A or B", it's a case of "A, B and C conflict with D and E, while F G and H are okay," where I could then say "save B and E but drop F". If B is a layer, E is an imported asset and F is a custom filter... you get the picture. You need a reimplementation of Photoshop (halting problem).

:/

Sad that everyone hates GIMP.

But... hmm, this could get folded into Blender, and then make the rest of the industry jealous.......


What about grep/sed/find for everyone else?

Also, to get more ideas:

   echo "Make a startup with "$(ls /bin /usr/bin | sort -R | head -n 3)


I thought it sounded familiar!


The dhouston on that YC post is this guy now: Andrew W. Houston is an American Internet billionaire entrepreneur who is best known for being the co-founder and CEO of Dropbox, an online backup and storage service. According to Forbes magazine, his net worth is ~$3 billion.


The modern alternative is probably Syncthing.

I intermittently maintain a list of these things at https://github.com/pjc50/pjc50.github.io/blob/master/secure-... ; none of them have ever been precisely what I wanted. The cheap alternative to Dropbox with Linux support you want was "Hubic" from OVH (edit: now discontinued)


How about Keybase? It has a pretty seamless files experience, I find. Open source, end-to-end encryption, Windows client, painless sync, free for 250GB...


Completely agree. I'm not sure why someone would choose Dropbox over Keybase nowadays.


Continuity? I mean, does Keybase have a business model yet? Or is it still "$bigshot_vc who is friends with the CEO believes a few crypto/security gambles are in order"?

Not meant critically, I love that they exist and found funding. It's just, as long as the model is "once the privacy shit hits the fan in some widely published scandal, we'll be the one that's ahead" there's only two outcomes: 1. It doesn't happen soon enough and Keybase runs out of runway, or 2. It happens, one of the many Keybase products becomes wildly popular because of it, and Keybase will ditch the others because "yada yada focus core business".


My understanding is they are focused on user growth and improving the UX right now. Also, I believe they could be sustainable without being as widely popular as dropbox. IIRC eventually they will have a paid tier for their kbfs solution. I know I would certainly pay. For me Keybase is a one stop shop: identity management, individual and team chat, encrypted git repos, and secure file sharing all on a cross-platform system. No one really offers the service they do.


Stores your private keys, though.


Well, according to this, they can't read your data.

"These folders are encrypted using only your device-specific keys and mine.

The Keybase servers do not have private keys that can read this data. Nor can they inject any public keys into this process, to trick you into encrypting for extra parties. Your and my key additions and removals are signed by us into a public merkle tree, which in turn is hashed into the Bitcoin block chain to prevent a forking attack. Here's a screenshot of my 7 device keys and 9 public identities, and how they're all related."

https://keybase.io/docs/kbfs


> The cheap alternative to Dropbox with Linux support you want may be "Hubic" from OVH.

Hubic has been discontinued recently as a non-core business to OVH[1].

[1] https://www.ovh.co.uk/subscriptions-hubic-ended/


But where would you sync to? Dropbox also keeps a copy of your files, which makes it attractive without the need to set up and configure your home NAS.

Although it's arguably a more intelligent decision not to share private files with a company like Dropbox.


> But where would you sync to?

You sync across your devices (or anything where you can run a standalone binary). Phone, PC, server, whatever. It's pretty good and very stable when your packager doesn't fuck up the service file (https://svnweb.freebsd.org/ports/head/net/syncthing/files/sy...).

> without the need to set up configure your home nas

That's only needed if you need access to your files from the outside world (which is probably a bad idea). For instance, my ~/Documents are syncthing-only, not available through my Nextcloud instance. Can't access my payslip from last year on my phone. Can't have it stolen through that channel either!


On my "ideal" requirements list is the ability to sync, encrypted, to a standard cloud backend as well. https://github.com/syncthing/syncthing/issues/2647


Then use git-annex, which has had that for years :)


Apparently git-annex won't let you store a git repo in it? http://git-annex.branchable.com/forum/Storing_git_repos_in_g...

(Sure, "don't do that then", but I'd rather not have to remember to not do that)


Similar to the Mac backup software Arq, just give me a list of object stores I can point to (Backblaze would be my first pick). Although, for my purposes, I think iCloud Files is going to turn into most of what I need from Dropbox fast enough that'll be where I move to.


For whatever reason, no other service does binary delta uploads like Dropbox, including Google Drive. No other service takes advantage of the local network when syncing multiple PCs on the same LAN.


More seriously: Syncthing exists, and it is beautiful.

I set my grandmother up with Synctrayzor and she doesn't know the difference between that and Dropbox.

It is missing the ability to share things with a direct link, or share a repository or folder "easily" (read: in the same way it's done with Dropbox), but the trade off has been worth it for me.


Unison is beautiful, as it has a formal specification with proofs of correctness for its bidirectional syncing. Bi-directional syncing is hard to get right and many devs have been subsequently shown to not understand the problem fully, for example DropBox: https://www.cis.upenn.edu/~bcpierce/papers/mysteriesofdropbo...

My instinctive reaction is not to trust any brand new effort without more evidence of its correctness.


> formal specification with proofs of correctness for its bidirectional syncing

Where can I find more information on this? Search is failing me.


There are lots of papers referenced in the "Mysteries of DropBox" paper above, but I think the full spec is here: http://www.cis.upenn.edu/~bcpierce/papers/unisonspec.pdf


Which cloud backend do you have it sync to?


None.

You run it on your machines and they will sync among themselves. No need for any cloud backend.

Some people with Synology or QNAP run an instance on their NAS.


But I (and I suspect quite a few other people) want a cloud backend to cover both the disaster-recovery cases and syncing while outside my LAN.


1) you can sync with a device inside your LAN, even if you are outside, with global discovery (enabled by default).

2) you can run your own off-site instance, that can be hosted with your favourite cloud provider.


> you can run your own off-site instance, that can be hosted with your favourite cloud provider.

How much would it cost to hire an admin to set up and maintain that instance?

The whole point of Dropbox is that I don't have to do any work.


There are many possibilities, as it is not a pre-packaged solution.

For example, if you have Synology or QNAP, there are packages to sync with cloud providers, like Backblaze B2, Amazon S3, Azure Cloud Storage, etc. So if one of your syncthing sync instances (which you can set up with a few clicks) is such a device, then with a few more clicks you can choose your cloud platform and back up to that.

Or you can be using something entirely different. That's the beauty: you can use whatever fits your needs and budget, and you don't have to fit yourself to the limitations of one or two pre-packaged solutions. You can do something entirely different, e.g. if someone from your family also has some NAS, you can back up to each other's devices and not rely on third parties at all. The possibilities are limitless.


Organizations already have such an admin, and workers can sync whilst they're at work. It's not very difficult to set up, and saves tons of cost (although a NAS is a small initial investment, it doesn't require much maintenance).

Individuals don't, but most individuals have no clue about the repercussions of hosting their plaintext data on Dropbox in the USA. It's a ticking time bomb.


They think they want that, but they don't.

Every human being on the planet needs their own privacy. It's a common theme in the EU, but it isn't only important within the EU.

Therefore, there are two viable options:

1) You store your data locally (encrypted if you want to protect against burglars).

2) You store your data remotely, but encrypt it before you send it and decrypt it after you receive it (public key cryptography).

There is no other, viable, long-term option. Dropbox's solution is a short term solution.

Now, the question is whether you really need to sync to your LAN right away. You most likely don't. You can just sync in the evening and at night while your device(s) recharge. If you really need to sync ad hoc you have the option to punch holes in your firewall, or use a VPN. One opportunity for innovation here is to allow a user to define what must be synced right away over WAN and what shouldn't. Another opportunity is to make cloud backups easier. But these require the above requirement #2, and as you might know, public key cryptography just doesn't seem to be user-friendly.
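
Option 2 can be as simple as putting GnuPG in front of whatever sync tool you use (recipient and file names are examples):

    gpg --encrypt --recipient you@example.org tax-return.pdf   # writes tax-return.pdf.gpg
    mv tax-return.pdf.gpg ~/Dropbox/
    # and on the receiving device:
    gpg --output tax-return.pdf --decrypt ~/Dropbox/tax-return.pdf.gpg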


I sync to my NAS where it gets backed up to b2 using duply, cloud backup is out of scope for Syncthing itself.


I can't read that name without rearranging it to Snytching. Am I dyslexic?


Yeah.

Speaking of which, you can use such together with Cryptomator or GPG or whatever and synchronize your encrypted files. To any cloud. Or you can sync it to your NAS which stores them on an encrypted filesystem.

You can do that with Dropbox/Google Drive/etc as well. But I'd use that as yet another encrypted backup of my most important NAS content.


The android app sucks. No proper SD card support last I checked.


In case anyone is wondering if there truly is something like Dropbox (delta sync, desktop app, mobile app, web interface, ...): There is. It’s Seafile. The Docker image is reasonably easy to set up and run.


Reference for the uninitiated:

https://news.ycombinator.com/item?id=8863


I spent about a year of my life trying to get WebDavFS (backed by apache mod dav svn) working. I want that year back.


What? No! The whole idea of paying dropbox 10 bucks per month is to get a ready-made functioning solution.

It is not like they are not implementing a feature but removing one.


Or you can use Syncthing instead, you know.


Or just use Google Drive


Sorry, Google Drive doesn't support Linux.



No official support, but you can easily access the content read/write in multiple ways. I've used such a few years ago with a FUSE module, written in OCaml. The only complicated, annoying thing was getting authentication to work.


I've heard of it, but is it reliable? I mean, I'm a bit afraid of losing data by using unofficial programs.


I keep wanting to use ZFS's incremental syncing across SSH - guess this is now a great excuse to set it all up on a Linux laptop!


It is time for Dropbox to die.

Seriously.

Dropbox is but a new Linux kernel module away from being completely and utterly irrelevant.

$ insmod ipfs


This sounds fine to me. Y'all are making it sound like they're refusing to sync because the FS is encrypted. The file systems mentioned are really old. You can use newer FSes that are encrypted. They need to support extended attributes for obvious reasons.


Age has nothing to do with it. If you keep dismissing tech due to age, you're going to be spending your life re-inventing the wheel and rediscovering the same problems and mistakes over and over again.

By the way, NTFS dates back to 1993. It's not included in the expiry list. Every file system mentioned as losing support is newer than that, and they're all under active development and getting better all the time.

XFS became the default file system in Redhat 7 and derivatives. ecryptfs is the default for home directories on Ubuntu. They're talking about wiping out a significant proportion of Linux users here.


ext4: October 2008
btrfs: March 2009
ecryptfs: May 2016
xfs: 2002

Which one of these is old?


XFS is from 1994, ported to Linux in 2002, so it is contemporary with NTFS which is from 1993.


btrfs is "really old" compared to ext4? Default setups of major distros are not worth supporting?



