I think that Dropbox typically handles conflicts quite well, and the issues I had are more likely bugs outside the conflict resolution implementation. I was a bit brief in my comment above, so let me elaborate in case you or someone else is interested:
The issues I had didn't result in conflicted files. Rather, after making a big change (e.g. switching git branches) some files were never updated or synced. Dropbox stopped picking up changes in the folder and, once restarted, eventually removed the new changes.
The order of events was something along the lines of:
1) Did work on computer A that caused massive file changes (e.g. moving between git branches).
2) Moved to computer B to continue work.
3) Noticed files were old or missing on B.
4) Syncing files in some other folders worked, but nothing happened in the folder with missing files.
5) Restarted Dropbox on both machines in the hope that this would trigger a fresh sync.
6) Observed files being reverted to old versions or deleted on machine A.
The end result was that Dropbox threw away the changes I had made on A and left me with the original state of B. I was able to recover the changes from a backup, so it was no big deal in the end (although it left me a bit scared I could have lost those files without noticing).
I was in contact with Dropbox support about the issue and explained in detail what I had done and what happened. I was offered help to recover the files, but since I had already done so, I just told them I didn't need any more support on the issue. I thought it might be because /proc/sys/fs/inotify/max_user_watches had a low value on one machine, so I wrote back that they might want to add back the old warning about this. However, the same problem with deleted files happened again after I had verified that this value was high enough on all machines.
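For anyone wanting to rule out the same cause: the watch limit I mention can be checked like this (Linux-specific path; the 524288 value is just a commonly suggested threshold, not anything Dropbox documents):

```shell
# Read the current inotify watch limit (Linux-specific; falls back to 0 elsewhere)
watches=$(cat /proc/sys/fs/inotify/max_user_watches 2>/dev/null || echo 0)
echo "max_user_watches: $watches"

# To raise it temporarily (requires root), one would run:
#   sudo sysctl fs.inotify.max_user_watches=524288
# and persist it by adding the same line to /etc/sysctl.conf.
```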
I have also seen how a script run by a colleague managed to confuse Dropbox. The script ran a test which repeatedly created and deleted the same file before checking its correctness. Running the script in the Dropbox folder left him with some old version of this file and a failed test. Running the script in a folder outside Dropbox left him with the correct final version of the file. He was only working on one machine.
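I don't have my colleague's actual script, but the pattern was essentially this (a hypothetical sketch; the filename and iteration count are made up). Outside a synced folder this always ends with the final contents; under a busy sync client, a stale copy of the file can win:

```shell
# Rapidly create and delete the same file, then write a final
# version and check it -- the pattern that confused Dropbox.
f=repro_testfile.txt
for i in $(seq 1 100); do
  echo "iteration $i" > "$f"
  rm -f "$f"
done
echo "final version" > "$f"
cat "$f"
```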
And yes, I know it's "bad" to run scripts like this or switch git branches on top of sync software, but it happens, and it is interesting to see how different software handles these cases.
It should be noted that Dropbox usually handles these massive file changes well, so moving to Syncthing has for me been more about it being open source and the possibility to keep files on my own machines. I was just glad to see that Syncthing also handles heavy use cases gracefully.