This is a great development, but I'll have to wait and see how reliable it actually is. I've had a few droplets running with them over the years, and those have been rock solid (years of uptime on one droplet, no problems whatsoever), but we recently started using Spaces for a commercial product and it has been a catastrophe. There are regular connectivity issues that leave the service mostly unavailable, and the status updates about it aren't particularly timely.
While trying to migrate away to GCS, synchronizing data (using gsutil) has proven practically impossible. The API is incredibly slow to list objects and occasionally responds with nonsensical errors.
(Every once in a while a random "403 None" appears, causing gsutil to abort. We could probably work around that by modifying gsutil to treat 403 as retryable, but since overall performance is so awful and we can regenerate most of the data, we decided to give up.)
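For anyone stuck in the same place: rather than patching gsutil, a cruder option is to just retry the whole rsync invocation, since gsutil rsync is incremental and re-running it resumes where the aborted run left off. A rough sketch, with placeholder bucket URLs, attempt count, and backoff (it assumes gsutil is already configured for both sides):

```python
import subprocess
import sys
import time

SRC = "s3://my-spaces-bucket"   # hypothetical source bucket
DST = "gs://my-gcs-bucket"      # hypothetical destination bucket
MAX_ATTEMPTS = 20
BACKOFF_SECONDS = 30

for attempt in range(1, MAX_ATTEMPTS + 1):
    # rsync only copies what is missing, so each retry makes forward
    # progress even if the previous run died on a spurious 403.
    result = subprocess.run(["gsutil", "-m", "rsync", "-r", SRC, DST])
    if result.returncode == 0:
        print("sync completed")
        break
    print(f"attempt {attempt} failed (exit {result.returncode}), retrying...")
    time.sleep(BACKOFF_SECONDS)
else:
    sys.exit("giving up after repeated failures")
```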
Yeah, DO Spaces is awful all around. Deleting is extremely slow as well. We had to write special code because DO can't delete 1000 objects at a time (the API call takes around two minutes to succeed, if it succeeds at all), to the point that we just resorted to deleting entire buckets. The UI also keeps crashing when there are many objects :(
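For anyone who ends up writing the same kind of special code, the idea boils down to pointing an S3 client at the Spaces endpoint and deleting in batches well under the 1000-key limit. A rough boto3 sketch (endpoint, region, bucket name, and batch size are placeholders, and the slow listing is still the bottleneck):

```python
import boto3

session = boto3.session.Session()
s3 = session.client(
    "s3",
    region_name="nyc3",  # placeholder region
    endpoint_url="https://nyc3.digitaloceanspaces.com",  # placeholder endpoint
)

BUCKET = "my-spaces-bucket"  # placeholder bucket name
BATCH_SIZE = 100             # well below the 1000-key maximum that stalls

paginator = s3.get_paginator("list_objects_v2")
batch = []
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        batch.append({"Key": obj["Key"]})
        if len(batch) >= BATCH_SIZE:
            # Delete in small chunks instead of one 1000-key request.
            s3.delete_objects(Bucket=BUCKET, Delete={"Objects": batch})
            batch = []
if batch:
    s3.delete_objects(Bucket=BUCKET, Delete={"Objects": batch})
```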
I recently had to delete a multi-TB S3 bucket and learned that S3 isn't great at deleting tons of files either. The AWS Console just hangs forever. I let it go for hours before finding another solution.
It sounds like you’ve already resolved this, but for the benefit of anyone else who stumbles upon this, my solution for deleting a large bucket is to set a lifecycle rule with a short TTL, after which the objects are expired and deleted automatically.
Set that rule, and come back to a beautifully empty bucket 24 hours later, after Amazon’s gnomes have taken care of the issue for you.
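In case it saves anyone a trip to the docs, the same rule can be set programmatically. A rough boto3 sketch (bucket name and rule ID are made up), equivalent to what the console's lifecycle editor does:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-huge-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-everything",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # empty prefix = whole bucket
                "Expiration": {"Days": 1},  # minimum TTL is 1 day
                # Also clean up incomplete multipart uploads left behind.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)
```

Once the objects have expired, the now-empty bucket can be deleted normally.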
Ditto, droplets are great and stable (and, somewhat surprisingly, CPU-optimized ones seem to perform better on average than their AWS equivalents). I've tried to make Spaces work for the last 6 months (NYC3), but it's just a disaster; I could barely sync the data back to AWS S3 last week (you have to do it from a droplet in the same region, otherwise your chances are close to zero). Downloading/uploading large objects is perfectly fine, but something's inherently broken at the metadata layer, so listing objects is either ridiculously slow or you get timeouts and weird errors like 'limit rate exceeded'.
TL;DR: droplets are good; avoid Spaces like the plague.