I'm fully okay with that when the goal is throwing commodity infrastructure at a problem. I still use AWS for most things; the point of DO, for me, is that for "worker"-based tasks, I can do a lot more with eight $5/mo DO droplets than with one $40/mo on-demand EC2 instance (especially for network-IO-bound tasks).
I don't need my infrastructure to live in DO; I just need my, well, elastic compute to happen there. Oddly enough, DO is better than EC2 at being an Elastic Compute Cloud. (EC2's advantage, meanwhile, lies in how configurable it is for the non-elastic parts of your workload. It's great for being the host for the infrastructure components that form the "skeleton" of a service, with known work-pool sizes; it's not-so-great for just being a cheap place to offload a bunch of work.)
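To make that "my elastic compute happens in DO" point concrete, here's a minimal sketch of the disposable worker-pool pattern against DigitalOcean's droplet API (v2). The region/size/image slugs, the DO_TOKEN environment variable, and the tag name are illustrative assumptions, not anything from the comment above; the shape of the calls is the documented create-droplet and delete-by-tag API.

```python
# Minimal sketch: spin up a tagged pool of identical droplets, tear them all
# down by tag when the work queue drains. Slugs and DO_TOKEN are assumptions.
import os

import requests

API = "https://api.digitalocean.com/v2/droplets"
HEADERS = {"Authorization": "Bearer " + os.environ["DO_TOKEN"]}


def spin_up_workers(n, tag="worker"):
    """Create n identical droplets, tagged so they can be destroyed as a group."""
    ids = []
    for i in range(n):
        resp = requests.post(API, headers=HEADERS, json={
            "name": f"{tag}-{i}",            # the name is cosmetic; the tag is what matters
            "region": "nyc3",                # assumption: any region works
            "size": "s-1vcpu-1gb",           # assumption: check current slugs for the cheapest tier
            "image": "ubuntu-22-04-x64",     # assumption: whatever base image your workers boot from
            "tags": [tag],
        })
        resp.raise_for_status()
        ids.append(resp.json()["droplet"]["id"])
    return ids


def tear_down_workers(tag="worker"):
    """Destroy every droplet carrying the tag once the queue is empty."""
    requests.delete(API, headers=HEADERS, params={"tag_name": tag}).raise_for_status()
```

Because every droplet boots from the same image, pulls its work from a shared queue, and gets destroyed by tag when the queue drains, none of them needs a name worth remembering.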
Linode, Slicehost (RIP), the cornucopia of LEB providers, prgmr, DigitalOcean, and friends target hobbyists and small fleets. Their model encourages precious, named snowflake machines that aren't disposable and stick around for a while. That model also falls apart somewhere on the order of 50-100 nodes, depending on admin competence and documentation. (If you are holding together 1,000 machines named after stars or authors or something through sheer will, consider going disposable and ask me for a beer.)
Amazon is playing a whole different game, and Azure is there too. There's also a middle, which is where Rackspace Cloud is ending up: they were headed for the Amazon game and seem to have lost momentum. Come to think of it, the middle is littered with providers that had their eye on competing with Amazon.
With Google's default project quotas, they almost fall into that smaller bucket too (I was surprised by the core quota in particular), but the limits are easy to raise, so they land somewhere in the middle as well.
That's how I'd frame this: not necessarily amateur vs. pro, though it does sort of break that way. Snowflake vs. disposable is also the sysadmin/SRE inflection point.
I'm not sure I thoroughly agree with you re: documentation. I actually find the community tutorials generally very high quality despite being user-generated and seemingly not curated [1]. Linode's documentation is not bad either; I just happen to particularly like the idea of community tutorials on DigitalOcean.
Hi! I'd love to hear your feedback about our documentation. I work with the team that produces our tutorials. Are there areas you don't think we cover well, or is it something else? With 1,200+ tutorials, our docs are actually something I'm extremely proud of, but we're always looking to improve.
Your tutorials are awesome, and while you may not even see this, I would love for someone at DO to get on blob storage.
If that were available, I could move a lot of things over ($25k/mo on AWS currently). Load balancers and VPCs are also nice, but blob storage is the killer feature.
Storage and networking are both areas we're investing a lot of work into right now. Blob storage isn't first on the storage roadmap, but I'll definitely pass this feedback along.
Sometimes you get tutorials from different eras (I think one example is serving Flask with Nginx): they explain the same problem with two similar solutions, but one works and the other doesn't because some config parameter has changed.
IMHO you should streamline them and mark the older ones as "OLD" or "ARCHIVED, proceed at your own risk".
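For context, the setup those Flask-with-Nginx tutorials walk through boils down to a tiny app served by a WSGI server behind an Nginx proxy. Here's a hedged sketch of the app side, assuming Gunicorn listens on 127.0.0.1:8000; it isn't taken from the tutorials in question, and the comment doesn't say which config parameter changed between them.

```python
# Minimal Flask app of the sort those tutorials deploy behind Nginx.
# Assumptions: Gunicorn serves it on 127.0.0.1:8000 and an Nginx
# `location / { proxy_pass http://127.0.0.1:8000; }` block sits in front.
from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "Hello from behind Nginx"


# Run with, for example:
#   gunicorn --bind 127.0.0.1:8000 app:app
```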
Or compare what you can do with the Linode manager vs. the DO one: that's pro vs. beginner.
I'm not saying anything about the Droplets themselves, but everything seems a bit Linode-inspired, just executed more cheaply.