I for one have always liked the simplicity of DO for hosting but I've never wanted to take on the full liability of self-rolling a DB server (and backups and replicas). So everything my company has I've put on heroku or azure. This has potential to be really significant as I'd wager there are a lot of folks in similar situations.
I am definitely on that particular boat. I'll add one more point:
- Not very willing to setup DBs for several projects both paid and hobby because it's a fixed time sink. And before anybody tells me "but with this script it can take 2 minutes!" please don't forget that to learn to use your magical script I have to learn a few other things beforehand. (Although admittedly that's most likely a small time investment.)
Agreed. It's why I think we programmers should eventually just settle on 5-10 languages globally and not touch anything else -- so [together with all the other problems that still exist] we can also finally get around to writing the one UltimateDataMapper™ library that can work with whatever is out there.
I seriously can't be bothered to setup yet another cool and young database promising me quantum entanglement teleportation anymore. It's what is stopping me from trying 99.9% of what I see on the net.
I have a few scripts to which I just pass a DB name / user / pass and they bring up (or tear down) a Postgres or MySQL/MariaDB database. I'd do the same for Elastic and a few others if I hadn't been so lazy about it for years now.
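A minimal sketch of that kind of script, assuming Docker is available (the container name scheme, port, and image tag are my placeholders, not the commenter's actual script):

```shell
#!/bin/sh
# pg_up_cmd builds the docker invocation as a string so it can be
# inspected before running; pg_up runs it, pg_down tears the container down.
pg_up_cmd() {
  db=$1; user=$2; pass=$3
  printf 'docker run -d --name pg_%s -e POSTGRES_DB=%s -e POSTGRES_USER=%s -e POSTGRES_PASSWORD=%s -p 5432:5432 postgres:11' \
    "$db" "$db" "$user" "$pass"
}

pg_up()   { eval "$(pg_up_cmd "$@")"; }   # bring the DB up
pg_down() { docker rm -f "pg_$1"; }       # tear it down again
```

Usage would be something like `pg_up mydb alice s3cret` to get a throwaway instance and `pg_down mydb` when you're done.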
Even if you do have magic scripts and understanding for every possible scenario, you still have to deal with getting woken up to ascertain which scenario(s) you are in and run them.
Yep. One thing that the managed services give you is exactly that peace of mind you mention. Plus the fact that they are much better at fine tuning security, availability and performance settings than myself.
If you have a relatively small set of users, setting up your own database is usually as simple as setting it up locally and you won't need shards or anything. And setting up backups is as simple as adding a cron job that calls your backup shell script, which you can test separately. And by "small set of users", consider what SQLite's own website[1] says:
Generally speaking, any site that gets fewer than 100K
hits/day should work fine with SQLite. The 100K hits/day
figure is a conservative estimate, not a hard upper bound.
SQLite has been demonstrated to work with 10 times that
amount of traffic.
If SQLite is able to comfortably handle 100k hits/day, I imagine that more "legitimate" databases can handle more traffic comfortably without needing to jump to scale horizontally.
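The cron-plus-backup-script approach mentioned above can be sketched like this (directory, retention count, and schedule are assumptions; restores should still be tested separately):

```shell
#!/bin/sh
# Nightly Postgres dump with simple retention. Install with e.g.:
#   0 3 * * * /usr/local/bin/pg_backup.sh mydb
set -eu

# dump_path is split out so the retention glob below stays in sync with it
dump_path() { printf '%s/%s-%s.sql.gz' "$1" "$2" "$3"; }

backup() {
  db=$1
  dir=${BACKUP_DIR:-/var/backups/pg}
  mkdir -p "$dir"
  pg_dump "$db" | gzip > "$(dump_path "$dir" "$db" "$(date +%Y%m%d%H%M)")"
  # keep the 14 newest dumps, drop the rest
  ls -1t "$dir/$db"-*.sql.gz | tail -n +15 | xargs -r rm --
}
```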
The real benefit to having someone else manage your DB is that it eliminates the "unknown unknowns." I don't want to spend the requisite time becoming an expert DB sysadmin--I'd rather let someone else do it so that I can sleep at night. Also, databases are in a different category of risk. Misconfigure an nginx config? No big deal, fix it and move on. Set up your database incorrectly, resulting in data loss down the road? Could be game over.
SQLite doesn't really have concepts like replication (HA) or concurrent writers.
Notably, the SQLite website is (as far as I can see) read-only. So it's great if all you need is a SQL read API atop your structured data (and 100k hits/day is probably only limited by the filesystem/os since SQLite isn't a server). But you're setting yourself up for headaches by using SQLite if you need simultaneous read/writes combined with HA.
For small user counts, performance is the easy part. Failover and point-in-time restores are common examples for me that contain easily overlooked details and you don't find out until the worst possible time.
I think some cloud stuff is overpriced, but RDS easily pays for itself in my case.
Agreed, SQLite is great and >80% of websites will probably run fine on it, but 100K hits/day is pretty vague -- does that mean 1 hit/sec, or 3 hits/sec during peak time, etc.?
The SQLite website (https://www.sqlite.org/) uses SQLite
itself, of course, and as of this writing (2015) it handles
about 400K to 500K HTTP requests per day, about 15-20% of
which are dynamic pages touching the database. Dynamic
content uses about 200 SQL statements per webpage. This
setup runs on a single VM that shares a physical server
with 23 others and yet still keeps the load average below
0.1 most of the time.
It also doesn't specify a use-case. In a 98% read scenario with a good caching strategy it can easily do much more than 100k visitors per day. If you're taking in data from many devices you can easily bottleneck on writes.
It really depends. Also, configuring everything right gets hard. Most don't even think to do RAID over a few block storage devices, but that's something that comes with cloud storage. That doesn't count HA and other issues before getting to the application layer.
It's something that unless you're paying a full-time DBA, you are probably better off buying as a service. It's one of the few holes in DO's offerings and I'm very happy to see this.
I was literally going to spend the weekend testing latency between the US DO data centers and VMs on Azure and AWS just to see if any were pretty reasonable (consistently under 10ms) so I could use DO for my application and Azure or AWS for the DB hosting and management. This is incredibly great timing.
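That kind of cross-provider latency test can be roughed out with plain `ping` (the host below is a placeholder; run it from a droplet in each DO region against the candidate DB endpoints):

```shell
#!/bin/sh
# Report the average RTT to a candidate DB host, to check whether
# cross-provider latency stays consistently under ~10ms.

# avg_rtt extracts the average from ping's summary line, e.g.
#   rtt min/avg/max/mdev = 9.1/10.4/12.7/0.5 ms
avg_rtt() { awk -F/ '/^(rtt|round-trip)/ { print $5 }'; }

measure() { ping -c 20 -q "$1" | avg_rtt; }

# e.g.: measure mydb.example.database.windows.net
```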
Because spending $25-50/month on side projects that may sit for a while before I pick them up again is one thing. Spending hundreds a month is another. DO is significantly faster at any price point than what you get from Lightsail or AWS EC2. Beyond that, I'd be more inclined to use Azure's data services simply because I like the interfaces better and they're far less hassle to get started with. I'd probably use SQL minimally and lean on AWS Tables quite a bit.
For a DO only solution, I'd probably use SQL more and then rely on their blob (s3 compatible) storage for some bits. I have a few small projects that I've been getting anxious to finally work on, and don't want to spend a bunch of money on them in the interim.
In case of modest growth, I also don't want to be hamstrung doing DB operations work for something that isn't actually making me money... I'll split things up to save a bit in the nearer term so long as I can have a migration path.
If I did split with data in azure/aws and the apps in DO, I might go all azure later, or I might go all DO and take on the DB operations side... it depends on if/how things grow.
DO is cheaper for the servers. AWS is nice, but expensive. If you have the money, use it. If you are bootstrapping or willing to take on challenges yourself, DO like environments will save you a lot.
...yeah, but if best value for money would be the absolute goal, DO itself is quite expensive compared to Hetzner or Linode last time I compared. Others like OVH or Scaleway could be even cheaper.
DigitalOcean and Linode are currently the same price, what $5 buys you on DO gets you the same specs on Linode.
Hetzner is not a reasonable choice for servers IMO, it's akin to hosting in a datahole in Dallas; expect mixed bandwidth quality and questionable policies when issues arise. Comparatively, OVH looks stable.
> Hetzner is not a reasonable choice for servers IMO, its akin to hosting in a datahole in Dallas, expect mixed bandwidth quality and questionable policies when issues arise.
Really surprised to hear that, can you please elaborate? I only heard good things about them up until now...
> Comparatively, OVH looks stable.
What kind of stable do you mean? Bandwidth, latency, average I/O ops, CPU load?
Hetzner does pay lip service to improving their network, but it's akin to ColoCrossing: the internal network infrastructure is not amazing due to budget constraints, and the peering situation isn't apt to improve, as it's essentially money and politics that created it.
> What kind of stable do you mean (referring to OVH)? Bandwidth, latency, average I/O ops, CPU load?
I am referring to bandwidth, latency & jitter when comparing OVH to others. One thing OVH has nailed is keeping jitter minimal, and there have been significant optimizations of routes in their newer datacenters as time has gone on.
Mature OVH locations already have fairly good peering, to the point that many time sensitive workloads that can't be fronted/cached choose OVH in certain regions.
It's really sad to see Google Cloud and AWS flunking on this front: the lack of internal IPv6 support to the VM kills mobile performance, adding tens of milliseconds of latency and incurring a stateful connection in cellular carriers' CGNAT (which gets killed after ~100 seconds), reducing performance and breaking long-term open connections. Sending a packet over IPv6 to a cellphone is often faster than using push messaging on iOS or Android.
That heavily depends on where you're operating from. Ignoring the latency problems of using Hetzner et al. for a moment (if you're based in the US): increasingly, as the Internet fractures down lines of very distinctly separate legal structures nation to nation (or region to region), Hetzner, OVH and Scaleway are not going to be viable choices for most organizations in the US. Particularly as it pertains to production environments, until or unless they get proper US facilities.
If I physically operate in the US and I base my servers in the EU, then I open myself up to not only US but also EU jurisdiction and compliance in a myriad of ways. It's an entirely unnecessary additional burden in exchange for a discount on infrastructure (which is rarely the biggest cost in anything these days).
I have no intention of ever complying with GDPR for example, unless I'm running a very large organization. Not because I disagree with most of GDPR, rather, because I'm going to comply with US laws, as that's my legal jurisdiction and those are the laws I'm governed by.
Hosting with Scaleway, OVH or Hetzner is a big jurisdiction mistake in most cases for smaller US organizations, just as it would be to arbitrarily host in Japan or China or Brazil (ie foreign locales with entirely different laws).
...being physically in the EU, but building stuff that has the potential to have 80% of its customers in the US, as long as traffic to end users in the US is good (it usually is, unless you care about low latency for gaming, or real-time video bandwidth for video chat), I'd be in the opposite camp and see no reason to pick US-only hosts (DO has Amsterdam datacenters though, and so do AWS and Azure).
For everyone EXCEPT US-based businesses, being multi-jurisdiction from the get-go is the default, you know. And for any small or freshly created project, GDPR compliance is pretty easy. The EU's new copyright laws, though... those are an abomination; I hope that changes before they start being enforced. Nowadays the EU and US are probably equally horrible, competing at being the most horrible with respect to restricting internet freedoms.
What I'm actually looking for is hosting services that are outside of BOTH the US and EU, for some more side project ideas that risk falling on the wrong side of IP laws (the US's DMCA and the like are horrible too, btw). Something that would be run by a non-US and non-EU company, with datacenters physically outside this space. Something in the Middle East, SE Asia or Russia could have decent bandwidth to the rest of the civilized world and at the same time be blessed with the capability to delay/ignore/misfile requests from US and EU authorities, giving you a time buffer for damage control if things really hit the fan, while serving end users in those regions. Maybe after Brexit even the UK could become a nice place with more freedom too.
I wrote a review (https://ayesh.me/amazon-lightsail-review) of Lightsail when it came out. Although the specs are OK on paper, their network is slow. I'd still like to switch to Lightsail because I already use Route53 and CloudFront, but I wouldn't go with them given their network speeds.
Can you explain more? Why do you think running your own DB instance is such an overhead?
Have you ever tried running your own, with MySQL/MariaDB for example?
Mostly because databases are the key piece of data-persistence infrastructure. Spinning up a MySQL db to dev against, or a single server for a hobby project, is quick and easy.
In production, all of a sudden you have a lot of work to do, especially around HA. Figure out replication, get it working, figure out how to monitor/alert if it stops working, figure out failover, figure out how to test that failover actually works, etc.
Support around that stuff has improved over the years, but it's still non-trivial and high-risk to DIY. It's a very different scenario than a stateless app server where you can have easy redundancy.
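As one concrete instance of the "monitor/alert if replication stops working" step above, here's a hedged sketch of a cron check for a Postgres streaming replica (the threshold and alert address are assumptions):

```shell
#!/bin/sh
# Alert if a Postgres streaming replica falls too far behind the primary.
# Meant to run from cron on the replica.

# lag_exceeded is split out so the threshold logic is testable on its own
lag_exceeded() { [ "$1" -gt "$2" ]; }   # $1 = lag seconds, $2 = max allowed

check() {
  max=${MAX_LAG:-60}
  # seconds since the last replayed transaction on this replica
  lag=$(psql -Atc "SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)::int")
  if lag_exceeded "$lag" "$max"; then
    echo "replication lag ${lag}s exceeds ${max}s" \
      | mail -s 'replica lag alert' ops@example.com
  fi
}
```

And that's just the monitoring piece; failover itself, and testing that failover, are separate (and harder) problems.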
For me it isn't even this complicated. I just don't want to have to manage OS/MySQL/Postgres version upgrades or worry about having to troubleshoot when things on the server go south.
Just give me a database to connect to and take my money, please.
timdev2 sums up my original sentiment quite well. Is spinning up and maintaining a DB architecture doable? Of course... But there are so many complexities involved for real production that it would greatly slow us down.
If we had staff to dedicate directly to this then it wouldn't be an issue. But paying for a managed service that gives us production grade data access is a no-brainer for any non-trivial application we build.
and unless you're one of the elite few for whom good HA is actually easy, your attempts will likely end up with something that you _believe_ is as available as RDS
It's not about when things are going well... anyone here can set up a DB instance and run against it. It's the failover/HA and recovery options that are not considered by most. Not everyone can afford to take half a day or more to set up backups, read mirrors, failover, or other clustering options. Not to mention actual recovery modes.
I'm happy to pay a few dollars a month for someone else to automate it.
It's not just the database itself, it's everything else you have to do to do it right. How will you do backups and replication? How will you recover? What are the ideal configuration settings for your particular situation? What about authentication, roles, open ports, allowed ip address ranges, etc.
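To make the "allowed ip address ranges" point concrete with Postgres: access is typically restricted in `postgresql.conf` and `pg_hba.conf` along these lines (addresses, database and user names below are placeholders):

```
# postgresql.conf -- listen only on the private interface, not 0.0.0.0
listen_addresses = '10.0.0.2'

# pg_hba.conf -- allow the app servers' subnet, password-authenticated,
# and nothing else
# TYPE  DATABASE  USER     ADDRESS       METHOD
host    mydb      appuser  10.0.0.0/24   md5
```

And that's just one of the knobs; each one is easy to get subtly wrong.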
Yep, that's my main motivation for not even attempting it. There are a ton of security settings alone. I am not gonna be the idiot who will let a 14-year old bored script kiddie into my VPS-es so they can have a laugh at my expense.
I prefer the managed services. Besides the high availability, you get tons of security best practices -- all for a few bucks a month. It's usually such a crazy cheap deal it's almost not fair to them.
Running your own DB is a recipe for disaster UNLESS you know what you are doing and can invest resources continuously in keeping it up. If you have a good DBA, it probably isn't too much of a hassle, as long as they scale with the number of DBs and you have automation to help along the way. However, at that point you are almost at a hosted solution anyway.
For hobby projects, a single self-managed instance is fine. For production in a business critical environment, much more thought needs to go in.
So now they have managed databases, load balancers, a cloud firewall that's partially VPC like, object storage and block storage.
Assuming managed K8S is next, or maybe more "AZ/Region" h/a features. Great to have a new player coming into this space. Especially one with reasonable egress charges.
Just started using the new Kubernetes offering on DigitalOcean. Still in beta, but it works pretty well. If this is a sign of how they will do databases, I'm all ears. Add CI/CD and you have a Heroku competitor.
Glad you are having fun working on your after-hours side project with DO. But please DO NOT host with DO if you plan to run a real production system, build a company, hire people, etc. I have never found a better company for spinning up a server and playing with some settings, but when their algorithm decides there is something fishy, bye bye account, servers, backups -- you are never going to see any of it ever again.
The last startup I heavily pushed into switching to OVH or even Rackspace was an exact example of what happens when the DO algorithm decides you are not genuine. That's it. No explanation, no phone number to call, nothing. These people blindly decided to believe their algorithm and never wanted to even discuss resuming the account, or even getting us a backup of our data. There was nothing shady going on, I assure you. The funny part is that as of today DigitalOcean is still in violation of GDPR: to hopefully find out what was wrong with our account, we filed a request for info. Nope, zip, nada, totally ignored. We have filed a complaint with the proper authority and also notified the Attorney General in NY, and continue to await the result.
Build all you want on DigitalOcean, but please understand that not people but their weird algorithm is in charge of the future of your startup, the future of your company, and your future salary -- or lack thereof when you are forced to fire a team of people because you fell behind on payroll. In other words: be warned and build at your own risk.
Hey friends! My name is Jarland and I'm on the support team at DigitalOcean. We do have a number of fraud and abuse algorithms, and when we are alerted to potentially fraudulent activity, we take appropriate action, which includes notifying and communicating with individual users. I also want to confirm that we are fully compliant with GDPR.
Thanks @jarland for chiming into this thread. I love how DO has evolved over the past several years and want to comment about that.
I use DO for production and have gradually migrated my infrastructure away from AWS and Linode to Digital Ocean as the platform improved.
Just a quick question: if the algorithm is triggered (regardless of whether it is a false positive or not) and the user is notified, what happens with the droplets in the meantime? Is there a grace period for the user to act before DO takes action? And is the whole account frozen, or just the offending droplets?
It seems the major concern amongst commenters here is the sudden loss of service.
Thanks for the great service, and I look forward to your insight on this.
Depending on which items are flagged the account is put into a locked state, which means that access is limited. However, the droplets for that account and other services are not affected at all.
The account is also notified about the action and a dialogue is opened, to determine what the situation is.
There is no sudden loss of service. There is no loss of service without communication. If after multiple rounds of communication it is determined that the account is fraudulent, even then there is no loss of service that isn't communicated well in advance of the situation.
The answer depends on a variety of factors, but in general, when we're alerted to something that could be a violation of our Terms of Service, we attempt to engage with customers. In some cases, we may take actions against the resources running against an account and a vast majority of the time, there is a grace period before any permanent action is taken. If you have questions about specific cases, we recommend contacting our support team directly.
Yeah, I want to know if the execution is before or after the trial. Part of DO's appeal to me is the simplicity and predictable (low) cost. It would be really great if they published well defined account termination procedures. Do I get a phone call? An email? Do I get to respond before being disconnected? Is there an appeals process?
As anyone who runs a service that provides full root access to servers understands there is a tremendous amount of opportunity for potential abuse. It becomes a game of cat and mouse to catch the abusers and prevent them from creating numerous accounts which ultimately impact system performance and can lead to potential problems for real legitimate customers.
Those guidelines aren't published specifically because if they were, then the abusers would immediately begin to route around them, so it's meant to be opaque for a reason, but that is against fraudulent use, not legitimate use.
I just had my account locked with no warning or explanation. All 8 droplets were turned off. Account was unlocked 40 minutes later (also with no notification), and I could go in and turn droplets back on.
Over 2 hours later and still no response to my support ticket asking why and how it happened.
I'll be interested in the response I get. Unless there's a good reason why, or a plan to prevent the how from happening again, I'll be shifting anything critical away from DO, and go back to just using it for spinning things up to play with or test on.
Unfortunately nobody here is going to listen to you. A flock of DO fanboys downvoted my comments into oblivion even though I was merely posting my experience. Some ambulance chaser even turned out to be a psychic, because he knows better than I do whether we were provided with a GDPR-related response or not -- well, because some DO CS worker said so on this forum.
I guess getting your droplets cut off in the middle of a business day and waiting 4 days for a copy-and-paste template answer from customer support has to happen to everyone before they realize how crucial over-the-phone support is when it comes to hosting a production website. It's all good anyway.
Our team is currently migrating our whole production system from Linode and GCP to DigitalOcean. Your comment raises red flags; I'd like to hear more from DO itself.
I'd love to chat with you. If you have some time, send an email over to jdonnell@digitalocean.com and let's talk. I promise nothing but honesty, transparency, ideas, and maybe a few laughs :)
Out of curiosity, but asking honestly: what made you choose DO out of all the providers out there? It sounds like you have a serious setup that is most likely production and probably makes money. You do know that DO does not provide over-the-phone help, right? So if your server goes down, you open a ticket and you wait... Yes, they do offer an SLA -- I was in a heavy back-and-forth email chain when moving one of my clients from Rackspace, and they were sold on DO being the right choice -- but again, I was told there is no phone support, only that your emails to customer support are prioritized.
I suspect that any cloud provider has enormous amounts of fraud on it, and no tooling is 100% error free. It's also important to note that GDPR has specific exemptions for companies responding to suspected security/fraud related issues.
I love DO because the performance for the price is great. Also have no issue supporting emerging tech companies with cultures I connect with.
No question fraud exists on all these platforms, however, if anyone is wrongly flagged by an algorithm, reaching out to the company must be followed by a prompt and timely response, so any misfortune can be remediated. I would assume fraudsters wouldn't typically reach out to have their accounts reinstated.
In the digital age, startups and businesses rely on cloud providers for their livelihood. These providers must be reliable and trustworthy, otherwise they shouldn't get a penny.
Also, I'm not intimately familiar with OP's situation, my comments are just common sense generalizations; I think!
We have an entire fraud and safety team whose sole purpose is to deal with these situations. Every account that is flagged is notified. Every account is communicated with and there are always replies sent. Unless a droplet is actively being malicious, such as sending out a DDoS attack, or performing some other sort of determined malicious activity, there is absolutely no interruption in service. The account is locked so that the account can not create more resources, but there is no disruption to the underlying running resources such as droplets and otherwise. The intent here is to establish a dialogue with the user and determine if the activity is fraudulent or otherwise.
Thanks for chiming in. I’m very pleased to see you engaged with your customers/potential customers; especially in a high stakes community such as HN.
I’m not going to comment any more on this situation as I don’t know all the details, I do hope I won’t see comments of unhappy DO customers on HN in the future, as that would be a sign you guys are not up to expectations.
P.S. I have been a happy DO customer for years; thus far (:
GDPR does not have specific exemptions for fraud. It is often possible to process personal data for anti fraud purposes but it requires a full legal assessment in the same way as any other processing activity would.
This sounds terrible. I never had any issues with DO. They actually refunded me once when I actually made a mistake. Would love to hear the DO side on this.
Just making sure to reply to each person that raised a concern. There is a lot of fraud that comes into every single cloud provider as root access to a virtual server can be used for a lot of malicious activity. As a result every cloud provider has automatic and manual processes that they run to find these accounts and flag them.
In the case of DigitalOcean when an account is flagged it is locked, which simply prevents a user from creating more resources and an email notification goes out to the user to establish a line of communication.
There is no service interruption, and certainly the account, its droplets, and other resources are not deleted, and never deleted automatically when the account is initially flagged.
There are numerous communications that go out even if a user is unresponsive.
I've had an experience like this but with less consequence. I was running a server years ago that scraped information from various cryptocurrency sites and it was flagged as fraudulent. Support refused to say what the issue was or offer temporary access to the server for data retrieval. This wasn't some cowboy unbounded crawler either, it was making very specific requests, around 1000 a day for around two weeks, probably less than an average day of web browsing for the majority of people.
I'll never use DO for anything, even testing and mucking around. At the very least you would have to provide excessive evidence of identity if you get flagged by their magic algorithm, and if they restore your service it could be after days of wrangling with support (it took support over a day to answer my support tickets at the time).
I was only a technical adviser for a while, but I know they did offsite backups; it's hard to have these up-to-the-minute, though. So they tried to recover the latest version of their DB so that truly no records would be missing.
Edit: my original post got downvoted severely within a few minutes of posting. Hello DigitalOcean staff and/or owners!
I think more background information would be more appropriate than conspiracy theories of what happened to you. Usually there are two sides to a story.
I downvoted your original comment because it was a bunch of unsupported conspiracy BS and at least one of your claims was directly refuted by the company itself.
If you have a real complaint against DO, please provide specific allegations and support for those allegations.
This is exactly why i'm ok with paying 10x more for AWS.
You can speak with them in a few minutes, and you have a grace period before service shutdown, unlike OVH, DO and most others.
Wow, this is fantastic news! I think this opens the floodgates for lots of people who currently use Heroku, and would like to use simple VPS's without jumping into the headache that is AWS. Any word on pricing?
Personally, I find AWS's pricing complicated, and there are so many different services that it's confusing. I think it's mostly a UX issue, but it's really overwhelming.
DO, Linode, and Vultr are very simple, so that's what I use. Perhaps if I start needing more from my hosting I'll look at AWS, but it's just not worth my time to figure it out for the scale of projects I'm doing.
I see some DO employees around.. What is the trick for getting access to the new Kubernetes and/or postgres betas? I'd love to use both for my side hustle (Currently on normal DO droplets)
EDIT - I now see the kubernetes option in my DO account. Thanks!
I love that they are adding this. They are becoming a more and more viable solution for real production projects and their UX is top-notch, especially compared to the mess that is AWS and GCP.
Excellent, I've just finished setting up a Postgres instance on a DO Droplet. I hope they still allow SSH into the instance, so that I can load data directly from there.
How is DigitalOcean's resilience to DDoS attacks? After Linode's giant DDoS-related outages a couple of years ago, I moved everything to AWS on the theory that they (along with Google and Azure) would be better at mitigating similar attacks. Would be nice to have DO as an option.
Wait, I thought AWS made you pay if you got hit with a DDoS or even a moderate amount of traffic. There's just no cap on costs if you have any kind of autoscaling enabled. There used to be many posts complaining about surprise charges on AWS.
You basically politely ask AWS if they will refund you when you're DDoSed and hope the attack is apparent enough to pass their threshold.
They were generous enough, but at the expense of uncertainty + you needing to play an active role in getting them to pay your bill. Depending on what sort of service you run, it may be a constant ordeal.
After getting some large CloudFront bills taken care of, I left and wondered what sort of person had the stomach for it.
I'm not worried about my own instance/droplet getting DDoSed, but about DO's network being taken offline for days because of a large-scale DDoS attack directed at another customer, as happened with Linode in 2016.
You can get something with a MongoDB API if you use Microsoft's CosmosDB, although it is very expensive and not actually Mongo underneath. But it has some pretty impressive technical specs too, especially geographically distributed multi-master writes and impressive response times. I don't work for MS and haven't used the Mongo API, just aware of it.
Hi there, this is Shiv - I am the VP of Product at DigitalOcean. Thanks for asking about pricing. We are still working out the pricing details. I can tell you it won't be just the cost of the Droplet because this is a managed service with lots of additional features that you would not get with the current Droplet product.
If you can offer some sort of “hobby” plan with a limited I/O and DB size that would be great for hobby projects like what I build. I’ve used RDS before and it is great, but the cost is a bit eye watering when we are talking about less than one user every 10 minutes.
I’ve found Jaws DB works well for my needs (and pricing), but given they are layered on AWS it does feel like I’m still a bit limited - ie it would be good to still access the DB server so I can spin up multiple DBs (pre-prod, production etc), despite my usage being minimal.
If DO can hit a sub-$10/month price point (even with severely restricted performance) that would be awesome!
This is something that has put me off services like this for personal / open source projects. I'm time and cost limited and usage will be minimal so both the expense of a managed service and the time cost of setting up my own are both unattractive.
Would DO be open to a limited 'hobby tier' for this set at the price of the droplet?
I'll second what others have said. Please consider a hobby plan. I have many proof-of-concept applications that don't require many resources at all but have the potential to suddenly get a lot bigger. Having a single stop shop where both types of projects are reasonably priced would be a godsend.
The #1 thing you can do to make this compete with RDS and Cloud SQL would be to support extensions that they don't: HypoPG (needed by Dexter), pg_partman, etc. Lack of certain extensions is the biggest failing of those offerings IMO.
I believe most of those are offered by Citus Cloud [0]. I know it's not the same, but you can spin up just a single worker and essentially end up with a faster PG than vanilla PG.
It starts at a minimum of 2 workers. And the pricing is steep too. The workers only operate on distributed tables so if you can't shard your data then you won't get any use out of those nodes, and sharding comes with its own problems. Citus isn't recommended unless you really need the size and want to stick with Postgres/OLTP.
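To make the sharding caveat concrete: in a hash-distributed table, each row is routed to a worker by hashing its distribution key, so only queries that filter on that key can be served by a single worker. A minimal sketch in Python (the function and key names here are illustrative, not Citus internals):

```python
# Minimal sketch of hash-based shard routing, roughly how a
# hash-distributed table decides which worker owns a row.
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Route a row to a shard by hashing its distribution key."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

NUM_SHARDS = 4

# A query filtered on the distribution key touches exactly one worker,
# because the same key always hashes to the same shard:
assert shard_for("tenant_42", NUM_SHARDS) == shard_for("tenant_42", NUM_SHARDS)

# Different keys land on different shards, so a query that can't filter
# on the key (or a join on some other column) must fan out to every
# worker -- which is why non-shardable data gets no benefit from them.
shards = {shard_for(f"tenant_{i}", NUM_SHARDS) for i in range(100)}
assert shards.issubset(range(NUM_SHARDS))
```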
This looks fantastic, Heroku definitely could use some competition. I love them but the only other option is using some insane AWS stuff with nonsensical icons and naming. Hope this takes off and adds a nice CI.
Insane? AWS RDS is rather self-contained and simple enough to get started with if you use the defaults. They also offer managed dbs as a simpler option with Lightsail: https://aws.amazon.com/lightsail/
And there are tons of managed database hosting providers outside of Heroku.
We've been slowly moving from AWS to DO over the last 6 months due to cost, performance and ease of use. We haven't had any issues and the support has been far superior as well. We wrote up a benchmarking comparison with some reasoning behind the switch: https://goldfirestudios.com/blog/150/Benchmarking-AWS-Digita....
Very interesting. With this and the new DO Kubernetes offering (if both are reliable), DO could be a compelling option for the baseload of Postgres-based Kubernetes clusters, at circa half the price of Google Cloud.
All the other stuff you need like logs and monitoring can be installed with helm charts.
Any chance someone could fast-track my access to both the k8s and managed DB betas?
I'm leading a team that is _literally_ days away from provisioning a cluster for an existing production SaaS application that's currently on DO VMs. Would prefer to stay with DO..
My email is in profile - happy to answer any questions.
I think that is the final major piece of the puzzle for going all-in on DO. Hopefully it won't be built on top of their current Droplet config but will be much more customized for databases' needs.
On the subject of DO, am I the only one who feels the need for a 1 vCPU to 1GB RAM droplet? Especially given the vCPUs aren't even cores but threads. Now that managed DBs are in place there is even more need for it with frameworks like Rails; my guess is other frameworks would need even less memory per process/thread. I was hoping AMD EPYC would make that happen, but so far nothing has changed at any of the other VM hosting providers either. Maybe EPYC 2.
And hopefully DO will someday provide a CDN, or resell one through a partnership. (Although BunnyCDN seems to be working great for me at the moment.)
I was looking for hosted PostgreSQL and eventually wound up at http://elephantsql.com, with which I've been happy. I don't have a 'production' service in that I'm not relying on a DB to sell a service, like underpinning an app/website. But my DB is mission critical for knowledge management for my business, and I don't trust myself to manage the server properly. Nice to see competition in this space, and DO is a good one to be offering hosted databases - I'm happy with their other services too.
I’ve been hoping for this for a while. I’m eagerly awaiting access to their Kubernetes service, so this is a perfect complement to the upcoming service releases.
The timing is interesting too. Amazon Lightsail (the cheaper AWS alternative) is similarly priced to providers such as DigitalOcean and Linode, and released its managed database offering the other day: https://aws.amazon.com/blogs/aws/new-managed-databases-for-a...
DO definitely has a lot to offer via a few simple cloud building blocks:
1) Good ol' VMs and load balancers - they already do
2) Managed k8s - maybe they already do? I dunno
3) Managed DBs - good to see this.
4) Big fat blob store (like S3) - not sure. A CDN that operates over the blob store would be nice too.
With these pieces, one can develop quite a significantly complex and scalable application without worrying about infra.
AWS, Azure, GCP are wayyy too complicated. There’s definitely a niche for an IAAS Company to only do the few important things and do them better (speed, reliability, price) than the big 3 and steal a meaningful chunk of the market.
Would be great to see a benchmark vs. a database in a regular DigitalOcean droplet, or information about the hardware. This would be useful for projects requiring high performance.
No details on pricing, but I wouldn't be surprised to see it close to matching a droplet, with the managed service deployed over existing infrastructure. There are of course tweaks I've seen for performance on existing clouds (RAID over block storage devices, tweaking swap space/usage, etc.).
Not sure how deep it'll go in the managed usage, but very happy to see it. The one piece I've really felt was deeply missing.
We've been working on this for some time. We announced earlier in the year that we want to offer this type of a managed service and we will start with one engine for now. Disclosure: I lead Products at DigitalOcean.
This is great, but I wish they would give some more insight into the early access system. I've been waiting for Kubernetes access for what seems like an eternity.
Caught this late this morning - any DO employees still here? I'm signed up for the k8s & Postgres betas but haven't had any luck getting k8s opened up. Would LOVE to participate in the managed Postgres beta - definitely trying to move off AWS for this.
This is a huge deal for me personally, and probably the main and only reason why I haven't used Digital Ocean for real yet. Let's see how the pricing goes, since having many small projects each one with very little requirements normally means paying a lot of $.
This is fantastic timing—I didn't feel like dealing with Postgres configuration/management, so I went with CockroachDB instead (which also forced me off a CPU core since I required more RAM).
Here's to hoping that receiving access doesn't take too long ^_^
If anyone from DO is around - Will libprotobuf-c be available on managed postgres? I ask because for the longest time AWS RDS didn't support cutting Mapbox Vector Tiles from PostGIS.
MemSQL is a great product but it's a niche offering and requires proprietary licensing by capacity so it'll probably never be provided as a managed offering by anyone, especially by a smaller player like DO.
DO's block storage is around $10 per 100GB, though it probably won't have as flexible capacity planning for you, so you'll likely want to start around 2TB to account for 2+ years of growth.
note: I'm not affiliated with DO in any way and pricing and capacity are only speculation.
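For a rough sense of what that speculation implies, here's a back-of-the-envelope calculation (the rate is assumed, as noted above, not published DO pricing):

```python
# Rough monthly cost for block storage at an assumed $10 per 100 GB/month,
# billed in 100 GB increments. Pure speculation, not DO's actual pricing.
PRICE_PER_100GB = 10  # USD/month, assumed

def monthly_cost_usd(capacity_gb: int) -> int:
    """Monthly cost if storage is billed in 100 GB increments."""
    return capacity_gb // 100 * PRICE_PER_100GB

# Provisioning ~2 TB up front to cover 2+ years of growth:
print(monthly_cost_usd(2000))  # 200 USD/month
```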
This is a useful thing, as I often did these by hand on DO before, and they are a bit of a pain. But if their pricing for volumes is any indication, these will be pretty expensive. Curious what the price is going to be.