DigitalOcean App Platform (digitalocean.com)
646 points by digianarchist on Oct 6, 2020 | 346 comments



I am so glad to see this. I was looking to deploy an app, and the choice was either Heroku or managing my own server, which I don't want to do.

Heroku gives instant deployment for the most common types of apps (Python/Java/Ruby). It's PaaS done right; it's fantastic. You should really have a look if you're not aware of it; it's only $7/month for a starter app.

The problem is, scaling up costs about $50 per gigabyte of memory, which makes it a dead end for anything non-trivial. You're forced to go to DigitalOcean / Linode / OVH instead to get something affordable.

That leaves DigitalOcean as the only alternative (don't trust Linode), and it sucks because it only gives me a server to manage. I don't want to manage a server; I want to run a (Python) application. It's 2020; this sort of thing should auto-deploy from GitHub without bothering me to manage an operating system.


Why not take the initial complexity cost and learn k8s and containerization? That's what I've been doing as a step-up from Heroku and have been very happy with it.

My project currently runs on DigitalOcean managed k8s and setting it up really wasn't hard. I had everything already in containers for dev/prod anyway, and having those run on k8s just meant I had to write the deployment manifests that pull the containers and set up the pods.
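
Such a manifest can stay small for a stateless web app. A minimal sketch, assuming a hypothetical image name and port:

    cat > web-deployment.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            # the same container image already used for dev/prod
            image: registry.example.com/web:1.0
            ports:
            - containerPort: 8080
    EOF
    kubectl apply -f web-deployment.yaml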

What I love about managed k8s (and have also shared a couple of times in comments on HN) is that it's separated from the servers below. I can have 20 containers (that can be separate things altogether) running on the cheapest Droplet and would only pay whatever that Droplet costs, so under $20. Then when I need more power, I just scale the Droplets used for the k8s cluster and my pods/containers get shoveled around the available resources automatically.

I liked this approach so much that I now have a private 'personal projects cluster' that runs on DigitalOcean with the cheapest/weakest droplet available, and whenever I have a small hobby project that needs to be hosted somewhere, I just add that container to the k8s cluster and am done with it.


I’m waiting for DigitalOcean to have something like Google Cloud Run.

Google Cloud Run is essentially: here's a Docker image that listens on the $PORT env variable. Spin it up when you get requests. It will handle X queries per second (you can set the limit). If more than X, scale it up to this many replicas.

I pay about 10 cents for my site. Zero maintenance. I push code to GitHub, GitHub builds an image, pushes it to GCR, and tells Cloud Run to use the new image.

This is how things ought to work for simple web-server-like functionality. “Here’s a Dockerfile and source tree; build it, run it, and auto-scale it with this HTTPS domain.” Boom!
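
As a rough sketch of that workflow with the gcloud CLI (project, service, and region names are placeholders):

    # build the Dockerfile remotely and push the image
    gcloud builds submit --tag gcr.io/PROJECT_ID/mysite
    # deploy it; Cloud Run wires up the HTTPS endpoint and autoscaling
    gcloud run deploy mysite \
      --image gcr.io/PROJECT_ID/mysite \
      --platform managed \
      --region us-central1 \
      --allow-unauthenticated \
      --max-instances 10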


What sort of cold start time do you get with that out of interest?


Even with no cron, it was <500ms cold start. With a cron that hit it every 5 mins, I saw less than 100ms to hit US central and back (from Seattle).

I pay 10 cents a month to Google. They have no shame charging me 3 cents on my credit card.


Not sure about exact numbers on the cold start but we avoid it altogether with a keep-warm request every minute.
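
For reference, the keep-warm can be a one-line cron entry (URL is a placeholder):

    # ping the service every minute so an instance stays resident
    * * * * * curl -fsS --max-time 10 https://myservice.example.com/ > /dev/null 2>&1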


Makes sense, and the 10 cents in your above comment is literal with a keep-warm every minute? That's pretty solid!


> Why not take the initial complexity cost and learn k8s and containerization?

I would argue that the complexity cost is ongoing, not just front-loaded. There is overhead for every new application: for instance, putting it in a Docker image, deploying it using the GitOps flavour of the month, and then any extra policy management and routing.


> Why not take the initial complexity cost and learn k8s and containerization?

Security, OS patches, maintenance and, more than anything, DDoS attacks. I don't want to handle all that; I just want to concentrate on development, not maintenance.


Managed k8s offerings usually take care of everything below the k8s API. Our GKEs auto-upgrade their control plane and the worker nodes, both OS and k8s versions. I could force my way onto the workers via SSH if I really wanted to, but by default I can't even get on those machines. All you ever do yourself is kubectl this, kubectl that. I believe DO's k8s offering is like that as well.


You're still self-hosting, just on top of k8s rather than on top of VMs. All the ingress is your problem, logging, monitoring...

It's only managed in the same way that AWS Elastic Beanstalk is managed.


This is always great until there is a bug somewhere in the infra layer. Then having k8s is no longer such a good idea.


A non tech person could say the same about having an "app" at all.


True, but at what layer is “tech” useful to you? I often solve problems with only Google Sheets. Why bother with an app if you don't need it? Same for other layers of “infra”.


This is spot on, except for one thing: Google Cloud Run.

It's the closest offering I've found to Heroku, and I'm planning to migrate all our services to it due to significantly better pricing. Make sure you look into it.


Google Cloud Run doesn't provide nearly the same features as Heroku. For example, there is no easy way to manage secrets with Cloud Run. There is no way to run a worker process. Integration with other Google Cloud things like Cloud SQL is clunky. Cloud Run is okay to get started but almost all apps will need more.


> there is no easy way to manage secrets with Cloud Run

See https://cloud.google.com/secret-manager/
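
The basic workflow is a couple of gcloud commands; a sketch with a placeholder secret name:

    # store a secret, then read back its latest version
    printf 's3cr3t' | gcloud secrets create db-password --data-file=-
    gcloud secrets versions access latest --secret=db-password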

I recommend this community-maintained FAQ on Cloud Run:

https://github.com/ahmetb/cloud-run-faq


We've been successfully using Cloud Tasks and firing payloads right back at our HTTP cloud run service to solve for the lack of background workers. We had to build a mini-framework around it, but it works surprisingly well.

https://cloud.google.com/tasks
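
A sketch of enqueuing such a payload from the command line (queue name, URL, and body are placeholders; flag names are as I recall them, and the same thing can be done via the client libraries):

    gcloud tasks create-http-task \
      --queue=background-jobs \
      --url=https://myapp-abc123.a.run.app/tasks/resize \
      --method=POST \
      --body-content='{"image_id": 42}'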


It's built around everything being an HTTP request, and billing for the continuous seconds of time the app needs to run to service those calls.

You can run worker code by moving it into individual HTTP calls, triggered by whatever workflow you want: Cloud Tasks, PubSub, Cron, etc.


How would you like to see the integration with Cloud SQL improved?


The Google part of the name means that the service could be shut down at a moment's notice if it isn't providing millions of dollars in revenue, and even then it's a toss-up whether the people in charge get bored of it.


Google is unique in that it somehow lets employees start random projects under the Google brand. If they had a separate brand for these experimental projects, they would have avoided a lot of headaches.

Google Cloud isn't someone's side project, though, so the risk is much lower. You're right that services can be shut down at a moment's notice, but that applies to any service of any kind. Unless you host with multiple cloud providers at the same time, you cannot avoid that risk.


That's one of the things I like about Cloud Run. There's no vendor lock-in, really. Your app is just a Docker image running any normal 12-factor app, which could be deployed to Heroku, Cloud Run, DO, or any other PaaS.


What Google Cloud product has been shut down prematurely? You still mad about the Google RSS reader?


There was a thread the other day that mentioned a variety of products that were killed by Google, which were aggregated here: https://killedbygoogle.com/

There was also a similar one for Mozilla: https://killedbymozilla.com/

Perhaps they were being wary due to the practices the company as a whole has displayed in the past with regard to killing off products.

Not sure about Google Cloud in particular, though.


Google seemingly kills more products than they run and I wouldn't rely on them for any aspect of my business, based on this, and their historically awful CS.

The petty snipe is not necessary or appreciated.



That was never a part of Google Cloud. It was a feature of Chrome browser/OS and allowed for sharing printers over the internet because of the limitations back in 2010.

Google has a terrible reputation with consumer services but has been pretty decent with Google Cloud so far. The biggest negative example would be the Google Maps pricing changes, but that seems to be a different class of issue.


Recently discovered Cloud Run and it's absolutely amazing. I will be using it for as much as possible going forward. Cheap, easy, scalable. For a CRUD app it does everything you could ever want.


Agreed; we reduced our cost from close to $50/month to a few dollars per month by simply switching from Google App Engine to Cloud Run. The initial reason for switching was that I wanted a dockerized solution so that we are not locked in to arbitrary versions of whatever App Engine supports. It does that. But the cost advantage makes it a pretty sweet deal.

Cloud Run is very easy to get started with. It's basically a service that runs and scales a Docker-based application. They bill per request based on some notional cost of CPU and memory. They give you some options for using bigger instances, which influences the per-request cost. You can go from 256MB to 2GB of memory, and I think you can have up to 2 or 4 vCPUs (one of those; we use 1). You can specify a minimum (default 0) and a maximum (default 1000) number of instances. If a request comes in and nothing is running, it starts an instance. After idling a while it disappears again. So, if you get a low amount of traffic, mostly there will be something running without costing too much. At some point you raise the minimum to 1 instance and it starts costing a bit more.
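
Those knobs map to deploy-time flags; a sketch with placeholder names (--min-instances was still rolling out around the time of this thread):

    gcloud run deploy myservice \
      --image gcr.io/PROJECT_ID/myservice \
      --memory 512Mi \
      --cpu 1 \
      --min-instances 0 \
      --max-instances 1000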

Crucially, there is a free tier: you don't get charged unless you hit a certain number of requests. So, when prototyping, this is very cheap. Basically we've managed to keep our cost to around a few dollars over the last few months.

As this is part of Google Cloud, you can transition to other solutions (like their hosted Kubernetes) eventually. You also have access to other stuff. E.g. we use Firestore as a cheap (but limited) database, the Google secret store for secrets, and some storage buckets.

When you click together a Cloud Run deployment, it can create a Cloud Build for you that points to your Git repository and sets up automated deployments. If you have a Dockerfile, it will try to build and deploy that. If you need to customize what the build does, you can provide a custom cloudbuild yaml file (similar to GitHub Actions, Travis CI, and other popular CI options).
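
The custom build file follows the standard Cloud Build format; a rough sketch using the stock builders (service and region are placeholders):

    cat > cloudbuild.yaml <<'EOF'
    steps:
    # build and push the image, then deploy it to Cloud Run
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/myservice', '.']
    - name: 'gcr.io/cloud-builders/docker'
      args: ['push', 'gcr.io/$PROJECT_ID/myservice']
    - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
      entrypoint: gcloud
      args: ['run', 'deploy', 'myservice',
             '--image', 'gcr.io/$PROJECT_ID/myservice',
             '--region', 'us-central1', '--platform', 'managed']
    EOF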

After it comes up, you get a nice https endpoint. You can actually add password protection to that as well if you need it. And you can optionally use your own domain.

So, we are running a Spring Boot project for next to nothing. When the time comes, we'll likely swap out Firestore for a proper database, as it may get expensive: they bill per read and write, and certain operations just use up a lot of reads (e.g. a count operation). It's fine as a dumb key-value store, but it is very limited in terms of other features (e.g. querying).

However, Google's hosted databases are expensive and don't really make sense until you are ready to spend a few hundred dollars per month. Same with Kubernetes. Kubernetes does not make sense until you are ready to commit to a burn rate of hundreds of dollars per month. And it's completely overkill unless you actually are deploying multiple things.


I've been using Google App Engine as the backend of my scratch-my-own-itch Android app. Granted, it only has 60+ active users (according to Google Play Console), but my monthly bill for that has always been <$1 (the parts that aren't free are the secret manager and the storage for my deploys).

You can even run multiple App Engine apps mostly for free, because their free tier is calculated based on actual instance-hours running, and with App Engine you can configure it so that when you don't get any traffic, there's no instance running (they spin up a new one when a request comes in).
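
In App Engine terms that's just a couple of lines of app.yaml; a sketch (runtime and instance counts are placeholders):

    cat > app.yaml <<'EOF'
    runtime: python38
    automatic_scaling:
      min_instances: 0   # no instance while idle; one spins up on the next request
      max_instances: 1
    EOF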


There are more options than Heroku:

- Google App Engine (PaaS)

- Google Cloud Run (serverless containers)

- Fly.io (serverless containers with a minimum of 1 container running per configured region)


Fly.io is more exciting for me because it's edge app servers and it terminates WebSockets. I've been envisioning a LiveView-like framework, or heck, even a regular old REST API, where the client connects to the fly.io server with a single request, then the app server makes requests to N back-end processing servers using an efficient protocol like gRPC.


The only missing piece for fly.io is the data layer. The provided Redis is only for caching purposes. Once there is a managed distributed data store add-on, it's done. It's very likely LiveView will need to hit a database. Right now PubSub with the Redis adapter is the easiest solution for a cross-region Phoenix application (well, Phoenix IS my application!)


It is cool, but it does not scale to zero, so if you have N services you will end up paying for at least N instances.


That's a good thing actually since it means you never have cold starts (unlike Google Cloud Run). Their micro instances cost like $3-4 which isn't that much.


It should be an option in my opinion, as cold start time can vary a lot depending on which image and runtime you are running, whether it's a Java app or a statically linked Rust program. I have a bunch of very small and simple services, which I run on a single VPS. Moving to fly.io would mean paying several times more. I tried it, and the service is nice indeed, but not for me I guess.


FortRabbit.com, Platform.sh

There is probably a good hundred of them at this point.



Add to that Cloud 66 as well


It's probably worth looking into the big cloud providers rather than the little guys. In Azure you can have an App Service (a deployed app in any one of loads of languages, without looking after the machine it sits on) with 1.75GB RAM for about $12 a month. Obviously your usage may vary and that will affect the price. But I get the feeling that the big players are cheaper than people think they are for small projects.


The big players have separate charges for bandwidth and disk and other hidden stuff. They are way more expensive than DigitalOcean / OVH all-inclusive. Worse, the cost is unpredictable, which makes them a no-go for a side project; I can't risk accidentally getting a $1000 bill.

As a real-world example, I run a personal blog. If it were running on S3, my personal finances would have been obliterated when it got featured on HN and served 1+ TB of traffic.


> If it were running on S3, my personal finance would have been obliterated when it got featured on HN and served 1+ TB of traffic.

Maybe I'm reading this wrong, but it looks like 1 TB of outgoing traffic would be ~$90

https://aws.amazon.com/s3/pricing/

I've had things hit the HN front page a few times while just hosting on ec2 and never had a noticeable increase in charges. Then again, I wasn't hosting very large files.


If you do host large-ish files, even a couple of MB JPGs, your personal finances wouldn't be happy.

I've had some images hit reddit (hotlinked) and exceeded 10-15TB per image, and that cost under $10 at other (non-AWS/Azure/*) places.


I had a similar experience, and using an UpCloud VPS has been able to keep my cost down (less than $5/mo, or a cup of coffee).


Can HN really deliver enough traffic to a static site to cost a significant amount? I've had mildly popular posts on HN for my Netlify blog (John Carmack tweeted about it!) and not had to pay for bandwidth.


No. I don't think so.

The concern for me is the lack of a hard limit on spending on GCP, Azure, and AWS. If I screw up and allocate a bunch of resources unintentionally, I'm left holding the bill. That's a terrible setup for PaaS, because all programming involves mistakes eventually, especially for new users learning the system.

Granted, there are likely limits on accounts, but those are to protect the services from fraud, not to protect the user from overspending. The limits aren't well defined, and they're not something you can rely on, because MS might consider $10k/month a small account while it's a ton of money for me.

Azure customers have been asking for hard limits on spending for 8 [1] years with radio silence for the last 5.

There's a difference in goals, I guess. If I spend more than expected I WANT things to break. Microsoft, Google, and Amazon want me to spend unlimited amounts of money, even if I don't have it. At least AWS can be set up using a prepaid credit card, so if I screw up they have to call me to collect their money and I negotiate.

1. https://feedback.azure.com/forums/170030-signup-and-billing/...


It's a difference in goals.

- Hobby kid doesn't want to overpay, shut everything down

- Business absolutely doesn't care about spend, if they get some kind of marketing result traffic spike they just want the site to stay up even if it blows the average budget

Guess which one they optimise for?


Very large businesses might not care about spend, but pretty much everyone else does.

Almost everyone will be unhappy if they're stuck with a six figure bill for non-converting visits because their site went viral. Everyone will be unhappy if they're stuck with a six figure bill because their site was used in a DDoS reflection attack, or got pwned and used in a DDoS attack directly.

Everything I run on nickel-and-dime-to-death cloud services, such as AWS, won't even respond to unauthenticated requests (Nginx return 444, or reachable only via Wireguard), precisely to mitigate this risk. To do anything else is just financially irresponsible.
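
The Nginx part of that is a tiny catch-all server block, roughly:

    cat > /etc/nginx/conf.d/drop-unauthenticated.conf <<'EOF'
    server {
        listen 80 default_server;
        server_name _;
        return 444;   # close the connection without sending a response
    }
    EOF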

I've even considered coding a kill switch that will shut down AWS instances if they exceed billing limits, but the fact that AWS charges a fee to check your spend via an API makes this awkward and speaks volumes about Amazon's motivations.
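
Such a kill switch is only a few lines with the AWS CLI; a sketch with a placeholder threshold and instance ID (the Cost Explorer call here is the billed spend check mentioned above):

    # month-to-date spend in dollars
    SPEND=$(aws ce get-cost-and-usage \
      --time-period Start=2020-10-01,End=2020-11-01 \
      --granularity MONTHLY \
      --metrics UnblendedCost \
      --query 'ResultsByTime[0].Total.UnblendedCost.Amount' \
      --output text)
    # stop the instances once spend crosses the limit
    if [ "$(echo "$SPEND > 50" | bc)" -eq 1 ]; then
      aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    fi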

Amazon's refusal to offer spending caps on AWS benefits Amazon and only Amazon.


They have free anomaly detection on spending now (not sure how useful yet).


>"Business absolutely doesn't care about spend, if they get some kind of marketing result traffic spike they just want the site to stay up even if it blows the average budget"

While this statement can be true in some cases, I vividly remember the bosses of a largish (budget-wise) company running around like headless chickens, yelling to kill every running instance of a service, just because they were hit by way more "success" than they'd planned for.


Hard spend limits are not an easy problem with cloud. There are too many things that incur costs. Every time this comes up, I ask the same question: what do you expect to happen when the quota is hit?

Shutdown your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not then they're just subsidizing the costs. If it's soft-limit then its just a warning, and if you just want a warning then billing alarms already exist in every cloud.

Also for most customers, the data and service is far more important than the cost. Bills can be negotiated or forgiven afterwards. Lost data and customers can't.


>Shutdown your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not then they're just subsidizing the costs. If it's soft-limit then its just a warning, and if you just want a warning then billing alarms already exist in every cloud.

You know, when I hit the storage limit of my SSD it doesn't wipe my data. It just ceases to store more data. When I rent a server for a fixed price and my service is under a DDoS attack, it will simply cease to work for the duration of the attack. If there is a variable service like Lambda that charges per execution, then Lambda can simply cease to run my jobs.

You can neatly separate time-based and usage-based charges and set a limit for them separately. It doesn't even need to be a monetary limit; it could be a resource-based limit. Every service would be limited to 0GB storage, 0GB RAM, 0 nodes, 0 queries, 0 API calls by default, and you set the limit to whatever you want. AWS or Google Cloud could then calculate the maximum possible bill for the limits you have chosen. People can then set their limits so that a surprise bill won't be significantly above their usual bill.

Your comment is lazy and not very creative. You're just throwing your hands up and pretending there is no other way even though cloud providers have created this situation for their own benefit.


The vast majority of overages are due to user error. These errors would just be shifted to include quota mistakes, which can incur data or service loss. Usage limits might be softer than monetary limits which are bounded by the time dimension, but can still cause problems since they do not discriminate between good vs bad traffic.

Before you go around calling people lazy, I suggest you put more thought into why creating more options for people who are overwhelmed by options is generally not productive and can cause unintended consequences and expose liability. With some more thought, you'll also realize that AWS is optimized for businesses and, as stated, losing customers or data is much worse than paying a higher bill, which can always be negotiated after the fact.


I want all services to be rate limited. What I don't want is for some runaway process (whatever the cause) to bankrupt me before I can respond to any alerts (i.e within hours).

In other words, I don't necessarily need to set a hard spending limit, but I want to set a hard spending growth limit (allowing for short bursts), either directly in monetary terms or indirectly through rate limits on individual services.


I avoid those for the same reason. I don't mind paying a few dollars for side projects. But not an unlimited bill.


> Shutdown your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent?

I'd be absolutely fine with that in a sub-account or resource group as long as I had to enable it.

A while back I wanted to try out an Azure Resource Manager template as part of learning something. Since I was _learning_ it, I wasn't 100% positive what it was going to do, but I knew that it should cost about $1 to deploy it.

With a hard limit on spending I would have set it to $10, run the thing and been ok with the account being wiped if I hit $10. Even $100 I could tolerate. Unlimited $$ was too risky for me, so I chickened out.

The worst part is I can't even delete my CC because it's tied to an expired trial that I can't update billing for.

> Also for most customers, the data and service is far more important than the cost.

So don't enable the hard limit.


Since the vast majority of errors are user error, this is just another potential disaster waiting to happen.


Yes it can, if you consider hundreds of dollars a significant amount. I do.

A good article is around 50k visits. The most I've done was 300k over a few days of going viral on HN/reddit/twitter/other. I published some stats there https://thehftguy.com/2017/09/26/hitting-hacker-news-front-p...


I can't agree more with that.

I run a small side business, and these unlimited cloud plans are just a no-go. A medium to large company could totally absorb a 5-figure bill, but that would be a death sentence for my side project. Also, considering the variable bandwidth costs of AWS, Azure, or Cloudflare, a competitor could simply rent an OVH server and inflict insane costs on my business while only spending 1/10 of the money.

Right now, I'm using Heroku (with a limited number of dynos and a single PgSQL database) together with BunnyCDN (which allow me to pay for prepaid usage). If I ever get DDoS'ed, my app will most probably be inaccessible or at least significantly slower, while I'll receive an email alert, from which I can decide myself to allocate more resources.


No. I once had a site hit #1 on HN. It was hosted on a DreamHost shared VPS with WordPress. It barely broke a sweat. I have no idea what these guys are doing who are having their sites bulldozed by HN traffic, but it's worryingly common for something that should never happen.


This has always confused me. What is going on when someone's site is taken down by HN traffic? (Maybe the fact it's on HN when this occurs is just coincidence: maybe the real traffic loads are always from reddit or twitter or something in these cases?)

(My experience with high-ranking HN posts: initially with DreamHost, later with cheapest AWS ec2—never a noticeable impact with either)


Among articles you see on the front page, there is a two-orders-of-magnitude difference in visits between the more popular and the less popular.

HN/reddit/twitter/Android can all send a similar amount of traffic. There's one order of magnitude there: how many places is an article featured at the same time?

Then there's an order of magnitude within each place: how much interest and readership can the article gather? Highly variable. The first comment alone can make or break an article.


This sounds off. Both reddit and twitter have the potential for vastly more traffic than HN.

I also haven’t had the number one spot on HN (except maybe briefly), but was in 2 and 3 for long stretches and even an order of magnitude more traffic wouldn’t have been a problem.

Two orders probably would have been, but I have a hard time imagining a 100x traffic difference between the #1 spot and the #2 spot. Then again, if it was a very slow day here vs a very busy day, maybe (though in my case it wasn't a very slow day).


I assume you're targeting /r/programming and similar subs; they're similar to HN in aggregate. You're right that Reddit and Twitter have a way bigger audience in total, but only a fraction of all Reddit users is relevant. Assume we're talking about a tech blog, not articles on the election or Brexit.

It's not about rank. It's about the specifics of the article, mainly the title and the content. It simply attracts more or less readership.


I've had #1 multiple times. I've had articles that stayed on the front page for multiple days.

Wouldn't be surprised if I'm top 1% of personal bloggers on HN or something like that. I'd have been shelling out thousands of dollars to AWS over the years if I were using anything AWS, or more likely I'd be either broke or the blog would have crumbled under the traffic each time, never going viral.


I don’t usually do this, but I decided to check your post history. I don’t know if anyone else posted your blog posts to HN, but assuming it’s just you, I counted five posts (excluding the flagged ones) that would have made it to the front page of HN for any meaningful amount of time. Based on this, I would say that you are unlikely to be HN’s top blogger.

And I don’t know how you’d set up your blog with AWS but I don’t see how it could be expensive to host static content there.


Wrong assumption, a fair bunch of the posts came from other people :p

I honestly wonder what the average distribution for HN contributors is. I imagine it's not much for personal blogs. Not trying to compare myself to the New York Times or the Cloudflare blog, obviously.


Heh. Checked by the domain instead and got 10 submissions with double digit or more vote counts. I still think pg and jacquesm have you beat by quite a bit but yes you have 2x the front page posts I initially spotted.


That's simply not true. I hit #1 a few times with content hosted on S3. Ended up paying maybe an extra $2 those months. I'd be worried if I hosted any large files that came with it, but just a blog post? Barely noticeable.


I have images in almost every post. Diagrams, schemas, stock images, GIFs, anything. I guess you don't?

The difference between $2 and $20 will strike when you start having pictures, and $200 the day you (accidentally) include a large image or GIF.


You can lose money accidentally in many ways. I agree you have to watch out, but still disagree with the number of people dismissing S3 as a quick way to bankruptcy if you get HN #1.


Cloudflare/CloudFront, restrict origin IP. Done.


This could be an issue (from your site link in a previous comment):

With an ad blocker, this page takes 83 requests to load a total of 2.31 MB.


It's because of images and comments, both of which are critical to a blog. There's one request per image no matter what.

WordPress is actually really good at that; it resizes the images automatically for thumbnails and mobile, with caching.


I’ve had a couple of front page posts with good discussion. Generated about 35,000 unique visits over 24 hours.


I'm currently on AWS for my site and in the process of researching alternatives. I share your concern of something going wrong and being stuck with a huge bill. Someone pointed out that 1TB of outgoing traffic from Amazon EC2 would cost $90. I'm fortunate enough that that won't obliterate me, but I won't be happy if that happens. I'd rather my blog get hugged to death. Going viral isn't worth $90 to me.

But I don't think DO really solves this problem either. They say they have spending caps in some of their marketing materials, but the finer print says that overage billing is $0.01/GB. Now that's a whole lot better than Amazon's $0.09/GB, but it's not a cap.

DO can say they have "predictable pricing" because in the vast majority of the cases the "free allotment" that comes with your droplet is enough, so you never see a bandwidth charge, you pay the cost of your droplet and you're done. So yes, it's more predictable because Amazon would charge you $5.23 one month, $4.87 another month, and DO charges you $5 every month.

But I'm not worried about the 99% case, I'm worried about the extreme scenario where I somehow go viral or get DOSed. And both options leave me exposed.

That's not to say DO isn't a better deal for the hobbyist than AWS. The equivalent of DO's $5 droplet will run you much more on AWS, especially if you actually use the bandwidth they're allotting you. And the big 3 do a lot of nickel-and-diming, which is a nuisance compared to the simpler pricing model of the smaller providers.


You should be able to get the cost down significantly by caching on Cloudflare. My company managed to deliver 99.9%+ of static pages from Cloudflare, which allowed us to serve large amounts of traffic from a small backend.


Would cost ~$80 with CloudFront.


Anecdotally: when I set up my account on Azure, there was a bug in the web client that set my region to Canada. So I opened a support ticket, and they said they couldn't change the region on my account and the workaround was to open another account with a different email =/.


This is common to many companies. There are thousands of regulatory, taxation, and licensing things that depend on the customer's region, and it simply isn't practical to support a user journey that starts under one set of laws and then changes to another.

Companies that allow it almost certainly are not meeting all the relevant laws for those customers that do change region.


The DigitalOcean volume pricing of $1/10GB per month seems very steep... I can literally buy fresh SSDs every month for that money. The container pricing is reasonable, though.


You're reading it wrong.

For $5/month you can get 250GB of storage with 1TB outgoing.

Additional is $1/100GB and $10/TB


I must be missing something: https://i.imgur.com/3PNqSt5.png

Edit: You're talking about spaces[1], I'm talking about volumes[2].

[1] https://www.digitalocean.com/products/spaces/

[2] https://www.digitalocean.com/products/block-storage/


On DigitalOcean, I believe you get a volume for free with your $5/month instance/droplet (it's not added separately, which differs from most other cloud providers).

The screenshot you shared is attempting to add additional volumes to a droplet. See the pricing for droplets; it includes 25GB of SSD and 1TB of transfer.

https://www.digitalocean.com/pricing/#basic-droplets


This is approximately the going rate for all the major cloud providers (links [0]). Sure, you could buy your own SSDs, but how are you going to connect them to the VMs? I suppose this might be where their profit is, especially because these are logical volumes anyway. But it's not like you can just go out and beat this price at home with minimal effort.

[0] Amazon ($0.1/GB): https://aws.amazon.com/ebs/pricing/

Azure ($0.075/GB): https://azure.microsoft.com/en-us/pricing/details/managed-di...

Google ($0.17/GB): https://cloud.google.com/persistent-disk/#section-7

edit: formatting


> But it's not like you can just go out and beat this price at home with minimal effort.

That's exactly what I did. Taking the cost of my time for setting everything up and maintaining it, I estimate the net cost is about 1/10th of what it would be in the cloud.


There is also Cloud 66, which works with not only DigitalOcean but many other cloud providers; like a Heroku on top of any cloud.


I use cloud66, it works great!


Cloud66 is really a great service! I have been using it for almost a year to deploy a Rails stack and it works well.


This is why if you're in the Rails world, I'll always recommend Hatchbox [0]. It takes the PaaS layer from Heroku and applies it to generic nodes on DO or AWS - I'm grandfathered into a really good plan price, but even as it stands today, if you're building Rails apps, it's a great option.

[0] https://www.hatchbox.io/


What problem did you have with Linode? I have been using them for a couple of months and the experience so far has been great.


They have a history, going back many years, of not being fully truthful with customers about hacking incidents. Do a Google search.

relevant: https://news.ycombinator.com/item?id=5667027


Why don't you trust Linode?


You can search HN for top articles mentioning Linode. The comments speak for themselves. Basically, DigitalOcean is better in every aspect you can think of.


Strongly disagree. I've been a customer for years, since before DO was launched. I've always benchmarked the same apps under the same load on both of them over the years, and Linode has beaten DO every single time. Never had any issues with them.


Guessing due to multiple security incidents


Not only that, but outright lying about the breaches. When I used them in early 2010s, they managed to expose two different virtual CC numbers (which I _only_ used for Linode) to fraudulent charges. But both times they insisted I was not part of the breach they were suffering at the time ...


Oh wow I think I just figured out how a VERY limited-use card number got stolen around that same time.


> Problem is, scaling up is about $50 per gigabyte of memory which makes it a dead end for anything non trivial.

That isn't exactly true, for a few reasons.

First is, the top tier public sticker price is roughly $35/GB.

Second is, at higher scales, you'll sign a contract with them that discounts your rates further.

Third is, this is presuming you're paying $ for memory alone. While that might be relevant for individual apps which need that specifically, on the whole you're paying for the ecosystem, the standardization, the PaaS. You're trading money for your time back. The product you're buying is not simply GB.


> Third is, this is presuming you're paying $ for memory alone. While that might be relevant for individual apps which need that specifically, on the whole you're paying for the ecosystem, the standardization, the PaaS. You're trading money for your time back. The product you're buying is not simply GB.

Except when the only thing you need over the $7 hobby instance is more memory.


Not exactly cheap, but run.pivotal.io (Cloud Foundry) and OpenShift Online are both around $25/GB per month, which is a little more accessible. I'm not sure about Pivotal's online platform, but PCF has some pretty simple autoscaling plugins that could spin down instances during low usage.

A lot of Fortune 500 companies have Cloud Foundry setups, and it's built on some of the same tech as Heroku, so it's fairly accessible.


Unfortunately, Pivotal Web Services will be closing down: https://blog.run.pivotal.io/pivotal-web-services-end-of-avai...

Disclosure: I work for VMware via the Pivotal acquisition.


What about just writing a Go one-static-binary app so you don't need a deployment platform at all? Rent some cheap VPS and put an nginx in front for load balancing. To update it, just stop the binary, copy over a newer one, and restart; that's it.

The ways to deploy Java/Python/Ruby/Node.js are complex on their own; I feel Go fixes that part through the design of the language itself.


Don't forget Google App Engine! Though, it's even more expensive than Heroku.


Has GAE sorted out secrets management? Last I checked, they required you to commit secrets to the repo you push, which necessitates your secrets being on whatever computer (or whoever's computer) does production deploys. Contrast this with DO/Heroku/etc. which lets you set environment variables.

Some folks suggest using a DB to store secrets on GAE, but this is (IMO) just obfuscation.


Yes, they added Secret Manager early this year: https://cloud.google.com/secret-manager


And before this you could implement Cloud KMS in your app to decrypt the encrypted secrets you can store in your repo.


This still seems ridiculous. Why did I need to keep secrets in my repo to begin with? GAE, as far as I can tell, has been the only major PaaS that hasn't offered a solution for this. It's so easy to get wrong...it contradicts one of the biggest rules of version control: keep your secrets out of your repo.


There are a million ways to do it that don't require Google? Your CI system builds the production image, it can get secrets from anywhere.


My CI system arguably shouldn't have access to production secrets any more than my developers' macbooks.


GAE has a free tier for their smallest instance.


How does Heroku and these app hosting services handle updates to the underlying OS and other dependencies?

Do they only patch non-breaking updates? Or are you on their schedule to ensure your app will run on the latest version?


Who cares? The point is that it's their problem, you don't need to think about it, it's done for you. =)

In practice your application restarts at least once a week. It's transparent because a new instance is started first and takes over. The provider can move applications around to add/drain servers and perform maintenance.


AFAIK a dyno is alive for 24 hours or less (+-1h), not a week.


You are indeed "on their schedule" -- see eg https://devcenter.heroku.com/changelog-items/1603. But that's the trade-off you deal with, so that you don't have to manage all this stuff yourself.


You might take a look at PythonAnywhere? https://www.pythonanywhere.com/ I hear good things about it.


serverless.com is perfect for this use case. Easy to set up, but it can scale infinitely in the future. Here is an example: https://github.com/mikestaub/slack-lunch-club


You shouldn’t trust Digital Ocean, either. They’ve a history of being shady liars about platform security.

https://github.com/fog/fog/issues/2525

https://news.ycombinator.com/item?id=6983097


I read that thread, it's quite concerning. Now I'm wondering how to determine if other providers are doing the same thing. I suppose I can test on a new instance, but that can only tell me definitively if they are doing it, not if they're not.


The thread is 7 years old, and DO hired competent people along the way.

The issue is not that it happened, or that they had clueless staff; the issue is that their board and senior management thought that the best way to respond to their own error was to blatantly lie in their blog that there was no problem at all.

It seems to have worked; they're massively valuable now.


Er... no. Look up Dokku.


I was really excited about this when I got the email a few weeks ago with a beta invite but after two minutes I realised I’d be unable to use it, and still can’t now it’s launched...

Why oh why is GitHub now considered the single place where code lives? Even as alternative providers gain in popularity.

It’s really sad to see DigitalOcean requiring this for some reason.

Please either allow use of arbitrary Git repositories (e.g. self-hosted GitLab in my use case) or provide a stand-alone CLI tool to enable deployment of anything, like Firebase does.

Disappointing so far.


You can use the doctl CLI (https://www.digitalocean.com/docs/app-platform/references/co...) along with an app specification yaml to deploy from your self-hosted GitLab.

The App Specification supports a normal Git repository. However, I can see how this might be a problem if your repository is private as there are no configuration options for auth.
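
A sketch of such a spec for a public repo, with placeholder names (field names follow the App Platform app spec; check the reference above for the exact schema):

    cat > app.yaml <<'EOF'
    name: my-app
    services:
    - name: web
      git:
        repo_clone_url: https://gitlab.example.com/me/my-app.git
        branch: main
      environment_slug: python
      run_command: gunicorn app:app
      http_port: 8080
    EOF
    doctl apps create --spec app.yaml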


From article:

> You can deploy the source code directly from your GitHub repositories (support for GitLab and Bitbucket is coming soon).

Naming specific git hosts suggests it may not be available on "arbitrary Git repositiories" in the near future, but looks like they are at least planning to broaden the options for where to have your code live


Such a weird approach that seems repeated by many. Instead of building something general that works for most cases and is easier to implement, they focus on specialized cases of specific SCM platforms.

If they had started with supporting Git remotes and then added specific platforms, they would have covered all use cases from the get-go, albeit with slightly poorer UX.

But instead they chose to only support specific platforms, missing plenty of people who actually want to use this, but can't.


Git remotes are certainly not "easier to implement". You need to have SSH infrastructure set up to receive the repository, deal with key management, provide the user some way to input SSH keys, etc. Contrast that with GitHub webhooks + API and I can see why they didn't bother.

I bet GitHub covers the vast majority of their target users.


Git supports a number of transfer protocols[0]. There's no requirement to support SSH in order to implement pulling from generic repos.

[0]: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protoco...


You really don't need all of that just to support pulling a Git repository, private or otherwise. Adding an async nature to a technical problem (webhooks + API, as you suggest) immediately makes an otherwise sync problem way harder and more error-prone.

> I bet GitHub covers the vast majority of their target users.

Sure, but if they added Git, they would reach everyone using GitHub + everyone else who is using Git.

With powerful primitives you can build powerful abstractions; Git is one example of this. But if you instead focus on a single use case (using GitHub), adding the rest becomes a lot harder than if you had started with the basics. Something 90% of product people in SF seem to have forgotten lately (or never learned?)


Yeah, but that’s how the 80% rule of “good enough” entrenches the bigger actors, ejecting the otherwise perfectly fine alternatives. It’s the power law that makes the rich richer (and turns everything into a mess of management-driven kludges).


I hope this is in the backlog, because I'm not switching from GitLab for this.


Maybe 95% of people use it and it made sense to them, who knows?


Is it just me, or is $15/mo for the cheapest Postgres-with-backups a bit steep? Heroku's free DBs (or $9/mo basic plan) support daily backups.

I've currently got a web app I'm just self-hosting on a DO VPS for $5/month. I have a Postgres DB on the same VPS (via a Docker image), with a 10-line shell script & cron job for backups to Backblaze B2 (which costs ~nothing/month for my tiny DB).

Additionally, my web app is a Kotlin API and a Nuxt.js SSR server, so I think I'd have to set it up as two separate "apps" on this platform. That means I'd be going from $5/mo to $25/mo.

On one hand, that's not a ton in the grand scheme of things. On the other hand, the whole reason I use DigitalOcean and self manage my infrastructure is to _not_ have to pay that kind of money for my projects with no revenue.


This sounds a bit like the "Why pay for dropbox when rsync is free?" argument. Sure you can do all of that yourself, but for just $10 / mo (5->15) you don't have to worry about it. That's really what they're selling.

$120 / year for peace of mind that your production DB is backed up is well worth it for a lot of people, especially if the alternative is potentially a bug in a homegrown shell script which could silently fail catastrophically and lose your whole DB.


> This sounds a bit like the "Why pay for dropbox when rsync is free?" argument. Sure you can do all of that yourself

It doesn't sound like that argument. He compared it to Heroku's offerings, which are $0-9.


Didn't know rsync has free cloud storage.


If you're going to do this yourself, Caprover is the best.

Cram it into that $5/month, bump the swap to 2GB and then deploy your DB into it... backups are supported if you just straight up map a persistent volume out to your B2 (https://github.com/caprover/caprover/issues/410)

Edit: Be aware during automated upgrades you will trip CPU alarms.


This is great, do you have any recommendations/comments on usage? Aside from the CPU one


I've used Caprover coming from Flynn and it's much better.

It's not super pretty like Heroku but it works.

Recommend adding NetData integration so you can monitor your hardware without logging into instances.


I think you're stating the reasons already. There are people who don't want to self manage. They want a fully managed solution and don't want to think about the ops.


In my experience, the Heroku $9/mo plan isn't usable for anything production-ready. I had to upgrade to the next tier when I crossed about 100 active users. For $9/mo, you're also getting a pretty prohibitive row limit and a really low connection limit. DO's $7/mo dev database is more in line with Heroku's dev offering.


OTOH the PG Standard 0 (4GB RAM, 64GB storage) of Heroku costs $50:

https://www.heroku.com/pricing#data-services

On Digital Ocean the first offering with 4GB of RAM costs $60 and it only has 38GB of storage:

https://www.digitalocean.com/pricing/#managed-databases


From what I can tell, the cheapest managed database on DigitalOcean is $15/month, as was mentioned previously in this thread.

https://www.digitalocean.com/pricing/#managed-databases


I've given some thought to the PG offering -- $15/mo is really just for a dev instance with some minimal backup, it costs more to do anything production-level.

My conclusion -- YMMV! -- is that I'd happily pay that much to "set and forget" the DB in a proof-of-concept or hobby context. I realize it's not perfect, and there are cheaper options, but I really think they gave us a cheap-enough deal, and you can always play sysadmin if it's too much for you.

I'm not a big DO customer but I appreciate their pricing transparency: having worked professionally with 2/3 of the major Cloud companies I would never put anything there that's billed to my own account.


A small note, the DigitalOcean dev DBs do have daily backups though these are not accessible by the user and only come into play when there is a non-recoverable issue with the node they are on.


Tried this out in early access and wanted to love and use it; however, a few things made it not workable:

* wildcard subdomains aren't supported.

* Only CNAMEs to digital ocean's CDN are supported (can't have an A record).

* There was no way to run a console command.

* Bandwidth costs 12x more than on usual droplets, and unlike standard droplets there isn't an included bandwidth pool.


Thanks for the feedback! I will be sure to address your points with the team as we continue to develop our roadmap. One thing to point out is that there is now console access available in the product.


The bandwidth piece is interesting. I guess if you really have a need for bandwidth just use their K8s offering which is free and you just pay for droplets (with much more bandwidth)/networking. It will require more setup, but you will end up with higher bandwidth.


"Just use k8s" isn't a solution as k8s is a complex beast.


Agreed, but they provide the management of the infrastructure. You will need to setup the services/pods, but it does then give you droplets with higher bandwidth if that is your need.


> Bandwidth costs 12x more than on usual droplets, and unlike standard droplets there isn't an included bandwidth pool.

I'm particularly disappointed to hear this. DigitalOcean likes to say they have simple and predictable pricing; they even say so in TFA. Charging high prices for bandwidth is surely the most despised practice of the big 3 cloud providers. I don't expect this from DO. I'm not angry, I'm just disappointed...


> * There was no way to run a console command.

There is a console now, was added a few weeks ago.


I like the idea of rebuilding Heroku without the command line. It's not for me, but it's probably for somebody.

There's just too much of a premium for this over a standard DigitalOcean droplet though. At the low end you're paying double for the same resources, and at the high end you're paying 4x.

I deployed a toy Node.js app that was a bit too resource hungry for 1 virtual CPU. The cheapest plan with 2 virtual CPUs is $150 per month (vs. $15 for a droplet with 2 virtual CPUs).

Seems like there are some pricing issues to work out.


Hey futhey,

The team agreed that we had some gaps that needed to be filled, you will now see new plans:

Basic $40/month 4GB RAM & 2 Shared vCPUs

Pro $75/month 4GB RAM & 1 Dedicated vCPU

We've also increased the vCPU count on the Pro $50/month plan from 1 to 2 vCPUS so it's now:

Pro $50/month 4GB RAM & 2 Shared vCPUs


Thanks for your feedback on the pricing plans. We will definitely take a look at options and potentially roll out some additional plans soon.


Not sure what you mean. Heroku can autodeploy from a Github repo just as well? You don't need the CLI for that


The only way this would make sense is if Digital Ocean offered the PaaS layer for free (or very cheap) to boost the sales of their underlying services (droplets, spaces, etc). Otherwise, what's the point? It simply won't be able to compete.

For example, static sites should be included for free on top of Spaces. There's no way this offering will be able to compete with Vercel, Netlify, or Firebase which all offer static sites for free with CDN. Vercel even includes 1TB of traffic per month, for free.


I tried this out today and figured out how to run a custom Python application there. You only need two files in a GitHub repo for that: a requirements.txt listing your dependencies and a Procfile that starts a process listening on port $PORT and IP 0.0.0.0.

My Procfile looks like this:

    web: datasette . -h 0.0.0.0 -p $PORT
Full notes here: https://til.simonwillison.net/til/til/digitalocean_datasette...


This is built on top of Heroku runtime, more info on Procfile here: https://devcenter.heroku.com/articles/procfile

Also, note that you shouldn't use SQLite, at least with Heroku, because apps should be stateless: https://12factor.net/processes


I'm deliberately using SQLite here because my use-case is read-only. https://fivethirtyeight.datasettes.com/ is a demo that I've been running on Heroku for over a year now - the trick is that if your data never changes you can package up the SQLite .db file as part of the deployment.


Please avoid prescribing best practices without understanding the architecture OP is working with.

First, you don't actually know that using SQLite would make the OP's app stateful—if datasette is set up in immutable mode, then the .db files are no more indicative of a stateful process than a static CSV or a JSON or YAML config.

Second, not every app needs to be a 12 factor app, and you're not in a position to understand the trade-offs OP is dealing with. "Best practices" rarely are best in every circumstance, and often conflict.


A lot of folks come to DO for their lower costs, especially for egress. However, this App Platform is charging $0.10 per GB for bandwidth overage. That's 10x the cost of running your own Droplet/K8s cluster.


Probably because it's served by Cloudflare, so DO has to pay for Cloudflare bandwidth ($0.10 per GB for enterprise products).


Nice initiative, but I'm going to stick with Dokku for now. After jumping between Google App Engine, Heroku, plain old tar over scp, and other custom solutions, I find Dokku the perfect option for small and side projects.

Dokku manages to hit that sweet spot of making everything easy and convenient while still letting you do fine tweaks if you need them.


I really like this, but we need lower plans for smaller managed databases.

A very small app composed of:

- App Platform using 2 containers - $12 x 2 = $24 (not sure, price is weird)

- PostgreSQL - $15

- Redis - $15

- Object Storage - $5

At ~$60 it becomes a bit too expensive.


Agreed. For me, PaaS seems nice in theory until I start adding up the costs. For something that would generate revenue, I'd go PaaS instead of playing sysadmin. But for my hobbies it's not worth it. I want a tiny Postgres DB, a tiny Redis instance, a tiny bit of object storage... basically a tiny bit of everything. Maybe I'm not a good customer, I can accept that. I'm just happy it's possible to pay $5 and get a VM with 512 MB of RAM and 25GB of SSD storage; that would have been unthinkable 15 years ago. I have to manage everything on my own, but I can run a few apps on that one VM.

To steal an old saying, VMs are cheap if your time is worth nothing. My time on weekends is worth nothing. Shrug.


Bandwidth limits seem excessive?

Outbound transfer – 40GiB per app

Overall a cool idea. Would personally still prefer GCP Cloud Run which will bill me for request time only and allow me to scale based on requests per second.


Azure Static Apps could be a choice. Provides SPA + Functions support all in one deploy.

https://azure.microsoft.com/en-us/services/app-service/stati...


I'd be interested to know if this is integrated and rebranded nanobox [0]? Or something completely from scratch?

[0]: https://nanobox.io/


We certainly took a lot of inspiration and leaned on the awesome experience of the nanobox team, but this is largely a completely different architecture.


Wasn't Appsail nanobox? IIRC, it was planned to be DO's PaaS, so was it cancelled, and DO's PaaS is based on k8s instead?


Appsail was nanobox from what I saw, they redid it between appsail and "apps".


If Appsail was cancelled, what was the point of acquiring nanobox, I wonder?


Maybe an acquihire? I don't recall seeing any of the nanobox people even mentioned or reply in any of the slack messages. So I am not really sure.


Can you tell us anything about the architecture?


This is what I found out by printing the environment variables: https://twitter.com/alexellisuk/status/1306343018488791040


We will be revealing a lot more about the architecture at our upcoming virtual conference called Deploy that is happening Nov 10-11th. There are a few sessions that touch on how the platform was implemented and I encourage you to tune in for those! (schedule to be announced soon). https://www.digitalocean.com/deploy/


I was guessing KNative, which powers Google Cloud Run.

https://knative.dev/


Google's Cloud Run (the SaaS) doesn't use Knative + k8s under the hood. It just uses a compatible HTTP API.


That's correct, though "Cloud Run for Anthos" is Knative + k8s + secret sauce (I work on it).

Google has reached Microsoft-level naming complexity...

- https://cloud.google.com/anthos/run/


It's completely from scratch, and based on playing around with it, it's a weird Kubernetes system. Which is why the pricing seems super weird compared to existing DO products (100GB base and then you pay per GB).

Nanobox was a tool that allowed you to build and manage a container with setups for Heroku buildpacks. It handled deployment, load balancing, etc., similar to Heroku for example. But it also worked with Vultr, Amazon, etc.


I am out of the loop, but at a quick look it seems nanobox provided a way to deploy apps on top of DigitalOcean. Since you said "integrated and rebranded", does that mean DO bought nanobox? Or did they just "copy" it?


Nanobox was acquired by Digitalocean. More details at https://www.digitalocean.com/blog/digitalocean-acquires-nano...


It's so sad nanobox was acquired and is now being trashed by DigitalOcean. It was comparable to Dokku/Flynn on steroids; it was a very promising stack for self-hosted Herokus.


I was hoping to see more about what happened to Nanobox after DigitalOcean acquired them. I thought what they had was really well done and was excited to see where they took it.


Yeah, I've been keeping an eye on their site which seems to indicate they were going to release something. I guess that's not happening any more!


We've been happy https://render.com customers for over a year now and on the surface this looks very similar. I'm glad to see another company take a stab at iterating on Heroku while keeping ops burden low.

I'm interested to know - what's the sales pitch for DO over Render? I'm noticing some pricing differences but at first glance it seems they tip Render's direction. I'm also noticing Render is a bit further along (persistent storage, cron jobs, custom domains).


There are various areas where App Platform pricing is in your favor, including the initial getting-started pricing for services on both the basic and pro plan, as well as the cost of databases as you scale beyond the cheapest plan on Render. App Platform supports custom domains today, but we do plan to roll out enhanced functionality around domain management very soon. Today we support pre/post-deploy jobs, with cron jobs coming soon, as well as a persistent storage solution. In addition, beyond the $7 price point we support MySQL/Redis/Postgres as opposed to just Postgres. We will also be rolling out App Platform to many more of our DigitalOcean regions very soon and have an exciting roadmap that we can't wait to get built out and released. Thanks so much for checking out the product and providing your feedback.


Can you consider adding a 3rd data center somewhere else in the US besides SF and NYC? Like Texas, Chicago, or Virginia?


Thanks for mentioning Render, I had not heard of them. I've been evaluating options over the last few months as we may outgrow Dokku hosted on DigitalOcean soon (both of which have been very reliable and a delight to work with). Render looks excellent.


Congrats to DO on this launch. There are also completely free & OSS platforms that provide a similar experience, perhaps a little more Cloud Native (auto-scaling, scale to zero, team features), like OpenFaaS Cloud: https://github.com/openfaas/openfaas-cloud


Does Render run on AWS? Or its own servers? Having a little trouble figuring out things like that and what regions they're in, etc. Looks cool though.


Render runs on multiple VM providers (including GCP and AWS), and soon on bare metal. More info on geographies here: https://render.com/docs/regions


This was a pain point for me when I was first evaluating them too. I had to manually email support for them to inform me they run in GCP US West

EDIT: see sibling comment from anurag. It appears they now list the regions they support in their documentation.


I looked at the page and FAQ but can't find a clear answer for this: does this have billing limits? That's the main reason why I don't use AWS or any major cloud for personal projects because there's potential for unlimited cost if something goes wrong. I would rather have a hard billing limit that just shuts down everything if it reaches the limit.


Not sure about the other cloud providers, but AWS has billing limits with email warnings when you approach them. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2...


They aren't limits though; the email warnings are the only thing they actually do.


> If you have apps in other languages, you can quickly deploy using the App Platform by providing the appropriate Dockerfile in the source repo

I'm happy to see this. Makes it a viable contender to GCP Cloud Run, which has become my favorite way to deploy serverless apps.
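The contract is pleasantly small: a Dockerfile whose server binds to the injected $PORT. A minimal sketch for a Python app (the app:app module path is made up; swap in your own):

    FROM python:3.8-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    # The platform injects PORT at runtime; bind to it instead of hardcoding a port.
    CMD exec gunicorn --bind 0.0.0.0:$PORT app:app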


Like all DigitalOcean products, the App Platform provides predictable, easy-to-understand pricing that allows you to control costs to prevent surprise bills.

I can't see any features listed that enable me to control costs to prevent surprise bills. If a site got submitted to HN and hugged to not-to-death-because-it-autoscales then I'd wake up to a bunch of alerts and massive outbound bandwidth bill. I don't want that. I want something that stops that happening.


Thanks for the feedback! Autoscaling is not yet supported on the platform (but is coming soon). Before autoscaling lands as a feature, insights-based alerting will also land. You'll be able to set up alerts for scaling events, bandwidth, CPU, memory, and more that can be sent via email or Slack.


My point is that alerts aren't really enough. If I'm asleep or on a plane or something then it could be many hours before I even see an alert, by which time it's probably too late. I'd prefer to be able to set up rules beforehand that prevent an unforeseen bill.

If the App Platform doesn't have that feature, and it isn't planned, that's OK but I'd argue that isn't really preventing a surprise bill as the marketing site claims. A warning isn't quite the same as prevention.


Sorry, I think I misunderstood what you were saying there. Could you clarify a bit more about what kind of functionality you'd prefer we support in the scenario you described? A large influx of traffic hits your application while you are unavailable to handle it. If you do not have autoscaling enabled, your application performance could suffer, if you do have it enabled your bill could grow out of bounds. We do plan to let you set min and max bounds for autoscaling once it is implemented. Thanks for your help!


We do plan to let you set min and max bounds for autoscaling once it is implemented.

This is exactly the sort of thing I'm talking about, but for money rather than computing resources or bandwidth. I'd like a feature where the requirement is effectively "fall back to serving sorry-I'm-poor.html instead of the app once the total monthly bill exceeds $xxx". For a side project I'm more interested in not paying an unexpected bill than in getting traffic.


For what it's worth, we (Fly.io) have this feature, and announced it as part of our launch post on HN. But literally no one has asked us to enable it on their apps. So we never made it self service.

I think for most companies, it's better to set the expectation that the service costs money, bursts will cost more money, and then forgive outlier charges once or twice. It's tremendously difficult to compete against the AWS's of the world, putting work into features specifically to minimize how much people spend seems like a good way to fail a company.


Very interesting that this feature is often demanded in tones of righteous outrage on posts about autoscaling cloud services, yet when actually implemented, nobody has actually set it up.


> So we never made it self service.

I may not understand the parent, but it sounds like it's not equivalent to a button next to the auto-scale setting; more like an email plus prerequisite knowledge that it exists?


Yes we advertised it as a feature to all new signups, with a link to request limits. It's not equivalent to a button, the dirty secret is that when people requested it, I planned to put a notification in my calendar to go look at their usage the last day of the month and adjust their bill accordingly. :)

We were concerned about spending time on marginal features that didn't actually matter much to people, so we "launched" it without building it to see if anyone cared.


I'm not one of your customers but when I have looked at pay-as-you-go services or autoscaling services before I basically don't even consider any that don't allow me to cap the costs per month or similar.

So you could perhaps also consider that if it is only marketed when you sign up and not a clearly defined feature some people (like me) will just never sign up at all.


Personally, I don't think that capping costs explicitly is a good idea, because it doesn't carry enough information about what to do when you hit that cap.

Not even considering something like AWS with a bazillion types of services, say you have a cloud hosting service that runs nothing but auto-scaling clusters of servers billed by the hour. So I set a cap on costs but not cap on scaling up and turn it loose. It gets a hug of death from something, scales to the sky, does serve all of that traffic fine, but burns through my monthly cost cap in an hour. Oops, now it's down entirely for the next 3 weeks or whatever. Does anybody in the world who's willing to pay for hosting something actually want that? I have to doubt it.

On the other hand, capping the scaling size by number of hosts sounds pretty reasonable. Set a cap at say 2x your normal peak traffic. You get a hug from something, it hits the cap. Most of your extra traffic gets errors or really slow responses, but your service stays up, even when the traffic dies down. Your monthly costs are a bit higher than usual, but manageable. That sounds like a much better result to me.

That doesn't even get into stuff like, oh hey we dropped your DB server because you hit your cost cap early and it costs money to keep it running and to create a new backup too, hope you don't miss that data too much!


That's actually why we made a big deal out of it in our launch post. We thought maybe it would attract underserved customers. I am surprised that it had no effect.

My guess is that people who are interested in controlling costs are also not getting enough value from auto scaling services. So we're already not attracting those folks, and offering a cost control feature isn't enough to get us over the hump.


Thanks for replying this deep into the thread, I found the whole thing interesting and informative.


I’m also not interested in this feature, but do want to say that’s a lovely example of implementing the bare minimum to gauge the value of a feature.


That was a great MVP plan for that feature! Figuring out what not to build is a great way to save development time.


In addition to there being no self-service, the kinds of people who want this aren't going to be using Fly.io in the first place. Look at the prices. DigitalOcean's $5 droplet would cost over $50/month on Fly.


That's not true. Our CPU VMs are similar to DO's CPU optimized droplets. The cheaper droplets are all shared CPU.


No, what I wrote is true based on the info listed on fly.io. Either those prices and specs are incorrect, or your comment is narrowly focusing on the CPU axis (which still doesn't make for a good comparison, considering the options Fly gives are $2.67 or $8) and calling it sufficient. DigitalOcean's cheapest droplet comes with 1GB RAM, whereas to match that with Fly, that's $35 minimum. You don't get to ignore that. Add in the costs listed under Fly's "Network prices" and we're nowhere close to $5.

If for some convoluted reason this is wrong, then you guys really need to reconsider how you're presenting your prices and to lay out exactly what those convoluted reasons are, because Fly's pricing page certainly tells people that trying to match DigitalOcean's $5 plan is going to cost 10x on Fly.


I'd argue that many users that end up with surprise bills are either inexperienced or are deploying something small without giving it much thought.

In those cases, the users will either not know or not think about such expense limits.

The solution is to set relatively strict limits by default and even occasionally warn users about unused/underutilized resources.

But as you said: such features are bad for the bottom line.

And I can appreciate the enormous difficulty of competing with the big 3.


I think such features create really negative surprises too. People expect their app or service to keep working at almost any scale. Disabling an app that hits a threshold sounds terrible, because most peoples' apps _benefit_ from more (legit) activity and they are delighted to pay the extra fees if things just continue to work.

For people who are inexperienced or don't give it much thought, just waiving their overage fees creates a really nice experience. We've had multiple instances of people asking why their bill is so high and being relieved/excited when we explained it and made it go away. People love when we're responsive to problems.


> putting work into features specifically to minimize how much people spend seems like a good way to fail a company.

What a customer-hostile thing to say, never working with you!

What if I told you that if you're actually trying to build a relationship with your customers and have them want to do business with you, it's best to focus on solving their problems as opposed to taking as much of their money as possible.


> What a customer-hostile thing to say, never working with you!

The point is that I'd rather just waive surprise fees than build a bunch of infrastructure to prevent them. From what we can tell, customers almost never have surprise bursts of traffic, but it's something they think about a lot. It's pretty easy to just say "you're not responsible for expenses associated with abuse or attacks or other nasty surprises".

The purpose of a metered service is to give people access to tools and features that would be prohibitively expensive otherwise. The trade off is that the expense also grows incrementally.


Sounds like we agree on what you're doing, we just disagree about whether it's a good approach.

I don't want to need to depend on you being in a good mood to waive fees (or more likely, to hope you're not so low on runway that support gets incentives not to waive them until after your next round closes). I don't want to have to guess whether your terms mean what they say, or whether that giant potential bill might magically go away if I incur it and can convince you it was, quote, nasty.

I do want to deal with people who have thought hard about how to build their product in a way that I can depend on and reason confidently about, and who treat me like the seasoned adult I am.


I think you're not a good customer for us? I'm not sure that makes us hostile, or you wrong, but I'm pretty comfy with how we treat our customers (and they seem cool with it too).


> It's tremendously difficult to compete against the AWS's of the world, putting work into features specifically to minimize how much people spend seems like a good way to fail a company.

It's also something that people actively dislike about AWS, and thus a good selling point. Albeit it's probably mostly smaller projects that care about this.


It sounds like a good selling point but I don't think it works out that way. People dislike a lot of things about AWS and it's still something that almost every technical business spends money on.


If there's any confusion about this, look at nearlyfreespeech.net.

You deposit funds (let's say $20) into your account, and as your site consumes resources, it draws from your balance. So $20 is $20. If you set up a crummy low-traffic blog and that $20 lasts four months, then fine. If it lasts two weeks and then there's a surge in traffic, or the server hits some bug that causes it to spiral out of control, then your site gets killed when your balance hits $0. You can set up alerts, but there's a hard limit for how much you can be charged—it's whatever funds you deposited into your account. (Really, that's not even the right way to put it, because you're charged at the moment you deposit them.)

This isn't acceptable for everybody; many businesses would prefer to stay up and be billed after the fact. But that use case is already well-served by the industry. (Overserved, really.) The market segment where the alternative is the best fit (pretty much every hobbyist who isn't bringing in a single cent off their side project) is extremely underserved. NFSN is the only provider I know of that even offers this.


I think he is simply requesting threshold limits. So for example you could set a threshold of 2 - 5 nodes/droplets. This way you will always have at least 2 nodes running, even with zero traffic, because that is the minimum you requested. Likewise the upper bound limit of 5 nodes/droplets would limit the maximum nodes that the service would generate.

So it autoscales up until it hits the maximum threshold and stops. Yes, if the app needed more resources than this, due to a spike in traffic, then performance would obviously suffer. But depending on the budget and needs of the project this might be the preferred outcome over incurring unexpectedly high server costs that you are not prepared to handle.
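On plain Kubernetes (which App Platform runs on) this already exists as the HorizontalPodAutoscaler; it bounds pods rather than droplets, but the min/max idea is the same. A sketch of the 2-5 bounds described above, assuming a Deployment named web:

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2    # always at least 2 replicas, even with zero traffic
      maxReplicas: 5    # hard ceiling: past this, performance degrades instead of the bill
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80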


Way too many folks end up with surprise bills. I'd like the option to say: over x dollars of cost, shut the damn thing off to the public. Back in the /. days, a minor project of mine got hit by the mass internets, and I'd been hit with bandwidth charges by the time I noticed.


Most autoscaling platforms let you define min and max types of restrictions. I assume DO would have the same functionality in their autoscaling.


Correct me if I'm wrong, but AWS doesn't allow you to do hard limits.

I was creating a public S3 bucket today and wanted there to be a hard limit, so I couldn't get slapped with a huge AWS bill. Looking at the docs, it appears I can get alerts but not set a hard limit on my billing.


AWS allows you to set limits on autoscaling groups, ECS, EKS. All the things with autoscaling, really.

If you're worried about billing, S3 is not a great choice. S3 is deliberately unlimited storage, aimed at the enterprise.


Saying that something billed based on usage can't also have its usage capped seems weird.


What good are alerts while I sleep? I usually don't even check my personal mail until I'm done at work. There's a lot that can happen in 16 hours.


On AWS I tie high billing threshold alerts with PagerDuty so it wakes me up at night.
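Roughly: a CloudWatch alarm on the EstimatedCharges billing metric, with an SNS action that PagerDuty subscribes to. A sketch (the account ID and topic name are placeholders, and billing alerts have to be enabled on the account first):

    # Billing metrics only exist in us-east-1
    aws cloudwatch put-metric-alarm \
      --region us-east-1 \
      --alarm-name monthly-bill-over-100-usd \
      --namespace AWS/Billing \
      --metric-name EstimatedCharges \
      --dimensions Name=Currency,Value=USD \
      --statistic Maximum \
      --period 21600 \
      --evaluation-periods 1 \
      --threshold 100 \
      --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-pagerduty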


Oh man, what a user friendly solution


We need limits, not alerts. That is, autoscale up to X.


Thanks for the clarification. We do plan to implement hard limits in the autoscaling feature.


I've said this elsewhere in this thread, but there's a difference between predictable pricing and controlling runaway costs.

Predictable pricing means that each month my bill is the same: I'm on a $15 plan, I pay $15 a month. No nickel-and-diming, no hidden fees, life is simple. This is a good thing.

Controlling costs means that no matter what happens, my costs will not exceed $x/month. Service might degrade if necessary, but I will not pay for anything above some upper limit.

Seems that DO has the former but not the latter, because if things get out of hand (caused by an attack, a bug in my code, going viral) the customer can be stuck with a bill. Alerts do not alleviate this risk. Pricing that is generally predictable does not alleviate this risk. Only a hard cap does. I want a plan that says "you have 250 GB of traffic, after which requests will fail and you'll get an email". You can make it nicer by sending me a warning email at 200GB so that I have time to upgrade my plan if I want.


That seems somewhat opposite to the point of PaaS, no?

If you want controllable costs and you don't want people using your service because interest explodes, then just stick to a regular droplet, no?

The entire point of this is that if usage explodes, you want to pay to support that usage. That's the entire feature.

(Also, HN frontpage traffic isn't that much -- it's a few requests per second at most, not thousands per second.)

The only scenario I can imagine where this would be a genuine concern would be if you were subjected to a (D)DoS-type attack. But in that case you still want legitimate users to get through, so you really need a separate DoS-protection layer which is totally orthogonal to this.

Am I missing something?


> That seems somewhat opposite to the point of PaaS, no?

Why would that be true? I thought the point of a PaaS was getting a Platform, as a Service. I don't interpret that to mean having no control or limits on potential costs. To me a PaaS is about not wanting to manage the metal or infra under the app, which I don't think is misguided or inaccurate.

To look at it differently: I am a customer to someone. I am a customer who wants to buy a product that manages all of the infra for my application. I also have a limited budget, and the notion of an unlimited budget is an instant no for me.

Call the XaaS whatever you like, but my motivation is clear. Maybe DO doesn't want me as a customer (if what you say is true about PaaS), but I think there's opportunity there for a new XaaS.


A little tidbit: everything you deploy on App Platform comes with DDoS protection built in.


This is laughable - billing is logically separated from droplet use, so unlike other providers DO charges for the potential to use a capability, rather than actual use, regardless of whether it is consumed. I got stung badly by this - charged for X droplets capability when zero droplets capability was being used for several months. Explained the misunderstanding to DO, got no sympathy & no refund. Won't touch them again, don't trust them further than I could throw them.


You bought access to X servers knowingly (because you can't do that by accident), let them sit there knowingly, then got billed the exact number that was listed, but somehow that's DigitalOcean's fault?


> let them sit here knowingly

Nothing was deployed. Zip, zero, zilch resources were used month-on-month. My fault, yes, but naively I assumed if this was the case, billing would automatically drop to zero.

Whilst I bought access to X servers, I had no way to remove the charge associated with that without contacting customer services when I decided X=0, permanently.

I mean, really? I have no problem with PAYG or PAYG to a capped amount, but PAYG for a fixed amount regardless of whether or not the resources are actually deployed is disingenuous at best.


> Nothing was deployed. Zip, zero, zilch resources were used month-on-month.

This is like leasing a car, leaving it parked, and then complaining you have to pay the lease.


So, you simply did not read the pricing page, which explicitly says that it's going to bill you that and that you can do whatever you want with it. And/or you have never used a VPS service before. You are quite literally renting space and CPU time on their servers, which they keep (mostly) free and reserved for you.

You don't complain that you were charged $200 for renting a parking space and never using it, no reason for this to be different for servers.


Except in this case, other cars are being parked in the space while it is "empty", and you can only find the car park attendant every third Monday of the month to rescind the agreement.


This is what makes "the cloud" profitable.

If your droplet isn't using all of its resources (CPU, RAM, disk), they are able to oversell their capacity.


To be clear, I had no deployed resources, but was still charged as if I had. As it turns out, it's impossible to cancel the "standing charge" without recourse to support.

There's a difference between load balancing of shared resources and (what I believe must be) deliberately deceptive practices. A customer-centric company would send an email to notify you of this kind of over-charging.

It's premeditated and despicable.


What you call "premeditated and despicable" is actually a huge value proposition for others. Whereas AWS/GCP have pricing structures based on usage and you never know until the end of the month how much you owe, DO instead has defined "you will pay $50/mo for this regardless of if you do or do not use it" and from what I've seen, many people really value and appreciate that, and specifically choose DO over AWS/GCP because of that.

The pricing model for App Platform seems antithesis to that, though, which is interesting. DO is becoming more like AWS/GCP with every feature release, which I don't necessarily find to be a good thing.


The problem I have is not with the model, but with the fact it is so difficult to cancel the standing charge. If it could be done from the web UI, and/or there was an interlinked pop-up when zero droplets are deployed, fine.


Wait, I'm confused. What standing charge? What exactly did you get charged for? I've been using DO for a few years and I have no idea what you are referring to. When I delete my unused resources, I don't get charged.


He probably forgot to turn off the instances and kept getting charged.


Ah, right, shut down but not deleted. DO are upfront about that charge though, and it's not like it's any different on any other cloud provider.


I didn't say it wasn't a shitty outcome, but this is what people advocate for when there's a push "for the cloud" or for "how AWS is doing great things".


Please change the URL to

   https://www.digitalocean.com/blog/introducing-digitalocean-app-platform-reimagining-paas-to-make-it-simpler-for-you-to-build-deploy-and-scale-apps
Yours is tracking clicks.


I got quite excited at being able to stick my static sites on DO for free, but notice on this page[1] that only the first three are free, then it's $3/m/site.

Perhaps I've been spoiled by services like Netlify. I'd be interested to know what the benefits are of using DO's service over free alternatives.

[1] https://cloud.digitalocean.com/apps?i=6bf7f8


Netlify Analytics is $9/month per site [1], with no way to download logs [2]. That said, it is easy to deploy, HTTPS is free [3], and they support MIME definitions in _headers [4].

I host on Google Cloud Storage; it is not that easy but not that hard either [5]. GoAccess for web analytics, no HTTPS. It would be interesting to have a matrix of supported features on different platforms, and how deployment compares to Nginx, letsencrypt, git.

[1] https://www.netlify.com/products/analytics/

[2] https://community.netlify.com/t/download-raw-server-access-l...

[3] https://docs.netlify.com/domains-https/https-ssl/

[4] https://docs.netlify.com/routing/headers/#syntax-for-the-hea...

[5] http://sergeykish.com/google-cloud-storage-static-hosting


The logs would be useful, but there are alternate options that could be used over Netlify's $9/m option. For personal/side projects that price is a no go for me.

I've used GCS before and I agree it's not easy/hard, but certainly not as simple as Netlify.

> I would be interesting to have a matrix of supported features on different platforms. And how deployment compares to Nginx, letsencrypt, git.

That would be useful, especially now that there are more and more options out there. I've seen benchmark comparisons but features would be more useful in my opinion, unless the speed is really poor.


Thanks, that is valuable feedback. We definitely recognize that there are other options out there that offer unlimited static sites. We feel that the additional features and capabilities offered by the platform (running scaled-out dynamic workloads, workers, jobs, and databases at a very competitive price point) will provide you with enough value to make up for it; having these features just results in a different economic model for us.


It seems rather silly to limit the number of free sites unless you somehow restrict the number of accounts people can open. Also, by restricting, you lose the analytics to see how people are using the service when, not if, people start creating multiple accounts.


Someone already mentioned GitHub Pages. I'm pretty sure Azure supports serving static sites straight from Blob Storage too - put Cloudflare in front, and you should be able to host dozens of static sites for a few pennies a month.


Why not just use github pages for static sites?


I might do in the future. To be honest, Github Pages has not been on my radar until I saw the recent article posted here with benchmarks across popular static site hosts.

I'll have to look at the benefits of that over Netlify, too.


I just tried it out with a Jekyll site I have running on Cloudflare Worker Sites. It detected everything and is deploying, but one thing I noticed is that deployment is kinda slow. My GitHub Actions pipeline to build the Jekyll site and deploy to Cloudflare takes less than 2 minutes (latest run took 1m 43s). DigitalOcean took 6 minutes in comparison. Sure, it's not crazy, but 3x the time still seems excessive.

It seems the bulk of the time was from installing ruby gems for Jekyll. Maybe GitHub Actions mirrors them or something so it can run faster?

Besides that, it works great. Actual live performance seems as snappy as one would expect from a static site and setting it up was almost one click. I'll definitely be looking into this in more detail.

First impressions are good and I may well migrate over for static site and API server (API server is currently running on a droplet as a single docker container, so seems like a nice way to lower the effort I have to put in). Everything else is running on AWS for.. reasons.. so this looks like a nice way to simplify my non-AWS stuff. I'll be experimenting with it over the coming days! The bandwidth limits are the biggest concern.
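On the gem-install point above: GitHub Actions doesn't mirror gems as far as I know, but the usual speedup is caching the bundle between runs. A sketch of the relevant steps (Ruby version and build command are just examples):

    steps:
      - uses: actions/checkout@v2
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '2.7'
          bundler-cache: true   # runs bundle install and caches gems keyed on Gemfile.lock
      - run: bundle exec jekyll build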


Seems like someone from DigitalOcean is reading and responding to comments here. I won’t touch or recommend DigitalOcean to anyone unless it employs some humans to address and respond to queries instead of sending the same automated replies. The very first time I created an account, it charged my card for verification and then blocked my account (the charge was also immediately reversed) since its systems believed this was fraud. There was no way to get to a human, get any additional verification done and get my account reinstated. All I got was a string of automated replies with the same text. I decided then that this is not a company to rely on.

I don’t mind fraud protection measures, and am willing to provide additional verification based on what’s asked for and relevance. But there’s no such process with this company.

Later I see large scale layoffs in DigitalOcean, and I don’t think the situation could get any better.

I know the same kind of lack of customer service could be said of Google and certain other companies too. But DigitalOcean made my very first experience a bitter one, and turned me into an advocate against it.


Hah, I had the opposite experience: I got to deal with a rude gatekeeper when I needed to create more than the initial 5 instances they limit you to.

I moved to lightsail. It has problems but at least AWS support treats you like a human.


I've had a different experience with their communication: they've been nice enough to answer my questions in a timely fashion, for example about the instance limit, while I was running just a few servers.


By contrast, as a free user of heroku I've gotten support from _actual engineers rather than support staff._ AS A NON-PAYING USER!! literally never happened to me before. Love them.


Surprised you need to post anonymously ;/ I just had an issue and I got a support staff to respond to me within a day. Maybe bad luck.


I don't know why they make this scaling stuff so expensive; no-one is actually going to need that scale from the start. I was looking into Google App Engine, and it's so ridiculously expensive that I figured it would be cheaper to build my own clustering than anything else, or maybe just buy a good server and host it myself.


Cool. More players joining the "focus on code, not infrastructure" arena. I like it!


$5 a month for the cheapest option?

I mean, the cheapest Droplet is $5 a month with 1GB of Memory, 1 vCPU, 1TB transfer and 25GB storage. While the same tier of App Platform only gives you 512MiB of Memory and 40GiB Outbound transfer, plus you have to pay extra for the storage/db.

Why does the price vary this much? Is Kubernetes that expensive?


So if I understand correctly, DigitalOcean just built an inferior version of caprover? (for which a digital ocean one-click droplet exists BTW)

Caprover is essentially a really nice GUI for docker swarm, Let's encrypt and nginx.

https://caprover.com/


Not an answer to your question - but as an aside, why the tone? In the classic HN example, isn't Dropbox just an inferior Rsync?

Eg, if someone wants to clone something and it manages to succeed, they clearly got _something_ right. Either a captive audience, a better UX, a better pricing structure, etc. There's almost always room for ~innovation~ competition, so why poke fun at it?

I suppose if you think it's truly inferior in every way compared to Caprover your tone might make sense. But even then, I've never heard of Caprover, and if I was a customer looking for this product (and I may be) I'd probably have defaulted to DigitalOcean for this. If that alone ends up being profitable for them, a Caprover clone with a well-known name, why wouldn't they do it?

I see these types of "HN Dropbox" comments on HN so frequently and I just don't get it. There are lots of angles to product viability, and these comments seem to be purposefully ignorant. Not saying you are ignorant, just that the comments seem to ignore any logical reason for competition.


I like the sentiment here, competition is great.

But there's also the problem of fragmentation in open source.

I've been using caprover from the early days, and I've seen how much work githubsaturn has put into making it really easy to use.

I think DigitalOcean's new offering leaves a lot to be desired, which could have been avoided if they had considered collaborating.

And of course I never said DigitalOcean is just rsync. Just like caprover isn't just docker swarm. The whole is greater than the sum of its parts.

But their new offering is definitely inferior in terms of functionality.


DigitalOcean also gets a lot of love. Maybe it's the heyday MacBook of cloud providers, and developers will try this just because it is a DO thing. I think the love comes from providing a really good platform for side projects. It's probably a badass-user kind of thing (see the book called Badass).

I had to go to Heroku before to be badass, but now I can do that with DO, who align more with my way of thinking as a developer.


Well, DigitalOcean usually have pretty amazing docs, which was something I found lacking with caprover. DO documentation feels like it was written for humans instead of... I dunno, caprover just feels too technically written. It activates the CBA, "I'll find something easier to use" feeling.

Or yes, use their one-click droplet and be using DO anyway, except you manage the server if it breaks. Also, auto scaling? That's on you.


I don't see DO mentioning auto scaling in their docs, links?


It's from this comment thread. Upcoming rather than available

https://news.ycombinator.com/item?id=24698927


I won’t mention the dropbox trope!


Their value add is that it's a managed service


So it's what - easier to buy? Can be managed from the same web site that you use to manage your droplets, instead of having two UIs?


First time I'm hearing of this thing. It looks like another take on Dokku?


Why are the bandwidth costs 10x higher than Droplets, Kubernetes, Spaces...?


Azure Static Web Apps is free with no limits compared to this offer, and I can also scale Web Apps in Azure, so I'm not sure why I would choose the DigitalOcean option. Competition is good for us, at least :).


Free with no limits? For how long? What a load.


Free, as in bear trap in the woods.


They give about 60 mins of compute per day for every app for free. Since 2012.

Google AppEngine is very similar.


This looks fantastic. I'm a big fan of DigitalOcean and their ongoing efforts to provide simpler, cheaper alternatives to the cloud giants.

Their managed Kubernetes offering is a bargain compared to EKS.


Happy to see easy docker support. But I guess my ideal PaaS would let me just point it at a docker image from a private registry rather than building from a Dockerfile in a repo.


That feature is on the roadmap!


Heh - this is extremely close to the company I am building! Shared Kubernetes is a great starting point, but unless the PaaS can be installed on _any_ Kube cluster I wonder how much of a step forward this really is.

That said, auto-configuration guessing based on the contents of a GitHub repo is probably the future: automatically get a redis server when you've installed a redis driver, etc.


> The App Platform is one of the few PaaS products built on a shared Kubernetes platform.

I wonder how they are doing this from a security standpoint. For customer workload isolation, is every container actually a VM? The pricing/sizing kind of makes it look like that.

> App Platform provides predictable, easy-to-understand pricing

How? Outbound network is charged per-GB. I know autoscaling isn't supported yet, but hopefully they support setting a max number of instances. One of the things people like about DigitalOcean over other cloud services is that you can sign up to pay $X per month and know that is what exactly what you will pay, no surprises.

> Upcoming features

The list of things not supported yet includes auto-scaling and VPC. It is hard to imagine using a PaaS without autoscaling. And I wouldn't want to build out a microservices-like architecture without VPC.

Overall I really like the concept of the offering. Simple PaaS, with custom container support, Kubernetes (for what it is worth in a PaaS, I don't know), and predictable pricing. That all sounds really good.


So this is something that in theory would compete with Laravel Forge but more expensive?

I have more than 30 sites with Digital Ocean and I manage them via Forge. Total I pay is about $100/month including the droplet.

I would have to pay $150/month ONLY for the apps without including the droplet if I use this system?

Am I missing something?


Is this based on knative? It seems very similar :)


It looks to be K8s + Istio, no scale to zero etc.

https://twitter.com/alexellisuk/status/1306343018488791040


I like the example RSS app (https://github.com/do-community/rss-reader-api) that DO provides. It's Django REST Framework but without a lot of the 'magic'. I reckon the DRF docs should link to it as a good example. When learning DRF from the docs tutorial, it's so easy to get bogged down in the features that are meant to simplify everything that, when something breaks, it takes someone unfamiliar with the source a long time to troubleshoot. This example, though, is very easy to follow.


I tried deploying a static site. DO App Platform makes the file page1/index.html available both at /page1 and at /page1/. If the page contains any relative links or images, it is impossible for both URLs to reference these links correctly. Most web servers handle this by redirecting /page1 to /page1/ before serving it. Netlify does this as well. Vercel works the same as DO in its default mode, but it allows the user to configure the trailing slash behavior to get the redirect. Hugo, for one, assumes all files are served with a trailing slash.
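For comparison, nginx does this redirect out of the box. A minimal static-site block that exhibits the behavior described above (paths are placeholders):

    server {
        listen 80;
        root /var/www/site;
        index index.html;

        location / {
            # A request for /page1 matches the directory page1/, and nginx
            # answers with a 301 to /page1/ before serving page1/index.html,
            # so relative links resolve against the correct base path.
            try_files $uri $uri/ =404;
        }
    }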


$0.10/GiB bandwidth. Ouch. That's 10x what they charge for droplets exceeding their included bandwidth. Seems to follow the trend of grossly overcharging for bandwidth that most other PaaS providers do.


That's insane; it's more than even Azure charges! They say in this very article how simple their pricing is, but that's just dishonest if they are charging for bandwidth (and through the nose at that).


Yea, that was my thought exactly. It's nuts, considering they offer Spaces (S3 alternative) with 1TB of outbound for $5/m (ignoring the fact that they also provide 250GB of storage).

I was really keen on it until I saw that section.


How is it different from Heroku? I am, and I think the majority of the market in this segment is, deeply invested in Heroku.


It seems like their pitch is that it's a bit cheaper. Although it doesn't have most of the features Heroku has... nowadays they have some pretty great stuff, like a managed Postgres database connected to Kafka.


Jelastic PaaS did this a long time ago, and in much better form. It is possible to automatically scale your apps horizontally or vertically without any additional configs. I really don't understand why it is not more popular; it is the smoothest cloud experience I have had compared to any other PaaS I've tested.


Very cool. I'll definitely give it a test spin. I've been a big fan of PaaS but Render / App Engine / Beanstalk fall a bit short on dev ux. This looks the most promising. If you can add multiple regions and wildcard ssl, you'll solve 2 big problems we have with Heroku.


If you're up for doing us a favor, I'd love to learn about the UX specifics we fell short on. Email is my username @render.com. Thank you!


The pricing is crazy!!


In terms of cost this seems silly. They say that it won’t get more expensive as you grow bigger, but that’s because it’s essentially a pre-deployed droplet.

The benefit of PaaS offerings is generally that you pay per request.

The bandwidth limits on this also seem incredibly out of whack with standard droplets.


Details aside, I think having a PaaS is awesome and something we're really missing on AWS.


AWS has a PaaS: Elastic Beanstalk.

Though I would forgive you for forgetting about EB since it sometimes feels like AWS's neglected step child.


I'm not an AWS expert by any means, but from what I've played around with, it's miles away from something like Heroku, where you just have a two-line config file in your GitHub repo, point Heroku at the repo, and say "deploy whenever there's a commit that passed the CI/CD pipeline".

Or from this here DO innovation either.


Yes, this is what I was referring to when I said EB sometimes seems like a neglected stepchild of AWS.

AWS in general follows a paradigm where the target market is generally large, huge enterprises with niche use cases and large IT organizations that don't mind (and often require) taking fine-grained control of things like setting up CI/CD pipelines, deployment configurations, etc. These things are all possible and powerful on AWS but they mostly require self-configuration, which is sort of the opposite of what Heroku/DO App Platform are trying to be.

Beanstalk is in a weird place because it still follows that AWS paradigm of "we want to expose all of these fine-grained controls to the power users at large enterprises" while also still attempting to make it easier for the average developer. The end result is that Beanstalk gets stuck somewhere in the middle.

Beanstalk is a very capable and powerful service, and you certainly can set up a CI/CD pipeline in the way you've described, but you have to set it up yourself using AWS CodePipeline or by using the beanstalk CLI... which is certainly not as developer friendly as something like Heroku, especially if it's just a hobby app that you're toying with on the weekends.

And to muddy the waters a bit more, AWS also has Amplify, which actually does have one-click-setup for a GitHub linked CI/CD pipeline, but AFAIK it's mostly meant for static websites or for mobile apps, so it isn't exactly the same target use cases as Beanstalk.


I had no idea that was the point of beanstalk. Now I know what I’m doing this evening.


I was in the same boat until someone made me feel dumb after I smugly said why I moved from aws over to Heroku for a project. Too late for me though on this project.

Have fun tonight!


Oh, the name Elastic Beanstalk didn't make it obvious???

/s


Weirdly enough, no, and Amazon is normally so good and consistent in their naming of products.


Yep. I deployed https://www.seamless.cloud to AWS Elastic Beanstalk, and I have been very happy with it. It is currently set up as a load balancer + 2 nodes + RDS PostgreSQL db. SSL is free and included if you also use Route53 for DNS. Pretty easy to set up, scale, and configure for an AWS product.


Amplify is also PaaS-ish.

https://aws.amazon.com/amplify/


And it's been there for more than 6 years IIRC.


This sounds good. I like DO.

This sentence is a bit worrying though:

"We automatically analyze your code, create containers, and run them on Kubernetes clusters."

I still want to be able to make _some_ decisions as a developer.


They analyse your code for things like composer.json or package.json to determine what base buildpack to use.

You do have complete control over the configuration, see https://www.digitalocean.com/docs/app-platform/references/ap...
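For a rough idea of the shape of that spec (field values here are illustrative; check the reference above for the exact schema), a single detected Node service looks something like:

    name: sample-app
    region: nyc
    services:
      - name: web
        github:
          repo: your-org/your-repo
          branch: main
          deploy_on_push: true
        environment_slug: node-js       # inferred from the detected buildpack, overridable
        run_command: npm start
        http_port: 8080
        instance_count: 1
        instance_size_slug: basic-xxs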


Disappointing to not see .NET Core support in 2020


Completely agree! I mean, not even Java? Those are two of the most used languages/platforms in the world.


All these new platforms seem to solely focus on JS/TS/Node (fair enough), Python (hmm I suppose that has a reasonable market share) and Go.

Really, Go? It's barely used!


Does this work with MongoDB? The documentation seems to imply you can only use MySQL, Redis, or PostgreSQL.


I would love to see an interface to Gitlab as well.


What are the differences with Cloud66?


HN when discussing their own project milestones: "Agile everything! Organic features."

HN when discussing a new product: "OUTRAGE! It must support EVERYTHING I want on launch day!"


Right, it’s almost as if there’s more than one person here, with more than one temperament.


A message board centered around startup accelerator might be expected to have a median temperament closer to a startup than a 100 year old accounting firm.


HN has many different subpopulations and the startup founder subgroup became a small minority many years ago. There are probably more users here now who actively identify against startups than the other way around. Cycle of life.


> HN when discussing a new product: "OUTRAGE! It must support EVERYTHING I want on launch day!"

You are describing the internet, mostly. There are simply too many unreasonable people on the internet.


I would love a non-trivial demo app or webcast that uses some of the components (eg. service + DB + static site + multiple workers) to check out the configuration required to make everything work together.

Maybe examples of how server/workers communicate, support for RPC, event queue. I'm in the process of figuring out all this stuff and App Platform sounds perfect, but without a starting point it may become more straightforward to take the platform-agnostic approach and spin up droplets where I control all of this.


Free plan has 1 GiB/month egress => 1 GiB spread over 30 days (2,592,000 seconds) is about 0.4 KiB/s sustained.

Basic plan has 40 GiB/month egress, about 16 KiB/s sustained.

Professional plan has 100 GiB/month, about 40 KiB/s sustained.

That's not much if you need to serve some actual content. This is really just to run cheap workers that don't require much CPU/RAM or networking. And even then it might be cheaper to just get a VPS.
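The arithmetic, for anyone who wants to sanity-check it:

    # Convert a monthly egress cap in GiB into an average sustained rate.
    def sustained_kib_per_s(gib_per_month, days=30):
        kib = gib_per_month * 1024 * 1024    # GiB -> KiB
        return kib / (days * 86400)          # seconds in a month

    for plan, cap in [("Free", 1), ("Basic", 40), ("Professional", 100)]:
        print(f"{plan}: {sustained_kib_per_s(cap):.1f} KiB/s")
    # Free: 0.4 KiB/s, Basic: 16.2 KiB/s, Professional: 40.5 KiB/s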


Is it just me or is Heroku mostly dead now? Most people I know have given up on "PaaS" and gone back to plain EC2 etc. It's just not that hard to spin up an instance and deploy a rails/django or whatever app, especially with Docker now.


> Is it just me or is Heroku mostly dead now?

Not just you, I think Heroku is getting a bit pointless.

After all, writing k8s yaml files and automating my infrastructure instead of focusing on my product gets me up in the morning, that's the real excitement. I shouldn't be the only one who is ecstatic about this.

So anyway, why would anyone want to use a PaaS?


I think it's just you. Lots of people use Heroku. I have like 4-5 apps deployed on it, and I know quite a few smaller companies that were completely on it last I checked.


Qovery helps any company to have a state of the art deployment platform on top of AWS, GCP, Azure, and Digital Ocean as well. More info here https://www.qovery.com/business


Why are people upvoting this? Serious question.

An easy and proprietary deploy layer isn't anything new. Docker already makes it quite easy to deploy anything, either to a single machine with docker-compose or to k8s for multiple nodes. The former isn't much harder than any "app platform", is enough for most, doesn't lock you into a vendor, and hosting costs are at their lowest since it's bare metal. You want something easier? There are already gazillions of other (proprietary) options.

Maybe I missed something but I don't get what makes DO any special here.


DigitalOcean has really changed. I just get the feeling leadership or something changed internally, pivoting away from being customer/developer focused. I'll stick with GCP.


Disagree on GCP.

I've been extremely hesitant about GCP ever since they hiked the price of the GKE control plane.

I really can't trust them anymore, given that nearly any product in Google's suite can just get deprecated whenever they feel like it.


It's really not comparable. Google is for companies and the smallest bill is $100 a month. Digital Ocean is for enthusiasts and the highest bill is $100 a month.


They recently increased the price of on-VM storage by a minimum of 50% for larger plans (>100GB) in a pretty sneaky way, so I agree with the above sentiment. The pricing of recent releases leans more towards large enterprise deployments or overly funded startups.


Seems a bit nonsensical to comment this on an article about how they're investing more in developer focus?


What are you talking about? They sponsor Hacktoberfest, which gives people free T-shirts for spamming meaningless pull requests on open source projects on Github. How much more developer focus do you want?!?


Yes, there was a huge shift in leadership in 2018. Check out the LinkedIn profiles of the founders and original employees. You'll see that they either left or shifted from executives to board members.



