I am so glad to see this. I was looking to deploy an app and the choice is either Heroku or managing your own server, which I don't want to do.
Heroku gives instant deployment for the most common types of apps (python/java/ruby). It's PaaS done right, it's fantastic. You should really have a look if you're not aware of it, it's only $7 for a starter app.
Problem is, scaling up is about $50 per gigabyte of memory which makes it a dead end for anything non trivial. You're forced to go to digital ocean / Linode / OVH instead to have something affordable.
That leaves Digital Ocean as the only alternative (don't trust Linode) and it sucks because it only gives me a server to manage. I don't want to manage a server, I want to run a (python) application. It's 2020; this sort of thing should auto-deploy from GitHub without bothering me to manage an operating system.
Why not take the initial complexity cost and learn k8s and containerization? That's what I've been doing as a step-up from Heroku and have been very happy with it.
My project currently runs on DigitalOcean managed k8s and setting it up really wasn't hard. I had everything already in containers for dev/prod anyway, and having those run on k8s just meant I had to write the deployment manifests that pull the containers and set up the pods.
What I love about managed k8s (and have also shared a couple of times in comments on HN) is that it's separated from the servers below. I can have 20 containers (that can be separate things altogether) running on the cheapest Droplet and would only pay whatever that Droplet costs, so under $20. Then when I need more power, I just scale the Droplets used for the k8s cluster and my pods/containers get shuffled around the available resources automatically.
I liked this approach so much that I now have a private 'personal projects cluster' that runs on DigitalOcean with the cheapest/weakest Droplet available, and whenever I have a small hobby project that needs to be hosted somewhere, I just add that container to the k8s cluster and be done with it.
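For anyone wondering what "write the deployment manifests" amounts to in practice, it's roughly this little. Here's a minimal sketch using the official kubernetes Python client instead of hand-written YAML; the image, names and labels are made-up placeholders, and it assumes your kubeconfig already points at the DO cluster:

    from kubernetes import client, config

    # Assumes `doctl kubernetes cluster kubeconfig save <cluster>` has already been run.
    config.load_kube_config()

    # Hypothetical hobby-project container; swap in your own registry/image.
    container = client.V1Container(
        name="hobby-app",
        image="registry.digitalocean.com/my-registry/hobby-app:latest",
        ports=[client.V1ContainerPort(container_port=8080)],
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="hobby-app"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "hobby-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hobby-app"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # Equivalent to `kubectl apply -f deployment.yaml` for this one object.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

The hand-written YAML equivalent is about the same length; add a Service on top and that's the whole per-project overhead.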
I’m waiting for Digital Ocean to have something like Google Cloud Run.
Google Cloud Run is essentially: here's a Docker image that listens on the $PORT env variable; spin it up when you get requests. It will handle X queries per second (you can set the limit); if more than X, scale it up to this many replicas.
I pay about 10 cents for my site. Zero maintenance. I push code to GitHub, GitHub builds an image, pushes it to GCR and tells Cloud Run to use the new image.
This is how things ought to work for simple web-server-like functionality. “Here’s a Dockerfile and source tree: build it, run it and auto-scale it with this https domain.” Boom!
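Concretely, the entire contract on the app side is "listen on whatever $PORT says". A minimal sketch (Flask here purely as an example; any HTTP server in any language works, and the route is made up):

    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello from Cloud Run"

    if __name__ == "__main__":
        # Cloud Run injects PORT; default to 8080 for local runs.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

Wrap that in a Dockerfile whose CMD starts the server and that's everything Cloud Run needs from you.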
> Why not take the initial complexity cost and learn k8s and containerization?
I would argue that the complexity cost is ongoing, not just front-loaded. There is an overhead for every new application: for instance, putting it in a Docker image, deploying it using the GitOps flavour of the month, and then any extra policy management and routing.
> Why not take the initial complexity cost and learn k8s and containerization?
Security, OS patches, maintenance and, more than anything, DDoS attacks. I don't want to handle all that; I just want to concentrate on development, not maintenance.
Managed k8s offerings usually take care of everything below the k8s API. Our GKEs auto-upgrade their control plane and the worker nodes, both OS and k8s versions. I could force my way onto the workers via SSH if I really wanted to, but by default I can't even get on those machines. All you ever do yourself is kubectl this, kubectl that. I believe DO's k8s offering is like that as well.
True, but which layer of “tech” is actually useful to you? I often solve problems with only Google Sheets. Why bother with an app if you don’t need it? Same for other layers of “infra”.
This is spot on, except for one thing: Google Cloud Run.
It's the closest offering I've found to Heroku and am planning to migrate all our services to it due to significantly better pricing. Make sure you look into it.
Google Cloud Run doesn't provide nearly the same features as Heroku. For example, there is no easy way to manage secrets with Cloud Run. There is no way to run a worker process. Integration with other Google Cloud things like Cloud SQL is clunky. Cloud Run is okay to get started but almost all apps will need more.
We've been successfully using Cloud Tasks and firing payloads right back at our HTTP cloud run service to solve for the lack of background workers. We had to build a mini-framework around it, but it works surprisingly well.
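For the curious, the core of that pattern is just enqueueing an HTTP task that points back at your own Cloud Run URL. A rough sketch with a recent google-cloud-tasks client (project, queue, URL and payload are all placeholders):

    import json
    from google.cloud import tasks_v2

    client = tasks_v2.CloudTasksClient()
    parent = client.queue_path("my-project", "us-central1", "background-jobs")

    task = {
        "http_request": {
            "http_method": tasks_v2.HttpMethod.POST,
            # Our own Cloud Run service; the "worker" is just another route on it.
            "url": "https://my-service-abc123-uc.a.run.app/tasks/send-email",
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"user_id": 42}).encode(),
        }
    }

    client.create_task(request={"parent": parent, "task": task})

Cloud Tasks then delivers the payload as a normal HTTPS request with retries, so background work becomes ordinary request handling.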
The Google part of the name means that the service could be shut down at a moment's notice if it isn't providing millions of dollars in revenue, and even then it's a toss-up whether the people in charge get bored of it.
Google is unique in that it somehow lets employees start random projects under the google brand. If they had a separate brand for these experimental projects they would have avoided a lot of headaches.
Google Cloud isn't someone's side project though, so the risk is much lower. You're right that services can be shut down at a moment's notice, but that applies to any service of any kind. Unless you host with multiple cloud providers at the same time you cannot avoid that risk.
That's one of the things I like about Cloud Run. There's no vendor lock in really. Your app is just a Docker image running any normal 12 factor app which could be deployed to Heroku, Cloud Run, DO or any other PaaS.
There was a thread the other day that mentioned a variety of products that were killed by Google, which were aggregated here: https://killedbygoogle.com/
Google seemingly kills more products than they run and I wouldn't rely on them for any aspect of my business, based on this, and their historically awful CS.
That was never a part of Google Cloud. It was a feature of Chrome browser/OS and allowed for sharing printers over the internet because of the limitations back in 2010.
Google has a terrible reputation with consumer services but has been pretty decent with Google Cloud so far. The biggest negative example would be the Google Maps pricing changes, but that seems to be a different class of issue.
Recently discovered Cloud Run and it's absolutely amazing. I will be using it for as much as possible going forward. Cheap, easy scaling. For a CRUD app it does everything you could ever want.
Agreed; we reduced our cost from close to $50/month to a few dollars per month by simply switching from Google App Engine to Cloud Run. The initial reason for switching was that I wanted a dockerized solution so that we are not locked in to arbitrary versions of whatever App Engine supports. It does that. But the cost advantage makes it a pretty sweet deal.
Cloud Run is very easy to get started with. It's basically a service that runs and scales a Docker-based application. They bill per request based on some notional cost of CPU and memory. They give you some options for using bigger instances, which influences the per-request cost. You can go from 256MB to 2GB of memory and I think you can have up to 2 or 4 vCPUs (one of those; we use 1). You can specify a minimum (default 0) and a maximum (default 1000) number of instances. If a request comes in and nothing is running, it starts an instance. After idling a while it disappears again. So, if you get a low amount of traffic, mostly there will be something running without costing too much. At some point you raise the minimum to 1 instance and it starts costing a bit more.
Crucially, there is a free tier: you don't get charged unless you hit a certain number of requests. So, when prototyping, this is very cheap. Basically we've managed to keep our cost to around a few dollars over the last months.
As this is part of Google Cloud, you can transition to other solutions (like their hosted Kubernetes) eventually. You also have access to other stuff; e.g. we use Firestore as a cheap (but limited) database and the Google secret store for secrets, and some storage buckets.
When you click together a Cloud Run deployment, it can create a Cloud Build for you that points to your git repository and sets up automated deployments. If you have a Dockerfile, it will try to build and deploy that. If you need to customize what the build does, you can provide a custom cloudbuild.yaml file (similar to GitHub Actions, Travis CI, and other popular CI options).
After it comes up, you get a nice https endpoint. You can actually add password protection to that as well if you need it. And you can optionally use your own domain.
So, we are running a Spring Boot project for next to nothing. When the time comes, we'll likely swap out Firestore for a proper database, as it may get expensive: they bill per read and write, and certain operations are just going to use up a lot of reads (e.g. a count operation). It's fine as a dumb key-value store but it is very limited in terms of other features (e.g. querying).
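To illustrate the dumb key-value store point, here's roughly all we ask of it (Python for brevity; the Java client is analogous, and the collection/field names are invented). Every call below is a billed read or write, which is exactly why count-style operations add up:

    from google.cloud import firestore

    db = firestore.Client()

    # One billed write.
    db.collection("profiles").document("user-123").set({"name": "Ada", "plan": "free"})

    # One billed read.
    doc = db.collection("profiles").document("user-123").get()
    print(doc.to_dict())

    # A "count" done this way streams (and bills) every matching document.
    total = sum(1 for _ in db.collection("profiles").stream())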
However, Google's hosted databases are expensive and don't really make sense until you are ready to spend a few hundred dollars per month. Same with Kubernetes. Kubernetes does not make sense until you are ready to commit to a burn rate of hundreds of dollars per month. And it's completely overkill unless you actually are deploying multiple things.
I've been using Google App Engine as the backend of my scratch-my-own-itch Android app. Granted, it only has 60+ active users (according to Google Play Console), but my monthly bill for that has always been <$1 (the parts that aren't free are Secret Manager and the storage for my deploys).
You can even run multiple App Engine apps for mostly free, because their free tier is calculated based on the actual instance-hours running, and with App Engine you can configure it so that when you don't get any traffic there's no instance running (they spin up a new one when there's a new request in that case).
Fly.io is more exciting for me because it's edge app servers and it terminates WebSockets. I've been envisioning a LiveView-like framework, or heck even a regular old REST API, where the client connects to the fly.io server with a single request, then the app server makes a request to N backend processing servers using an efficient protocol like gRPC.
The only missing piece for fly.io is the data layer. The Redis they provide is only for caching purposes. Once there is a managed distributed data store add-on, it's done. It's very likely LiveView will need to hit a database. Right now pub/sub with the Redis adapter is the easiest solution for a cross-region Phoenix application (well, Phoenix IS my application!)
That's a good thing actually since it means you never have cold starts (unlike Google Cloud Run). Their micro instances cost like $3-4 which isn't that much.
It should be an option in my opinion, as cold start time can vary a lot depending on which image and runtime you are running, whether it's a Java app or a statically linked Rust program.
I have a bunch of very small and simple services, which I run on a single VPS. Moving to fly.io would mean paying several times more. I tried it, and the service is nice indeed, but not for me I guess.
It's probably worth looking into the big cloud providers rather than the little guys. In Azure you can have an App Service (a deployed app in any one of loads of languages, without looking after the machine it sits on) with 1.75GB RAM for about $12 a month. Obviously your usage may vary and that will affect the price. But I get the feeling that the big players are cheaper than people think they are for small projects.
The big players have separate charges for bandwidth and disk and other hidden stuff. They are way more expensive than Digital Ocean / OVH all-inclusive. Worse, the costs are unpredictable, which makes them a no-go for a side project; I can't risk accidentally getting a $1000 bill.
As a real-world example, I run a personal blog. If it were running on S3, my personal finances would have been obliterated when it got featured on HN and served 1+ TB of traffic.
I've had things hit the HN front page a few times while just hosting on ec2 and never had a noticeable increase in charges. Then again, I wasn't hosting very large files.
Can HN really deliver enough traffic to a static site to cost a significant amount? I've had mildly popular posts on HN for my Netlify blog (John Carmack tweeted about it!) and not had to pay for bandwidth.
The concern for me is a lack of hard limit on spending on GCP, Azure, and AWS. If I screw up and allocate a bunch of resources unintentionally, I'm left holding the bill. That's a terrible setup for PaaS because all programming involves mistakes eventually, especially for new users learning the system.
Granted, there are likely limits on accounts, but those are to protect the services from fraud, not to protect the user from overspending. The limits aren't well defined and it's not something you can rely on, because MS might consider $10k/month a small account while it's a ton of money for me.
Azure customers have been asking for hard limits on spending for 8 [1] years with radio silence for the last 5.
There's a difference in goals I guess. If I spend more than expected I WANT things to break. Microsoft, Google, and Amazon want me to spend unlimited amounts of money, even if I don't have it. At least AWS can be set up using a prepaid credit card, so if I screw up they have to call me to collect their money and I negotiate.
- A hobby kid doesn't want to overpay: shut everything down.
- A business absolutely doesn't care about spend: if they get some kind of marketing-result traffic spike, they just want the site to stay up even if it blows the average budget.
Very large businesses might not care about spend, but pretty much everyone else does.
Almost everyone will be unhappy if they're stuck with a six figure bill for non-converting visits because their site went viral. Everyone will be unhappy if they're stuck with a six figure bill because their site was used in a DDoS reflection attack, or got pwned and used in a DDoS attack directly.
Everything I run on nickel-and-dime-you-to-death cloud services, such as AWS, won't even respond to unauthenticated requests (nginx returns 444, or it's reachable only via WireGuard), precisely to mitigate this risk. To do anything else is just financially irresponsible.
I've even considered coding a kill switch that will shut down AWS instances if they exceed billing limits, but the fact that AWS charges a fee to check your spend via an API makes this awkward and speaks volumes about Amazon's motivations.
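For what it's worth, a bare-bones sketch of that kill switch with boto3 (the threshold and the stop-everything reaction are made up, and note the Cost Explorer call is itself billed, which is the awkward part):

    import boto3
    from datetime import date, timedelta

    LIMIT_USD = 50.0  # made-up monthly threshold

    ce = boto3.client("ce")  # Cost Explorer: each API request is itself charged
    resp = ce.get_cost_and_usage(
        TimePeriod={
            "Start": date.today().replace(day=1).isoformat(),
            "End": (date.today() + timedelta(days=1)).isoformat(),  # End is exclusive
        },
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    spend = float(resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

    if spend > LIMIT_USD:
        ec2 = boto3.client("ec2")
        instance_ids = [
            i["InstanceId"]
            for r in ec2.describe_instances()["Reservations"]
            for i in r["Instances"]
        ]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)

Run something like this from cron outside AWS and it at least bounds the damage to however often it polls.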
Amazon's refusal to offer spending caps on AWS benefits Amazon and only Amazon.
>"Business absolutely doesn't care about spend, if they get some kind of marketing result traffic spike they just want the site to stay up even if it blows the average budget"
While this statement can be true in some cases, I vividly remember the bosses of a largish (budget-wise) company running around like headless chickens, yelling to kill every running instance of the service, just because they were hit by way more "success" than they'd planned for.
Hard spend limits are not an easy problem in the cloud. There are too many things that incur costs. Every time this comes up, I ask the same question: what do you expect to happen when the quota is hit?
Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not, then they're just subsidizing the costs. If it's a soft limit then it's just a warning, and if you just want a warning then billing alarms already exist in every cloud.
Also for most customers, the data and service is far more important than the cost. Bills can be negotiated or forgiven afterwards. Lost data and customers can't.
>Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not, then they're just subsidizing the costs. If it's a soft limit then it's just a warning, and if you just want a warning then billing alarms already exist in every cloud.
You know, when I hit the storage limit of my SSD it doesn't wipe my data; it just ceases to store more data. When I rent a server for a fixed price and my service is under a DDoS attack, then it will simply cease to work for the duration of the attack. If there is a variable service like Lambda that charges per execution, then Lambda can simply cease to run my jobs.
You can neatly separate time-based and usage-based charges and set a limit for them separately. It doesn't even need to be a monetary limit; it could be a resource-based limit. Every service would be limited to 0GB storage, 0GB RAM, 0 nodes, 0 queries, 0 API calls by default and you set the limit to whatever you want. AWS or Google Cloud could then calculate the maximum possible bill for the limits you have chosen. People can then set their limits so that a surprise bill won't be significantly above their usual bill.
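To make the "maximum possible bill" idea concrete, it's nothing fancier than this (limits and unit prices invented purely for illustration):

    # Per-resource limits the customer picked, and per-unit prices (all numbers invented).
    limits = {"storage_gb": 50, "egress_gb": 100, "function_invocations": 1_000_000}
    unit_price = {"storage_gb": 0.10, "egress_gb": 0.09, "function_invocations": 0.0000002}

    max_bill = sum(limits[r] * unit_price[r] for r in limits)
    print(f"worst case this month: ${max_bill:.2f}")  # bounded, no surprise bills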
Your comment is lazy and not very creative. You're just throwing your hands up and pretending there is no other way even though cloud providers have created this situation for their own benefit.
The vast majority of overages are due to user error. These errors would just be shifted to include quota mistakes, which can incur data or service loss. Usage limits might be softer than monetary limits which are bounded by the time dimension, but can still cause problems since they do not discriminate between good vs bad traffic.
Before you go around calling people lazy, I suggest you put more thought into why creating more options for people who are overwhelmed by options is generally not productive and can cause unintended consequences and expose liability. With some more thought, you'll also realize that AWS is optimized for businesses and, as stated, losing customers or data is much worse than paying a higher bill, which can always be negotiated after the fact.
I want all services to be rate limited. What I don't want is for some runaway process (whatever the cause) to bankrupt me before I can respond to any alerts (i.e within hours).
In other words, I don't necessarily need to set a hard spending limit, but I want to set a hard spending growth limit (allowing for short bursts), either directly in monetary terms or indirectly through rate limits on individual services.
> Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent?
I'd be absolutely fine with that in a sub-account or resource group as long as I had to enable it.
A while back I wanted to try out an Azure Resource Manager template as part of learning something. Since I was _learning_ it, I wasn't 100% positive what it was going to do, but I knew that it should cost about $1 to deploy it.
With a hard limit on spending I would have set it to $10, run the thing and been ok with the account being wiped if I hit $10. Even $100 I could tolerate. Unlimited $$ was too risky for me, so I chickened out.
The worst part is I can't even delete my CC because it's tied to an expired trial that I can't update billing for.
> Also for most customers, the data and service is far more important than the cost.
I run a small side business, and these unlimited cloud plans are just a no-go. A medium to large company could totally absorb a 5-figure bill, but that would be a death sentence for my side project. Also, considering the variable bandwidth costs of AWS, Azure or Cloudflare, a competitor could simply rent an OVH server and inflict insane costs on my business while only spending 1/10 of the money.
Right now, I'm using Heroku (with a limited number of dynos and a single PgSQL database) together with BunnyCDN (which lets me prepay for usage). If I ever get DDoS'ed, my app will most probably be inaccessible or at least significantly slower, and I'll receive an email alert, at which point I can decide myself whether to allocate more resources.
No. I once had a site hit #1 on HN. It was hosted on a DreamHost shared VPS with WordPress. It barely broke a sweat. I have no idea what these guys are doing who are having their sites bulldozed by HN traffic, but it's worryingly common for something that should never happen.
This has always confused me. What is going on when someone's site is taken down by HN traffic? (Maybe the fact it's on HN when this occurs is just coincidence: maybe the real traffic loads are always from reddit or twitter or something in these cases?)
(My experience with high-ranking HN posts: initially with DreamHost, later with cheapest AWS ec2—never a noticeable impact with either)
Among articles you see on the front page, there is a two orders of magnitude difference in visits between the more popular and the less popular.
HN/Reddit/Twitter/Android can all send a similar amount of traffic. There's one order of magnitude there: how many places is the article featured at the same time?
Then there's an order of magnitude within each place: how much interest and readership can the article gather? Highly variable. The first comment alone can make or break an article.
This sounds off. Both reddit and twitter have the potential for vastly more traffic than HN.
I also haven’t had the number one spot on HN (except maybe briefly), but was in 2 and 3 for long stretches and even an order of magnitude more traffic wouldn’t have been a problem.
Two orders probably would have been, but I have a hard time imagining a 100x traffic difference between the #1 spot and the #2 spot. Then again, if it was a very slow day here vs a very busy day, maybe (though in my case it wasn't a very slow day).
I assume you're targeting r/programming and similar subs; they're similar to HN in aggregate. You're right that Reddit and Twitter have a way bigger audience in total, but only a fraction of all Reddit users is relevant. Assume we're talking about a tech blog, not articles on the election or Brexit?
It's not about rank. It's about the specifics of the article, mainly the title and the content. It simply attracts more or less readership.
I've had #1 multiple times. I've had articles that stayed on the front page for multiple days.
Wouldn't be surprised if I'm in the top 1% of personal bloggers on HN or something like that. I'd have been shelling out thousands of dollars to AWS over the years if I were using anything AWS; or, more likely, I'd either be broke or the blog would have crumbled under the traffic each time, never going viral.
I don’t usually do this, but I decided to check your post history. I don’t know if anyone else posted your blog posts to HN, but assuming it’s just you, I counted five posts (excluding the flagged ones) that would have made it to the front page of HN for any meaningful amount of time. Based on this, I would say that you are unlikely to be HN’s top blogger.
And I don’t know how you’d set up your blog with AWS but I don’t see how it could be expensive to host static content there.
Wrong assumption, a fair bunch of the posts came from other people :p
I honestly wonder what's the average distribution for HN contributors. I imagine it's not much for personal blogs. Not trying to compare myself to the new york times or cloudflare blog obviously.
Heh. Checked by the domain instead and got 10 submissions with double digit or more vote counts. I still think pg and jacquesm have you beat by quite a bit but yes you have 2x the front page posts I initially spotted.
That's simply not true. I hit #1 a few times with content hosted on S3. Ended up paying maybe extra $2 those months. I'd be worried if I hosted any large files that came with it, but just a blog post? Barely noticeable.
You can lose money accidentally in many ways. I agree you have to watch out, but still disagree with the number of people dismissing S3 as a quick way to bankruptcy if you get HN #1.
I'm currently on AWS for my site and in the process of researching alternatives. I share your concern of something going wrong and being stuck with a huge bill. Someone pointed out that 1TB of outgoing traffic from Amazon EC2 would cost $90. I'm fortunate enough that that won't obliterate me, but I won't be happy if that happens. I'd rather my blog get hugged to death. Going viral isn't worth $90 to me.
But I don't think DO really solves this problem either. They say they have spending caps in some of their marketing materials, but the finer print says that overage billing is $0.01/GB. Now that's a whole lot better than Amazon's $0.09/GB, but it's not a cap.
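Just to put numbers on the difference in exposure, using the rates above and the 1 TB figure from earlier in the thread:

    overage_gb = 1000                                  # the ~1 TB spike mentioned above
    print(f"AWS egress:  ${overage_gb * 0.09:.0f}")    # about $90
    print(f"DO overage:  ${overage_gb * 0.01:.0f}")    # about $10 -- cheaper, but still uncapped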
DO can say they have "predictable pricing" because in the vast majority of the cases the "free allotment" that comes with your droplet is enough, so you never see a bandwidth charge, you pay the cost of your droplet and you're done. So yes, it's more predictable because Amazon would charge you $5.23 one month, $4.87 another month, and DO charges you $5 every month.
But I'm not worried about the 99% case, I'm worried about the extreme scenario where I somehow go viral or get DOSed. And both options leave me exposed.
That's not to say DO isn't a better deal for the hobbyist than AWS. The equivalent of DO's $5 droplet will run you much more on AWS, especially if you actually use the bandwidth they're allotting you. And the big 3 do a lot of nickel-and-diming, which is a nuisance compared to the simpler pricing model of the smaller providers.
You should be able to get the cost down significantly by caching on Cloudflare. My company managed to deliver 99.9%+ of static pages from Cloudflare, which allowed us to serve large amounts of traffic from a small backend.
Anecdotally: when I set up my account on Azure there was a bug in the web client that set my region to Canada. So I opened a support ticket, and they said they can't change the region on my account and the workaround was to open another account with a different email =/.
This is common to many companies. There are thousands of regulatory, taxation and licensing things that depend on the customer's region, and it simply isn't practical to support a user journey that starts following one set of laws and then changes to any other.
Companies that allow it almost certainly are not meeting all the relevant laws for those customers that do change region.
The Digital Ocean volume pricing of $1 per 10GB per month seems very steep... I can literally buy fresh SSDs every month for that money. The container pricing is reasonable though.
In digital ocean, I believe you get a volume for free with your $5/month instance/droplet (it's not added separately, which differs from most other cloud providers).
The screenshot you shared is attempting to add additional volumes to a droplet. See the pricing for droplets, it includes 25GB of SSD and 1TB of transfer.
This is approximately the going rate for all the major cloud providers (links [0]). Sure, you could buy your own SSDs, but how are you going to connect them to the VMs? I suppose this might be where their profit is, especially because these are logical volumes anyway. But it's not like you can just go out and beat this price at home with minimal effort.
> But it's not like you can just go out and beat this price at home with minimal effort.
That's exactly what I did. Taking the cost of my time for setting everything up and maintaining it, I estimate the net cost is about 1/10th of what it would be in the cloud.
This is why if you're in the Rails world, I'll always recommend Hatchbox [0]. It takes the PaaS layer from Heroku and applies it to generic nodes on DO or AWS - I'm grandfathered into a really good plan price, but even as it stands today, if you're building Rails apps, it's a great option.
You can search HN for top articles mentioning linode. The comments speak for themselves. Basically Digital Ocean is better in every aspect you can think of.
Strongly disagree. I've been a customer since years before DO was launched. I've always benchmarked the same apps under the same load on both of them over the years, and Linode has beaten DO every single time. Never had any issues with them.
Not only that, but outright lying about the breaches. When I used them in early 2010s, they managed to expose two different virtual CC numbers (which I _only_ used for Linode) to fraudulent charges. But both times they insisted I was not part of the breach they were suffering at the time ...
> Problem is, scaling up is about $50 per gigabyte of memory which makes it a dead end for anything non trivial.
That isn't exactly true, for a few reasons.
First is, the top tier public sticker price is roughly $35/GB.
Second is, at higher scales, you'll sign a contract with them that discounts your rates further.
Third is, this is presuming you're paying $ for memory alone. While that might be relevant for individual apps which need that specifically, on the whole you're paying for the ecosystem, the standardization, the PaaS. You're trading money for your time back. The product you're buying is not simply GB.
> Third is, this is presuming you're paying $ for memory alone. While that might be relevant for individual apps which need that specifically, on the whole you're paying for the ecosystem, the standardization, the PaaS. You're trading money for your time back. The product you're buying is not simply GB.
Except when the only thing you need over the $7 hobby instance is more memory.
Not exactly cheap, but run.pivotal.io (Cloud Foundry) and OpenShift Online are both around $25/GB per month, which is a little more accessible. I'm not sure about Pivotal's online platform, but PCF has some pretty simple autoscaling plugins that could spin down instances during low usage.
A lot of Fortune 500 companies have Cloud Foundry setups and it's built on some of the same tech as Heroku, so it's fairly accessible.
What about just writing a Go app as one static binary? Then you don't need a deployment platform at all: rent some cheap VPS and put nginx in front for load balancing. To update it, just stop the binary, copy over a newer one, and restart. That's it.
The ways to deploy Java/Python/Ruby/Node.js are complex on their own; I feel Go fixes that part through its language design.
Has GAE sorted out secrets management? Last I checked, they required you to commit secrets to the repo you push, which necessitates your secrets being on whatever computer (or whoever's computer) does production deploys. Contrast this with DO/Heroku/etc., which let you set environment variables.
Some folks suggest using a DB to store secrets on GAE, but this is (IMO) just obfuscation.
This still seems ridiculous. Why did I need to keep secrets in my repo to begin with? GAE, as far as I can tell, has been the only major PaaS that hasn't offered a solution for this. It's so easy to get wrong...it contradicts one of the biggest rules of version control: keep your secrets out of your repo.
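For what it's worth, the Secret Manager route mentioned elsewhere in the thread looks roughly like this from app code, so nothing secret has to live in the repo (project and secret names are placeholders, current google-cloud-secret-manager client assumed):

    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    # Placeholder project/secret names.
    name = "projects/my-project/secrets/db-password/versions/latest"
    response = client.access_secret_version(request={"name": name})
    db_password = response.payload.data.decode("utf-8")

Still not as frictionless as Heroku-style config vars, but it does keep secrets out of version control.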
Who cares? The point is that it's their problem, you don't need to think about it, it's done for you. =)
In practice your application restarts at least once a week. It's transparent because a new instance is started first and takes over. The provider can move applications around to add/drain servers and perform maintenance.
I read that thread, it's quite concerning. Now I'm wondering how to determine if other providers are doing the same thing. I suppose I can test on a new instance, but that can only tell me definitively if they are doing it, not if they're not.
The thread is 7 years old, and DO hired competent people along the way.
The issue is not that it happened, or that they had clueless staff; the issue is that their board and senior management thought that the best way to respond to their own error was to blatantly lie in their blog that there was no problem at all.
It seems to have worked; they're massively valuable now.