Almost free serverless on-demand Minecraft server in AWS (github.com/doctorray117)
431 points by ptrik on Sept 8, 2021 | 219 comments



I wrote effectively the same thing without AWS lambdas https://playmcnow.com/

It's so cheap[1] to start and stop servers on demand that I've decided to give servers away for free. I wrote a little proxy in Go that detects Minecraft login requests and starts a server with the corresponding world. When the connection drops, I stop the server again.
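
The proxy is written in Go, but the detection side is simple enough to sketch in a few lines of Python, assuming the vanilla handshake layout (the last VarInt field, "next state", is 1 for a status ping and 2 for a login); the start_world() call is hypothetical, and a real proxy would of course keep the connection open and forward it:

    import socket

    def read_varint(sock: socket.socket) -> int:
        """Read a Minecraft-protocol VarInt from the socket."""
        value, shift = 0, 0
        while True:
            byte = sock.recv(1)
            if not byte:
                raise ConnectionError("client disconnected mid-packet")
            value |= (byte[0] & 0x7F) << shift
            if not byte[0] & 0x80:
                return value
            shift += 7

    def wants_login(client: socket.socket) -> bool:
        """Parse the handshake packet; next state 2 means a login attempt."""
        read_varint(client)              # packet length (ignored)
        read_varint(client)              # packet id, 0x00 for the handshake
        read_varint(client)              # protocol version
        addr_len = read_varint(client)
        client.recv(addr_len)            # server address (may need a recv loop in practice)
        client.recv(2)                   # server port, unsigned short
        return read_varint(client) == 2  # next state: 1 = status, 2 = login

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 25565))
    listener.listen()
    while True:
        client, _ = listener.accept()
        if wants_login(client):
            pass  # start_world(...) here, then proxy the connection through
        client.close()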

[1] For 15€/month you can have ~30 servers running in parallel and thousands of powered down worlds. https://contabo.com/en/vps/


BTW, other fun hacks include:

1) Initial world creation is quite slow: 10-15 seconds on moderately powerful hardware. Since I want first joins to be as fast as possible, I keep 1000 pre-generated worlds, and one of them is chosen randomly to be used as a template on your first login.

2) In addition to login packets, Minecraft clients send a ping packet to check if the server is online. I forge a valid response because I don't want to start a server just so you can see "server is up, 0 players online".
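
Forging that response is cheap because the status reply is just a length-prefixed JSON string; a minimal Python sketch (the version number, player counts and MOTD text below are made up):

    import json

    def write_varint(value: int) -> bytes:
        """Encode a non-negative int as a Minecraft-protocol VarInt."""
        out = b""
        while True:
            byte = value & 0x7F
            value >>= 7
            if value:
                out += bytes([byte | 0x80])
            else:
                return out + bytes([byte])

    def forged_status_response() -> bytes:
        """Answer a status request without ever starting the real server."""
        status = {
            "version": {"name": "1.17.1", "protocol": 756},  # should match the client
            "players": {"max": 20, "online": 0},
            "description": {"text": "Server is asleep - join to wake it up"},
        }
        payload = json.dumps(status).encode("utf-8")
        body = write_varint(0x00) + write_varint(len(payload)) + payload  # packet id + string
        return write_varint(len(body)) + body                            # length prefix

    # Send forged_status_response() back after the client's status request,
    # then echo its ping packet (id 0x01) verbatim to complete the exchange.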


This gives me an idea for recycling the powered down worlds into "new" worlds for new users. Being able to see and use the abandoned bases of past players would be quite fun.

You could take this a step further by "decaying" the bases in some way (remove torches, remove some large percent of the items in chests, add vines, weather rock, move blocks from the ceiling to the floor, etc)


The issue with that, at least years ago, was that when Minecraft shipped an update introducing some new generated resource, you had to travel to the edge of the generated map to find any of it. Most servers restart periodically to get fresh resources near spawn, not only for new resources that might have come out in recent updates, but also because the area has been completely harvested by past players, like some scarred piece of land, and you need to start venturing far out to find trees or ore.


Is there a mod that gradually adds new resources to old worlds: puts ores back into the gaps in caves, restores the environment, and also removes torches and "decays" old structures over time?

Otherwise someone should make one. No need to fully reset servers any more.


I think some servers do "chunk resets". The idea is to take chunks with little or no player activity and delete them so they'll be regenerated next time someone comes along. I think I've heard of this being done automatically (chunks that a player hasn't been to in X days will be deleted). It would lead to a discontinuous world though, with older chunks having sharp borders with newer chunks that aren't even the same biome.


You get to explore an ancient civilization that went extinct!


This is a great idea, but it may be tricky, as people often just mess around with the world. You would need to decide, somehow, which abandoned bases are visually appealing and/or match the environment.


If my memory serves me correctly, it is rather easy to separate natural blocks from player-made ones. Either that information is stored separately on disk, or at worst you can deduce it by "subtracting" a fresh chunk generated from the seed.

Then you just need a good heuristic to guess whether or not a group of blocks matches your definition of a base to be explored.


Yep!

You could take this a step further; once you have determined that a set of chunks has been modified significantly, you could apply that set of changes to the same coordinates of any map generated from that seed, meaning you can combine changes from multiple worlds into one (with the same seed).


Fwiw, there are public servers that have been running for a decade now that have a similar effect. The world is chock full of player content just sitting around waiting to be rediscovered.


Even without modifying the world it’s an interesting idea. Basically the option to fork some random person’s world as your new one.


Cool way to see and judge phallus build quality.


I was looking for a way to incorporate forging responses to the pings but couldn't find a way to consistently have a socket open on 25565 that didn't incur a hard monthly cost. Your service and approach looks great, I'm surprised I didn't run across it when researching before...


I'd love to discuss helping out with this project, but I don't see a way to contact you on your HN profile or on the site. Is that something you are open to?


Sure, get in touch at stanislav.ltb[at]gmail[dot]com :D


Sent!


Those Contabo VPSs are shockingly cheap. Seemingly several times cheaper than, say, vultr. How do they do it?


I could see a couple of ways to make it work: use the NVMe drives they mention as swap space to get cheaper (but lower-performance) "RAM", and put multiple vCPUs on each core. Lower performance, but still exactly what they offered.

But that's my skepticism talking.


Massive CPU steal and poor storage I/O performance.


It would be good to add some explanation to your site describing what it is and how to use it. I clicked the link and had no idea what to do next.


Wow, this is really cool; thanks for making it free, as it seems really handy. One question though: is there a way to get op so I can use commands for creative mode, etc.? Also my skin doesn't seem to load on the server for some reason, but it's not the end of the world. Thanks.


Thanks for trying it out! Op commands are disabled because I haven't gotten around to making every user admin by default. I also remember some vague security concerns in the back of my head ;D. It would be cooler if I could also let you change the game mode from the web interface.

As for the skins - servers run in "offline mode", which means no communication with the Microsoft authentication API responsible for validating accounts and (I believe) giving skins to players.


Would you say their lower M tier is enough to run just a couple of worlds? (Possibly modded.)


Yup. In my experience a stable server requires about 700 MB of RAM. There are reports on https://www.reddit.com/r/admincraft/ saying you can run it in 200 MB, but I haven't managed that; it probably takes the right swappiness settings and performance tuning of the JVM and the Spigot/PaperMC instance.

So 16 GB of RAM should be enough for 20-60 servers.


You can host Minecraft servers for free on Oracle Cloud’s “Always Free” tier [1] now that they’ve added free ARM cores. You get 4 cores and 24GB(!) of RAM to assign to up to 4 VMs - more than enough for a server for friends or family.

[1] https://www.oracle.com/cloud/free/


Oracle and "always free"? Boy are these interesting times...


I'm sure the TOS says "terms and conditions may change at any time in the future". Always is a trademark and not meant to convey any future services.


> To enable us to provide free Oracle Cloud accounts to our valued customers, we need to ensure that account holders are real people. We use your email, phone number, and credit/debit card for account set-up and identity verification. For users in the United States, you may see temporary charges of $1 on your account statement. Users in other countries will see a similar charge in their local currency. These are verification holds that will be removed automatically, typically within 3 to 5 days.

> We will not use your credit/debit card information to automatically upgrade your Always Free or Free Trial to paid without first getting your explicit approval.

I gave a throwaway Google email and a Google Voice number and just used my first initials. The one thing I could not use was a throwaway debit card; they really wanted a real credit card.

So far so good.

One guy complained they shut his server off after he transitioned from the $300 free credits in the first month to always free.

I didn't have that issue, although I do check the billing statements periodically.


Pretty sure every website will tell you everything is subject to change at any time at their leisure.


It's hilarious given that they used to charge you per CPU _for software_. You had to bring your own CPUs.


Oracle and VMware per-socket licensing is one of the main reasons server-class CPUs have gazillions of cores. A vSphere licence+support for a single machine can easily run into 5 figures.


That's still a common billing model; it's how Red Hat bills for OpenShift, for instance.

It's a simple and pretty good proxy for usage


Pretty sure they still do for some software they sell ;)


Maybe that's why POWER chips have so many SMT threads per core.


They're probably always free, until $x happens.

Then there's an audit, you're found non-compliant, and now they own your house.

Oracle, 2021.


From what I understand, suing people is a great business strategy. Much better than producing any actual value.


I mean, it bought a Hawaiian island. Capitalism at its finest, really.


Almost sounds like a paradox


I did this literally yesterday following this guide: https://blogs.oracle.com/developers/post/how-to-set-up-and-r...

Kids played for a couple of hours last night without issue.


It should even be able to load mods without any issue, right? If I recall correctly, mod installation in Minecraft is nothing more than placing some files in a folder?


Yes and no. Although Minecraft is Java, there’s none of the “ahem” pleasure of using something like Maven/Gradle to manage plug-in dependencies. So plug-in A might require plug-in B, but not too new. And then these only work on the not-quite-current version of Minecraft.

It’s usually an hour or two of work every time my kids want a new plug-in or MC version.


I tried this today, picking a local home region that wasn't in their list of "oversubscribed" regions (Zurich) and have not been able to create an instance. Even 1vCPU/1GB RAM is seemingly unavailable.


There is a script for that; I was able to score a 3-CPU/12 GB RAM instance after a few days.

https://hitrov.medium.com/resolving-oracle-cloud-out-of-capa...


Well, thanks for this. Now I can have fun running https://www.brow.sh when ssh'd in from my original Raspberry Pi.

Yes, there is no point to this, but if it's free... ?


This still needs credit cards or phone numbers right?


Yes, although in the signup, it claims that they won't charge you unless you upgrade your account from the free tier - i.e. at the end of the trial period, you won't get a surprise subscription bill because you forgot to cancel something.


Yes.


I don't remember giving them my credit card during my registration.


I explicitly remember them asking for both when they first launched


Yeah that's not the case for me. I only gave them a phone number. The salesperson did give me a call, but gave up after I explicitly said I'm only here for the free stuff.


Are those ARM-based instances fast enough? Last I tried, Oracle Cloud's free tier "AMD" instances (with 1/8 vCPU) were so slow that I could not use them for any useful applications. Even their network speed was slow.


"4 Arm-based Ampere A1 cores and 24 GB of memory usable as one VM or up to 4 VMs."

Here's the Anandtech review of the Ampere Altra, which is what Oracle is serving these VMs from: https://www.anandtech.com/show/16315/the-ampere-altra-review...

The TLDR:

"The Altra’s strengths lie in compute-bound workloads where having 25% more cores is an advantage. The Neoverse-N1 cores clocked at 3.3GHz can more than match the per-core performance of Zen2 inside the EPYC CPUs.

There are still workloads in which the Altra doesn't do as well – anything that puts higher cache pressure on the cores will heavily favour the EPYC, as while 1MB of L2 per core is nice to have, 32MB of L3 shared amongst 80 cores isn't very much cache to go around."


I run 3 servers on a single 4-core host using the Paper fork of Minecraft. Works great, I get 20TPS even with some pretty large farms.


I'm wondering if I can use this to host Ark for 10 players. The cost of clustering all of the maps on dedicated servers is like $60/month or more.


How can they offer that for free!?

I'm sure in GCP's Always Free tier the server they offer only has 0.5GB RAM.


> How can they offer that for free!?

Opinion: They're losing badly in the Cloud Wars and need to scrape together some sort of customer base in any way they can, even if it means burning money.


I can see why....

Three times I've tried using OCI to move Oracle's own products from on-prem to cloud. All three times they told me not to bother as it wasn't supported.

Seriously, if Oracle can't figure out how to run RDBMS, WebLogic and Opera, what hope have I got?


And if they've spent $billions building out huge-capacity data centres but only have a few customers, then they might have a large amount of capacity sitting idle that costs them very little to put into the Always Free tier. It's free developer-mindshare marketing anyway.


First one's always free. The cloud providers are vying for market share, and free tiers that give people and companies buy-in are one of the ways to get it. It worked for Dropbox.


GCP increased that recently.


> The DNS lookup query is logged in Route 53 on our public hosted zone.
> CloudWatch forwards the query to a Lambda function.
> The Lambda function modifies an existing ECS Fargate service to a desired task count of 1.

I had never heard of this architecture before; a pretty creative way of doing Heroku-like scale-to-zero at nearly no cost on AWS.
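
The Lambda side of that last step is essentially one boto3 call; a minimal sketch with placeholder cluster/service names (the CloudWatch Logs event payload is ignored here):

    import boto3

    ecs = boto3.client("ecs")

    def handler(event, context):
        # Fired by the CloudWatch Logs subscription filter on the Route 53
        # query log group; nothing from the event itself is needed.
        ecs.update_service(
            cluster="minecraft",           # placeholder cluster name
            service="minecraft-server",    # placeholder service name
            desiredCount=1,
        )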

> Fargate launches two containers, Minecraft and a watchdog

I'd love to see a cost analysis between running the "watchdog" as a Fargate container versus another Lambda function. Even having a Lambda function run once every 5 minutes 24/7 would only trigger around 9,000 invocations a month, which is in the realm of "near Free".

If there was some way to trigger the scale-down event from there, it would reduce the expensive part of this setup (Fargate) even further. Though, granted; given both containers are packed into the same Fargate VM, it would really only mean freeing up some additional resources for the Minecraft server.

It looks like the watchdog is simply checking for connections on a port, which is probably too low-level to handle with lambda. But, an architecture like this could work in a ton of services, and if you had e.g. an ALB set up in front of the services, one could use the lambda to scan incoming request metrics and scale down on that.


> It looks like the watchdog is simply checking for connections on a port, which is probably too low-level to handle with lambda

Not at all. You could easily check that with any of Lambda's supported languages.


If the lambda were running on the same machine as the server, maybe, but that's not how lambda works.


Ah you're right. I was thinking the watchdog was querying the server. I see now they're both part of the same fargate task, so it's just checking for local network connections.


I always say the best way to learn a new technology is to work on a real-world project that you are interested in.

The person that set this up got an amazing education on use of real-world AWS services.

A lot of IT people aren't aware that things like this exist. They think moving to the cloud means sending all your virtual servers to your provider of choice and running them 24x7 like you did on-prem. In my opinion it's more about architecting solutions so that resources pop into existence for the exact # of milliseconds they're needed and then they're released. This is a clever step along that path.


Most people that move a whole set of on-prem machines to virtual servers actually need them on and available all day long.


What you're saying is true for a tiny minority of use cases.

The vast majority of use cases are better off with variable resource availability. Unless you're doing something akin to mining cryptocurrency 24x7x365 most workloads are variable to some degree.

So maybe instead of one giant server that processes requests you use a single small server that is available 24x7x365. Then if your workload increases at 8 am you use an autoscaling group to spin up 3 more. Then at 5 pm it goes back down to 1. And maybe you have a batch process that kicks off at 2 am every night so you spin up 4 servers to process requests. This is just one example so it's important not to focus on it and respond with, "Well what about x!" AWS has many ways to fulfill the promise of accomplishing tasks with minimal resources.

And all of this is just a step on the path to serverless computing with things like Lambda and DynamoDB or serverless RDS.


I made a service like this on Observable

https://observablehq.com/@tomlarkworthy/minecraft-servers-be

It never really took off, so I mothballed it; however, I do use it at home for our personal server and it has saved me a ton of money! It makes perfect sense, as you can have quite a good-spec machine when you are paying by the hour. You just disconnect the disk from the VM and pay for disk storage, which is very cheap.

It was based on the following terraform recipe (which I wrote)

https://github.com/futurice/terraform-examples/blob/master/g...


This is awesome! I'm still afraid that one of my friends will go on a Minecraft binge (or idle in a farm) and drive up the costs beyond the $13/month or so I pay for VPS hosting but I think this approach would objectively be quite a bit cheaper for the casual vanilla SMP server I run for a dozen or so folks. Anyone know how to estimate the "worst case" monthly cost for this config?

Edit: Just saw that the GitHub includes a link to an AWS calculator. Looks like a month of continuous usage caps out at $40-ish. Not too bad, since my realistic worst case is probably more like 8 hours per day rather than the full 24.


If you're worried about idling in farms just auto-kick after 1 hour of AFK or w/e.


Why idle in farms? Can’t you just put down a chunk loader or something?


That would have much the same cost effect if the server were kept up for it, and wouldn't be effective if the server were taken down.


It's been a while since I played minecraft, so I don't know what a chunk loader is.

When I played, if nobody was in the vicinity of the chunks containing your farm, they would unload, and of course then the farm would not produce, so people would AFK in their farm to keep the chunks loaded.


Some Minecraft mods add items that force the game to keep certain chunks loaded. From the casual "Can't you just" I assume the parent meant those. It is however also possible, but much more complicated, to force the game to keep chunks loaded in Vanilla Minecraft. [0]

[0] https://www.youtube.com/watch?v=dx5Wd28AKxQ (the video is a couple years old so the current implementation might be different, but I think the basic principles are still the same).


ilmango dropped a new chunk loader a few weeks back - https://www.youtube.com/watch?v=B8z7q_pwjL4

Looks pretty easy to build.


If it's a mob farm, a chunk loader won't work on its own, I think?


Private Minecraft servers as a service don't exist yet?


> Private Minecraft servers as a service don't exist yet?

They do, but afaik there are no "spin up and down" ones that charge you for usage; they're all "$X per month" fixed cost.

(Although looking at the costs these days, they're not that much higher than this would cost you for even a medium-sized world.)



I've seen one offhand comment about this so far from someone else, scrolling through the comments here. Wanted to make it a top-level comment though:

This is PHENOMENALLY DOCUMENTED. I am thoroughly impressed, @doctorray. Clear and easy to follow walkthrough and explanation of how it works, amazing troubleshooting tips, suggestions for managing it... This is an exemplar of a well-made README for a service. Bravo!


Thanks so much, I really appreciate your comment.


I ran Minecraft on spot instances when we used to play in university, complete with automatic terraform+ansible provisioning and automatic saves/backups in S3. Never used Fargate but I doubt it can beat spot instance pricing. More than half my bill was network traffic.


The README points out that Fargate spot instances are an option: https://github.com/doctorray117/minecraft-ondemand#cost-brea...


Aren't spot instances and Fargate preemptible at any time by AWS? And can't AWS throttle your instance, since the CPU cores are shared like on a VPS?

Do you just stop playing when it happens?


You can set it to automatically adjust the price so it doesn't get preempted. The price used to be essentially at a constant minimum (m4.medium/large) during the evening & night when we played, so even without that we never got preempted.


I'm afraid this isn't completely true. You can still be interrupted for capacity issues no matter what your bid is. It's quite rare on more common instance types, but becomes a problem if you have more than a few GPU or high memory type instances.


I don’t think I’ve ever had my instances taken out from under me when I set the max price to the same amount as a normal instance.

That said, when we exceed capacity we cannot boot any more instances, that’s definitely true.


I thought this was true, but I have had spot instances go away even when going above reserved instance pricing.


Just don't plan on playing on Black Friday, when every box on AWS is tied up by one retailer or another.


They changed that a few years ago. It used to be the price would spike when they needed the capacity, oftentimes going higher than On Demand pricing. Now the price adjusts gradually, if at all, and they'll terminate instances regardless of bid price.


Not OP or the linked post, but AWS sends a message N minutes before it shuts you down.

You just turn that message into an in-game countdown.

I always wanted to go after an auto-switch style system but never got that far.


Another take on this is to intercept the login message and use that as the trigger: https://github.com/infinisil/on-demand-minecraft


Could combine both: a VPS running a login-message-only server that spins up another server and then touches the DNS settings. As a bonus, this "fixes" all the issues with proxying (it doesn't erase the end user's IP or other such metadata needed for moderation).


To me a DNS lookup spinning up a container on Fargate looks both very cool and scary at the same time.


I'd never heard of the approach before and assumed it wasn't possible, so that's a nice TIL. But yeah, relying on obscurity to contain costs seems like a recipe for a surprise bill.


Better be sure not to share the DNS name with anyone.


I've got a Minecraft server running in AWS with a Graviton/ARM spot instance + EFS for persistence. It's also cheap to run (I run mine 24/7 and it hosts multiple other services as Docker containers): ~$10 per month. Infrastructure deployed with aws-cdk.

https://www.shogan.co.uk/gaming/cheap-minecraft-server-in-aw...


Have you found that any web crawlers have tried accessing your subdomain?

Wondering if services like Google or Shodan may have tried querying it and causing your server to turn on?


Either the subscription filter or the lambda could be modified to only fire based on source IP; not the whole thing but perhaps the CIDR of your ISP, so that only you can start it. Perhaps it could be done with the route53 geolocation options as well.

In the 2 months I've been using this method before deciding to write it all down, I've not run into any issues with anyone else or any bots triggering the container to start, at least not yet...
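
A rough sketch of what that check could look like inside the Lambda; the CIDR is a placeholder, and the position of the resolver-IP field in the Route 53 query log line is an assumption, so adjust the indexing for your own logs:

    import base64
    import gzip
    import json
    from ipaddress import ip_address, ip_network

    import boto3

    ALLOWED = ip_network("203.0.113.0/24")  # placeholder: your ISP's CIDR
    ecs = boto3.client("ecs")

    def handler(event, context):
        # CloudWatch Logs subscription events arrive base64-encoded and gzipped.
        data = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
        for log_event in data["logEvents"]:
            fields = log_event["message"].split()
            resolver_ip = fields[-2]        # assumed position of the resolver IP
            if ip_address(resolver_ip) in ALLOWED:
                ecs.update_service(cluster="minecraft",
                                   service="minecraft-server",
                                   desiredCount=1)
                return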


Setting IP whitelisting would help.


Yet it would be rather inflexible.


Instead of Twilio you could use SNS and subscribe to it via SMS. You would most likely stay within the free tier there too.


I just added this in. It's not super elegant, but it can publish the notifications to a topic, and then it's up to the user how they want to receive them.
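
A minimal sketch of that publish call (the topic ARN is a placeholder; SMS, email or other subscribers on the topic receive whatever gets published):

    import boto3

    sns = boto3.client("sns")

    def notify(message: str) -> None:
        # Anyone subscribed to the topic (SMS, email, Lambda, ...) gets this.
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:minecraft-notifications",  # placeholder
            Message=message,
        )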


Cool. Probably not as elegant as Twilio, but free is hard to beat.


This is a really smart setup, and superbly documented.

I just wish there weren't so many steps to get this kind of thing running! Even with automation it's still a LOT - getting this running myself would take me a few hours, and I have prior relevant experience.

A regular non-software-industry-professional parent has little chance.

I really wish there were better ways to make AWS stuff like this available for people to use without requiring them to have deep knowledge of how to work with different aspects of AWS.


PR a CloudFormation template.


My hunch is that even a CloudFormation template would be way beyond the capabilities of most non-software-engineers.

I wish AWS would provide some kind of interface where I can redirect a regular human being to easy-deploy.aws.com/?cloudformation=url-to-my-cloud-formation and they would be presented with a human-readable form that tells them what it will do, sets a hard limit on how much money it will be able to burn through (for protection against crypto-currency mining scams), and lets them enter their credit card details and click "Deploy" to start using it.


What is a serverless server?


To oversimplify, it's running the Minecraft server software in a way that behaves like it's running on a specific server.

But in the background, it's run on a set of Amazon services. You don't have to rent a specific server for a given time period, like monthly server rental.

You just use Amazon's on-demand services (that use whichever server resources are required at the time).


Thank you for that clarification. Aside from interacting with AWS MTurk, I haven't had much experience with Amazon's AWS, so it sounded like a joke.


serverless == ephemeral, on demand servers


How is this different from “cloud”?


For all that work why wouldn’t they just use a droplet for $5 a month?


Hey, it's a good resource for kids without money.

Considering all the hours I spent looking for ways to do exactly this when I was 12-15... I don't doubt I would've gone through all the trouble and even learned some AWS along the way.

Back in those days the only way I could get a free server was by hosting a phpBB forum on 000webhost and somehow convincing a VPS provider to "sponsor our forum". They'd get a massive banner ad and I'd get a free server to play around with. The good days!


Not sure this is applicable to anyone else, but for me, being a kid with no money meant literally no digital money. I had no bank account (at least not one with a debit card attached), and my parents would not have given me theirs.

But the difference between a couple of bucks a month and $5, once you actually have the ability to pay for stuff online, does seem pretty negligible.


Here in Brazil there is a very widely supported payment method "boleto bancário", where basically the seller/provider prints you a bill that you can pay with a bank account, or in cash at physical locations (usually lottery houses and post offices).

In fact, some websites even offer big discounts (like 15%) for payments in boleto since there is basically no service fee.

That is basically how me and all my friends did "online" transactions.


You're actually right, it's an oversight from my comment. I was in the same spot: some savings, none digital, parents weren't going to pay anything.

In any case, if it gets kids learning new things under the guise of saving a very limited resource, I'm all for it!


Very good point. I remember being so excited when I got my first job at 16 and opened a bank account because I could finally purchase things online without having to do something like buy a prepaid debit card, which always had an overhead fee.

Even after that, I was always frugal and never wanted to spend something like $15 a month for a server for my friends. Now, as an adult software developer, I wouldn't think twice about the fun to dollar ratio of paying for a Minecraft server to connect with some old friends.


Same here! I had access to prepaid cards, but sadly cloud providers just don't accept them. (At least DigitalOcean didn't!)


Man, I used 000webhost a lot! Thanks for the reminder.


It was pretty great! There were a ton of free webhosts back then; they really fueled my creativity and desire to learn web dev.

000webhost, x10Hosting and SixServe (both had FREE cPanel!!), and never forget those shady reseller control panel hosts like Nazuka.


Hunting around for free cPanel hosting was essentially my part time job when I was 12-18. Many of them required certain forum activity too, so it could get time consuming.


Same here! Hours and hours searching.

I'll admit though, the shady reseller hosts were pretty good. Terrible control panel aside, they had very generous CPU/bandwidth/storage limits compared to the free cPanel hosts that had to cut down the costs there.


Because you can't run a playable Minecraft server on a $5/mo VM (especially if you play with mods), and you don't need the server to be on 24/7 if you just play with some friends. This gives you the ability to automatically spin up a powerful server when needed (say, a dozen hours a week) and only pay for that time instead of the full 168 hours.


They simply don't work.

The cheap VPSes absolutely do not allow you to pin the CPU at 100% usage for a significant amount of time, since that messes up the provisioning. A Minecraft server will definitely pin the CPU at 100%.

What happens is that your process will be killed repeatedly.

A $5 VPS is great for simple site hosting and a small amount of CPU workload. They do not work at all for any type of game server.

>As long as you don’t go to 100% CPU usage for a long period of time, everything will be okay. DigitalOcean are doing pro active monitoring and will see if your droplet is having 100% CPU usage all the time and may limit the CPU capacity of the droplets displaying this behavior. Since each droplet shares physical hardware with other droplets, constant 100% CPU use degrades the service quality for other users on the same node.

Note that a game server will go to 100%. It will be killed.

https://www.digitalocean.com/community/questions/cpu-usage-l...


I've run a Minecraft server on a $10/month DigitalOcean VPS for years.

What you describe has never happened to me. Have I just been lucky?


Yeah, since it's a shared server, others on the host can feel it when someone's using a lot of CPU and complain; that's when they'll intervene. You've been lucky here. It also depends on what's happening on the server: there's no chance of some of the complex mods working on a cheap instance.


Did you check %steal?


$5 a month would have been a LOT to young me. Maybe not unbearably so, but if someone told me of a way to do it for free, I would have definitely tried that method.


A $10/month droplet is probably closer to the minimum, and even then it struggles with only a handful of players. However that might be down to all the mods.


By droplet do you mean a DigitalOcean VM?


Yes, "droplet" is what DO calls them.


That is what they are called


This is so complicated.

I did this with a Minecraft plugin that would schedule a systemd shutdown in 30 minutes when the last player disconnects, and cancel the shutdown if a player connects.

Then a simple webpage that sent an EC2 API request to power on the instance, and a simple plugin that sends a Telegram message when the server is ready for connections.
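
The power-on side is a single EC2 API call; a rough Python sketch with a placeholder instance ID (the shutdown side would be the plugin scheduling something like "shutdown +30" and cancelling it when someone joins):

    import boto3

    ec2 = boto3.client("ec2")

    def start_server() -> None:
        # Called from the web endpoint when someone wants to play.
        ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder instance ID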


> Then a simple webpage that sent an EC2 API request to power on the instance

You send the EC2 API request directly from a public facing website?


"ECS Fargate service to a desired task count of 1"

This qualifies as "serverless" now?


It does, as its user has little control over the underlying hardware/VM and it’s intended for on-demand use cases. It’s managing a process rather than a VM. It’s definitely a gray area… is Heroku serverless?


Gotta say, I never had good luck with AWS Lambda Serverless. Blew through the free quota playing around with a django app deployed using Zappa. Never got out of the "let me make sure this works when deployed" phase.


I'd rather just host a Minecraft server on an old desktop and make it internet-available with playit.gg.

A lot less chance of me spending $$ that way.


Depending on the machine, this might cost you more in electricity than a VPS. 100 watts continuous costs something like $7/mo in a fairly low-electricity-cost region of the US.


That would be assuming the machine is on 24/7 and only used for Minecraft. A lot of Minecraft management systems allow the Minecraft server to be shut off when no players are on, which would limit how many resources are being used on the machine 24/7 too.

Overall I personally prefer a VPS or dedicated server but I don't think comparing it like you are is 100% fair.


This 100%.

I don't even bother with playit.gg - just forward a couple of ports on the router and pass out my IP. The only time my dynamic IP changes is when I lose power, and if I've lost power the server is down for "maintenance" anyways.


Or you can just use a dynamic DNS service and hand out that hostname instead; that keeps the IP up to date. That is what I have done for years and it has worked well.


Glad someone finished this. I got 90% of the way there using a lambda to find spot instances but got too busy to finish.


I'm left wondering why all this overengineering instead of deploying on a cheap EC2 instance meant to be used _all the time_.


The only thing missing is an IaC definition of this architecture.


I would love to see this for Valheim.


Valheim dedicated servers have about the same overhead as a Minecraft world (from what I can see running both side by side on Windows, anyway). Valheim also supports Linux, so you could probably do something very similar for it.


    Concerned about cost overruns?
    
    Set up a Billing Alert! You can get an email if your bill exceeds a certain amount. Set it at $5 maybe?
It's 2021 and the biggest cloud platforms still don't have hard limits on spending.
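
For anyone who does set up the alert the README mentions, it boils down to a CloudWatch alarm on the EstimatedCharges metric; a rough boto3 sketch, with the threshold and topic ARN as placeholders:

    import boto3

    # Billing metrics only live in us-east-1.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="minecraft-monthly-spend",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,            # the metric is only published a few times a day
        EvaluationPeriods=1,
        Threshold=5.0,           # the README's suggested $5
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
    )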


Yeah. By the time you have woken up and read your billing alert email it’s too late.

However the reason it doesn’t exist I suspect is twofold. Firstly because it is bad business. All the cloud providers make a lot of money from mistakes and small things sapping cash. Secondly it’s hard to rationalise what to do when the budget runs out. What do they nuke?


There's a third reason, which I suspect is the biggest. The billing is not real-time, and it's hard to make it even nearly so, especially in such a complex and heavily distributed infrastructure like AWS.


They have realtime insights into how much read/write capacity or IOPS happens on the majority of their services. Their throttling has millisecond resolution. They have no problem giving you realtime insight into when you need to push the magic button to spend more money and increase your write/read capacities. If billing isn't realtime, it's due to sheer laziness or malfeasance (plausible deniability, my guess), because the data is there.


I asked about this a few years ago. The answer I was given by various friends throughout AWS was consistently that of "oh, our customers don't want that" for whatever that's worth.


This is probably technically accurate. After all, I'm sure Amazon, as any business, weights customer voices by the amount of spend. The people spending millions don't want a hard stop (i.e., kill our production services when we hit a certain amount of spend); the only people who want it are the people spending comparatively small amounts, pre-revenue startups, individuals, etc.

This is an example where being data driven to the exclusion of all else can hurt a company; I suspect having this feature would pay dividends down the road (by being the first to provide a safety net for a startup with a fixed budget that doesn't have production workloads yet you offer a competitive advantage between cloud providers), but the effect is completely impossible to predict or track currently since it doesn't have an immediate impact on revenue or the satisfaction of large, paying customers.


While I hate it, I agree with this in that in most settings heads will roll if your main moneymaking service is offline because of a billing snag.


Pro-tip: "the data is there" doesn't mean it's cheap to use.

There's a very simple explanation for this; realtime billing would increase the cost of the product they sell to create something most people don't need.


If you can’t accurately tell someone how much they have spent, how can you expect them to stop before it’s too late?

If you can tell, then you can set a limit.

Besides, if they can trigger alerts at a particular spend then they should be able to create a limit.


>Besides, if they can trigger alerts at a particular spend then they should be able to create a limit.

That's not really true. The alerts happen when the billing is recalculated (periodically) and you've exceeded a predefined threshold, not when you hit that exact threshold.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitori...

>When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to CloudWatch as metric data.

Real time billing is actually a Hard Problem to solve.


> Real time billing is actually a Hard Problem to solve.

I refuse to believe there is no workaround. I can understand it is not easy to fix for corporations who need AWS to make money but that is not the use case for students.

If it were, Azure for students couldn't exist. Signing up for Azure for students does not require a credit card so they must have figured out a way to prevent / stop the bleeding?

https://azure.microsoft.com/en-us/free/students/


Well stop the service on that periodic cycle then if it’s over the limit?

Non-realtime limits are better than no limits at all. Besides, the CloudWatch documentation seems to suggest it's reporting on a 5-minute frequency for most of AWS.


Billing is very tricky, from the metering to the final PDF bill, which includes taxes, promotions and whatnot. So this is hard if you want to put limits on a certain dollar amount. You could also put hard stops on resource usage (say, no more than 500 CPU-hours/month).


Then allow customers to set a pre-tax, pre-promotion limit?

Besides, AWS already complicates things way too much by handling VAT like other billable items instead of just adding it at the final step like any sane company would


I think the solution would be to allow customers to set hard limits, so they at least have an upper bound on their monthly spend, while still being charged on their actual usage (in a nonrealtime fashion).

This also solves the problem of "what to cut". If I hit my bandwidth limit AWS simply stops routing requests to my servers, if I hit my CPU limit AWS should throttle me, etc.


This feels (no disrespect to you) like a huge cop out. How many more small, "let's hack this together and hope for the best" projects are they missing out on because developers feel uncomfortable with the black box that is AWS billing? I suspect it would be a significant number, believe it or not.


Would it be that hard to build some cost overrun plans?

If threshold (x) hit then do:

- Email me

- Stop Servers XYZ

- Leave Servers ABC running.

If threshold (y) hit then do:

- Email me / Call me

- Shut everything down.


Shared hosting providers from the late 90s onward typically had systems like this standard.


Both AWS and shared hosting providers run on a “F you. Pay me!” model. The difference is that small hosting resellers knew they couldn’t collect on debts while Amazon knows it can.


If you're talking about the crappanel exceeded-bandwidth suspension page, it wasn't realtime; can't remember if the default was 1 hr, 6 hr, or 24 hr though.


Even if not realtime, it sounds like a better option than AWS' current "not supported at all; you'll have to manually shut things down if costs overrun".


I'm pretty sure they didn't have to potentially call as many clients as today. :)


AWS sorta has something like this already with CloudWatch, but it'd be nice if it was simpler and immediate instead of reactive. I just run a little reserved EC2 instance so my main billing risks are excessive data out or forgetting to renew a reservation and reverting to by-the-hour billing. So that I don't worry about it much I have three alarms that notify me if either "estimated charges > $x for 1 datapoints within 6 hours", or there's "anomalous" NetworkOut over a day, or "there's more than X total NetworkOut over a day", and another alarm that's "NetworkOut > Y for 1 datapoint within 15 minutes" that notifies me and shuts the instance down. I'd like to have a hard cap of "my instantaneous billing running total for the month, not my 'estimate', has exceeded $x, shut everything down" but what's there is something.
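
That shutdown alarm is a regular CloudWatch alarm with the EC2 stop action attached; a rough sketch (instance ID, threshold and topic ARN are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # the instance's region

    cloudwatch.put_metric_alarm(
        AlarmName="networkout-kill-switch",
        Namespace="AWS/EC2",
        MetricName="NetworkOut",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Sum",
        Period=900,                # the 15-minute window mentioned above
        EvaluationPeriods=1,
        Threshold=5 * 1024 ** 3,   # placeholder: ~5 GB out in 15 minutes
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[
            "arn:aws:sns:us-east-1:123456789012:ops-alerts",  # notify (placeholder topic)
            "arn:aws:automate:us-east-1:ec2:stop",            # stop the instance
        ],
    )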


I use AWS for a very small side project. I used about $10 back in July, and I don't expect any more costs for several months (aside from S3 hosting).

It would be really nice if I could preload $100 into the account and remove my credit card. I don't have ANYTHING sensitive behind my username and password- except my CC #

I know they are never going to implement this because I'm small potatoes, but it would be nice



Do they say that you won't go over the prepaid amount? I can't see that in the pages that I've read, but maybe I missed it.


BIN checks will usually disallow prepaid cards (replying to the comment below).


Buy a prepaid cc.

$7.00 for up to $500


For data out I have some alerts set similar to yours to shut down the instance if NetworkOut becomes unusually high—over three different time scales. Since the alerts are delayed, though, I also set up traffic shaping rules in the VM to throttle the NetworkOut to something reasonable so that it doesn't incur a huge bill before the first alert triggers. Officially the VMs have 5 Gbps network links; at that rate you can accumulate quite a large bill in a very short time if the VM starts saturating the link.


I came here to basically say something like this: if there's a hard limit, your business will basically shut down entirely. It's hard (impossible) for cloud providers to automagically shut down services based on each customer's priorities.

I don't think there's a magical solution to this. There might be a company that sets a $1K USD/month limit, forgets about it, and suddenly the cloud provider shuts down everything a year later, while "everyone" is unavailable or something like that.

There are so many scenarios, and I honestly feel that the cloud providers have decided on the most fool-proof solution both for them and their clients.


>Shut everything down

Ok. But the Auto Scaling Groups are free - so I can keep that on, right? Oh, look! They just launched more EV2s, how convenient. Should I back these up to S3? With CRR enabled?

Tee hee hee


No, it’s not hard to figure out what to do when the budget runs out. Preserve data and shut everything else down.

So switch off all VMs, but don’t delete the disks. Disable S3 read/write, but don’t delete the data. Etc…


Preserving data is also costly.


That scenario implies that someone has made a mistake. They forgot to turn off a service or turned on a service by accident. How does AWS know that the billing cutoff wasn't the accident? Maybe I accidentally set a $20 cutoff while building my MVP instead of $200, but now that I have a paying customer I'm going to hit $100/month. AWS could disable my very first customer because I forgot to fix my billing setting.

Doing nothing is generally better from a legal liability point of view. The customer should be liable for turning services on and off.


[Disclosure] I'm Co-Founder and CEO of a company named http://vantage.sh/ that helps developers track and reduce cloud costs - I also previously worked at both AWS and DigitalOcean.

We hear about this all the time from AWS customers, and it's a large reason why people connect their account to Vantage, which will help alert you if costs change intra-month. The first $2,500 in AWS costs per month are tracked for free, so I thought I'd mention this here as potentially helpful to the community.

If you don't want to remember to set up billing alerts, we provide basically a turn-key experience around this that takes less than a few minutes to setup: http://vantage.sh/


What type of IAM perms does this require?


When you sign up and verify your email you will see the provided CloudFormation template found here for auditing of IAM permissions: https://vantage-public.s3.amazonaws.com/x-account-role-creat...

The list of permissions is a whittled down version of what's available in the AWS managed policy of "ReadOnlyAccess" and doesn't allow us to do things like read from S3 Buckets or read from RDS instances. Basically just List/Describe actions.

IAM permissions are written about more here in our documentation and are ultimately handled gracefully if you want to remove some. For example, if you just want to hand Vantage access to billing, S3 and EC2, it will do the job as best it can with just those permissions: https://docs.vantage.sh/permissions/

Finally, here's a blog post on our cross account IAM setup: https://www.vantage.sh/blog/how-vantage-uses-cross-account-i...


Use a disposable credit card (like lastcard or privacy) and set the limit to $5. Add it to your account, and the max they can charge is $5. If they let you run past it, the billing will fail, and if they don't shut it off it's on their dime.

To everyone claiming "ohhh that's illegal/unethical" I say to you: take it in your favor for once. For every 100 clients aws bills unexpectedly and with no controls in place to mitigate, you can be the 1 who gets a free month of service. They will not pursue you for $5. Imagine making the argument for welfare on a company that is worth a trillion dollars.


> To everyone claiming "ohhh that's illegal/unethical" I say to you: take it in your favor for once.

The rationale against doing this is as much practical as it is moral --- unless you're just doing this once for a single month and don't care if your account gets banned. AWS isn't like an auto-renewing subscription, where if the card declines, your service is cut off. They won't charge the card with a $5 limit until the end of the billing period. If you rack up more than $5 in charges in a billing period, you will be in debt to Amazon. They will certainly ban your account, so you'd have to make a new throwaway account with a new disposable CC each month.


You'd be humbled (perhaps not) by how little human capital gets assigned to review and correct anything under 5 figures at AWS. The account gets put into overdue and the services stay paused; you get an email every so often (if you even put in a valid email). Pretending they have a crack team of hundreds of analysts sitting there waiting to ban every account associated with an IP for $5 is pretty farcical. I have several 6-figure AWS accounts at present, and I can barely get ahold of a human being when there are issues related to wire payments not being applied to an account, let alone imagine they'd have anyone worrying about this beyond putting a dev on it to set up an ignore filter on such accounts. They have a manual process to allow any account to spend above $2000 or $5000 (I can't remember which) where you fill out a credit application and they vet you to see if you are indeed good for it before allowing you to provision further. If you default on that, they will carefully weigh the cost of collecting from you or even reporting it to a credit bureau vs the amount due.

Not advocating for mass fraud here, or even petty fraud, just making it a bit more fair to those who have 0 provisions in the platform to prevent involuntary overspending.


> Imagine making the argument for welfare on a company that is worth a trillion dollars

You are not delivering some sort of poetic justice; you are just showing your lack of self-preservation instinct. For your own welfare, just don't poke the bear. You don't wanna get blacklisted for doing some dumb crap that will come bite you in the ass someday.

There are enough stories running around of people getting their job accounts banned by association for pulling idiotic stunts like these, and we don't know what crap Amazon will be running in the future.


The only problem is that they'll let you go negative for a while before they shut you off. Then they won't let you use any services until you pay your balance. So unless you're willing to make a new account each time you go over your $5 max, you'll still be paying for the extra usage.


This is a good way to just get banned until you pay.

AWS bills work a lot like postpaid phone bills. When you use the service you agree to pay the bill for usage.

Your suggestion is kind of like saying “If your card declines you don’t have to pay for your meal.” Not really true.

In my experience AWS support has been good about reversing accidental/fraudulent usage charges and helping to prevent them in the future.


I have been very happy with privacy.com for this very purpose.


Do you think it's an important feature for their paying customers? I've worked at places that care a lot about their AWS bill but I don't think any of them would have wanted a genuine hard stop before they'd do anything about it as a result of the soft alerts.


I've used AWS for my personal projects and at work. When I started at work, I didn't fully realize that "when an EC2 is spun up, you are charged for 1 hour, even if you terminate it" - so I accidentally racked up a big bill.

I was thinking it would be useful if Organizations could pre authorize users at $X before preventing them from doing more - of course the better solution is to manage releases through a pipeline that checks for stuff created and code scanning and... whatever

In the end, we use cost monitoring, but no AWS billing alerts


> Do you think it's an important feature for their paying customers?

I think it is.

A few jobs ago, the boss of my boss got fired for a cloud service overage. Not a huge amount; the number on the grapevine was around $10,000. But it was enough.

For many (numerically "most," probably) companies, the IT department is a black box to upper management, and any unexpected budget overages are a serious problem.


The users who actually pay AWS decent money or are likely to pay AWS decent money in the future don't ask for this afaik. People paying $5/month for AWS aren't AWS's target audience and likely contribute negligible revenue to AWS.

People do ask for alerting and monitoring but that's not a hard stop.

Then you get complex issues such as S3 and EBS. As long as there is data you will keep paying so what do you do? Have a hard limit but not really since it doesn't cover them? Delete people's data?


You don't get to be the biggest by not charging people. They probably make more off accidental billing than the people who contest those accidents, so it's worthwhile. That's why Columbia House was so hard to cancel back in the day - it wasn't the $0.01 CD where they made the money, but on the monthly fees that were way more than the cost of one CD.


I don't think it has anything to do with wanting to collect money from accidents. That's not very reliable revenue anyways. That's just an unfortunate side effect for us plebs.

The real reason is that if you give companies a budget feature, they will inevitably, you know, use it. They'll set a budget that seems 'reasonable', and then freak out when everything turns off when it's exceeded, and then go raise it a little bit, and repeat the cycle.

Compare that to now where every place I've ever worked basically seems to forget that cloud hosting costs even exist, based on how much most companies balk at paying for simple SaaS tools for developers but will happily let the hosting costs grow to astronomical amounts. They're happy to do it cause they just see a line item and accept it. If you give them budgets, that won't happen any more.


>I don't think it has anything to do with wanting to collect money from accidents. That's not very reliable revenue anyways.

It's worked well enough for the entire fitness industry forever. No reason it can't work here as well, and at scale I'm sure it's pretty profitable. You're right, too, that we'd use it, but I think this is a situation where we can both be right.


> That's not very reliable revenue anyways.

At the scale of Joe's Chicken Shack, accidental revenue is not reliable. But at the scale of a Google or an Amazon, while it will fluctuate month to month, a certain minimum revenue stream should be statistically predictable.


It was the same deal for vinyl before that! Not a bad deal if you bought your minimum and cancelled.


And books before the vinyl.

Fortunately, I never got sucked into the 8-track club.


You can just deploy your own solution. And that one can be selective, which is probably preferable anyway.

https://docs.aws.amazon.com/cli/latest/reference/ce/get-cost...
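
A rough sketch of such a check with the Cost Explorer API (the data lags by up to a day, so this is a guard rail rather than a hard cap); what you stop or scale down when it trips is up to you:

    import datetime

    import boto3

    ce = boto3.client("ce")

    def month_to_date_spend() -> float:
        today = datetime.date.today()
        start = today.replace(day=1)
        if start == today:   # on the 1st there is no complete day to query yet
            return 0.0
        resp = ce.get_cost_and_usage(
            TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
        )
        return float(resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

    # Run on a schedule (cron, EventBridge + Lambda, ...); if the result crosses
    # your budget, selectively stop instances or scale services down to zero.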


I'm sure it's a terrible idea for some reason, but I love the idea of setting up a service with a wallet that pays for itself. Anyone can add funds to the wallet, and as long as the wallet has funds, the service keeps running. I especially love the idea of the provider (AWS, GCE, Azure, whatever) setting up the wallet and guaranteeing there's no way to withdraw from it, so you know the funds you deposit really do go to funding that service. Then give the service a way to see which account has deposited funds, so they can credit you for what you deposited... I mean, I would just love to see more services running "self-funding" like this.


BunnyCDN works like that.

I prefer to be billed whatever it costs but have my service up all the time.


Not that easy to do. I don't think any major users are requesting this, and what to do when you go over the limit? Start deleting resources automatically? What about the data? Backups?

It's just not that relevant ...


Azure has quotas that can limit things such as the number of VMs you can have running per region. It won't provision more VMs than you have in your quota. This helps you, for example, avoid automatically provisioning more VMs than you expect. These quotas can be edited manually or via API.

https://docs.microsoft.com/en-us/azure/azure-resource-manage...

https://docs.microsoft.com/en-us/azure/azure-resource-manage...


>It's 2021 and the biggest cloud platforms still don't have hard limits on spending.

Look... just be happy that they made it PAINFULLY obvious when you make S3 buckets public


Have you visited their S3 console recently? It's gotten pretty striking visual markers now for Public visibility. I don't have a screenshot handy, but thought I'd mention it


Sorry, I phrased my comment poorly.

AWS makes it clear when buckets are publicly visible. This is a good thing, and I am grateful


GCP has budgets that can be easily configured and enforced.


It's by design.

Those guys in finance know there are people who will pay any bill.

When I was younger I would just pay even for mistakes, because I was worried about my credit score.


If you mistakenly spend a small (to AWS) amount of money, they'll refund it. They're not out to get individuals.

AWS is for businesses and hard limits on spending is a liability for their pricing structure. Imagine you run a small business built on AWS and you hit your limit -- you're basically asking AWS to dismantle your business. They'd have to null-route traffic directed to you, shut down your servers, delete your data, de-allocate your IP addresses, etc. Your business won't be any better off than if you went bankrupt from a huge AWS bill.


Or Google. When I was a student I forgot about a TPU instance and spent over $10K in a single month, on track for $100K. Google refunded me since the server was at idle for most of that month outside the one hour I used it for.


I wonder why nobody has a service to receive these billing alert emails and react to them by shutting down your AWS entities.


Lately the word "serverless" is used freely on HN. Want a clickbait title? Use "serverless", never mind that only four words later "server" is used too.


An "almost free serverless server" sounds like a joke/scam.

It's a serverless server (aka nothing), and it's almost free, so you're paying money for nothing.


Blame Amazon for coining the term.



