An hour of work put in by any of you reading this is worth several months of hosting for a starter project on an expensive provider like Heroku.
Do not invest time making sure your service runs for $6 a month if it can run for $50 with 0 hours invested. Invest that time talking to customers and measuring what they do with your service.
Most of the time, a few customers pay for the servers.
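To make the trade-off concrete, here's a rough break-even sketch; the $75/hr opportunity cost is an assumption for illustration, not a figure from this thread:

```python
# Rough break-even: hours spent optimizing vs. monthly hosting savings.
# All inputs are illustrative assumptions.
HOURLY_RATE = 75.0    # assumed value of one hour of your time, in $
CHEAP_HOSTING = 6.0   # $/month after optimization work
EASY_HOSTING = 50.0   # $/month with zero hours invested


def breakeven_months(hours_spent: float) -> float:
    """Months of hosting savings needed to pay back the time invested."""
    monthly_savings = EASY_HOSTING - CHEAP_HOSTING
    return (hours_spent * HOURLY_RATE) / monthly_savings


# Spending just 10 hours shaving the bill takes well over a year to pay back.
print(round(breakeven_months(10), 1))
```

Under these assumptions, even a modest time investment takes more than a year of hosting savings to recoup.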
This is just a friendly reminder. I see a lot of comments talking about running backends for cheap.
A friend of mine recently launched a side project that does heavy processing of audio. He decided to invest ~2-5 hours properly setting up auto-scaling, a job queue, etc. before releasing v1.
Fast-forward two days later, his service and a competitor were both featured on Product Hunt. He's now making a profit on the service, as he managed to scale it up very fast, while the competitor buckled and completely lost momentum.
If you're talking about spending _a long time_ preparing a perfect infra, then your argument makes sense. Spending a few hours? It's both a great learning exercise and can literally save your project, so why not?
He used GCP: he set up a Pub/Sub topic and a Cloud Function. It took less than an hour to set up; the rest of the time went into rewriting a portion of the code to write to the queue, etc.
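The actual setup used GCP Pub/Sub, but the decoupling pattern itself is simple. Here is a minimal in-process sketch using Python's stdlib queue; the function and job names are made up, not the real service's code:

```python
import queue
import threading

# Requests only enqueue work; the heavy processing happens in background workers.
jobs: "queue.Queue[str]" = queue.Queue()
results = []


def process_audio(job_id: str) -> str:
    # Placeholder for the expensive step (transcoding, analysis, ...).
    return f"processed:{job_id}"


def worker() -> None:
    while True:
        job_id = jobs.get()
        if job_id is None:  # sentinel: shut this worker down
            jobs.task_done()
            break
        results.append(process_audio(job_id))
        jobs.task_done()


# With a managed queue (Pub/Sub, SQS, ...) the worker pool scales
# independently of the web tier; here two threads stand in for that pool.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for job in ("a", "b", "c"):
    jobs.put(job)
for _ in threads:
    jobs.put(None)
jobs.join()
print(sorted(results))  # all three jobs processed
```

The point of the rewrite is exactly this shape: the request path does a cheap `put`, and scaling pressure lands on the workers, which a managed queue lets you scale separately.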
There are other ways on other platforms as well (e.g. if you're using Kubernetes).
With a Kubernetes autoscaler, pod autoscaling takes you ~20 minutes to set up. If you run your k8s on, say, GKE, setting up node autoscaling is another 5 minutes.
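For reference, the whole pod-autoscaling setup is roughly one manifest like this; the deployment name and thresholds are placeholders:

```yaml
# Scale a hypothetical "audio-worker" deployment on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: audio-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: audio-worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```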
I totally agree with you, once you've factored in the dozens of hours gaining knowledge of k8s and hundreds+ of hours of experience dealing with it in production. You can get by with far less, but it's going to be pretty stressful when things go sideways in prod without knowing exactly why.
In practice, the application running in the pod has to be aware of this, and the most intensive parts are rarely the bottleneck. Most of the time it's an architecture issue, not a resource issue. Diagnosing that takes time and experience, and overhauling a platform to remove a bottleneck is usually very painful if you have a somewhat complex setup.
My take: depending on the nature of the business, and how the publicity was done, they may only have had one shot at gaining the customers. In 2-3 days' time you might have fixed things, but by then the prospects had moved on to the site that worked.
I’m not convinced that you need to superscale your infrastructure first. I think it’s normally a waste of time and money. But for the example listed this is a likely benefit.
The competitor didn't know what he was doing, pretty much. He hacked an MVP together with Rails on Heroku, then when people flooded in, he couldn't scale up and the site kept crashing. By the end of day two, there were articles and publicity about my friend's site, and it became a flywheel. He eventually made it work, of course, but the botched launch gave my friend a HUGE advantage (and paying users). I bet he's paying a ton of money and still trying to scale on Heroku. (I've made a considerable amount of money as a consultant fixing cases like that, too.)
> Fast-forward two days later, his service and a competitor were both featured on Product Hunt. He's now making a profit on the service, as he managed to scale it up very fast, while the competitor buckled and completely lost momentum.
That's an incredible story! Could you please link to the two products on PH?
Not knowing much about your friend's service, it sounds like the value of his product is "heavy processing". Therefore, I also would have included scalability as part of a v1 deliverable and wouldn't consider it as an optimization task. Great that your friend identified that.
The few hours you'd spend on infrastructure can be better spent fixing a bug, polishing or adding a feature, or even giving yourself a break so you're better focused the next day.
A Heroku-like platform will literally do the scaling for you. The non-financial cost is that you need to develop your application in line with their framework/platform. If you make this decision at the start, that cost is practically nil.
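For instance, on Heroku the platform contract is essentially a Procfile plus stateless processes; the process names and commands here are illustrative:

```
web: gunicorn app:app --workers 2
worker: python worker.py
```

Declare your processes this way from day one and the platform can scale each process type for you with no further work.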
> $6 a month if it can run for $50 with 0 hours invested
At the risk of sounding like a broken record: yes, in such a case it is better, but that is often not the case at hand. This exact argument is used for spending $500/mo after optimizing the software vs $50k/mo on autoscaling with '0 hours' invested ('0 hours' in quotes because of course it takes a lot of time to even get that working; but, for many programmers, it is apparently easier work?).
A few weeks ago I commented here while optimizing a Laravel cloud install; now I am working on a Clojure one. The client is spending $28k/mo on AWS, mostly on DynamoDB, with the rest on ELB.
Rewriting to standard PostgreSQL, moving the Dynamo part to a Postgres columnstore, and adding proper indexes has the latest stress tests down to the few hundred $/mo they will spend when they launch this.
The $28k is spent using exactly your argument, and, like most projects like this that I do, the fix cost quite a lot less than $28k (one month of hosting), even though it had us rewrite a lot of spaghetti from Dynamo to psql.
So yes, in some cases you are right. I would say when cloud hosting pops over $3k/mo (especially if it does so suddenly), hire someone like me to check whether you are burning money for nothing.
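The "adding proper indexes" part alone can be worth orders of magnitude. A runnable illustration, using sqlite3 as a stand-in for Postgres (table and column names made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (user_id INTEGER, payload TEXT)")
cur.executemany(
    "INSERT INTO events VALUES (?, ?)",
    ((i % 1000, "x") for i in range(50_000)),
)

# Without an index, a lookup by user_id scans the whole table.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()[0][-1]
print("before:", plan_before)  # e.g. "SCAN events"

# Adding the index turns the full scan into an index search.
cur.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()[0][-1]
print("after:", plan_after)  # e.g. "SEARCH events USING INDEX idx_events_user"
```

The same `EXPLAIN`-then-index loop works in Postgres, where a full scan at scale is exactly what inflates a managed-database bill.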
Ah, I'm just curious about the $$ cost. I have 2 RDS PG instances running on t3.medium: one is 30 GB and never goes over 10% CPU, the other is ~150 GB and never goes over 50% CPU. I also have some Dynamo stuff for session management and wonder if it would be better to just shove it into Postgres.
So I read your comment as: $28k on Dynamo down to $100 on PostgreSQL.
If you're bootstrapping a side project that you aren't confident will make a profit and you have more time than money, I think it's perfectly reasonable to prioritize low operating costs. In all other instances, though, I think you are correct.
This is the conventional wisdom, but the more I think about it, the less I agree. I think we have a duty to minimize resource usage and waste in general. The share of energy used by IT infrastructure is constantly increasing and has a significant impact on the environment.
Let's put your statement in another perspective: it is worth investing a few days/weeks of work of a low-power machine (a human) to significantly reduce the long-term power usage of a high-powered machine (a computer or cluster thereof).
A lot of people tend to not consider time invested as a measurable resource, and that is a sad thing indeed, as that is one resource that can only be spent.
You're entirely missing the point. If I'm spending a month to optimize some code so that it is 25% faster, these gains don't go away again once I'm done optimizing. They stay in the product for months or years. And this is where the effort pays off.
If we even humor this sentiment (I don't buy that we should: $5 vs $50 of compute is not why we're struggling with climate change. Computing as a whole is around 10% of all electricity production; we're not going to reduce that with a few hours of optimization here and there. It's way too late to be taking such half-measures seriously), the math doesn't work out.
-
Thinking about climate change on a personal level is positive, but I also feel our efforts should be grounded in reality, not just things that feel good.
The hours you spend optimizing your bootstrapped service to reduce its CO2 footprint... could be spent on plenty of other activities that actually reduce your carbon footprint.
The level of misinformation, misplaced priorities and uninformed conclusions in climate change has reached staggering levels.
We have people who don't believe it's real (which is silly) and, at the other end of the scale, people who believe we can actually fix it in 50 years (which is just as silly, even 100 years is silly). People are convinced they "know" the "truth" without even bothering to throw a few numbers at a spreadsheet to see if what they think they know aligns with any imaginable version of a non-science-fiction reality.
I am very concerned that politics and ignorance are driving this far more than real science.
What are you trying to get at with these numbers? By themselves, they mean absolutely nothing! It all depends on the impact you're having. If you're optimizing code running on a server that's mostly idle, then you'll not see a big reduction in energy use. If the result of your optimization is that you can shut down 10 servers because of overall load reductions, the time invested suddenly has a very real benefit.
They mean plenty, they just require you apply some critical thinking...
$5 to $50 is not 10 servers on Heroku. In fact, it's not even one server; you'll be sharing resources at that price point.
Let's say you generate approx. 500 lbs of CO2 a year (based on figures for half a desktop PC running 24/7, because you're only getting 2 cores and 1 GB of RAM).
At this point you're thinking, 500lbs?! That's insane!
But 40 hrs (1 week) is a lot of time, and 500lbs of CO2 is less than it seems.
If you spent 40 hrs spread out over an entire year air-drying your clothes a total of 20 times, you'd save over 2 tons of CO2 a year. (You can do the math for a dishwasher if air drying doesn't work where you are.)
If you live in a cold climate, the EPA estimates you can save 15%, or almost 1,000 lbs of CO2, by weatherproofing your home, easily accomplished in 40 hrs.
-
You might say "I already do all these things!", but the point is there are so many ways to convert time to CO2 savings.
You could say we spend a lot of CO2 trying to save time.
Optimizing your bootstrapped service is not one of the places I would use CO2 expenditure as reasoning in the slightest.
Uhm... isn't that exactly what I wrote? Besides, the CO2 that we humans exhale doesn't count towards global warming, because the carbon in it is part of the natural carbon cycle. That's roughly the same amount that gets reabsorbed into plants that are grown for next year's food. What really matters is the carbon that is added to that cycle from sources outside this biological cycle. In other words, 1 kWh of electrical power is strictly worse than 1 kWh of human power as long as some fraction of it comes from non-renewable sources. At best, if it were sourced purely regeneratively, it would be exactly even.
If you think my comment is saying what yours said, kindly read both again.
This comment reads as though you didn't read my reply: humans could emit 0 CO2, and spending a week optimizing 10 servers still wouldn't make a dent compared to simple lifestyle changes.
I used 1 server because that's the scale the thread was about (saving <$50 of spend on Heroku).
I know you tried to make it about 10 servers to force a point; it doesn't end up changing much, though...
Sure but now you are wasting significant human capital reinventing the wheel with substantially worse results.
So, using your math, you've got humans, which have high environmental costs in order to... well... live... wasting their lives and consuming tons of resources doing something that was already done better and cheaper.
Humans cost tons of money to operate. More than data centers or anything else. Don’t waste them on stupid projects like writing shitty versions of AWS.
Your own logic should lead you to conclude all the people wasting expensive human lives reinventing javascript frameworks, deployment systems, cloud orchestration systems, database systems and AWS... these are the folks doing true harm to the environment. It is much better for the planet to lock yourself into AWS, Azure, or google cloud and exploit the shit out of everything they do than it is to piss away incredibly expensive resources building your own.
And I am more than happy to boldly assert if you are working on a project that aims to re-invent AWS for your company... you are a waste of human capital.
As someone who sort of held the view that the incredible inefficiencies in modern computing and 'cloud stuff' was a net negative on the environment, thanks for this comment. I still don't think mindlessly throwing extremely inefficient stuff into a datacenter for no reason is a great thing to do but this gives me a lot better perspective as to why it isn't so black and white of a thing. I forget sometimes that humans have a very high cost to operate too.