> As that happens, there will be a point of no return in which engineers can’t run the whole system on their machines anymore — at least not in a reliable way.
Definitely yes.
However, experiments, proofs of concept, and shy little prototypes always start on a sheet of paper and then on localhost, before going anywhere else.
In 20 years, the only satisfying places to work, in design, software dev, and engineering, were the ones where we could run internal experiments from localhost, or even from a shared, always-on system on which we could deploy and share our own internal pages and blogs.
Because of the fragile, imperfect, shared, impermanent nature of that infrastructure (co-owned within the product team), the expectations and care that went into both building and critique had a profound impact on those teams.
That's a bit fatalist, ain't it? In fact, it reminds me, through opposition, of the whole "Your Data Will Fit in RAM" thing.
If you can't reliably run production on localhost, it's not because there is something fundamentally wrong with localhost. You're just not running your workload on a beefy enough localhost to remain local.
Think about it: all those cloud-hosted workloads? Every one of them runs on something that is localhost relative to itself.
I hold that in reality, localhost is the ideal, and any network hosts are the compromise you make because you just can't afford "dat localhost you actually need". The detail orientation and willingness a single developer needs to lay everything out sufficiently on localhost is simply exhausting.
You have 65535 ports. With the right hardware/architecture, you can have truly ridiculous amounts of addressable RAM. Processors? Same deal.
The weak point, of course, is your funding, and your willingness to get everything lined up. Which requires attention. Which is finite.
When dealing with teams of developers, there is the additional challenge of handling individual idiosyncrasies, another issue that network partitioning naturally lends itself to spackling over, but even then, localhost is still theoretically capable of accommodating it.
I didn't mean that production "has to" be able to run on localhost.
I meant that enabling/promoting localhost-based development (or development on dev/demo environments assumed to be shared, small, and inconsequential) for sorting out, trying, and demoing things, sometimes ludicrous ones, is a must when you want to favour innovation in your org.
It goes totally against IT security and siloed orgs (especially large ones), so it does not happen without a clear understanding and enablement of its benefits from very high up the management chain.
Totally with you on that, and in fact, I'm a massive practitioner thereof, to the point that I was regularly pushing a 2021 i7 Mac flat out until the kernel just gave up handling any USB input in a remotely reasonable fashion, from running several frameworks, servers, DBs, VMs, and containers all at the same time.
I was basically pressing Ctrl in that one guy's workflow with my project at the time. Fun as all hell. Love a thoroughly tasked computer.
I wonder if this is something WebAssembly can help with. I suppose it comes down to: what's the leanest abstraction to wrap your code in that can guarantee "write once, run anywhere"? Docker is supposed to solve this problem, but running multiple containers can be daunting and resource heavy.
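To make the "multiple containers" part concrete, here's a minimal sketch of a compose file for a hypothetical three-service local stack (service names and images are illustrative, not from any real project):

    # docker-compose.yml — hypothetical local stack
    services:
      api:
        build: ./api            # your application code
        ports:
          - "8080:8080"         # exposed on localhost
        depends_on:
          - db
          - cache
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: dev-only
      cache:
        image: redis:7

One `docker compose up` brings all of it onto localhost, but each extra service adds its own image, memory footprint, and startup ordering to worry about, which is where the "resource heavy" complaint comes from.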
On a side note, this last year I had the displeasure of developing on an existing all-serverless ETL system with 30+ lambdas all hooked together in confusing ways. Debugging was a nightmare. In the future I will try to avoid any project that requires me to dev in a non-local environment, unless there's a platform team producing tools to improve the developer experience.
Wasm changes nothing here. Your Docker image will have the exact same problems as a wasm file. If some service preloads 10GB of data and takes 2 mins to start, wasm alone won't magically make it smaller, faster, or otherwise better.
> 30+ lambdas all hooked together in confusing ways.
You can use localstack to run them locally, but tooling definitely sucks either way.
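For instance, a sketch of invoking one of those lambdas against LocalStack from a test script (the function name and payload are hypothetical; port 4566 is LocalStack's default edge endpoint):

    # invoke a Lambda deployed in LocalStack instead of real AWS
    import json
    import boto3

    client = boto3.client(
        "lambda",
        endpoint_url="http://localhost:4566",  # LocalStack edge port
        region_name="us-east-1",
        aws_access_key_id="test",              # LocalStack accepts dummy creds
        aws_secret_access_key="test",
    )

    # "my-etl-step" is a hypothetical function name
    response = client.invoke(
        FunctionName="my-etl-step",
        Payload=json.dumps({"record_id": 123}),
    )
    print(json.loads(response["Payload"].read()))

That gets you a local feedback loop for a single function, though wiring up 30+ of them with all their triggers is still painful.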
I hope there will always be a big enough minority of software engineers actively working against all the complexity and resume-driven development (RDD), pushing simpler, more scalable solutions instead of the Rube Goldberg machine of the cloud (there are plenty of uses for the cloud, but most companies just don't need it).
On the other hand, losing deployments to "cloud solutions" will make things worse for everyone and for general PC computing. Right now macOS and Windows are only usable for development and general PC computing because software engineers needed them to be. The moment that is fully replaced by the cloud, all major OSs will be locked down as much as possible.
Branding "serverless" as a cloud feature was a huge marketing ploy: serverless software existed before the cloud did, and it ran on your personal computer, not on someone else's server.