Learning dozens of specific cloud services and shelling out money everywhere I go is way less appealing to me than buying a cheap dedicated server and firing up exactly what I need.
It'll take me far longer to figure out what Fargate and Cloudflare Pages and all these are, and infinitely longer to keep up with the latest and greatest because it keeps changing constantly. And even once I get it, I'll have very limited control over or understanding of my stack, and migration? I'm screwed. I'm not interested in playing games like that.
nginx is comparatively simple: run it, it serves. Config is easy; learn it once, run it anywhere. It doesn't get a dozen new features and breaking changes every month, it just does its job.
I guess I'd just rather understand the real underlying technologies than some crappy commercial wrapper for them that differs every time someone tries to rip me off.
Nginx is not relatively simple. It's not easy. It's an unintuitive, poorly documented minefield.
I agree with you in principle, but in practice these services replace dozens of hours of banging your head against software that's one misstep from blowing your legs clean off.
Sometimes you have better things to do than tweak nginx configs using the average of a dozen tutorials and docs that might as well be written in Middle English.
I don't have better things to do, which is why I bothered. I'm not sure if it was really worth it. Cloudflare took 5 minutes to set up.
The premise the OP states isn't that Cloudflare is easier, but that when that one misstep happens and tries to blow your legs clean off, you are at least in control of the environment and can debug/fix what's going on.
Sure it is, if your needs are simple and you follow a tutorial and don't just randomly spam stuff into your config files. Nginx can get really, really complicated, but it doesn't have to.
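For example, a minimal static-site config is about this much (a sketch; domain and paths are placeholders):

    server {
        listen 80;
        server_name example.com;   # placeholder domain
        root /var/www/example;     # directory of static files
        index index.html;
    }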
> these services replace dozens of hours of banging your head against software that's one misstep from blowing your legs clean off.
Yes, you can definitely spend hours figuring out Nginx, but you can shoot your foot off with a PaaS just as easily, and at least Nginx is a technology that you can use in many different situations. It's definitely worth learning a tool like that, versus a commercial service where you play in a sandbox and they can jack up the price or change the rules at any time.
This is written as a long-time web developer and sysadmin. A lot of comments like the above seem written to scare younger devs away from standard tools and into the arms of PaaS vendors.
The location resolution order, the arbitrary rule inheritance, the complete lack of useful errors, the abundance of footguns, the asinine defaults. Oh and automatic certificate renewal is still a pain. Purging the cache selectively took me a few days to set up.
Blame the user all you want, but after a decade of using nginx, I'm ready to claim that it's a pain to work with.
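One example among those footguns, the classic root-vs-alias trap (a sketch, paths made up):

    # "root" appends the location path; "alias" replaces it.
    location /static/ {
        root /srv/app;              # /static/a.css -> /srv/app/static/a.css
    }
    location /assets/ {
        alias /srv/app/static/;     # /assets/a.css -> /srv/app/static/a.css
        # drop the trailing slash and the resolved paths silently change
    }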
1) Almost all of these services have generous free tiers. Even if you're running a relatively high traffic site your bill will be practically $0.
2) These technologies are stable and usually not that complicated. You are missing out on a lot of productivity by not looking into them.
Understanding the "real underlying technologies" is a myth. The "underlying technologies" are constantly changing, unless you are using an extremely outdated stack: https://www.youtube.com/live/hWjT_OOBdOc?feature=share.
Your link is a guy saying people get so good with the old technologies that there is no point trying to compete with them. That only happens if they don't change much...
If you time traveled someone from a decade ago who knows how to configure an Apache or Nginx server to today, they would likely still be able to. It may not follow the latest best practices, but at worst they'll use something deprecated, have to google it, and be up and running in minutes.
Same with Spring/Java or .NET/ASP.NET Web API.
And you can still run a VPS or container that can handle significant traffic for free on many of the cloud platforms + migrating it is much much simpler.
My point is that you could make the whole "underlying technologies" argument for Apache back in the day when it was new. Why use Apache when I can understand the "underlying technologies"?
Things evolve over time and new tech slowly becomes so integrated into the stack that it is the underlying technology.
The underlying technologies very rarely change and offer very stable APIs. Apache has been around nearly 30 years, Nginx nearly 20. Both offered major advantages over existing solutions, but also interfaced with the rest of your codebase using standard mechanisms (CGI, FastCGI), so you could easily migrate to or from them.
A single vendor's solution will not (at least very much should not) become a standard underlying technology, so this isn't moving to the next underlying technology. You're just tying yourself to one vendor.
If it gets standardized across vendors or open sourced then it becomes a different story, but until then it's a massive gamble to assume it will, and that the standard chosen will derive from your current vendor's solution. Then what happens when the vendor decides to double prices and kill the free tier (also known as being acquired by Oracle)? Deprecates it? Goes bust? Etc.
There is a difference in abstraction between hooking API Gateway to a Lambda function, and writing and deploying an API on a Linux VM.
One is closer to the metal, so to speak, and provides a better basis for understanding how these cloud services (Lambda, etc) actually work behind the scenes. In this case, understanding the underlying technologies is not a myth.
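For illustration, the Lambda side of that pairing can be as small as this sketch (Python, API Gateway proxy integration; the handler name is whatever you configure):

    def handler(event, context):
        # API Gateway's proxy integration delivers the HTTP request as `event`
        # and expects a dict shaped like an HTTP response in return.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {"statusCode": 200, "body": f"hello {name}"}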
> The "underlying technologies" are constantly changing, unless you are using an extremely outdated stack
So they will also constantly change their behaviour in the edge cases. Thank you for giving me another reason not to use that crap. I'll stick with my extremely outdated stack.
Uh, what? Your product Saltcorn uses Webpack, React, Express, Docker and probably many more modern technologies. Those were just the ones I found looking at the repo for 5min.
Cloud and serverless platforms are similarly modern tech, except for deployment and hosting. Do you really think that e.g. AWS and Cloudflare "constantly change their behaviour in the edge cases"?
> Do you really think that e.g. AWS and Cloudflare "constantly change their behaviour in the edge cases"?
I mean, major Google APIs / SDKs do... and not just in edge cases. It's not at all uncommon for a vendor to decide a service is unprofitable and kill it, or to launch a new, better version and deprecate the old one, killing it off in a couple of years. That isn't fun when you heavily depend on it.
When you look at the sheer number of services AWS offers, it feels like it would only take one bad year, or a major competitor gaining an advantage and undercutting them on price, before they consider trimming down to a smaller core set of services. I'd bet the VPS offerings aren't what goes...
Or a standard comes out, they adopt it and deprecate the existing offering, giving you two years to migrate. Having to rework everything is a major cost to large firms and can kill a startup.
Yes, but I control all those dependencies with a lock file and pinned versions. We have tests for their behaviour, including a test script in CI that boots up the service locally, connects to it, and makes various assertions.
I think there is a difference here between the core behaviour and the edge-case behaviour. I would trust that the core behaviour doesn't change on a day-to-day basis, but the question is how the tools behave when you push them outside the intended core use cases. Can you really trust that services which constantly change their core implementation will keep working for your workflow?
TBH I would probably trust a CDN because I have a fairly simple use for such a service. If I were really pushing these tools, like running a video broadcast service or whatever, I would be much more worried.
I find it really hard to believe that you would run into rough edge cases with most platforms. If you have a small app a lot of platforms do deploys directly from GitHub repos. If you have a more complex app cloud platforms support things like managed Kubernetes and such.
What does something "outside the intended use case" even look like for a deployment and hosting platform?
If you're only using them for hosting and deployment there isn't really any lock-in either. That only occurs if you're using their other cloud services, and even then there are many platforms with similar services.
I agree that paid/cloud does not necessarily equal lock-in. I mean, an Ubuntu VPS is identical enough (usually) across clouds. Managed databases? Maybe. What about available Postgres extensions, etc.?
There are lots of cases where deployment needs some kind of customisation. This usually happens either around persistent state or around building in CI. E.g. in one project some years ago, we wanted to test in CI against a subset of the production database. So there has to be a script to subset it appropriately, which is not easy if you have complicated foreign key relationships, and you have to make sure confidential data doesn't make it across. We were not the only people doing this; I've seen it elsewhere.
Other example, my previous project: the main framework is Django with a React front end. The admin interface is Django HTML templates, but one of those templates has an embedded additional React component, because on that admin page we needed more interactivity. So that has to be built, and all of this has to be tested in CI.
What does non-standard deployment look like? Here is one example. For my current project I use wildcard domains so users can create their own tenants on their own subdomain. This would not work on e.g. Heroku or similar services, at least I don't know how.
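With nginx on my own box it's a few lines, something like this sketch (example.com and the upstream port are placeholders, and you need a wildcard TLS cert):

    server {
        listen 443 ssl;
        # capture the subdomain so the app knows which tenant this is
        server_name ~^(?<tenant>[^.]+)\.example\.com$;
        ssl_certificate     /etc/ssl/wildcard.example.com.pem;   # wildcard cert
        ssl_certificate_key /etc/ssl/wildcard.example.com.key;
        location / {
            proxy_set_header X-Tenant $tenant;
            proxy_pass http://127.0.0.1:8000;
        }
    }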
All of this could probably be done in Kubernetes, but it's also much simpler with bash scripts and systemd units.
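E.g. a unit like this sketch (names and paths are hypothetical), plus a small deploy script, is most of the orchestration a single box needs:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My web app
    After=network.target

    [Service]
    User=www-data
    WorkingDirectory=/srv/myapp
    ExecStart=/usr/bin/node server.js
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target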
You can just deploy Docker containers to many of these services, and the versions are pinned just like you are used to. I’m not sure what side effects you are expecting here.
To the extent that you can put your functionality as a dependency in a container, that's not what I'm talking about. If you can do that, then we are all hunky-dory.
The problem is with functionality that is only available as a remote API, because for whatever reason we wanted to be cloud native rather than rely on free and open source libraries. I cannot pin that dependency's version; at best I can choose which version of the API I am talking to, but if they have changed the underlying implementation, I can't ask them to roll that back just for me.
For what it’s worth, there isn’t much to learn with Cloudflare Pages. You grant it OAuth access to a particular repo, it builds master and serves the output directory. The only things you specify are the build command and the output directory, for example “hugo build”/“npm run build” and directory “out/”.
I use it for a few reasons. I like that it’s free and the hosting of the static content is at the edge. They also take care of some things automatically, like compressing the assets once at build time and serving them with the right headers.
I don’t think this is unique. I believe netlify and others have similar offerings. These providers are excellent for the use case of serving static content at low latency and low cost.
> It'll take me far longer to figure out what Fargate and Cloudflare Pages and all these are, and infinitely longer to keep up with the latest and greatest because it keeps changing constantly.
First, Fargate and Cloudflare Pages are completely different things. Fargate is "serverless" container hosting (it runs containers for ECS or EKS), while Pages is just static hosting (if you don't count Workers).
While you have a point about the former, you can't beat CF Pages (or Netlify, or Vercel, or any other modern static hosting system) in simplicity. You just push your changes to your own GitHub repo and it gets automagically deployed. It is also free up to the point your cheap dedicated server could handle. Try it and you will be pleasantly surprised.
Disagree. Fargate is very simple and, as far as I know, has not changed much in years. Give it a Docker image, watch it go. Want to leave? It's literally a Dockerfile; take it and run it elsewhere.
That's the "Linux Server Singleton" pattern from TFA, isn't it? I find it surprising that it's in 3rd position. While SSG looks simple enough to merit first position, going with the Cloudflare serverless proposition as the next step for beginners feels strange to me. I think you would be able to "graduate" to serverless edge functions after you grok the traditional web client/server approach, not before.
Sorry, but this advice just sounds really out of date. How is "a cheap server" ever cheaper than free? I don't think you understand just how generous and simple the free static hosting CDNs are. NGINX is fine but it's absolutely not simpler than saying "hey Cloudflare, serve my main branch at this URL" and then it does all the rest automatically.
I guess the title should have been "Four Ways to Deploy Web Apps", not "build" them. Granted, option #2 (functions) is a different beast, but all the others can be built the same way and just deployed differently. For example:
- You can develop a static (Hugo) web site, then deploy it on Fly.io using containers.
or
- You can develop a container-based web site, then deploy it on a bare Linux server (k8s is not necessary for a few containers).
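The container in the first example can be tiny; a sketch, assuming you run "hugo" first so the public/ directory exists:

    # Serve a prebuilt Hugo site with the official nginx image.
    FROM nginx:alpine
    COPY public/ /usr/share/nginx/html/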
I could recreate one of his examples (https://oldgames.win/) in 10-20 minutes from just a CSV file... and I could use something like https://simplescraper.io/ to scrape that data and find imagery.
Why recommend lock-in vendors when there are better options in between? There was no mention of Heroku which is easy and scalable.
Hugo is wonderful, until a deployment bug or misconfiguration exposes all of your server settings on a page that should be serving a 404 instead. WordPress, Hugo, and others also get constant attacks from hackers who can exploit each bug found on thousands of sites all at once.
Go has fantastic server examples and plenty of starter templates you can use to build your own server, and deploy to Heroku and scale to millions of users far cheaper in engineering and financial terms.
This is a nice little overview that is not afraid of being opinionated.
It leaves out some of the affordances that CF Pages, Vercel and Netlify provide. (Discussed as option 1 in the article). Those are of course moving targets that provide more stuff every few months.
I wouldn't recommend PHP to anyone who isn't already invested/immersed in the language.
WordPress for blogs or websites on a shared host with automatic updates and a few select plugins? Yes. Laravel or Symfony for applications? OK.
But PHP by itself has way too many footguns and doesn't provide nearly enough affordances to make them worth it, IMO. Plus, it's a language that breaks backwards compatibility way too often and drops support for older versions. You need some sort of protection layer that shields you from all of this, so you might end up with something that is way more complex and slow than it needs to be.
Gotta disagree with you here. Modern PHP is quite great, especially when paired with Laravel or Symfony. They had to introduce breaking changes to move the language forward, but I don't recall anything huge after 7.0 or so. There's a huge ecosystem of packages to do just about anything web-related, and the documentation and community resources are way better than anything I've seen with, say, Java/Spring.
Laravel is a great way for a small team to get an application up and running quickly and really shouldn't be overlooked.
We’re not disagreeing though. I was contrasting using something like Laravel with just PHP. And it’s not just documentation and ecosystem: you also get a stable, nice foundation with a clear upgrade path that protects you from the wonky foundation underneath.
> Plus it's a language that breaks backwards compatibility way too often
Do you have examples of such compatibility breakages?
I noticed Nextcloud and WordPress both seem to lag a bit behind the latest version of PHP. But on the other hand, I also see such things in the Java world, where people are stuck on Java 8 or 11, even though backward compatibility is one of the strengths of the language.
I also have PHP scripts I wrote for PHP 5.1 that still work unchanged on PHP 8.2, 12 years later. Well, I had to fix some warnings for completely dumb things I did, like putting mandatory arguments after optional arguments (I'm not sure why it ever worked), but the upgrades have been painless for me.
Plus, I find the latest PHP version quite pleasant, with the type annotations and other new language features, and the execution model where your app doesn't run all the time; its PHP files are just called by the web server (through php-fpm). No Tomcat or uWSGI or unicorn and virtualenvs to manage.
But I guess I'm a person who is "already invested/immersed in the language" :-)
(I'm also invested in Python, Java and Node.js though)
> Do you have examples of such compatibility breakages?
The WordPress integration test suite broke with PHP 8.0. They seem to be lagging behind for 8.1 and 8.2 as well. Remember, new keywords were introduced, and since those versions code just breaks in weird ways if you have conflicting names or depend on any library that does.
There was a somewhat recent article about a string interpolation cleanup planned for the next major version that will break a whole bunch of code as well.
In the near future you won't be able to assign new properties to instantiated objects anymore. Think of anyone creating objects dynamically to be encoded as JSON or rendered as HTML or XML, or generating them by parsing/reading any of those formats. I expect a lot of breakage and patches here.
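Concretely, both of those show up as deprecation warnings on PHP 8.2 (a sketch):

    <?php
    // Deprecated in 8.2: "${name}"-style string interpolation.
    $name = "world";
    echo "Hello ${name}\n";   // warns; "Hello {$name}" is the surviving form

    // Deprecated in 8.2: dynamic properties on ordinary classes.
    class Point {}
    $p = new Point();
    $p->x = 1;                // warns now, slated to become an error later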
> I also have PHP scripts I wrote for PHP 5.1 that still work unchanged on PHP 8.2, 12 years later. Well, I had to fix some warnings for completely dumb things I did, like putting mandatory arguments after optional arguments (I'm not sure why it ever worked), but the upgrades have been painless for me.
In many cases the warnings, improvements, and deprecations make sense. If you have a few PHP projects and use decent tooling, you can often fix those things easily and are sometimes glad for it. There might be hidden bugs that get discovered that way.
But what about many projects? What about transitive dependencies?
PHP is moving fast and makes no excuses. It's simply not a language that prioritizes stability. Often for good reason, but you have to ask yourself if the churn and instability are worth it.
> Plus, I find the latest PHP version quite pleasant, with the type annotations and other new language features, and the execution model where your app doesn't run all the time; its PHP files are just called by the web server (through php-fpm).
I agree on that last part. That's where PHP shines uniquely IMO. It's really optimized for stateless execution, which makes it very easy to reason about on a macro level.
But on a micro level it's tough. There are so many little gotchas and weird behaviors. A death by a thousand cuts.
> The WordPress integration test suite broke with PHP 8.0.
WordPress has been badly coded software since its creation, and as a result they have issues keeping up with the PHP release cycle.
> In the near future you won't be able to assign new properties to instantiated objects anymore.
Wrong: the stdClass that json_decode uses still works and will keep working. If you need dynamic properties without defining them in your class, add the #[AllowDynamicProperties] attribute to that class.
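A quick sketch of both escape hatches:

    <?php
    // stdClass (what json_decode() returns) is exempt from the deprecation.
    $obj = json_decode('{"a": 1}');
    $obj->b = 2;              // fine, no warning

    // Opting an ordinary class back in (PHP 8.2+):
    #[AllowDynamicProperties]
    class Bag {}
    $bag = new Bag();
    $bag->anything = 'ok';    // no warning either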
> There was a somewhat recent article about a string interpolation cleanup planned for the next major version that will break a whole bunch of code as well.
The next major version comes in several years; until then you get deprecation warnings. If you can't fix problems that will occur in 2-3 years, you should think about your software development skills.
> But what about many projects? What about transitive dependencies?
If you have many projects but no manpower to maintain them, that's a bigger problem for the projects themselves, e.g. security.
And you can use PHPUnit combined with Rector across many projects.
> PHP is moving fast and makes no excuses. It's simply not a language that prioritizes stability.
PHP is stable if you write good code, and it's on the same level as Node.js or Python.
> There are so many little gotchas and weird behaviors.
That's the reason for the upcoming breaking changes; if PHP had stuck at PHP 5 it would be dead.
> The next major version comes in several years; until then you get deprecation warnings, (...)
AKA churn. Breakage is not OK.
PHP breaking every couple of years and showering you with deprecation warnings doesn't fix that.
It's like saying: "In a month I will punch you in the face, so you'd better wear a helmet!" Just because you announced it doesn't make punching people in the face OK.
There are languages and tools that don't break within that time frame and there are languages and tools that _never_ break.
There are also better ways of making breaking changes: if you do break things, give me a canonical, reliable, automatic upgrade path that also rewrites vendored dependencies.
> (...) if you can't fix problems that will occur in 2-3 years, you should think about your software development skills.
It's not that we can't fix them. It's that we have better things to do but _must_ fix them.
Many of those changes are not critical or security related. They are just changes for the sake of change, without consideration for stability. Everyone hop onto the hype and churn train!
The baseline expectation is that my fundamental tools are backwards compatible and stable. People also have dependencies, transitive dependencies and so on. Dependency management is already hard enough without breakage.
Without stability there must be some _major_ advantage that we cannot get anywhere else. Stability ensures that fundamental libraries and tools can be _done_, outside of voluntary optimization and extension. It means you don't have to fix code in multiple places and multiple projects without a compelling reason.
That's what I was getting at.
---
Note: There are good things about PHP and parts of its ecosystem that I'm not talking about in this comment. This is a rant based on unnecessarily inflicted pain.
I never got the endless maintenance in software. We build so much physical stuff, but I remember only cars getting recalled sometimes. If you write something for a ZX Spectrum, it will just work a million years from now. It seems there is no good excuse for it.
Comparatively speaking PHP has likely the most backwards compatibility with respect to actual wall time when compared with other common options.
It has many tools for the competent and the incompetent.
I'm sure your personal experience sucked, just like many people's personal experience with Java.
But let's not conflate the output of some set of practitioners with the design of the language. The fact that PHP can be so accommodating to the incompetent can be seen as a virtue. Dumpster-fire makers can set things ablaze with any tool, and the avalanches of trash PHP code are endless. It is what it is.
I'm sorry, but this is a really ill-informed comment, and nothing said here is actually true. You're going to need to spend some substantial time backing up what you've said.
I did, in another comment. This is not unsubstantiated hate, but comes from using PHP in anger in many projects and being responsible for maintenance, automation, testing, modernizing, refactoring, solving performance issues, and so on.
A particular application of #2 which works well as an extension of #3 is to have your workers implement a lightweight backend-for-frontend layer between your frontend and a more traditional API.
This approach works pretty well when you have a pre-existing API and need to, say, minimize the number of round trips to fetch data without coupling your frontend(s) and backend too tightly.
> Eliminate the need to manage and inject API tokens into containers and servers and instead authorize containers and servers to perform those operations.
What does this look like in practice? Can someone provide example scenarios that this is describing?
Example: you don’t need AWS keys to write to SQS, because the EC2 instance has an identity (“principal”) applied to it, and SQS has been configured to trust requests from that identity for some set of queues. It’s typically cloud-specific in the implementation and the resources being requested.
This. To state it a slightly different way: instead of "create me an instance and inject the following AWS credentials into it" and then having the application on that instance know where to look for the AWS credentials, you say "create me an instance and assign it an IAM role of 'webserver'", and then you write an access policy that says "any of my instances with the role 'webserver' can access this bucket/database/queue." The magic is that your app uses the SDK to assume the 'webserver' role, and it does this under the covers by finding ephemeral keys that AWS injects into your instance via the metadata service. No shared keys == goodness.
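In code, the "no keys" part looks like nothing at all, which is the point. A sketch with boto3 (the queue URL is made up):

    import boto3

    # No keys in code, env, or config files: on an instance with an IAM
    # role attached, the SDK's default credential chain fetches short-lived
    # credentials from the instance metadata service automatically.
    sqs = boto3.client("sqs", region_name="us-east-1")
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
        MessageBody="hello",
    )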
My understanding is that the major public clouds operate public key infrastructure on end users' behalf. It feels like magic because there are large teams of very smart and capable people making it feel like magic.
Surprised not to see Render.com listed under option 4, despite it being an option often discussed here, with simplicity of deployment comparable to Heroku.
I love Render and have been using it heavily, personally as well as professionally. It fulfills my needs efficiently. But I sometimes find the name is a pain point: it is really hard to search for, especially combined with generic words. It seems fixed now, but I remember at some point even "render status" wouldn't lead me to the right place.
Consider creating a tunnel with Cloudflare Argo (think a more sophisticated ngrok). I just finished deploying a toy app from a local machine about 15 minutes ago.
I read this entire page, and it doesn't offer what I want at all. I have a localhost server that I want to deploy so it's publicly accessible, like option 3 in the original article.
Maybe the branding is a bit opaque; you might have more luck searching for Cloudflare Tunnels, since they've changed some names. That's probably my bad for not using the word "Tunnel" in my original comment. Here's a blog post describing the product [1], here's [2] a link right into the middle of the CLI docs, to a command where they publish a site from localhost, and here's [3] a link to my crappy "blog" post that summarizes the process.
Like I said, I literally just deployed apps from localhost yesterday and today.
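The quick-and-dirty version is a single command, if I remember right (an ephemeral tunnel, no account config needed):

    # expose a local dev server on a temporary public URL
    cloudflared tunnel --url http://localhost:8080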
This seems like “how to build a web app for a very tiny subset of people who don’t want to learn about deploying web apps”.
Maybe I’m in a bubble? It seems like 99% of the people I know actually running profitable companies have an infrastructure that roughly matches “some Linux server somewhere”, and upward from there it all just moves to AWS/Azure, but it’s still just some Linux machines, maybe with a load balancer in front of them.
If you’re writing software and don’t seem covered by this blog post, don’t feel like you’re out of the loop on something. I think this author is just writing about the subset of the ways that he knows about.
I mean honestly, “Hugo static sites”… how about a directory of HTML files hosted by nginx or Apache?
I read it differently. It looks like a good outline of 4 basic approaches to building and hosting web sites. I would word them as:
1. Stateless static site. There is a variety of ways to build it (such as React or Svelte or purely by hand) and a variety of ways to host it (such as a directory of HTML files hosted by nginx).
2. Function services like Cloudflare Workers, AWS Lambda, Google Cloud Functions, Azure Functions, etc. This is a way to store state without having to manage servers.
3. Single conventional Linux server.
4. Distributed containers.
All web developers should familiarize themselves with these 4 basic approaches and prefer them in approximately the order given. It's good to avoid jumping to distributed containers if you can get away with functions, a single Linux server, or even static HTML. It's also important to move down the list as the app grows rather than reinvent K8s.
I can't have my database on a separate Linux box? Seems odd to insist on either precisely one server or go straight to container orchestration.
That said, I agree with you about the other generalisations!
EDIT: actually, where do PaaSes fit into this taxonomy? I'd argue that level 2 is actually the PaaS level. You might choose an edge PaaS (Workers), a distributed one (Fly.io), or a centralised one (Elastic Beanstalk) depending on your needs. But ultimately this level is: "I write code, you run it". FaaSes are a subset of PaaS providers which focus on stateless models, but there's no reason you can't combine FaaS for your code with a managed database offering.
Maybe it's "on the order of a single server", or conveys that the servers are "statically" arranged and/or manually managed rather than automatically (auto scaling, service discovery)?
This is a much better outline of the article than the article's own table of contents. Makes much more sense, even if it's still incredibly narrow in scope.
> I mean honestly, “Hugo static sites”… how about a directory of HTML files hosted by nginx or Apache?
Whether you put HTML files on the mentioned services or have a build step with a static site generator is not the point.
Where does your nginx server run, how do you keep it up and running, how do you provision it, how do your files end up on the server?
You can write a bunch of shell scripts and do all of this on a cheap VM for 5 bucks. Or you can get it for free on some of these services, with git integration, global caching, webhooks, previews, cheap storage, image optimization, etc. You get unlimited or at least a lot of sites, and it's all automatic. On some you can even just drag and drop a directory into their web interface, and the website is there in a couple of seconds.
It's basically a different layer of abstraction that's free/cheap and works nicely for simple use cases or as a specialized part of a more complex deployment architecture.
I have personally recommended netlify, cloudflare etc. to people who have the technical ability to write HTML/CSS or use a static site generator. Those things are very easy to use and carry around less risk than having to provision and configure a web server yourself.
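For comparison, the shell-script route on a cheap VM mentioned above can be about this small (a sketch; the host and paths are placeholders):

    #!/bin/sh
    # build the site, then sync the output to the directory nginx serves
    hugo
    rsync -avz --delete public/ deploy@vm.example.com:/var/www/example/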
They may cut the extent of a free offering, but I suppose they won't drop it. The ability to make a quick proof of concept, to just try something and play with it, is very important for bringing in paying customers. People tend to buy with more confidence things they already know and are comfortable with. People tend to pay for things that are widely known, with tons of examples and explanations online, and free services generate much more of these.
That might be true, but it really depends on how many can convert. Recently Heroku dropped their free tier, and Fly.io moved to a credit-card-only free trial to deter scammers.
I don't disagree with you, but I do think the economics of static sites are pretty different which may keep the free tiers around for longer. Even if it's "free for sites under 1MB", which would cover a lot of sites.
I agree with you here. I find that certain kinds of "hosted" services have their own home-grown techno-babble, and they rely on the ignorance of their potential customers to lure people in. They sound like they are hiding some insane level of complexity made easy for consumption... but really all one needs is a single Linux instance + nginx + some files.
Not sure why the author insists on "exactly one" server. You can easily also host a SQL server in the same datacenter (or even run a SQL server on the same server!). This was how most web apps were built before the other options existed.
It is trendy nowadays to just use SQLite for a single server claiming speed advantages, but there are disadvantages as well. In my last startup, our SQL server was not directly connected to the Internet, isolating it from potential hackers.
> Not sure why the author insists on "exactly one" server.
I believe the assumption is, if you need more than one server, using containers (option #4) is a better solution in general. Obviously this is not exactly true for your 1 server for app, 1 server for SQL server scenario, but might make sense for most setups.
In my experience it's pretty common for databases to be accessible only via localhost. We had our office IP whitelisted, but now some database clients (DataGrip, DBeaver...) can SSH tunnel for access. Plus phpMyAdmin is still not uncommon.
Option #4 will give you autoscaling without having to deal with k8s.
IIRC all of the options listed still involve writing a Dockerfile, though. Render.com, Cyclic.sh, and Railway.app are also in that category(ish), but will automate the build more like Heroku. (This is off the top of my head; please correct me if I misremembered anything.)
The problem with deploying using Docker is that you no longer have any of the advantages of Docker, unless you count a Dockerfile as an advantage. But it’s really just a non-standard way to write shell scripts to configure a system.
Deploying using Docker containers means hosting multiple nested operating systems on multiple separate pieces of hardware, and then trying to coordinate them, when you most likely actually want them living on the same machine sharing the same loopback interface (like you have during development), or on the same physical or at least logical network with a hardware firewall between them.
Instead, when you deploy Docker containers to the cloud, they are on different machines (often in different data centers) connected only by a VPC with only primitive software firewalling and no way to monitor the traffic.
Docker is meant to run isolated systems together, to share resources. You get none of that benefit. But the hosting provider does, at your cost.
Well, there's also another, more old-school option: distribution via floppies and CDs. A lot of good zines were distributed this way. Just joking.
On a more serious note: it is good to be aware of, and track, how much we are tied to a given service, because we might end up being taken hostage by a cloud provider who can extort us for a long time before we manage to migrate.
His ordering is all wrong. Lambda-like websites are significantly more difficult to program and manage than a monolithic Linux server. Graduating from the Lambda to the Linux server? No thanks.
In bucket number four, there’s also the option of a PaaS experience in your own cloud. Of course there are always trade-offs in control and complexity, but if you want to deploy to Fargate or Google Cloud Run without getting bogged down in writing pipelines and infrastructure as code, and you want the Vercel-level developer experience that we’re all getting used to, check out this category of tools, including one where I’m a cofounder: withcoherence.com!
Feel free to ping hn@withcoherence.com if we can answer any questions or help you get running…
My Web site is designed and programmed and runs as intended. I was delayed by independent events, beat those back, and now am collecting some initial data.
But from the original post (OP) about "Four Ways" here, apparently I made a really big mistake; nope, several biggies: I wrote the software using Windows 7 Professional, Visual Basic .NET, ASP.NET, ADO.NET, and SQL Server. What a user receives is old HTML and some CSS with little, maybe no, JavaScript. I didn't write any JavaScript at all, but ASP.NET wrote a little for me.
At one point, I wanted a key-value store, so I used two instances of one of the .NET collection classes instead of Redis. That was a biggie mistake?
So, apparently my work is none of the "Four Ways" of the OP. Biggie mistakes, right?
I don't get it: Looks to me like for some decades now, all around the world, people have been able to build (program, develop, etc.) Web sites using Windows 7, .NET, SQL Server.
E.g., as I recall, Markus Frind developed a Web site using those tools, had the usage grow, ..., and sold the site for $500+ million. He did something wrong?
Now there are some rules I don't know about, rules that say I must just junk my code, 100,000 lines of typing, and start over?
Those "Four Ways", do they include something for my use of collection classes as simple versions of Redis? Or do they include Redis?
Oh, and at its core, my Web site has some original applied math; do those "Four Ways" have that already?
I didn't get the memo that I can't use Windows 7, .NET, and SQL Server!
What am I supposed to do, use something better than Windows 7 Professional, .NET, SQL Server, and my applied math code?