
Learning dozens of specific cloud services and shelling out money everywhere I go is way less appealing to me than buying a cheap dedicated server and firing up exactly what I need.

It'll take me far longer to figure out what Fargate and Cloudflare Pages and all the rest are, and infinitely longer to keep up with the latest and greatest, because it keeps changing constantly. And even once I get it, I'll have very limited control over and understanding of my stack. And migration? I'm screwed. I'm not interested in playing games like that.

nginx is comparatively simple: run it, it serves. The config is easy; learn it once, run it anywhere. It doesn't get a dozen new features and breaking changes every month, it just does its job.
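
For a static site, the whole config can be as small as this (a minimal sketch; the domain and paths are placeholders):

    server {
        listen 80;
        server_name example.com;
        root /var/www/example;   # directory holding your static files
        index index.html;
    }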

I guess I'd just rather understand the real underlying technologies than some crappy commercial wrapper for them that will differ every time someone tries to rip me off.




Nginx is not relatively simple. It's not easy. It's an unintuitive, poorly documented minefield.

I agree with you in principle, but in practice these services replace dozens of hours of banging your head against software that's one misstep from blowing your legs clean off.

Sometimes, you have better things to do than tweak nginx configs using the average of a dozen tutorials and docs that might as well be written in Middle English.

I don't have better things to do, which is why I bothered. I'm not sure if it was really worth it. Cloudflare took 5 minutes to set up.


> Nginx is not relatively simple. It's not easy. It's an unintuitive, poorly documented minefield.

And that's why I use Caddy.
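
A reverse proxy with automatic HTTPS is roughly this much Caddyfile (a sketch; the domain and port are placeholders):

    example.com {
        reverse_proxy localhost:8080
    }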


The original point comparing nginx with services makes sense if you replace "nginx" with "caddy". Nginx is a disaster from a usability point of view.


The premise the OP states isn't that Cloudflare is easier, but that when that one misstep happens and tries to blow your legs clean off, you at least are in control of the environment and can debug and fix what's going on.

Not so easy on these platforms.


> Nginx is not relatively simple.

Sure it is, if your needs are simple, you follow a tutorial, and you don't just randomly spam stuff into your config files. Nginx can get really, really complicated, but it doesn't have to.

> these services replace dozens of hours of banging your head against software that's one misstep from blowing your legs clean off.

Yes, you can definitely spend hours figuring out Nginx, but you can shoot your foot off with a PaaS just as easily, and at least Nginx is a technology you can use in many different situations. It's definitely worth learning a tool like that, versus a commercial service where you play in a sandbox and they can jack up the price or change the rules at any time.

This is written as a long-time web developer and sysadmin. A lot of comments like the one above seem to be written to scare younger devs away from standard tools and into the arms of PaaS vendors.


The location resolution order, the arbitrary rule inheritance, the complete lack of useful errors, the abundance of footguns, the asinine defaults. Oh, and automatic certificate renewal is still a pain. Selectively purging the cache took me a few days to set up.
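
The inheritance one alone keeps biting people. An illustrative sketch:

    server {
        add_header X-Frame-Options DENY;

        location /api/ {
            # This silently drops X-Frame-Options for /api/ responses:
            # add_header directives are inherited only if the location
            # defines none of its own. They replace, they don't merge.
            add_header Cache-Control no-store;
        }
    }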

Blame the user all you want, but after a decade of using nginx, I'm ready to claim that it's a pain to work with.


1) Almost all of these services have generous free tiers. Even if you're running a relatively high traffic site your bill will be practically $0.

2) These technologies are stable and usually not that complicated. You are missing out on a lot of productivity by not looking into them.

Understanding the "real underlying technologies" is a myth. The "underlying technologies" are constantly changing, unless you are using an extremely outdated stack: https://www.youtube.com/live/hWjT_OOBdOc?feature=share


Your link is a guy saying people get so good with the old technologies that there is no point trying to compete with them. That only happens if they don't change much...

If you time-traveled someone from a decade ago who knows how to configure an Apache or nginx server to today, they would likely still be able to. It may not follow the latest best practice, but at worst they'll hit something that's been retired, have to google it, and be up and running in minutes.

Same with Spring/Java or .NET/ASP.NET Web API.

And you can still run a VPS or container that can handle significant traffic for free on many of the cloud platforms, and migrating it is much, much simpler.


I don't understand what you're trying to say.

My point is that you could make the whole "underlying technologies" argument for Apache back in the day when it was new. Why use Apache when I can understand the "underlying technologies"?

Things evolve over time and new tech slowly becomes so integrated into the stack that it is the underlying technology.

He even references webservers like Apache as part of this process: https://www.youtube.com/live/hWjT_OOBdOc?feature=share&t=765


The underlying technologies very rarely change and offer very stable APIs. Apache has been around nearly 30 years, nginx nearly 20. Both offered major advantages over existing solutions, but both also interfaced with the rest of your codebase using a standard mechanism (CGI), so you could easily migrate to or from them.

A single vendor's solution will not (or at least very much should not) become a standard underlying technology, so this isn't moving to the next underlying technology. You're just tying yourself to one vendor.

If it gets standardized across vendors or open-sourced then it becomes a different story, but until then it's a massive gamble to assume that it will, and that the standard chosen will derive from your current vendor's solution. And what happens when the vendor decides to double prices and kill the free tier (also known as being acquired by Oracle)? Deprecates it? Goes bust?


There is a difference in abstraction between hooking API Gateway to a Lambda function, and writing and deploying an API on a Linux VM.

One is closer to the metal, so to speak, and provides a better basis for understanding how these cloud services (Lambda, etc) actually work behind the scenes. In this case, understanding the underlying technologies is not a myth.


> The "underlying technologies" are constantly changing, unless you are using an extremely outdated stack

So they will also constantly change their behaviour in the edge cases. Thank you for giving me another reason not to use that crap. I'll stick with my extremely outdated stack.


Uh, what? Your product Saltcorn uses Webpack, React, Express, Docker and probably many more modern technologies. Those were just the ones I found looking at the repo for five minutes.

Cloud and serverless platforms are similarly modern tech, just for deployment and hosting. Do you really think that e.g. AWS and Cloudflare "constantly change their behaviour in the edge cases"?


> Do you really think that e.g. AWS and Cloudflare "constantly change their behaviour in the edge cases"?

I mean, major Google APIs/SDKs do... and not just in edge cases. It's not at all uncommon for a vendor to decide a service is unprofitable and kill it, or to launch a new, better version and deprecate the old one with a shutdown a couple of years out. That isn't fun when you heavily depend on it.

When you look at the sheer number of services AWS offers, it feels like it would only take one bad year, or a major competitor gaining an advantage and undercutting them on price, before there's a risk they consider trimming down to a smaller core set of services. I'd bet the VPS offerings aren't what goes...

Or a standard comes out, they adopt it and deprecate the existing offering, giving you two years to migrate. Having to rework everything is a major cost to large firms and can kill a startup.


Yes, but I control all those dependencies with a lock file and pinned versions. We have tests for their behaviour, including a test script in CI that boots up the service locally, connects to it, and makes various assertions.
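
Conceptually, the CI script is nothing more than this (a rough sketch; the start command, port and assertion are hypothetical):

    #!/usr/bin/env bash
    set -euo pipefail

    # Boot the service in the background, wait, assert, tear down.
    npm start &              # assumed start command
    pid=$!
    sleep 10                 # a real script would poll the port instead
    curl --fail --silent http://localhost:3000/ > /dev/null   # assumed port
    kill "$pid"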

I think there is a difference here between the core behaviour and the edge-case behaviour. I would trust that the core behaviour does not change on a day-to-day basis, but the question is how the tools behave when you push them outside the intended core use cases. Can you really trust that services which constantly change their implementation will keep working for your workflow?

TBH I would probably trust a CDN, because I have a fairly simple use for such a service. If I were really pushing these tools, like running a video broadcast service or whatever, I would be much more worried.


I find it really hard to believe that you would run into rough edge cases with most platforms. If you have a small app a lot of platforms do deploys directly from GitHub repos. If you have a more complex app cloud platforms support things like managed Kubernetes and such.

What does something "outside the intended use case" even look like for a deployment and hosting platform?

If you're only using them for hosting and deployment there isn't really any lock-in either. That only occurs if you're using their other cloud services, and even then there are many platforms with similar services.


I agree that paid/cloud does not necessarily equal lock-in. I mean, an Ubuntu VPS is (usually) identical enough across clouds. Managed databases? Maybe. What about available Postgres extensions, etc.?

There are lots of cases where deployment needs some kind of customisation. This usually happens either around persistent state or around building in CI. E.g. in one project some years ago, we wanted to test in CI against a subset of the production database. So there has to be a script to subset it appropriately, which is not easy if you have complicated foreign key relationships, and you have to make sure confidential data does not make it across. We were not the only people to do this; I've seen it elsewhere.

Other example, from my previous project: the main framework is Django with a React front end. The admin interface is Django HTML templates, but one of those templates has an additional embedded React component, because that admin page needed more interactivity. So that has to be built, and all of this has to be tested in CI.

What does non-standard deployment look like? Here is one example: for my current project I use wildcard domains so users can create their own tenants on their own subdomain. This would not work on e.g. Heroku or similar, at least I don't know how.
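
On your own box it's one wildcard server block in nginx (a sketch; the domain and upstream port are placeholders):

    server {
        listen 80;
        server_name *.example.com;   # any tenant subdomain

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;   # so the app can resolve the tenant
        }
    }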

All of this could probably be done in Kubernetes, but it's also much simpler with bash scripts and systemd units.
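
For example, a minimal unit file (a sketch; names and paths are placeholders):

    [Unit]
    Description=my-app web service
    After=network.target

    [Service]
    User=app
    WorkingDirectory=/srv/my-app
    ExecStart=/usr/bin/node server.js
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target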


You can just deploy Docker containers to many of these services, and the versions are pinned just like you are used to. I'm not sure what side effects you are expecting here.


To the extent that you can put your functionality into a container as a dependency, that's not what I'm talking about. If you can do that, then we are all hunky-dory.

The problem is with functionality that is only available as a remote API, because for whatever reason we wanted to be cloud native rather than rely on free and open-source libraries. I cannot pin that dependency's version; at best I can choose which version of the API I am talking to, but if they have changed the underlying implementation, I can't ask them to roll it back just for me.


> Almost all of these services have generous free tiers. Even if you're running a relatively high traffic site your bill will be practically $0.

That is what gets you hooked.


Yep. The first one is always free.


For what it’s worth, there isn’t much to learn with Cloudflare Pages. You grant it OAuth access to a particular repo, it builds master, and it serves the output directory. The only things you specify are the build command and the output directory, for example “hugo build”/“npm run build” and the directory “out/”.

I use it for a few reasons. I like that it’s free and that the static content is hosted at the edge. They also take care of some things automatically, like compressing the assets once at build time and serving them with the right headers.

I don’t think this is unique. I believe Netlify and others have similar offerings. These providers are excellent for the use case of serving static content at low latency and low cost.


> It'll take me far longer to figure out what Fargate and Cloudflare Pages and all the rest are, and infinitely longer to keep up with the latest and greatest, because it keeps changing constantly.

First, Fargate and Cloudflare Pages are completely different things. Fargate is "serverless" container compute (under ECS or EKS), while Pages is just static hosting (if you don't count Workers).

While you have a point about the former, you can't beat CF Pages (or Netlify, or Vercel, or any other modern static hosting system) in simplicity. You just push your changes to your own GitHub repo and it gets automagically deployed. It is also free for the kind of traffic your cheap dedicated server could handle. Try it and you will be pleasantly surprised.


Disagree. Fargate is very simple and, as far as I know, has not changed much in years. Give it a Docker image, watch it go. Want to leave? It's literally a Dockerfile; take it and run it elsewhere.
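
The whole migration story fits in two commands (image name and port are placeholders):

    # Build once, then run the same image on Fargate, a VPS, or your laptop:
    docker build -t myapp .
    docker run -p 8080:8080 myapp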


That's the "Linux Server Singleton" pattern from TFA, isn't it? I find it surprising that it's in third position. While SSG looks simple enough to merit first position, going with the Cloudflare serverless proposition as the next step for beginners feels strange to me. I think you would only be able to "graduate" to serverless edge functions after you grok the traditional web client/server approach, not before.


If you want the best of both worlds, install Dokku or Piku. You get a PaaS-like experience on your own VPS.
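
With Dokku, for instance, a deploy ends up looking much like Heroku's (a sketch; the server and app names are placeholders):

    # On the VPS, after installing Dokku:
    dokku apps:create myapp

    # On your machine:
    git remote add dokku dokku@my-vps:myapp
    git push dokku main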


Sorry, but this advice just sounds really out of date. How is "a cheap server" ever cheaper than free? I don't think you understand just how generous and simple the free static-hosting CDNs are. nginx is fine, but it's absolutely not simpler than saying "hey Cloudflare, serve my main branch at this URL" and then having it do all the rest automatically.



