Hacker News

I've been the "you're not Google" person for several years, but I've softened my position.

The thing is -- it depends. Sometimes, when everyone knows a complex system well, it becomes easy.

One example comes to mind -- Kubernetes. 90% of teams don't need all its complexity, and I was the "you don't need it" person for some time. But now I see that when everyone knows it, it's actually much easier to deploy even simple websites on it, because it's a common lingo and you don't spend time explaining how things are deployed.
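For what it's worth, the "common lingo" really is compact for the simple case. A hypothetical minimal website deployment (all names and the image are placeholders) is roughly:

```yaml
# Hypothetical sketch: one Deployment plus a Service is enough for a
# simple site; an Ingress would sit in front for external traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
spec:
  replicas: 2
  selector:
    matchLabels: {app: website}
  template:
    metadata:
      labels: {app: website}
    spec:
      containers:
        - name: web
          image: registry.example.com/website:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  selector: {app: website}
  ports:
    - port: 80
      targetPort: 8080
```

Anyone who knows Kubernetes can read this without further explanation, which is exactly the "common lingo" benefit.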

It's not like civil engineering, where an over-engineered bridge would cost a lot more in materials.




If you have a simple website, you can containerize your backend, use much simpler services from AWS, and serve your static assets from S3.
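As a sketch of that split (the image tag, entrypoint, and bucket name are all hypothetical): the backend gets a plain container, and the static assets get synced to S3 separately.

```dockerfile
# Hypothetical backend container -- no Kubernetes required to run it
# on ECS/Fargate, App Runner, or similar.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

The static half is then a one-liner at deploy time, e.g. `aws s3 sync ./dist s3://my-site-bucket` (bucket name hypothetical), with CloudFront optionally in front.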

Kubernetes is rarely the right answer for simple things even if Docker is.


> much simpler services from AWS

Like what, Lambda? I've seen so many horrible hacks and shit done with it (and other AWS services, cough API Gateway cough) that these days I'd rather have a set of Kubernetes descriptors and Dockerfiles.

At least that combination all but enforces Infrastructure-as-Code, and there's (almost) no possibility of "had to do live hack XYZ in the console and forgot to document it or apply it back in Terraform".


In my experience, you are better off with ECS/Fargate than Lambda for serving an API. You get much more flexibility.

Also, I've witnessed people editing Lambda code through the console instead of doing a real deploy. What a mess...


You can’t edit Lambda code in the console when you deploy a Docker image to Lambda.

As far as flexibility goes, while there have been third-party libraries that let you deploy your standard Node/Express, .NET/ASP, or Python/Flask app to Lambda, there is now an official first-party solution:

https://github.com/awslabs/aws-lambda-web-adapter
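The adapter's whole pitch is that the app itself needs no Lambda-specific code: it runs as a Lambda extension and forwards invocation events as ordinary HTTP requests to your server. A minimal stdlib sketch of such an app (routing and port are illustrative, not the adapter's API):

```python
# A plain HTTP app with no Lambda-specific code. The same code runs
# unchanged locally, in any container runtime, or behind the
# aws-lambda-web-adapter, which proxies Lambda events to it over HTTP.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo the request path back as JSON -- a stand-in for real routing.
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Silence per-request stderr logging.
        pass

def serve(port: int = 8080) -> HTTPServer:
    # Bind but don't block; callers run serve_forever() themselves.
    return HTTPServer(("0.0.0.0", port), Handler)
```

Packaging for Lambda is then, per the adapter's documented pattern (pin your own version), essentially one extra Dockerfile line copying the adapter binary into `/opt/extensions/`.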

And as far as ECS goes, it's stupid simple.

I've used this for years:

https://github.com/1Strategy/fargate-cloudformation-example/...
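The shape of such a template, heavily abridged and with all names, ARNs, and the image hypothetical (not an excerpt of the linked repo), is roughly a task definition plus a service:

```yaml
# Hypothetical abridged CloudFormation sketch of a Fargate deployment.
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: website
    Cpu: "256"
    Memory: "512"
    NetworkMode: awsvpc
    RequiresCompatibilities: [FARGATE]
    ExecutionRoleArn: !Ref ExecutionRole
    ContainerDefinitions:
      - Name: web
        Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/website:latest  # hypothetical
        PortMappings:
          - ContainerPort: 8080
Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    LaunchType: FARGATE
    DesiredCount: 2
    TaskDefinition: !Ref TaskDefinition
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets: !Ref Subnets
        AssignPublicIp: ENABLED
```

In practice a load balancer, security groups, and IAM roles round it out, but the core really is this small.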


> Also, I've witnessed people editing Lambda code through the console instead of doing a real deploy. what a mess...

Yeah, that's exactly what I am talking about. An utter nightmare to recover from, especially if whoever set it up thought they needed to follow some uber-complex AWS blog post involving >10 AWS services and didn't document any of it.


You can't edit Lambda code directly in the console when using Docker deployments.


AWS App Runner

https://aws.amazon.com/blogs/containers/introducing-aws-app-...

Google has something similar.


GCP has Cloud Run, which looks similar. App Runner is basically a wrapper on top of Fargate, right?
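For comparison, deploying to Cloud Run is a single command; the service, project, image, and region below are all placeholders:

```shell
# Hypothetical Cloud Run deployment -- names are placeholders.
gcloud run deploy website \
  --image gcr.io/my-project/website:1.0.0 \
  --region us-central1 \
  --allow-unauthenticated
```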


Yep. On AWS, every "serverless" compute service is just a wrapper on top of Firecracker, including Lambda and Fargate.


Your response is a perfect example of my point. Each time you say "much simpler services", you still _need to explain_ the setup for those simpler services. Some people know them, some don't. E.g. a project may eventually grow out of Lambda's RAM limitations, but no one on the team knew about them. Kubernetes, meanwhile, is a one-size-fits-all setup, even if I don't like it.

And yes, I use Cloud Run myself, but only for my one-person projects. For team projects, consistency is much more important (the same way to access/monitor/version everything, etc.).

PS: I would say even AWS/GCP is already huge overkill for most projects. But for some reason people don't see exactly the same problem with starting on a cloud right away.


Lambda can use up to 10 GB of RAM, and there is also App Runner.
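That limit is just a per-function setting; bumping it is one CLI call (function name hypothetical):

```shell
# Hypothetical: raise a function's memory to the 10 GB ceiling.
aws lambda update-function-configuration \
  --function-name my-fn \
  --memory-size 10240
```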

And "using AWS" can be as simple as starting off with Lightsail -- a VPS service with fixed pricing:

https://aws.amazon.com/lightsail/


RAM is just one example. Every simpler service has its limitations, and if everyone (including new hires) knows the simpler service well, it's perfect. E.g. in my experience everyone knew App Engine at some point, and it worked well for us. Now it's a zoo of devops pieces, so I tolerate Kubernetes only because everyone kinda knows it.

And Kubernetes was just one example of my "you're not Google" point. There are many more technologies that are definitely overkill but make a good common denominator, even when they're 1000x more complex than the task at hand needs.

PS: Btw, I dunno why people downvoted your comment. It fits the HN guidelines at the bottom, so upvoted.


The problem is that it can create a chain reaction of complexity, because it opens up possibilities for over-engineering -- "Yes, it's a bit over-engineered, but k8s makes it manageable for us anyway!" -- consciously or subconsciously. I'd often suspect that some restrictions on what's possible/acceptable would have produced a significantly leaner overall design in the end.





Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
