Unnecessarily harsh rant: senior staff/principal engineers who have never operated (or been on call for) anything important come to evangelize.

I suppose this is mainly a thought for projects like neondb/cockroachdb/stackgres (which I hadn't heard of, but it was linked in the thread). It might be reasonable if you need a very large number of DB instances, but for the typical business that needs "a couple" of database instances, I can't imagine that putting Kubernetes on top would ever serve you better. I'm staying as far away as I can.




> I suppose this is mainly a thought for projects like neondb/cockroachdb/stackgres

StackGres is "just" a platform for running Postgres on Kubernetes. It helps you deploy and manage HA, connection pooling, monitoring, automated backups, upgrades, and many other things. Whether you have a tiny Postgres instance or hundreds of beefy clusters with many instances each is up to you. It's not a distributed database (unlike the other ones mentioned); it is still "vanilla" Postgres.
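
A minimal SGCluster manifest is roughly the following (a simplified sketch; check the CRD reference for the exact, current schema):

    apiVersion: stackgres.io/v1
    kind: SGCluster
    metadata:
      name: demo            # hypothetical cluster name
    spec:
      instances: 3          # primary plus replicas; the operator handles HA failover
      postgres:
        version: "16"
      pods:
        persistentVolume:
          size: 10Gi

Pooling, monitoring and backup configuration hang off the same resource or companion CRDs, which is the pitch: one declarative spec instead of hand-wiring the equivalent tooling yourself.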

Disclosure: Founder of OnGres (company behind StackGres)


You won't see databases on k8s in enterprise production environments. Startups or companies/services with lower reliability requirements, sure. But don't expect to walk into a Fortune 500, stand up a Postgres operator in production, and replace the existing federated solution.


> You won't see databases on k8s in enterprise production environments. Startups or companies/services with lower reliability requirements, sure. But don't expect to walk into a Fortune 500, stand up a Postgres operator in production, and replace the existing federated solution.

Blanket statements like that should be taken with a grain of salt.

F500s are not one thing. You don't have to scratch deeply to find teams running production DBs on k8s (ignoring or accepting the trade-offs, of which there are many, including working with vendors and with existing DBAs and their solutions), and you'll find DBAs evaluating those same trade-offs for themselves.

I personally think that running DBs on a multi-tenant k8s cluster, on nodes that weren't specifically allocated for them, is strapping in for a bad ride.
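
If you do run them on k8s, the usual mitigation is dedicated nodes: taint them, then pin the database pods with a matching toleration and node selector. A minimal sketch using standard k8s scheduling primitives (the label/taint names here are made up):

    # Reserve a node for databases:
    kubectl label nodes db-node-1 workload=database
    kubectl taint nodes db-node-1 workload=database:NoSchedule

    # In the database pod spec (e.g. of a StatefulSet):
    nodeSelector:
      workload: database
    tolerations:
      - key: workload
        operator: Equal
        value: database
        effect: NoSchedule

That removes the noisy-neighbor problem; it doesn't remove the rest of the trade-offs.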


I don't mean to nitpick, but we are saying the same thing.


Ideally you should not be seeing k8s at all in mission-critical infrastructure at any tech company. I know a few FAANGs that stay away from it.


I disagree with that. Many Fortune 500s are running k8s to power critical infra. GMF processes all OnStar data in real time on k8s, GitHub runs entirely on k8s, etc. You need the personnel and the tools to manage it, but at a certain point k8s makes sense. There are still use cases where k8s is not the solution.

EDIT: parts of Actions, Codespaces, and Packages are not run on k8s, but 80% of GitHub's services are


> GitHub runs entirely on k8s

That's really not the endorsement you think it is.


Your comment doesn't really help us understand why.


GitHub experiences outages pretty regularly, for example on 12 separate days last month: https://www.githubstatus.com/history


So what? Nothing here is sufficient to conclude it has anything to do with k8s whatsoever.

For example, “users cannot resume Codespaces created before the incident” sounds a lot more like an application-level problem.


The point above was that it isn't a good endorsement. Correlation is not causation, but the reverse is also true: you can't rule causation out just because all you have is correlation.


Why? Yes, the operations can be a bit messy, but in practice it solves the "I want to run, update, and deploy my service without worrying about hardware allocation" problem. Otherwise you end up building your own implementation of half of it.
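
Concretely, that problem statement fits in one small manifest; the scheduler picks machines, restarts crashed replicas, and rolls out new versions. A minimal sketch (the image name and resource numbers are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-service
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
            - name: app
              image: registry.example.com/my-service:1.2.3
              resources:
                requests:
                  cpu: 500m
                  memory: 512Mi

Doing the same on raw VMs means building the scheduling, health-checking and rollout machinery yourself; hence the "half of it".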



