I don't think you need to take the advice of "use Kubernetes" as a literal "there are no other options", but more along the lines of "use an existing implementation that you will be able to hire engineers to maintain, rather than a kludge of batch scripts and home-rolled abstractions"
> Im sticking with Fargate, full, native IaaC support, feature parity with k8s
Fargate by itself doesn't provide anything that k8s does; ECS does. You can also run k8s on Fargate via EKS, for example.
> Or plain EC2s in ASG.
This is exactly what this post is saying not to do. ASGs cover only a subset of what k8s does. k8s handles load balancing, service discovery, deployments, and health checks, for example. You may not need all of those, and that's fine, but most containerised applications IME benefit from the _majority_ of k8s features.
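To make that concrete, here's roughly what asking for those features looks like (a minimal sketch; the app name, image and paths are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                     # k8s keeps 3 copies running and does rolling deployments
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:1.2.3
              ports:
                - containerPort: 8080
              readinessProbe:         # health check: pod only gets traffic once this passes
                httpGet:
                  path: /healthz
                  port: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                    # service discovery: reachable in-cluster as "my-app"
    spec:
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080            # traffic is spread across the ready pods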
> I don't think you need to take the advice of "use Kubernetes" as a literal "there are no other options"
It looks pretty literal there. And why does everyone keep talking about service discovery? Put an LB in front of your app instances and give it a DNS name. There, sorted: no service discovery needed, the service is reachable at that one URI.
> When you spin more instances as load grows, what updates the DNS?
If you're using cloud LBs, this happens automatically. If not, you can have the instances register themselves in DNS when they come up. You can also have an out-of-band system register things in DNS based on rules (or maybe you already have software that supports health-check-based registration, so you can add DNS entries from a fixed list of static IPs as instances become healthy/powered on)
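If you did go the self-registration route, the hook itself is small -- something like this cloud-init sketch (the hosted zone ID, record name and use of the EC2 metadata endpoint are illustrative assumptions, not a recommendation):

    #cloud-config
    runcmd:
      - |
        # look up this instance's private IP from the EC2 metadata service
        IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
        # upsert an A record for this instance (low TTL so removals propagate quickly)
        aws route53 change-resource-record-sets \
          --hosted-zone-id Z0000000EXAMPLE \
          --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"app.internal.example.com\",\"Type\":\"A\",\"TTL\":60,\"ResourceRecords\":[{\"Value\":\"$IP\"}]}}]}"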
> When you have more than one service interacting, or when your LB restarts / fails over, what updates the DNS?
You can have the LBs communicate peer-to-peer so they can update DNS when one becomes unavailable (if A and B can't reach C, they remove its DNS record). Some care needs to be taken to prevent things like split brain, but there are established patterns for cluster formation; something like keepalived could be used. You could also use VIPs instead of DNS
You end up with service discovery either way: either you discover a load balancer or you discover an endpoint directly. Load balancers let you route traffic more granularly without the client being aware of the server topology, which is good when you're exposing services to the web/public clients. On the other hand, load balancers add a certain overhead and can become bottlenecks for high-traffic services
> you can have the instances register themselves in DNS when they turn on. You can also have an out of band system register things in DNS based on rules (or maybe...
Yeah, don't do this. This is _exactly_ what the article is saying not to do. It's a nightmare to maintain, totally non-standard.
> You can have the LBs communicate peer-to-peer so they can update DNS when they become unavailable <...>
Or you could use a system that doesn't require you to write custom peer-to-peer coordination for a solved problem. Doing this instead of using an existing system is an interesting decision.
Because service discovery is pretty much the next problem people find themselves facing once they've decided not to use Kubernetes or ECS or something, and it's also something those systems solve very well, without relying on DNS and all its quirks. To use my favourite saying, "it's always DNS".
> Put an LB in front of your app instances and give it DNS name. Here, sorted, no service discovery needed, service is reachable at this URI only.
Dumping a service behind a load balancer and relying on DNS is a heavy-handed approach. Sure it's "simple", for some definitions of the word simple, but so is writing 360 lines of yaml and/or terraform to spin up an EKS or ECS cluster
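And most of that yaml is fairly mechanical. For a sense of scale, the core of an eksctl cluster definition is something like this (a sketch; the cluster name, region and node sizes are made up -- the remaining lines are mostly node groups, IAM and networking detail):

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo-cluster
      region: us-east-1
    nodeGroups:
      - name: workers
        instanceType: m5.large
        desiredCapacity: 3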
> I don't think you need to take the advice of "use Kubernetes" as a literal "there are no other options", but more as along the lines of "use an existing implementation that you will be able to hire engineers to maintain
Yeah, exactly this actually :) I didn't want to get too verbose in the little bullet points (I already use parentheticals way too often lol), but it'd be more accurate to have written "Use Kubernetes, or some other tool that takes care of container networking, lifecycle, health checks etc. for you" -- I bet if you squint, even platforms like GCP App Engine would fit the bill here