
Installing a managed log ingestor in Kubernetes is stupidly easy. On GCP, for example, here's the guide to getting it done [1]: two kubectl commands, and you get centralized logging across hundreds of nodes in your cluster and the thousands of containers running on them. Most other platforms (like Datadog) have similar setups.
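
To give a sense of the effort involved, the flow in that guide boils down to applying a config and a DaemonSet for the node-level logging agent. The manifest names below are placeholders, not the real URLs from the guide:

    # sketch only -- the actual manifest URLs are in the linked guide [1]
    kubectl apply -f <logging-agent-configmap.yaml>
    kubectl apply -f <logging-agent-daemonset.yaml>

Once the agent DaemonSet is running on every node, it tails container stdout/stderr and ships it to the logging backend without any changes to application code.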

Infrastructure-level monitoring is also very easy. If you're on Datadog, for example, you set the KUBERNETES=true environment variable on the Datadog agent, and you instantly get events for stopped containers, with the stop reason (OOM, eviction, etc.), which you can configure granular alerting on.
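
If the agent is already running as a DaemonSet, that's a one-line change; the DaemonSet name below is an assumption, not necessarily what your install uses:

    # assumes the agent is deployed as a DaemonSet named "datadog-agent"
    kubectl set env daemonset/datadog-agent KUBERNETES=true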

Let's say you're in a service-oriented environment and you want detailed network-level metrics between services (request latency, status codes, etc.). No problem: two commands and you have Istio [2]. Istio has Jaeger built in for distributed tracing, with an in-cluster dashboard, or you can export the spans to any backend that supports OpenTracing. You can also export these metrics to Datadog or most other metrics services you use.
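
For reference, the quick-start in [2] is roughly the following; the exact file names change between Istio releases, so treat this as a sketch rather than the canonical install:

    # from an unpacked Istio release -- file names vary by release, see [2]
    kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
    kubectl apply -f install/kubernetes/istio-demo.yaml

The demo manifest is the one that bundles Jaeger and the in-cluster dashboards mentioned above; leaner profiles leave them out.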

[1] https://kubernetes.io/docs/tasks/debug-application-cluster/l...

[2] https://istio.io/docs/setup/kubernetes/quick-start/




I will admit that these things are slightly easier on Kubernetes, but my original point was mostly just that Kubernetes itself doesn't really provide any of these things in a meaningful way - you just described a bunch of separate, nontrivial systems that solve many, but not all, logging/monitoring needs.



