
Not familiar with Nomad; how does that work under the hood? Do you proxy all traffic through this third party, who then load balances with a regular old HTTP proxy, or is it actually self-hosted by the set of friends? With multiple DNS A records this shouldn't work (roughly 1 in N requests will fail if one of the N IPs is down), so I'm curious how this is different from just hosting with HashiCorp directly.



Each friend's Nomad "client" (a node in the cluster) would accept a "job", which here is the HTTP server. The missing piece is reporting back to some authority which clients are running the service and on which ports. This could be done with DNS SRV records; most commonly, Consul provides DNS in a Nomad cluster.
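
For a concrete picture, here is a minimal sketch in Go of the discovery side, assuming a local Consul agent answering DNS on its default port 8600 and a job whose service is registered as "web" (both the port and the service name are illustrative):

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        // Consul's DNS interface normally listens on port 8600 of the
        // local agent, so point a custom resolver there instead of the
        // system DNS.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                var d net.Dialer
                return d.DialContext(ctx, "udp", "127.0.0.1:8600")
            },
        }

        // RFC 2782-style lookup: _web._tcp.service.consul returns one SRV
        // record per healthy instance of the "web" service, carrying both
        // the node and the dynamically allocated port.
        _, srvs, err := r.LookupSRV(context.Background(), "web", "tcp", "service.consul")
        if err != nil {
            panic(err)
        }
        for _, srv := range srvs {
            fmt.Printf("%s:%d\n", srv.Target, srv.Port)
        }
    }

Each SRV record carries both the node and the port, which is exactly the "which clients, on which ports" information mentioned above.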

This article is likely to give you a good idea of the architecture: https://developer.hashicorp.com/nomad/tutorials/load-balanci...


So if I understand correctly, the HAProxy they suggest becomes the new central point of failure? Sorry for being skeptical, but I'm not really seeing the advantage.


You can also run multiple HAProxy instances with identical config. One machine goes down? Your remaining proxies still balance load across the jobs.
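
As a sketch of what "identical config" means here (the addresses and ports are made up), every proxy carries the same backend list; in practice that list is usually templated from Consul (e.g. with consul-template) rather than maintained by hand:

    frontend www
        bind *:80
        default_backend nomad_jobs

    backend nomad_jobs
        # One "server" line per Nomad allocation; "check" enables
        # health checks so a dead node is dropped from rotation.
        server friend1 192.0.2.10:25678 check
        server friend2 192.0.2.11:21043 check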


But then people would need to know which other domains point at the other proxies. Might as well manually type in the other domain.

The only way I see to avoid a SPOF is with anycast, which I think involves running your own ISP so you can arrange your own BGP sessions and announce the same prefix at multiple locations.
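
On the multiple-A-records point upthread: a client that retries other records only eats a timeout, rather than a hard failure, when one IP is down; the 1-in-N failure mode assumes the client gives up after the first address. A rough Go sketch of that client-side failover (the hostname is hypothetical):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialAny tries each resolved address for host in turn, so a single
    // dead IP costs one timeout instead of a failed request.
    func dialAny(host, port string) (net.Conn, error) {
        ips, err := net.LookupIP(host)
        if err != nil {
            return nil, err
        }
        var lastErr error
        for _, ip := range ips {
            addr := net.JoinHostPort(ip.String(), port)
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
        }
        return nil, lastErr
    }

    func main() {
        conn, err := dialAny("example.com", "80")
        if err != nil {
            panic(err)
        }
        fmt.Println("connected to", conn.RemoteAddr())
        conn.Close()
    }

Whether you get this behavior for free depends on the client; anycast sidesteps the question by putting a single address in front of many locations.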



