you should google for proxy-protocol. so here's the thing - nginx cannot INJECT proxy protocol. I'm not an expert in nginx, but at least the versions used in k8s couldn't.
It can read/pass-through proxy protocol headers. So if you have haproxy in front of nginx.. it will work fine (or ELB/GLB).
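As a minimal sketch of that chain (addresses, ports, and names here are made up): haproxy adds the PROXY header with `send-proxy`, and nginx is told to expect and parse it:

```
# haproxy.cfg -- haproxy injects the PROXY protocol header toward nginx
frontend fe_http
    bind *:80
    mode tcp
    default_backend be_nginx

backend be_nginx
    mode tcp
    # send-proxy prepends the PROXY protocol v1 line on each connection
    server nginx1 10.0.0.10:8080 send-proxy
```

```nginx
# nginx.conf -- nginx accepts the PROXY header and recovers the client IP
server {
    listen 8080 proxy_protocol;       # parse inbound PROXY header
    set_real_ip_from 10.0.0.0/8;      # trust haproxy's address range
    real_ip_header  proxy_protocol;   # take the client IP from that header
}
```

The `real_ip` bits need the ngx_http_realip_module; without them nginx still accepts the header but logs the proxy's IP instead of the client's.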
Which is why there's another beta ingress - the "keepalived-vip" repo in k8s ingress. However, not a lot of people seem to be using haproxy, primarily because haproxy cannot do a zero-downtime reload.
A couple of weeks back on k8s Slack, I was discussing using GitHub's multibinder to solve that one problem and move ahead with an haproxy ingress.
protip: please join the #sig-onpremise channel on Slack. There's not a lot of momentum for bare-metal deploys otherwise.
Regardless, would I be right in assuming that your issues are mostly related to ingress of TCP streams? My use case is almost exclusively HTTP, so I might be missing the worst of it.
please double check - nginx's support for proxy protocol is the ability to read it; it cannot inject it. If nginx is the first thing your traffic hits, it will not add proxy protocol headers. Again, I will not stake my reputation on this, but I remember going deep into it about 2 months back.
if you are using the nginx ingress, then you need to move all of your existing config into its own config file format. Since setting up full internal TLS in kubernetes is buggy, I didn't want to terminate my traffic on the ingress. I also really did not want to move my (fairly complex) nginx configuration to the ingress.
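For what it's worth, the kubernetes nginx ingress controller does have an SSL-passthrough mode for exactly this "don't terminate on the ingress" case - the controller has to be started with `--enable-ssl-passthrough`, and it routes on SNI rather than HTTP. A hedged sketch (the Ingress name, host, and service are hypothetical; the API group matches what was current around this discussion):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp                    # hypothetical
  annotations:
    # hand the raw TLS stream to the backend instead of terminating it
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - backend:
          serviceName: myapp     # hypothetical service terminating TLS itself
          servicePort: 443
```

Since routing happens on the SNI hostname, the backend sees the untouched TLS stream, but you lose per-path routing and anything else that needs the decrypted request.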
Which is why I'm a big believer in haproxy ingress - it was designed to interface with nginx.
However, you are already one of the "enlightened" ones - you actually know what an ingress is. A significant amount of k8s Slack traffic is "how do I set up a loadbalancer" / "how do I make the cluster available to the outside world".
nginx can inject proxy_protocol - I know because I've enabled it by mistake! The kubernetes nginx ingress controller doesn't support it though. I think that is because most people use HTTP headers (X-Forwarded-For) instead. But if you do want it, you should open an issue explaining the use case so we can check our assumptions!
@justinsb - you're the expert here and I will not dispute this :D
I still believe that there is NO better tool for using kubernetes than kops. I now use the 100.64 subnet that you guys figured out ("carrier-grade NAT", seriously?) in docker swarm.
but genuine question - can you show me how? Because I honestly went crawling through the nginx codebase and could only find the handling of proxy protocol on inbound (client-side) connections.
I found NO place showing how it could be injected to chain reverse proxies together (which is what it was invented for in the first place).
It's the same proxy_protocol keyword for both, which is why this is so confusing. In the listen line it means "parse and remove proxy protocol from the inbound connection"; as a top-level directive on the server, "proxy_protocol on;" means "add proxy protocol to the outbound connection".
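Concretely, in the stream module both directions of that keyword look something like this (addresses are hypothetical):

```nginx
stream {
    server {
        # inbound: listen-time parameter parses and strips the PROXY
        # header arriving from the proxy in front of us
        listen 443 proxy_protocol;

        # outbound: directive form re-adds a PROXY header toward the
        # upstream, which is how reverse proxies get chained
        proxy_protocol on;
        proxy_pass 10.0.0.20:443;
    }
}
```

With only the `listen` parameter, nginx consumes the header; with only the directive, it originates one. Using both, it relays the original client address down the chain.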
Looking at that though, maybe it only works with SSL passthrough... but that is the typical use case for proxy protocol instead of X-Forwarded-For anyway.
https://githubengineering.com/glb-part-2-haproxy-zero-downti...