Of the ecosystems I’m familiar with, Python application servers have terrible HTTP/2 support: neither gunicorn nor uwsgi supports it, and even new hotness like uvicorn is still pretty far from it.
I don’t think Ruby is doing much better? Correct me if I’m wrong.
But why would you need perfect HTTP/2 support in a real-world application server? It is never going to terminate client traffic directly; it will sit behind a load balancer, which can speak HTTP/1.1 to it. Sure, if you are at web scale (or even well below it) you want HTTP/2 everywhere for optimization's sake. But in every other case, even on a solo project, you can easily enough put nginx in front of it, or a cloud-native solution, or HAProxy, or whatever.
The whole point of this article is that proxies speaking HTTP/2 with clients and HTTP/1.1 with servers introduce new vulnerabilities. The author found such vulnerabilities in AWS ALB, several WAF solutions, F5 BigIP, and others.
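To make the failure mode concrete, here is a minimal, purely illustrative Python sketch of one class of downgrade bug: CRLF injection when an HTTP/2 request is re-serialized as HTTP/1.1. HTTP/2 header values are length-prefixed binary fields, so they can legally contain bytes that become framing delimiters in HTTP/1.1. The `naive_downgrade` function is hypothetical and not code from any of the products mentioned; it just shows what happens when a proxy copies header values without validation.

```python
# Hypothetical sketch of a naive HTTP/2 -> HTTP/1.1 downgrade.
# Not taken from any real proxy; it only illustrates why unvalidated
# header values are dangerous once re-serialized as HTTP/1.1.

def naive_downgrade(method: str, path: str, headers: dict, body: bytes) -> bytes:
    """Re-serialize an HTTP/2 request as HTTP/1.1 with no validation."""
    lines = [f"{method} {path} HTTP/1.1"]
    for name, value in headers.items():
        # No check for CR/LF in the value: in HTTP/2 this is a plain byte
        # string, but in HTTP/1.1 it becomes a line/request delimiter.
        lines.append(f"{name}: {value}")
    lines.append(f"content-length: {len(body)}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode() + body

# An attacker-controlled header value containing CRLFs turns into extra
# header lines -- or an entire second request -- on the backend connection.
evil = naive_downgrade(
    "POST", "/",
    {"host": "example.com",
     "x-info": "ok\r\n\r\nGET /admin HTTP/1.1\r\nhost: example.com"},
    b"",
)
print(evil.decode())  # the backend parses two requests where the proxy saw one
```

A downgrade done safely has to reject or sanitize CR/LF in names and values and resolve conflicting length information; per the article, the vulnerable products missed checks of this kind in one way or another.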
Yeah, but serving traffic directly from an application server is probably even worse, with a plethora of other failure modes.
EDIT: and yes, I understand that you should use HTTP/2 on the LB and HTTP/2 on the backend to get the best of both worlds.
EDIT2: anyway, my opinion is that the general reaction to a security discovery like this one shouldn't be "let's stop using this tech immediately" but "let's get this patched ASAP".
I forwarded this discussion to the lead maintainer of HAProxy and he confirmed that HAProxy is not impacted by this. It doesn't surprise me. He implements things to the strictest interpretation of the specs.