Request smuggling is an issue when reverse proxying and multiplexing multiple front-end streams over a shared HTTP/1.1 connection on the backend. HTTP/2 on the front-end doesn't resolve that issue, though the exploit techniques are slightly different. In fact, HTTP/2 on the front-end is a deceptive solution to the problem, because HTTP/2 is more complex (the binary framing doesn't save you: you still have to deal with unexpected headers, e.g. a client can still send a Content-Length header) and the exploits are less intuitive.
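A minimal sketch of that downgrade hazard (the proxy function and all names here are made up for illustration, not from any real proxy): in HTTP/2 the DATA frames define the body length, so a front end can accept a request whose Content-Length header contradicts the actual body, and a naive downgrade forwards that header verbatim to an HTTP/1.1 backend that trusts it.

```python
def downgrade_to_h1(method, path, headers, body):
    # Hypothetical front end that rewrites an HTTP/2 request as
    # HTTP/1.1 for the backend, forwarding headers verbatim,
    # including a client-supplied Content-Length it never validated.
    lines = [f"{method} {path} HTTP/1.1"]
    for name, value in headers:
        lines.append(f"{name}: {value}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode() + body

# HTTP/2 frames the body independently of Content-Length, so the
# front end accepts this request with a lying content-length: 0...
smuggled = b"GET /admin HTTP/1.1\r\nHost: internal\r\n\r\n"
wire = downgrade_to_h1(
    "POST", "/submit",
    [("host", "example.com"), ("content-length", "0")],
    smuggled,
)

# ...while the HTTP/1.1 backend trusts the header: it reads a
# zero-byte body and parses the leftover bytes as a second request.
head, _, leftover = wire.partition(b"\r\n\r\n")
```

The desync: the front end believes one request crossed the connection; the backend sees two.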
HTTP/1.1 is a simpler protocol and easier to implement, even with chunked Transfer-Encoding and pipelining. (For one thing, there's no need to implement HPACK.) It's trying to build multiplexing tunnels across it that is problematic, because buggy or confused handling of the line-delimited framing between ostensibly trusted end points opens up opportunities for desync that, in a simple 1:1 situation, would just be a stupid bug, no different from any other protocol implementation bug.
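To make the framing concrete, here is a deliberately minimal chunked-body decoder (no trailers, chunk extensions, or size limits; a sketch, not a hardened parser). The point is that when one hop frames a message by Content-Length and the next by Transfer-Encoding, they disagree on where the message ends, and the leftover bytes become the next "request":

```python
def decode_chunked(data: bytes) -> tuple[bytes, bytes]:
    # Minimal decoder for chunked Transfer-Encoding.
    # Returns (body, bytes remaining after the terminating 0-chunk).
    body = b""
    rest = data
    while True:
        line, _, rest = rest.partition(b"\r\n")
        size = int(line, 16)
        if size == 0:
            _, _, rest = rest.partition(b"\r\n")  # consume final CRLF
            return body, rest
        body += rest[:size]
        rest = rest[size + 2:]  # skip chunk data plus its CRLF

# A chunked parser sees an empty body here and treats everything
# after the 0-chunk as the start of the next pipelined request; a
# parser honoring a Content-Length of 6 would consume "0\r\n\r\nG"
# as the body instead. Two parsers, one byte stream, two framings.
payload = b"0\r\n\r\nGET /admin HTTP/1.1\r\nHost: internal\r\n\r\n"
body, leftover = decode_chunked(payload)
```

In a 1:1 client/server setup that disagreement is just a parsing bug; it's the shared, multiplexed backend connection that turns it into a smuggling primitive.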
Because HTTP/2 is more complicated, there are arguably more opportunities for classic memory safety bugs. Contrary to common wisdom, there's not a meaningful difference between text and binary protocols in that regard; if anything, text-based protocols are more forgiving of bugs, which is why they tend to promote and ossify the proliferation of protocol violations. I've written HTTP and RTSP/RTP stacks several times, including RTSP/RTP nested inside bonded HTTP connections (what QuickTime used to use back in the day). I've also implemented MIME message parsers. The biggest headache and opportunity for bugs, IME, is dealing with header bodies, specifically the various flavors of structured headers, and unfortunately HTTP/2 doesn't directly address that: you're still handed a blob to parse, same as with HTTP/1.1 and MIME generally. HTTP/2 does partially address the header folding problem, but it's also common to reject folded headers in HTTP/1.x implementations, something you can't do in e-mail stacks, unfortunately.
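An illustration of the "header bodies are the hard part" point: even something as simple as splitting a comma-separated header value has a microsyntax (quoted strings, escapes) that a naive `split(",")` gets wrong. This is a sketch under the plain quoted-string rules; real structured-field parsing (e.g. per RFC 8941) is considerably more involved, and the framing layer, HTTP/1.1 or HTTP/2 alike, gives you no help here.

```python
def split_header_list(value: str) -> list[str]:
    # Split a comma-separated header value, honoring quoted strings
    # and backslash escapes. Illustration only: real header grammars
    # vary per header and need stricter validation than this.
    items, buf, in_quotes = [], [], False
    i = 0
    while i < len(value):
        c = value[i]
        if c == '"':
            in_quotes = not in_quotes
            buf.append(c)
        elif c == "\\" and in_quotes and i + 1 < len(value):
            buf.append(value[i:i + 2])  # keep escaped char verbatim
            i += 1
        elif c == "," and not in_quotes:
            items.append("".join(buf).strip())
            buf = []
        else:
            buf.append(c)
        i += 1
    if buf:
        items.append("".join(buf).strip())
    return items
```

A naive comma split would break `Forwarded: for="1.2.3.4, 5.6.7.8", proto=https` into three items; the comma inside the quoted string is data, not a delimiter.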
Arguably, the complexity issue is not only the protocols themselves but also the fact that, thanks to the companies pushing HTTP/2 and HTTP/3, there are now multiple (competing/overlapping/incompatible) protocols.
For example, people pass requests received by HTTP/2 front-ends on to HTTP/1.1 backends.