It's safe to say that if you collect enough signals from every possible layer (of which the above are but a few), it becomes trivial to build a model that identifies the majority of bots.
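The "model" doesn't have to start out fancy. A minimal sketch of the idea is a weighted sum over whichever signals fired for a request; the signal names and weights below are purely illustrative, not from any real system:

```python
# Hypothetical signal names and weights, chosen for illustration only.
SIGNAL_WEIGHTS = {
    "header_order_mismatch": 0.4,   # headers don't match the claimed browser
    "tls_fingerprint_unknown": 0.3, # ClientHello not seen from real browsers
    "no_js_execution": 0.2,         # client never ran the page's JavaScript
    "datacenter_ip": 0.1,           # source IP belongs to a hosting provider
}

def bot_score(fired_signals: set) -> float:
    """Sum the weights of the signals that fired.

    A score near 1.0 means nearly every layer disagrees with the
    client the request claims to be.
    """
    return sum(w for name, w in SIGNAL_WEIGHTS.items()
               if name in fired_signals)
```

In practice you'd feed the same signals into a trained classifier, but even a hand-tuned scorer like this catches the lazy bots, and that's the point: each extra layer filters out another tier of effort.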
However, you're then left with the really hard problem of bots that drive real browsers. But you got a long way before you had to actually look at traffic patterns, and in the meantime you've significantly raised the costs for those operating the bots.
It's also worth noting that if you really do gather enough signals, bot writers cannot control them all. Anyone can rewrite an HTTP header, but can you send the right HTTP headers in the right order, with the right TLS cipher list and TLS ClientHello, so that you look exactly like Chrome on Windows? Good luck.
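This is exactly the idea behind TLS fingerprinting schemes like JA3: the fields a client offers in its ClientHello (version, cipher suites, extensions, curves, point formats) are joined in a fixed order and hashed, so a bot that claims a Chrome User-Agent but offers a different cipher list produces a different fingerprint. A minimal sketch, with made-up field values:

```python
import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Compute a JA3-style fingerprint from ClientHello fields.

    Each field's values are joined with '-', the five fields are
    joined with ',', and the result is MD5-hashed. Any deviation in
    the offered ciphers or extensions changes the hash.
    """
    fields = [
        str(tls_version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Two clients claiming the same browser but offering different ciphers
# hash to different fingerprints (values here are illustrative, not
# real Chrome parameters).
real = ja3_fingerprint(771, [4865, 4866, 4867], [0, 10, 11], [29, 23], [0])
fake = ja3_fingerprint(771, [4865, 4866], [0, 10, 11], [29, 23], [0])
```

The defender only needs a table of fingerprints observed from genuine browsers; the attacker has to reproduce the entire handshake byte-for-byte, which most HTTP libraries simply cannot do without patching the TLS stack.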