1) Bandwidth is pretty irrelevant now. Even players on cellular networks have megabits of bandwidth. I stopped spending much time optimizing for packet size while building the networking for Dota 2. Nobody is playing on a 14.4k modem anymore.
2) Server placement is still an issue. It's still ~200ms round trip from New York to Sydney, for example. Fortunately, cloud infrastructure makes getting servers closer to your players much easier now. You don't have to physically install servers in data centers in each region.
3) Packet loss still occurs, but it's rare enough now that the gap between using TCP and UDP is narrowing. Modern TCP implementations like Microsoft's are amazing at handling loss and retransmission. That said, I'd probably use QUIC for game networking if I were writing an engine from scratch these days.
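If you're targeting the browser, the QUIC-flavored option there is WebTransport, which gives you unreliable datagrams (for per-frame state) and reliable streams (for one-off events) over the same connection. A minimal client-side sketch; the endpoint URL and applySnapshot are placeholders:

```ts
// Assumes a browser (or other runtime) with WebTransport support.
declare function applySnapshot(bytes: Uint8Array): void; // placeholder game hook

const transport = new WebTransport("https://game.example.com:4433/session"); // placeholder URL
await transport.ready;

// Unreliable, unordered datagrams: good for state that's superseded every frame.
const datagramWriter = transport.datagrams.writable.getWriter();
function sendStateSnapshot(bytes: Uint8Array): void {
  void datagramWriter.write(bytes); // a dropped datagram is simply replaced by the next one
}

// Reliable, ordered stream on the same connection: good for one-off events.
const reliable = await transport.createBidirectionalStream();
const reliableWriter = reliable.writable.getWriter();
function sendEvent(bytes: Uint8Array): void {
  void reliableWriter.write(bytes);
}

// Receive loop for datagrams: hand each arriving snapshot to the game.
(async () => {
  const reader = transport.datagrams.readable.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    applySnapshot(value);
  }
})();
```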
Having worked on a fairly popular .io game mostly played by kids on phones and chromebooks over wifi, I concur with everything you said.
1) We updated at around 60Hz, and bandwidth was never an issue. Everything was binary encoded and many values were hand-compressed down to the number of bits they actually needed (a rough sketch of that kind of bit packing is below, after point 3), but we didn't run any additional compression or ever feel the need to optimize further, and these were games with 100 players in an instance.
2) Probably the biggest key to success was global server placement. That mattered most, and we ended up renting servers in 10-20 regions around the globe to keep latency down for players. I didn't work on this part, but I know it was quite a bit of work, and also very experimental. Physical proximity of servers didn't always translate to lower latencies; crossing certain country borders could be surprisingly laggy.
3) This is the one that really shocked me. As I said, players were engaging with the game in almost the worst conditions imaginable: weak chromebooks over wifi, with all communication over websockets (which basically behave like TCP). Still, packet loss was not an issue. We prioritized low latency over smoothness, so our server just blasted out the latest state to the client ~60 times a second, and the client displayed it mostly like a dumb terminal (a sketch of that loop is below). This is approximately how you're supposed to do it with UDP, but dropped packets and retransmits are supposed to make it unworkable over TCP. We just YOLO'd it with TCP anyway and it worked great! I'm sure at some point in the past, before internet infrastructure got so good, it would have been a disaster, but it seems like, for most players, we've advanced past that.
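For point 1, here's the general shape of the hand bit-packing I mean. This is a toy sketch, not our actual encoder, and the field widths are made up:

```ts
// Minimal bit-packer: write each field with only as many bits as it needs,
// instead of a full byte or float per field.
class BitWriter {
  private bytes: number[] = [];
  private cur = 0;      // bits accumulated toward the current byte
  private curBits = 0;  // how many of those bits are filled

  writeBits(value: number, bits: number): void {
    for (let i = bits - 1; i >= 0; i--) {
      this.cur = (this.cur << 1) | ((value >>> i) & 1);
      if (++this.curBits === 8) {
        this.bytes.push(this.cur);
        this.cur = 0;
        this.curBits = 0;
      }
    }
  }

  finish(): Uint8Array {
    if (this.curBits > 0) this.bytes.push(this.cur << (8 - this.curBits)); // pad the last byte
    return Uint8Array.from(this.bytes);
  }
}

// Example: one player's state in 34 bits (5 bytes on the wire) instead of a
// JSON blob. 10 bits each for x/y on a 1024-unit grid, 7 bits for player id
// (enough for ~100 players), 7 bits for health 0-100.
const w = new BitWriter();
w.writeBits(731, 10); // x
w.writeBits(402, 10); // y
w.writeBits(57, 7);   // player id
w.writeBits(93, 7);   // health
const packet = w.finish();
```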
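And for point 3, the server loop was conceptually about this simple. A sketch using the Node ws package; encodeWorldState is a placeholder for the binary encoder, and the port and backlog numbers are illustrative:

```ts
import { WebSocket, WebSocketServer } from "ws";

declare function encodeWorldState(): Uint8Array; // placeholder snapshot encoder

const wss = new WebSocketServer({ port: 8080 });

// ~60 Hz: every tick, send the latest snapshot to everyone. No per-client
// queues, no acks to wait on; the websocket/TCP layer handles delivery.
setInterval(() => {
  const snapshot = encodeWorldState();
  for (const client of wss.clients) {
    // Skip clients whose socket already has a big send backlog, so one slow
    // connection doesn't accumulate a pile of stale snapshots.
    if (client.readyState === WebSocket.OPEN && client.bufferedAmount < 64 * 1024) {
      client.send(snapshot);
    }
  }
}, 1000 / 60);
```

The client just keeps the most recently received message and renders it on its next animation frame, which is the whole "dumb terminal" part.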
Now, I know this partially just weeded out the people with bad connections, but I really don't think it was that many. Certainly my own experiments, taking my laptop and Wireshark to various wifi spots around town, indicated that modern infrastructure is just that good.
(Actually, in terms of improving latency, beyond making sure people were playing on local servers, the next biggest win was just optimizing the JavaScript. Both the client and server were written in it, and GC stalls were a huge problem. The eventual solution was to rewrite the server to not generate much garbage in the first place, then just disable the GC and reboot the server process between games, after ~10 minutes or so. Another big JS issue was code getting deoptimized and reoptimized continually; the biggest trigger was inconsistent numerical literals mixing float and integer. Once we figured out it was a problem, we became very disciplined about that.)
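To make the "don't generate garbage" part concrete, the flavor of it looks roughly like this (names and sizes are made up, not our actual code):

```ts
const MAX_PLAYERS = 128;

// Preallocate flat typed arrays once at startup; the per-tick update only
// mutates them, so steady-state allocation (and therefore GC pressure) is ~zero.
const posX = new Float64Array(MAX_PLAYERS);
const posY = new Float64Array(MAX_PLAYERS);
const velX = new Float64Array(MAX_PLAYERS);
const velY = new Float64Array(MAX_PLAYERS);

// One reusable scratch buffer for outgoing snapshots instead of allocating a
// fresh buffer every tick.
const snapshotScratch = new Uint8Array(64 * 1024);
const snapshotView = new DataView(snapshotScratch.buffer);

function tick(dtMs: number): void {
  const dt = dtMs / 1000;
  for (let i = 0; i < MAX_PLAYERS; i++) {
    posX[i] += velX[i] * dt;
    posY[i] += velY[i] * dt;
  }
}

function encodeSnapshot(): Uint8Array {
  // Write into the reused scratch buffer; a real encoder would pack tighter.
  let offset = 0;
  for (let i = 0; i < MAX_PLAYERS; i++) {
    snapshotView.setFloat32(offset, posX[i]); offset += 4;
    snapshotView.setFloat32(offset, posY[i]); offset += 4;
  }
  return snapshotScratch.subarray(0, offset);
}
```

Keeping the numbers in typed arrays also sidesteps the integer-vs-float representation flips, since the backing store is always doubles.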
Another key requirement that must be considered is packet ordering. With games you care about the latest state, so discarding older out-of-order packets is a better strategy than stalling to deliver every packet in order the way TCP does.
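A sketch of what that looks like in practice: tag each state packet with a sequence number and drop anything stale, rather than buffering and reordering. (16-bit wraparound sequence numbers are a common choice; applyState is a placeholder.)

```ts
declare function applyState(payload: Uint8Array): void; // placeholder game hook

let latestSeq = -1; // no packet seen yet

// True if `a` is newer than `b` in a 16-bit circular sequence space.
function seqNewer(a: number, b: number): boolean {
  const diff = (a - b) & 0xffff;
  return diff !== 0 && diff < 0x8000;
}

function onStatePacket(seq: number, payload: Uint8Array): void {
  if (latestSeq >= 0 && !seqNewer(seq, latestSeq)) {
    return; // older or duplicate state: discard, don't wait for it to be "in order"
  }
  latestSeq = seq;
  applyState(payload);
}
```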
You only care about the latest state for some events. Only events which will soon be superseded by a later event should go over UDP. Move A to X, sent on every frame, fine. Create monster at Y, no.
If you find yourself implementing reliability and retransmission over UDP, you're doing it wrong. However, as I mention occasionally, turn off delayed ACKs in TCP to avoid stalls on short message traffic.
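(For what it's worth, delayed ACKs themselves are a receiver-side kernel setting, TCP_QUICKACK on Linux, and as far as I know Node's net module doesn't expose it; the portable thing you can do in application code is disable Nagle on the socket, which avoids the same short-message stall from the sender's side. A sketch assuming a Node TCP server:)

```ts
import * as net from "net";

const server = net.createServer((socket) => {
  // Disable Nagle so small, frequent messages go out immediately instead of
  // waiting on the ACK of the previous tiny segment (the classic Nagle +
  // delayed-ACK stall on short message traffic).
  socket.setNoDelay(true);
  socket.on("data", (chunk) => {
    // ... handle incoming messages ...
  });
});

server.listen(7777); // illustrative port
```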
Reliable, no head-of-line blocking, in-order delivery: pick any two. Can't have all three.