Excellent article, and sections "NAT Problems" and "NAT Solutions" are a good starter on that topic.
Except even the third-choice solution is not always feasible. Reserving a fixed RTP/UDP port range is not possible behind carrier-grade NAT, which is quite common with residential ISPs and nearly universal with cellular ISPs.
A fourth choice would be to reserve the port range on a personal server (which would run a B2BUA, Asterisk in the OP's case, or an RTP proxy) and force calls, including media, from/to the SIP handsets to go via that.
All of the NAT problems would instantly go away with IPv6, but with adoption still at a meager ~50% I suppose you'll need a PBX of some kind to receive at least half the calls.
For those stuck behind CGNAT, there are guides online for setting up a VPN to a cheap VPS and forwarding traffic to your home network, so you can have almost-real connectivity at home. If you're content with 50 Mbps, you can even use Oracle's Always Free tier.
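One common shape for that setup is a WireGuard tunnel where the home side dials out to the VPS and keeps the CGNAT mapping alive with a keepalive. A minimal sketch; all addresses, ports, and key placeholders here are assumptions:

```ini
# --- VPS side, e.g. /etc/wireguard/wg0.conf (keys/addresses are placeholders) ---
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# home router behind CGNAT
PublicKey = <home-public-key>
AllowedIPs = 10.8.0.2/32

# --- Home side, separate file (the outbound handshake is what defeats CGNAT) ---
[Interface]
Address = 10.8.0.2/24
PrivateKey = <home-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = <vps-public-ip>:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25   ; refresh the CGNAT entry before the carrier forgets it
```

Port forwarding or DNAT rules on the VPS then deliver inbound traffic to the home side over the tunnel.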
Yes, Asterisk can poke holes in NAT on its own just fine. I was surprised how pessimistic the article is on this. I have systems running for months and years behind NAT with no issue. You might have to disable direct media (endpoint/disable_direct_media_on_nat).
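For reference, the relevant knobs on a chan_pjsip endpoint look roughly like this (the section name is made up; the option names are real pjsip.conf options):

```ini
; pjsip.conf (sketch; endpoint name is an assumption)
[remote-phone]
type = endpoint
disable_direct_media_on_nat = yes  ; keep media flowing through Asterisk when NAT is detected
rtp_symmetric = yes                ; send RTP back to where it actually came from
force_rport = yes                  ; reply to the packet's source port, not the Via header
rewrite_contact = yes              ; use the packet's source address, not the private Contact
```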
Also, this is an uptime-related tip rather than a NAT one: you must explicitly set registration/max_retries to a huge number, otherwise Asterisk just gives up permanently at some point. It's a really weird default.
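In pjsip.conf terms, something like this (the section name and URIs are placeholders; max_retries defaults to a small number, after which the registration is abandoned for good):

```ini
; pjsip.conf outbound registration (sketch)
[trunk-reg]
type = registration
server_uri = sip:sip.example.com
client_uri = sip:user@sip.example.com
max_retries = 10000              ; effectively "never give up"
retry_interval = 60
forbidden_retry_interval = 300   ; also retry after a 403, which some providers send transiently
```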
Trunk and internal, and I usually put all the phones in their own VLAN w/o direct Internet access. I don’t really see a use for dialing arbitrary SIP URIs. If I need to add a remote phone I’ll just connect it directly with a network tunnel.
The idea is that if you send UDP packets to a destination arranged by a middleman (STUN), or to a proxy arranged by a middleman (TURN), as outgoing traffic, your Wi-Fi router should be smart enough to set up a temporary NAT entry allowing responses to reach your $LOCAL_IP:$PORT. In reality, the router may have a short memory, or may be dying behind a refrigerator covered in dust and unable to handle all the necessary combinations and ranges of addresses and ports, resulting in various partial failures such as one-way audio or a missing participant in a group call.
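The STUN part of that dance is tiny. A sketch of building the RFC 5389 Binding Request you would fire at a STUN server to learn your public mapping (no attributes; sending and the server address are omitted):

```python
import os
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389
BINDING_REQUEST = 0x0001

def make_binding_request() -> tuple[bytes, bytes]:
    """Build a minimal 20-byte STUN Binding Request header with no attributes."""
    txn_id = os.urandom(12)  # random transaction ID, echoed back by the server
    packet = struct.pack("!HHI12s", BINDING_REQUEST, 0, STUN_MAGIC_COOKIE, txn_id)
    return packet, txn_id
```

Sending this from your local socket both asks the server for your mapped address and, as a side effect, creates exactly the temporary NAT entry described above.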
A fifth-choice option is to just encapsulate everything into a VPN, preferably an L2 VPN over HTTPS to a server with a global IP. If it isn't working, there must be no Internet at all.
That makes it boolean: it's connected, or it's not. "One of the RTP media transports to one of the destinations is failing to establish DTLS ciphering, and I think it has to do with either an RTC issue or a Chrome bug" is self-inflicted pain.
UDP is an unreliable transport by specification, so I'd guess that if a piece of network equipment such as a router cannot cope with the general workload, it would sacrifice UDP first without a second thought.
This is not how congestion control works on the internet.
Indeed TCP depends on packets getting dropped as the feedback mechanism for knowing when to slow down.
It's important that packets are dropped fairly, as otherwise on a loaded network only the preferred protocol(s) would keep working and the others would get starved. You don't want DNS to stop working while an HTTP flow is running at capacity on your link, for example.
Huh? It's an obvious thing to do. If you have to drop a packet because your queues are full, any engineer with an IQ over 50 will pick the victim from the UDP packets, because the sender expects that it might happen, and also because it won't necessarily cause a retransmission - e.g. an RTP packet.
Why is that the obvious choice? TCP can recover through retransmission; UDP cannot. It sounds just as logical to prioritize UDP and let TCP connections slow down rather than let UDP connections suffer data loss.
As I said, application programmers expect and accept that their UDP packets might be lost or duplicated. This is sort of part of the contract. Even datagram integrity is, in theory, not guaranteed, as the UDP checksum field is optional (over IPv4, at least).
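For the curious, the optional checksum in question is the RFC 768 ones'-complement sum over a pseudo-header plus the segment. A sketch for IPv4 (the function name is made up):

```python
import struct

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """RFC 768 checksum over the IPv4 pseudo-header + UDP header + payload.

    udp_segment must have its checksum field zeroed; src_ip/dst_ip are 4-byte
    packed addresses. A computed 0 is transmitted as 0xFFFF, because an
    on-the-wire 0 means "no checksum computed".
    """
    # pseudo-header: src, dst, zero byte, protocol 17 (UDP), UDP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    data = pseudo + udp_segment
    if len(data) % 2:
        data += b"\x00"  # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                       # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    checksum = ~total & 0xFFFF
    return checksum if checksum != 0 else 0xFFFF
```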
Sometimes people don't see the point of UDP at first, because you eventually have to implement sequence numbers, CRCs, time-outs, retries, etc. that are similar to what TCP does. One can find the reasons why one wants to do this anyway in [1]. In a nutshell, reliability is often ensured by the application layer anyway, so you don't need the transport protocol to do extra stuff you have no control over and that might even get in the way (see the numerous esoteric ioctl and sysctl settings under Linux).
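The "reimplementing half of TCP" part is usually just a small stop-and-wait core. A toy sketch with a deterministic fake channel (all names here are made up for illustration):

```python
import itertools

class LossyChannel:
    """Fake unreliable transport: drops sends at the given 0-based indices."""
    def __init__(self, drop_indices):
        self.drop = set(drop_indices)
        self.counter = itertools.count()
        self.delivered = []

    def send(self, packet) -> bool:
        if next(self.counter) in self.drop:
            return False          # silently lost, like a real UDP datagram
        self.delivered.append(packet)
        return True               # stand-in for receiving an ACK

def send_reliable(channel, messages, max_retries=5):
    """Stop-and-wait: tag each message with a sequence number and
    retransmit until acknowledged (the ACK is folded into send() here)."""
    for seq, msg in enumerate(messages):
        for _attempt in range(max_retries):
            if channel.send((seq, msg)):
                break
        else:
            raise TimeoutError(f"gave up on seq {seq}")
```

A real implementation would also detect duplicates on the receive side (that's what the sequence number is for) and back off between retries; the point is that you control every one of those policies, unlike with TCP.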
It is an obvious choice because, as I said, a router dropping a packet does not necessarily trigger a resend, e.g. with RTP or syslog (over UDP). With TCP, a retransmission is guaranteed. If you are overloaded, you'd rather take the action you can get away with than one that probably just buys time.