"The IPv6 designers made a fundamental conceptual mistake: they designed the IPv6 address space as an alternative to the IPv4 address space, rather than an extension to the IPv4 address space."
I can highly recommend reading RFC 1365 ( http://www.faqs.org/rfcs/rfc1365.html ), which proposes IP extension blocks to provide huge address spaces within IPv4 while retaining full downward compatibility with legacy routers and hosts. Such extension blocks should also be compatible with NAT and jumbo frames.
The article's right that IPv6 isn't going to happen.
I'm not sure it makes any points besides that.
(1) It's hard to fear our NAT-ed future when we're all living in the NAT-ed present to little ill effect. A typical large company today has more than a /19 allocated to it; if we're really talking about exhausting the TCP endpoint space, that /19 offers 536,000,000 concurrent connections. ISP distribution layers might not do that many packets per second.
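As a rough check on the /19 figure in point (1), here's the back-of-the-envelope arithmetic (a sketch; the only inputs are the prefix length and TCP's 16-bit port space):

```python
# A /19 leaves 32 - 19 = 13 host bits, and each address gets
# 2**16 TCP ports, so the concurrent-endpoint budget is:
addresses = 2 ** (32 - 19)    # 8,192 addresses in a /19
ports = 2 ** 16               # 65,536 TCP ports per address
endpoints = addresses * ports
print(endpoints)              # 536870912 -- the ~536,000,000 above
```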
(1a) If multiplexing is the problem, we can solve that in a subtransport-layer protocol without a worldwide ISP flag day that renumbers the whole world. There are already transports that are more multiplexing-friendly than TCP --- for instance, SCTP, a well-researched telco protocol.
(2) IPv6 exacerbates routing problems; it doesn't solve them. Routing table explosion and address space size are orthogonal.
(3) If we're embracing our NAT'd future anyways, I'm not sure I care how routable IPv4 addresses are traded, or who has to go ask Halliburton to start paying for their obscene legacy allocation. You decided to work for the ISP, not me. Have fun!
Re: your 1.
NAT breaks one of the founding principles of the Internet, that of end-to-end connectivity [1]. This may not seem important, but universal end-to-end connectivity, coupled with lower-layer abstraction and transparency, is required for further innovation and the deployment of new protocols and applications.
Think about the trouble that VoIP protocols (SIP, H.323) have when traversing NAT. We invented UPnP NAT traversal, STUN, and similar workarounds: all patches for the lack of end-to-end connectivity, hoops that NAT makes us jump through, with no benefit (only added complexity) for the end user.
Thus, we can see how NAT slows and discourages innovation by raising the cost of getting started.
The alternative, IPv6, is simpler and solves a host of problems in a single shot. Unfortunately, inertia and fear of change will delay its adoption more than necessary.
That's just a religious argument. You can argue it exactly the opposite way: that applications requiring direct IPv4 addressing are enlisting the Internet to accomplish things the apps should be doing for themselves, and are in fact breaking e2e. For a concrete example, see FTP, arguably the most famously bad protocol we have.
The "hoops" we have to jump through to traverse NAT seem kind of trivial when you compare them to the fluid-dynamic-theoretic modeling we had to go through to avoid congestion collapse in the early '90s. If you're new to the game, I'm sure NAT traversal seems like overkill, but you really should try BGP prefix filtering or writing a TCP-friendly transport protocol.
The fact is, the overwhelming majority of Internet users are now NAT'd. The Internet works better today than it did when everyone got static IP addresses. And the problems that remain can be solved at the transport layer, without forklifting out routers.
No, it's not just a "religious" argument. NAT really does violate the way the internet was supposed to work; it causes real technical problems, not just aesthetic ones. In fact, NAT is a nasty kludge by the strictest definition of the word. Have you ever written code that routes packets? I have!

Not only does NAT break the end-to-end connectivity rule, it also breaks data encapsulation. To work at all, it has to open up the data and rewrite IP addresses and port numbers to give the illusion that the NATed networks have end-to-end connectivity. That's why NAT causes problems with VoIP, and why it stifles the development of new, innovative protocols: it doesn't transparently handle protocols it doesn't know about. IP was conceived in such a way that TCP, UDP, and other protocols could be layered on top of it, but because NATs break the encapsulation, each new protocol requires every router's NAT to be reprogrammed to know how to handle it.
If IP had been designed with NAT in mind, it might not be so broken now. That's one of the problems IPv6 is supposed to solve - it doesn't need a NAT, but it also works better over NAT than IPv4! So if you think NAT beats static IPs, you should actually prefer IPv6 because it has non-broken NAT support!
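The address/port rewriting described above can be sketched as a toy translation table. This is a minimal illustration with hypothetical addresses; a real NAT also recomputes IP/TCP checksums, expires idle mappings, and, for protocols that carry addresses in their payload (FTP, SIP), has to rewrite the payload too, which is exactly the breakage being complained about:

```python
# Toy NAPT source rewriting (hypothetical addresses and port range).
nat_table = {}        # (private_ip, private_port) -> public_port
next_port = 40000     # arbitrary start of the public port pool

def translate_outbound(src_ip, src_port, public_ip="203.0.113.1"):
    """Map a private source endpoint to the NAT's public endpoint,
    allocating a new public port on first sight of the mapping."""
    global next_port
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return public_ip, nat_table[key]

print(translate_outbound("192.168.1.10", 51000))  # ('203.0.113.1', 40000)
```

Inbound packets are then matched against the same table in reverse, which is why unsolicited incoming connections (with no existing mapping) get dropped.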
Furthermore, please define exactly what you mean when you say that applications which don't work with (leaky, broken IPv4) NAT are "enlisting the Internet to accomplish things the apps should be doing for themselves, and are in fact breaking e2e." How exactly is an app supposed to "do something for itself" in getting a communication channel open when it can't get any packets through the NAT to communicate in the first place? Most NATs, for instance, don't allow unexpected incoming connections. Say I have a computer and a friend has another one on a different ISP, so we're behind different NATs. Now say I'm developing (in fact I really am developing) an online MMO game. Without a public, static IP address for one of us, or a public server to bounce off of (which we don't have yet, because we're only just starting our company), it will be impossible for us to connect directly to each other to test our setup over the internet. What IP:port would one of us type in to reach the other's machine? All either of us can reach is the public IP of the other's ISP's NAT. To make a connection, we have to punch out through the NAT first, but punch out to where? We need a public IP, but in a world where every consumer is behind a NAT and all the IPv4 addresses are used up or cost more than we can afford, how are we going to get our start?
On the other hand, if the world switched to IPv6 right now, we'd have no trouble connecting to each other to test our game server, and we could do it from a consumer home internet connection. Ah, but that's the real reason we're stuck with IPv4 and NAT. The internet was meant to be a network where every computer is equal, and in that environment anyone could set up a server and start doing business or providing their own content. No more. Now, thanks to a completely avoidable numbering shortage and a NAT hell, the internet is getting to be more like television: big companies' servers provide the content, and you consume it, just as the media companies and cable companies like it. You won't be able to say, "hell with their Terms of Service, I'll host my site myself." And fat chance making much money off your web-2.0 company if you're forced to host your servers with a bigger company just to get a public IP. That's what the world without IPv6 looks like.
You're really just saying the same thing as the preceding comment did, and your argument really is aesthetic. Real e2e apps don't care who's routing them or how they're being routed; they have a service model ("I need reliable delivery" or "I need real-time out-of-order"), and take care of the details themselves.
This is exactly the dispute that killed multicast. The purists demand all-sources Deering multicast, the pragmatists think they can at least make it work with single-source, and the rest of the world just goes and builds apps that actually work using TCP and app-layer multicast. We should learn from that. Instead of asking everyone to forklift out their routers to switch in an experimental protocol, we should embrace the constraints we have and make stuff work regardless. Skype works even with double-sided NAT.
So, yes, I definitely have built apps that route packets, and yes, I already did bring up the example of FTP, and yes, it's a bad example: it's the worst protocol in common use, and we're not really surprised that NAT causes problems with it. What's another common protocol that requires middleboxes to alter payloads? Note that the whole web works without that requirement.
The reality is, it's just silly to assume that the future of the Internet is going to be controlled by apps that will only be able to rendezvous using IANA-assigned IP addresses. The future is multicast, peer-to-peer, and presence-based, and it's going to come not from standards groups but from app developers. My AIM screen name is already 100x more valuable than my IP address, and Google is already 1000x more valuable than my domain name. Why exactly am I supposed to be psyched about "16:c4:d2:bc:23:71:85:a2:a6:17:7a:56:09:b5:e5:15"?
You still haven't answered my question - how can I, sitting behind one NAT, receive a connection from my friend behind another NAT, if neither allows direct incoming connections? All your multicast, peer-to-peer, etc. buzzwords don't mean a damn thing if the connection can't be made in the first place.
No matter how you twist around and cloud the issue, the fact is somebody has to have a public IP to make the rendezvous work, and if those public IPs aren't available to ordinary people who might be entrepreneurs prototyping new ways to use the internet, then those prototypes will not get off the ground.
Edit: BTW, IPv6 is not an "experimental" protocol as you allude to it.
IPv6 hasn't been scaled in the same way that IPv4 has. The operations community knows how to handle IPv4.
The rest of your comment? You're arguing a straw man. Nobody is arguing that IPv4 addresses will be impossible to obtain; any such argument would be moronic. But if address allocation tracks application development rather than end user population, I could see that as a positive development.
I don't think you're following my point. When I send an AIM message to a friend, IPv4 isn't routing it; AIM is. IPv4 is just a detail. This is the future. The overhead of app-layer routing is small --- tiny compared to the overhead of running rich client apps in Javascript over HTTP. If IPv4 matters in 10 years, that's a huge failure.
Which future would you rather put your energy into? The one where we can address realtime content to groups of people by name, or the one where we run basically the same apps we do now but with a bigger address space? Caring about IPv4 addresses is like caring about Ethernet addresses.
Yeah, but that breaks the principle of encapsulation. The IP layer has routing that works great, the app layer can take it for granted. Why should each app re-implement routing? This would raise the cost of entry, again.
The same was said of Internet Protocol. Why would you want to send packets over a public network with no guarantee of throughput? Well, it greatly increases the connectivity between hosts and it significantly reduces costs. However, it is less secure.
Likewise, why would you want to send data via application-level multicasting? Again, it greatly increases connectivity between hosts, some of which may be behind firewalls. Furthermore, it greatly improves the efficiency of data distribution because it reduces the dependence on the initial host's bandwidth.
At present, there is some duplication of effort. Some duplication is to avoid proprietary interests. Some duplication allows specific types of data to be handled. Finding and promoting a more general solution would be desirable.
There is no principle of encapsulation! There is no such thing as a "layering violation". TCP/IP isn't an OSI-modeled protocol.
You had it right earlier with the End-to-End Argument. E2E is very meaningful. We shouldn't push functionality into the network that can be handled by the endpoints, because doing that constricts applications and makes the network less useful. Which is why, if applications can shoulder app-layer routing and multicasting, we should embrace that and let them do it.
The alternative means getting Verizon, Sprint, and AT&T to buy in to whatever our next-generation app is. Look how well that's worked out so far.
> You still haven't answered my question - how can I, sitting behind one NAT, receive a connection from my friend behind another NAT, if neither allows direct incoming connections?
This is exactly the problem that STUN solves. But you need a third-party server, often application-specific, and yet another layer of complexity.
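For what it's worth, the rendezvous step behind STUN-style hole punching is simple: the third-party server's only real job is to observe each client's public NAT mapping and hand it to the other peer, after which both peers send directly to each other, and the outbound packets open each NAT's own pinhole. A minimal sketch of that exchange (peer names and addresses here are hypothetical):

```python
def rendezvous(observed):
    """observed maps peer name -> (public_ip, public_port) as seen
    by the server. Returns what to tell each peer: the *other*
    peer's public mapping, which it should send packets toward."""
    (a, addr_a), (b, addr_b) = observed.items()
    return {a: addr_b, b: addr_a}

# The server saw each peer's traffic arrive from its NAT's public side:
seen = {"alice": ("203.0.113.7", 40001),
        "bob":   ("198.51.100.9", 52044)}
print(rendezvous(seen))
# Each peer then sends UDP to the address it received; its own
# outbound packet creates the NAT mapping that lets the reply in.
```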
> IP was conceived in such a way that TCP, UDP, and other protocols could be layered on top of it, but because NATs break the encapsulation, with NAT each new protocol requires every router's NAT to be reprogrammed to know how to handle it.
> If IP had been designed with NAT in mind, it might not be so broken now.
Perhaps UDP should have been defined with session numbers. Some NAT complexity is due to the lack of a standard session identifier.
> The internet was meant to be a network where every computer is equal, and in that environment anyone could set up a server and start doing business or providing their own content. No more. Now, thanks to a completely avoidable numbering shortage and a NAT hell, the internet is getting to be more like television: big companies' servers provide the content, and you consume it, just as the media companies and cable companies like it.
There's no shortage of network addresses. However, there is a shortage of products that support practical protocol extensions, such as RFC 1365. Router and operating system vendors have been keen to push IPv6-compatible products, but there are fewer sales to be made by supporting downwardly compatible extensions.
You may be conflating this with your remaining problem: ISPs firewalling desirable packets for the purpose of creating more profitable grades of service. Admittedly, it's no fun being squeezed between large corporate interests, but stay positive.
So if demand materializes, I expect they could launch an ISP arm and use their IPs, or else lease out their address space to ISPs that can't find enough.
The market will sort itself out and all will be fine.
The limit on concurrent connections applies only between two hosts. If Host X can maintain 2^16 TCP connections to Host Y, then it should also be able to maintain 2^16 TCP connections to Host Z, Host A, Host B, and so on.
Thus, a host could theoretically maintain 65536 TCP sessions with every host on the Internet.
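The reason is that TCP demultiplexes on the full 4-tuple, so the 2^16 port limit is per remote endpoint, not global. A quick sketch counting distinct connection tuples (addresses are placeholders):

```python
# TCP identifies a connection by (src ip, src port, dst ip, dst port),
# so the same 65,536 source ports can be reused against each new peer.
ports = 2 ** 16
tuples = {("10.0.0.1", p, dst, 80)
          for dst in ("hostY", "hostZ", "hostA")   # three remote peers
          for p in range(ports)}
print(len(tuples))  # 196608 == 3 * 65536 distinct connections
```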
http://cr.yp.to/djbdns/ipv6mess.html
"The IPv6 designers made a fundamental conceptual mistake: they designed the IPv6 address space as an alternative to the IPv4 address space, rather than an extension to the IPv4 address space."