All this is because IPv6 addresses are too long. If they’d made it 48 or 64 bits we would be fully converted by now. We are dragging because people hate using it.
I’ve been saying this for years. Nobody gets it because geeks don’t get ergonomics.
I've said it for years too. It's not JUST because they're long. Years ago (and maybe even today?) there were also hardware issues with keeping large sets of addresses for routing (I'm not an expert on this - I seem to remember reading about this years ago - larger ISPs not being able to keep all their routing rules in memory because of IPv6 address sizes - maybe I'm WAY off).
But, yes, generally, you're right. It's been seen from the very beginning as "a big move". If every address A.B.C.D was addressable as 0.A.B.C.D, and we opened up another 255 * 4 billion addresses... we'd have been converted a long time ago. And we'd have been better at actually implementing 'upgrades' because they'd be already done/completed - it wouldn't be a 'monumental task(tm)'.
We don't need every atom in the universe to be able to have 16 public addresses.
> (I'm not an expert on this - I seem to remember reading about this years ago - larger ISPs not being able to keep all their routing rules in memory because of IPv6 address sizes - maybe I'm WAY off).
In modern routers (last 10-15ish years), routing table size has been roughly the same for IPv4 and IPv6.
Modern, ISP-grade routers have their control and forwarding planes separated onto different (usually redundant) hardware components.
The control plane is responsible for keeping the state of routes (which routes do I receive from a routing protocol? where is my next hop according to rule XYZ? etc.).
The forwarding plane is responsible for forwarding packets across interfaces.
Route lookups happen in the control plane, but a route lookup is almost never for a single dedicated address (especially in IPv6). Route lookups happen at the subnet level, and IPv6 has a "standard" subnet size that leaves half of the address space for the subnet itself: the first 64 bits are the network prefix, while the remaining 64 bits are used to create host-specific addresses.
This cuts down on TCAM size considerably, because the router doesn't need to store 128 bits of information per host, but only a 64-bit prefix plus its length for a very large group of hosts.
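The saving can be sketched with a toy longest-prefix-match table (illustrative prefixes and interface names, not real router internals):

```python
import ipaddress

# One /64 entry covers 2**64 hosts: the table stores a prefix plus its
# length, never 128 bits per host.
routes = {
    ipaddress.ip_network("2001:db8:aaaa:1::/64"): "eth0",
    ipaddress.ip_network("2001:db8::/32"): "eth1",  # covering aggregate
}

def lookup(addr: str) -> str:
    """Longest-prefix match, the way a forwarding table resolves a destination."""
    dest = ipaddress.ip_address(addr)
    best = max((n for n in routes if dest in n), key=lambda n: n.prefixlen)
    return routes[best]

print(lookup("2001:db8:aaaa:1::42"))  # eth0 (the more specific /64 wins)
print(lookup("2001:db8:bbbb::1"))     # eth1 (falls back to the /32)
```

Every one of the 2^64 hosts under 2001:db8:aaaa:1::/64 is matched by that single first entry.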
Besides this, IPv6 has another advantage: fragmenting routes (de-aggregating prefixes) is far more difficult than in IPv4.
Usually, organisations get a /56, ISPs usually handle /48s, and RIPE/IANA etc. work with /32s.
This all keeps the IPv6 routing table far smaller than the IPv4 routing table, which was one of the reasons IPv6 was invented in the first place.
> But, yes, generally, you're right. It's been seen from the very beginning as "a big move". If every address A.B.C.D was addressable as 0.A.B.C.D, and we opened up another 255 * 4 billion addresses... we'd have been converted a long time ago. And we'd have been better at actually implementing 'upgrades' because they'd be already done/completed - it wouldn't be a 'monumental task(tm)'.
Would this actually change the amount of "monumentalism" in switching IPv4 for something else? Backwards compatibility with larger address sizes (be it 128 bits, 33 bits, or whatever) is not possible, because IPv4 stacks can only handle a 32-bit address space. Updating those is about as monumental a task as implementing IPv6, considering you would still need two network-layer stacks on each device to handle both IPv4 and the "IPv4+" version.
> in modern (last 10 - 15 ish years) routing table size has been roughly the same for IPv4 and IPv6.
Really? I see 700k routes v4 and 70k v6 routes.
IPv6 will keep routing table size smaller since huge subnets can be preallocated to every AS (an AS is pretty much what people would call an ISP), so they only have to split their subnets by geolocation.
What I meant to say was that in modern routers, the theoretical IPv4 and IPv6 routing table sizes can be the same. There is no difference between the two protocols in the maximum number of routes the table can hold.
> If every address A.B.C.D was addressable as 0.A.B.C.D, and we opened up another 255 * 4 billion addresses... we'd have been converted a long time ago.
That has nothing to do with the address being long, but with being compatible.
In designing ZeroTier I put a ton of effort into creating a secure P2P layer with addresses that are only 40 bits long. This effort continues with new solutions being worked on to maintain security while allowing more openness and federation.
It would have been much easier to use long addresses that are long hashes of keys. Having only 40 bits means we need two layers of defense in depth to prevent intentional collision: a work function to make the cost substantial (about USD $8M per collision on today’s public cloud) and a single source of truth for lookup that still supports federation. You could punt on all that with 128 or 256 bit addresses.
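A back-of-the-envelope sketch of why a 40-bit space needs that defense in depth. The per-attempt cost and cloud price below are my own illustrative assumptions, not ZeroTier's actual parameters (so the total differs from the $8M figure above):

```python
# Grinding keypairs until one hashes to a chosen 40-bit address takes
# ~2**40 attempts on average against a uniform 40-bit space.
attempts = 2 ** 40

# Hypothetical work function: one CPU-second per attempt, at an assumed
# ~$0.01 per vCPU-hour of cloud compute.
seconds_per_attempt = 1.0
usd_per_cpu_hour = 0.01

cost = attempts * seconds_per_attempt / 3600 * usd_per_cpu_hour
print(f"~${cost / 1e6:.1f}M per targeted collision")  # ~$3.1M
```

With 128- or 256-bit addresses the same brute force is astronomically out of reach, which is exactly why long hash-of-key addresses would have been the easy route.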
Yet I did it because I was quite aware that it was very necessary for usability. I have had many people tell me they love that they can type a ZeroTier address.
I would bet anyone that if the addresses had been gigantic we’d have 1/10 the adoption.
Software is first and foremost for people to use. Most of the complexity in software exists for this reason.
ZeroTier has a flat address space governed by a single algorithm. The Internet is a loose hierarchy of independently-managed networks. These problems have quite different addressing requirements.
Analogy: ZeroTier is to https://plus.codes/ as IPv6 is to mailing addresses. A mailing address is pretty long, but you can use its structure to route the mail efficiently.
The Internet is governed by a single algorithm: IP routing. Short IP addresses are a lot easier than short cryptographic addresses.
Adding 16 or 32 more bits to IPv4 would have been trivial. The existing IPv4 address space becomes 0.0.n.n.n.n or perhaps 0.n.n.n.n.0 if you wanted to give every existing IP 256 addresses to assign while also multiplying the IP space by 256.
You're describing 6to4, where the existing IPv4 address space becomes 2002:nnnn:nnnn::/48. You can treat the 80 bit suffix as 8 bits when designing a network.
Problem is, stacking the new protocol on top of IPv4 was never very reliable, so 6to4 is mostly dead now. It would've worked a bit better if the Internet had used 2002::/16 exclusively.
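The 6to4 embedding is mechanical enough to sketch with the stdlib (the IPv4 address below is from the 192.0.2.0/24 documentation range):

```python
import ipaddress

def ipv4_to_6to4(v4: str) -> ipaddress.IPv6Network:
    """Embed an IPv4 address in the 6to4 prefix 2002::/16 (RFC 3056)."""
    v4int = int(ipaddress.IPv4Address(v4))
    # Result is 2002:AABB:CCDD::/48, where AA.BB.CC.DD is the IPv4 address.
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4int << 80), 48))

print(ipv4_to_6to4("192.0.2.1"))  # 2002:c000:201::/48

# Python's ipaddress module already knows the reverse mapping:
print(ipaddress.IPv6Address("2002:c000:201::1").sixtofour)  # 192.0.2.1
```

So every IPv4 host implicitly owned a /48 under 2002::/16; the scheme failed for deployment reasons, not addressing ones.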
Adding 16 bits or 32 bits doesn't matter: The networking stack of every device would still need to be updated to understand the new address structure (just like IPv6!) You can't magically fit 48 bits in a 32 bit field.
IPv6 was the correct long term approach. You wouldn't want to pick only 48 bits and have to do this again in 20 years.
Yes. I'm saying if we had to update every device anyway, we might as well do it right and not some short term solution (48-bit addressing or whatever.)
IMO it's because they used stupid colons in the syntax instead of sticking with periods. Nobody likes hitting the shift key, especially so rapidly and while typing numbers.
DNS names already conflict with v4 addresses, and we deal with that ambiguity just fine.
For an actual conflict, someone would need to be using hostnames that had at least 16 segments, none of which were longer than 4 characters. Putting the burden on someone who wants to use extremely deep hostnames that look like bare IP addresses to type a trailing . on their hostname seems plenty reasonable to me. And if they want to use resolv.conf:search while still typing in 16 segments of a hostname, then that ambiguity could be resolved with a leading period.
I suspect the real reason is people who wanted to be able to write ad-hoc parsers using strchr().
We deal with it by requiring v4 addresses to be entirely numeric, which... well, it's possible for v6 but would make it even more annoying to type v6 addresses out.
No, that is not how it is dealt with. A DNS hostname can be entirely numeric as well. For example, add 'search in-addr.arpa' to your resolv.conf.
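The ambiguity is easy to demonstrate; here's a sketch (whether the bare labels actually resolve depends on your resolver config, but the parsing collision is real):

```python
import ipaddress

def looks_like_ipv4(s: str) -> bool:
    """True if the string parses as a literal IPv4 address."""
    try:
        ipaddress.IPv4Address(s)
        return True
    except ipaddress.AddressValueError:
        return False

# The fully qualified name is an ordinary all-numeric-labeled hostname...
print(looks_like_ipv4("1.0.168.192.in-addr.arpa"))  # False

# ...but with `search in-addr.arpa` in resolv.conf, the bare labels a user
# would type are indistinguishable from an IPv4 literal:
print(looks_like_ipv4("1.0.168.192"))  # True
```

Software has to pick an order (try the literal first, then DNS, or vice versa), and whichever it picks surprises someone.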
We deal with the ambiguity by making it clear that if you expect to use DNS names that look like IPv4 addresses, you're going to experience the pain of unexpected behavior. I see no reason this general expectation couldn't also have been set for 16-segment hostnames that look like hexadecimal IP addresses.
Alternatively, a full IPv6 address without any '..' abbreviation could have been defined to start with a period. Then there would be no ambiguity.