Will any device made of matter / operating in polynomial time ever be able to hold the full 2^128 address space? It seems like we will forever be sub-netting into huge blocks simply because the entire space is mathematically infeasible to operate on. I think it's fair to ask: what's the point?
You are completely misunderstanding the purpose of IP address space. The purpose of IP address space is not to be fully used; the point of address space is to enable connectivity. You enable connectivity by (a) having addresses available wherever you need them while (b) allowing for efficient delivery of messages. Both of those you achieve with sparse allocations, where it is easy to add machines and networks with minimal changes to the overall structure and minimal administrative overhead.
Designing the IPv6 address space to be filled with devices is about as sensible as designing the DNS system to be filled with domains. Like, instead of allowing for domain names with 63 characters per label, we could have just said DNS names are alphanumeric strings 7 characters long, and you get assigned a random one if you register a domain, to maximize the utilization of the address space. But that would be simply idiotic, because the purpose of that address space is to provide readable names, not to "use all the addresses".
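Just to put a number on that hypothetical (a trivial Python calculation, purely illustrative):

    # Size of the hypothetical "7 alphanumeric characters" name space
    # from the example above: about 78 billion names, far more than are
    # registered today, yet useless because the names carry no meaning.
    alphabet = 26 + 10      # a-z plus 0-9
    print(alphabet ** 7)    # 78364164096, roughly 7.8e10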
For IPv6, the Internet routing table will never be split into more than 2^64 blocks, as a /64 is the general minimum BGP-announceable IPv6 block.
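A quick sanity check of that block count with Python's standard ipaddress module (2001:db8::/64 is just the documentation prefix, picked here for illustration):

    import ipaddress

    everything = ipaddress.ip_network("::/0")       # the entire IPv6 space
    subnet = ipaddress.ip_network("2001:db8::/64")  # a typical /64

    # Number of /64 blocks in the whole space, and addresses per block.
    blocks = 2 ** (subnet.prefixlen - everything.prefixlen)
    print(blocks)                 # 18446744073709551616 = 2**64 blocks
    print(subnet.num_addresses)   # 18446744073709551616 = 2**64 addresses per block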
Regardless, I don't get your argument. Why would not being able to address all of a given address space be a bad thing? Isn't the whole point to have more address space than will ever be physically possible to need?
It is over-engineering the same way that setting the maximum length of a database field longer than absolutely necessary is over-engineering, which is to say not at all, except even less so: resizing a database field tends to be trivial, whereas resizing the IP address is something we've been working on for 20 years already, and there is still no end in sight.
Really, this has nothing to do with "over-engineering": there is no "engineering" in making the address larger. It adds zero complexity, and it massively simplifies network design, because it removes constraints that really complicate the building and maintenance of networks.
Edit:
And if you say the real routable space of IPv6 is only 2^64, then my point still stands. Rather than it being 3 IPs for every atom, it’s still an IP for essentially every cell of every human on earth for the foreseeable future.
Do you also have a problem with 64-bit systems that have provisions for 64 bits of address space? In your book they also seem like a waste, and we should have designed something closer to ~48 bits of address space.
No, I wouldn’t make the same comment if it were just a 64-bit address space. Some excess isn’t necessarily bad, but excess on top of excess is. If that makes sense.
The edit I made last night might seem to imply otherwise, but that was due to me being a bit terse with that part, and I can’t go back and edit further now.
A 64 bit address space wouldn't be an excess though, it would be outright too small.
The whole point of L3 is to provide a layer of routing and aggregation on top of L2. Routing and aggregation require sparse allocations, so L3 requires more address space than L2 does. L2 is 64 bits for new protocols today (which are supposed to be using EUI-64), so L3 needs to be more than 64 bits.
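A minimal sketch of how those two halves fit together, assuming the classic modified-EUI-64 construction (the prefix and MAC address below are made up for the example):

    import ipaddress

    def eui64_interface_id(mac: str) -> int:
        """Build a modified EUI-64 interface ID from a 48-bit MAC address."""
        octets = bytes.fromhex(mac.replace(":", ""))
        eui = bytearray(octets[:3] + b"\xff\xfe" + octets[3:])  # insert ff:fe in the middle
        eui[0] ^= 0x02                                          # flip the universal/local bit
        return int.from_bytes(eui, "big")

    prefix = ipaddress.ip_network("2001:db8:1234:5678::/64")  # routed upper 64 bits (L3)
    iid = eui64_interface_id("00:11:22:33:44:55")             # 64-bit identifier derived from L2
    addr = ipaddress.ip_address(int(prefix.network_address) | iid)
    print(addr)  # 2001:db8:1234:5678:211:22ff:fe33:4455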
People would complain loudly if we didn't use a power of two, so here we are at 128 bits.
> If that’s not the definition of overkill, I dunno what is.
Or perhaps you are short sighted. :)
Look at all the drama and effort that we have had to go through over a decade or two to get past IPv4: if we went with "only" 64 bits for addresses, and we miscalculated and ran out again, we would have to go through that all over again.
It's the same reason why the ZFS folks (Jeff Bonwick) designed in 128 bit from the start:
> A fully-populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.
> Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.
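For what it's worth, the arithmetic in that quote checks out if you assume the 512-byte minimum block size (a rough back-of-the-envelope in Python):

    blocks = 2 ** 128
    bytes_total = blocks * 512    # 512-byte minimum blocks -> 2**137 bytes
    bits_total = bytes_total * 8  # 2**140 bits
    mass_kg = bits_total / 1e31   # ~10^31 bits per kg of matter, per the quoted limit
    print(mass_kg / 1e9)          # ~139 billion kg, the same ballpark as the quote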
Very definition of excess as well, so thanks for proving my point!
> Very definition of excess as well, so thanks for proving my point!
Except you're skipping over the part about running out at 2^64:
> Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away. Moore's Law for storage predicts that capacity will continue to double every 9-12 months, which means we'll start to hit the 64-bit limit in about a decade.
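The "14 doublings" figure is just the gap between those two exponents; checked quickly in Python:

    import math

    doublings = math.log2(2**64 / 2**50)  # petabyte = 2**50 bytes, limit = 2**64 bytes
    print(doublings)                      # 14.0
    # At one doubling every 9-12 months, that's roughly 10.5 to 14 years.
    print(doublings * 9 / 12, doublings * 12 / 12)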
2^64 bits ought to be enough for anybody. -- jsjohnst
And then we expand to the stars, and suddenly one planet full of people isn't that great. And we'll develop nanomachines that need to be individually addressed, and suddenly our nice new IPv6 space lasts us only a hundred years.
These are just silly examples, but the point is that we don't yet know what will happen to make us run out of space.
A subnet (i.e. a layer 2 network) always gets a /64 in IPv6, so the actual address space available for layer 3 routing is 64 bits.
The point of this ridiculously large space is that it makes routing tables smaller, because topologically-near areas can be given common prefixes without worrying about address exhaustion.
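A small sketch of that aggregation effect, using Python's ipaddress module and made-up documentation prefixes:

    import ipaddress

    provider = ipaddress.ip_network("2001:db8::/32")
    customers = list(provider.subnets(new_prefix=34))     # four adjacent /34 allocations
    aggregated = list(ipaddress.collapse_addresses(customers))
    print(customers)   # the four /34s: 2001:db8::/34 ... 2001:db8:c000::/34
    print(aggregated)  # [2001:db8::/32] -- one routing-table entry covers all four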