It is over-engineering in the same way that setting the maximum length of a database field longer than absolutely necessary is over-engineering, which is to say not at all. Except even less so: resizing a database field tends to be trivial, whereas we've been working on resizing the IP address for 20 years already and there is still no end in sight.
Really, this has nothing to do with "over-engineering", because there is no "engineering" involved in making the address larger. It adds zero complexity, and it massively simplifies network design because it removes constraints that seriously complicate the building and maintenance of networks.
Edit:
And if you say the real routable space of IPv6 is only 2^64, then my point still stands. Rather than it being 3 IPs for every atom, it’s still an IP for essentially every cell of every human on earth for the foreseeable future.
Do you also have a problem with 64-bit systems that have provisions for 64 bits of address space? In your book they also seem like a waste, and we should have designed something closer to ~48 bits of address space.
No, I wouldn't make the same comment if it were just a 64-bit address space. Some excess isn't necessarily bad, but excess on top of excess is, if that makes sense.
The edit I made last night might seem to imply otherwise, but that was because I was a bit terse with that part, and I can't go back and edit it further now.
A 64-bit address space wouldn't be an excess, though; it would be outright too small.
The whole point of L3 is to provide a layer of routing and aggregation on top of L2. Routing and aggregation require sparse allocations, so L3 needs more address space than L2 does. L2 is 64 bits for new protocols today (which are supposed to be using EUI-64), so L3 needs to be more than 64 bits.
People would complain loudly if we didn't use a power of two, so here we are at 128 bits.
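To make the EUI-64 point concrete, here is a minimal sketch (my own illustration, not from the thread) of how IPv6 SLAAC derives a 64-bit modified EUI-64 interface identifier from a 48-bit MAC address; the MAC address used is just a made-up example:

```python
# Minimal sketch: derive a modified EUI-64 interface identifier from a
# 48-bit MAC address, as used by IPv6 SLAAC (RFC 4291, Appendix A).

def mac_to_modified_eui64(mac: str) -> str:
    octets = [int(x, 16) for x in mac.split(":")]
    assert len(octets) == 6, "expected a 48-bit MAC address"
    # Insert 0xFF 0xFE between the OUI and the NIC-specific half,
    # then flip the universal/local bit of the first octet.
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui64[0] ^= 0x02
    # Group into the four 16-bit chunks of an IPv6 interface identifier.
    return ":".join(f"{(eui64[i] << 8) | eui64[i + 1]:04x}" for i in range(0, 8, 2))

print(mac_to_modified_eui64("00:25:96:12:34:56"))  # -> 0225:96ff:fe12:3456
```

With the interface identifier pinned at 64 bits, everything else (routing, aggregation, subnetting) has to fit in the bits above it.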
> If that’s not the definition of overkill, I dunno what is.
Or perhaps you are short-sighted. :)
Look at all the drama and effort we've had to go through over the past decade or two to get past IPv4: if we went with "only" 64 bits for addresses, miscalculated, and ran out again, we would have to go through all of that again.
It's the same reason the ZFS folks (Jeff Bonwick) designed in 128 bits from the start:
> A fully-populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.
> Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.
Very definition of excess as well, so thanks for proving my point!
> Very definition of excess as well, so thanks for proving my point!
Except you're skipping over the part about running out at 2^64:
> Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away. Moore's Law for storage predicts that capacity will continue to double every 9-12 months, which means we'll start to hit the 64-bit limit in about a decade.
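To spell out the arithmetic in that quote (my own back-of-the-envelope check, assuming the doubling rate it states):

```python
import math

petabyte_bytes = 2**50   # ~1 PiB dataset, as in the quote
limit_64_bytes = 2**64   # the 64-bit capacity limit

doublings = math.log2(limit_64_bytes / petabyte_bytes)
print(doublings)  # 14.0 doublings from 2^50 to 2^64

# At one doubling every 9-12 months, 14 doublings take roughly:
print(14 * 9 / 12, "to", 14 * 12 / 12, "years")  # 10.5 to 14.0 years
```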
2^64 bits ought to be enough for anybody. -- jsjohnst
And then we expand to the stars, and suddenly one planet full of people isn't that great. And we'll develop nanomachines that need to be individually addressed, and suddenly our nice new IPv6 space lasts us only a hundred years.
These are just silly examples, but the point is that we don't yet know what will happen to make us run out of space.
Is that not the definition of over-engineering?