
>Isn't the whole point to have more address space than will ever be physically possible to need?

Is that not the definition of over-engineering?




It is over-engineering in the same way that setting the maximum length of a database field longer than absolutely necessary is over-engineering, which is to say not at all, and arguably even less so: resizing a database field tends to be trivial, whereas we've been working on resizing the IP address for 20 years already and there is still no end in sight.

Really, this has nothing to do with "over-engineering": there is no extra "engineering" in making the address larger. It adds zero complexity, and it massively simplifies network design because it removes constraints that otherwise complicate building and maintaining networks.


Nah, in this case I agree. We thought the IPv4 space was more than we were ever going to need, and see where that left us...

Now we just took what we thought we needed, and added a few tens of orders of magnitude.


Let me give you an example to illustrate the scale.

An average human cell is composed of 100 trillion atoms. [0]

An average human body is composed of 100 trillion cells.

By that we see there are about 10^28 atoms in a human body.

Let’s be conservative, and say the world population for the next fifty years stays under 10 billion (most estimates say closer to 100 years).

That gives us about 10^38 atoms across all human beings on earth.

Why does this matter? Because 2^128 is 3.4 × 10^38.

That’s three IPv6 addresses for each and every atom in all the humans on earth for at least the next 50 years.

If that’s not the definition of overkill, I dunno what is.

[0] https://www.thoughtco.com/how-many-atoms-in-human-cell-60388...

Edit: And if you say the real routable space of IPv6 is only 2^64, then my point still stands. Rather than it being 3 IPs for every atom, it’s still an IP for essentially every cell of every human on earth for the foreseeable future.
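
Here's a minimal Python sketch of that back-of-the-envelope math, using the same rough estimates as above (none of these are exact figures):

  # Rough estimates from the comment above, not exact figures.
  atoms_per_cell = 100e12   # ~100 trillion atoms per cell
  cells_per_body = 100e12   # ~100 trillion cells per body
  population     = 10e9     # assume the population stays under 10 billion

  total_atoms = atoms_per_cell * cells_per_body * population   # ~1.0e38
  ipv6_space  = 2 ** 128                                       # ~3.4e38

  print(f"{total_atoms:.1e} atoms, {ipv6_space:.1e} addresses")
  print(f"{ipv6_space / total_atoms:.1f} addresses per atom")  # ~3.4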


Do you also have a problem with 64-bit systems that have provisions for 64 bits of address space? In your book they also seem like a waste, and we should have designed something closer to ~48 bits of address space.


No, I wouldn’t make the same comment if it were just a 64-bit address space. Some excess isn’t necessarily bad, but excess on top of excess is, if that makes sense.

The edit I made last night might seem to imply otherwise, but that was because I was a bit terse with that part, and I can’t go back and edit further now.


A 64 bit address space wouldn't be an excess though, it would be outright too small.

The whole point of L3 is to provide a layer of routing and aggregation on top of L2. Routing and aggregation require sparse allocations, so L3 requires more address space than L2 does. L2 is 64 bits for new protocols today (which are supposed to be using EUI-64), so L3 needs to be more than 64 bits.

People would complain loudly if we didn't use a power of two, so here we are at 128 bits.
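
As a side note, here's a minimal Python sketch of the modified EUI-64 construction from RFC 4291, which is how SLAAC derives a 64-bit interface identifier from a 48-bit MAC, and why the host half of an IPv6 address alone already takes 64 bits (the function name and MAC address below are made up for illustration):

  def mac_to_modified_eui64(mac: str) -> str:
      """Build a 64-bit IPv6 interface identifier from a 48-bit MAC."""
      octets = [int(part, 16) for part in mac.split(":")]
      # Insert 0xFF, 0xFE between the OUI and the NIC-specific half...
      eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
      # ...and flip the universal/local bit of the first octet.
      eui64[0] ^= 0x02
      # Group the eight octets into four 16-bit hex pieces.
      return ":".join(f"{(eui64[i] << 8) | eui64[i + 1]:04x}" for i in range(0, 8, 2))

  print(mac_to_modified_eui64("52:54:00:12:34:56"))  # 5054:00ff:fe12:3456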


> If that’s not the definition of overkill, I dunno what is.

Or perhaps you are short sighted. :)

Look at all the drama and effort that we have had to go through over a decade or two to get past IPv4: if we went with "only" 64 bits for addresses, and we miscalculated and ran out again, we would have to go through that all over again.

It's the same reason the ZFS folks (Jeff Bonwick) designed in 128 bits from the start:

* https://blogs.oracle.com/bonwick/128-bit-storage:-are-you-hi...


> A fully-populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

> Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.

Very definition of excess as well, so thanks for proving my point!


> Very definition of excess as well, so thanks for proving my point!

Except you're skipping over the part about running out at 2^64:

> Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away. Moore's Law for storage predicts that capacity will continue to double every 9-12 months, which means we'll start to hit the 64-bit limit in about a decade.

2^64 bits ought to be enough for anybody. -- jsjohnst
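
For what it's worth, the arithmetic in that quote checks out; a minimal Python sketch using only the figures quoted:

  dataset_exp = 50   # ~1 petabyte dataset = 2^50 bytes
  limit_exp   = 64   # 64-bit capacity limit = 2^64 bytes

  doublings = limit_exp - dataset_exp
  print(doublings)   # 14 doublings away

  # At one doubling every 9-12 months, that's roughly a decade:
  print(doublings * 9 / 12, "to", doublings * 12 / 12, "years")  # 10.5 to 14.0 years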


Never once did I say 2^64 ought to be enough for anybody, so how about you stop being a jerk?

Try reading what you quoted again: nobody is running out yet.

There are a lot of intermediate steps between 2^64 and 2^128! ;)


And then we expand to the stars, and suddenly one planet full of people isn't that much. And we'll develop nanomachines that need to be individually addressed, and suddenly our nice new IPv6 space lasts us only a hundred years.

These are just silly examples, but the point is that we don't yet know what will happen to make us run out of space.


> lasts us only a hundred years

Even if that’s true, that’s about 3x longer than IPv4.

A hundred years ago, computers didn’t exist. Can you imagine how well a network address scheme invented back then would work for our needs?

Why do you presume we can do a better job of predicting what the networking needs of a hundred years from now will be?


Then why stop at 2^128? Why didn't they use 2^256?


Not if this extra bit space is cheap (and it is, both on the wire and in routing tables / TCAMs / tries).



