
Will any device made of matter / operating in polynomial time ever be able to hold the full 2^128 address space? It seems like we will forever be sub-netting into huge blocks simply because the entire space is mathematically infeasible to operate on. I think it's fair to ask: what's the point?



You are completely misunderstanding the purpose of IP address space. The purpose of IP address space is not to all be used; the point of address space is to enable connectivity. You enable connectivity by (a) having addresses available wherever you need them while (b) allowing for efficient delivery of messages. You achieve both of those with sparse allocations, where it is easy to add machines and networks with minimal changes to the overall structure and minimal administrative overhead.

Designing the IPv6 address space to be filled with devices is about as sensible as designing the DNS system to be filled with domains. Like, instead of allowing for domain names with 63 characters per label, we could have just said DNS names are alphanumeric strings 7 characters long, and you get assigned a random one if you register a domain, to maximize the utilization of the address space. But that would be simply idiotic, because the purpose of that address space is to provide readable names, not to "use all the addresses".


For IPv6, the Internet routing table will never be split into more than 2^64 blocks, as a /64 is the general minimum BGP-announceable IPv6 block.

Regardless, I don't get your argument. Why would not being able to address all of a given address space be a bad thing? Isn't the whole point to have more address space than will ever be physically possible to need?


>Isn't the whole point to have more address space than will ever by physically possible to need?

Is that not the definition of over-engineering?


It is over-engineering the same way that setting the maximum length of a database field longer than absolutely necessary is over-engineering, so not at all. If anything, it's far less so: resizing a database field tends to be trivial, while we've been working on resizing the IP address for 20 years already and there is still no end in sight.

Really, this has absolutely nothing to do with "over-engineering", as there is absolutely no "engineering" in making the address larger: it adds zero complexity, and it massively simplifies network design because it removes constraints that really complicate the building and maintenance of networks.


Nah, in this case I agree. We thought the IPv4 space was more than we were ever going to need, and see where that left us...

Now we just took what we thought we needed, and added a few tens of orders of magnitude.


Let me give you an example to illustrate the scale.

An average human cell is composed of 100 trillion atoms. [0]

An average human body is composed of 100 trillion cells.

From that we see there are about 10^28 atoms in a human body.

Let’s be conservative, and say the world population for the next fifty years stays under 10 billion (most estimates say closer to 100 years).

That gives us about 10^38 atoms across all human beings on earth.

Why does this matter? Because 2^128 is 3.4 × 10^38.

That’s three IPv6 addresses for each and every atom in all the humans on earth for at least the next 50 years.

If that’s not the definition of overkill, I dunno what is.

[0] https://www.thoughtco.com/how-many-atoms-in-human-cell-60388...

Edit: And if you say the real routable space of IPv6 is only 2^64, then my point still stands. Rather than it being 3 IPs for every atom, it’s still nearly two billion IPs for every human on earth for the foreseeable future.
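
A quick back-of-the-envelope check of the figures above (just a sketch using the same rounded numbers quoted in this comment, not precise measurements):

    # Sanity check of the scale comparison, using the quoted round figures.
    atoms_per_cell = 100e12      # ~100 trillion atoms per average cell
    cells_per_body = 100e12      # ~100 trillion cells per average body
    population     = 10e9        # assume under 10 billion people

    human_atoms = atoms_per_cell * cells_per_body * population   # ~1e38
    ipv6_space  = 2**128                                          # ~3.4e38

    print(f"{human_atoms:.1e}")                  # 1.0e+38 atoms
    print(f"{ipv6_space:.2e}")                   # 3.40e+38 addresses
    print(round(ipv6_space / human_atoms, 1))    # ~3.4 addresses per atom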


Do you also have a problem with 64-bit systems that have provisions for 64 bits of address space? In your book they also seem like a waste, and we should have designed something closer to ~48 bits of address space.


No, I wouldn’t make the same comment if it was just a 64-bit address space. Some excess isn’t necessarily bad, but excess on top of excess is. If that makes sense.

The edit I made last night might seem to imply otherwise, but that was me being a bit terse with that part, and I can’t go back and edit further now.


A 64 bit address space wouldn't be an excess though, it would be outright too small.

The whole point of L3 is to provide a layer of routing and aggregation on top of L2. Routing and aggregation requires sparse allocations, so L3 requires more address space than L2 does. L2 is 64 bits for new protocols today (which are supposed to be using EUI-64), so L3 needs to be more than 64 bits.

People would complain loudly if we didn't use a power of two, so here we are at 128 bits.
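
As a rough sketch of how the two halves pair up (the prefix and MAC below are made-up documentation/example values, not anything from the thread): a routed 64-bit prefix on the L3 side plus a 64-bit modified EUI-64 identifier derived from the interface's MAC on the L2 side fill the 128 bits.

    import ipaddress

    def eui64_from_mac(mac: str) -> int:
        """Derive the 64-bit modified EUI-64 interface identifier from a 48-bit MAC."""
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                                 # flip the universal/local bit
        octets = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
        iid = 0
        for o in octets:
            iid = (iid << 8) | o
        return iid

    prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")   # routed /64 (the L3 half)
    iid = eui64_from_mac("00:11:22:33:44:55")                   # the L2-derived half
    addr = ipaddress.IPv6Address(int(prefix.network_address) | iid)
    print(addr)   # 2001:db8:1234:5678:211:22ff:fe33:4455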


> If that’s not the definition of overkill, I dunno what is.

Or perhaps you are short-sighted. :)

Look at all the drama and effort that we have had to go through over a decade or two to get past IPv4: if we went with "only" 64 bits for addresses, and we miscalculated and ran out again, we would have to go through that all over again.

It's the same reason why the ZFS folks (Jeff Bonwick) designed in 128 bits from the start:

* https://blogs.oracle.com/bonwick/128-bit-storage:-are-you-hi...


> A fully-populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

> Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.

Very definition of excess as well, so thanks for proving my point!


> Very definition of excess as well, so thanks for proving my point!

Except you're skipping over the part about running out at 2^64:

> Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away. Moore's Law for storage predicts that capacity will continue to double every 9-12 months, which means we'll start to hit the 64-bit limit in about a decade.

2^64 bits ought to be enough for anybody. -- jsjohnst
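
(For what it's worth, the doubling arithmetic in the quoted passage checks out, assuming the ~petabyte starting point it cites:)

    import math
    petabyte = 2**50      # the "order of a petabyte" dataset from the quote
    limit_64 = 2**64      # the 64-bit byte-capacity limit
    print(math.log2(limit_64 / petabyte))   # 14.0 doublings away, as stated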


Never once did I say 2^64 ought to be enough for anybody, so how about stop being a jerk?

Try reading what you quoted again: nobody is running out yet.

There are a lot of intermediate steps between 2^64 and 2^128! ;)


And then we expand to the stars, and suddenly one planet full of people isn't that many. And we'll develop nanomachines that need to be individually addressed, and suddenly our nice new IPv6 space lasts us only a hundred years.

These are just silly examples, but the point is that we don't yet know what will happen to make us run out of space.


> lasts us only a hundred years

Even if that’s true, that’s about 3x longer than IPv4.

A hundred years ago, computers didn’t exist. Can you imagine how well a network address scheme invented back then would work for our needs?

Why do you presume we can do a better job of predicting what the networking needs of a hundred years from now will be?


Then why stop at 2^128? Why didn't they use 2^256?


Not if this extra bit space is cheap (and it is, both on the wire and in routing tables / TCAMs / tries).


No, it's a /48.

Log in to a route server and see for yourself.


You're right!

That makes it even fewer possible routes then: 2^48.


A subnet (i.e. a layer 2 network) always gets a /64 in IPv6, so the actual address space available for layer 3 routing is 64 bits.

The point of this ridiculously large space is that it makes routing tables smaller, because topologically-near areas can be given common prefixes without worrying about address exhaustion.
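
A small illustration of that point (a sketch with made-up prefixes under the 2001:db8::/32 documentation block): a provider announcing a single /32 can hand out thousands of customer /48s without adding a single route to the global table.

    import ipaddress

    provider  = ipaddress.IPv6Network("2001:db8::/32")   # one BGP announcement
    customers = [ipaddress.IPv6Network(f"2001:db8:{i:x}::/48") for i in range(1000)]

    # Every customer prefix falls under the provider's announcement, so routers
    # outside the provider only ever need the single aggregate route.
    print(all(c.subnet_of(provider) for c in customers))   # True
    print(f"{provider.num_addresses:.2e}")                 # ~7.92e+28 addresses behind one route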


Also, in IPv6 there are regional allocations, so you can have one route that covers Europe and another for APAC.



