What is the rational reason, if any, for bringing up things like these?
The time to have this discussion would have been in like 1993 or so. Now, IPv6 is what we have, and the standards are what they are, flaws and all.
The only reason I can think of is psychological: People don’t want to learn new things, so they find reasons to dislike the new thing to be able to pretend they don’t need to learn it.
I think it could be argued that the flaws in IPv6 are significant enough to result in a scenario where IPv4 is never actually retired and therefore IPv6 is never fully adopted.
It's been decades, but IPv6 is still deployed (at best) as a parallel network that effectively doubles management/maintenance overhead. And at worst, sites just add an IPv6 reverse proxy.
What's the problem with an IPv6 reverse proxy? If the Internet-exposed endpoint is IPv6, then from my point-of-view of a client, it's effectively that. For what I care about, they could be running fax machines over an analog phone line beneath that point.
We are way past the critical mass for IPv6; it has enough momentum to become the predominant "internet" in the next decade or so. Thinking that it won't happen is pretty ludicrous at this point.
My concern is that v4 may never be decommissioned. Sure, v6 could become predominant, but fundamentally there being two separate protocols running in parallel is a failed migration.
On a much smaller scale it's analogous to migrating from mariadb to postgres, but the migration takes 50 years and you end up realistically maintaining both.
Retiring IPv4 was never a strong goal of IPv6. The "inter-" in internet was always about "interoperability" between networks and side-by-side networks that interoperate was always the plan. Dual stack IPv4/IPv6 was in part chosen because it doesn't sweep the interoperability problems under the rug like some of these old "IPv4 extended" proposals did. You are never going to upgrade all devices at once and you were always going to have two networks, it was just a little bit easier with some of the other proposals to pretend it was one network if you squinted really hard and ignored all the interoperability problems that would crop up in practice because devices didn't update to understand the extensions.
Dual stack means twice the work for barely any benefit.
I should be able to have a router with an IPv4 network and multiple IPv6 networks without losing access to the IPv4 network. The IPv6 networks should be reachable via port NATting on the IPv4 address.
For example, I have three networks on my router, two of them IPv6:
1000:2000:3000:4001::1/64
1000:2000:3000:4002::1/64
192.168.0.1/24
The IPv6-only server on
1000:2000:3000:4001::10
should cope with sending a packet to
1000:2000:3000:4002::10
by routing it via the router.
It should also transparently convert a target of 192.168.0.3 to ::ffff:c0a8:3 and send it via the gateway.
The router should then convert it back to 192.168.0.3, with a source of 192.168.0.1, maintaining NAT state so return traffic goes back to 1000:2000:3000:4001::10.
If there's a service that 1000:2000:3000:4001::10 needs to expose, the router can do a dst-nat on, say, 192.168.0.1 port 80 and forward it to 1000:2000:3000:4001::10 port 80.
That way you can comfortably deploy IPv6 wherever possible and not have to worry about IPv4 other than at the router, where you just have the IPv4 subnet.
Likewise, my IPv6 network can reach 209.216.230.240 by running IPv6 across my network until it comes to a device with an IPv4 address, where it gets NATted. Just like it runs across my private RFC 1918 IPv4 range before source-NATting at the edge of my network.
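The 192.168.0.3 to ::ffff:c0a8:3 step described above is the standard IPv4-mapped IPv6 embedding (::ffff:0:0/96). A minimal Python sketch of the conversion, using the addresses from the example (the function names are mine, not from any router implementation):

```python
import ipaddress

def v4_to_mapped_v6(v4_str):
    """Embed an IPv4 address in the IPv4-mapped range ::ffff:0:0/96."""
    v4 = ipaddress.IPv4Address(v4_str)
    return ipaddress.IPv6Address((0xFFFF << 32) | int(v4))

def mapped_v6_to_v4(v6_str):
    """Recover the embedded IPv4 address, or None if not IPv4-mapped."""
    return ipaddress.IPv6Address(v6_str).ipv4_mapped

print(v4_to_mapped_v6("192.168.0.3"))    # -> ::ffff:c0a8:3
print(mapped_v6_to_v4("::ffff:c0a8:3"))  # -> 192.168.0.3
```

A router doing the described translation would apply exactly this mapping to the destination before handing the packet to its IPv4 side, plus the stateful source NAT the comment describes.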
Poor interoperability with NAT is a feature, not a bug, IMHO. NAT rarely makes sense in IPv6. In your scenario, why wouldn't you just set up your router so you can directly address the V6 subnets over V6 to begin with?
IPv6 works best when all addresses are globally routable (whether firewalled off or not). We're all so used to RFC1918 that we forget it was an ugly kludge that fundamentally broke how the internet was meant to work. IPv6 is the fix for that breakage: the address of every individual device can actually mean the same thing everywhere on the internet, as it was meant to be. L3 routing can be stateless again.
The primary intent of IPv6 is to replace IPv4, not coexist with it. Coexistence is transitory and not worth optimizing for over the future of the internet when V4 is dead.
V6-to-V6, sure, that doesn't need NAT. It's the V6-to-V4 case. NAT is preferable to dual stack. I don't want to double my administration effort by maintaining a v4 and a v6 network on every machine and every router, which appears to be best practice. Let me deploy v6, and only v6, but still interoperate with v4 until everyone has migrated.
As for NAT being a kludge, let's assume I have a simple small office network with two independent ISPs. Normally I want to send half my users out through ISP1 and half through ISP2.
If ISP2 fails, I want to send them all out through ISP1. OK, there's less bandwidth to go round, but that's better than having no bandwidth for half my users.
How do I do that with IPv6 without NATting (assuming I'm not large enough to be running my own AS and peering with two different providers)?
What I've been told is that both ISPs' v6 prefixes should be advertised with different priorities to the clients, which also yields the net bonus that applications that are stateless (UDP) or use MPTCP can seamlessly fail over or adapt to network conditions without the intervention of another network device.
I don't have easy access to multiple V6 PD-enabled providers to test this theory, and as someone with quite the neckbeard I really don't know how I feel about ceding this level of control to endpoints. But also, I'm not sure I hate it either.
Oh, and don't forget link-local and a ULA prefix for your local addressing requirements, for printers and whatnot that shouldn't be using dynamic discovery.
> What I've been told is both ISP's v6 prefix should be advertised with different priorities to the clients
That basically doesn't work with real clients. They'll do dumb stuff like use an address from provider A to send through the router advertising addresses from provider B. And they take forever to do anything in response to prefixes that are advertised as no longer usable, or simply no longer advertised.
Is this conjecture, or something you've tested or know to be tested? V6 devices are actually expected to be able to understand multiple route advertisements, and I know for sure they properly understand the mix of link-local, ULA and public prefixes.
I tested it; I was trying to get failover (preferably automated failover) between DSL and LTE on IPv6. Should be simple: advertise from DSL as normal priority (would do high priority, but I can't change how the modem advertises it), advertise from the LTE as low priority, somehow make the DSL modem advertise deprecated or at least stop advertising when it's disconnected.
V6 devices are expected to understand that and do the right thing, but Windows (10) doesn't, Linux was worse, and I don't remember what Android did and I didn't get around to testing FreeBSD, and that's all the OSes I have.
If you've got experience otherwise, I'd love to know. One of these days I need to set up IPv6 again, but what I'd really like to do is too much work, so I'm IPv4-only for the foreseeable future.
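For what it's worth, the "low priority" advertisement in that test corresponds to the RFC 4191 default router preference. With radvd on the LTE side it is a single knob; a hypothetical sketch (the interface name and prefix are invented):

```
interface wwan0 {
    AdvSendAdvert on;
    AdvDefaultPreference low;    # RFC 4191: mark this default route as backup
    prefix 2001:db8:b::/64 {     # prefix delegated by the LTE provider
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

Whether clients actually honor the preference bits is, as described above, the unreliable part.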
I tried turning on IPv6 on my VPS server running Docker. Not knowing much about IPv6, I naively assumed it’d just work. Evidently not, and you either need to run a NAT service or your containers become publicly routable, both having major caveats that I didn’t want to deal with.
The notion that every endpoint has a globally unique address, with prefixes coming down from some upstream provider is just fundamentally incompatible with how IPv4 networks are designed today.
I believe the comment is not a personal opinion but a reflection of IPv6's acceptance rate. If most of the web still uses IPv4, and not using IPv6 is still defended with "people are against new things" arguments, that shows there is at least one thing preventing people from using it. Considering that the new kid on the block is 26 years old, I think there is more than one.
Any new protocol is going to have adoption costs. The "issue" with ipv6 is that for a lot of people/organizations, the tolerable cost of adoption is still pretty much zero. Adding yet another protocol to the mix would not make that better.
By no means did I support the use of another protocol. All I focused on was that IPv6 has not been a huge success over IPv4. For the previous 26 years, the pros weren't enough to eliminate the cons. At least the overall usage ratio can be read this way.
2012 wasn't a bad time to discuss alternate options. Now is maybe a little late, but that's arguable.
There's a whole generation of people and experience that wasn't able to enter the discussions in 1993 and is available now. Maybe new eyes or experienced eyes would have a better solution.
The problem is clearer now than then. IPv6 makes many changes to address many problems, but maybe only address space is a real problem, because only address space seems to be motivating people to join IPv6.
Of course, the downside is not a lot of people want to be on three parallel networks, it's a lot of work. But perhaps, if that third network were more desirable and easier to implement, it could get larger coverage in a shorter amount of time than IPv6. Sort of like how TLS 1.1 never had more users than TLS 1.0, but TLS 1.2 overtook them both.
You would need real consensus and commitment from last-mile ISPs, backbone ISPs, networking vendors, OS vendors, CPE vendors, mobile networks, content providers, etc., though, and that's tough. The consensus on IPv6 seems clearly to be "we'll do it eventually, when we really have to" (which for some networks was 2012, for some is now, and for some seems to be never).
I wrote a proposal back in 1999 or so exploring IPv4-in-IPv4 as an alternative to IPv6. You would have used the outer IP header to traverse the internet backbone, and the inner IP header locally. It completely avoided the need to change any inter-domain infrastructure and, with some DNS-based gatewaying where the gateway to the Internet added/removed the outer IP header, could even be incrementally deployed to unmodified end systems (the idea was you would eventually modify end systems/applications to add the outer header, as this simplifies applications and avoids the DNS hack).
But in the end, IPv6 is cleaner, and it seemed better not to distract too much from progress there. If I'd known it would have taken another quarter century to make much progress, I might have made a different decision.
There were quite a few "extend IPv4" solutions over the years.
ISPs can already do that... by repackaging IPv4 traffic into IPv6. (When I worked at an ISP, this was an experiment being done to test the viability of reducing IPv4 address requirements across the backbone; you could also tunnel it over IPv4, I guess.) The traffic traverses their entire network inside an IPv6 tunnel and gets translated back to IPv4 at the edge.
As a user you lose some visibility in terms of what routers your packet traverses, but the same can be said about MPLS tunnels/IPSec tunnels where your packet magically seems to have gone just 1 hop instead of the 10 it actually took.
Can someone explain to me how about a decade ago we were going to run out of IPv4 addresses in a matter of months and we needed to switch to IPv6 or else, but now we’re still using it fine?
That having to NAT on your home Linksys/Asus and then getting NATed again by your ISP is fine? If you want to allow a connection to your PC or gaming system, how do you hole punch a double-NAT exactly?
The price per IP used to be almost nothing, now it's about $5/month and will keep going up; you can see these costs clearly with inexpensive hosting providers like Hetzner.
As a result many home and mobile users only get personal IPv6 addresses with IPv4 connections being tunneled over shared addresses.
I would pretty strongly oppose the notion we're using it "fine". IPv4 address allocation requests are on waiting lists. As of November, almost all requests have stopped fulfilling. We've completely run out of them.
Of course, SIP (not mine) and Paul Francis's PIP were what got merged to become IPv6. The rationale was that 128-bit addresses allowed all the flexibility of the address hierarchies from PIP within a fixed-length address. With hindsight, we've never really tried to use that flexibility in the real world, but the capability is there if we need it.
Curious why this hasn't taken off, it seems to make so much more sense to me.
Let IPv4 addresses run out. Allow parties to trade them. Slowly migrate from "one IP per box" to "one IP per net" with what looks like NAT, but can easily be traversed if you understand the extension. And once we start running out of IPs again, rinse and repeat. What am I misunderstanding / oversimplifying?
Too little, too late. Even in 2012 it was obvious that IPv6 was the future; pretty much everything already supported it, so maybe the hardest obstacle had been overcome. Maybe 15 years earlier some alternative format could have gotten traction, but back then the dot-com boom was in full swing and nobody worried about the future or IPv4 exhaustion.
It's fun with all these proposals that so many people think they make sense, presumably including their authors.
This one has a bunch of grave problems.
1. Notice that the address space shortage is not in fact alleviated although it is shy about admitting that. Nobody who doesn't have addresses gets addresses from this scheme. Instead, everybody who already has addresses gets even more of the new addresses and the author simply hopes they'll choose to give them away to those who don't have any. You know, like that time Bobby Kotick got a huge bonus and so he gave the money to er... oh right, he just kept the money.
2. But wait, how would they give away these addresses? The author proposes they can just give away a /29 at a time. In fact, an IPv4 /29 is not routable as a global route, so this will not work. The smallest size you can carve out of the global routes is a /24, and every time you do this you're making things worse for everybody in the backbone game by increasing fragmentation. Gosh, they're going to be pleased about so much of this "charity".
3. OK, well maybe instead of giving away addresses, our Good Samaritans will give back their existing allocation and take only one /24 for their own network now that is plenty big enough with the new addresses. But that means they must renumber absolutely everything which is one of the things this proposal was supposed to avoid and all their services lose the ability to interoperate properly with everybody who didn't upgrade yet, getting a degraded "sort of like NAT" mode until everybody in the world upgrades. Suckers.
4. It doesn't bother fixing all the other related infrastructure. That work was done for IPv6. PKIX works for IPv6 (certificates for e.g. the DNS service 1.1.1.1 contain IPv6 addresses; no, you can't just write any arbitrary text, that's not how it works at all), DNS works for IPv6, all the fancy modern stuff works for IPv6. But you would need to start over for this "Enhanced IP", and the paper neither proposes any way to avoid that nor includes all that work, so you're starting very late in 2012.
Still, there have been much worse attempted solutions written up. My favourites are the ones which don't realise addresses are just bits and propose we can fix everything by writing bigger numbers like 300.400.500.600 ...
> My favourites are the ones which don't realise addresses are just bits and propose we can fix everything by writing bigger numbers like 300.400.500.600 ...
My favorite was one which "realized" that addresses are just bits in the physical wire and proposed to use intermediate values for these bits (that is, using more than two voltages). I wish I had bookmarked that one, it was truly baffling. It was wrong on so many levels that it was hard to know where to start.
> […] but can easily be traversed if you understand the extension.
You need new code to "understand the extension". Any old network stack will not, and will thus not be able to send packets to it… just like old code does not understand IPv6 addresses.
> What am I misunderstanding / oversimplifying?
IPv4 is 32 bits, and all the data structures are 32 bits. We are running out of 32-bit addresses. If you want more address space you have to have more than 32 bits, and it is impossible to squeeze >32 bits into a 32-bit data structure, last time I checked.
So you need to ship code on every single Internet device to update it to handle >32 bits.
We've just spent the last few years shipping new code for larger address data structures, i.e., the 128 bits of IPv6. Look how long that's taken.
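The size mismatch is easy to demonstrate with a quick Python sketch (the addresses are documentation examples):

```python
import ipaddress
import struct

# Any IPv4 address fits in one unsigned 32-bit field...
v4 = int(ipaddress.IPv4Address("203.0.113.7"))
packed = struct.pack("!I", v4)        # exactly 4 bytes on the wire
print(len(packed))                    # -> 4

# ...but a 128-bit IPv6 address cannot be squeezed into it.
v6 = int(ipaddress.IPv6Address("2001:db8::1"))
try:
    struct.pack("!I", v6)
except struct.error:
    print("does not fit in 32 bits")
```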
IPv4 options stopped being viable on the internet (everyone drops packets with options) sometime in early 2000s (the quickest doc with references I could find is [0] dated 2014, but I remember the practice started much earlier…), otherwise it looks like an interesting proposal at first sight.
Edit: I didn't read much into the detail, assuming that hosts that do not understand this extension could just ignore the option. If the scheme doesn't work in that case, as the sibling comment suggests, then yeah, that's a problem indeed :)
Literally everything about IPv6 is easier and better. IPv4 is already much slower than IPv6, and now you're adding another layer of complexity on top of it? That doesn't sound too good. IPv6 is and has been ready to go forever now.
Your example is mostly a non-issue if you have working DNS. Which ... I know is a big assumption in a lot of networks, especially small to medium sized businesses, it seems. I don't know why DNS is such an issue for so many companies, but I can't help but think people would be more positive about ipv6 if we could get internal DNS solved first.
The only difference I see between EnIP and NAT is that both ends have to understand it instead of one end. It's almost exactly NAT but with an extension to store the private-side IP in a header so the router doesn't have to track state.
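As a toy illustration of that difference (this is not the actual EnIP wire format, just a sketch of the idea): the "extended" destination is really a (public, private) address pair, and because both halves travel in the packet, a border router can rewrite headers without per-flow state:

```python
import struct
import ipaddress

def pack_extended_dst(public, private):
    """Pack a 64-bit extended destination: public /32 + site-local /32."""
    return struct.pack("!II",
                       int(ipaddress.IPv4Address(public)),
                       int(ipaddress.IPv4Address(private)))

def unpack_extended_dst(blob):
    """Recover the (public, private) pair from the 8-byte extension."""
    pub, prv = struct.unpack("!II", blob)
    return (str(ipaddress.IPv4Address(pub)), str(ipaddress.IPv4Address(prv)))

blob = pack_extended_dst("198.51.100.1", "192.168.0.3")
print(unpack_extended_dst(blob))  # -> ('198.51.100.1', '192.168.0.3')
```

Both ends need code that understands the pair, which is exactly the deployment cost the comment points out.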
I have a question: when the iPhone was introduced in 2007, 3G came AFTER that, then 4G and now 5G. Why didn't the mobile revolution help move IPv6 adoption? I mean, at the time, why wasn't 3G built on IPv6 instead of on IPv4? Couldn't Apple have forced websites to play nice with IPv6 to get more users from the iPhone crowd? Android would have followed suit.
I get the whole legacy thing, but transitioning to, say, gigabit internet and beyond is somewhat fresh tech, so why wasn't that stack made primarily IPv6 rather than IPv4?
3G predates the iPhone by a lot. There were 3G smartphones (Symbian) and 3G mobile phones years before the iPhone. The first 3G network, if I remember correctly, "launched" before the year 2000 in Japan. (That is why some analysts suggest the iPhone was bringing the Japanese internet to the world with a touch screen.)
The 3G spec (now known as 3GPP) predates 2000. 4G was the first system to move from circuit switching to packet-based switching, and that was a lot on their plate already. Remember, 4G was designed in an era when 3G was considered a flop: billions were paid for spectrum and equipment, but MNOs were losing money for years. The iPhone was the saviour of the MNOs, as Apple managed to push ARPU up instead of continuing their death spiral.
There were talks of a completely new network stack for 5G and later 6G; I think that is still ongoing research. But without the smartphone, I am willing to bet IPv6 usage would be nowhere near one tenth of what it is today.
In India, the first 3G mobile and internet services were launched on 11 December 2008 by a state-owned company, Mahanagar Telephone Nigam Limited (MTNL).
Maybe in Japan, but I saw it AFTER the iPhone came out, and that is what I wrote.
Mobile is driving IPv6 adoption. Look at Google's IPv6 usage rate over time (https://www.google.com/intl/en/ipv6/statistics.html), and you'll notice there's a clear cyclical nature in IPv6 usage, with IPv6 peaking in weekends and troughing in weekdays (except in late December). And the COVID lockdown suddenly causes the weekday troughs to jump up 1.5 percentage points.
Not just mobile, home users too. And although some of us wanted IPv6 just like the Internet itself the vast majority of people just got it because it seemed like that's what everybody had. Which is fine.
When Sarah's mom's ISP rolls out IPv6 (or maybe when she gets new CPE because she upgraded service, moved home, or it just eventually died), her devices get IPv6, but she doesn't care, Facebook still works (it might be slightly faster, but not noticeably) and Sarah's mom doesn't know what the Internet Protocol Version Six is except that it sounds like something from a Star Trek convention.
Sarah's employer is Big Corp. When Big Corp's ISP rolls out IPv6, Big Corp IT agree that since not everybody went on the IPv6 training course yet, they should explicitly disable IPv6 to avoid unspecified "problems". Everything still works as before and Big Corp's IT department are cheerfully running stuff that actually matters in the Cloud, so what do they need more addresses for anyway? Maybe in 2025 there will be a budgetary requirement for IPv6 at Big Corp. Or maybe not. Perhaps the best chance for Big Corp to get IPv6 is if IT screws up and mistakenly doesn't disable it, then they find that later doing so makes things worse.
Mobile is leading the IPv6 transition, for exactly the reasons you think it is. It just happened a bit later than you expected. I'm pretty sure the vast majority of 5G deployments support IPv6.
The iPhone 2G was released when 3G networks were becoming common among the better telcos. For a critical period in the early 3GPP network stack, it was a niche player.
However, mobile networks are actually the biggest users of IPv6, especially if the rumour I heard about licensing being cheaper on IPv6 is true (IPv6 is also in many ways cheaper in backbone implementation).
This is why Apple recommends IPv6-accessible sites: for many mobile networks IPv6 is faster, as it avoids multiple levels of network address translation through a possibly limited number of gateways.
The mobile stack itself doesn't really care about IPv4 vs IPv6, except that v6 makes it much easier to build the network and enables easier IP mobility (keeping connections across changing addresses). Protocols run perfectly well on both v4 and v6 (SIP, IPsec, and the various other L4-and-higher protocols involved).