FFS: MX records didn't take 20 years to get deployed!
This is crap. After twenty years, none of the problems with IPv6[1] have been resolved, and when you get an IPv6 address from some places (hotels, mostly), much of the Internet turns into an unusable ghetto.
There's also zero IPv6 at BT or Virgin Media, two of the largest Internet providers in the UK. We've been hearing we'll get it "next year" for a while, and I'm frankly tired of dealing with the incompatibility.
If IPv6 offered a single user-visible feature it would sell like hotcakes: Mobile IP? IPSEC? All just as optional as with IPv4. Heck, if they even nailed auto-configuration or let multicast onto the Internet at least they'd have some customers.
Instead, it's been sold with a twenty-year clusterfuck of doomsday predictions about running out of IPv4 addresses, and the experience has been impossible to learn from because (a) it's spanning three decades now, and (b) everyone disillusioned by IPv6 can't come around without losing face.
I'm tired of IPv6 patting itself on the back; this was the worst deployment ever.
The killer feature of IPv6 is that it has more addresses than IPv4. As for anything else: for every good feature of IPv6 there are a couple of bad features.
So if you look at it, if you want to delay investments as long as possible, you wait for IPv4 addresses to run out before moving to IPv6.
And guess what, ARIN ran out only last summer. APNIC ran out almost five years ago.
So for any big company that isn't really growing, why do IPv6? You have enough IPv4 addresses.
For any small company, in many cases you now have to buy addresses on the open market. Doable, but not nice.
And small companies wanting to become big companies? They are out of luck. Even if they can buy the addresses they need, it will be insanely fragmented.
Even Microsoft found out that for their corporate network, they don't really have enough IPv4 addresses, and for some reason they also don't want to buy them in large quantities. So they are trying to move to IPv6 internally.
Of course, this creates a very complex network effect. Big companies have no real incentive to offer IPv6, and small companies cannot go IPv6-only while big companies don't support it.
Fortunately, some big companies do run out of IPv4 addresses, so they do start using IPv6 to offload their carrier-grade NAT boxes.
For some new markets, say vehicle-to-vehicle communication, IPv4 is not an option. So that is likely to be IPv6 from day one.
I bet that legacy desktops and Web servers will be IPv4 for as long as they exist. There will never be an impetus to convert these.
If "Internet of Things" takes off, those things will be IPv6. Then you can give an IP address to every thermostat, every light bulb, every fire sprinkler head, every shelf tag price readout (saw some LCD shelf tags at a Whole Foods near Houston), etc. Those things don't care if they can't talk to some legacy Web site.
Then, in twenty years, everything on IPv4 is some creaky obsolete thing...sort of like how we have Windows XP things around now that aren't being replaced for various good reasons.
If by "legacy web server" you mean a dedicated server somewhere in a colo that has an IPv4 address, then yes. You can keep that until parts of the internet no longer have any connection to IPv4 at all.
If you expect to move that server, say to move to a new colo provider, do you get an IPv4 address there? If you have to pay extra each month for IPv4 and get IPv6 for free, do you pay?
Legacy desktops: there are these things like YouTube and Netflix that eat huge amounts of bandwidth. As an ISP you don't want that going through your carrier-grade NAT boxes. Better to try to get customers to use IPv6.
These days many web pages are insanely loaded with ads, trackers, etc. Lots of connections, again really bad for CGN.
Big CGN boxes are also really bad for geolocation. IPv6 is better for that.
There is also reason to believe that gaming behind carrier-grade NAT is going to be a pain.
Finally, if there was one group of products supporting IPv6 early on, it was operating systems. Even XP can do IPv6. Web browsers support IPv6 for a long time now. So as soon as the network provides IPv6, just about all desktops pick it up and start using it.
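The claim that the OS side is ready is easy to check on any given desktop. A minimal Python sketch (it only probes the loopback, not actual connectivity, and assumes a reasonably modern OS):

```python
import socket

# socket.has_ipv6 reports whether this Python build was compiled with IPv6.
print("IPv6 support compiled in:", socket.has_ipv6)

if socket.has_ipv6:
    # Binding to ::1 confirms the OS actually has a working IPv6 loopback.
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
        s.bind(("::1", 0))              # port 0: let the OS pick a free port
        host, port = s.getsockname()[:2]
        print(f"bound to [{host}]:{port}")
```

If the bind succeeds, the OS-side support is there; whether you get usable IPv6 is then entirely up to the network.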
If you want to serve web pages to most of the world, you have to have an IPv4 address, and that will continue until IPv6 adoption is well above 50%. Yes, there are places offering discounts for no-IPv4, such as lowendspirit. I have one. I have to bounce through another shell account to get to it as neither my home ISP nor workplace supports IPv6.
Above 50%? The threshold for not supporting IPv4 would be far higher than that. Think about how long it took for companies to stop supporting IE6 when there was still only residual usage. The effect of not supporting IPv4 would be far worse as well - not just a buggy-looking site, but zero functionality.
He/she's just saying that in 20 years, anything using IPv4 will be creaky and obsolete, similar to how we view WinXP now -- creaky and obsolete. Not that WinXP is stuck on IPv4.
> The killer feature of IPv6 is that it has more addresses than IPv4.
While technically true, the reason this is the killer feature of IPv6 - by far - is that it removes the need for NAT, restoring the most powerful feature of the internet: being able to publish as an equal peer without requiring the approval of a 3rd party.
Stuck on a "party line", network software was forced to find complicated workarounds[1] and de facto prevented most serious development of entire categories of network software. In practice, you still needed a 3rd party - usually some centralized service - to grant each peer their imprimatur[2].
The power of the internet is that it allowed unrestrained, ad hoc publishing. Entire protocols - such as HTTP - could be developed without first getting the approval of some governing body. We have done a very good job of pretending we still have this capability, but IPv4 went the way of cable TV a long time ago.
Why is it taking so long to deploy IPv6? That's easy: a conflict of interest. Why would a big company that controls a scarce resource - IPv4 addresses and the publishing power they represent - want to dilute that power by deploying IPv6? These companies were built on refining Shannon's ideas, bringing the cost of copying data rapidly to zero. They saw what happened when groups like the {RI,MP}AA became obsolete. Why would they want to invest significant amounts of money in their own obsolescence?
APPENDIX:
Whenever this topic is brought up, someone inevitably cries that NAT is necessary "for security". This misconception happens when NAT/"IP Masquerading" and a stateful firewall are conflated. You can drop packets without changing any addresses (firewall only), and you can rewrite addresses without doing any packet filtering (NAT only). While these features are often used at the same time - especially in cheap home routers - they can be (and are) used independently. A firewall should still be used with IPv6.
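The independence of the two functions can be made concrete with a toy model. The packet representation and function names below are purely illustrative - not any real firewall API:

```python
# Toy model: a "packet" is just a dict with direction/address fields.

def firewall(packet, allow_inbound):
    """Stateful-filter stand-in: drop unsolicited inbound packets.
    Note it never touches any address."""
    if packet["direction"] == "in" and not allow_inbound:
        return None                     # dropped, nothing rewritten
    return packet

def nat(packet, public_ip):
    """NAT stand-in: rewrite the source address on the way out.
    Note it never drops anything."""
    if packet["direction"] == "out":
        packet = dict(packet, src=public_ip)
    return packet

out_pkt = {"direction": "out", "src": "192.168.0.10", "dst": "203.0.113.5"}
print(nat(out_pkt, "198.51.100.1"))     # rewritten, never filtered

in_pkt = {"direction": "in", "src": "203.0.113.5", "dst": "192.168.0.10"}
print(firewall(in_pkt, False))          # None: filtered, never rewritten
```

Either function can be deployed without the other, which is exactly the point: dropping IPv6 NAT does not mean dropping the filter.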
Occasionally this "for security" concern is a concern for privacy. The NAT "party line" is seen as a source of anonymity ("you don't know which machine behind the NAT is making a request"). First - how many bits of anonymity are you gaining in practice? Second - if HTTP is involved, the browser is probably leaking addresses anyway (WebRTC[3]). Finally, is your source port or TCP timestamp[4] leaking information?
This cannot be overstated. It's amazing how many networks still have absolutely shitty NATs that break things like VOIP, video conferencing, remote access, file sharing, and online games. It makes the internet almost unusable.
> It's amazing how many networks still have absolutely shitty NATs that break things like VOIP, video conferencing, remote access, file sharing, and online games. It makes the internet almost unusable.
What's amazing is how many companies buy expensive firewalls for their corporate networks to do this on purpose, under the illusion that having LAN computers unable to access anything besides outbound HTTP and HTTPS on the internet somehow makes you more secure.
Haven't they noticed that everything piggybacks on top of HTTP/HTTPS these days? Even VPNs. You're protecting nothing.
> Haven't they noticed that everything piggybacks on top of HTTP/HTTPS these days?
Blocking outbound traffic at layer 3/4 does not require an expensive firewall. It provides protection against certain attack vectors, but as you say, most stuff nowadays uses http/https and in the case of trojans, likely reverse shells. Any firewall should be able to do this.
The expense usually comes in when doing application layer filtering and/or transparent/active proxying, which is the point at which you can block SSL VPN clients and the like. To do this over HTTPS it obviously requires MITMing TLS traffic, which I have mixed feelings about.
The best solution I've seen is PCs ("LAN computers") without a default gateway. Give them static routes to access the local (within the company) resources they need, point them towards a proxy server for outbound web access, and you've gone a long way towards reducing your risk.
Bernstein's rant shows its age. It kind of proves the reverse of what you are trying to state.
The text starts with a question that undermines the whole text: "does every node need direct Internet connection?". Today, with P2P traffic on the rise, the answer is a striking Yes. If, even with the hurdles of poorly performing NATs, there is market pressure for P2P protocols, it's absurdly obvious the Internet must be symmetrical, not client-server.
The text then goes on about the impossible transition plan for IPv6. It's now obvious that moving servers to dual-stack, then clients progressively onto IPv6 (through provider address-space exhaustion) is a sound plan. It's happening, and it's approaching the critical point. 2016 starts at 10% IPv6 traffic, will end at 25% IPv6 traffic. No sound company today will launch its services in IPv4 only and eschew 10%-25% of potential traffic.
> No sound company today will launch its services in IPv4 only and eschew 10%-25% of potential traffic.
The thing is, no sound company has to. No consumers have IPv6 only. Everyone is dual stack. By not deploying on IPv6, you don't lose a potential customer, you just make them use their existing IPv4 connection.
That's not entirely true though. IPv6 has been prioritised on various devices and Google taught us over a decade ago that response time makes the web experience feel better to the user.
You'd better be on dual stack if you don't want your users to think the competitors feels faster/smoother. Otherwise you'll be the Yahoo of your market.
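The mechanism behind this preference is the client's address selection: getaddrinfo returns results for both families and the stack tries IPv6 first (per RFC 6724; Happy Eyeballs then races the actual connections). A minimal Python sketch of just the ordering step, using synthetic candidate tuples rather than a live DNS lookup:

```python
import socket

def order_candidates(addrinfos):
    """Prefer IPv6 results, keeping relative order within each family.
    A crude stand-in for RFC 6724 destination-address selection."""
    return sorted(addrinfos, key=lambda ai: ai[0] != socket.AF_INET6)

# Synthetic getaddrinfo-style (family, address) tuples; the addresses
# are documentation examples, not real service endpoints.
candidates = [
    (socket.AF_INET, "192.0.2.10"),
    (socket.AF_INET6, "2001:db8::10"),
]
ordered = order_candidates(candidates)
print(ordered[0][1])   # the IPv6 address is tried first
```

A real dual-stack client would attempt a connection to each candidate in this order, falling back (or racing, per Happy Eyeballs) if the preferred family stalls.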
Not true at all - the problem is that IPv6 routes are less mature and less monitored than IPv4 routes, so using v6 preferentially is actually often slower.
I work with VOIP and we had to disable v6 for our customers because we were getting complaints about latency issues repeatedly from customers on v6. Disabled IPv6, no more complaints.
IPv6 is more likely to make you "the Yahoo of your market" than the inverse, especially if you work in latency sensitive applications.
Facebook and many other companies are finding that IPv6 is faster. Personally, at home I have found that disabling IPv4 allows me to more quickly buffer and stream Netflix, and about 30-40% of my traffic is IPv6. My Apple TV is the last hold-out that still seems to use IPv4 over IPv6 for Netflix.
It's actually not too contrarian - The problem isn't the average speed or what happens when things are working, it's what happens when it fails or when things need servicing. Packet loss and latency or slow response times for servicing of the routes affected our customers greatly. Far more than a small bandwidth improvement would buy us on a service that isn't bandwidth constrained at all.
In large part, this just boils down to the fact that fewer people use it, so fewer people notice problems and they're addressed less quickly. However, it makes things nearly unusable for applications like VOIP when issues are occurring. So while the bandwidth may be better on IPv6 - I have no personal experience with that side of things - we're far more concerned with link stability: packet loss, latency spikes, that kind of thing. Again, just anecdotal, but not necessarily contrarian either.
Convincing people to do things is hard. It may be that dealing with IP address exhaustion requires throwing away the Internet and creating a new Internet, but it must be accepted this is a tremendous ask. Finding a WIIFM[1] can help with that.
Sorta half-true. There are a lot of WISPs where users only get IPv6 addresses. The WISP runs 4to6 CGN devices for the IPv4-only sites.
Greenfield IPv6-only ISPs are a novelty now, but will be much more common in the future. Comcast kicked around the idea (at either NANOG or the @Scale conference) of not giving end users v4 addresses at all and doing 4to6 NAT in the cloud within 2-3 years.
Despite its overwhelming usage, IPv4 is already a second class citizen.
Terminating the IPv6 connections in the load balancers is technically pretty primitive, but covering "just" the IPv6 web clients in this way is adequate for most people using AWS. Here's e.g. Netflix on the subject (from 2012): http://techblog.netflix.com/2012/07/enabling-support-for-ipv...
You're going to hit a wall with every ISP in the US and any country whose major telecos don't give a shit.
Fundamentally, why do they have to support IPv6? Especially when they have monopoly power over their customers? It's not like the customers can see the difference, and if a website doesn't work for a network of millions because the site is v6-only, the only loser is the website.
I'd be surprised if we hit 15% this year. Web services are doing a good job supporting v6, but now you have to get all ISPs to do it as well, and those ISPs are always ranked as some of the most consumer unfriendly companies in a lot of countries because they have no market pressure to improve or serve their customers well.
Actually a huge chunk of the US ISP market has IPv6 in production. Comcast, Time Warner Cable, AT&T, Verizon Wireless, T-Mobile, Sprint, Google Fiber. Ironically, it is their huge monopolistic size that actually makes them great candidates for IPv6. They are getting too large to use IPv4. It will be the small consumer friendly shoestring ISP operations that will take the longest.
The devices are set top boxes, modems and more. Those devices only have a single plug that plugs them into the network, technically it is all inband.
It's only separated in that your cable modem gets a non-routable IPv6 address. It's non-routable within the internet but is a stock standard globally unique IP address. This is used for management purposes.
> It's now obvious that moving servers to dual-stack, then clients progressively onto IPv6 (through provider address-space exhaustion) is a sound plan.
It's not obvious. A twenty-year deployment plan is a joke: Maybe this is the best they could come up with, but that doesn't convince me this was a good plan, or that IPv8 should be modelled on this deployment.
> No sound company today will launch its services in IPv4 only and eschew 10%-25% of potential traffic
You have that backwards: No sound company would launch its services in IPv6 only because the Internet is IPv4.
> It's not obvious. A twenty-year deployment plan is a joke: Maybe this is the best they could come up with, but that doesn't convince me this was a good plan, or that IPv8 should be modelled on this deployment.
It wasn't broke, so nobody was going to fix it. A twenty-year deployment period has no intrinsic problem, only philosophical ones. IPv6 was not needed until IPv4 exhaustion hit. When it became necessary, deployment started.
> You have that backwards: No sound company would launch its services in IPv6 only because the Internet is IPv4.
You read me wrong. I did not mean single stack IPv6. I meant dual stack IPv6/v4. The Internet is no longer IPv4 only. If it isn't clear to you yet, it will become so along the next 12-24 months.
> If, even with the hurdles of poor performing NATs, there is market pressure for P2P protocols, it's absurdly obvious the Internet must be symmetrical, not server-client.
I think there's a fallacy here. IPv6 does not solve this problem. Even if all devices were IPv6, typical homes will still have a router that blocks incoming connections by default, even if it is now a deliberate stateful firewalling decision rather than a consequence of NAT.
Firewalling P2P traffic effectively remains just as hard with IPv6 as with NAT.
The only way to fix this currently is to either stop using firewalls (very dangerous in this era of never-security-updated IoT devices) or to use IPv6 equivalents of the same solutions the IPv4 NAT world has developed (NAT-PMP, uPnP, etc). The latter case won't improve anything in P2P connectivity using IPv6 over anything we have already with IPv4 NAT.
This is not correct, there are many ways to do this without resorting to NAT hacks like uPnP.
For example, ISPs hand out an entire /64 per customer (at a minimum; almost all will give you larger subnets if you ask). This allows you to have a separate IP address per application, which can be allocated from an unfirewalled pool.
There's also the fact that almost all major OSs now come with sane firewall defaults, also allowing applications to punch holes via APIs. (You could also have done this with IPv4 if it had a larger address space.)
If you spend some time thinking about how to solve these problems with IPv6, you will find that there are lots of ways to do it that are simpler and far more secure.
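The address-per-application idea above is easy to sketch with the standard Python ipaddress module. The prefix here is from the 2001:db8::/32 documentation range, and the per-app allocation scheme is purely illustrative - there is no standard for it, as the thread below notes:

```python
import ipaddress

# An assumed /64 delegation, using the reserved documentation prefix.
prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")

# A /64 holds 2**64 addresses - enough to never worry about collisions.
print(prefix.num_addresses)

# Hand each application its own address out of the delegated pool.
apps = ["voip", "game", "p2p-share"]
addrs = {app: prefix[i + 1] for i, app in enumerate(apps)}
for app, addr in addrs.items():
    print(f"{app}: {addr}")
```

Each address could then be firewalled (or left open) individually, instead of multiplexing everything behind one address and juggling port forwards.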
None of the solutions you suggest exist today, to my knowledge. Am I wrong?
For example, a single IP address per application would be fine, but there is no protocol that allows me to allocate addresses per application on a central home router such that I can then identify them to permit them individually.
And applications punching holes via APIs does exist in uPnP and NAT-PMP, but isn't that the same thing as what you dismissed above as a hack?
You are not wrong. There is no consistent way to do this today -- most of it is in debate, and there is no standard.
The most reliable way is to rely on your system firewall to prompt you to open up a port when an application binds to it (which can be accomplished with the default firewall software on a modern system.) Again, this is technically possible with IPv4 too, it's just impractical due to address space limitations.
> there is no protocol that allows me to allocate addresses per application on a central home router
Correct. But the whole point is to take the router out of the loop for this, and only use it as a coarse firewall. With IPv6 every device gets a separate /64, so it can simply allocate from its own pool of 10^19 addresses.
> And applications punching holes via APIs does exist in uPnP and NAT-PMP, but isn't that the same thing as what you dismissed above as a hack?
Not the same. With IPv6, you do not need to coordinate across multiple NAT devices (or even worse if you're double/triple NATted as in some countries.) You're also not limited by the maximum NAT table sizes (again amplified in the double/triple NATted situations.) Since each system has 10^19 addresses (preallocated out of a global pool of 10^38 addresses), these can now be treated as system resources and managed intelligently by the kernel.
> The most reliable way is to rely on your system firewall to prompt you to open up a port when an application binds to it...But the whole point is to take the router out of the loop for this, and only use it as a coarse firewall.
I don't trust, and doubt I'll ever trust, the firewall on my Smart TV, even if it had one. That's because I don't trust that it'll ever receive timely security updates. So I must use [the firewall on] a central router.
> And applications punching holes via APIs does exist in uPnP and NAT-PMP, but isn't that the same thing as what you dismissed above as a hack?
I think applications communicating with firewalling is not a hack, but applications communicating with routing is a hack. E.g. I have two friends who share a house (and therefore an internet connection) and I can play a game with one or other of them (via uPnP) but not both at the same time. With IPv6 + uPnP (assuming that's supported), they'd still have the same firewalling they have now, but I'd be able to play a game with both of them. No?
Sorry, I've only just seen this. I agree with you on this point, yes. But I think you have a pretty rare and specific case. The more general cry of "IPv6 fixes NAT!" incorrectly expects much more than this, I think.
I agree with you and wish there was a public address/private address pair system, so that connections originate from the private address (which has NAT-like semantics) while servers intending to be public listen on the public address. Local systems could still connect to the private address, so server applications could still listen there by default but would not listen on the public address unless explicitly configured to do so. Not having this local/wide address distinction makes it more likely that firewalls will simply block all traffic until, as you say, similar systems to those used with NAT are implemented.
Another way to fix this problem is a personal VPN that only establishes outgoing connections automatically by default but can proxy incoming connections when explicitly requested. This is the same type of thing as used with NAT, except without the expectation that your ISP is the one providing it. Considering how obnoxious ISPs are these days in terms of filtering and such, this would be a major improvement. Since I am currently doing this (minus the proxying, which so far I haven't needed but could easily add) for $15/year with an IPv4 address and a virtual host that is way overprovisioned for what it needs, it seems likely that this could be done on a large scale for IPv6 at a quite low cost.
> Not having this local/wide address distinction makes it more likely that firewalls will simply block all traffic until, as you say, similar systems to those used with NAT are implemented.
Yes, and that would be the sensible thing to do.
This "public/private" distinction doesn't even work: What if you have two LANs that aren't bridged? Possibly because they are connected through a VPN? Or maybe one Ethernet and one WiFi? Can you communicate between them? Or only on "public addresses"? What if the VPN is between two companies, say, who don't trust each other? And why would you make it depend on the "address type" anyhow, why not just have the application drop connections that come from outside the LAN?
Blocking all inbound connections by default is no real problem as long as you have a protocol for telling the firewall which addresses/ports to open to the public. The problem with the "port forwarding" that you need with NAT is that ports collide and that you create asymmetric address translations or bottlenecks and all that crap; a simple protocol that tells the firewall to open a specific port or address would be trivial to implement and to use.
Also, IPv6 has link-local addresses, which are essentially "private addresses for a LAN", so you can actually set up services that are reachable only on a given LAN (and nowhere else). But that really only makes sense for local LAN management (neighbour discovery and such), not for application protocols. Those really should just be reachable from everywhere, as far as the local API is concerned, and access control should then happen based on source addresses, possibly implemented as a stateful packet filter at the network boundary - otherwise you severely limit which network architectures can be used.
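These scope distinctions are queryable directly from the address itself; a small sketch using the standard Python ipaddress module:

```python
import ipaddress

# fe80::/10 is link-local, ::1 is loopback; 2001:db8::1 is from the
# documentation prefix and stands in for an ordinary routable address.
for text in ["fe80::1", "::1", "2001:db8::1"]:
    ip = ipaddress.ip_address(text)
    scope = ("link-local" if ip.is_link_local
             else "loopback" if ip.is_loopback
             else "other scope")
    print(f"{text}: {scope}")
```

A source-address filter of the kind described above would key off exactly these properties (plus the local prefix) rather than off a separate address type.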
I just noticed your reply but thanks, I agree that source filtering at the OS level would accomplish the same thing, be more flexible, and be more consistent with existing practice.
I think the main additional assumption I have to (part of) what you describe is that applications (on a general-purpose system) cannot be trusted to behave reasonably on the network; however, users need insecure local networking to work. The public/private distinction would work the way I imagined it, but only because I was assuming a limited form of source address filtering: essentially one bit on every interface (including VPNs) for "is this local" (I also assume separate addresses per interface). The second address is so that wildcard listening only listens on local addresses, and explicit public addresses need to be specified (or otherwise enabled) by the user before a server will listen on them. I didn't realize that just doing source address filtering as you suggest could accomplish the same thing with one address in a much simpler way.
There's an upside to this, yes ingress will still be blocked by default (good!) but with uPnP or something along those lines, we now open a single port and it's a 1:1 connection. No longer are we limited to having one device in the household holding that forwarded port and all traffic hitting the external IP going to a single internal machine.
This would allow two users to have the same port opened. This makes it easier to communicate with others on the internet.
> Firewalling P2P traffic effectively remains just as hard with IPv6 as with NAT.
Nope, it's much easier because the addresses are easy to predict. When both parties are behind NAT, neither side knows which address and port they will appear to be coming from to the other side, while without NAT, both parties know exactly what address the other side will see (it's the one the local socket is bound to ...), that makes it relatively easy to establish a connection through a stateful firewall with a simultaneous connect if both sides can somehow tell each other where their respective sockets are.
It's outright trivial with UDP, not quite that easy with TCP: Assume both sides are behind a stateful firewall, both sides simply send a packet from their socket to the respective remote address/port, which will cause each local firewall to establish a connection tracking entry that will allow packets in the respective opposite direction to pass through. There is a race condition that might cause one of the packets to be rejected by the remote firewall (if it arrives before the locally originated packet passed through the firewall), but that doesn't matter: Afterwards, both firewalls have an entry for the connection, and all subsequent packets will pass through just fine. This is impossible with NAT because you cannot know the remote address that the NAT will assign for the connection, so you cannot send a packet to establish the entry in your firewall, and you cannot learn the address because you would need to receive the packet to see the address on it, which you can't as long as the firewall doesn't know about it.
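The UDP case can be sketched in a few lines of Python. This runs both "peers" on the loopback, so no firewall is actually traversed - the point is only that each side knows the other's real socket address in advance, which is exactly the knowledge NAT takes away:

```python
import socket

# Two peers, each bound to a known address/port. Without NAT, each side
# can tell the other its real socket address out of band.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(5)
b.settimeout(5)

# Both sides send "simultaneously". On a real network, each outgoing
# packet would create the conntrack entry in its local stateful firewall
# that lets the peer's packets in - no address rewriting involved.
a.sendto(b"hello from a", b.getsockname())
b.sendto(b"hello from b", a.getsockname())

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_b)   # b'hello from a'
print(msg_at_a)   # b'hello from b'
a.close()
b.close()
```

With NAT in the path, `b.getsockname()` would be useless to `a` - the address `a` needs to hit is whatever the NAT picked, which neither side can learn without help from the rendezvous tricks the comment above describes.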
Also, a protocol for opening ports on the stateful firewall would be much simpler without NAT, as the firewall doesn't need to allocate ports or handle conflicts between multiple clients wishing to be forwarded the same port: The address/port allocation authority is with the client; the client('s kernel/network stack) can simply allocate ports on its assigned addresses and then only needs to tell the firewall that connections to a specific address plus port should be allowed in. This also means that, for example, the firewall forgetting those open ports doesn't cause much trouble: The clients can simply tell it again which ports to open, and they will be reachable again, at the same address - no need to tell peers a new address because a uPnP gateway forgot which ports it had allocated to which client.
And maybe most importantly: P2P traffic between devices on the same LAN becomes much, much simpler. Currently, if you have two participants of some P2P system in one LAN segment, where both establish an open port on the NAT gateway via uPnP, when one wants to connect to the other, there are roughly three possible scenarios. It can try to connect to the public address/port as established by uPnP, just like any remote client would.

If you are unlucky, the NAT gateway only does DNAT, in which case the connection won't work, because any responses will come from the local address of the peer instead of from the public address of the NAT gateway, so they won't be recognized as belonging to the connection.

If you are lucky, the NAT gateway does hairpin NAT, in which case the connection works, but all traffic has to go through the NAT gateway in order to rewrite addresses, instead of directly from one machine to the other at the ethernet level, which usually will be much slower than a direct transfer (even more so when there might be multiple people doing the same thing on the same LAN, where the switch might well be able to forward multiple gigabit flows at once, but your DSL NAT gateway most likely will not, not even if they are both in the same case).

Or, if you want to avoid both of those, you'd have to somehow detect that you are in the same LAN and then figure out what the address of the peer on the LAN is in order to connect to that address instead (and note that you can't use the local addresses to decide that, even if you know them, as you might well have addresses in the same private address range but still be connected to completely different LANs).
Without NAT? You simply connect to whatever address the peer announces. If you are on the same LAN, your machine's routing table will automatically route the packets to your local LAN, directly to the other machine, at full LAN speed. If not, they will be forwarded to your local gateway, through the internet, through the remote stateful firewall, and to the remote peer. No difference at all, apart from the speed, possibly, it just works.
> It's happening, and it's approaching the critical point. 2016 starts at 10% IPv6 traffic, will end at 25% IPv6 traffic. No sound company today will launch its services in IPv4 only and eschew 10%-25% of potential traffic.
It seems like this is another doomsday prediction. Evidence would make it persuasive.
"The text starts with a question... "does every node need a direct Internet connection?""
I do not see the word "direct" in any part of the rant. Am I missing something? Are you sure that is what he meant? The text reads "do these computers really need to be on the Internet?"
Indeed an internetwork, like any other network, should be symmetric, but I am not sure adopting IPv6 is the only way to achieve that. Nor that adoption of IPv6 would be used for that purpose.
Instead I see asymmetric, data-collection-motivated ideas like "Internet of Things" being put forth as "the way of the future". Such ill-advised ideas would more likely be the justification for using IPv6.
Do all these "things" (computers) need to be on the Internet?
For a pedantic asshole, DJB is on an awfully high horse here, bitching about interoperability and complexity, while his own tinydns/dnscache require a separate fucking utility to convert time/date stamps and IP addresses from hex into something actual people actually understand.
Meanwhile Belgium has nearly 50% dual-stack IPv6 deployed and there haven't really been any issues. It's configured well and works well. The vast majority of users probably hasn't even noticed.
Yeah, being from Belgium I can say that it is great. Makes it easy for me to test the IPv6 support on my servers.
This is one advantage of having an active duopoly [1] in the ISP market. Belgacom (38% market share) started migrating customers to dual stack in 2013. Telenet (40% market share) soon followed in 2014.
IPv6 does offer user-visible features though. My home VOIP line doesn't work on IPv4 without arsing around with NAT, which is something that I'm not inclined to do or troubleshoot. On IPv6, plug it in, configure the firewall and it just works.
Also, I found a simple solution to the fact that neither VM nor BT offers IPv6 - use a different ISP.
If VOIP is harder to configure under the cheaper service, then it's a question of how much you value your time. Maybe for you the cheaper option is the better choice, but maybe for someone else it wouldn't be.
Likewise for gamers. Often IPv6 is lower latency or again easier to configure. For some people that's worth paying money for.
Ultimately as IPv4 addresses get more expensive and IPv6 equipment becomes cheaper, the costs will shift, and the benefits will remain.
Can you really play games using IPv6? You would need all the other players in the same party to use IPv6. That would severely limit who you can play with in the UK.
I think some games will connect peer-to-peer by default, and fall back to relaying via a central server in the case of NAT (particularly CGNAT or double-NAT cases that can't be bypassed by the usual techniques). So NATed players can still play, but players on IPv6 or directly internet-connected IPv4 get the best experience.
I can often get an IPv6 address that fails to work: I'll do DNS lookups and get a huge delay while connectivity fails over onto the Internet (IPv4). Hotels are worse.
The best solution right now is to disable my OS's IPv6 support which is positively insane if the goal is to get everyone to do the opposite.
The thing is, IPv6 proponents want me to do something (vote? switch? not really sure), but don't equip me with the ability to do it smartly. Instead I get derided by people who judge me for not being smart enough, or valuing my time enough (wtf?) to switch to IPv6. Like I'm somehow holding back progress.
If you get a broken IPv6 address from a network, then that network is broken. It's no different from having a DHCP server hand out the wrong DNS server IPs - blame the guy who can't configure the equipment properly rather than the underlying technology. FWIW, after 10 years of extensive travel and hooking onto whatever hotel/conference/shop network you can imagine, I've seen broken DNS, broken routes and broken proxies - yet to see broken IPv6 though. Hey ho.
> The thing is, IPv6 proponents want me to do something (vote? switch? not really sure), but don't equip me with the ability to do it smartly. Instead I get derided by people who judge me for not being smart enough, or valuing my time enough (wtf?) to switch to IPv6. Like I'm somehow holding back progress.
I'm not really sure what this means. Everyone values their time differently, I'm certainly not going to judge anyone for that. If IPv6 doesn't solve any problems for you, then great. Don't try and pretend that it's not helping loads of other people solve their problems though.
> If IPv6 offered a single user-visible feature it would sell like hotcakes
Being able to connect to my home machine from anywhere. Presently, I have to manually[1] mess around with NAT to accomplish this, and the obvious ways to set up `ssh machine-a.me.com` and `ssh machine-b.me.com` simply don't work. That alone, to me, is a single user-visible feature.
Further, I've had tons of scenarios proposed by my parents that would be considerably easier to do with IPv6. "Why can't I just transfer this file between these two computers? Why does it need to go through some third party?" (This was really a bit before Drive, but even with Drive, it's still an extra step that doesn't really need to be.) As an engineer, there's plenty of problems I simply cannot solve while it remains impossible to address a computer that's supposedly on the Internet.
> Instead, it's sold a twenty-year clusterfuck of doomsday predictions about running out of IPv4 addresses
I can't actually obtain an IPv4 address for a device on my network. There's absolutely nothing I can do. So, from my perspective, those doomsday predictions appear to be correct. (And when I can, it often costs extra for something that ought to cost nothing.)
[1]: All the while we have standardized protocols that should allow software to automatically configure NAT in a router, except they're supported by nobody…
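For reference, one of those standardized protocols is NAT-PMP (RFC 6886), where a client sends a tiny UDP request to the gateway asking for a port mapping. A minimal sketch of building (not sending) such a request, assuming only Python's stdlib:

```python
import struct

def natpmp_map_request(internal_port, external_port, lifetime_s=3600, tcp=True):
    """Build a NAT-PMP (RFC 6886) port-mapping request.

    Wire format: version (0), opcode (1 = map UDP, 2 = map TCP),
    16 reserved bits, internal port, suggested external port, and the
    requested lifetime in seconds. It's sent via UDP to the gateway on
    port 5351.
    """
    opcode = 2 if tcp else 1
    return struct.pack("!BBHHHI", 0, opcode, 0,
                       internal_port, external_port, lifetime_s)

req = natpmp_map_request(8080, 8080)
assert len(req) == 12  # mapping requests are exactly 12 bytes
```

The request itself is trivial; the commenter's complaint is that the router on the other end rarely implements it.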
The big UK ISPs don't care because they've all acquired enough IPv4 addresses to satisfy their current demand & growth projections.
They all claim to be preparing trials but I wouldn't expect to see any real activity from them until we start seeing IPv6 only sites, which would be unreachable to their subscribers.
If you don't believe we need IPv6, read and properly understand the awful mess that is ICE: http://tools.ietf.org/html/rfc5245
This is what VoIP and other similar peer-to-peer applications do today to get connectivity in our wonderful NATed IPv4 world. Are you really arguing we're in a happy place with IPv4?
We all know we have to move to something new, we just mostly agree that IPv6 is not a very good solution to our problem. It's full of problems, has no killer feature, and has no good transition methods.
I think in this case geocar is implying that you'd only get an IPv6 address. Since vast portions of the internet aren't available over IPv6 it's not a very useful internet.
Some public wifi providers (like the mentioned hotels) hand out IPv6 addresses but don't actually route v6 traffic. So any connections which use v6 by preference will be broken from those connections.
You can agree with this rant and still want to deploy IPv6. All the arguments are valid but even in the worst case it's better than the status quo.
There simply was no technical possibility for backwards compatibility, due to the way IPv4 was designed. So a dual-stack deployment has been pretty much the way to go from the start. It's unfortunate that it takes 20 years, but no better deployment scheme has been proposed during this time, only lots of new tunnel and NAT schemes. The smart phone revolution was really slow, too.
> FFS: MX records didn't take 20 years to get deployed!
So what? IPv4 took some 25-30 years to get mass adoption, and didn't have an earlier version of itself competing for mindshare.
Except for the several messed-up deployments, IPv6 is progressing well, and even fast. It'll be available as soon as our current crisis becomes undeniable... and judging by your comment, it's still possible to deny it.
Oh yes. I'm disabling IPv6 on all my clients here and would rather live in the idiotic, crazy and stupid world of IPv4 with CGNAT ('Dual Stack Lite'). That works at least somewhat consistently; if my internal machines are running dual stack instead, I run into issues every other minute (usually DNS related, and yeah - maybe my ISP is too dumb to run its mandatory native IPv6 line, but I cannot fix that).
I used to ask my ISPs "When will you finally provide IPv6?". Then I stopped caring. Now I wish they hadn't bothered, and I'm considering paying a premium to go back to IPv4.
> This is crap. After twenty years, none of the problems with IPv6[1] have resolved, and when you get an IPv6 address from some places (hotels, mostly), much of the Internet turns into a unusable ghetto.
I am still able to play games and browse the internet and SSH to machines. What's wrong?
Mail doesn't silently fail if some intermediate server is using the A record. A compatible upgrade to IPv6 is simply impossible, because you would have no way to debug the failures.
The article you cited is about a decade out of date. Some points are still relevant but most are not. Deploying IPv6 in modern times is not difficult or expensive.
BT, Virgin and Sky are all testing it now, so they're well beyond just empty promises at least; last I checked, Sky was at 1 million households with IPv6.
No, it's a single network segment, not an end user. There unfortunately are quite a number of providers who seem to think it should be an end user, but that's braindead, exactly because it allows for only one network segment (if you want to use autoconfiguration). The address space is intentionally so large as to not restrict people to specific network architectures, and to avoid any administrative overhead for allocating additional addresses: the original assumption was that every "end site" (that is, a customer of an ISP) gets a /48 by default, unless they show that they do indeed need more (which would be a very rare exception).
You will block many more possible addresses, but you'll still only block one subscriber's network(s) which is exactly equivalent to blocking a v4 NATed address - you block all of their LAN, regardless of which individual device is compromised and which one isn't.
Sure it is, but it's still just a single firewall rule. There isn't really any difference for the firewall whether you block a specific address or some network prefix.
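The point generalizes: to a firewall, a prefix match is as cheap as an exact match. A quick illustration of the semantics of such a rule, using Python's stdlib ipaddress module (the prefixes are documentation addresses, purely hypothetical):

```python
import ipaddress

# One /48 "subscriber" rule covers every address the subscriber can use,
# just as blocking one NATed IPv4 address covers their whole LAN.
blocked = ipaddress.ip_network("2001:db8:abcd::/48")

attacker = ipaddress.ip_address("2001:db8:abcd:42::1")   # inside the prefix
bystander = ipaddress.ip_address("2001:db8:beef::1")     # different subscriber

assert attacker in blocked
assert bystander not in blocked
```

A real firewall does the same longest-prefix match in one rule, whether the mask is /128 or /48.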
Agreed. IPv6 has been a disaster whenever I've experienced it. Here are some shocks I've received:
- there is concern that when you browse the web with an IPv6 address you create a target for attackers (by revealing your address). Solution ... shadow IP addresses! When I saw this on my Mac, I was shocked!
- IPv6-to-IPv4 NAT exists but is stateful. WTF? This significantly increases the cost and lowers the ability to scale. A high-performance NAT solution would have been the first thing I would have designed (for interop purposes).
How is this any different to IPv4? An incoming default deny rule on your firewall offers the same protection as the "accidental" protection that NAT provides, and you can translate/proxy/VPN/whatever-else IPv6 addresses/connections in all the same ways you can with IPv4 in order to disguise the source IPv6 address if you really have a need to.
Heck, if you are worried about security, having a /64 allocated means that if you aren't providing incoming services you can jump around a huge range, so you don't need to be at the same address now as you were an hour ago (unless you have active TCP streams or similar that have been running that long). And even without moving around, if you pick your address within the /64 arbitrarily, you aren't going to be found by a port scanner in any amount of time that would make scanning a practical option.
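As a rough illustration of how sparse a /64 is, here's a sketch of picking an arbitrary host address inside a prefix (similar in spirit to RFC 4941 privacy addresses; the prefix is a hypothetical documentation one):

```python
import ipaddress
import secrets

def random_address_in(prefix: str) -> ipaddress.IPv6Address:
    """Pick a cryptographically random host address inside an IPv6 prefix."""
    net = ipaddress.ip_network(prefix)
    offset = secrets.randbelow(net.num_addresses)
    return net[offset]

addr = random_address_in("2001:db8:1:2::/64")
assert addr in ipaddress.ip_network("2001:db8:1:2::/64")
# A /64 holds 2**64 addresses; scanning it at a million probes per second
# would take over half a million years on average to hit one host.
```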
> IPv6 to IPv4 NAT exists but is stateful.
Isn't IPv4 NAT stateful when translating potentially many source addresses? Otherwise how do you reverse the translation when response packets arrive?
A one-to-one mapping shouldn't need to be stateful, but where that is useful most people go with being dual-stacked instead.
I think what the OP means here is that by default some stacks used to use the numbers from the MAC address for the last part of the IPv6 address, creating privacy concerns. Most stacks these days will by default, or can be configured to, get a different random IPv6 address (within your allocated range) every time they need one.
Traditional NAT implies state. If you aren't keeping track of state you can't traverse the NAT.
The state is the NAT table.
Now, there are some "NAT" implementations that don't keep state, but they're for very specific scenarios. You can't port forward, for example. You can map IPv4 to IPv6 1:1, though, but why do that when you can give native v6 the whole way?
> This document describes the Stateless IP/ICMP Translation Algorithm (SIIT), which translates between IPv4 and IPv6 packet headers (including ICMP headers). This document obsoletes RFC 2765.
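The "stateless" part is easy to see: the translator maps addresses arithmetically, with no connection table to keep. A sketch of the address embedding behind this family of translators, using the RFC 6052 well-known prefix 64:ff9b::/96 (stdlib Python, not actual SIIT code):

```python
import ipaddress

# RFC 6052 well-known translation prefix: the low 32 bits carry the IPv4 address.
PREFIX = ipaddress.ip_network("64:ff9b::/96")

def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the translation prefix."""
    return PREFIX[int(ipaddress.ip_address(v4))]

def v6_to_v4(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address: just mask off the low 32 bits."""
    return ipaddress.ip_address(int(v6) & 0xFFFFFFFF)

mapped = v4_to_v6("192.0.2.1")
assert str(mapped) == "64:ff9b::c000:201"
assert str(v6_to_v4(mapped)) == "192.0.2.1"
```

Because the mapping is a pure function in both directions, no per-flow state is needed for this direction of translation; the statefulness the parent complains about comes from squeezing many v6 sources behind few v4 addresses, not from the header translation itself.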
Where are you getting the "19 ivory tower academics and 1 guy from Bell Labs" from?
Per [0][1] it was:
> The working-group members were J. Allard (Microsoft), Steve Bellovin (AT&T), Jim Bound (Digital Equipment Corporation), Ross Callon (Wellfleet), Brian Carpenter (CERN), Dave Clark (MIT), John Curran (NEARNET), Steve Deering (Xerox), Dino Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann (Boeing), Mark Knopper (Ameritech), Greg Minshall (Novell), Rob Ullmann (Lotus), and Lixia Zhang (Xerox).
Almost none of which are academics and it wasn't the 1980s.
No, I used to do OSI-based networking for the UK's main X.400 domain - and we knew about the IPv4 address exhaustion back then, and I looked at some of the initial proposals.
By the late '80s it was obvious that the way that IPv6 was proposed was not tenable - it should have been killed 20 years ago.
The proposal for "IPng" that became IPv6 was selected in August 1994. It and various other proposed protocols had been under discussion in the IETF for roughly the previous two years (from memory). Prior to that the IPv4 address exhaustion issue was certainly known about. For example OSI CLNS had (has?) variable length addresses and was for a while a contender for IPng. NAT actually extended the lifetime of IPv4 by decades. In 1994 we were forecasting the end of IPv4 with the introduction of Windows95! Conversely, the biggest benefit IPv6 brings today is avoidance of NAT.
This is insane... IPv6 as a completely separate network from IPv4? JUST ADD 12 BYTES TO THE FRONT OF THE ADDRESS YOU ASSTWATS, then the 0::0/96 is all the current IPv4 space and the rest is up for grabs. Yes, there are still interoperability problems with this scheme, but the alternative right now is to give every device two public addresses, be effectively on two completely distinct global networks and hope everything works. It doesn't seem to.
You might want to learn about how IP works before you tell people what it can do.
IP has on each packet both the source and the destination address, because the destination uses the source address to send back the response. Nothing works if you can't send responses.
If your device has an address from the "extended address space", you cannot talk to any device that doesn't know about that "extended address space" because they won't know how to read your source address and how to send a response back to it. Just as with IPv6: You cannot talk to an IPv4-only device if all you have is IPv6.
Essentially, IPv6 does exactly what you are suggesting. Things just don't work as you imagine them to, but rather one unavoidably ends up with "a completely separate network" if one does things as you suggest.
If you don't completely separate the networks what are legacy IPv4-only routers going to route on? The lower 32 bits of the address? Great, now you get unreliable silent failures where your packets sometimes get stuck in routing loops depending on which path they take. That would be an absolute nightmare to debug.
If a backwards compatible, "IPv4.1" version would have been deployed, designed to route transparently through the legacy IPv4 infrastructure, then public routes could have been restricted to the legacy field for a number of years by IANA.
32 bits was perfectly adequate for routing for decades and would still be if not for the route explosion caused by exhaustion itself.
A similar problem could still happen inside your mixed IPv4/"IPv4.1" system, but in that case it's your problem to fix and not the fault of the protocol or of some distant sysadmin.
> If a backwards compatible, "IPv4.1" version would have been deployed, designed to route transparently through the legacy IPv4 infrastructure, then public routes could have been restricted to the legacy field for a number of years by IANA.
So endpoints would not have publicly routable addresses, and your packets would be sent across the Internet encapsulated as IPv4. Isn't that just NAT? It would work, to the extent that NAT works today, but it wouldn't get us any closer to having a publicly routable network with more addresses (you'd still get undebuggable problems whenever you tried to switch on public routing), and it wouldn't address the real problem which is not just IP address exhaustion but also class A, B and C exhaustion. What do you do when a new datacenter wants to connect to the Internet? Give it a single IP taken from the middle of a block owned by the same company? That's your "route explosion caused by exhaustion itself" right there.
No, the critical difference being that "NAT internal IPs" are now public IPv6s and the NAT mapping, instead of being private to the NAT box is now exposed to the other "NAT boxes" on the internet, for example encapsulated in an IPv4 option that is ignored by legacy systems. This means two IPv6 networks can converse transparently and not care what IPv4 connectivity lies between them.
This further means that:
1. IPv6 deployment is no longer a tragedy of the commons: if I deploy IPv6 I get its benefits immediately along with my remote peers, no longer dependent on providers, AWS or some remote sysadmin over which I have no control.
2. There are no longer any dual stacks to maintain, nor the timeouts and switchovers that plague us today; IPv6 just works on the existing IPv4 infrastructure until it becomes dominant.
3. IPv4-only services are incentivized to upgrade because they can provide better service to IPv6 customers, who share a common IPv4 that will presumably revert to a form of NAT when connecting to an IPv4-only server.
In short, a rational and much less risky upgrade path.
>What do you do when a new datacenter wants to connect to the Internet? Give it a single IP taken from the middle of a block owned by the same company?
It's obviously too late now to fix this. What I am saying is we did not have a datacenter explosion, but a route explosion fueled by exhaustion, where the same datacenter needed more public IPs, got them in a myriad of tiny blocks and broadcasted misery to all routers on the planet. An IPv6 success would have allowed us to keep 32 bit routes for a long time.
> No, the critical difference being that "NAT internal IPs" are now public IPv6s and the NAT mapping, instead of being private to the NAT box is now exposed to the other "NAT boxes" on the internet, for example encapsulated in an IPv4 option that is ignored by legacy systems. This means two IPv6 networks can converse transparently and not care what IPv4 connectivity lies between them.
But IPv4 connectivity can only route according to the IPv4 rules. Which means two "IPv6" networks would have to use the IPv4 rules as well, because a network where half the routers are using one set of rules and half of them are using the others will go terribly wrong. So you're stuck only using "IPv6" at the endpoints, and IPv4 in between - i.e. NAT.
> It's obviously too late now to fix this. What I am saying is we did not have a datacenter explosion, but a route explosion fueled by exhaustion, where the same datacenter needed more public IPs, got them in a myriad of tiny blocks and broadcasted misery to all routers on the planet. An IPv6 success would have allowed us to keep 32 bit routes for a long time.
The estimate I've seen is that fragmentation means there are 3x as many routes as there should be. Which sounds like a lot, but with the exponential growth in the number of internet-connected devices, that would only have covered another ~2 years' growth.
Of course, all such encapsulated "IPv6" traffic is routed according to the public IPv4 rules up to the network that emitted the rules, which will further examine the full address and take appropriate private routing decisions. To talk to your "IPv6" node, I don't need to know the full route, just be able to route to a system close enough to you that understands "IPv6" - the border of your network.
I stress again that when you connect me from behind NAT, I see your public IPv4 but have no idea how to reach you back, nor can you publish such info. When you contact me over an 6:4 encapsulation, I get your globally unique IPv6 which I can use to connect back to you at a later date - connection works end to end as if we were inside an IPv6 Internet, so it's not fair to call it "NAT".
It's more aptly a NAT64-tunneling hybrid that's transparent, backwards compatible, and can be switched to full v6 operation when critical mass is achieved, by simply publishing 128-bit routes. In hindsight, I am convinced the dual-stack solution is much, much harder to deploy and was the wrong solution.
6to4 works like that. It's largely not been successful, because routing is asymmetric and failures are silent, to the extent that IETF considered deprecating it this year. 6rd exists (a replacement for 6to4 that makes the routing more symmetrical and accountable) and a few people may still be trying to use it, but most have reached the conclusion that dual-stack is easier.
>most have reached the conclusion that dual-stack is easier.
I know about the so-called transition technologies and their dubious record; however, the original context was "an alternative protocol backwards compatible with IPv4", not making IPv6 work on the IPv4 Internet. The former is possible (x) and was preferable circa 1996, while the latter are afterthoughts bound to fail: you obviously can't route true IPv6 on IPv4 hardware without upgrading it, and at that point, why not go dual-stack?
Unlike a backwards compatible packet format, transition technologies require gateways to the "new" IPv6 Internet, which are checkpoints by definition and need configuration, resources and attack surface, amplifying the chicken-and-egg problem - why deploy gateways to IPv6 when there's nothing there? Add the flakiness of the transition technology itself; for example, Teredo clients can't even reach one another 25% of the time due to symmetric NATs.
(x) brain fart time:
If I were to redesign IPv6 today in a backwards compatible manner, I would treat the existing NAT hierarchy as a set of different routing realms, and upgrade only these points to do a stateless forwarding of the extended packets, while keeping their existing behavior for regular IPv4 traffic.
For example, to reach host 10.10.10.10 sitting behind NAT 2.2.2.2, an IPv4 packet is prepared with an extension field:
IP Header: DEST: 2.2.2.2
Extended Header: DEST: 10.10.10.10, REALM=0
(src omitted etc.)
The IPv4 infrastructure will forward the packet blindly while the upgraded NAT/router will recognize the extended format and instead of dropping it will mangle it and forward it along:
IP Header: DEST: 10.10.10.10
Extended Header: DEST: 2.2.2.2, REALM=1
The destination stack will establish a connection if it recognizes the packet format, or drop it if it's a legacy IPv4 implementation.
You can repeat this for a bunch of chained NATs, incrementing the realm and swapping more routing bits from the extended header into the IP header, while reversing the mangling when extended packets exit from the inner routing realms. In a 128-bit address you could do this 4 times, sufficient even when dealing with something like "NAT44444".
So you can keep end-to-end stateless routability as long as all NATs are cooperating, and fall back to regular IPv4 over NAT when they are not. In time, NATs will transform into routers, simply mangling and forwarding extended packets and doing less and less state tracking for legacy packets, until eventually tracking can be turned off.
The idea is to upgrade only key edge infrastructure (NATs), to do a very simple and stateless transformation, and not the whole Internet with a risky and complex technology. NAT clients are incentivised to upgrade (for p2p, voip etc.) without finding themselves in an Internet ghetto with high latency and routing blackholes. It's the same Internet with the same routes. In time, ISPs can release some of the massive SOHO IP space that's no longer justified.
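The mangling step in this scheme is simple enough to model in a few lines. This is a toy sketch of the commenter's hypothetical proposal (no such protocol is deployed), treating a packet as (outer destination, extended-header destination, realm):

```python
# Toy model: a packet carries an outer IPv4 destination plus an extension
# field holding the inner destination and a realm counter.
def nat_forward(pkt):
    """What an upgraded NAT does when forwarding an extended packet inward:
    swap the inner destination into the outer header and bump the realm."""
    outer, inner, realm = pkt
    return (inner, outer, realm + 1)

def nat_exit(pkt):
    """Reverse the mangling when an extended packet leaves the inner realm."""
    outer, inner, realm = pkt
    return (inner, outer, realm - 1)

# Packet addressed to 10.10.10.10 sitting behind the NAT at 2.2.2.2:
pkt = ("2.2.2.2", "10.10.10.10", 0)
inside = nat_forward(pkt)
assert inside == ("10.10.10.10", "2.2.2.2", 1)  # as in the example above
assert nat_exit(inside) == pkt                  # mangling is reversible
```

The transformation keeps no per-connection state: each hop only swaps fields and adjusts the realm counter, which is what makes the proposed NATs stateless.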
Maybe not an entirely thought-out idea, but it's a starting point. Or would have been. Despite its troubled childhood, IPv6 is of course superior, allows easy renumbering and real multicast, etc.
IPv4.1 is a thing, and has been supported in the Linux kernel for some years. It basically adds an extra octet to IPv4, giving you 40 bits of address space rather than 32.
AFAIK this hasn't and will probably never make it out of experimental status.
All my devices at home* are dual-stack and so are some of the servers I run. Never had any issues. Browsers, ssh/sftp, nginx, whatever's behind nginx, Exim... no complications beyond adding another listen line to the configuration where appropriate.
Many American ISPs distribute both IPv4 and IPv6 addresses to their users, and almost all mobile devices have some sort of IPv6 connectivity.
With Happy Eyeballs (https://en.wikipedia.org/wiki/Happy_Eyeballs), and the other common transition technologies, IPv6 is largely transparent to most users and works quite well.
Here in Germany (Telekom as ISP) IPv6 is now provided as standard dual-stack. Your router/DSL modem gets its own IPv6 address plus a prefix to distribute addresses to the devices behind it. This is working surprisingly well; my last connection to the Cloudflare control panel resulted in a warning:
Hi,
Your CloudFlare account ... was logged in from an IP address we didn't recognize ...
following IP address: 2003:72:4f0e:3600:xxxx:xxxx:xxxx:xxxx.
At first I had a kind of "is it a bug?" reaction, because I did not catch that it was an IPv6 address at first read. I was expecting an IPv4 address.
For that matter, the connection is reliable with no particular latency problems. I cannot distinguish between a site on IPv4 or IPv6 from an end-user-experience point of view.
It's not commercial interest as much as very high cost.
It's a huge amount of work to convert to IPv6, which includes:
- almost all networking hardware in facilities, PoPs, switching centers, and on customer premises (basically every device in every home)
- almost all related software stacks like kernels, network control stacks, etc.
- redesign and planning of addressing topologies, ranges, configuration, etc.
- updates to all support systems: inventory databases, monitoring systems, provisioning systems, security systems, etc.
- updates to tooling, documentation, UIs used by engineers, customer service, field techs.
As you can see, it's a shit load of work for most large organizations and service providers.
I'm a Time Warner customer in North Carolina and have IPv6 service. It works well, but the IPv6 networks aren't statically assigned, so I can't reach my devices at home reliably.
Does TW use DHCPv6-PD to allocate networks to customer sites? If it does, are you sure that your router is presenting the same DUID to TW's DHCPv6 servers?
IME, as long as you present the same DUID to Comcast's DHCPv6 servers, and your equipment renews its DHCP lease before the expiry time, you retain the same IPv6 prefix.
I upgraded my parents' modem 1-2 years ago, soon after they implemented their bs modem rental fee. It seemed like TW was pushing DOCSIS 3 modems pretty hard. Every time I visit and check my Gmail, I see IPv6 addresses in the activity log. I'm kind of jealous, since my FiOS connection doesn't have IPv6.
> Is there a commercial interest for different companies to keep IPv4?
Aside from the short-term practical issues, you also have a potentially huge long-term advantage: you can stop all these pesky new disrupting services from causing you to lose money once they can't get a v4 address any more and all your subscribers are still only on v4.
Wide v6 deployment is a huge factor in helping us to keep the internet a free place where everyone can participate. At least in the long term.
Sadly, there is: IPv6 requires retraining your staff, and running dual-stack increases the testing and administration workload.
IPv6 also brings the additional requirement for ISPs that they can't allocate a single outside address anymore, so no way to enforce one-machine-per-residence limitations (I wouldn't expect an ISP to try this in 2016, but it was still common five years ago).
> so no way to enforce one-machine-per-residence limitations...
Eh?
Have you ever seen an ISP that did this and wasn't foiled by attaching a NATting router to the ISP-provided bridge device? [0] (If the ISP is providing the router, then it's trivial to only allow traffic from either the first MAC address you see, or the MAC address configured by the customer during some setup procedure.)
Of course it's technically trivial. But that didn't stop PHBs from inserting such language in consumer contracts, so why should we assume that the same PHBs would happily embrace IPv6, since "with IPv6, we must provide every one of our customers with 2^80 addresses"?
I've been developing web applications for a relatively long time now. For maybe the last 10 years, I stored my IP addresses in databases etc. in IPv6-aware types, just in case we had an IPv6 deployment in the future. Better to be ready.
Last month, I was working on something that needed to store IP logs. I packed them into 32-bit unsigned integers. Not a well-thought-out decision, but I think I don't care anymore; I just gave up.
Anyone doing the opposite of that? Maybe I was naive to jump on the train that early and these are better times. Felt like that 5 years ago too though.
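One common compromise (a sketch of the idea, not the commenter's code): store every address in its 16-byte packed form, embedding IPv4 as an IPv4-mapped IPv6 address, so a single fixed-width column handles both families:

```python
import ipaddress

def pack_ip(text: str) -> bytes:
    """Store any address as 16 bytes: IPv6 natively, IPv4 as ::ffff:a.b.c.d."""
    addr = ipaddress.ip_address(text)
    if addr.version == 4:
        addr = ipaddress.ip_address("::ffff:" + text)
    return addr.packed

def unpack_ip(blob: bytes):
    """Load 16 bytes back, unwrapping IPv4-mapped addresses to plain IPv4."""
    addr = ipaddress.ip_address(blob)
    return addr.ipv4_mapped or addr

assert len(pack_ip("203.0.113.9")) == 16
assert str(unpack_ip(pack_ip("203.0.113.9"))) == "203.0.113.9"
assert str(unpack_ip(pack_ip("2001:db8::1"))) == "2001:db8::1"
```

A `BINARY(16)` column (or your database's native inet type) then works unchanged whenever IPv6 traffic finally shows up in the logs.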
16% of home subscribers in Romania are connected to the internet via IPv6, most of them without even knowing it.
This is largely thanks to the massive deployment effort led by the country's biggest provider - RDS.
The connection is a dual-stack one. I've been using this for almost two years and it has worked very well. Most large sites support IPv6 without any hiccups.
This is a relevant read on how they did it: http://www.internetsociety.org/deploy360/resources/case-stud...
Right, fair enough, but the things I can do as a consumer are to choose a particular ISP and ensure I'm using an up-to-date OS, network drivers, etc.
I feel like saying there is nothing we can do to speed up the process is giving in a little too early when I've personally seen very little shared with the public that is understandable and helps people push this forward.
If you live in an area where you have a choice of ISP, then at least inquire about whether they support IPv6. Chances are that if you have a choice, factors other than IPv6 will make more of a difference. I.e. I'd have a hard time recommending an IPv6-enabled 6Mbit DSL connection over a 100Mbit cable one without it.
I've certainly kept ISP choices in mind when choosing which community or neighborhood to live in. Ironically I find myself looking for places with Comcast because they've had solid IPv6 support for years.
If you are involved in a decision-making capacity with a company that offers services over the internet, try to pick equipment, ISPs and SaaS vendors that use IPv6 where possible. With a few notable exceptions (rrrrrrr AWS) it's not particularly difficult anymore. If you are at trade shows, quiz vendors about their IPv6 support.
Lots of ISPs have gotten on board recently at the cores of their networks, but CPE equipment is often old (DOCSIS 2 modems, WRT54G bases). This holiday when I was visiting family I took a few minutes to swap out their ancient router for one that supports IPv6. As a result they'll also get faster and more reliable WiFi now, so it's often an easy sell for other reasons.
Pretty much. I use an oldish laptop with some cheap TP-Link router at home, and suddenly I discovered I was connecting to my VPS over IPv6. All my components had been ready for years, waiting for ISP support (which finally came!).
Good point. I don't understand it very well, and haven't really spent a lot of time trying. When I need to get something going, I have the option of doing IPv4 only and being confident I've done it well. Or, also include a half-baked IPv6 solution that I don't fully understand and could have security holes and other problems. Since there's very little consequence for not implementing IPv6, it's hard to justify take that risk.
Unless you're working on a really strange OS, for normal client software, v6 doesn't work any differently than v4. It's IP with a larger address space.
It's only when you get down to the level where you're doing things like acquiring an address [0] that you need to worry about the differences.
[0] Or -I guess- doing multi/broadcast. IPv6 substantially simplified broadcast. Send your packet to ff02::1 [1] to get the same functionality as IPv4 broadcast.
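To illustrate the "no difference for normal clients" point: idiomatic client code resolves a name and tries whatever address families come back, so it works over v6 or v4 with no version-specific branches. A sketch with Python's stdlib socket module:

```python
import socket

def connect(host, port):
    """Connect to host:port, trying each resolved address in order,
    IPv6 or IPv4 alike. Returns a connected socket."""
    last_err = None
    for family, type_, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, type_, proto)
            s.connect(addr)
            return s
        except OSError as e:
            last_err = e
    raise last_err or OSError("no addresses for %r" % host)
```

Nothing in the caller's code needs to know which protocol version was used; the resolver's answer decides.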
This is a great comment and the kind of content that I think should be put in a single "fact sheet". I might give it a try and see if people find it useful... I have a 13-hour plane ride coming up, sounds like fun :(
> This is ... the kind of content that i think should be put in a single "fact sheet".
Things like that have been done before. [0] It seems to be difficult to get good Google juice. Perhaps your writeup will be the one that cuts through all of the SEOed clickbait and is the go-to guide for good IPv6 info. :)
However, don't let me discourage you from taking on the task! It's sure to be edifying... or -at least- something productive to do to while away that obscenely long plane ride.
Wow this is a great resource you've linked to, thanks! It's pretty long and probably a touch more than most people want/need to read, but it is very complete.
Unrelated: yeah, Sydney to Toronto, Canada (via SFO). Terrible flight.
If you're setting up a network that might one day be connected to another network, or used via a VPN by users working from home, it's well worth using publicly routable addresses for it. It just makes all the routing setup so much easier.
If you have enough publicly routed IPv4 addresses to do that with IPv4 then knock yourself out, but IPv6 ones are much cheaper.
I'm pretty sure IPv6 adoption will follow the standard S-shaped adoption curve [1]. If true, that sucks, because it means that the last 10% will take approximately as long as the first 10%.
Intuitively, I feel like that could in fact be the case. The last 10% to switch will probably be small ISPs without resources/knowledge, old devices that don't support IPv6, and old servers/webapps that haven't been updated in years.
There are already carriers that are looking at taking IPv4 in at the edges, encapsulating it in IPv6 and then decapping it before handing it off to their customer.
Thereby the core network can be entirely IPv6 without any legacy IPv4.
Well off the bat you can already rule out the idea that you'll have 100% IPv6 adoption in the next 20 - 30 years. There are simply too many embedded network appliances that are IPv4.
Do you run a website? Perhaps help to administer a website? Given you're on this site, it's not unlikely you do.
Well, does your site work over IPv6? If yes, great! If you don't know, assume it doesn't. You may need to add just a single line to your config file (e.g. `listen [::]:80;` for nginx). Go and do it. Beware that a virtual server you're using may not even have IPv6 networking enabled by default.
I finally decided to bring my sites into the modern age last month, and went and modified all my config files so I could use IPv6, TLS and PHP 7. Was worth it, I think.
Oh for sure. Don't just flip the switch tomorrow on your large site with a team behind it. Even if it's a personal site, test it.
But at least try it out, see if it works, see what you need to do to get it working. Too many people stick with the defaults and lock themselves out from the IPv6 world.
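A quick way to check the "does my site work over IPv6?" question above is to see whether the name resolves to any IPv6 address at all. A Python sketch (the helper is illustrative, not from the thread; resolving an AAAA record is necessary but not sufficient - the server must also actually listen on that address):

```python
import socket

def ipv6_addresses(hostname, port=443):
    """Return the IPv6 addresses a name resolves to, or [] if none."""
    try:
        infos = socket.getaddrinfo(hostname, port, family=socket.AF_INET6)
        return [info[4][0] for info in infos]
    except socket.gaierror:
        # No AAAA records (or the name doesn't resolve at all).
        return []
```

If this returns an empty list for your site, you have no AAAA record published; if it returns addresses, the next step is actually connecting to one of them.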
So you're telling me that I have to adapt all of my codebase to work for ipv6 (storing addresses, comparing ranges, sockpuppet detection, throttling, ...) for absolutely no reason?
Yeah good luck with that, it will work as well as the python2 to 3 transition worked.
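For what it's worth, the address-storage and range-comparison work mentioned above is less painful than it sounds in languages with a decent address library. A sketch using Python's standard `ipaddress` module (the `throttle_key` helper and the per-/64 throttling policy are illustrative assumptions, not anything from the thread):

```python
import ipaddress

# One API handles both families, so range checks don't need
# separate v4/v6 code paths: version mismatches simply compare False.
def in_blocklist(addr, blocked_nets):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in blocked_nets)

blocked = [ipaddress.ip_network("203.0.113.0/24"),
           ipaddress.ip_network("2001:db8::/32")]

print(in_blocklist("203.0.113.7", blocked))   # True
print(in_blocklist("2001:db8::1", blocked))   # True
print(in_blocklist("198.51.100.1", blocked))  # False

# For throttling/sockpuppet detection, a common approach is to key
# IPv6 clients by their /64 rather than the full address, since one
# user typically controls an entire /64.
def throttle_key(addr):
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return ipaddress.ip_network((addr, 64), strict=False)
    return ip

print(throttle_key("2001:db8::1"))  # 2001:db8::/64
```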
Since it's based on Google metrics - what are the stats like for countries where Google is not the most popular search engine? E.g. China, Russia, and Japan.
Is it really based on Google metrics? Google has been blocked by the Great Firewall on IPv4 networks since last year, but is still reachable via IPv6. So a Chinese who accesses Google either does it with a foreign IP (VPN or proxy), or with an IPv6 address within China. That would put the statistics close to 100% IPv6 usage.
tunnelbroker.net, operated by Hurricane Electric (he.net)
^Register and set up IPv6-over-IPv4 tunneling. They can give you a whole /48 if you want to play with your network. Considering you just want to test other sites, all you need is IPv6 connectivity, so the default settings they provide are enough. I recall they had decent docs on how to set things up for all major OSes.
There were a few gotchas:
* you need to make sure "ip proto 41" is not filtered by your firewall, or you won't be able to set up the tunnel
* in the web interface, make sure you provide your public IPv4, and not your NATed address.
In addition to explicit tunnels, you can use a Teredo tunnel. [0] Miredo is the Teredo implementation that everyone uses on Linux. The advantage is that Teredo automagically configures itself. Some of the disadvantages are that it can be really slow or otherwise unreliable, and sometimes slow to get configured.
Having said that, I run Miredo wherever I go so that I can reach my v6 hosts back home, regardless of whether or not the network I'm currently attached to has v6 service.
I bet most of those users are on mobile networks. The need is very apparent: every mobile device needs to have internet access, and there's a lot of them with more being activated every day. Mobile providers don't have enough IPv4 addresses to support that and carrier-grade NAT is even more messy than IPv6.
Most residential ISPs that support IPv6 make you go through hoops to enable it.
Isn't there a consequence to letting each "subnet" (and I'm not sure at what level this is defined... ISP?) have its own 64-bit prefix that every client uses, and then sharing 64-bit space underneath it?
Can someone help me understand the consequences of this, and how it ends up limiting the usable address space to some fraction of what 128bit would imply?
The /64 boundary is the standard boundary for each LAN. An "end site" (that is, a customer of an ISP) was originally intended to receive one /48 each, so enough space for 65536 /64 LANs.
You have to forget the idea that addresses are scarce. They were with IPv4, with IPv6, they are not. The whole point of the design is to make sure they are not scarce. Never, ever, anywhere. Conserving address space is not a virtue. Making sure that everyone has addresses available whenever they need some, with minimal delay and minimal administrative overhead, and with little fragmentation in routing tables, that is the goal of IPv6. And there is no real risk that this will lead to address exhaustion, there is enough space for about 40000 /48 per person currently alive.
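A quick sanity check of that last figure (counting the whole 128-bit space, and using a rough 7.4 billion world population as an assumption):

```python
# The IPv6 space contains 2**48 possible /48 prefixes
# (the first 48 of 128 bits select the prefix).
total_48s = 2 ** 48
population = 7.4e9  # rough world population

print(total_48s / population)  # roughly 38,000 /48s per person
```

Even restricting to the 2000::/3 block currently set aside for global unicast, that still leaves thousands of /48s per person.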
I'm still not convinced by the "NAT is Bad" arguments that people use when it comes to IPv6.
Having a public address for a device in my house seems like a potential security problem - it's leaking information about my network to the outside world. And it's a dream come true for traffic-tracking.
So, you're mostly concerned about the security aspect?
The security you get from NAT is accidental (but still useful.) The problem is that it is an ugly hack and greatly restricts the types of applications that you can build on (and for) the Internet.
A good IPv6-capable home router with sane defaults can give you the same level of security as NAT, with much less complexity. You also don't need special NAT logic for applications that aren't simple request-response TCP sessions.
> it's leaking information about my network to the outside world
With sane defaults the only thing leaked is the IP address, which is already public (same as IPv4 with NAT.) Nothing about your internal network needs to be leaked.
In addition, you get a few extra benefits:
- With a /64 per network, it is almost impossible for port scanners to find you (inside a search space of 10^19 addresses.)
- The privacy extensions (which are supported by almost all operating systems) protect you from IP-address-based tracking by assigning you a temporary address which can change frequently, and using that for external communications. This, coupled with ISP-provided dynamic addresses, can actually give you better privacy than IPv4.
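Rough arithmetic behind the port-scanning point above (the probe rate of one million addresses per second is an arbitrary assumption for illustration):

```python
# A /64 leaves 64 bits of host space to enumerate.
addresses = 2 ** 64                      # about 1.8e19
probes_per_second = 1_000_000            # assumed scan rate
seconds_per_year = 3600 * 24 * 365

years = addresses / probes_per_second / seconds_per_year
print(f"{years:,.0f} years")             # roughly 585,000 years
```

Compare that with a full IPv4 scan of all 2^32 addresses, which takes well under two hours at the same rate.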
> With sane defaults the only thing leaked is the IP address, which is already public (same as IPv4 with NAT.)
Compared to v4 NAT though, this will leak an address per device behind the firewall whereas before all these devices would not have been distinguishable.
This is why devices started to create a whole lot of temporary v6 addresses, but this causes problems for some stateful firewalls as networks grow and the amount of addresses explodes.
> Compared to v4 NAT though, this will leak an address per device behind the firewall whereas before all these devices would not have been distinguishable.
... just that they aren't really indistinguishable anyway. From cookies to TLS session cookies or session identifiers to specific TCP stack behaviour, it's usually quite easy to distinguish devices behind a NAT.
> This is why devices started to create a whole lot of temporary v6 addresses, but this causes problems for some stateful firewalls as networks grow and the amount of addresses explodes.
No, a stateful firewall keeps state per connection, not per address; the number of distinct addresses is of no consequence.
That blog post was referencing issues with FIB/TCAM/multicast state table exhaustion on switches with a small entry limit (3000), brought on by nodes with lots of IPv6 addresses, each with their own multicast group. All of the protocols involved in this are mostly contained within the same layer 2 domain.
A "stateful firewall" is generally a firewall that is tracking TCP and UDP packets (i.e L4 and above), so should be unconcerned with anything operating below this layer[1]. A TCP connection is a TCP connection whether running over IPv4 or IPv6.
[1]: Things become more convoluted when you have "switches" providing L3 or above functionality, but the problem referenced really is at the switching layer.
Think about a bigger picture. Before NAT we used to have a symmetrical internet. Everyone could consume content from other servers. Everyone by definition was a server and could produce content for others.
From a philosophical point of view this is extremely important. For example email used to be properly distributed. Nowadays the internet services are concentrated. For example gmail receives a big share of all emails. Amazon gets a big share of all http traffic, and so on.
In this centralised world the "client can be a server" attitude is in practice less important. NAT's are ubiquitous. Last-mile carriers don't give static IP addresses, etc.
But still, I would be much happier if the internet was truly decentralized. Even if it's not possible to decentralize all services, the internet, the addressing scheme and infrastructure just _must_ be symmetrical. Everyone connected to the net should be able to "publish" content without the need for anyones approval or infrastructure.
IPv6 addressing scheme, the removal of NAT's, brings back the democratization of the internet.
I guess with IoT, we may see more client-to-client connections. You will want to remotely connect to your baby monitor/nanny cam, see who's ringing your doorbell, change the temperature of your home, give an electric shock to your dog because the neighbour called to complain about the barking, etc., all while you are away. You can do all that with a client-server architecture, but a direct connection feels more natural.
I don't see your problem. If you don't want your TV to be a server that's accessible from the internet then don't open any ports on your router's firewall to your TV. NAT routers accidentally acted as firewalls (and newer routers also acted as proper firewalls), but that doesn't mean we're going to ditch the firewall on the router now that we have IPv6.
The question is not whether your mum wants to be a server, but whether she would like to use services that are only possible or are easier/cheaper to implement technically using a "server" on her machine/network.
People also don't "want a natural gas burner". People want that their home is warm and they prefer not having to shovel coal and they prefer the general lower costs of natural gas heating vs. electrical heating. So they end up buying natural gas heating systems. Not because they somehow like natural gas burners, but because that happens to be the technology that provides the comfort they like at a price they like.
And if you don't want your TV to be a server, you most likely don't want it connected to the internet at all. It not being a server doesn't prevent it from spying on you, or from being vulnerable to outside attackers.
Google and Amazon are not leaders in their fields because of NAT creating some sort of unbalanced playing field. That is just an incredibly fallacious argument.
Not only are those security problems non-existent - the problems caused by NAT are massive. You might not notice so much because services that don't work well with NAT essentially simply don't exist much nowadays, so it's difficult to notice them not working if you aren't aware what they would even look like. Though there are some obvious ones like remote access to your machines at home. Without NAT, you simply have to open the ports in your firewall and immediately, you can connect without any port mapping or stuff, you simply can give your machines at home DNS names and ssh into them.
Also, a major problem is not just the NAT itself, but the private addresses NAT tends to come with: If you have to somehow connect two networks that use the same private address space, things become incredibly hard to manage.
Also, NAT introduces major asymmetries in routing. Have you ever tried to connect, from inside the local network, to a service that's port-forwarded to an internal machine on your NAT gateway? Either it fails, because pure DNAT means the reply packets reach you with the wrong source address (they don't pass through the NAT on the reverse path), so you suddenly need different client-side configuration depending on where you are when you use a service. Or you have a device that does hairpin NAT, in which case the service doesn't see your actual IP address but the address of the NAT gateway instead. That can give you confusing debug output/logging and possibly non-working access control, and it also limits your bandwidth to the speed of the NAT gateway, since all traffic has to pass through it to have its addresses rewritten - even though both endpoints are on the same LAN and might be able to transfer data at gigabit speed if no NAT were involved.
There are possible operational security gains that can be permitted by ipv6, if operations groups permit...
For example, because of NAT and dynamic ip addrs allocation, my VOIP provider pretty much has to accept any ipv4 address in the world. My ISP's allocations are numerous and non-contiguous and my RFC1918 LAN address is meaningless from a security perspective. So anyone on the planet can log into my VOIP provider as if they're me, if they obtain my login info. That's kind of dumb.
However, with ipv6, with significant operational changes, it would be trivial to set up my account and firewall such that my VOIP device only talks to one specific provider's /128, and my provider's account and firewall will only talk to my one very specific /128. Yeah, yeah, there are load balancing and device replacement issues such that a sane provider would operate on /64-sized networks both at their data center and my house, not individual devices. But it would be immensely more secure than ipv4, anyway.
There are other operational advantages. Currently, with ipv4, companies usually don't have "an allocation"; they have many - dozens of little ranges, a /27 here and a /28 there, accumulated as they added devices. In the ipv6 world of the future, companies will get a single /56 or maybe a /48 and call it good. So if you get DDOSed or attacked or whatever, there's exactly one address range to block, not 50 little ones depending on random DHCP address assignments. It'll be a much faster and simpler world. Doing egress filtering? There'll be one block to permit, that's all. Just one rule, not 75.
Your post is the kind of thing that seems intuitively to be the case, but is actually not in any practical setups.
Unless you hardcode your IPv6 addresses, they will be generated automatically and change quite frequently. So the addresses become pretty meaningless and all you have to identify your devices really is the prefix of your network - pretty much the same amount of information as a network behind a NAT'ed IPv4 address.
In fact, as far as I remember you can actually statically assign your machines easy to remember addresses if you want but still have the machines generate temporary addresses that they use for outgoing connections, which is cool.
As for NAT - it basically gives you no extra security over a basic firewall, and just makes it really difficult to do a lot of things, such as serve content from two machines on your network on the same port. It's not the worst thing in the world, but it is a fairly sub-optimal hack and something I would really love to do away with on my networks.
> Unless you hardcode your IPv6 addresses, they will be generated automatically and change quite frequently.
If you're talking about "Privacy Addresses" [0], then your statement isn't always true. If you're not using "Privacy Addresses", [1] your SLAAC or DHCPv6-assigned address remains quite stable, unless your upstream provider changes the prefix that they've assigned to your network.
> In fact, as far as I remember you can actually statically assign your machines easy to remember addresses...
You don't even have to assign addresses. SLAAC automatically creates stable addresses. "Privacy Addressing" creates random, temporary addresses. You can use both at the same time.
[1] And I strongly recommend that you don't. For folks who are concerned about being able to be tracked because they have a stable last 64 on every network they attach to, there are alternative methods for generating the ID used to create the SLAAC address that mix in the network prefix. [2][3] This means that -for a particular network-, you get the same last 64 every time you connect, but your last 64 is never the same on any two networks.
[3] On linux, dhcpcd has the "slaac private" option which enables RFC7217 address generation. "slaac hwaddr" uses the traditional, MAC address based address generation. "slaac private" (or its equivalent in other daemons) is the default setting of every Linux distro I've used in the past 6->12 months. :)
I quite like the idea that some of my machines, which are running experimental code or aren't fully secure (or devices like my tv) are not even addressable from the outside but can retrieve stuff if they need to. It seems to make sense there.
I guess as/when the transition gets more widespread we'll see if NAT really was giving us any sort of security for free.
If your router is only doing NAT, they are addressable.
A packet sent to a NAT-only router destined for 10.1.2.3 will simply be routed. It may have further address rewriting applied depending on the type of NAT, but that doesn't really matter - the packet will still go through. This could be accomplished with source routing, or an attack could simply originate one hop upstream. The return packets are probably[1] unroutable, but that doesn't stop someone from simply guessing[2] your internal addresses and forging packets.
If you want to drop some class of packet - like any incoming packet destined for a local-only address - then you want a firewall, not a NAT. One drops packets, the other merely rewrites addresses. This distinction is often missed because a pure NAT-only device is rare.
> aren't fully secure
Given how easy it is to generate packets on a supposedly-"internal" network, if a device isn't secure you probably shouldn't even connect it to the network. Border-only security may not be the worst[3] idea in network security, but it's still a bad idea that should never be relied upon.
> for free
NAT is easily the most damaging and costly problem in the entire history of the internet. It has turned a network of equal peers into a centrally controlled, cable-TV-like feudalism. What is the cost of the losing the ability to publish without the approval of a 3rd party? What kind of price tag do you put on the entire industry of network software that was never developed because it wouldn't work behind NAT?
[1] your ISP hopefully drops any packets from any address other than the IP they assigned you
[2] sending to 10.0.[0-1].[1-5] or 192.168.[0-2].[1-5] probably works often enough
If NAT was giving us any security at all (probably not, if you keep the stateful packet filter), it most certainly didn't do so for free. Working around the effects of NAT is a major headache that causes a lot of otherwise unnecessary work.
The prefix of your address may still identify your household but there isn't much tracking one can do with it, as the household may be a hotel or a coffee shop, with lots of different users. And even within the family, tracking is only useful if you can tell the difference between grandma, mum, dad and the teenagers (or even the dog who will surely soon have its own IP!). Very different targets in term of advertising.
Network topology isn't really secret under NAT - there are plenty of ways to fingerprint devices based on the packets they send.
I prefer to see my home network as just part of the internet - that way I can let guests use it without compromising my security, and I don't have to worry about vulnerabilities in my router's firmware or physical access to my network cables. IPv6 makes that very easy, because my home network really is just part of the internet.
I do not prefer my home network to be part of the internet. I have insecure devices, things I'm developing or testing, things that third parties have developed. Having my network as a non-addressable space with outbound-only connection forming is a positive for me and I suspect many people.
I hope that as IPv6 takes hold this continues to be the default setup.
Non-addressability does very little for security by itself. E.g. some routers will still route packets from outside that are sent with an internal address as the destination. Make sure you have actual firewalling in place if you're relying on that, and remember that if you're trusting any device connected to your internal network, you're, well, trusting any device connected to your internal network.
> Having my network as a non-addressable space with outbound-only connection forming is a positive for me and I suspect many people.
Most of all, it's probably an illusion. Your web browser has access to your local network, and you probably allow just about any software into your web browser. Yeah, sure, same origin policy and all that. But that doesn't help much with insecure devices or generally crappy software.
A stateful packet filter at the boundary to the public internet still would be a good idea, just without the NAT (which mostly causes complexity, which, if it has any security effect at all, makes it much harder to reason about the correctness of your firewall).
IP stacks have been fine for many years. There was some problems years back where some ISP's networks would have really crap internal IPv6 connectivity, so IPv4 routes had much lower latency than IPv6 - but that was mainly bad network design and operation.
All major operating systems have what's called 'happy eyeballs' algorithms to pick the best path for each connection. Apple updated theirs last year and reported that with most networks they tried, IPv6 was selected as the preferred option ~99% of the time it was available[1].
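Python's asyncio exposes Happy Eyeballs directly, which gives a feel for how the algorithm works; a sketch (the 0.25 s value is RFC 8305's recommended connection attempt delay):

```python
import asyncio

async def fetch_peer(host, port):
    # With happy_eyeballs_delay set, asyncio staggers connection
    # attempts per RFC 8305: it tries the preferred family (usually
    # IPv6) first and starts the fallback attempt if the first one
    # hasn't connected within the delay. Whichever wins is used.
    reader, writer = await asyncio.open_connection(
        host, port, happy_eyeballs_delay=0.25)
    peer = writer.get_extra_info("peername")
    writer.close()
    await writer.wait_closed()
    return peer
```

On a dual-stack network with working v6, this will nearly always end up on the IPv6 path, matching the ~99% figure Apple reported.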
Apparently many vendors do not yet support RFC 6724, or support it poorly with regard to ULA addresses. As such, I get mixed results with devices that have both a ULA and a global address, with the device trying to use the ULA address (instead of the global address) for global network egress.
There are still bugs in network vendor hardware. That said, you shouldn't let that stop you from deploying IPv6[0]. It's mainly little things like you can't address a serial link as /127 (Hello HP). You should allocate a /64, but address it /127.
There's a major bug on A10 load balancers where the ICMP "too big" message is ignored for virtual servers. There is currently no fix. If you run A10, you can't run IPv6.
Wonder if anyone has made a study of the slow IPv6 adoption as a market failure. The underbelly of the network effect comes to mind (first-mover ISPs get no advantage/payoff even though the change is sorely needed for the system as a whole).
Many have, as it's a staple assignment for technology management courses. Not sure if many have been published though.
Partly, IPv6 is currently a solution looking for a problem. Its killer features, the larger address space and IPSEC, have been solved in different ways: IPSEC was retrofitted onto IPv4, and NAT greatly surpassed its design intent - it was intended as a stopgap measure only.
The other benefits of IPv6 are less critical, or less obvious. The increased direct address space enables much easier p2p solutions, but because of IPv4 we now have an Internet that is mostly client-server based. It is impossible to quantify what "could've been" without NAT. Automatic addressing and peer discovery have been mostly reimplemented, as easy-to-use DHCP servers in all consumer modems, and as UPnP/DLNA/APIPA for media devices.
Finally, it took until Windows Vista for ubiquitous consumer-side IPv6 to appear (yes, *BSD and Linux were faster). Until that time, there was zero market incentive for any kind of IPv6 deployment.
> It is impossible to quantify what "could've been" without NAT.
It is, because I have lived with multiple levels of NAT before and without NAT (with both public IPv4 and IPv6 addresses) nowadays. The ability to do P2P is truly amazing. Now I don't have to wait with the abysmal download speed when the server is highly congested. Also amazing is the capability to ssh into machines without any configuration or begging the network administrators who are nowhere to be found.
No argument there. But that's not really a quantifiable statement either. In any case, that's not what I meant: I was referring to the different business and distributed computing models we might have had, instead of the heavily centralized models we have now.
Maybe we would have had technologies like bitcoin ten years earlier? Maybe IRC would have gained a P2P component, and Twitter would never have happened? That's the kind of things that "could've been" I was thinking of.
Did you read the article? Every router... routes, so every router needs to speak the new protocol. If you tried to send packets in an IPv4-compatible way, what would you put for their addresses? They'd end up in nondeterministic routing loops which would be terrible to debug.
I'm asking why they created IPv6 instead of a version of IPv4 with an expanded address space (an "IPv4 version 2"). Why did they not make an incremental change instead of a whole new thing?
It's a distinction without a difference. Any protocol with addresses larger than 32 bits will necessarily be incompatible with IPv4, so what does it matter whether there's a "4" or a "6" in the name?
IPv6 seems to be a whole lot more complicated than just upping the number of bits used for addresses. It's like they were trying to solve a bunch of things in addition to the address range.
Per the article it mostly has the same headers. What other things are you thinking of? Stateless autoconfig is complex but IPv4 has that option too. Privacy extensions are complex but they are extensions, not the core spec.
Develop the nanotech necessary for us to do upgrades of ASIC chips in routers, and ... it still wouldn't make a difference. You can't negotiate with etched silicon.
We did just expand the address space, is my point. The entire IPv4 range is mapped into IPv6 on a prefix - which is what you're suggesting. IPv4 can be represented in IPv6 as ::ffff:192.0.2.128 for example.
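That mapping is easy to see with Python's `ipaddress` module (illustrative only - as the thread notes, the mapping doesn't make existing IPv4-only routers forward IPv6 packets):

```python
import ipaddress

# Every IPv4 address has a fixed representation inside IPv6,
# under the IPv4-mapped prefix ::ffff:0:0/96.
v6 = ipaddress.IPv6Address("::ffff:192.0.2.128")

print(v6.ipv4_mapped)                               # 192.0.2.128
print(v6 in ipaddress.ip_network("::ffff:0:0/96"))  # True
```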
My whole point is that it's irrelevant what we did or didn't do, because hardware routers are not software-reconfigurable. Whether you expand the address space a little or a lot is irrelevant, because it will cost exactly the same to replace that hardware - in which case, if you have to expand the address space, you should expand it by the most you possibly can in order to ensure you never have to replace those routers again.
You can't just link to a long article and claim it shows things are different. What specific differences are you bothered about?
> The justification of why NAT is no longer necessary is nice in a theoretical sense but a pain in the butt if I'm trying to convert my network.
Having used IPv4 networks that used all publicly routeable addresses and ones that did not, the former have always been much nicer, even at a purely practical level of "I want to connect to my work VPN and have it work".
> You can't just link to a long article and claim it shows things are different. What specific differences are you bothered about?
It is a long article listing all the differences - my original question was why did they decide to make all those changes instead of just expanding the address space. All of the changes except for the change to the size of the address space bother me.
> Having used IPv4 networks that used all publicly routeable addresses and ones that did not, the former have always been much nicer, even at a purely practical level of "I want to connect to my work VPN and have it work".
Did you setup your work network for IPv6 or did someone else do it?
My original question was about what justified an obvious problematic change that is celebrating 20 years with only a 10% adoption.
> It is a long article listing all the differences - my original question was why did they decide to make all those changes instead of just expanding the address space. All of the changes except for the change to the size of the address space bother me.
What specific changes? Give me your top 3.
> Did you setup your work network for IPv6 or did someone else do it?
My work network isn't set up for IPv6, that's where I had the problem.
> My original question was about what justified an obvious problematic change that is celebrating 20 years with only a 10% adoption.
What makes you think an address-space-only change would have been less "problematic" or more rapidly adopted? Because as far as I can tell there's no significant difference between "just expanding the address space" and IPv6.
That graph with china being red is worrisome.
If you extrapolate the trends, in another 15 years we will have two internets, one for China and one for the rest of the world, and you will just have to pay for any routing between the two.
I'm not a big fan of 128 bit addresses with ipv6, they are too long to read and write. Does anyone know why 64 bits was not good enough? 64 bit addresses aren't just twice as good as 32, but 4 billion times more numerous.
* Only the last 64 bits of the address matter. The first 64 bits are routing information.
* You have total control of the last 64. You can number your hosts however you like, in addition to -or instead of- using SLAAC and/or DHCPv6.
* If you're not using "Privacy Addresses", the last 64 remains stable even if your upstream changes your IPv6 allocation. So, host identification is trivial even if your assigned allocation changes.
Yep. Only the first 64 bits matter. Almost every router out there is doing hardware forwarding in ASIC based on /64 and /128. In between /64 and /128 gets broken down into multiple /128's. That's what I've heard (and read) in a few places. Please correct me if I'm wrong.
For avoidance of confusion, if you're reading an IPv6 address as if it were a word written in the English language (that is, from left to right), the rightmost 64 bits are the ones that identify a host, and the leftmost are the routing information. :)
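The split is easy to see in code; a Python sketch using a documentation-range address:

```python
import ipaddress

# Leftmost 64 bits: routing prefix. Rightmost 64 bits: host
# identifier on its LAN.
addr = ipaddress.IPv6Address("2001:db8:1:2:3:4:5:6")

routing_prefix = ipaddress.ip_network(f"{addr}/64", strict=False)
host_part = int(addr) & ((1 << 64) - 1)  # mask off the top 64 bits

print(routing_prefix)        # 2001:db8:1:2::/64
print(f"{host_part:016x}")   # 0003000400050006
```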
> Router's don't care much about the host stuff...
Sure. But, I was addressing mixmastamyk's assertion:
> I'm not a big fan of 128 bit addresses with ipv6, they are too long to read and write. Does anyone know why 64 bits was not good enough? [0]
If you're okay with writing and reading 64-bit hex strings, [1] the part that matters to most folks is the host identification part, which is 64 bits long. :)
Figures are like bikinis: they show almost all, except the essential.
This is a single source, Google, a stakeholder in IPv6 deployment with an interest in you adopting this technology.
Would anyone else with a critical sense like more information to corroborate these claims?
Like, what percentage of IPv6 traffic do the top 100 ASes exchange?
What are the devices? (mainly Android on 4G networks)
What is the traffic per protocol? (is it only Android phones pushing your secret data to the NSA, or does it have the signature of residential use too?)
Could we have, for TCP/UDP, a distribution graph of connection speeds/failures compared to IPv4?
How many SIP INVITEs get dropped going from v6 to v4, and the other way around?
...
One source of information, with only one metric, does not tell us much more than what Google wants us to know.
My own personal measurements support these claims. I worked on a small academic conference website not too long ago and the percentage of signups from IPv6 addresses approximately matched the traffic % that Google published for that time period. My own website statistics see a higher level of IPv6 traffic than Google shows, though it caters to a more technical crowd so I am not surprised.
10% global IPv6 deployment is not a Google conspiracy. =)
These measurements show that in a large set of 1:1 individual comparisons where the IPv4 and IPv6 paths between the same two dual stack endpoints are compared, the two protocols, as measured by the TCP SYN round trip time, are roughly equivalent. The measurements are within 10ms of each other 60% of the time.
While the connection performance is roughly equivalent once the connection is established, the probability of establishing the connection is not the same. The current connection failure rate for IPv4 connections was seen to be some 0.2% of all connection attempts, while the equivalent connection failure rate for unicast IPv6 is nine times higher, at 1.8% of all connection attempts.
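The SYN round-trip measurement described above can be sketched with plain sockets. This is a toy version that times `connect()` against local listeners so it runs anywhere; the real measurements compare the v4 and v6 paths between the same dual-stack endpoints across the Internet:

```python
import socket
import time

def local_listener(family, host):
    """Self-contained stand-in for a remote dual-stack endpoint."""
    srv = socket.socket(family, socket.SOCK_STREAM)
    srv.bind((host, 0))        # port 0: let the OS pick a free port
    srv.listen(1)
    return srv, srv.getsockname()[1]

def connect_time(family, host, port):
    """Time the TCP three-way handshake, roughly one SYN round trip."""
    s = socket.socket(family, socket.SOCK_STREAM)
    try:
        t0 = time.perf_counter()
        s.connect((host, port))
        return time.perf_counter() - t0
    finally:
        s.close()

srv4, port4 = local_listener(socket.AF_INET, "127.0.0.1")
srv6, port6 = local_listener(socket.AF_INET6, "::1")
rtt4 = connect_time(socket.AF_INET, "127.0.0.1", port4)
rtt6 = connect_time(socket.AF_INET6, "::1", port6)
srv4.close()
srv6.close()
print(f"IPv4 connect: {rtt4 * 1e6:.0f} us, IPv6 connect: {rtt6 * 1e6:.0f} us")
```

The failure-rate comparison is the harder part to reproduce: a refused or timed-out `connect()` on one family but not the other is exactly the 0.2% vs 1.8% gap the quoted measurements describe.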
Does IPv6 speed up connections? Does it fail more or less often?
What is the cost?
A single magical figure is no answer for me.
It is for the same reason that I dislike the GIEC (the IPCC).
Averages also fail to capture the non-linearity of adoption, like uneven distributions.
I have worked in real science; these figures lack everything that would make them serious: a methodology, an abstract, error estimates, the limits of the measurement, a precise title ...
In real life it brings only headaches that are not worth the trouble.
And to those who say memory, I answer that it is easy to make a 32-bit CPU with 64-bit memory addressing.
IPv6 still requires IPv4 to do MPLS.
IPv6 has no de facto standard for autoconfiguration.
IPv4 CPEs are already bloated and hardly work even at a fair level of simplicity.
I hardly know any decent IPv4 sysadmins, even at ISPs.
The IoT will imply a constant connection from your device to the manufacturer while it sits on your internal network... that is a leak to me. I don't want my lightbulb to be able to sniff my mail.
What is IPv6 good for? A lot of CAPEX, concentration in the ISP sector, a lock-in effect... otherwise I don't see it.
No, I am thinking of the fact that code gets bigger, since assets get bigger, which results in slower execution.
Actually, there is a whole domain of applications for which 64-bit code is slower.
Plus, extending the memory address space results in slower memory access. L1, L2, L3 caches are hardly exploitable from Python/Ruby/... (ahhh, using registers is sooo fast).
Which in turn results in more watts per instruction.
Nowadays, the IT industry might be the leading one in its contribution to fossil energy consumption.
Do we need to add coal plants to support the cloud insanity of bloating web pages to 16 MB for applications that are hardly more than 8-operation calculators to tell you how "green" you are?
I have a degree in ASIC design. The modern x86_64 architecture is insane. It has a lot of hidden complex mechanisms built in that are de facto additional CPUs inside the CPU for on-the-fly optimisations (memory+OOP), "security", buses, plug'n'play...
And who pays?
The current IT industry is literally taxing the masses for something they neither asked for nor need.
IT is locking consumers into arbitrary choices that result in higher costs.
By propagating the carries in the ALU? (oh! Intel announced they are going to ship this feature)
By using indirect addressing? (like PAE)
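The first of those two tricks, a wide add built from narrow adds with carry propagation, can be sketched like this. It is a toy Python model standing in for what a 32-bit ALU does in hardware:

```python
MASK32 = (1 << 32) - 1

def add64_on_32bit(a_lo, a_hi, b_lo, b_hi):
    """64-bit addition out of 32-bit pieces: add the low words,
    then propagate the carry into the high-word addition."""
    total = a_lo + b_lo
    lo = total & MASK32
    carry = total >> 32               # 1 if the low add overflowed
    hi = (a_hi + b_hi + carry) & MASK32
    return lo, hi

# (2^32 - 1) + 1 = 2^32: the carry ripples into the high word.
lo, hi = add64_on_32bit(0xFFFFFFFF, 0x0, 0x1, 0x0)
print(f"0x{hi:08X}_{lo:08X}")  # 0x00000001_00000000
```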
Since there are physical boundaries in this world, and since linear addressing is constrained by the speed of light versus the clock frequency of computers, there is a definite optimum we will not easily break through in the coming decades.
Every time we extend the memory address space, we add levels of indirection in the silicon anyway.
c0 (the speed of light in a vacuum) is our first limit. The resources/power available on Earth are our second limit.
And the money available to consumers is the last one.
Do we really have any good reasons to adopt new technologies that do not prove to be more efficient and/or less costly?
You are correct that the hardware can do so without too much overhead.
But I was arguing from the software point of view: segmented memory means no simple pointer arithmetic above the 32-bit limit. That means that software managing datasets above 4GB will always need special-casing, or be written with indirect allocation in all cases.
I don't see a good reason for mandating that kind of complexity in code. If you have a solution for allowing 64-bit linear addressing in code, I'm all for it.
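That special-casing looks roughly like the following hypothetical sketch (tiny segments stand in for 4 GiB ones so it runs anywhere): every access must split a linear index into a (segment, offset) pair instead of doing flat pointer arithmetic.

```python
class SegmentedBuffer:
    """Toy model of addressing a dataset larger than one pointer can
    span: data lives in fixed-size segments, and every read must
    translate a linear index into a (segment, offset) pair."""

    def __init__(self, seg_size):
        self.seg_size = seg_size   # would be 1 << 32 with 32-bit pointers
        self.segments = []

    def append_segment(self, data):
        assert len(data) == self.seg_size
        self.segments.append(data)

    def read(self, linear_index):
        # The special-casing: an extra divmod (an indirection) on every
        # access, instead of plain pointer arithmetic.
        seg, off = divmod(linear_index, self.seg_size)
        return self.segments[seg][off]

buf = SegmentedBuffer(seg_size=4)
buf.append_segment(b"abcd")
buf.append_segment(b"efgh")
print(chr(buf.read(5)))  # "linear address" 5 is segment 1, offset 1: 'f'
```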
[1]: https://cr.yp.to/djbdns/ipv6mess.html