Allow 0.0.0.0/8 as a valid address range (kernel.org)
241 points by cesarb on July 13, 2019 | 124 comments


I could actually see us successfully inventing, and implementing, a multiverse concept for IPv4 to make these 32-bit addresses last another 40 years, as opposed to throwing these non-upgradable, hardcoded v4 devices out. And this is from a pure v6 evangelist. I'm complaining, not being optimistic.


A sort of "multiverse" concept can already be useful for IPv4 private addresses. I have to deal with multiple environments which all internally have the same 10.x.y.z addresses. So, 10.1.2.3 can be two different instances depending on whether I am connecting to environment 1 or environment 2. In practice, the distinction is based on which SSH jumphost I am tunnelling through.


Isn't that pretty much what we're already doing, since almost every device exists behind some kind of NAT?


Also VLANs can create a “multiverse” of IPv4 addresses behind a single NAT.

But since VLANs are a layer 2 concept it means you can in theory use them with IPv6 as well (albeit I’ve never personally tested that theory)


sure, just prepend every address with some more bits to indicate which multiverse it's from. while you are at it you might as well add a lot of bits, say, 96 of them.
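Joking or not, that is essentially what the IPv4-mapped IPv6 space (::ffff:0:0/96) already does: every 32-bit address gets 96 more bits prepended. A minimal sketch using Python's stdlib ipaddress module (the example address is a TEST-NET address, chosen arbitrarily):

```python
import ipaddress

# An IPv4 address with "96 more bits" prepended: the IPv4-mapped
# IPv6 space ::ffff:0:0/96 embeds every 32-bit address verbatim.
v4 = ipaddress.IPv4Address("192.0.2.1")
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))

assert mapped.ipv4_mapped == v4     # round-trips back to the IPv4 address
assert int(mapped) >> 32 == 0xffff  # the prepended ::ffff: prefix
print(mapped)
```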


If they had done just that and only that - taken IPv4 and added more bits - we might all be using IPv6 now. Instead they used the opportunity to cram every feature but the kitchen sink in there, so none of the hardware vendors were interested in implementing it and the backbones were slow to adopt it. So we got mass adoption of NAT instead of mass adoption of IPv6.


They removed features, not added them. The hardware problems had more to do with other things.


Yeah it’s the feature changes that made it problematic.

They removed NAT, which made deployment by laymen difficult. I can’t just plug an IPv6 router behind another router (or 3-4 levels of routers) and expect it to just work. In IPv4, DHCP+NAT handles that just fine. In IPv6 I need to worry about address assignment. I don’t care about P2P connectivity issues - the NAT trade-off of using STUN/TURN techniques works for me.

They replaced ARP, and replaced DHCP with SLAAC, then realised that people like DHCP so added a version of DHCP back. Except it’s still not the same, so it has its own quirks.

Then there’s the difficulty of supporting multiple IPv6 WANs in one router in a useful fashion. SLAAC takes too long for a PC to detect a dead WAN and use the other WAN range. And there’s no ability to do policy based routing (eg prefer YouTube via dsl, prefer VoIP via fibre) without using NAT+ULA.

Don’t get me wrong, there’s a lot of great things about not using NAT, but there’s a lot of real world scenarios where using NAT is the preferred trade-off.

IPv6 originally decided they didn’t want NAT, and tried to force people into their one way of doing things. They just needed to support both, and then IPv6 deployments wouldn’t be so complicated. They added NAT and DHCPv6 far too late in the game. Even Android doesn’t support DHCPv6 yet and it’s 2019!


I don't see how NAT could be removed. It was never added to IPv4, but we use it anyway. The addressing standard gets no say in this.


You are correct that there’s nothing in the core IPv6 spec itself about NAT. However the goal of IPv6 - to make all devices globally reachable - resulted in pretty much everyone not implementing NAT support until way too late in the game. And it resulted in the related specs required to implement NAT not appearing until years, even a decade, after IPv6 was created.

Linux/iptables support didn’t arrive until Linux 3.7 & iptables 1.4.18 [1]. So that’s only from 2013, whereas IPv4 NAT had been possible for well over a decade before that.

Also, to do NAT66 you really need a ULA address space, which wasn’t defined until 2005 [2]. RFC 1918 addresses for IPv4 were set in 1996.

The tooling, support code, supporting RFCs, and UIs for doing IPv6 NAT have been neglected. It’s a halfway house.

1. http://tldp.org/HOWTO/Linux+IPv6-HOWTO/ch18s04.html 2. https://tools.ietf.org/html/rfc4193
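For reference, the ULA layout from RFC 4193 is simple enough to sketch: the fd00::/8 byte plus a 40-bit Global ID gives a site its /48 prefix. The RFC derives the Global ID from a time/EUI-64 hash; plain random bits stand in for that here:

```python
import ipaddress
import secrets

# RFC 4193 ULA layout: 0xfd (fc00::/7 with the L bit set) followed by
# a 40-bit Global ID yields the site's /48 prefix. The RFC suggests a
# time+EUI-64 hash for the Global ID; random bits stand in here.
global_id = secrets.randbits(40)
prefix = ipaddress.IPv6Network((0xfd << 120 | global_id << 80, 48))

assert prefix.subnet_of(ipaddress.IPv6Network("fc00::/7"))
print(prefix)  # e.g. fdxx:xxxx:xxxx::/48
```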


> They removed NAT, which made laymen deployment difficult. I can’t just plug an IPv6 router behind another router (or 3-4 levels of routers) and expect it to just work.

Is this common, plugging consumer routers with NAT several layers deep? I haven’t seen that in the wild. The only time I myself tried it didn’t work for some unknown reason.

My only real gripe with IPv6 is the fact that Duplicate Address Detection seems broken on many Wi-Fi networks (clients for some reason see their own traffic as traffic from another node and trigger DAD, which shuts down IPv6 access). I’ve seen this on routers from multiple vendors and I believe it’s some bug in their broadcast/multicast implementations.


Re consumers, I can’t comment on how common they are, but people will have the ISP router, and then their own router. Ideally they should bridge, but that doesn’t always happen, either because people don’t know they should, or because the ISP router/modem is a piece of junk that doesn’t support bridging or has quirks.

In the commercial/business space it’s more common to see 3 deep. I see it every day. Petroleum in particular often has ISP Router -> Site Firewall/Router -> ServiceProvider Router, because the fuel tank monitoring equipment is behind its own router so the vendor can get remote access/send data back over VPNs they manage.

In retail environments, especially malls and concession stands within department stores, it’s common to be plugged into their network, in which case you’ll want your own firewall protecting your PCs etc. I’ve also seen businesses in the same office building pool resources and share one internet connection, with each having their own firewall/router behind the primary site firewall/router.

There’s also hotspots, where the business both puts that infrastructure on a separate network from their back office, and the hotspots themselves are doing NAT too.

Also some payment processors these days are pushing for organisations to install their own router behind the customer’s network and route all payments via that (rather than customer-managed IPsec VPNs or straight TLS over the internet).

Yeah it’s definitely common.


Mobile carrier NAT, mobile device hotspot NAT, vmware NAT - that's the most I've seen so far.

But IPv6 in home networks replaces the unreachability-because-of-NAT by unreachability-because-of-filtering. The usual home router protects your clients, and if it's not your box, you're out of luck.


I was really looking forward to getting IPv6 on my home network. It turns out my ISP are using DHCP-PD, which means the prefix I get assigned is dynamic, so essentially useless for hosting. I can't believe that with such a large address space ISPs are still deciding to use dynamic allocation.


"as opposed to throwing these non-upgradable, hardcoded v4 devices out"

Prepending bits wasn't a solution for the problem he discussed.


If that's all IPV6 was it would have been fully rolled out 15 years ago.


we could even call it IPv6, and then improve a lot of the shortcomings of IPv4

but then everyone would complain that it's harder to write the numbers and try to come up with excuses for not implementing it


Other people would then throw in tons of extra features like arp replacement, dhcp replacement, new ways to deal with qos, wide use of link local address, etc.

IPv6 isn’t just a few extra bytes in an IP address.


IPV6 is probably the worst "upgrade" to have ever rolled out and graced our world with its stinky existence.

It's been a decade and IPv6 has barely entered the zeitgeist of the common internet user.


> It's been a decade and IPv6 has barely entered the zeitgeist of the common internet user.

It has not been a decade! I remember printing out the draft spec in the late 1990s thinking that I have to get ahead of this curve. Twenty years in the making.

I recall at the time thinking initially that the obscure nomenclature for addressing would be annoying, but that it clearly would be fine. What do I know.

In 2010 Hacker News luminaries predicted the IPv6 momentum singularity here: https://news.ycombinator.com/item?id=1236048

The fact of the matter is that very, very large private corporate networks do not need more address space. ISPs use CGNATs. Most home broadband routers/modems are junk. It is just a mess.


More than a quarter of Google's traffic today is IPv6. So apparently ISPs representing a quarter of their traffic don't consider the marketing oxymoron that is "carrier grade NAT" an adequate substitute for IPv6...


i don't know what's in the "zeitgeist of the common internet user" but i'm pretty sure IPv4 isn't in there either.

besides, https://www.google.com/intl/en/ipv6/statistics.html


Are you describing something different from NAT?


Yes, but I didn't imagine it very deeply; NAT is rather like a PBX, which rather necessarily provides only a subset of mappings to the Internet. (I suppose it is a math exercise what the perfect NAT would be, if the Internet were exactly two NATted networks facing each other).

When I said invent, I was supposing we do invent one little extra bit for IPv4, wherein we default to this IPv4 internet, but we could somehow specify another "universe" and simply have another entire IPv4 network, ad nauseam. Some proprietary routing topology would presumably exist as an inner layer, providing decades of new complexities, masking the implementation from naive IPv4 stacks that couldn't be updated. Those could continue to exist unawares, and perhaps remain very disjoint between "universes".

Anyways, I am not an internet architect, and this idea is entirely foolhardy since we have a perfectly good IPv6 universe to move into, one that has barely unpacked the moving boxes from the previous one. I suppose all I meant is that we might invest some more energy in IPv4 before we're really done with it.


Hardcoded ipv4 non upgradeable devices should be trashed.

They should be actively blocked or redirected to a site that offers quality hardware (Cisco need not apply)


Please explain why I was downvoted? I'd actually like to hear why non upgradeable hardware with hardcoded ips are ok?

Hell the equipment probably has hardcoded creds that are part of critical infrastructure too.


Why not start with the why of your own argument?


Because it's lazy development, and 30-year-old throwaway equipment that is essentially worthless shouldn't still be fostered. Rip the band-aid off and replace crappy hardware that makes connections terrible for everyone else.

My argument is to trash crappy equipment.


The downvotes may be due to a way you formulate your arguments. Some people may see them as lacking in empathy and arrogant.

You didn’t seem to take into consideration all the places where 30-year-old equipment is still good, the reasons why stuff is not upgradeable, nor the people who cannot afford to spend $100 on new stuff.


wow; that is actually one of those comments of the internet that makes you think.

The thing here is cross universe communication and legacy hardware/software stacks.

This sounds like a great mental exercise.


If only this already existed: https://tools.ietf.org/html/rfc1918


I imagine this could break some existing code. Especially if IANA starts to actually allocate addresses from the 0.0.0.0/8 range.

Luckily, this doesn't change the fact that 0.0.0.0/8 is still reserved by RFC8190[0]

[0]: https://tools.ietf.org/html/rfc8190
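A lot of deployed code will indeed keep treating the block as special. Python's stdlib ipaddress, for instance, still classifies 0.0.0.0/8 as non-global, so a hypothetical pre-flight check (the function name here is made up for illustration) would reject any allocation from it:

```python
import ipaddress

ZERO_SLASH_8 = ipaddress.IPv4Network("0.0.0.0/8")

def reachable_by_legacy_stacks(addr: str) -> bool:
    """Hypothetical check: flag addresses that unpatched RFC 1122
    stacks will refuse to talk to, even if IANA allocates them."""
    ip = ipaddress.IPv4Address(addr)
    return ip not in ZERO_SLASH_8 and ip.is_global

print(reachable_by_legacy_stacks("0.1.2.3"))  # False
print(reachable_by_legacy_stacks("8.8.8.8"))  # True
```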


>Especially if IANA starts to actually allocate addresses from the 0.0.0.0/8 range.

Who's going to want to use that block? Given how many legacy devices are there on the web, it's probably not going to be reachable for a sizeable chunk of the internet.


That’s the same thing they said about addresses in bogons ranges and eventually we succeeded in using them.


By "this", I assume that you mean if people start using the 0.x.x.x block. My first read of your comment interpreted "this" to mean the new patch, which in fact makes Linux accept and work with this previously unusable block.


This is a fantastic commit. Contents aside, it's a great demo: the diff is tiny; the message is much longer, and explains why the diff is there, with even (very) historical context.


I've always found (anecdotally) there's an inverse relationship between commit comment length and diff length. Sometimes I need to go into great depth to explain why we needed to change this constant from 3 to 4, but add a 100 line new feature and it's typically something like "added new --foobar feature".


Unfortunately IANA disagrees; it says it is still reserved: https://www.iana.org/assignments/iana-ipv4-special-registry/... The registry row reads: 0.0.0.0/8, "This host on this network" [RFC1122], Section 3.2.1.3, allocated 1981-09; usable as source: True, as destination: False, forwardable: False, globally reachable: False, reserved-by-protocol: True.

The referenced RFC is the same one as in the commit message, but a different section: 3.2.1.3 vs 3.2.2.7. Shouldn't IANA have lifted (or indicated its intention to lift) the reservation before the kernel change?


Given the seniority of the author and committer in the networking world, I assume that they're operating under the knowledge that IANA's reservation is being reconsidered.


We have had many talks at many levels over the years. This is an attempt to break the logjam of finger pointing that ensued. We expect wide adoption of this and other IPv4-related patches over the next few years across the open source stacks... and then a standards dialogue can take place.

John Gilmore (creator of the unicast extensions project) was the co-inventor of BOOTP in '85, which is what made using 0.0.0.0/8 feasible then. The fact that it's been feasible since, with no movement in the standards orgs to fix it, has kind of been a record-long wait from bug fix to deployment in both our cases.


It’s reserved as a source, not as a destination, based on the table linked there anyways. Which implies some interesting things that aren’t as simple as “0/8 is reserved”.


Interesting in what way?


For example: A load balancer can serve requests on a 0/8 IP address to the general public without violating the terms of the reservation, as the LB can simply direct responses over an interface other than the 0/8 listening-only interface with a cooperative router upstream.

I’m not saying that this is true as written, but I’m saying that this is a plausible example in support of why I think it’s “interesting”.


"Before you tear down a fence, you should find out why it was first built."


Thx!


This will break code that uses (e.g.) 0.0.0.1 as a "fail address":

    $ telnet 0.0.0.1 22
    Trying 0.0.0.1...
    telnet: Unable to connect to remote host: Invalid argument
It's nice because the connect() syscall fails immediately, instead of sending packets and waiting for a timeout. 0.0.0.0 is not a suitable replacement, because it's interpreted as 127.0.0.1:

    $ telnet 0.0.0.0 22
    Trying 0.0.0.0...
    Connected to 0.0.0.0.
    Escape character is '^]'.
    SSH-2.0-OpenSSH_7.9p1 Debian-10
Doesn't the kernel have a policy of not breaking userspace, even if the behavior is undocumented?


If you ask me, 0.0.0.0 should not be interpreted as 127.0.0.1. Yes, I know programs like ping interpret it this way as well. I just think it's ridiculous.


It makes sense if you compare it to how listening sockets work: if I listen on 127.0.0.1:80, I can connect to 127.0.0.1:80. If I listen on 0.0.0.0:80 (all IPv4 addresses), I can connect to 0.0.0.0:80.

And as a bonus, I can just 'telnet 0 80' instead of having to type out 127.0.0.1
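The wildcard-bind half of that is easy to see with the stdlib; note that connecting to 0.0.0.0 itself reaching loopback is OS-specific behaviour (Linux does it), so this sketch sticks to the portable part:

```python
import socket

# Listen on the wildcard address, then connect back over loopback:
# a 0.0.0.0 bind covers every local IPv4 interface, 127.0.0.1 included.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))   # port 0: let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()

cli.sendall(b"ping")
print(conn.recv(4))  # b'ping'
for s in (cli, conn, srv):
    s.close()
```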


Agree that the bug is in userspace. Unless there is a RFC that says "0.0.0.0 shall be interpreted as 127.0.0.1", it's just super-weird that it even does that.


The main use is to control which interfaces servers listen on. I.e. can you access them from other machines.


So, some software actually parses 0/8 as localhost?

This explains so many broken tutorials..


It says 0.0.0.0 is still reserved, and I've never heard of software using 0.0.0.1 as a "fail address". Is it really common?


I know I've used it myself, but from a sample size of one, I can't say whether http://www.hyrumslaw.com/ or http://xkcd.com/1172/ is more relevant.


Why would code detect a failure and then use a fake address to cause an error, instead of just erroring?


The address may not be controlled by the code in question. Think /etc/hosts, command line flags, and config files, where you want some condition to trigger an error.


Makes sense. Also 240.0.0.0/4. Small, focused change in code. Getting the IANA process worked out with the IAB is going to take a bit more work (from the experience with 240/4, where I tried with some others to author a draft).


While at it, I feel like we could also squeeze the 224.0.0.0–239.255.255.255 multicast range to maybe 224.0.0.0/8 if not tighter.

For reference here is a list of reserved ipv4 addresses[0].

[0] https://en.m.wikipedia.org/wiki/Reserved_IP_addresses


This can't be done, there's a number of devices out there that do multicast on fixed multicast IPs and so doing this would break them. In particular I've got a series of HDMI<=>IP devices that forcefully go to 239.255.42.[42..141] depending on their settings.


I wonder, though, how many of those fixed multicast IPs are outside of both 224/8 and 239/8.


225/8-231/8 were reserved for future multicast use and never allocated by IANA. As near as we can tell by grepping the entire world's source, they are entirely unused.

128m addresses....


yea that'd probably be reasonable to try to reclaim then. probably only useful for private use unfortunately though, i bet there's a lot of stuff that'll improperly route them given the range.


We've had 225/8-231/8 up and running for 6+ months, with patches for various routing daemons. Running a gauntlet with 240/4 and 0/8 now working seemed best.

In our exploration of converting these 120m addresses from a multicast definition to unicast, we only had to change the kernel with two tiny patches, and recompile 89 fedora packages. Openwrt was less. The patches for frr and bird, straightforward.


As for private or public use, certainly there is demand for a larger CGNAT address space, somewhere, and allocation of a portion of 225-231 for that might drive adoption there.

Still, the addresses need to become routable, and from there (aside from politics and deployment delay), globally routable is the dream for most of these new addresses, in 5-10 years.


or you could just move to fucking v6, its not that hard.


This would break a ton of existing applications


Yes this 240/4 is the one range I wish they would allow, even if just for private ranges. The main showstopper is Windows iirc, as Linux already allows it (will route it).


What are some resources to help support this concept move along?


Drop myself and the other project members a line and see the unicast extensions project on github.

It is a trivial number of patches to enable all the open source code in the world to work.


I'm still of the opinion that if SRV records were widely used, the IPv4 shortage could be delayed a very long time.


I agree, and I and many others raised it repeatedly with the HTTP2 editors/mailing list in particular, who sadly murdered that proposal because they believe in their god-given right to squat the A record for their protocol. I believe the failure to address DNS considerations in the HTTP standard has driven a large part of ongoing IPv4 exhaustion.


While I agree that HTTP (and everything else) should support SRV records, they wouldn't do anything for IPv4 exhaustion.


I beg to differ. Having run LIRs and with friends at RIRs I know for certain that many large IPv4 allocation requests were justified on the basis of needing separate addresses for websites. Policy may be tighter now than it’s ever been, but the lack of aggregation due to well-known-port fixation drove and continues to drive unnecessary assignments.


Shouldn't the HTTP 1.1 Host header invalidate that argument?


Many people expected as much, but in practice not nearly to the extent that was desired, partly because SSL was a problem until SNI came along (and that far too late) and also because it comes after the TCP handshake, which precludes a whole class of traffic routing and service options. Fine for low-end shared hosting, not fine for many other applications.


I'd prefer a world of end-to-end communication (which IPv4 can't handle with its 32-bit address space when you consider the simple example of the world's mobile phones), but I'd take a world of SRV records on IPv4 as a close second.


I don't see SRV records ever succeeding because too many networks block non-80 and non-443 ports.

Would you want to host your website on port 15988? Think how many school/office users are going to complain that Google and Facebook work fine, but your site does not...


Unrelated to code, but related to IP space - there are at least two "reserved IP spaces" that look like they are completely usable by private networks, not included in RFC1918.

198.18.0.0/15, reserved for benchmarking devices across subnetworks.

100.64.0.0/10, reserved for carrier-grade NAT. This one might have the higher chance of a collision depending on your use case or if it's part of a VPN pool.

Several ISPs I have tested all drop them when you traceroute.

So if your organization can't switch to IPv6 internally and has performed horrible IPAM on RFC1918, here is a last chance... don't screw it up.
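It's worth checking how off-the-shelf tooling classifies those two ranges before deploying them. Python's stdlib ipaddress, for example, treats the benchmarking range as private, while the CGNAT shared space is documented as neither private nor global:

```python
import ipaddress

# 198.18.0.0/15 (benchmarking, RFC 2544) and 100.64.0.0/10 (CGNAT
# shared space, RFC 6598) compared with a plain RFC 1918 address.
for addr in ("198.18.0.1", "100.64.0.1", "10.0.0.1"):
    ip = ipaddress.IPv4Address(addr)
    print(f"{addr:12} private={ip.is_private} global={ip.is_global}")
```

None of the three are global, but only 100.64.0.1 is also not "private" - exactly the ambiguity that makes collisions with a VPN pool or carrier NAT plausible.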



You could also (ab)use the TEST-NET ranges as private address space.


I've been (ab)using 198.18/15 (also reserved, cf. RFC2544) at home for years.

It works just as any other network would and is a perfect solution for me.

(Previously, I abused TEST-NET-2 and TEST-NET-3 and both of those worked fine as well.)


I can see those ranges being used in production, but I seriously do not like the idea of giving IPv4 even more time.


A single /8 isn’t going to meaningfully impact the exhaustion issues IPv4 faces. I believe it was APNIC a couple of years ago who said they were already facing allocation requests equivalent to a /8 a month.

It’s part of the reason hand-wringing over some of the “wasteful” /8s that were handed out to organizations in the early days is largely pointless. Even if you could get those orgs to consolidate and give back large useable ranges in those blocks, there’s simply not enough there to meaningfully change the long term mismatch between demand and supply.


This /8 won’t make a dent in that problem. Until every single device is capable of IPv6, IPv4 will continue to persist.


I mean, all of the devices already support v6. The problem is providers and services not deploying it and software ignoring its existence.


This point was the overarching point of my talk at netdevconf. We are going to need more ipv4.


This would be really neat. However, it might confuse or complicate some tools' output (like netstat) that use the standard 0.0.0.0:80 style listening notation. That is, if 0.0.0.0 becomes a valid address.


It excludes 0.0.0.0/32, which is what “0.0.0.0” is when shown as a bare IP, so that will only be a complication if some tools (surely some do) assume that 0.0.0.0/32 is equal to anything in range 0.0.0.0/8.


No, 0.0.0.0 will still be reserved.

>0.0.0.0/32 is still prohibited, of course.


Gotcha! I skimmed too fast. I'm all for it then.


From the commit message:

> 0.0.0.0/32 is still prohibited, of course.


The *.0/32 is always excluded from the usable address range anyway.


Are you saying a valid IPv4 address can not have a value of 0 in the 4th octet? That is not true at all.


Zero in the last octet is problematic if you want to be generally reachable on the internet. We used x.y.z.0 as a vip at fb for a while, but learned the hard way that some people can't reach those ips.

Some additional info from RIPE: https://labs.ripe.net/Members/stephane_bortzmeyer/all-ip-add...


It's amazing how many stupid assumptions on the part of some people create these extraordinarily difficult situations to navigate.


It's a valid IP address, but it's the 'network address'. 'usable' range is 1-254. That's my understanding anyway?


That's not correct. Not every subnet is a /24. Further, you don't necessarily need a network and broadcast address for a subnet. That's a local issue. See IETF RFC 3021 for one example of not having a network and broadcast address.
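Both points are easy to check with Python's stdlib ipaddress: a /31 per RFC 3021 has no separate network/broadcast addresses, and a .0 address is an ordinary host inside a wider prefix:

```python
import ipaddress

# RFC 3021 point-to-point /31: both addresses are usable hosts.
p2p = ipaddress.ip_network("192.0.2.0/31")
print(list(p2p.hosts()))  # [IPv4Address('192.0.2.0'), IPv4Address('192.0.2.1')]

# A .0 address is a perfectly ordinary host inside a wider prefix:
wide = ipaddress.ip_network("192.0.2.0/23")
dot_zero = ipaddress.IPv4Address("192.0.3.0")
assert dot_zero in wide
assert dot_zero != wide.network_address
```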


It's the network address only if you define your network as a /24. If the mask is different, it could be a completely valid address.


0.0.0.0 is a "network" address no matter how many bits are in the netmask. Having said that, there really isn't a use of "network" addresses (especially as distinct from broadcast), apart from the convention being baked into basically everything.


:facepalm: right, of course, makes sense. Thanks.


Or anything smaller than a /24.


This work is a fallout of the conclusion of this report:

https://www.internetgovernance.org/2019/02/20/report-on-ipv6...

"legacy IPv4 will coexist with IPv6 indefinitely"

which was referenced in the original talk, and I wish more "just deploy ipv6" advocates would read. I have worked very hard on making ipv6 deployable (notably in the cerowrt project) and came to the conclusion after reading that that if we wanted sustained internet innovation we needed to expand and improve ipv4, also. Thus 240/4, 0/8, 225/8-231/8 (pending) and yes, even a large portion of 127 and a push to clean up other problems in ipv4 in general.



This even proposes the use of the zero node address in each network or subnet (6.1)


IMO, device addresses are irrelevant - the real pressure is whether end users can get at least one public IP.

NAT is actually a security feature. Not because it necessitates a stateful firewall (as is often believed), but rather because it hides specific client host addresses from servers. The last thing anybody should want is to leak even more information to the surveillance companies.

The problems of IPv6's larger address space can be mitigated similarly to v4 while retaining some advantage - for example NAT onto random "host" address for 16+64 bits of saddr+sport rather than just 16 of sport. The point is that a flat address space isn't the RPC-panacea it was when protocols like "active FTP" were invented. In the modern world, blithely taking one's supposed L2 address and stuffing it into a higher level PDU is basically a layering violation.


An inbound firewall for all traffic has exactly the same security level as NAT.

There's a very small argument for privacy but (a) we have privacy addresses and (b) even behind NAT many of your devices are probably already identifiable in so many other fingerprintable ways that obscuring the address for this sake would be pointless.


>(a) we have privacy addresses and

AFAIK those get rotated every day or so. While this is better than fixed addresses, it's still worse than NAT, especially if you use some sort of tool to obscure your sessions (eg. container tabs or private browsing).

>(b) even behind NAT many of your devices are probably already identifiable in many other fingerprinted ways that obscuring the address for this sake would be pointless

That sounds exactly like Google's excuse not to implement fingerprinting hardening on Chrome[1].

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=49075


I wouldn't say _exactly_ the same security level.

https://samy.pl/pwnat/

> pwnat, pronounced "poe-nat", is a tool that allows any number of clients behind NATs to communicate with a server behind a separate NAT with no port forwarding, no DMZ setup, and no 3rd party involvement. The server does not need to know anything about the clients trying to connect.


> exactly the same security level as NAT.

> There's a very small argument for privacy

You can't say it has the "same security", while also saying there is a "small" difference. Privacy is an integral part of security, especially for natural persons.

> we have privacy addresses

AFAIK "privacy extensions" merely pick a random host address rather than naively using the ethernet MAC (!!). This still allows identifying discrete hosts.

The NAT scheme I described is strictly better than this. So yes, I can envision an IPv6 future where we have convenient global addresses for incoming traffic, but outgoing traffic gets NATted to a single address (single probability distribution, really). It's just not going to be a return to the trustful 1970's air-out-your-genitals simplicity that is widely envisioned.

> behind NAT many of your devices are probably already identifiable in many other fingerprinted ways that obscuring the address for this sake would be pointless

Those other ways are also security vulnerabilities. We shouldn't excuse vulnerabilities because other vulnerabilities exist, especially in the direction of the foundation. While it is appropriate for a higher protocol to simply accept a weakness of a lower layer that it is built on, going the other way is unjustifiably throwing in the towel.


> An inbound firewall for all traffic has exactly the same security level as NAT.

I've seen at another of these discussions an argument that an inbound firewall is actually stronger. Suppose you have a NAT router with 192.168.1.0/24 as its "inside" network, and 203.0.113.0/24 as its "outside" network, doing NAT between them. Now suppose another host at the 203.0.113.0/24 network sends a packet to this router, with a destination address within the 192.168.1.0/24 network. A router which only does NAT would accept and deliver this packet, while a router with a correctly configured firewall would reject it.


> whether end users can get at least one public IP.

That only gets harder as address space limits[1] squeeze more and more people onto Carrier Grade NAT and similar workarounds.

> NAT is actually a security feature.

No, it's not. All supposed security features are actually from some other source, because NAT is only ever about changing addresses (or port) fields in the IP header.

> it hides specific client host addresses from servers.

NAT usually makes your local addresses very predictable. Is yours 192.168.[01].[1-4]? Often guessing isn't needed; the other IP fields that NAT does not touch can be used to differentiate[2] between hosts behind a NAT.

> The last thing anybody should want is to leak even more information to the surveillance companies.

That's why it's so important that we move to IPv6 asap. The theoretical problem of a few bits of identifiable entropy (differentiating between a few hosts on a private LAN) is trivial in comparison to the centralization that NAT has forced us into. NAT forces network software development into the client/increasingly-centralized-server model[3].

> The point is that a flat address space isn't the RPC-panacea

A flat address space is the most important feature of the original internet, because it makes all hosts equal in the eyes of the protocol. You don't need an imprimatur[4] - an authority's permission to publish - to publish from your own IP. You don't need a centralized server's permission to develop new network software that uses new network protocols.

Of course we don't want to leak information to a server, especially those owned by surveillance companies. The way to do that is to write software that doesn't involve that server, which is only possible with a flat address space. Arguing otherwise is arguing we should keep using party lines[3].

[1] http://www.potaroo.net/tools/ipv4/index.html

[2] (section 0x03-2) http://phrack.org/issues/63/3.html#article

[3] https://news.ycombinator.com/item?id=20169372

[4] https://www.fourmilab.ch/documents/digital-imprimatur/


> That only gets harder as address space limits[1] squeeze more and more people onto Carrier Grade NAT and similar workarounds.

Warning: we're awfully close to violent agreement here. I'm saying that the demand for more IPs is primarily for end users needing their first public IP, compared to end users needing public subnets for their devices.

> NAT is only ever about changing addresses (or port) fields in the IP header

Ya, the changing of which has the security benefit of a remote server not receiving it! I suspect you've become too used to responding to "I like NAT because it blocks connections to my network" to read what I've actually written.

> NAT usually makes your local addresses very predictable. Is yours 192.168.[01].[1-4]

Sure, for MACs (and guests) that are listed to receive 192.168 addresses because their treacherous software (eg Android, WebRTC, etc) is likely to turn around and leak it... That is the concern I am talking about here, not my local network being actively probed from outside.

But most substantively I do not agree with the point in your [3], based on relative significance. I believe the overriding dynamic is that serverless software is much harder to fund. Implementing UPnP and NAT punching is trivial compared to developing, polishing, and marketing software for mass adoption. Meanwhile centralized surveillance is lucrative, attracting massive investment.

And don't be confused by the brief popularity blip of end-user hosted websites using DNS, because DNS is a terrible user-facing namespace. It essentially mandates a high availability server standing ready to answer requests for every specific entity. There is no ability to retrieve the latest version of a given object from an untrusted peer.


> Implementing UPnP and NAT punching is trivial

It's possible if and only if you have control of the router. Good luck doing that when it's your ISP doing CGNAT. This is already impossible in some areas (e.g. parts of China) that are behind multiple (5+) layers of NAT. This isn't a theoretical concern; there are already many places on the IPv4 network that cannot get any kind of port forwarding (including UPnP).
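One practical tell that you are behind CGNAT is the WAN address your own router reports: RFC 6598 reserves 100.64.0.0/10 as shared space between CPE and carrier-grade NAT. A small sketch of that check (a heuristic, not a complete test; some carriers use other ranges):

```python
import ipaddress

# RFC 6598 "shared address space", used between the CPE and carrier NAT.
CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def looks_like_cgnat(wan_ip: str) -> bool:
    """If the WAN address the router reports falls in 100.64.0.0/10 (or an
    RFC 1918 private range), at least one more NAT layer sits upstream, so
    local UPnP port mappings cannot produce a publicly reachable port."""
    addr = ipaddress.ip_address(wan_ip)
    return addr in CGNAT_RANGE or addr.is_private

print(looks_like_cgnat("100.72.1.5"))  # → True  (carrier-grade NAT)
print(looks_like_cgnat("8.8.8.8"))     # → False (globally routable)
```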

Also, NAT hole punching is not[5] "trivial". It only seems trivial if you only consider the easiest case. It also requires a centralized server to initiate.

edit: re: the supposed privacy benefits of NAT - even if you have globally routable addresses, you can still proxy your HTTP requests through your router if you want to. You do not need NAT to have webservers see a single IP address from your LAN.

> harder to fund

100% offtopic; I'm not talking about software funding. The point is that NAT has held back software development. There are entire categories of network software that haven't even been invented yet, because they require being able to receive calls (incoming TCP SYN packets). We don't know what is even possible, because true network software has received very little development effort.

> end-user hosted websites

That is useful. I know several people that use local hosting for various useful activities. Bandwidth isn't that important if you're just sharing with friends. However, this is yet again an example of the client/server model. You seem to be proving my point that being stuck behind NAT can limit your thinking to the one model that works.

For a simple example of non-client/server network software, how about chat software that makes direct connections without the need for a server? (discovery can still use a server; that doesn't mean the chat protocol itself has to be centralized). In ~1996 I did that with very early PGP-encrypted VOIP software that let you simply dial whomever you wanted by their IP address. Just like we dial people on the telephone network by their telephone number. Yes, there are more convenient methods, most of which require trusting a centralized source. To even have a chance at developing something better, we need the ability to develop software freely at the network endpoints, which is incompatible with NAT.
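The "dial a peer directly by address" model is just ordinary sockets, which is exactly what NAT breaks. A minimal loopback sketch of the idea (port and addresses are illustrative; a real peer would listen on a known port on a routable address):

```python
import socket
import threading

def listen_for_call(srv: socket.socket, out: list):
    """Accept one inbound 'call' -- the unsolicited TCP SYN that a NAT
    would silently drop -- then echo a reply."""
    conn, _ = srv.accept()
    out.append(conn.recv(1024).decode())
    conn.sendall(b"hi yourself")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # any free port; a real peer would use a fixed one
srv.listen(1)
received = []
t = threading.Thread(target=listen_for_call, args=(srv, received))
t.start()

peer = socket.create_connection(srv.getsockname())  # "dial by IP address"
peer.sendall(b"hello, peer")
reply = peer.recv(1024).decode()
t.join()
peer.close()
srv.close()
print(received[0], "/", reply)  # → hello, peer / hi yourself
```

With globally routable addresses either side can play the listening role; behind NAT, the inbound connection never arrives without a relay or punched hole.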

> DNS

I didn't say anything about DNS. As I said above, you seem to be assuming a very limited range of network software. Of course DNS has serious problems; it's not the only way we can discover IP addresses. I want a future where we are free to invent these new methods, instead of a highly-centralized future where the surveillance capitalists have the power to decide which network connections you are allowed to make.

[5] https://bford.info/pub/net/p2pnat/


> There are entire categories of network software that haven't even been invented yet, because they require being able to receive calls (incoming TCP SYN packets)

> I did that with very early PGP-encrypted VOIP software that let you simply dial whomever you wanted by their IP address

This is a cute demo but ultimately a red herring. I think the crux of our disagreement comes down to whether we see IP addresses as sufficient user-facing/user-identity addresses. I do not believe them to be so.

By "DNS" above, I really meant DNS+IP - a namespace that assumes the ability to contact an authoritative server, talking directly to it to accomplish a task. As I said, there is no ability to retrieve the latest version of a given object from an untrusted peer - talking to the authoritative server is the only way to obtain updates!

A real communication service needs things like presence information, roaming, multiple endpoints, asynchronous notifications, etc. Which in your example (phone number == IP address) ultimately requires having a bona fide server sitting there to answer for those requests. You called this "discovery", but it's much heavier than that. This is the technical burden that is straightforwardly solved by the software-as-surveillance companies - set up a well-known server, and simply pay a bunch of people to make sure it is always up.

A thought experiment: Say I have a neighbor who is also DIY-technical. What can we actually gain by running a network link between us? I can give him an NFS export (or the like) to access my collection, but he's got to explicitly make use of it (ie incorporate my hierarchy into his own mental hierarchy). For wider use with HTTP(s), the best I can do is give him transit to contact the server himself, but that is just becoming his ISP.

However if we're on the same torrent, and the torrent client knows how to discover his client, we can both cooperate without explicit coordination. Because Bittorrent addresses chunks by the hash of their content, I can verify whatever I receive from him without needing to contact the original publisher! Contrast when an object identity relies on an IP or DNS name, the only way to verify it is to contact its server yourself.
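The verification property described here is just hashing: the chunk's identity *is* its digest, so any peer's copy can be checked without contacting the publisher. A simplified sketch (BitTorrent ships SHA-1 piece hashes in the .torrent metadata; the details here are reduced to the bare idea):

```python
import hashlib

def verify_chunk(data: bytes, expected_sha1: str) -> bool:
    """A chunk fetched from any untrusted peer is valid iff its hash
    matches the digest the publisher put in the metadata."""
    return hashlib.sha1(data).hexdigest() == expected_sha1

chunk = b"some piece of the shared file"
publisher_hash = hashlib.sha1(chunk).hexdigest()  # shipped once, in metadata

print(verify_chunk(chunk, publisher_hash))                    # → True
print(verify_chunk(b"tampered by the peer", publisher_hash))  # → False
```

Contrast an object named by IP or DNS: there, the name says where to ask, not what you should receive, so only the authoritative server can vouch for the content.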

> even if you have globally routable addresses, you can still proxy your HTTP requests through your router if you want to

So, L4 address translation instead of L3... And I agree, to scrub fingerprints it makes sense to proxy at the highest protocol layer possible. So sure, maybe the future will be a nice flat v6 namespace for incoming connections, and a household HTML proxy to scrub access to legacy HTTP(s).

Don't get me wrong - I'm certainly not arguing against adopting v6. I just don't think it is wise to wait for it to usher in the peer paradise that people were imagining 20 years ago. AJAX and surveillance as a service didn't get adopted simply due to a shortage of v4 addresses.

> This is already impossible in some areas (e.g. parts of China) that are behind multiple (5+) layers of NAT

At a basic level, I do agree this development path is worrying. But IMO the real way to push back against this is to create a demand for something else. Unless the net result looks immediately different to a significant number of end users, the demand isn't there. And right now, even in the US/EU there isn't a widely-compelling advantage to having a routable v4 address.


> I think the crux of our disagreement comes down to whether we see IP addresses as sufficient user-facing/user-identity

Just to clarify, I wasn't trying to pigeonhole you to raw numeric IP addresses. I meant IPs and names that effectively translate to IPs - hostnames, DNS names, etc. Even with dynamic DNS.


This gives us only another 16.7 million IPs and will have tons of compatibility issues with older systems. Re-purposing the old "class E" space (240.0.0.0/4) would have similar problems.
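The arithmetic behind those figures is plain CIDR counting:

```python
# A /8 leaves 32 - 8 = 24 host bits; a /4 leaves 28.
print(2 ** (32 - 8))   # 0.0.0.0/8   → 16777216  (~16.7 million)
print(2 ** (32 - 4))   # 240.0.0.0/4 → 268435456 (~268 million)
```

Even the larger class E reclamation is well under a tenth of the ~3.7 billion already-routable unicast addresses.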

IMHO, it is too late for these hacks. We need to embrace IPv6. I've been running it for almost 10 years now...


And I've been disabling it on every system I get superuser privileges on for as long ...

I can't be bothered maintaining two sets of routes, two sets of firewall tables and for the life of me can't remember IPv6 addresses.

My toaster doesn't need to talk to your fridge.


People were saying the same thing during the multi-protocol days (x.25, DECNet, AppleTalk, Novell IPX...) They couldn't be bothered to learn this crazy Internet thing. IPv6 is the future.



Thx for the pointers to the work and talks!


WordPress doesn't allow 0.0.0.0/8 in some contexts, as an attempt to block some types of SSRF:

https://github.com/WordPress/WordPress/blob/abcbee954f4d8baa...

If allocations in 0.0.0.0/8 happen someday, users will probably see similar misfeatures in many applications.
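The WordPress check is PHP, but the pattern is easy to reproduce in any language. An illustrative sketch of this kind of SSRF filter (not WordPress's actual code), showing how it would reject a hypothetically allocated 0.0.0.0/8 host regardless of reachability:

```python
import ipaddress

# A blanket SSRF-style block on the whole "this network" range.
BLOCKED = ipaddress.ip_network("0.0.0.0/8")

def allowed_target(host_ip: str) -> bool:
    """Reject any destination inside 0.0.0.0/8, reachable or not --
    the kind of hardcoded filter that outlives address reallocations."""
    return ipaddress.ip_address(host_ip) not in BLOCKED

print(allowed_target("0.1.2.3"))        # → False: blocked even if allocated
print(allowed_target("93.184.216.34"))  # → True
```

Such checks are baked into countless deployed applications, so reallocating the range would need application updates, not just kernel ones.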


rejecting all of 0.0.0.0/8 was their CVE-2016-2222 fix


I am pleased to see no one coming at us with pitchforks and torches ablaze.

240/4 already works in many OSes also. 127 and 225/8-232/8 can be reallocated as unicast also.


AFAICT the risk/reward for this change (and others along the same line) is poor, because like it or not, as a factual matter, there are many tens or hundreds of millions of notionally "open source" devices that will never get an updated Linux/BSD/... kernel. Most of them are probably Android phones but there is also a giant tail of consumer routers, EOL'd network equipment, etc. A lot of this stuff will stay in use until total HW failure, which may be a decade, two decades or more.

There are of course also many closed-source products that will never get a TCP/IP stack update. I haven't tested it but I doubt Win7 will ever be able to reach 0.1.2.3 over the public internet. Even if that's a bogus example, you get the idea: millions of dollars worth of old closed source gear out there where it's impossible for the owner to patch the TCP/IP stack.

As a result, to prevent strange connectivity problems on 0.X% of their connections, almost everyone will pay (and, if needed, significantly bid up) the ~$20/yr "normal" IPv4 address cost to get an existing "non-reclaimed" IPv4 address instead of taking a gamble on one of these new ones that will definitely have problems with many other hosts. In short I don't see a voluntary rational buyer or user until the IPv4 market rises 10X+ and probably more like 100X+; until then they seem like more of a liability than an asset given how annoyingly long it will take to retire (or somehow otherwise ensure that you'll never need to talk with) non-updatable IPv4 hosts.

Not that I like this, or am trying to defend or justify it, but I think it is an accurate assessment.

TLDR: pretty much everyone will actively avoid these addrs given that millions of hosts will never be able to reach them.


I wonder if there are any secret computers on zeronet!



