I was thinking this, as well. The brilliance of the IP hourglass lies in the fact that IP makes very few assumptions about anything above or below it, thus you can put just about anything above or below it.
IP has no notion of connection, and thus no notion of the reliability thereof. That's TCP's job (or what-have-you). I understand the top comment's complaint, but I don't think IP is the problem.
Another case: satellite internet. You can't assume a reliable connection to anything when "beam it into space" is an integral step in the chain of communication. Yet, it works! Whether or not a particular service is reliable is an issue with that service, not the addressing scheme.
IP is an envelope with a to-address and from-address. The upper layer protocols are whatever's inside the envelope - a birthday card, a bill. The lower-layer protocols are USPS, FedEx, the Royal Mail, etc. Blaming IP for partitioning problems is like blaming the envelope for not making it to its destination.
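To make the envelope analogy concrete, here's a rough sketch (Python, purely for illustration) of a minimal IPv4 header - essentially a little preamble plus the from- and to-addresses:

```python
import socket
import struct

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Pack a minimal IPv4 header: the 'envelope' is little more than
    a version/length preamble plus the from- and to-addresses."""
    version_ihl = (4 << 4) | 5          # IPv4, 5 x 32-bit words (no options)
    total_len = 20 + payload_len        # header + whatever's inside the envelope
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_len,      # version/IHL, DSCP/ECN, total length
        0, 0,                           # identification, flags/fragment offset
        64, 6, 0,                       # TTL, protocol (6 = TCP), checksum (left 0 here)
        socket.inet_aton(src),          # from-address, bytes 12-15
        socket.inet_aton(dst),          # to-address, bytes 16-19
    )

hdr = ipv4_header("192.0.2.1", "198.51.100.7", 0)
```

Everything after those 20 bytes is the contents of the envelope - IP itself doesn't care what they are.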
But the addresses, both from and to... they're both transient names for computers. Not for people, not for data, and they must remain unchanged for the duration of the interaction. That biases everything above that layer in significant ways.
You can't say "get me this data from wherever it might be found" because you have no means of talking about data except as anchored to some custodian machine, despite the fact that the same bits might be cached nearby (perhaps even on your device).
You also can't gossip with nearby devices about locally relevant things like restaurant menus, open/closing hours, weather, etc... you have to instead uniquely identify a machine (which may not be local) and have it re-interpret your locality and tell you who your neighbors are. You end up depending on connectivity to a server in another state maintained by people who you don't know or necessarily trust just to determine whether the grocery store down the street has what you want.
It creates unnecessary choke points, which end up failing haphazardly or being exploited by bad actors.
The things you are complaining about are handled by higher-level, human-scale protocols. They can be, and are, layered on top of the existing low-level, hardware-scale protocols.

You might think that those layers suck because they are layered on top of low-level protocols - that if we just baked everything in from the start, everything would work more cleanly. That is almost never the case. Those layers usually suck because it is just really hard to do human-level, context-dependent whatever. To the extent that they suck for outside reasons, it is usually because the low-level protocols expose an abstraction that mismatches your desired functionality and is too high-level, not one that is too low-level. A lower-level abstraction would give you more flexibility to implement the high-level abstraction with fewer non-essential mismatches.
Baking in these high-level human-scale abstractions down at the very heart of things is how we get complex horrible nonsense like leap seconds which we then need to add even more horrible nonsense like leap second smearing on top to attempt to poorly undo it. It is how you get Microsoft Excel rewriting numbers to dates and spell correcting gene names with no way to turn it off because they baked it in all the way at the bottom.
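For what it's worth, the smearing itself is simple - a linear smear spreads the extra second evenly over a window (Google's public NTP servers use a 24-hour smear; this sketch just illustrates the idea):

```python
# Sketch of a linear "leap smear": instead of inserting a discrete
# 23:59:60, the extra second is applied gradually over a fixed window.
SMEAR_WINDOW = 86400.0  # 24 hours, in seconds

def smeared_offset(seconds_into_window: float) -> float:
    """Fraction of the leap second applied so far, from 0.0 to 1.0."""
    clamped = min(max(seconds_into_window, 0.0), SMEAR_WINDOW)
    return clamped / SMEAR_WINDOW
```

Simple on its own - the horrible part is that every clock, database, and scheduler above it now has to agree on which smear (if any) is in effect.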
> But the addresses, both from and to... they're both transient names for computers. Not for people, not for data, and they must remain unchanged for the duration of the interaction.
This is true, but
1) it's an easy problem to solve in many cases (DNS layers stable names over transient addresses, and DHCP handles handing the addresses out)
2) the exact mechanisms by which an address could be tied to a particular resource are innately dependent on the upper portions of the protocol stack, simply because the very idea of what a "resource" even is must necessarily come from there.
3) the exact mechanisms by which an address could be tied to a particular piece of hardware are necessarily dependent on the lower parts of the stack (MAC addresses, for example)
#2 and #3 illustrate that IP benefits from not solving these issues because doing so would create codependency between IP and the protocols implemented above and below it. Such a situation would defeat the entire purpose of IP, which is to be an application-independent, implementation-independent mover of bits.
It's true that IP itself handles devices that roam between networks poorly. IPv6 did get a solution of sorts in Mobile IPv6 (RFC 6275), but it was never widely deployed (sadly - that could perhaps have been the killer feature that would have made adoption a much easier sell).
I don't think you're right about neighbors, though. IP does support broadcast for talking with nearby machines. Of course, broadcast is confined to the local network segment - routers don't forward it, partly out of concern for flooding the bandwidth.
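As a sketch of what that looks like in practice (Python, with an arbitrary port number chosen for illustration), local discovery over UDP broadcast comes down to a couple of socket options:

```python
import socket

PORT = 50007  # arbitrary port for this sketch

def make_broadcast_sender() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Opt in to broadcast; routers won't forward this past the local segment.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return s

def make_listener() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))  # receive datagrams arriving on any local interface
    return s

# sender.sendto(b"anyone have today's menu?", ("255.255.255.255", PORT))
# ...and every listener on the same segment receives the datagram.
```

No server in another state required - but also no reach beyond the local segment, which is exactly the trade-off being argued about here.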