Well, no, transit isn't dead. But when your traffic volume rises to one of the top N -- let's say, N approximates 10 -- sources/destinations of the entire Internet, you discover it's cheaper to run your own global networks.
And that's what Google, Facebook, and Amazon, at the very least, have done: bought fiber, hired network engineers, and designed things that work efficiently for them. If YouTube is 90% of Google's traffic, it's not surprising that Google's network looks like a CDN. Amazon wants to interconnect their AWS datacenters to lower their internal traffic costs. Facebook wrote a new routing protocol (Open/R).
CDNs are simply good architecture -- in any good system, you have multiple tiers of storage, and web systems are no different. Multiple tiers of storage providing caching over intermediate links which may be saturated -- this is a pretty common model for logistics whether you're talking data or packages.
Also, all of the companies you list purchase interconnects or CDN services from large ISPs. So Amazon has a datacenter in Chicago that has a direct fiber connection to Comcast and Verizon networks, for example, that hosts a copy of its CDN endpoints. CDNs are ridiculously easy to build these days; I helped design the build-out of a CDN for a major ISP and we just used off-the-shelf open source software. The hardest part of the project was getting the purchase orders through my client's procurement process. In my mind, that means the engineering here is so uninteresting as to be commoditized -- which means that this is a business problem, not a technical one.
So transit is disappearing, but direct interconnects to big ISPs are just taking their place. On one hand, it's hard to argue against -- it's the right technical solution and there isn't a better option. But at the same time, it concentrates control to a worrying degree, especially as media, telecom and software continue to converge.
Not sure this is the case: amz data centres to other amz data centres in most cases go across NTT, Tata, etc. Google, on the other hand, is different -- e.g., Taiwan to Ireland is all Google network. Spin up VMs and traceroute it yourself.
Careful: a lack of middle hops in a traceroute isn't necessarily a sign that there are no transit providers. Many times a carrier/enterprise may rent overlay (e.g., MPLS) transport capacity, which won't appear in your traceroute.
Although, in Google's specific case, you are probably right and this does not apply.
Scattering caches or CDN nodes around the internet has obvious value.
It's a pretty big jump, though, from caches to a world where the internet is structured like a cable TV service with an architecturally designated 'head end'.
Many internet architects seem to get very excited about losing end-to-end connectivity, and I can never figure out why. I guess it allows one to raise larger barriers to entry(?).
I'm one of those who values end-to-end connectivity, but I can remember when it was the rule instead of the exception.
There was a time -- before the rise of NAT -- when one could directly establish connections to others across the Internet without having to jump through hoops or implement other tools (port forwarding, UPNP, a third-party(!), etc.) to do it.
In general, as a network engineer, I dislike anything (especially NAT) that breaks end-to-end connectivity, simply because of the inherent problems that arise as a result.
In addition, some of the DDoS attacks we've seen recently would be a lot easier to prevent if NAT wasn't a thing (e.g., as an ISP, I could easily shut off a specific device, rather than having to shut off a customer's entire access).
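To make the point concrete: without NAT, the misbehaving device has its own globally routable address, so an abuse filter can null-route a single /128 instead of the whole customer link. A minimal sketch using Python's `ipaddress` module (the prefixes are made-up documentation addresses, not a real ISP's):

```python
import ipaddress

# Hypothetical: the customer's delegated IPv6 prefix and one misbehaving device.
customer_prefix = ipaddress.ip_network("2001:db8:abcd:1200::/56")
offending_device = ipaddress.ip_address("2001:db8:abcd:1234::42")

def block_target(prefix, offender):
    """Return the narrowest thing to null-route: the single /128 of the
    offending device, provided it actually sits inside the customer prefix."""
    if offender in prefix:
        return ipaddress.ip_network(f"{offender}/128")
    # Not our customer's address -- nothing for us to filter.
    return None

route = block_target(customer_prefix, offending_device)
print(route)  # 2001:db8:abcd:1234::42/128 -- one device, not the whole customer
```

With NAT, everything behind the customer's router shares one public IPv4 address, so the finest-grained thing the ISP can act on is the entire customer.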
>In addition, some of the DDoS attacks we've seen recently would be a lot easier to prevent if NAT wasn't a thing
I know that NAT is not a security feature but, pragmatically, one could argue the opposite: lots of vulnerable devices today aren't part of a botnet only because they haven't been discovered, since they're hidden behind NAT.
That's the reason why IP webcams are so popular in recent botnets: usually they need to be remotely accessible, so they sit outside NAT (or, occasionally, they get port forwarding or are placed in some sort of DMZ).
>I know that NAT is not a security feature but, pragmatically, one could argue the opposite: lots of vulnerable devices today aren't part of a botnet only because they haven't been discovered, since they're hidden behind NAT.
But in the same way, I feel like NAT has allowed a false sense of security. Maybe if NAT wasn't there to hide everyone's PC, more machines would be broken into, and device security would be a lot better today.
I hope that with IPv6, NAT disappears. But almost all devices come with only one network interface, which assumes it's on the LAN but still needs to access the Internet... How can that be done without NAT?
A single IPv6 interface can have multiple IP addresses. You can either use your global IPv6 addresses for LAN communication or additionally use a unique local address (https://en.wikipedia.org/wiki/Unique_local_address) on that same interface. There's no need for NAT at any point.
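Python's `ipaddress` module can illustrate the split: a unique local address lives in fc00::/7 and stays on the LAN, while a global unicast address is Internet-routable, and both can be configured on the same interface at the same time. (The addresses below are example/documentation values.)

```python
import ipaddress

# Two addresses that could coexist on one IPv6 interface:
global_addr = ipaddress.ip_address("2001:db8::1")        # global unicast (documentation prefix)
local_addr = ipaddress.ip_address("fd12:3456:789a::1")   # unique local address (ULA)

# ULAs are carved out of fc00::/7 (RFC 4193).
ula_range = ipaddress.ip_network("fc00::/7")

print(local_addr in ula_range)   # True  -- LAN-only traffic uses this one
print(global_addr in ula_range)  # False -- Internet traffic uses the global one
```

The host's source-address selection picks the ULA for LAN destinations and the global address for Internet destinations, so no translation is ever needed.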
I'm not sure if I'd want that, for reasons of privacy and security. At the moment, your device's IP (phone, computer, laptop) is usually shared with other devices, due to the scarcity of IPv4. If this gets dropped, couldn't some providers get the idea to statically assign IP addresses to each device?
Most people wouldn't know how to rotate IP addresses of their devices even if it was possible. Having one static address (or even a subnet) for each device seems like the worst thing that could happen to privacy.
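For what it's worth, IPv6 "privacy extensions" (RFC 4941) do this rotation automatically: the OS periodically generates a fresh, random interface identifier inside the delegated /64 and uses it for outgoing connections. A rough sketch of the idea (not the actual RFC 4941 algorithm, which is stateful):

```python
import ipaddress
import secrets

def temporary_address(prefix: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Pick a random interface identifier inside a /64 -- roughly what
    RFC 4941 privacy extensions do for outgoing connections."""
    assert prefix.prefixlen == 64
    iid = secrets.randbits(64)  # random lower 64 bits
    return ipaddress.IPv6Address(int(prefix.network_address) | iid)

prefix = ipaddress.ip_network("2001:db8:1:2::/64")  # example prefix
addr = temporary_address(prefix)
print(addr, addr in prefix)  # a fresh address inside the /64 each run
```

The user never has to do anything; most operating systems ship with this enabled by default, which blunts the per-device tracking concern somewhat (the /64 prefix itself can still identify the household, of course).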
I also could imagine that having all phones exposed directly could make vulnerabilities much worse. It's bad enough that the recent attacks were possible because cameras were exposed via UPNP, but as far as I know, it wouldn't easily be possible to build a large botnet of smartphones just because you know a vulnerability in their network communication.
AFAICT smartphones are mostly exposed already, they usually get an IPv6 address, along with some IPv4 connectivity behind the phone company's NAT.
OTOH I just tried to ping6 my phone, and got a 'no route to host'. I wonder if it's my wired connection's problem, or a security measure from the phone company's side.
Why perform a surgical drone strike over a carpet bombing?
The exact same reasoning applies to the above suggestion of just blocking access from the rogue device and then telling the customer that "The device with the address of X was misbehaving, please get it fixed and let us know".
Well, you can, but if you do it to lots of people at once your customer support phone banks will get flattened and then everybody else who has a problem at the same time will be angry at you too.
I did ISP operations once back in the day and balancing "will this make things better or will it just crush our CS dept a different way?" was one of the things you had to think about.
Fascinating. The growth of architecture containerization will help facilitate this transition away from high latency centrally-located services. At a high level, this means users on a particular continent will see their continent's 'shard' of data much more quickly than off-continent shards. I wonder if we'll start to see prioritization of locally-available data in algorithmic content aggregators (Facebook, Reddit, G+) because of this.
Also wonder if different regions will impose restrictions on building these duplication services in hopes of promoting growth of their own content-producing industries.
I mean, China is already doing this with their firewall. Maybe we'll see the WTO grow to prevent this kind of manipulation.
With the proliferation of PAAS providers (Firebase, for example) I wonder if there will be transparent ways to proactively structure access to data to prioritize latency.