The current implementation of the AMP cache servers obviously doesn't help decentralization.
I think what Spivak is saying is right, though. If we could move from location addressing (DNS + IP) to content addressing in general, rather than via the AMP cache servers, then anyone could serve any content on the web. Add signing on top of the content addressing, and you can also verify that the content really comes from, say, the NYTimes.
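To make that concrete, here is a minimal sketch of content addressing plus publisher signing. It assumes Python's hashlib and the `cryptography` package for Ed25519 keys; `content_address` and the publish/fetch flow are made up for illustration, not any real AMP or web-packaging API.

```python
# Minimal sketch: the address is derived from the content itself,
# so anyone can serve the bytes and the client can verify them.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def content_address(body: bytes) -> str:
    # Re-hashing what you fetched proves you got the right bytes,
    # regardless of which server handed them to you.
    return hashlib.sha256(body).hexdigest()

# Publisher side (e.g. NYTimes): sign the content once.
publisher_key = Ed25519PrivateKey.generate()
body = b"<html>...article...</html>"
address = content_address(body)
signature = publisher_key.sign(body)

# Client side: fetched from *any* untrusted mirror.
fetched = body
assert content_address(fetched) == address  # integrity check
try:
    publisher_key.public_key().verify(signature, fetched)
    print("content verified as coming from the publisher")
except InvalidSignature:
    print("reject: not signed by the publisher")
```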
Also, I'd say that the internet (transports, piping, glue) is decentralized. The web is not. Nothing seems to interoperate, and most web properties are fighting against each other rather than working together. That's not at all how the internet is built. The web is basically ~10 big silos right now, most of which would probably kill their API endpoints if they could.
I think this would require an entirely new user interface to make it abundantly clear that publisher and distributor are separate roles and can be separate entities.
I don't think this should be shoehorned into the URL bar, or into some meta info that no one ever reads because it's hidden behind an obscure icon.
Isn't that already the case, though, with Cloudflare and other CDNs serving most of the content? Very few people really get content from the actual origin server anymore.
That's a good point. I just feel that there is an important distinction to be made between purely technical distribution infrastructure like Cloudflare's and the sort of recontextualisation that happens when you publish a video on YouTube. I'm not quite sure where between these two extremes AMP sits.
Thank you for this explanation. AMP has left a really bad taste in my mouth, but what you describe here does have some interesting implications. Something to consider, for sure.
Please fact check me on this, but the ostensible initial justification for AMP wasn't decentralization, but speed. Businesses had started bloating up their websites with garbage trackers and other pointless marketing code that slowed down performance to unbrowsable levels. Some websites would cause your browser to come close to freezing because of bloat.
So Google tried to formalize a small subset of technologies for publishers to use to allow for lightning-fast reading; in other words, saving them from themselves. AMP might be best viewed as a technical attempt to solve a cultural problem: you could already achieve fast websites by being disciplined in the site you build; Google was just able to use its clout to force publishers to do it.
As for what it's morphed into, I'm not really a fan, because Google is trying to capitalize on it and publishers are finding various tricks to reintroduce bloat into AMP anyway. The right answer might be for Google to just drop it and rank page speed for normal websites far higher than it already does.
They're suggesting a web technology that would allow any website to host content for any other website, under the original site's URL, as long as the bundle is signed by the original site. That could be quite interesting for a site like archive.org, as the URL bar could show the original URL (see the sketch below).
But AMP is a much narrower technology; I'd imagine only Google would be able to impersonate other websites, so it's essentially centralised, as you say. The generic idea would just be a distraction to push AMP.
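To illustrate the distinction, here's a rough sketch of the browser-side decision the generic idea implies: show the original URL only if the bundle verifies against a key trusted for that origin. All names here are hypothetical; this is not a real browser or AMP API.

```python
# Hypothetical sketch of "which URL does the browser show?"
from dataclasses import dataclass
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

@dataclass
class Bundle:
    claimed_origin: str  # e.g. the original article URL
    body: bytes
    signature: bytes

def display_url(bundle: Bundle, origin_key: Ed25519PublicKey,
                served_from: str) -> str:
    """Show the original URL only if the origin really signed the body."""
    try:
        origin_key.verify(bundle.signature, bundle.body)
        return bundle.claimed_origin  # signed by the origin: show it
    except InvalidSignature:
        return served_from  # unsigned or invalid: show the actual host

# The origin signs once; any mirror (archive.org, a cache, ...) can serve it.
key = Ed25519PrivateKey.generate()
body = b"<html>original article</html>"
bundle = Bundle("https://example.com/article", body, key.sign(body))
print(display_url(bundle, key.public_key(), "https://archive.org/..."))
```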
Everything would be so much better if the original websites were not so overloaded with trackers, ads, and banners; then there would be no need for these “accelerated” versions.
I see where you are going, but what if my website is updated? Is the archive at address _myurl_ invalidated, or is there a new address where it can be found? I am thinking of reproducible URLs for academic references or qualified procedures, for example, which might or might not matter in the intended use case.
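For what it's worth, under a plain content-addressed scheme (which is not something AMP itself specifies), an update would simply produce a new address, while the old address keeps referring to the old bytes for as long as anyone hosts them; a tiny sketch:

```python
# Sketch only: how plain content addressing treats an updated page.
import hashlib

v1 = b"<html>original procedure</html>"
v2 = b"<html>revised procedure</html>"

addr_v1 = hashlib.sha256(v1).hexdigest()
addr_v2 = hashlib.sha256(v2).hexdigest()

# An update is a new document with a new address; the old address
# stays valid for the old bytes, which is what makes citations stable.
assert addr_v1 != addr_v2
```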
Could there be net-neutrality-like questions in all this as well?
AMP is a scourge. It's a bad idea being pushed by bad actors.