Why does Archive.org get a pass on this one? Signed responses mean that there's a very clear way to leverage the browser's domain blacklisting technology to stop the spread of malware, which isn't presently possible for any content mirrors on the web.
Archive.org makes it clear you are on archive.org. The URL shows archive.org. The page content shows archive.org at the top. [1]
Google AMP doesn't show Google on the page. Google is pushing for the URL to show the origin site's URL instead of Google[2].
If an attacker poisons a nytimes.com article served by Google AMP, how does a browser's domain blacklisting help? Block google? Block nytimes.com? Neither makes sense.
I believe you might be misunderstanding the idea behind signed exchanges. To be clear, Signed Exchanges are how AMP should have worked all along.
example.com generates a content bundle and signs it. Google.com downloads the bundle and decides to mirror it from their domain. Your browser downloads the bundle from google.com, and verifies that the signature comes from example.com. Your browser is now confident that the content did originate from example.com, and so can freely say that the "canonical URL" for the content is example.com.
Malicious.org does the same thing, and the browser sees that malicious.org is on its blocklist. At this point it doesn't matter that the content came from google.com, because the browser knows that the content is signed by malicious.org and so it originated from there.
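The trust flow above can be sketched as a toy model. This is purely illustrative: real signed exchanges use certificate-based signatures over HTTP responses, whereas here an HMAC with a per-origin secret stands in for the origin's private key, and the key table and blocklist are made up. The point it demonstrates is that the blocklist check applies to the *signer*, not to whoever served the bytes.

```python
import hmac
import hashlib

# Hypothetical stand-ins: in reality the browser verifies against the
# origin's certificate, not a shared-secret table like this.
ORIGIN_KEYS = {
    "example.com": b"example-signing-key",
    "malicious.org": b"malicious-signing-key",
}
BLOCKLIST = {"malicious.org"}

def make_bundle(origin, body):
    """Origin signs its own content before handing it to a mirror."""
    sig = hmac.new(ORIGIN_KEYS[origin], body, hashlib.sha256).hexdigest()
    return {"claimed_origin": origin, "body": body, "sig": sig}

def browser_accepts(bundle, served_from):
    """Decide whether to display the bundle under its claimed origin.
    Note that served_from (e.g. google.com) never enters the decision."""
    origin = bundle["claimed_origin"]
    expected = hmac.new(ORIGIN_KEYS[origin], bundle["body"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, bundle["sig"]):
        return False  # signature doesn't match the claimed origin
    if origin in BLOCKLIST:
        return False  # blocklist keys on the signer, not the mirror
    return True       # safe to show origin as the canonical URL

good = make_bundle("example.com", b"<html>article</html>")
bad = make_bundle("malicious.org", b"<html>malware</html>")
print(browser_accepts(good, served_from="google.com"))  # True
print(browser_accepts(bad, served_from="google.com"))   # False
```

Both bundles arrive from google.com, but only the one signed by a non-blocked origin gets displayed, which is why mirroring doesn't launder a blocked domain's content.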
Hope this helps clarify. Obviously blacklisting isn't a great security mechanism; my point is just that signed exchanges don't really open any NEW vectors for attack.
I think the concern was more that if I can XSS example.com, Google is now serving that content for some period of time after example.com's administrators notice and fix it. (In the absence of a mechanism to force AMP to immediately decache the affected page(s), that is.)
Imagine that example.com builds the bundle by pulling data from a database. If an attacker can find a way to store malicious content in that database (stored XSS), and that content ends up in a signed bundle that Google AMP serves (similar to cache poisoning), then users will see malicious content. When the stored XSS is removed from the database, Google AMP may continue to serve the malicious signed bundle, so an extra step may be needed to clear the malicious content from Google AMP.
How exactly the attacker influences the bundle is going to be implementation dependent, so some sites may be safe while others are exploitable.
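To make the timeline concrete, here's a minimal sketch of that staleness window. One mitigating detail worth knowing: signed-exchange signatures carry an expiry, and the spec caps their validity at 7 days, so a poisoned bundle can't be served forever. The cache model below is hypothetical (day numbers instead of real timestamps), but it shows how a cached bundle keeps serving after the origin has cleaned its database, until expiry or an explicit purge.

```python
EXPIRY_DAYS = 7  # the SXG spec limits signature validity to 7 days

cache = {}  # hypothetical AMP-style cache: url -> signed bundle

def publish(url, body, signed_on_day):
    """Cache a signed bundle; it stays valid until its signature expires."""
    cache[url] = {"body": body, "expires": signed_on_day + EXPIRY_DAYS}

def serve(url, today, origin_body):
    """Serve the cached bundle while its signature is still valid;
    otherwise fall back to refetching from the origin."""
    entry = cache.get(url)
    if entry and today < entry["expires"]:
        return entry["body"]  # stale signed copy still verifies
    return origin_body        # signature expired: refetch from origin

publish("/article", "<xss payload>", signed_on_day=0)  # poisoned bundle cached
# Day 1: the origin cleans its database, but without a purge the
# cache keeps serving the old (still validly signed) bundle...
print(serve("/article", today=1, origin_body="clean"))  # <xss payload>
# ...until the signature expires on day 7.
print(serve("/article", today=8, origin_body="clean"))  # clean
```

So the exposure window is bounded by the signature lifetime, but within that window the origin needs some purge mechanism, which is exactly the gap the parent comment is pointing at.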
I think most of the comments in this thread mean "malicious" in the sense of injecting malware (say, a BTC miner) or a phishing attack or something into the signed-exchange content. However, you also have to consider that the content (text, images) itself could be "malicious", in the sense of misinformation.
If, purely as a hypothetical, Russian operatives got a credible propaganda story posted on the NYT website 24 hours before the November elections, and an AMP-hosted version of it stayed live long after the actual post got removed from nyt.com, I'd certainly call that "malicious". Of course, just like archive.org, I suspect that in a case as high-profile as that, you'd see a human from the NYT on the phone with a human at Google to get the cached copy yanked ASAP, but maybe on a slightly smaller scale the delay could be hours-to-days, which is bad enough.