I believe that a number of AI bots only respect robots.txt entries that explicitly name their static user agent. They ignore wildcard (`User-agent: *`) entries.
That counts as "barely", imho.
I found this out after OpenAI was decimating my site and ignoring the wildcard deny-all. I had to add entries specifically for their three bots to get them to stop.
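In case it saves someone else the same debugging, here's roughly what ended up working; I'm assuming the three bots are the crawlers OpenAI currently documents (GPTBot, ChatGPT-User, OAI-SearchBot), so check their docs for the current names:

```
# The wildcard alone was ignored:
User-agent: *
Disallow: /

# Explicit entries for each of OpenAI's documented crawlers:
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /
```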
The Internet Archive actually has technical and moral reasons to ignore robots.txt: their goal is to archive EVERYTHING, so they deliberately route around exclusion rules.
As I recall, this is outdated information. The Internet Archive does respect robots.txt and will remove a site from its archive based on it. I did this a few years after your linked blog post to get an inconsequential site removed from archive.org.
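For anyone who wants to do the same, the mechanism (at least when I did it; the Archive's policy has shifted over the years) was a plain robots.txt entry for `ia_archiver`, the user agent the Archive historically honored for exclusions:

```
User-agent: ia_archiver
Disallow: /
```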
Amazonbot doesn't respect the `Crawl-Delay` directive. To be fair, `Crawl-Delay` is non-standard, but the other three most aggressive crawlers I see do claim to respect it.
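For reference, the directive is a one-liner per user agent; the 10-second value below is just an example, and crawlers that honor it don't all interpret it the same way:

```
User-agent: Amazonbot
Crawl-delay: 10
```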
And how often does it check robots.txt? ClaudeBot will make hundreds of thousands of requests before it re-checks robots.txt to see that you asked it to please stop DDoSing you.
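For contrast, here's a minimal sketch of what re-checking could look like, using Python's `urllib.robotparser`; `ExampleBot` and the hourly interval are placeholder assumptions, not anything ClaudeBot actually does:

```python
from time import time
from urllib import robotparser

ROBOTS_URL = "https://example.com/robots.txt"  # hypothetical site
RECHECK_SECONDS = 3600                         # assumed refresh interval

rp = robotparser.RobotFileParser(ROBOTS_URL)

def allowed(url, agent="ExampleBot"):
    """Check a URL against robots.txt, re-fetching the file when stale."""
    # mtime() starts at 0, so the first call always fetches.
    if time() - rp.mtime() > RECHECK_SECONDS:
        rp.read()      # fetch and parse robots.txt
        rp.modified()  # record the fetch time for mtime()
    return rp.can_fetch(agent, url)
```

A crawler built like this would notice a new Disallow within an hour instead of hundreds of thousands of requests later.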
Here's Google, complaining about pages it wants to index but that I blocked with robots.txt:
> New reason preventing your pages from being indexed
>
> Search Console has identified that some pages on your site are not being indexed due to the following new reason:
>
> Indexed, though blocked by robots.txt
>
> If this reason is not intentional, we recommend that you fix it in order to get affected pages indexed and appearing on Google.
>
> Open indexing report
>
> Message type: [WNC-20237597]
They are not complaining. You configured Google Search Console to notify you about problems that affect the search ranking of your site, and that's what it does. If you don't want to receive these messages, turn them off in Google Search Console.