> After further investigation and communication, this is not a bug. The threat actor group in question installed headless Chrome and simply computed the proof of work. I'm just going to submit a default rule that blocks Huawei.
It doesn't work for headless Chrome, sure. The thing is that for threats like this to work, they often need lots of scale, and they need it cheaply, because the actors are just throwing a wide net and hoping to catch something. Headless Chrome doesn't scale cheaply, so by forcing script kiddies to use it you're pricing them out of their own game. For now.
Doesn't have to be black or white. You can have a much easier challenge for regular visitors if you block the only (and giant) party that has implemented a solver so far. We can work on both fronts at once...
That counts as something that can solve it, yes. Apparently there's now exactly one party in the world that does that (among the annoying scrapers that this mechanism targets). So until there are more...
> Fuck AI scrapers, and fuck all this copyright infringement at scale.
Yes, fuck them. The problem is that Anubis here is not doing the job. As the article already explains, Anubis currently isn't adding a single cent to the AI scrapers' costs. For Anubis to become effective against scrapers, it will necessarily have to become quite annoying for legitimate users.
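For context on the cost argument in this subthread: the mechanism under discussion is a hash-based proof-of-work gate. Below is a minimal sketch of how such a challenge works in principle; the function names and the exact hash construction are my own illustration, not Anubis's actual scheme. The key asymmetry is that each extra bit of difficulty doubles the client's expected work (~2^difficulty hashes) while the server still verifies with a single hash, which is what makes per-client difficulty tuning (easy for regular visitors, hard for flagged traffic) possible at all.

```python
import hashlib

def solve_pow(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge:nonce) starts with
    `difficulty_bits` zero bits. Expected cost: ~2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Server-side check: one hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# At difficulty 8 the client does ~256 hashes; milliseconds on any device,
# but the cost grows exponentially as difficulty is raised for abusers.
nonce = solve_pow("example-challenge", 8)
print(verify("example-challenge", nonce, 8))
```

This also illustrates the thread's dispute: a scraper that runs real JavaScript (e.g. headless Chrome) pays this cost per request, but the cost only bites if the difficulty is set high enough to matter, which in turn slows down legitimate visitors on low-end hardware.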
To the best of my knowledge, it never really worked.
Yes, it probably works in the lab, under carefully chosen conditions, but in the wild I've yet to see any effect whatsoever. Nobody in the AI communities seems to be complaining about it, models keep getting better, and people have even intentionally trained on poisoned images just to show it can be done.
IMO in the long run it's a complete dead end of a strategy. Models are many; poisoning can't target everything at once. And even effective poisoning can be dealt with by simply finding an algorithm that doesn't care about it.
What about appealing to ethics instead, i.e. posting messages about how a poor catgirl ended up on the street because AI took her job, to make the AI refuse to reply due to ethical concerns?
It is just sad that we are in a time where measures like Anubis are necessary. The author's efforts are admirable, so I don't mean this personally: but Anubis is a bad product IMHO.
It doesn't quite do what it is advertised to do, as evidenced by this post; and it degrades user experience for everybody. And it also stops the website from being indexed by search engines (unless specifically configured otherwise). For example, gitlab.freedesktop.org pages have just disappeared from Google.
Well, haven't we seen similar results before? IIRC finetuning for safety or "alignment" degrades the model too. I wonder if it is true that finetuning a model for anything will make it worse. Maybe simply because there are orders of magnitude less data available for finetuning, compared to pre-training.
Careful, this thread is actually about extrapolating this research into sweeping value judgements about human nature that conform to the preexisting personal beliefs of the many malicious people here making them.