nialv7's comments

Was coming here to comment the exact same thing. Significant indentation makes me shudder.

Is it just me, or does the title make it seem like the conference call was the cause of the crash?

Obviously the developer of Anubis thinks it is bypassing: https://github.com/TecharoHQ/anubis/issues/978

Fair, then I obviously think Xe may have a somewhat misguided understanding of their own product. I still stand by the point I stated above.

latest update from Xe:

> After further investigation and communication. This is not a bug. The threat actor group in question installed headless chrome and simply computed the proof of work. I'm just going to submit a default rule that blocks huawei.


this kinda proves the entire project doesn't work if they have to resort to manual IP blocking lol

It doesn't work for headless chrome, sure. The thing is that often, for threats like this to work, they need lots of scale, and they need it cheaply, because the actors are just throwing a wide net and hoping to catch something. Headless chrome doesn't scale cheaply, so by forcing script kiddies to use it you're pricing them out of their own game. For now.
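To make the cost argument concrete: challenges of this kind are hashcash-style proof of work, where the client brute-forces a nonce until a hash meets a difficulty target. A minimal sketch (assuming a generic SHA-256 leading-zero-bits scheme, not Anubis's exact protocol):

```python
import hashlib
import os
import time

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so SHA-256(challenge || nonce) has at least
    `difficulty_bits` leading zero bits. Illustrative hashcash-style
    scheme; not the actual Anubis wire protocol."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

challenge = os.urandom(16)
start = time.perf_counter()
nonce = solve_pow(challenge, 16)  # ~2^16 hashes on average
elapsed = time.perf_counter() - start
```

At 16 difficulty bits this takes a fraction of a second of CPU time, which is why a scraper that simply runs the script (in headless chrome or anywhere else) pays almost nothing per request; the hoped-for deterrent is operational overhead, not hash cost.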

Doesn't have to be black or white. You can have a much easier challenge for regular visitors if you block the only (and giant) party that has implemented a solver so far. We can work on both fronts at once...

The point is that it isn't "implementing a solver", it's just using a browser and waiting a few seconds.

That counts as something that can solve it, yes. Apparently there's now exactly one party in the world that does that (among the annoying scrapers that this mechanism targets). So until there are more...

> Fuck AI scrapers, and fuck all this copyright infringement at scale.

Yes, fuck them. Problem is, Anubis here is not doing the job. As the article already explains, Anubis currently isn't adding a single cent to the AI scrapers' costs. For Anubis to become effective against scrapers, it will necessarily have to become quite annoying for legitimate users.


Best response to AI scrapers is to poison their models.

how well is modern poisoning holding up?

I'll tell you in a second. First I wanna try adding gasoline to my spaghetti as suggested by Google's search

A balanced diet of hydrocarbons in your carbohydrates!

To the best of my knowledge, it never really worked.

Yes, it probably works in the lab, in carefully picked conditions, but in the wild I've yet to see any effect whatsoever. Nobody in the AI communities seems to be complaining about it, models keep getting better, and people even intentionally trained on poisoned images just to show it can be done.

IMO in the long run it's a complete dead end of a strategy. Models are many; poisoning can't target everything at once. Even effective poisoning can simply be dealt with by finding an algorithm that doesn't care about it.


What about appealing to ethics, i.e. posting messages about how a poor catgirl ended up on the street because AI took her job? To make AI refuse to reply due to ethical concerns?

Uh, why haven't we drilled it into people's brains that regex cannot be used to parse matching parentheses/brackets?
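The underlying reason is that balanced brackets form a non-regular language, so no single regular expression can match them to arbitrary depth; you need a counter or a stack. A minimal sketch (hypothetical helper, for illustration):

```python
def is_balanced(text: str) -> bool:
    """Check that (), [], {} nest correctly using a stack — something a
    true regular expression cannot do, since balanced brackets are not
    a regular language (pumping lemma)."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            # Closing bracket must match the most recent opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # every opener must have been closed

# is_balanced("f(g(x), [1, 2])")  → True
# is_balanced("f(g(x)]")          → False
```

(Some regex engines offer recursive extensions like PCRE's `(?R)`, but those are no longer regular expressions in the formal sense.)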

It's created by Progress, which IIUC is a movement under the Labour party?

I mean, you are literally in power, you can just change it. What's the point of this?


Too bad the world isn't run by people like him.

It is just sad we are in a time where measures like Anubis are necessary. The author's efforts are admirable, so I don't mean this personally: but Anubis is a bad product IMHO.

It doesn't quite do what it's advertised to do, as evidenced by this post; it degrades the user experience for everybody; and it stops the website from being indexed by search engines (unless specifically configured otherwise). For example, gitlab.freedesktop.org pages have simply disappeared from Google.

We need to find a better way.


needs a "(2020)" in the title. this is not an active project.


I think even (2019)? According to the YouTube videos.

Previous discussion: https://news.ycombinator.com/item?id=31794872


Well, haven't we seen similar results before? IIRC finetuning for safety or "alignment" degrades the model too. I wonder if it is true that finetuning a model for anything will make it worse. Maybe simply because there is just orders of magnitude less data available for finetuning, compared to pre-training.


Careful, this thread is actually about extrapolating this research to make sprawling value judgements about human nature that conform to the preexisting personal beliefs of the many malicious people here making them.

