Hacker News

I guess if the crawlers can't actually tell it's a trap, the misinformation could end up attributed to your website, in the case where model responses expose content-attribution tags to end users.


LLMs already make up citations, so everyone would assume it was just the model spewing nonsense.



