Hacker News

This is actually a very interesting insight: not only do you have to worry about sponsored results, but people could game the system by spamming their library or language in places that will be included in models' training sets. This also presents a significant security challenge, because an attacker can spam a malicious library or package in channels that get picked up in the training set and have that package be recommended by the LLM.
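One defensive pattern against this kind of attack is to never install an LLM-suggested dependency directly, but to check it against a vetted allowlist first. A minimal sketch (the allowlist contents and package names here are illustrative, not real recommendations):

```python
# Guard against installing unvetted, possibly LLM-hallucinated or
# typosquatted packages: only names on a reviewed allowlist pass.
# VETTED_PACKAGES is a placeholder; in practice it would come from an
# organization's internal registry or lockfile.

VETTED_PACKAGES = {"requests", "numpy", "flask"}

def filter_suggestions(suggested):
    """Split LLM-suggested package names into vetted and unvetted lists."""
    vetted = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    unvetted = [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return vetted, unvetted

if __name__ == "__main__":
    ok, suspect = filter_suggestions(["requests", "requestz"])
    print(ok)       # ['requests']
    print(suspect)  # ['requestz'] -- typosquat-style name, needs review
```

The same check can be wired into CI so that a pull request adding an unvetted dependency fails before it ever reaches an install step.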

