Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

The problem with this is that you have to give your LLM basically unbounded access to everything you have access to, which is a recipe for pain.


Not necessarily. I have a small little POC agentic tool on my side which is fully sandboxed, an it's inherently "non prompt injectable" by the data that it processes since it only ever passes that data through generated code.

Disclaimer: it does not work well enough. But I think it shows great promise.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: