
It’s kind of surprising that these haven’t been reverse-engineered by language models yet.


That's simply not how LLMs work; they're actually awful at reverse engineering of any kind.


Are you saying that they can’t explain the contents of machine code in a human-readable format? Are you saying they can’t be used in a system that iteratively evaluates combinations of inputs and checks their results?
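
Something like this sketch of the loop I mean, where ask_llm and ./target are hypothetical placeholders (not a real API or binary), so the structure is illustrative rather than a working tool:

    import subprocess

    def run_target(candidate: str) -> str:
        # Run the target binary with a candidate input and capture stdout.
        # "./target" is a placeholder path, not a real artifact.
        result = subprocess.run(
            ["./target"], input=candidate, capture_output=True, text=True
        )
        return result.stdout

    def ask_llm(prompt: str) -> list[str]:
        # Hypothetical: a real system would call a language model here and
        # parse its suggested inputs. This stub returns fixed guesses so
        # the loop structure stays runnable.
        return ["AAAA", "admin", "0000"]

    def search(goal: str, rounds: int = 10) -> str | None:
        history: list[tuple[str, str]] = []
        for _ in range(rounds):
            prompt = f"Goal: {goal}\nTried: {history}\nPropose new inputs."
            for candidate in ask_llm(prompt):
                output = run_target(candidate)
                if goal in output:      # check each attempt's result
                    return candidate    # found an input producing the goal
                history.append((candidate, output))
        return None

The point being that the model only proposes candidates; the binary itself is the ground truth for whether a guess worked.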


Just that they're horrible at it



