
However, the reliability of model outputs is questionable, as LLMs may make formatting errors and occasionally exhibit rebellious behavior (e.g., refusing to follow an instruction).

Right… sounds quite reckless?



No, it just misunderstands the request, in the sense that it mismaps the input to the expected output.
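
For what it's worth, the "formatting errors" failure mode is usually handled with a validate-and-retry loop around the model call. A minimal sketch in Python, assuming a hypothetical call_model helper standing in for whatever provider client you actually use:

    import json

    def call_model(prompt: str) -> str:
        # Hypothetical stub -- substitute your provider's client call here.
        return '{"label": "positive"}'

    def get_structured_output(prompt: str, retries: int = 3) -> dict:
        """Ask the model for JSON and re-prompt on formatting errors."""
        for _ in range(retries):
            raw = call_model(prompt)
            try:
                # Fails on malformed JSON, and also on refusals,
                # since a refusal is just more unparseable text.
                return json.loads(raw)
            except json.JSONDecodeError:
                # Remind the model of the required format and try again.
                prompt = prompt + "\nRespond with valid JSON only."
        raise ValueError(f"No valid JSON after {retries} attempts")

    print(get_structured_output('Classify the sentiment of "great movie" as JSON.'))

Refusals tend to surface the same way -- as output that fails the parse -- so the same loop catches both cases.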



