Hacker News

> If you were in charge of a large and well funded model, would you rather pay people to find and "cheat" on LLM benchmarks by training on them, or would you pay people to identify benchmarks and make reasonably sure they specifically get excluded from training data?
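The second option the quote describes, excluding known benchmarks from training data, is usually called decontamination. A minimal sketch of the common n-gram overlap approach, where training documents sharing long token sequences with benchmark items get dropped (all names here are illustrative, not from any real pipeline):

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def decontaminate(train_docs, benchmark_items, n=13):
    """Drop training docs sharing any n-gram with a benchmark item.

    n=13 is a commonly cited window size; smaller n is stricter.
    """
    contaminated = set()
    for item in benchmark_items:
        contaminated |= ngrams(item.lower().split(), n)
    clean = []
    for doc in train_docs:
        if ngrams(doc.lower().split(), n) & contaminated:
            continue  # overlap found: likely benchmark leakage
        clean.append(doc)
    return clean
```

Real pipelines do this at scale with hashing or Bloom filters rather than exact set intersection, but the incentive question stands either way: this filtering costs money and can only lower your headline scores.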

You should already know by now that economic incentives are not always aligned with science/knowledge...

This is the true alignment problem, not the AI alignment one hahaha



The AI alignment problem and the people alignment problem are actually the same problem! :D

One is just a bit harder due to the less familiar mind "design".



