
It's the usual pattern of AI safety experts who justify their existence with the "risk of runaway superintelligence", but all they actually do in practice is figure out how to stop their models from generating non-advertiser-friendly content. It's like nuclear safety engineers focusing on what color to paint the bike shed rather than on stopping the reactor from melting down. The end result is that people stop respecting them.



Like safety engineers trying to run turbines on minimal reactor power so that the safety test has a checkmark by the 1st of May?

(Hmm, I guess this comparison doesn't actually work...)



