It's the usual pattern: AI safety experts justify their existence by invoking the risk of runaway superintelligence, but in practice all they do is figure out how to stop their models from generating non-advertiser-friendly content. It's like nuclear safety engineers debating what color to paint the bike shed instead of keeping the reactor from melting down. The end result is that people stop respecting them.