The problem is the overloading of the word "safety", not really the approach. If there were some mortal danger stemming from this, then sure, maybe more objective oversight would be needed. But it's a stunt anyway and there's no danger involved, so whatever.
First, I think you correctly put the word in sarcasm quotes. OpenAI now has a "Safety" team.
But second, there definitely can be risk to blood and treasure in LLM use. Can the LLM reach out and stab the user? No. Can the LLM "hallucinate" and generate advice that, when acted on, has harmful results? Obviously yes. And the moment you say "So you shouldn't just act on the advice of the LLM," I have to ask: then what's the point of having one?
I'm using loose phrasing here, so let's be more concrete. When Google's Gemini recommended that a user open the door on the back of the camera and tug the film loose if it wouldn't advance, that recommendation would have cost the user every photo on the roll of film. A small loss of value. But when a user gets similar "advice" on how to clean something and the advice leads to harmful chemical reactions in the material, or worse, gets medical advice that leads to injury, there is some responsibility on Google's part, or on OpenAI's part, for having provided a faulty product.
Part of the issue here is that we're attempting to replace search engines with answer engines, but answer engines require understanding, and these LLMs just aren't there yet. They're fine for entertainment, but most users out in the world don't think of these systems that way.