Not sure what you mean by "recipe," but it can create new output that doesn't exist on the internet. A lot of the output is going to be nonsense, especially stuff that cannot be verified just by looking at it. But it's not accurate to describe it as just a search engine.
>A lot of the output is going to be nonsense, especially stuff that cannot be verified just by looking at it.
Isn't that exactly the point, and why there should be a warning/awareness that it is not a 160-IQ AI but a very good Markov chain that can sometimes infer things and other times hallucinate, stringing random words together in a very well-articulated way (an echo of Sokal, maybe)?
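To be concrete about the "Markov chain" comparison, here's a toy word-level sketch (Python, with an arbitrary made-up corpus, so just an illustration of the general idea, not how an LLM actually works): it produces locally fluent word sequences with no notion of whether the result is true.

    import random
    from collections import defaultdict

    # Toy word-level Markov chain: learn which word tends to follow which,
    # then sample text that looks locally fluent but has no global understanding.
    # The training text here is an arbitrary placeholder.
    corpus = (
        "the model predicts the next word from the previous words and "
        "the output can look fluent while the content is not verified"
    ).split()

    # Build a table: word -> list of words observed to follow it.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start: str, length: int = 12) -> str:
        word = start
        out = [word]
        for _ in range(length):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    print(generate("the"))

The output reads like plausible prose fragments precisely because every local transition was seen in the data, which is the "well articulated but possibly meaningless" failure mode I'm pointing at.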
My random number generator can create new output that has never been seen before on the internet, but that is irrelevant to the conversation. Can an LLM derive, from scratch, the steps to create a working nuclear bomb, given nothing more than a basic physics textbook? Until (if ever) AI gets to that stage, all such concerns about danger are premature.
> Can an LLM derive, from scratch, the steps to create a working nuclear bomb, given nothing more than a basic physics textbook?
Of course not. Nobody in the world could do that. But that doesn't mean it can only spit out things that are already available on the internet, which is what you originally stated.
And nobody is worried about the risks of ChatGPT giving instructions for building a nuclear bomb. That is obviously not the concern here.