Only if the LLM is able to take the novel, unexpected connection into itself and see whether it forms other consistent networks, ones that settle into newly common associations and paths.
A lot of us have had that experience. We use that ability to distinguish between 'genius thinkers' and 'kid overdosing on DMT'. It's not the ability to turn up the weird connections and go 'ooooh sparkly', it's whether you can build new associations that prove to be structurally sound.
If that turns out to be something self-modifying large models (not necessarily 'language' models!) can do, that'll be important indeed. I don't see fiddling with the 'temperature' as the same thing, that's more like the DMT analogy.
You can make the static model take a trip all you like, but if nothing changes, nothing changes.
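For concreteness: sampling temperature only rescales a frozen model's output distribution before the softmax; the weights themselves never move. A minimal sketch (NumPy, with made-up logits standing in for a real model's output):

```python
import numpy as np

# Hypothetical next-token logits from a frozen model.
logits = np.array([2.0, 1.0, 0.5, -1.0])

def temperature_softmax(logits, temperature):
    # Temperature divides the logits before softmax; nothing
    # about the model that produced them is modified.
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

low = temperature_softmax(logits, 0.5)   # sharper: favors the top token
high = temperature_softmax(logits, 2.0)  # flatter: more 'weird' picks
```

Cranking the temperature up makes the flat distribution more likely to surface strange tokens, but the underlying associations are exactly the ones that were already there.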