If we knew that, we wouldn't need LLMs; we could just hardcode the same logic that is encoded in those neural nets directly and far more efficiently.
But we don't actually know what the weights do beyond very broad strokes.