I think we're talking about entirely different levels of understanding.
We do, of course, understand ML models at a pretty deep level. What you can't do is identify the weight values that encode all information about squirrels. You can't point to a particular neuron and say "this is why it hallucinates." We do not grok these models, and I seriously doubt it's even possible for a human to grok a 13B-parameter LLM.