Hacker News

I think we're talking about entirely different levels of understanding.

We do, of course, understand ML models at a pretty deep level. What we can't do is identify the weight values that encode everything the model knows about squirrels, or point to a particular neuron and say "this is why it hallucinates." We do not grok these models, and I seriously doubt it's even possible for a human to grok a 13B-parameter LLM.
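A rough back-of-envelope sketch of the scale problem (the one-weight-per-second rate is a purely illustrative assumption, not a measurement):

```python
# Back-of-envelope: why "grokking" a 13B-parameter model weight-by-weight is hopeless.
# The inspection rate below is an illustrative assumption, not a real figure.

PARAMS = 13_000_000_000              # 13B parameters
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Suppose a human could inspect one weight per second, nonstop, forever.
years_to_read = PARAMS / SECONDS_PER_YEAR
print(f"{years_to_read:.0f} years")  # on the order of four centuries
```

Even just *looking at* every weight once is a multi-century job under these generous assumptions, before any question of understanding how the weights interact.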





