
It seems impossible to explain the inner workings of GPT-3 without access to the model and its weights. Does anyone know whether any methods exist for this?



Since it's impossible to run inference on the model without access to the model and its weights, interpretable AI generally assumes you have access to all of that. Otherwise, why would you want to try to explain the inner workings of something you don't have and can't use?
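That said, limited black-box probing is possible through an API alone. Here's a minimal sketch of occlusion-style attribution: score each input token by how much masking it shifts the model's log-probability of a target continuation. The query_logprob(prompt, target) helper is hypothetical, standing in for whatever API call returns a log-probability in your setup:

    # Black-box occlusion attribution sketch.
    # query_logprob(prompt, target) is a HYPOTHETICAL helper that returns
    # the model's log-probability of `target` given `prompt` (e.g., via an
    # API that exposes logprobs); it is not a real library call.

    def occlusion_scores(tokens, target, query_logprob, mask="[MASK]"):
        # Baseline: log-probability of the target given the full prompt.
        base = query_logprob(" ".join(tokens), target)
        scores = []
        for i in range(len(tokens)):
            # Mask one token and measure how much the target's
            # log-probability drops; bigger drop = more influential token.
            occluded = tokens[:i] + [mask] + tokens[i + 1:]
            scores.append(base - query_logprob(" ".join(occluded), target))
        return scores

It only tells you which inputs mattered, not how the internals computed the output, which is why the field mostly assumes white-box access.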


I asked ChatGPT for some in-depth source code that realistically mimics ChatGPT. It replied with various answers in Python. I'm not sure any of them are correct, though.
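For reference, the kind of toy code such a prompt tends to produce looks something like the sketch below. This is an illustrative reconstruction, not actual ChatGPT output, and a character-level bigram sampler shares essentially nothing with a real transformer:

    # Toy character-level bigram "language model" (illustrative only).
    import random
    from collections import defaultdict

    def train_bigram(text):
        # Map each character to the list of characters that follow it.
        counts = defaultdict(list)
        for a, b in zip(text, text[1:]):
            counts[a].append(b)
        return counts

    def generate(counts, seed, length=80):
        # Sample one character at a time from the observed successors.
        out = [seed]
        for _ in range(length):
            successors = counts.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return "".join(out)

    # Usage: model = train_bigram(some_corpus_string)
    #        print(generate(model, "T"))

Code like this "generates text," so it superficially satisfies the prompt, but it has no attention, no learned embeddings, and no training objective, which is probably why none of the answers felt correct.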



