An AI that is not explainable is not unexplainable because it cannot log things; it's because the semantic interpretation of what it can log is hard. Starting from the real-world input (which we understand), many algorithms progressively apply mathematical transformations until reaching the output. It is the real-world "meaning" of these transformations, or of what is eventually learned (the stack of these transformations), that is hard to grasp.
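A minimal sketch of that point, assuming a toy feed-forward network with made-up weights: every intermediate value can be logged, but the logs are just arrays of numbers whose real-world meaning is not obvious.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 3-layer network; weights would normally be learned, here just random.
    weights = [rng.standard_normal((4, 8)),
               rng.standard_normal((8, 8)),
               rng.standard_normal((8, 2))]

    x = np.array([0.5, -1.2, 3.0, 0.7])  # real-world input we understand

    activation = x
    for i, W in enumerate(weights):
        activation = np.tanh(activation @ W)  # one learned transformation
        # Logging is trivial -- but what does this vector *mean*?
        print(f"layer {i} activation: {activation}")

    print("output:", activation)

Every line of that log is available, yet none of it maps cleanly back to a human-level concept; that gap is the explainability problem, not the absence of logging.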
