
Because when things go sideways a human can't fix anything without copious amounts of reading/testing/poking around? I'm taking your meaning of "explain" literally here, which might be shortsighted.

Either way, the idea of machines or systems as "living" and able to communicate intent and process, even if only within their own "umwelt", is really interesting. Even a taste of that would make modern systems easier to debug and understand, if not more robust (which would be a better starting point for many systems anyway, I suppose).

I believe he is referring to something like expert-system explanations, which were a holy grail 20 years ago (I don't know whether it was ever achieved), as opposed to neural networks, which are more like black boxes (at least to me).
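
For concreteness, here's a toy sketch of the kind of rule tracing a classic expert-system shell could do (Python, all rule names hypothetical): every time a rule fires you record why, so the system can replay its chain of inferences on demand.

    # Toy forward-chaining rule engine that remembers why each
    # fact was derived. A hypothetical sketch, not a real shell.

    rules = [
        ({"has_feathers", "lays_eggs"}, "is_bird"),
        ({"is_bird", "cannot_fly"},     "is_flightless_bird"),
    ]

    def infer(facts):
        """Derive new facts; record which premises produced each one."""
        explanations = {}
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    explanations[conclusion] = premises
                    changed = True
        return facts, explanations

    def explain(fact, explanations):
        """Print the chain of rule firings behind a derived fact."""
        if fact not in explanations:
            print(f"{fact}: given as input")
            return
        premises = explanations[fact]
        print(f"{fact}: because {' and '.join(sorted(premises))}")
        for p in premises:
            explain(p, explanations)

    facts, why = infer({"has_feathers", "lays_eggs", "cannot_fly"})
    explain("is_flightless_bird", why)

The explanation falls out of the bookkeeping because the reasoning steps are explicit symbols; in a neural network the "reasoning" is smeared across weights, so there's nothing comparable to trace.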


Ah, I see, that's quite interesting. So the idea is a system that could explain its own decision-making and inferences?

Neural networks definitely are black boxes, at least at the level of individual models. Sure, the general concept stays the same, but the internals differ and are hidden from case to case.
