Hacker News

Have you considered that useful predictive models for reality are simply irreducible to being comprehensible by swollen savannah monkey brains? It's not magic. It simply cannot both be explained to a human and be worth anything.



By the Good Regulator Theorem, if models of reality were incapable of being made comprehensible, we would have gone extinct. What we need to figure out, then, is how to externalize what we know unconsciously.
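As a toy illustration of what Conant and Ashby's theorem claims (this is a made-up minimal simulation, not their formal proof): a regulator that embodies a mapping from disturbances to cancelling actions — i.e., a model of the system — regulates perfectly, while one without such a model lets disturbances pass through.

```python
import random

random.seed(0)

def run(regulator, steps=1000):
    """Simulate a system whose outcome is disturbance + regulator action.
    A good regulator keeps the outcome at the goal value (here 0)."""
    total_error = 0.0
    for _ in range(steps):
        d = random.choice([-1, 1])  # disturbance, observable by the regulator
        r = regulator(d)            # regulator's counter-action
        z = d + r                   # system outcome
        total_error += abs(z)
    return total_error / steps

# Embodies a model of the disturbance: maps each one to its cancelling action.
model_based = lambda d: -d

# Ignores the disturbance entirely — no model of the system.
model_free = lambda d: 0

print(run(model_based))  # 0.0 -- perfect regulation
print(run(model_free))   # 1.0 -- every disturbance passes through
```

The point of the theorem is that any regulator achieving the first outcome must, in effect, contain the `model_based` mapping, whether or not anyone can read it out.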


Sounds interesting, but I'm not sure if I understand.

From Wikipedia: "every good regulator of a system must be a model of that system".

How is that related to comprehensibility? Or going extinct?

A lot of smart people seem to be surprised that anything can be comprehended at all:

https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

The ineffectiveness of mathematics at describing intelligence seems unsurprising to me.


Can a swollen savannah monkey brain drive a car safely on public roads, using nothing but its attached eyes and ears? Yes (most of the time). Can AI do it? Not yet...


Sure, you can drive, but can you explain in detail HOW you do it? Probably not, and that's why a machine can't do it yet (until it learns by itself, as we all did).


Not an AI in a publicly available release that Joe Consumer can obtain.

Supposedly the development versions that we, the general public, can't get yet are getting pretty good and would meet your “most of the time” bar on public roads.

But we don’t really know how good AI is at any given moment, because those who are leading in any AI area sometimes have reasons not to tell yet. Or because we don’t believe them when they do tell us.



