
Data is a long-tail problem, in the sense that there is never enough of it to make ML work the way a person does. With geometrically increasing effort, you can find more and more patterns that a "dumb" system will recognize, and with enough of them maybe you can drive mistakes down far enough for it to work in your application. But it will never be intelligent; there will always be new inputs not present in the training data that make it screw up. This is a big contributor to why self-driving never happened, and why we were sliding into an AI winter before ChatGPT (which is cool but has the same problem). A toy sketch of the long-tail point is below.
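(A toy sketch, my own illustration rather than anything from the parent comment: assume input types follow a hypothetical Zipf distribution. Then ten times more training data only modestly shrinks the fraction of deployment-time inputs that were never seen in training, which is the long-tail point above.)

  import numpy as np

  # Toy long-tail illustration: input types drawn from a heavy-tailed (Zipf) distribution.
  rng = np.random.default_rng(0)
  test = rng.zipf(1.2, size=100_000)  # "real world" inputs seen at deployment time

  for n_train in (10_000, 100_000, 1_000_000, 10_000_000):
      train = np.unique(rng.zipf(1.2, size=n_train))   # input types covered by training data
      unseen = ~np.isin(test, train)                   # test inputs never observed in training
      print(f"train={n_train:>10,}  unseen at test time: {unseen.mean():.1%}")

Each 10x increase in training data buys only a small reduction in the unseen fraction, which never reaches zero.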


The data present in the conscious experience of human beings is something I've been thinking about a lot lately. It's definitely an important part of the puzzle, it's missing from the training data, and it's not clear it ever can be included. It leads me to wonder whether the only way AGI could ever really happen is to have embodied, embedded, emotional agents that go around making mistakes and learning from them like we do.


I've thought about this as well: what effect would a machine whose data was all collected through colocated sensory systems, over a continuous period of time, have on our approach to building a model for AGI?

I don't believe that would be required to create an AGI, but I do believe this experience would be necessary for an AGI to form concepts of 'self' and 'others' similar to the ones we have.

AGI itself might well just be a combination of various specialized models, not tied to any concept of corporeal existence, individual identity, or awareness.


Discussion of an article on that topic here, FWIW:

https://news.ycombinator.com/item?id=31660272



