
Humans are _amazing_ because they can correct the deficiencies of their perception with high-level cognition augmented by memory of past experience. Machines can’t do cognition, and they can’t effectively use past experience either, to say nothing of combining the two.

Current “AI” is basically function approximation and nothing else. And humans do everything they do in a 20W power envelope.
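To make the “function approximation” point concrete, here is a minimal sketch of what training reduces to: a small network fitting y = sin(x) by gradient descent on the output error. Plain numpy; the sizes and learning rate are illustrative, not taken from any particular system.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=(256, 1))      # a pile of example inputs
    y = np.sin(x)                              # the function to approximate

    W1, b1 = rng.normal(0, 1.0, (1, 32)), np.zeros(32)
    W2, b2 = rng.normal(0, 0.1, (32, 1)), np.zeros(1)
    lr = 0.05

    for _ in range(5000):
        h = np.tanh(x @ W1 + b1)               # hidden layer
        pred = h @ W2 + b2                     # the derived function f(x)
        err = pred - y
        # Backpropagate the mean squared error and step downhill.
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    # After training, pred tracks sin(x): error minimization, nothing more.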




"they can’t effectively use past experience either"

While I'm unsure whether the onboard computer of your autonomous car will be able to leverage past experience, I thought it was a foregone conclusion that the telemetry from all the cars on the road would be used to iteratively improve the core model, which would then be distributed as an OS upgrade, effectively teaching your individual unit with the past experience of all the units on the road so far.
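A rough sketch of that loop, with entirely hypothetical names (one plausible shape, in the spirit of federated averaging; this is not how any vendor actually does it):

    import numpy as np

    def retrain(core_weights, fleet_batches, lr=0.01):
        """One aggregation round: average gradient estimates from the fleet."""
        grads = []
        for x, y in fleet_batches:             # telemetry uploaded by each car
            pred = x @ core_weights            # stand-in linear "driving model"
            grads.append(x.T @ (pred - y) / len(x))
        return core_weights - lr * np.mean(grads, axis=0)

    rng = np.random.default_rng(1)
    w = rng.normal(size=(4, 1))                # the shared core model
    fleet = [(rng.normal(size=(32, 4)), rng.normal(size=(32, 1)))
             for _ in range(10)]               # batches from 10 cars
    w = retrain(w, fleet)                      # shipped back as the "OS upgrade"

One car's hard case becomes every car's past experience, but only at the level of the shared weights, which is the crux of the disagreement below.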


But it’s not memory. Currently you just show your neural net a million examples of a thing and it derives a function which, given an example input, minimizes the output error. That’s it. It’s not like “last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition”, all within 20 milliseconds, before you even fully realize you’re about to catch a ball. And that’s not to mention that you also maintain the illusion of a continuous and stable visual field, in stereo, without even noticing.


> last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition

There are types of neural networks (and other algorithms) that work literally like that. Just because a plain multilayer perceptron does not work like that does not mean no network does.

This one is just off the top of my head; it's quite recent, but the memory component is based on much older prior work: https://medium.com/applied-data-science/how-to-build-your-ow...

UPD: without even digging deep into the different types of networks, even AlphaGo(/Zero) works like that.
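For a toy version of the "start from the most similar past experience, then correct" idea (in the spirit of episodic-control methods; every name here is illustrative):

    import numpy as np

    class EpisodicMemory:
        def __init__(self):
            self.states, self.actions = [], []

        def store(self, state, action):
            self.states.append(state)
            self.actions.append(action)

        def recall(self, state):
            """Return the action taken in the most similar stored state."""
            if not self.states:
                return None
            dists = [np.linalg.norm(state - s) for s in self.states]
            return self.actions[int(np.argmin(dists))]

    mem = EpisodicMemory()
    mem.store(np.array([1.0, 0.2]), "step_left")   # past ball trajectories
    mem.store(np.array([0.1, 0.9]), "reach_up")
    guess = mem.recall(np.array([0.9, 0.3]))       # -> "step_left" as a prior
    # The recalled action is only an initial guess; a controller would
    # then correct it online from current sensory feedback.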


> Machines can’t do cognition

Assuming you mean that machines can’t do cognition at present, why do you think we won’t solve this problem in the next 20 years?


Because nobody knows how to even begin researching something like that.


Most humans don't do cognition or learn from experience well either. It is certainly DONE, but we tend not to pay attention to how often we fail. If you check the actual transcript of an average conversation, people are talking past each other the majority of the time. We correct, but the vast majority of the time with no awareness of having done so.

We rewrite our memories to fit our mental schemas, to the point where someone describing what they saw is more likely wrong than right, even in dramatic ways (see a stabbing? Was it the man in the suit or the man in rags who committed it?). We suffer change blindness, confirmation bias, prejudice. We rationalize and justify to a ridiculous degree, and in the few cases where we become aware of this, that awareness does not let us change the behavior. If someone is wrong, the worst way to get them to change their stance is to show them they are wrong.

We're born helpless and spend a large fraction of our lives learning how to not die. We transfer information inefficiently and inaccurately, with every generation biologically starting from scratch. We spend a third of our lives unconscious (in addition to that helpless period) and almost two decades becoming ready to function independently, at which point most people have only a few years before they dedicate an even larger portion of their life to bootstrapping the next generation.

The Turing test exists because we can't even define what we are describing as obvious (and, as I mentioned previously, humans fail the Turing test often). Almost everyone who drives has been in some form of car accident, the overwhelming majority of which were caused by human error.

We burn plants so we can inhale the (toxic) vapors. We overestimate rare risks and underestimate inevitable ones. We drink poison for fun, and enjoy it because it dulls our thought processes. We gamble money with the intent of winning more money when it is well known the odds of winning are terrible. We entertain ourselves with habits that target innate thinking fallacies and call it "gamification". We ignore issues that we are confident will arrive, and then react with panic when they do arrive because we've made no preparations. We declare human life to be so precious that we don't want to end even the potential for it, to the extent of stopping people from preventing that potential, but we don't take action to support that life once it is born.

We look at a list of flaws like this and shrug it off. We oversimplify, stereotype, and categorize even when the errors in those systems are pointed out to us. We dislike being wrong SO MUCH that we'd often rather continue being wrong than accept that we were. We eat unhealthy foods in unhealthy quantities, and produce and purchase foods that directly encourage those habits. We have short attention spans and short (and inaccurate) memories.

Comparing current AI approaches and human thought is apples and oranges, but mocking AI efforts as mere function approximation ignores how much function approximation we ourselves do. We function, and the diversity of tasks we function at is indeed amazing. The complexity and adaptability of the human species is awe-inspiring. But doing amazing things is still not the same as doing them _well_.

I don't say this to claim humans are terrible. I'm pointing out that we are poor judges of quality, and that any system operating under different fundamental constraints will have different emergent behaviors. I expect that a car that drives more safely and more consistently than a human is both a complex problem and much easier than most assume. Driving _well_ is harder, but driving better than a human? Not nearly as hard. What percentage of drivers do you think consider themselves to be "above average"?


That’s another reason why humans are so amazing: we correct so well we don’t even notice we’ve corrected anything. Our eyes see a continuous visual field in color, even though we only see color near the center of each eye, our gaze jumps around all the time, and the image is heavily distorted, with blood vessels interfering with capture and the nose obstructing part of the peripheral vision. And yet you notice none of that. We can’t individually control any of our muscles, yet we have fine motor skills that require strict control. We achieve this through a visual and proprioceptive feedback loop, which corrects our previous memory of doing the same thing.

Driving better than a human from vision alone is extremely hard. Driving better than a human in an area for which you don’t have a 3D capture is extremely hard. Driving better than a human when it’s raining or snowing is extremely hard, etc., etc. Don’t be so eager to discount humans.



