The main problem with the “AI revolution” today is that 95% of the things practitioners tell you it can do are nowhere close to sufficient accuracy under the range of real-world conditions they’d need to work in. Easily more than half of them don’t even work at all outside a demo with hand-picked examples. And nobody seems to care. This narrows the scope of applicability to a tiny sliver where you can tolerate high error rates, and even there it’s a hard slog to get anything done, and AI is relegated to the role of the “cherry on top” rather than baked into the pie.
There are a few areas where performance is good enough to be practically useful. One of those (facial recognition and tracking) is currently causing a “revolution” in China. I’m just not sure it’s the kind of revolution we want.
You seem to be ignoring the many, many AI applications that are already working with perfectly fine accuracy in the real world. They’re quietly working in the background where you don’t notice them.
Please don't pull BS like "Google is an AI app" etc. I'd expect an app where AI is the indispensable component. You can make a decent search engine/camera/phone/car/microwave without AI.
The problem is that the definition of AI keeps changing. It essentially means "things computers can't do". A Roomba would have been considered AI not long ago, but now it's just a vacuum cleaner that moves around randomly for a while and then goes back to its charger. I'd imagine turn-by-turn directions would be considered AI if you go back far enough.
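That behavior really is about this trivial (a toy sketch, purely illustrative; no actual Roomba firmware implied):

```python
# Toy sketch of "wander randomly, then go back to the charger".
# Everything here is invented for illustration.
import random

def roomba_step(position, battery, charger=(0, 0)):
    if battery < 20:
        # Head back toward the charger: no planning, no learning.
        dx = (charger[0] > position[0]) - (charger[0] < position[0])
        dy = (charger[1] > position[1]) - (charger[1] < position[1])
    else:
        # Otherwise wander randomly: coverage emerges from randomness.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    return (position[0] + dx, position[1] + dy), battery - 1

pos, battery = (5, 7), 100
while battery > 0:
    pos, battery = roomba_step(pos, battery)
print("ended at", pos)
```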
Google Assistant voice recognition, Google Translate, Google Assistant voice generation (WaveNet), Google Assistant question answering, Google Keyboard word prediction, Google Search, YouTube video suggestions, Dropbox OCR, Spotify & Amazon product suggestions, self-driving cars (even if they're not perfect), the same things as Google Assistant but for Alexa & Cortana, a huge bunch of new diagnosis tools, etc.
Image search and categorization require machine learning to work well. The same goes for recommendation engines, speech recognition, and machine translation (the last still often not great, but much better than in the past).
One of the government departments I interacted with recently here in Australia answers phones with a voice recognition system that is significantly easier to deal with than the usual outsourced-to-a-call-center-in-Pakistan systems. If this is the AI future, I'm all for it.
All the more concerning that the Pentagon plans to have autonomous killing machines "ready" this year (hint: they won't actually be ready, but it's not like they'll tell us how many innocents they'll actually kill, especially with their very loose definitions of what a target is. They have yet to do that with the manually operated ones, after all).
Not sure what your definition of "feelings" is, but if you're talking about the perceptual system parallel to thoughts and images etc. ("I feel sad in this situation", or "I feel that this person is happy" while talking about another person), then it certainly sounds like such a system would be very helpful in building strong AI.
Just from our own subjective experience we know that for certain situations, feelings are a much better fit than, for example, visualization (even though the two often go together, visuals activating feelings and feelings bringing up visual memories/scenarios).
You can think of feelings as a mode-selection heuristic - most of them have direct analogues in any sufficiently adaptable system. So it's a pretty safe bet that "real AI" will have "feelings" or some reasonable analogue thereof.
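Concretely, "mode-selection heuristic" could look something like this (a toy sketch; every name and threshold here is invented for illustration):

```python
# Sketch of "feelings" as a mode-selection heuristic in an agent.
# Purely illustrative; not a claim about any real architecture.
from dataclasses import dataclass

@dataclass
class InternalState:
    threat: float     # analogue of fear
    energy: float     # analogue of hunger/fatigue
    novelty: float    # analogue of curiosity

def select_mode(state: InternalState) -> str:
    """Pick a coarse behavioral mode before any fine-grained planning,
    the way a feeling biases which class of response gets considered."""
    if state.threat > 0.7:
        return "flee"        # fear pre-empts everything else
    if state.energy < 0.2:
        return "recharge"    # fatigue narrows options to rest/refuel
    if state.novelty > 0.5:
        return "explore"     # curiosity favors information-gathering
    return "exploit"         # default: carry on with the current task

print(select_mode(InternalState(threat=0.1, energy=0.9, novelty=0.8)))  # explore
```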
It has to evolve, not revolve. Like cars did (and they're still -far- from perfect).
Our remarkable bodies weren't designed by engineers; they survive now because, earlier, so many didn't. Are we smarter than evolution, because we like to think we are? Time will tell.
> Are we smarter than evolution, because we like to think we are?
Of course we are. We went from the "natural state" to spaceflight in a mere couple thousand years, the most meaningful of which were the last 300.
We design, build, and test things on timescales orders of magnitude shorter than gene-driven biological evolution. Still, that doesn't mean everything gets perfected instantly, and we humans are quite an impatient bunch (no surprise, given our short lifespans).
Cars (assuming you mean Level 5 autonomous ones) are another one of those things we won’t really be getting for at least 20 more years. 90% of the problem is solved; the remaining 10% is exponentially harder, so no one has the foggiest clue how to solve it, let alone solve it economically enough to make the cars viable on the market.
I grew up hearing "we will eventually get computers that can play chess, but Go is exponentially more complicated." People were talking about a century.
I don't know that cars will happen soon (5-10 years), but I wouldn't want to bet one way or the other past that. We don't need perfection, we just need to equal humans, and humans are actually pretty bad; we're just used to it.
Humans are _amazing_ because they are able to correct the deficiencies of their perception with high-level cognition augmented with memory of past experience. Machines can’t do cognition, and they can’t effectively use past experience either, to say nothing of doing a combination of those two things.
Current “AI” is basically function approximation and nothing else. And humans do everything they do in a 20W power envelope.
"they can’t effectively use past experience either"
While I'm unsure whether the on-board computer of your autonomous car will be able to leverage past experience, I thought it was a foregone conclusion that the telemetry from all the cars on the road would be used to iteratively improve the core model, which would then be dispersed as an OS upgrade, effectively teaching your individual unit from the past experience of all the units on the road so far.
But it’s not memory. Currently you just show your neural net a million examples of a thing and it derives a function which, given an example input, minimizes the output error. That’s it. It’s not like “last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition”, all within 20 milliseconds, before you even fully understand you’re about to catch a ball. That’s not to mention that you also maintain the illusion of a continuous and static visual field, without even noticing, in stereo.
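Concretely, that paradigm amounts to roughly this (a toy NumPy sketch of fitting a function by minimizing output error; purely illustrative, not any production system's code):

```python
# Supervised learning as function approximation: show the net examples,
# derive a function that minimizes output error. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# A pile of examples of a thing: 10,000 noisy samples of y = sin(x).
X = rng.uniform(-3, 3, size=(10_000, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# One hidden layer with tanh activation: a generic function approximator.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.01
for step in range(2_000):
    h = np.tanh(X @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - y                     # "minimize the output error"
    # backward pass: gradients of mean squared error
    g_pred = 2 * err / len(X)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h**2)   # tanh derivative
    g_W1 = X.T @ g_h; g_b1 = g_h.sum(0)
    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= lr * g                    # gradient descent step

print("final MSE:", float((err**2).mean()))
```

No memory of episodes, no "last time I did X": just a curve fitted to examples.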
> last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition
There are types of neural networks (and other algorithms) that work literally like that. Just because a simple deep perceptron does not work like that does not mean no network does.
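For instance, a nearest-neighbour lookup over stored experiences does literally the "start from what I did last time" step (a toy sketch, not any specific published architecture):

```python
# Toy "episodic memory": start from the most similar past experience,
# then correct. Illustrative sketch only.
import numpy as np

class EpisodicMemory:
    def __init__(self):
        self.situations = []   # feature vectors of past situations
        self.actions = []      # what was done in each one

    def store(self, situation, action):
        self.situations.append(np.asarray(situation, dtype=float))
        self.actions.append(np.asarray(action, dtype=float))

    def recall(self, situation):
        """Return the action taken in the most similar past situation."""
        s = np.asarray(situation, dtype=float)
        dists = [np.linalg.norm(s - past) for past in self.situations]
        return self.actions[int(np.argmin(dists))]

memory = EpisodicMemory()
memory.store([1.0, 0.2], [0.5])    # "last time the ball came like this..."
memory.store([0.1, 0.9], [-0.3])

initial_guess = memory.recall([0.9, 0.3])  # "...so let me start with that"
# A feedback loop would then correct initial_guess from live observations.
print(initial_guess)
```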
Most humans don't do cognition or learning from experience well either. It is certainly DONE, but we tend not to pay attention to how often we fail. If you check the actual transcript of an average conversation, people are talking past each other the majority of the time. We correct, but the vast majority of the time we have no awareness of it.
We rewrite our memories to fit our mental schemas, to the point where someone describing what they saw is more likely wrong than right, even in dramatic ways (witness a stabbing: did the man in the suit or the man in rags commit it?). We suffer change blindness, confirmation bias, prejudice. We rationalize and justify to a ridiculous degree, and in the few cases where we become aware of this, that awareness doesn't let us change the behavior. If someone is wrong, the worst way to get them to change their stance is to show them they're wrong.
We're born helpless and spend a large share of our lives learning how not to die. We transfer information inefficiently and inaccurately, with every generation biologically starting from scratch. We spend a third of our lives unconscious (on top of that helpless period), and almost two decades becoming ready to function independently, at which point most people have only a few years before they dedicate an even larger portion of their life to bootstrapping the next generation.
The Turing test exists because we can't even define what we're describing as obvious (and, as I mentioned previously, humans fail the Turing test often).

Almost everyone who drives has been in some form of car accident, the overwhelming majority of which were caused by human error. We burn plants so we can inhale the (toxic) vapors, we overestimate rare risks and underestimate inevitable ones, we drink poison for fun and enjoy it because it dulls our thought processes, and we gamble money with the intent of winning more when it's well known the odds are terrible. We entertain ourselves with habits that target innate thinking fallacies and call it "gamification". We ignore issues we're confident will arrive, then react with panic when they do arrive because we've made no preparations.

We declare human life to be so precious we don't want to end its potential, even to the extent of stopping people from preventing that potential, but we don't take action to support that life once it is born. We look at a list of flaws like this and shrug it off. We oversimplify, stereotype, and categorize even when the errors in those systems are pointed out to us. We don't like being wrong SO MUCH that we'd often rather continue being wrong than accept that we were. We eat foods that are unhealthy in unhealthy quantities, and produce and purchase foods that directly encourage those habits. We have short attention spans and short (and inaccurate) memories.
Comparing current AI approaches and human thought is apples and oranges, but to mock AI efforts as function approximation ignores how much function approximation we do. We function, and the diversity of tasks we function at is indeed amazing. The complexity and adaptability of the human species is awe-inspiring. But doing amazing things is still not the same as doing them _well_.
I don't say this to claim humans are terrible. I'm pointing out that we are poor judges of quality and that any system following different fundamental restrictions will have different emergent behaviors. I expect that a car that can drive more safely and more consistently than a human is both a complex problem and much easier than most assume. Driving _well_ is harder, but driving better than a human? Not nearly as hard. What percentage of drivers do you think consider themselves to be "above average"?
That’s another reason why humans are so amazing: we correct so well we don’t even notice we’ve corrected anything. Our eyes see a continuous visual field in color, even though we only see color in the center of each eye, our gaze jumps around all the time, and the image is heavily distorted, has blood vessels interfering with capture, and has the nose obstructing part of the peripheral vision. And yet you see none of that. We can’t individually control any of our muscles, yet we have fine motor skills that require strict control. We achieve this through a visual and proprioceptive feedback loop, which corrects our previous memory of doing the same thing.
Driving better than a human from vision alone is extremely hard. Driving better than a human in an area for which you don’t have a 3d capture is extremely hard. Driving better than a human when it’s raining or snowing is extremely hard, etc, etc. Don’t be so eager to discount humans.
Waymo is going to have a service on the road in Arizona for the general public this year.
Sure, it isn't cars driving in every environment without having seen the territory first, but it is an incredibly useful product that can be extended to more cities as the technology improves. And plenty of cities around the world have conditions similar to what is being handled in Arizona.
Low-speed, AI-driven electric buses on limited routes are already all over the place too.
Driverless trucks on certain runs are likely within a decade too.
Testing autonomous cars in areas with inclement weather would be a good indicator of progress. So far no autonomous car can remain autonomous in heavy rain or moderate snow, and no car can predict whether a kid standing on the sidewalk will suddenly dash in front of the car. Or whether the thing being blown across the road is a plastic bag or something more substantial. Or where to drive if road markings have worn out or otherwise become invisible, or how to avoid a pothole, etc., etc. All that stuff you do without thinking, all of it is unsolved.
Wake me up when they’re testing L5 in Alaska in winter, using a car with no steering wheel. Then I might consider trusting my life to it.
So what you're saying is, it takes time to perfect the technology? How is that the same as "no one has the foggiest clue how to do it"? There are many people with lots of clues.