> Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference and decision making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal scale inference and decision making systems are already exposing serious conceptual flaws.
That's a trivial statement unless he can suggest a different way to do things. Should people have held off building any bridges until the 20th century arrived? Building things and science have always evolved side by side in a feedback loop.
Actually it’s not a trivial statement. He’s saying that we don’t have enough theory guiding us and instead we just spin up a TensorFlow library without having any clue what’s going on underneath the hood. And I don’t mean understanding the linear algebra in neural networks; I mean we don’t have a good theory of computation for artificial neural networks like we do for Boolean electronic circuits. There is a little bit of work out there, but no one seems to be interested in the theory of why these work. The fact that we still represent networks as fully connected when many, many of those weights w_ij are completely spurious tells me that we don’t spend enough time figuring out what’s going on and instead focus on the result. And this lack of attention will be why we’ll hit another wall. We should be reverse engineering intelligence and comparing that to AI so that we can build up the theoretical equivalent of aerodynamics.
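To make the "spurious weights" point concrete, here is a minimal sketch (purely illustrative, not anyone's published method): build a toy fully connected layer where most true weights are near zero, prune the smallest 90% by magnitude, and check how little the outputs change. The layer sizes, sparsity, and pruning ratio are all made-up assumptions.

```python
# Minimal sketch: magnitude pruning on a toy fully connected layer, to
# illustrate the claim that many w_ij contribute almost nothing.
# Layer shape, sparsity, and the 90% pruning ratio are invented assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128)) * rng.binomial(1, 0.1, size=(256, 128))  # mostly-sparse "true" weights
W += rng.normal(scale=0.01, size=W.shape)                                # plus many tiny, spurious entries
x = rng.normal(size=(1000, 256))

threshold = np.quantile(np.abs(W), 0.9)            # keep only the largest 10% of weights
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

y_full, y_pruned = x @ W, x @ W_pruned
rel_err = np.linalg.norm(y_full - y_pruned) / np.linalg.norm(y_full)
print(f"relative output change after pruning 90% of weights: {rel_err:.3f}")
```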
> We should be reverse engineering intelligence and comparing that to AI so that we can build up the theoretical equivalent of aerodynamics.
Well, with a lot of assumptions, if we believe they hold in practice, we have a lot of powerful theory for regression analysis, and we didn't get that theory by "reverse engineering intelligence". We got hypothesis tests, confidence intervals, prediction intervals, etc.
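For instance, here is a minimal sketch of that classical regression machinery using statsmodels; the synthetic data is invented just to show where the hypothesis tests, confidence intervals, and prediction intervals come out.

```python
# A minimal sketch of classical regression theory in action: coefficient
# t-tests, confidence intervals, and prediction intervals.
# The synthetic data is made up purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)

X = sm.add_constant(x)                 # design matrix with intercept
model = sm.OLS(y, X).fit()

print(model.summary())                 # t-tests and p-values for each coefficient
print(model.conf_int(alpha=0.05))      # 95% confidence intervals for coefficients

x_new = sm.add_constant(np.array([2.5, 7.5]))
pred = model.get_prediction(x_new)
print(pred.summary_frame(alpha=0.05))  # mean CI and prediction interval for new points
```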
So, more generally, we can proceed mathematically: State some assumptions and then use those to prove some theorems. We want the assumptions to be justified for our practical applications and we want the consequences of the theorems to be powerful for the applications. These steps have been followed often enough in math before, back to Euclid's plane geometry, the Pythagorean theorem, trigonometry, spherical triangles, the area of a circle, the volume of a sphere, the wave equation, ellipses, analytic geometry, calculus, differential equations, the stiffness of space frames, etc. For the current applications with no theory, maybe we need to stir up some more theorems and proofs.
I see this as the MIT school of thought. This group was saying that we’d have strong AI back in the 60s. I believe that a brute force math approach that lacks a larger theory of how neural networks give us intelligence is going to be the slow path forward.
I was proposing how to make progress in understanding some of the data manipulations being done now by what appears to be most of ML/AI. That is, for the data manipulations, get those from the consequences of some theorems. For example, apparently regression is important in current ML/AI: well, going way back, 50+ years, we have a lot of solid math for regression.
Then I mentioned that from Euclid through calculus to wave equations, there's more math from theorems and proofs we can and sometimes do apply. And we can stir up still more applicable math.
My goal was just suggesting how to do better with real, valuable applications to important real world problems. We have such examples from US national security -- the A bomb, the H bomb, GPS, stealth, phased array radar, adaptive beam forming sonar, etc.
By analogy, I was suggesting a better socket wrench or numerically controlled milling machine and not a self-driving car. I was not suggesting anything that we could regard as intelligent, not even as intelligent as a field mouse.
IMHO, powerful math with valuable applications is now very doable. We can schedule equipment maintenance, airline crews, airline fleets, workers, trucks, etc. We can have computers do the data manipulations specified by the math and have the Internet move the data.
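As a toy illustration of that kind of scheduling math (not a real crew or fleet model, which would be a much larger integer program), here is an assignment problem solved with SciPy; the cost matrix is invented.

```python
# A minimal sketch of the kind of optimization meant here: assign 4 workers
# to 4 shifts at minimum total cost. The cost matrix is hypothetical; real
# crew/fleet scheduling uses much larger models, but the flavor is the same.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = cost of assigning worker i to shift j (made-up numbers)
cost = np.array([
    [4, 1, 3, 2],
    [2, 0, 5, 3],
    [3, 2, 2, 4],
    [4, 3, 1, 0],
])

workers, shifts = linear_sum_assignment(cost)
for w, s in zip(workers, shifts):
    print(f"worker {w} -> shift {s} (cost {cost[w, s]})")
print("total cost:", cost[workers, shifts].sum())
```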
But for also having systems that are as intelligent as a field mouse, kitten, puppy, octopus, song bird, even walk as well as a cockroach, that's harder.
At one time I worked in some AI based on some MIT AI work; I wrote software, worked with GM Research, gave a paper at the Stanford AAAI IAAI conference, and as sole author or co-author published a number of papers. Here I was not suggesting anything that has anything to do with any of the MIT AI work I've known about.
For people at MIT who have done work more like I have in mind, I can think of D. Bertsekas and M. Athans.
Athans was in deterministic optimal control, and the OP mentioned that that field did "back propagation" long ago. Athans is, was, a good applied mathematician. When I was at FedEx, I chatted with him in his office on how to find how best to climb, cruise, and descend airplanes. He told me a cute story about how an F-4 could get minimum time to climb, say, IIRC, to 100,000 feet: Climb up to just 5,000 feet or so, go into a dive, get supersonic where actually the drag was less, and then go nearly vertical, all supersonic, directly to 100,000 feet.
For neural networks, IIRC there is some nice math that shows how general those can be for representing functions, and for some parts of stochastic optimal control Bertsekas proposed such a use for neural networks. There he, as usual, was being mathematical.
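The "nice math" being gestured at here is the universal approximation results. As a toy illustration only (not Bertsekas's construction), a small one-hidden-layer network can fit a nonlinear function quite well; the network size and data below are arbitrary assumptions.

```python
# Toy illustration of how general a small neural network can be at
# representing functions: fit sin(x) with one hidden layer.
# Network size, activation, and data are arbitrary assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
x = rng.uniform(-np.pi, np.pi, size=(2000, 1))
y = np.sin(x).ravel()

net = MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x, y)

x_test = np.linspace(-np.pi, np.pi, 9).reshape(-1, 1)
for xi, yi in zip(x_test.ravel(), net.predict(x_test)):
    print(f"x={xi:+.2f}  sin(x)={np.sin(xi):+.3f}  net(x)={yi:+.3f}")
```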
> Stochastic processes and control theory, as in Çinlar or Bertsekas, will definitely contribute towards strong AI, if we ever achieve that.
My view is that for high end applications of computing now, drawing from, building on, Çinlar or Bertsekas is about the best we can do and, due to current computing and the Internet, suddenly terrific.
But my view is that something real in AI, say, as good as a kitty cat, ..., a human, will much more directly use very different approaches; and that if there is some math in the basic core programming, it will be darned simple.
So, my current view is that the good approaches to such AI will make direct use of little or no pure or applied math. Instead, my guess is that animal ... human intelligence is just some darned clever programming, rediscovered and re-refined so far many times here on earth. From the many times, my guess is that there is basically one quite simple way to do it.
My guess is that the sensory inputs, first, feed data that becomes in the brain essentially nouns: Floor, rock, water, etc. Early on the data on the nouns is quite crude, but later with more experience gets refined. E.g., a kitty cat quickly learns that floors are solid to stand and run on, and some are shaky and might result in a fall. Then with more input and experience, some
verbs are combined with some of the nouns. The strength of the combining is mostly just from experience; yes, we could write out some simple strength updating algebra. But to be cautious, the learning is deliberately slow: E.g., not everything round on the floor is good to eat.
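One made-up guess at what that "simple strength updating algebra" could look like, with a deliberately small learning rate doing the "cautious" part:

```python
# A made-up sketch of simple strength-updating algebra: the strength
# linking a noun ("round thing on floor") to an outcome ("good to eat")
# nudges toward each new experience, with a small rate so one lucky grape
# doesn't make every pebble look edible.
def update_strength(strength, outcome, rate=0.05):
    """Move strength a small step toward the observed outcome (0 or 1)."""
    return strength + rate * (outcome - strength)

strength = 0.0
experiences = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # hypothetical: mostly not edible
for outcome in experiences:
    strength = update_strength(strength, outcome)
    print(f"after outcome {outcome}: strength = {strength:.3f}")
```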
There is a continual process to simplify this data, i.e., a form of data compression, into causality, e.g., learn about gravity. The learning is good enough to identify the concept of gravity as the cause that makes things fall and to reject irrelevant data like just what things are falling from: a table, a window seat, the top of a BBQ pit, a tree limb, the second floor landing, etc. Also reject night, day, hot, cold, and other irrelevant variables -- that's smarter than current multi-variate curve fitting, which has a tough time appraising what variables are likely irrelevant.
If I were going to program AI, that would be the framework I would use. I regard the learning as close to a bootstrap operation -- the first learning is very simple and crude but permits gathering more data, refining that learning, and doing more learning. To get some guesses on the details, watch various baby animals and humans as they learn.
I see no real role for math or anything I've heard of in current ML/AI, and I don't think it's much like rules in expert systems. And my guess is that the amount of memory needed is shockingly small and the basic processing, surprisingly simple.
We already hit a wall - neural nets are easily hacked. The field is called Adversarial Attacks and Defences of Neural Nets. There's no perfect fix yet and it is a major security flaw.
Your example doesn’t prove they hit a wall, only that they’re fallible. Humans ‘get hacked’ every day. I wouldn’t exactly say human intelligence has hit a wall because of it, rather that human intelligence may have weaknesses and can be exploited. Frankly I wouldn’t expect neural networks to be any different.
We don't really have a theory for Boolean electronic circuits either. What separates bridges from computation is that bridges are bound by the fixed laws of physics, while computation is far more loosely bound by logic.
I’m pretty sure we have a rich body of work on information theory, Boolean logic, and circuit design. Also, Boolean computation is not ‘loosely bound’. It’s perfectly reproducible.
You really have no idea what you're talking about. Boolean logic - as a theoretical topic in itself and as it forms the basis for the circuitry of modern computing devices - is about as well-studied a topic as you can find.
It's not for lack of trying, however. In fact, if it wasn't for people trying out random stuff, they wouldn't have had the leaps of the 2010s in neural networks - that leap was not informed by some mathematical model. I don't see why trial-and-error cannot coexist with theory-building.
I'm not sure it was intended to be read "Can't win? Don't try." An interesting corollary to that analogy is that some of the things humans built thousands of years ago are still standing... so it's possible some of our systems could also pass the test of time (they can be optimized more easily than a bridge).
He does point out elsewhere that machine learning solutions must account for "long tail" data, so it seems to me he's looking for perfect solutions that will never exist.
The biggest challenge yet unconquered is getting your "average" business ($1-100m revenue, zero AI knowledge) using ML to help with literally anything they do.
I'd wager less than 1% of businesses outside of SV even have a clue where to begin, or what to use it for, my employer included. "Do we hire some AI guys?"
We'll need to crack the 1%-using-it mark for me to consider the revolution "begun"...
Considering that the average business is unlikely to be writing any sort of software, other than perhaps basic Excel formulas... I wouldn't hold my breath waiting for them to use advanced software development techniques such as ML.
I would suggest that not writing any sort of software is a good proxy for going out of business soon.
I also think you would be surprised at how many businesses both do, and really want and value, software - but don't have anyone convincing to do it for them.
I increasingly feel that almost every business would benefit from having someone with programming experience on-board. This comes from observing that businesses tend to have a lot of ad-hoc processes and unique requirements, for which there is no software package to buy - not without throwing away huge parts of the processes and rebuilding them in a "standard" way. However, a lot of those unique areas could be improved with a help of someone who's capable of writing small programs and scripts to glue things together.
A random example: my SO often works with high-res photos of their products, and sometimes this involves modifying them to suit the random requirements of other businesses. Something I currently solve for her by running ImageMagick one-liners if she needs more than a few photos modified. A person with basic familiarity with shell scripting and CLI tools could improve their workflow and keep improving it on-demand, as the requirements change.
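Not her actual ImageMagick one-liners, but the same kind of glue in Python with Pillow: batch-resize every photo in a folder to a width another business asked for. The folder names and the 1200px target are hypothetical.

```python
# Sketch of the "small glue script" idea: resize every JPEG in a folder
# to a requested width. Folder names and target width are made up.
from pathlib import Path
from PIL import Image

src, dst = Path("originals"), Path("resized")
dst.mkdir(exist_ok=True)
target_width = 1200

for path in src.glob("*.jpg"):
    with Image.open(path) as img:
        ratio = target_width / img.width
        resized = img.resize((target_width, round(img.height * ratio)))
        resized.save(dst / path.name, quality=90)
        print(f"{path.name}: {img.width}x{img.height} -> {resized.size}")
```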
I agree with your sentiment, but I think most businesses would already fare very well with having someone structured, with project management skills, in a significant leadership position.
Without that, working as a programmer in such a disorganized team, you will mostly spend your time working "against" them. It's like you are trying to glue things together and people will just go and rip those glued-together things apart constantly.
I work for an actual tech company, which actually told me to look into a couple of ML projects to improve our operating efficiency, which actually produced good results and had an obvious and short path to being put into production... and which were promptly shelved.
Can you go into why they were shelved? Were they normal technology and business reasons - or related to ML directly? I ask so that others can potentially learn to navigate around those in the future.
I wish I could. I have very little insight. I presume it's some sort of "normal business reasons", but from my point of view the decision process went something like this:
Me: "It's finished, here are the areas of strength and weakness, and here's where we can deploy the system for maximum effectiveness."
Business: "We're thinking about the best way to deploy this."
Me: "This is how you deploy it."
Business: "We'll think about and get back to you. Don't do anything until we tell you."
That's the "weaknesses" part. And I did in fact spend a good bit of time with my manager going over exactly what the model does not mean, and what you should not conclude from it or use it for.
I would also be interested in knowing which projects / applications of ML seemed easy wins - I generally get stumped on "use CV and facial recognition" for a business that has no need of facial recognition.
1. Market prediction -- given basic demographic data, and publicly-available or easily acquirable information like when the person's house was purchased, what their credit score is, etc., how likely is this person to want / need your product, and is it worth spending a salesman's time on them?
2. Data entry. We took a picture of this customer's utility bill / bank statement / receipt / whatever. Now do we give it to a human to identify relevant fields and manually type them into a spreadsheet, or do we have a computer automatically extract the business-relevant data? Or, heck, maybe that's too complex, but can we at least have a computer help--automatically filter out bad images, do perspective correction, highlight areas of interest, etc.? (This sort of thing is actually used in, e.g., digitizing census records; we don't trust handwriting OCR to be good enough on its own, but we trust it to automatically highlight relevant fields, in order, and provide a first-draft guess at the transcription to assist the human transcribers.) A rough sketch of that kind of pre-processing follows below.
I could probably come up with a few more if I thought about it for a while, but those are the areas I've actually worked on recently.
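As mentioned above, here is a sketch of the "help the human" pre-processing from point 2: perspective-correct a photographed document before a person (or an OCR engine) reads it, using OpenCV. The input file and corner coordinates are hypothetical; in practice the corners would come from a detector.

```python
# Minimal sketch of document flattening before handing an image to a human
# transcriber or an OCR engine. The file name and corner points are
# hypothetical; normally the corners come from a detection step.
import cv2
import numpy as np

img = cv2.imread("utility_bill_photo.jpg")            # hypothetical input file

# four detected corners of the document in the photo
# (top-left, top-right, bottom-right, bottom-left), in pixels
corners = np.float32([[120, 80], [980, 60], [1010, 1350], [90, 1380]])
width, height = 850, 1100                              # target page size in pixels
target = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

M = cv2.getPerspectiveTransform(corners, target)
flattened = cv2.warpPerspective(img, M, (width, height))
cv2.imwrite("utility_bill_flat.jpg", flattened)
```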
The answer is, indeed, to "hire some AI guys". Many of your challenges are likely shared with your competitors, which means there's a market that some specialized "AI guys" can serve.
My employer does supply chain optimization, because many "average" businesses have supply chains that need to be optimized. We have specialized AI tools based on deep learning and differentiable programming, and we have human supply chain experts that can fit those tools to the specifics of a customer's situation.
I would say that despite our best marketing efforts, many of our customers don't know that we're doing this "AI" thing they've been hearing about.
The anti-pattern with "Do we hire some AI guys?" is that, if you have to teach those guys how a supply chain works, you've already lost. Pick someone who is already specialized in the problem you're trying to solve.
> The biggest challenge yet unconquered is getting your "average" business ($1-100m revenue, zero AI knowledge) using ML to help with literally anything they do.
Does the AI which can help with literally everything a business does exist? That sounds rather general purpose and extremely open-ended.
No. But the parent said 'anything' not 'everything', and that seems less ambitious. I would think that ML which could find patterns in what your employees do day to day and make suggestions about how to make improvements (like virtual efficiency audits) would benefit many businesses. Even for individual users, just having their OS be able to pipe up like a latter-day Clippy and say "hey, I notice you copying data from This.app and pasting it into That.app, want me to take a stab at repeating that until all the fields are populated?" would potentially be a huge improvement (and the OS is already gathering enough telemetry that this would be barely more intrusive).
ML is not magic. If you can't run controlled experiments then it isn't actually very easy to make confident recommendations about improvements to business processes.
Sure, I was exaggerating for effect. Maybe not proper ML, but something more like the OS automatically generating (and discarding if unused) macros in the background, and using CV / OCR on the windows to try to interpret what the user is doing - I am sure v1.0 would be laughable, but eventually...
I don't think that will happen until AI becomes a commodity, i.e., something that you can buy off the shelf and instruct to do things for you. I think for that to happen it needs to be actually intelligent and not simple ML tricks.
Your average business is probably using ML indirectly via Google and Facebook without knowing it. If they want to find new customers or new suppliers and use those services, ML will quite likely be used under the hood.
I'm not an AI researcher (although from my internet reading that's not a hard title to claim ;)) but I feel that ML/DL can't go much farther than they already have. The concept of "we just need more power" is an obvious fallacy to me.
I agree. In fact, I don't like the term Artificial Intelligence. It kind of implies that we understand what intelligence is and are thereby able to emulate it. In my opinion, it is not the trait of an advanced species to attempt to create something that we cannot adequately describe.
I mean, all these "structures" and algorithms that people create in software, each a little different than the last, each with different performance/result trade-offs. But nobody actually understands the fundamentals behind some of the very impressive results we are seeing (due to the technology allowing us to throw computing horsepower at it).
Agreed. Throwing more computing power at the problem is a cheap way to make it look like we're making progress; our AI should be efficient enough to run on limited hardware. Not to say that ML/DL isn't useful, but I think that whatever the next revolution is in AI is likely to come from a completely different direction.
It's worth noting that the training-time inference-time distinction matters here. Most of the fancy tech you've heard of recently (Semantic Seg, Pose, Localisation etc) can be pretty easily optimised for fast inference, and indeed it's not really so much of a research focus because it's so tractable (see MobileNets, v2, etc). Training, however, is still quite daunting.
I tend to be fairly dismissive of inference because what you end up with is a highly specialized algorithm rather than something that can easily continue to adapt. I suspect inference would probably fall into Jordan's category of "things that we call AI but probably shouldn't."
But that's not to dismiss how important fast/cheap inference has been in allowing companies to actually build things with AI.
What's special about 2018 computing power? Neural networks have been declared dead several times in the past. Because they couldn't do anything interesting with the measly computing power available at the time. Currently the biggest RNNs have a few thousand neurons. Which is absolutely puny compared to even animal brains.
Of course this is defending a strawman. I don't know anyone that said it was just a computing power issue. In fact I think most people vastly underestimate the role of computing power. Even algorithmic improvements are enabled by computers letting researchers do experiments that would have been impossible before. And by doing lots of experiments they gain intuition about the problem, that they wouldn't develop in a vacuum.
They aren't. You're looking for "Hebbian learning". Very different from neural networks.
In 50 years people will laugh about how poorly these things were named. Just like we now laugh about what symbol we chose for a source of electrons in a circuit : "+". Whoops.
For the sake of argument I will go with the opposite idea - why not? Do you have evidence that scaling up will fail to produce more intelligence? We already know that the current deep networks (small, compared to the brain) produce fragments of intelligent behavior. We also have evidence that the brain is a connectionist system much, much larger than the current ANNs. It would follow logically that intelligence could arise by simply scaling up.
>we also have evidence that the brain is a connectionist system much much larger than the current ANNs. It would follow logically that intelligence could arise by simply scaling up.
The little problem here is that it took 4 billion years of computing and a computer the size of a planet to come up with the nifty machines that are our brains. So unless you have brought a lot of tea and biscuits, I think we should really think twice about whether just throwing more training and power at overly generalised algorithms is a good and realistic path forward.
It's not so much that the idea of "throw more things at it" is impossible, it's that it's a questionable path towards human intelligence. If you just want human intelligence without any further understanding of how to make it, there are cheaper ways already.
> 4 billion years of computing and a computer the size of a planet
That is not impossible to simulate, even if it includes the entire lineage of the humans. Also, nature prefers generalized algorithms as well, starting from DNA.
But we already have those machines to inspire ourselves from! Evolution had nothing. That's why it took so long. I literally can't understand you people's pessimism.
1. We don't know all the mechanisms that a brain employs to achieve intelligence. We see billions of interconnected neurons and we assume "Yea, this might be generating intelligence".
2. We don't know if we are already at some fundamental limits of intelligence. For example, you can see many instances in nature where a pattern emerged that maximizes some sort of efficiency (like the honeycomb pattern).
So, the end result of this will be that, even if we transfer the process by which our intelligence works to a machine, it will have the same performance as an average human brain...
Yes, but we're not built on silicon, nor were we guided by curated data sets, nor did we have a deadline, nor does another parallel universe rely on our decisions for potentially life and death outcomes.
I do not like the idea of this simple equality and I think it misses the point. We might not get to us with this tech, the model might not be near enough.
Wait, how is that actually different than what our brains do?
From what I know, our cognitive system is built in quite a similar fashion of probabilistic pattern matching with backpropagation, coupled with some "ad-hoc" heuristic subsystems.
Eh, no. Our brains and how the human mind works are actually very poorly understood. To claim we have a good idea how our cognitive systems work under the hood is an incorrect statement.
There is nothing like backpropagation in the brain, nor a probabilistic pattern matcher. There is evidence that a connectionist model is applicable, but learning is not deciphered, and there are aspects of it, like neuronal excitability, local dendritic spiking, oscillations, up and down states, etc., which do not translate at all to DL systems. That said, the increasing success of connectionist architectures does point to the conclusion that the brain is also a connectionist machine.
'if I asked my customers what they wanted, they would have said a faster horse'.
Maybe throwing more power at the current solutions won't ever make the progress we want, we need to find our car, so to speak. And a lot of people are working on that.
I used to be of the same opinion, but not anymore.
I believe that with 3 orders of magnitude more processing power we would achieve amazing results in a decade; not AGI-type results, but very close to it from our perspective.
Part of the reason is that I now believe you can simply "bruteforce" some problems with existing ML algorithms (like thousands-of-layers-deep neural nets), but more importantly, one (not me) could test new ML algorithms that are not feasible now (I don't have any examples), and people would be able to iterate much faster in developing such algorithms.
I am of a similar opinion. There was a real revolution in ~2011-2013 with deep learning. These have achieved much better results in image/speech tasks. These gains have leveled off, and we're seeing some limitations.
At current trajectory, we're not headed towards a general intelligence. Progress has been made, but there are big gaps. Smart home devices are a great case in point. They are somewhat flexible in the voice commands they accept. Specific phrasing and pronunciation are not necessarily required. Their responses and speech, however, are all pre-programmed and templated by humans.
Edit: There is potential for more breakthroughs in the future, but I am not seeing them on the horizon at the moment.
AFAIK, these are all implementations of deep learning or similar, not a fundamentally new architecture. We'll continue to see these as DL matures, but it doesn't address the shortcomings of the technique.
The main problem with “AI revolution” today is that 95% of things that practitioners tell you it can do are just nowhere close to sufficient accuracy under the range of conditions they’d need to work in in the real world. Easily more than a half of them don’t even work at all outside a demo with hand picked examples. And nobody seems to care. This narrows down the scope of applicability to a tiny sliver where you can tolerate high error rates, and even there it’s a hard slog to get anything done, and AI is relegated to the role of the “cherry on top”, rather than baked into the pie.
There are a few areas where performance is good enough to be practically useful. One of those (facial recognition and tracking) is currently causing a “revolution” in China. I’m just not sure it’s the kind of revolution we want.
You seem to be ignoring the many, many AI applications that are already working with just fine accuracy in the real world. They are quietly working in the background where you don’t notice them.
Please don't go BS like "Google is an AI app" etc. I'd expect an app where AI is the indispensable component. You can make a decent search engine/camera/phone/car/microwave without AI.
The problem is that the definition of AI keeps changing. It essentially means "Things computers can't do". A roomba would have been considered AI not long ago, but now its just a vacuum cleaner that moves around randomly for a while and then goes back to its charger. I'd imagine turn by turn directions would be considered AI if you go back far enough.
Google Assistant voice recognition, Google translate, Google Assistant voice generation (wavenet), Google Assistant question answering, Google keyboard word prediction, Google search, YouTube video suggestions, Dropbox ocr, Spotify & Amazon product suggestions, self driving cars (even if they're not perfect), same things as Google Assistant but for alexa & cortana, a huge bunch of new diagnosis tools, etc
Image search and categorization require machine learning to work well. Similarly with recommendation engines, speech recognition, and machine translation (the last one still often not too well but much better than in the past).
One of the government departments I interacted recently here in Australia answers phones with a voice recognition system that is significantly easier to deal with than the usual outsourced-to-a-call-center-in-Pakistan systems. If this is the AI future I'm all for it.
All the more concerning that the Pentagon plans to have autonomous killing machines "ready" this year (hint: they won't actually be ready, but it's not like they'll tell us how many innocents they'll actually kill, especially with their very loose definitions of what a target is. They are yet to do that with the manual ones after all).
Not sure what your definition of "feelings" is, but if you are talking about the perceptual system parallel to thoughts and images etc. ("I feel sad in this situation", or "I feel that this person is happy" while talking about another person) - then it certainly sounds like such a system would be very helpful in making strong AI.
Just from our own subjective experience we know that for certain situations, feelings are a much better match than for example visualization (even though both often go together, visuals activating feelings, feelings bringing up visual memories/scenarios) etc.
You can think of feelings as a mode-selection heuristic - most of them have direct analogues in any sufficiently adaptable system. So it's a pretty safe bet that "real AI" will have "feelings" or some reasonable analogue thereof.
It has to evolve, not revolve. Like cars did (and they're still -far- from perfect).
Our remarkable bodies weren't designed by engineers, they survive now because, earlier, so many didn't. Are we smarter than evolution, because we like to think we are? Time will tell.
> Are we smarter than evolution, because we like to think we are?
Of course we are. We went from the "natural state" to spaceflight in a mere couple thousand years, of which the most meaningful were the last 300.
We design, build and test things on a scale that's orders of magnitude shorter than gene-driven biological evolution. Still, that doesn't mean everything gets perfected instantly, and we humans are quite an impatient bunch (no surprise here, given our short lifespans).
Cars (assuming you mean level 5 autonomous ones) are another one of those things which we won’t really be getting for at least 20 more years. 90% of the problem is solved; the remaining 10% is exponentially harder, so no one has the foggiest clue how to solve it, let alone solve it economically enough to make the cars viable on the market.
I grew up hearing "we will eventually get computers that can play chess, but Go is exponentially more complicated." People were talking a century.
I don't know that cars will happen soon(5-10 years), but I wouldn't want to bet one way or the other past that. We don't need perfection, we just need to equal humans, and humans are actually pretty bad, we are just used to it.
Humans are _amazing_ because they are able to correct the deficiencies of their perception with high level cognition augmented with memory of past experience. Machines can’t do cognition, and they can’t effectively use past experience either, to say nothing of doing a combination of those two things.
Current “AI” is basically function approximation and nothing else. And humans do everything they do in a 20W power envelope.
"they can’t effectively use past experience either"
While I'm unsure whether the on-board computer of your autonomous car will be able to leverage past experience, I thought it was a foregone conclusion that the telemetry from all the cars on the road will be used to iteratively improve the core model. Which would then be dispersed as an OS upgrade, effectively teaching your individual unit from the past experience of all the units on the road so far.
But it’s not memory. Currently you just show your neural net a million examples of a thing and it derives a function which, given an example input, minimizes the output error. That’s it. It’s not like “last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition”, all within 20 milliseconds, before you even fully understand you’re about to catch a ball. That’s not to mention that you also maintain the illusion of a continuous and static visual field, without even noticing, in stereo.
> last time I caught a ball in a similar situation I moved like this, so let me start with that and correct based on visuals, audio, proprioception, and cognition
There are types of neural networks (and other algorithms) that work literally like that. Just because a simple deep perceptron does not work like that does not mean no network does.
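For example, here is a toy sketch of the "start from the most similar past episode" idea: keep a memory of (situation, action) pairs and retrieve the nearest one as a starting point. The features and data are invented, and this is not a claim about how any production system works.

```python
# Toy sketch of episodic retrieval: store past (situation, action) pairs
# and pull up the closest past situation as a starting point.
# Features and data are invented for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# past episodes: situation features (e.g., ball position, speed, angle)
situations = np.array([
    [0.2, 1.5, 30.0],
    [0.8, 2.0, 45.0],
    [0.5, 0.5, 10.0],
])
actions = ["step left, raise glove", "step back, reach high", "crouch, hands low"]

memory = NearestNeighbors(n_neighbors=1).fit(situations)

new_situation = np.array([[0.75, 1.9, 43.0]])
_, idx = memory.kneighbors(new_situation)
print("closest past episode suggests:", actions[idx[0][0]])
```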
Most humans don't do cognition nor learning from experiences well either. It is certainly DONE, but we tend to not pay attention to how often we fail. If you check the actual text of an average conversation, people are speaking past each other the majority of the time. We correct, but the vast majority of the time we have no awareness.
We rewrite our memories to fit our mental schemas, to the point where someone describing what they saw is more likely wrong than right, even in dramatic ways (see a stabbing? Did the man in the suit or the man in rags commit it?). We suffer change blindness, confirmation bias, prejudice. We rationalize and justify to a ridiculous degree, and in the few cases we become aware of this, that awareness does not allow us to change the behaviors. If someone is wrong, the worst way to get them to change their stance is to show them they are wrong.
We're born helpless and spend a strong percentage of our lives learning how to not die. We transfer information inefficiently and inaccurately, with every generation biologically starting from scratch. We spend 1/3 of our lives unconscious (in addition to that helpless period), and almost 2 decades becoming ready to function independently, at which point most people have only a few years before they dedicate an even larger portion of their life to bootstrapping the next generation.
The Turing test exists because we can't even define what we are describing as obvious (and as I mentioned previously, humans fail the turing test often). Almost everyone that drives has been in some form of a car accident, the overwhelming majority of which were caused by human error. We burn plants so we can inhale the (toxic) vapors, we overestimate rare risks and underestimate inevitable ones, we drink poison for fun, and enjoy it because it reduces our thought processes, we gamble money with the intent of winning more money when it is well known the odds of winning are terrible. We entertain ourselves with habits that target innate thinking fallacies and call it "gamification". We ignore issues that we have confidence will arrive, and then react with panic when they do arrive because we've made no preparations. We declare human life to be so precious we don't want to end the potential, even to the extent of stopping people from preventing that potential, but don't take action to support that life once it is born. We look at a list of flaws like this and shrug it off. We oversimplify, stereotype, and categorize even when errors in those systems are pointed out to us. We don't like being wrong SO MUCH we'd often rather continue being wrong than accept that we were. We eat foods that are unhealthy in unhealthy quantities, and produce and purchase foods that directly encourage those habits. We have short attention spans and short (and inaccurate) memories.
Comparing current AI approaches and human thought is apples and oranges, but to mock AI efforts as function approximation ignores how much function approximation we do. We function, and the diversity of tasks we function at is indeed amazing. The complexity and adaptability of the human species is awe-inspiring. But doing amazing things is still not the same as doing them _well_.
I don't say this to claim humans are terrible. I'm pointing out that we are poor judges of quality and that any system following different fundamental restrictions will have different emergent behaviors. I expect that a car that can drive more safely and more consistently than a human is both a complex problem and much easier than most assume. Driving _well_ is harder, but driving better than a human? Not nearly as hard. What percentage of drivers do you think consider themselves to be "above average"?
That’s another reason why humans are so amazing: we correct so well we don’t even notice we’ve corrected anything. Our eyes see a continuous visual field in color even though we only see color in the center of each eye, our gaze jumps around all the time, and the image is heavily distorted, has blood vessels interfering with capture and the nose obstructing part of peripheral vision. And yet you see none of that. We can’t individually control any of our muscles, yet we have fine motor skills that require strict control. We achieve this through a visual and proprioceptive feedback loop, which corrects our previous memory of doing the same thing.
Driving better than a human from vision alone is extremely hard. Driving better than a human in an area for which you don’t have a 3d capture is extremely hard. Driving better than a human when it’s raining or snowing is extremely hard, etc, etc. Don’t be so eager to discount humans.
Waymo is going to have a service on the road in Arizona for the general public this year.
Sure, it isn't cars in every environment without having seen the territory, but it is an incredibly useful product that can be extended into more cities as the technology improves. And plenty of cities around the world have similar conditions to what is being used in Arizona.
Low speed, AI driven electric buses on limited routes are already all over the place too.
Driverless trucks on certain runs are likely too, within a decade.
Testing autonomous cars in areas with inclement weather would be a good indicator of progress. So far no autonomous car can remain autonomous in heavy rain or moderate snow, and no car can predict if a kid standing on the sidewalk will dash in front of the car all of a sudden. Or if the thing being blown across the road is a plastic bag or something more substantial. Or where to drive if road markings have worn out or otherwise became invisible, or how to avoid a pothole, etc, etc. All that stuff which you do without thinking, all of it is unsolved.
Wake me up when they’re testing L5 in Alaska in winter, using a car with no steering wheel. Then I might consider trusting my life to it.
So what you are saying, it takes time to perfect the technology? How is that the same as "no one has a foggiest clue how to do it"? There are many people with lots of clues.
To anyone upset by MJ's Medium post, please watch the video. It's a much better representation of his thoughts on the current state of machine learning than the blog post.
> The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation?
Plug: I work at a company (https://www.pachyderm.com) whose product is designed precisely to track data provenance across pipelines and through a company's larger data-processing operation for this reason
You all should really play that up more in your messaging - "provenance" is one of the hardest and least-addressed components of building AI/ML/data science systems that actually have measurable impact (rather than analysts making plots and speculating). In general having a structured, centralized representation of business processes is super valuable I'm sure.
If you write a blog post describing how critical that is to practical data science efficacy with some examples I bet you'll end up in a bunch of VP inboxes.
What are we defining as AI now? It seems like AI has had a lot of hype in the past few years, but I really don't think we're even close unless it's created by mistake.
We don't understand the brain fully, we don't understand consciousness, we just know so little in the grand scheme of things.
Do we have the computing power to come anywhere close to what we need, will we have that computing power any time soon without a major breakthrough?
This sort of distraction arises whenever we try to adjust logic to words and not the other way around. It is said that this kind of distinction is only relevant to scientists. The article is very convincing though, that in the case of AI, the general public must be able to make those distinctions.
Where is the end game? The problem with this kind of AI revolution is that it relies on data about us. And this is responsible for the feverish attitude towards collecting or scraping data, sometimes from the public domain, but more than usually, from us. I for one, am having none of it.
It will probably not end very soon. The author did mention some concerns about privacy, but only once, and then moved forward with his reasoning like nothing had happened. Also, it’s disheartening to see that his reaction to a piece of technology that could have probably killed his unborn kid (the medical device at the beginning which was fed bad data) was to say “we should use more technology!” He was not questioning the underlying premises behind the whole operation at all: like, is it OK to “let” a machine help a technologically untrained person (the doctor) decide whether my unborn kid should undergo a life-threatening intervention or not?
Does anyone who's working/worked in the Healthcare "Intelligence Augmentation" have something to say about the challenges they've faced in setting up an "Intelligent Infrastructure"? Maybe someone who's worked with the DeepMind Streams application?
A lot of people don't think about how hard validation can be. You need a lot of high quality labeled data to confidently say that your algorithm works. I don't see a way around this even if learning algorithms become very data-efficient.
I’ve been finding it difficult to find concrete examples of where AI is useful. Suggestion engines? Alg optimization? What would be something my parents would recognize that is being helped by AI?
I like the category of "augmenting human skills". Google's Quick Draw is an example of augmented drawing, but you could use the same concept for writing, creating music, even programming. DeepFakes are basically the beginning of augmented video production. Imagine if you didn't need a green screen and body tracking suits to replace any person or scene in a movie with whatever you want. The whole field of generative deep learning is interesting and, to me, a really unexpected result of AI research.
The key is to not think about it too much. Your "logic" has no place here. Anyway, once the revolution hits your parents are gonna have those autonomous microwaves and self-cooling doctorbots they've been craving. The future is very, very, very bright.
...AND IT WON'T for a VERY LONG TIME. Those of us who know the origins of Ray Kurzweil knew back then he was nothing special, and his ideas were derived from much smarter people around him.
As I recall, the article in WIRED from like 20 years ago, talked about how he missed his dad, and how in the future you would be able to take a room full of all his dad's old crap, and AI would be able to recompile his dad into dad 2.0 in the cloud!
(he didn't use these terms but that was the gist of the ridiculous and absurd WIRED article interview with him.)
Why do I bring up Ray? It's because circus acts like Ray and the Singularity University sold a bunch of people certain goods, which were VASTLY overestimated in terms of delivery times, and in terms of the goods themselves.
First of all, many folks don't want to admit this, but WE MAY NEVER BE ABLE TO CREATE A SYNTHETIC CONSCIOUSNESS! I think we will eventually, but it's definitely not a certainty. It just may not be possible for a completely synthetic 'consciousness' to exist.
This type of hype has become almost religious in nature, just like the folks who really think Moore's Law is an actual scientific law and not just an observation and prediction based on very limited data.
In conclusion, you'll get your AI just like those folks in the 50's, or was it 60's? ...well anyway, back when they had those world's fairs and predicted flying cars.
We're just not there yet. We don't even have basic foundations to build this yet. There is always a possibility of a brilliant individual, who may leapfrog humanity, but for right now, it's just vaporware.
AND YES. Even IBM's Watson and Google's Go-winning "AI" are just vaporware: machine learning and big data sets. It's not AI even a little bit.
Sorry folks, I love futurizing, and inventing words ;) , just like everyone else, but... if AI is in the stadium, we aren't even in the parking lot.
---
And now, after my rant, I shall read this article since it looks really interesting. I'm surprised it's on Medium.
Not sure how you can claim things like Deepmind are "vaporware" since it achieved exactly what it was intended to do. If you disagree that it should be called "AI", well that just comes down to semantics, but I think "Artificial Intelligence" is exactly what it is.
Of course it will be possible to create synthetic consciousness eventually, why wouldn't it be? It may take 10 years or 10,000 years, but if you think it will never happen for the rest of human existence, that is nonsense. If it already exists in nature, then there is absolutely no reason why it can't be done synthetically.
This came as no surprise to me, and hence, I was spared the pangs of disillusionment, but there are countless people who have actually dealt with IBM's marketing gimmick, Watson, and have come away with a serious hangover:
---
“Watson is a joke,” Chamath Palihapitiya, an influential tech investor who founded the VC firm Social Capital, said on CNBC in May.
A Reality Check for IBM’s AI Ambitions - MIT Technology Review
The final article is the most telling in its revelations. Of course, you must speak the language to understand IBM's Gartner-magic-quadrant-esque reasoning: their argument is that because a bunch of businesses they convinced to sign up have 'Watson' stamped on their engagement contracts, it's PROOF that Watson is breakthrough technology, catapulting research into a new era of big data and machine learning... lol
I'm not saying some of the things that IBM claims it does cannot currently be done, but I have no doubt at all that IBM is not doing them, and that behind closed doors Watson is most certainly a complete joke.
It's enough to replicate one of the 'AI' projects to see the vaporware yourself. You don't need to go above the university project level. Even a hobby-level project will be enough to convince you of the absurdity of current claims of AI.
What do you even mean? DeepMind is not vaporware, it really did revolutionize the world of Go. What about self-driving cars? They are actually working! Anyone who calls AI vaporware just has bizarre expectations that are based in sci-fi rather than reality.
I've discovered recently the lectures of Prof. Jordan Peterson. You can infer a lot of links between human behavior and shortcomings of AI. In some lectures he even touches the actual problems although on a tangent (e.g. what is relevant in the infinite complexity of the world and on the infinite levels on which you can analyze the world).
Long story short, my claim is that the AI revolution will happen when the AI agent will be able to abstract on a task experience and transfer that to a different task AND abstract away that and transfer that to the set of tasks and so on.
Sort of like AlphaZero next being able to fly helicopters or wash dishes or cook a meal or whatever. Not anytime soon.
Thanks for the support. Being the voice of reality is a lonely road, like introducing antibiotics in a petri dish of wildly speculating bacteria, the revulsion is instantaneous.
"How dare you claim IBM's Watson isn't BREAKTHROUGH?!"
"How dare you not want the same buggy useless software which powers siri and cortana to now be used to drive cars?!"
What are you cynical reality-based folks talking about! Saudi Arabia granted citizenship to its first robot. Clearly you hate science! If AI wasn't real yet, why would they do that!?
Saying you don't want self-driving cars really IS insane. They are already way better at driving than humans in many situations, and they are only going to improve with time.
Even if you don't believe they are good enough now, to say that they are NEVER going to be good enough is ridiculous.
I saw Prof. Jordan give this talk in person, and the 2nd paragraph resonated with me. I wish there was more critical thinking involved in science/research/higher ed/industry, but unfortunately there's a lot of cheap money floating around with "experts" slurping it up without care for outcome.
"But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly."
Like we have defensive programming: assert this field is not a stupidly large number. Perhaps we can encode our assumptions as probabilistic asserts, and defensively check that those assumptions are not violated. E.g., the current distribution looks like my training data.
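One possible shape for such a probabilistic assert (purely a sketch; the KS test, the 0.01 threshold, and the data are illustrative assumptions): alongside an ordinary range check, compare an incoming batch of a field against a reference sample from the training data and fail loudly on drift.

```python
# Sketch of a "probabilistic assert": an ordinary range check plus a
# two-sample KS test against a reference sample from training data.
# The test choice, threshold, and data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
training_sample = rng.normal(loc=100, scale=15, size=5000)   # stand-in for training data
incoming_batch = rng.normal(loc=130, scale=15, size=500)     # stand-in for drifted live data

def assert_reasonable(values, low, high):
    # ordinary defensive check: no absurdly large or small numbers
    assert np.all((values >= low) & (values <= high)), "value out of expected range"

def assert_same_distribution(values, reference, p_threshold=0.01):
    # "probabilistic assert": fail loudly if the batch looks unlike training data
    stat, p = ks_2samp(values, reference)
    assert p >= p_threshold, f"distribution drift suspected (KS={stat:.3f}, p={p:.4f})"

assert_reasonable(incoming_batch, low=0, high=1000)
assert_same_distribution(incoming_batch, training_sample)     # raises on the drifted batch
```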
Michael Jordan is just mad he missed the boat on deep learning. Must be tough being that brilliant and at the same time get left in the dust by algorithms from the 90's and just sheer brute force. The AI revolution is here, from Google search to Uber pool to auto correct to recommendation engines, it is just a slow process.
??? that's not his lab, it's a blog site whose sole faculty advisor is Sergey Levine. I don't want to be a hater here, I'm just stating facts that M.J. is first and foremost a mathematician, and that deep learning has been tough for mathematicians in ML / stats.
The word he uses is "irritated" but on a different topic. On the subject at hand, what I'm seeing are significant successes on narrow niches of tasks. I'm not seeing a revolution yet.
Michael Jordan is a mathematician far more than an engineer. I've never heard of any deep learning papers or open source software coming from his lab. Like a lot of mathematicians though working in stats and ML, he's been forced to adapt. There is lots of resentment amongst mathematicians about the current AI revolution, to me this reeks of that. I could be wrong!
And also Michael Jordan is definitely _not_ at the forefront of deep learning or deep learning research (he definitely is in stats, no question). His lab is not even in the top 20 labs in the country for deep learning research. Also IMO nearly all the people at the forefront of deep learning are in industry right now.
The AI Revolution is happening right now. There's enormous proven potential which is materializing in ever increasing rates.
The first generations of hardware accelerators are slowly coming out and they're going to leave everyone astounded.
It's a bit like the early days of the internet, it's not always the most practical solution, but it definitely works.
This exact tired old argument is being applied to:
blockchain / DLT
AI
VR
CRISPR in some senses
Unless you have a reason to compare it to the "early days of the internet", don't. A technology being new and unknown doesn't make it like the internet.