I think the author is attacking a strawman. What most successful technologies have in common is that their potential was recognized early on and many people worked on realizing it, at least after breakthrough proofs of concept. That was the case for airplanes, televisions, computers, mobile phones, etc.
But what people "dismiss" -- although I would say question -- is not the technology's ultimate success, but its timing. Machine learning, and eventually AI, was Alan Turing's dream years before the first computer was built. He talked to his friend Claude Shannon about supervised learning in 1943, and Shannon proposed letting computers absorb culture and the arts too (playing them music, in particular, an idea Turing at first found surprising). In 1946, Turing wrote about unsupervised learning, talking about equipping computers with wheels, arms and cameras and letting them roam the countryside. Neural networks were invented circa 1942, and Turing started researching them in the late '40s. The algorithms used today for machine learning were invented in the '60s, but the theory behind them pretty much stalled in the '90s.
The question, then, is not whether AI is ultimately achievable, nor whether current machine learning is useful in some domains. The question is how far away (actual) AI, or generally useful machine learning, really is. Given that we've been working on the problem for 75 years now, that no major theoretical breakthroughs have been made in the past few decades, and that most recent successes are due to better hardware with uncertain future scalability, I see no rational reason to expect a breakthrough in the next 5 years (very smart people in the '40s, '50s and '60s were equally convinced that AI was just around the corner). I would never dismiss the promise of AI, but I would certainly question the unbridled enthusiasm some people have for machine learning in its current form.
> Where Cellnet missed it, Orange got much closer to the actual capability: the future is wire-free. Why is your phone tied to the wall of a particular room with a piece of wire? Cellnet was guessing about applications while Orange talked about the breakthrough.
This part caused problems for me. For one, Cellnet survived, so we have evidence that whatever they did wasn't the wrong choice. Second, a sales pitch or value proposition is not a technology prediction; it's an attempt to get money in exchange for a product. Talking about specific applications isn't just legitimate, it's fairly important for demonstrating practical utility to a buyer. This has been studied widely and is falsifiable. Plus, anecdotally, I've never bought something because a sales guy said "but it's a breakthrough!"; I pay for things when I see clear value to me.
Quote: "Bringing this back to 2017, I've suggested elsewhere that voice interfaces do not have a roadmap to become universal computer interfaces or platforms. Machine learning now means that speech recognition can accurately transcribe the sound of someone speaking into text and that natural language processing can turn that text into a structured query - that's one breakthrough. But you still need somewhere to send the query, and it is not clear that we have any roadmap to a system that can give a structured answer to any query that any person can pose, rather than just dumping you out to a keyword search of the web. To even start making voice interfaces useful for general purpose computing rather than for niches, I would suggest that we would need general AI, which is (at best) a few decades away."
An interesting way to distinguish the potential of new technologies. That said, someone might argue that a fully autonomous car would also require general AI. If the answer is "no, it just needs fewer casualties than human drivers", then by the same argument a voice interface will become abundant simply by making fewer mistakes than typing, or by answering queries better than a text interface. The roadmap will then probably include more sensors than microphones alone, and it's hard to dismiss such a technological progression just because at some point we might need general AI to keep the progress going.
The entire field of UI design exists because it's almost impossible to capture user intent even when you present them with a damn button that has their intent written on it.
UI design is about guiding people through a series of explicit and difficult-to-fuck-up gates to make it clear what they want.
What about voice inputs or AI is so powerful that it could reverse that trend entirely, toward a UI where you can say whatever you want and still end up with something that's impossible to fuck up? (See the toy sketch below the footnote.)
I really think the AI bulls should spend just a little time on a serious, professional interaction* design team working on a product that's critical to daily commerce before they go announcing that voice makes screens obsolete.
\* not graphic design, not motion design, not UI design. Nuts-and-bolts interaction designers who are responsible for metrics.
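To make the contrast concrete, here's a toy sketch (my own illustration, not anyone's real product code): a button gate can only ever emit one of a fixed set of intents, while a voice UI has to guess the same intent from unbounded language.

```python
# Toy contrast between an explicit "gate" UI and a free-form voice UI.
# All option names and keyword rules are made up for illustration.

CHECKOUT_OPTIONS = ["Pay now", "Save for later", "Cancel"]  # explicit, finite gates

def click(option_index: int) -> str:
    # A button press can only ever mean one of the enumerated intents.
    return CHECKOUT_OPTIONS[option_index]

def interpret_utterance(utterance: str) -> str:
    # The voice UI must map unbounded language onto the same three intents.
    text = utterance.lower()
    if "pay" in text or "buy" in text:
        return "Pay now"
    if "save" in text or "later" in text:
        return "Save for later"
    return "Cancel"

print(click(0))  # -> "Pay now", unambiguous by construction
print(interpret_utterance("don't pay yet, just keep it around"))
# -> "Pay now": the keyword heuristic latches onto "pay" and inverts the
# user's intent -- exactly the failure the explicit gates are there to prevent.
```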
> What mattered was seeing the value of the capability, not predicting any particular applications
How can you separate the value of a capability from its potential applications? Surely he doesn't mean scientific/intellectual value, so how else can a capability have value?
With a product as a platform, you may understand that there are problems in a particular space without really understanding the problems themselves. What you can understand is how the platform might enable problems to be solved, even without knowing all the problems that will eventually be solved. Generally, to be sure of this, you should see at least 2 or 3 solutions you could have conviction in, but it matters more to understand that the enablement would also solve many problems tangential to the ones you have conviction in. You can use this, somewhat, to size a future ecosystem market, but you have to be wary of your predictions about the number of solutions you'll enable. You're also very much fighting between too early and just right in terms of timing. Looking for solutions to problems will help you judge the value of the capability, but trying to predict all the applications of those solutions is a fool's errand.
That quote stood out to me too. My question is: if there is a new capability (Bitcoin is a good example), how do you invest in that capability? Sure, you can pick a company working on some application, but they will almost certainly miss the real value of the capability.
Missing real value can be the same as showing potential value. If you have conviction that there is real value, then you're banking on many misses, with some players capturing most of it and a smaller number using it to its full potential. From there, things like commoditization come into play. However, you should be mindful of tying capability to value: just because something is capable does not mean it must be adopted as valuable. Plenty of things that are not particularly capable are overvalued, and many things that are incredibly capable are undervalued. I think the important part to look at is whether making this capability accessible solves very many problems. If so, then you can start to think about a few solutions, but it isn't meaningful to try to understand all the applications of those solutions (because you typically don't understand that abstract customer). Cloud is a good example of this.
It's not clear to me that you can. How would you have invested in air travel in the time of the Wright brothers? None of the companies that eventually came to dominate existed then.
I think in this light you either invest in an inventor or radical innovator, or in a component of their supply chain. One says: I'm going to make this happen and all these things will follow; the other says: this thing is happening, so we should supply the things it needs. This is why the path from venture (paradigm shift) to public (shifted) is important: the best public companies provide something fundamental to other businesses, they are enablers. However, over time you become a commodity item on a supply chain and run the risk that the "enabled" companies will out-innovate you as they build up their own capabilities and margins.
The article talks about superpowers, which I take to mean the technology can do some general thing that could not be done before. In that case people can invent a great many specific things to do with it, and so it is likely inevitable that some of them will be great successes. So the fact that, when the technology is first invented, a given individual can't think of a specific use that would be very valuable doesn't mean much.