Rather than decry anything that resembles what we have today as old-fashioned and not futuristic, we should first ask: "why are screens so common today?"
We've been using our hands for years: keyboards, trackballs, mice, joysticks, custom wheels for driving games, the Nintendo light gun for Duck Hunt, and so forth. Take that, and combine it with something else that should be jumping out at you from the article: every picture with a hand shows it holding a different object.
Screens are multi-purpose. They are perfect for scenarios where one control must meet many needs (compared to the pictured hammer, which at best meets two). Let's say we could imagine some multi-purpose, tactile, 3-dimensional interface... say, a hologram from Iron Man or Star Trek. Now compare that to the article's mention of Alan Kay's doodle, drawn pre-integrated circuit. The difference is huge... we don't even have the physics knowledge to describe how a tactile hologram could exist at safe handling energies, let alone in open air, as described by those fictions. By comparison, Alan Kay combined things that already existed... battery, circuits, screen, and keys... and imagined what could happen if they were smaller and portable.
We had things much more similar to what he drew decades before we had the iPad; the iPad is merely an incremental improvement over some of my own childhood toys, not to mention what was already available a few years after his own drawing, such as the Speak & Spell ( https://en.wikipedia.org/wiki/Speak_%26_Spell_(toy)#/media/F... )
Because putting in a damned capacitive touchscreen is cheaper than even one good button.
I spent a non-trivial amount of time finding devices that have tactile, physical interfaces, and I certainly paid a premium in every case. Putting a capacitive layer over an LCD is goddamn cheaper, for the reasons you describe.
It's not that the screens are better, or even good. It's that they are so versatile that they are the cheapest, lowest common denominator option for interaction design.
In a sense, Star Trek predicted this: at least part of the reasoning behind flat-panel touchscreens everywhere was that they were a very, very cheap prop to make compared to complex physical controls.
The touchscreen is the natural evolution of unending drop-down menus that require several trips through every time you do some operation, because the application developers had no time for UX research.
This just isn't right. It's not the cheapness, not by far. The iPhone didn't revolutionize the world because it was cheaper; it did so in spite of its price.
It revolutionized the world because a big screen means:
- More content you can view at once
- More flexible and powerful apps that can have more, and more intuitive, controls and display more information
- Touching an app and interacting with it directly means you can manipulate it far more intuitively and quickly, with direct feedback (remember scrolling webpages before touch with tiny ball wheels or up/down arrows? yeah, it sucked)
Resistive touchscreens sucked, but capacitive ones were more expensive, and they were the game changer. I think you have it backwards.
Except I'm not talking about the tablet/slab-phone form factor; I'm talking generally.
The iPhone was operating in a market where touchscreens were already the norm; it just had a better touchscreen.
It was a general lowering of costs, coupled with better networks and the ability to use more modern web pages, that truly revolutionized the market. Capacitive touchscreens were nice, but not the be-all, end-all.
In fact, my first experience with the iPhone 3G was that it was clunkier than the smartphones we were using (and it definitely felt cheap).
At least some phones, for a time, integrated both a touchscreen and buttons, which had significant UX benefits (buttons can be navigated by tactile feedback alone).
The iPhone "revolutionized" the world[0] because it gave people a portable computer that combined multiple separate devices into one, with the (retroactively added) possibility of extending it with further capabilities.
This justifies a smartphone in its current form. It does not justify why your car stereo is operated by a touch screen.
The truth of p_l's comment is plainly evident if you look at household appliances today. You'll note that ovens and washing machines don't even have touch screens. They have touch panels with a fixed image underneath (same with pre-CGI Star Trek interfaces). They're a UX nightmare, but they're used precisely because capacitive touch detection is very cheap (doubly so at the low fidelity required in these cases): it's just a plastic/metal sandwich with zero moving parts.
--
[0] - iPhone is the one that's most remembered, but not the only phone in that space in its era.
A big problem is that you don't really choose a car based on whether it has a touchscreen or not. And then there's the "distracted by the shiny" aspect, which has to be tempered by actual experience with touchscreens.
Here's a plausible fantasy: some software will include 3D-printable files describing a physical interface device, into which some standard electronics can be plugged.
As long as it doesn't need to be hard-realtime and infinitely reconfigurable, it's very feasible.
Like, imagine a game in which you hammer nails: you would 3D-print a hammer and stick in an accelerometer module and an MCU/battery/Bluetooth module.
Point being, there are definitely ways to make interfaces that are, or will become, really usable.
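To make the hammer idea concrete, here is a minimal sketch of the kind of event-detection logic the printed controller's firmware (or the game on the receiving end) might run. Everything here is hypothetical: the function name, the threshold, and the sample trace are illustrative, not part of any real product.

```python
# Illustrative sketch: turning raw accelerometer magnitudes from the
# imagined 3D-printed hammer into discrete "strike" events.
# All names and numbers are made up for the example.

def detect_strikes(samples, threshold=30.0, refractory=5):
    """Return indices where a strike (acceleration spike) begins.

    samples    -- acceleration magnitudes (e.g. in m/s^2)
    threshold  -- magnitude above which we call it an impact
    refractory -- samples to ignore after a strike, so one physical
                  impact doesn't register as several events
    """
    strikes = []
    cooldown = 0
    for i, magnitude in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1
        elif magnitude >= threshold:
            strikes.append(i)
            cooldown = refractory
    return strikes

# A gentle wave, then two sharp impacts separated by quiet:
trace = [1, 2, 3, 2, 45, 50, 12, 2, 1, 1, 1, 60, 8, 2]
print(detect_strikes(trace))  # -> [4, 11]
```

In a real build, the MCU would stream samples over Bluetooth and this thresholding could live on either side of the link; the point is only that the "smarts" needed for such a single-purpose controller are tiny.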
> We've been using our hands for years: keyboards, trackballs, mice, joysticks, custom wheels for driving games, the Nintendo light gun for Duck Hunt, and so forth.
These are generally quite poor interfaces compared to what is conceivably possible. A precise pressure-sensitive stylus and/or multiple fingers on a responsive high-resolution display are much richer than these: higher bandwidth for a more directly relevant input. But a finger or stylus on a screen is strictly 2-dimensional, without any haptic feedback.
Even such tasks as rotating and moving a computer model of a single 3-dimensional object are extremely cumbersome and unintuitive using a mouse, keyboard, joystick, steering wheel, etc.
Now imagine using those tools for e.g. controlling a simulated puppet in real time. Pretty much impossible. And yet people control physical puppets with amazing precision and flexibility with some strings and wooden sticks.
And that’s just moving an already existing collection of shapes. What if I want to design a new shape: say, a newly imagined knot, a new playground climbing structure, a chair, or the face of an imagined deep-sea creature? Tools like pipe cleaners, modeling clay, paper & scissors, wooden sticks and a hot glue gun, ... are dramatically richer and more efficient than any of these electronic tools.
A physical multi-purpose, tactile 3-D interface is definitely difficult to even conceive of, but seems (relatively) straightforward to spoof. Imagine a VR interface with an integrated pair of sensory feedback gloves. This would allow you to 'touch' objects and feel texture. To create the illusion of solidity and rigidity, you could integrate some form of shoulder-down exo-suit, which would articulate with every joint of the arm, wrist, and fingers, and could resist motion in any or all joints. Combined, these three pieces could potentially spoof a wide range of tactile interactions.

Is there a VR wall 1 ft in front of you? The suit would become rigid at that point, spoofing a solid surface. Are you in VR handcuffs? The suit will physically prevent you from pulling your wrists more than 4" apart, reproducing the restraint. Is there a VR ball on the table in front of you? You place your hand over it and pick it up with your fingertips, feeling the sensation via the gloves. The weight of the ball would be spoofed via downward force/upward resistance from the point of the exo-sleeve that is in contact with the ball. You try to squeeze the ball, and the suit could provide varied degrees of resistance, giving a sense of the ball's firmness. Etc., etc.
The iPad doesn't, but it looked to me like Alan's drawing did. That's why I linked to the image of the 1979 Speak & Spell. Not quite the same, though with cheaper memory it easily could have been; all of the other tech was there, and plenty of real examples predate it.
All of the things that made the iPad unique were incremental improvements. That is why I took issue with the assertion that Alan invented/predicted the iPad... there are so many other examples of tech resembling his drawing far earlier in time.