A brief rant on the future of interaction design (2011) (worrydream.com)
335 points by li4ick on Sept 30, 2019 | 153 comments



This is probably my all-time favorite article on the internet. I've brought it up in conversation dozens of times when referring to ways to control computers. It's why I don't really use phones for anything beyond answering phone calls and handling urgent tasks when I'm not at home or the office. I just feel like I have very little control over these devices, unlike the massive control of a keyboard or mouse, or even better, a fork and spoon at dinner (although, despite their high degree of control, you can't control a computer with them).

Tech innovators and their users/consumers need to think outside the box more often about ways to control computers. Given a physical UI device, you can place it on a spectrum from computer-friendly to human-friendly. Touch screens are computer-friendly: they're trivial for computers to interpret and for developers to program against. The downside is that users feel distanced from their control, so the devices are difficult to use efficiently. But developers are better at UI than users are---the difficulty should be shifted onto them instead, in how they build their software/OS.

I've imagined things like stress balls with "electric muscles" for feedback to control various applications, a cube with the ability to change its texture with suction/servos, and things like https://roli.com/products/seaboard or https://www.expressivee.com/buy-touche except for general computing applications rather than music performance. Who knows what could be invented if customers could be convinced their computing experience can be improved with better physical interfaces.


As an engineer I finally got to take the training course our techs go through and experience how a tech uses the software which we (engineers) develop and maintain. It was eye-opening. One example relevant to this discussion: the old version of the equipment had physical potentiometers which the tech would adjust to get a desired meter reading. The new equipment has you just type a number into a textbox in a Windows program. BUT - it doesn't even provide a slider or up/down arrows, so you have to play The Price Is Right, inputting guessed over/under numbers until a measured output in a second text box matches a desired value. It's clear that when the transition was made from physical controls to software-based controls, not enough thought was given to ergonomics and how it would be used by an actual human.


Maybe a decent place to implement a computerized linear search algo for your techs?

Sounds like they're setting params for a PID/control loop, and second-order PID/control loops are notoriously finicky, so I'd avoid automating that blindly. But a simple guided search for params, where the program binary-searches while the tech chooses the best value among the three options [min, mid, max], probably gets them there pretty fast. Though probably still not faster than a manual dial.
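A minimal sketch of what that could look like, assuming the output responds monotonically to the setting (which is exactly where the PID caveat bites) - all names here are hypothetical, not from any real tool:

    def guided_search(lo, hi, read_meter, target, tol=0.01, max_rounds=20):
        """Human-in-the-loop binary search over one parameter.

        read_meter(x) stands in for 'apply setting x, read the second
        text box' - in practice that's the tech typing and reading.
        """
        for _ in range(max_rounds):
            mid = (lo + hi) / 2
            measured = read_meter(mid)
            if abs(measured - target) <= tol:
                return mid                # close enough to the desired value
            if measured < target:
                lo = mid                  # output too low: search upper half
            else:
                hi = mid                  # output too high: search lower half
        return (lo + hi) / 2

With a finicky, non-monotonic loop you'd fall back to the [min, mid, max] variant where the tech picks by eye each round.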


As a fan/user/creator of modular synthesizer gear and a design observer in general, I can assure you that you are quite definitely not the only person who feels like this. Vinyl record sales surpassed CD sales for the first time in decades not long ago; people crave physical interfaces, and modular synths and hardware in general are getting more popular despite the ton of plugins you can get for your music software. People are going back and building physical interfaces for menu-diving synthesizers from the 80s like Yamaha’s infamously unprogrammable but amazing-sounding DX7.

People are rediscovering mechanical keyboards etc.

Modular synthesizers are a wonderful playground for physical interface design. Of course you want to expose a lot of your module's functionality, but then your panel gets big and your module ends up taking a lot of space. Of course you can get away with using a screen and an encoder, but how well can you use that live while making music? Of course you can add 20 dials, but maybe 3 clever ones that abstract over those 20 in just the right ways are better? Also: people might not use your module for a month; how can the design remain in their head just enough to kick a spark?

Physical interfaces. Use them blindly, while wearing gloves, in your pocket.
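To make the "3 clever dials" idea concrete, here's a toy sketch of a macro control that morphs every underlying parameter between two designer-tuned snapshots - the parameter names and values are invented for illustration:

    def macro(knob, snapshot_a, snapshot_b):
        """Map one macro knob (0..1) onto many module parameters by
        interpolating between two hand-tuned snapshots."""
        return {name: (1 - knob) * snapshot_a[name] + knob * snapshot_b[name]
                for name in snapshot_a}

    # One dial sweeping cutoff, resonance and drive together:
    dark   = {"cutoff": 200.0, "resonance": 0.1, "drive": 0.0}
    bright = {"cutoff": 8000.0, "resonance": 0.6, "drive": 0.4}
    params = macro(0.25, dark, bright)

The design question is then which snapshots (and which interpolation curves) abstract over the module "in just the right ways".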


Speaking of dial interfaces, this is how you manipulated 3D models on your old IRIX or Sparc workstation: https://en.wikipedia.org/wiki/Dial_box . Its audio-visual brother, the jog-and-shuttle controller, is still going strong: https://www.amazon.co.uk/Contour-00496-0-Design-ShuttleXpres... . What I'd like to see is a reversible debugger manipulated by a jog-and-shuttle controller.


I agree with all of your points, especially around synths. I'm getting back into making music right now, and the gear lust around physical synths is strong.

> Vinyl record sales surpassed CD sales for the first time in decades not long ago

This says something about vinyl but even more about CDs. Vinyl sales are up but still only about a fifth of what they were in 1982.

CD sales have plummeted. That makes sense, because the form factor really has nothing going for it. It's a digital medium, so in terms of sound quality it's exactly equivalent to the SD card in your phone. But the latter holds about a thousand times more, is physically a hundred times smaller, is read/write, and can be easily updated from the Internet.

CDs are a dead-end technology.


I have a few thousand HP of Eurorack and definitely agree that modular synths are easier, more fun, and more interactive to control than most other interfaces. But even then, they're clunky for some things, and I feel there are further designs that could achieve control even more efficiently.


I agree. There is even more that's possible, and not all modules are really well designed (e.g. why did Dieter Doepfer feel the need to swap the otherwise identical inputs between his SEM and Wasp filters?). Other modules just don't go far enough; many lack attenuators/attenuverters (for economic and space reasons).

You have to really fight to get a decent physical interface, to get a good one takes even more.

I believe the future of modular interface design lies especially in more visual feedback, which makes usage easier and more intuitive, and systems more reliable during live usage. Here we can learn from modular and semi-modular synths on the software side.


Random aside on the "analog is in" train: I've heard three different people reference cigarettes as "analog" (as opposed to "digital" vapes) in the last week.

I keep waiting for the other shoe to drop on hipster backlash against social media.


For me vinyl was never about the sound; it was about having to actively take the thing out of a beautiful cover, put it on, and listen to it. I could easily imagine something that would do the same with digital storage.

That aside: there are all kinds of modules in modular synths — digital, analog, hybrids, electromechanical stuff, ... The first thing people ask me when they see me playing somewhere is usually: “is this analog?” or “how do you know which cable does what?”

The redeeming quality of a modular synth lies mostly (nomen est omen) in its modularity. It is, however, very physical in one sense: if you didn’t create a connection, it doesn’t exist.


> People are rediscovering mechanical keyboards etc.

But this seems like so much a matter of failure. I grew up in the 1970s, with Moog and Arp synthesizers the best you could get.

The potential of that plus programability on every level seemed astounding.

Obviously, the way that unfolded was into a world where fine-grained interactions within instruments were abandoned. But it doesn't have to be that way. That people aren't playing with interfaces like that today just means the promise has been abandoned. It doesn't have to be abandoned.

I mean, I can afford a spinny MIDI wheel, but I can't afford a touch-sensitive keyboard. I should be able to afford a touch/speed-sensitive keyboard.


I agree; I was actually referring to mechanical computer keyboards.

As a guitarist I was always curious about more haptic and less piano-like means of control. Have you checked out Arturia’s MicroBrute? A touch keyboard and a synth for ~300 €.


The fact that millions of people do a majority of their computing on their phone contradicts some of your argument.

I also think it highlights one of the things this article misses; that one of the important design decisions is amount of effort the action takes.

Any interface that requires full body movement is going to be too draining to use all day. It is a lot easier to touch a screen than move your whole body to manipulate a thing, even if that would get you more fidelity.

For all the downsides of a touchscreen phone, people are able to use them one handed for hours on end.


> For all the downsides of a touchscreen phone, people are able to use them one handed for hours on end.

You actually picked an example that doesn't support your argument very well: back in the day, people were able to manipulate dumbphones one-handed very dexterously and even without looking (famously, teens could text their friends while their phones were hidden in their pockets) whereas the modern smartphone demands the use of eyes and often both hands. One-handed use with most modern phones is much more difficult and error prone than with the dumbphones of yore.


> famously, teens could text their friends while their phones were hidden in their pockets

Confirmed. I could do that, because all the keys had a small set of functions and input delays were entirely predictable (a benefit of running soft real-time firmware instead of a "regular" OS). And, of course, they were all actual keys. It's easy to develop muscle memory in such conditions.


> Any interface that requires full body movement is going to be too draining to use all day

I didn't say the ideal solution should take more effort than a touch screen. In fact, it will take less. Touching and dragging glass takes enormous effort compared to holding a stress ball and twitching your fingers in certain ways to control a computer. The ideal physical interface converts muscle power to "information control" with maximum efficiency.

> people are able to use them one handed for hours on end

Yes, to produce work they could have done in 10 minutes with a keyboard/mouse.


Most people aren't 'producing work' when they are on their phone... they are consuming work.


That may simply be the result of an interface that doesn't let them be efficient or productive.


Exactly. I actually do a lot of work on my phone. I have a pen model, and I sometimes break out my Bluetooth keyboard/pointer rig.

Few people will bother. The UX takes real work to make productive.


In that case, I'm not interested in that use case. I'm interested in increasing efficiency and control of "producing work" on a computing device.


But what percentage of the time spent on computers is producing versus consuming? I spend a lot of time producing (writing code), but I only do that on a desktop computer. My wife, on the other hand, spends almost no time producing and pretty much only consumes. In my experience, the vast majority of people are more like my wife (consumers) than like me (producers). Even my coworkers who are business people spend an order of magnitude more time consuming than producing. In my opinion, computing devices should be optimized for the most common use case - consuming.


In my opinion, computer devices should be optimized for the world we want to have, not the one we do have. Optimizing for consumption does not sound like a moral good to me.


When you say "the world we want to have" I think you mean "the world I want to have" since most people seem to want a world where they primarily consume media. I think most people would say that the moral good is optimizing for consuming media, because that is what they want to do.


> since most people seem to want a world where they primarily consume media.

553,000 people in the US are homeless, but I don't think most would say they "don't want a home". 2.3 million people are in prisons, but I don't think that's the place they "want to live". 15.1 million have an alcohol abuse disorder, but I don't think they would all characterize it as "wanting to drink". 12 million people in the US experience rape, violence, or stalking from a partner every year but I wouldn't characterize them as "wanting to be in that relationship".

Behavior reflects all sorts of things outside of the person's intentions and aspirations. One of the goals of a just society is to give people a framework that helps them reach their goals.


Most people do a whole lot of things other than consuming media during their day, and computer technology is woefully underoptimized for all those other use cases.


Unless of course, the reason most people spend so much time consuming rather than producing is precisely because their devices are so hard to produce on?


Most of the time spent with computer interfaces is consuming rather than producing, though... doesn't it make sense to optimize for that use case for general purpose electronics?


I am not sure that widespread usage correlates with ergonomics, and I am afraid it doesn’t.

And before you get a wrong picture: I wrote big parts of my last academic thesis on my smartphone while I was waiting on the bus.

I’d like to use your argument against you: it is a lot easier to keep your hands resting on your desk and just move your fingers a few millimeters on a keyboard than to hold your hand in mid-air and move it around a screen - and it is faster, too.

Only downside: it's harder to learn, and the mapping between keys and actions is often unclear.

I have worked professionally in audio editing, and I cannot imagine doing that on a touch screen. I don’t want to frantically wave my hands around the screen like a conductor just to get a fraction of the speed I would have with a computer keyboard.

I also have a direct comparison between software modular synths and the real thing. Software can in a sense be more flexible, because you can just place down 10 effect nodes without having to own them, but the cabling is far more exhausting than with a physical cable (unless we're talking about presets or removing all cables).

Touchscreens certainly also have their applications where they outperform every other interface, but it depends entirely on the task.


> Touchscreens certainly also have their applications where they outperform every other interface, but it depends entirely on the task.

Agreed. I love my iPad for a lot of entertainment-type tasks, and I feel the touchscreen works great for that: most general web browsing, watching YouTube/Netflix, listening to music (touchscreen for browsing/creating playlists), some types of games (but IMHO it fails miserably at most). Drawing is great with a stylus, but IMHO not so great with my finger.

For everything else, give me a proper physical tactile keyboard and a mouse, or physical buttons/knobs/faders/whatever.


Not really. Tons of people, myself included, use the crap out of mobile phones.

I had to do real work to get good at Android. And I am good. I can do amazing things on my pictures-behind-glass thingy.

Of course mine comes with a pen. Made some tasks possible and practical.

That said, I pretty much hate it. The thing demands a lot of attention. My older tactile device didn't. Like the other comment says, tactile interfaces can be used in pockets, etc...

I do not know what the future holds, but leaving grease smears on a touch screen probably is not it.


> For all the downsides of a touchscreen phone, people are able to use them one handed for hours on end.

But do they achieve more in all of those hours than they could have achieved in only minutes with better input devices and more powerful software?


A lot of the time their goal is to make time pass faster, which is not something you can speed up.


I think one part is finding the right domain for innovative UI. I often see a weird UI on a tool I use once a year for a couple of minutes. I would much prefer it use the OS-default, clunky UI everything else has, even if it's less efficient. Going off the trodden path usually means things like shortcut keys or scripting break.

We give a lot more latitude to "professional" apps (where we spend hours every day). Often those UIs are built up historically. I was watching someone use an iPad and thinking how much of a mess it would be to show them all of this on day 1.

A car infotainment system would be an interesting place to innovate. I don't know if it's because car companies have squandered goodwill or if I rent/borrow too many cars, but now I just want to plug my phone in, and I prefer knobs over touch screens.


> car infotainment system

That's a great area for improvement. Controlling a car absolutely needs a good physical controller that is very close (conceptually) to our human muscle structure. But what do car companies use for their new systems? Touch screens, the absolute worst choice for keeping your eyes on the road.


They want to sell cars. And shiny smudgescreens are still something fancy that sells.


It also streamlines their work. Physical controls interfere with the design of the car, making changes and iterations more expensive. With touchscreens, the design process boils down to placing a screen somewhere, and they can pawn off UI design and software to another department or to a subcontractor.


If you enjoyed reading this then I would definitely recommend The Design of Everyday Things (https://en.wikipedia.org/wiki/The_Design_of_Everyday_Things).


Why do you need more control when scrolling on Safari or Twitter?


It’s sad that since this article was first written, the trend for dumbing down both the hardware and software we use every day has only continued.

When I was a kid, I used to love experimenting with programming. I first learned on a ZX81, with a magnificent 1KB of RAM, typing in simple games programs from books every session because there was no storage device to save them. That experience, that joy of being able to create something fun, sparked an interest in what these amazing technologies we have invented can do.

Some of my slightly younger friends were lucky enough to have more powerful systems like the BBC Micro available by the time they reached that stage. Those were brilliant because you could connect anything to them. When they were writing simple LOGO programs at school, they didn’t just draw a circle on the screen; an actual mechanical turtle with an actual pen would draw a circle on a real piece of paper, right before their eyes.

Now I want to offer that same joy and intrigue to the next generation of my family. Today’s ubiquitous computing devices are phones and tablets, each with numerical specs many orders of magnitude bigger than that ZX81. That little boy typing in listings from a book now has multiple decades of professional programming experience to share.

And yet, I can’t sit down with my own child and write even the most simple game on those devices, because for all their theoretical power, they lack even a rudimentary programming interface. In some cases, I can’t even write a game myself on another system and port it, because the whole ecosystem is closed off.

How is it that in a time when children seem, often all too soon, to be carrying around more processing power in their pockets than a supercomputer had when I grew up, they still can’t enjoy the sense of freedom and discovery that I experienced with my little ZX81 and its 1KB of RAM and no storage device in the 1980s?


They can!

If you're on iOS, have you tried Pythonista? It's $8 on the app store, uses Python as the programming language, features an IDE, and a UI library so budding programmers can write video games.

For those on Android and the technically minded, a rooted Android tablet offers far more options for on-device programming. There's likely an equivalent to Pythonista in the Google app store as well.

(No affiliation; I'm sure there are other apps that are similar, that's just the one I know of.)


You don't need to root Android (or even go outside the app store) to have a wide range of on-device programming options (you might need to engage developer mode for many of them, but you can do that without root.)


I'm sure you're right, but I've moved away from Android. Are there any apps in the Google store that you know of, or would personally recommend?


Well, RFO-BASIC! is an interesting modernized BASIC with relatively full integration with phone features that is available on the store (you have to get the non-Store version for SMS features, though).

It's the thing I've seen that off the top of my head seems most related to your comment.


Also worth noting Apple has a free Swift Playgrounds app available on iPadOS. Although it's primarily designed for helping people learn how to program, it's a completely capable Swift environment where you can create multi-file projects and even build interfaces using native UIKit objects.


I stand corrected; I had never come across Pythonista before. Thanks for the idea!


Yes! Have you considered buying a ZX81 for your kid? Or better, a LOGO turtle?


Victor is one of the people behind Dynamicland, the "live paper"/"room is the computer" startup that was shut down recently. That was on HN last week or so.

Fancier input devices seem to have come and gone. They peaked in the 1990s, when you could see a lot of them at SIGGRAPH. My favorite was the magnetically levitated sphere in a bowl. It was like a 3D joystick/trackball with force feedback. It was really cool. It never sold. There were lots of gadgets like that. An animator friend had a workstation with a keyboard, a knob box, a button box, a joystick, a trackball, a tablet, and two screens. Some of the people doing Jurassic Park had a model dinosaur where you could move all the joints, the computer could read the angles, and the on-screen image moved to match. None of this ever caught on. Even 3D joysticks are rare. Game controllers with two joysticks caught on, but those joysticks are abstractions of what's on screen, as, for example, steering, not direct interaction.

I tried Jaron Lanier's original gloves-and-goggles VR system. You couldn't do much with the gloves. That was pretty much true in later glove systems. Autodesk fooled around with VR in the 1990s, but determined that gloves and goggles were not going to make CAD easier.

Lack of force feedback is a huge problem with gloves. Without force feedback, it's like trying to work in oven mittens. Much of human precision is tactile feedback. Without that, precision work is slow and tiring. As everyone who's soldered surface mount parts under a microscope knows.

Back in the 1990s, when I was working on collision detection and physically based animation, I considered building an input device I called "The Handle". The Handle was to be a jointed arm, like a robot arm, with a grip handle on the end as an input device. A servomotor system (or, for cost reasons, I was thinking brakes only back then) would provide tactile feedback. The handle itself would have the ability to split, like pliers, so you'd have something to squeeze.

The Handle could potentially simulate pliers, tongs, wrenches, hammers, etc. Do simulated auto repair. Assemble Meccano. This would have been what Victor is calling for.

Could it be built? Yes. Would it sell in volume? No.

That's the problem.


> My favorite was the magnetically levitated sphere in a bowl

I think you might be talking about CMU's Magnetic Levitation Haptic Interfaces: https://youtu.be/cMi75SrDbsk?t=12

I've used it before and it's pretty cool. It's strong enough such that when you hit a hard virtual surface it actually feels hard, and precise enough that when you drag across a brick-like surface it actually feels rough. Unfortunately I remember it having a tiny range of motion.

For your jointed arm "handle", I think they've managed to find a niche in surgical training simulations. I see a lot of similar products come up searching for 6DOF haptic devices (e.g. Phantom Premium 6DOF, Geomagic Touch, Force Dimension Omega 6)


I tried out the 3D Systems Touch Haptic Device a few years ago, which sounds a bit like a simpler variant of your "handle".[1] It was quite good. Feels really strange, but good, to be interacting with things "inside the screen".

[1] https://www.matterhackers.com/store/l/3d-systems-touch-hapti...


That's one of the very few devices in that space ever to be produced in volume. Small volume, but a real product.


> Lack of force feedback is a huge problem with gloves. Without force feedback, it's like trying to work in oven mittens.

I think you’re overgeneralizing. I would say:

“Without rich real-time feedback, it’s like trying to work in oven mittens”.

The human brain is absurdly plastic. Have you seen someone skateboard? I think good auditory feedback(1) is probably plenty for excellent hand control. Not “intuitive” but learnable. Visual feedback could supplement.

(1) I.e. a full multidimensional synthesizer model, no wave files. Maybe 10 dimensions-ish to start.


Where's the news that Dynamicland was shut down? I can't find any evidence of this.


The Sam Altman money was turned off, but they're hanging on and looking for funders, as of a Sep. 18 article.[1]

You can donate here.[2] Expected donation $5000 and up.

[1] https://tashian.com/articles/dynamicland/ [2] https://dynamicland.org/donate/


That article claims Altman stopped funding it in 2017, but it's clearly still active.


Though this has been discussed many times, I must say the trend of using touchscreens to replace dials and buttons on cars is extremely worrying. It’s entirely an aesthetic and cost-saving decision at the expense of safety and UX.


My wife and I were recently searching for a new car. A reviewer dinged one of the models for using "dated" physical dash controls instead of the present fad of a single large touchscreen. We entered that into the "positives" column of our spreadsheet.


> I must say the trend of using touchscreens to replace dials and buttons on cars is extremely worrying

Most are going back on that. It's just that what is currently being sold was developed years ago.


I'm lucky that my car was made in the in-between period, so all touch screen controls have physical controls as well.

I use the physical controls, especially when driving. But my navigatrix prefers the screen, primarily because she likes the visual feedback. (Not enough of the physical controls in my car have things that light up to let you know something's happened.)


I think we are on to something here. I think there is a fundamental psychological reason why many car enthusiasts love older cars: the feedback and vibrations you get from revving the engine, a big stick shift, the pedals with their sometimes long travel. It's why we have these anti-Tesla people occupying the Supercharger stations.


I was disappointed that the latest Nissan Leaf, which used to be the cheap electric vehicle option with only the necessities, has now bumped the price up and added all sorts of luxury features like auto-dimming rear view mirror, rain-sensing wipers, heated seats, climate control, and an "intelligent mobility" system.

All I care about is cost, space, reliability, road noise, and range. And physical buttons.


My friend has a BMW and the touchscreen is awful. I can't imagine using that thing to do anything while I'm driving. It looks like it was designed for someone to use in the comfort of their bed, where they can afford to mis-click, not know where the hell to find a page, etc.

The only reason I absolutely need a screen in a car is for visibility. Pretty much all the risk in parking/reversing is removed when your car has external cameras. The other functions don't make much sense to me. If you wanna play video games and watch movies while you're driving, it better be a smart car.


Let alone everything else. Try finding a good cooktop or oven with real dials.


This brings to mind the phone ringing in the middle of the night. You pick it up and mumble into the receiver and the conversation begins. But no - now the phone rings in the middle of the night (assuming it isn't text / Slack / email / etc.) and you pick it up and then try to focus your eyes on the screen to figure out WHAT is happening and then HOW to answer. It seems the massive leap forward in the CAPABILITY of our phones has actually caused them to stumble backwards in PRACTICAL usability.


So, has anyone made an “old school smartphone dock”?

What I want is a dock I can put my smartphone into and then the old phone interface takes over. Preferably one with links to 1-2 other docks in the house so all three ring if I get a call.

Pick up the physical receiver, say hello. Talk. Say goodbye, hang up. No screen interaction at all.


The cordless phone system I have lets you make/receive calls via your cell phone from any handset in the house. It connects via Bluetooth, so isn't a dock, but the functionality is similar to what you're suggesting.

Mine is an older Panasonic system, but I'd be surprised if other manufacturers didn't offer something similar.



Yes, mine is very similar.

Though, (and I should have mentioned this in my original comment, sorry) I've never actually used the functionality in question, so I don't know how well it actually works.


There are Bluetooth units with telephone jacks on them that can connect to your phone. A friend was considering getting one to connect to his old wall-mounted dial phone. I don't know what they're called or how well they work.


That's... genius! It's so obvious now that you wrote it. I'd love to have one at my desk, and another at my nightstand.


I have had almost this exact experience multiple times, except with the alarm I set to wake myself up. With most old-school alarm clocks, you can violently fling your hand in the general direction of the clock and usually manage to at least hit the snooze button.

Ever since switching to a phone/tablet though, (Android, in my case), I now have to first locate the phone, make sure it's oriented right-side up, attempt to remember which direction I'm supposed to swipe, and if I can't remember, attempt to decipher the tiny symbols without any explanatory text.

I remember at one point that the entire UI changed when I upgraded Android and the next day I could only stare blankly at the screen trying to figure out what was going on while the alarm blared in my face.


Only for that specific use case. It's true that modern smartphones are worse than old voice phones at being phones. But modern smartphones are more like pocket computers; they're just called phones for legacy reasons, basically. Personally, I think I can count on one hand the number of times I've used my phone as a phone in the last month.


Instead of a single spectrum from computer- to human-friendliness, picture a triangle with the third vertex being "task-friendliness" or task specificity. The author is advocating a return to human-friendliness, but what he describes is more like task-friendliness.

I've become somewhat obsessed with ergonomics after a long battle with chair pain. In my mind, true human-friendliness does not require new interfaces for every task (although it may require a slightly different interface for each human). Every computing task may very well be handled best by a person sitting in a chair at a workstation. That wouldn't be bad design IMO; computers are, in fact, general purpose.

If we can get to a point where each computer user is able to use their computer comfortably for 8 hours straight, without pinching a nerve or a blood vessel (yes! it's possible!), then I would consider that a human-friendly design.

I'm not talking about iPads here - the author is totally right that these devices don't seem to be made for human beings.


As far as I understand, remaining in a static position for 8 hours straight during the day and another 8 hours straight while asleep is pretty bad for many parts of the body irrespective of what that position happens to be.

People need to be occasionally changing position, moving about, getting their circulatory system working, getting their muscles working, focusing their eyes at a longer distance, etc.

We can improve people’s experience with sit–stand desks, frequent short breaks, periodic longer breaks, a daily commute requiring walking or other exercise, etc., but reducing the total number of hours people are staring at screens every day would also be generally helpful. It isn’t all that great to exclusively work at even the most ergonomic possible computer workstation.


There's truth to what you're saying, but it implies you have to accept a certain level of pain or physical discomfort if you work in a seated position, and the only way to manage the pain is to get out of your seat. But there are types of pain that can't be managed that way. Those injuries have to do more with sustained muscle tension caused by bad equipment than just sitting, and will continue to get worse until you change your equipment to eliminate the tension.


A few things come to mind. First, I've been reading a lot on the science behind Montessori education, and there's a lot to support the idea that our brains are much more active when we're physically interacting with something. It would be logical to assume that the richer the physical interaction, the more meaningful the connection.

Second, reading Jaron Lanier's somewhat recent book on VR I was struck by how far ahead of its time the 'data glove' was. Early VR may not have had the dazzling eye-candy of today's graphics, but on interactivity we just seem to be catching up. (As an aside, I never realized the Nintendo PowerGlove was basically Data Glove 'lite'; developed by VPL.)

Edit: Third, I recently read Carl Sagan's "The Dragons of Eden", which he wrote in the 1970s. It's about the evolution of the human mind. It's SO worth reading today, even if it's outdated and proven wrong at times by contemporary discoveries. One thing that really stood out was the idea that it wasn't so much that humans evolved and then invented tools - but that perhaps tools shaped our evolution as much as we shaped tools. Invent a simple tool -> use the tool -> brain develops as a result -> invent even more complex tools -> etc.



> With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?

What if you don't have an entire body at your command?

The plus side of “pictures behind glass” is that it's a fairly well standardized interface — the software has to adapt to that homogenized hardware, instead of you fighting to adapt to the hardware. (Or, more accurately, you adapt to it once, instead of once for every type of object you want to interact with.)

If interactive experiences all start to require a good range of motion, bodily sensitivity, and ability to instinctively interpret the interface, there's a risk it could be incredibly alienating for many. Unless we design with that consideration from day one, it could make adaptivity harder than it already is.

I went looking for Bret Victor's take on this question because I was certain he would have thought of this already. So far I only found this:

“Channeling all interaction through a single finger is like restricting all literature to Dr Seuss's vocabulary. Yes, it's much more accessible, both to children and to a small set of disabled adults. But a fully-functioning adult human being deserves so much more.”

http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...

I find it sad to read that. Those with access issues deserve so much more too. There's already a huge access gap. If we're going to champion new modes of interaction, we should fight hard not to make the gap bigger still.


This may be controversial view, but I'm with Bret Victor on this. Of course, we should do as much as possible to make everything accessible to everyone. But we should not strive for a single universal interface that's accessible to all, because that's sacrificing both the best and the average case for the worst case.

Consider books and vision loss. The tried-and-true solution to make books accessible is printing them in Braille. But nobody in their right mind would be suggesting we should only ever print books in Braille, because then it's equally accessible. It would be more "fair", but it would also be a ridiculous waste for the 99.5% of the world's population[0].

If we were to equalize all our technology and processes across all accessibility issues, our civilization would collapse - it would mean nothing could depend on body motion, body sensitivity, sight, sound, speech, taste, or pain. The union of all existing disabilities is the life of a rock.

So since we can't really have a truly universal interface[1], we may as well give up on trying to design technology to the lowest common denominator, losing most of the benefits it gives, and instead design it to play to the strengths of the human body. Fallback options should be available where possible, but at some point we have to accept that until medical technology homogenizes our bodies, not everyone will be equally suited to every task.

(And I say that as someone who always dreamed of being a pilot or an astronaut, but for whom that career path was cut off by severe nearsightedness early in my teenage years.)

--

[0] - "36 million people are blind" according to https://www.who.int/news-room/fact-sheets/detail/blindness-a...

[1] - At least not until a proper brain-computer interface exists, and we all interact with everything using our thoughts.


Thank you for this perspective. You changed my mind about this.


I want someone with the skills of Bret Victor to solve the problem of visualization and maintenance tools for change over time.

You solve that problem and there's a world open to you for using things other than flat text files as the system of record.

Without those tools, it's all talk. If we're lucky. If we're not lucky, it's a death march toward horrible, horrible boondoggles.

We are already trying to visualize four (or more) dimensional code with 2.5D projections (length, width, and a few colors). You give me a tool that can use 3 or 3.5 dimensions and the world opens up to us.

Don't forget to write a merge tool.


My understanding is the lion's share of data science work is integrating and cleaning the data, both of which are unbounded UI problems.


Re: Maybe, if you don't have an entire body at your command.

"Boss, I'm not disco dancing, I'm preparing our Powerpoint presentation for the big meeting."

Boss: "Please, do the Reboot Shuffle."


There are some gaps in reasoning, but overall I think the point stands.

The biggest omission is the key advantage of "Pictures Under Glass": they can be defined purely in software. Because of that, I don't think they'll ever stop serving a role in our device interactions. No amount of space-age physical device design could've supported the smartphone's explosion of apps without pictures under glass or some equivalent.

With that said, we could definitely benefit from moving some of the more constant/widespread interactions back to physical controls. Buttons and switches on smartphones are always appreciated. And don't even get me started on cars, where there's little need for a vibrant developer ecosystem and a ton of need for non-visual interface comprehension. I also think the Apple Watch's "crown" is one of the more interesting recent examples of a tactile interface that doesn't sacrifice open-ended software development.


Re: Buttons and switches on smartphones are always appreciated.

I disagree, I'm always accidentally bumping or pushing them. Plus, they usually do tasks that are not my top tasks. I'd like to reassign some of them to my favorite actions.


Surely the dream is to have a programmable tactile interface. Perhaps you could program the height or texture of parts of the screen. But I have no idea how this would work!


I'd like to extend this to fucking VR. All VR experiences lack one thing: full body haptics. Take something inspired from the "Pacific Rim" cockpits ( http://i.imgur.com/DLze6j7.jpg ) so you can have some contraption giving you a huge freedom of movement and haptic feedback everywhere. That'd make some games a lot more immersive and physical.


The reason games don’t feel physical is not because of which senses are being stimulated, it’s how they are stimulated.

If the game is just waiting for your hand to enter a bounding box, that’s a 0-dimensional interaction, which feels like nothing. If you could hear your fingers getting close to things, see the angles between your body and other surfaces, and sense through visual texture the relationship your body has with different fields, you would register those things as “physical” experiences.

Touch doesn’t feel like touch because it’s your fingers, it feels like touch because it’s rich and continuous and textured and perfectly grounded in body-space.

You can make proprioaudiovisual experiences like that too, it’s just hard and there are easier ways to make video games that make money.

The science is clear that our brains don’t care which sense data comes from; they just care about the structure of the data.

Although there are anatomical structures that make some senses better at processing one structure than another, they don’t prevent the brain from using those senses for other schemas, if the environment demands it (injury, prosthetics, etc)
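As a toy illustration of that difference - a binary bounding-box trigger versus feedback that is "rich and continuous" - here's a sketch where closeness is made audible at every instant (the pitch mapping is invented, not from any real system):

    import math

    def bounding_box_feedback(hand, box_min, box_max):
        """0-dimensional interaction: you are either inside or not."""
        inside = all(lo <= h <= hi for h, lo, hi in zip(hand, box_min, box_max))
        return 1.0 if inside else 0.0

    def continuous_feedback(hand, target, base_hz=220.0):
        """Continuous interaction: pitch rises smoothly as the hand
        approaches the target, so the relationship is always audible."""
        dist = math.dist(hand, target)
        return base_hz * 2 ** (1.0 / (1.0 + dist))  # hypothetical mapping

The first function yields one bit per frame; the second yields a gradient the brain can actually learn from.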


This is great! It really shows that what we have today is a reductionist approach - only using our finger on a 2D surface - as opposed to utilizing all our motions, muscles and senses to interact with technology in 3D space.


Personally, I think it's a bit asinine.

Rather than decry anything that resembles what we have today as old-fashioned and not futuristic, it should first be asked "why are screens so common today?"

We've been using our hands for years - keyboards, trackballs, mice, joysticks, custom wheels for driving games, the Nintendo light gun for Duck Hunt, and so forth. Take that, and combine it with something else that should be jumping out at you from the article: every picture with a hand shows it holding a different object.

Screens are multi-purpose. They are perfect for scenarios where one control must meet many needs (compared to the pictured hammer, which at best meets two). Let's say we could imagine some multi-purpose, tactile, 3-dimensional interface... say, a hologram from Iron Man or Star Trek. Now compare that to the article's mention of Alan Kay's pre-integrated-circuit doodle. The difference is huge... we don't even have the physics knowledge to describe how a tactile hologram could exist at safe handling energies, let alone in open air, as described by those fictions. Comparatively, Alan Kay combined things that already existed - battery, circuits, screen and keys - and imagined what could happen if they were smaller and portable.

We had things much more similar to what he drew decades before we had the iPad; the iPad is merely an incremental improvement over some of my own childhood toys, not to mention what was already available a few years after his own drawing, such as the Speak & Spell ( https://en.wikipedia.org/wiki/Speak_%26_Spell_(toy)#/media/F... )


The most honest answer?

Because putting in a damned capacitive touchscreen is cheaper than even one good button.

I spent a non-trivial amount of time finding devices that have tactile, physical interfaces. I certainly paid a premium in all cases, because putting in a capacitive layer and an LCD is goddamn cheaper, for the reasons you describe.

It's not that the screens are better, or even good. It's that they are so versatile that they are the cheapest, lowest common denominator option for interaction design.

In a sense, Star Trek predicted this: at least part of the reasoning behind the flat-panel touchscreens everywhere was that they were a very, very cheap prop to make compared to complex physical controls.

The touchscreen is the natural evolution of unending drop-down menus that require several trips through every time you do some operation, because application development had no time for UX research.


This just isn't right. It's not the cheapness, by far. The iPhone didn't revolutionize the world because it's cheaper - it did in spite of it.

It revolutionized the world because a big screen means:

- More content you can view at once

- More flexible and powerful apps that can have more, more intuitive controls and display more information

- Touching an app directly and interacting with it directly means you can manipulate apps far more intuitively, quickly, with direct feedback (remember scrolling webpages before touch with tiny ball wheels or up/down arrows? yeah, it sucked)

Resistive touch screens sucked, but capacitive ones were more expensive and were the game changer. I think you have it backwards.


Except I'm not talking about tablet/slabphone form, but talking generally.

iPhone was operating in a market where touchscreens were already the norm, it just had a better touchscreen.

It was a general lowering of costs, coupled with better networks and the ability to use more modern web pages, that truly revolutionized the market. Capacitive touchscreens were nice, but not the be-all and end-all.

In fact, my first experience with iPhone 3G was that it was clunkier than the smartphones we used. (and definitely felt cheap).

At least some phones, for a time, used to integrate both touch screen and buttons, which had significant benefits when it came to UX (as buttons can be navigated by tactile feedback only)


iPhone "revolutionized" the world[0] because it gave people a portable computer that combined multiple separate devices into one, with (added retroactively) possibility for extending it with further capabilities.

This justifies a smartphone in its current form. It does not justify why your car stereo is operated by a touch screen.

The truth of p_l's comment is plainly evident if you look at household appliances today. You'll note that ovens and washing machines don't even have touch screens; they have touch panels with a fixed image underneath (same with pre-CGI Star Trek interfaces). They're a UX nightmare, but are used precisely because capacitive touch detection is very cheap (doubly so at the low fidelity required in these cases) - it's just a plastic/metal sandwich with zero moving parts.

--

[0] - iPhone is the one that's most remembered, but not the only phone in that space in its era.


Sure, I was just responding to the parent post though. As for a touchscreen in your car, I expect people have different preferences there.


A big problem is that you don't really choose a car based on whether it has a touchscreen or not. And then there's the aspect of "distracted by the shiny", which has to be tempered by actual experience with touchscreens.


Bret Victor pointed out that Alan Kay had to wait 50 years before something like the Dynabook was possible.

And with a bit of creativity, multi-use tactile interfaces can be accomplished today.

https://en.wikipedia.org/wiki/Nintendo_Labo


here's a plausible fantasy: some software will include 3D-printable files describing a physical interface device, which can have some standard electronics plugged in

as long as it doesn't need to be hard realtime and infinitely reconfigurable, it's very feasible

like, imagine a game in which you hammer nails: you would 3D print a hammer, then stick in an accelerometer module and an mcu/battery/bluetooth module

point being, there are definitely ways to make interfaces that are now, or will become, really usable
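a sketch of what that hammer's firmware might do with the accelerometer, under those same assumptions (threshold and sample rate are made up): detect a strike as a spike in acceleration magnitude and report it over the radio

    import math

    STRIKE_G = 4.0      # assumed threshold, in g, for a hammer blow
    REFRACTORY = 0.1    # seconds to ignore after a detected strike

    def detect_strikes(samples, rate_hz=200):
        """samples: iterable of (ax, ay, az) accelerometer readings.
        Yields the time (s) of each detected strike."""
        last = -REFRACTORY
        for i, (ax, ay, az) in enumerate(samples):
            t = i / rate_hz
            if math.sqrt(ax*ax + ay*ay + az*az) > STRIKE_G and t - last >= REFRACTORY:
                last = t
                yield t  # on the real device: send a BLE event here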


> We've been using our hands for years - keyboards, trackballs, mice, joysticks, custom wheels for driving games, the Nintendo light gun for Duck Hunt, and so forth.

These are generally quite poor interfaces compared to what is conceivably possible. A precise pressure-sensitive stylus and/or multiple fingers on a responsive high-resolution display are much richer than these: higher bandwidth of a more directly relevant input. But the finger/stylus on screen is strictly 2-dimensional without any haptic feedback.

Even such tasks as rotating and moving a computer model of a single 3-dimensional object are extremely cumbersome and unintuitive using mouse, keyboard, joystick, steering wheel, etc.

Now imagine using those tools for e.g. controlling a simulated puppet in real time. Pretty much impossible. And yet people control physical puppets with amazing precision and flexibility with some strings and wooden sticks.

And that’s just moving an already existing collection of shapes. What if I want to design a new shape, say a newly imagined knot, or a new playground climbing structure, or a chair, or the face of an imagined deep-sea creature. Tools like pipe cleaners, modeling clay, paper & scissors, wooden sticks and a hot glue gun, ... are dramatically richer and more efficient than any of these electronic tools.


A physical multi-purpose, tactile 3D interface is definitely difficult to even conceive of, but seems (relatively) straightforward to spoof. Imagine a VR interface with an integrated pair of sensory-feedback gloves. This would allow you to 'touch' objects and feel texture. To create the illusion of solidity and rigidity, you could integrate some form of shoulder-down exo-suit, which would articulate with every joint of the arm, wrist, and fingers, and could resist motion in any or all joints. Combined, these three pieces could potentially spoof a wide range of tactile interactions. Is there a VR wall 1 ft in front of you? The suit would become rigid at that point, spoofing a solid surface. Are you in VR handcuffs? The suit will physically prevent you from pulling your wrists more than 4" apart, reproducing the restraint. Is there a VR ball on the table in front of you? You place your hand over it and pick it up with your fingertips, feeling the sensation via the gloves. The weight of the ball would be spoofed via downward force/upward resistance from the point of the exo-sleeve in contact with the ball. You try to squeeze the ball, and the suit could provide varying degrees of resistance, giving a sense of the ball's firmness. Etc., etc.
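The math behind the 'rigid at the wall' case is simple in principle: measure how far the tracked hand has penetrated the virtual surface and command the suit to push back proportionally. A toy sketch against a single plane (the stiffness value and vector convention are invented for illustration):

    def wall_resistance(hand_pos, wall_point, wall_normal, stiffness=800.0):
        """Spring-like resistance once the hand penetrates a virtual plane.

        hand_pos, wall_point, wall_normal: 3-vectors (normal is unit length,
        pointing toward the user). Returns the force the suit should apply.
        """
        depth = sum((h - w) * n for h, w, n in zip(hand_pos, wall_point, wall_normal))
        if depth >= 0:
            return (0.0, 0.0, 0.0)  # hand is still in front of the wall
        return tuple(-stiffness * depth * n for n in wall_normal)  # push back out

A real suit would run something like this per joint against arbitrary geometry, but the principle is the same.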


> we don't even have the physics knowledge to describe how a tactile hologram could exist at safe handling energies

Direct nerve interface. That would be one hell of a disruptive technology.


> battery, circuits, screen and keys

The iPad doesn't use keys though.


The iPad doesn't, but it looked to me like Alan's drawing did. That's why I linked to the image of the 1979 Speak & Spell. Not quite the same, though with cheaper memory it easily could have been; all of the other tech was there, and plenty of real examples predate it.

All of the things that made the iPad unique were incremental improvements. That is why I took issue with the assertion that Alan invented/predicted the iPad... there are so many other examples of tech resembling his drawing from far earlier in time.


Back when touchscreens started to become popular, I remember some talk about a 'tactile overlay' for touchscreens: essentially a grid of tiny bumps (pixels, though at much lower resolution) that would 'pump up' and flatten just enough to give a tactile feel to otherwise glassy screens.

What happened to that?


I almost made a comment on this article about using deep learning and motion tracking to let computers better understand our gestures so we can interact visually.

https://news.ycombinator.com/item?id=21115863

At some point AR and VR will finally provide the overwhelming need. Then we’ll wonder why we didn’t think of it sooner.

At the moment, we’re stuck in the “a keyboard and mouse are better” stage.


You can't just drop the idea of Deep Learning into a problem as a magical solution.

What does visual interaction entail, and what stopped it from being trialled during one of the previous (and current) attempts to get VR to work for people?


If you can use deep learning and CNNs to drive a car, you can probably use it to recognize different hand motions.

[Update]

I found a link where someone tried to do a simplified version of this:

https://towardsdatascience.com/tutorial-using-deep-learning-...
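For flavor, a minimal sketch of the kind of classifier such tutorials build - a tiny Keras CNN mapping hand images to gesture labels. The layer sizes and gesture list are placeholders, not taken from the linked post:

    from tensorflow.keras import layers, models

    GESTURES = ["swipe_left", "swipe_right", "pinch", "fist", "open_hand"]  # made up

    def build_gesture_cnn(input_shape=(64, 64, 1)):
        """Tiny CNN mapping a grayscale hand image to a gesture label."""
        model = models.Sequential([
            layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(len(GESTURES), activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

Recognition is the easy half, though - as the replies note, the feedback problem is the hard part.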


It's certainly possible to recognise hand motions (see e.g. Leap Motion), but I think hand motion recognition is pretty hard to use effectively without haptic feedback telling you what you're touching and where. Ideally actuated like [1] but for your whole hand, not just a stylus. I don't think technology is anywhere near that. The device in [1] pushes back against you, physically stopping you from moving through virtual objects.

[1] https://www.3dsystems.com/haptics-devices/touch


You don’t need haptic feedback, just good feedback. There’s plenty of bits in an audio stream alone to give the brain the control feedback it needs. You’d be surprised how few degrees of freedom we use for grasping.


I noticed this while experimenting with VR and hand-tracking. Without tactile or precise auditory feedback, the experience was hollow, even with relatively high-fidelity visuals and tracking. You can never feel like you are somewhere else - or something immaterial is with you - until a full sensory experience coalesces, even if it's "low resolution." Until then, the result is landing in an experiential uncanny valley.

On another note, we should also be looking at accessibility when approaching this research. Contrary to that last little bit at the end, not everyone has full use of their bodies, or even of their hands. I just finished listening to an NPR segment about inner-city youth dealing with permanent disability as a result of gun violence, and the unique challenges presented to them. Perhaps with people like them in mind, the mass solutions we eventually end up with will be better for focusing farther upstream than nerve endings.


Hands-in-air interactions in VR I agree lack the tactile feedback necessary to avoid it feeling hollow. However, I've often advocated that the "ideal" VR input system in 2019 is:

- Freely tracked dominant hand, used for fine-grained control/pointing, expressing body language, and grab/release-like mechanics (which work decently for certain kinds of virtual objects/interactions, like throwing stuff)

- Tracked 6dof controller in non-dominant hand, similar to the existing controllers except with a fully capable twiddler-like button grid for doing fast symbolic entry.

For anything where tactile feedback is core, move it onto the controller hand, where the hardware buttons can provide a more reasonable proxy for things like press-and-release or sustained-holding in terms of motor interactions. (Haptics are still pretty bad on all other accounts, but controllers at least give you pressure feedback under your finger tips, instantaneous response, etc.)

With the above setup, I suspect you could perform most knowledge worker oriented tasks with similar efficiency to a mouse and keyboard, and also perform existing VR-centric use cases like 3d modelling/manipulation, avatar locomotion, full body social interaction, etc. The current two-controller or dual hand-tracked paradigms fall short on the former.

With the introduction of hand tracking capabilities on Quest coming, it seems that we're inching up to mainstream VR devices having the capability to deliver this, except the controllers still lack anything close to keyboard-level discrete inputs. However, once Quest hand tracking launches, it may be possible to use an off-the-shelf twiddler for this if the SDK exposes the ability to track in-hand tools (similar to the Leap Motion SDK.)


> You can never feel like you are somewhere else - or something immaterial is with you - until a full sensory experience coalesces, even if it's "low resolution."

I'm not sure I agree with that. Games like Beat Saber and the old demo of Budget Cuts have really made me feel like I'm inside another world, particularly when I was newer to VR.

I say "not sure" because we're not necessarily talking about the same thing—the above are "games" or at least "experiences", not "interfaces. You don't really touch anything in Beat Saber, except for the blocks where a strong rumble is sufficient. Harder to do that in a UI.


In a fitness experience, you can at least imagine the resistance, which is what I wind up doing in Beat Saber to increase my heart rate and calorie burn. It's actually an interesting problem: can the mind imagine feedback that really isn't there, causing muscles to work harder even if they don't have to? For some people at least, the answer is yes, though I bet it would be hard for others. And shadow boxing, katas, and so on show that imagining resistance has been around for a while.

We can already kind of imagine tactile feedback in a UI, and vibration is effective in reinforcing that illusion. Perhaps the way forward is to keep playing with fooling the brain (via some kind of cognitive interface) rather than somehow reproducing real feedback.


Vibration is key in current VR experience, IMO. Having spent many hours on Beat Saber over the past few weeks, I can say that the only reason my brain recognizes that I hit something with the blade is the little controller buzz that activates at the right moment.

As for adapting yourself to the game, I keep having to focus on preventing myself from using my wrists too much when playing. Given that my goal is calorie burn, I do my best to use my whole arm (as if wielding a real blade). I've noticed that I do create "impact resistance" myself too.


It took a while to stop using my wrists, but now I imagine it as some immensely satisfying glaive work.

BoxVR is another game where imagining feedback is crucial - more so, actually, given its fitness focus. Its (inaccurate) calorie counter is entirely motion-based, so wider, faster movements will drive it up as well. Honestly, BoxVR (with a custom playlist of lots of fast songs) is now more important than Beat Saber to my daily workout, because I'm able to get a lot more effective motions in. On the other hand, I can't play it with the toddler around (if he got in my way, he could really be hurt by the gestures I'm doing).


This is why I dislike touch screens in cars so much: You have to look at them. We have 100 years of experience with car user interfaces that don't require you to take your eyes off the road, but we threw all that away in favor of these shitty pictures under glass.


I feel my Nokia 3210 in high school had a much more intuitive UI than the latest iPhone... and it was trivial to use without looking at it, since your hands could navigate the keys...

This isn't possible anymore with sliding pictures under glass...


I remember this Bret Victor piece from 2011 and I was thinking about it recently when Google announced Soli for the Pixel 4. https://www.cnet.com/news/project-soli-is-the-secret-star-of...

Soli and other gesture-based technologies seem like at least an incremental move to using motion that Bret Victor was advocating for in 2011.


To me that seems like the same problem: you still don't touch anything, and the only feedback is the computer doing something (or not).

That said, I'm definitely curious to see how it performs in the wild.


I think the underlying principle is that we want to transfer as many bits per second of data as possible. The computer->human direction is largely solved, because screens can already emit more information than humans can process. But human->computer can only go so fast. That's partly limited by how many fingers we use for input - I can type faster on a keyboard than on a screen.

But I think the main issue people are worried about is that humans can only think so fast. If the interface asks the human for input too often, everything takes forever. So instead you try to guess what the human wants, and if you guess well enough, the input method stops mattering. Some people say they can type better on their phones because they can use SwiftKey. So I think most visions of the future are really about AI and intelligence, asking "what will we be able to guess without asking the human first?" And this is where they think interaction design is going.
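To put rough numbers on that asymmetry - purely a back-of-envelope sketch, assuming ~60 wpm typing, ~5 characters per word, Shannon's classic ~1 bit of entropy per character of English, and a 1080p/60 Hz display (raw pixel rate of course overstates what a human can actually absorb):

    # Rough channel widths in each direction (all figures are
    # ballpark assumptions, not measurements).
    chars_per_s = 60 * 5 / 60           # 60 wpm * 5 chars/word -> 5 chars/s
    typing_bps = chars_per_s * 1        # ~1 bit of entropy per English char
    screen_bps = 1920 * 1080 * 24 * 60  # 1080p, 24-bit color, 60 Hz (raw)

    print(f"human -> computer: ~{typing_bps:.0f} bits/s")
    print(f"computer -> human: ~{screen_bps / 1e9:.1f} Gbit/s (raw pixels)")

Roughly nine orders of magnitude apart, which is why all the effort goes into guessing.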

Another point: such a machine requires fewer bits per second from the human. That is, it requires them to do less thinking, which most people will prefer. And if you're trying to predict the future, "machines will require less thinking (but be more mass-market)" is a pretty safe bet.

Sandwiches are very fancy things, as everyone knows. But I expend a lot of mental effort getting the cheese slices to the right thickness. I sit there focusing on not deflecting the knife too deep or too shallow. This is a very fancy task. But many people prefer specialized cheese-slicing devices, because they make more precise slices, and because they require less mental effort. So expect UI to follow the same trend: from general tools requiring complex input to particular tools requiring simple input.


Haptic, resistive gloves will merge the tactile with the digital in VR applications. Just like touchscreens, they will serve as a single device that can simulate an infinite number of different controls. Give it 5-10 years for the tech to miniaturize and mature. https://haptx.com/


Embodied cognition is definitely a thing: https://en.wikipedia.org/wiki/Embodied_cognition

And in software development, the primary bottleneck is thinking. Anything you can do to enable greater clarity of thought is going to have huge leverage. So I think this is a super promising angle.

Bret's actively developing his ideas at Dynamicland (http://dynamicland.org). But clearly it will be quite some time before that tech becomes viable in the real world.

Now I'm wondering if there are simpler ways to use the rest of my body to facilitate thinking. Like, I can't mount lasers and network-connected cameras on the ceiling... but maybe there's some software that can use a webcam to do simple gesture recognition mapped to bash scripts, or something like that - see the sketch below.
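Something like that is already doable with off-the-shelf parts. A minimal sketch, assuming OpenCV and a webcam - the "gesture" here is just bulk motion in front of the camera, and the threshold, cooldown, and triggered command are all made-up stand-ins:

    import subprocess
    import time

    import cv2  # pip install opencv-python

    MOTION_THRESHOLD = 2_000_000  # sum of pixel diffs; tune per camera/lighting
    COOLDOWN_S = 2.0              # don't re-fire while you're still waving
    COMMAND = ["bash", "-c", "echo gesture detected"]  # your script here

    cap = cv2.VideoCapture(0)     # default webcam
    prev = None
    last_fired = 0.0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if prev is not None:
            # Total absolute pixel difference ~ how much moved since last frame
            motion = int(cv2.absdiff(prev, gray).sum())
            if motion > MOTION_THRESHOLD and time.time() - last_fired > COOLDOWN_S:
                subprocess.run(COMMAND)
                last_fired = time.time()
        prev = gray
        cv2.imshow("gesture-cam", frame)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()

Frame differencing won't distinguish a wave from a passing cat, but it shows the plumbing; a real version would swap in an actual hand-pose model.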


The benefit of "Pictures Under Glass" is flexibility - it is extremely easy to change the picture to fit almost any use case. It is extremely difficult to get the same flexibility from something tactile that can manipulate its own shape to suit the task.


In 1968 the iPad also seemed extremely difficult, though.


> Are we really going to accept an Interface Of The Future that is less expressive than a sandwich?

Great line.


Probably written by a hungry author.


The article focuses on the use of hands, but a visionary future technology might focus more on voice and NLP. This is something I have worked on for some years:

http://www.smashcompany.com/technology/the-advantages-of-a-n...


I predict the UI of the future will be via brain implants that allow one to control an app with just thoughts. It may become a necessity in order to keep up with other countries that have fewer qualms about the ethical side of surgery and intrusion.


I think we are in a transition phase. Current visions of the future assume we will keep the same hand-driven interfaces, which may not be true. The rise of IoT devices and voice-controlled interfaces tells a different story.


2011


The video anchoring the article shows as unavailable for me. Is there an alternate place to view it?


Same here, I googled the url of the embedded video and it seems to be a concept video by Corning:

https://www.youtube.com/watch?v=6Cf7IL_eZ38


The condescending tone of this article does not help it whatsoever.

Not to mention the huge gaps in logic that turn it from a merely misguided attempt to rail against well-established interface conventions into an active argument for those interfaces still being good.


ahh was really hoping this was something new from Bret


Aw, I was hoping it was a new essay by Bret. Needs (2011) in the title.


Bret Victor's talk Inventing on Principle (https://vimeo.com/36579366) and, even more so, Greg Wilson's talk from the same conference (https://vimeo.com/9270320) marked a pivotal moment for me in my work ethic. They really motivated me to expect better from the tools I use every day.


Similar reaction to those talks. Also, if ever anyone knew how to write an "evergreen" post (i.e., one that holds up over time), it's Bret Victor.

His "Ladder of Abstractions" is a great example:

http://worrydream.com/#!2/LadderOfAbstraction


Also his essay Magic Ink: http://worrydream.com/MagicInk/


I really enjoy his writing, but I think he has a way to go before he's in the same league as Homer, Plato, Lao Tse, and Shakespeare!


haha, those guys sure knew how to write a good blog post!


Yup! They got people still clicking on their posts hundreds or even thousands of years later


There's a (2011) missing from the title here, at least according to the title graphic.


Added now!


Tiny grey sans-serif font means you hate my eyes.

Extra-ironic on a rant about interaction design.


Seems pretty easy to write an essay saying "screens are old news, let's use our hands!" containing not a single word about how that would work, technically or design-wise.

I'm going to write an essay saying "screens are old news, let's use free-floating holograms like in Iron Man!" with no indication how that would even be possible, and see if I can get on HN.


You didn't offer a solution.

Yes, that's why I called it a rant, not an essay — it describes a problem, not an idea. (And FWIW, that's not the sort of thing I typically publish, or want to.)

The solution isn't known, and I didn't think making stuff up would help.

The point of the rant was that the solution will be discovered through long research, and that research won't happen without an awareness of the problem. My intent was to hopefully coax a few potential researchers and funders into tangible interfaces, dynamic materials, haptics, and related fields that haven't been named yet. If that happens, then mission accomplished.

http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...


The author of the essay is the founder of https://dynamicland.org which implements what he's talking about in this article.


None of the things BV has worked on -- smart tiles or putting pieces of paper under a camera/projector -- do ANYTHING AT ALL like what he's talking about here. They don't change their size, shape, weight, feel etc. in response to data. Because that would be incredibly hard. I don't think you should get points for saying "we should do this impossible thing, that would be better."

I must admit I find BV's writing to be really facile... like he'll compare a piano app on the iPad with a real piano and shit on the entire idea of the iPad... without addressing the obvious point that an iPad can be a million things other than a piano.


His point is not about what we should do, it's about what we should aspire to. He's criticizing a video showing a Vision Of The Future, not the current status.

> shit on the entire idea of the iPad

That's not at all what he's doing. He's saying the iPad was the Vision of the Future in 1968. Now the iPad is here. It's no longer part of the Future, and we should stop talking about it as if it is, because we need to reach beyond it.

> I don't think you should get points for saying "we should do this impossible thing, that would be better."

When the other would-be visionaries are saying "we should do this thing that has already been done", saying "we should do this impossible thing" is worthwhile.


> None of the things BV has worked on -- smart tiles or putting pieces of paper under a camera/projector -- do ANYTHING AT ALL like what he's talking about here.

Sure they do. At Dynamicland there are turntables that can be manipulated, joysticks on springs, books with pages that turn, and pieces of paper that are manually rearranged on surfaces by groups of people. All of that involves physical devices, and using hands for something other than sliding on a 2D surface.

Does it solve all the issues he raises here? No, of course not. But it's clear that he's working on it, and providing rich avenues of exploration for other people.


>> without addressing the obvious point that an iPad can be a million things other than a piano.

I think you're missing the point. He's saying we should work toward creating a new kind of technology where the iPad would really feel like a piano when you're in the "piano app", but feel very different for other apps.

What you're saying is that you don't think that such technology could ever exist; BV thinks otherwise and he's working toward that vision. He's also saying that we shouldn't take the future for granted as if a technology will just suddenly appear. It takes hard work, funding and a clear vision.

I can think of a few ways we could add tactile feedback to a dynamic interface (i.e., turning a 2D piano on a screen into a real experience). VR is going in that direction (seeing 3D from a 2D screen), and you can imagine sensors at your fingertips so that, depending on where you touch, you'd feel something different. Again, that's just an idea; there are many ways it could be achieved.


iPad cannot be any of those million things, no more so than a map of Paris is Paris; all it can be is a picture of those things under glass. Pictures of things might be the best we can build today, but it's a limited and unexciting dream for the future.

> I don't think you should get points for saying "we should do this impossible thing, that would be better."

When everyone else seems happy to accept that a picture of a piano is a piano, maybe you should get points for pointing out that the Emperor's New Clothes are lacking many important features.


It actually sounds kinda like it:

"We are inventing a new computational medium where people work together with real objects in the real world, not alone with virtual objects on screens."


He's been working on https://dynamicland.org/ since that time.

So there you go.



