I think he took this way too far but I can get behind the core of it. I'm especially dismayed by touchscreen interfaces in cars. Give me DIALS and KNOBS. They are SO MUCH BETTER than a touchscreen because I can easily use them accurately without taking my eyes off the road. No amount of touchscreen haptic feedback will ever make up for this.
If you must embrace high-tech in the car cockpit, voice control is fine (if it works well), but touchscreens are horrible in this environment.
Take the infamous iDrive example. For all the good design that BMW puts into their cars, it all suddenly went out of the window when it came to software. It's like when the software engineers showed up, everyone else threw up their hands and said "take it away--we don't care how poorly usable it is, for it is 'cool' and that's what people want."
For example, if you wish to change the radio to another preset, and you have the misfortune of being elsewhere in the touchscreen UI, you must first navigate to the Radio screen, then switch to Presets (working off slightly hazy memory here, so pardon any inexactness). How is that better than just whacking the preset button on the radio?
A must-read book for any techie, in my opinion, is "The Inmates Are Running the Asylum" by Alan Cooper. It gave me a new perspective on computing. If you do any kind of software design that is used by a human, you must read it.
So that book starts out with "Riddles for the Information Age" and asks you what happens when you cross a computer with: an airplane, a camera, an alarm clock, a car, etc. As you might guess, the answer is that things did not go well.
Not sure if you noticed, but the linked article is on http://cooper.com, which was founded by Alan Cooper.
Ironically and incredibly, with current Chrome on Mac, I am unable to scroll to the bottom of the page on their site describing the book: http://www.cooper.com/#about:books
>http://www.cooper.com/#about:books
It's horrible on Safari as well and a little better on Firefox (also on Mac), but the "Experiencing technical issues?" link to the plain-vanilla version at the bottom left hints at the experimental state of the interface.
There have been a number of iterations of the iDrive, and some may not be much good, but mine works quite well because:
1) There are 8 programmable buttons for radio stations/destinations and maybe some other things. They are numbered.
2) Steering wheel buttons for audio input selection and track/station up down and volume.
3) iDrive gestures of pushing and holding East for navigation and South for entertainment (I don't have a phone kit as I don't want to call while driving, that would be the North direction).
So I only need to spin the iDrive for setting destinations or scrolling through the contents of the connected iPod. I find the iDrive with the mentioned shortcuts a much better concept for a car than a touchscreen (at least until you can feel the screen content without looking).
I find a recent Mini much worse for missing the programmable buttons and possibly the press-and-hold compass-direction gestures.
Actually it is an early 2008 1-series, so similar timing, but I think the key is to try the particular version, as there is some variety (and I'm not sure it is all progress over time).
My new Golf GTI has a touchscreen radio/nav system, but it also has physical dials that can be used in place of touch for everything. It's actually quite fantastic. I can use knobs (or steering wheel controls) when driving, and the touch controls are even faster and friendlier when I have the ability to look at the screen. Both interfaces feel natural, and have complementary benefits.
I don't drive, though I completely understand the need for physical interface controls for commonly used tasks, and in a car that's a fair few. Another way of looking at it: if touchscreens in cars were that good, then KITT out of Knight Rider would have had one. I believe head-up displays have potential in many car interface areas; that you don't see fighter jets with touchscreen interfaces says a lot.
I still mourn the loss of the jog wheel on BlackBerry devices, damned useful for scrolling down long lists in a controlled way without obscuring the screen or having to play with a trackball designed for mice!
Another example of an interface where dials win over touch would perhaps be DJ mixers. Touch interfaces will only get better, but for most people they will never replace the tactile, visually static presence of an actual knob/button. I want a phone with a physical button you press to answer calls, not a virtual one.
Even in the knobs world, there are levels of suitability.
In automobiles, there's an increasing tendency toward soft/electronic controls for cabin heating/cooling. Which is ... annoying.
My preference is for the three-control layout (plus A/C switch) first introduced by Japanese automakers: one for fan speed (and off), one for vent settings, one for heat mix. The switch enables/disables the AC.
Compare that with the standard American design at the time, which had a fan speed switch, a heat control, and a multi-function slider combining both vent settings and AC. The end result was more complexity and fewer available settings (want to direct cold air only at your feet, or blow the windshield without AC? No can do).
Agreed regarding touchscreen. I doubt even voice will work as reliably as physical controls though.
This. I much preferred the click wheel and button of my old iPod Nano to the touchscreen of my new one. I could pause, skip, rewind, and adjust the volume on the old one without looking at it.
I look forward to a future where many natural interactions are improved through computer augmentation. Unfortunately, for the vast majority of today's computing interactions, you need some way of conveying more information than what is naturally present.
In any well-designed interaction it needs to be clear what you (the user) can do, and what the state of the system you're interacting with is. 'No UI' only works when you're augmenting a system where these two things are already clear. For instance, in the case of the car door system, you already know that you can open the car with your key, and you can tell when the car door unlocks.
When I open a new app for the first time, I don't already know everything it can do. I need to see the interface to know what's possible. And I need to see feedback to know that I'm making progress.
I've heard that many Nest owners are actually a bit disappointed in its smart features, as there's no way to tell why it's doing the things it's doing (why did it just make it cold in here?) Without a way of communicating its reasoning, people are suspicious of its "father knows best" recommendations. Even Amazon tells you roughly why it's recommending something to you.
And Voice UIs don't count as no UI. In fact, they're often a very poor interface, as they convey information much more slowly and invasively than a visual interface, and there's no a priori way to know what voice commands a system accepts.
The NFC system in Japan (standard on nearly all phones except the iPhone) works by putting virtual cash onto a chip that doesn't need the phone to be on. In fact, you can do it with your train pass, which was the first application; they later added the same chip to the phone's case.
So, no need to turn on the phone or choose an app. To add cash to the chip there is an app so basically you add $50-$200 and then don't worry about it for a week or month.
Since they started as train passes, you can also ride all the trains, subways, and buses (take out phone, tap on sensor, done).
You can even reserve seats for long distance trains on your phone, walk on the train, there's a sensor above the seat you tap to "check in". Tap it again to check out if you want to switch seats.
The chip holds all the transactions on it. My 2006 Sony Vaio has a built-in reader for the chip, which can import those transactions for things like expense reports. I would guess that more current phones have apps for reading the chip.
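The stored-value model described above can be sketched as a toy data structure. This is purely illustrative: real chips of this kind implement the balance and log in secure hardware with cryptographic authentication, and the `StoredValueChip` class and its cap are assumptions for the example, not the real protocol.

```python
class StoredValueChip:
    """Toy model of a stored-value transit/payment chip.

    The balance and transaction log live on the chip itself,
    so the phone (or card) needs no power or network to pay.
    """
    MAX_BALANCE = 200  # assumed top-up cap, per the $50-$200 range above

    def __init__(self):
        self.balance = 0
        self.transactions = []  # readable later, e.g. for expense reports

    def top_up(self, amount):
        if self.balance + amount > self.MAX_BALANCE:
            raise ValueError("top-up would exceed chip limit")
        self.balance += amount
        self.transactions.append(("top-up", amount))

    def pay(self, merchant, amount):
        if amount > self.balance:
            return False  # reader rejects the tap
        self.balance -= amount
        self.transactions.append((merchant, -amount))
        return True

chip = StoredValueChip()
chip.top_up(100)
chip.pay("JR East", 3)
print(chip.balance)  # 97
```

The key point the sketch captures: every operation is local to the chip, which is why a tap works with the phone off.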
The same as stolen wallets. You only lose the cash on the phone. The thief can't get more money from the phone as that would require passwords he doesn't know.
Hacker News, sadly, supports very little formatting. http://news.ycombinator.com/formatdoc describes all the formatting HN supports for comment, submission, and user profile text. <ol> is not supported. The common workaround seems to be using separate paragraphs for each list item, though some people put the whole list in a code block instead.
Increasingly, stores I go to do not require signatures for credit transactions under a certain dollar value. At the Home Depot (which recently started accepting Paypal - what?) I believe the limit is $50. So the steps are:
1) Get wallet
2) Find the card
3) Say "credit please" as I swipe the card
4) Get receipt.
I'm not sure how you interpreted the article to be saying Google Wallet is an ideal solution. It's still an interface, just a different one. It seems to me that the whole point of the article is that both of those interfaces are unnatural, and the ideal interface (aka no interface) is exemplified by the Square Card Case: you enter a cafe, you ask for a coffee, you get a coffee. Payment is taken care of without you taking your phone out of your pocket or even thinking about it.
I just thought the exaggeration of steps was ridiculous and unnecessary. I can already do something similar at local bars, but you still have to include the steps of knowing the bartender, having a running tab, getting your credit card information on file with them in the first place, etc. He conveniently leaves out all of these steps yet includes things like "find the machine".
It's not only terrifying. It's moronic. Not only does this detract from the actual purpose of the speedometer, but it's a terrible interface for Twitter. How does anyone think this is a useful feature? "Yay! More distractions when I'm driving, and right in front of me so I can't ignore them easily!"
Quite a few people read email, Twitter, and Facebook while driving. Having it in front of you is much better than in the palm of your hand, even if you shouldn't be doing it at all.
That's debatable. It's kind of like banning meth: people want it anyway, so they blow up suburban houses trying to make it. If they could just buy it at CVS, there would be more methheads but fewer meth labs. It's all about how you want to tune the numbers, and the same applies to tweeting while driving. (Twitter on the speedometer: less distraction, more temptation. Twitter not on the speedometer: less temptation, more distraction. Both are bad.)
I'm not sure this analogy applies. You're comparing an addictive drug with a social network that, for most reasonable people, I would imagine, is not necessary to access while driving. In fact, I'd be willing to bet that most people would agree this is a bad idea, until you actually put it in front of them.
I think the important part of the article was that he was encouraging the simplest solution to interface needs. In this regard I totally agree with him. I think some of the stuff they've added to consumer products has fallen into a weird modular development space.
"Oh, you want us to add a touch responsive display to a refrigerator rather than using mechanical buttons?"
Then they throw the kitchen sink at it. Once you've installed that touch screen and the hardware needed to control it, you might as well add the ability to control ice cube production from a mobile app. Since you're already there, you might as well give them the ability to Tweet that they just pulled a slice of double-chocolate cake out of the fridge.
Just because these components are capable of acting as small computing devices doesn't mean they should be utilized like one.
The entire time I'm reading the article I'm cringing too. I don't want a device that just operates as a universal key for everything I do during my day. I walk up, order a sandwich and they charge my account without any physical transaction happening and no passcode required to open my phone? What's to stop someone else from doing the same thing?
A car that opens it doors and starts its engines because I have a phone in my pocket? Same issue.
Steal someone's phone and you steal the keys to their life then (really easily).
"What's to stop someone else from doing the same thing?"
No different than what happens now. When your wallet gets stolen you call the credit card companies to cancel your cards.
Yes, sometimes when you order a sandwich you don't want to pay for it immediately, and sometimes when you get close to your car you don't want it to unlock. But those times are the exceptions. The exceptions are the times that you should deal with a more complex interface. The other 99% of the time it should do the right thing automatically.
Renault experimented with the no-interface approach around 2000 in their cars. They had a wireless key that would lock/unlock the car depending on how far you were from it.
Later they added a button to lock/unlock the car, because people were not comfortable with the technology:
1. People would get inside their house, put down the key, then go back out to check that the car was locked.
2. You could not just park in front of a shop, because the car could get unlocked while you were shopping if you got too close. Any activity around the car could end up in a continuous stream of locking/unlocking.
Unfortunately, making sure that nobody can enter your car is one of the primary concerns of car owners. And if it is not, sooner or later your car insurance will convince you otherwise.
There are supposed to be checks in place to stop things like credit card theft from having an immediate effect, though. Cashiers should be asking for ID and checking your signature. It's actually a control that is supposed to occur in stores.
Someone grabs your phone and gets free meals? What controls are there on that interface?
I don't even have a dongle on my keys to unlock my car remotely. Doesn't bother me in the slightest.
> There are supposed to be checks in place to stop things like credit card theft from having an immediate effect though.
> Cashiers should be asking for ID and checking your signature. It's actually a control that is supposed to occur in stores.
No, it's not. For small purchases, merchants are not required by the credit card companies to check ID or to even ask for a signature.
And for larger purchases, it typically still doesn't happen. My credit card has "ask for photo ID" written on the back instead of a signature. Even with this, I get asked for ID maybe once every two months. Checks that are supposed to happen don't matter. Only checks that actually happen matter.
> Someone grabs your phone and gets free meals? What controls are there on that interface?
Well, if you're using the system Dorsey was describing, your photo pops up every time you go to pay. So the cashier sees it without asking for it. So if I try to pay with your phone, the cashier can say, "I'm sorry, but you don't look much like rbellio. Should I call my manager over?"
It's not a credit card company control, it's a merchant control to limit liability.
Having worked in loss prevention in the past, I know that the intent is to make sure these controls are followed and maintained. The issue you run into with cashiers is that the turnover rate is usually so high, or the checks are done so infrequently, that the controls become lax.
Why would merchants be so concerned with these controls, you might ask? Because if it can be proven that the charges were made fraudulently, the merchant becomes responsible for them. If someone buys $300 of stuff from a store using your credit card, the store loses that cash.
If something so simple is deemed optional, how many merchants do you think are going to be willing to pay for the equipment to view your picture when they try to charge your phone?
Speaking of pictures associated with your phone. Where is this picture going to be stored? On the phone? Where, if someone steals it, they could replace it? Should we have a national database then that relates your phone number to an image of you?
Actually merchants are specifically prohibited from asking for ID when a card is signed. That's under the merchant agreement they have with Visa and Mastercard.
So, the training I got when I worked as a cashier way back when must have been a fluke, even though it was with one of the largest retail companies in the country.
But you can eliminate that interface too, with sufficiently advanced driverless cars that know (from e.g. your habits or calendar) where you want to go.
The author only counts as an interface what the user feels is not part of the action, but an obstruction between what they manipulate and what they want to accomplish.
If you consciously feel you have to exert effort to control your eyes in order to see, then yes, to you your eyes are an interface. Many others probably don't think of them that way.
About 30 minutes ago, I tried the Starbucks app to pay for coffee. I keep seeing all the cool dudes (in the SoHo Starbucks) take out their shiny phones to point and pay. So I gave it a try today, and it was definitely not a better experience (maybe worse) than just handing over my credit card. The specifics:
1. While standing in line, I found the app and had it open, but the phone kept auto-locking before my turn came (I re-opened it at least 3 times).
2. The app would not scan because the screen brightness was low. I frantically went to Settings to change the brightness before retrying, and all this time there were people standing behind me, pissed.
I think using phones as "keys" or "payment cards" is not the best interface. Ideally there should be a separate device (like a credit card) to do payments and a "Key" device to open all my locks.
As I understand it, the key (phone) stays in your pocket and the door is unlocked by touching the door handle while you (the key/your phone) are in proximity of the door.
It is similar to how https://lockitron.com/ is advertised to work, except you are required to touch the handle.
No UI means that if the phone is in your pocket the door behaves as though it is always unlocked.
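That touch-the-handle behavior boils down to a simple conjunction: unlock only when an authorized key is in range and the handle is actually touched. A minimal sketch of that logic (hypothetical, not Lockitron's or any carmaker's actual firmware; the names are invented for illustration):

```python
def should_unlock(handle_touched, keys_in_range, authorized_keys):
    """Unlock only on deliberate intent (a touch on the handle) plus
    proof of authority (a paired key nearby). Neither alone is enough,
    which avoids the 'car unlocks itself while you shop' problem."""
    return handle_touched and any(k in authorized_keys for k in keys_in_range)

authorized = {"alice-phone"}
print(should_unlock(True, ["alice-phone"], authorized))     # True
print(should_unlock(True, ["stranger-phone"], authorized))  # False
print(should_unlock(False, ["alice-phone"], authorized))    # False: proximity alone never unlocks
```

Requiring the touch is what turns "no UI" into "no UI with intent": the door still behaves as if unlocked, but only when you reach for it.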
If the app were fixed to not let the phone turn off the screen, and if it automatically adjusted the screen brightness while running, what would your objection be then? (Sloppy/buggy programming can be corrected.)
This is great, except the Nest thermostat's learning mode doesn't work (for me; it kept coming on at 3am, and no amount of button pushing would stop it, short of disabling "learning mode", which is what I did), so you have to use its UI anyway.
Fortunately the Nest's UI is good enough that it's still a good thermostat without the learning mode. There was a comment on HN the other day along the lines of "if you have good enough AI, does UI design quality matter so much?" and I guess I think that it does: if there's any way for the AI to mispredict, you need something good for correcting it.
Can you reset the Nest so it discards previous learnings and starts from scratch? Or, alternately, is there a kind of decay factor so that more recent settings, if they're consistent, override older settings?
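A decay factor like the one you're asking about could be as simple as an exponentially weighted update per time slot, where each manual override pulls the learned setpoint toward it and old habits fade. This is pure speculation about how such learning might work, not how Nest actually implements it:

```python
def update_setpoint(learned, observed, alpha=0.3):
    """Exponentially weighted moving average. Recent manual settings
    pull the learned setpoint toward them; older settings decay away.
    alpha near 1 forgets quickly, near 0 forgets slowly."""
    return (1 - alpha) * learned + alpha * observed

setpoint = 68.0  # degrees F, learned so far for the 7am slot
for manual in [72, 72, 72]:  # user keeps overriding to 72
    setpoint = update_setpoint(setpoint, manual)
print(round(setpoint, 1))  # 70.6
```

With a scheme like this, a "reset" is just discarding the learned values, and consistent recent overrides dominate without one.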
"Getting our work done was an alphabet soup nightmare."
Exactly. This is why I'm in favor of a worldwide shift to hieroglyphics and touchscreens for business. Writing business correspondence is old hat. Letters, words, sentences, paragraphs, what a nightmare. And shorthand? Don't get me started. Let's face it, we all would much rather touch some graphical shapes on a screen to communicate. A picture says a thousand words, so why are we typing them out? What a waste of effort. Text has got to go. It's time to leave the alphabet soup behind.
Everyone at BMW should read this article. They used to understand usability, and that less is more, from window button placement to easy-to-read VDO gauges.
This is so true. My car is the last generation of 3 series that has the classic BMW amber gauges (doesn't kill night vision) and no iDrive (terrible series of spins, pushes, and clicks to do something simple like change the radio station to a CD track). I've been in family member's newer BMWs with white gauge lighting and screens all over the place (dash and cluster) and a distinct lack of dedicated buttons that I can use without looking at a screen while driving. I hate it.
BMW has started to include more configurable buttons. I think that is a good idea, giving the best of both worlds: keys for the functions /you/ need most often, and not an ocean of keys for every possible thing.
Too often usability is sacrificed for design. White lights are not the worst in this sense. For example:
Audi has red lights, which, as some people believe, makes drivers aggressive.
VW has blue lights. This is absolutely terrible, as the human eye cannot focus blue light very well. Extra-fuzzy numbers!
The worst thing I ever saw in that regard was a rental: Light gray dashboard. I had to cover it with dark t-shirts, otherwise I wasn't able to see the road!
Weigh the increase in functionality against the "decrease" in usability. I think the additional functionality you get far outweighs any reduction in usability (which, for the record, I don't think is that bad).
This. I still have all the buttons a non-iDrive 3 series has, and I typically turn off the screen when I don't need the additional features. But it is really nice to have navigation, BMW Assist, and all the additional features that come along with iDrive when I want them. I have a hard time understanding the downside.
You don't know what you're talking about. Usability is paramount. You would not be that fond of iDrive if it forced you through three menus to lower the windows.
Think about the cheap TV sets with no buttons. A ton of menus just to change the contrast.
Or microwave ovens unusable without manuals.
I've got javascript disabled, and for some reason the page needs javascript to display anything other than the header. I was initially confused as to whether this was a joke of some sort.
The app I develop at work won't render if you have Javascript disabled. The entire app is rendering into the DOM using Javascript templating and data fetched from a REST API. Using the web with Javascript disabled? May the Lord have mercy on your soul.
I can deal with web apps that require javascript to function, and I'll happily whitelist apps that I actually want to use. But I don't want to let every asshole with a blog or a newspaper that gets linked from reddit or hackernews execute code on my computer. I think that's a fairly reasonable position.
Star Trek? You know we have this technology at supermarkets now, right? :)
This still isn't truly a no-interface situation, though. It's an interface that's so natural that you don't have to think about it. You express your intent by walking toward the door. You're still expressing intent, though. The sensor just does a really good job interpreting that intent and acting on it. But like all interfaces, this one is still imperfect. e.g. Sometimes the door will open up when you're just walking too close. Or sometimes it doesn't open when you expect, presumably because it's poorly calibrated (or maybe you have no soul[1]).
If you observe carefully, Star Trek doors seem to detect intent rather than proximity. Sometimes people are just in front of them talking and as soon as they finish and want to get through, the door magically knows it needs to open.
On the other hand, when they are facing someone through the door and want to close it, they usually have to press a little button next to it.
Maybe it could be possible IRL with a face detector and looking at the direction and speed of the person.
But in many cases, you can eliminate the human-machine interaction itself. Machines can work in the backend, without you having to interact with them.
When I enter a shop and a ringer alerts the shopkeeper of my presence, I never had to interact with the machine at all - it just sensed me and acted in the background.
So when you want to create an interface that provides a good experience, the less you involve the user the better. The best example, in my opinion, is the Nest thermostat. It watches you, it learns about you, and based upon its learnings it adapts settings.
That's why the Internet of Things will become big. It's not the use case of turning the oven on 20 minutes before coming home yourself. It's about the oven knowing you're eating a prepared lasagna that needs to be in the oven for 20 minutes. While you drive home, traffic information and your location are used to determine when the oven needs to start its work.
Some of this is non sequitur. To say that a refrigerator should not have Twitter on it is not to complain about that refrigerator's interface, but rather its ridiculously unneeded functionality.
Wow: open door to unlock. I can see some problems (e.g. you want to check it's locked; someone breaks in while you are standing near-enough to the car), but this seems a powerful approach.
Google search is an example: almost no UI, improves over time, adapts to you.
Awesome, I agree, except that all of those examples are shit.
It's not "more simple" to just walk up to your car and have it magically unlock based on proximity. Simple is using your damn key to unlock the car, not layering stacks of abstractions in order to compute one's location relative to a vehicle. In fact, that order of events should have gone something like this (as a generic, modern-day implementation of this functionality):
- owner approaches car
- owner's keyfob transmits signal to car
- owner's car polls for incoming signal
- owner's car decrypts keyfob signal
- owner's car verifies that the keyfob has a legitimate encrypted key for that vehicle
- owner's vehicle signals the locking routine in the ECU
- owner's ECU flips the solenoid for only the driver's-side door
- door unlocks
- owner enters vehicle
How the hell is this more simple than:
- owner approaches car
- owner unlocks door with key
- owner enters vehicle.
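For what it's worth, the "decrypts and verifies the keyfob signal" steps in the long list can be sketched as a toy challenge-response. This is a generic HMAC-based illustration of the idea, not any manufacturer's actual protocol, and every name here is invented for the example:

```python
import hashlib
import hmac
import os

SECRET = os.urandom(16)  # shared key paired into fob and car at the factory

def fob_respond(challenge, secret=SECRET):
    """Fob signs the car's random challenge with the shared secret."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def car_verify(challenge, response, secret=SECRET):
    """Car recomputes the expected response and unlocks only on a match.
    Using a fresh random challenge each time prevents simple replays."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # car broadcasts a nonce
print(car_verify(challenge, fob_respond(challenge)))  # True
print(car_verify(challenge, b"\x00" * 32))            # False
```

Which arguably supports both sides of the argument: the machinery really is that elaborate, and the owner really does see none of it.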
Likewise, having your payments automagically charged based on location is NOT more simple. Simple is ordering your food and handing over money at the register.
The best interface is a simple interface, not a whole bunch of programming voodoo to achieve a simple task.
It is not KISS from the engineers perspective, but from the user perspective.
>> The best interface is a simple interface, not a whole bunch of programming voodoo to achieve a simple task.
The point is: the best interface is NO interface.
Go to car. Open door. Sit.
Not: Go to car. Find keys. Unlock. Open Door. Sit.
Ordering your food is less simple than ordering your food and handling money? You do lose a lot of anonymity, though, which offsets the ease of use.
I agree. When I go to my car, I walk up to it and touch the door handle, which unlocks it. I sit down and press the start button to start the engine. The key never leaves my pocket, and I don't really care what it's doing as a user. (I personally care since I'm an engineer, but that's beside the point.) To the user, the unlock-and-start procedure has precisely 0 steps that aren't physically opening the door and starting the engine via a single dash button press.
"key engages pins in lock" is missing some steps - like the atoms push against each other and stuff...
Why do I care about the pins? I don't. They do their job without me telling them what to do.
With menus it's not that simple - I have to engage them; I have to obey them.
The comparison holds. But this is not what I meant. Lots of comments here expose interface details from a hacker's perspective and forget that they don't matter. Users don't care if it's a keyfob or a key if it does the same thing with the same effort.
My point is that any interface has some abstractions beyond which you get only hacker delight in knowing their inner workings.
For usability purpose you don't need to know there are pins or bits - just that they work.
When, on the other hand, you artificially expose inner workings (menus) and _force_ the user to take note of them, you should not be allowed anywhere near a design table.
A designer's job is to make users' lives easier, not to shout at them "you stupid user who doesn't know how to exit my maze".
I don't really disagree with what you're saying. I just don't see the relevance of your references to menus. The conversation in this comment thread is centered around wireless keyfob complexity vs key complexity, and you jumped it with a non sequitur about menus.
That statement hides a lot of mechanical complexity (especially when you consider central locking, the alarm, and the immobiliser). You should break it down into what happens when you turn the key, like you did with the remote key example. You can have just as many steps.
If the technology does 1 Billion operations on your behalf when you could do it in 10, but now you have to only take one action, then it is simpler to you.
Let's simplify the top instructions the same way the bottom has been. The bottom is a high-level view, the top a low-level view. From the high-level view it would be even simpler than the bottom: owner approaches, enters, turns on. No interfacing required until you get in the thing and press the button.