Hacker News
The Cooper Journal: The best interface is no interface (cooper.com)
121 points by davezatch on Aug 30, 2012 | 34 comments



At first I thought I would read a rant about current user interfaces, but the article turned out to be well thought out, smartly written and, above all, it doesn't discuss interfaces _per se_. It goes up a meta level and proposes eliminating interfaces everywhere because, instead of helping people, they hinder interaction.

For the first few lines I had to convince myself that it actually is a good idea, but they are absolutely right. We are on an interface craze. Everything has a Twitter panel, which isn't a bad idea in itself, but most designs nowadays obstruct regular interaction to ease a few corner use cases.

Blogs and newspapers hide their content beneath social buttons. Current "Smart TVs" are bad as TVs even though they have great Facebook support.

But 99% of the time you just want to watch the TV, and if you need to press some buttons for that, you're actually delaying and complicating the main use case.

Can I mention Apple at this point? Apple is both hated and loved because of its interface design, but I think they're spot on. If they can define the best interaction patterns for a TV and program them into their future iTV, they might eat the market.

What do you think?


First off, I liked the article. He's right in that visual UI should only be looked at after everything possible has been gleaned from process.

As for TVs, I think most of those designing the interfaces are still stuck "inside the box". They need to take a couple of steps back and learn how people actually use them. This is one of the main reasons I dropped cable and now feed my TV from a web-connected PC. The cable box interface was a thick jungle of rectangular boxes, when searching and filtering should be front and center. We have the technology!


My grandfather, a regular country man, is horrified by the amount of choices he has.

Nowadays there is a remote for the TV with a lot of buttons, another for the video decoder, with more buttons, and another for the radio, with yes, more buttons.

He was used to having a machine that worked as soon as it was switched on. The TV had buttons labeled 1-10 for the ten channels. The radio only needed a dial and a volume slider.

He can't use the radio anymore and, with his age, he's starting to get stuck when he presses the wrong TV button (teletext, menus, etc). We taught him to turn the TV off and on again.

More choices are not always better. Interfaces need to be simple or nonexistent. And, by the way, TV manufacturers should ship TVs with two remotes: the regular one, and another for seniors, containing only buttons 1-10 and the volume control.

Seniors, for some reason, think they'll blow the machine up if they press the wrong button. And seniority is where we're all heading...


My grandparents always lament how complicated things are these days. They too just want a simple 1-10 and on/off remote. These do exist, but I keep reminding them why it's not possible for them: they have cable. And a DVR. And a DVD player. And a sound bar.

In the days of simple on/off TVs, these didn't exist. I know they're not going to give up the Food Network, ESPN, Nickelodeon for the great-grandkids, CSI, etc. Things are more complicated now for a reason. When there were only 3 channels, you only needed one remote with 10 buttons on it. I agree that interfaces have gotten out of control (I can enter the menu on my TV without the remote, but I can't exit it?), but the reality is, you get features or you get simplicity. There's no way to control 3 or 4 disparate systems with all their capabilities with only 10 buttons and no on-screen interface.


Yes, it's not really possible with current products. But part of the point is that we should strive to make it possible; the original 6-button Apple remote should be the minimum and every additional button needs a significant justification for its existence, such as volume control.

On/off? Make everything power efficient, turn off after some idle time or when detecting that no one is around, make auto power-on reliable across devices (HDMI-CEC is terribly unreliable), reduce "on" times to no more than two seconds, etc. (Requiring a full minute for a TV to boot is insane.)

Fast forward / rewind? Reuse the left/right arrows. Heck, Netflix on my TV doesn't even let you use the dedicated fast forward / play / etc. buttons - you have to use the d-pad.

Number pad / channel control? Depends on the person. Cable's dying though, so it makes slightly less sense each day.

3D button? Detect whether glasses are being used. Or develop useful non-eyestrain inducing 3D in the first place.

Widescreen button? Ditch legacy hookups already!

Closed caption button? Who wants to toggle this often enough to warrant a dedicated button? (Yes I know broadcasters are retarded enough that you need to, you shouldn't need to though)

Device selection? Maybe one input selection button. But the button presses should always go to the intended device without having to push another button on the remote. Yes, HDMI-CEC sucks for this again (big surprise)


How about ten (or so) nice big buttons with OLED key caps? I've used rather remarkably simple smart remotes in the past; their only real problem being that they used resistive, early-generation LCD touch screens, which are not great from a tactile perspective. A simple, clearly labelled remote that's capable of changing context beats the heck out of three or four devices with dozens of tiny, poorly-labelled rubber chicklets.


It doesn't necessarily beat the heck out of anything. Tools with changing contexts, whether a remote or software, are highly confusing for users. They always have to remember which state they are in, which is especially hard when visibility is poor (like a remote you want to press blindly). vi isn't hated by so many without a cause.


I'm not affiliated with this product but it looks like the remote you're talking about: https://www.flipperremote.com/

We need more of this kind of conversation at HN - identify a problem or pain point and talk about how to meet that need.


Whoa, that's exactly it. Now at least I know what to look for. Thanks for the reference!


The PS3 XMB interface is still one of my favorite simple interfaces to use on a TV screen, especially since it works with only a directional gamepad and a few buttons.


Video games should be looked at more carefully, for they have nice properties:

    - as you mentioned, few controls needed
    - instant interaction, no training or documentation needed
I remember seeing people playing final fantasy and diving into deep listings with ease and efficiency.


I would like to understand why you think being able to tweet from your fridge is a good idea.


I think the main problem is that programs are designed by programmers who think that the users are idiots. Programs designed for programmers have a much leaner interface.

e.g. think about accounting applications:

You had trained data typists in the 60s who typed the accounting records in a format that could be directly processed in a batch-oriented way.

You had 3270 terminals in the 70s for trained typists to enter the accounting data into a form that produced the records to be processed in a batch-oriented way.

User interfaces became more interactive in the 80s, applications became personal, and there was no batch-oriented processing anymore.

Most accounting software requires the use of the mouse now, so it's no longer usable by a trained typist who prefers to keep her fingers on the keyboard.

Let's step back:

My own home-grown accounting software is written in AWK. It parses plain text in a markdown-like syntax and produces all the papers I need for tax and accounting with a single "make". Calling ":w<cr>make" is F2 in vi for me. The complete accounting system is 88 lines for invoices, 128 lines for monthly tax, 168 lines for annual tax. It's purely batch oriented, requires no database, and I'm using my preferred interface, the vi editor.
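To give an idea of what "batch oriented" means here, a minimal sketch (the one-invoice-per-line format and the file names are just assumptions for illustration, not my actual scripts) that sums invoice amounts per month from a plain-text ledger, run from the Makefile as "awk -f monthly.awk ledger.txt":

    # monthly.awk - sum invoice amounts per month (illustrative sketch)
    # assumed line format: "2012-08-30  Client name  1200.00"
    $1 ~ /^[0-9]+-[0-9]+-[0-9]+$/ {
        month = substr($1, 1, 7)     # e.g. "2012-08"
        total[month] += $NF          # amount is the last field
    }
    END {
        for (m in total)             # month order is unspecified here
            printf "%s  %10.2f\n", m, total[m]
    }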


This is not "the main problem". This is "a" specific way of looking at things that becomes relevant in certain situations.

There are factors at play that often render this view naive, such as the fact that untrained consumers constantly need to learn a panoply of interfaces over time. Most contemporary interface designs you come across outside specific professional domains are optimized for learnability, by making use of physical metaphors and adhering to conventions. You seem to believe this is not for good reason. The results of usability studies show otherwise. The fact that interfaces are the way they are shows otherwise. The experience of myself and many people I've met shows otherwise.

Where training can be afforded, interfaces can be more "lean", but more often than not, it cannot. I suspect that you are underestimating (1) the commitment you've made to allocating time to learning new interfaces and (2) your talent in doing so.

As for me, I likewise prefer VI as an interface, since I allocated a year of free-time to learning to be productive with the damn thing, but I could have spent that year instead living my life.


Developers want to pretend you[1] do not exist. "All users are idiots." Maybe the problem is with developers[2], not users.

1. The user who can learn to use vi and AWK.

2. In many cases it is they the developers who can't figure out how to use vi and AWK. Their solution is to conclude no one else will be able to use these programs either, because... "all users are idiots".


I'm curious just how far down the rabbit hole you can take this philosophy.

Just how many APIs and programming interfaces and other interaction points of a computer can be eliminated or automated away or even made to adapt to user preferences?

What would this mean for the divide between the Operating System and its Applications?


One example of an interface you could possibly get rid of is the password interface.

Identify people by voice, the way they type, the way they walk, their body shape, fingerprint (already used), whatever you can think of. Especially when combined, these could identify someone pretty accurately.

Of course, using these features is only the first step. If it's still an interface, a "say your password and look into the camera" screen, that's no real progress (apart from getting rid of passwords). But using it during interaction to authorize that interaction would have a great effect: checking fingerprints on the keyboard or mouse, or even better, an automatic iris scan or anything like that.

(I tried to build the first step of a prototype for this in my CS bachelor's thesis.)


Slippery Slope arguments are logically fallacious. It's about knowing the product's true purpose and using good judgement. It's about following good design principles, such as Dieter Rams's ten principles for good design: https://www.vitsoe.com/gb/about/good-design.


Pretty far. Imagine one day not having a remote for your TV at all, and just looking at a certain spot (camera) on the TV, saying "TV, turn on", "next channel, next channel", "louder".

That would completely eliminate the need for a UI or menu or remote.


If I sit down on the couch, move my focus to the TV, and my brain pattern implies an expectation of watching the TV, it should turn itself on.

If I sit down in the morning, it should have learned, based on all the previous mornings when I chose the morning news, that I want to watch the news.

If I sit down with a bowl of popcorn it should open the movies selection and let me take over using an interface from there.

If I sit down with my girlfriend, it should filter the movie selection based on our previously recorded viewing history so we get the best movies for us both; if it's just me, it can hide all those romance chick flicks.

If I sit down wearing my Liverpool jersey and Liverpool is in fact playing a game or is mentioned in any program description it should default to showing me that.

There's a whole lot that can be read from the device's environment. Quite a bit is technologically feasible as of today (identifying a person by means of a camera and keeping track of viewing habits), and some is slightly further out (reading your mental activity to determine that looking at the TV means you intend to watch it).

A big thing here for me is automated personalization; I think this is a very viable next step.


It would get rid of the remote, but it doesn't get rid of the interface. And who wants to talk over a movie you're watching just to change the volume?


In that case just move your mouth and the camera should use facial & mouth movement recognition to know what you want.


One reason retailers are champing at the bit over near-field payment is that it will eliminate all the unproductive conversation between employees and customers. With customers distracted by their phones, there will be less likelihood of personal interaction. This will leave the employee free to pimp magazine subscriptions and extended warranties in strict accordance with the scripts retailers are forcing upon checkout line staff.

There's already a great interface for taking people's money which doesn't require a location-aware electronic device. It has a face and uses natural language.

(Now, get off my lawn).


Retailers will surely recognize that their value-add is in the personal, friendly, and helpful atmosphere they can create with their staff. Square seems to have pretty much hit the nail on the head with 'Auto Tab': eliminating the awkward, unglamorous part of the process (the obvious exchange of money) means that the coffee shop barista can instead focus on greeting the customer by name and offering the usual or taking a different order, without reminding them of the money they're spending or directly fiddling around with cash or phones.

Sounds like a pretty ideal shopping experience to me. The only real benefit I see NFC having is that it can eliminate the wallet from my pocket.


Some of the same points Aza Raskin made a few years ago in his "Don't Make Me Click" presentation[1]:

"The best interface for a shovel is a hole where you need it"

[1] http://www.youtube.com/watch?v=EuELwq2ThJE


I thought this was great. Learning from the users as much as possible in order for the interface to get out of the user's way is a solid idea that can be incorporated into nearly everything... you just have to not implement it in a Clippy "Did you mean..." kind of way.


Non-interactive is far better than interactive. Faster, more efficient, more secure, less error-prone, less repeated effort. It's less work!

But there is an army of UI designers fighting against common sense. I'm sure we'll hear from some of them in this thread.

djb nailed this problem on the head when he wrote about the UNIX interfaces. Quoting rules, special characters... it's a minefield even if you are a "UNIX command line guru". There's a high cognitive price to pay if you are trying to avoid all mistakes using this interface.

Solution: Remove the user interfaces. Programs interface with each other, not the user.

Non-interactive = less work. You start the system. It runs. There is no interaction. No ongoing cognitive price to pay other than monitoring.

And this is only the command line. Dare we look at the price imposed by GUI's?

Imagine a slide show where you had to click each and every time you want to see a new slide. Nice CSS! Wow, that Javascript is amazing! The page is so beautiful! Click, click, click. (Developers rejoice: We can track the clicks!) Now imagine you are the user and the slide show is 10,000 slides. Forget it.

Mechanize? Perl, Python, Ruby? JQuery? Give me a break. Why should people even have to waste their time writing such things?

Hey no problem! The kind developers decide to add an option to run the show on auto-pilot. Hurray. No more interaction is needed.

Think again.

Now imagine you have 10,000 different slideshows to view, and each one has a different way to start the auto-pilot mode, based on the developer's own idea of "user experience".

You are right back where you started. Find the auto-pilot button. 10,000 times. Interaction.

A "slide show" is just a random example. You can apply this almost any sort of information intake where "interfaces" like GUI's are involved.

Go to a library and watch people trying to use various computer databases. In almost all cases, you will see them spending noticeable effort just to find things to click, and reading onscreen instructions. Every database is different. Every interface is unique. End-users: make 'em work.

The entire web is like this. Every web developer wants users to interact. Why? It's too much damn work. For users.

Will it ever change? Doubtful.

There is an entire industry built around forcing users to interact regardless of whether it is truly necessary.

For every person working to build an automated system there are two more building a system that forces user interaction.

Sometimes nerds, e.g. those familiar with Lisp or Scheme, say "everything is a list". Can mere mortals who know nothing of "programming" make lists? Is there any literate person on the planet who hasn't made a list?

"List processing".

Too _boring_. (It certainly isn't too _difficult_. Even the grandmother who can't use a computer can still make lists just fine.)

I know, let's build an "interface"! For humans!

Good grief.


Apparently the best page is blank without JavaScript enabled, too.


This isn't too far from the truth! I make extensive use of dotjs[1] to inject my own CSS rules into web sites that I use frequently. ~65% of what I do is simply hiding useless divs/ads/flash and removing background colors, with the other 35% being mostly typographic tweaks (`line-height` and `font-size` being popular candidates here).

[1]: http://defunkt.io/dotjs/


Interesting, it loaded just fine in the text-only "links" browser for me.


I found part of the problem:

#content { padding-top: 53px; z-index: 0; visibility: hidden; }

They're obviously flipping attributes around in scripting.

So, keeping with the theme of "no interface", disabling style sheets for this site will let the content display. Brilliant!


That's an overly-aggressive approach to avoiding FOUC (flash of unstyled content).


Was just going to say the same thing. Ironic irony is ironic.


Very good article, but I was initially turned off by the anachronism of the commands it starts with. Referring to NTFS in a scenario purporting to be before 1984 throws me off and makes me assume the article is poorly researched.



