I'm a big fan of minimal interfaces. But I guess minimal interfaces are good for the web user who is looking at your web app for two seconds before deciding whether to keep looking or move on.
This actually looks like a pretty effective interface for users who are willing to be trained for months. Think about it: after you are trained, every piece of information is just one glance away and every action is just one button click away.
I'd love to hear an opinion from an expert, though. *sinks back into his armchair*
At my institute we had two electron microscopes. One was 20 years old and every setting had its own button or knob. It looked like a spaceship. The other one was very new and only had a joystick plus some terrible software.
People _loved_ the old one because having lots of buttons you can tune almost in parallel was much more efficient than the modal click-here, use-joystick workflow.
The newer EM generated much better images though, so the choice wasn't just between UIs.
Usability considerations play a large role when designing complex interfaces. Complexity pretty much has to be reduced, and errors have to be displayed in a way that makes it possible to actually deal with them, especially in emergencies. Everything lighting up at once may be fine in situations where there's time to investigate, but certainly not when time is critical. Yet wrong decisions become more likely when all the facts are not known in full detail, so there is also a balance to strike.
There is an interesting talk which explores UI problems in airplanes (not quite Space Shuttles but also with a lot of buttons and, as far as the UI is concerned, with a lot of similar elements): http://www.youtube.com/watch?v=YlEcCfEakro
Reminds me of the often-told story about Apollo 12 and how a little-known switch resolved an electrical issue that threatened the entire mission, though in that case it was a person on the ground who instructed an astronaut what to do. The following four-minute clip is from the documentary "Failure Is Not An Option."
As you can see, there are a lot of big screens and very few actual buttons and switches in that photo. It looks like the four keyboards make up the bulk of the buttons. I would be curious to know how modal those screens are and how much information they can display at once. It doesn't look like they can display much, and I can't imagine that it's fun to go hunting for necessary information in an emergency. Depending on how that works it might be a usability nightmare; one can only hope that Airbus does plenty of UI testing.
(In that context an interesting question is what this interface is optimized for. It may work great on normal flights – that’s nearly all of them, by the way – but break down horribly in emergencies. I guess the UI design rule of designing for the common and not the rare case doesn’t really apply here.)
Actually I was reading an article (I can't find it now) about that Qantas A380 engine explosion, and it said that this was a genuine problem in that situation. Once the engine blew up, there were dozens and dozens of different errors popping up, and the first thing the pilots had to do was to page through all the error codes and prioritize them to figure out what the real problems were.
In many ways, it might be better to have a single giant control panel that lights up a different light for each error that can occur.
This is actually an artifact of the way planes are certified today (DO-178B). The verification process has to be done by hand; you can't verify automatically generated computer code.
The result is that you can't build, say, a Prolog program which produces a huge switch statement to prioritize errors. As a result, there is effectively no prioritization of errors and warnings at all. They're all treated as the same severity.
The new standard, DO-178C allows for computer models to be used in verification. So we may start seeing automated prioritization and management in the generation of planes after the A380/787.
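To make the idea concrete, here is a minimal sketch of the kind of auto-generated error prioritization being described. The error codes and severity table are invented for illustration; real avionics alert logic is far more involved and nothing here is taken from an actual system.

```python
# Hypothetical severity table that a code generator might emit from a
# formal model: lower number = more urgent. These codes are made up.
SEVERITY = {
    "ENG2_FIRE": 0,          # immediate crew action required
    "HYD_G_LO_PR": 1,        # serious: degraded hydraulics
    "FUEL_IMBALANCE": 2,     # correct when time permits
    "CABIN_LIGHT_FAULT": 3,  # cosmetic, can wait
}

def prioritize(active_errors):
    """Return the active errors sorted most-severe first,
    with unknown codes pushed to the end."""
    return sorted(active_errors, key=lambda e: SEVERITY.get(e, 99))

if __name__ == "__main__":
    alarms = ["CABIN_LIGHT_FAULT", "FUEL_IMBALANCE", "ENG2_FIRE"]
    print(prioritize(alarms))
```

The point of the comment above is that without certifiable generated code, pilots get the unsorted list and must do this triage by hand, as in the Qantas A380 incident.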
Nuke plants have a great many buttons, dials, gauges and levers, too, according to one of Don Norman's books.
ADDED. Norman said that one of the advantages is that when one operator lowers the aperture on valve 68 (or whatever), the other operators can easily tell that that is what he is doing.
It's actually a similar situation (obviously on a smaller scale) for PC users sometimes. For example, gaming hardware manufacturers have cottoned onto this fact and made keyboards with a bunch of extra keys for in-game binds/macros, because gamers would rather learn which keys do what than have any delay in their actions.
I guess this is similar to what non-programmers see when looking at source code: An impenetrable mess of gobbledygook where it would require months or years of dedicated effort to figure out what the different commands actually do.
That was a cool shot of the shuttle I've never seen before.
I wonder how many of those controls are redundant systems you only need to touch if they light up?
Actually that is not the assumption at all. The assumption in both the Space Shuttle and most other (fixed-wing) aircraft is that the pilot is right-handed. However, what you don't expect is that most of the really fiddly work happens with the right hand: changing settings, pressing buttons, tuning radios, and writing down notes. The left hand has to be taught to fly, but that's actually less demanding than teaching the left hand to do that other stuff. This is even true of aircraft which have yokes rather than side-sticks, as the throttles are controlled by the right hand, and during take-off and landing the right hand is stationed on the throttles.
Interestingly, for a helicopter this pattern is reversed: the control inputs required to fly a helicopter are so much more demanding that the pilot sits on the right-hand side, so that he uses his right hand for the primary control and has to either make do with his left or temporarily switch hands to do the other tasks described above.