Flat Design vs. Traditional Design: Comparative Experimental Study (2015) (researchgate.net)
207 points by karmakaze on Jan 1, 2019 | 137 comments



As a personal anecdote, I've spent quite a bit of time watching toddlers interact with the world. In my experience, if it looks like a button, a toddler can tell it's a button and will be interested in pushing it. (And might even say "button"!) If it looks like a switch, a toddler will be interested in switching it. If it looks like a button on a touch screen, a toddler might still think it's a button. If it's a flat blob that looks like every other flat blob designed by that designer (but looks nothing like a flat blob by other designers!), toddlers don't recognize it as a button. Shocker, I know.

Obviously, the majority of computer users aren't toddlers, but I think there's a lesson here. I really miss the old Windows 3.1 / 95 / 98 days when controls were more or less standardized and you could reliably tell which were buttons, which were radio buttons, which were okay buttons, etc.


> Obviously, the majority of computer users aren't toddlers, but I think there's a lesson here. I really miss the old Windows 3.1 / 95 / 98 days when controls were more or less standardized and you could reliably tell which were buttons, which were radio buttons, which were okay buttons, etc.

Agree.

Also: tooltip help when you hover over an icon.

Also: menus (for discoverability)

Also: manuals/help files (yep, most software shouldn't need one, but now we are in a worst-of-all-worlds state: new software is not self-explanatory, doesn't use familiar designs, and the manual is gone :-/)


Sounds like reinventing the wheel and ending up with a square one was not a great idea. Not a good sign for the tech industry when it starts going around in circles instead of advancing.


IMO human device interfaces have been going in circles ever since Doug Engelbart's demo in 1968.


Larger and larger circles, to some degree (a large percentage of the earth's population now has access to advanced technology).


Even better regarding discoverability: the macOS menu search, available under "Help".


When my youngest daughter was still a toddler, she would try to swipe on our 32" bedroom TV that was more or less at her height. What I'm trying to say is that your mileage might vary. I think it all depends on their experiences.

If something looks like a button and they try to push it, it is because in real life buttons have a certain look and feel.

But I'm pretty certain that if you teach them at an early age to discern what does what in a flat UI, they'll have no problem switching from one UI to the other (I actually learned _from her_ that in the Android YT app you can swipe those pesky lower right corner videos to make them go away). And we're still to try flat in real life ;)


> And we're still to try flat in real life

Plenty of appliances have electronic buttons, which are close to flat (though definitely not flat) and commonly paired with a flat visual style on top of them. They work pretty well, in that context.

And now plenty of appliances have capacitive controls, which are in my experience normally a disaster and literally never good (I exclude situations where a screen is involved; those are hit and miss). To take an example, microwaves: almost without fail, I find that the newer it is, the harder it is to use—though the old dials were only easier to use in certain ways, lacking precision. I now have one with a capacitive touch surface, and it’s very hard to use unless the room is brightly lit, at which point it’s merely quite painful unless I want to turn it on for exactly 30 seconds.

Decide for yourself whether these things are “real life”.


My brand new (just arrived today!) stove has these capacitive buttons, and in terms of visual style it does indeed function perfectly; it's entirely clear which parts of the interface are buttons, even though most of them have only unadorned text.

But I think that analogy falls short in one major way, which is that these buttons are logically grouped and evenly spaced across a surface that consists of _nothing but_ buttons. If there's silkscreened text there, you can press it and do something. The visual display of information happens in an LED display set aside in a well-defined area, so things to look at and things to press are 100% clearly delineated. In a desktop/mobile/web app, where content and interactive elements are interspersed, you lose that obvious context clue. That inevitably adds cognitive load, even if the interface is still technically usable.


An additional problem for capacitive buttons in the kitchen is that my fingers are often moist or even wet, and this is a disaster when it comes time to punch capacitive buttons.

Not only do they not work, not only do they sometimes trigger adjacent controls, but to get function back I now have to dry both my hands and the control surface.

I will never purchase a kitchen device with capacitive controls, but I run into them enough when traveling to experience this frequently. It's a mess.


This is bad, but then again they will be easier to clean.


The old paradigm of sealing a rubber dome button under an unbroken sheet of plastic worked just fine; my folks had a microwave for 20 years that worked that way with no switch failures.


With you on capacitive controls (other than touchscreens). Very often terrible, useless garbage from a UX perspective. From other perspectives they're great: they can be cleaned perfectly and easily, and have no potential for mechanical failure. But none of that matters if they just aren't usable X-)


The problem with e.g. microwaves is that with digital controls it's easy to give the user too many complicated options. All I want is to set the timer and the power, not some extra mode to specifically thaw fish...


I agree wholeheartedly but I will say my new LG microwave is pretty good. It lets you close the door, twist a knob and press start. All the complexity of weird preset modes is condensed down to a few essentials. You can dial up a frozen bread defrost setting, but it's done in a way that keeps the interface simple for everyday use.

The buttons are capacitive but they're very responsive and it's easy to wipe down.

https://www.lg.com/au/microwave-ovens/lg-MS4266OBS


Such knobs are fairly lousy in this way: they lack a start and end point. This is a standard failing of modern things: it’s easier to implement generically useful functionality this way, but it’s objectively inferior in most concrete cases. (Another example: ebooks with unanchored scrolling instead of physical books with pagination where you can feel how far you are through the book. You have no reference point.)

Old microwaves had a clear, directional knob with a start point, an end point, and markings for the values they represented. Barring the question of precision (which could be more or less solved with a more strikingly non-linear dial), that’s positively superb. My ideal microwave has only the following controls: such a dial (you could do some pretty fancy active magnetic design on it to improve it and make it precise, over the old knobs), probably another dial or slider (I’m open to trying it out) for power level, and that’s it. I’d be willing to discuss a +30s start button which moved the knob into the right place for you, and some sort of digital timer for potentially easier reading from a distance. No start button or stop button or open button, because the door opening and closing is better than the old microwaves.

Now it is possible for an unanchored knob to be good; I have in mind a tuning knob from a fairly expensive radio from fifty or more years ago, where they had clearly put a lot of effort into its friction profile. But I’ve never seen such a thing in the digital age, only ever in the analogue world. (Hmm… I wonder whether sound boards count. They may be an exception.) In the digital age, knobs are normally just glorified less/more buttons with very coarse quantisation.

My microwave is similar to yours, but a cheaper model, with most notably a capacitive touch strip instead of the knob; it’s awful.


Half the time, my microwaving action consists of closing the door and pressing [start/+30sec] however many times is required.

Sometimes I twist the knob until the display shows the desired time—in this respect it works like an iPod.

And when I want to defrost bread, I know it's Defrost program 4. Or if I want something else, the list of programs is helpfully silk-screened behind the door.

Maybe my expectations are low, having owned a Panasonic for the previous 9 years. But I'm genuinely happy with it. It works for me.

(I did see the capacitive touch strip model and actively avoided it. I knew I'd hate it without even trying it.)


Depth-perception cues like contours and borders are instantly recognizable by our visual system, even in toddlers. Learning to read letters quickly comes much later in life, so I would say it's probably much more difficult to teach little children.


In modern flat UIs, it seems fashionable to have clickable things that are not visually distinguishable at all.

As a silly example, I just opened the Google Maps app. Basically everything is clickable or swipable, but there is no visual consistency whatsoever. I suppose “everything is a control” is a viable model, but it breaks down if you want to show content, too.


> I really miss the old Windows 3.1 / 95 / 98 days

You could say the same for early iOS apps that just used the default Apple widgets and style. I think ~every app I use now has its own style.

Funny enough, this is less true on computers. Except games.


And "gamer tools". Boy, are those resoundingly, universally terrible. Have you ever touched an overclocking GUI or whatever management tool for "gamer features" your motherboard/other hardware has?

It's often ludicrously bad.


Do they also have MIDI music playing?


A similar 'experiment' with similar results is landing page optimization. You want to be very clear what you're trying to communicate and what you want the visitor to do with as few distractions as possible.

It turns out the same types of optimizations that you make on a landing page are good UI/UX features for past-middle-aged people to use. Computers don't have to be hard, we just put everything on the screen and have the user sort through what each thing does and whether it's in or out of context for the current action.


100% agree with this and the parent.

Especially with landing pages I aim for the 'mum' test. She's in her 60s and is petrified of computers. If someone like her understands what has to be done / the messaging is clear / the flow of the page directs her to do what I want, it'll probably be a success.

Anecdotally I remember being on a call with my dad (in his 60s) trying to talk him through downloading and installing some antivirus software from download.com (or whatever it was a few years back). Frustratingly it kept going wrong - "I'm pressing the download button, but it's just not working!" Except he wasn't. Eventually we figured out he was clicking a banner ad because it "looked like a button," far more so than the flat, actual download button. On a site like that, I often wonder whether it's entirely by design. Happy advertisers get more clicks (albeit probably with shitty conversion), happy site owners get to charge more knowing they provide more clicks. Everyone wins. Except my 65-year-old dad.


> Eventually we figured out he was clicking a banner ad because it "looked like a button," far more so than the flat, actual download button. On a site like that, I often wonder whether it's entirely by design.

Most certainly yes. In fact, recognition of "false download buttons", and almost instinctively and subconsciously ignoring them in the same manner as banner ads, comes with experience such that there have been times when I've been confused by pages where the real download button looks exactly like the bloated fake ones that I've become accustomed to mentally filtering out. I find that when I'm looking to download something, I tend to go towards direct links that actually show a destination URL when you hover over them, vs. buttons that don't.


More often than not, the landing page represents the internal (dis)organization of business units vying for the most valuable visual real estate (e.g. the sales team wants contact info, the marketing team wants clickable advertisements, etc.), and designers' hands are forced to oblige by senior management. These distractions are constant battles that take tons of time iterating on placement and user testing to get right from a UX perspective, and they are often neglected. Usability testing gets most of the low-hanging fruit, given a sufficient sample size.


I suspect there's actually quite a market to develop such UI for. Given a service that standardizes UI over a wide array of applications (web apps, mobile apps, desktop apps), I bet there's a large enough market for it.

Implementation would be interesting and most likely quite hard to achieve, though.


Not all software came out of the Win32.dll (or earlier, or later) worlds.

Having worked on a number of platforms (MVS, CMS, VMS, Unix, plus the usual desktop kit), I'd come to recognise signatures of those platforms. Under Unix, recognising the original X11 Xt, Athena, Motif, Tcl/Tk, gtk, Qt, Java, and Gnome toolkits is largely second nature.

No, not all applications look and behave the same. (Xt scrollbars FTFW!!!)

But: if you do recognise the toolkits, you've got a pretty good idea of the provenance of the application, and how it's going to behave and what quirks it will have. At least up to a point.

I do still prefer button-resembling-buttons.


It's fun to see them trying to grab the icons, they probably look tasty.


Windows 7 is still nonflat.


Nielsen Norman Group has also run a study, with results similarly unfavorable for flat design: https://www.nngroup.com/articles/flat-ui-less-attention-caus... (Possibly it's not the only study of theirs on the topic.)

Notably, NNG uses a more 'real world' methodology, testing realistic user tasks in browsing websites. For the unfamiliar, 'Nielsen' is Jakob Nielsen, an authority on usability who has been working and writing on it for two decades now, and he's a big advocate of usability testing in the design process. (BTW, his “Designing Web Usability: The Practice of Simplicity” is an excellent writeup on how websites should function—I'd bet it's as valuable today as it was for me in the mid-2000s, because the functioning of humans doesn't change.)

They also noted that flat design has taken some steps back toward the immediate recognizability of the old pseudo-3D interfaces, notably with 'Material design': https://www.nngroup.com/articles/flat-design/ But not many steps.


It's remarkable how UI design followed architecture (as in: buildings) down the dark alley - both went on a crusade to expose the inner substance of things by cleansing it of "redundant ornamentation", both ended up steamrolling over the ergonomics, the human comfort, in the fight for the inhuman ideal.

This is what I see when I look at the web (+Electron) today: https://www.google.com/search?q=modern+architecture&tbm=isch https://www.google.com/search?q=brutalist+architecture&tbm=i... This is what I would prefer to see: https://www.google.com/search?q=victorian+architecture&tbm=i...


It doesn't even seem specific to UI design but design in general! We've been visiting a newly built indoor pool lately. The male/female shower/toilet signs are so stylized that around 50% of people arriving for the first time can't tell which is which! Also, the changing rooms are unisex single cabins that open on both sides to let you enter the bath area - but with no signage on them at all. So if you enter from the clothed side and all cabins are taken, you stand in front of a white wall of doors with no idea what you should do next.

Designers! Fcking stop and think what you're doing if you change designs that have worked for many decades!


It's kind of remarkable, considering UI design did that decades after it became obvious that modernism is a disaster.


Can you explain why modernism is an "obvious disaster"?


The "machine for living" philosophy is intended to be ergonomic, but is extremely rigid and therefore as soon as there are unanticipated needs it ends up anti-ergonomic.

There is also a great tendency for modern architecture to be selected purely based on looks, which produces all kinds of strange anti-ergonomic disasters. Libraries that echo. Office blocks that focus the sun onto random burning spots on the street.

Then there's the effect of anti-ornamentation on the human psyche. I seem to remember that one of the modernist car parks in Britain was demolished partly because too many people were committing suicide from it.



> redundant ornamentation

Except in UIs it was not redundant, because visible contours were a deliberately added functional component to take advantage of our (very fast) depth-perception capabilities. The equivalent in architecture would be to remove handles from doors.


You might be surprised.

> Eisenman took his duty to create “disharmony” seriously: one Eisenman-designed house so departed from the normal concept of a house that its owners actually wrote an entire book about the difficulties they experienced trying to live in it. For example, Eisenman split the master bedroom in two so the couple could not sleep together, installed a precarious staircase without a handrail, and initially refused to include bathrooms. In his violent opposition to the very idea that a real human being might actually attempt to live (and crap, and have sex) in one of his houses, Eisenman recalls the self-important German architect from Evelyn Waugh’s novel Decline and Fall, who becomes exasperated by the need to include a staircase between floors: “Why can’t the creatures stay in one place? The problem of architecture is the problem of all art: the elimination of the human element from the consideration of form. The only perfect building must be the factory, because that is built to house machines, not men.”

From: https://www.currentaffairs.org/2017/10/why-you-hate-contempo...


This is a common architecture complaint, but remember that the Victorian architecture you showed was the 1% of buildings; the majority of people during the Victorian era lived in cramped, smelly, and ugly buildings.


There has to be a better approach than just settling on one of the two extremes: which architectural styles have a mix of moderate ornamentation but with straightforward, utility-driven design?


Old brick industrial buildings, the sort that get turned into exposed-beam loft style workplaces or condos. The front exteriors often have a bit of ornamental stonework. Nothing baroque, but just enough to make the exterior interesting and attractive.

When designed the architects probably weren't paid to mess about putting in useless impractical things, because these were workplaces.

For example, the buildings on Chicago's Printers Row (From 525 S. Dearborn south)


McMansions.


Oh my.


Pretty sure current architecture (≥2000s) is indebted to modernism for its entire look, but it also incorporates ideas of Scandinavian/German/Swiss industrial design (which itself is derived from modernism)—thus mixing utilitarian cleanness with ‘humanist’ shapes. And ‘after-modernist’ design in general builds on modernist ideas for materials but dispenses with pure geometry, returning to more pleasant contours instead.

There's no need to go overboard with ornamentation when you just want to get away from concrete. And notably, modern web and app design turned into a continuation of current publishing design—which played with geometric Swiss layouts for a while back then but left them behind, just borrowing good ideas.


Oh, and by the way, I think lots of 'futuristic' buildings in visual forms of fiction in fact look like brutalism and, I'm even afraid to say, constructivism. All those wacky towers and endless rooftops in cyberpunk films. Video games in particular, and especially Quake, are strongly reminiscent of brutalism―probably because individually designed brutalist buildings look interesting from every angle and at the same time are pretty simple (not much to model): https://www.format.com/magazine/galleries/design/lego-sculpt...

Guess some people actually like those forms, if you dress them up properly.


As a Swiss I can tell you - Swiss designers have lost their way IMO. The world should look much more to Japanese design as an example of combining practicality with eye-pleasing shapes and textures.


One possible explanation for these results, which would be less interesting than their suggested conclusion: bigger items with smaller margins are better than smaller icons with larger margins.

The difference in the icon results was particularly large and convincing, and if you look at their example icon stimuli, clearly the flat icons are simpler (as expected) and presented with more margin (perhaps also as expected in flat design), but that also means that to get the same number of icons on screen, the icons were smaller.

I think it's at least worth considering whether that might explain a large part of the difference.

Another limitation is the tiny and non-diverse sample size, and the fact that it's a few years old. They had just 20 students, almost all male, Russian university-student-aged - it's just not a huge or diverse sample; and the experiment was run in 2014; people have a few more years of flat design expectations under their belts now.

Looks pretty decent, those results, but I still think it's a little early to draw solid conclusions based on this alone.


To back up your more skeptical points, and to hint at the linked study being a symptom of the file drawer problem, here's a study that cites the linked one and which finds no statistically significant difference using a more diverse set of subjects (although still only 21 of them): https://www.researchgate.net/publication/326069864_Affordanc... It does commit the fingernails-down-blackboard sin of talking about "trends" for non-significant results, but otherwise seems relatively sound.

Here's another that effectively finds no significant differences, while subjects perceived flat design as more usable (although full text isn't available so I'm not sure about the methods): https://www.researchgate.net/publication/325563195_A_Compara...

Caveat - this is just from a quick search through studies that cited the linked article. There may be more robust studies around.


When I worked in design, I was always a fan of clean, modern, Swiss inspired design. But, that was mostly print (magazine ads, direct mailers, etc). When I moved into GUI design (early 00's), I went overboard on beveled, 3D, overly ornate designs. As did many people.

"Flat" design was a natural response to the endless bevels. Which, was also taken to an extreme. iOS 7 was, imo, the worst representation of this. The icons, the palette, and especially the entirely-too-thin font weights were all awful.

I think a flat base design, with dimension added in appropriate areas, is a happy medium for GUIs. I'm still on macOS 10.12.6, which I think represents this pretty well. As do the Material Design widgets. Even if I think they're a little boring stock.


I'm a fan of clean design, which is most definitely not flat.

The trouble is UI and UX stopped being anything about productivity and became simply empty fashion.

iOS 6 was an excess of 3D shine and starburst effects, and skeuomorphism excess like news looking like a set of library or newsagent shelves. Mitigate some of that and the UI would have been lovely. And finished.

Since iOS 7 the excess flatness detracts from usability and discoverability. I still feel the camera icon looks annoyingly wrong and bland, as do Settings, Photos, Calendar, Passbook, Compass, Safari. Games and Passbook are determinedly unrepresentative of anything at all, just random blobs of colour conveying nothing.

Still, on the plus side, macOS and iOS have not followed through with the excess of bland extreme flatness that is Windows 10. That still seems to want to push the propaganda of a phone GUI on the desktop that started in 8. Several years of regular use and I still hate it at every touch. Even adding every tweak I can find to make it more like 7, it's still a significant step back from 7, and two-headed in typical Windows fashion, in that settings are partly in apps and partly in the old Control Panel. Usually Control Panel is required for basics that, for some reason, are no longer felt necessary to expose for configuration. Never a complete transition from one to the other.

After all this progress? I find the app-based settings worse in every respect, and they perfectly typify the ethos: fixing a ton of things that were not, in any respect, broken. I guess they knew that or they'd have continued to permit theming. :)


The News app didn't exist until iOS 9, and was never skeuomorphic. Maybe you're thinking of iBooks, which had the bookshelf.


Used to be called Newsstand, and had a picture of some empty shelves as the icon. It was replaced by, or renamed to, News.


Replaced by. Completely different app. Newsstand was really a wrapper around the App Store, in that every "magazine" you could buy or subscribe to was in fact an actual iOS app, visible only within Newsstand, and each provided its own way of reading a digital copy (usually just a PDF version of the print edition) of the magazine. The News app has none of that.


I remember having Newsstand in iOS 6.


Here are some examples of how flat design has made things worse in MacOS and iOS: http://uxcritique.tumblr.com


I really like that site and the concise descriptions of the problems. It's also a little maddening to read for too long, as one starts to ask themselves "why did they do that" over and over again.


Why? Because design.


I feel like mystery-meat navigation is just the new hotness, like it was before when people played with UI paradigms. The issue I see is that, as the examples in that blog point out, there is nothing consistent to learn and apply later, which is why we came back to more traditional UIs for a while.


Interesting to look over these and see how at least many of the iOS issues have been addressed in later releases.


> Our results suggest replacing the flat style user interfaces with interfaces based on the design principles developed over decades of research and practice of HCI and usability engineering.

Apple should just revert HEAD to the iOS 6/OS X 10.6 branches and back-port any bug fixes. Almost all of the last near-decade of work in UIs has been a mistake.


> Almost all of the last near-decade of work in UIs has been a mistake.

You keep saying this, and it's just not true. The extreme skeuomorphism in the older iOS versions was just as maligned as flat design is today. I wouldn't mind adding some more shading to buttons to make them more easily identifiable, but let's leave the cheesy leather textures in the Calendar app dead and buried, please.

I'm not a fan of the Windows 95-era design either. As an example that comes to mind, traditional menus have atrocious usability. Ribbon-like interfaces that feature actual buttons you can click or tap on, with more important buttons larger and/or highlighted, have been a huge improvement over trying to wade through a sea of identical-looking pull-downs full of text. It's important to remember that the move away from pull-down menus was informed by a good deal of end-user research, a lot more than this article.

Regarding iOS, I'd like to see this study try modern UI fonts like San Francisco. There's a reason why Apple, Google, and Microsoft invested in new fonts, and that's because faces such as Helvetica (which, by the way, was used in iOS 6) that weren't designed for screens are harder to read.


> Ribbon-like interfaces that feature actual buttons you can click or tap on, with more important buttons larger and/or highlighted, have been a huge improvement over trying to wade through a sea of identical-looking pull-downs full of text.

I completely disagree. Being a list with each item on a line, regularly spaced, a menu is easy to read. Humans have been reading lines of text for a long time outside of computers, so they're quite skilled at it. The buttons on a ribbon change appearance and position depending on the width of the window, and the barrage of icons makes for difficult skimming. It's essentially visual overload.

("A picture is worth a thousand words. However, sometimes you don't want a thousand words when one or two is enough.")

> It's important to remember that the move away from pull-down menus was informed by a good deal of end-user research, a lot more than this article.

The problem is that those who participate in such research are unlikely to represent the majority of the userbase. It's unfortunate, but those who have seen calls for such studies and then thought to themselves, "I have better things to do with my time" will understand this completely.


I have my beef with ribbons, but ribbons came out in Office 2007. I’m talking about subsequent developments, like Edge throwing away both ribbons and hierarchical menus and moving to an ellipsis menu. There were some pre-2009 mistakes for sure. Arguably it started going downhill with Jaguar and metal windows. But now the wheels have come completely off. You can’t tell what’s what anymore—is it a button or a label or just text? Tap on it and find out! Discoverable buttons and menus have been replaced with undiscoverable gestures and swipes. Not even consistent gestures: how do you move the insertion point in iOS? It depends on what kind of phone you have! The apocalypse happened, and the righteous were saved. We were left behind to suffer through flat design, material design, the new Gmail, Edge, and web apps.


Ribbon has to be the absolute worst interface I've ever dealt with. Things seem to be organized completely randomly, things I never use are emphasized, things I always use are de-emphasized, things are hidden behind various panels and determining which involves a tedious search through every single panel, ultimately giving up and having to google where the hell the button I'm looking for is--and seeing thousands of results from people searching the same exact thing.

That's not even getting into the fact that it consumes a ridiculous, and I mean simply ridiculous amount of space.

I feel like every recent step "forward" in UI is just an attempt to make useful features harder to find, make useless buttons bigger while simultaneously making them harder to identify as buttons, and reducing the actual active input space to the bare minimum. MS Office and gmail started a race to minimize input space a couple years back and now it seems to be spreading everywhere. What used to be nice, full screens of input and being able to visually scan everything is now a tiny white box with absolutely useless garbage everywhere, from ribbons full of buttons I'll use maybe once a year to now auto-suggested text that never aligns with what I want to say and could potentially have disastrous results when I inevitably accidentally click it.


> and reducing the actual active input space to the bare minimum.

That's because the point of computers has shifted from allowing you to create to allowing other people to feed you content.

Also: The ad driven model always wrecks everything it touches.


How do you navigate the Ribbon using a keyboard? How do you learn about shortcuts now that they are displayed nowhere?

The Ribbon traded a bunch of text pull-downs for a bunch of icon pull-downs that you have to hover over to discover what they mean; it kept the "multiple levels of pull-downs" problem, but now the extra levels have no icon or text, with even more levels because each one has limited space, and it threw away the icon toolbars that had a much higher density of icons.


> How do you navigate the Ribbon using a keyboard?

Mostly the same way as you used to with pull-down menus. Press Alt to display tooltips with shortcuts over ribbon buttons. Then you do a bunch of Alt + [letter] to get the option you want. The underscored letter in traditional pull-down menus was, in my opinion, similarly non-obvious.


Menus show the hotkey by default. Needing to know a hotkey to figure out what hot keys are seems backwards.

And I don't mean the underscore; that was for fast access in the menu. Most menus would show the actual hotkey on the right side of the menu by default.

But even the underscore gives a default visual clue that you can do something with that letter, and you can then ask what. If you're new to computers, the concept of a hotkey may not be obvious enough to even ask about.


That works. It is also clearer than the pull-down menus, because the letters have a large contrast and are spread over more of the screen (which is a problem from another point of view, but it's a benefit here), so it's harder to get stuck in keyboard navigation mode without noticing it.

I can take that out from my list of Ribbon problems :)


Same thing as with a traditional dropdown menu. Press or hold the Alt key to display the available shortcuts.


>> Almost all of the last near-decade of work in UIs has been a mistake.

> You keep saying this, and it's just not true.

You saying it's not true doesn't make it so.

> The extreme skeuomorphism in the older iOS versions was just as maligned as flat design is today.

Citation needed. As I remember, most pundits simply had some beef with a few extreme cases of skeuomorphism in iOS 6. Nobody advocated that Apple throw the baby out with the bath water as they did in iOS 7.

> It's important to remember that the move away from pull-down menus was informed by a good deal of end-user research

Again, citation needed. Windows 95, now that's informed by a good deal of research. Everything Microsoft did starting with Windows XP? Not so much.


I'd like to see desktop apps adopt a Spotlight/Alfred/Sublime-type autocompletion UI. Hit some keyboard shortcut and you get a "command palette". By typing, you get command suggestions.

An app could easily display not just matches but also a help screen about each match (not possible with today's menus). Also, text matches could be in categories/tags of commands. Type "select" (or even just "sel") and it'd offer commands related to the category "selection", for example.

Such a UI could easily work as a replacement for a classical menu bar, in that it would be a superset of this functionality (a menu bar is essentially an "unfiltered" match UI, exploded).
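
For what it's worth, the matching logic is trivial to prototype. A minimal sketch in Python, assuming a hypothetical command registry (the names, categories, and help strings below are made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class Command:
        name: str       # e.g. "Select All"
        category: str   # e.g. "selection"
        help_text: str  # short help displayed next to the match

    COMMANDS = [
        Command("Select All",  "selection", "Select everything in the document."),
        Command("Select Line", "selection", "Select the current line."),
        Command("Save File",   "file",      "Write the current document to disk."),
    ]

    def match(query: str, commands=COMMANDS):
        """Return commands whose name or category contains the query (case-insensitive)."""
        q = query.lower()
        return [c for c in commands if q in c.name.lower() or q in c.category.lower()]

    # Typing "sel" surfaces both selection commands together with their help text:
    for cmd in match("sel"):
        print(f"{cmd.name} [{cmd.category}] - {cmd.help_text}")

A real palette would add fuzzy matching and ranking, but even plain substring matching over name and category gives the "sel" -> selection-commands behaviour described above.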


Sure, I like how the top-level menus are still there - Edit, View, etc. - but File has been replaced with some weird badge up in the corner. That's just dumb; there is no user-centric explanation whatsoever.


...and clicking it brings up a full-screen/application monstrosity that was really disorienting the first time I saw it, and still feels slightly unsettling:

https://www.howtogeek.com/wp-content/uploads/2016/05/ofs_5.p...

What's also irritating is the fact that an extra click or two is now needed to Save As or Open from that ridiculous "menu".


But pull down menus were influenced by pre-mouse interfaces.

Many pull down menus still allow for keyboard based navigation through arrow keys and keyboard letters.


"extreme skeuomorphism in the older iOS versions was just as maligned as flat design is today"

No, no it wasn't. Because skeuomorphism wasn't a usability issue the way flat design is.


Leather textures aren't a usability issue.


To be honest, I thought that quoted sentence weakened the authors' argument. "based on the design principles developed over decades of research and practice of HCI and usability engineering" comes across as sour grapes.


Elementary OS has done well in this regard, I think. They’ve avoided most trends, and have focused on refining their UI. I wish they’d get more funding and traction.


Linux in general has been doing just fine. The one really notable change I've seen lately is a transition towards a hybrid click/touch interface, with ubiquitous "big" targets for tapping and swiping, as part of the switch to GTK+3. And that seems necessary if we're to work towards having a Linux desktop on smaller, touch-capable devices, so I have no real issue with it. The widget style has changed a bit, but it still looks quite traditional, avoiding the excesses of both "skeuomorphic" and "flat" design.


And big targets are easier to click on with a mouse, too. And the space problem resulting from bigger targets also makes you think harder about what targets to present, and which ones to move to subscreens.
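
That intuition is roughly what Fitts's law formalizes: predicted pointing time grows with the ratio of target distance to target width. A minimal sketch (the a/b constants below are placeholders, not measured values):

    import math

    def fitts_time(distance_px: float, width_px: float,
                   a: float = 0.1, b: float = 0.1) -> float:
        """Shannon formulation of Fitts's law: MT = a + b * log2(D/W + 1).

        a and b are empirically fitted device/user constants; the values here
        are placeholders. The point is that predicted time rises as targets
        shrink or move farther away.
        """
        return a + b * math.log2(distance_px / width_px + 1)

    print(fitts_time(400, 20))   # small, distant target: higher predicted time
    print(fitts_time(400, 80))   # same distance, 4x wider target: lower predicted time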


I was one of the original beta Mac developers, back in '83, before the Mac was released. The idea then was to visually duplicate real-world interfaces, and I gotta say they are easier for everyone, across the board, to grasp. Still, to this day, there are many people who do not use computers on an hourly basis, and they certainly do not get familiar with flat interfaces. However, the much-maligned over-done interfaces are grasped immediately. Imagine that. It is because the software mimics a real-world item that most people are already familiar with using, without software. It is as if the entire reason for the UI is being missed by modern UI "experts" because duplicating real-world objects and their innate feedback is too boring, or something. Imagine it as a stepping stone to an augmented reality where you are mimicking real-world items, if you must. Real-world mimicry is intuitive, and that is hands down easier. I don't need a study.


Flat design is form over function, and this study appears to prove it. Usually in design there's a reaction against the tyranny of the mainstream. I never liked flat design, even just aesthetically, and I've been waiting for the backlash in this case, but when will it ever arrive?


if you ask me, it's form against function


Never? 1) Users didn't ask for flat design. 2) There's absolutely no evidence that flat design is better, and there is evidence that it is worse.

Yet I don't expect that UI will go back to 'classic UI'; many of those who create UIs care more about being perceived as modern (or, more accurately, not being perceived as outdated) than about efficiency, so flat design will probably be replaced by another 'fashion-driven design'.


Just like the current inexplicable vogue for illustrations of humanoid figures with bizarre body forms. Which designers seem to love and be very proud of using.


I did not spend that much effort on this.. I know the text-shadow on links is kinda bad, and I'm also not happy with the green for visited links. But other than that, this made HN much more readable for me, I figured you might enjoy it as well:

https://news.ycombinator.com/item?id=18752033


When it comes to computer based user interfaces, I think the problem might be that they start to "look dated" after a while, even if they are perfectly good.

The flat fad is just a new coat of paint to make things look fresh and new. When the flat fad has run its course, they'll put on another fresh coat of paint.

That's my guess, anyway.


I feel like I experience the downside of flat design daily while using the gmail and google calendar interfaces. Finding items and discerning the different areas seems to take longer than it used to. N=1, so your mileage may vary.


When I use Safari on macOS I find myself misclicking tabs more often than I'd like to admit. It's like I have to stop and actively process what I'm selecting, which has never happened to me before in a browser. The all-gray UI is such a downgrade for me. [0]

[0] http://uxcritique.tumblr.com/image/107150709190


What bugs me more is how Apple removed scroll bars. It's left me staring at a Finder window thinking I'm going crazy because ls said there's a file there.


In case you didn’t know: You can choose to always display scroll bars in the macOS system preferences.


Rant:

Everyone who has put at least a little bit of thought into UI design knew from the beginning that flat design is bad.

But no, it is 2019 and we still have scrollbars and buttons that are barely visible. Just because Google and Microsoft unreasonably decided to do it that way, and everyone else blindly copies them (because how could those big, big companies possibly be wrong?). Fuck accessibility, even though we serve billions of customers on a daily basis.


Simplification. I know what my scroll bar looks like and prefer it not to have multiple borders and shadows. It's easier on my eyes and mind.

(I don't love everything about "flat design", but not everything needs to be super skeuomorphic)


I have a rather cheap monitor (as many other people probably do) and I can barely make out the grip of the scrollbars when I don't sit straight in front of the display. What about people with bad eyesight or colour-blindness? I am sure they have even more trouble than I have.

Windows 10, by the way, does not allow changing the scrollbar colours, even though it was possible in most older Windows versions. They simply don't care about accessibility and usability these days.


Makes sense: a scrollbar is super common and isn't something you interact with directly that often because there are alternatives (swiping, or on a pc the scroll wheel, up/down, pg up/down). But most other controls don't have that many alternatives and/or don't nearly always appear in the same position so then some actual visual recognition is needed. Of which the results are described in this and other papers.


Agreed. There is some value in slight hints through gradients or borders.

Ideally we could all choose our preferred themes like we can with GTK.


When Apple did the flat redesign of iOS, it caused a lot of problems for my grandma who could already barely tell something is a button. I know I've struggled with toggles and check boxes that don't make it apparent they're active.


Minimalism is one of those things where simplicity is the goal, but not the measure of success. It seems many people (I'm looking at you, Windows 10) forget the design part of flat design. Making all the icons indistinguishable general forms is simpler, sure. This study makes the same mistake: it finds that removing indicators of function makes things harder to read, but they forgot to indicate function in other ways to compensate. It's as if you intended to prove goto isn't bad by removing it from a language without break, return, continue, or loops.


I often have a hard time finding an app I want to use on my iPhone. And I don't even have that many apps. Either I'm finding it hard to distinguish the icons from each other, or the icons aren't distinct enough for my brain to firmly associate an icon with its app.


We shouldn't have to rely on post-hoc science to tell us that. Designers should not be allowed to execute in counterproductive ways in the first place, and there was nothing non-obvious about this. We have rules about how road signs should look. How are we going to get back the time lost now?


> We have rules about how road signs should look.

Probably a bad example, because it's about signs on public, state-owned roads. Private roads don't have signage requirements; you can even have your kids drive your cars without a license on private roads.

However we do have building code and more importantly the ADA which sets standards for businesses (like, ya know, websites) to meet minimum accessibility standards.

The problem is that I don't see any congress (where the number of mathematicians in its ranks is within a margin of error of zero, and there are rarely enough scientists to count on one hand) being capable of writing actually good regulation in this or any other software-related regard.


I believe most big companies employ psychologists, or at least someone with knowledge of how psychophysics and human perceptual systems work. None of them would have approved "flat" as a sane choice.


A question to any design enthusiasts and experts here: what currently maintained web/CSS framework has not gone the flat-design way, is in line with HCI guidelines, and has a reasonably complete widget set?


I enjoy Palantir's Blueprint.

https://blueprintjs.com/docs/


I know it is not what you asked for, but I have deliberately been using Bootstrap 2.3 for that reason. It still works with minor fixes.


If I wasn't a grumpy old product manager, I would be tempted to get cynical and say that flat imposed a higher cognitive load, which keeps you on the page/UI longer, which gives more time for other things to get your attention and create mental lock-in through a mix of attraction and sunk (time) cost... i.e. I've spent so long figuring this out that I may as well stay and look around.

Another responder pointed out that material design is better. Maybe I'm ignorant, but I don't see how. Perhaps icon quality/diversity is better these days, but the basic premise of a 2D facsimile for real-world 3D objects is still there.

Personally I prefer that UI things can be easily and uniquely related to action. This can be achieved by education, i.e. sliders now look like the nice rounded-corner progress bars of yesteryear, but we have gotten used to them through their imposition on us by G and A.

That said I have used a few very old GUI programs and their look now is jarringly uncomfortable even though the tools are great and the UI is clear.

Progress?


> Another Responder pointed out that material design is better. ...I don't see how. ...the basic premise of 2D facsimile for real world 3D objects is still there.

Material design uses shadows to add a third dimension. The shadows create boundaries and can help indicate parts of the interface that can move independently.

I would definitely not claim this is much better, but it is at least a little bit better.


A related thing is the contrast ratio of text on its background. I try to test on at least a few computers/displays, and when choosing between two options that are subjectively almost equally good, I pick the one with higher contrast.
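
If you want a number rather than an eyeball judgement, the WCAG contrast-ratio formula is the usual yardstick. A minimal sketch, with colours given as 8-bit sRGB tuples:

    def _linear(channel: int) -> float:
        """Linearize one 8-bit sRGB channel per the WCAG 2.x definition."""
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb) -> float:
        r, g, b = (_linear(v) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg) -> float:
        """(L_lighter + 0.05) / (L_darker + 0.05); ranges from 1:1 up to 21:1."""
        hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
        return (hi + 0.05) / (lo + 0.05)

    # Black on white hits the maximum 21:1; light grey on white falls well
    # below the 4.5:1 that WCAG recommends for normal-size body text.
    print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
    print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 1))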


I expect that they would see even stronger results if their population were in a developing country. Flat design shifts the processing load for the user interface from the designer to the user, and it depends heavily on the user being used to computing interfaces so that they can guess correctly.


> Our results suggest replacing the flat style user interfaces with interfaces based on the design principles developed over decades of research and practice of HCI and usability engineering.

This feels like a straw man argument. Of course nicer looking design is usually worse for function. If nothing else, just sticking to ONE type of consistent design lets all users rely more on memory. They seem to assume that user interfaces are designed with the sole goal of being usable. Have they seen any modern furniture?

When they say "our results suggest that," they don't mean that the product (e.g. a web site) would be better in some sense such as making more money long term. They say it would be easier to interact with. I completely buy that. But that's the easy part of the equation to prove! The hard part is: does a site that's nicer looking according to a majority of users but 5% harder to interact with get enough extra business to offset the more difficult UX? Does the company valuation or other metrics such as ease of attracting talent improve? This is similar to the question "would a furniture maker sell more of this chair if it was 10% more comfortable but a bit uglier?".


Since when is relying on memory exclusively a good thing? It has downsides as well, particularly blind flight when using software. Sometimes a user should be required to think at least once before clicking, for example when sending money. In this case, UX should cause him to do exactly that.

Pavlovian responses to interface design play into malicious actors' hands, after all.


Good to read this work; more work like this is needed, and the design community needs to pay attention to it.

Watch a six-year-old try to use Google for the first time; results are swamped by a blizzard of ads and maps and diversions. This is design driven by greed and A/B testing at its worst - our tools are only just fit for purpose, and the costs that that imposes on us are hidden and absorbed personally.


The results are reported, but the inputs - the websites and screenshots used as test data - are conspicuously absent.


Flat design was a mistake. I correctly called this out years ago. It's a horrible choice Jony Ive was only able to poison the world with once Steve Jobs wasn't there to force good design guidance on him. Everyone followed suit and the pendulum has swung _hard_ towards the objective failure of flat design. It's slowly swinging back. Look now at how you reacted to it and analyze your choices. This is a good exercise in realizing an industry trend was a terrible idea and making sure you learn from it if you ever thought flat design was a positive move.


I thought the world had gotten beyond "Steve would be rolling in his grave". Apart from anything, Windows Phone and its flat design came out in 2010, three years prior to iOS 7. Windows 8 came out in 2012, one year before. Please don't pin an entire phenomenon on one man, especially the wrong one.


That's the saddest thing. Apple copying flat UI from Windows Phone, which is dead now.


I don't think Apple copied the flat UI from Windows Phone. It had already become a trend by that point, and OS X had started to move to lighter gradients and flatter buttons in 10.7, released in 2011. I'm sure the web had already started getting flat by then, too.


design is about solving problems given constraints. flat design is oftentimes (more-so in the past than right now) a reasonable solution for the web, due to constraints related to latency (sending graphics over the wire) and the computational cost of rendering complex graphics/interfaces. from that perspective, flat design makes total sense to me, even if I don't always care for it personally. browsers can't even agree on how to render vector graphics or what falls on CPU vs GPU. projects like pathfinder[1] are sorting some of that out, but we've got a long way to go. flat design is only a very small percentage about aesthetics.

IMO, flat design takes the "best for the most for the least" approach of charles and ray eames (and so many others who followed the same ethos but maybe stated it less eloquently), and perverts "the least" to mean least amount of effort/money for the computer (or, cynically, the designer as well), rather than for the user. it either skews the idea of a design constraint, or highlights problems with the web as a platform. whatever flat design accomplishes, it isn't perfect for all cases, but it isn't necessarily bad, either. we just have to remember that as technologies mature, it's always possible we're designing for constraints that no longer exist.

I say this a lot, but when talking about web design, in retrospect, flash was like future alien technology that got taken away from us. although I understand the attack surface that came with it and why it had to go.

1. https://github.com/pcwalton/pathfinder


It seems like you've made a reasonable explanation, but it doesn't follow the timeline of how things happened. Flat design came about only recently, after significant increases in hardware and network capabilities. Current web apps that use flat designs are just as heavy as any other.


> Flat design has been coming about recently after significant increases in hardware and network capabilities

sorry, I should have emphasized the web/browser part of my explanation more. the chip in an iphone could support these things, but could safari? we didn't even have a <video> tag back then. 3D transforms in CSS were brand new (i.e. unusable if you wanted to support legacy browsers), same for WOFF files, requestAnimationFrame, drop shadows, and so many other modern conveniences. 4G was also brand new -- and uncommon. the web was going through some serious growing pains, and that's just in the highly developed world -- this doesn't really touch on the global availability/use/implementation of the tech we're talking about. it's possible my timeline is off, but designers were still constrained by those doing the engineering during this time. maintaining web apps with hacks to support as many browsers as possible was a nightmare. and so compromises were made -- clearly. and I think flat design came out of that, for better and for worse.

> Current web apps that use flat designs are just as heavy as any other.

that's what I was getting at toward the end of my comment with regard to observing constraints that may no longer actually exist.


Yes, and I was suggesting that the motivation was not due to constraints, as the trend emerged long after these constraints had ceased to be relevant. I tend to agree with other comments here that it has come to serve as fashion (different for its own sake) rather than any UI/UX benefit.


ah, yes, I agree with you on where we're at now, it's not coming from any sort of necessity currently. my initial comment was only meant to say "there's a time and light in which this might have made some sense."


That might make some sense for the web, but doesn't explain it for iOS and macOS, etc.

Apple started pushing flat design just as we were getting high-resolution retina displays for phones. Since the UI art wasn't defined in a vector format, the minimalist icons for flat design were still being provided in big multi-resolution .pngs, so there was little or no savings on resources.

So those UI image files could just as easily contain detailed shaded and/or textured images that make the most of a high-res color display.


Nice to see this type of research coming out of МГУ (Lomonosov Moscow State University)!

Haven’t gone through the study, but seeing the examples I wonder if there is also selection bias - many older Russian sites still look fairly dated, and users may be more used to that look, finding it more familiar and easier to navigate.


I think the extra brightness and contrast of modern screens has something to do with the UI design "fashion". It's now possible to use contrast instead of 3D/depth, and it's even more effective than print, as the contrast on (new) screens is much higher.


I remember trying out an Android sleep app that had a "flat" design. It never worked. On the last day of the trial I realised why - part of the flat "design" was actually a button that I also needed to press. I was so annoyed I deleted the app.


Flat Design seems to take its cue from advertising (instead of editorial). Ads (especially in magazines) often look great - clean, elegant - and editorial designers can be envious. But some hoariness is not only functional, but helps offset substance from marketing.


I tried to read this, but the ads and spam for random crap is all over the site... how did this even make it to HN top page?


A typical example is iOS design (traditional) vs Android (flat). Android design is a mess. Learn from iOS instead.


the paper is unconvincing and doesn't control for bad design vs good design

The argument that interactive elements must be skeuomorphic doesn't work today since _every_ element is interactive


our perceptual system hasn't gotten any faster just because designers wished so


in real life everything is interactive - that's the environment we evolved in


“19 female and 1 male university students” is a laughably small sample group. How can any insight be drawn from that?


It all depends on the effect you are measuring. 20 can be enough if the effect measured is big (if you test an Ebola cure on 10 infected people and all 10 of them survive, you have, with very high probability, found something great). The real question is: "Is the effect measured big enough to be significant with a sample size of 20?"
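
To make that concrete, here is a minimal sketch of a one-sided binomial check, using the hypothetical Ebola example above and an assumed 30% baseline survival rate (both numbers are purely illustrative):

    from math import comb

    def binom_tail(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for seeing
        k or more 'successes' if the true success rate were only p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Huge effect, tiny sample: 10/10 survivors against an assumed 30% baseline
    # is already astronomically unlikely by chance (0.3**10 is about 6e-6).
    print(binom_tail(10, 10, 0.30))

    # Modest effect, similar sample: 12/20 against a 50% baseline is
    # completely unremarkable (about 0.25), so n=20 tells you nothing there.
    print(binom_tail(12, 20, 0.50))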


This is, all in all, a bad way of talking about things. Flat vs. traditional is in the title. I think this can't be taken seriously, and you all should know better. No time to read this in full, but they start by comparing Windows 8 and Windows 7 - so the problem is not the functional redesign, no, the problem is that it's flat? What is this?




