UI/UX and Subjectivity (gazit.me)
68 points by idan on Oct 2, 2012 | 30 comments



These two statements are at least somewhat at odds with one another:

* "UI/UX is purely a matter of taste in much the same way that cooking is just a matter of taste."

* "If you meet a fellow who claims that design is subjective, tell them that they’re right, if they choose to work with the little-league practitioners."

And the former seems like the stronger of the two statements. It seems true enough that the usability/experience fields, like cooking, have a body of theory and research that professionals can draw on to execute a "vision" consistently and effectively. Yet cooking is still significantly subjective. Professionals can and do disagree on whether a given vision is desirable. Individuals can be constituted so that what might seem familiar and desirable to one might be experienced as strange and unpleasant to another -- for example, the taste for cilantro seems to be more or less genetic.

You might argue "well, what's meant here is that UI/UX professionals are aware of this 'subjective' aspect of the job and in fact have approaches and tools for dealing with it" ... and again, that seems true enough in my experience, but that's a different statement from saying anyone who claims that design has significant subjective aspects is only working with little-leaguers.


You're right, I could have done a better job of communicating your last paragraph.

The point of this piece was to serve as a rebuttal to the handwavy notion that design is purely subjective. Of course design has subjective aspects—humans are subjective creatures, and we design for them. You're spot on with the statement that professionals have the approaches and tools for evaluating design choices.

The little-leaguers, in my experience, rely on subjective statements ("It looks / it feels") more often than the ones who have depth. As a designer, my responsibility to my client is to be able to explain why my choice is in service of their business goals. More often than not, it sounds like a chain of justifications up the hierarchy of needs. Sometimes it's a conscious choice to accept something worse because something else has higher priority. It takes some practice to articulate, but there's rarely a choice that isn't related to your other choices, which are in turn related to your constraints. If you can't tie a choice to that chain, then it might just be that it doesn't matter, but more often than not it's a red flag that your choice isn't grounded in conscious thought about what makes your thing better.


I don't think the intention of the author in that second statement was to say that design isn't subjective, just that the entire process isn't. I agree with what you say afterwards, and I don't think this was the best way for the author to make his point, but I believe the message was that design has subjective elements alongside a research-based, well-founded process.


Julian Boot recently did a presentation in Melbourne on this topic, "Design Eye for the Dev Guy":

http://julianboot.com/2012/08/design-eye-for-the-dev-guy-sli...

Sadly the slides are not much use without Julian standing there presenting them. For me there were two big takeaways:

1 - Design is based on science. In particular there is a lot to be learnt from the field of Pre-attentive visual processing [1].

2 - It's possible to learn this stuff and apply it to your day-to-day work to produce better UIs.

It's exciting to see this knowledge start to spread around the development world. Finally, it seems that developers are being given the cognitive tools they need to create good designs.

[1] - http://en.wikipedia.org/wiki/Pre-attentive_processing


Grr... I hope people attending his talk are better at designing sites than his personal website there suggests. The thing where he styles every link, most of which are unimportant footnote links, as a highly distinctive colored box with a brightly colored bottom border rather than an underline is distracting and annoying, and it shows he doesn't really have much sense of good design.


Is there a video somewhere?


No, unfortunately not.


I don't know ... Some UI/UX is subjective.

For instance, my willingness to put up with an initial learning curve vs. working with an "intuitive" interface. You may call these "design decisions" and say that a well-designed UI/UX could sit at any point on that spectrum, but they are still decisions made by the people in charge of UI/UX, and they impact the final UI/UX.

It is why I become frustrated using many GUI programs when all I really want is to write a damn awk script :)


A great post. Something I was ranting about recently is the all-too-common perception that UX/usability is a side job, something that any front-end dev worth his salt should be able to ensure and consider while implementing a UI.

For me, user experience design and interaction design should be what informs the implementation of a UI; they cannot simply be treated as a side effect of it.


I highly recommend a book on an adjacent topic, for those interested in questions about where subjectivity and objectivity blur in art and design. The ideas have evolved significantly, well beyond the usual -isms, into a deeper level drawing on cognitive psychology and analytic philosophy. Among other things, it is useful for practitioners to be aware of, given the power and plasticity of modern digital media tools.

The Objective Eye: Color, Form, and Reality in the Theory of Art

http://www.amazon.com/The-Objective-Eye-Reality-Theory/dp/02...

-edits for context-

[NB: Also disregard the amazon ratings]


A user interface is subjective -- even more subjective than other things you might design. The point of a chair is clear. It looks "sittable". How can I convince you that this pattern of lit-up phosphors looks "tappable" or "grabbable"? When you click on a file in the sidebar, do you want to open it or just select it? You want my program to do something, and I want to show you how to do it. We're trying to figure each other out -- you and I -- through this computer program.

All the tools we have to communicate are very subjective. We've both used other programs before. You know what a printer looks like and what buttons with printers on them do. I can make some physical allusion like a page curl (you can drag it to see more) or a nubby, grabbable texture (you can grab and pull whatever it's attached to). But without this shared language, there's literally nothing to go on. Objectively, my pretty interface is just a grid of lights.


Definitely. Pretty much everyone can give opinions on what's pretty and what's not if you ask them, or even if you don't ask them. This is why designers have to deal with a lot more input, politics, and decision-making deadlocks.

To put it simply, my mom will not be able to tell you whether your code is scalable, but she can definitely tell you why she prefers pink as opposed to white on those toggle buttons (or why the dropdown animation is confusing).


I can convince you that this pattern of lit-up phosphors looks grabbable by knowing what artists have long known about shadow and lighting to give such elements affordance (http://en.wikipedia.org/wiki/Affordance)—an appearance of tactile actionability.

Cognition and perception are remarkably similar across cultures because vision and physiology are remarkably similar across cultures. There's some trickiness regarding cultural implications (for example, colors carrying different connotations across cultures). Navigating and using this shared language to meet your design goals is far more likely to succeed when you're aware of the considerations and use (or break) them consciously.

This doesn't mean that all design should trend towards skeuomorphism—conventions enter into this process, but convention isn't a nebulous affair either. If your audience isn't particularly specialized, you can rely on them being familiar with the UI conventions of their operating system, to a greater or lesser degree. This means that people will likely try to resize boxes from the lower right if you draw something that looks "grippy" there, unless they're at home in an RTL locale, in which case they might reach for the lower left first. If you're overlaying resize controls onto a box, how are you going to draw those controls? Will you draw them outside your content—does your environment even allow you to do that? If you're drawing them on top, how do you ensure that they are easy to target without obscuring too much of the content? How do you deal with people who have poor sight or other vision defects like colorblindness? What about audiences who might have trouble using a mouse, like the elderly? What if your application has to serve all of these audiences at once? How would you even go about testing the decisions you make?

Of course there's subjectivity in how to execute this, but I've just led you through a thicket of constraints that affect your ability to communicate affordance using nothing but the illusion of pixels on a screen. Knowing what parts are science, what parts are convention, and how to effectively meet your design goals requires an understanding of the discipline. _Objectively_, without science your pretty interface might be pretty, but it might be a shitty interface, and you won't know the difference.


To broadly say that "art" is purely subjective while UI/UX is not is just a bad analogy. No matter what form of art you are doing -- whether it is the art of UI/UX, painting, or composing music -- the professionals of the trade use experience, technique, and other tools to create more sophisticated designs that work and succeed because of those tools and the planning/science behind their method. A more appropriate analogy would be to craft or hobby work, not "art."


I would have to disagree.

There are a few things which are a matter of preference or taste, but the vast majority of it is not.

When I design a UI, there is little to no hemming and hawing about where to place things, how big they should be, and so on. There is a rational process I step through. It's rather banal and boring.

It's more akin to an architect's work: the roof goes where a roof should go, the bathroom needs a toilet that isn't too close to the tub, the ceiling should be at least a certain height, and the stairs can't be too steep.


You have to try multiple different legitimate design approaches in order to understand anything about the qualities of your final design.

In some sense, it's a lot like data science. The little-leaguers design according to simple gradient descent: They'll pick one basic approach and do loads of A/B testing on it in order to make minor improvements.

The pros will do something like simulated annealing: Pick a number of different approaches, and see if one has a larger starting advantage.
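(For the curious, a minimal Python sketch of the contrast being drawn here. The measure_conversion and tweak callables are hypothetical stand-ins for a real A/B test and a real design change; strictly speaking, what's described is closer to multi-start search than textbook simulated annealing.)

    def local_search(design, tweak, measure_conversion, rounds=20):
        # "Gradient descent" style: keep one starting approach and only
        # accept small tweaks that measure better in a test.
        best, best_score = design, measure_conversion(design)
        for _ in range(rounds):
            candidate = tweak(best)
            score = measure_conversion(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

    def multi_start_search(designs, tweak, measure_conversion, rounds=20):
        # The "pro" alternative: measure several genuinely different
        # approaches first, then refine only the strongest one.
        best = max(designs, key=measure_conversion)
        return local_search(best, tweak, measure_conversion, rounds)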


I think the designer is really a kind of artist, and he/she must make use of his/her own style, preferences, talents, and gifts; but in UI/UX, attention to clients'/customers'/users' wishes, preferences, and expectations must be prioritized, preferably allowing some kind of customization and/or personalization, and always aiming for simplicity and practicality. Att @neigrando


I plan to do a follow-up post about design and objectivity: the fallacy that design can be reduced to an A/B testing formula.


Looking forward to it!


Design is mostly about synthesizing many different goals and priorities into a coherent approach. It's pretty easy to take one goal or activity and design a flow for it, but making it work with all the other objectives of a project will take a lot of compromise. To make a simple and usable UI, you will need to design chunks that work well enough for a half-dozen different activities.

It's similar to architectural design.

It would be relatively easy for somebody who is not an architect to design a room built for one purpose only, say reading. It would have rows upon rows of ergonomic chairs with cupholders and book or ereader holders. This room would hold many people who could all read at once. It would be useless for anything else. Also, people tend to become uncomfortable in such a mechanistic and single-minded environment.

Most would rather read in a coffee shop, or a living room, rooms designed with compromises and artistic sense to support many, many different types of activities. I'm not going to get into specifics, but cubicle farms have the same problem.

The second most important part is painstaking logic. Once you have an overarching concept that unifies all the goals of your UI, you need to step through all of the most important activities (or "flows"), trying to determine whether it is the easiest possible way to complete the action, and if you can make it easier without breaking the overarching design. Sometimes you can't and that's OK. But if you can make it easier, you should.

Then there's visual design. This is tough because it is a completely separate discipline from UX, yet it is intimately bound to it. The main thing to remember is visual weight. It's bad if everything has the same weight. A good rule of thumb is the 1-2-3 presentation: there should be some elements that are primary, some that are secondary, and some that are tertiary. It goes without saying that these weights correspond to the importance of certain actions in the interface. Play mind tricks on your user to steer them towards desirable behaviors.

If you can take something out without too many negative consequences, do it, because that will give you room later to make a completely different flow easier. Or leave that room empty to make your interface more simple and beautiful.

As for "scientific theories of design": Most of these things deal with visual weight and local presentation and have been common sense to artists for millenia, and are about as useful as A/B testing. That is, they are very useful towards the end to make sure that all of the separate elements in your design have reached local maxima. Maybe it's a bit like photography- if the photographer can give you scientific reasons for his white balance and exposure etc., good for him, but they say nothing about the subject matter or even the composition of the shot.

The ultimate criterion is the response of the audience, and no amount of tests or theories will ever touch that.


UI/UX is not subjective. Design may be objectively measured by its ability to satisfy users' actionable needs. Design is a solution to a problem. The author is conflating form with function. Form follows function.


If 1 + 1 is the problem but the solution can be a range of numbers, then it's time to admit that design is more subjective than you'd like it to be. It's more than a "solution to a problem" which just sounds like you're trying to fix a broken oven.

"Ability to satisfy user's actionable needs"... Oh please, there's a million ways to satisfy actionable needs. Just how are you going to measure user satisfaction with the design you've chosen? A yes/no survey? Your bosses opinion? Your colleague's opinion?

How about showing some guts and using your own intuition! Your own preferences, feelings, thoughts, emotions, visions, all that SUBJECTIVE fuzzy stuff. Don't be afraid, it's what humans are good at, particularly creative humans and artists, and it's how good design happens.

The logical, rational, "satisfying the user's needs" part of design, is the easy stuff. Even programmers who are usually afraid of design can figure out logical IA and a sensible UX.

The same conservative scientific argument can be applied to all art, including all the weird stuff we see at the biennale. That is, "it can be measured by the effectiveness with which the viewer engages or responds to the work, and how the artist communicates meaning and expression and...." Yeah yeah. Sure. We could attempt to evaluate the piece that way, but how about we just use our built-in subjective powers instead. And we'll get a better outcome as a result.


You may want to re-read the article as it argues the same (or at least a similar) point that you are.


Of course it's subjective. I don't like all the stuff you like. We don't all want to work at McDonald's, where everyone eats it even though it's crap.


Maybe someday I'll get around to expressing this idea in a more refined blog post, but here is the rough thinking for now...

Irrespective of whether one approaches UX using intuition/experience or statistical methods, both tend to rest on a flawed assumption: that there is a single 'best' version of a product/app/website, in terms of the overall UX decision.

Consider AB testing, where x measures some objective result:

f(A) = x

f(B) = 3x

So the results show f(B) > f(A). As an example, in context this could mean that site version B retains visitors longer and results in higher conversions. Therefore, UX version B 'wins', right?

What's the problem here? Perhaps it should rather look like this:

f(A,B,C) = [Ax,By,Cz]

Forgive me if the notation is a bit unusual, but the idea is that you end up with 3 different UXs: 3 'best' versions of the website that are always live, simultaneously. Or, if you had a bimodal preference group, you would end up designing 2 UX trees. The number of clusters or 'groups' does not matter, only the fact that clusters exist to begin with.

We need to move away from the idea that there is a single 'best' version of an interface. More often than not, preferences batched together (in the form of a product, app, or whatever) do not form a neat normal distribution, even if approximating that is convenient.

A/B testing is so highly regarded because it works. As effective as it is (if used right), I believe we can do better.

Malcolm Gladwell expands eloquently on this concept, in further detail, using an example from the spaghetti sauce industry. I strongly encourage you to check it out if you haven't yet; it's awesome! [1]

This same idea of clusters applies to taste in movies, books, and foods, to which faces we find beautiful, and to almost anything, really, when looking at a population. Taste in website UX preferences is no different.

So the optimal webapp of 2020 is one that automatically knows which cluster a visitor falls into and presents a version of the site that makes them most comfortable, based upon UX decisions that align well with his/her group.
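A rough sketch of what that might look like, assuming per-visitor behavioural features are logged (the feature columns and variant names below are made up for illustration), using plain k-means clustering:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-visitor behaviour: session length (s), scroll
    # depth, whether animation was disabled, keyboard-heavy usage.
    behaviour = np.array([
        [620, 0.9, 0, 1],   # long session, deep scroll, keyboard-heavy
        [ 45, 0.2, 1, 0],   # short visit, animation turned off
        [300, 0.6, 0, 0],   # middling on everything
        # ... one row per visitor in practice
    ])

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(behaviour)

    # Chosen offline, e.g. by running the usual A/B tests *within* each
    # cluster rather than across the whole population.
    variant_for_cluster = {0: "dense_ui", 1: "minimal_ui", 2: "guided_ui"}

    def variant_for(visitor_features):
        cluster = int(kmeans.predict([visitor_features])[0])
        return variant_for_cluster[cluster]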

The current approach in software shoots for a middle-ground compromise, plus a settings or preferences panel that the user needs to tweak. This could be considered suboptimal, as more effort is required from users.

Example: There is a reason that UI animation annoys me (and others) to the point that I'm willing to spend hours researching how to hack the OS in order to turn it off. Clearly, however, there exist other groups for whom it looks good, and there is no issue.

Why these clusters/groupings exist at all could be a combination of differences in our neurocognitive functions, and the summation of experiences over our lifespans.

[1] http://www.ted.com/talks/malcolm_gladwell_on_spaghetti_sauce...


Excellent article Matt! I had lots of conversations around this at SMX earlier this year!

The challenge is finding a way to clearly segment your user groups that doesn't have the same flaw of grouping based on the majority; otherwise the benefits will just be averaged out.

For example, if I can figure out that the chance of someone belonging to user group A when arriving from source X is 70% then I'm still serving a potentially unoptimized UX to 30% of the users.

Obviously this isn't a problem for an app that can clearly segment its users!


Interesting. Now that I think about it a tldr could be "Use segmentation for UX" ;)

(segmentation = groups = clustering)

I was thinking a bit further on this last night, and thought why not use evolutionary methods as well?

In other words, we don't design the UX; we let it emerge and cluster around segments naturally. Kind of like taking the concepts from genetic programming and mixing them with UX.
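For what it's worth, a toy sketch of the idea (genetic-algorithm-style rather than full genetic programming; the choice dimensions and the measure_engagement fitness signal are hypothetical):

    import random

    # Treat a UI as a bundle of discrete choices (the "genome").
    CHOICES = {
        "nav": ["sidebar", "topbar", "hamburger"],
        "density": ["compact", "comfortable"],
        "animation": ["on", "off"],
    }

    def random_variant():
        return {k: random.choice(v) for k, v in CHOICES.items()}

    def crossover(a, b):
        return {k: random.choice([a[k], b[k]]) for k in CHOICES}

    def mutate(variant, rate=0.1):
        return {k: random.choice(CHOICES[k]) if random.random() < rate else v
                for k, v in variant.items()}

    def evolve(measure_engagement, population=20, generations=10):
        # measure_engagement(variant) -> float is the fitness signal,
        # e.g. conversion or retention measured on live traffic.
        pool = [random_variant() for _ in range(population)]
        for _ in range(generations):
            ranked = sorted(pool, key=measure_engagement, reverse=True)
            parents = ranked[: population // 4]  # keep the top quarter
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(population - len(parents))]
            pool = parents + children
        return max(pool, key=measure_engagement)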


What differentiates a 'big-league' chef is that they make really good food.

Some of those chefs will be analytic-style cooks able to provide introspective traces of fundamentally subconscious decision processes (to non-chefs), but some will NOT be. Some designers will be good at communicating introspective traces (to non-designers), and some will not be. Some programmers will be good at communicating introspective traces (to non-programmers), and some will not be.

I write this as both a Real Programmer(TM) and an interaction designer, hats that I usually wear on separate projects, and never the twain shall meet. I have experience on both sides of the fence, and I disagree strongly that a designer who cannot justify their decisions (particularly to non-designers) is necessarily a 'little-league designer'.

The author makes an easy case that will resonate with developers: "a good designer can tell you why they did something, a bad designer can not". This is satisfying, because it confirms the (general) biases of the developer audience; namely, that other job roles inside an org had better be able to justify themselves (the reverse, a developer required to justify technical decisions to those unable to understand technical concepts, is often seen as an absurdity, and I think in many cases it is...). Is the 'big-league designer' iff 'can justify themselves' assertion true, or is it alluring because it's what you, as a developer, would like to believe is true?

Yet I believe the author's own chef metaphor undermines the argument.

Do you really believe that what differentiates a 'big-league' chef is that they can explain (esp explain to non-chefs!) why they did a certain thing? Do you believe there is not a significant subset of talented chefs who walk-the-walk but introspection is not a built-in feature?

Introspection is closely related to 'ability to teach', and is NOT necessarily related to 'ability to do'. The author references cognitive architecture, and this is a frequent assertion from that direction. A learning pilot is 'consciously' aware of what they are doing; an experienced pilot is often no longer able to 'introspect' what they are doing; a pilot-teacher has to relearn the 'conscious' (introspective) aspect, coupling it with the subconscious skills acquired while doing.

Introspection /is/ a nice feature in a designer, absolutely no doubt about it. It reduces team-friction, particularly when the designer is working with developers who (rightly or wrongly) like to have design concepts justified to them.

But "not being able to introspect" does NOT necessarily entail "not being able to design", just as "not being able to explain how to fly a plane" does NOT mean "not able to fly a plane".

Many developer-oriented design blogs imply that justification of design decisions to non-designers is a mandatory feature of a designer.

Is justifying key technical decisions to non-developers a mandatory feature of a developer? Yes and no. It's better if you can, absolutely, and being totally unable to explain decisions is a bad thing, but on the other hand, being constantly asked to justify decisions "outside the bounds of professional competence" (say, to a PHB) is counter-productive.

Design is roughly as subjective as programming. Take that for what you will. Both are first and foremost arts; neither is fully encapsulated by a 'methodology' (the whole UX methodology is as silly at encapsulating all possible talented forms of design as Agile is at capturing all possible ways to program).

Most day-to-day decisions in both design and programming are based on experience and intuition, with a mix of conscious reasoning on the top.

Talented communicators exist in both areas, and are more highly prized than poor communicators, yet skilled practitioners who are poor explainers are useful and 'big-league' in both programming and design. To believe otherwise is to unnecessarily pass over talented and useful designers, designers who will score big wins for somebody else.

</rant>

I wrote a blog post on this subject that may or may not be less rambly than this comment: http://blogs.gnome.org/seth/2010/02/23/i-did-the-worst-desig...


The article contradicts itself multiple times and reeks of pretense. Just like most UX guys I know.


So. A reading list would be handy.



