The accidental tyranny of user interfaces (uxdesign.cc)
64 points by gjvc 5 months ago | 60 comments



I was quite early to computers (starting circa 1990 with an IBM XT clone). When GUIs first arrived, they felt massively liberating: they allowed you to discover the computer's functions (mainly through menus), rather than having to flip through the (actual, printed) manual.

Then, around the mid-2000s, things started to change. GUIs became facades behind which functions were hidden away. All of a sudden, there was no single, logical way (such as browsing through menus) to discover something. You had to keep using the interface until you stumbled onto new functions (or the product team had to breathlessly announce them). And the scripting capabilities, even simple drag-and-drop composable automations, went away.

I can only assume that this is due to the ad-driven nature of computing today. Ads depend on human eyeballs. Scriptable, automate-able UIs reduce eyeballs. Uniform UIs with text labels (like the good old Windows 9x interface) reduce eyeballs. Ability to quickly open up an app, get what you want and get out, reduces eyeballs.

Paying for software doesn't help either. The moment you pay, you signal advertisers that you have purchasing power, and they will pay your product/service provider even more to get at you and your data. I doubt there will be any change until advertising is regulated in some manner.


My guess is it's more that software was eating the world. Since so much more software was being made, less care was taken to be consistent and to implement automation workflows; the focus went to visible features instead. This was also the era of 'skins' for media players, and theming the whole OS was falling out of favor. IIRC OS X had no theming and only an invert option for accessibility; not even dark mode.


I was actually thinking of Winamp skins while writing this. It was a lot of fun initially, but then it got old very fast. Still hard to beat the original Winamp interface, warts and all.


Who designs an elevator (lift) with no floor buttons on the inside? The description of them having to go back to the lobby and start over when they selected the wrong floor seems like something that would happen pretty often.


This, and the other examples, reek of lacking knowledge by the users.

Tall buildings are expensive. The taller, the more available floor space to rent. The taller, the more elevators you need. More elevators mean less floor space. People need to move between floors.

Buttons outside mean that an algorithm can dispatch elevators most efficiently, grouping people who want to go to the same or nearby floors. So the throughput of people through the building is maximized.

It's the tyranny of capitalism. Or the tyranny of efficiency.


not uncommon in fancy offices and apartment buildings where access is more tightly controlled.


I was just staying at the Westin and they had this.

It seems much cheaper to build and maintain, due to a reduction in parts.

What's Orwellian about it is that a finger must touch a screen. They are not yet picking up biometric information as far as I know. When that day comes, it will indeed enhance security, but add to the surveillance creep.


Your Orwellian tracking fantasy is still possible. Someone's got to press the call button. They could put fingerprint readers on the buttons as well. It's not like it's something only possible because a screen is involved.


No, it's not. And we haven't mentioned cameras in the lobbies.


Some of these UI decisions are more technical versus tyrannical. I use Linux every day, yet I can understand why Google doesn’t have such detailed progress reports: If you are using Google Docs, and you want to convert a file to Google Sheets, that likely requires several different microservices working in tandem to handle your request.

For them to build out a real-time feed that tells you the progress would perhaps require a complete change in how these microservices behave (so they can all feed real-time, ongoing data to the client), and not provide any real benefit. The only time I really pay attention to my Linux boot sequence is if something is stuck or an error appears, so I can handle it. Seeing what Google Sheets is doing may be “neat,” but I completely understand why that’s not a good reason to build it out, and it wouldn’t make anyone outside of Google employees more productive.


> If you are using Google Docs, and you want to convert a file to Google Sheets, that likely requires several different microservices working in tandem to handle your request.

This discussion is about UX, but your comment shocked me. Seriously, converting a spreadsheet would be a single process if you ran it from your command line. It’s hard for me to imagine why I would invoke several microservices to perform this on a back end server.


What would your solution for 10 million such requests within a day be?


You may be conflating microservices and horizontal scaling. You don't need to have multiple (disparate) microservices to scale. Microservices have absolutely nothing to do with scaling. That's a myth started by people who never understood the actual point of microservices, which was partitioning and continuity of developer productivity.


You know: write a program like in the old days.

I would perform each request in a single process: read in the metadata (mainly structure) and then either process each tab sequentially or more likely map the whole thing into memory and spawn a thread for each tab, then write the whole thing out in order.

No need for the overhead of microservices: locating, invoking, transferring data, and synchronizing responses, much less dealing with all the pain of lost connections, abnormal termination and so on.

The largest Excel sheet I've worked on is only about 500 MB, and (after a quick search of my local filesystem) almost all are less than one MB. So in the (rare) worst case the transmission cost doesn't justify spreading it around; in the common case there's no benefit.
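For what it's worth, a minimal sketch in Python of the single-process, thread-per-tab approach described above (the Workbook/Tab types and the per-cell conversion are hypothetical stand-ins, not any real product's code):

```python
# Minimal sketch: one process converts a whole workbook, one thread per tab,
# and results come back in the original tab order. Types are hypothetical.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Tab:
    name: str
    rows: list

@dataclass
class Workbook:
    tabs: list

def convert_tab(tab: Tab) -> dict:
    # Stand-in conversion: render every cell as a string.
    return {"name": tab.name, "rows": [[str(cell) for cell in row] for row in tab.rows]}

def convert_workbook(wb: Workbook) -> list:
    # pool.map preserves tab order, so the output can be written sequentially.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(convert_tab, wb.tabs))

if __name__ == "__main__":
    wb = Workbook(tabs=[Tab("Sales", [[1, 2], [3, 4]]), Tab("Costs", [[5.0, 6.5]])])
    print(convert_workbook(wb))
```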


So what happens when this hypothetical machine of yours, which has enough NIC bandwidth to handle the scale of data that needs to be streamed in both directions, enough CPU power to handle millions of concurrent requests in a process or thread of their own, and enough RAM and fast enough disks to map and swap all the files being converted, goes down?

In 2022, Google Workspace apparently had ~3 billion users (8 million of them paying): https://developers.googleblog.com/en/year-in-review-12-aweso... .

Not every solution needs microservices. But also, we have problems today that we did not have solutions for "in the old days".


You keep confusing horizontal scaling with microservices. The two are basically unrelated. You have to scale horizontally regardless of whether you are running regular services or microservices; the goal of microservices is just to increase the granularity of horizontal scaling (or, more often, to solve organisational issues around feature/code ownership).


Ten million per day seems unlikely to translate to millions of concurrent requests.

This kind of task is typically suited to being a single unit of work; you don't want partial conversions hanging around in 'microservices' if something goes wrong.

As for scaling, I'd likely put this in a process definition and run it on the BEAM if I were to make such a product. That way millions of requests per hour can hit my cluster and those that fail somehow will just get cleaned up and the transaction rolled back, the clients get 'sorry, try again' or 'sorry, we're working on fixing it', and the rest happily chug along.


Apache Beam?



You don't have to run them all on a single device! But you hardly need a bunch of microservices, or any, really, to do any one translation.


Maybe this'll come off as snarky, but I would build Excel! It would spread 10 million requests across the 10 million users, be totally immune to network outages, and my users could rest assured I wasn't thumbing through their data.


Couldn’t the conversion run locally?


My assumption is that the comment I was responding to addressed Google Docs specifically (which afaik is web only?). If this is the case, then your options to run locally would be for the conversion to run in your browser, or for you to have some Google agent run on your computer that can handle these requests instead of the Google servers themselves, neither of which is a scalable option in this case (due to browser differences / expecting people to be able to install software locally to their devices)?


> due to browser differences

Surely a file conversion should not be affected much by browser differences; it should be a pure function, a pure calculation that doesn't need many APIs and doesn't have much to do with rendering.


> you want to convert a file to Google Sheets, that likely requires several different microservices working in tandem to handle your request.

Well there's your problem, right there.


Why?


Because file conversion is pure (in the functional programming sense). It produces an output based solely on the given input, without side effects.

If you have to implement a pure function by splitting it across multiple services, there is something very, very wrong with your software architecture.
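As a toy illustration (the CSV-to-rows conversion below is made up, not anything Google actually runs), a conversion written as a pure function depends only on its input and has no side effects:

```python
# Toy example: conversion as a pure function. The same input always yields the
# same output; no network, no shared state, so nothing about it demands that
# the work be split across services.
def csv_to_rows(csv_text: str) -> list[list[str]]:
    return [line.split(",") for line in csv_text.splitlines() if line.strip()]

assert csv_to_rows("name,qty\napples,3\n") == [["name", "qty"], ["apples", "3"]]
```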


A progress bar doesn't need to literally report precise progress, just that milestones/checkpoints are reached; say, that 1 of 7 microservices has completed.

Now, you do need to consider expected timing and weight each milestone accordingly, or else you'll run into the issue where 1-99% takes one second and then it's stuck on 99% for ten minutes... but otherwise it works.

And if you can’t report that level of progress, then you’ve got other issues (namely that you yourself have no idea what the hell the system(s) is up to and working on at a given moment)
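A minimal sketch of that milestone weighting, with made-up stage names and expected durations (a real system would substitute its own measurements):

```python
# Progress computed from completed milestones, weighted by expected duration,
# so a slow final stage doesn't leave the bar parked just short of done.
# Stage names and timings here are illustrative only.
EXPECTED_SECONDS = {"parse": 1.0, "convert": 2.0, "write_output": 12.0}

def progress(completed_stages: list[str]) -> float:
    total = sum(EXPECTED_SECONDS.values())
    done = sum(EXPECTED_SECONDS[s] for s in completed_stages)
    return done / total

print(f"{progress(['parse']):.0%}")             # ~7%
print(f"{progress(['parse', 'convert']):.0%}")  # 20%
```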


Kind of off topic but this assumes that the ideal progress bar should be a smooth continuous movement from 0% to 100% when I don't think that's the case. If you have a 100-step process, and the first 99 take 1 second and the last one takes 10 minutes, the progress bar should be at 99% for 10 minutes because the process is 99% complete, right?


Different devs have used different methods for implementing this over the years. The real answer is context-dependent: who's the target audience, and what do they want out of that progress bar?

You can have a progress bar that shows the milestones plus an ETA, multiple progress bars plus a log-message box that shows what the background work is doing, or just a single disconnected bar driven by a timer based on how long you estimated the task would take, etc.

As a user: how do you know whether the UI/process is stuck or just taking a while? And how much time is left? You have other things to do after five more tasks like the current one and want a rough estimate of when you'll be done.


What does a human want to know when they look at a progress bar? How long (or how much longer) they need to wait, typically. So yeah, the progress value displayed should ideally be the percentage of total time.


Knowing about UI is a bit like knowing about kerning: once you know what a proper UI looks like, you cannot unsee how shitty modern interfaces are.


The thing that gets me is these large companies obsess over their UIs, so don't they consult with UI experts? Yet why do their UIs become predictably worse over time? Perhaps a certain subset of people actually really like featureless, pseudo-minimalist, vague, vacant, and contrary UIs??


The bean counters' Excel model allocates a junior salary to the UI/UX person (seen as "soft" work, "not real engineering"), and there is no feedback loop to realign the model with reality, i.e. frontend as yet has no clear KPI the way backend infrastructure does (raise adtech to the next order of magnitude). The result is salary structures biased towards what the bean counters understand, amplifying their biases; the Excel model becomes reality, the tail wagging the dog.


Bad UIs hurt users, but users do not usually make the choice of which software to buy or use. Therefore, bad UIs do not hurt sales or user engagement. Bad UIs can even boost engagement, and ad views.


The problem is that they prioritise looking elegant over being genuinely usable.

Buying decisions are made on looks, not analysis of UI. People will buy pretty, and most people will never be aware of bad UI - they just adjust to it.

This article by Don Norman and Bruce Tognazzini has been discussed on HN before: https://www.fastcompany.com/3053406/how-apple-is-giving-desi... I cannot find the post that got a lot of comments though.


Possibly a result of the division of labor? The more a business grows, the more parasitic positions open up (see Parkinson's law), hence the selection of people whose work matches the intellectual mediocrity of their superiors (graphic designers in lieu of competent programmers).


Believe it or not, the problem is likely that they are consulting UI "experts". Many designers responsible for this stuff (at least the ones I've worked with, and I can extrapolate based on what other companies have released) haven't been in the industry for very long, and they don't really have the practical experience using computers and applications to recognize the harm they are inflicting. Combine this with the fact that one of the main ways for designers to get pay increases is to pad their portfolio with dribbble-worthy form-over-function designs, and you start to get things like the OP is lamenting. And the Slack redesign.


You know why: it's because getting the user into a daze of confusion worsens their decision-making, which is beneficial to the app owner.


This sounds too simple to be true.


There's something to this. I worked in credit card processing for a few years. The dumbness of online card inputs annoys me to no end. State Farm and Papa Murphy's have good experiences; most places absolutely do not and should be publicly shamed.


can you link to a screenshot or essay demonstrating a correct UI?


No. But I can teach you about the shittiness, though.

The header line above your post has inert text (the "on:"), buttons (in-place action, like "flag") and links (takes you somewhere, like "parent"). They all look the same. You need to hover the mouse over them to see which areas are clickable, something which is not possible on touch interfaces at all.


I’m using a touch screen phone to read this. I have no problems.


... hovering over the elements?


Understanding what to click on


> What if I want to go straight to the channel? This was possible, once. What if I want to highlight and select some of the text from the preview? I can’t. Instead, the entire preview, rather than acting as an integrated combination of graphics, text, and hypertext, is just one big, pretty, stupid, button.

When designing user interfaces, you need to think about all sorts of users. When less precise input methods (touchscreens) are involved, or when your users might have issues with precise use of a mouse (and YouTube targets people of all ages and abilities), making everything lead to the same place is much better than some parts of the row leading elsewhere and potentially confusing users. If you’re searching for a channel by name, it should get its own entry on the list.

> We were shopping for a card for a friend or relative, in the standard Library of Congress-sized card section in the store. Looking at the choices, comprehensively labelled 60th Birthday, 18th Birthday, Sister’s Wedding, Graduation, Bereavement, etc., he commented, Why do they have to define every possible occasion? Can’t they just make a selection of cards and I write that it’s for someone’s 60th birthday?

You can certainly find plain cards. You may need a more generic card if you're targeting an unusual occasion (say, a 31st birthday). But the _wedding_ card has wedding-related imagery and pre-written cringy text, whereas the _bereavement_ card is less happy and more appropriate for that occasion. I can't draw, but the card company can hire artists to draw something nice on the card for me.

> This is a list of the processes that the operating system has launched successfully. It runs through it every time you start up. I see more or less the same thing now, running the latest version of the same Linux. It’s a beautiful, ballsy thing, and if it ever changes I will be very sad.

You may be able to see that something went wrong — or not, because it scrolled off the screen too fast. You may be able to tell that it’s still working — but you can’t tell when it will finish. Measuring progress is not always easy. Google probably opted not to, since they expect this to only take a couple of seconds, so instead of inventing a way for the conversion service to report progress to the few users who may care, they just show an indeterminate animation.


This reminds me of the old joke at the florist’s shop: the business celebrating a re-opening got flowers spelling out “deep condolences” while they delivered one to the funeral that spelled out, “Best of luck in your new location!”


Ad-based business models will lead to UIs that force you to do things. There's not much to say here other than not to use those services, if possible.

Ideally UIs and services (APIs) would be separate, so you could choose the interface that's best for you. This is a pipe dream though.

Failing that, users paying for software directly is a step in the right direction. For example, Kagi and Linear are both excellent software (IMO).


This year, I started teaching basic computer literacy to adult students, and that put the current UI and design into another perspective for me. So many things are made "easy" instead of straightforward, which makes explaining them difficult.

For example, the URL/search bar in browsers. It isn't purely a web address field: if you mistype an address or it doesn't exist, the text you typed gets redirected to a search. Another browser feature, the back button: it goes to the previous page in history, but only within the current tab and browsing session. Try explaining that to someone with almost no experience using a desktop browser.

Many designs assume you already know what to expect. For example, Gmail does not separate the fields in its email composer. The body field has no border or label; it is literally a blank space you have to know to click.

Yet at the same time, terminology is somewhat archaic: compose, carbon copy, forward, paste, and a few other words still remain.


It's interesting in this context to look at the increasing divide between "software designed for experts" and "software designed for consumers".

As anachronistic as it may sometimes seem, the whole software development ecosystem runs on various flavours of (more or less) well-defined plain text formats, precisely because this allows interoperability between a wide array of diverse tools. Similarly in the sphere of sound production, we see widespread software interoperability in the form of MIDI and VST plugin standards. Hell, even in early office software we were able to embed (functioning!) snippets of spreadsheets into other types of documents, and use templated mail merge against our contacts database to generate form letters.

And yet over on the consumer software side of things, we're increasingly lucky if we get working copy/paste...


The article has a link to a wonderful Ted Nelson series on YouTube that explores that kind of thing in more depth: Computers for Cynics.

https://www.youtube.com/watch?v=KdnGPQaICjk&list=PLlTLLSskDv...


Oh sweet irony: the instant I opened this article, two popups slid in from the top and bottom to inform me about spying on users and about a SIGNUP FOR FREE.


Firefox with uBlock and uMatrix active likely allowed me to avoid that experience.


And I can't zoom the page, either.


I think this misses the tyranny we live under every day. What if I want the lift to stop between floors (see Being John Malkovich)? Or one-way streets: if I accidentally turn onto a one-way street, I am forced to drive all along it to who knows where. Why can't I just steal money?

I think it is a matter of degrees of freedom, not loss of freedom.


I think Rick Roderick's lectures better detail the tyranny of everyday life than Being John Malkovich.

https://rickroderick.org/


Sorry that was a reference to getting to floor seven and a half.


For further reading, “The Design of Everyday Things” by Don Norman is a great source for a comprehensive look at this subject. I believe it’s probably still required reading for UI/UX folk but also approachable for other people and lays out a thorough case for what the key trade-offs are in interface design.


I do not think it's accidental. Dark patterns are quite deliberate.

"All this matters because the interfaces in question do the job of the dictator and the censor, and we embrace it. More than being infuriating, they train us to accept gross restrictions in return for trifling or non-existent ease of use, or are a fig leaf covering what is actually going on."

Software: empowering you to do anything, on someone else's terms!



