If you aren't making a consumer product for nursing home patients with sub-90 IQs, then you'd be wasting your time, and the feedback you got from the exercise wouldn't be useful. In fact, any decisions you made based on it could be wrong. The point isn't to design for the lowest common denominator, but for the users you will actually have, and usability test participants should be recruited with that in mind.
There is some merit to what I assume is your underlying argument, but the way you phrase it isn't helpful.
>The point isn't to design for the lowest common denominator, but for the users you will actually have
Keyword: situational disability
Even a perfectly fit and educated target audience sometimes operates under conditions, or in environments, that significantly reduce their mental or physical capacity: stress, injury, pregnancy, too many beers, very long nails, terrible weather, a toddler trying to grab your phone, being a non-native speaker, and so on. You may even know the user personally, but you never know what's going on in their lives when they use your app. So as general advice: ALWAYS follow accessibility guidelines. Even bad copy may drop your usage by a significant percentage, because there are plenty of people with dyslexia.
Pick your favorite programming language. Do you think it should be tested on people in a nursing home? I'd argue that's the wrong audience. (A programming language isn't a graphical user interface, but it is a user interface!)
A programming language is not a user interface; it is a way to describe commands. The UI in this case would be the means of entering the program, e.g. punch cards, a text editor or IDE, or an AI copilot. People who can write code are a very broad audience, and of course all accessibility requirements must apply.
A programming language is absolutely a user interface. The error messages and diagnostics emitted by the language are the feedback mechanisms. The syntax and semantics are the design of the interface. The documentation is the user manual. Text editors, IDEs, punch cards and AI copilot are all separate UIs on top of whatever programming language you happen to be using.
After all, TUIs are a thing and nobody debates that they are also user interfaces. Just because a programming language is all text doesn’t mean that usability metrics don’t exist for it.
>The error messages and diagnostics emitted by the language are the feedback mechanisms.
The error messages and diagnostics are emitted by tools like the compiler, linker, or interpreter, and are part of their interfaces. A language standard may codify some error messages, but the language itself cannot present them to you, because a language is not a program.
>Just because a programming language is all text doesn’t mean that usability metrics don’t exist for it.
Just because some usability metrics can be applied to a programming language does not make it a UI. An interface implies interaction. You do not interact with a language; it cannot receive UI events and react to them. You interact with the tools that understand it.
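If a concrete illustration helps, here is a minimal Python sketch of that distinction: the diagnostic below is produced by a tool (CPython's bytecode compiler, invoked via the built-in `compile()`), not by "the Python language" in the abstract.

```python
# Deliberately malformed source text. The language defines what is
# valid; the diagnostic comes from a tool that reads the text.
source = "def broken(:\n    pass\n"

try:
    compile(source, "<example>", "exec")
except SyntaxError as err:
    # Message wording, line number and column are features of the
    # tool's interface; the language spec itself prints nothing.
    print(f"line {err.lineno}, col {err.offset}: {err.msg}")
```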
You're being pedantic to a fault. Here's the definition Wikipedia gives for a UI:
> a user interface (UI) is the space where interactions between humans and machines occur[0]
Further, Wikipedia lists a ton of different kinds of user interfaces. Included among those is:
> Batch interfaces are non-interactive user interfaces
And further, here's a better explanation of how a programming language is a user interface than I can provide here[1]. It really is as simple as the programming language being an interface to the machine, and the programmer being the user of that interface. I don't understand why you're arguing so hard against a widely accepted fact. When computers were first made, there was no such thing as a mouse or keyboard; there were punch cards. The only way for a user to interface with the machine was to insert a program on punch cards. Nowadays we have all sorts of input devices that give us new ways to interface with machines, but the most basic way we can interface with a machine is by writing a program expressing our intent for it.
And if you want to be so pedantic, then is a pure HTML/CSS website a UI? There's no program there, just markup. The only program that runs is the browser. So is the website nothing, and the browser the only user interface? Or how about the steering and the brakes/accelerator in a car? Those are purely mechanical; are they not a user interface because they don't have a program? Or how about original arcade games like Pong? They were soldered directly onto the board. There was no program, just a circuit. There were no instructions being executed. So does that make those games a non-user-interface?
Using labels does not make your arguments any stronger; on the contrary. Speaking of which, you quote Wikipedia, but neither the article you refer to nor the article "Programming language" says that a programming language is an interface. Languages by definition are merely syntax and semantics; they are used in interactions, but they do not define an interface themselves. It is not an "is" relationship but an "is used by" relationship. You can write a program on a sheet of paper and put it in a frame on a wall, so that your friends can read it and enjoy the beauty of the algorithm after a couple of bottles of wine, or you can print it on a t-shirt to communicate your identity. In neither case is there an interaction between a human and a machine.
An interface is always about interaction: a keyboard to type the command or the program on, a display presenting an IDE or a command interpreter, and so on. So, looking at your examples: HTML is not an interface, and neither is an HTML file, but a static website opened in the browser is, because the browser has downloaded the site and now knows how to interface with you. A steering wheel is of course an interface because, as I said in my previous comment, it allows interaction. The arcade-game example is actually the same as the first computer, which did not have an interface for programming (punch cards came later) and had to be re-assembled to run a new program: both did have user interfaces for data input and output.
Your second reference is clearly written for beginners and simplifies things to the point of nonsense, even saying that "Programming, therefore, generally involves reading and editing code in an editor, and repeatedly asking a programming language to read the code to see if there are any errors in it". Do you still think it was worth quoting?
Now, if you feel that I'm being overly pedantic with this response too, so be it.
Okay, then maybe a better example would be a command-line program like `grep` or `sed`. Should those be tested in a nursing home? I'd argue that's the wrong audience, and testing there would cause you to simplify these tools to a point where they're no longer useful.
(I do think it's notable that you can combine the inputs and outputs of such programs into a shell script, which feels a lot like using a programming language—but this is beside the point I was trying to make.)
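As a rough sketch of that composability (in Python rather than shell, with a made-up file name), a stripped-down `grep` is just a filter from one text stream to another:

```python
#!/usr/bin/env python3
# toygrep.py (hypothetical name): read lines from stdin, write the
# ones matching a regex to stdout. The "interface" is plain text
# streams, which is exactly what makes such tools composable.
import re
import sys

pattern = re.compile(sys.argv[1])

for line in sys.stdin:
    if pattern.search(line):
        sys.stdout.write(line)
```

Invoked as, say, `python3 toygrep.py ERROR < app.log`, it slots into a pipeline just like the real `grep` would.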
Horrible advice for expert tools. If you can assume that the end user is going to learn the tool, you can design it for peak effectiveness after a learning curve. If you have to account for the least capable users and hostage-situation levels of panic, you can't do that, and you create a worse product overall.
I think the point is that you can design for peak effectiveness while considering usability, and that makes the tool more effective. There’s a lot more scrutiny on edge cases when designing expert tools.
On "expert tools" I'd argue it's imperative to consider high-stress interactions, because the stakes of the outcome outweigh the expertise of the user.
You are missing the point. It's obvious that a cockpit needs to account for stress or a crisis; extending this to, say, CAD software is nonsense.
I like your confidence, but it also shows a lack of experience and of understanding of what engineering is. Expert tools have a much lower tolerance for user mistakes, because big money is at stake (or sometimes other people's lives). A typo in an Instagram post is not the same as a wrong number in CAD. I have personally seen a construction project where incorrect input in CAD resulted in several dozen foundation piles for a 16-story building being installed outside the site boundary, simply because the architect responsible for aligning the building on the site made a mistake while working in a hurry, confusing two fields in the UI. Of course, there was a chain of failures, each step costing more than the previous one, but it could have been prevented if the software cared about the user and did not assume the user is superhuman.
It is so easy to squeeze as much functionality as possible onto a screen in the name of productivity, but then the quality of labels is sacrificed, click zones become too small, and feedback is reduced to a barely visible message in the status bar. It takes one sleepless night or a family argument for the user to get distracted and make a very expensive mistake.
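To make that concrete, here is a minimal sketch (with hypothetical names and boundary values, not any real CAD API) of the kind of sanity check such software could perform instead of trusting the user to be infallible:

```python
# Hypothetical site extents in metres; in a real CAD package these
# would come from the project's site model, not constants.
SITE_BOUNDARY_X = (0.0, 85.0)
SITE_BOUNDARY_Y = (0.0, 40.0)

def place_pile(x: float, y: float) -> None:
    """Refuse coordinates outside the site boundary instead of
    silently accepting a transposed or mistyped value."""
    (x_min, x_max), (y_min, y_max) = SITE_BOUNDARY_X, SITE_BOUNDARY_Y
    if not (x_min <= x <= x_max and y_min <= y <= y_max):
        raise ValueError(
            f"pile at ({x}, {y}) lies outside the site boundary; "
            "were two coordinate fields swapped?"
        )
    # ...proceed with the actual placement...
```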
A UI designer does a good job if the person paying them thinks they did a good job, not if they actually followed best practices, unless that's how the work gets approved. A frontend developer does a good job if their tickets are done and their boss likes them, which may or may not include quality work that's accessible or usable. That's the secret I wish I'd known when I started working; it could have saved me the extra personal cost of trying to produce quality results despite there being no incentive structure for it.
Just like morality and law are not the same, the objective quality of a UI designer's work doesn't necessarily have anything in common with their employer's preferences.
You're right that only one of those is paid well, but that's not what GP was talking about.
> You're right that only one of those is paid well, but that's not what GP was talking about.
I didn't say anything about how much someone is paid, just that it is often a job, and whether you keep a job depends overwhelmingly on whether the person paying you is convinced that you're doing it well, which may or may not relate to the objective merit of the work. It doesn't matter whether you're making $150k or $20k; it's not wise to prioritize things that the people paying you didn't ask for.
The exceptions are of course things that don't pay at all, in which case your goal is still probably to do the best job you can under the constraints provided. If those are too tight, things get cut, or you don't sign up for it.
So what if we will? That does not mean we will be users of the products we are designing the UI for at that point. Design for actual disabilities that you can reasonably expect your users to have, such as color blindness, not the full spectrum of the human condition.
That said, I do think products should be as simple and clear as possible for a given level of essential complexity.
Countless apps do not even accommodate the users they actually have, and very obviously don't test accordingly. The non-lowest common denominator is far lower than you seem to assume.
If you think that a fancy UI rework or a "please pay for our subscription" screen is only confusing to people in nursing homes, you are very wrong. They can be nontrivial obstacles even to users who work every day, organize conferences, etc.