Windows 98 Icons are Great (2015) (alexmeub.com)
553 points by maxmouchet on Jan 20, 2019 | 354 comments



Not just the icons, the entire user interface was excellent. You had:

- Clear indication of whether something was clickable or not, vs. everything flat and featureless

- Good contrast

- Normal-sized widgets which left enough room for content vs. the huge widgets we use today for some reason (it's weird for me to say this, but Linux desktops are the worst offenders here). I get why they're important on touch-enabled systems but that's no reason to use them anywhere else.

- Useable scrollbars, no hamburger menus

- And -- although Windows-specific: the Start menu was something you could actually use.

The state of testing, examination and debates about user interfaces was also light years ahead of what we see today. I was genuinely fascinated by what my colleagues who did UI design were doing, and by the countless models they developed and metrics they used. If it was the same bikeshedding we see today, they sure as hell knew how to make it look like they were having a real debate...

I suspect the reason behind this drop in quality is largely economic. Fifteen years ago, you needed a great deal of understanding of perception, semiotics, and computer graphics, and a remarkable degree of mastery of your tools, in order to produce an icon set. This made icons costly to develop, to the point where it was pretty hard to explain to managers why you needed to pay a real designer a heap of money for a real icon because, dude, just look at every other successful app on an OS X desktop!


"Not just the icons, the entire user interface was excellent."

Agreed. I strongly dislike the Windows operating system, but I very much miss the Windows user interface, which was very well designed, consistent, and optimized for real work.

One of the things I miss the most are the consistent, universal and wide-ranging keyboard shortcuts. Not just key shortcuts for menu items, but keystrokes that allowed you to move around dialog boxes, resize windows, etc. OSX is largely terrible in this regard, with even many common menu items lacking shortcuts ...


> One of the things I miss the most are the consistent, universal and wide-ranging keyboard shortcuts.

I actually find that's what I miss when I use Windows. Windows' universal keyboard shortcuts seem to be limited to window management and some basics like new files, save, cut, copy, paste, etc.

On the other hand, just about every macOS app has the same shortcuts for actions within applications, not just outside of them. For items without shortcuts, you can define your own quickly and easily in System Preferences, and those custom shortcuts can apply to just one app or every app.

Not only that, every macOS app puts the shortcuts in the same place, the menu bar. They're all searchable via the Help menu. On Windows, there are more often than not keyboard shortcuts that aren't listed in any menus, so one would have no idea what they are without looking them up. Further, because Windows software tends to be developed in any ol' framework with any ol' user interface, lacking in any consistency, quite a lot of programs don't even implement the standard, universal shortcuts.


All the things that you list about Windows apps are deficiencies of the modern era of UX design. Apps of the Win98 era were much more consistent, and especially the rule that all commands and shortcuts must be in the main menu was usually adhered to, at least by apps that respected the official platform design guidelines.

The idea of a drop-down main menu with all commands comes from CUA:

https://en.wikipedia.org/wiki/IBM_Common_User_Access#Descrip...


They were also self-documenting and served as training. Do an action, and while you're doing it, right there next to the description of the action, is how to do it quickly.

If you realized that, were doing the same thing frequently, and used the keyboard shortcut after the first couple of times, you learned the muscle memory of how to do it the quick way.


> All the things that you list about Windows apps are deficiencies of the modern era of UX design

I was going to say "not on macOS, they aren't" because this isn't true for any apps that use Cocoa or Carbon (which is to say, all of them). The only apps that exhibit any weirdness are ones that use cross-platform frameworks like Qt, GTK+, or wxWidgets.

But nowadays, there are loads of Electron and React Native apps that don't even present anything in the global menu bar; if the developers don't think to add them in themselves, basic functions like copy and paste don't work without secondary clicking!


> UX apps of Win98 era were much more consistent

Counterpoint: 99.5% of everything ever created with Visual Basic.


99.5% of everything ever created with VB is probably line-of-business apps. Which is to say, most users would only be exposed to 1 or 2 such apps, usually at work.


The one thing I miss from Windows is the Alt access to the menu. While menu search is great when I don't know what I am looking for, using the Alt menu and then a chord to access features was great and less painful than some of the macOS shortcuts that require three modifier keys at the same time.


When I bought my first computer (which came with Windows 3.1), it didn't come with a mouse. User interfaces back then were designed to be usable without a mouse, because many people didn't have a mouse - it was optional, and used up one of the serial ports.


I gave up trying to get my first mouse working after hours of tweaking win.ini...


Yup, I started out with win 3.1 without a mouse. You could do pretty much anything except use paint.


What makes up for this, to some extent, is that you can assign a keyboard shortcut to any menu item in OSX: go to System Preferences > Keyboard > Shortcuts > App Shortcuts, click +, select the application (or all applications), type the menu name and set a shortcut.


The other thing about OS X is that command-shift-/ opens a little search box in the help menu and then you can pick any menu item by typing a few characters and selecting it with the keyboard.


Very underrated feature. Last I used Ubuntu's Unity it had a similar feature.

I wish Alfred had a plugin to do this on Mac OS. I remember Quicksilver had something.



What baffled me is that if you have a mouse with side buttons you just cannot use them in any type of shortcut.

Mouse support is generally terrible in OSX.


There's a lovely piece of software called SensibleSideButtons to deal with that. Back in the day, everybody used USB Overdrive to overcome OS X's shortcomings with the mouse. Alternatively, BetterTouchTool is a nice little piece of software for setting custom shortcuts with any mouse button and/or trackpad gestures.


I still use USB Overdrive to make a physical scroll wheel work without crazy acceleration. Without it, macOS seems to scroll just a pixel on the first click of the wheel, maybe two pixels on the second click, but if I scroll a bunch, it jumps half the page. With USB Overdrive I can set it to always scroll just one or two lines of text per click.


Mouse support for Macs has been terrible for decades.


> Mouse support for Macs has been terrible for decades.

I'm kinda out of the loop, do they still sell mice with only one button?


Sort of. Their desktop mice have two buttons, but only a single plastic shell. This seems to follow Apple's design guideline of wallpapering over complexity, rather than admitting that it exists.

Image: https://m.media-amazon.com/images/S/aplus-media/vc/75a2caed-...


> Their desktop mice have two buttons, but only a single plastic shell.

I'm not sure where you're getting that from, since Magic Mouse only has one button: https://www.ifixit.com/Teardown/Magic+Mouse+2+Teardown/51058...


It may physically only have one button, but functionally it has two (left and right click) as well as gesture support. Magic mouse takes a bit of getting used to (it's very flat), but it is a fantastic mouse once you have.


Functionally, it has as many buttons as you'd like, since it's a multi-touch mouse. It's like a trackpad in that sense: you can configure it to do certain things when you use it a certain way, but physically, it's nothing more than a multi-touch surface and a single click button.


Shoutout to Shortcat for providing something like "knowledge-free keyboard shortcuts everywhere" on macOS. You hit a search key-chord (I use SHIFT-CMD-SPACE) and it pops up a search modal making all UI elements keyboard selectable. Usually the thing you want is 1 or 2 characters followed by Return.

It honestly reminds me of the experience of navigating Windows menus with Alt- and progressive keyboard shortcut learning, but flexible enough to handle things like Electron applications that don't even have a Menu bar or settable shortcuts.

Does anyone else use this or something similar?

https://shortcatapp.com/


Jetbrains' IDEA has the "Search everywhere" feature that...well...searches everywhere - menus, commands, project files, etc. I'd love to have that for every app in the OS.


Actually, that would be really, really awesome once you get used to Jetbrains' way of working. It's so unusual to be able to search for all the configuration, functionality, code and files in the same place.


Emacs is much like that, as everything the user can do is an interactive command, which is typically a named function. So tools like Helm, Ivy/Counsel, as well as the built-in completion for M-x, let the user find and execute commands by name. And, e.g., `helm-M-x` displays any bound keys next to the command name.


Thank you for teaching me about Shortcat!


One UX issue with the cascading menus in the Start menu was when you had to navigate several levels deep to open a program. Since the menus expanded and collapsed on mouse hover, every once in a while accidentally moving the mouse in the wrong direction meant all my menu navigation progress was lost and I had to start over.


Dang right! It's like half the programmers of today have forgotten keyboards are a thing.


And many others don't even realize that investing a bit of research into better keyboards would give us a much better user experience. I don't mean changing the typewriter-style keys, those are ok, I mean all the other keys; why is there no standard for a reduced keypad with the bare minimum for menu navigation and data entry? Unfortunately the arrow keys are a half solution, since they don't come with yes/no and other control keys. Why is there no contextual Info key to show information/help pertaining to the current task? The computer keyboard as we know it has to be extended, not taken away as the WIMP culture before and touch screens after attempted to do, with disastrous results so far.

I would welcome a kit made of, say, 120 key modules (one key per module, real clicky keys, please!) all with the same pinout, with normal and 2x/3x/4x keycaps, plus special "key" modules such as trackballs, ThinkPad "nipples", analog knobs etc., then a breadboard-style carrier board where I can stick them as I want, plus a decoder board. Then, once the keyboard is ready, I could send a file describing its details to some high-quality 3D printing and assembly service to purchase a PCB and case for my prototype keyboard and turn it into a real one.


>The computer keyboard as we know it has to be extended(...)yes/no and other control keys.

The Macbook Touchbar is pretty much exactly this, it shows e.g. dialog options as keys. I know it's not what you want (I like mechanical keys too), but it's the first actual extension of keyboards in a long time and it's convenient sometimes.

There is also the Optimus keyboard, which sounds more or less like what you want, already built.

>Why there's no contextual Info key to show information/help pertaining the current task?

Usually F1 brings up relevant help on Windows. The actual implementation in most apps is useless though.


>Optimus keyboard

I've had that page bookmarked for like 5 years now, and it's never changed/stopped being out of stock. I'm starting to doubt it ever physically existed...


> Unfortunately the arrow keys are a half solution by not implementing yes/no and other control keys.

Wouldn't "yes" and "no" be redundant with "enter" and "escape"? Most situations I can think of where you can confirm or abort an action, those actions will map to enter and escape.


Yes and no. Functionally yes, because every keyboard has them, but I'd like specific function keys to be part of the "arrow keys section" so that users become accustomed to them, and so that one day, if I have space constraints, I can implement just the arrow keys section and expect the user to be able to input anything without looking elsewhere, including text if I implement a virtual keyboard the user can navigate using arrow keys. I'd like keyboards to have sections that can be snapped together, say the typewriter section on the left, then the arrow keys section, then the numeric keypad, all independent of each other. In this context we could envision keyboard addons such as, for example, customized function keys for very specific operations and/or modifier keys which aren't mapped to the usual shift/alt etc. If every key (sensor/knob/transducer, etc.) was a module which initializes itself according to its capabilities and, once snapped in with others, uses a common bus for communication, we could build on the fly haptic interfaces a hundred times more powerful than any "modern" touch screen out there.


For growing numbers of people they aren't.

As an exercise, I very seriously embraced touch a few years back.

I will not give up keyboard plus mouse, but I am also amazed at what really can be done with touch keyboard and pen.

Lots of people want to expand on that because it expands on phones and tablets.

I have a Note 8, and can create content on it to a level I did not think possible a few years ago.

I use touch keyboard, voice and pen.

At times I carry a small bluetooth keyboard with touch mouse pad. It is not as fast, but is more robust.


I put up with touch keyboards because real ones are not very portable. It's good enough to get by, but it's not good. The same with touch interaction -- selecting a range of text is an exercise in frustration. I don't want that interface on my desktop.

Just because a thing has a CPU and operating system does not mean it needs to have a unified user experience.


Yes! yes! 1000x yes!

Have spent the vast majority of my time on OSX for the past several years, but still reminisce about the wonderful keyboard shortcuts from Windows.


I'd argue classic MacOS was the real winner here, especially for keyboard shortcuts. For example cmd+q always quits an app, whereas in Windows you had ctrl+q, alt+f4, ctrl+w, and sometimes you just had to do the menu navigation (alt f, then e I think? and sometimes it was alt f, x), and yet other times there was no keyboard shortcut at all.


It's funny to mention ctrl+q, because it is one consistency in MacOS that I find unbearable, to the point of having set it to Spotlight instead.

Right next to ctrl+q (something I end up hitting once per session of an app, by definition) is ctrl+a (select all, something I hit tens of times a session), ctrl+w (hit in the browser a good amount), ctrl+1, ctrl+2 (also good browser shortcuts)

I actually don’t need to close the entire program that often! Please don’t make it so easy to accidentally hit at such a high frequency. Bonus points: I switch between QWERTY and AZERTY keyboards a lot. That’s just me but it makes me have a lot of extra pain

Alt-F4 is very nice in that regard.


Select all in MacOS should be CMD+A, not CTRL+A. Try swapping your muscle memory :)


Alt+F4 is a shortcut that closes the current top-level window.

Ctrl+F4 is a shortcut that closes the current child window/tab. For apps with tabs, this was often duplicated as Ctrl+W.

There's no standard shortcut for exit. I believe that this is deliberate, to avoid people hitting it accidentally. However, the accelerator keys to do it via the File menu are always Alt+F ("File") -> X ("eXit").

These are all platform UX guidelines, or at least they were back in the day. Of course, apps can and did ignore them, but that was also possible on macOS.


> Of course, apps can and did ignore them, but that was also possible on macOS.

This is much less likely, given that they are set up by default for every new application.


I find SizeUp to be a lifesaver. Good example of a finished app. It does what it’s supposed to and is incredibly stable and reliable.


Both your comment and the article use the phrase "real work". What do you mean by that? And is there "fake work" that Windows fails at?


Re: usability of the Start Menu...

When I used to do tech support, from Win 98 all the way through Win 7 days, I would consistently find that reasonably intelligent human adults had difficulty understanding the Start Menu. If something wasn’t on the first menu, they weren’t going to interact with it. It seemed like such a no-brainer to me — just expand the folders! — but a staggering number of people found it alien and never adapted. Even the idea of a right-click VS left is too much for many people.


One huge problem with the start menu is that every company put a folder with their company name in there and you had to go inside to launch your program. Adobe->Photoshop, Microsoft->stuff.

I think the desktop designers should have made a fixed set of top level menus. Only show the non-empty ones, but at least make everyone put apps in them. I'd propose a set including: games, programming, engineering, design, media, office, entertainment, audio-visual, system tools.

I'd also suggest subcategories, particularly for games. If there is only one category, or not that many programs total, it could omit the subcategory level when showing that menu.

Put users first and stop sticking your company names in their menus. Add a little structure and some reasonable heuristics. Done.

I think Linux distros could do this since they have packaging guidelines and huge software repositories.
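
Roughly the heuristics proposed above, as a toy Python sketch (the category names and apps are made up purely for illustration, not taken from any real menu spec):

    # Fixed set of top-level categories; every app registers under one.
    # The renderer hides empty categories and collapses the subcategory
    # level when it isn't pulling its weight.
    MENU = {
        "Games": {"Strategy": ["FreeCiv", "Wesnoth"], "Puzzle": ["KPat", "Minesweeper"]},
        "Office": {"": ["Writer", "Calc"]},
        "Programming": {},                 # empty: never shown
        "Media": {"": ["VLC"]},
    }

    def render(menu):
        for category, subs in menu.items():
            apps = [app for group in subs.values() for app in group]
            if not apps:
                continue                   # only show non-empty categories
            print(category)
            if len(subs) == 1 or len(apps) <= 3:
                for app in sorted(apps):   # few entries: skip the subcategory level
                    print("  " + app)
            else:
                for sub, group in sorted(subs.items()):
                    print("  " + sub)
                    for app in sorted(group):
                        print("    " + app)

    render(MENU)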


I thought that Win7 had a good solution to that, with typing immediately doing a search within the start menu. Then, Win8 made it also search for files, destroying any utility of the search. If you want people to have muscle memory (e.g. WinKey + moz + enter to start Mozilla Firefox), then having severe lag on the search, and having the top result vary as a function of time (as more search results are found), is horrendous.


Win7's built in menu search was great. Win8 technically had similar functionality but it was awful for all the reasons you mention (lag, poor search capability) plus it took over your whole screen in a horrible, visually jarring flash making it very unpleasant to use. Win10's is halfway between the two, but close enough to Win7's that I actually use it now instead of just using Launchy and pretending the Windows menu doesn't exist, like I do on Win8.

Edit: While we're talking about laggy keyboard interfaces, AAARGH the Win10 logon screen is awful. You have to hit a key, then wait for the PIN prompt to load and appear, before you start typing your PIN, or it'll eat the first character of your PIN going from "press any key to log on" to "enter your PIN". What the hell is so hard about understanding that keyboard interfaces shouldn't be laggy even if graphics don't quite keep up? Electric typewriters from 1990 got this right!
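
For what it's worth, the fix being asked for here is just an input queue: buffer keystrokes while the next screen is still loading and replay them once the field exists, instead of dropping them. A minimal sketch (all names are illustrative, not Windows internals):

    from collections import deque

    pending = deque()        # keys typed before the PIN field exists
    pin_field = []           # stand-in for the real input widget
    pin_field_ready = False

    def on_key(ch):
        if pin_field_ready:
            pin_field.append(ch)
        else:
            pending.append(ch)            # don't discard input during the transition

    def on_pin_field_shown():
        global pin_field_ready
        pin_field_ready = True
        while pending:
            pin_field.append(pending.popleft())   # replay buffered keys in order

    on_key("1"); on_key("2")              # typed while the screen is still loading
    on_pin_field_shown()
    on_key("3"); on_key("4")
    print("".join(pin_field))             # 1234 -- nothing lost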


> having severe lag on the search, and having the top result vary as a function of time (as more search results are found), is horrendous.

There's no reason search has to do this, though. Spotlight (the macOS version of this) is able to find results pretty much instantly, and it doesn't show any results until it's searched everything, so the top result never changes. I really don't understand why Windows hasn't been able to do the same thing (and if anything, the win10 search is worse).


I wish Firefox would do this too. I've lost count of how many times I've pressed return just as Firefox changes the ordering of the items in the search bar suggestions. Very frustrating; I've since learnt to wait for Firefox to complete its suggestion process before hitting return.


> it doesn't show any results until it's searched everything, so the top result never changes

I don't see how that follows. If it also searches files, the top result can change if any indexed file changes, which would still break the app launching scenario.

Or is it always showing apps on top?


In this case, they are referring to consistent display of results within a single search, not the consistency of results across several searches using the same search terms. Progressive searches that display results as they are found have a nasty habit of changing which result is currently underneath your mouse, right when you are about to click it.
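
One way to avoid that, sketched below under the assumption of two hypothetical result sources (none of this is a real search API): wait for every source to finish, merge, sort deterministically, and only then render, so the list is built once and never reorders under the pointer.

    import asyncio

    async def search_apps(query):
        apps = ["Internet Explorer", "Inkscape", "Intel Driver Utility"]
        return [(0, name) for name in apps if query.lower() in name.lower()]

    async def search_files(query):
        files = ["internet_notes.txt", "interview.doc"]
        return [(1, name) for name in files if query.lower() in name.lower()]

    async def search(query):
        # Collect everything before showing anything, so results never shift.
        groups = await asyncio.gather(search_apps(query), search_files(query))
        hits = [hit for group in groups for hit in group]
        # Deterministic ordering: apps before files, then alphabetical.
        return [name for _rank, name in sorted(hits)]

    print(asyncio.run(search("int")))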


Yup. Mobile Twitter is horrible in this regard. Search for something, it comes up first or second, wonderful; tap i- DAMMIT WHY DID YOU REPLACE IT WITH AN UNRELATED HASHTAG!?!

Result: search, wait 10 seconds for the damn UI to stabilize, tap the result and NOPE, THERE COMES ANOTHER LATE RESPONSE MESSING IT UP AGAIN.


Gnome 2 did this (and Mate still does), but I actually find it more confusing.

It's sometimes not obvious which category a program belongs to, so you have to meticulously search all categories that could apply. And you'd only do that if you don't exactly remember the program name in the first place; otherwise you would have had quicker ways to launch it.

Here the old windows way offers more 'breadcrumbs' to find something.


> I think Linux distros could do this since they have packaging guidelines and huge software repositories.

Many already do such things (most Desktop Environments provide a shell with some kind of software category menu).


KDE already does this.

Screenshot from my system: https://i.imgur.com/csWWl9Y.png

And this is on Arch, which means that almost all software is unmodified from upstream.


XFCE (Arch also) also does this, and I find it a great experience that whenever a program is installed I can find it immediately in the category that makes sense.


That's exactly the sort of top level I want! I once suggested that GNOME games be broken down into categories but I quickly encountered a bunch of resistance. Granted, I probably wasn't addressing the right people anyway.


On the other hand, it requires the user to read minds and figure out how the developer might have classified the application. If I'm looking for After Effects, should I look under Graphics or Multimedia? Or maybe Utilities?

I agree that using Company Name as the top level folder might be the worst option. If I'm looking for some random utility app I downloaded and installed, why should I have to first remember what the name of the company was who made it?


First, the categories need to be a bit more distinct. Also, a program might be allowed in more than one category. In a commercial case I could see stupid companies stuffing their app into as many menus as possible.


> Put users first and stop sticking your company names in their menus. Add a little structure and some reasonable heuristics. Done.

The concern is namespace conflicts. Users may be surprised to find a program has 'changed' when in fact another user installed a similarly named program.


Putting your application inside your company name was (is?) part of the guidelines from Microsoft. Also applies to c:\Program Files and the registry.

I can't remember the rationale, but it does make it hard to find things.


> Putting your application inside your company name was (is?) part of the guidelines from Microsoft.

That was part of the Windows 3.1 guidelines. On Windows 95, Microsoft's guidelines specified that your applications were to appear as single icons under the Programs menu. Unfortunately, almost all developers ignored that and continued doing things the Win3.1 way.


One possible rationale: companies tend to specialize in one specific software domain, so you would end up with your graphic design software under Adobe folder, your Quake games grouped under idSoftware, and your databases grouped under Oracle.


The rationale was that on Windows 3.1, the company would put all of their stuff, not just the program executable, in the Program Group.

So not only would you have the program, but you'd also have the readme and the uninstaller and anything else that goes along with the program.

When Windows 95 came out, Microsoft wanted to hide all of that. The new guideline was to just put the program in the root level of the Programs menu, leave the uninstaller up to the new Add/Remove Programs control panel, and if you want the readme or whatever, go hunt for it yourself. But unfortunately most developers ignored it. They continued building Windows 3.1-style Program Groups the same way they always did.

And, yes, the Programs menu used the same backend as Program Manager. What appeared in one appeared in the other.


> One huge problem with the start menu is that every company put a folder with their company name in there and you had to go inside to launch your program. Adobe->Photoshop, Microsoft->stuff.

It was even worse than that. The app name usually was a subfolder too. It went like: 'Start -> Programs -> Adobe -> Photoshop -> Launch Photoshop' (Plus 'Open Readme' and 'Uninstall' and whatnot)


Part of it may also be down to features. Windows 10 is a mess, with at least two different control panels and more features than any one person would need. Sadly there seems to be a sense that the next version of Windows or Mac OS needs to add new features, visible features.

If you shave a modern OS down to just the features that you'd need to do your work, the economics of carefully designing a desktop and the icons becomes more "reasonable" again.

I still think that a clone of the classic Mac desktop, or Windows 3.11 would be great on Linux.


> Windows 10 is a mess, there's at least two different control panels and more features than any one person would need.

It's a purposeful mess in that regard. They know the control panel is unmanageable for the average user, so they've been building out the Settings app over the last few versions. The settings app is much more like the Settings in iOS and Android and purposefully laid out.

It makes perfect sense to have both those applications exist while the transition is in progress.


They've been building out the Settings app for over 6 years.


I hate how you can only have one instance of the Settings app open, unlike the old Control Panel. It really leaves people who want to see multiple parts of the Settings app at once no choice but to open the old Control Panel.


Yep, it's getting better and better. For example, we can now change default sound devices - and apply monitor colour profiles - using the new settings.

Lots of work remains, but they're getting there step by step. E.g, there's no link to modify the colour profiles yet, but at least being able to quickly check and change from previously defined ones is a good little addition.


This makes me very sad. Remember when you had to learn to program just to be able to load a game from a disc in the 80s-90s? People can't even be bothered to learn nowadays. Is this part of the "instant gratification" societal aspect? Personally I found that every time they try to hide settings it just makes it harder for me to find things. For instance, before the separation of the Control Panel, they made this Categories view. This is the worst of everything. The categories that they put the settings under rarely made sense and you can't just press a hotkey to jump to the known settings panel. I really really hate the category view! haha


If you must use Windows, get Windows LTSC.

A version of Windows 10 that is completely free of bloatware (no Microsoft Store, no preinstalled apps like Candy Crush, no tiles on the Start menu, no Cortana, only Windows search, no Edge, only Internet Explorer). It only gets security updates instead of feature updates, and has support for 10 years. Best version of Windows 10 IMO. FYI it was originally called LTSB (Long Term Servicing Branch) and was renamed to LTSC (Long Term Servicing Channel).

I put it on my new machine and it's the first version of Windows I've been ok with since 7. Buying it is basically impossible for a normal person, but it's readily available on tpb (also a KMS activator).


I recently tried out LTSC again in a vm to see how W10 has progressed. Apps take a long time to open. I thought it wasn't registering the clicks or something, until 30 seconds later it opened it 3 times.

Start menu takes > 0.5 seconds to even begin animating. Disabling animations actually makes the lag subjectively worse.

I installed W7 a few hours later to compare, and I was amazed at the difference in responsiveness. I'm talking like 10-20x faster.

I bought a 5-year-old netbook recently and it didn't have the resources to open W10's start menu. (Well, it could, but it took three minutes.) Did a fresh install of W7, and like magic, you can actually use the computer again.

I don't use any version of Windows anymore, but I'd like to be able to. I'm sad support for W7 will be ending soon.

With classic theme gone, and everything so slow, the best Windows desktop experience for me in this day and age is XFCE.


Running LTSC in a VM right now with Linux / KDE on the host. It is very snappy indeed. A few ideas:

- you aren't running off an SSD and Windows Defender is choking your disk I/O out from under you

- you haven't allocated enough memory to the VM

- you've just installed the VM and a Windows Update is going on behind your back and dragging things down

Opening the Task Manager and seeing what is choking the life out of your machine would be very helpful.

I agree with the parent poster that LTSC is generally pretty great if you've got to run Windows. For everything else, there is KDE :-)

edit: formatting


Maybe it just didn't work well with the VM. Windows 10 is pretty lean and apps definitely don't take a long time to open.


The mess started with Windows 8. Up until Windows 7 the UI was still usable, and the Start Menu was great.


I would genuinely be keen to have another go at linux on the desktop if it had a win3.1 or win95 shell.

Back to usable basics.

I wonder if this project is already out there... (Rather than try to start another side project)


I use this XFCE theme on my Raspberry Pi: https://github.com/grassmunk/Chicago95

It takes a bit of work to set up but once you do, it genuinely feels like you're using Windows 95 but with modern Linux apps.


This is true but users also frequently complained about the number of items on screen at once. I love high-density UI but I think the web and modern design has trended the way it has because focus groups/"everyone else"/"the mainstream" kept saying they were overwhelmed.

Now the prevailing trend is this Fisher-Price children's-toy minimalism, with bright shiny colors and cute mascots. It's insulting.


I wonder if we'll see a resurgence of "real work" ui for desktops and laptops now that most of the "everyone else"/"the mainstream" has shifted to mobile.


I sure hope so. This may be a contentious opinion, but I strongly think that the 'average non-technical user' that has driven so much of modern UI development (for better or for worse) is not representative of someone who should have input when it comes to tools used by professionals. Complex tasks usually benefit from deep, rich interfaces. Simpler ones may be easier to learn, but will always be restrictive once baseline competence has been reached.


> users also frequently complained about the number of items on screen at once ... focus groups/"everyone else"/"the mainstream" kept saying they were overwhelmed

Do you have any links to such studies/focus groups/articles on this? I'd be genuinely curious to read about them.


My experience there is more anecdotal, though over the life of my career one thing I am often measured on is the visual complexity of the UIs that I have made.

At my last job we brought in focus groups at times, and that was exactly their feedback, for an embedded UI for the machine that we sold. It should be noted that that machine required about a week of operator training to use, which included interacting with our UI.


Interesting--I've never in my career actually seen a user study or focus group that concluded that users don't like dense UIs or lots of options. Not saying they don't exist--sounds like you experienced a group that found this. I've just never seen it myself.

What I see a lot is UI professionals and artists simply declaring that interfaces must be minimal and abstract and that visual complexity is bad. Whether this is just current fashionable dogma, or if there is actually research to support this, I have no idea, but none of the product decision makers I've worked with have ever asked for evidence.


Ed Tufte makes the point in his lecture that the sports pages and stock pages in the newspaper contain loads of tiny, detailed numerical statements. Newspapers are generally written for people of median intelligence.


People don't verbalise enthusiasm for lack of busy UI, but they tend to prefer simpler UIs especially if they are not expert users of a more complex UI. It just looks cleaner and less stressful. The choice is expressed through usage rather than studies.


It was something of a unique audience, as these folks were not computer literate and we were training them to be machine operators. On the professional software side of things the big dense UIs seem to be holding their own, so they must have some good :)


I'm not sure screen density is a flaw in the Windows UX/UI style.

A lot of Windows programs have really cluttered, disorganized configuration menus and confusing workflows; PuTTY and ConEmu are two particularly egregious offenders. Seems more like a "developer lacking good design sense" issue, rather than something inherent to the Windows environment.


Agreed. There are two things that seem to be a recurring theme in every redesign nowadays: lower contrast and more whitespace. One of the modern design trends I can't stand is buttons with icons and no labels, sometimes without mouseover text as well.


Buttons with no labels are certainly not a modern design trend. A lot of older (Windows, mostly) software had a complex interface consisting of a menubar, a toolbar (that sometimes had functionality not available elsewhere), and a space for "internal windows". The icons were usually 16x16 and it was really unclear what they were supposed to represent. At least mouseover text was reasonably common.


Most apps allowed you to customize toolbars, and displaying labels next to the icons was one of the options.

However, no conforming app should have had any functionality in the toolbar that was not available in the menu bar. This isn't to say that nobody ever did it, but the OS design guidelines were very clear on that point.


But at least mouseover tooltips were common. It’s downright rare to see tooltips nowadays, almost never on mobile apps especially.


I've long believed that Windows 98 is the pinnacle of desktop usability on many fronts; 2000 was the best NT in my view. The performance was spectacular, and it was rarely unclear how to accomplish something (unless it hadn't been thought of before the release of W98).

Added: I'd be interested in putting together a Wayland desktop environment around the strengths of Windows 98's interface designs (along with some modern discoveries about human interface, and at least whole number resolution scaling). I feel like there should be at least one well-maintained toolkit which doesn't attempt to support full CSS styling on widgets.


I use XFCE4 to imitate Win95/98.


Seconded.

On Unix, I also reached a similar point on SGI Indigo Magic Desktop.


When I first tried Windows 10, while it was still in public beta, I thought "Oh, they are still working on the design, things are still pretty crude and indistinguishable from each other". Then a little bit later I realized it was the final design all along.


> - And -- although Windows-specific: the Start menu was something you could actually use.

New Start menu:

1. Press the Windows key

2. type (typically) three letters of the program you want to run

3. Press <Enter>

That's typically five keystrokes, bound only by the time it takes the user to physically make the keypresses.

User doesn't need any knowledge of how programs are categorized, nor know a hierarchy of categories.

The process by which the system displays programs matching what the user is typing happens as fast as possible.

You can even speak to your damned computer and the start menu will probably react accordingly.

None of these actions require the user to even know they have a harddrive, or a system path, etc.

So modern users have a discoverable, accessible, realtime-responsive start menu that requires minimal cognitive load.

Remind me: how does Windows 98 Start Menu compare to that?


> 2. type (typically) three letters of the program you want to run

Unless what you want to run is, for example, "Internet Explorer": "Inte" will auto-complete to nothing useful if you have a bunch of apps, "Interne" will auto-complete to Internet Explorer, but "Internet" will auto-complete to Edge. Not exactly convenient.

Besides: it's full of ads, and it takes bloody ages to find an application in a list where every item is touch-sized and which doesn't expand to fill your screen. The keyboard entry became necessary because the new structure is impossible to navigate visually (for bonus points, while this structure is supposed to be better for touch interfaces, that's precisely where it sucks even more, because "just type three letters of the program you want to run" isn't too convenient on touch-only devices).

There are environments which manage to get this surprisingly right, such as LXQT: you have a hierarchical menu which is easy to navigate, but if you're faster with keyboard-based search, you can do that as well.

Plus, you know, to us ol' Unix farts, not having to type stuff in order to launch a program is what progress is supposed to look like. If thirty years of UX research gave us the equivalent of bash and tab completion, we might as well go all the way and replace the start menu thingie with a terminal and call it a day.

Edit: also, I don't know what kind of super workstation hardware you're on, but I'd hardly call that thing "realtime-responsive" :-).


To be honest it worked much better in Vista/7, but I still find it more convenient to just press the meta key and type whatever I need, or just use the mouse, because usually what I'm looking for is in the Recently Added or Most Used section.

In the Linux world I think that Plasma nailed that feature.


> Unless what you want to run is, for example, "Internet Explorer": "Inte" will auto-complete to nothing useful if you have a bunch of apps, "Interne" will auto-complete to Internet Explorer, but "Internet" will auto-complete to Edge. Not exactly convenient.

I realize this may not be very intuitive for most people, but in the case of Internet Explorer I would instinctively just type "ie" because I know the name of the executable.


I've used WIN+R nameofexe since 1995 at least. So not that much progress in 20 years.


> Besides: it's full of ads

You can turn all of these off, with a combination of registry/GP fixes. Granted, they should be off by default, especially on Professional/Enterprise, but at least you can do it. - https://superuser.com/a/1348759/100543


I assume the Win 10 start menu search adapts to what you click on, because for me "i" gives Inkscape, "in" gives Inkscape, and "int" gives Internet Explorer.


"int" does autocomplete to "Internet Explorer" here.


Because half the time it finds and launches the install executable


I agree that start menu search was great in Windows 7. However, it is totally broken in Windows 10. Results are unpredictable unless you finish typing out the complete and correct name of exactly what you're searching for. There are a bunch of things in the start menu that it just can't find at all. Worst of all, it's always trying to send me to Bing or the Windows Store. I just want to search the menu!


Or it updates after a few key presses, right before I hit enter and focuses on another result!


This happens entirely too often for me, and the other result that ends up being focused on is "<x> \n See web results".

I've never wanted to search for anything on the web from my start menu, and to make matters worse, it always performs the search using the IE/Edge browser and Bing as the search engine. To date, I've yet to see any way of customizing (or disabling) this behavior.

If I had a list of the "most annoying things my computer does", this would certainly be near the top.


I have a firewall rule set up in Windows to disable outbound connections from Edge purely for this reason.


It also sends your keystrokes outside, in order to get the "Search suggestions" as you type.


It’s bonkers how bad the Windows 10 start search is now. Even when I type in an app I use frequently, it tries to recommend a web search. I’ve given up and usually just use explorer to navigate program files and go to the app directly.


I long ago switched to Classic Start Menu (part of Classic Shell), and I love it. I have a hybrid of Win 98 and Win 7. Hit start, type the first few letters of the program, hit enter.

Launchy is really great as well.


> discoverable

That's the problem. It was far easier to browse through what's available. You can't search for something you don't even know the name of, but you can certainly read through a list.


Even back then it was inconsistent.

Some things were in the start menu under a hierarchy of company name -> program name.

Others went just by the program name.

Some were (company name) (program name).

Some were just a start menu entry, not a folder.


But you could move them around as you pleased. I had Graphics, Audio, Video, Games, etc.


Indeed. The classic start menu was incredibly simple: it was just a menuized view of an actual folder hierarchy.

It is notable that you can create your own such menuized views by right-clicking on the taskbar -> Toolbars -> New Toolbar.


Oh, this brings memories, although it happens still. It was especially infuriating when a company name wasn't marketed as much as a program brand itself (which is fine by me) and you had to browse to endless meaninglessly named folders to find what you were looking for.


> You can't search for something you don't even know the name of

You can if those search results aren't limited to literal name matches, but also consider the intentions the user expresses with their search terms. Maybe associate a bunch of keywords with the result. Terms like "backup", "update" or "presentation" should lead to relevant applications/settings regardless of what they are actually called.
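
As a toy sketch of that idea, assuming a hand-maintained keyword index (the entries below are invented for illustration):

    # Map user intentions to targets, independent of the targets' literal names.
    KEYWORDS = {
        "backup": ["File History", "Backup and Restore"],
        "update": ["Check for updates"],
        "presentation": ["PowerPoint", "Impress"],
    }

    def search(term):
        term = term.lower()
        hits = []
        for keyword, targets in KEYWORDS.items():
            if keyword.startswith(term):   # match the intent word, not the app name
                hits.extend(targets)
        return hits

    print(search("backup"))        # ['File History', 'Backup and Restore']
    print(search("pres"))          # ['PowerPoint', 'Impress']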


I very much like the search-to-open feature and it covers 98% of my uses, but every so often the search doesn't seem to index everything, or outright refuses to find things like calc. Or the start menu would take forever to open.

As someone else pointed out, the discoverability isn't as good, but I think this has more to do with the fact that the start menu items aren't all neatly collected in one location on your disk anymore, and with the amount of space the new menu uses: if I always have to scroll to find what I want, it has already lost the race against a list that shows most if not all applications at once.


>So modern users have a discoverable, accessible, realtime-responsive start menu that requires minimal cognitive load.

Discoverability is, to me at least, a nightmare on most modern operating systems, mobile included. I don't think it was much better on older operating systems, but at least they had a manual and less stuff to worry about.


I get differing results for the same input depending on how sleepy I am and how loaded the machine is. It's utterly useless for me.


I never understood how it is even possible to have the search through the start menu hierarchy lag so much. What is going on there? Modern Windows keeps the menu contents cached in a file, and keeping the tree of labels cached in memory wouldn't be a huge deal either - probably around 10kB for just about any realistic system.
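
For a sense of scale, here's a rough sketch assuming the usual Start Menu shortcut folders (the paths are the common defaults; adjust for your system). The whole label set is a few hundred short strings, kilobytes of data, and a substring scan over it is effectively instant:

    import os

    ROOTS = [
        os.path.expandvars(r"%ProgramData%\Microsoft\Windows\Start Menu\Programs"),
        os.path.expandvars(r"%AppData%\Microsoft\Windows\Start Menu\Programs"),
    ]

    def load_labels():
        labels = []
        for root in ROOTS:
            for _dirpath, _dirs, files in os.walk(root):
                labels += [os.path.splitext(f)[0] for f in files if f.endswith(".lnk")]
        return labels          # typically a few hundred entries, well under 10kB

    def search(labels, term):
        term = term.lower()
        return [label for label in labels if term in label.lower()]

    labels = load_labels()
    print(len(labels), "shortcuts cached")
    print(search(labels, "calc"))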


It's ridiculous how it fails to perform its one job, in such a spectacular way.

It takes several seconds for the results to load - I know performance may degrade in a VM, but come on! Searching through a list of strings is an interview question.

Not only that, all the key-presses of "Update" show the Java update program; only when I add the final "s" does it show the "Check for updates" system applet.

And with the amount of cpu/memory SearchIndexer.exe consumes, it's not even a bad joke.


I'd say this backs up the original point, because searching is what you do now instead of using the start menu. The huge alphabetical list of programs plus different-sized icons that make up the "menu" part of the start menu now is not so great.


> Normal-sized widgets which left enough room for content vs. the huge widgets we use today for some reason (it's weird for me to say this, but Linux desktops are the worst offenders here)

This is one reason why, on Linux, I prefer KDE/Qt applications above all others.

Ever since I first started using Linux in the early '00s, I've noticed that GTK+ applications have excessive padding, and the widgets just look huge, regardless of what GTK theme you're using. Qt, on the other hand, has a number of theme engines with small, tight widgets.

Personally, I'm a huge fan of QtCurve. You can customize it exactly how you want it, and it's a godsend. I just wish the GTK devs didn't torpedo the possibility of making a GTK3 version available.

For some examples, I opened up a KWrite window and took a couple screenshots of the main UI and the settings dialog: https://imgur.com/a/bx1dk8h


Heh. I think something like the GTK3 application-managed header bar is actually the best compromise if touch-first interaction is going to become important. I mean, even Windows 10 now has huge titlebars in order to support the touch case, and that's surely no better than what GNOME/GTK does.


>Ever since I first started using Linux in the early '00s, I've noticed that GTK+ applications have excessive padding

Get the Tenebris theme.


There were some things that Windows 98 did better than the UIs of today, but overall I just don't agree that it was "excellent". I definitely wouldn't want to go back to it, and here are a few reasons why:

- Even purely from a UI perspective, I much prefer modern Chrome's flat and minimal UI over Internet Explorer 4, and I think most other people do too. At the time, Microsoft claimed that IE4 was an integral part of Windows 98, so I'm going to consider it as part of the Windows UI rather than just a standalone app :)

- The taskbar doesn't scale very well, and once you get more than a dozen or so windows, each entry with the same icon becomes indistinguishable. This was particularly bad because browsers at the time didn't have tabs. As I type this, I have 14 tabs open in various browser windows, and this just wouldn't have fit in a Windows 98 style taskbar. Note that I consider the non-tabbed single document interface ("SDI") to be an integral part of the Windows UI here. Both SDI and MDI (multi-document interface) were part of Microsoft's UI guides, and MDI was even worse than SDI.

- The Start menu also doesn't scale very well, and could get very deep, which confused users. It lacked a search function like Windows 10, Mac (via Spotlight) and most Linux DEs today have.

- No support for virtual desktops, which Windows 10, Mac and Linux DEs today all have.

- Network Neighborhood was slow af, and was confusing for users to configure. Apple's AirDrop has a much better UI for sharing files on a network.

- Active Desktop.

- At the time, Microsoft was experimenting with integrating the web with Windows, and one of the things they did was put hyperlinks all over the place. They even experimented with changing it so that desktop icons and icons in Windows Explorer were links, and this caused a lot of confusion due to the inconsistency between a single-click vs double-click.

Some of the problems here are that we have much more computing power now, and (at least I) tend to keep more things open at a time. There were also numerous Internet UIs that Microsoft was experimenting with at the time (NN, AD, MSN, etc.) and many of them didn't work out.


They also worked with what they had. Limited by 640x480, 16 or 256 colors, they weren’t wasting cpu time and memory on alpha blending a menu so it fades in...

At least when they started doing animated menus and such you could still turn those features off.


You could still get that with the Windows Classic theme up until Windows 8, when they took it out. I'm not sure what I'll do when Windows 7 support ends next year; all the new Windows interfaces have been horrendous.


It's not all sunshine and rainbows.

- Start menu was crap without search. Search, and being able to just start typing, is extremely important for day-to-day usability.

- The toolbars of many applications used to contain far too many buttons that almost nobody ever clicked on.

- I consider the taskbar in Windows 7+ to be the best way to handle multitasking and switching between applications (including the previews on hover, the wheel click for new window or closing an existing one - just like browser tabs work, etc.). No other OS/environment even comes close in this regard.


Not to mention, the operating system was just a product that you bought. Instead of a “free” “upgrade” that get foisted on you to monetize your personal data.


at the risk of a useless comment, +1.

windows 98 may have been the height of desktop UI.

the windows file dialog is still, puzzlingly, the very best out there. no idea why other platforms are so resistant to copying it.


The widgets are great for a resolution of 1024x768 or so, but I think larger ones are better for today's displays, e.g. 27" 2560x1440.


That's what UI scaling is for.


2x scale still looks very good.


And speed.


I feel like older user interfaces treated me like an adult.

Whatever I did on Solaris [1] or even early OS X [2] felt like I was doing real work, important stuff, even if I was just messing around.

I don't know what changed, I use both Linux (Gnome 3) and macOS Mojave daily but they both lack that polished "workstation" feel. Maybe it's all in my head or I'm just getting old :/

[1] http://agilo.acjs.net/files/screenshot_solaris.png

[2] https://forums.macrumors.com/attachments/picture-2-png.57621...


Our overall computing environment acquired a distinct patronizing/infantilizing feel to it in the last decade. I don't think it's only visual -- or even visual at all, not sure.


We've gone from "the process has performed an illegal operation" and "file not found" to "Oops!" and "we looked everywhere, but..."


Again, pretend you're a normie and imagine how you'd react when you are told that your computer has performed an illegal operation.

That's the sort of phrasing you don't want to inflict on everyday users.


> Again, pretend you're a normie and

And right here is why modern tech is so condescending.


Okay, then how about "pretend you're a human" as opposed to the lizard people who design and write computer software. Spin it however you like. You still have to make the distinction between techies and "the rest of us". Systems have been designed that do not make this distinction and assume the user will be able to figure everything out, the most notorious of which is Unix -- when unadorned with Apple treacle, arguably one of the most user-hostile systems in common use. Normies perceive Unix as being more arrogant than the systems which "condescend" to them. It seems to say "Oh, you don't belong to the super-secret cabal of users who know these arcane commands? Fuck you, then!"


> You still have to make the distinction between techies and "the rest of us"

The language you use for this is important because it shapes the way you think about the difference. The way it is often phrased is in the form of "we're special, better, smarter people than those dumb people who have no hope of understanding the arcane magicks we are naturally attuned to". Which is of course bull. We have specialized knowledge and familiarity from spending years working with this stuff. That's it.

> [UNIX]... seems to say "Oh, you don't belong to the super-secret cabal of users who know these arcane commands? Fuck you, then!"

It seems to say that because that's exactly what UNIX says. They don't even name commands sensibly, not even in 2019. Discoverability basically doesn't exist.


> Systems have been designed that ... assume the user will be able to figure everything out, [such as] Unix -- ... arguably one of the most user-hostile systems in common use.

Um, you do know that Unix used to come with user manuals? Like, oh I dunno, the vast majority of software in the 1980s and early 1990s? The designers of Unix and comparable systems were perfectly aware that command-line incantations cannot be figured out simply by sitting at the system and playing with it; this is very much not what it was designed for!

If discoverability by novice users is a priority, then that is an argument for menu-driven, interactive interfaces and UIs - which could well be built on top of something like UNIX. But documentation is always going to be important.


Unix manuals are reference manuals, not training manuals. To learn something from them, first you already need to have a very good idea of what you're looking for.

The kind of documentation that Unix comes with is of little use to people who don't already have some specific training in computing disciplines.


I believe the parent poster is talking about the paperback books that used to come with your operating system. Not manpages.

I learned the command line from a book that came in a Redhat boxed set.


Then rephrase "illegal" instead of replacing it with "oops". Btw, "normies" can also learn something new.


I agree that a message like "This program attempted to do something the system won't allow" would be far more useful, along with a "more info" button with a more detailed error description behind it. It sure beats "oops" and "something went wrong". But people tend to forget what a computer said or did and remember how it made them feel. So the market pressure is toward mollycoddling error messages and away from informative ones.

(Also, the word "oops" was chosen because it connotes "something went wrong and it's our fault" -- probably chosen to avoid implying that it was the user's fault. Really ingenious, again, if your goal is to keep users comfortable rather than fully informing them.)


Linux has had "oops" for ages, though.


With the crucial difference that the somewhat funny message is followed by actual detailed description of what happened instead of "something went wrong".


"Something went wrong" ugh, I HATE it so much, and they don't even dispatch a highly trained team of monkeys to fix it either.


Linux has had everything for ages, and that's kind of its problem, too. Everything that fits in one bin will have a counterpart somewhere that fits in the opposite bin.


My opinion is that the overall demeanor of desktop user interfaces have steadily become overrun with that of mobile UIs. The "mobile first" mantra has taken a toll on desktop computing, and it's difficult to measure because the desktop computing of a 2019 unmolested by that infantilizing influence (as you rightly put it) cannot be seen from the timeline of reality.

But most of us who have been around for a while can imagine a modern computing environment that still treats desktop computing as desktop computing (and not just large form factor mobile computing).


Tiling window managers. It's not a new idea, and really the only way to be productive on a computer.


They're a half realized idea though. The value of tiling WMs is that they allow you to compose what is essentially your own workflow dashboard and save it. What needs to happen is to complete the idea: entirely composable GUI interfaces.


Xmonad allows you to launch programs in a configuration of your choice. Hell even tmux does.


The only way to be productive... for a developer. The great majority of computer users have little to no need to tile anything, to be productive.


I'd argue that the desire to have windows side-by-side when multitasking is far more common than to have them be particularly overlapping and hiding one another. Certainly a lot of what I do on a computer is development, but tiling makes composing emails, image editing, writing, and web browsing less painful too. Floating windows seem to be prioritizing a metaphor over usability.


>Floating windows seem to be prioritizing a metaphor over usability.

That's why Microsoft built an "always on top TWM/FVWM IconBar" of its own: the taskbar.


The widespread shortcut/gesture for making a window full height and half-screen width was a good middle ground for me.

Tiling WMs (which I tried 10 years ago?) would always break on some programs (say Gimp), then you had to run that program in "floating mode" and it's already too much overhead for me...


Nah, CWM is better. A floating, undecorated environment with a menu switcher and tags everywhere. Just open a FEW windows per tag, learn to context-switch LESS, and keep a KISS approach ;).


We're in the Fisher-Price era: pruning advanced features and providing strong visual cues in the form of bright colors is still all the rage.

Who cares about having a file explorer on their mobile device? Who needs advanced networking options on their laptop when they're just using coffeeshop wifi? It'll probably get more and more segmented.


As much as I agree, bright colours would be a step up! Most UIs these days seem to be a blend of contrastless greys.


The UI testers for Windows 95 found that people were baffled by hierarchical file systems, even given the conceit of calling directories "folders" (which I found to be infantilizing and infuriating). The confusion and rage provoked by error messages intended to be specific and somewhat meaningful has become a pop-culture meme. ("PC LOAD LETTER? What the fuck does that mean?!")

I've recently had the fortune of talking at length with my mom about her past, and one thing she brought up was how she felt when my dad brought that first desktop computer into the house. To her, it was kind of like a typewriter (which she understood), and kind of like a television (which she also understood). You type things, and they appear on the screen, but -- and this is the spooky bit -- other things may appear on the screen that you never typed. It's something she got used to quickly enough, but never totally came to grips with.

I think most people -- even very smart people -- are like that. They don't know how to deal with a machine that works semi-autonomously, in ways that don't obviously correspond with their input, nor to form an internal model of how it works, nor to engage with the machine transactionally in order to successfully operate it to complete a task ("if I do A, the machine's internal state will become B and I can expect its future behavior to look like C"). This comes natural to us, because we're techies and this is what we do. Some people can sit at a piano and play it like nothing. I can't!

The insight of the GUI was to draw a representation of the machine's internal state (or a highly simplified model of it) to the screen in terms that humans readily understand, along with available options for a human response (in the form of buttons and pull-down menus). Early GUIs prioritized the mapping of machine models to aspects of the real world, leading to things like the spatial Finder, which presented the file system in such a way that we can use our instincts for how we find things in real space to navigate it. This approach gets you some leverage, but there are limits to how far you can go with this. As time went on, we ran harder and harder against those limits. Typical office users may have fared okay, but then computers started to enter the home in a big way AND started to be networked in a big way, leading to a whole new base of inexperienced users -- who might've otherwise never touched a computer in their daily lives -- confronted with an overwhelming tidal wave of possibilities. And they became baffled, mystified, and frustrated by even the easier-to-use, Windows 9x era interfaces we had. And then, a decade later, smartphones created a whole new base of confused users.

So the designers of today, having exhausted all the good ideas of how to solve the problem, resort to the UI equivalent of shouting at a deaf person: dumbing down the UI, removing elements considered to be too distracting, enlarging and spacing out the ones that remain, replacing specific error messages with meaningless but inoffensive blobs of text ("Something went wrong", "There was a problem", etc.).

Even more maddeningly, some of these changes were inspired by corporate communications. Some of these new error messages ("We're sorry, but...") resemble the old broadcast-TV error message of "We are experiencing technical difficulties. Please stand by." But the thing you have to understand is, this sort of communication works on normies. They don't need specific details of what went wrong, what they need is to be reassured that everything, in fact, will be okay. From an appealing-to-normies standpoint, "We are experiencing technical difficulties" would have been a vast improvement over a common Windows 9x error message -- "This program has performed an illegal operation and will be shut down." To a normie, "illegal" means criminal! The Feds put people in prison for a long time for computer crime; imagine the panic that would set in if you, knowing nothing about how a computer works, were suddenly told that it had done something illegal!

So really UI designers are just prioritizing soothing users over giving them actionable information and fine-grained control. The next revolution in UI design will be in making users well informed and capable without alarming them. I'd prefer that everybody toughen up a little, and basic understanding of how these machines work becomes a part of our civilization's literacy requirements, but that's nearly impossible to achieve given current market forces.


GUIs represent a machine’s internal state, but that representation is often misleading, especially to users who take it literally.

Take object persistence. It’s innate to assume that objects don’t go away simply because we can’t see them. Documents don’t vanish in real life simply because you stop looking at them.

Many people don’t understand why a document on a computer screen can vanish, because they don’t understand that that document has to be assembled from data and code every time it’s opened. They don’t understand why it should look different in a different version of Word (or worse, in some other program), because objects shouldn’t change when you view them somewhere else.

They don’t understand why you can’t just put a Word document in an email, or a website, or in ‘the cloud’ and edit it in-place. To many people the editing functionality is inherent in the document (not the system), and they don’t understand that, without the system, it’s just a series of bytes with no inherent meaning or functionality.


> GUIs represent a machine’s internal state, but that representation is often misleading, especially to users who take it literally.

And that's largely the fault of the developers, since they build on layers upon layers of utility libraries, which are not exposed to the user but inevitably pop up in the form of a broken metaphor or an unintelligible error message.

User-facing systems should be defined around powerful data & workflow metaphors, with all the layers in the system built around supporting those metaphors in coherent ways.

There is a tradition of people trying to build user systems around simple concepts, easy to combine (starting with Memex, then Smalltalk, HyperCard, and nowadays with mobile OSs). But there's always been a great deal of friction in adopting them:

- first because their experimental nature can't compete with the more polished nature of commercial systems based on legacy conceptual metaphors;

- and second, because up until recently, end-user hardware was not powerful enough to support the complex graphical and computational requirements for the heavy environments required to support these novel interfaces.

Now that computers are powerful enough to build novel experimental interfaces on top of all the legacy libraries required to run generic hardware, we're starting to see a lot of experimentation again with those system-encompassing alternative metaphors for interaction.


I did mention that GUIs have limitations in how accurately they can represent machine state. You've done a nice job in elucidating some of these limitations.


> They don’t understand why you can’t just put a Word document in an email, or a website, or in ‘the cloud’ and edit it in-place. To many people the functionality of the editing is inherently in the document, (not the system) and don’t understand that, without the system, it’s just a series of bytes with no inherent meaning or functionality

Hello, OpenDoc.


”felt like I was doing real work, important stuff, even if I was just messing around.

I don't know what changed”

You got more experienced. When you’re looking at your second, third, etc. system, there always are cases where you think “This is so easy on ‘Foo’, why does ‘Bar’ make it so difficult?”, and feel like you’re wasting time, even if it isn’t really difficult on that system, but just different, or if it is difficult because you are working on step A, but the new system has a better workflow that does steps A through Z in one go.

If you ask people what’s the most fondly remembered or impressive OS, computer game, word processor, mobile phone, music player, etc., it often is the first one they really used.


I don’t think it’s just personal experience. I had a similar feeling as the OP, but my feeling happened at a very specific time. I upgraded my MacBook from Snow Leopard to either Lion or Mountain Lion (I can’t remember which version it was), and overnight I went from having my entire life spelled out on my calendar to basically not being able to use iCal (it might have been renamed by that point as well) at all.

The new version of iCal’s only purpose was to look pretty and offer very basic functionality. The older version might have started looking dated, but I could use keyboard shortcuts and see details about my appointments easily at a glance. The new version didn’t even want me to know details existed.

The same story played out in Mail.app, Address book, iWork, etc.

MS’s new “Modern” apps show that the same influences have driven Windows development in recent times as well.


Indeed, for me as well, every update up to Snow Leopard felt like Christmas.

I didn't understand Lion, at all. As much as I loved skeuomorphism on iOS, it felt out of place on the desktop.


> If you ask people what’s the most fondly remembered or impressive OS, computer game, word processor, mobile phone, music player, etc., it often is the first one they really used.

I feel like this is an odd statement to make with no data.

I can only speak for myself and my partner, but our current systems hold much more love than anything that came before.

For me i3 on Linux is mature enough to not be intrusive into my life, mpd as a music player and so on.

For my partner, she uses a Mac/iPhone/Apple Watch, and after coming from windows 7 she finds it “much better”, and “I would never go back”

Games are another example. I played hundreds of computer games in my youth, from Donkey Kong on the Commodore 64 to Rayman on the PlayStation. And my most fondly remembered game is almost certainly Grand Theft Auto: Vice City, which is a much later title.

I don’t think it rings true that people love the first thing they learn on. I’m not keen on MS Windows 3.1 today, or MS operating systems in general; in fact, quite the opposite.


What changed is that computers went from a tool used by professionals who didn't start out as power users, but expected to become them, to mass-market devices intended for casual users who expect not to have to learn anything or to think/decide -- aka for things to be intuitive.

Neither is inherently bad. The problem comes when you're a power user forced to use a casual product, or vice versa.


When did we give up on tools that scale well from casual users to power users?


The belief that there are things which are great for everything is false. Everything is a compromise. If something is good for everything, it is optimized for nothing. Power users don't want good. They need 100% optimized for purpose, the #1 best. Which, btw, means giving control and power to the user. The UI and design worlds push for the opposite extreme: all control taken away from the user, everything scripted, an "experience" delivered to the user. Those professions' selling point is "we know what's best for users" -- a sentiment which doesn't work for power users.


About when the internet started taking off as a consumption medium I think. Desktop computers started to show up in every home and were largely there just as a gateway to the internet. Prior to that, they were mostly for office work and computing enthusiasts with the occasional non-computing-enthusiast gamer thrown in. Since the consumer market was much, much larger the focus shifted.

The strange thing to me, though, is that once smartphones and tablets took over as the preferred platform for internet consumption, the desktop OSs didn't start reverting to targeting the market that still wants them. Instead they doubled down on trying to turn desktops into smartphones.


> If you ask people what’s the most fondly remembered or impressive OS, computer game, word processor, mobile phone, music player, etc., it often is the first one they really used.

Anecdata: I used Windows for the first 10 years of my computing life, and today I'd rather use any obscure Unix over any Windows. The "Unix philosophy" as an attempt to produce a consistent UX has held up pretty well over 40+ years.


Early OS X looked like a weird candyland to me. Not as lurid as XP, sure, but the 3D-like glassy buttons and the sliding animation as you minimised something always struck me as kind of overkill.


When Jobs first showed off the Aqua UI, he was pretty excited about the æsthetics and what Quartz could do. There were also many practical things shown, but…


"I want to lick it!"


I miss old Aqua. The earliest revisions were definitely over the top, but the version that was part of Mavericks that had many years of refinements baked in struck me as a nice balance — it was more subdued and professional than its predecessors without being stripped of personality (unlike the flat theme that came with Yosemite). It was nice.


What about the stripes? What were they thinking with the stripes.


Early Aqua looks like it was designed to match Apple's hardware. I don't think anyone misses the stripes, though.


It was quite the shock coming to OS X from NeXTSTEP. I actually ran with NeXTSTEP icons on OS X for a lot of years until they made it super hard with SIP.


>I feel like older user interfaces treated me like an adult.

They did, because usually only adults used PCs.

Now kids through elders use PCs and there's nothing wrong with making the UX more friendly to people unaccustomed to working in tech.

It's in your head because I think you're missing the roles PCs now play for everyone in society.


In 1998, 42% of US households had a computer. Children used them more than their parents did.[1] Many people without computers at home used them at work, school, or libraries.

There's nothing wrong with making error messages less intimidating. There is something wrong with not giving any information about the problem or not even displaying an error message.

[1] https://www.nsf.gov/statistics/issuebrf/sib00314.htm


It seems to me then that a big problem is designing interfaces to the lowest common denominator, then removing options for advanced users. This is a problem I see even (especially?) on Linux, which gives you limited GUI control and if you want to do anything beyond the most basic changes, expects you to know how to configure everything on the command-line.


A lot of it, I think, isn’t trying to design for the lowest common denominator, but trying to find abstractions that fit problems more accurately. When that works well, you end up with much higher accessibility and fewer errors.

As with all abstractions, though, they tend to leak. Software design tries to minimize those leaks, but they have to prioritize which ones to fix.

Advanced users like you or me don’t need those abstractions nearly as much, so we’re not prioritized. Which is probably fine. We end up seeing the leaks in the abstractions a lot more because of it, though.


> As with all abstractions, though, they tend to leak.

That's part of the problem, though, and not something that should be brushed aside. The old designs were good partly because they operated one abstraction level lower, where the leaks were inherently much smaller.


Yes, but that lower abstraction level also meant that far fewer people were able to use those systems.


I wasn't exactly an adult in the Win95/98 era and I absolutely loved that computers didn't care one bit and spoke to me indiscriminately. It did not feel like some game and it was great.


I think the older systems had to prove that they weren't frivolous compared to text interfaces. The newer ones don't.

I stick with KDE and have been happy.


KDE is nice because of its configurability. You can make it behave like Gnome or Windows or something else if you like.


I think much is due to the mobile influence. Apple worked a lot of hidden functionality into iOS. That’s led to things like the applications folder being nearly forgotten about and left out of the UI. App switching used to be done by the top-right drop-down, but that would never fly now. Etc. I don’t fully know how to explain what’s missing, but something is.


It's possible to get Solaris CDE working on Linux [1]. I'm not sure how usable it is though.

[1] https://sourceforge.net/p/cdesktopenv/wiki/LinuxBuild/


XFCE was originally a visual clone of CDE, but coded in slightly less obtuse widgets (GTK vs Motif). XFCE has remained pretty great, supporting the new, but still keeping the spirit and speed of the old.


XFCE used XForms first.


I used CDE on Linux for a few weeks on a laptop (IIRC the only weirdness involved in building it was Imake, which makes sense for pre-X.org X11-related stuff).

It works perfectly for the same values of "works" and "perfectly" as on commercial unices. In other words, it sits in the middle between a lightweight WM and a full desktop environment, mainly because there aren't any applications that meaningfully integrate with CDE apart from the dt* stuff (text editor, terminal, calculator...) included with the CDE distribution. For CDE's design to be meaningful you really want CDE applications that integrate with its object model, not just plain X applications; otherwise it is only a somewhat mis-designed window manager.


As a WM it is VERY useful and wraps windows for modern apps without issue. Some of the CDE apps though (dtcalendar and dtmail) are in need of a complete overhaul to be useful with modern protocols.


Older interfaces were designed to help users learn. Newer interfaces are designed to let users do fewer things with less learning. GNOME went from underlining shortcuts in menus to eliminating menus.


No, older interfaces were designed to expose as much of the available functionality in the system as possible.

There was nothing in common widget libraries or development processes that helped users learn how to operate the system. Merely exposing all the functions is of no use if you don't already know what their meaning is and how you're supposed to use them.

People learned more in those days not because the interface made it easy, but because they had no choice if they wanted to use the system at all.


Early OS X [2] in no way looks like a "workstation" UI. It looks like an interface for a toy.

Solaris [1] UI certainly does though!


My first real (i.e. work) usage of a GUI was DECWindows[1] on a VAXStation, which felt like a real GUI for getting work done. I believe it and Solaris GUI share a common heritage. Even now, I miss the utilitarian feel of it. You can install DECWindows/Motif style themes for Linux but it just doesn't feel 'right'...

---

[1] http://toastytech.com/guis/DWindows.html


You can build CDE on most major Linux distros and run it for that MOTIF flavor. I have been using it daily on Debian and Ubuntu since it was open sourced a few years ago.


There’s a port/tribute of the Irix 4dwm available for modern Linux. I don’t get the satisfying clunk when I drop a file in the dumpster, but otherwise it keeps me from wasting money on an Octane and all the gear necessary to output to a digital display.


I think this can be partially attributed to:

A: You didn't use to have a "workstation" at your house

B: The machine you had at your house was a completely different platform than say, Solaris machines or terminals/mainframes etc.

C: The UI/UX of the work machine and the home machine are now the same -- so it's easy to do the "home stuff" on the work machine now.

D: Fewer people than ever have a dedicated "work machine" and do a lot of personal stuff on that "work laptop" even if they aren't supposed to.


Much of the appeal of early microcomputers was actually their similarity to the more powerful machines that were being used for "real work". This was true even in the 8-bit era with CP/M, a command-line system that was used almost exclusively for work purposes. That CP/M interface took quite a few elements from the minicomputer and mainframe terminals of its day, and the trend continued with MS-DOS and early Windows. Pure "home computers" did exist during the same time period, but they still looked a lot more professional than the toys we get today, and you could even use them for some light office work.


I still think KDE v2.x was a pinnacle of functional, no-bullshit GUI design.


Older KDEs were overall quite good, but I personally found the deeply nested K-menu quite difficult to navigate quickly.


There are multiple different application launchers provided in KDE by default.


Window Maker achieves what OS X tried and failed to emulate.

http://www.windowmaker.org/themes/


OS X wasn’t trying to emulate NeXTSTEP.

It borrowed some technologies and ideas from NeXT but the final product from a UI/UX perspective was more a continuation of what is now Classic Mac OS.


More like a hybrid.


FWIW, I switched to MATE[1] instead of GNOME 3 after GNOME 2. It's a fork of GNOME 2 that retains the look and feel of GNOME 2, and thus far, I've been fairly pleased with it.

[1]: https://mate-desktop.org/


Ubuntu Mate also ships with Compiz, so you can have all the desktop effects still.


I think it's nostalgia. CDE and Motif were uniquely ugly. I much preferred OpenWindows (SunView).


There’s a fashion maxim that you should look at designer magazines from 15 years ago. 15 years is about half a fashion cycle so you see the goods in their least flattering light.

Today that would roughly correspond to looking at Windows Mobile CE interfaces or OSX Panther/Safari 1.0. Anything older and it starts coming back into fashion.

The rise of the Windows 95 AESTHETIC a couple of years ago, and now this, seems to confirm a trend. Certainly so, if you throw in some art projects like Windows ‘93, recent fashion and music trends around vaporwave, and renewed interest in PC-9800 emulation.

Love it.


>There’s a fashion maxim that you should look at designer magazines from 15 years ago.

Everyone is copying the typefaces and color schemes from the magazines of the '70s.

>The rise of windows 95 AESTHETIC

Most of the kids with the AESTHETIC meme never even used Windows 98, or weren't even aware of computers. I remember w9x not as a fashion trend, but as a shitty OS that was a nightmare to manage just to keep it from crashing while installing a driver. Installing games took ages, and viruses were a real thing.

Also, everything was shareware. Libre software and Linux/BSD were not known outside academia until the very late '90s.

If they had known it and lived through it, they wouldn't be so fake-nostalgic.


I lived through all of this, and I find it very nostalgic.


Seeing only the icons is barely half the truth. There should also be a rendering of the noise those boxes made during operation. With every movement of the mouse and each key press the hard disk made a sound as if it were being eaten by the cookie monster. And of course the disk access indicator flashed like mad all the time. Contemporary hardware seldom reaches that level of entertainment.


It's ironic that the mass adoption of SSDs also coincides with the gradual disappearance of HDD activity lights on laptops, because with a completely silent SSD the light becomes the only way to know if the machine is slow because of heavy disk activity or something else. Nonvisual/"side-channel" cues are extremely useful for understanding what's happening.

I suppose it's similar to the situation with newer cars, where the engine is so quiet that one sometimes forgets whether it's even on, and attempts to start it again. There have even been laws introduced to make sure that cars can be heard: https://news.ycombinator.com/item?id=8925126


I find indicator lights fairly useless if they only have the purpose of conveying something that's already obvious in some other way (e.g. a "the computer is on" LED plus a monitor that's clearly showing it is). I'm glad that most devices nowadays don't have them and are completely dark even when they're on.

And to your comment about cars, it seems to be more about pedestrians that can know about an oncoming vehicle and less about whether the user thinks it's running. The latter seems to be something that can be easily fixed.


The indicators are normally not used, but when they are, they're very useful. When the screen is off on a laptop, is it because the whole machine is off, just sleeping, the video output is set to external, or something wrong happened? I've had to use a laptop where the POST was unusually long, and without the power LED I would've probably accidentally turned it off again because I would think I hadn't pressed the power button hard enough, and try pressing it again.


The flip side is status lights to show something is off.


I remember that noise. I blame Microsoft more than the hardware. From 1984 to about 1994, Macintoshes were my main computer, and they didn't make noise just for moving the mouse or opening a menu. Later, I ran Ubuntu on a computer that used to run Windows XP. The noise and latency that Windows had, just for opening a menu or folder, were replaced by silence and instant action. (Just to be clear, of course all three had to spin the hard drive when I did something major, like opening a file or program. But Windows is the only one where all interactions were erratic.)


I once measured the energy usage of opening the start menu on Windows 2000. It was about 40 Watt seconds.


And in fact, you still get that noise and activity light on Windows 10 and a spinning-rust HD! On a system that wouldn't even manage to use the RAM completely while running on a sensible Linux install, let alone filling RAM up or swapping to disk.

I mean, swapping used to be a fact of life in the late 1990s and early 2000s, even on Linux - RAM was just too cramped back then. But then we got machines with lots and lots of RAM even at the low end, and Linux became snappy and quiet-- while Windows is still as bad as ever.


This — constant swapping — is what drove me to Linux. Though, the laptop I am typing this on is still using 16/32 GiB of RAM, mostly due to the "modern" web. (But that's not any operating system's fault.)

RAM is, I feel, one of the more precious commodities on a machine, still. I have spinning rust in my machines (more space/$), and I've not regretted it, or really had a need for the speed an SSD could bring. (If anything, I think I'd do a hybrid install, with a small SSD and a large HDD.) But I've never once regretted upgrading RAM on a machine, and I definitely miss it on my work MBP.


I agree that it wasn't due to the hardware. It's just that today's hardware has gotten so fast that even Microsoft products seem to run almost smoothly.


My HD light on my laptop still flashes endlessly. I wish it flashed brighter when it was doing more work, letting me know the resource intensive task I just initiated was happening... in some way.

But the grind of the hard drive when something happens. I never would have thought of that again had you not made this comment. Crazy nostalgia there.


I never would have thought of that again had you not made this comment. Crazy nostalgia there

So there's this notion in video gaming where you're strolling through a forest or in a cave or factory or some sort of level with no enemies, no battle music, but you suddenly come upon ammo crates and health packs.

Indicator that a big fight was about to happen.

For me and my early voyages through computing, learning how to write little programs and messing about with settings to see what they did, if I ever got stuck on a problem it was THAT noise that told me "hey you're onto something here".

What a time.


While I don't agree with the article's notion about icons, I think it's true that 1990s UIs, especially Windows and Mac, were particularly productive OSes because they applied not just common UI idioms but standard interactions: Buttons, menus, windows, drag and drop -- everything was largely consistent between each app.

With the web, we had a lot of consistency for a while simply because browsers didn't allow much customization. Initially, all links were underlined, and form buttons had to look exactly like the browser presented them. But then CSS happened and all bets were off. Underlining is largely gone as a UI idiom. It's no longer evident if something is a link or button, whether you can right click and do "open in new tab" (often not possible if the link is not a URL but a JavaScript function), and so on. A "native" app like Slack is all over the place in terms of UI consistency, compared to the strictness of the old IBM CUA standard and others. One may be productive within a single app, but not all of the idioms translate to other apps.

I think we're in a transitional phase where we're halfway between old-style GUIs and something more fluid that approximates real life to a greater degree. Consider the "UI" of a kitchen appliance or the packaging of a new iPhone, or a TV remote control, or just a plain old door. Everyday objects vary wildly in what "idiom" is provided to the user. Some doors have a handle, some have a knob, some have a bar you push. We have the same kind of annoying lack of standards and consistency in the real world, though it's usually evident that you can turn a knob and push down on a handle.

One can imagine a future where UIs are gesture-based, for example. Think of the 3D UI from Spielberg's Minority Report. Some of these UIs may need to offer a completely new way of interacting with objects (grab and make a fist to copy, open your hand wide to paste, or something) that will be difficult to standardize, much like the real world.


FYI, these idioms you talk about are called "signifiers", which are signs indicating what you can do.


There’s a book that talks about these things called The Design of Everyday Things. I’ve only read parts of it myself but a friend of mine read the whole book and said it was good. From the parts I’ve read I agree. The author has given some talks and presentations relating to UX as well that can be found on YouTube.

https://www.goodreads.com/book/show/840.The_Design_of_Everyd...

https://www.youtube.com/results?search_query=Don+Norman


> everything was largely consistent between each app.

Except MusicMatch Jukebox, Sonique and zillions of "who made this?" shovelware.


True, but you could avoid software like that.

These days, such shovelware became the norm, to the point where even built-in apps often look like that.


AKA any smartphone and tablet application.


I'm glad I'm not the only one who remembers CUA, and how much it really gave us wrt UX consistency.

It's interesting that every time I bring it up, such comments get a lot of upvotes. Clearly there's some demand for this sort of UX, at least in this community.


I love the early Windows design, especially the Windows 2000 look. Other than the well designed icons, I find it to be much more intuitive and consistent than modern Windows GUIs and Metro. Part of the reason why I use the classic style on my Windows 7 machine (the other part is better performance). I like it so much I implemented it in ReactJS https://github.com/Gikoskos/react-win32dialog/

Moreover, if you guys haven't read it yet you should definitely check out Raymond Chen's Old New Thing, which talks about the reasoning behind some of the design choices that went down in earlier Windows desktops.


I can relate. What are your plans for when W7 support is ended?


No plans yet but I guess I'd have to adapt. Do you know of any ways to get that look on 8.1 or 10 natively? At least without having to install a third-party theming app.



You might want to consider adding "image-rendering: pixelated" to your CSS; on a high-dpi monitor, these render with bilinear interpolation, which I think doesn't do justice to the crispness of these icons.


Agreed, though with this sort of pixel art you really want either Lanczos, or some palette-based scaling like hq#x. That way you will preserve the "crisp", high-frequency content in the original image as much as possible while not introducing artifacts like a "blocky" appearance (instead, the result is smooth but crisp, like a watercolor painting).

(In fact, I wish modern desktop environments did this automatically on HiDPI screens while keeping the original pixel art as their source-- especially for its improved usability on lower-res displays, which are still widely used, both on desktop and mobile. Instead we tend to get SVG, which while extremely crisp on high-res displays is a mess for the original 16x16 or 32x32 use case.)
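
If you want to see the difference for yourself, here's a minimal sketch in Python using Pillow; "icon32.png" is a hypothetical 32x32 pixel-art icon and the output filenames are just placeholders:

    # Compare "blocky" nearest-neighbor upscaling with the smoother,
    # still-crisp Lanczos filter. Requires Pillow (pip install Pillow).
    from PIL import Image

    icon = Image.open("icon32.png").convert("RGBA")  # hypothetical 32x32 icon
    size = (icon.width * 4, icon.height * 4)         # e.g. 32x32 -> 128x128

    icon.resize(size, resample=Image.NEAREST).save("icon_nearest.png")
    icon.resize(size, resample=Image.LANCZOS).save("icon_lanczos.png")

(Pillow doesn't ship an hq#x-style palette-aware scaler, so this only shows the Lanczos half of the comparison.)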


What is the problem with 'blocky' appearance? All of those filters produce icons in their own aesthetic. If the point is to preserve their pixellation then why bother with filters?


> If the point is to preserve their pixellation

The "point" of pixel art-- what makes it so convenient for graphicians, even amateur ones-- is not the blocky appearance (what you call "pixelated" - but in fact these icons did not appear "blocky" on the CRT screens that were in common use at the time!), but to set a uniform constraint on fine detail (and sometimes color depth) within the image, and then to maximize quality while staying within that constraint. It is perfectly consistent to want a means of rendering these images that preserves whatever level of detail was in the original while not introducing blocky artifacts.


The "not blocky CRT" argument doesn't really apply to personal computer CRT monitors, as even when they maxed out at 640x480 or 800x600 the display was still pretty crisp.

I can kind of see this argument if you're talking about playing nintendo on mom's old dog-eared TV with the UHF adapter... but frankly I prefer to see pixel art in its original unmolested, pixellated form

edit- on that note I remember very clearly that 320x240 games had a blocky appearance in the 640x480 era. That was one of the biggest reasons to get a 3D card!


I remember playing Master of Orion 2 on my computer at the time (mid 90s) and thinking:

1. it's amazing this game actually runs at 640x480

2. there's no point in having resolutions any higher than that, as you can't see the individual pixels at that size anyway (I had a 14" CRT, viewable area probably around 13").

At least in the early to mid 90s you definitely still had "CRT fuzz" on computer monitors.


I think the typical CRT fuzz is actually quite close to the optimum smoothing that could be achieved with a simple, analog system, such as was common in the 1980s and early 1990s - in that it should closely approximate a Gaussian blur! But lanczos (or hq#x) is crisper than that, of course.

(And yes, 320x240 did use 2x nearest neighbor interpolation on later video cards/monitors that could only display higher resolutions natively. But I assume that back in the early 1980s, you would actually get a "native" 320x240 screen, just like on a home computer or console.)


On any CRT screen, you'd just get 320x240 as the native resolution, the "interpolation" basically done by the phosphors of the screen. This was the norm well into the 90s, and not everybody was on an LCD monitor in the 00s, either.

I remember that many gamers (myself included) resisted LCDs for a long time even beyond that, precisely because they could only do one resolution well. If you played old games, this wasn't satisfactory because those were often hardcoded to the resolutions they support - typically 320x200 or 640x480. And if you played new games, you'd often have to dial the resolution down to get them running reasonably fast.


I think the point was that it was the video card that didn't support 320x240 natively, so it NNed to 640x480.


Any VGA card (which you needed to get 640x480) would also support 320x240.


The same art style was used on portable consoles with LCD screens, and on line doubled VGA modes with blocky pixels. Even pixel art designed for SD televisions and low quality interconnects might not have been intended to be blurred, e.g. Chrono Trigger includes a pixel art typewriter in the starting room, where the keys are represented by a single-pixel checkerboard pattern that is easily made unrecognizable by blur. Some designers even showed blocky pixels in printed materials, e.g. the cover art and the instruction manual of Super Mario Bros. It's possible that some artists intended their pixel art to be blurred, but it's not universally true.


If that's your argument, then you should use a CRT filter, not hq2x or lanczos :)


By the criteria outlined in the article I would have thought the Windows 95 icons would be the ideal - they're essentially the same but without the "flashy" gradients and other depth indications.

Side note: it gave me some joy to read the comments and find that the ZIP file with the icons in it was infected with a virus. Now that's the kind of retro I can associate with Windows 98.


Later in the thread they determined the virus detection was a false positive.


the gradients have just enough aesthetics IMO

kinda like NT5 fading... it added a bit of information but without distraction or cost (unlike today's UX)


Windows 2000 was just about the peak of desktop interfaces from a usability and efficiency standpoint; it's really only missing window snapping and workspaces (which is kinda a crutch for bad window management like on OSX)



Oh man, nostalgia wave right there! One of the first things I would do when setting up a new Windows machine was to install that icon pack!


Yeah, these are excellent. Very BeOS feeling.


Oh man, I miss BeOS. It was one of the two OS's I'd bought back in the day. The other was OpenBSD 3, IIRC; I have the CD around here somewhere, too.


I loved the BeOS icons. Spent a lot of time forcing Windows NT 4 to look like it.


From the article:

> Rather than some designer’s flashy vision of the future, Windows 98 icons made the operating system feel like a place to get real work done. They had hard edges, soft colors and easy-to-recognize symbols.

The change is deliberate and reflects a social shift. In the W98 days computers were primarily seen as work devices, and in particular Windows wanted to distinguish itself from the more playful-feeling Mac (which itself chose a more playful feel to address the fear most people felt about their computers). Apple tried to repeat the Mac's playful feeling with the iMac and early OS X look, and while it helped a bit in the consumer market, it reinforced the feeling that they weren't for actual work.

And though it feels like it has, this relationship hasn't changed! The (often forced) playful feeling of modern UIs comes from the phone, and the phone was able to succeed with it -- even needed it -- because 1> people were already comfortable with a non-professional connection to the web, mail, messaging et al and 2> it was a "phone", not a "computer". This didn't worry Nokia or Microsoft at first because they had the "professional" devices, and we probably forget the days of carrying separate work and personal phones just for phone calls. But then, since the work phones were so crappy, the consumer phones were able to capture the mindshare.

I think it's gone the wrong way: because phones are worth so much, much less effort goes into designing "work" apps, and the designers all start mobile -> web -> desktop.


I think nostalgia plays a part for sure, but there's also substance about the old UIs.

Looking at Win98 icons reminds me of my days in high school when I would just tinker around in Windows for fun, changing icons of shortcuts, making good ol' personal homepages in HTML and Javascript (mostly alert boxes), and playing Starcraft 1 and Diablo 2. The icons and the Win98 UI give me pleasant feelings, mostly coming from those experiences.

On the substance side -- this isn't exactly about Windows, but in the last couple of years I've tried a variety of Linux distros and DEs. Specifically, I've tried CentOS 7 KDE, Antergos Gnome 3, and Manjaro 18 KDE, all on laptops. There's no doubt that both Antergos and Manjaro bring with them very modern DEs (regardless of Gnome or KDE). But for some reason, I felt the most productive on the CentOS 7 KDE, even though it looks the most primitive. Before I had the Manjaro laptop, I thought it was a KDE vs Gnome difference (KDE being more similar to Windows, vs Gnome being more similar to macOS, and I generally prefer Windows), but I think it actually does come down to the UI design. CentOS 7's KDE looks very dated, but everything is very functional and took little customization to feel productive in. The difference is similar to Win98/2000 vs Win7/8/10.


I also think the aesthetic of classic Mac OS, like versions 8 and 9, as well as BeOS had great design that made it easy to know what widgets were what and how to interact with them.


Yes! The "Platinum" aesthetic in classic Mac OS was great. Simple aesthetic, easy to follow for custom UI elements.

Not all of the interface shown in these screenshots uses Platinum -- in particular, Quicktime Player did its own thing, and a handful of control panels used an older look-and-feel based on Mac OS 7 -- but it should be pretty apparent what the standard was.

https://guidebookgallery.org/screenshots/macos90


I was so disappointed with Visual Studio 2018 when somebody decided to remove all the colors from the icons. This made users guess the icon only by its outline. The icon color design of Windows 98, 2000 and XP was great; I miss it so much.


I was in a senior position at Microsoft back at the time early versions of Office, Windows 95 and 98 were being developed. In fact a number of groups including the Visual Interface Design group reported to me. That's the group that designed the general visual appearance and also icons in the UI for both Office and Windows. (They were not designing the actual user interaction, just the visual appearance.)

At that time the visual designers were strongly urging that all icons be greyscale because they said color was "distracting". I overruled them and insisted the icons have color because it was better for overall usability.

Now the whole industry seems to have come under the influence of the visual designers favoring visual appearance over usability. Much less attention seems to be given to real overall usability.


What's most infuriating about Office icons is how often they change the complete look and feel of the launcher icons for the various programs, even the color scheme...

And then they ripped the text labels out of the Windows taskbar, as if we are supposed to remember what the icon for Word and Outlook looks like today.


Yes, even the darn color scheme. What color was Outlook? Look for the yellow icon, no, the blue, wait, not Word... you can't even see the embossed O?

I honestly don't know how the most loathed form of support, phone support, can even do it these days.


I tend to agree that color is another channel of communication, but you need to think about the UI that way from the bottom up. If they wanted grayscale buttons, they'd need them to be easily recognizable by both new and old users. This was not the case except for the most basic ones.

Did they run labs with actual users to check how they reacted to the monochrome icons?

If you do that well, you can use color to convey another dimension, much like it layers meaning on top of source code (by coloring keywords, variables, comments, etc)


That's funny, they tried this with a beta of Visual Studio 2013 as well, and rolled it back.

http://blogs.msdn.com/b/visualstudio/archive/2012/02/23/intr... (the hilarious original announcement)

https://www.hanselman.com/blog/ChangeConsideredHarmfulTheNew... (commentary)

https://visualstudio.uservoice.com/forums/121579-visual-stud... (a rollback)

https://docs.microsoft.com/en-us/visualstudio/extensibility/... (Visual Studio 2017 icon color palette and guidelines)


The UI for VS 2013 was an absolute disgrace!

It's unbelievable it was even implemented, let alone released.


The "remove color from all the toolbar icons" thing also happened with Firefox Quantum. And you're right, it's so weird. Perhaps it's a side effect of the ongoing transition to ubiquitous "big", touch-friendly buttons, where a combination of large size and full color would be overly distracting.


The idea is to increase the contrast between content and chrome, by making chrome itself less visually contrasting.

I don't buy into it, though. They tried monochrome grey icons in Visual Studio 2012, and people hated it, for good reasons. So now VS icons have colors again.


Yeah, Windows Phone 7 was IIRC Microsoft's first foray into flat/monochrome design, and in some ways it was wonderful... WP7 had a hyper-consistent design language that made normally-confusing flat UIs a dream... But ditching color in icons in favour of simple silhouettes was not one of its better ideas, and MS is still trying to make that a thing sometimes, like 8 years later.


Amazingly, moricons.dll is still included with Windows 10. This contains icons from Windows 3.0.

(it should be called "moreicons.dll", but this was from when filenames were limited to 8 + 3 characters)


I felt so cool in Grade 5 when I found that file. I had discovered that EXEs could contain icons, and then found some 3rd party program that loaded one from a DLL, so I went opening random DLLs until I saw that filename. What a rush! Where were all those programs? I found out many, but some were still a mystery.

This was part of whatever that Win3.1 thing was that would scan your disk for programs it recognized and add them to Program Manager, right?


Windows ME/2000 icons were even better.

https://upload.wikimedia.org/wikipedia/sl/8/8e/Win2000.png


I think the Windows 95/98 interface was great. I don't want ads when I click on the start button. All they needed was to add a search bar and call it a day, instead of all these ads or apps you don't want.


In analyses like these, one thing often gets lost: interfaces are a product of their specific time, not just of a linear evolutionary process confined to the type of product being discussed. No product exists in a timeless state, yet retrospectives such as this one attempt to judge products as though they are this thing that’s frozen outside of the space-time-culture continuum.

So, for these “productive interfaces” let’s keep in mind the goal Microsoft had at the time, which was to get people and businesses to buy Windows machines in bulk for serious, office-centric work. The abilities of the new, less complicated graphical UI had to be rendered in a way that made it feel just as serious as the more complicated text-based interfaces ... or paper binders, even.

Moving forward to things like OS X or iOS, the goals of the encompassing products clearly are different. In these cases the interfaces are attempting to permeate the non-work lives of people otherwise not forced to “work” with computers, in ways they would enjoy using outside of a work context. The goal was to NOT feel like work.

Why is this contextual distinction important? Let’s assume people come to HN to learn. I certainly do. It’s a great place to learn about technology, science, and about building products and companies. From that perspective, it’s worthwhile that we develop some rigor in how we reason about designs and the way they were packaged into a sellable entity. When we judge products historically, statements of an absolute qualitative nature like this one are just fine ... but they are often the equivalent of latching on to one article about a single mouse study, never looking at previous work, not checking the references, ignoring available meta-analyses and so on. All the stuff that is rightfully frowned upon when it comes to scientific research. Clearly product quality is tremendously more subjective than science, but throwing all objective perspectives out of the window is a trend that won’t guide our discussions to the kinds of product insights the HN audience would benefit from the most.


Funny how the icon of a floppy disk which was used to represent saving is slowly being phased out because the new generations have never even handled a floppy, so they don't remember how crucial it was to put files on floppy before storing them somewhere safe (or handing them over to a colleague/friend). I even had a floppy organizer in my desk drawer with fresh labels and sharpies, and peeling off old labels was actually satisfying back then.


Ugh, I hope we're not trying to kill the floppy save. There's nothing wrong with anachronistic icons as long as their meaning is well-established. Phone receivers haven't looked like this for a long time: [phone symbol here], but they're still useful symbology for "call" and "hang up".

Edit: apparently hn hates emoji.


I showed my son (age nine) a floppy disk a couple of months ago. His immediate comment? "Wow, a 3d-printed save icon!"

Sigh.


Is this actually happening to lots of parents/kids or is this joke that lots of people copy from each other? https://www.google.com/search?q=Wow%2C+a+3d-printed+save+ico...


This gets funnier. I just asked the little one whether his comment had been as spontaneous as it had seemed at the time, and got a sigh and eyes rolling for my trouble.

"Dad, it is a joke off the Internet. All the kids in school say that whenever they see a diskette."

Now I feel really old.


Don't forget that the kids have all those websites cataloging memes and such, and they often have extensive research on origins of those things.


Quite possibly both? I can definitely see where my son is coming from, as he's only seen floppies once in his life, whereas even his Nintendo Switch uses a floppy icon to indicate that it is saving his progress. Must be baffling.

I hadn't felt as old as I did then since I bumped into the first kid who didn't know what a landline was. ("Imagine a cellphone with no battery, so you needed to keep it plugged in at all times...")


> who didn't know what a landline was

I feel like you've been had another time.

At least I keep hoping that kids see plenty of those in older movies.


The Verge even wrote an article about how common the joke is. [1]

Not to say it didn’t happen to lb1lf.

1: https://www.theverge.com/2017/10/24/16505912/floppy-disk-3d-...


Well, the complete concept of "saving" becomes less and less relevant. With apps and websites there often is no explicit need to save. Just edit the document and all changes are immediately synced up to persistent (cloud) storage.


Not just explicit saving, but the whole concept of documents and especially files is disappearing under our noses. Somewhat disconcertingly, nobody seems particularly worried that our whole model of computing is shifting away from file-oriented systems. Remember that DOS stands for Disk Operating System; managing files was arguably one of its main functions. Nowadays our data is locked up inside "services" and "apps", not in user-accessible files.


And a key feature of Windows 95 was the Explorer with your home directory, where you could double-click on files and the appropriate application would open, so users wouldn't have to think about the application first, but about the file.

However, abstracting away from a pure file-based view also has benefits, like history or having different views on the document (e.g. to share a read-only version).

The main question is: how well will those things serve as lock-in in the future? GDPR takes a first step there by enforcing that there is a way to extract the data (not that the export format is necessarily really usable ...)


Windows was document-centric in other respects, too. For example, the standard UX guidelines defined three types of apps: dialog, single-document interface, and multi-document interface. Most apps were actually expected to be of the last two varieties, and the guidelines covered a lot of points on how it was all supposed to look. And when it came to implementation, frameworks like MFC were designed around that concept as well, with the ability to automatically handle things like multiple views into the same document.


It has been removed from a lot of the MS Office software (the online versions) and it is a complete disaster, because nobody took the time to evaluate some of the most basic use cases.

Users often use a document as a template. Better teach them to SAVE it under a different name first, before editing anything. If they miss that step, content vanishes regularly.

I love how concepts like "folder structures" apparently get deprecated. Yeah, sure, because a tree is a bad way to model relations. It's like when we removed the table of contents from books and just added the required meta information to a long list. Sumerian literature can suck it.

Sorry for the rant, but I already know the transgressions we will have to endure. I want to just save documents in a defined state. No, I do not want to tag it.

This rolling-save mechanic would have been possible without persistent cloud storage. Why are we only getting these bad ideas now?


UX designers actually fought for this for ages. Like, at least since the 80s.

“Save the document?” is a stupid question: of course I want the program to preserve what I do. But I also need a way to roll back the changes afterwards if I discover a mistake.


I enjoy opening a photo in photoshop and just putzing around with filters and tools to see what they do. I expect it to throw away my changes unless I explicitly state I want to keep them.


Or you could just click “revert to the original” about the same way as you click “discard changes.” With the difference that your use-case is super rare whereas saving the changes is what everyone wants almost every time.


No accounting for taste, but to me those icons are rather unsightly. From a utilitarian point of view, they might be good, but for art (at the time) I would have looked at NeXT icons.


I hate the frequently excessive usage of icons in modern UIs. In icon-heavy UIs, the meaning/functionality of an icon isn't very clear and labels only show up when you hover a mouse over the icon (you're usually out of luck if you're on mobile).


I prefer the Windows 95 icons, e.g. http://windows95tips.com/post/35587792563


Oh Windows 95... the last MS OS to feel completely uniform in aesthetic design.

And that teal... gets me every time. I used to think it was hideously ugly, but these days there's an understated elegance to it that just can't be denied.


[My comment was really a thinly-veiled excuse to link to windows95tips.com. I don't actually have any aesthetic preferences between the icons/design of different versions of Windows.]


Wow, this entire thread is so weird to read! I find these icons really bad compared to what exists on Windows 10 for example, and yet everyone seems to agree that 98's icons are better! I mean, seeing the icons in the illustration, I have no idea what 3 out of the 6 do. In the image at the bottom of the article it's even worse

I'm pretty sure it's just nostalgia, I have no idea why everyone here seems to think they're objectively better


I don’t miss those playful icons at all while using one of the worst Windows versions ever. I hated that mix of Tahoma and MS Sans Serif too.


BeOS is peak UI design for me. I still work in Haiku OS sometimes just to get that vibe back.


I 100% agree. The BeOS icon set remains my personal favorite to this day. When I run Linux I usually load them up first thing.

https://www.iconfinder.com/iconsets/beos


I miss Windows 2000.

3.11 and earlier were utter garbage. I was on an Amiga, and so thankfully avoided those steaming piles.

95, 98, and ME were brutal operating systems, crashing all the time, corrupting data, and generally making life hell. There was an NT 3.51 mod that would give it the win95 interface, but there were too many software compatibility issues. Amiga was dead, so I switched to Red Hat.

Then came Windows 2000. A nice, clean interface. Speedy operating system. Real memory protection. Most things were in sane places. It got out of your way and let you get real work done.

When XP came out, I didn't really see the point. It had graphics that looked like a candy bar and slowed things down enormously. Thankfully, you could disable it. I'm still not sure what they actually improved in that operating system to make actual, real work easier to perform, but whatever they did, it took twice the memory to do it, and required a beefier processor.

Then came Vista, which was unbearably slow. I upgraded to XP during this time because my software wouldn't run on 2000 anymore.

Windows 7: A New Hope. It was still slow, but it turned out not to be an unbearable upgrade, although I stuck to XP for as long as I could.

Windows 8: Bigger. Slower. Unfathomable. I stuck to 7 through gritted teeth.

Now we're at Windows 10. Schizoid is the best word I can use to describe it. Horrible UI, half in the Vista world, half in the mobile world. Duplication everywhere, no clear path for getting things done, constant updates at inconvenient times (you can't seem to get even a month of uptime with this OS).

I've since switched to Ubuntu, and run my Windows software in Wine.


I remember Windows 2000 feeling like the height of Windows UI. It changed the gray color theme to an ever-so-slightly brownish gray, which just seemed classy. That oh-so-subtle drop shadow on your mouse cursor and other UI elements. It retained all of the pseudo-3D rendering for windows, buttons, borders, etc. Helpful and consistent signifiers revealing UI functionality at a glance.

Then the new-wave of UI designers decided all of this was 'ugly' and needed to be cleaned up. Maybe the new stuff is 'prettier' (subjective) but we clearly took many steps backward with regard to usability.

When Win 3.1 and especially 95 were developed, a lot of focus and testing was done on usability because it was expected that a lot of people wouldn't know how to use a GUI. Concepts of 'this is a button, you move the mouse pointer over it and you can click it' would be novel to a lot of users. All the signifiers had to be on point and consistent.

All of that has gone out the window. I blame 1) Mac OS X, because it was so pretty that it made classic Windows UI look ancient and obsolete, and 2) smartphones, with UIs so opaque that apps often include little tutorials showing you how to use the UI, basically giving up on trying to be intuitive.


Broadly agreed, except that I find even Ubuntu to be quite slow these days. Sticking to plain vanilla Debian with a lightweight interface solves this quite nicely - XFCE and LXDE are incredibly snappy even on very low end hardware, quite a bit better than XP at least.


I found the original Mac icons, or the DR GEM ones (e.g. on the Atari ST), even more utilitarian (could one call them 'brutalist'?) and, may I say, 'pleasant'.

In that vein, sometimes I wish for a desktop that is grayscale except for exceptional highlights (e.g., photos, warnings, ...). I remember that it was a real pleasure to use the Atari SM124 B&W CRT monitor.


Most of these were from Windows 95. They got some slight cosmetic improvements in 98 and then again for 2000 if I remember correctly.


The big change was that in Windows 9x the icons were 16-color unless you had a 16-bit color display (even 256-color graphics cards used 16-color icons because of palette swapping between applications).

https://imgur.com/GpS5Dln


Interesting to see how much current design has affected my gut reaction. At first my mind immediately went to "there's no way this is the case..."

But looking at Win98 icons and UI... I really do miss that interface. Contrast and buttons actually being buttons... etc

I think (despite my use of Win10 and MacOS for various reasons) that's why I have a love of Linux UIs... many of them hearken back to those days.

The Linux DEs I tend to gravitate to are the ones that are most like classic Windows UI


There has been a shift towards UI that gets out of the way and becomes second nature to us. Admittedly that dream hasn't yet been fully realized, but we are on that path. Removing clutter, the HDD activity LED, fan noise: all of it, IMO, tries to integrate computers ever more fully into our lives and make that integration as seamless as possible. One day we may have UI that feels so natural that we might just think of a computer as one of our organs.


The designers were a very well-coordinated, integrated group (if there was more than one of them), and they took the design imperatives of a restricted palette and fixed pixel sizes to heart.

If you asked the same group to design this now, in a world of high-DPI, high-gamut screens, they might not have the same constraints in their design brief.

Google's work on flat, responsive design was in some ways running against the tide. Skeuomorphism was hardly ancient when the Google reaction happened.


The fact that the article has nearly no content besides "these are cool" and the comments on it say the download has a virus in it makes me very dubious.


The icons are nice, but I've never known what 90% of them were supposed to represent. Still can only guess at many of them.

I do like the depth though


The Windows 98 interface was actually not bad. The only issue I had was that to stop the computer you click on Start. :-)


I'm sure we will go full circle back to that style soon enough, but with higher resolution and expanded color palette.


I think the small palette is a big part of the charm. It’s limited enough to give the whole set a coherent look, but large enough to illustrate any kind of object.


I collected 131,788 icons from that era as 32x32 PNG files, and made a simple search program for them. Does anyone have good ideas for using this dataset?

https://iconpush.github.io


I tried very hard, using bash scripting and some recursive archive extraction, to get all the icons out... but I couldn't get all the names, and I got a lot of false positives. I'd be interested to know how the author did this.


No, they are not. Simply no! Want some really good and useful icons? Go get them from KDE. Those... things suck. Big. Time. Is this some hipster phony nostalgia article of some sort? Please...


OK, follow your own tip. You like those nice Win9x icons? Do the same with your fonts.

https://contrastrebellion.com/


Who was the designer? What’s the license/copyright situation?


That's what I came here to ask. I'd be pleasantly surprised if MS decide to allow redistribution.


Classic Shell helps bring us back to this design. http://www.classicshell.net/


I wish there was a comparison with Windows 95 icons.


I wish we could get a modern spin on this style


Great find from the same site, possibly NSFW: https://alexmeub.com/projects/celery-man/


The irony is that the icons are the opposite of the site's font colors. Please:

https://contrastrebellion.com/


Sorry, as a Mac user then and now, they still look as cheesy as they did back then to me.

Not that Windows looks nice these days either.


Totally agree with you


You should've linked to the https version of the page so the dozens of icons load faster (because of HTTP/2 multiplexing):

https://win98icons.alexmeub.com/


Meh. They look very dated to me.


That's because they're from 1995... The point of the article was that they represent great design, not that they look current 25 years later.


Great design stands the test of time. e.g. TrueType fonts from that era are still used today. (Actually the designs of many of those fonts are substantially older than their TrueType implementations.) These icons haven't aged well IMO. They looked cheesy back in the day and they still do today.


Microsoft was never regarded as a force in graphic design. That was more of Apple's wheelhouse.


To be fair, early OS X hasn't aged well at all either, but it was pretty revolutionary at the time.


OSX Tiger is ideal. Modern-ish, colourful and huge clicky widgets.


And Apple logos and icons look cheesy, too.


Age does not stop something from being great.

The Matrix is 20 years old this year, it is dated, it is still awesome.


The Matrix is a simulation of the past (the late 90s) - which is a great film idea, because it means even the presence of deep CRT monitors doesn't date the movie - unlike (say) in "Alien", where they do (IMO).


I don't know, the whole aesthetic of the Alien universe is so coherent and well thought out that they don't really stand out to me.


That's not what "dated" means in this context. It means the artifact is stuck in its time period and not well suited for the modern era. So if the Matrix is still awesome, then it's not dated.


While they may have been good for the time, it’s pretty hard to distinguish a lot of these while looking at my smartphone, especially when I’m not wearing glasses.


Back then people had 640x480 screens.

