A CSS framework to recreate Windows 7 GUI (github.com/khang-nd)
216 points by khangnd on Aug 29, 2021 | 124 comments



Ha! What a delightful little project. Thanks for sharing. I can think of all kinds of fun stuff you could do with this.

Reminds me of that fake Windows update prank. We used to put it on people's screens and maximize it if they forgot to lock their PC.

https://fakeupdate.net/


Ha! The best part of these fake updates is when they go beyond 100%! I still remember my colleague's face at 105%, asking me, "ehhh, what do I do now?"


In about 2003 I messed with some folks at work with a similar trick: I made a full-screen fake IE that showed an error message. When they started clicking anywhere on the chrome, the whole thing started to fall apart, buttons falling off the chrome, etc.


Once I took a screenshot of a colleague's Windows desktop, set it as the current wallpaper, and hid all the desktop icons and the taskbar. He was genuinely confused about why left-clicking wasn't working as expected.


Kids today have it easy. To pull this prank on a high school teacher (who had me sort out some CAD software in exchange for skipping boring classes), I had to modify autoexec.bat and recreate a virus message in there, based on something I had seen in The Black Book of Computer Viruses.

The poor guy nearly had a heart attack when he turned the pc on and got confronted with it.


I really enjoy the Aero glass effect from Windows 7 and found it unfortunate that Microsoft removed it in Windows 8. I tried to disassemble DWM to find out how it was implemented, but got nowhere, since I have no experience with reverse engineering or Win32 development. This project seems to recreate a pretty good Aero effect and I'm happy with it, even though there are flaws.

Hopefully someone can analyze how Microsoft achieved this effect and port it to an X11 compositor.


These shaders are *.bin resources embedded in dwmcore.dll.

This tool can extract them: https://www.nirsoft.net/utils/resources_extract.html Then cmd_decompiler.exe from https://github.com/bo3b/3Dmigoto can either disassemble or decompile these binaries.

The pixel shaders there include both ps_4_0 code for new GPUs and ps_2_0 for running on DirectX 9 GPUs. This makes the disassembler slightly more useful than the decompiler, since the *.asm output will contain both programs.

They first sample from 4 locations of the source texture. The sample locations are computed by vertex shaders somehow and passed in the TEXCOORD1 and TEXCOORD2 input registers. A 2D sampler needs 2 floats for the UV input, and texture coordinate vectors have up to 4 components on GPUs, so they pack four 2D vectors into two 4D ones and use the xy and zw slices of those vectors. Because they're probably using bilinear sampling (that's set up by CPU-side code; I haven't looked), these 4 sample instructions actually read from 16 texels of the source texture.

Then, they compute the average of the 4 colors.

At this stage, they’re using pre-multiplied alpha.

For the next step of the pixel shader, they compare alpha with zero. If not zero, they convert the color to straight alpha, apply this formula https://entropymine.com/imageworsener/srgbformula/ to convert from sRGB to linear, then convert back to pre-multiplied alpha.
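For reference, the formula at that link is the standard sRGB-to-linear transfer function, with s a channel value in [0, 1]:

    \mathrm{linear}(s) =
    \begin{cases}
      s / 12.92 & s \le 0.04045 \\
      \left( \frac{s + 0.055}{1.055} \right)^{2.4} & s > 0.04045
    \end{cases}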

For the last step, they apply a linear transformation to the color, using input values passed in the constant buffer. This part varies a lot between shaders. Some shaders use only a single scalar constant and return (alpha.wwww*result)^2 as the color. Other shaders use a 4x5 transformation matrix in the constant buffer to transform the final color.
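Presumably that 4x5 matrix is applied in the usual color-matrix form (this is my guess, not something I verified in the disassembly), with the fifth column acting as a constant offset:

    \begin{pmatrix} r' \\ g' \\ b' \\ a' \end{pmatrix}
    = M_{4 \times 5}
    \begin{pmatrix} r \\ g \\ b \\ a \\ 1 \end{pmatrix}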

P.S. There are 282 compiled pixel shaders there, and I only looked at a few of them. It's very possible some of the others do something completely different. I think Microsoft compiled them from just a few source *.hlsl files with different preprocessor macros; at least, that's how I would do it if I were implementing these effects.


That's pretty helpful. Thank you very much.


I'm beginning to have the feeling Windows 7 is the last Microsoft OS I'll ever feel good about installing.


I've been feeling that way, but I can't tell if it's rational. Windows XP and 7 were the versions of the OS I used the most growing up. Nostalgia? Ever since Windows 8, the OS has felt a lot slower and the UI isn't consistent. I was running something the other day that needed privilege escalation and I had this realization that I would have no idea if a particular UI was cobbled-together-Microsoft-UI or if it was cobbled-together-malware-UI.


I held out on "upgrading" to 10 until support was dropped last year.

7 was a huge step forward, with better performance than Vista and all kinds of QOL improvements (Windows Search, DX11 being some big ones).

10 also has some amazing improvements (automatic driver installs via Windows Update being the first thing that springs to mind[1]), but they've unfortunately let ads and anti-features (Bing, Cortana, App "Suggestions", News & Weather Bar) infiltrate and drag everything down. I cringe every time there's a major feature update, because I know I'll have to either change settings or download a tool to remove all of the bullshit they've just added.

[1]Yes, I know about FTDIgate. That was the exception, not the rule.


I’ve been using Windows 10 Pro since release and honestly none of this stuff has ever been a real issue for me. It’s stolen like 30 mins from my life overall in the last 5 years disabling the stuff that I don’t care for.

On initial install there were some Windows Store apps I didn't care for (like Candy Crush); I just uninstalled them normally, they never came back, and no new apps ever installed themselves.

I disabled Cortana and other widgets I didn’t care for using the standard UI once ever and they never tried coming back on their own.

On feature updates it sometimes has a two minute wizard asking questions.

Maybe it's because I live in South Africa, but I've never seen these Windows 10 adverts people keep complaining about. Maybe that little bit of clickable text on the login screen is what people are referring to? But without clicking, I wouldn't say for sure they're adverts; they seem like very mild clickbait. If they are adverts then that's inexcusable, yes, but it doesn't bother me enough to get upset about.

I don’t mind telemetry in principle but it does bother me that like once every 2-4 weeks when it scans or something it’s a huge resource hog when doing so.

Otherwise, I feel Windows 10 is more performant than Windows 7 (although it does need an SSD, which I would have had regardless) and definitely has a more useful Task Manager.

Windows 10 really “just works” for me and I find myself spending minuscule to no time configuring it which means that I’m always doing what I want to actually be doing, working or playing games.


I think that clickable text on the login screen is what people are talking about. I'm in NZ and for a brief time they were ads in the fullest sense.

I don't know if MS changed its mind, or if there is no market in NZ for those ads, but now they link through to MS web sites. So, sort of ads for MS properties.

But it does fuck me right off. Get FUCKING ADS out of my OS.


I don't think either myself or GP was complaining about time lost due to getting rid of bullshit. It's just rude and ugly, like cookie banners or autoplaying videos.


I had a funny issue with Windows 10 a few years ago.

I tried to start cmd.exe and nothing happened. Tried a bunch of times more. Still nothing...

I was connected to a WiFi network that allowed ping to the entire Internet but not TCP connections. Then I switched to a WiFi network that allowed TCP connections to the Internet and 20+ cmd.exe instances started all at once.

I thought: "Wait, that can't be true", but it was reproducible.

This makes me wonder if launching programs takes longer on a slow internet connection.

I do not know if this still happens as I have only used Windows 10 on a handful of occasions since then.


I’ve had the same happen in macOS, which would slow down to a crawl when my network didn’t work correctly (quietly dropping packets).


> (automatic driver installs via Windows Update being the first thing that springs to mind[1])

That's the worst part of W10 for me. I have a tablet that was unusable until I learned about WuMgr because every day it would reinstall a broken touchscreen driver.


Windows 7 had a very professional feel to me that subsequent versions lack. Probably has to do with attempting to create a unified tablet / phone / desktop experience.


I had been using Windows since 3.0:

Windows 2000 was that stable, professional sweet spot for me, even at times NT 4. Zero fluff, relatively consistent experience, and little visible intergenerational technology accretion. The 2000 scheduler felt reasonably predictable and crisp, like how BSD on a workstation did compared to Linux, for the longest time. Unpredictable interface latency is a dealbreaker for me.

I never noticed too much of a stability difference between 2000 and XP, but I really disliked the fluffy chrome overlaid on the XP user shell.

As long as we are talking nostalgia: yes, I do miss some extremely consistent UI paradigms from the Windows 3.x days (being able to productively drive the shell with a keyboard only, through the ALT key accessors). Interface elements were extremely differentiated visually. (Seeing my father trip up this week on a high-investment hallmark of a touchscreen phone app, because the native widgets were flat and lacked visual differentiation, was beyond painful. I do not like the direction modern interface design is heading.)


It's a sad state of things; current design trends frequently punish users and make people fearful to explore and learn on their own how programs and user interfaces work.

It hits me hardest, too, when I see older people frustrated by the ever-changing design and functionality churn. It prevents them from building confidence and feeling empowered. For me, much of the current computing experience is the opposite of rewarding curiosity, and it's deeply sad, IMHO.


I don't want a unified tablet / phone / desktop experience. I want interfaces that are optimised for the device rather than optimised for encouraging vendor lock ins.

Also, it wasn't Windows 7 that first attempted that. PDAs back in ~2000 ran Windows CE with a Windows 95 / NT4 era UI. That wasn't well optimised for smaller touch screens either, hence the requirement to use a stylus. Then came Windows Mobile 6, which looked more like XP, and 6.5, which had a Vista/7-type home screen but with classic Windows widgets. It wasn't until Metro (Win 8 / Mobile 7) that Microsoft really united the look and feel of Windows across all their platforms again, Xbox included.

Personally, for me, XP marked the decline of Microsoft's professional shell design. Windows 2000 was when Microsoft peaked. It was free from fluff but still had some minor touches of visual flair. XP was ugly, Vista's and 7's widgets were poorly optimised for screen real estate, and thereafter everything took a massive step backwards in usability.


> That also wasn't well optimised for smaller screens touch screens, hence the requirement to use a stylus

The reason for the stylus wasn't software. Capacitive touch screens didn't really exist, at least not in consumer devices.

Apple did that. They didn't invent it, of course; they acquired another company, https://en.wikipedia.org/wiki/FingerWorks, but it was them who brought the tech to the mainstream in 2007.


What you've posted is a very common misunderstanding of what happened in the mid 00s but unfortunately not at all accurate on any point you've raised.

Pre-capacitive touch screen technology worked perfectly fine with fingers. People had been using their fingers on kiosks and PDAs with infra red and resistive touch screens (respectively) for years before capacitive touch screen technology hit the market. In fact I personally had several PDAs from ~2000 onwards and would often use my fingers for simple operations (ie when precision wasn't required).

Ironically, capacitive screens actually have greater limitations than resistive screens in terms of general usability, as quoted on Wikipedia:

> Unlike a resistive touchscreen, some capacitive touchscreens cannot be used to detect a finger through electrically insulating material, such as gloves. This disadvantage especially affects usability in consumer electronics, such as touch tablet PCs and capacitive smartphones in cold weather when people may be wearing gloves. It can be overcome with a special capacitive stylus, or a special-application glove with an embroidered patch of conductive thread allowing electrical contact with the user's fingertip.

Also, capacitive screens throw a fit if the surface gets wet, which resistive screens didn't, and resistive screens offer greater precision when used with a stylus.

Capacitive screens look nicer though (greater contrast etc). Which is why they eventually won out.

As for the whole finger-orientated UI thing, well that happened around the same time as capacitive screens hit the market (resistive screens can also be made to support multi-touch by the way) and thus would have happened with or without the invention of capacitive screens.

So no, the reason for the stylus wasn't a limitation of resistive touch screens. It was the UI and that would have changed regardless of the introduction of capacitive screens.

Also can we drop the bullshit that Apple were the inventors of multi-touch UIs. There were 3 companies working on the same technology in parallel: Apple, LG and Google. LG even beat Apple to market and then accused Apple of stealing their idea, much like Apple like to claim others did with the iPhone: https://en.wikipedia.org/wiki/LG_Prada

Suffice it to say, the industry was changing and would have changed with or without Apple's involvement. They certainly were a big catalyst, but they absolutely were not the only players in the game.


> would often use my fingers for simple operations

Fingernails worked on my Palm m500, fingers did not.

> resistive screens offer greater precision when used with a stylus

I know, but people weren't too happy with them. They need both hands, and are easy to lose.

> It was the UI and that would have changed regardless of the introduction of capacitive screens

I have doubts. Resistive touch screens are very old tech, yet before the first iPhone came out they were mostly used by geeks and corporations.

> companies working on the same technology in parallel: Apple, LG and Google

When Apple launched the first iPhone, the engineers at Google working on Android decided to throw away half of what they had already done and start over. They were building stuff like this https://www.androidcentral.com/look-back-google-sooner-first... and, being smart people, they realized the product they were building had become obsolete overnight.

> LG even beat Apple to market

HTC did as well, look up "HTC Touch".

Apple is much better at selling their stuff. And I think they had a better product too, even though it wasn't even a smartphone at first (the App Store launched much later). They did have good features, like a unique data plan not available on any other phone, a web browser that worked with the normal web, and iTunes with all the music.


> Fingernails worked on my Palm m500, fingers did not.

The Palm m500 is a very early device. Resistive screens did get better. I remember the pain of using fingernails on early devices, but the kinds of PDAs I fell in love with had full colour screens running Windows CE / Mobile rather than monochrome displays, and could run Tomb Raider in landscape mode with virtual buttons on the screen that were clearly only usable with your fingers. A version of it you can see here: https://youtu.be/ZJ1GR9mQamI?t=1070 -- but it looked and played soooooo much better on my device.

Bear in mind the first iPhone was released ~2007 vs ~2000 for the Palm m500. That's a lot of years for screens to improve.

> I know, but people weren't too happy with them. They need both hands, and are easy to lose.

That's true for the largest generalisation but you'd be surprised just how many people do prefer a stylus. I've even seen people use styluses with modern capacitive screens -- which I really don't get because those things are just as fat as fingers so always struck me as the worst of both worlds.

> I have doubts. Resistive touch screens is very old tech, yet before the first iPhone came out they were mostly used by geeks and corporations.

I know you're trying to disagree with me but you're actually making the same point: they were mainly used by geeks and corporations because the UIs were unattractive. The innovation of the iPhone wasn't the capacitive screen, it was the touch-centric interface. And as I said before, Apple weren't the only ones working on it. In fact they weren't even the first to market.

Go back and read some publications of the era, or design magazines. They all criticise Windows CE / Mobile for its poor touch UI. It was a well-known problem at the time, so much so that Microsoft had several attempts at fixing it. But nobody really knew how to do it right because it was a very young problem. Think of it like 80s home computer systems: everyone was trying to get personal compact computers right, and people largely landed on different implementations of the same idea before the computing landscape ended up with a duopoly of Macs and IBM clones. That's largely how early smartphone and PDA UIs were too.

> Apple is much better at selling their stuff. And I think they had better product too, despite not even a smartphone (app store launched much later). They did have good features like a unique data plan not available on any other phones, web browser which worked with normal web, and iTunes with all the music.

Apple certainly is. But for what it's worth, smartphones and PDAs prior to the iPhone also supported a browser that worked with the normal web (I used one of my Windows CE PDAs as a mobile internet terminal on a road trip around Europe in the early-to-mid 00s). And Winamp ran on the thing, so my main use of my PDAs was music. I even had a Compact Flash microdrive (which still works, actually) in an early model, around the same time iPods also had microdrives.


That might be the original motivation, but post-7 Windows UI is anything but unified or consistent. New and legacy UI toolkits are mixed in the system shell itself. There are still parts of the good old Control Panel that haven’t been ported to the new “immersive settings” app. Already in a clean Win 10 installation, you can spot at least 3 different styles of context menu, depending on which part of the desktop you right-click.


The best example for me: in Windows 10 Professional, the blue screen shows you a sad smiley face.

Someone designed this and thought it was a good idea to have this in a 'professional' distribution of an OS.


I wouldn’t consider a smiley face unprofessional. What makes a smiley face worse, or less “professional”, than a big red X?


Compared with the Windows 95/NT BSODs, the Windows 10 one looks very unprofessional.


I think it’s rational. I started using Windows with… DOS 6. Of all my lifetime Windows installs, probably 70% are 98, 2k and XP. Yet I also agree that Windows 7 was peak Windows UX, probably because it was the final version designed to be used with a keyboard and mouse.


I doubt it's nostalgia. I've used Windows 10 more than I used XP. I'm 28.

XP just felt robust - almost like it was hardware itself.


I would say your problem is you haven’t used XP enough to hate it.


There wasn't much to hate about SP3 Windows XP.

It was a solid OS, limited mostly by lack of mainstream 64-bit version and hard capped at DirectX 9.0, meaning modern games no longer worked. Later on, lack of TRIM support for SSDs.

Basically between SP3 release in 2008 and XP's end of support in 2014, you could have used a very stable OS if you didn't need to play the latest games or run the absolute latest hardware.


SP3 was pretty late in XP's life. Many of us remember the struggle with earlier versions of XP and longing for the good old days of Windows 2000.


By the time Windows XP was available, I'd already learned my lesson. Stayed with 2000 until SP3 was available. :)


That was only an issue with the DOS-bootstrapped versions. The NT line was a lot more reliable for early adopters. Or at least it was until XP came along. Which is another good reason why Microsoft peaked with Windows 2000.


I think that there were several different peaks. In terms of UI, I'd say Chicago is the peak. In terms of internals, you're probably right, it was 2000. In terms of performance, I'd say it was probably Windows 7? Maybe it was just because I actually put together my own tower instead of pre-built?


UI is subjective. Performance, however: 2000 was again the peak there. It was the first desktop OS that supported SMP (obviously not the first workstation OS, but I'd argue Windows 2000 was a cross-over platform that brought workstation stability and performance to desktops).

Also look at the era: OS X wasn't yet released and Mac OS 9 was a tire fire (worse than Windows 98). Windows ME was somehow worse than Mac OS 9. BeOS was awesome but pretty much a failed company by that point, and it lacked a lot of the software one needs day to day. And Linux was pretty buggy as a desktop: usable, but a long way from being polished. Windows 2000 wasn't just a good OS by the standards of the hardware at the time; it was an incredible OS by the standards of everything else available at the time.

By contrast, when Windows 7 was released, Linux was faster and had several well-polished desktop environments, and FreeBSD made a decent desktop too. OS X was in its prime. Windows 7 was only good by comparison with what it followed: Vista. It was a long way from beating the competition on anything bar AAA game support.


Mac OS X Server 1.0 was released in March of 1999, and I used it everyday as a desktop OS at work.

Windows 2000 was released in December.


7 and 10 benchmark roughly the same yet somehow 7 is capable of running from a HDD without shitting the bed, unlike 10.


Same here. For me, Windows 7 is consistent in design, efficient (in both performance and user experience) and pure.

* Inconsistent design: Starting from Windows 8, Microsoft switched to the Metro design. In Windows 10 Microsoft changed to Fluent design. However, even the Windows components themselves have not been fully switched over; in Windows 10 today you can find many legacy designs that have not been removed. Windows 7, however, is pretty consistent from the shell to Explorer and the control panels.

* Inefficient in user experience: Starting from Windows 8, MS started focusing on touch devices, with the trade-off of larger margins, bigger buttons and slower animations, making the user feel inefficient at getting things done.

* Inefficient in performance: Windows 10 is slow as well. In a 2-core, 8 GB VirtualBox VM, Windows 10's boot time is longer than Windows 7's, and I have to wait several seconds to open a UWP program, which is unbearable. Personally I think that is related to the so-called "modern" development approach, by which I mean all the UWP / web stuff. For instance, the Windows 10 start menu uses EdgeHTML inside (just press F7 and the Caret Browsing dialog will show up).

* Anti-features: With the Microsoft's focus on "cloud" experience, Windows 10 added so many anti-features that I hate. Even though some are optional, I do not want an OS to contain any "cloud" based features and advertisements. Examples: Microsoft account, Telemetry, Consumer Experience (I mean all the pre-installed bullshit). I run several domains in my home and the first group policy I would apply to client computers is to turn off Telemetry (level 0 security), disable web search, disable spotlight and turn off the consumer experience.

I am not saying that Windows 10 is bad. It is, from a technical point of view, more advanced. However, because of all these bad designs I still think Windows 7 is the last Windows version I would feel satisfied with.


> For instance, The Windows 10 start menu uses EdgeHTML inside (just press F7 and Caret Browsing dialog will show up).

This also happens in Calculator, and if you look at its source code [0], you’ll see it uses XAML views. I think Caret Browsing is a UWP feature, not necessarily an EdgeHTML thing.

[0]: https://github.com/microsoft/calculator/tree/master/src/Calc...


I guess Caret Browsing is an EdgeHTML thing and in fact it should not show up in UWP applications. Correct me if I'm wrong.


I feel that if software like Windows could be as responsive as the fashion industry, we'd go through the same kind of cycles that fashion does.

Consumers would adopt a retro look and feel to their apps if they could. Unfortunately, it's all just too disconnected to coordinate your whole wardrobe of apps to fit any one style.


I recall distrusting XP when it was new.


Ditto here. Windows 2000 was fast, functional and stable. When XP was new, i.e. before SP2, it was literally just a reskinned version of 2000 with a few patches (some for better game compatibility, some for faster font loading, etc). But for the most part XP was just bloated; it had twice the hardware requirements of 2000.

XP did later evolve into something distinct in its own right but it took a few service packs to get there. However by that point I had already switched to Linux.

Personally I never got the appeal of Windows 7's theme either. I guess it's just what one grows up with. I grew up in an era where GUIs were optimised for low-resolution displays, and Windows 7's theme, much like XP's, wasted so much screen real estate in comparison. This is less of an issue these days with high-DPI screens. But old preferences die hard.


Because it was shit at launch. The nice stable OS we remember is the XP after its service packs.


>It is built on top of XP.css, which is an extension of 98.CSS.

Just like Windows itself. Even Windows 11, despite the new coat of paint, still has code from -at least- the Windows 3.1 days.


I wonder if that's really the case: is there really any windows 3.1 16-bit code running in Windows 11 ?


I believe Windows 11 will be 64bit only, so I guess that means no more NTVDM and no more win16 support. In that case, there won't be any windows 3.1 16-bit code left over. On the other hand, win32 was designed to be mostly source-code-compatible with win16, and I'm sure you'll find elements that have been given a minimal win32 porting effort, while a "git blame" equivalent would probably trace back to windows 3.1 still.

On the other hand, 32bit Windows 10 is still going strong and there are probably plenty of windows 3.1 (and windows 3.0) applications that would still run there.


In a way, yes... see this article : https://news.ycombinator.com/item?id=27556754


I'm sorry, but this is not the 7 GUI. In this library these all look like straight linear gradients with no breakpoints, which is why they appear so harsh.

To the creator I'd suggest using Photoshop or Figma to recreate the design elements, then translating the breakpoints to CSS to make the components more realistic.

Great start though!


I think you're overly-critical. The CSS immediately gives the look & feel of using Windows 7. Maybe the gradients are not completely spot-on, but the other 95% of elements are great.


None of the elements are accurate replications of the originals even after accounting for DPI scaling. I love the spirit of the project, but it feels like any other poor replica you’d find anywhere else.

There are spaces between tab button elements, the title bar buttons have excess space and incorrect borders, window buttons are the wrong size and use incorrectly scaled raster icons, the backdrop filter property isn’t applied correctly, the shadows on text boxes are incorrect, the tooltip padding is the wrong value and the wrong box size, buttons have the wrong borders, checkboxes have checks incorrectly applied, the author used raster icons for window buttons but not checkboxes for some reason. I could go on.

It reminds me of those yesteryear rips of future Windows themes on deviantART when those kids replicated the desktops of yet-to-be released versions of Windows but didn’t bother comparing the size or even the right colors of elements.


I'm amazed how one can see all of that, yet I don't think most people would care, since they (as am I) are blind to these nuances.


Agree, it still looks stiff and far from the 7 GUI. Would you care to elaborate more on the breakpoints you talk about, or is there any good resource I could take to learn more? Thanks.


I assume that they are talking about easing gradients, which is interpolating the gradient’s color stops using a cubic bézier easing function [1]. It gets rid of the sharp edge you sometimes see in linear gradients. Think of CSS animations and their easing-functions.

There are various Sketch and Figma plugins which let you modify the easing function and generate the CSS for it.

[1] https://css-tricks.com/easing-linear-gradients/
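As a rough sketch of the idea (the class names and colours here are made up, not taken from the library): instead of letting the browser interpolate linearly between two stops, you insert extra stops sampled along an easing curve.

    /* plain two-stop gradient: the transition can look harsh */
    .titlebar {
      background: linear-gradient(#dfeffc, #a7c6e0);
    }

    /* "eased" version: hand-placed intermediate stops that roughly
       follow an ease-in curve smooth out the transition */
    .titlebar-eased {
      background: linear-gradient(
        #dfeffc 0%,
        #dcedfb 25%,
        #cfe2f4 50%,
        #b7d0e8 75%,
        #a7c6e0 100%
      );
    }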


The resulting UI widgets actually feel really accessible and easy to use.

Also I love that it's an extension of XP.css, which is an extension of 98.css.


Windows 2000 was the peak of practical and usable GUI design. It all went downhill from there.


I too see Windows 2000 as Microsoft's peak for usability, at least for the GUI controls it supported. We seem to be in the minority though. You hear far more people praise XP or 7 as the best.

I recently went back and installed 2000 to make sure I wasn't forgetting something bad about 2000's UI usability that others remember. Nope. I then installed XP to be sure that, with today's eyes, it wasn't really better than 2000 and I just hadn't realized it. Nope.

I didn't continue the experiment after that. I very clearly remember 7, since I still have to use it occasionally.

I see 2000's UI as being from a time when (for the most part) they weren't trying to impress you with the gloss on the UI. They were just trying to make it clear and usable. More modern iterations of Windows have some advancements in interactions, but not much in the GUI controls themselves.


> I too see Windows 2000 as Microsoft's peak for usability, at least for the GUI controls it supported. We seem to be in the minority though. You hear far more people praise XP or 7 as the best.

Me three! Windows 2000 is the best not just for usability; it still looks the best too. No, all that fluff in XP and 7 doesn't make them look good, it makes them look busy and dirty.

We may be in the minority, but only because relatively few people had a chance to use 2000 before it was replaced by XP one short year later. In contrast, people remember XP and 7 fondly because they kept using them for years and years since the subsequent releases, Vista and 8, were universally hated and panned.


It looks like Microsoft used to pay more attention to GUI design and usability back in Windows 2000 days, which is the peak GUI for me too. Windows 2000 design is very similar to Windows 95, but changes from 95 to 2000 I see as improvements.

Starting with XP more attention was paid to aesthetics and less to other qualities. E. g. XP controls consumed more screen space, which was a visible regression at times when 15-inch displays were still common.

Up until Windows 7 there was a classic desktop theme which was quite close to 2000, and I used it. Unfortunately Windows 10 has no such option. I use mostly FreeBSD and macOS nowadays, but still have an old notebook with Windows 7 which I occasionally use. If I disliked Windows 10 less, I would have upgraded it.


It's a shame the glass effect isn't able to properly blur the background like native Windows glass does, the legibility is very bad without the blur to eliminate high-frequency detail in the background.


The library uses the backdrop-filter property to blur the background; in Firefox that's currently behind a feature flag.

https://bugzilla.mozilla.org/show_bug.cgi?id=1578503
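For anyone curious, this is roughly how a glass effect is typically done with that property (the selector and values here are illustrative, not the library's actual rules):

    .glass {
      background: rgba(255, 255, 255, 0.35);   /* translucent tint over the backdrop */
      -webkit-backdrop-filter: blur(8px);      /* Safari / older WebKit */
      backdrop-filter: blur(8px);              /* blurs whatever is behind the element */
    }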



There are some details from the Win7 UI that are still not in the CSS, for example the gloss that appears when the mouse pointer hovers over the window action buttons.

Next, the dropdown menu isn't like Win7's; it follows your browser/operating system's theme. For example, on Firefox-based browsers running on GNU/Linux it follows your GTK theme rather than the Win7 theme. On standard Chromium-based browsers, it shows as dark blue rather than the glassy light blue of Win7.

Next, the scrollbar doesn't show as the Win7 one on Firefox-based browsers, but does on Chromium ones for some reason.


Firefox has put some restrictions on scrollbar styling after websites began abusing it for their branding at the cost of accessibility. Other browsers did something similar. I'm not sure if you can even customise the dropdown menu if it's styled by your OS by default, either.
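Concretely (as an illustration; the selectors and colours are made up, not the library's): Firefox only exposes the standard scrollbar-width and scrollbar-color properties, while Chromium additionally supports the non-standard ::-webkit-scrollbar pseudo-elements, which is why a detailed Win7-style scrollbar only shows up there.

    /* Firefox: only coarse control over the scrollbar */
    .win7-scrollbox {
      scrollbar-width: auto;
      scrollbar-color: #cdcdcd #f0f0f0;   /* thumb colour, track colour */
    }

    /* Chromium / WebKit: per-part styling */
    .win7-scrollbox::-webkit-scrollbar       { width: 17px; }
    .win7-scrollbox::-webkit-scrollbar-track { background: #f0f0f0; }
    .win7-scrollbox::-webkit-scrollbar-thumb {
      background: linear-gradient(to right, #e3e3e3, #c8c8c8);
      border: 1px solid #a0a0a0;
    }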

You're right about the missing highlights. Those feel like something not too difficult to implement with a CSS animation and some extra colored borders, so hopefully someone will write a PR for that eventually.
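Something along these lines might be a starting point for the hover gloss (purely a sketch; the selector and colours are invented, and the real Win7 effect is more involved):

    /* assumed markup: a window action button like <button class="close">x</button> */
    .close {
      border: 1px solid #70707b;
      background: linear-gradient(#f5c7bc, #e68a7a);
      transition: background 150ms ease-in, box-shadow 150ms ease-in;
    }
    .close:hover {
      background: linear-gradient(#fbd9cf, #f0988a);
      box-shadow: inset 0 0 4px 1px rgba(255, 255, 255, 0.7);  /* inner highlight approximating the gloss */
    }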


I love the way they implemented tabs and trees. No JS, just pure CSS which by now grew ridiculously powerful.
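For anyone wondering how tabs can work without JS: one common pure-CSS pattern (not necessarily exactly what this project uses) is hidden radio inputs plus the :checked selector, roughly:

    /* assumed markup, all siblings inside a .tabs container:
       <input type="radio" name="tabs" id="tab-1" checked>
       <label for="tab-1">Tab 1</label>
       <input type="radio" name="tabs" id="tab-2">
       <label for="tab-2">Tab 2</label>
       <div class="panel panel-1">...</div>
       <div class="panel panel-2">...</div> */
    .tabs input[type="radio"] { display: none; }   /* hide the radios that hold the state */
    .tabs .panel { display: none; }                /* panels hidden by default */
    #tab-1:checked ~ .panel-1,
    #tab-2:checked ~ .panel-2 { display: block; }  /* show the panel for the checked tab */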


What I love about these old designs is how obvious everything is. You just can't miss the button or the tabs.

Now, after the whole flat design revolution that swept skeuomorphism away, it's so hard to tell whether something is a button, text, a link, etc. It probably looks better, but at the cost of extra thinking for the user.


[flagged]


Oh the average user does very much have problems. They just don't have a place to talk about them out loud. With modern design, people get confused all the time whether something is clickable. Seen that happen so many times I lost count.

Windows 7 had the last good, properly-desktop UI before the existence of touchscreen laptops ruined everything. And then came the plague of borderless everything.


> They just don't have a place to talk about them out loud. With modern design, people get confused all the time whether something is clickable. Seen that happen so many times I lost count.

The average user assumes it’s their own inability to use technology, while the average HN commenter gets annoyed because they know it doesn’t have to be so difficult.


>They just don't have a place to talk about them out loud

Laypersons wouldn't even know what the problem is in the first place.

Even if they do, they might not know the right terms to use when communicating. Thus, listening to feedback can be misleading.

Nothing beats watching people in real life using your products.


Interestingly, sometimes this manifests as nostalgia. Yes, they can't put their frustration into precise words like we do here, so they just say "I miss when %something% was easier/better/more respectful". Or "could they just for once stop changing stuff around". Or "I hate this redesign".


The real Hacker News-ish comments are the ones like yours, posting cheap "gotchas" and pointing out hypocrisies where there are none every time someone says something similar.

An engineer sees structural problems with buildings and bridges where the people who use them every day don't. An English major can see a problem with the language used in a newspaper where thousands of other readers don't. There is nothing unusual about this.


Are programmers really UI/UX experts, though? If you were talking about a programmer pointing out how absurdly programming is portrayed in film, you might have a point, but I really don't think many of these people pointing out their perceived flaws in UI/UX are experts.

In any thread that mentions UI/UX something like this always comes up where someone is bemoaning current UI/UX and hearkening to the good old days of when UI/UX was good, which just so happens to coincide with when they really started getting into computing.

That isn't to say that laymen can't make arguments about poor UX/UI, because it's ultimately a subjective experience, and what is intuitive to one person isn't necessarily going to be the same for another. I just think you're conferring expertise on undeserving recipients.


> something like this always comes up

Unsurprisingly.

> which just so happens to coincide with when they really started getting into computing

On my first job I used Windows NT 4. Yet I agree with the GP that Windows 7 was the peak UX.


The comment about it coinciding when they started was slightly tongue-in-cheek, but surprisingly often holds true. My comment more broadly is just pointing out that most people making these arguments are doing so subjectively while framing them as objective.


> most people making these arguments are doing so subjectively while framing them as objective

On subjects like that, I don't believe objective arguments are even possible. UX is similar to art or politics: it's inherently subjective because it's about people and culture. Ask a caveman to compare the UX of Windows 7 and 10, and they're going to tell you both are equally useless.

Even though no objective argument is possible, a consensus might still exist.


> hearkening to the good old days of when UI/UX was good, which just so happens to coincide with when they really started getting into computing

Nah. I started out with Workbench 1.3, which was flat, because they literally didn't have the colors available to do more. I didn't use them, but the Mac and Atari ST were the same. Then came Workbench 2.0, then MUI ( https://en.wikipedia.org/wiki/Magic_User_Interface ), and I recognized and enjoyed the improvements. I love change, I just don't love regression. Why would I?

And as for experts:

https://www.nngroup.com/articles/flat-ui-less-attention-caus...

https://www.nngroup.com/articles/flat-design/


Change for the sake of change isn't necessarily a good thing, but rarely is a change in UI/UX objectively bad, and that includes the shift from skeuomorphism to flat design. The article you've linked has been fairly criticized for essentially removing controls from its methodology, e.g. by using lower-contrast buttons for the 'flatter' designs, which is not essential to the design paradigm.

Ultimately, I'm still going to argue that this is a matter of subjectivity, and that as we age we become less accepting of change, so those who fully embraced a particular UX paradigm will find it harder and harder to adapt to shifts in UX unless it cycles back to what they're comfortable with (just like fashion).

To quote Abe Simpson,

> "I used to be with ‘it’, but then they changed what ‘it’ was. Now what I’m with isn’t ‘it’ anymore and what’s ‘it’ seems weird and scary. It’ll happen to you!"


> but rarely is a change in UI/UX objectively bad

We're not looking at a box with unknown contents that we would need to go by probabilities. We can just look at the thing.

> The article you've linked has been fairly criticized for it essentially removing controls from their methodology by things like using lower contrast buttons for the 'flatter' designs, which is not essential to the design paradigm.

I can, right now in Win 10, show you plenty examples where there is no contrast and no distinction between an overlay window and what's behind it. Not low contrast, exactly the same color.

> Ultimately, I'm still going to argue that this is a matter of subjectivity and as we age the less accepting we are to change

This is the same indirect reasoning as "people like what they used first", which in my case is provably, objectively, false.

I might as well say "as people age and grow in confidence and experience, they trust their own judgement more, and care less about just going along with whatever is pushed just to fit in" or anything like that.


> This is the same indirect reasoning as "people like what they used first", which in my case is provably, objectively, false.

It's more like someone at some point stops being open to change, which allows some breathing room, but eventually they'll prefer to stick to what they know rather than having to learn something new. This isn't a value judgement, just that when something works and feels right to someone why would they feel the need to try something else just for the sake of trying it?

I use vim. Most people don't and would argue it's bad user experience as it's not intuitive. What's intuitive for text editing is largely just based on convention, though. I think vim is a great user experience because it allows me to edit text much more efficiently. It's entirely possible the text editor Kakoune would be even better than vim at doing the whole modal editor thing, but I do not care to find out. It took me some time to learn and get comfortable with vim and I don't see much value in going with the new thing even if it potentially has real upsides.

When it comes to UX/UI design, you'll certainly find plenty of conventions that are broadly agreed upon. Most people would find it difficult to read yellow text on a white background for example so that would be considered bad UI. But in the case of GUI design like determining how much shadow there is beneath a 'button' element or what color links should be and if that color is distinguishable is pretty subjective and a part of that comes down to convention rather than an innate preference in humans.


> It's more like someone at some point stops being open to change, which allows some breathing room, but eventually they'll prefer to stick to what they know rather than having to learn something new.

That could justify anything that is new. Just take the people who think nuclear would be a bad idea -- "it's just because that's not how they are used to doing things".

What is there to "learn" about, say, 50 pixels of padding rather than 5 pixels of padding and 1 pixel of border? All of these "arguments" are avoiding the actual subject matter at hand to make grand generalizations.

> It's entirely possible the text editor Kakoune would be even better than vim at doing the whole modal editor thing, but I do not care to find out.

Then speak for yourself, because none of that describes me. I constantly try out new things; some things supersede what I used to that point, others fall short.

> what color links should be and if that color is distinguishable is pretty subjective

Not in the case of two colors being the exact same. There is nothing subjective about that at all.


It's the current crop of supposed UI/UX experts though that have foisted this mess of material design, hidden menus, disappearing scroll bars, text fields that don't behave like text fields (ex. browser URL bar) etc. So yes, I do believe many programmers who have to suffer years and decades of UI/UX "advancement" (much of what is actually just design churn) and use computing environments extensively, can definitely have a lot of good input.


They are called "user" interface and "user" experience, after all. Any "user" can call it bad and it can be a valid point. It can also definitely be valuable input. But an engineer identifying a design flaw in a bridge or an English major identifying errors in language use are people that have specifically studied those things and have a depth and breadth of knowledge that most "users" don't. Programmers aren't UI/UX experts just because they use applications.


At least we can agree that it is a typical Hacker News-ish discussion ;-)


The average user has the same problems, but doesn't realize it is a UI problem, or even that he could be more productive. When he does, he probably thinks that he is too stupid to use a computer.


I saw a really obvious instance of this recently, with an old lady trying to use a self-checkout.

The machine asked which payment option you wanted: cash, credit card, voucher, etc. She selected cash. That brought her to a screen with a button in the middle allowing you to switch quickly to paying with card (to pay the remaining balance after you'd part-paid with cash). The button text was "pay with card". The problem is, because it wasn't obvious that it was a button, she thought she had accidentally chosen the wrong option at the previous screen. She chose the "back" button at the bottom and then selected "cash" again, but hit the same problem. I saw her go through this cycle several times before asking for help.

A really obvious button style could well have avoided the problem.


Sounds like they could have labeled it "pay with card instead".


What you've done there is add some extra text saying that this is not a current status - it's an action you could take. You could just as well have added "(this is a button)" instead. Why not just make it visually obvious that it's a button?

Edit: You could say something similar about any non-verbal cues. Why not say "error" instead of using a red error icon? Why not say "left" instead of using a left-pointing arrow? Etc. The answer is always that (if used appropriately) it lowers the user's cognitive load. Admittedly sometimes it's reasonable to use both e.g. usually you do say "error" in words too, and maybe your suggestion would also be useful for this button.


Which is by the way one main point in "The Design of Everyday Things" by Norman, the book being one core pillar of the usability field.


Yes, I read the book, which is in part responsible for my previous comment. But it's not only that. The older I get, the less I am willing to spend energy interpreting the UI in front of me: even finding the app I want to use in the wall of blue apps on my phone is wasted time.


Have you sat with an average user? They have no idea what they are doing with these modern UIs. My mother with an iPhone is a fine example. Has no idea what to press or slide.

She could use windows 7 fine. Anything later, more difficult.


I have trouble with an iPhone.

A friend passed me his back in 2011ish to call his wife letting her know he'd be late to pick her up. It utterly confused me. No back button. A million apps on the 'home' screen. Needing to swipe in an undocumented manner, at least undocumented via a 20 second conversation.

The last Apple product I'd owned, an iPod mini, had useful things like buttons. And despite buttons it managed to have unintuitive features such as sticking music in multiple machine-named folders that only iTunes understood and a proprietary connector. So I dropped the iPod mini and went with something no-brand that was a USB stick that went in a small cradle that had a small battery; it had buttons, it functioned as a music player, it had a sane file system, it was charged via USB, I was happy.

I still have trouble with iPhones.


"(...) Do you see this big red circle on the bottom of the window? Now next to it there is a circle with a little square in it. No, don't click it just yet! You have shared your screen with me. It is used when... You know what, it doesn't matter. Find three dots on the bottom of the window. Yes, three dots, in a line. Now click it and select 'Stop sharing'. Higher. Higher. Yes, now click it! Brilliant. Now, do you remember the red circle? Next to it, on the left side, I believe, there should be another circle with a square and a less than sign, you know like from math. It is striked out? Yes, that would be it. Click it. Yes, that should do it. Splendid! It works. It is supposed to be "a video camera". I am as lost as you are. Honey! Your dad is back. Please, don't click anywhere else. Oh, shit he froze. Maybe I'll call him from Hangouts? Oh, forget it, they shut it down lately. Why? I don't know, they just did. Can't we just call him on the phone? (...)"


Non-techies have the same problems, they just aren't aware of what's happening and what the cause is. (Someone who often coaches/tutors/handholds people here.)


The average user has no idea how to use most UIs these days. I have to explain basics of her phone to my mum every time I try to video call her.

The difference between tech people and people like her is that it's easier for us to notice these issues and talk about them, while - if something doesn't work for her - she'll just shrug and go tend to the garden.


In my experience, these kind of people are scared of experimenting because they think they might screw something up, and the user-interface doesn't help. My dad has an iPhone, and he doesn't know anything about it. I doubt he's ever closed a browser tab, rearranged his home screen, or switched a running app by double-pressing the home button, and he has problems with the most basic of things. He also has an Apple watch, and the only functions he knows how to use on it are answering calls and activating voice messages ("walkie-talkie"). He has no idea how to sync his phone to the car's radio via Bluetooth, and he still names his contacts in his address book like "AAA Wife" and "AA Home Phone" because he doesn't know there's a favourites button. And he simply doesn't care. He just continues struggling to use his phone without really trying to learn it. My 82-year-old grandmother was better with technology than he is.

With people like this who didn't grow up with tech, everything has to be intuitive and obvious. An iPhone certainly isn't. My dad never had trouble with his Nokia, because everything was laid out in front of him in a one-dimensional way; all he had to do was scroll down the list and everything was easy to find. I would never put a computer in front of him, but if I did it would be a Windows 7 computer.


There's something to be said about experimenting, but I hate that iPhones have features that are literally hidden and without documentation. Learning to use my iPhone involved randomly pressing things down to see if a menu appeared or reading "10 Things You Didn't Know Your iPhone Could Do" articles. And years later I'm still learning simple things like how to quickly move my cursor when texting (press and hold the space bar).


No the average user has the exact same problems.


I'm not "struggling". I just see the pointless regression. Why defend it? What is gained by defending something worse as if it was progress? Why can't we recognize flat everything as the failure it is, and make good UI again?


Objective evidence may be hard to come by. That said, the studies Microsoft did to produce Windows 95 must have paid off, IMO. Despite its different UX it felt significantly easier than Windows 3 and DOS.


> Somehow a group of people supposedly good at using computers struggle to work out what a button is while the average user seems to have no problems.

I guess you’ve never sat in on a usability research study? Average users have way WAY more issues than you are implying.


And by the way, sure one can probably figure out whether it's a button or not after some time of careful analysis. The problem is that if instead of it taking me 10ms to figure out it's a button I have to pause and understand it and it takes me 10 seconds, I get annoyed.

I have this problem right now with Firefox Proton. Since it has problems respecting my color scheme, it takes me some time to distinguish (in an inactive-colored titlebar) which is the damn currently active tab. Before, I would be able to do it almost subconsciously, without distracting me from my current goal. Now, my train of thought has to stop and search for the damn button, then I have to restart my thoughts. And we all know how that hurts.


The average person will call whichever family member or friend knows the most about computers/phones for tech assistance, because they obviously have no clue how to use it themselves. And it's lucky that in this case it is not you.


I imagine the average HN lurker spends less or equal time clicking around on their OS GUI than any other average person.

Personally I can automate the provisioning, configuration and management of hundreds of servers, but still often have my partner show me how to personalize and customize my own desktop.


This is synonymous with calling a writer bad at writing because they pointed out bad writing. The conclusion should be the opposite shouldn’t it?


The average user didn't find it strange to click at "Start" to shutdown Windows.


I think the "Start" was for the user as in "Start here" ^^


It's not strange. You tell the computer to "start to shutdown" and after 5 or 10 minutes of updates, it finally finishes!


Best comment for me. LOL

Yes, I agree with you. With smartphone sales skyrocketing, the last thing you would assume is a bad user experience. If anything, quite the opposite: it shows how bad Windows 7 was.

This is computer-history romanticism, or just a difference in personal taste.


Anyone have an example of this running somewhere?



This project is an extension of 98.css. In the thread where Obsidian released their mobile version, they said Obsidian could be made to look like anything because it's Electron-based. So I made a Windows 98 theme (or maybe it's Windows 2000) for Obsidian. It's not perfect yet, but it's close enough: https://github.com/SMUsamaShah/Obsidian-Win98-Edition


Previous discussion:

8 months ago https://news.ycombinator.com/item?id=25601650


This is going to be used by some scammer to make a more convincing webpage to trick someone's grandmother into thinking her web browser has a virus or something.


That can be - and indeed has been - done with much, much less effort than using a complete stylesheet. A static image will be enough to fool most people.


My God, I am thoroughly impressed. This is almost 1:1.


Cheers OP. Just what I needed to create my popup advert for my fake registry cleaner software.

Just kidding, of course - nice work :)


It's amazing that it does a much better job of matching the actual native style than WPF and Swing ever did.


Should be called vista.css.


Why?


The design system used in Windows 7 first appeared in Windows Vista.


Interesting idea.



