I mean, as far as I know, most would still implement this the same way.
Surely there could be a GUI framework that handles these Win32 messages and dispatches them to callbacks on your `Application` object, but your application may have to deal with all sorts of messages your framework doesn't cover, like WM_SYSCOMMAND, WM_POWERBROADCAST, WM_DEVICECHANGE, etc., so you'll still find yourself with this classic Win32 message-handling switch even in modern C/C++ Win32 GUI codebases.
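For reference, a minimal sketch of the kind of window-procedure switch I mean; the message constants are standard Win32, while the comments stand in for whatever handling a hypothetical app would actually do:

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_SYSCOMMAND:
        /* e.g. intercept SC_SCREENSAVE before default handling */
        break;
    case WM_POWERBROADCAST:
        /* suspend/resume notifications the framework never heard of */
        break;
    case WM_DEVICECHANGE:
        /* device arrival/removal */
        break;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}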
It's a good pattern to DRY up resource-releasing unwind code. In an ideal world there would be closures, RAII, or fluent interfaces (macros) that wrap allocation, deallocation, and error/success handling without needing C++, while still avoiding resource leaks.
int
some_func(void) {
    int result = ERR;

    HANDLE foo = alloc_foo();
    if (!foo) goto err0;

    HANDLE bar = alloc_bar();
    if (!bar) goto err1;

    if (do_something_else() < 0) goto err2;

    result = OK;
err2:
    release_bar(bar);
err1:
    release_foo(foo);
err0:
    return result;
}
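For what it's worth, here's a rough sketch of the "RAII without C++" idea: the non-standard GCC/Clang cleanup attribute calls a function with a pointer to the variable when it goes out of scope. alloc_foo/release_foo and OK/ERR are the same hypothetical helpers as above:

static void release_foo_ptr(HANDLE *p) {
    if (*p) release_foo(*p);
}

int
some_func_raii(void) {
    /* release_foo_ptr() runs automatically on every return path */
    __attribute__((cleanup(release_foo_ptr))) HANDLE foo = alloc_foo();
    if (!foo) return ERR;

    if (do_something_else() < 0) return ERR;
    return OK;
}

Portable C doesn't have this, which is why the goto ladder keeps showing up.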
I set up Windows 98 on an old Pentium III for vintage gaming. I had forgotten how simple, fast and "you have one job" this is. Definite install on my Windows 10 machines.
No. Multi-threading and hardware acceleration are objectively faster for many workloads. The problem is over-engineered, one-size-fits-all solutions, especially multiplatform ones and/or those written in Javascript.
Back in the day, we had built-from-scratch programs that could only do exactly what they were intended to do on a specific platform, instead of doing things like shipping a gamepad driver with all your text apps[0].
I'm not sure why you think modern software is slow, but it is definitely not those two things, unless they are implemented EXTREMELY naively, which is far, far less common than one would think.
> I'm not sure why you think modern software is slow, but it is definitely not those two things
I think modern software is slow because it is slow (despite running on vastly more powerful computers); those two things, along with the language the apps are written in, are the major technical differences between old software and modern software. If theory doesn't match reality, then the theory must be wrong.
This app's startup is more responsive than my Windows Start menu.
Yes, the application starts very quickly and feels instant (which is how ALL software should be, dammit.)
I GUARANTEE you that any lack of multi-threading or hardware acceleration is not the cause for the speed, but rather the simplicity of the application. It's as complex as it needs to be to perform its job, and not more.
Today's unfortunate software development practices stack layer upon layer of abstraction on top of the CPU; far more than was the case when this application was written. As such, more modern applications are bogged down with the weight of all of this stuff.
Developers today have forgotten that "mo' code means mo' problems" and are quite happy to reference any stupid library they come across in order to make things feel simpler. What they've actually done is make things far more complex, and complexity is the undying, always victorious, enemy of performance.
I don't like that modern applications are slow, either. They aren't slow because of multi-threading and hardware acceleration, I promise. They're slow because of piles of stupid decisions on how software should be built.
Ranger is by far the most useful file manager I have ever used. It's quick and powerful. The only thing that keeps it in the minor leagues is how steep the learning curve is for anyone who doesn't know Vim shortcuts.
I'd argue such a file manager is meant for people who prefer and are used to Vim shortcuts. I just tried Ranger. I got image preview and the Solarized theme working, but it does seem to hang on previews.
For a GUI one, I can recommend fman (fman.io) as it uses the Sublime paradigm (shortcuts plus a command palette). It isn't FOSS though.
Also worth mentioning nnn, fff, and filet. All Ranger-inspired file managers that are faster and more lightweight (listed in order of ascending speed).
Windows explorer has gotten a little bit "magical" for my taste. There are all these different kinds of special system folders that invoke different view logic somehow. Sometimes you just want to see the actual file system.
I have never heard of that. I mean, I don't doubt it's true, but I think I just want to see the files usually. But then again, I don't know what that means.
As a simple example, you can explore into a zip file as if it were a file system folder. This is really useful, but you're obviously no longer dealing with just plain old file system paths.
Also things like "My Computer", the Recycle Bin, the network, phones connected via MTP, etc. It's basically a namespace of which the actual file system is only one part, and yes, some file system locations are hidden, some are merged, some are redirected, etc.
I have a single folder (no subfolders) with 650,000 files in it, and the Win10 Explorer is plenty fast if nothing in that folder is currently changing. But if even a single file's access time changes, it will spend 20-30 minutes trying to display the change, despite the access time not being displayed for any file.
If you open the folder 1 second after the file access time is updated, it's instantaneous, as before.
There's good and bad here. NTFS is a HELL of a lot better at handling stupid amounts of files in a folder like this than it used to be. But maybe it could handle a single file attribute change without losing its mind for 30 minutes.
NTFS with default settings will actually degrade a fair bit on such a folder. You probably have to disable 8.3 legacy filename generation to make this work (or use the usual pattern of splitting the files into subfolders named after a filename prefix).
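If anyone wants to try the first option, the knob I know of is fsutil, run from an elevated prompt; note this only stops generation of new 8.3 names, existing ones stick around until the files are recreated:

fsutil behavior query disable8dot3
fsutil behavior set disable8dot3 1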
https://github.com/microsoft/winfile/blob/master/src/winfile...