What's "low performance"? Humans measure tasks on human timescales. If you ask an embedded computer to do something, and it finishes doing that something in 100ms vs 10ms vs 1us, it literally doesn't matter which one of those timescales it happened on, because those are all below the threshold of human latency-awareness. If it isn't doing the thing a million times in a loop (where we'd start to take notice of the speed at which it's doing it), why would anyone ever optimize anything past that threshold of human awareness?
Also keep in mind that the smaller chips get, the more power-efficient they become; so it can actually cost less, in terms of both wall-clock time and watt-hours consumed, to execute a billion instructions on a modern device than it did to execute a thousand instructions on a 1990s device. No matter how inefficient the software, hardware is just that good.
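As a rough back-of-envelope (every figure below is an assumed, illustrative value, not a measurement), the energy cost per instruction has fallen by a few orders of magnitude:

```python
# Hedged back-of-envelope: energy per instruction, then vs. now.
# All figures are assumptions chosen for illustration only.
MODERN_IPS   = 5e9    # assumed: ~5 billion instructions/sec per core
MODERN_WATTS = 1.0    # assumed: ~1 W per core on a modern mobile SoC

OLD_IPS   = 1e7       # assumed: ~10 MIPS for a 1990s desktop CPU
OLD_WATTS = 5.0       # assumed: ~5 W for that CPU

modern_nj = MODERN_WATTS / MODERN_IPS * 1e9   # nanojoules per instruction
old_nj    = OLD_WATTS / OLD_IPS * 1e9

print(f"modern: {modern_nj:.2f} nJ/instruction")
print(f"1990s:  {old_nj:.0f} nJ/instruction")
print(f"ratio:  ~{old_nj / modern_nj:.0f}x more energy per instruction then")
```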
> These days, the basic Windows Calculator consumes more RAM than Windows 98
The Windows Calculator loads a large framework (UWP) that gets shared by anything else that loads that same framework. That's 99% of its resident size. (One might liken this to DOS applications depending on DOS — you wouldn't consider this to be part of the app's working-set size, would you?)
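If you want to see that shared-vs-unique split for yourself, here's a minimal sketch assuming the third-party psutil package: RSS counts resident pages including shared framework/DLL pages, while USS is only what the process would give back on exit.

```python
# Sketch (assumes the third-party psutil package): compare a process's
# resident set size (RSS, which includes shared framework/DLL pages)
# with its unique set size (USS, pages only this process holds).
import psutil

p = psutil.Process()          # inspect this process; pass a PID for another
info = p.memory_full_info()   # uss is supported on Windows and Linux

print(f"RSS (resident, incl. shared pages): {info.rss / 2**20:6.1f} MiB")
print(f"USS (unique to this process):       {info.uss / 2**20:6.1f} MiB")
```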
Also, it supports things Windows 98 didn't (anywhere, not just in its calculator), like runtime-dynamically-switchable numeric-format i18n, theming (dark mode transition!) and DPI (dragging the window from your hi-DPI laptop to a low-DPI external monitor); and extensive accessibility + IME input.
That's all well and good when your program is the only software running, such as on a dedicated SBC. You can carefully and completely manage the cycles in such a case. Very few people would claim software bloat doesn't otherwise affect people. Heck, the developers of that same embedded software wish their tools were faster.
> No matter how inefficient the software, hardware is just that good.
Hardware is amazing. Yet, software keeps eating all the hardware placed in front of it.
I mean, I agree, but the argument here was specifically about whether you're "wasting" a powerful CPU by putting it in the role of an embedded microcontroller, if the powerful CPU is only 'needed' because of software bloat, and you could theoretically get away with a much-less-powerful microcontroller if you wrote lower-level, tighter code.
And my point was that, by every measure, there's no point to worrying about this particular distinction: the more-powerful CPU + the more-bloated code has the same BOM cost, the same wattage, the same latency, etc. as the microcontroller + less-bloated code. (Plus, the platform SDK for the more-powerful CPU is likely a more modern/high-level one, and so has lower CapEx in developer-time required to build it.) So who cares?
Apps running on multitasking OSes should indeed be more optimized — if nothing else, for the sake of being able to run more apps at once. But keep in mind that "embedded software engineer" and "application software engineer" are different disciplines. Being cross that application software engineers should be doing something but aren't, shouldn't translate to a whole-industry condemnation of bloat, when other verticals don't have those same concerns/requirements. It's like demanding the same change of both civil and automotive engineers — there's almost nothing in common between their requirements.
I think the other comment has a point though: these frameworks are definitely powerful, but they have no right to be as large as they actually are. Nowadays, we're blowing people's minds by showing 10x or 100x speedups from rewriting portions of code in lower-level languages; and we're still not even close to how optimized things used to be.
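As a toy illustration of that kind of speedup (not a rigorous benchmark, and the exact ratio will vary by machine), here's the same arithmetic done in a pure-Python loop vs. NumPy's C-backed vectorized path:

```python
# Toy comparison: pure-Python loop vs. a C-backed vectorized rewrite.
# Speedups in the 10x-100x range are typical for this kind of change.
import time
import numpy as np

data = list(range(1_000_000))
arr = np.arange(1_000_000, dtype=np.int64)

start = time.perf_counter()
slow = sum(x * x for x in data)
t_py = time.perf_counter() - start

start = time.perf_counter()
fast = int(np.dot(arr, arr))
t_np = time.perf_counter() - start

assert slow == fast
print(f"pure Python: {t_py * 1e3:7.1f} ms")
print(f"NumPy:       {t_np * 1e3:7.1f} ms  (~{t_py / t_np:.0f}x faster)")
```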
I think the more agreeable solution here is just to have higher standards. I might not have given up on Windows (and UWP) if it didn't have such a big overhead. My Windows PC would idle using 3 or 4 gigs of memory; my Linux box struggles to break 1.
Have you tried loading UWP apps on a machine with less memory? I believe part of what's going on there is framework-level shared caching that's reclaimable under memory pressure.
On a machine that doesn't have as much memory, the frameworks don't "use" as much memory. (I would note that Windows IoT Core has a minimum spec of 256MB of RAM and runs headless UWP apps just fine! That only rises to 512MB of RAM for GUI UWP apps.)
Really, it's better not to think of reclaimable memory as being "in use" at all. It's just like memory that the OS kernel uses for disk-page caching; it's different in kind from "reserved" memory, in that it can all be discarded at a moment's notice if another app actually tries to malloc(3) that memory for its heap.
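One way to see that distinction, again sketched with the psutil package: "free" memory excludes reclaimable cache, while "available" includes it, precisely because those pages can be handed over the moment a process actually allocates.

```python
# Sketch (assumes psutil): "free" counts only pages nobody holds at all,
# while "available" also counts standby/cache pages the OS can reclaim
# instantly for any process that allocates.
import psutil

vm = psutil.virtual_memory()
print(f"total:     {vm.total / 2**30:5.1f} GiB")
print(f"free:      {vm.free / 2**30:5.1f} GiB  (pages nobody holds)")
print(f"available: {vm.available / 2**30:5.1f} GiB  (free + reclaimable cache)")
```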