Your CPU may have slowed down on Wednesday (travisdowns.github.io)
735 points by superkuh on June 22, 2021 | 469 comments



All these mitigations should really be disablable. I have a computer, not connected to the Internet, that I use to run number-crunching tasks and play relatively old single-player games, but I still update it periodically. Obviously I don't need any such performance-killing mitigations there.


Probably you're on Win as you mentioned gaming. On Linux you can pass `mitigations=off` as a kernel boot-time parameter [1]. I did it on my old PC at home and it felt a bit snappier after that. However, I don't have any numbers to quantify it.

[1] https://www.phoronix.com/scan.php?page=news_item&px=Spectre-....
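
If you want to verify what that flag actually changed, the kernel reports its own view of each mitigation under sysfs. A rough sketch for dumping it (assumes a kernel recent enough to expose /sys/devices/system/cpu/vulnerabilities, which most have had since ~4.15):

    #!/usr/bin/env python3
    # Dump the kernel's report of each CPU vulnerability/mitigation.
    # Run it before and after adding mitigations=off and compare.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:28} {entry.read_text().strip()}")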


On Windows you'll have to delete C:\Windows\System32\mcupdate_GenuineIntel.dll to prevent Windows from applying Intel microcode updates. Whatever microcode is in the BIOS, it will use that. There are also ways to modify the BIOS to update or downgrade its microcode.

My gaming PC, a Sandy Bridge i7-2600k, has had 20 microcode updates between 2010 and 2019. The majority were in 2013, until the Spectre mitigations came in 2018. The last BIOS update for my motherboard contains microcode two versions behind the Spectre updates.

Feed a BIOS image into this tool to view microcode information: https://github.com/platomav/MCExtractor


mitigations=off doesn't disable this effect.


A citation or some kind of further explanation would be more valuable than an effective "no it doesn't" response.


Sorry, I explained it in more detail elsewhere in this comments section.

mitigations=off works at the OS kernel level, patching out some expensive aspects of what the kernel does (possibly in concert with the CPU, but driven by the kernel). This issue is a microcode change that affects the CPU's behavior without any intervention by the kernel, so mitigations=off doesn't help.


Do the microcode changes alone, without kernel mitigations, negatively impact performance?


Yes. There is no kernel mitigation associated with this issue and it is not in the Spectre class of vulnerabilities.

It is purely a microcode fix for an "old-school" data dependent timing issue.


It disables kernel mitigations, not microcode mitigations.


Thanks! I wasn't aware of that!


Yes, I actually dual-boot.

I know this way of disabling mitigations on Linux (thank you nevertheless; I might well have been unaware of the feature, as I only learnt about it some years ago).

However, I doubt you can disable microcode-level mitigations this way. I believe it only affects the kernel level.


On some Linux distributions [0] microcode is loaded early during kernel boot, so it's a matter of pinning the package to a previous version. No idea what happens with BIOS/UEFI updates, although you can roll those back too. Obviously the "proper" way here is to have open-source microcode, but I can't see that happening anytime soon.

[0] https://wiki.archlinux.org/title/Microcode#Early_loading
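
If you do pin the package, one way to confirm which revision actually ended up loaded is to read it back from the kernel. A rough sketch (assumes an x86 Linux system where /proc/cpuinfo exposes a `microcode` field):

    #!/usr/bin/env python3
    # Print the microcode revision the kernel reports for each logical CPU.
    revisions = {}
    cpu = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("processor"):
                cpu = line.split(":", 1)[1].strip()
            elif line.startswith("microcode"):
                revisions[cpu] = line.split(":", 1)[1].strip()

    for cpu, rev in sorted(revisions.items(), key=lambda kv: int(kv[0])):
        print(f"CPU {cpu}: microcode revision {rev}")

(`dmesg | grep microcode` usually shows the same thing, including whether an early update was applied.)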


I also have no idea how to do this on Windows, but disabling those mitigations, along with other unnecessary Windows 10 stuff, can be done through Chris Titus' debloat program: https://christitus.com/debloat-windows-10/


Can you disable these on a per-process basis? I'm not concerned that my compiler or Blender render is going to side-channel attack whatever other part of the system. Generally there's little overlap between completely untrusted code and code that requires high performance.


> Can you disable these on a per-process basis?

No. Even if it were possible, it would add a massive delay to context switches between processes as the required microcode changes were applied.


Hm... I'm on a host OS with Meltdown mitigations disabled, but I can seemingly run a VM with the mitigations enabled, verified via InSpectre[1]. I guess that's different somehow?

1: https://www.grc.com/inspectre.htm


This is because Spectre/Meltdown can't be patched with microcode updates; the mitigations instead use different kernel mechanisms (KPTI for Meltdown, retpolines for Spectre). If the guest kernel for your VM is using these mitigations, it will be protected even if the host has mitigations disabled.


And I guess those kernel mechanisms can’t be applied per process? If a hypervisor is required, could, say, JVM do it?


There's a lot of mitigations. Some are microcode, some are kernel/VMM code, some are user space if that process is running untrusted code. Your host probably just isn't running with the normal ISA mitigations, but it does have the microcode installed.


Ha, interesting! That was something I wanted to check/try. I'm wondering if it makes the mitigations less effective. Maybe it would, for some time, drop even the minimal risk of NetSpectre or the like and make cybertight architects lay off my VM hosts...


It would be interesting if they could be done per core... So one could reserve 1 core for JS and also down-clock it to a minimum.


Me too. Last time I checked, it halved my boot time, which I know is not a perfect benchmark but it's something.


Last I checked, my performance almost doubled in multicore benchmarks when I disabled mitigations.


> Probably you're on Win as you mentioned gaming.

This is not as sure a deduction as it once was.

The main reason for games not being playable on Linux is the anticheat software used in modern online games. Pretty much everything else will now run perfectly (see protondb.com). Since the commenter you were replying to specified that they played old single-player games, they would probably be fine with just Linux. Indeed, many sufficiently old Windows games are now easier to play on Linux than they are on modern Windows.


Agreed! That's why I said probably. I am also gaming a bit on Linux, but I know the vast majority of people still stick with Windows for that.


It took me a surprisingly long time to understand the word "disablable". The spelling "disableable" is slightly better but still not a very nice word.

The Linux kernel mitigations can be disabled easily. But with microcode, this is the problem of having it as an opaque blob: you have no idea what's in that update. Maybe it's good for you, maybe it's bad.


You can, to some extent. One of the side notes in the article addresses this point: the microcode can be held back to an earlier version.


Staying on an earlier version is not ideal, as some of the later microcode updates actually contain useful errata fixes beyond the performance-limiting mitigations. Really wish Windows had an option like "mitigations=off" in Linux.


"mitigitations=off" doesn't disable microcode mitigations like the one in this article.


Yes, I assume the kernel has a predetermined list of microcode to be disabled when the flag is set. The same should be doable in Windows right now but the process seems quite tedious.


The kernel keeps no such list and microcode is a monolithic firmware anyways so having such a list wouldn't enable you to pick and choose which fixes you get to apply.


Yes, you are quite right. I was referring to the ability to interact with the MSRs directly.
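
For what it's worth, poking at MSRs from userspace on Linux is just a pread on /dev/cpu/N/msr. A rough sketch (needs root and the msr module loaded; 0x48 is IA32_SPEC_CTRL, used purely as an example of a mitigation-related register, and the read can fail with EIO on CPUs that don't implement it):

    #!/usr/bin/env python3
    # Rough sketch: read a model-specific register from userspace on Linux.
    # Requires root and the msr kernel module (modprobe msr).
    import os
    import struct

    IA32_SPEC_CTRL = 0x48   # example only; not specific to this article's issue

    def read_msr(msr, cpu=0):
        fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
        try:
            raw = os.pread(fd, 8, msr)   # MSRs are 64 bits wide
        finally:
            os.close(fd)
        return struct.unpack("<Q", raw)[0]

    print(f"IA32_SPEC_CTRL on CPU 0: {read_msr(IA32_SPEC_CTRL):#018x}")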


Better yet, the microcode should be open source and programmable just like the macrocode.


I don't think that's necessary.

But I do think software engineers should agree that updates should always be reversible. And security updates should always be backported yet still reversible.


> security updates should always be backported

This means getting security updates without feature updates, doesn't it? But this time we want feature updates without a security update.


I'd consider the microcode fix to be a security update, not a feature update.


Yep, and they don't want the fix. Hence needing feature updates but not security updates.


Here's a tool for Windows: https://www.grc.com/inspectre.htm


It's amazing to me what people will put up with to avoid an attack vector that doesn't have a single known victim. I don't run untrusted programs or Javascript. I see no reason to install any of these measures on my personal desktop until people actually start being victimized by them.


We've entered Inverse Moore's Law: every two years, single-core performance drops 20% as exploitable optimizations are mitigated.


We are already there, to be honest. 20% every two years is negligible compared to the growing inefficiency of software.

I have to replace my computer every several years due to performance degradation, even though my workflow has remained the same for over a decade. Even editing simple text documents has become painfully slow on my 2013 Macbook Pro.

M1 gives me a few years breathing space, hopefully, but I'm sure it won't be able to open a 1KB text file instantaneously in 5 years.


I only say this half-seriously, but that's why I sometimes think that part of any performance test should be "try to run your software on a Pentium 4". If it's unusable, go back and optimize some more.

But it won't work now, because the whole stack is probably too inefficient.


Yeah, all this concern with scalability, portability and clean code has created quite a dystopian software world.

A world where software can barely fit in a single machine and can only scale up. In many cases, this software doesn't even do anything that is functionally different from 10 or 20 years ago, but it still consumes many more resources.

A world where some websites still don't work properly across web browsers. Electron apps, which should enable easier cross-platform applications, don't (e.g. Microsoft Teams doesn't have feature or even UI parity between Windows and Linux).

A world where clean code destroys performance, because the time of developers must be paid for by the company, but the costs of resource wastage are externalized. And there isn't even any evidence that these practices actually help to do what they are supposed to do!


> In many cases, this software doesn't even do anything that is functionally different from 10 or 20 years ago, but it still consumes many more resources.

Which software exactly is functionally the same yet consumes more resources? Can you give some specific examples? Nah, having lived and written code through the last 20 years, I think this often-repeated argument just isn’t true at all. I can see why it’s easy to jump to this conclusion and easy to believe, if you don’t know or don’t pay attention to what’s under the covers. It’s easy to forget about all the conveniences you’ve gotten used to along the way. You wouldn’t agree the functionality is the same if you tried using software from 20 years ago.

One easy to overlook difference is displays. It’s easy to forget that 20 years ago we had 800x600 screens, 24 bit color was not ubiquitous, and 60hz monitors were practically non-existent. Today 4K is normal, and combined with refresh rates and colors, we’re pushing upwards of 2 orders of magnitude more data to our screens. We have much better compositing, much better text and graphic rendering, better antialiasing, all around higher quality and faster rendering.

YouTube and Netflix didn’t exist 20 years ago. Browsers were incredibly slow and couldn’t support streaming video or anything but the smallest of applications written in JavaScript. Localization didn’t exist, web analytics wasn’t really a thing, browsers didn’t run background tasks.

> A world where some websites still don’t work properly across web browsers

This specious framing might have you believe that web portability hasn’t improved that much, while in reality the difference in support for web standards has changed dramatically in the last 10 years. Supporting IE6 was a real and widespread problem for businesses compared to the few minor corner cases that are left.


> Which software exactly is functionally the same yet consumes more resources? Can you give some specific examples?

Compare basic document editing in Office 365 with Office 2000 running on Win2k. Sure, 365 has loads of additional functionality, and your data follows you across devices. But the base user experience is largely worse.

> It’s easy to forget that 20 years ago we had 800x600 screens, 24 bit color was not ubiquitous, and 60hz monitors were practically non-existent.

This was maybe the case in the mid-90's. But by the late 90's 1024x768, 85hz, and 24 bit color were standard. 1280x1024 and even 1600x1200 were not uncommon.


> Sure, 365 has loads of additional functionality

Right, so you’re agreeing with me and disagreeing with @miltondts? UX isn’t what we were discussing.

> This was maybe the case in the mid-90’s.

“In the PC world, the IBM PS/2 VGA (multi-color) on-board graphics chips used a non-interlaced (progressive) 640 × 480 × 16 color resolution that was easier to read and thus more useful for office work. It was the standard resolution from 1990 to around 1996. The standard resolution was 800 × 600 until around 2000.”

https://en.wikipedia.org/wiki/Display_resolution#Evolution_o...


One problem with that entry: the PS/2 only ever had a minuscule share of the market. In the clone world, which was the lion's share (think Dell, Lenovo etc. of today), resolutions were all over the place. So 800x600 was often the minimum resolution many developers were targeting for things like dialog boxes etc. in that timeframe. It's a fiction to say there was a standard during that time; there were many. From SVGA (800x600) on, it was more a combination of user preference, monitor capability and any minimum software requirements as to what resolution a given user would run. Unlike digital LCDs of today, analog monitors didn't care what resolution you ran them at as long as they could handle the horizontal/vertical frequencies you were throwing at them and the phosphor mask had the resolution needed. (And if it didn't have the resolution, the picture would just look progressively more blurry as you jacked up the resolution being sent to it.)


Yep: as broke students in early 90s we ran VGA on a 1970s era amber terminal display.


I have been using 1600x1200 for 20 years. I had a single CRT, later upgraded to two LCDs of the same resolution. (So I guess 3200x1200, but not for gaming.)

Then 1920x1080 didn't seem worth the cost.

Then the gaming landscape changed and 60Hz is not a thing anymore; now it has to be 120+.

So I have been waiting for 4K 120Hz+ and the money in my pocket to meet.


> 20 years ago we had 800x600 screens

My iiyama VisionMaster was doing 1600x1200 @ 90Hz and running Quake 3 just fine back in 1999/2000.

> better compositing, much better text and graphic rendering, better antialiasing, all around higher quality and faster rendering.

Thank you Nvidia, I guess? They have been carrying the water since the late '90s when everyone else dropped the ball.

But let's get something straight. When we went from CRT to LCD/LED we weren't picking visual fidelity. We, as a consumer base, picked convenience. Thin and light vs. great colors, contrast, viewing angles, refresh rates, etc. My first LCD was in 2004. It was an expensive top of the line brand. I took it home and the ghosting was incredible. Massive input lag. If you were sitting in your chair and just leaned back, all the colors would shift on the display. The viewing angle was nonexistent. I took it back the next day. We are just now getting to the point of matching a CRT from the '90s. And you may have noticed we still don't have OLED on the desktop.

Around that same time we all got stuck with perhaps the worst crime against monitor tech: 16:9 aspect ratios. Not only is that aspect not "cinematic" (theatrical 16:9 does not exist), but we're leaving tons of resolution and screen space on the table. Because your monitor will always have the same footprint due to the width, we could have much better vertical resolution if we stuck with 4:3. Consumer TV fads really hurt the computing industry.

> support for web standards has changed dramatically in the last 10 years.

Yeah well, Webpack, Babel, and numerous polyfills all say otherwise. We're only just now dropping support for IE11 everywhere. Maybe that's the light at the end of the tunnel? But I have my doubts. The incentives are too high for the web to fully standardize. Embrace-and-extend will continue. Developing for the web has never been more complex than it is today.

The other bit of sad news is that the web is no longer open. HTML5 is not an open technology. It requires proprietary DRM which only a few companies allow you to use.

> web analytics wasn’t really a thing

I'm going to shed no tears when Google Analytics finally dies.


> perhaps the worst crime against monitor tech: 16:9 aspect ratios. Not only is that aspect not "cinematic" (theatrical 16:9 does not exist), […] we could have much better vertical resolution if we stuck with 4:3.

I’m confused by this. Film aspect ratios are even higher than 16:9, at 1.85:1 and 2.39:1. I don’t understand why you complained about 16:9 being not cinematic while suggesting 4:3 is better, could you elaborate?


Sure. What I mean is at 16:9 you're still getting letterboxing (black bars on top + bottom). In addition to that, it's rather silly to think that just because 16:9 is closer to a theatrical ratio that you're going to ever get a cinematic experience sitting 2 feet in front of a 24" monitor. Or 6 feet in front of a 46"+ TV. We optimized all of our devices for viewing DVD content whether it ever made sense or not. Because home theater was where the money was.

4:3 is better because it fits the application better. What are you doing at your computer? Probably work or browsing the web. If you watch YouTube videos then you have a chicken-and-egg issue... YouTube content is 16:9 because that's what your device is (to see what I mean: look at all the vertical videos today... the content follows the device). YouTube/TikTok creators aren't using aspect ratio in an artistic manner because few of these people are even aware that they could have a say in the matter. Unlike say, Kubrick, who deliberately picks a ratio for each of his films. They are using what the prosumer cameras are designed for, which is all 16:9.

The point is: you're getting black bars if you watch theatrical content whether it's 4:3 or 16:9. The width of your computer monitor is a constant. So we lost vertical screen space for what? Nothing, that's what. Slightly less black bars on the top and bottom when we watch a Marvel movie on our desktop computer or laptop.


While you are correct about movies not using that aspect ratio, I’d say 16:9 is a standard because of TV shows. With the advent of the glutton age of television we’re in now, people find value in a 16:9 display because of all the Netflix shows you can watch full screen on their laptop without bars.


16:9 is a compromise aspect ratio. It's roughly the geometric mean of 1.33 and 2.39 (TV vs Panavision).

> So we lost vertical screen space for what? Nothing, that's what. Slightly less black bars on the top and bottom when we watch a Marvel movie on our desktop computer or laptop.

It's not "nothing" that we lost; on an X" display with Y total pixels, the Marvel movie will be larger and higher resolution on a 16:9 than it will be on a 4:3. This is true for any of the post-1960ish popular ratios for films: 15:9, 1.85, and 2.35.

Similarly, 4:3(1.33) content will use more pixels than it would on a 1.85 or 2.35 ratio screen.
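
A quick back-of-the-envelope check of that, assuming both displays have the same total pixel count and the content is simply letterboxed or pillarboxed to fit (a rough sketch, nothing more):

    # Fraction of a display's pixels covered by content scaled to fit.
    # Ratios are width/height; assumes equal total pixel counts.
    def used_fraction(content_ratio, display_ratio):
        return min(content_ratio, display_ratio) / max(content_ratio, display_ratio)

    for content in (4 / 3, 1.85, 2.39):
        print(f"content {content:.2f}:1  "
              f"16:9 uses {used_fraction(content, 16 / 9):.0%}, "
              f"4:3 uses {used_fraction(content, 4 / 3):.0%}")

    # the often-cited origin of 16:9: roughly the geometric mean of the extremes
    print(f"sqrt(1.33 * 2.39) = {(1.33 * 2.39) ** 0.5:.2f}   (16:9 = {16 / 9:.2f})")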


Yeah, I was distraught by the complete eradication of 4:3 offerings in just a few years' time. I think it was a marketing problem: in 16:9 you could suddenly offer screens with a greater diagonal without actually increasing (or even while decreasing) true screen real estate. Measured in inches/dollar, classic 4:3 was fighting with both hands tied behind its back.


I think the diagonal-inch unit for screens should be banned. It works as an incentive to go for non-square aspect ratios. See smartphones getting longer even though our eyes are arranged horizontally.


I like this comment, thank you for it.

Would you be so kind as to clarify this?

> And you may have noticed we still don't have OLED on the desktop.

I am pondering the LG CX 48" OLED display, but I have no clue what the pros and cons are.


I have one; here are the pros and cons I have identified:

1. If you sit about 1m or more away from the monitor, it might be usable in terms of size.

2. It has brightness limiters, so if you have a mostly white screen (say a fullscreen diagramming app with a white canvas) the monitor will dim by about 50%; you will need to keep some of the desktop image visible around your window (unless the app is in complete dark mode).

3. It's a glossy display, so if you have direct light behind you it won't work out well.

4. At 4K and 48 inches the pixel density is a bit low (rough arithmetic at the end of this comment); if you scale the UI it's quite nice, but then you lose the advantage of a large monitor giving you more desktop space.

5. Dark mode + low light is brilliant.

6. OLED is beautiful (wish I could get it without the gloss though).

7. You will need a good enough GPU to get 120 fps at 4K (laptop + eGPU or the latest MacBook Pro might work).

8. You will need an HDMI 2.1 cable to get 120 fps at 4K.

9. There are only HDMI inputs.

10. If you want to prevent the timed auto-dimming (you cannot disable dimming completely, see 2), you will need to get the factory remote to disable it.

11. To prevent damage to the OLED, it auto-shifts the screen output by about 5 to 10 pixels every few minutes in desktop/game mode (so you might notice the Mac menu bar is slightly cut off; that's normal).

12. The remote is a bit wonky but cool... there are no direct shortcuts to brightness controls, unfortunately.

It will also take a while to get used to it... it took me about 2 weeks to start liking the size (I sit about 80cm from the monitor; deep desk).

Honestly, if you sit close, say 50cm or less, a 32-inch 4K display will be better (like the LG 32-inch 4K displays).

Edit: my dream monitor is 8K OLED at about 38 inches (not ultrawide) and not glossy... that's a few years off I guess (and also who knows if they would make that size in non-ultrawide).
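
The rough arithmetic behind the pixel-density point (4), assuming a plain 3840x2160 panel (the 24" 1080p line is just for comparison):

    # Pixel density (PPI) of a 3840x2160 panel at various diagonal sizes.
    import math

    def ppi(width_px, height_px, diagonal_inches):
        return math.hypot(width_px, height_px) / diagonal_inches

    for size in (48, 32, 27):
        print(f'{size}" 4K: {ppi(3840, 2160, size):.0f} PPI')

    # 48" 4K lands at ~92 PPI, the same density as a 24" 1080p panel:
    print(f'24" 1080p: {ppi(1920, 1080, 24):.0f} PPI')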


Thank you! Detailed and extremely useful!

Yeah, I'm never buying a glossy display again though. You helped me make a decision.


The 48" CX is a TV. It happens to work pretty nicely as a monitor if you have enough space (between you and the display). But in a desktop setting where you are sitting 2 feet from the monitor, you'd have to physically move your head to see all the content. OLED on the desktop means 24", 27", 32" OLED displays.


Yep, completely agreed on the sizes + viewing distance. I've been on the fence but I'll restrain myself from buying it. It seems it has quite a few downsides.


Pros: unlimited contrast. Cons: the TV doesn't age well, since it's in the name (Organic).


> Which software exactly is functionally the same yet consumes more resources? Can you give some specific examples?

From this end user's perspective, Google Chat is functionally no different than ICQ or MSN Messenger were 20 years ago. Notifications show up on my phone screen instead of my windows 95 screen. And now we call them emojis and there are more of them. And Google chat has more lag (~1400ms) when I open a chat window from my contacts list. All while the functionality of getting some text from myself to my friends has stayed identical (actually decreased - I can't use custom fonts/bold/etc on the modern version)


Datapoint: I've recently spun up my mom's old G4 Powerbook 12" (running OS X 10.5 Leopard) to rip some old CDs I found, and everything was reasonably snappy. Well, ok, a bit slow, but fine, really. Until I opened Safari. After some 3 or 4 tabs the machine basically ground to a halt. (Fair enough: 256 MB RAM and a spinning hard disk!)

Things got a bit better when I switched off JavaScript. But yeah. The web is basically unusable on older machines. For many use cases all I want is some text, maybe some images, maybe some forms and buttons. How hard can it be to render that?


Is Google chat using more resources and draining perf? The UX of a chat app is somewhat of a straw man here. None of the chat apps you mentioned “can barely fit in a single machine” or “destroys performance”.


ahem slack?

But realistically Google chat uses more ram than the PC I used MSN messenger on even HAD.

And msn was “hoggish” for the time.


Slack is vastly more functionality than MSN messenger.

How do you know the RAM usage of gchat? I agree Chrome uses more (and again it’s a completely different ballgame than it used to be), but where are you getting the chat-specific app metrics?


> Slack is vastly more functionality than MSN messenger.

I don't buy that. Because you can attach bots? or make calls, custom emoji? (you can do all of these on MSN) and more (like theme customisations and font changes!)

> How do you know the RAM usage of gchat?

You can see each tab's memory usage with Safari on MacOS as it's tied into the Activity Monitor.

Google Cloud Console uses 989MB for me, for instance.


> Because you can attach bots? or make calls, custom emoji?

Because you can access an indexed repository of every conversation your team has had, organized by grouping, channel, digression, and participants, even if you weren't present, online, or even a member at the time. What's the MSN Messenger approach to this? Everyone join one giant group conversation window and try to keep the conversation straight, everyone send every file to every person each time, everyone keep every file you've been sent, grep through your 5 years of logs to find stuff, and hope you never need to use a different device or lose your connection? Slack provides the ability to reply to specific messages in digressionary threads, which is a key part of keeping busy channels readable in the present, let alone in the future, and the only reason many conversations can span long periods without getting lost in noise. I shudder to imagine my job relying on staying up to date with an MSN Messenger window with 44 participants.

Then there's integration with the bugtracker and support ticketing system, and Keybase, putting live-updating views from spreadsheets and the database directly into chat, drop-in/out call channels (which aren't comparable to MSN/WLM's direct calls), compatibility channels to migrate from IRC and mailing lists without losing anybody... those don't count as extra functionality? There's a reason professional teams use Slack when they never used MSN/WLM.


Those advantages (offline file storage, sorting by subtopics and threads, access to things from before you were even a member) sounds largely like forum software from ~20 years ago too, which only required decent hardware on the server side. Integrated chat and forum software (with office software thrown in to boot). The original comment I had made was about MSN Messenger being functionally equivalent to Google chat.


> Because you can attach bots? or make calls, custom emoji?

And to add on to that - these features shouldn't have any noticeable resource consumption of the kind people are complaining about. Bots shouldn't affect performance at all. The ability to make calls should have no performance impact when a call is not being made. People aren't complaining about Slack with a call being slow - they're complaining about Slack taking a long time to start up, and Slack using lots of memory and CPU just sitting there doing nothing.


> You can see each tab's memory usage with Safari on MacOS as it's tied into the Activity Monitor.

That's not necessarily a good proxy. Safari, like Chrome and Firefox, over-allocates far more than it needs, speculatively, for caching & rendering purposes. There was a whole blog post on HN that completely bungled memory metrics when looking at the integrated Activity Monitor: https://news.ycombinator.com/item?id=26179817


> 60hz monitors were practically non-existent

?? 60Hz progressive scan was fairly standard for SVGA 20 years ago; people looked for 70+Hz displays because they were bothered by the flicker (my college roommate in 2001 ran at 1024x768@75Hz rather than 1280x1024@60).

800x600@60i was the bargain-bin monitor even in the late 90s. By 2001 you could get a used 800x600@60p or 1024x768@60p monitor for nearly free if you lived anywhere that had companies upgrading to higher resolution.


The 75 hz trick was such a lifesaver in offices with fluorescent lighting.


Terminal emulation and text editors are the obvious ones, notably because DISPLAYING TEXT should be trivially fast in a world of multicore multigigahertz processors and GPUs with ridiculously fast shaders.

Casey Muratori has been running down the insane failings of terminal emulation lately:

https://twitter.com/cmuratori/status/1405342954194051073 https://twitter.com/cmuratori/status/1405347255511486464

Chaser: https://twitter.com/cmuratori/status/1405356794495442945

This is to say nothing of the fact that VSCode is the most popular editor on the planet and that the very notion of allocating several gigs of ram to embed a web renderer ( one that is constantly having to be patched for the annual round of UAF vulnerabilities ) in order to DRAW TEXT is the very height of inefficient idiocy.


> This is to say nothing of the fact that VSCode is the most popular editor on the planet and that the very notion of allocating several gigs of ram to embed a web renderer ( one that is constantly having to be patched for the annual round of UAF vulnerabilities ) in order to DRAW TEXT is the very height of inefficient idiocy.

If you just want to draw text, you can simply use Notepad or Notepad++. VSCode has a lot more functionality, though.


That’s a completely different issue than what parent suggested. Terminals aren’t using more resources, they are just going slow - on any hardware, due entirely to lack of devs engineering for perf. Since Casey’s posts, a bunch of terminals and editors have upped their game, started using the GPU for rendering, and are now both faster and more efficient.

> allocating several gigs of ram to embed a web renderer […] is the very height of inefficient idiocy.

No doubt there’s bloat around, but I think you don’t understand web browser allocation strategies. Chrome, Firefox, and Safari allocate enormous chunks of RAM for both caching purposes and rendering efficiency. It’s extremely difficult to know how much memory is really needed for any given app; you cannot make the assumption that the RAM reported for any given app or page has anything to do with how much the page itself asked for.

You also lack justification for calling memory use idiocy if you aren’t running out of memory. As long as there is free memory in the system, it’s fair game, and does not represent inefficiency.


> Terminals aren’t using more resources, they are just going slow

"Going slow* is using resources. Clock cycles are a resource.

> No doubt there's bloat around, but I think you don't understand web browser allocation strategies.

Using a web rendering engine to render a text editor is still bloated and inefficient. It doesn't matter if web browsers are intrinsically expensive - building a text editor using webtech is the incorrect design decision.

> You also lack justification for calling memory use idiocy if you aren’t running out of memory.

It should be obvious that the people here aren't complaining about applications that take a trivial amount of memory. In 2021, nobody cares about a text editor that consumes 8 MB of memory. People are complaining about applications that do take up a significant amount of memory and cause you to run out.

Mozilla's Firefox hardware report[1] says that just under 25% of Firefox users have only 4 GB of installed RAM. If running on Windows, probably half of that is consumed by the operating system. Suddenly, a 400 MB Electron app is consuming a full fifth of your available memory. That's a real problem, especially for folks that either (1) don't have much money and can't afford a newer machine or (2) want to try to conserve the environment by not spuriously buying new hardware when the old stuff should work.

> As long as there is free memory in the system, it’s fair game, and does not represent inefficiency.

Maybe by your definition of "inefficiency". Most of the people in this thread are (apparently) using it to mean "using significantly more resources to provide functionality than are intrinsically necessary". This is a much more reasonable definition given that you don't actually know how many resources the user has.

[1] https://data.firefox.com/dashboard/hardware


> "Going slow* is using resources. Clock cycles are a resource.

This is not true for terminals, nor in general. You can, and terminals do, go slow without consuming more clock cycles. Latency in terminal rendering, not throughput, is the main reason for them feeling slow. In other words, delays in the system are the bottleneck, not compute.

https://danluu.com/term-latency/

> People are complaining about applications that do take up a significant amount of memory and cause you to run out.

People are complaining about memory usage without understanding why it's used or what it's being used for. Nobody yet has complained about running out of memory, I haven't seen any relevant discussion about virtual memory, memory compression, nor about what browsers actually do when they run low on memory. Turns out, surprise!, browsers can fit many more tabs than you think if you had blindly assumed that the amount of memory your process manager reports will scale linearly until you run out... that's not how it works.


The point is, the web is everywhere and nobody has to explicitly install anything to use a website - especially important for the less techy people that have no clue about "6 MMU wait states". So the browser is absolutely the right platform to build a (rich) text editor, e.g. as a component of a web app. Is it an easy task? No, not really. We still have a half-dead IE; Safari and the other browsers are far from perfect. Is that efficient? No.

There is a ton of cruft, and perhaps the best thing is to draw into a canvas and just not use most of it at all (for better performance in the end). So you are basically pushing pixels from JavaScript (ok, or WebAssembly) in the end, if you want the best performance. Then you reimplement all the work that was already done at least 3 times before by OS, browser and some library/toolkit/framework authors. You have to reimplement e.g. translation using your own dictionaries, because you cannot easily work with the translator of the browser or the operating system (that's why browsers bring their own). You write at least a slightly different CSS for every browser family. You handle stuff like Content Security Policy differently. This is like every second thing that you have to reimplement or make platform-specific adjustments for to make it work somewhat well. That's totally insane. In all this, absolute performance is just really hard to achieve. Do you expect people to ship apps for like 5 different platforms using perhaps 4 considerably different tech stacks, or using 1 but then still adjusting for each platform again / hacking it to look native and not get banned by an app store?

The problem is, people don't take responsibility for their work and almost nobody simplifies stuff. Stuff just somehow works but most of it just isn't solid or it isn't compatible or it is extremely (over)complicated. The next guy using it as a dependency has all these problems on top of the problems of his or her own with the problem at hand. This tree is multiple levels deep. Your pyramid is built on stuff that hundreds of smart engineers basically overlooked/ ignored for decades. Meltdown and friends is just one example, there are HW bugs, there are management engines, there is the OS, the libraries, other software/ daemons/ services, bad APIs, old APIs, deprecated APIs, functionality split between old and new API, bug ridden runtime, inconsistent behaviours of a markup language processing and styling implementation etc. pp.

We all need to get our act together and the harsh reality is, most of us just don't know everything and there isn't time to know everything. We need basic stuff to really just work and be as simple as possible given the problem domain, else we will not progress without gigantic investments.


> Terminals aren’t using more resources

Right as I type this, gnome-terminal has 68MB allocated, of which 57MB is resident. My scrollback buffers are limited to 4096 lines and I have 7 tabs open, four of which are sitting on a bash prompt doing nothing. By ANY measure, this is absolutely insane.

> you don’t understand web browser allocation strategies

You may have missed my point. Rendering TEXT with a library intended to render gmail.com and facebook.com is a ludicrous extravagance.

> As long as there is free memory in the system, it’s fair game

This is my favorite argument. Just buy more ram, bro. CPUs are super smart bro, caches are magical and you'll always get a hit no matter how much you allocate or how fragmented your heap gets. YOLO.

Building software as if DDR4 is just as close to the ALU as L2, that our L2 is fully-associative, that branch prediction and cache hits on vtable dispatch is perfect, then spraying huge numbers of small objects all over the heap, and declaring that all you need to do is buy more RAM, and that halting your application for 6 MMU wait states every dozen cycles is perfectly acceptable.

Yes, waiting 20 billion cycles on an amdahl-adjusted basis to move a scrollbar is definitely the future.


> My scrollback buffers are limited to 4096 lines and I have 7 tabs open, four of which are sitting on a bash prompt doing nothing. By ANY measure, this is absolutely insane.

Why? Is it using CPU while doing nothing? Scrollback might need at least 4096 lines * 7 tabs * 80 unicode chars =~ 5MB, conservatively. You have 7 child shell environments, and perhaps 7 large bitmaps cached for fast scrolling (I don't know what gnome-terminal caches for rendering, just guessing about what's possible). Plus the program code, the UI code, the terminal fonts. You haven't really explained why 60MB seems like too much to you, let alone "insane". It seems like you're just not accounting for all the features.
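
Spelling that estimate out (same assumptions: 80 characters per line, roughly 2 bytes per stored character, and ignoring per-cell attributes):

    # Back-of-the-envelope scrollback storage for the numbers above.
    lines_per_tab = 4096
    tabs = 7
    chars_per_line = 80        # assumes mostly-full lines
    bytes_per_char = 2         # rough average for stored Unicode text

    total_bytes = lines_per_tab * tabs * chars_per_line * bytes_per_char
    print(f"{total_bytes / 2**20:.1f} MiB")   # ~4.4 MiB before attributes, caches, etc.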

> Rendering TEXT with a library intended to render gmail.com and facebook.com is a ludicrous extravagance.

Why? Isn't it possible that, if the text rendering library were customized and smaller, it would then consume more memory, because it would be a 2nd library that can't be shared with your HTML+CSS renderer, which you need loaded too anyway?

The whole reason the library is big is because it does a lot of things. What reason is there for small/simple uses to not use a library that's already there?

> Yes, waiting 20 billion cycles on an amdahl-adjusted basis to move a scrollbar is definitely the future.

Your scrollbar takes multiple seconds to respond? Mine doesn't.

Your hyperbole notwithstanding, the reason that browsers speculatively over-allocate is caching - it is precisely because it’s more efficient and performant that way. I didn’t write Chrome, but I don’t think what they’ve done is insane.


> Which software exactly is functionally the same yet consumes more resources? Can you give some specific examples?

Here is one concrete example: Visual Studio startup and debugging. Source: https://youtu.be/GC-0tCy4P1U

Maybe I could start collecting examples. I never did because it is so obvious. I mean, have you never updated Android/iOS/Windows on the same hardware?

Regarding the "exactly" part in your sentence: Why would more features equal less performance? The only possible reason I can see this happening is if you have features that run in parallel. Otherwise it's just another callback/if etc. That is, something triggered by the user that should have an imperceptible impact (slightly more pressure on the instruction cache) on the performance of other features.


Pretty reminiscent of Grace Hopper's nanosecond lecture [1]

Certainly there are reasons why a feature might slow down a system beyond the branching (for example, if it requires you to load other resources)... but really, the issue is, as pointed out in the YouTube video, that software companies very frequently do not care about performance.

[1] https://www.youtube.com/watch?v=9eyFDBPk4Yw


> Why would more features equal less performance?

"More features" can include things that were not feasible in the '90s. Things that inherently consume more resources.

Icons/images are a simple example.

Of course good software can get plenty done with 16x16 images. Good software may get all that done and more with access to more memory for higher resolution images and more CPU cycles for image manipulation.


> Why would more features equal less performance? […] Otherwise it's just another callback/if etc.

Wait, really? Are you serious? Adding features historically is - by far - the single biggest cause of bloat and losing perf on a given fixed hardware configuration. I have no idea why you’re suggesting that unnamed “features” are “just another callback/if”. What features are you talking about, why are you assuming how they’re implemented?

I don’t think Visual Studio is representative of most software, nor a reasonable demonstration of your argument above against clean code practices. That said, I’ve been using Visual Studio for 20 years, and in my experience it’s significantly faster now to start up than it used to be. The functionality has also changed, so it’s not an example of functionality staying the same while resource usage increases.


> Wait, really? Are you serious?

It should be pretty obvious that many features should have no perceptible performance impact if they're not actively being used.

> Adding features historically is - by far - the single biggest cause of bloat and losing perf on a given fixed hardware configuration.

If by "bloat" you mean binary size - nobody is complaining about that. If by "bloat" you mean idle CPU, UI latency, startup time - that's an engineering failure.

> I have no idea why you’re suggesting that unnamed “features” are “just another callback/if”.

Because that's true. Most features literally have no reason to use CPU if they're not being used. If Spotify adds a new "double-shuffle playlist" option, there is no valid reason that feature should use any CPU except when it's actually being used to shuffle things.

> What features are you talking about,

Most features? Take almost any program, start enumerating the features, and then start partitioning them by what resources they consume while not in use. Let's see, Google Chrome, downloads tracker. Is there a good reason for it to use CPU while you're not clicking a download link? Nope.

> why are you assuming how they’re implemented?

This entire thing is about bad implementations. If a particular implementation of a computationally cheap thing is inefficient, then that's a bad implementation, and it should be rewritten. Nobody is complaining that ML models take a long time to train, or that videos take a lot of CPU to be transcoded, because everyone knows that those things are intrinsically computationally expensive, regardless of how you implement them (although obviously the difference between a good and bad implementation can still shave off massive amounts of time and space).


You're presuming to speak for @miltondts, while contradicting what they said. The argument at the top of the thread was that good implementations are to blame for perf decay.

> It should be pretty obvious that many features should have no perceptible performance impact if they're not actively being used.

That's true, and irrelevant. It's also true that feature bloat is the number one cause of software slowing down. It doesn't have to be all or most features, it's still a fact.

I've lost track of your point. Software is bigger? I agree. Some software is bloated and uses resources it doesn't really need? I agree. You're arguing with everything I say, but don't seem to be trying to have a conversation or to understand or give any benefit of the doubt. Some of the things you're saying are true, I may not be disagreeing with you, a lot of this is just not relevant to either my points or the comments I replied to.

> Google Chrome, downloads tracker. Is there a good reason for it to use CPU while you're not clicking a download link? Nope.

Did you mean to include during downloads to refresh the page, or during scrolling and interaction with the page? I don't understand your example, have you seen the tracker consuming copious CPU? Does this example support your argument about engineering failures somehow? I opened my Chrome task manager just now and my downloads tracker is consuming exactly 0 cycles. Have you seen Spotify shuffle consuming CPU? If not, why did you bring it up?

> Most features literally have no reason to use CPU if they're not being used.

This is completely meaningless, until you name all features and all software, and define what "feature" means in all cases. You have no basis here to make any claims about "most features" of software.

Many features do have reasons to use resources. Caching is a feature that always uses memory when not in use, and caching is absolutely ubiquitous. Rendering is a feature that uses CPU even when the user isn't asking for anything. Speculative downloads, background processes, pre-computation, event driven callbacks, timers... the list of things ("features") your OS and browsers and applications intentionally do when you're not looking is very, very long. Claiming otherwise only demonstrates ignorance of what's under the hood. Naming a couple of cherry-picked features that don't use a lot of resources is not particularly compelling.


> You're presuming to speak for @miltondts, while contradicting what they said.

I am presuming to speak for nobody except myself, by responding to your arguments.

> The argument at the top of the thread was that good implementations are to blame for perf decay.

I don't see that anywhere in the thread. If, by "good implementations", you mean "scalability, portability and clean code" - then you're wrong, because implementations that are highly inefficient are not "good", regardless if they're any of those other things too.

>> It should be pretty obvious that many features should have no perceptible performance impact if they're not actively being used.

> That's true, and irrelevant.

It's completely relevant, because the topic is "performance", and your argument is that the performance losses are caused by added features.

> It's also true that feature bloat is the number one cause of software slowing down. It doesn't have to be all or most features, it's still a fact.

...a claim which is so vague as to be unprovable, and for which you have provided absolutely no evidence whatsoever - so, no, it's not "still a fact".

Meanwhile, I can make any number of arguments from first principles as to why there's no good engineering reason why many of the added features of modern-day programs should cause the massive increase in used resources over their older equivalents.

I'm willing to bet that you cannot point out more than two or three features in Discord, Spotify, Teams, Atom, Slack, or other similarly bloated applications that actually necessitate their ridiculous resource consumption. As in, O(n) for space and time use. (hint: none of these programs are solving the traveling salesman problem)

> I've lost track of your point. Software is bigger? I agree. Some software is bloated and uses resources it doesn't really need? I agree.

Perhaps I should have stated my point more clearly: the main reason that modern programs are inefficient is because they're implemented with inefficient technologies, most relevantly Electron (and webtech more generally), and not because the added features that they bring are intrinsically computationally expensive.

> You're arguing with everything I say, but don't seem to be trying to have a conversation or to understand or give any benefit of the doubt.

A "debate" is different than a "conversation", and this isn't surprising. I'm not trying to have a conversation, I'm trying to debate points that you're making. Not everything can or should be a "conversation". Moreover, there's no "benefit of the doubt" to give - I'm not assuming that you're being malicious, I'm just asking for actual empirical evidence and/or argument from first principles - neither of which you're giving.

> Does this example support your argument about engineering failures somehow? I opened my Chrome task manager just now and my downloads tracker is consuming exactly 0 cycles. Have you seen Spotify shuffle consuming CPU? If not, why did you bring it up?

It should be pretty obvious that I brought those examples up to illustrate examples of features that should not use resources while not actively being used.

> Many features do have reasons to use resources. Caching is a feature that always uses memory when not in use, and caching is absolutely ubiquitous. Rendering is a feature that uses CPU even when the user isn't asking for anything. Speculative downloads, background processes, pre-computation, event driven callbacks, timers... the list of things ("features") your OS and browsers and applications intentionally do when you're not looking is very, very long.

None of these are "features" - these are all implementation details. A "feature" is the downloads page in Chrome, or the shuffle feature in Spotify.

> Naming a couple of cherry-picked features that don't use a lot of resources is not particularly compelling.

You haven't been able to name a single actual feature that has a technically valid reason to use significant resources simply by being added to a host program, let alone one that's relevant to the highly-inefficient applications that everyone throws around (Discord, Spotify, Teams, Atom, Slack). My features, while individual examples, are miles better than the literal nothing that you have provided.

And, for emphasis: I'm willing to bet that you cannot point out more than two or three features in Discord, Spotify, Teams, Atom, Slack, or other similarly bloated applications that actually necessitate their ridiculous resource consumption. As in, with big-O notation for space and time use. (hint: none of these programs are solving the traveling salesman problem)


> I'm not trying to have a conversation, I'm trying to debate

I can tell, please consider relaxing a little. I appreciate the time you put into responding to me, but FWIW your extra long reply is almost completely straw man from my point of view, and getting unnecessarily aggressive and hyperbolic now. I don’t need to debate this, because you’re not actually addressing my points, because you haven’t actually understood what I’ve said, because you’re trying so hard to debate. This conversation wouldn’t have gone down like this face to face, and as engineers there’s a pretty good chance we’d agree. Good luck, TTFN.


> Which software exactly is functionally the same yet consumes more resources? Can you give some specific examples?

Everything that uses NVIDIA drivers on Linux, e.g. through OpenGL. Just bringing up an empty window takes over a second these days, while it was <200ms just a few years ago, even with spinning metal disks instead of SSDs.


So the idea that performing the same daily tasks on modern hardware feels about the same speed as before (and sometimes slower), doesn't have a ring of truth to you?


No. Modern computers seem considerably faster to me than the ones I was using in the '90s and early '00s, especially computers that are 4-5 years old running recent software. I remember a lot of choppiness and spinning hourglasses which are much less common today.


That's how I remember the late 90's and early 2000's specifically, which I think was an especially dark age for the speed of the overall software + hardware combo. But if we go a bit more recent (say 2005 to today) I mostly haven't perceived a gain in speed.

That said, I think in the last very few years it's turned around a bit, mainly due to SSDs. In my experience most times that software is painfully slow (in any era) it's because you have too many things open and you're swapping. And SSDs provide acceptable speed even when swapping.

In other words, I find modern machines to be roughly as slow as ~2005, but only up to the very recent years when SSDs have become the norm. Now I'm starting to finally feel things being faster overall.


My daily-driver linux UI (fvwm something or other, whatever the fvwm that comes with Debian 10) is noticeably slower than my daily-driver linux UI as per 1997-2002 (gwm). Which is somewhat ironic, as gwm was basically an interpreter with a bunch of built-in X11 manipulation primitives.

But, back then, getting the "desktop context menu" up took approximately no perceivable time, whereas these days I notice time passing. I don't think that is because my reflexes have improved massively in the last 20 years.


We have SSDs now. That alone made a huge improvement.


>Localization didn’t exist, web analytics wasn’t really a thing, browsers didn’t run background tasks.

Maybe the question to ask is how much of this is to the benefit of the end user versus building ever more sophisticated adtech.


I think localization & background tasks are more or less 100% to the benefit of the end user. Web analytics was just an example of more functionality that goes unnoticed because it’s not hurting performance. I’d agree analytics is building more adtech, though I feel like that’s a cynical glass-half-empty framing. Analytics have also helped huge numbers of sites to improve UX by understanding how users behave and where they get tripped up.


We don't even need to look 20 years in the past. Software performance has degraded significantly in just the past handful of years. Compare web applications like Facebook, Jira, Twitter and Reddit to the exact same websites 5-7 years ago. They have all implemented horrendously over-engineered, buggy and most importantly dog-slow single page apps to offer the same basic service.


> Which software exactly is functionally the same yet consumes more resources?

This is irrelevant, as it's extremely difficult to find any two pieces of software that are "functionally the same", regardless of their age of release or performance - and, moreover, because we're programmers, we can make informed estimates about the minimum performance impact of most features without just guessing.

For instance, string localization should have no perceptible performance impact - it's literally just a key-value lookup (O(1) with a hash table). The additional features that Spotify provides over Audacious (a local Linux music player) are either not intrinsically computationally expensive (fetching audio over a network) or are done server-side (playlist recommendation). Discord has no technically valid reason to be taking up 15% of a CPU core while sitting in the background. And so on.
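
To make the lookup claim concrete, a minimal sketch (the catalogs and keys here are invented for illustration):

    # Minimal sketch of string localization as a hash-table lookup.
    CATALOGS = {
        "en": {"greeting": "Hello", "farewell": "Goodbye"},
        "de": {"greeting": "Hallo", "farewell": "Auf Wiedersehen"},
    }

    def translate(key, locale="en"):
        # two average-O(1) dict lookups, falling back to the key itself
        return CATALOGS.get(locale, CATALOGS["en"]).get(key, key)

    print(translate("greeting", "de"))   # -> Hallo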

> Today 4K is normal, and combined with refresh rates and colors, we’re pushing upwards of 2 orders of magnitude more data to our screens.

We have GPUs that now do all of our graphics rendering work for us. The features implemented by Spotify, Slack, Discord, and Visual Studio Code do not intrinsically require enough graphical resources that they should stress even an older integrated GPU, let alone take 5 seconds to start up on CPU and RAM that are, similarly, one to two decimal orders of magnitude faster than what we had two decades ago.

> YouTube and Netflix didn’t exist 20 years ago. Browsers were incredibly slow and couldn’t support streaming video or anything but the smallest of applications written in JavaScript. Localization didn’t exist, web analytics wasn’t really a thing, browsers didn’t run background tasks.

None of this is relevant to applications. Browsers were slow back then? Fine. Browsers are faster now? Also fine. That's still irrelevant to the fact that many applications are slower now, on much faster hardware, than they were then.


> This is irrelevant, as it's extremely difficult to find any two pieces of software that are "functionally the same"

Yes, indeed. What you're illustrating is why the post I replied to is a straw man argument. Most software, as you say, is not functionally the same anymore.

> string localization should have no perceptible performance impact

Yes, my argument was that we got localization without a noticeable perf impact. But it does consume memory, bandwidth and code size, and as you correctly point out, a small amount of compute. I would venture to suggest that rendering Asian fonts is a tad more involved than an O(1) hash table lookup. Localization is just one of hundreds of features we have standard now that we didn't have 20 years ago.


> Which software exactly is functionally the same yet consumes more resources? Can you give some specific examples?

Operating systems, particularly Windows. Given fixed hardware, you cannot expect an operating system to continue working indefinitely.


Windows is not functionally the same between branded versions. Even if the NT kernel was shared across some branded versions over time, for example, new features were added and underlying functionality was changed.


The question is how much those features were worth relative to their cost (i.e. whether a given feature is bloat). If you don't have a cost constraint, then yes, you get feature diversity, but the cost of any feature can be arbitrarily high.


Whyever not.


Windows 95 needed 4MB of memory. Windows 7 needed 1GB. Try running Windows 10 with less than 4GB.

They all run processes, do they not?


> Which software exactly is functionally the same yet consumes more resources? Can you give some specific examples?

Instant messaging. Even if we ignore applications like the original MSN Messenger, Trillian, etc., the original Skype before Microsoft's acquisition was much faster. Sure, it wasn't as fast as it could be - but it still didn't have an entire browser wrapped around it, nor did it need (checks) 350MB of RAM and 7 background processes just to display an icon in the tray so I can send text messages and perhaps the occasional video call once every two years.

And to be honest? I do not really find it that weird if a program needs some extra resources to make a video call - that is exactly why LoadLibrary and (especially) FreeLibrary exist. But in practice that is never done, or when it is done, it is piled on top of an overengineered system to the point where it loses any benefit under the weight of the architecture astromancy.
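
To make that concrete, here is a minimal sketch of the on-demand loading I have in mind - the DLL name and exported function are made up for illustration, not any real messenger's module:

    #include <windows.h>

    // Hypothetical module/export names, purely to illustrate the idea:
    // pay for the video-call machinery only while a call is in progress.
    typedef int (*StartCallFn)(void);

    int RunVideoCall() {
        HMODULE mod = LoadLibraryA("videocall.dll");   // load on demand
        if (!mod) return -1;

        int result = -1;
        StartCallFn startCall =
            reinterpret_cast<StartCallFn>(GetProcAddress(mod, "StartCall"));
        if (startCall) result = startCall();

        FreeLibrary(mod);                              // give the memory back
        return result;
    }

The point is simply that the heavy module only occupies memory between LoadLibrary and FreeLibrary, i.e. while a call is actually happening.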

> if you don’t know or don’t pay attention what’s under the covers.

Or don't care.

Don't forget the don't care part! It is very important, because a lot of the bloat is presented as "functionality" (which, assuming most people even care - and that's assuming they'd know what they "pay" for it - often doesn't even need to be as heavy and bloated as it is).

> One easy to overlook difference is displays.

Aside from resolution, physical dimensions and thinness, displays are worse nowadays. Compared to CRTs they have bad contrast, often bad colors, increased latency (even the ultra-fast 144Hz+ ones), a fixed resolution with awful results if you try to use anything but the native one, etc. Note that I'm writing this from a 165Hz 27" 2560x1440 monitor BTW, with a few CRTs next to me connected to older systems. And BTW:

> It’s easy to forget that 20 years ago we had 800x600 screens, 24 bit color was not ubiquitous, and 60hz monitors were practically non-existent.

That'd be 30 years ago, not 20. 20 years ago was the year 2001 :-P. While 800x600 was indeed common, it was pretty much as common as 1024x768. And monitors at 60Hz were not common because those would hurt your eyes - even standard VGA, released in the 80s, runs at 70Hz. But by 2001 pretty much every monitor was capable of 85Hz and several went much higher (e.g. 120Hz for mine).

> Today 4K is normal

4K is a tiny minority that is barely a blip in statistics. Here, check this:

https://data.firefox.com/dashboard/hardware

This is Firefox's hardware statistics. The most common resolution, by far, is 1920x1080. The next most common? 1366x768! Together they make up about 64% of every web-capable computer out there! And the resolutions after that are actually lower than 1920x1080 until you reach 2560x1440 at around 2%.

Even among gamers, where 4K has meme status, the Steam Hardware Survey shows that only around 2.44% have a 4K monitor. There are more than three times as many gamers using monitors below 1080p that aren't 1366x768 or 1360x768!

It will be several years before 4K becomes the norm on the desktop. Right now it makes way more sense to think about something like 1366x768 than about 4K.


> A world where some websites still don't work properly across web browsers.

The situation today is vastly better than in the days of IE6.

> A world where clean code destroys performance

Clean code does not mean code that is not performant.

In my mind, "clean code" is foremost small methods that do one thing, and are well-described. These small methods will often be optimised away, in the unlikely situation where they were the bottleneck in the first place.

In my experience messy code "for performance" reasons was usually code that had claims to being "fast", but was often incorrect and hard to fix.


It could be OP is referring to multiple layers of (unnecessary) abstraction.

In Android land for example, there is a popular "clean" 3 layer architecture, where model classes are blindly mapped multiple times (even in cases where this is suboptimal).

I have lost count of the people building "clean" but inefficient caching mechanisms instead of just using an HTTP cache.

Side note:

I believe these things are useful in some situations. Maybe the solution is to have a smart compiler that compiles out these inefficiencies?

Some (most?) apps are just skins around a database. Amazing what Facebook did with the Messenger rewrite [0]

Is there an incentive problem? Generally, people get bonuses/accolades for making a slow system fast, not for keeping a system consistently fast. I admit the former is easier to measure.

0 - https://engineering.fb.com/2020/03/02/data-infrastructure/me...


Maybe it's my bias but I took "clean code" not in the traditional sense, but to mean insisting on patterns perceived to be "pure" or "clean" even when their application harms performance or isn't a great fit for the host language (ahem, JavaScript and functional + immutable-datastructure programming) and makes it even easier for devs to footgun into bad performance while staying "pure".


Yes, I should have typed "clean code". It's exactly what you and the parent said. It's about patterns/architectures/abstractions that we consider to be good/beautiful, but with very little regard for performance ("just throw more or better hardware at it, that is not our problem"). What really rubs salt in the wound is that the people who advocate them are quite loud, they seem to be growing in number, and there is a massive lack of evidence for the supposed benefits, while the drawbacks are clear and easy to measure.


I don't think that's what they meant but it's a good point! I've seen embedded developers write some...truly terrifying code though.


Function purity is a mathematical term with a clear definition and doesn’t deserve the double quotes which imply some vacuous definition or uncertainty.

Assuming you are referring to “pure”, in the context of functional programming and isolation of side effects.


Was what I was referring to actually ambiguous?


What is ambiguous are the details about how you think that JavaScript and functional + immutable-datastructure programming is detrimental.


Oh, that's totally different from what you posted about before. OK.

Allow me to quote me, to clarify this part, then.

"when [they aren't] a great fit for the host language"


What remains ambiguous is how it is not a great fit, in that specific case.


If the OP meant "multiple layers of unnecessary abstraction" I would agree with them.

However "clean code" connotes to me the techniques advocated in "Clean Code" by Robert C. Martin, which amongst other things, advocates small descriptive functions.


> Maybe the solution is to have a smart compiler that compiles out these inefficiencies?

Compilers are software. A "smarter" compiler is probably going to consume more resources.


True. But at least that cost is paid once per compile per developer, which is a far smaller number than once per use per end user.


> In many cases, this software doesn't even do anything that is functionally different from 10 or 20 years ago, but it still consumes many more resources

I would argue that in most cases the choice is not between fast, optimized and slow, unoptimized software. The choice is between having a functioning program (using a universal, "reasonably" efficient stack) and not having one at all (because the amount of resources it takes to develop a customized, low-level implementation is prohibitively high).

A good example: https://www.youtube.com/watch?v=tInaI3pU19Y A custom 3D engine performs immensely better than a comparable Unity project, but it took the author 3 to 4 times the effort.


> Yeah, all this concern with scalability, portability and clean code, has created quite a dystopian software world.

I don't see it having anything to do with scalability, portability, or clean code.

It's just cheap and (for the developers) convenient. Developers think that their software is a gift to the world, and that simply getting it made (no matter how cheaply, crappily and user-hostilely) is a net positive. (I wonder how many people use all these bloated things only because they pretty much have to, due to peer pressure, company requirements, etc.)

It is possible to make software that scales, is clean, portable, and doesn't rely on massive bloat.


I think scalability, portability and cleanliness is mostly talk. The reality is that browser based frameworks like Electron are popular because developers who only know js are cheaper and the first half of the lifecycle is cheaper and faster while the second half is often either dropped entirely or someone else's problem.

Of course a JIT compiler will never be as performant as precompiled binaries, especially when it's running in a full-featured web browser that is packing functions the developer neither needs nor uses.


I'm not convinced React Native developers are cheaper than C/C++ developers. At my last job, the Python and C++ devs were on identical salary bands.


The reason Electron took off is because it was the first to offer an easy way to build cross platform apps. Something the systems guys have failed to deliver for many years.


Qt, wxWidgets, hell even Java's UI toolkits were all cross-platform long before Electron crawled out of the swamp.

Even GTK is cross platform!


> an easy way

That's the key phrase, and it means easy way for front end web devs, those who have forced the entire toolchain to dance to Javascript's tune because everything would be easier if we had one language to rule them all.


What’s the easy way to build cross-platform GUIs for people who aren’t frontend web devs?


There are plenty of mature cross platform GUI frameworks to choose from, they're not going to go away. Pick one, and like any other framework, if you stick with it you will soon find it easier than using something else.

Even Zig has Qt bindings[1] and that's still enough of a hot young thing to regularly reach HN's front page, so it's likely that other popular languages that are older have bindings, no need to devolve to using something JS devs crave. There's wxWidgets[2] or Shoes (I loved using Shoes, thanks _why!) and many more[3].

[1] https://wiki.qt.io/Language_Bindings

[2] https://en.wikipedia.org/wiki/List_of_language_bindings_for_...

[3] https://en.wikipedia.org/wiki/List_of_widget_toolkits


Build n native apps, make them look the same and pretend it's cross platform. :-)


I've seen people say stuff like this before. Do you actually not realize there have been cross platform GUI libraries for the last thirty years?


There are tons of easy ways to build cross-platform apps, and have been for a long time. Extreme, very free-form customization of GUI elements so it looks & behaves exactly the way your marketing, branding, & web-design folks want down to the pixel on every platform, is what was hard before Electron. If you just wanted to deliver a GUI applications cross-platform, full stop, that's not something you couldn't do before, fairly easily, especially if Electron's performance and integration with its host platform is considered acceptable for what you're doing.


What alternatives? How many of them include the web as a platform?


Now you are moving the goal posts to "web as a platform". Web as a platform is not what you originally said and "web as a platform" is done well with web pages.

People talk about electron being 'the only way to create cross platform apps' and ignore fltk, juce, qt, gtk, wxWidgets, tk and a whole host of other solutions, not to mention making a local webserver, opengl guis etc.

When someone only knows javascript they might want to work with electron, but no one who is going to use that software wishes it was made in electron. That's the bottom line, people don't want it, it is a selfish decision that creates terrible interactivity and grossly bloated software.


I'm 100% convinced the appeal of Electron is that it makes chasing the latest UI trends month-to-month as easy on the desktop as it is on the web. Designers and businesses (and lots of developers) consider this essential, these days, for whatever reason. The above (yes, including the business side) will all complain about how a real desktop program using built-in or otherwise sane-and-stable GUI elements that haven't changed for 5 years looks too "old".


How many popular desktop apps use anything resembling native GUI elements?

Not browsers, not photo editors, not games, not many chat applications. Certainly there must be some big ones that do right?


Now you are moving the goal posts again from 'no other way to do cross platform apps' to 'what uses native GUI elements'.

The dozen cross platform libraries that have been around for multiple decades are all 100 times faster than Electron, that's the point.


I didn't move any goal posts. Here I'm responding to a different comment. This time regarding gui elements and user experience.


Java and its various GUI toolkits (again, especially if Electron's performance isn't bad enough that it's off the table). QT. TK. Wx. Delphi and some others if you didn't mind paying.

Obviously none of them include the Web as a platform (unless you count Java applets, which you shouldn't since they're basically dead, even if they'd seem high-performance and quite nice compared to what we've replaced them with).


Why is that obvious? The only thing that’s obvious to me is that if these cross-platform GUI frameworks don’t support the web then almost no one who needs to build a cross-platform GUI will have any use for them.


Because I was responding to "The reason Electron took off is because it was the first to offer an easy way to build cross platform apps".

If you're defining "cross platform" as, strictly, "the Web", then I'm not really sure what kind of exchange you're trying to have or what you're trying to add, here. Obviously only the web satisfies the criterion of being the web. Which makes me wonder even more why you didn't find that obvious.


I'm certainly not trying to define "cross-platform" to mean "the web." I'm just pointing out that the web is a huge platform and for many use cases is arguably the most important platform for a cross-platform GUI framework to support.


"Webtech's (Electron, specifically) the only reason cross-platform desktop GUI programs are now cheap & easy enough to be viable"

"Well, no, we've had that for a long time, especially if you consider the performance compromises of Electron or in-browser 'apps' to exist within the acceptable range. Electron does make mimicking this year's UI trends as easy on the desktop as on the Web, and doing so on every platform at once, and that was hard before."

"Right, but do those earlier solutions support the web as a platform?"

"... huh?"

See why I'm confused about where you're taking this? It's like someone claimed that airplanes were the only way to travel by machine, and I pointed out that, for one thing, trains existed before that, and now you're asking me if trains can fly. No—and, again, obviously so unless we've got some seriously different life experience, here—but that's... a complete non-sequitur to the conversation already in progress.


I think you’re conflating two very different things.

The first thing is using web technology to build apps that are distributed as native apps on many platforms (which works because there are open source web browser engines that already support most computer platforms). This is what Electron is.

The second thing is for any cross-platform framework to support targeting the web, because the web is itself a very huge platform. This is important, because not all platforms and situations support installing native apps. If you want your cross-platform app to work on web browsers, you need your cross-platform app to support targeting the web. This might be important if you want your app to work in computer labs, for example.


Why does "the web" have to be a targeted platform? It isn't one in reality, Chromium, Safari and Firefox are different enough that you have to target them specifically.

Seems to me the desire for having "web" as a platform isn't so you can also support a browser application. It's so you can make it a web app and call the job done.


So what do we do about it? Even if Electron doesn't give feature parity across OSes, it still has some other huge wins. It allows you to use "webdev" talent to build GUIs, and such developers are (I presume) easier to hire. They can use their favorite frameworks. It seems like SPA-esque apps have some agility in development over other native approaches. It makes updates, CI/CD, and testing easier.

Even if not prioritizing cross-platform, Electron has some serious appeal.

What are my alternatives? Qt? Gtk? Native libraries?


It's a shame that cross-platform GUI toolkits seem to have fallen out of favour somewhat (as in, native toolkits like Qt or GTK). I wish it were not so. I can see why the electron route is so tempting, I just really hate them as a user.

I wrote a bunch of GTK desktop apps with Python like 15 years ago and they were snappy and you'd never even know it was Python behind it, but I wouldn't do that now. Too hard to distribute, bindings a bit clunky, lots more cross-platform testing, it's hard to look at the relative levels of effort and not take the faster path.


A Qt install with all the bells and whistles is about 500 megabytes, so it’s very comparable in size to Electron. Qml is a lot easier to develop in than Node+browser though.


Size on disk isn't, or shouldn't be a problem today. In an ecosystem where there are games that take up over 100 gigabytes of storage, few apps should be large enough to make a user double take. CPU and Memory usage on the other hand can be serious concerns and one where Electron apps have a serious problem compared to traditional approaches.


I’m not one with much experience, but I rewrote my Qt based code for a small python app into C++ and the results were amazingly better. I would bet GTK might be similar!


Qt is pretty good, but honestly since React came out I can't code a GUI in any other way. Reactive is the way to do (application) GUIs, period, and anything else feels like clunky bug-prone spaghetti.

I put application in parentheses because other paradigms may make sense for high performance rendering pipes like games.

Honestly I don't think HTML5/CSS/JS is that bad as a pure presentation layer. It has warts as they all do, but it can be made to look decent and is fairly productive. The bloat is mostly the fault of Electron specifically. Tauri and Sciter are much leaner, in some cases leaner than Qt depending on the app.


> What are my alternatives? Qt? Gtk? Native libraries?

I don't know what the development side is like, but as a user I do find Qt apps to be fairly nice.

I also think Java (Swing) apps are basically fine. Java was "bloated" back in the day, but not relative to what else is in use today.


Hm, Jvm also gives access to Kotlin and Scala, I'll definitely have to look into it.

Qt development can be excellent or nightmarish, sometimes both. I don't know enough to know why the disparity but I've been adjacent to teams that love qt and ones that loathe it. I'm guessing a big aspect is state management; qt predates the reactive revolution, and IIRC is prone (as is any gui) to callback hell.


Overall Qt is quite nice and works portably across platforms. You're right about callback hell but there's a fox for it. Personally I prefer to create a bunch of overloaded utility functions for different widgets and data types, and in each utility function I use QSignalBlocker to make sure I'm not accidentally invoking any callbacks. I consider any "raw" Qt "setXYZ" call to be a bug.
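
For the curious, a minimal sketch of the kind of helper I mean - the function name is mine, QSignalBlocker is the real Qt class, and you'd have one overload per widget/data type:

    #include <QCheckBox>
    #include <QSignalBlocker>

    // Update a widget programmatically without firing its change signals.
    void setCheckedSilently(QCheckBox *box, bool checked) {
        const QSignalBlocker blocker(box);  // signals blocked for this scope
        box->setChecked(checked);
    }                                       // blocker destroyed, signals back on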


Huh, neat. What's a "fox" in this context? I tried searching "qt fox" and got some, ahem, less than salient results.


Typo for "fix" would be my guess.


Just as a fun test a while ago I made a small desktop app with scala and scalafx/javafx. It's not bad, gives you some of the nice things you get from js guis (reactive rather than loads of callbacks etc).


> What are my alternatives? Qt? Gtk? Native libraries?

Yes. You are not a Javascript Programmer, or a Frontend Programmer; Choose the right tool for each job.


> Yeah, all this concern with scalability, portability and clean code, has created quite a dystopian software world.

I don't think scalability, portability, and clean code are the enemy.

C/C++ are as portable as they ever were, and Java/.NET provide higher-level abstractions that work across both Windows and *nix systems. Even Electron doesn't have to suck, in theory.

The reason performance is bad these days is because developers are just bad. The number of developers doubles every five years. Ergo, half of all developers have less than five years of experience. The vast majority of them don't have a CS, CE, or EE degree. A large number of them went through bootcamps (which are an expensive version of the old "Learn VB in 21 Days!" books.) And they're all writing the Electron apps we love to hate.


As a college drop out I'm pretty sure I know more about writing efficient code than most CS grads - I've seen what gets taught, it's not impressive. I've also seen bootcamps (haven't attended, but friends have) and while they don't teach about efficiency, they're far more rigorous than college was. Anyway, I don't want to get into this really, but I found this part of your post to be annoyingly off-base, and I resent it.

That said, I'm in agreement that most devs don't know much at all about performance. Frankly, we're told not to... it's a sad state. Performance is explicitly referred to as a last priority, the "optimization is the root of all evil" quote is completely taken out of context and parroted everywhere (go look up Knuth/Hoare's context if you haven't), etc.

All that is to say, the issue is not specific to education but is an unfortunate part of engineering culture. We, as a large group, explicitly neglect performance and treat it as a dirty word.

IMO we need a sort of performance revolution the way testing got one. I've written lots of software in a 'benchmark driven' way, and the results are extremely fast programs - I've gotten programs down from just under 1ms to a few dozen nanoseconds like that.


I think it's the growth in data that, when combined with accidentally bad algos, suddenly starts to drag down the perf.


> Yeah, all this concern with scalability, portability and clean code, has created quite a dystopian software world.

All of these three - when interpreted reasonably - coincide and even constitute performance improvements:

* You make fewer assumptions, including about the speed of things, when you make your code portable.

* When your code is clean, it's easier to generalize speed benefits rather than when you need to apply some ugly spaghetti hand-written optimization in 20 places.

* Scalability means scaling up, out but also _down_ from the machine you're currently using. How else would you run your, say, distributed search web app on, say, a cluster of RPi 0's?

... Of course, if your approach to scalability and portability is to stick things in a container or vm, and manage a fleet of those, then yeah, that's kind of a problem.


This isn't a problem of clean code and software, its a problem of walled gardens.


I did exactly that (well, almost that - I downclocked my Phenom II to 1.7GHz and disabled all but one core) when I wrote my first commercial app back in 2016.

It ran great with a few optimisations (or more specifically, removal of some really dumb code).

It seemed really obvious to me to ensure it would run well on a broad array of old hardware, and I suspect this is because of all the time spent on OCAU as a teenager trying to get the cheapest hardware possible to play the latest games.


Your point is solid and I had the same philosophy for a while but I think we should draw the line of "minimum supported CPU" a bit further into the future for one simple reason:

Ecology.

A lot of the older computers were awfully power-inefficient (and also emit a lot of heat).

So IMO we should replace "Pentium 4" in your statement with something like "Celeron J4115", or some of the Atom CPUs, maybe?


My guess (and it is a guess) is that the extra heat / electricity consumption is pretty negligible compared to the amount of energy and raw materials needed to manufacture and distribute a new computer.


It's smaller, but not actually negligible. An old but oft-cited 2002 study said a laptop with a three-year lifecycle took about twice the energy to manufacture as to operate. Silicon is much more energy intensive than other materials, with fab energy consumption apparently relatively steady over time, on the order of 1 kWh/cm^2 of silicon processed.

So, there is a high bar for replacing a computer with a new one to actually save net resources, but an actual 10x reduction in power consumption like replacing a P4 desktop with an RPi is big enough to pay off reasonably quickly.


Quite interesting, thank you!

I am even more interested in how the formula works out if you buy one professional laptop (say, Lenovo X13) with a Ryzen CPU -- because they consume less power -- and I am looking forward to reading such an analysis sometime in the future.


Interesting - thanks for providing data to contrast with my hunch!


There are ALSO a lot of old cell phones and raspberry Pis and things around. These computers use very little power and are still quite capable. There's NOTHING preventing them from being useful for a wide range of tasks except for software bloat, or closed platforms. It would not be hard conceptually to build computers to prioritize power consumption, it's simply just not a priority.


I would love to see and use such machines but as you said, the software isn't there. Sadly nobody is paying to write or improve it.


True, but as I said in another sibling comment -- the machines get manufactured anyway (since the production capacity is mostly optimized around the needs of big customers) so I might as well get a much greener machine now and hold on to it for 10+ years because I will burn less CO2 while I am using it.


That logic makes the "re-use" part of "reduce, re-use, recycle" redundant. You're part of a society that, when acting roughly in tandem, can make large-scale changes. If everyone followed your logic nothing would ever change and, if I'm honest, it sounds more like a rationalisation for indulging in new tech (an impulse which I share!)


I believe you're frustrated with the world's stance towards ecology -- for good reason. I am too. Almost nobody who can make a true difference gives a f_ck. But I am looking at it historically and holistically while you seem to want to argue theoreticals that don't have basis in our current reality.

Let me be 100% clear here: I am sharing your stance in general but there are complications I am forced to consider and act accordingly with:

- Software gets slower with time, so I have to upgrade. Me upgrading only once every 7-8 years is, I think, a fairly heroic effort on my side. I've seen businesses that blindly upgrade everyone's laptops every 3 years, zero questions asked.

- You and I have no recourse whatsoever against the big players who make disposable tech. You think I don't want them fined all the way to bankruptcy and even jail? I would love that. But we can't make it happen.

- My "logic" is simply of a family man with a ton of responsibilities and a pretty demanding job. Please don't make me the villain because I am doing my very best just to function and have a few precious relaxing hours per day. Most of the common folk will never be willing to sacrifice the little "me time" they have just so they free some ecological bandwidth... which will be quickly consumed and re-balanced (in the wrong direction) by those who create huge and environmentally disastrous manufacturing facilities.

- I love to indulge in a new tech but I have mostly tamed this wrong impulse. Doesn't mean I have to hold on to inadequate machines until they fall apart in my hands however.

---

Again, I get where you are coming from but please don't vent on me for doing the best that I can without sacrificing all the comfort and free time that I already don't have much of.

There are much bigger villains out there that deserve your frustration more than I do.


I'm sorry that came across as a vent - it certainly wasn't intended that way. I'm not trying to make you the villain, I'm trying to gently point out that the logic you're using would, on the surface, excuse a lot of people just not trying. Now you've expanded on that I agree with much of what you say - everyone needs to take things at the pace they can handle - and my life sounds similar to yours, so I totally get the lack of bandwidth.

There's a podcast that my wife listens to quite a bit called Outrage and Optimism, about the climate crisis, and it's essentially my approach - I am angry about the lack of leadership by governments, but I also feel like people need to be optimistic about what we can achieve together, including pushing governments to act.

I took part in the initial Extinction Rebellion protest in London, and although I don't think it achieved much in immediate concrete terms, and I feel like the organisation is going backwards now, I do think it galvanized a lot of people into believing there are enough people out there who want meaningful action that speaking up is worth it. I was handing out leaflets to people from all walks of life - from a guy in a sharp business suit to a local building foreman who runs a vegan group - and only got one person out of hundreds who thought it was pointless. Most were actively enthusiastic and felt glad that there were lots of other people who shared their concerns.


Yeah, I don't disagree with you at all. Truth is, people would use anything that sounds like logic to them to excuse themselves from not helping even a little. Sad fact of life.

It's really cool that spreading awareness works! I just wish we collectively as a civilization would finally move to the next stage after it, because for as long as I've been around (I am 41 y/o) people have mostly only been spreading awareness. Guess I am getting old and jaded, because I'd like to see some action on these extremely important topics one day.


I would add that it's okay (or, at minimum, better) to recognize that you personally upgrade more often than you should, while also realizing we should maintain support for older hardware so everyone else isn't also forced to upgrade too often.


Agreed on that. People who can't upgrade often shouldn't be ignored. Slack and Teams should work non-intrusively on laptops with 4GB of RAM.


Teams doesn't consistently work well on my machine with 64GB of RAM, so it's not simply a memory problem. Slack I've for some reason never had a problem with, even on a laptop with 4 GB of RAM. But to be fair I'm only on a handful of fairly low volume slack channels.


You save more (money and resources) by not buying a new machine.


You assume that my goal is to spend the absolute minimum sum on tech over my entire life.

That's not it. My goal is to spend the minimum realistic sum for tech that enables me to do my job well and long-term, so I am financially free and help the businesses that hire me, and keep improving my craft (which I have loved ever since I was a pre-teen).

That doesn't equal holding on to a MacBook Pro 2012 until 2030. It equals keeping an old machine around to check if the code in the final version of my current PR is well-optimized -- but I work on a much stronger machine because stuff like LSP and re-running tests is crucial for productivity. And we all know that most dev tooling is generally extremely demanding.


Many of those older computers are still somebody's first computer somewhere around the globe, at least as long as they keep working.


Sure, not contesting this, but I meant my comment more along the lines of:

If I am to gauge my app's speed on an anemic hardware I'd prefer running it on a modern Atom or Celeron because (a) indeed they're anemic but (b) at least are more eco-friendly.


Why does the eco-friendliness of the processor you're testing on matter at all? I can guarantee you that your users are not going to go out and purchase less power-efficient CPUs themselves just because you tested on a power-inefficient CPU yourself.

And, the amount of electricity that you, the developer, spend testing your code will almost always be eclipsed by that of your users if more than a dozen others use your tool.

So, again: why does it matter how power-efficient the CPU you're testing on is, as long as it's slow (to produce the proper throttling effect)?


You are right, they aren't related per se.

I was alluding to replacing the idea of holding on to a Pentium 4 with buying a modern, more eco-friendly, but ultimately just as anemic, CPU.

Not a perfectly representative test, sure, I just felt that it was a good compromise between "test your code on weak machines" and "be environmentally responsible".

So yep, I did conflate weak CPU with low-wattage CPU, you are right.


If you're going that route, I'd focus on small & cheap SBCs. A bare, credit card sized ARM computer is probably going to be more ecological (both in manufacture and operation) than an Intel PC.


Yes, and I do just that where I can. One of my home servers is exactly an ARM SBC and I am very happy to see it idle at <5W.

(Also pondering moving my NAS to an ARM SBC as well but not sure it's worth the hassle since the mini Intel i3 PC idles at 10W.)

For work however, a programmer just can't do with those underpowered machines if they want to be productive and not wait 3 minutes for incremental compilation after each change they make (which an LSP server does, and even if it didn't, re-running tests does the same anyway).

I am happy to work on a more ecological machine if somebody optimizes the dev tooling and my programming languages' of choice compilers and linkers. I'd absolutely love it if all my machines idled at 5W and peaked at 35W. But if I am to support my family, that stance isn't easily achievable today. Yet.


Meanwhile, my Starlink dish is idling at 100 watts, which works out to be about eight percent of my electrical bill.


Do you include the cost of manufacturing in your calculations? There is a point where newer and more efficient is better, but in general it’s better to hold on devices for longer.


I don't, mostly because the manufacturers are far disconnected from the needs of the end users of computing -- they mostly serve big businesses. So the machines end up manufactured anyway.

On this lane of thought, I'd say it's better to buy a laptop today (one that's much greener than everything before it) that you can hold on to for 10+ years than to hold on to another that has long passed its expiration date. But I am aware that this is not always a popular opinion.


> the manufacturers are far disconnected from the needs of the end users of computing -- they mostly serve big businesses

Same goes for power. They're not going to burn one less lump of coal because your laptop uses 100W instead of 300W


That's true if one only talks about one person using an extra 200W. But it matters when it's about millions of people using an additional 200W.


Even then, such big organizations react slowly to change.

Even when we collectively manage to reduce our energy usage they'll likely divert the extra energy to another power substation for storage and redundancy.

So don't think that I and others are being defeatist. It's just that the entire grid is so well planned for that any difference we can make, even as 100_000 people, is still not that big. So this whole thing will take a while, likely decades.

In the meantime I was very happy when, about two years ago, I replaced a fridge that used 43kWh a month with one that uses 19kWh.


> Even when we collectively manage to reduce our energy usage they'll likely divert the extra energy to another power substation for storage and redundancy.

That's not how it works. On the electric grid, generation must always be precisely matched with consumption, otherwise things can literally burn up. Any excess of generation will cause an increase in voltage and frequency, any lack of generation will cause a drop in voltage and frequency. That's why the generation always follow the consumption; all generators sense changes in consumption (by measuring the system frequency), and adjust their input to match (for instance, by adjusting the fuel intake). When they don't (usually because the change was too fast, like when large blocks of generators or consumers drop suddenly from the grid), protective systems will disconnect the generators and/or consumers until the grid is balanced again. Storage (which is still uncommon) only does a temporal shift of the consumption and/or generation, it doesn't change the amount (other than the inevitable losses).

That is, if you use 200W less, the generators will adjust to generate 200W less power (actually a bit more than 200W, because of losses). If millions of people use 200W less, the generators will adjust to generate hundreds of megawatts less power.


In that case -- great! I happily stand corrected.


Big businesses are the end users of computing. But that doesn't change anything. Why do big businesses have to upgrade their computers every few years?


That I don't know. I personally wouldn't. I've seen big companies where 100+ people who just do Zoom / Excel / Jira all day have MacBook Pros, which is insanity.

I was simply mentioning that the machines get manufactured anyway so I definitely refuse to be guilt-tripped into "it's your fault that those machines exist!" extremist stance. No, it's not my fault. The machines are manufactured regardless of what I do so every 7-8 years I evaluate the market and upgrade.

Reasons are simple -- most modern software gets slower all the time and I still need to work and support my family. I would hold onto my machines for a lifetime but it's not the reality we live in and I wish the downvoters stopped being so tunnel-visioned and were aware of that.


> do Zoom / Excel / Jira all day have MacBook Pros, which is insanity.

Agreed. A base MacBook Pro isn't nearly powerful enough for all 3 of those at once.


LOL! You have a good point. :D


We can run those inefficient machines on electricity from renewable sources. Reduce > Reuse > Recycle applies: Building a new computer will almost always be worse than keeping the old one, whose production cost has already been paid.


We can, but do we really? I am fully behind your idea but sadly the whole "let's move to renewables" thing just takes so damn long. :(


Yeah, it does. But in the meantime you can purchase renewable credits to offset your consumption and help push adoption.


Agreed, and I do that. During that meantime I can't work on an RPi however.


The last time I was heavily into game development, I chose to develop only on a $350 laptop, for exactly that reason. The strategy really does work. The first time I ran it on a gaming machine was after it was done, and it ran like a dream.


I remember spending days trying to optimize some photo-scanning code in an app I was building and getting seriously frustrated with it. "This should be instantaneous!" Turns out I forgot / didn't notice I was running it under Valgrind the whole time. It was plenty fast once I ran it normally. There's another strategy for you :)


I often say to myself (tongue in cheek) "Chrono Trigger for SNES fits into 8MB. Chrono Trigger is a better piece of software than anything I have ever written. Why does my software need anything more?"


A good rule to program by.


My personal rule-of-thumb is that I develop with

    -O0 -fsanitize=address -fsanitize=undefined
And things have to stay instant

So far that has reliably meant that the software is useable on e.g. a raspberry pi 3.


Not entirely sure if you're serious, but that doesn't seem to make a lot of sense. You are not creating an environment where you're optimizing or scoping the project for slow machines.

Instead you're optimizing for an environment with totally different performance characteristics, and manually doing low-value optimisations that the compiler could normally handle. (E.g. -O0 will mean no inlining, so you're incentivised to do it at the source level, wasting effort, making the source less readable, but not actually getting any speedup on a production build.)


> so you're incentived to do it at the source level

It's definitely not something I felt the need to do so far - my code is chock-full of high-level abstractions (and a lot of TMP).

Where this helps is noticing things like redoing a computation every time instead of caching the results, copying things left and right, etc., and more generally preventing "death by a thousand cuts", as just iterating a semi-large array more than necessary will make things noticeably slow when using asan (and, in my experience, on slow computers - I've got a non-negligible portion of my user base using things like 2008 entry-level laptops, for instance).


What about -Wall and -Werror? :)



Thanks for sharing it.


> try to run your software on a Pentium 4

Alternatively, on Linux you can limit the maximum CPU frequency with something like

    FREQ=800000 # 800 MHz
    for i in /sys/devices/system/cpu/cpu[0-9]*; do
        echo $FREQ > "$i/cpufreq/scaling_max_freq"
    done


Note that this may not work out of the box, see https://unix.stackexchange.com/questions/153693/cant-use-use...


Does that answer still apply? I'm using the intel_pstate driver and limiting frequency via scaling_max_freq seems to work fine.

It should work, if I'm reading this correctly: https://www.kernel.org/doc/html/v4.12/admin-guide/pm/intel_p...


No, the obvious solution is paying a monthly subscription to use a cloud-based computer for such high-intensity tasks like web-browsing. We should all be using https://www.mightyapp.com!

/s


Is this an elaborate joke website?



Our team keeps our dev and test deployment ecosystems at about 40% of the resources of production. If it's too slow for us, then we optimize, and then it's smooth as silk in production.


> Pentium 4

Not such a good choice. It was kind of a "strung out" processor, with an overly deep pipeline, plus there was that RDRAM business...

I'd say maybe an Intel Pentium III coppermine, or maybe go AMD and choose the original Athlon.

> I only say this half-seriously

Why just half-seriously? Make it totally serious.

> If it's unusable, go back and optimize some more.

That too, but even more than that: Go back and make more initialization lazy; make sure your UI isn't overly de-prioritized in favor of other work; and make sure you have logic and UI for when certain things take time, so the user can still carry out some activity while waiting.
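
As a sketch of the "make initialization lazy" part (the class, names and data are purely illustrative): defer the expensive work until the first time it's actually needed, so startup stays instant.

    #include <algorithm>
    #include <mutex>
    #include <string>
    #include <vector>

    // Hypothetical example: the dictionary is loaded on first use,
    // not at startup, so the main window can appear immediately.
    class SpellChecker {
    public:
        bool knows(const std::string& word) {
            std::call_once(loaded_, [this] { loadDictionary(); });
            return std::binary_search(words_.begin(), words_.end(), word);
        }

    private:
        void loadDictionary() {
            words_ = {"hello", "world"};  // stand-in for reading a big file
        }

        std::once_flag loaded_;
        std::vector<std::string> words_;
    };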


I used to do that test with a mobile pentium from back in the day... I think it was a T5600? The model name sounded like a terminator, that's for sure.

Problem is that then AVX became a common thing (at least in part of what I run) so that processor became unusable. I moved to an i3-3120m, which supports AVX, but then AVX2 became a thing.

I could've gone for a 4000-series i3 at that point, but I just grew tired. And Pentiums/Celerons are a bigger mess to find because some of them are Atoms now. For example, I have an N4000 sub-notebook that I got for my mom to check her email and view online videos, and the thing feels pretty snappy for those tasks. But then you try to run some numerical code and you find it's 50-100x slower than its bigger siblings... Turns out it has no AVX of any kind, and even though it says it supports SSE4, it feels possibly slower than the older Pentium I mentioned at the start, even with faster, dual-channel memory - and those old Intels didn't even have SSE4, just SSE3. But possibly at that point the 35W thermal headroom of the T5600 plays a big part, as the N4000 is passively cooled and has a 6W TDP.

Moral of the story? Even if you try to, it's hard to pick a slow/old processor that will support all the operations your software could use - some of them give a huge speedup. You have to pick your battles. You'd probably have an easier time underclocking your current processor a lot (and disabling a few things?) and calling it a day for performance testing.


> I only say this half-seriously, but that's why I sometimes think that part of any performance test should be "try to run your software on a Pentium 4". If it's unusable, go back and optimize some more.

The condition that the software has to be able to run on a Pentium 4 makes quite some performance optimizations impossible.


It'd be neat to have presets in kvm just for this.


You say this, but there is also a push for servers to be in containers, serverless, and loosely coupled.

One trend at my company is to have microservices, where a single app is "single function" and uses a LOT of network traffic to talk to other "single function" applications.


At the very least, test it in virtualbox.


You're talking somewhat tongue-in-cheek, but my situation is that I need to use Microsoft Teams. Composing a message of, I would say, about 300-500 words with some formatting, it starts becoming unusable.


I am completely serious, actually. Surprised to see it downvoted. The only underlying assumption is that I'm using default/mainstream tools (macOS, JetBrains IDEs or XCode or Google Docs).


Teams is the bane of my existence. My fully specced 2019 MBP can't handle a Teams call without slowing down the whole system. I have to join calls using Chrome instead. I don't understand how a company like Microsoft can make such terrible software.


It’s now well known that in the Malaise Era of Detroit, all corporate executives received their company cars after they had gone through the quality shop to correct all problems. They drove perfectly set up cars that did not have any of the problems experienced by the average consumer. They simply had no idea what it was like.

You read complaints on HN all the time that sure make it seem like we have entered a Malaise Era in computing. Us developers surely do not run computers set up like the average consumer, and you can be darn sure that corporate executives at Microsoft and Apple do not get plain vanilla laptops fresh off the production line.

The result is entirely predictable.


> You read complaints on HN all the time that sure make it seem like we have entered a Malaise Era in computing. Us developers surely do not run computers set up like the average consumer, and you can be darn sure that corporate executives at Microsoft and Apple do not get plain vanilla laptops fresh off the production line.

One of my college internships involved setting up laptops for VIPs and Executives for exactly these reasons! We optimized the installs, removed crud ware, and setup the system to be as well performing as possible, then shipped them out.

At one point Lenovo made a laptop where they cut costs by removing the cache on the HDD; there was nothing we could do to make that machine not horrible. Simple tasks took multiple minutes, and installing Office took hours! Its reputation was so bad that a number of them came back to us without even having been opened.


The same applies for web developers. Most people who make web sites for a living don't test their products with the outdated Android phones and low-end laptops and 2 Mbps connections that many of the site's visitors will be using.


IME, "Turn off incoming video" fixes most of the slowness, and it's fine for me since I don't need a grid of heads anyway.


Yep. I recall a couple of years ago when Google updated their Gmail interface (without any way to go back, of course), Gmail suddenly became unbearably slow on older machines. Laptops which I could easily use for managing my emails were now stupidly slow, because someone somewhere at Google decided that having a slightly nicer interface is worth the extra CPU time. I wonder how much that one single change cost the world in burnt electricity, just to produce some useless features in Gmail.


You can access gmail via IMAP/SMTP. Any desktop e-mail client will be significantly lighter and faster than any browser based webmail client. And I don't care how shitty your computer is, mutt will run snappily on it.

https://support.google.com/mail/answer/7126229


Gmail does have a "Basic HTML" mode, FWIW.


And it's a step below the interface they had not that long ago, which was fine and which worked on literally any machine very very fast. The "basic HTML" mode is well....extremely basic.

That's the interface I have in mind:

https://www.youtube.com/watch?v=OBI2liYIvdY

I don't feel like the current 2021 interface offers any advantages over it, and yet it uses far more resources to load and open things smoothly.


Can you explain why “editing simple text documents has become painfully slow on my 2013 Macbook Pro”? Because the process is instantaneous on my 2013 Chromebook Pixel running Linux, or on my even older Thinkpads. I certainly don’t have to replace my computers every few years. Is it a problem with MacOS or the software you’re using?


As an example, I have for work an HP ZBook with an i7-6820HQ that clocks at 2.7 GHz, and 32 GB of RAM. I have to have Outlook, Teams and Lync constantly open because of work rules. If I open a web browser, the fans start spinning and the whole system slows down. Maybe I'm just cursed with bad hardware, but in 15 years of using Windows I have never had a smooth experience, I've always encountered small slowdowns, stutters and things like that. Same thing on Linux.


Perhaps you're cursed with corporate security. My Macbook Air is a beast compared to the maxed out Macbook Pro 16" with forced corporate security policy one corporation gave me.


That's what I thought at first, but others who have PCs that are supposed to be slower (I have the "scientific" one) have a much smoother experience.


I’ve twice deleted Sophos for slowness. I’m now running Bitdefender on a 4GB 2012 Air, and performance is reasonable (so long as I restart Firefox daily).


I'm not familiar with the ZBooks, but if they're as thin as the EliteBooks and ProBooks, and the cooling is as much of a joke, I can't say I'm surprised, especially with a HQ CPU.

My ProBook with an i5-8250U spins up its fan for no reason while doing next to nothing on Linux with i3. And no, there's no "broken power management because Linux" issue, the battery actually lasts a long time (comparable to the official specs, which are presumably for Windows).

But curiosity got the best of me one day and I opened it up. The cooling system is an absolute joke. The heatsink is ridiculously small, my iPhone 7 probably has a bigger one.

Same story with an EliteBook something, with an i5-7xxxU CPU. I actually switched the EliteBook for the ProBook because I could add extra RAM.

However, I don't have any "slowdowns or stutters" with this machine, and I usually run it attached to an external 4k screen.


Mine is more than an inch thick, I'd say about an inch and a half. The cooling system is probably a joke on mine too, it often reaches 80°C while idle.


Could be worth a try: did you open it and blow some compressed air through the heat sink? I did this when my MacBook was some 5 years old for the first time, and it helped with temperature and the fans would stop revving up during normal use.


That's really bad, maybe it's an option to repaste the CPU. Because this isn't normal.


Because OS and core upgrades these days seem more focused on features than performance, I think, and playing compatibility catch up with one another because the industry expectation is that people keep buying better hardware. Even if you do the same thing with a given piece of enterprise software, if the runtime for that software is doing all sorts of things to run features that you do not use and cannot turn off, then you pretty much have to "keep running to stay in place"


Possibly he is running an antivirus program or two. Maybe editing files over the network. I know that I've experienced these types of things in a corporate environment and that can lead to slowness where it otherwise would not be.


I don’t understand either. Emacs works splendid on my Macbook Air 2013.


Aye. Or vim (emacs with fewer steps for the uninitiated).

Just kidding. But yes, command line utilities have never been faster.


Vim is certainly not for the uninitiated, whatever else it may be.


A better editor than nvi (but, weirdly, not as good a vi as nvi)?


Yeah, I think I'm going to need extraordinary evidence for that claim (of his, not yours). "painfully slow" to edit text documents seems absurd.


Isn't the solution there to use software that puts more emphasis on efficient use of resources? The corollary to Wirth's law is that as time goes on, deleting crap from your machine becomes more and more effective -- I was using Neovim on a Thinkpad T60 at work in 2018, and loving it.


This is one solution. Sometimes I use Sublime Text 3 on my old 2GB MB Air, to code mostly 3D graphics, sometimes backends, some web dev as well.

However, on my beefy i9 work laptop I need to use Microsoft Teams and also Docker, which means the fan is on half the time and the computer can barely keep up with the workload.

I've seriously considering asking for another computer just to run Teams.


When I had a contract which required me to use Teams I was exploring installing it on a server and just using VNC to access it.

The solution I've settled on however is to just reject contracts where terrible software is mandatory unless there's a significant financial upside.


I do have a Windows 10 PC for crapware which I connect using RDP. RDP on gigabit LAN is fast enough to watch a video with acceptable results. It can also pipe webcam and microphone to the remote host.

For Linux hosts NoMachine is really fast and can make use of stuff like x265 streaming. The open source x2go, although slower, can publish applications terminal-services style so that the remote app window looks like it's running on the local machine.


Huh, is x2go faster than standard x11 forwarding? I played around with forwarding from Linux VMs to my host Mac via XQuartz, and even running on the same machine it was much too slow for e.g. web browsing.


I'd say it can be made faster because you can control the streaming quality and compression algorithms.


Woah NoMachine is still around?! I used to use that to login to home from uni in 2003 (well it was freenx but same diff)


My main tool to login to my Linux workstations. Works like a charm.


For what it's worth, Teams was a pain on MacBook but it's running fine now that I can work with a desktop (32GB, Ryzen 3700X). I completely agree on Teams being by far the worst software I have to use though.


In my project most PMs have the mid-tier Macs (not sure what they are called). They can't use Teams video conferencing and Jira at the same time. Some have resorted to running Teams on a tablet next to the laptop, which works well unless you need to share the screen. It's a pretty ridiculous situation.


Yep, I see the same problems. Since our PMs never needed beefy machines before (they don't do software development and Teams is new for us), I'm the one who ends up being the JIRA pilot :)

The fun part is that I could be using my personal stone-age 2GB/i3 MB Air if I were doing only development and e-mail.


Teams is really just awful. Lync/Skype for Business generally ran with 10% of the memory usage that Teams uses, and it wasn't a particularly well-written native WPF app.


Use an iPad. Teams app is smooth there. And you’ll have a better camera.


That's actually a great idea.

I used to use an iPad for email, calendar and Jira. Maybe I should go back to it!


I am using Office 2003 on my home computers and it is snappy and instantaneous.

I wish I could use Windows XP with up-to-date hardware support and security updates, too. I remember the days of requiring a 300MHz core, 64MB of RAM and 1GB of hard disk for the same functionality that Win10 now provides in a worse way.


Win10LTSB + vmware workstation fullscreen + xp/2000 vm

works wonderfully and is very fast!


But more and more, good, efficient software is becoming abandonware because most people want the flashy new interface that makes a lot of stuff harder to do, makes nothing easier, takes more resources, but looks better.


There isn’t much of anything to directly show the end user the externalized cost of those things, so most people don’t understand it. If there was something akin to gas mileage, it would be much easier to see that you don’t want to install the Yukon XL SUV of programs.


> Even editing simple text documents has become painfully slow on my 2013 Macbook Pro.

I just installed Linux Mint on my 2013 MacBook Air, and now it's fast enough that you wouldn't know it's an old computer. Really amazing, it seems like Mac OS in particular is just super resource heavy. I realize Linux pretty much defeats the purpose of owning a Mac, but I'm just glad not to have to throw out an old computer that has seen me through my entire career shift into programming so far.


Not going to try to convince you of anything, just going to share that I preferred to get rid of my MBP 2015 and buy a fairly decent AMD Ryzen laptop for even less money than I made with the MBP sale.

Linux absolutely flies on a Ryzen 4700U with an NVMe SSD. ^_^


Yes. Whenever I accidentally open a .JSON file in Finder and thus launch Xcode (instead of opening it in VS Code), my 2.4GHz 8-core 32GB RAM 2019 MacBook Pro Retina screeches to a halt for ~30 seconds. To display a text file. sigh


Especially true when you consider the biggest bottleneck was I/O, and the best SSDs are in some cases 100 to 1000 times faster than HDDs. You can have 8 to 16 times as many CPU cores. Your single CPU core is at least 2 times faster compared to 2010.

And yet somehow opening a text file is as slow if not slower.


Do we have to do this in every thread? This argument is as boring as the systemd/init argument.


If this is actually true the problem is on your end. I'm still using the first retina MBP from 2012. I keep all of my software up-to-date and it has no performance issues. I do keep it back a few MacOS versions because I see no need to update past the one that's on it.


People say my OS looks old because it's always cwm/fvwm full of xterms with vim or emacs but I'm comfortable on 10 year old budget computers because I reflexively avoid anything shiny looking.


https://suckless.org/

I've been happily using non bloated tools for years now.

Obviously this isn't for everyone. ;)


> but I'm sure it won't be able to open a 1KB text file instantaneously in 5 years

Unless you decide to switch to a modern console-based text editor. It doesn't have to be vim or emacs. There are many other alternatives. One such newer editor (written in Rust) is Iota: https://github.com/gchp/iota


For the rare occasions I open text files outside the terminal I've had good experiences with Textadept. It's open source, lightweight & fast, and can also run in the terminal if needed.

Usually I just go for Vim, but sometimes I need something else and in those cases I want to avoid heavyweights like VS Code.


I've been seriously thinking of buying old software off eBay and running it in a VM just to have capable (if archaic) applications to hand that aren't filled with cruft. I'm also very glad for some of the terminal-based solutions I've found to substitute for bogged-down GUI applications, even when they bring their own set of usability issues with them.


I noticed this trend all the way back in 2005. At the time I was excited about upgrading my desktop computer but after a few months I noticed everything performs about the same as before I upgraded the hardware. I concluded that the software must have gotten slower, absorbing all the hardware upgrade.


I might get shot, but I don't believe this, since I'm using a 2013 ASUS that feels snappy doing all my webdev and hosting a couple of virtual machines.


IDK, I have a ten year old laptop that still does pretty well at those things. Its biggest issue is its terrible IO and slow HD, but those were its biggest issues when new too.


You don't need to edit that text document in Visual Studio's bloated Frankenstein of an IDE. VIM is still there waiting for you to come back - it misses you.


Sadly, it isn't about you the user. It's all about "dev productivity".


Not sure what you have running in the background or where specifically you are editing text; I have a 2013 MacBook Air and I can run IntelliJ, let alone a simple text editor.


There’s a lot of truth to the joke lurking in this comment. Apart from the covert channel issues, the “market” has spoken: speed of development (and simplification of development) has been preferred to raw performance. I definitely consider that a good thing, even when I look upon a lot of code with horror.

40 years ago writing code was a lot harder; improving that has allowed a lot more interesting things to be built (both bc people can work faster and at higher levels of abstraction and because development is more democratic, leading to a wider pool of people writing things). The cost has been high though; few people think about the hardware and even the term “bare iron” is used unironically to refer to a machine running a multi-tasking OS!

But I think this will swing back the other way. Not by removing the abstraction but by “melting” some of the abstraction implementation layers. Hard work on improving interpreters and compilers has always been one of the ways, but assembly-optimized hotspots and tighter integration in some of the stack elements will be increasingly worth doing because fixing that speeds up all the developers working at higher levels.

The downside is that you won’t be able to, say, easily customize that version of React you’re deploying. But nowadays you really want to stay away from that anyway, in 99.99% of cases.


> by “melting” some of the abstraction implementation layers.

So bake those supply chain vulnerabilities right in there, nice and deep.


Likely the opposite. A reduction of optionality reduces the opportunities for dependency injections.

When you load 50 (or, for the whole dependency tree, 500) independently developed modules, the probability of failure (or vulnerability) is typically the sum, not product, of the failure probability of each component. This is the same reason why not everything is implemented as a fleet of microservices.

Few worry that the TCP implementation in your OS is a monolith, and one that is tightly integrated with the IP code. This will make increasing sense further up the stack, for good or ill.

There are trade offs both ways.


Just couldn't resist nerding out about probabilities. Summing failure rates would potentially get you above 100% chance of failure. What you'd really want is the product of success rates. So, if you have a 1% failure rate on a module, and you have 50 modules, that'd be 0.99^50, which brings you to about 60% success rate, or 40% failure rate overall. For 500 modules that drops to less than 1% success rate.
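If you want to sanity-check those numbers, a quick shell one-liner does it (assuming python3 is on your PATH; the 1% per-module failure rate is just the illustrative figure from above):

  # compound success probability for n independent modules at 1% failure each
  python3 -c "print(0.99**50)"    # ~0.605 -> roughly 60% success, 40% failure
  python3 -c "print(0.99**500)"   # ~0.0066 -> under 1% success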

But yeah, agree that the odds of failure are probably lower pulling in a handful of monoliths that are well exercised than hundreds of miscellanea of varying robustness.


why is a multi-tasking OS inconsistent with "bare iron"?


Because in an OS like Linux or Windows, the kernel is sharing out access to the machine by managing the hardware's lowest levels of controls, which applications do not have access to.

That's not even including the fact that processors are actually processor simulators these days.

Thus, "bare metal" is actually "resource division among managed processes on a simulated processor" which sure sounds a lot like a VM to me. Modern OSs are in fact a form of virtual machine, and we found it useful to have yet another.

Arguably Docker-type simulation introduces yet another level, so a React app in a Docker shell in a VM cluster is really an interpreted application on a simulated operating system, on a resource-sharing virtual machine, on a virtualized hardware environment, on a multitasking virtual machine, running on a simulated processor that's implemented on a _real processor_.


The OS itself runs on the "bare iron" -- hardware without any software running on it. An OS initializes and manages the hardware and doesn't call on something else to do so.

If you write a program that is launched by an OS you are by definition not manipulating the hardware directly. Those POSIX calls go somewhere.

I write such code from time to time: a boot loader, a small OS, or apps that run with no OS at all.


This means we programmers can eventually get back to the real fun i.e. optimizing code like it's the 1990's instead of toying around with bloated layers of libraries and lazily built abstractions. It can be cool to run a previously complex algorithm in a WebGL shader in a browser but it also dulls our blades.


> the real fun i.e. optimizing code like it's the 1990's

not every programmer thinks this is fun. Some programmers prefer to code up features, rather than learn or test out esoteric ways to micro-optimize.


Speaking as somebody who never stopped optimizing like it's the 90s: there's definitely a learning curve, but when microbenchmarking becomes a reflex, you learn your language in depth and the burden drops quickly. It only seems esoteric to you because the industry has collectively decided that Grace Hopper is an old codger who doesn't need to be listened to.

https://www.youtube.com/watch?v=9eyFDBPk4Yw


so good


Actually, optimizing WebGL can be fun, because no matter how much ray tracing the GPU is capable of, the best it can do with WebGL is the ES 3.0 subset from 2011.


> We've entered Inverse Moore's Law

AKA Less's law


It's mostly Intel that's affected. They got the market edge by knowingly putting unsafe optimizations in their silicon and then never disclosing it. AMD is not nearly as affected, nor ARM.


Or Intel is more popular and attracting more attack research.

I think both are true.


This is not true

ARM: https://developer.arm.com/support/arm-security-updates/specu...

There are also a bunch of mitigations for AMD.


>We've entered Inverse Moore's Law: every two years Intel's single core performance drops 20% as optimizations exploits are mitigated.

FTFY



We also have crypto doing a similar effect.

Look at the prices of graphics cards. Or, nowadays, hard drives thanks to Chia. It's going to be a goddamn tragedy for the entire planet if Chia continues to grow in popularity.


"I am not convinced that removing any optimization which can be used in a timing-based side channel is sustainable." basically boils down over two decades of JavaScript (now WebAssembly) API and language design, where one subset of the committees fervently argues against any feature or primitive that could possibly be used to acquire timing information, no matter how useful. It's tough, because they are ALSO trying to make sure any advertiser on the planet can run code on your PC without your permission, so it is pretty important to stop that code from using timing side channels to capture your credit card number or something. Once it was proven that JavaScript could trigger rowhammer (I remember having debates about this during wasm spec meetings at the time when it was still speculative), that kind of guaranteed that the side-channel people would remain in control, I think. SharedArrayBuffer is the biggest example - we basically had safe multithreaded memory access available in browsers but it got disabled due to spectre/meltdown, and we only started getting it back relatively recently with lots of additional constraints to try and make it "safe". Want high-precision timestamps? Not anymore you don't, it's a side-channel!

Similarly, when a new browser update goes out it may or may not make all the web apps you rely on slower (or break them), so any web benchmarking also needs to be re-run from scratch on a regular basis. New Chrome and Firefox releases "ride the trains" every 6 weeks or so, and they quietly push changes to the browser more frequently than that which can occasionally impact perf or behavior too.
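As a footnote on the SharedArrayBuffer point: the "additional constraints" are the cross-origin isolation headers, which a site now has to opt into before the browser hands SharedArrayBuffer back. A rough way to check whether a given site opts in (example.com is just a placeholder):

  # SharedArrayBuffer is only re-enabled for cross-origin isolated pages
  curl -sI https://example.com | grep -iE 'cross-origin-(opener|embedder)-policy'
  # expect: Cross-Origin-Opener-Policy: same-origin
  #         Cross-Origin-Embedder-Policy: require-corp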


Now, if it were limited to browsers, or even to desktop computers...

"Security by default" affects, by default, also trusted workloads. Games, desktop editing software, word processors, spreadsheets running dumb totals in big tables, and virtualized computing workloads running in the cloud (where it has a serious financial impact for whoever pays the cloud bill). All of them pay the Javascript tax. Worse, there doesn't seem to be any solution in sight, we have become too reliant on the damn thing.


> virtualized computing workloads running in the cloud

as trusted workloads?


It's understood that multi-level security (MLS) is extremely hard to do; wanting MLS capabilities (which enable separation of co-located, hostile programs) while also entertaining high performance simply is not going to happen.

The problem is the obvious solution - get rid of the hostile programs! [i.e. removing the capability of any website to run code from anywhere on your machine] - has been ruled out from the start. Now we are only left with non-obvious solutions and bad trade-offs. Things like CPUs with fast/slow microarchitectures on the same die so you may run the JS engine on a slow core that has a harder time messing with things, for example. How many engineer-millennia are poured into trying to make this work?


> It's understood that multi-level security (MLS) is extremely hard to do

Has it even been tried? Given that we do need to protect against info disclosure vulns, even a rough attempt at proper MLS would be way more feasible than treating all programs as untrusted/hostile or all data as equally sensitive.


How do you get rid of the hostile programs?

If one could get rid of the hostile people, we wouldn't need passwords. Login is enough. Much easier to develop webapps. But we can't get rid of the hostile people and we can't get rid of the hostile programs.

The only semi-successful way to get rid of the hostile programs is Apple-style walled garden. But even there malware happened. And giving up all your freedom for that kind of safety is like going to the jail voluntarily to be safe from the outer world. I don't think that this approach would work for everyone.


The silent browser updates that break things are so much fun. Earlier this year Chrome pushed a change that throttled JS timers severely on inactive tabs, and everything that used SignalR over websockets broke.


Older Thinkpads are still perfectly fine machines today—until they're not. The performance blow to Sandy Bridge from all the mitigations I've experienced is significant, especially in SSD I/O speeds (still fast enough, though). A saddening state of affairs; I hope nothing like the Core-era CPU market domination will ever happen again. I hate being stuck with one option alone.


I used InSpectre to disable meltdown/spectre mitigations on my 2600k. Nothing important was stored on it, it was used purely for gaming. It had a significant effect. I would hold my 144 fps cap in some titles but with mitigations on it would fall below frequently.


ARM/M1 family will be plenty of competition I think.


First, it's still a fairly closed platform. Whatever work is being done on reverse engineering it and finding out what its components do is being done by third parties, and even though it's already possible to run Linux somewhat, it's hard to say we'll ever be able to use 100% of the platform's capabilities without Apple's help. Even if we do that with the M1, Apple will change things in the M2; it's going to be a cat-and-mouse game. Apple has zero motivation to help these efforts (heck, they would try to stop it if the law allowed). So although RISC-V is far behind the M1 now, I very much hope for quiet, gradual improvements here until a tipping point when somebody realizes it's actually a competitive platform not just for industrial use cases but also for general computing.


Whilst I applaud what RISC-V is doing I do find comments like this a bit frustrating:

- M1 machines are 'closed' because of Apple and not because of the CPU architecture. They could adopt RISC-V and the GPU and drivers would still be closed etc.

- M1 is fast because Apple have invested a lot in building an outstanding CPU design team and the economics of their business allow them to fabricate on the latest TSMC node.

- Apple could have locked down the M1 machines much more completely than they have done.

If you want fast 'open' systems you need to work out how to incentivise firms to build systems in this way. RISC-V will not magically get there on its own because it's a more 'open' ISA than ARM. In the meantime, credit to those who are working out how to get the most out of the M1.


I applaud Talos for making fast, modern workstations[0] with an OpenPOWER CPU. This is probably the best you'll get today for a fast, secure, open workstation. RISC-V is not there yet.

0: https://www.raptorcs.com/content/TL2WK2/intro.html#sidebar-s...


The Mac remains the Mac. That is not changing. That has been repeated by Apple executives publicly many times. Part of the Mac's heritage is that you can turn off the guard rails and take control of your system, including installing things like kernel extensions. These are not things most users should need to do - secure by default is very important - but they remain possible.

Others have documented the engineering that went into a new boot system for the M1 to make multi-boot and custom kernels possible. Engineering that would be pointless if no one were expected to use it.


FWIW, my conversations have (to me) painted a different picture around the future of kernel extensions: it seems like they will only continue to exist until the last large company that relies on them finally has a userspace API replacement. The remainder seems accurate, but I would really love for Apple to allow me to put up the guardrails again but in a way that I control…


That has been the official message at all WWDC talks since userspace extensions were introduced.

Actually, I look forward to it; eventually Linux and BSD will stand alone as the only classical monolithic kernels, which is kind of ironic given how many containers and hypervisors they get loaded with anyway.


> they would try to stop it if the law allowed

What law? They locked down the iPhone and the law does not care. How is it different for the Mac? They could've locked down their M1 Macs, but they deliberately chose not to.


I think the parent comment was referring to law(s) preventing reverse engineering of hardware - thus "Apple would prevent people reverse engineering the M1 if they could".

"Locked down" from your comment, as I understand it, is just making it hard to gain root on the device. Nothing in most Western law prevents you or me from jailbreaking the phone or cracking it open and looking at it under a microscope or whatever. That's good. What would be perfect is of course publishing schematics and manuals etc.


What ARM device outside of M1 is competitive for desktop ?


None that I'm aware of. But Qualcomm promises to release a desktop CPU soon. I don't have high hopes and it probably won't surpass Intel, but I think that it'll be competitive enough to be interesting.


Qualcomm already has that Microsoft Surface X1 chip and it's terrible. They acquired a player in this space recently but from what I saw that's two years away from launching.

I'd have higher hopes Samsung pulls something off with their fab but still skeptical. Apple seems to be generations ahead in this space.


Imo, it's not that bad, about the level of an entry-level 8th gen Intel Core chip. The problem came with the generally sorry state of x86 emulation (and lack of native support).


Main difference between Apple and Microsoft is that Apple can force most developers to release ARM versions of their software pretty quickly, while Windows software vendors still often ship 32-bit software.

While Rosetta might be good, I'm sure it'll be gone in a few years and most people won't notice. It won't work with Windows.


Vertical integration is going to make it low quality soon.

Today it's an average expensive chip, tomorrow it's below average. It's really hard to keep up with the world.


It seems to me that almost all optimisations (and especially the accumulation of optimisations that we rely on for more performance) are going to be vulnerable in some way to timing attacks.

After all an optimisation is designed to alter the timing. If all actions must have the same timing, optimisations fail.

Do we simply offload any sensitive processing to specialised chips? The silicon wafer seems to be developing vulnerabilities like a network.


We need to stop trying to do transparent caching, and instead give developers control over what happens. Itanium wasn't wrong, it was just ahead of its time.


I keep wishing that in an alternative reality AMD would have lost access to x86 IP, thus never coming up with x64 as a cheap alternative to Itanium.


I'm pretty sure in that alternative reality, you would be paying today top dollar for a quad-core. Today.


I doubt that mass production of Itaniums would have kept the prices that high.


Having a monopoly on the production of them would have. Example: Intel HEDT pricing before and after AMD Ryzen Threadripper release.


Intel already had a kind of monopoly and the prices were going down.

In Portugal there were only PCs with Intel CPUs around; Cyrix, NEC and other clones were hardly to be found anywhere until the first Athlons came to be.


I think x64 WAS Itanium. x86-64 is the beast we are dealing with today. In that world maybe everyone goes to PowerPC? The move from the 32-bit G4 to the 64-bit G5 CPU wasn't a big deal since PPC had 64-bit variations back in 1997 with the 620 (though Apple never used it).


Everyone would go where Windows would go; as far as I remember, PowerPC Windows NT did not get much love.


Yeah, that was kind of tongue-in-cheek. Apple knows when to leave a cpu architecture. They've done it 3 times already (M68k->PPC->x86->ARM)

Though I think we're getting now to the point that we would have gotten to then. AMD bought x86 another 15 years of life.


Itanium was IA-64.


you're right. I guess X64 and X86-64 are synonyms, more or less?


Yes, x64, x86-64 and amd64 basically refer to the same thing.


Has compiler state of the art changed?

Years ago, I remember reading Itanium stalled because writing compilers to ably support speculative instructions was hard or simply impossible.


There hasn't been a quantum leap forward, but I think/hope we've got gradually better at programming. Certainly compiler development is a lot more active now.


Or just disable optimizations for security-affected calculations, and let GTA V run with all the optimizations enabled.


The only way we can actually stop timing-based side channel attacks is by having deterministic (in time) execution. The only way that can happen is by running at the slowest hypothetical rate for every individual operation. So, no cache, dedicated CPU timeslices, dedicated RAM bandwidth.

Luckily, we only need this for untrusted code. What really peeves me is that trusted code is now having to run slower as well.


Allowing the CPU to drop into a deterministic-performance mode would be quite useful not only for running untrusted code (whatever we decide to be "untrusted"), but also for cryptography. So many classes of side channel attacks would simply vanish, or at least become controllable.


I wish there was a flag you could set on a CPU that would switch it between "fast" and "safe" modes. Security conscious stuff could switch it into safe mode and then it could be switched back to fast mode for general computation.

I don't like that everything has to get 25% slower because your SSL session setup needs to be protected.


With supply chain attacks, all code is untrusted.


Sure, but you don't need to mitigate timing attacks on untrusted code if that code already has root access.


With JavaScript in Browsers, every consumer pc runs untrusted code every day.


I don't think it's so black and white. When I install some software I'm trusting the developer and the package maintainer, and I'm sure others. If I go to gmail, I'm trusting google. That's not a huge difference at the end of the day. I suppose it's easier to navigate around to lots of websites than it is to go an install a bunch of software.

But then again, browser exploits have been a thing forever. I expect that something bad will happen to my computer if I'm visiting shady websites.

So my question is: are shady websites _more_ dangerous now that these side channel attacks have been discovered? (serious question.)


You're trusting Google to do what? Google is not trying to attack you, but they are using a different threat model than the one you describe. In particular, Google is in the business of serving you ads that they do not author themselves, and those ads are served with the assumption that you are not blindly running all their code in ring 0.


But they do so in a highly-sandboxed environment which has constantly evolving mitigations to counter exactly this threat.


You mean it’s untrustworthy code.

Supply chain attacks exploit the fact that far too much code is inappropriately trusted.


Unless you mine your own silicon all hardware is untrusted.


Only if you write every bit of code and trust every person involved in the entire value chain


But even then, do you trust yourself?


Right, what if you’ve been Inception-ed by an adversary


Maybe the NSA installed a backdoor in your brain.


> Luckily, we only need this for untrusted code. What really peaves me is that trusted code is now having to run slower as well.

Realistically though, most userland software falls into that category nowadays:

- JS-driven web apps like GSuite

- Sandboxed "App Store" apps on mobile and even desktop

- Potentially any other desktop app that is exposed to content originating from the web or email (e.g. Acrobat, desktop Office, etc.)

I wonder if security models need to become less black-and-white, with a middle tier of trust for apps or domain names that should still be sandboxed but are trusted enough that we're willing to trade some risk of timing exploits for improved performance?


All these opaque layers of low-level software (microcode, management engine code, device firmware blobs) are a nightmare from a security, reliability and maintainability point of view.


This is purely an artifact of poor system design. x86 platforms are notoriously bad at this.

I'm reverse engineering the Apple M1 and haven't found any trustability issues yet (besides the proprietary early stage bootloaders, but everything has that). There are some 12 or so coprocessors running proprietary firmware, but all of them are safely behind OS-controlled IOMMUs and not privileged to take over the system. There is no ME, no SMM, no TrustZone, nothing running with higher privilege than the OS. Most of the coprocessors have at least inspectable firmware (except the Secure Enclave, but you can just turn it off). The ARM architecture design makes it near impossible for an early boot backdoor to, say, run your OS in an undetectable VM too. No updatable microcode.

On x86 you have the ME running secret code with higher privilege than the main CPU, as well as SMM running secret code on the main CPU at a higher privilege than the OS and stealing cycles from it, as well as a rather poor track record of proper IOMMU sandboxing (especially for anything that isn't a PCI device), and secret, encrypted microcode updates.


> I'm reverse engineering the Apple M1 and haven't found any trustability issues yet

... except for assuming "EL3 seems to be missing" means "EL3 doesn't exist" instead of "the bootrom dropped to EL2 before we got control".

Not saying I have any evidence to the contrary, nor is this in any way a criticism of your awesome work. Just saying that this will unfortunately remain forever unknowable unless Apple decides to volunteer information.

Personally, if I were Apple, instead of removing EL3 I would've left it in but unused, accessible only to code signed by Apple with the public key burned into silicon. As a way to mitigate currently-undiscovered silicon bugs.


The CPU ID registers say EL3 does not exist.

Your argument is equivalent to "Apple might have backdoored the CPU". Well, yes. So might have every other CPU vendor.

If EL3 were to exist, for starters, it would have no way to receive interrupts, as Apple uses both IRQ and FIQ for the OS (normally FIQ is used for EL3 on platforms with a secure monitor).


> The CPU ID registers say EL3 does not exist.

Thank you for mentioning this. I retract my earlier (GP) comment.

I didn't know that ARM chips self-declared their support for EL3 with register bits that could be read! But yes, they do:

https://www.kernel.org/doc/html/latest/arm64/cpu-feature-reg...

These bits amount to Apple saying "there is no EL3" and constitute one of the few pieces of official documentation attributable to Apple.

PS, if you're maintaining any kind of list of "why the M1 is more trustworthy than chip XYZ", the fact that there is a MSR-like bit declaring that the feature doesn't exist should definitely go on that list. Just to make it clear that EL3 not existing was something declared by the chip rather than assumed by the reverse engineers.


Including EL3 is not free, it requires silicon validation effort. Apple has no use for EL3, so it is gone. They have other ways to patch things anyways, though there are limitations. (FWIW, we also know there isn't an EL3 because we have the code the processor runs as it comes out of reset.)


Thanks for your hard work! I'm looking forward to read more about your findings (and to running alternative OSes on the M1).


> The ARM architecture design makes it near impossible for an early boot backdoor to, say, run your OS in an undetectable VM too.

Can you explain this? It doesn’t sound right.


The M1 does not support nested virtualization, so if you put a VM hypervisor under an OS, the OS knows it's running in guest mode. Making it transparent would imply implementing nested virtualization in software in an undetectable way, which is nigh impossible given the way the architecture is designed. Undefined/illegal/HV-privileged instructions don't trap to the hypervisor, nor does the "am I a guest?" instruction. You have to get into full guest code analysis and patching territory. It's a giant mess, and cannot be done without massive performance overhead.


> SMM running secret code on the main CPU

Well, not secret if running coreboot… I'm not sure if coreboot's SMM handler might call into FSP/AGESA though.


They need to be held accountable for performance regression at this scale if they don’t provide BIOS options to switch back.


If you apply your microcode using the OS loader (as described in the post), you can switch back to any microcode (which is at least as new as your BIOS version) by rebuilding the initramfs with the microcode of your choice and rebooting.
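For the Debian/Ubuntu family, that flow looks roughly like the sketch below; package names and initramfs tooling differ on other distros, so treat it as an outline rather than a recipe:

  apt list --all-versions intel-microcode           # see which revisions are packaged
  sudo apt install intel-microcode=<older-version>  # pick the one you want
  sudo apt-mark hold intel-microcode                # stop it from auto-upgrading
  sudo update-initramfs -u                          # rebuild so early loading uses it
  # after a reboot, confirm with: grep -m1 microcode /proc/cpuinfo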


How do I do this on a Windows workstation with stock BIOS/boot loader though? Who knows how much slower this machine, that I wait for 5-6 hours per day, has gotten in the last few years. IMO whoever flashes the patch should be responsible for giving a toggle.



And under Windows with the VMware CPU Microcode Update Driver.


> Is there a new crypto-coin based on who can store the most zeros

You jest but…..


A bit too close to Bitcoin's proof-of-work scheme, of all things...


Rule 34 but for crypto.


If you can imagine it, it exists

But partially because you like money and can make it exist pretty easily and cheaply


Chia Coin?


What's the sustainable solution here? Do we give up and let our performance erode back to where it was a decade ago? Do we just let the vulnerabilities pile up until somebody starts exploiting them? Or do we create an entirely new 'premium' CPU architecture that is designed from the ground up around security, and have an underclass of insecure and dangerous devices?


So many of the performance-reducing security mitigations are related to timing attacks.

Would a solution to this perhaps be to have mixed cores: some cores intended for managing passwords and other secure data, with all the performance-reducing mitigations to avoid timing attacks, and other cores completely unlocked, no mitigations, intended for non-secure computation (gaming, number crunching, ...)?

Then let the OS and programmers ensure it's known which computation is marked as secure and which is not, to decide on which cores it can be run.
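The core-assignment half of this already exists on Linux; the per-core mitigation switch is the hypothetical part. A minimal sketch of the pinning side (core numbers and program names are made up):

  # pin the sensitive workload to the (hypothetically) hardened cores 0-1,
  # and the trusted number-cruncher to the unmitigated cores 2-7
  taskset -c 0,1 ./password-manager
  taskset -c 2-7 ./number-cruncher
  # cgroup cpusets can enforce the same split for whole services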


No. I've erased this reply several times as I thought of a mitigation to the problem I was going to state - but in the end nothing mitigates everything. Most of the ways to make this work are hard to get right - it is too easy to forget to mark something as unsafe, or in some other way allow data to the fast cores that the fast cores shouldn't have.

Even if you figure out how to do it though, you can still fall to a double failure: your safe code doesn't leak passwords via a timing attack, but via something else (buffer overflow?) it leaks the password to the unsafe side, which in turn leaks it via a timing attack.


As with anything, we have to ask ourselves why are we offloading the responsibility to the OS providers to solve this, when we really have to determine for ourselves if the risk is even a risk at all on our individual systems. Do I care about this hard to exploit vulnerability on a web server that's streaming video content to the public? No. Does it matter on a DB server with important financial data? Yes it does.

Applying the patch should be opt-in if you ask me. But of course, most sysadmins are hopeless. So the OS vendors push it out; it's safer than leaving the decision to uninformed people.


This "but people are stupid and uninformed" mentality has really got to stop. If you find yourself making an argument that hinges on everyone but you or some large population being idiots then you're wrong. Not only because people aren't stupid but doubly because you're specifically talking about a population of highly educated people who start as developers and get interested in infra.

Sysadmins/DevOps/SREs aren't hopeless, they just have different incentives and responsibilities. Default secure with the option to let down your guard when the need is there is always always the right choice. You wouldn't have your firewall default allow with a blocklist. You wouldn't grant everyone sudo access and then maintain a list of commands they can't execute. Such a thing is impossible to maintain.

For me specifically I manage too many servers to bother with this. It's going to be deployed to everything without exception and if you need more performance we'll rack more hardware. The cost of more CPUs is less than the risk that something will slip through the cracks. I don't care that your pet service doesn't execute any untrusted code, I'm not carving out exceptions when I have 20 teams constantly asking for stuff.


>I'm not carving out exceptions when I have 20 teams constantly asking for stuff.

Sounds like your IT department is severely understaffed and is unable to meet the needs of the developers without reducing service.


Sometimes this happens — the vast majority of organizations have tighter budgets than FAANG, especially if they are not for profit. How helpful do you think this observation could be to someone in their position?


I think it should be opt out because it’s better to have secure by default than fast by default in my opinion.


This is especially true since the exploits are available in Javascript.

> but that analysis is nearly impossible an filled


I see many comments about people wanting to turn off the mitigations. Just remember that you are currently running a virtual machine (your browser) that lets anyone (the web sites you visit) run code on your machine.


CPUs have a bajillion cores these days. Can we keep mitigations enabled for a few of them and run browsers over there, and keep the rest of the system fast?


Does anyone know what could potentially be accessed through these exploits?

Like could we just not use the browser's password manager or use a separate browser for sensitive websites (eg banking)?


How would we know if Intel started disabling features that they explicitly marketed? I believe removing or reducing the functionality of a product that was sold under the pretense of a certain level of performance would warrant class action.

That's like Ford selling a truck with 500hp and then later recalling it to detune it to 400hp. The truck I have is no longer the truck I wanted or the truck I paid for. Ford owes me what it said on the label.

What if Intel wanted to sandbag existing products to promote new ones?


> What if Intel wanted to sandbag existing products to promote new ones?

IANAL, but I'd guess that this is maybe similar to Sony removing OtherOS in a firmware update, and that you could sue Intel for it. Or maybe not, since you can actually disable microcode updates.


You know, just 10 short years ago whenever someone had a problem with an AMD chip, the overwhelming response from the tech community was "dude, just get an Intel. They are way better."

So pardon me, but from a lifelong AMD enthusiast, you all deserve this...

"Hey guys, why are you still running Intel? That shit is like 2 generations behind my AMD in performance and reliability."

Damn, that felt good.


Reliability? It isn't a car, it doesn't wear out.

Also why would you be a life long enthusiast of a specific company? Why not buy what works best at any point in time?


It does wear out. As a general rule everything with moving parts can wear out, and a CPU is full of moving electrons.

https://en.wikipedia.org/wiki/Electromigration

It takes on the order of 100 years for well made modern CPUs to fail due to these effects, but poorly made CPUs could fail far earlier. There are likely some nonzero number of failures attributable to electromigration.

There are some people who buy overclockable parts and don't overclock them because they are more robust. It's unknown exactly how or why CPUs fail. Taking them apart to check is an extremely expensive endeavor. But sometimes they do.


So processors wear out, but it takes longer than a lifetime and longer than any processor has existed for it to happen?

Where is the evidence that AMD is more 'more reliable' than Intel?


Also, the chip obviously doesn't need to wear out to become less performant over time. Reliability is perhaps the wrong word; I think quality would suit better.

But in any case, if you have one chip that decreases in performance 10% per year due to security mitigations and another chip that remains consistently performant, it is something to consider when shopping.


> But in any case, if you have one chip that decreases in performance 10% per year due to security mitigations

Why would this be true?


Because security mitigations contained in micro code updates regularly impact performance.

I mean, no offense, but that is literally the focus of this article and this discussion.


This is a one time thing, where are you getting a pattern from?

You hallucinated 10% a year from a single incident and called it "reliability".


Because Wintel has always displayed bad sportsmanship and anti-competitive practices. I don't want a winner who wins because they cheat.

Also, AMD almost lost everything. At one point they had to sell their own HQ and lease it back to generate cashflow and keep the doors open.

If they had closed, Intel research budgets would tank. They wouldn't have nearly as much incentive to innovate. Without Dodge motor company you would still be driving a Ford Model T in black.

If AMD died instead of making the original Athlon, you would still be stuck with a Pentium MMX because Intel could have stopped innovating and kept their market share.


Thankfully, this doesn't seem to affect my Haswell machine. A microcode update was installed on 2021-06-09, but the microcode loader reports its build date as 2019-11-12. Does this vulnerability not affect older microarchitectures, or does Intel no longer bother updating them? I couldn't find any details in CVE.


It isn't clear if Haswell has this optimization. As an easy test you could run the zero-fill-bench linked in the post and check the fill0 vs fill1 numbers. If they are the same, Haswell probably never had this optimization in the first place.

Based on the Intel microcode release note [1] it doesn't seem like client HSW got any update this time, only HSX (Haswell Xeon).

---

[1] https://github.com/intel/Intel-Linux-Processor-Microcode-Dat...
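Before (or alongside) the benchmark, it's worth confirming which microcode revision the kernel actually loaded; two standard checks on Linux:

  grep -m1 microcode /proc/cpuinfo   # revision the CPU is currently running
  dmesg | grep -i microcode          # whether an early/late update was applied at boot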


From the post about the original finding (https://travisdowns.github.io/blog/2020/05/13/intel-zero-opt...), it looks like Haswell did not have this optimization to begin with (find section Hardware Survey and look at the different architectures - specifically looking for divergence in performance at L3 and RAM between orange and blue dots).


Is there a way to selectively disable mitigations on Macs?


Is that directly related to Spectre/Meltdown?

Is that microcode fix somehow better than mitigations in the kernel?

I wonder if it will allow AMD to catch up even further (even though AMD CPU were also affected).


Well Spectre (not Meltdown) is a broad class of bugs, and whether something is Spectre or not often just comes down to whether the author chooses to put their side-channel attack in that bucket.

That said, I don't think this is related to any of the named Spectre variants.


My PleX server last week, running on CentOS 7 in a docker container, seemed to have trouble with streams that previously played fine. There was a kernel update for CentOS 7.x recently, too. Now I'm wondering if this is the cause.

The container hasn't been rebuilt in many months (like 7) and the only thing that has changed on the machine is CentOS updates.


I just built a Rocket Lake desktop; I wonder if the architecture updates to Willow Cove removed the need for this mitigation.


If I run vanilla Debian stable, would I have gotten this kind of an update?


If my washing machine suddenly decided it needed longer time to wash a load of clothes, that would be a change of specifications after the fact, and I'd be able to return it to the store without issues.

So... why not with CPUs?


The issue here is that every washing machine of even remotely the same quality has more or less the same issue, regardless of manufacturer. So, you return it to the store. What do you do next? Spend even more going to the laundromat, even though you need to travel and it takes just as long once you're there?

The remedy for an entire industry's product performing to a degree less than advertised is a class action lawsuit (good luck getting more than about five dollars for something like this), not returning the products for a refund.


This essentially happened to dishwashers in 2010. Phosphates were banned in detergents, and everyone had to switch to enzyme based detergents. They were much worse when they came out.


> So, you return it to the store. What do you do next?

As someone said below (maybe after you posted)

> Then with that money I can do whatever I want... eg. buy an AMD washing machine... or generally a newer, faster one, becuase some time has gone by. Or go on a vacation. Or drugs and prostitutes.


So, your plan in this case is to go buy a competitor's washing machine with the exact same flaw, which will perform approximately the same as the thing you're returning? Or to demand a full refund on something you've been using for five years, so that you can buy a newer product that's been released since? These don't seem like great solutions (the latter would be fine if you could pull it off, but good luck with that).

The other options here are "Well, I'll just not have a washing machine". Which you can do, but it will take more time to clean your clothes and cost more money after not very many washes at all (or you hand-wash, but then you're looking at 10x as long). Same for computers -- if simply not having one is an option for you, you probably haven't noticed the degradation.


So you're just going to let manufacturers cripple the stuff you bought because of their mistakes/incompetence, and let them do it silently?

They should give you your money back, or replace the CPU with one that is as fast as your old one before they crippled it - ie. new CPU with eg. higher clock to compensate for the speed loss due to this fix.

If you buy a 1000-lumen lightbulb, and the manufacturer has to lower the output by 50% because of overheating, they should either give me my money back, or give me a 2000-lumen bulb which was crippled to 1000 lumens, to get the original brightness I bought the bulb for.


In this analogy, the lightbulb manufacturer can't build a 1000lm bulb for a comparable price once they account for the overheating issue. The technology won't exist for, say, five years. No other manufacturer can do that either. They can give my money back, but then I still don't have a lightbulb. What am I going to do, buy the same one again?

One thing you might say is that this change happened even to lightbulbs I've had for 5+ years. I can buy a functional 1000lm bulb for the same price now, due to the march of technology! The problem is that I've been using the lightbulb for five years already. If you use a lightbulb for five years, and then it fails entirely, the manufacturer is never going to give you a full refund: why would they give you a full refund for a lesser failure mode?

The choice is not between silently letting manufacturers cripple stuff, and having non-crippled stuff. The choice is between silently letting manufacturers cripple stuff, and loudly letting manufacturers cripple stuff. You're welcome to do the latter, but your stuff is still going to be crippled.


You can still operate it with the original specifications. Do not apply the microcode if it's that important to your workload. Just remember to avoid running untrusted code.


So i have two options:

- keep the old "firmware", and risk the machine catching fire

- put the new firmware on, and make it wash slower

Yeah... I'll choose returning it to the store.


You don't risk the machine catching fire by itself but rather someone setting it on fire by letting you wash their clothes that are designed to ignite it.

Or perhaps more accurately, someone spoiling your clothes by putting a dye in their pocket and letting you wash it in the same batch.


Situations like this happen often enough. The 2012 Nissan Leaf shipped with a regenerative braking configuration that stressed the battery too much, leading to in-warranty replacement. Nissan issued a recall that patched the software to reduce regenerative braking capacity, leading to reduced range and fewer warranty replacements.


And there are lawsuits quite regularly too: https://www.greencarreports.com/news/1099200_nissan-leaf-bat...


Give it a shot, but for me it's simply one component in my laptop. I'm not going to bother unless there's a class action lawsuit.


The original specifications generally include "secure"... which you cannot operate it within without sacrificing other parts of the original spec.

The problem isn't that they made the patch, it's that they made a defective part in the first place. The patch just fails to cure the fact that they made a defective part.


It affects the entire class of products from all manufacturers to various degrees. No one making an out-of-order, speculative processor has made claims about immunity to all timing attacks, much less specifications promising that.


CPUs are overwhelmingly more complex than a washing machine. Or at least, than a washing machine from the 80s with analog controls (new washing machines have CPUs inside them).

The problem, IMO, is that a manufacturer of a high-quality washing machine has the human ability to go through and engineer out (or economize on) every conceivable failure mode. There's no need to issue updates, because it only does the specific behaviors that it was designed to do. Each part has a wear characteristic, can corrode, can be mechanically stressed... but a pressed sheet metal bracket is fundamentally always going to be a bracket: either it corrodes, or wears, or yields, or it doesn't, or else it holds the things it's supposed to hold; there's just not that many ways in which the performance of a bracket can surprise the designer. The owner's manual says it's to be transported carefully, installed on a flat surface indoors, plumbed to water/wired to 120VAC/drained through a standpipe, and then you operate it by adding clothes and turning the knob. It's a fixed-purpose washing machine; that's what it does.

In contrast, Intel employed very smart, hardworking designers to write the microcode and HDL that describe the `vmovdqu YMMWORD PTR [rax], ymm1` assembly instruction; they thought about what that meant, what side effects it could have, why it should do what it does, and they probably were quite pleased when they optimized it to be faster when doing zero stores. But while the hypothetical ME for our washing machine could probably comprehend every requirement for the bracket, the Intel designers didn't build a fixed-purpose machine; they built a general-purpose machine that runs both trusted and untrusted code of unknown, highly flexible contents. The state machine describing all possible results of a Turing-complete processor, even with storage limited to mere gigabytes instead of an infinite tape, is unimaginably large. I maintain that numbers over a few thousand are humanly impossible to fully imagine, but to put it in concrete terms: worldwide, on the order of 10^20 novel combinations of 64-bit instructions are executed by CPUs every second, and every second, millions of programmers and computer users are generating never-before-seen demands of the hardware.

It's impossible to predict every side effect of every combination of instructions a CPU will run. Meanwhile, the brackets are simple brackets, handling the same requirements and performing the same functions that the designer should have known about when they designed it.


It's not a washing machine, it's a tiny motor inside the washing machine that only works with that model.

What CPU are you replacing it with? Most if not all modern CPUs are vulnerable to this so changing it makes little sense. Even if you can change it, you'd need to switch motherboard as well.


I, as a consumer, don't care... I bring the whole washing machine to the store and get my money back, because they changed the specs after i bought it.

Then with that money I can do whatever I want... eg. buy an AMD washing machine... or generally a newer, faster one, becuase some time has gone by. Or go on a vacation. Or drugs and prostitutes.


To parent's point, currently the risk is all pushed on the customer.

We wouldn't accept that on any generic consumer product. We'd recall whole fleets of cars if they had severe flaws that directly affect their performance.

There might not be viable alternatives, but there should be at least something from CPU makers to help swallow the pill.


Because CPUs don’t have performance immutably specified in their specifications. You’ve decided that x transistors should yield y performance and so when you get y’ performance you get upset. But you were promised x transistors and you have x transistors.

No one gave you a guarantee that “there will never be any security issues with this product and if there are, mitigations will cause no performance degradation”.

If you want that promise you’re gonna have to pay for it, man. And it’s not gonna be a flat fee. That’s an insurance model.


There was a security issue. In order to fix the issue, so that your washing machine doesn't do something crazy like take over your bank account, we had to slow your washing machine down a bit.

We expect that this issue will be sorted out soon.

In the meantime we hope you'll consider that speed and security are sometimes traded off, in this world of network-aware, gigahertz speed washing machines which you can also use to do banking, create spreadsheets and watch your favorite movies.


Author here, happy for any feedback!

Just a note that the title should be:

Your CPU May Have Slowed Down on Wednesday


Vaguely relatedly: I used to design graphics accelerators for Macs back in the 90s. We got our best performance by spending time making writing 0s (or rather solid fills) really, really fast (1.5Gb/s, which was a lot back then). Why? Because, as always, whatever you'd do in hardware, the software guys would piss away. For example, sometimes Excel would erase the background 8 times before it ever wrote a black pixel; there was little point in speeding up those black (interesting, useful) pixels when speeding up the background white ones, written over and over on top of themselves, was such a win.


In a similar vein, all non-tiled GPUs do some form of lossless framebuffer compression purely as a bandwidth optimization (it doesn't save on DRAM footprint because that would mess too much on the fly with the memory layout and you'd need to allocate for the worst case anyway). The simplest case is indeed fast clears (to any color/depth/stencil value) but it also works at a finer grain. It's essential for MSAA where a single pixel on your display is resolved from up to 16 subpixel samples which most of the time are strongly correlated. E.g. 1 constant color, or 2 constant colors separated by a line (from a triangle edge). And not just color but depth: with a 16x MSAA pixel covered by a single triangle you have 16 distinct depth values but they are all coplanar, so you can compress them losslessly by just storing the plane equation's coefficients.


Well that's sort of a different point - what I was more generally trying to point out is that sometimes you need to optimise for stupid stuff (and win bigly) ... because people actually do stupid things


It was "Your..." when I submitted it. No idea how that changed. Regardless, fixed now.


HN does various title mangling: capitalising things, removing words like “your” and numbers from the start, that kind of thing. I strongly dislike it because it harms titles far too often. Fortunately it allows you to second-guess it by going back and editing the title after submission.


How did that idea pass five minutes of contemplation?


It desensationalises/de-clickbaitises/de-Buzzfeedises a lot of stuff. The submitter can edit it, and the edit doesn't go through the same sanitisation. Also if it's popular enough for a moderator to notice/they happen to, they often fix them too (or sanitise further in other cases).


I imagine that it looks more reasonable when you see all the submitted titles; there may be many that it improves—I don’t know. (I still think it should be reviewed and probably curbed or discarded, but I’m confident that there’s a reason it’s there; dang knows that it regularly does harm to titles.)


Most likely an Arc feature.

bug*

feature*

(bug*)


I'm glad you are here. I came into comments on the off-chance to specifically say that I loved your writing style. Refreshing, witty, clear. Keep up the good work.


Here's my feedback: ditch your intel gear and get a recent AMD processor.

See, we can also play that bait game


Way ahead of you. I had an 1800X at work years ago which was a beast; current PC is a 2700X (wanted the 5950X but, well, can't buy one, and the 5900X isn't really what I want, but I digress) - love them.

Last laptop I bought the missus at xmas was a Ryzen 4000 and that thing screams performance wise, incredible for 800 quid.


I've got two 3700x because 3900x and 3950x were unavailable for quite a while here in the dark parts of Europe. Wife has a really slick 1.1kg aluminium Lenovo with a 2500u, which we plan to replace with a couple of 5800u machines


It doesn't have to be either/or!

I have a Zen 2 box which I'm just starting to learn about.


The title is a click-bait. Please don't do this.


Wikipedia on clickbait <https://en.wikipedia.org/wiki/Clickbait>:

> Clickbait is a text or a thumbnail link that is designed to attract attention and to entice users to follow that link and read, view, or listen to the linked piece of online content, with a defining characteristic of being deceptive, typically sensationalized or misleading. A "teaser" aims to exploit the "curiosity gap", providing just enough information to make readers of news websites curious, but not enough to satisfy their curiosity without clicking through to the linked content. Click-bait headlines add an element of dishonesty, using enticements that do not accurately reflect the content being delivered.

I don’t feel this applies in this case because it fails to be deceptive, misleading or dishonest. (Parts of the definition certainly apply, but not enough.)

Sure, you could have a title like “Intel just slowed down their CPUs with a microcode update” (and I do prefer explicit titles like this—you should see the titles I write myself, 70- and 80-character limits severely cramp my style), but the article’s title isn’t particularly bad.


Do you think that click-bait titles are ineffective, or effective but unethical, or something else?


You didn't ask me, but I feel strongly about the question asked.

Effective but unethical, but unethical in the sense that marketing psychology is unethical.

Unethical because it uses well known human psychological weaknesses to help nudge a target towards an action that they themselves may not have chosen to perform without the nudge.

If I were asked, I'd wager that most motivated persuasion is unethical to some degree.


They are unnecessarily vague/mysterious.

Instead, the title should just state what is the case: "Intel CPU-update causes slowdown" or something like that.


I think the "on Wednesday" bit is also significant since it tells us that this was a recent change (and one we probably weren't aware of).

On the whole I'd say that the original title is much better than most.


Should've said "... this Wednesday" then.

The way it's phrased implies a periodic event, which is what makes the title clickbaity.


I specifically tried to avoid implying that (periodic). I think it would have to say "on Wednesdays" (note the s) to have that meaning.

The title originally said "last Wednesday", but I waited too long to publish it and it was actually two Wednesdays ago...


They are misleading.

Imply one thing, deliver another, and it leaves an aftertaste even if the actual content is good (like in your case).


Ignore them. Nice work as usual, Travis.


mitigations=off


This won't restore your performance in this case: the microcode changes the behavior of the zero store optimization without any action from the kernel. If you accept the new microcode, you get the new, slower behavior.

Perhaps there is an MSR bit you can flip to set the behavior back to the old way, but none has emerged so far.
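
One quick sanity check is to look at which microcode revision the kernel reports and compare it against Intel's release notes; a minimal sketch (just echoing the relevant /proc/cpuinfo line, same value you'd get from `grep -m1 microcode /proc/cpuinfo`):

  #include <stdio.h>
  #include <string.h>

  /* Print the microcode revision reported for the first CPU listed
     in /proc/cpuinfo (all cores normally report the same value). */
  int main(void)
  {
      FILE *f = fopen("/proc/cpuinfo", "r");
      if (!f) { perror("/proc/cpuinfo"); return 1; }

      char line[256];
      while (fgets(line, sizeof line, f)) {
          if (strncmp(line, "microcode", 9) == 0) {
              fputs(line, stdout);   /* e.g. "microcode : 0xde" */
              break;
          }
      }
      fclose(f);
      return 0;
  }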


Thankfully I do save all of the microcode packages, so I can just use an earlier version by simply rebuilding the initramfs. I also hold that package back on Arch, so it does not get upgraded.
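
For anyone wanting to do the same, holding the package back is (assuming a stock pacman setup) just a matter of adding it to the IgnorePkg line in /etc/pacman.conf:

  # /etc/pacman.conf, in the [options] section:
  IgnorePkg = intel-ucode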

Might want to look for them here: https://github.com/intel/Intel-Linux-Processor-Microcode-Dat...

As to how to actually use those files to get your "intel-ucode.img", just take a look at: https://archlinux.org/packages/extra/any/intel-ucode/

For the "iucode_tool": https://gitlab.com/iucode-tool/iucode-tool/-/wikis/home

The actual PKGBUILD for it on Arch Linux: https://github.com/archlinux/svntogit-packages/blob/packages...

---

  # cd into the unpacked upstream microcode release
  cd Intel-Linux-Processor-Microcode-Data-Files-microcode-${pkgver/./}
  # drop the 'list' index files so only the microcode blobs remain
  rm -f intel-ucode{,-with-caveats}/list
  mkdir -p kernel/x86/microcode
  # pack the blobs into an early-load firmware image for the kernel
  iucode_tool --write-earlyfw=intel-ucode.img intel-ucode{,-with-caveats}/
The above gives you "intel-ucode.img". Copy it to "/boot/". Run "mkinitcpio -p linux" or the like.


This likely would not help here. It is blamed on the CPU's microcode, not the kernel's mitigations.


If that's the case, the microcode update happens at some point during the boot process; it isn't persistent. It's applied either by the firmware or by the OS. If it's done by Linux, you can disable the boot step that performs the microcode update, or patch it to load an older microcode image, and you're done.
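
For example, with systemd-boot on Arch (assuming the common setup where the early microcode ships as a separate image rather than being baked into the initramfs), the update is just an extra initrd line in the boot entry, and dropping it skips the early load; something like:

  # /boot/loader/entries/arch.conf (illustrative)
  title    Arch Linux
  linux    /vmlinuz-linux
  # initrd /intel-ucode.img        <- commented out: skip the microcode update
  initrd   /initramfs-linux.img
  options  root=UUID=...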


Is the dis_ucode_ldr kernel command line option sufficient to disable updates?


WTF. I thought my grandma's laptop, purely catering to her Netflix addiction, was just Windows glitching, and I spent 4 hrs downloading and installing Ubuntu on it.


Make processors which are 10% faster than the competition via dubious security practices. After some time, slow the processors down by 20% because various corner-cutting practices are detected.

First you kill AMD with "efficiency". Then you "encourage" users to upgrade, since security updates make this efficiency disappear.

How is this even legal?



