Ask HN: How to intentionally throttle CPU on an M1 MacBook?
40 points by dsnr on Oct 31, 2022 | 61 comments
Hi everyone,

I recently had no choice but to upgrade to an M1 MacBook from an Intel-based one. I did not want to upgrade, as I'm working on software for which performance on mid-range machines is critical, but unfortunately the previous laptop gave up on me. Since M1 MacBooks are so much more powerful than previous generations, is there a way to intentionally slow mine down so I can test my program on a slower machine? One way I can think of is to run the program inside a VM, but that would slow down the development loop.

Thanks for all the suggestions. Now I have the following list of tricks to try (collected from the thread):

- taskpolicy -b -p [pid]

- macOS low power mode (which can even be enabled when plugged in)

- VM with limited CPU and RAM, and shared filesystem with host OS.

- Buy older laptop (not an option at this point, really)

- Stress the machine by compiling the Rust compiler in parallel :)




The M1 chips have two types of cores: high-performance "Firestorm" cores and energy-efficient "Icestorm" cores, which are much slower.

It is indeed possible to constrain a process to the Icestorm cores using taskpolicy:

> taskpolicy -b -p 567

where 567 needs to be replaced by the process id.

Reference: https://eclecticlight.co/2022/10/20/making-the-most-of-apple...
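
If you'd rather target processes by name than look up PIDs manually, something along these lines should work (untested sketch; "MyApp" is just a placeholder name, and both pgrep and taskpolicy ship with macOS):

  # Demote every process whose name matches to the efficiency cores.
  pgrep MyApp | xargs -n 1 taskpolicy -b -p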


You can also do this with GUI apps like App Tamer※. I use it myself to limit the CPU consumption of hidden background apps (but not visible apps in the background), optionally restricting them to the Icestorm cores as well. It has lots of options and ways to configure it.

※ - https://www.stclairsoft.com/AppTamer/index.html


Awesome app, thanks for sharing!


This is some TIL material, and looks like a really viable option.


This makes me want to constrain every web page to the icestorm cores. If a web page wants more than ~5% of a CPU, I want that web page to die.

Actually, now that I think about it, I'm almost tempted to write a script to do just that: either relegate CPU-hogging processes to the Icestorm cores or just kill -stop <process id> them.


I tried to implement what you said on my M1 2020 MBP. Here's the one-liner (albeit using 0% of CPU as the cutoff, and only throttling Safari processes):

  ps axo pid,%cpu,command -r | awk '{ if ($2 >= 0 && $3~/WebContent/ )   { print $1 } }' | xargs -L 1 taskpolicy -b -p

You could set this up to run at whatever interval, and it should work (a looped version is sketched below). Using browserbench.org/Speedometer2.0/, before running this one-liner I get a score of 274. Afterwards, I get a score of 33.6.
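
For reference, here's a rough sketch of that loop, in case you don't want to bother with launchd (the 60-second interval is an arbitrary choice, and the WebContent match is just the same one from the one-liner above):

  # Re-apply the throttle to Safari's WebContent processes every 60 seconds.
  while true; do
    ps axo pid,%cpu,command -r \
      | awk '$2 >= 0 && $3 ~ /WebContent/ { print $1 }' \
      | xargs -L 1 taskpolicy -b -p
    sleep 60
  done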

EDIT: Fixed url


This looks awesome! I already have a periodic script that archives my desktop and downloads (clean desktop for life!) but it only runs hourly. That said, it would be worth it to run this every minute, or maybe even more often. I assume this is almost zero-weight to run, but we'll see about that.


One caveat to note[1] is that apparently you cannot promote those threads back onto the performance cores using taskpolicy.

[1] https://eclecticlight.co/2022/01/24/how-you-cant-promote-thr...


I would be perfectly fine with that.


https://browserbench.org/ (instead of ..benchmark..), right?


Oops, my bad. That's correct.


Ugh, I ran it, then sat back amazed: I hadn't realized my computer is so crap!


One thing I noticed is that browser tabs each have their own process id. Would applying the policy on the main process affect each child tab process?


I had a piece of software that did this, and it turned out the energy-efficient cores are still really fast.


I understand it's not what you want to hear, but I think the only viable option is to test on real low-powered hardware.

A throttled high-end machine will not behave like a low-end machine. Sure, they'll superficially both be slow, but not slow in the same way. There are differences in IOPS, cache sizes, and so forth you simply can't escape from.

Especially in low-end laptops, you may also have uneven performance from thermal throttling.


This is 100% true (and different "high performance" machines can operate differently, too).

Some people like to do all their development work on a slower machine so they catch things early; others like to do all the work on the fastest machine they have, and only test on the slower one when nearing completion. Both have arguments in favor - a decent compromise can be "build on fast, deploy on slow", where it may be as simple as a shared folder between the two machines, with Synergy or even Ventura's built-in equivalent for running the app.


This.

Crippling a high-end Mac is still not going to effectively mimic a low-end system. An end-user device is a system of queues, if you want to think of it that way: the slowest/smallest/busiest link in the system determines the overall experience. Sounds like for your needs you're going to need a real low-end system that has real bottlenecks in CPU (speeds, caches, accelerators), memory (capacity and speeds), and I/O (capacity and speeds).


If anyone needs to do this with an Intel-based MacBook Air 2020 model, the answer is much simpler: join a Google Meet call.


Docker Compose allows for resource limits which can be useful for testing harnesses. I know this works locally for v3.9:

  version: '3.9'
  services:
    some-container:
      deploy:
        resources:
          limits:
            cpus: '0.5'
            memory: 1.5G
          reservations:
            cpus: '0.5'
            memory: 1.5G
Reference: https://docs.docker.com/compose/compose-file/deploy/#resourc...
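
To sanity-check that the cap is actually applied, something like this should do (some-container is just the placeholder service name from the snippet above):

  docker compose up -d some-container
  docker stats --no-stream    # CPU % should stay near the 0.5-core cap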


AFAIK Docker doesn't run natively on macOS, so it's running via a VM. And since you're already running it via a VM, you might as well drop the additional abstraction of Docker and set the limits on the VM directly.


Why try to mid-optimize though? Even when running with Rosetta, Docker's still faster than a 2015/2016 MacBook Pro (since we're talking about mid-tier machines), so being able to throttle it even further is pretty useful.


> Why try to mid-optimize though?

What do you mean mid-optimize?

I'm just saying that if you're running on macOS and want to limit resource usage, running things in Docker in order to limit resource usage feels like solving the problem at the wrong layer: you're already using a VM in that case, so just put the resource limitation directly on the VM instead.


Sure, but trying to put the limitations in place so that the VM runs slower is a ridiculous chore, whereas setting restrictions for a Docker image is a few lines of trivially tweaked ASCII. Running Docker may be using a VM, but it's not a slow VM compared to mid-tier machines; something running at half the speed of the M1 is still running considerably faster than the target you're trying to hit, so just tell Docker to slow down instead of trying to mess with macOS behaviour.

Hence the mid-optimization: don't stop at "the VM's already slow, tweak that"; keep going and tweak Docker. It's almost trivially easy, and trivially replicated on any other machine should you need to either scale or hand off.


"put the limitations in place so that the VM runs slower is a ridiculous chore"

It's just as easy as tuning Docker. Every VM gets resources allocated to it that you, as a user, can customize however you want, and most major VM implementations offer this today. It's just as replicable as tuning Docker container parameters.
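
For example, with QEMU it's just a couple of flags (rough sketch only - you'd still need a bootable guest image, and the numbers are arbitrary):

  # Cap the guest at 2 vCPUs and 2 GB of RAM; hvf is macOS's built-in hypervisor backend.
  qemu-system-aarch64 -machine virt -accel hvf -cpu host \
    -smp 2 -m 2048 \
    -drive file=guest.img,format=qcow2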


Honestly, just buy an older laptop on eBay. Depending on how slow you want to go, they're very cheap.


I did this once, bought an older MacBook Pro to make sure my app was still 'usable' on an older machine. After about a year the machine didn't get any OS updates, and then a year after that it just died. Wasted my $300.

I have a 2006 Intel MacBook that still works but I've also had 3 year old Macs just quit working.


In some ways $150/yr to make sure your app was usable may be a worthwhile expenditure.

And if you abuse things like https://github.com/dortania/OpenCore-Legacy-Patcher/releases you can keep running "modern" releases on older hardware.

But it depends on what you're trying to do.


Sorry, did you say "this computer that only cost me $150 a year but allowed me to test and deploy for its platform on native hardware was a waste of money"? Because it kinda sounds like the complete opposite?

As a counter anecdote, I've done this three times for Macs so far, and all of them still work for their intended purpose. The 2011 model even survived overheating due to user error that fried its internal graphics cables, but it works fine with an external monitor. And sure, they're all stuck on whatever their respective latest operating systems are, but that's fine: they're test/compile machines, not daily drivers. And not all of them: in 2022 the oldest realistic Mac target you're looking at is the 2015 MacBook Pro (i.e. "the good one", which even today's models can still take design lessons from).

And here's the best part: if something in them breaks, replacing the entire machine is only a few hundred bucks. And you don't even have to do that: you just need to buy a "for parts" listing with the bits you need still working. Any model that's still being used in the wild has a decent-sized for-parts market; you can tell the world has stopped using a model when the parts dry up.


You bought a used machine, and got 2 years of testing out of it for $300? You got a pretty good deal.

I've also had 3 year old Dells, HPs, Lenovos, ASUS, you name it... all die prematurely. I don't honestly believe that any particular company uses any better quality hardware than another. Not measurably better, at least. You're just gonna win some, and lose some, no matter how much you pay for the hardware. This year's most reliable laptop could have a revision next year that makes it the worst laptop...

With laptops and proprietary desktops especially, it's sometimes worth paying for the extra warranty.


Perhaps the devices are worth fixing. I trust Rossmann Repair because of their YouTube videos.


Depending on how it dies, it's often not worth repairing a $300 laptop - since you can buy another one for $300.

It is often worth selling the dead one for parts to someone else; even if the logic board is dead the screen is likely to be useful.


If you don't want to do eBay and you're near a Micro Center, they often have some really old refurbished stuff in stock. Got myself a 2014 Mac mini for $250.

:)


I think it depends on the Micro Center. I've been going to the one in Parkville since it opened, and will keep doing so until either it dies or I do. Back in 2015 I got a really nice refurbed 2014 MBP that I still use daily, but a couple of years later they dropped a lot of old stock from their Apple section, and there's been next to nothing in there ever since - latest models of hardware, a few matching accessories, and that's more or less all there is to it.

I haven't asked about that that I can recall, but my guess is they don't see a lot of margin on that stuff, especially with at least two Apple stores in easy driving distance and most Apple customers tending strongly to favor the latest and greatest. Disappointing, but with the streaming boom then being on the upswing, I really can't blame them for making space to cater to the open-walleted parents of a million infants in Fortnite t-shirts.

Still, if you have a Micro Center nearby and you haven't been, you should go there. It's like what computer stores used to be, back when those still existed.


I've had quite good luck with CL in the past, and FB Marketplace more recently. I feel much better being able to physically see the device before I pay for it.

That way you can verify that it's been properly reset, and that the description/pictures are accurate.


There are many ways to stress the machine or deprioritize your program so that it behaves worse. That said, actually getting performance that is in-line with what your users on old machines would see is generally not easy to achieve. What are you working on? Perhaps there might be something that can be done specifically for your workload.


M1 has a low power mode in the battery section which limits CPU performance.

Geekbench scores with normal and low power mode

- Single core Normal/low power - 1749/1053

- Multi core - 7739/4491

https://www.reddit.com/r/MacOS/comments/qj13bv/m1_macbook_ai...
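
If you'd rather toggle it from a script than from the Battery settings pane, newer macOS releases expose it through pmset (check man pmset on your version first):

  sudo pmset -a lowpowermode 1   # enable low power mode on all power sources
  sudo pmset -a lowpowermode 0   # back to normal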



`sudo /usr/bin/cpuctl` can turn off cores, so for example you can turn off the performance cores.


Thank you for your suggestion, but I couldn't get it to work.

I wasn't able to take a CPU offline on my M1 Mac: "cpuctl: processor_exit(i) failed"

Moreover, I can't tell the core type from the cpuctl list, since all CPUs show the same "type".

I assume it's easier to just enable low power mode then. Of course, the disadvantage of these approaches is that the whole system gets slower.


Depending on what your program is, you might be able to just do some testing on a user's machine.

I sometimes use a remote desktop tool (e.g. TeamViewer) to troubleshoot my software running on a user's machine (with the user on the phone, normally).

It's usually the quickest way to understand what issue a user is having, and the user walks away thinking they're getting great customer service.

It could be a good way to get a rough feel for which areas of your software perform well/badly on a low-end machine. One benefit is you get to experience it the way a user does - the interactions with a 7-year-old copy of Norton antivirus, one of 13 programs running at once with 1 GB of RAM, the fact that the user has an always-on-top program blocking some of the UI but doesn't know how to move it, etc.


Why would a VM slow down the loop? You can have shared directories, and you can communicate between the two.


A VM is still an option; many VMs today allow easy file exchange between the host and the guest. With proper configuration you can still code and compile in your original workflow, and test in the VM.


Was just buying an old Mac from eBay not an option? Tons are still available; I've bought several for that purpose over the years.


If you run something very performance-hungry at the same time, the M1 will slow down considerably. Two things that constantly max out my M1 are compiling the whole Rust compiler / toolchain (which is like 30 min of heavy load) and rendering something complex in Blender (with the Cycles renderer), which easily taxes the M1 for hours.
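
If you don't have a rustc checkout or a heavy Blender scene handy, a cruder stand-in is a handful of busy loops - it does nothing but burn CPU, so it's only a rough approximation of a real workload:

  # Spawn 8 busy loops, roughly one per core; kill $(jobs -p) cleans them up.
  for i in $(seq 8); do yes > /dev/null & done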


I know that you don't want to spend money on a test machine, but you might want to check for used laptops on eBay, Craigslist, or OfferUp. Or someone here might have an older laptop sitting around.


Does your program require macOS? If not, maybe you could run it on a low-end machine from AWS/GCP. With Terraform it's pretty easy to create/destroy a VM on demand for testing.


something something Electron.


That's easy, just use your bare hands to throttle that thing. Whether you succeed or not depends on your strength.


What about running a virtual machine and not giving it much RAM or CPU to work with?


Do any of the Mac Mini hosting providers offer hourly/daily rental?


You can force processes to the e-cores. That should be plenty slow :P


Use a profiler?


I use Zoom. CPU and battery drain.


SIGSTOP/SIGCONT in a loop?
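
Roughly like this - it's the same duty-cycle trick tools like cpulimit use, with PID standing in for the target process id and the 0.5-second halves chosen arbitrarily:

  # Run the target about half the time by alternately stopping and resuming it.
  while true; do
    kill -STOP "$PID"; sleep 0.5
    kill -CONT "$PID"; sleep 0.5
  done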



There is a low power mode you can turn on in battery settings.


I barely notice a performance difference in low power mode.


There is a significant difference:

Geekbench scores with normal and low power mode

- Single core Normal/low power - 1749/1053

- Multi core - 7739/4491

https://www.reddit.com/r/MacOS/comments/qj13bv/m1_macbook_ai...


Oh interesting. I guess it speaks to the baseline performance of the M1?


Ever heard of Docker?


Put it in an oven and wait a bit.



