I have no idea how many other OSes do this, but at least OpenBSD will use idle time to pre-clear memory pages that have been returned to the OS, so that when the next process has them mapped it doesn't have to zero-fill on demand at the last possible moment.
The zeroing has to be done at some point, but if you keep a queue of pages that need clearing and work through as many as you can while any core is idle, a system that isn't 100% busy can appear to "improve" performance a bit by time-shifting the work: it happens neither right after the former process releases the pages nor at the moment the new process asks for the memory to be mapped in.
But when that list is empty, doing the right kind of CPU sleep is totally worthwhile of course.
- Pre-zeroing a page only takes 80ns on a modern CPU. vm_fault overhead
in general is at least ~1 microsecond.
- Pre-zeroing a page leads to a cold-cache case on-use, forcing the fault
source (e.g. a userland program) to actually get the data from main
memory in its likely immediate use of the faulted page, reducing
performance.
- Zeroing the page at fault-time is actually more optimal because it does
not require any reading of dynamic ram and leaves the cache hot.
- Multiple synth and build tests show that active idle-time zeroing of
pages actually reduces performance somewhat and incidental allocations
of already-zeroed pages (from page-table tear-downs) do not affect
performance in any meaningful way.
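For a rough feel for the 80ns figure in the first bullet above, here is a minimal micro-benchmark sketch (not the DragonFly test itself, just an illustration); it times memset() over a 4 KiB page in a loop, so it mostly measures the hot-cache case, and the result varies a lot by CPU:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define PAGE_SIZE 4096
    #define ITERS     (1 << 20)

    int main(void)
    {
        static unsigned char page[PAGE_SIZE];
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            memset(page, 0, PAGE_SIZE);
            /* compiler barrier (GCC/Clang) so the memset isn't optimized away */
            __asm__ volatile("" :: "r"(page) : "memory");
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("avg ns to zero one 4 KiB page (hot cache): %.1f\n", ns / ITERS);
        return 0;
    }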
I expect OpenBSD to continue to do this anyway though, because there is the possibility that someone can reboot the system into a new OS (presumably designed just for this purpose) and read whatever was in RAM. Of course programs that deal with encryption zero memory before returning it (it is hard to make sure the compiler doesn't optimize this otherwise-useless work out), but most other programs that deal with secrets are not so well written and will leave sensitive information lying around.
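The usual way around that dead-store problem is a zeroing call the compiler is not allowed to drop - explicit_bzero() on OpenBSD (and glibc 2.25+), memset_s() where C11 Annex K exists, SecureZeroMemory() on Windows. A minimal sketch (the key-handling function is made up, just to show where the call goes):

    #include <string.h>   /* explicit_bzero(): OpenBSD, and glibc >= 2.25 */

    /* Hypothetical key-handling routine: wipe the secret before the stack
     * frame is reused. A plain memset() here is a "dead store" the compiler
     * may delete; explicit_bzero() is guaranteed to survive optimization. */
    static void use_key(void)
    {
        unsigned char key[32];

        /* ... derive the key, encrypt something with it ... */

        explicit_bzero(key, sizeof(key));
    }

    int main(void)
    {
        use_key();
        return 0;
    }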
The starting point is that there is stale, useless data in RAM. Then a usermode program requests an empty page, and usually when it does this it wants to use it immediately.(1) Using non-polluting writes, you have to spend main memory bandwidth both on clearing the page and on bringing the page back into cache immediately afterwards when the program uses it.
Using writes that just allocate new, zeroed dirty lines in the cache (like AMD's CLZERO), you avoid both the write (which will happen later, when the lines are evicted from cache, probably after the program has used them) and the read, because the lines are now all in the cache.
(1) And on Linux this is trivially true, because Linux only allocates and clears the page when it is first accessed.
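If you want to see that laziness directly, here is a small Linux-specific sketch: it maps 64 anonymous pages and watches the process's minor-fault counter, which only climbs as each page is first touched (the moment the kernel actually zero-fills it). The exact count can be nudged by whatever else libc allocates, but the pattern is clear:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    static long minor_faults(void)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_minflt;
    }

    int main(void)
    {
        const size_t len = 64 * 4096;   /* 64 pages */
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;

        long before = minor_faults();
        for (size_t i = 0; i < len; i += 4096)
            p[i] = 1;                   /* first touch: fault + zero-fill happen here */
        long after = minor_faults();

        printf("minor faults while touching 64 pages: %ld\n", after - before);
        munmap(p, len);
        return 0;
    }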
I don't follow. Who is "you" here? The user-mode program, or the kernel-mode zero-page thread (or whatever its name is)? I'm talking about the zero-page thread here, which is zeroing pages in the background long before any thread has requested access. Those threads do not want to evict anything from the cache. This seems exactly what we want.
The issue is that zeroing pages in the background is a pessimization that should never be done. The user-mode program that allocates some memory is typically not going to be able to use only writes that allocate new cache lines without reading memory. So to compare the two systems:
Your system: memory is released, the kernel clears it in the background, wasting write bandwidth (which might not matter for anything except power if the system was idle at the time), and when the user-mode program starts using it, it will start writing and every new line it writes to will trigger a spurious read.
Modern Linux: memory is released, the kernel lets it lie, not using any power or bandwidth to do anything to it, until a user-mode program allocates it and touches the page. Then the kernel picks up the page and fills the entire page with zeros, using whatever idiom on that CPU allows it to just allocate the page in cache without reading it from RAM. This is really fast, faster than a single memory fetch. The user-mode program can then use it directly without having to fetch anything from DRAM.
> This is really fast, faster than a single memory fetch.
Nit: it's faster than a page fault (so fault + zeroing is pretty much the same as just fault).
According to the recent-ish Latency Numbers, a main memory reference is ~100ns (variable by arch and by local vs. remote DRAM), which is about the same as zeroing a page, at least with respect to the DragonFly numbers I posted above.
Windows NT thread scheduling does not work that way. If the Zero Page Thread is ready to run, the processor's Idle Thread is not scheduled. Idle Threads do not live on ready queues, and are only dispatched when there is no thread available from the processor's ready queue.
Also, do not conflate the user-thread idle priority class with the (non-)priorities of Idle Threads.
The idle task (when there's nothing to run) doesn't have a priority at all, it just gets run when nothing else is available. The "idle" priority is an actual priority, for when there's no higher-priority task to run. The ZPT is prioritized below "idle", so super-duper-low priority.
It runs when there are no higher-priority processes to schedule on a particular core. IDLE processes are higher priority, so as long as an IDLE process can be scheduled, the zero page process won't run.
This is when I miss the days of big iron. The Burroughs B5500, 6500 and 6700 all had massive light displays of registers. The idle process loaded the registers with a bit pattern that showed the Burroughs logo, a circle with a B in it. You could watch the panel and see how busy the system was by how often you could see the logo flash in the lights.
That piqued my interest and I found this delightful video from 1969 on youtube: https://youtu.be/MkxBmviMy_E covering the launch of the B6500. It has some surprisingly interesting technical details.
Your comment got me curious: apparently Linux controls LED activity through /sys/class/leds [0]
A cursory search led me to this project[1] that blinks the power LED according to disk activity, which is not far from your idea (replacing disk activity with a composite of system load, for instance?)
Isn’t there already a disk activity led? Well, on desktop boxes anyway.
Eons ago I had a small program on Linux that blinked the otherwise-almost-useless scroll/numlock leds based on network i/o activity. It was fairly cool.
Reminds me of the book Cryptonomicon by Neal Stephenson, where the character writes some code to redirect stdout to blink the num/scroll/capslock status LEDs in Morse code.
The project I mentioned above was started by someone who did not have such an LED on their laptop.
I just looked at several servers I have access to at several operators, with different physical hardware and distros; none of them has an LED defined for anything other than scroll/numlock.
I guess you're referring to CPU Trigger led activity as defined by [0]?
As I said elsewhere, none of the (recent) servers I have access to, most of them Intel based, several distros, etc., have anything other than LEDs defined for numlock/scrolllock under /sys/class/leds/, where I'd expect to find the CPU activity LED. What am I missing here?
There is an entry in that directory for each physical LED on your computer. You echo the name of a trigger ("disk-activity", "cpu", "heartbeat", whatever) into that LED's trigger file, and the physical LED then shows whatever the trigger wants it to show.
So if numlock and scroll lock are the LEDs available on your hardware, you can repurpose one of those as the CPU activity LED.
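For instance (a minimal sketch; the LED name "input0::scrolllock" and the available trigger names are machine- and kernel-config-dependent, so check ls /sys/class/leds and cat the trigger file first):

    #include <stdio.h>

    int main(void)
    {
        /* Example LED name only - list /sys/class/leds to find yours. */
        const char *path = "/sys/class/leds/input0::scrolllock/trigger";

        FILE *f = fopen(path, "w");   /* usually needs root */
        if (!f) {
            perror(path);
            return 1;
        }
        /* Other triggers, if built into your kernel: "heartbeat", "cpu0",
         * or "none" to release the LED again. */
        fputs("disk-activity", f);
        fclose(f);
        return 0;
    }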
This is really impactful work. Data centers are 2% of US electricity consumption, and a 20% improvement in idle energy usage will cut that by a sizable fraction. (Even if virtualization is intended to reduce idle hardware capacity.)
Naive question here, but how much of the time would you expect cpus in a data center to idle?
Also, you save 20% of idle power (which is a sizable percentage, but not much in absolute wattage) and nothing when not idle (which is much higher wattage). So how much do you save per month overall? 1%? 0.1%?
Well, if your goal is to reduce electricity consumption, the proper way would be to increase its price. It's just that there might then be easier and faster ways to save energy than only improving idle CPU states.
I hate that approach so much. By increasing prices you're not reducing demand, you're just making it less affordable for people who might need it just as much as those who are in better-paid jobs, etc.
I prefer the approach of improving energy efficiency (short to mid term time scale) and investing in greener energy sources (mid to long term) to buy us time until nuclear fusion becomes viable and thus electricity consumption no longer becomes a harmful process.
We shouldn't punish people less fortunate than ourselves for our own gluttony - which is all that raising prices would do.
>I prefer the approach of improving energy efficiency (short to mid term time scale) and investing in greener energy sources (mid to long term) to buy us time until nuclear fusion becomes viable and thus electricity consumption no longer becomes a harmful process.
In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises due to increasing demand.[1] The Jevons paradox is perhaps the most widely known paradox in environmental economics.[2] However, governments and environmentalists generally assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising.[3]
In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal-use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption.
The Jevons Paradox is an extreme form of the rebound effect[0]. More normally, we expect efficiency gains to be only partly offset by changes in behaviour.
As an anecdotal example, we replaced one 60w incandescent bulb in our bathroom with 4 x 5w LED spotlight bulbs. This is both an increase in light, and a reduction in energy usage, even though it doesn't reflect the full efficiency gains of the LEDs.
It doesn't have to be the behaviour of the consumer. It can mean that the value proposition of application development moves towards more, less-efficient programs. E.g. we have more and more memory in computers, so developers often say "meh, memory is cheap" when developing Electron apps that take hundreds of megabytes. This can definitely result in "meh, CPUs are efficient" (even if it's a non sequitur in this case).
Fair point. We've definitely seen that trend happen, where software will often be written to take advantage of the resources available rather than written to be efficient. Not just in recent times with Electron, but throughout the evolution of GUI-driven operating systems (e.g. compositing desktops, theming, pre-compositing animations, etc).
The gaming industry demonstrates this the most clearly, but it's definitely present in general-purpose computing as well. E.g. when Windows XP was first released (pre-service packs) it required twice the hardware specifications of Windows 2000 yet offered little functional difference (read: actual real-world stuff that could be done on it) aside from theming.
Thankfully that trend with Windows has reversed somewhat but it's still ever-present with desktop software and their movement towards using web-based technologies.
> The gaming industry demonstrates this the most clearly
I think the gaming industry actually also demonstrates the opposite the most clearly.
I'm often blown away by how efficient some games are, and how well they take advantage of advances in hardware.
I look at something like World of Warcraft, a 14 year old game that has trouble running on my desktop computer despite its graphics being... limited to say the least. And then I look at Breath of the Wild. A patently stunning game.
And then I remember which one of the two is a mobile game.
On the contrary - the introduction of efficient LEDs has led to insane stuff like decorative building facade illumination which nobody would think of doing with incandescents, just because of replacement effort and energy usage. But with cheap LEDs the operating costs drop to the point where such an application becomes viable, and we get a new source of demand (and light pollution)
You're begging the question, which is not adding to the discussion.
Saying efficiency leads to increased usage is not the same thing as saying increased usage balances efficiency 100%.
It reminds me of people who argued that airbags and ABS lead to people taking more risk when driving. That may be a real effect, but nevertheless, casualties have declined.
I quite believe that phenomenon is true for industry, but I would be surprised if it still applied to electricity consumption in consumer devices. The cost of power there is pretty low in real terms (low enough that few consumers will factor the cost of running a device into the buying decision for general* computing devices or other household goods), but the collective savings from energy-efficient hardware at a national scale are massive. So I couldn't see people buying a second mobile phone or laptop just because it's more energy efficient.
The question of datacentres is another matter though. But they only respond to demand from people like ourselves who lease computing time from them / place our own hardware in their racks.
* I'm not counting mining (bitcoin et al), home servers, media centres, etc. where it's more likely running costs will be factored into the buying decision. However these are uncommon compared to other hardware like laptops, desktop PCs, games consoles, mobile phones, TVs, fridges, kettles, etc.
People definitely buy more and bigger TVs if their cost of operation is cheaper. They also buy more power-hungry phones – energy efficiency increases in general haven’t turned into energy savings! More efficient use of energy also simply makes people have less incentives to conserve it (switching off lights and appliances when not used, turning down heating/AC, driving less, and so on).
Interesting points but I only agree with one of the 3 examples you've provided:
> People definitely buy more and bigger TVs if their cost of operation is cheaper.
In all my years I have never, ever, heard anyone say that until now. People buy bigger TVs because it's cheaper to buy, or because they offer better features (smart, 3D, 4k, curved, etc) or because they move house and their new room is bigger so want something proportional. Or even just because they're used to their old TV and want a visible upgrade.
However I have never heard anyone say "this TV is bigger than my old one because it's cheaper to run".
(obviously I'm not saying "nobody in the history of consumers have said what you claim", but I would be astounded if that was a normal buying trend. More likely it's a niche quirk you've exampled)
> They also buy more power-hungry phones – energy efficiency increases in general haven’t turned into energy savings!
From what I gather, the trend for power-hungry phones has outstripped energy savings anyway. It's improvements in battery technology that have enabled people to continually upgrade. So I don't really see the cost of electricity vs. energy-consumption improvements being a deciding factor here.
> More efficient use of energy also simply makes people have less incentives to conserve it (switching off lights and appliances when not used, turning down heating/AC, driving less, and so on).
That's a fair point. It would be hard to judge just how significant that impact is though. But I definitely agree there will be people out there who do leave lights on because they're cheap to run.
Of course energy efficiency is not the only, or even primary reason. But it is there somewhere, and insofar as it isn’t (would people be willing to pay for 4x the energy use? 8x? 16x?) it is just evidence that energy is too cheap if people don’t need to think about it!
I do grant you that energy efficiency wins in many fixed appliances (washing machines, fridges, etc) do probably transfer directly to reduced energy usage by those appliances – but remember that energy is fungible, and reduced use here may and often will transfer to increased use there.
On the bigger TVs point, you’re sort of talking past each other. No one* is buying a bigger TV because it suddenly fits within some power budget, but nobody* would have bought a 60” CRT that heated the room like an oven. Nowadays, you’ll struggle to find a non-garbage TV much smaller than that.
The improved efficiency enables new uses which consume more energy, eating into the improvement.
The problem with 60" CRTs wasn't the heat they produced (or at least that was only a small part of the problem), it was the depth of the box. The depth of a CRT was related to the width of the screen. This meant larger CRTs were often too deep to be practical (unlike plasma and LCD, which even in the earlier days could sit more flush against the side of the room or tighter into a corner). Schools, youth clubs, etc did have larger CRT TVs on a trolley, but even those weren't 60" because storing them simply wasn't practical - they simply wouldn't have fit into many storage rooms. Even TV studios used a wall of monitors (which surely would have run hotter) rather than one big monitor because of the space requirements.
Also let's not forget that back in the days of CRT everyone was still watching standard definition - which looks terrible on 60" displays (which possibly was also a deciding factor in TV studios using a wall of monitors rather than one big screen?). So there was a lot to be said for having the right-sized screen to fit the output resolution. These days we have 4k and that will easily scale to 60 inches.
You’re still missing the point, which was that new technologies enable new uses that consume the energy efficiency gains made over their predecessors.
TFTs, being thinner/cooler/more efficient than CRTs, allow bigger and more common screens thus using some if not all of the energy saved from doing away with CRTs.
This is the same point as someone else made up the thread WRT the availability of RAM enabling developers to be more complacent about the efficiency of their applications.
I get the point you're making, but my point was that I don't agree that energy efficiency is what has led to larger displays. At best I see that as a byproduct, but honestly I think it's more of a parallel development. So I agree there is a correlation there, but I don't agree with your conclusion of causation.
At least with the RAM example (where more RAM enables developers to write heavier software applications) there is a definite causation. However with regards to CRTs, I think we'd have seen the same trend to larger screens even without the drive to engineer more energy efficient hardware (and in fact we did see that with plasma screens back when they were in vogue. Plasma was favoured for bigger displays because it produced better looking screens* despite LCD being more energy efficient).
> I don't agree that energy efficiency is what has led to larger displays
That wasn’t my point, and was why I originally said that you and Sharlin seemed to be talking past each other.
LCDs took off because of their physical advantages (weight/thickness/heat, although the heat it produces has to be correlated with energy input) despite their shortcomings (fixed resolution, limited brightness and contrast ratio, response time) and plasma screens were an attempt to deal with those shortcomings but are now mostly dead. As you say, improved efficiency was correlative but not entirely causative.
A naive view would have been that as LCDs took off, their efficiency would lead to a drop in power consumption compared to CRTs. The Jevons paradox shows that is not necessarily the case - borne out by the proliferation of displays where previously there were none, and by displays getting larger.
> A naive view would have been that as LCDs took off, their efficiency would lead to a drop in power consumption compared to CRTs. The Jevons paradox shows that is not necessarily the case - borne out by the proliferation of displays where previously there were none, and by displays getting larger.
I think we'd need to run the maths before making any claims there tbh. We're getting dangerously into the realm of using assumptions as statistics. Points we'd need to consider:
* how much more efficient are LCDs compared to plasma and CRTs per square inch.
* how much did the trend to bigger screens proliferate with plasma vs LCD
* how has the cost of LCD and plasma screens changed over the last 20 years (this should be broken down by TVs with features such as smart TVs, 3D, HD, 4k, curved screens, etc)
* what about the uptake of said features on TVs?
* and lastly are those features only available on TVs of screen sizes > n?
* any other variables I've not considered? (I've only quickly thrown some thoughts together so there's bound to be some metrics I've missed)
I think the point you're making is a pretty hard conclusion to argue (or for me to refute) without any meaningful statistics to back it up. However it does still make for an interesting discussion so while the conclusion may remain unproven I have enjoyed the debate :)
Agreed - my point has a lot of hand waving, and have a +1 for it staying civil too :)
I would probably argue that integrating more (oxymoronically) “smart” stuff into TVs might have made them less efficient too but it probably helped because of increased integration, fewer <100% efficient power supplies etc.
"Nowadays, you’ll struggle to find a non-garbage TV much smaller than that"
I think that's an overstatement regarding 60 inches. I still have a 27" Samsung LCD TV that was high end when I bought it, but I did some light research and it appears that if I limit my choices to new 4K Samsung TVs that are in stock at a NYC retailer, there are plenty of 40-50 inch options. There are also 30 inch Samsung TVs if you don't mind a lower resolution.
I think it's assuming too much to assume that everybody has a $500-$1000 budget and gets the biggest thing they can afford. Some people don't have that budget, and some people who have the money still happily take the savings now that prices are down from a decade ago. And some people don't get a new device until the old one breaks.
I think an educated person should be aware of the Jevons paradox, but it's overused and abused because people cite it dogmatically to short-circuit thinking or fact gathering. Risk homeostasis is another similar idea - there's something to it, but it's harmful to reasoned thinking when people go around assuming it applies 100% without checking.
The original point was about energy efficiencies though. This was the point people were disagreeing with.
Also for what it's worth, back in the early-to-mid 90s I actually did do a study on the number of TVs in an average household in my home town (it was for a college assignment). While my sample size was relatively small (ie only a few hundred people interviewed), I did discover the vast majority of homes had 2 TVs instead of 1 (which surprised me as I didn't live in a particularly affluent area). So I don't think it's quite true to even say most people used to only have 1 TV. Or at least that wasn't the trend observed by my study.
The original point was the Jevons paradox, which says that increased efficiency leads to increased consumption. We’re now going round in circles about the causes of the decline of CRTs relative to flat panels, which is definitely a divergence.
I suspect if you re-ran the study you’d get a number bigger than 2, which (qualitatively) is the point I was getting at!
The thing about the Jevons paradox (if I understand it correctly) is that it requires causation and, as I've said previously, I think in the case of TVs it's a correlation without causation. I.e. I think we would still have seen larger displays and more TVs in each home even if there hadn't been improvements in energy efficiency. I appreciate you feel we're going round in circles but that's always been the crux of my point right from the start, and the reason why I don't believe the Jevons paradox applies to that specific example.
However I do also think we've headed into the realm of using assumptions as statistics (as also discussed in my other post[1]) so perhaps this is one of those occasions where our differing opinions cannot be consolidated?
Also there has to be a somewhat high and tractable initial cost for the paradox to kick in. How much does it cost to run a TV for a year? I have no idea as it is rolled into my monthly electricity bill.
Now for a car I can see that immediately. Whoa $60 for a tank of gas? Maybe I won’t go on that road trip or maybe I’ll use the bus or telecommute.
There’s an important distinction here. This is an efficiency gain that decreases the power consumption of idling, an in-itself useless ”activity” that is a side effect everybody is trying to avoid; compare that to coal burners, who really care about generating energy cheaply.
But raising prices on electricity would spur demand for improved energy efficiency. Consumers are notoriously short-sighted on things that don't impact them directly and immediately.
We Americans used to love our V8 engines in our big, boaty cars. Then we had an energy crisis and gas prices rocketed up. For a while, people bought more fuel efficient small cars with 6- or 4-cylinder engines. Then gas prices dropped and people started buying trucks. Then gas prices went up again and small cars became popular again. Then gas prices dropped and people started buying SUVs/crossovers... see a pattern forming?
I know the following is an unpopular view in the US, but I believe this is where regulation comes into play.
Businesses will take shortcuts to save money because saving money is pure profit. And consumers will generally put their own financial needs above the concerns of the wider planet - because "what difference does one person make?" (a point I often read / hear). So the only alternative is to set mandatory guidelines that products have to adhere to. Sure, that will make products more expensive in the short term (R&D costs), but those prices will come down in the mid to long term and you end up with hardware that is cheaper to run (than if you just put electricity prices up) plus less energy consumption per device. It's a win-win.
But as I said, this is more of a European opinion than a US one - which tends to favour a lack of corporate regulation.
However I think ultimately there isn't a "correct" approach, just different opinions on the least disruptive.
Regulation is another tool in the toolbox, but I don't think it should be the default tool to apply in many cases. It is pretty much the last resort: the market has failed (for any number of reasons) and there are no (dis)incentives that can be applied to alter the situation... so now the government needs to step in and tell the market what to do. This is not without risk, as governments often screw this up, whether intentionally due to lobbying/other interests or unintentionally as a result of just not understanding how to get the desired outcome. Then there's the reality that regulations tend to take on a life of their own as the regulators build their power bases on top of them.
On the other hand, with anything that consumes energy (whether electric, petroleum, or other), there is a lot of opportunity to influence behavior as there's both an up-front capital cost and an ongoing operational cost involved. Usually, the purchaser is paying directly for the operational cost. As a result, there's a direct path from the price of the consumable to the user of it. In these cases, incentivizing via the pocketbook can work quite well. And if that turns out to not be enough, more incentives can be added on the front-end via credits/taxes on the new/old thing to help shrink the price delta between them. So there's quite a bit that can be done before you get to the point of regulating. It also has the advantage of being easier to fine-tune than regulation.
You're absolutely right and it does take time. Unfortunately, when dealing with macro-level issues like this, putting in place long-term incentives and disincentives in the form of fines/taxes/etc. has been shown to work far better than just encouraging people to 'do the right thing.'
You can increase prices slowly, or announce that you will increase prices in five years. I find it quite unlikely that people's lives will collapse when you increase energy prices by five percent a year or so.
Please try and live in a developing country. 5% is huge for a family that can't make ends meet. I don't want to generalize but utopian ideas seem to be propped up on HN without counting billions of other people and their circumstances.
That's the same argument against making petrol more expensive - because there are people who need it so making it more expensive is somehow unfair to them.
Absolutely disagree here - if petrol was vastly more expensive, alternative technologies would have to come down in price and be more popular, so yes, even the poorer people could eventually afford them. By keeping the price as low as it is(and living in the UK it's hugely expensive compared to US and many EU countries) we're allowing huge pollution of the environment for the sake of affordability. Maybe by that logic we should allow coal-fired boilers again? They are still allowed in many places across the EU for that exact reason - because forcing people to switch to natural gas/electricity/ecopellets would punish the less fortunate people. But the "less fortunate" people are going to be fucked the most if we don't work on the pollution, which is sort of impossible if we're stopping the fight because of them in the first place.
That's not really an equivalent example because there isn't an alternative to electricity where as there is an alternative to driving petrol powered vehicles (electric vehicles, public transport, car share, bicycle, walk, etc).
I also don't appreciate your "Maybe by that logic we should allow coal-fired boilers again?" comment when I was very clear in my post - the one you're directly replying to - that we should be focusing on greener forms of electricity creation and energy consumption. If that does also drive up electricity prices then so be it. However artificially increasing electricity prices and expecting the market to do the honourable thing seems like you're trying to solve the problem by changing the end variable rather than fixing the root cause itself and letting market prices adjust accordingly.
Generally it’s more efficient to help the poor by giving them money, not by distorting prices. Prices are information guiding a giant distributed system; you don’t lie to your OS kernel and expect your programs to run as well.
Making energy prices include externalities would encourage the improvements you listed.
Except raising prices is the only thing that has the desired outcome. If PCs use only half the wattage that they used to consume... great now I can afford to run 2 for the same price.
I cannot see that happening on the consumer side of things because consumers have generally* not looked at power consumption as a factor for which hardware to buy nor how many to run.
* "generally" because home servers / media centres / etc are often picked for their power consumption. As are mining servers (eg crypto-currency). But these are by far the exception rather than the norm in terms of consumer devices.
It only matters if energy consumption is large compared to cost of the device or otherwise total cost of ownership. If PCs use only half the wattage that they used to consume, then it doesn't meaningfully change how many computers (or how powerful single computer) I can afford, because paying for the computers themselves dwarfs the payment for running them.
Yes and no. Policymakers too rarely do anything to show they care about distributional impacts, but you could certainly arrange things so as to fix this.
Why not impose a tax that doubles the cost of energy, and use all of the proceeds to fund universal basic income? That would both help poorer people and reduce energy usage.
Well, the same is true of every price level change, especially in food and housing. I'm sympathetic to the "fuel poverty" arguments, but fuel and energy subsidies are a pretty bad way of addressing the problem. Especially in non-Western countries.
Progressive pricing of goods/commodities tied to CO2 emissions as a result of their production/consumption would be good. Transportation, energy, industry, and agriculture account for the majority of US greenhouse gas emission.
Charging heavy consumers at a progressively higher rate will ensure the heaviest emitters pay their share without disproportionally affecting low income or light emitters.
Your argument makes no sense because it is self defeating. The point of increasing the price of electricity is to increase the incentive to use less of it.
How would we use less of it? Precisely by actions such as development time spent on tasks to reduce energy usage.. such as improving the CPU idle states (among many, many other such options).
The increased cost makes it more impactful for the companies, even if they don't care about the environment. So they actually might pay the engineers to do something about it.
What I'm getting here is that you think the author should stop working on this research, because if he doesn't then maybe companies will start funding him so he can... do this exact research? Science shouldn't grind to a halt just because not doing it might increase funding.
I don’t see how that’s self defeating. But I understood the comment to be “if electricity cost more, people and companies would be more diligent in turning off lights when they aren’t needed, setting the thermostat to use less electricity, and doing other simple things to save electricity and money.” It wasn’t a question of whether the kernel can do a good job automatically, but whether there is low hanging fruit that we’re currently ignoring.
I don't want to sound complacent, but I don't see that 2% is necessarily a bad sign. I suspect that 2% of the power-consumption is integral to well over 2% of the economy, for instance. Improving energy efficiency is always good, of course, but I'm not convinced there's a whole lot of low-hanging fruit that people need to be economically incentivised to implement.
That said, there are projects like Erlang on Xen [0] (a bare-metal-ish system) which enable unusual deployment patterns like spinning up a VM only after a request has been received, which could, presumably, make radically more efficient use of virtualised platforms. Not sure if anyone's done this in anger though.
Edit: I suppose higher prices would lead more people to do simple things like configuring their test servers to shutdown overnight and on weekends, mind
Energy demand is not elastic so raising prices is not the right way to reduce demand. Raising prices for inelastic commodities does not change the behavior of the consumer.
Because it is a basic need of today's society. You can't suddenly decide you won't use energy without significant side-effects. It's worse if someone else decides that for you by raising prices.
Now we just need to iron out the bugs which prevent CPUs from reaching certain low-power states. It turns out that in modern CPUs this is surprisingly difficult to achieve, as not only the CPU is considered for such power states, but attached components, like the NVMe controller, as well. Matthew Garrett explains pretty well what's happening there: https://mjg59.dreamwidth.org/41713.html
I once increased coin-cell run-time of a client’s hardware (ARM) platform from 1 month to 1.5 years simply by replacing almost all sleep() functions with low-power-mode-enter.
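For anyone curious what that swap looks like, here is a sketch of the idea on a Cortex-M class part (the details of clock gating and wakeup sources are very chip-specific; CMSIS spells the instruction __WFI()):

    /* Instead of a software busy-wait "sleep", arm a timer/RTC interrupt for
     * the wakeup time and halt the core until something fires. */
    static inline void sleep_until_interrupt(void)
    {
        __asm__ volatile("wfi");   /* core stops clocking until an interrupt/event */
    }

    /* Hypothetical main loop:
     *     for (;;) {
     *         handle_pending_events();
     *         sleep_until_interrupt();
     *     }
     */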
Some cheap microprocessors do that. Usually, in devices where power consumption matters, there is a small companion microprocessor (a micro-microprocessor, so to speak) that will power down the big one if necessary and enter a sleep state itself to severely cut down on power consumption (some devices can go as low as microamps).
Generally, most modern CPUs support turning off when not needed, however, this is generally referred to as power-on-standby (S3, IIRC). The CPU is off, most things are off, RAM is on.
The CPU itself has to continue to run because there is almost no time period larger than a few seconds in which there is truly nothing to do, and shutting down CPU cores and clocking down the remaining one is efficient enough.
Another good example is the JavaScript microcontroller boards. Because of the event-loop model of the JS engine, they can simply see there's no code to run and shift into power-saving modes without the dev having to do anything special.
Only obsolete x86 processors at this point (pre-P4 Intel, iirc)
Pretty much until the early 2000s, PC processors essentially idled at full speed. There are probably some super-cheap, bottom-of-the-barrel, low-end ARM chips still being made somewhere that can't sleep or down-clock.
Not really; AMULET was an asynchronous (i.e. un-clocked) ARM implementation intended to explore a new CPU design.
The aim was low-power-usage, but they came at it from a different direction. This is referring to the OS control over power-states on a traditional (clocked) processor.
AMULET gained its low-power capabilities from not clocking any unused functional blocks during normal usage, same aim, different strategy.
All processors in the last 15-20 years have had low-power states.
It's standard fare really. Dropping into low-power mode in an idle loop has been standard practice for that long as well.
In embedded systems, the real difference is how you can schedule your application-level events to make optimal use of the low power states of both the core you're using, the other cores and on-board devices (e.g. FLASHs, ADCs, DACs, etc).
This is how iOS and Android try to have an effect: by managing the applications' use of timers and wakeups from external devices (interrupts) so as to maximise the 'sleep' time.
I don't know of modern desktop/laptop CPUs that truly disable unused cores, but I do know modern parts will often reduce power usage/disable parts of a core/underclock cores to reduce heat output from those components and then boost clock speeds on other cores.
The article makes numerous mentions of "the governor":
>"In this loop, the CPU scheduler notices that a CPU is idle because it has no work for the CPU to do. The scheduler then calls the governor, which does its best to predict the appropriate idle state to enter. There are currently two governors in the kernel, called "menu" and "ladder". They are used in different cases, but they both try to do roughly the same thing: keep track of system state when a CPU idles and how long it ended up idling for."
Could someone say exactly what "the governor" is? Is it a code path in the scheduler? It wasn't clear to me from reading the article.
The kernel subsystem that handles this is called cpuidle: https://lwn.net/Articles/384146/ . It has two different governors, ladder (which chooses an idle state adjacent to the existing state) and menu (which can choose any idle state)
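The governor in use and the states it can pick from are also visible from userspace: /sys/devices/system/cpu/cpuidle/current_governor_ro names the governor, and each CPU has a cpuidle/stateN directory. A small sketch that dumps them for CPU 0 (standard sysfs paths, minimal error handling):

    #include <stdio.h>

    int main(void)
    {
        for (int s = 0; ; s++) {
            char path[128], name[64];
            unsigned long long latency_us = 0, time_us = 0;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", s);
            if (!(f = fopen(path, "r")))
                break;                        /* no more idle states */
            fscanf(f, "%63s", name);
            fclose(f);

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/latency", s);
            if ((f = fopen(path, "r"))) { fscanf(f, "%llu", &latency_us); fclose(f); }

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/time", s);
            if ((f = fopen(path, "r"))) { fscanf(f, "%llu", &time_us); fclose(f); }

            printf("state%d: %-10s exit latency %llu us, time spent %llu us\n",
                   s, name, latency_us, time_us);
        }
        return 0;
    }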
This reminds me of a story I read from some operating system book: Guys working on an early OS profiled the system and found that one particular routine is taking a lot of CPU time. They worked hard optimizing it but found it didn't improve the overall performance at all. Turned out that routine was the idle loop of the OS.
I don't remember if it was a true story or just a joke.
> Idle states are not free to enter or exit. Entry and exit both require some time, and moreover power consumption briefly rises slightly above normal for the current state on entry to idle and above normal for the destination state on exit from idle. Although increasingly deep idle states consume decreasing amounts of power, they have increasingly large costs to enter and exit.
What causes this? I would have thought that "stop computing for a bit" would be a simple thing to do, but I clearly don't know much about processor design.
I work in CPU design. It all comes down to saving power, given how frequent idle-state entries are (especially C1 entries, which happen with almost every wait-for-interrupt operation). "Stop computing" isn't well defined. As long as the clock ticks, the frontend will keep fetching instructions. Sure, you could keep feeding it NOPs, but instead we save power by entering idle states.
The quickest to enter and exit (C1) simply clock-gates the core. Caches are preserved. The next C-state might turn off caches too (and thus incurs the penalty of flushing caches on entry and starting with a cold cache on C-state exit). Further C-states might require even more work to enter and exit but consume much less power while in that state.
The cpuidle governor decides which C-state to enter since a deep C-state entry and exit may end up consuming even more power than keeping the system running or in C1.
Usually there is a change in voltage on the processor when the processor changes state. This change in voltage expends energy due to the capacitance of the power rail and of the circuit itself (the capacitance of one transistor is small, but there are billions of them).
Even if there is no change in voltage, the processor will usually just leak power while making no forward progress on the program as the various timing circuits change.
An interesting thing regarding the NOP (no operation) instruction many CPUs have is that it is often implemented as a pseudo-instruction. I.e., what actually runs is something that has no effect, e.g. moving the contents of a register onto the same register.
It has also given name to the human activity of "NOPping", similar to zoning out.
Note that this seems to be about not waking up the CPU (by timer ticks) more frequently, to allow it to go into a deeper sleep --- AFAIK the actual "idle loop" just executes the HLT instruction, which puts the CPU in a "wait for interrupt" state, and for newer CPUs they go into successively lower power states the longer they're halted.
This reminds me that earlier operating systems like DOS and Win9x kept the CPU in a busy polling loop when idle --- which was great for responsiveness, but not power consumption nor heat; applications like http://www.benchtest.com/rain.html soon appeared, which replaced the idle loop with an actual HLT loop and actually had a noticeable effect. The DOS version is at https://maribu.home.xs4all.nl/zeurkous/download/mirror/dosid...
It's more complicated than that these days. Rather than HLT, you call MWAIT with an argument that corresponds to the C state that you want to enter - the OS has a better idea than the CPU of how long it's going to be asleep (basically what this article is about), so it can tell the CPU to enter a deeper state. The CPU may make an executive decision based on its own needs to enter a different state (potentially even a deeper one), but it's largely still up to the OS to choose rather than the CPU entering progressively deeper states.
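Roughly, the pattern looks like this (a ring-0-only sketch of what a kernel idle path does, not something userspace can run; the flag name is made up, and the hint value encodes the requested C-state as described in the SDM):

    #include <stdint.h>

    static inline void cpu_monitor(const volatile void *addr)
    {
        /* Arm the monitor on the cache line containing addr. */
        __asm__ volatile("monitor" :: "a"(addr), "c"(0UL), "d"(0UL));
    }

    static inline void cpu_mwait(unsigned long hint)
    {
        /* EAX = hint (target C-state in bits 7:4, sub-state in bits 3:0),
         * ECX bit 0 = treat interrupts as break events even if masked. */
        __asm__ volatile("mwait" :: "a"(hint), "c"(1UL));
    }

    static void idle_with_hint(volatile uint32_t *need_resched, unsigned long hint)
    {
        cpu_monitor(need_resched);
        if (!*need_resched)          /* re-check after arming to avoid a lost wakeup */
            cpu_mwait(hint);         /* wait in the hinted C-state until a write or interrupt */
    }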
Does it also imply that a non-blocking event loop in the userspace application is energy-inefficient?
Likewise, given that some OSes APIs (syscalls) provide both non-blocking and blocking modes, should we prefer the blocking ones concerning energy efficiency?
They are pretty much equivalent. The kernel will only schedule your program when an event happens. The difference is that by using a blocking syscall you will need more threads which indirectly decreases energy efficiency through increased context switching and RAM usage. If you only have a single thread then blocking or non-blocking is going to consume the same amount of energy.
Yes. A blocking loop will allow the CPU to power down until an event arrives. A non-blocking/busy loop will continue to burn CPU cycles and the CPU will be busy and cannot reach lower power states.
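To make the two shapes concrete, a sketch with Linux epoll (the fd setup is omitted; the point is just the timeout argument):

    #include <sys/epoll.h>

    /* Blocking wait: the thread sleeps in the kernel until an event arrives,
     * so the core is free to drop into a deep idle state. */
    static int wait_blocking(int epfd, struct epoll_event *ev)
    {
        return epoll_wait(epfd, ev, 1, -1);    /* -1 = block indefinitely */
    }

    /* Busy polling: returns immediately even when nothing is ready, so the
     * loop spins and the core never idles. Lower latency, far higher power. */
    static int wait_busy_polling(int epfd, struct epoll_event *ev)
    {
        int n;
        while ((n = epoll_wait(epfd, ev, 1, 0)) == 0)   /* 0 = just poll */
            ;                                            /* burn cycles */
        return n;
    }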
How does this work on a single-CPU system? Wouldn’t the various idle checks and governor calls keep the CPU always busy? Or is there some way to turn off something so these instructions don’t mark the CPU as “active”?
The idle checks all happen in the kernel scheduling code which runs on every CPU/core/hyperthread between doing real work. There isn't a separate core which hands out work or controls them. The kernel knows which CPUs are active because it's the one running things, not because it asks the CPU if it ran any code.
On a single CPU system it works because when the current process stops running (calls sleep, does IO, etc...) the kernel looks to see if there's anything else to do, if not, it knows the system is now idle.
I don't get it. They have nothing to do because they finish the work so fast that they are idling most of the time? But then the second joke doesn't make any sense at all.
1. This is an exaggeration but the idea is that they have nothing to do because no one is buying or using them anymore (since the Intel vulnerabilities were uncovered).
2. A platform is a kind of software. A shelf is a kind of platform (different definition). Before items such as Intel CPUs are sold, they sit in warehouses for a while. Intel is not good for running any software platform so the best platform for them is a shelf... Also alludes to the fact that no one is buying them and that no one should buy them. (Also exaggerations).
>> What's a CPU to do when it has nothing to do?
> Mine Bitcoin!
This would be great for CryptoNote web miners running WASM in the browser, to help users understand that you don't have to juice every thread/CPU in order to mine effectively at scale, using a proxy like the one provided in Webminerpool.
You mean "in order to spend an extra $2 in power to make 3c for someone else"? Very effective.
As long as there are more price-efficient mining pools which are an appreciable fraction of mining power, it will not be cost effective to mine anywhere less efficient since margins will naturally approach what those larger pools can support.
A consumer desktop will never be able to compete with a centrally cooled data-centre which likely gets special power rates and was intentionally built in a location where power is cheaper. Especially not if it's having to go through WASM.