Intel says one of its 13th Gen CPUs will hit 6GHz out of the box (theverge.com)
222 points by Tomte on Sept 12, 2022 | 451 comments



AMD announces a 5.7GHz Ryzen 9 7950X chip retailing for $699 this month; meanwhile Intel hints at a vaporware chip that theoretically might hit 6GHz, with no release date and no price. This stinks of desperation on Intel’s part.


I'm waiting for them to somehow redefine speed, like they redefined production geometries. How fast is it? It's Intel 7.


You laugh, but AMD did that in the mid-2000's with the Athlon XP series: https://en.wikipedia.org/wiki/List_of_AMD_Athlon_XP_processo... The numbers used to match their clock speed (ex. Athlon 1000 was 1000 MHz), but that changed with the XP models (ex. Athlon XP 2400+ was 2000 MHz).


IPC is a thing - clockspeed optimized architectures like Alpha and the Pentium 4 just didn't get as much done per cycle or had much harsher penalties when the pipeline stalled, even though they ticked faster.

Comparing the whole system against an actual task is the only way to really measure - everything else, including the clockspeed, is marketing.


Certainly, the last two decades prove that raw metrics win advertising because the complicated world of benchmarks just doesn't sway many people.


The commonly used benchmarks are often pretty useless or irrelevant. Coming up with useful benchmarks is very hard, and a pretty tragic thing about the supposedly irrelevant independent CPU benchmarking of the past 20 years is that CPUs are specifically designed to perform well at the silly benchmarks that are popular, as well as at real-world workloads, where I think CPU manufacturers have their own suites of more carefully designed, more realistic, more relevant benchmarks.

My claim is that (1) results on irrelevant benchmarks like SPEC do matter for CPU sales (probably not for big tech companies that operate their own datacentres – they’ll likely do their own testing – but likely for many ‘savvy’ consumers and also for companies that want to market their computers as having fast CPUs) and (2) the complicated world of benchmarks is handled very poorly by the people who tend to evaluate CPUs with them and publish their results.


Bribing companies to avoid selling your competitor's product doesn't hurt, either.

https://en.m.wikipedia.org/wiki/Advanced_Micro_Devices,_Inc.....


Bad benchmarks sank AMD after Bulldozer, and years later they paid tens of millions in a false advertising lawsuit.


There was an expression floating around for this naming scheme, but I can't remember what it was and don't find any good search results either for my candidates. "Processor/Pentium equivalent rating"?

Edit: found it - https://en.wikipedia.org/wiki/Performance_Rating


This actually caused some problems with some Maxis/EA games at the time (like Sim City 4 or The Sims 2), because those games automatically tuned certain settings (mostly graphics settings, but somewhat annoyingly in The Sims also things like the maximum number of sims on one lot) depending on the power of your computer – mostly as measured by GPU model, amount of system and video RAM and CPU clock speed.

The problem was that the CPU clock speed levels categorising your system as high/mid/low performance were quite obviously based on Pentium 4 clock speeds, even though at the time AMD actually had a market share of around 40 – 50 % for desktop computers. This meant that everybody with an AMD processor would find his/her performance and graphics settings mysteriously restricted. People using the first generations of Intel's own Core i-processors had similar problems if they were still playing those games a few years later.

The saving grace was that at least things weren't totally hard-coded – things were controlled by a rules file in a plain text-based format, and so it was comparatively easy to just change the expected clock speed levels to something more reasonable for a non-Pentium 4 processor.

This was also useful because the GPU detection had its own share of problems down the line – I still occasionally play Sim City 4, and for some reason or other on my current system the game doesn't correctly detect the amount of video RAM I have and therefore resorts to extremely restrictive fallback settings unless I manually override it, and AMD recycling graphics card model numbers also caused some confusion that had to be manually fixed.
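
Purely to illustrate the kind of tiering logic described above (the thresholds, tier names, and function are invented for this sketch and are not the actual syntax or values of the Maxis rules files):

    # Hypothetical sketch of CPU-clock-based settings tiers, as described above.
    def settings_tier(cpu_mhz: int) -> str:
        # Invented thresholds; the real rules were tuned around Pentium 4 clocks,
        # which is why equally capable AMD (and later Core i) chips fell into lower tiers.
        if cpu_mhz >= 2800:
            return "high"
        if cpu_mhz >= 2000:
            return "medium"
        return "low"

    # An Athlon XP 2400+ runs at 2.0 GHz, so a P4-centric table like this rates it only "medium".
    print(settings_tier(2000))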


Cyrix 5x86 PR100. Or Cyrix 6x86 PR266


And Apple started it with their Megahertz Myth campaign.


I was a kid back then and we had mac labs at school, and those things were always slow or hanging. Maybe because they were imacs? During the second coming of Jobs era, Macs never really blew me away performance wise until the intel transition.


In my experience school computers have always managed to give a terrible impression - our school had Windows PCs and compared to my Mac at home they seemed terrible.


Intel just did that to conform to what the rest of the industry (namely TSMC) had already been doing. NM hasn't meant nanometers for a decade.


The numbers were always complete bullshit; this is like complaining about a shop calling a cake tasty.


Nanometers has been a meaningless measurement for a long time now, ever since it stopped referring to the actual size of the transistors.


I hope Intel can keep up, actual competition has been so great. It does feel like they are really struggling with their process issues though.


I think we're going to see major shakeups over the next ~10 years from both companies. Old models of design are quickly reaching their limits. Heterogeneous cores and multi-die designs are just the start. It's an exciting time!


reaching the performance limit would be a blessing in disguise


Factoring in, say, 10x-100x repetition code for error correction, existing silicon tech is already damn near the Landauer limit. General purpose CPU and GPU architecture has some data flow logistics it could optimize, but ASICs are pretty much at the performance limit already.


It just seems like they are struggling as a company.


Please have some sympathy for the poor workers who have to work overtime to keep up, because management failed to keep making progress on the new and exciting.

Sometimes this competition thing is a distraction. Like now there is an emergency meeting to shrink the three-year roadmap to one year, with less innovation than it was planned to have, and the actual three-year roadmap becomes five years because of energy wasted on short-term gains.


That's how business works. If people don't want to work at Intel because of this, that's OK too. It's the society we created and it's worked out really well in the CPU industry for 50+ years now.


I know that is how some businesses work. But workers have more foresight. Some lean towards the philosophy of Apple: don't compete, just innovate. And sometimes they get that opportunity.

But other times, when competing, long-term visions lose. And so do the workers who have them. This is detrimental.

I hope I am never working for something like this, but I do see others work in this way. And sometimes they are gifted yet unable to make that gift work.


Sounds ripe for a union


that 6ghz intel chip will probably use at least 300W too (12900ks uses 274W)


that's actually a feature since you don't need to have a space heater anymore. checkmate


You won't need to worry about the heat, since you can just reuse some of the liquid nitrogen that's cooling the chip to allow it to hit 6ghz in the first place.


Sure does... also the Intel announcement is on the same date that AMD releases their new chips.


Intel was always good at marketing. Performance (buck for buck) on the other hand ...


That was exactly my thought. It's just a bit sad to watch Intel fail so badly here. I was a big Intel fan for a long time.


Raptor lake engineering samples are already in the hands of people benchmarking them if you know where to look


The pendulum swings yet again. I remember in the 2000s when AMD and ATI were the best buys.


I'd recommend waiting for unbiased reviews of actual products, taking into consideration price, performance, and power usage before determining the "best buy" of the upcoming generation of products.

Though I hope the pendulum is swinging enough to keep driving the improvements. My current desktop CPU is absolute overkill while sipping relatively little power during my light usage.


drum roll please


To be honest, I'm disappointed by the new Ryzen 7000 series power consumption, enough so that I'll be sticking with my 5000 series CPUs for as long as I can. Not everyone wants their PC to double as a space heater.


Ryzen 7000 has better performance per watt though. If you don't want a space heater you might be better served to get a Ryzen 7000 and limit the TDP to whatever you're comfortable with.

It'll be faster than a Ryzen 5000 drawing the same amount of power: according to AMD the Ryzen 7000 will be "up to 49 percent"[1] faster at the same power draw.

[1] https://www.pcworld.com/article/918007/amd-launches-ryzen-70...


Precisely which reviews have you seen for Ryzen 7000 that paint it as a space heater?

The official announcement indicated significantly better power efficiency. AMD claims that if you limit both to a 65W TDP, you'll see 74% better performance on the new generation, but that even at full TDP, the efficiency will be higher.

However, I am waiting on the actual reviews... not making multi-year decisions based off of limited, pre-release information. If it is more efficient, then vowing to stick with an older, less efficient product just because it has a lower TDP number on the box doesn't make any sense. Efficiency is the only thing that matters when seeking to avoid a space heater, since the less efficient product will generate more heat for a given amount of work. All of these processors will clock down when idle.


Not the person you asked, but I poked around to see what I could find. Seems like a bunch of random sites all talking about some leaker. I could find nothing concrete to support them running hot.

https://www.google.com/search?q=ryzen+7000+running+hot&rlz=1...


That’s a lot of articles claiming the processors will “run” at 95C, but… what is that even supposed to mean? The processor temperature depends entirely on the cooling solution being used. Those leaks don’t seem to be very grounded in reality. Maybe they meant the junction temperature or the automatic throttling temperature, but those certainly weren’t the implication.

But, all of this is still irrelevant to efficiency. Running at 230W for 1 second to complete a task would generate less heat / use less electricity than running at 150W for 2 seconds to complete the same task.
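
As a minimal sketch of that energy arithmetic (the wattages and durations are just the figures from the example above, not measurements):

    # Energy = power x time: the faster, higher-power run can still use less energy overall.
    def energy_wh(power_w: float, seconds: float) -> float:
        return power_w * seconds / 3600.0

    fast = energy_wh(230, 1)  # ~0.064 Wh for the 230W, 1-second run
    slow = energy_wh(150, 2)  # ~0.083 Wh for the 150W, 2-second run
    print(f"fast: {fast:.3f} Wh, slow: {slow:.3f} Wh")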


Psychologically it's going to be hard for most people to underclock something they spent hundreds of dollars on.


Underclocking isn’t necessary for it to be more efficient, if AMD is to be believed. A higher peak power does not mean generating more heat / using more electricity to perform a given task, as long as the task can be completed proportionally faster.

But, efficiency is often a curve, and you can supposedly get crazy efficiency by limiting the TDP of Ryzen 7000, so it is an option for those who prize efficiency above maximum performance, and it would still represent an improvement over a Ryzen 5000 at the same TDP. As I said, though, we’ll need to see actual reviews.


Wow, Raptor Lake's max TDP looks to be 253W. That's crazy high https://www.tomshardware.com/news/intel-13th-gen-raptor-lake...


All these power hungry beasts coming out of Intel and NVIDIA feel quite out of sync with the zeitgeist in a world that's worried about the power bill - especially when the M1/M2 is there to provide contrast. I'm getting Pentium 4 vs Core architecture vibes.


> All these power hungry beasts coming out of Intel and NVIDIA feel quite out of sync with the zeitgeist in a world that's worried about the power bill -

These CPUs aren't consuming 250W all the time. Those are peak numbers.

Both Intel and AMD are providing huge efficiency gains, too. Rumors show the new i7 13700T Raptor Lake part can have a 35W mobile TDP and still outperform a Ryzen 7 5800X: https://www.tomshardware.com/news/intel-13700t-raptor-lake-a...

Speed scales nonlinearly with power. These high TDP parts are halo parts meant for enthusiast builds where it doesn't matter that the machine draws a lot of power for an hour or two of gaming.

It's also trivially easy to turn down the maximum power limit in the BIOS if that's what someone wants. The power consumption isn't a fixed feature of the CPU. It's a performance/power tradeoff that can be adjusted at use time.


Just adding to what you said, a 24-core CPU won't get anywhere near peak power usage during gaming. Most games only use a handful of cores. The only way you'll approach it is with parallelizable productivity work like video encoding or compiling code.


My nephew, B, got his 16+8 i9, during Path of Exile, to peak at 250W and use all 24 cores. He is running at 5.2GHz on air cooling. We are not sure at all how it uses the e- (efficiency) cores when it has 16 p-cores w/ hyper-threading, but it all did show up in the new dark mode task manager.


PoE is one of the few games that actually makes use of lots and lots of cores/threads.


Any idea what for? I feel like PoE doesn't involve that much compute other than what would be offloaded to the GPU. Maps are static, and I would have assumed that mobs are primarily computed server-side based on some sort of loosely synchronized state.

I guess I could imagine a few threads for managing different 'panes', a thread for chat, a thread for audio maybe? It's hard to think of 24 independent units of work.

I'm not a game dev, just used to play PoE and curious.


The trick used in AAA is to see each frame as an aggregation of core-independent jobs that can be queued up, and then to buffer several frames ahead. So you aren't working on just "frame A", but also finishing "frame B" and "frame C", and issuing the finished frames according to a desired pace, which allows you to effectively spend more time on single-threaded tasks.

The trade-off is that some number of frames of latency are now baked in by default, but if it means your game went from 30hz to 60hz with a frame of delay, it is about as responsive as it was before, but feels smoother.
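
A toy sketch of that pattern (the job names, pipeline depth, and structure are invented for illustration; real engines are far more involved):

    # Hypothetical frame pipelining: each frame is a bag of independent jobs spread
    # across a thread pool, with a couple of frames in flight at once.
    from concurrent.futures import ThreadPoolExecutor

    JOBS = ["animation", "physics", "particles", "culling", "audio"]  # made-up job names

    def run_job(frame_id: int, job: str) -> str:
        return f"frame {frame_id}: {job}"  # a real engine does actual work here

    def simulate(num_frames: int = 4, frames_in_flight: int = 2) -> None:
        with ThreadPoolExecutor() as pool:
            pending = []
            for frame in range(num_frames):
                pending.append((frame, [pool.submit(run_job, frame, j) for j in JOBS]))
                if len(pending) > frames_in_flight:
                    frame_id, futures = pending.pop(0)  # present the oldest buffered frame
                    [f.result() for f in futures]       # wait for all of its jobs
                    print(f"present frame {frame_id}")
            for frame_id, futures in pending:           # drain what is still in flight
                [f.result() for f in futures]
                print(f"present frame {frame_id}")

    simulate()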


Sure that explains the parallelization, but not why it takes 250 watts worth of compute to run the game. What's it computing?


The next frame.


if it's anything like gta5 it's going to be calling strlen a billionty times


Can you provide some more info about this?



Could it be the GPU driver/framework? I thought DX12 and Vulkan were meant to be CPU optimised and be able to use heaps of cores.


I guess, but like... how? Like I said, I can't really think of 24 things to do lol. I'm reminded of Dolphin, the GC/Wii emulator - people would ask for more cores to be used and they'd basically be like "for what???", they started moving stuff like audio out, eventually they made some breakthroughs where they could split more things out.

Maybe with these frameworks threads are less dedicated and instead are more cooperative, idk. Really not my area!


https://m.youtube.com/watch?v=MWyV0kIp5n4 I'm reminded of this poe build that can crash the server with too many spell effects


Or simply put, there's too much going on. I remember they had to rewrite some parts of the engine ASAP right after the release of Blight due to FPS drops down to 1/inf at the end-endgame versions of the encounter, as well as server crashes.


Sort of funny story: the concept of this build (spell loop) is currently meta; sadly the servers have improved to the point that they don't crash anymore.


Maybe all it does is produce crazy high, pointless FPS.


I've seen the NVIDIA driver eat up all the CPU on multiple cores without really doing anything substantial to the framerate.

This was back in the Windows XP days when I was working on OpenGL and DirectX. It would do this while rendering like a couple of triangles. One core I could understand, but not all. I'm pretty sure the driver had some spinlocks in there.

I also managed to find out the NVIDIA driver assumed user buffers passed to OpenGL (VBOs) would be 16-byte aligned, using aligned SIMD operations on them directly, even though there's no mention of alignment in the OpenGL spec.

It just so happened that Microsoft's C++ runtime would do 16-byte aligned allocations, while the language I was using only did 4-byte.

All is fair in love and performance wars I suppose...


What’s a new dark mode task manager?


The latest Windows 11 preview finally reads the system default theme setting, allowing "dark mode": rendering the UI with a dark background and light foreground.


So like in win 95 when you use a "dark theme". What an achievement. Wait, you can also set background colour. /s


What a time to be alive!!


I think you'll find that modern games use many more cores than they used to since mainstream consoles have all moved to being octa-core for the last two generations and you have things like Vulkan better allowing multi-threaded graphics code.


Many more cores yes, but 100% CPU usage should still be rare. If your game uses 100% of a 24C/32T processor, it will run poorly on a "mere" 8-core CPU, and most of your target audience won't be able to play it. You're right though, these aren't your grandma's single-threaded games anymore.


I don't really second this perspective.

CPUs and GPUs keep getting hungrier and that is just not where we should be heading. I wish the perf increase didn't keep coming along with a consumption increase each gen.


You can clock down a 7950x to 105W and it will be 37% faster than a 5950x


I hardly care, I don't want that heat in my room anyway.


> Both Intel and AMD are providing huge efficiency gains, too. Rumors show the new i7 13700T Raptor Lake part can have a 35W mobile TDP and still outperform a Ryzen 7 5800X: https://www.tomshardware.com/news/intel-13700t-raptor-lake-a...

Don't let the TDP of T-models fool you. Power consumption to reach boost clocks can peak up to 100W for T-models of the previous generation, and the 13700T probably needs to run close to that to outperform a 5800X.


> for an hour or two of gaming

U gotta pump those numbers up, those are rookie numbers.


> These CPUs aren't consuming 250W all the time. Those are peak numbers.

But they require heatsinks and cooling designed for that peak. And it is insane. Try to keep a microwave oven under 100C :-)


Your toaster uses more than 250W; microwave ovens are far above that at 1-2kW.


It's still pretty terrible from an optimal performance viewpoint. I can undervolt my 3070 Ti by ~100mV, dropping performance by ~6% but dropping temperature by about 10C, or 13%, and dropping fan speed from PS3 levels to inaudible for anything <90% load.


If you consider the mainstream products of Intel and Nvidia, they have way more moderate power consumption. These products with massive power draw are ultra enthusiast products. They are an outlier. You could build a great PC now with an RTX 3060 and a mainstream CPU that would be fine with a ~500 watt PSU.

As technologists, we should support manufacturers pushing the limit in power and performance. It helps drive overall efficiency and move technology forward.


Power consumption has at least doubled, even on mainstream parts, and keeps increasing gen over gen.

1070 vs 3070 is +52% average (145 vs 220w) and +66% (154 vs 250w) sustained.

2070 vs 3070 is +10% (195) or +24% (203) sustained.

Even the 3060 you defend draws as much power as older flagships, and 500W isn't enough even for mainstream gaming.

And it keeps getting worse on both the GPU and CPU side.

We aren't technologists but consumers, and the reality is that x86 and GPUs are near-duopolies, so the 3 companies involved have little reason to do a better job, and it's clear Apple SoCs, or more and more of the cloud moving to ARM, have not been enough of a wake-up call.


And yet energy efficiency continues to improve: https://tpucdn.com/review/nvidia-geforce-rtx-3080-ti-founder...

In fact, the 3070 is significantly more power efficient than the 1070 is. Because while yes power is up 50%, performance is up 100%. So performance per watt has continued to improve even as power consumption has also increased.

The reality of power consumption is that it's the main lever to pull right now to deliver generational gains. It's the same lever Apple pulled for the M2 even.

> more and more of the cloud moving to ARM have not been enough of a wake-up call.

You mean the ARM enterprise SoCs that use just as much power as x86 does to deliver on average worse performance?


Yep, I’m using a 500w psu to power my gaming pc with a rtx 3060 ti and a 12700k cpu


where is the amd/nvidia/intel product that offers comparable performance at a power draw that is anywhere near m1?


I believe AMD Ryzen 6000 mobile cpus can hold their own against the apple m1. They have comparable performance and can be set by the manufacturer at a TDP comparable to the m1(and still perform well). Except for mainly m1 optimized apps, GPU performance should be pretty comparable too. Ryzen integrated graphics perform better in gaming.


Power consumption is an issue, but worrying about the CPUs of gaming machines is like worrying about straws in the context of pollution.

Electricity needs of an average household in western world are going to increase a lot in coming decades, with transition to more electric heating, cooking, cars, etc. Gaming machine power usage is minuscule compared to those.


Just because energy needs keep increasing doesn't mean it's okay that they do.

While most house electronics keep pushing to consume less, computers go in the opposite direction.

This also adds tons of heat to my laptops and to the air.

Working or gaming in a small room during hot days is painful.

Even consoles making turbojet noises are nowadays considered normal. It's a disaster.


The major difference between a CPU and, say, an oven, is that the former runs 24/7, whereas the latter runs for a short period of time.

Back of the envelope calculation here:

Assuming an average oven consumes 2kWh, and a CPU 0.1kWh:

Oven for four hours (average weekly usage) would be 2 * 4 = 8kWh weekly.

CPU for 24 hours, 7 days a week = 0.1 * 24 * 7 = 16.8kWh weekly.


The flaws in your calculations are apparent when we realize that modern CPUs clock down when they are not busy. You would assume this would be common knowledge on a site called Hacker News.


Interesting choice of units, kWh per hour?


I would say that people who regularly invest in top end hardware don't care as much about power bills. Otherwise power efficient chips is the norm (laptops, phones, etc).


This is slowly changing IMO - I'm seeing concern over energy use even on forums discussing high end hardware builds as cost of energy mounts in Europe. Previously no one really ever mentioned this other than to laugh at poor thermals.

If some of Nvidia's next generation 4xxx series GPUs are close to 1000w draw as many rumors suggest, the total draw of a high spec Intel/Nvidia system is going to probably have similar running costs to an electric space heater when playing demanding games. The existing 3090ti is already a 500w part, which not so long ago was enough to power a whole system in addition to the GPU.


The power cost of running my enthusiast build is on the order of a few dollars a month.

Now I am all for being green but there are things in my household that are much more of a concern than this.

Huge datacenters full of these chips is one thing. A personal computer for hacking & gaming probably not such a big deal.


Yeah that's not true, at least here in the UK it isn't. My normal build will use 500W when gaming, so every couple hours that's £0.40. Every 10 hours is £4. That's just a few days of gaming for me, not including all the other computer use; it definitely adds up over a month, especially since my bills used to be £100/month and now they are £300 a month.


Using your own numbers, if you're spending 200/month more on gaming then that's 500 hours/month or approximately every waking hour. Are you really gaming that much? And even if you are, that's a lot cheaper than practically any other hobby you could spend that much time on.


I never said I spend £200 on gaming? I just said that my bills have increased from 100 a month to 300, but that's due to rising energy costs in the UK, not my gaming habits. It's more that in addition to my bill literally tripling, costs of gaming aren't insignificant for me. It doesn't matter that it's still cheap for the type of entertainment - it adds up. Every pound spent this way is not a pound spent on something else.


> It's more that in addition to my bill literally tripling, costs of gaming aren't insignificant for me. It doesn't matter that it's still cheap for the type of entertainment - it adds up. Every pound spent this way is not a pound spent on something else.

Sounds like a route to being penny-wise and pound-foolish. If you cut cheap entertainment you may well end up spending more (because in practice it's very hard to just sit in a room doing nothing), and if your gaming costs are a lot smaller than your energy bills then the cost should be relatively insignificant, almost by definition.


Well the discussion started with saying "why would enthusiasts care about their energy consumption" - so my point isn't that I'm going to cut out entertainment altogether, but for my next GPU I will definitely look at the energy consumption and there is zero chance I'm buying a 500W monster, even if I can afford the energy for it. It's just stupid and wasteful. I might go with a xx60 series instead, just because the TDP will be more reasonable. Or alternatively I might play the same games on my Xbox Series S which provides me with the same entertainment yet uses 1/6th of the energy of my PC when gaming.


Also, considering that the used power becomes heat, it is not such a waste if you already have inefficient electrical heating.


Conversely, it makes summer much worse.


perhaps.. you can easily equalize indoor temperature to outdoors, so if you're not cooling, it makes no difference.

sucks if you run an AC though :)


I don't use AC; I think it's terrible to waste energy like that. I can understand an office or hospital, but a home?

I'm from southern Italy: it's hot, you sweat, and you don't need to burn gas or coal or build infrastructure so that people can waste it cooling their rooms. This is so entitled, and no wonder we're in a full climate crisis.

People can't give up on anything, really. It gets worse every day and people's remedy is to make it even worse. Nonsense.

So yes, it makes things much worse with a hot pc in the room.


24h x 30 days x .5kW x 0.25 CHF/kWh gives me 90 CHF a month to run my PC, assuming it never sleeps.

Have you run the calculation? It's worthwhile configuring suspend for PCs these days. My 3090 never seems to go below 120W, for one thing.


> 24h x 30 days x .5kW

No modern PC should be pulling 500W all the time.

Idle power can be as low as 20-30W depending on the build.

You should also allow it to sleep, of course.

> My 3090 never seems to go below 120W, for one thing.

Something is wrong. A 3090 should only pull about 20 Watts at idle: https://www.servethehome.com/nvidia-geforce-rtx-3090-review-... . You might have some process forcing it into full-speed 3D mode for some reason.


Windows indexing service.


> 500W

500W is a very high average power consumption. And my electricity is 0.13 USD/kWh, which is about half 0.25 CHF/kWh.

True average power is probably below 100W, for a total cost in the realm of 10 CHF or USD per month.


In the USA, for a significant part of the year, chances are you have to add the electricity costs of running your airconditioning to get rid of that heat.

If you’re living in a colder state, you may be able to subtract some savings from lower heating costs.


Sure, although it's not a ton of heat either way and doesn't make a large impact on the net cost.


The lowest contract you can get in Italy is 40 cents.

Also, consuming more energy is bad, and this rush to excuse the lack of innovation in GPUs and CPUs we've seen in the last decade is ridiculous.

Where does it end? I'm okay with a 5000cc truck because airplanes and cruise ships are much worse?


the last 2-3 generations of cpus and gpus have seen the most innovation, efficiency, and performance gains in a long time.

If you don't want to drive a 5l truck, don't drive one, but 500W is not an average load, and if you have high electricity costs that's a you problem, not an everyone else problem.

The largest parts of my electrical bill are distribution and overhead charges that don't change whether I use power or not. The marginal utility of the power vs the cost is quite reasonable.


This is nonsense. The last 2-3 generations of GPUs have seen nothing of the sort in terms of efficiency, the same is also true of many Intel desktop CPUs. The latest Alder Lake desktop parts from Intel have been universally criticized for power draw too.

Each of the last 4 generations of mid to high-end card from Nvidia has required more electricity than the last. By Nvidia's own admission, their future chips will get larger and hotter to some extent due to the slowdown in Moore's law and future process nodes being harder to reach. The die size of the parts has also grown, which does not help.

It's not a small trajectory either; 4 years ago the most power hungry consumer part from Nvidia was a 1080 Ti, which would easily draw 250w under load. Today that number is 500w for the current 3090 Ti, and rumored to be 800-1000 for the 4xxx parts launching at the end of this year. A GPU will often sit at peak draw for hours at a time in games as they try to push as many FPS as possible.

A current gaming machine with recent components can easily exceed 500w constant load during gameplay, and this figure will rise again with the 4xxx parts.

The ONLY exception to this really is Apple devices, and even then it's not clear we can compare an M1/M2 GPU to the fastest parts Nvidia offers.


You and I are seeing very different news if you don't see midrange GPUs consuming 250W under load and CPUs getting over 100W.

The performance gains are non existent and largely driven by bigger and bigger chips on smaller nodes.

There has been no innovation in the GPU and x86 CPU space for a long time; that happens only on ARM nowadays.


>5000cc truck because airplanes and cruise ships are much worse

Oh boy, you should visit USA sometime. A 5 liter truck is the small one.


The fuck are those calculations? Are you trying to mislead people on purpose?

Who is running their computer at 500W 24/7?


500W 24/7 consumption? What do you do? Train ML non-stop?

Your example is in no way representative of reality.


I use suspend on my PC and I definitely do not run it 24h, or anywhere close to that. Also power is $0.08/kWh where I live.


That's insanely cheap power, use it while you can. I'm paying £0.40/kWh, so about 46 cents per kWh.


Really?

My power is some of the cheapest in the country and we pay ~13 cents/kWh. It's a little misleading though since my bill breaks out generation and distribution costs into separate line items. They are both billed per kWh though and add up to 13 cents.


Yes. We have a fixed basic connection charge of $20. So it's really close to your $0.13/kWh when that is taken into account.

I wasn't trying to be misleading though because the point is the basic charge does not increase with usage. So for each additional kWh we add to that it's only $0.089/kWh.


No, that's fair. I have an additional basic charge too. Congrats on the cheap power.


And double that number if you're in the UK.


It's worse than that. Gamer's Nexus had a video a few months ago about power transients becoming a bigger problem. Power spikes can double the amount of power needed. It doesn't really impact average power usage, but it can cause a PSU's OCP to shut down the machine. https://www.youtube.com/watch?v=wnRyyCsuHFQ


> If some of Nvidia's next generation 4xxx series GPUs are close to 1000w draw as many rumors suggest

Those rumors are for millisecond long transient spikes, not an average of anything. So basically the rumor is a 500w peak load. Just like how the current 350-400w GPUs have transient spikes to upwards of 800w. It's not a problem in terms of power consumption (although obviously the increase from 400w to 500w would be), rather it's an issue with over-current protection in power supplies tripping. It's a "need bigger capacitors" type problem.


Yeah exactly this. I have a 3080 with a 5900X, would consider myself an enthusiast, and after recent price hike to my tariff here in the UK electricity usage is definitely something that's on my mind. Like, it hasn't stopped me gaming yet, but I'm very acutely aware that I'm using £1 worth of electricity every few hours of play - it adds up.


> £1 worth of electricity every few hours of play

I hope you make a lot more per hour of work. Stop worrying about that.


I mean, thank you for the thoughtful advice about my finances, but it doesn't help in the slightest. Life is getting a LOT more expensive lately, with everything going up in price - I'm seeing my grocery bills double, energy bills triple, spending lots more money on petrol, on eating out, on taking my family out for trips, and yes - on gaming too. Is that £1 every few hours making me destitute? No, absolutely not, and I'm extremely privileged to be able to afford it. But at the same time every £1 taken for this isn't a pound saved, or spent on my kid, or on literally anything else more productive.

So yes, I can "easily" afford it, but it doesn't mean that the energy consumption of my gaming rig hasn't affected how I think about it. Any future hardware upgrades will also be impacted by this - there is no way I'm buying a GPU with 500W TDP, even if again, I can afford the energy bills.


Whether he does or not is none of your business, and it doesn't change the fact that those are high prices and sources of environmental issues.

This power draw is getting out of hand on desktop, consoles and x86 laptops and is largely a symptom of lack of competition and lack of technological advances.


> those are high prices

By any reasonable measure they're not. £1 for "a few hours" of fun is a very cheap hobby.


And probably the heat output of a space heater as well. I had to move my tower into another room because it kept the whole room way too hot


The pilot light on my furnace went out years ago. I only noticed because when I opened the door to the room with my computer a light but noticeable heat blast hit me. It took a second, but I turned around and checked my furnace, etc instead of going in the room. It really was a revelation about how much heat those things produce.


> that people who regularly invest in top end hardware don't care as much about power bills

It adds up, especially in data centers where you end up needing even more megawatts of power and cooling capacity.


> don't care as much about power bills

Not yet!


I wonder what % of the overall power bill a PC actually consumes. My gut would say it doesn't compare, really, to the water heater or air conditioner, but it would be good to see numbers.


I would love some PSU metering ability, to see actual data about how much juice my PC is pulling down. Other than getting a kill a watt meter, how could one go about this?
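
Short of metering at the wall, one rough software-side option is reading per-component sensors. A minimal sketch, assuming Linux with Intel RAPL exposed under /sys/class/powercap (reading it may need elevated permissions on recent kernels) and an NVIDIA GPU with the pynvml package installed; it misses the motherboard, drives, fans and PSU losses, so it is not the same as PSU draw:

    # Rough per-component power readout: CPU package via RAPL, GPU via NVML.
    import time
    import pynvml

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package energy counter, in microjoules

    def cpu_package_watts(interval: float = 1.0) -> float:
        with open(RAPL) as f:
            e0 = int(f.read())
        time.sleep(interval)
        with open(RAPL) as f:
            e1 = int(f.read())
        return (e1 - e0) / 1e6 / interval

    def gpu_watts() -> float:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        return pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts

    print(f"CPU package: {cpu_package_watts():.1f} W, GPU: {gpu_watts():.1f} W")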


They make "digital PSUs" like the Corsair AXi series that can talk to your PC over a comm port.


Some UPSes can show how much power is being drawn by everything connected to it, in this case presumably your computer.

You probably want a UPS anyway if you've got a power guzzling (and thus presumably expensive) machine.


Many server PSUs and motherboards have a SMBus or similar interface for monitoring. Quite rare on consumer parts, sadly.


Knockoff meters are like $10 on Amazon. Not a bad investment.


I have what is probably a close spiritual cousin of one of those, and while it even touts a power factor display, it also loves to show ridiculously high values during idle consumption for anything involving some sort of power electronics (not just for my computer, but for example for my washing machine, too – it shows sensible values while the heating element runs, or when the motor actually turns, but in-between it shows nonsensically high values).

It is a few years old, though, so maybe by now quality standards have improved even for those kinds of cheapo-meters…


Calculate it. You'll have per-kWh costs for electricity, which depend on where you are, and you can ballpark power based on the fraction of the time it's running and the components in it.

My standard dev machine is ~1kW flat out, ~500W most of the time, probably 100W idle. Runs for about eight hours a day. Say 500W is the average; that suggests 4kWh a day. That's about $2 a day in the UK.

(those power numbers are relatively high - it's an elderly threadripper with two GPUs)
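
The same back-of-envelope in code, using the figures above and an assumed UK-ish price of £0.40/kWh (roughly the rates quoted elsewhere in the thread):

    # Back-of-envelope running cost for the dev machine described above.
    def daily_cost(avg_watts: float, hours_per_day: float, price_per_kwh: float) -> float:
        kwh = avg_watts / 1000.0 * hours_per_day
        return kwh * price_per_kwh

    # 500 W average for 8 hours a day is 4 kWh/day; at £0.40/kWh that's ~£1.60, i.e. about $2.
    print(f"~£{daily_cost(500, 8, 0.40):.2f} per day")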


I think that 500W average is the tricky bit. When web browsing for example, my laptop (linux+intel) seems to spend 99% of the time in the C1 halt state, according to i7z.


I can say that when my son left for college last fall, our electric bill dropped about $30 a month compared to the months he was here (after adjusting for seasonal heating/cooling costs). He has an i9-12xxx gaming rig with two monitors. A Prusa 3D printer that gets a lot of use and a few other gadgets and such.


They have been in development for years, not just the last 6 months.


arguably it was the same thirst for electricity that was the killing stroke for most of the POWER architecture. that, and IBM contract fees.


netburst was the first thing that came to mind


I don't know, we found Helium-3 on the moon this week, so I think it might be fine.


In Intel's case, they need to push these insane TDPs in order to even dream of performance parity with AMD and Apple. All those years spinning their wheels on 14nm+++++++++++ are biting them in the ass.


Between this and the ridiculous TDP expectations for this generation's latest graphics cards, people are going to have to start thinking about using dedicated circuits per gaming computer.


It certainly makes building Mini ITX a lot more interesting when you're trying to get the sweet spot for performance to thermals/noise ratio.

I did an nCase M1 build recently and my objective for the build was small as possible, quiet as possible, and as powerful as possible in that order. I still ended up with a pretty powerful machine by going with an i3-12100 instead of an i5/i7, which uses much less power and puts out less heat. The RTX 3080 reference card, which I undervolted, was the biggest card that could fit into the case.

A lot of people are undervolting their RTX GPUs because for only about a ~3% performance loss you get about 10C less temp, which translates to far less fan noise. I don't know why Nvidia doesn't just have a one-click button for people.

nCase unfortunately have discontinued this case based on 'market factors' which I suspect means that they don't anticipate things to be getting smaller and cooler any time soon.


>A lot of people are undervolting their RTX GPUs because for only about a ~3% performance loss you get about 10C less temp, which translates to far less fan noise

Bah, this is brilliant. I just upgraded a 1070 to a 3070 and am flabbergasted at how much heat it dumps into my room. One of the reasons I did not go with the 3080 was the ~100 watt lower draw.

Do you know of any good tooling to assess the impact of undervolting or is it a manual guess-and-check process?


Trial and error. You need to dial in the right point on the voltage/clock frequency curve for your workloads, AKA "just play some games and look at the results." Just use whatever your overclocking software for your motherboard is, and modify the default curve it has. I use MSI Afterburner and just set a flat clock frequency (plateau) at a certain voltage level to undervolt. I think for NVidia GPUs there's a way to modify the curve with the default tooling, but third party tools like Afterburner can also do it.

You can get great results pretty fast this way. My Mini-ITX build is about as thermally compact as possible given the parts (3080 + Ryzen 5600X, NZXT H1), and I'm pushing my PSU to the absolute limits at stock settings, so undervolting is important for safe power margins since the 3080 can reach ~360W in my testing. I think 30 minutes of tweaking got me something like an 80W power drop for only a 10% FPS loss in Red Dead Redemption 2 @ 4K 60fps; I never breach 300W now, which is within my personal safety margins, and I can run everything at native 4K.

Some software like Afterburner have "Overclock Scanner" tools that will run benchmarks and repeatedly try to dial these settings in for you, but it really is easier to just modify the curve manually and test your specific workloads.


I just built a Ryzen 5600G system (without a discrete video card atm) and you can set either temperature or power consumption limits in the BIOS and it will underclock itself (actually turbo boost less) until it obeys your limits.

Perhaps I'll wait with the video card until they give me the option to do the same there...


Make sure to also cap your FPS or use Vsync. No point pumping out 100fps when you have only a 60hz TV, etc.


This is the correct answer to tackle power draw. Use Vsync/Adaptive Sync for fixed refresh monitors, or FreeSync/GSync for variable refresh monitors.

For variable refresh rate monitors, it's best to use framerate limiters as well: either in-game or in the Nvidia control panel. Set the cap at least a few fps lower than your monitor's max refresh rate. Even better, aim for 90-100 fps cap, beyond which diminishing returns kick in and power bills continue to creep up.


Just use MSI Afterburner and do some tests. I also usually setup a fan curve where the fan always runs faster than default to keep the temps lower.


i use prime95 for cpus and msi kombustor for gpus. if they can run for a while without errors i keep my settings, otherwise i increase power/voltage and try again


prime95 isn't a very good test anymore. With the changeover from blend to smallfft, it doesn't test the frontend or the memory controller or any of the other parts of the CPU very well anymore; it loads the kernel into instruction cache once and then just slams the AVX units as hard as it can.

so not only does this not test the rest of the cpu at all - meaning you can run into problems with other parts of the CPU that aren't stable at those frequencies, because they're not being tested because it's only running the AVX units - but it also doesn't test frequency/power state changes at all, so you can run into situations where as soon as you close prime95 and it drops to a lower p-state, it'll crash.

GPUs have run into similar things with FurMark and Kombustor and other power-virus tests... actually the GPUs themselves will detect when they're running and throttle down, so they no longer even do the thing they're supposed to. But GPUs also change power/frequency states under real-world workloads, just like CPUs, and they don't under FurMark/Kombustor. This actually caused a crisis at the Ampere launch... all the testing had been done with a "pre-release bios" that only allowed these sorts of power/thermal testing, and it turned out that while the chips might be stable at max p-state, they weren't stable when they shifted back to a lower p-state, or from a lower p-state back to maximum. That was the whole "POSCAP vs MLCC" thing.

prime95 and furmark were very very popular 10 years ago but that's where they belong, they don't do the job anymore these days.


>> my objective for the build was small as possible, quiet as possible, and as powerful as possible in that order.

With the same priorities and a deemphasis on graphics, I present to you the Mellori-ITX: https://github.com/phkahler/mellori_ITX

Uses the CPU fan as a case fan. By protruding through the top we get a lower profile than is possible with any other ITX case (well, the standoffs can be cut down, but that has not been optimized).

My next build will be an upgrade of the same design but with a Zen 4 or 5 chip with 8 or 16 cores depending what fits in the power constraints of the Pico-PSU. It will be a while though because that system is still more than enough for everything I do with it.


Mini ITX is also insanely more expensive than building a regular tower. Sure, if you're only putting the most expensive CPU and GPU in it then it probably doesn't matter to you, but for value-oriented builds, a mini-ITX case, mobo, PSU and cooler add up a lot.


You can get a great cheap ITX case these days for about $50 (Cougar), a 650W SFX power supply for about $70 (EVGA), and ITX motherboards start at $110... then just make sure you choose a sensible CPU/GPU from there based on your power supply. And if you're using an entry level cpu/gpu then you don't need to go crazy with cooling either.

Certainly not much more expensive than a regular mATX build imho.


I have an M1 gaming build where I prioritized efficiency; 5800X3D and RX 6600 with a 450W PSU.

I also have a mini-ITX Lone L5 build with an i3-12100 and no GPU with a 192W PSU. (Effectively - PSU is technically a bit more, but the AC/DC adapter is only 192W.)


What games can you play on an M1?


I think this is the nCase M1, a computer chassis/case, not the Apple M1.


> A lot of people are undervolting their RTX GPUs because for only about a ~3% performance loss you get about 10C less temp, which translates to far less fan noise. I don't know why Nvidia doesn't just have a one-click button for people.

Yeah, I did exactly that with my 3080. Dropped ~50W depending on the game and I was able to keep the same clock speeds.


Undervolting actually let me overclock my 3070 higher, presumably due to extra thermal headroom? I noticed two peaks in the TimeSpy results and undervolting moved me between them, so this must be pretty well known.


They are probably working on a successor. In the meantime, the DAN Cases A4-H2O or the FormD T1 are worthy replacements.


Yeah undervolting is always worth it.

You can also limit your i7 power usage, so no need to go for an i3 if you have the money.


And also use the heat from their PC to boil water and then spin a turbine to generate electricity to sell back to the grid.


For people who use resistive electric heating I've recommended them to run crypto miners on their computer. Same efficiency with regards to heating, but you can earn some extra money as a bonus.


I've done this to heat the small room I use as an office; it's far more efficient than the shit electric radiator in there.

Rented house so can't do much about the absolutely useless heating setup.


I lived in a little apartment with resistive wall heaters and I did just that in the 2018 period. Even had some Raspberry Pis mining Aeon, a lightweight offshoot of Monero.


Not really even close. Even with a 235W CPU and a theoretical 600W GPU you wouldn’t actually exceed even half the capacity of a single 15A circuit in synthetic benchmarks that stress the system beyond real-world loads.


I think the issue is with older houses that might have 15 amp circuit breakers compared to the modern standard of 20A. One high end desktop computer by itself isn't likely to be a problem, but the way these houses are wired, there are a lot of outlets on the same breaker since they were mostly designed for lighting loads. Our 1950 house in MI will flip the breaker if we use the microwave, toaster and bathroom vent at the same time, and my desktop is also on that circuit (with a UPS).


15A vs 20A is a factor of the gauge of the wires as well. You can’t just swap the breaker for a bigger one. You’ll get heat and depending where that could burn the house down.

I have a relative whose house burnt down due to stapled wiring in the attic. Thermal cycling eventually created a short. When your attic catches on fire the smoke alarms go off in time to save the people, but the moment the ceiling starts to cave in the entire house is involved and you’re mostly trying to keep the neighboring houses from burning.


Good callout. Yeah, I should be referring to the electrical circuit as a 15A circuit and not that it's just a limitation of the breaker.

We have the original fabric sheathed wire as well in the walls still which needs to be replaced to modernize the electrical system.


You have a 15A fuse. You use a 12A microwave and a 10A toaster (you'd blow your breaker right there), plus a 10A bathroom vent?

If you run a separate circuit for the microwave, and separate your bathroom vent + your bathroom LED lights, you can run all of them at the same time with your toaster. Running circuits is comparatively easy, vs installing a new 200A fuse box.


If you have the bathroom and kitchen on a single circuit you have bigger fucking problems than powering a gaming PC; whoever did that abomination needs to be fired.


In 1950, that was probably seen as perfectly fine. I used to own a 1942 home that had 4 screw in fuses for the entire house.


For sustained loads you are only supposed to draw 12A, and the PSU has a conversion loss, dropping you to perhaps 10A of power for it all. Plus, then you can’t run anything else on the circuit.


10A is ~1200 watts. That's quite a lot.


It’s easy to go over when you start factoring in other things like monitors etc

A beefy CPU, GPU and a couple of high end monitors can take you to the edge of that and over.


> It’s easy to go over when you start factoring in other things like monitors etc

Why would monitors even be factored in? They shouldn't be on the same circuit anyway.


Why wouldn't they be on the same circuit? The monitor and computer are in the same room and would generally be plugged into the same outlet. I think this would be the rule rather than the exception.


That’s absurd. Most people will not only plug them on the same circuit, they’ll plug everything into a single multi plug feeding from a single wall socket.

I’ve never seen anyone, in corporate and home environments, split their circuit use like you describe.


I would argue most people wouldn't even understand concepts like a circuit.

All they would see is an electrical cable and plug in hand, and an electrical outlet on the wall. Put two and two together, computer turns on. Circuits? Watts? Load? Might as well be pig latin.


253W (13th gen intel) + 450W (video card) + 2 monitors + a speaker system can easily hit 1000W.


Sure, you would be able to put smaller power draw items on the same circuit, but between the CPU/GPU/Motherboard/PSU/Monitors/Peripherals you will not be able to put two of these machines onto the same circuit.
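
A quick sanity check of that budget, treating 80% of a 15A/120V circuit as the sustained limit; all the per-device wattages below are assumed round numbers, and PSU conversion losses would push actual wall draw higher still:

    # Rough circuit-budget check with assumed per-device figures.
    CIRCUIT_WATTS = 120 * 15 * 0.8  # 1440 W sustained on a US 15A circuit

    one_setup = {
        "CPU (peak)": 253,
        "GPU (peak)": 450,
        "motherboard/drives/fans": 100,
        "two monitors": 80,
        "speakers": 30,
    }
    total = sum(one_setup.values())  # ~913 W before PSU losses
    print(f"one machine: ~{total} W of a {CIRCUIT_WATTS:.0f} W budget")
    print("two machines fit:", 2 * total <= CIRCUIT_WATTS)  # False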


For those confused like me, this conversation is about US circuits. On a typical European 230V 16A circuit, it's not a problem.


Maximum available power for standard domestic users is still only 3kW in many places. Might not be enough for a gaming PC, washing machine and microwave!


You would fit a gaming PC there?

I have a microwave (1270W) and dishwasher (2400W) on the same circuit (230V, 16A). It didn't trip yet...


230 x 16 = 3680 and 1270 + 2400 = 3670. Living on the edge.


Depends on your country, in the Netherlands 25A and 35A main fuses are common.


that's it?? that's not enough to even power an electric stove


Don't forget that they use 230V, and electric stoves often use three-phase power. Even with a 25A fuse that gives almost 10 kW of power.


So do American stoves. I have a 50A/240V circuit for my stove.

Three phase on the other hand is a sleight of hand, since that gives you more power than what 230V would imply ;)


Exactly, a "cooking fuse" is not uncommon, which is two 16A lines to the same stove. That gives you 7360W to play with, something you won't reach in practice.

Alternatively, if you already have a multi-phase connection, then you would of course have the lines on different phases. If you have a 3-phase connection 25A main fuse is common, for single phase connections 35A is common.


Just to clarify here: when you talk about a "main fuse", you mean one that sits between the meter and the entire rest of the panel, correct? So individual circuits would be downstream of the main switch.

For context, most American homes have 240V split phase (single phase for all intents and purposes) service with a 200A main breaker.


Wtf do residential homes need 48 kW of power for? I guess it's nice to charge your car quickly, but other than that I'm struggling to think of any uses.


Simultaneously washing and drying clothes while cooking a turkey in the oven, brussel sprouts in the toaster-oven, boiling water for tea, distracting the children with a computer or tv, doing some welding in the garage, and powering a bunch of Christmas lights. --- This is something that actually happened one year. It is much easier to use a big wire and a big breaker/fuse to the house than to have the power go out.


Between my electric heat, dryer, stove, water heater, and car it's easy to get close to the limit, and that's before anything that runs on 120, like computers, a refrigerator, lights, or washer.


Where, in deep Russia?

3kW is typical kitchen power.


In Italy it is very common. The wiring is normally rated for more (4.5kW). The power is limited at the meter.

> 3kW is typical kitchen power

Most stoves used to be gas powered. Now induction is becoming more common (but it requires an upgrade to 4.5kW).


A 15A circuit is good for 1440w sustained (120*15*0.8), not 1800w.


If you have a circuit in your house that trips for “no reason” this is partially why.

With a steady load you can run a circuit breaker past the rated amperage on the breaker. But look at it funny and it will pop.

The most obvious case of this was when I knew someone who would plug a vacuum into a different circuit and blow a breaker. Just a little noise on the wires and click.


People often use power strips for their computer. So you also have your dual 4K LCD monitor system, as well as maybe plugging in a phone to charge as well which can have high peak power draws over USB 3.0.


I actually did this...I got two dedicated circuits put into my room - one for the window unit AC (no point in cooling the whole house when I really just need to cool this room most of the time), and one for my gaming computer. My work laptop, lights, etc. are all on the original main circuit of the house.

A friend of mine is an electrician so the price was very reasonable, and it has been worth it, especially during this hot summer.


I did something similar for my home lab setup in my previous house. It was pretty reasonable having two dedicated 20A circuits run w/ surge protected hospital-grade outlets and dual function breakers. Each circuit fed a different UPS which fed a different PDU so everything had redundant power back to the breaker panel, which was all I could reasonably do residentially, and it meant none of the servers/network gear impacted the rest of my office circuits.

It was reasonably cheap, and in my next house I'll do the same again. Running additional circuits is pretty easy if you have an attic or crawl space.


Are you exhausting the gaming computer to the outside or into the room you are cooling?


Into the room I'm cooling. I suppose it would be possible in theory to do so, but the particular layout of the room makes it difficult to impossible to exhaust both the AC and computer, I think.


In previous heat waves I've seriously considered venting my PC through the wall straight to the outside, but alas I currently rent.


Do you have an inroom AC unit? You could run a dryer hose from the back of your computer to the same window vent.


Ironically, I have a printer that really needs a dedicated circuit. When it warms up the toner, it draws 12 amps for 1-2 seconds.

Printing often pops the breaker. I had to move the printer out of my home office into a bedroom, but even then we've popped the breaker when printing while vacuuming.

(It's not a case of bad wiring, either.)


There are types of fuses that have a time delay on them for this purpose. A lot of electrical appliances have that kind of startup burst of energy. An electrician can tell you more


Warming up the laser printer (and its a small one) reliably causes the lights throughout my apartment to flicker.


It's not an AFCI breaker, is it? My last house had sensitive AFCI breakers that my laser printer would trip about a quarter of time when warming up.


Yes, my electrician was going to change it for me, but then I plugged my printer into a Kill A Watt and learned that it was pulling 12 amps.

I've been assuming it was current, because if it has the circuit to itself, nothing trips.

(My office only has one circuit, which is dedicated to the room. I could have asked for 20 amps, but it didn't occur to me.)


I used to own a Brother laser printer and it did this; I switched to an HP LaserJet and it no longer occurs.


Who cares about these Russian gas problems? I will have to throttle my CPU so it doesn't get too hot in winter.


My office gets about 5°C warmer when I play games on my PC, despite being poorly insulated and having the window open.

And I've only got a 3900X+2070 Super...


feeling a bit cold, gonna turn on my gaming pc for an hour


Water cooled? More like water heated, amirite?


253W is only about 2 amps in the US; that's like 10-20% of a breaker's capacity.


Peak draw for even a current-gen graphics card is well over 500W. There are rumors that a 4090 will need as much as a 1500W power supply to run it. That's almost a complete 15A circuit just for the PC once you factor in cooling, speakers, monitor etc.

I already have issues where the breaker would pop with my current gaming PC if it fully spins up and I had to get a 20A circuit put in to handle it (mostly because there is more than one computer on the circuit).


There are cards like that.

But you don't have to buy those cards. I play my games in QHD on a Radeon 6700XT 12GB, which tops out at about 165W.


I can't lie, the idea of needing a 30A dedicated circuit breaker makes me feel happy as a nerd. Makes my power company happier, too.


With the GPU it's going to reach 51% of a circuit's capacity at this rate, and that's all it takes to effectively require one circuit per computer.


Just realized everyone's gonna have to turn their power targets down when running a LAN party. I gave my brother my old 2080 Ti; something would coil-whine when he played Battlefield. We turned the card's power target down until the whine went away. At 35% the whine stopped, and the performance difference was not easily discernible with a basic FPS counter while just flying around an MP game.

Opportunity for software that dynamically adjusts CPU and GPU power targets in the middle of various games, learns the game's power/performance profile and whether it's CPU/GPU bottlenecked, and optimizes perf/watt while maintaining a given FPS target?
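Something like this as the core loop, just as a rough sketch: it assumes an Nvidia card, where "nvidia-smi -pl <watts>" really does set the board power limit (admin rights required), while read_fps() is a made-up placeholder you'd have to feed from PresentMon/MangoHud logs or the game's own counter:

  import subprocess, time

  TARGET_FPS = 120
  limit_w = 350                      # assumed starting/maximum power limit in watts

  def set_power_limit(watts):
      # real nvidia-smi flag, but it needs admin/root privileges
      subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)

  def read_fps():
      # placeholder: wire this up to whatever FPS source you have
      raise NotImplementedError("hook up your FPS source here")

  while True:
      fps = read_fps()
      if fps > TARGET_FPS * 1.1 and limit_w > 150:
          limit_w -= 10              # plenty of headroom: shave some power
      elif fps < TARGET_FPS and limit_w < 350:
          limit_w += 10              # falling short: give some power back
      set_power_limit(limit_w)
      time.sleep(5)

Learning a per-game profile would mostly just mean persisting whatever limit this loop settles on for each executable.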


That's what Radeon Chill is.


I believe the new Nvidia 40-series uses 450-500W.

Throw in 2 monitors and a speaker system and you're coming close to overloading your 15amp breaker.


For those of you unaware, most households in the USA have 15amp circuits for their wall plugs. With that you can safely pull about 1200Watts constantly.

I am unsure what the normal household circuit amperage is in the EU or elsewhere...


Max sustained load on a 15A breaker is not 10A/1200w, it’s 80% of 15A at 120v, or 12A/1440w.


The normal amperage in Germany is 16A at 230V, with a peak load of 3500W and a sustained load of 3000W.


I had a LAN party at my house in 2012 and one lunatic brought his 1KW+ PC and tripped a breaker which has seemed really twitchy ever since.


Circuit breakers do degrade after being tripped. Once usually isn't enough, but repeated tripping will wear them out.


You should have an electrician check the wiring too. You might be leaking a little current due to decaying insulation.

If you have to wire a room, ask for a larger gauge of wire so have the option of a larger breaker if you want it.


Already there in Europe... I actually prefer getting the Steamdeck out than turning the main computer on for "light" games.


This is the definition of “working smart vs. working hard”. Not everything about the CPU needs to be solved by pushing it to the limits of physics. Power draw is not linear with CPU frequency.


...no. Average EU circuit is 230V/16A, that's 3.6kW

Even if each machine somehow ran at 1kW, you could still connect three of them and have 600W left for audio/monitors.


In my part of Europe we usually only use fuses that large for high-power appliances, like ovens or workshop equipment.

I would say it's more normal to be fused at 10A for most indoor circuits; most home appliance power cables aren't even thick enough to carry 16A at 230V safely.


You're right, I was being North American-centric, which is 120V/15A and could only run a single 1000W machine.


I popped the breaker when I accidentally connected my gaming computer and my car charger (plug-in hybrid) to the same circuit.

Probably would've been fine if it were a 20A circuit and not a 15A, but it did remind me how much power these things draw...

Or with power in SF averaging 40-ish cents per kWh a standard evening gaming session can easily cost a non-negligible amount of money.


That's not a TDP, which is a sustained metric (originally designed for board/cooling design integration) and shows 125W for that part. The 250W number is a new thing they're calling "Processor Boost Power" and I guess it's intended to represent some kind of "maximum short term draw" number. That's not something that's been historically reported for other parts, so it's kinda wrong to try to compare them 1:1.


Intel's following AMD and introducing a "PPT"-style term for the boost value, since they routinely get compared against AMD's (non-boost) TDP values.

even in this thread you see people saying "wow intel pulls 250W against AMD's 105W processors"... when the comparable PPT number for AMD this generation is actually 230W, and their previous-gen number was 145W.

It's a huge marketing disadvantage, just like with node naming for fabs. Intel's 14nm is hugely better than GF 14/12nm or TSMC 16/12nm, and 10ESF is comparable to TSMC 7nm (although much later ofc). When the competitors are playing marketing games, to some extent you just have to start playing them too.

Desktop/HEDT TDPs used to pretty much cover boost clocks. The "tau" concept always officially existed, but (e.g.) the 5960X has a 143W idle-to-Prime95 power delta as measured by AnandTech, so the 140W TDP is pretty much sufficient to cover any "normal" non-Prime95-AVX load at full boost clock. Similarly, the 4770K is an 85W TDP on paper and the measured idle-to-Prime95 delta is 88W. Overclocked desktop loads could go higher of course, but most people overrode tau limits anyway in those cases. So in practice, the tau limit was pretty much only a thing that existed on laptops in the Intel world, because there was always enough TDP available to cover boost clocks in a stock configuration.

https://images.anandtech.com/graphs/graph8426/67026.png

Then AMD came along with Ryzen and started marketing around base TDPs, and made their boost TDP this other higher number (but it's not a boost TDP guys, it's, uh, PPT, yeah!!!!)... and allowed it to boost to the higher number for an unlimited period of time. The 9-series really started pushing it and tau limits started becoming a problem, but it looks really bad to have a 145W TDP when the competition has 105W... even if it's the same actual power consumption in practice. So over time Intel more or less had to move to the same "TDP/PPT" concept as AMD.

It's really really noxious in laptops where AMD allows processors to boost to 50% (more than the desktop chips even!) above their configured TDP for an unlimited period of time. Yeah partners get to pick the cTDP for the particular laptop, but either way an AMD chip with a 15W cTDP gets to use 50% more power than an Intel with a 15W cTDP, for an unlimited duration, which is a huge functional advantage... basically a 15W AMD laptop is more comparable to a 25W Intel laptop in terms of power draw, and a 25W AMD will pull more power than a 35W Intel. So they move themselves up a whole power bracket through The Magic Of Technical Marketing (tm).

https://images.anandtech.com/doci/16084/Power%20-%2015W%20Co...


Looks like that's just for the 5.4GHz chip. To hit 6GHz, it's probably going to be this 350W (!!!) turbo mode.

https://www.tomshardware.com/news/intel-raptor-lake-to-featu...


It sounds high, but we’ve had plenty of AMD and Intel workstation CPUs with even higher TDPs for a long time. Overclockers have also routinely pushed well past that number.

235W is well within the range of what a decent air cooler like the Noctua NH-D15 can handle without excessive fan noise.


Justifying the off-the-shelf TDP of new GPUs/CPUs by saying it's still lower than what overclockers reach is like saying a car doing 50 L/100 km is completely fine because an M1 Abrams uses 2000 L/100 km offroad.


That's not what I said. I specifically said that AMD and Intel have been shipping CPUs with higher TDPs (stock!) for a long time. Overclockers have been going even further.

AMD's Threadripper PRO CPUs come with up to 280W TDPs.

It's really not a problem with modern air coolers and not a problem at all for people running liquid coolers.

A 253W boost TDP isn't really a big deal any more. There are plenty of smaller CPUs for people who don't want such high overheads.

Some of Intel's new parts can be limited to 35W and still outperform a Ryzen 7 5800X: https://www.tomshardware.com/news/intel-13700t-raptor-lake-a...

There's a lot of "sky is falling" over these numbers, but it's a non-issue for the enthusiast builds these are targeted at. Nobody is forced to put a 253W CPU into their machine, but it's great that the vendors are making them available for those who want them.


Your comparison is moot: nobody is forcing you to buy the most gas-guzzling chips Intel and AMD make, as those are exclusively for enthusiasts who want the best of the best with no regard for value for money or efficiency.

But Intel and AMD also make enough chips with very good efficiency for the average folk who don't need to set benchmark records.


Wasn’t the top of the line DEC Alpha drawing 200 watts at one point?


So, almost 25c an hour at EU electric prices.
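(Back-of-the-envelope: ~250W for an hour is 0.25 kWh, so that assumes roughly $1/kWh, which is about what the peak Danish prices mentioned downthread work out to once taxes are added.)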


Coin-operated game systems, lol. We have come full circle.


Or <$0.04 in the US and in most of the EU under normal circumstances.



Is EU electricity really 1 dollar per kWh?


Here are the hourly prices in Denmark, without taxes and other charges which are about 1.6DKK/kWh. If you go forwards and backwards, you can see the price has varied/will vary from about 1kr to 4.5kr, plus tax, or 35¢ to 81¢. Car chargers can be set to charge at the cheap times, and things like dishwashers and washing machines have delay timers for people who want to run them at the cheaper times.

Straightforward day/night electricity rates have existed for decades, hourly rates are more recent, and optional.

100¢/kWh has happened in the last month, but only at a peak period. I'm not sure how long or how often it happened.

1.00DKK = 0.14USD

https://andelenergi.dk/kundeservice/aftaler-og-priser/timepr...


Depending on the day and hour of the day, yes. At least here in Denmark.


Solar should be 10x cheaper, why don't more European homes have solar?



Because a lot of Europe is at the same latitude as Canada.


Because installation is expensive, and depending on the country, solar panel grants can be difficult to get or are practically non-existent.


Here in downtown San Jose it’s around $0.75/kWh


Wow... my whole desktop setup, with 3 screens, a 7-year-old intel CPU, a gaming GPU, and a grip of hard drives is showing a draw of 143W right now, up to 289W under stress (prime95). 235W just for the CPU is nuts.


TBF the performance per watt is also nuts. A 7-year-old Intel CPU would be Haswell-to-Skylake (mostly the same performance)?

Alder Lake impresses me, but Ryzen is the better choice because f Intel heh


Yeah, this PC has a Haswell CPU.

As much as I want to agree on Ryzen, Intel is still the best platform for low-latency audio stuff. So I hope that performance-per-watt is or will be good as I'm beginning to get that PC-building itch. I'm curious what its idle usage is like.

But my living-room PC has a third-gen Ryzen (I built it just before the pandemic hit) and have been super pleased with its performance.


Do you have a source for Intel being the best platform for low latency audio stuff?


Nothing citeable, just anecdata and murmurs from being active in that scene for a long time

I did find https://linustechtips.com/topic/1238719-low-latency-cpu-for-...

edit - Sound on Sound with a bit more detail https://www.soundonsound.com/sound-advice/core-wars-amd-inte...


My i9-12900k hovers right around 250W TDP with no overclocking or anything. If you keep it under 100C it's happy to do so.


Just to clarify for the audience, there is nothing you can do within reason that will cause your CPU to ever self-heat above 100°C. They manage their own power to stay below their maximum design junction temperature, less a safety margin. Even if you ran it without a heat sink, it will not run above 100°C. It just won't run very well or very often.


I damaged some traces on an AMD board that let the CPU talk to the VRM (anything related to SVI2 couldn't be read once booted), and even that didn't kill anything; it just put the system into something like a 0.8 V, 400 MHz mode. Windows 10 takes an incredible amount of time to do literally anything on a system like that, by the way, even with twelve cores. Patched the traces and everything was back to normal.

Modern hardware is really difficult to permanently damage as long as you don’t go full “manual OC” - in that case many protections may be disabled, and you can certainly get Ryzens to overheat and die like that.


Intel CPUs most certainly will hit 100°C and above in laptops.


These designs pretty much demand a setup with a water cooling loop implemented via radiator sized for two 140mm fans (280 mm length).

Thankfully all-in-one kits for that which are pre filled and sealed are much more commonplace than they used to be, and even fairly cheap midtower ATX cases I see on newegg in the $60-70 range will have a top panel mounting place for a 280mm radiator.

And definitely any "gaming" marketed ATX case above that price range will have the capability for it.

You possibly could get away with a 240mm length radiator (dual 120mm fans) on something like this but I really wouldn't recommend it, and the savings for an AIO kit would be only $50-60.

From the perspective of noise annoyance, fan pitch and loudness scale with size: 140mm fans can move more air than 120mm fans with less perceptible noise to the person sitting next to them.

Higher-end stuff will use a 360mm-length radiator (3 x 120mm fans), which I am pleasantly surprised to see even reasonably priced ATX cases offering mounting options for now.

I would figure you have to budget an additional $150-200 on top of the CPU cost for a capable water cooling loop setup. Which is not absolutely ridiculous considering that a really good skived copper heatsink/heatpipe/fan setup for pure air cooling on a 130W TDP CPU could easily be $65.


Just in time for the winter.


Relatedly...

One of the things I'm not happy about on my current machine (Ryzen 1800X, RTX 2070S) is the heat and noise. I'm going to invest in a better case and fans next time, but new hardware is trying to make the problem even worse. The new hardware is supposed to be very efficient if you limit the max power, but they don't make it easy to do.

From what I can tell, the only ways to change power limits and fan curves for the CPU/GPU are either to reboot into the BIOS or to use multiple separate manufacturers' shitty, bloated Windows GUI utilities. AMD's Ryzen Master software is supposed to be good, but it doesn't work at all if you have Hyper-V enabled, which is basically mandatory for developers nowadays.

My GPU's default fan curve has the fans turn on/off around typical idle desktop temperatures, so they continuously cycle on/off; this has worn out the bearings and they now make a scraping noise every time they do it. The only way to fix this is to launch a bloated Windows GUI utility every boot. I was surprised not to find an open source Linux library or kernel driver that lets you read and write fan speeds for GPU and motherboard-controlled fans.

I want two things:

1. A simple, unobtrusive button in my system tray that lets me toggle power limits of my CPU and GPU from "silent" to "performance".

2. A simple, unobtrusive way to configure fan curves with averaging and hysteresis that, crucially, lets the case fan speeds be controlled by a combination of GPU and CPU temperatures.

As far as I know neither of those are possible today. I've considered buying or making a USB fan speed controller and even plugging my GPU fans into it because there's no other good way to control them.
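For the second item, the closest thing I've found on Linux is the kernel's hwmon sysfs interface, so here's a rough sketch against that. The hwmonN indices and device names below are made up (check ls /sys/class/hwmon/*/name on your machine), writing pwm1 needs root, and pwm1_enable has to be set to manual mode first:

  import time

  # assumed paths -- map these to your actual hwmon devices
  CPU_TEMP = "/sys/class/hwmon/hwmon2/temp1_input"   # e.g. k10temp/coretemp
  GPU_TEMP = "/sys/class/hwmon/hwmon3/temp1_input"   # e.g. amdgpu
  FAN_PWM  = "/sys/class/hwmon/hwmon1/pwm1"          # case fan header, 0-255

  def read_c(path):
      return int(open(path).read()) / 1000.0          # hwmon reports millidegrees C

  history, pwm = [], 80
  while True:
      temp = max(read_c(CPU_TEMP), read_c(GPU_TEMP))  # fans follow the hotter of the two
      history = (history + [temp])[-12:]              # ~1 minute rolling average at 5s polls
      avg = sum(history) / len(history)
      # hysteresis: only move the fan when the averaged temp leaves the dead band
      if avg > 70:
          pwm = min(255, pwm + 10)
      elif avg < 60:
          pwm = max(60, pwm - 5)
      with open(FAN_PWM, "w") as f:
          f.write(str(pwm))
      time.sleep(5)

Whether a GPU's own fans show up as a writable pwm node depends entirely on the driver, which is exactly the gap I was complaining about.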


I have found it much more difficult to tune fan curves for quiet operation on Ryzen than on Intel chips. There are frequent short surges in fan speed, presumably caused by frequent short surges in core temperature. This happens at normal load just above idle, e.g just web browsing and YouTube.

This has happened with a few different brands of motherboards, in multiple chassis with different brands of fans, so I think it's a characteristic of the Ryzen platform rather than a specific motherboard brand or BIOS.

It's really odd and annoying, and I've resorted to fan curves that keep the fan on low RPM until just before the CPU hits 90C -- basically flat with a big hockey stick inflection at the end.


> It's really odd and annoying, and I've resorted to fan curves that keep the fan on low RPM until just before the CPU hits 90C -- basically flat with a big hockey stick inflection at the end.

Same. I haven’t found much better. I always wonder how it doesn’t drive other people nuts.


The counterintuitive trick I've used for my 5900X is to increase the minimum fan speed up to around 60% or so. It makes the base volume very slightly higher, but avoids it constantly ramping up and down through the 50% zone which is what I found the most annoying/noticeable.


Currently using AMD Software Adrenalin Edition (on Windows): https://www.amd.com/en/technologies/software

Pretty average as far as bloat goes, ~150-200 MB of RAM used, about as much as JetBrains Toolbox takes up in the tray, or Mattermost, Discord or other apps like that.

Lets me switch between GPU power/fan profiles (Performance > Tuning) so I can run my GPU at 50% of its maximum power most of the time (as well as different fan curves), for longevity/noise related reasons, especially when dealing with badly optimized software/games.

The CPU just seems to do its own thing and throttles up/down based on system load, haven't really needed to tune it for any particular reason yet.

It's passable but you're right that things could be way better, more usable and user friendly! I guess in a way, when everything is bloated, nothing is. Wirth's law at its finest.


It can't control the speed of case fans though, no?

I suppose I don't care about memory usage for "bloat". I want it to be easy and pleasant to use, and to start quickly and unobtrusively. No 10 second long splash screen every time I boot.


It 100% depends on your mainboard manufacturer and their software support. Once upon a time most mainboards had fans controllable through SpeedFan.


> they continuously cycle on/off and have worn out the bearings

Does cycling a fan on/off wear bearings faster than being on all the time? Naively I would have assumed that bearing wear is a function of time enabled and speed, with # of spinups a negligible factor.


So what you're after re fan control exists, in this bloody fantastic app: https://github.com/Rem0o/FanControl.Releases


I'll look into that, thanks! Too bad it's not open source, though.


I use this for my Radeon RX 5700 XT: https://gitlab.com/corectrl/corectrl You can use it for the CPU as well, but I haven't felt the need to fiddle with any settings there.


How about putting the button on the case, like the good old turbo buttons on 386/486s


Welcome back to the failed Pentium 4 strategy.

>Raptor Lake to Offer ‘Unlimited Power’ Mode for Those Who Don’t Care About Heat, Electric Bills

https://www.extremetech.com/computing/338748-raptor-lake-to-...


It's true that like... naturally throughout production some chips have better "silicon" (for lack of better terms/words) and some have worse, leading some to get marked as lower frequency chips (because otherwise if you turn the clock speed/power/heat up too much on them, they don't perform as well/have errors due to... mild defects in manufacturing?)

Am I understanding that correctly or butchering it?

Like, how would you describe the fact that not all chips Intel produces will be able to hit 6GHz+?


I think it falls more into "we're having trouble scaling performance with sane power usage, so just amp up the power usage", which is what was referenced as the Pentium 4 strategy. What you described is called binning, and as far as I'm aware everyone does it (for example, Nvidia's Founders Edition cards tend to get the higher-quality chips, causing 3rd-party cards to not always overclock as well as older generations).


Is it offered first? We can just set PL1/PL2 unlimited for older Intel CPUs.


The P4 was just a plainly inferior architecture; this is not that, "just" an outdated one.


These are crazy frequencies and I am sure they have the thermals to match, but this is also the sign that competition is tight. Both companies are trying to get every bit of performance out of their designs. Advanced cooling systems that are now common on enthusiast machines are helping.

Regardless if you think they can pull it off or not, Intel's roadmap is fascinating. They expect pretty tremendous growth not just in the processor space, but in the US foundry space. They are aiming to be able to compete with TSMC and Samsung in this space.


Looks like desperation to me.

I believe AMD's technology, and furthermore its strategy, is superior to Intel's. Now with Xilinx on board I am curious whether we will see GPUs or APUs with FPGAs that allow custom hardware instructions.


I'm really not sure what the point of this is either. Some AMD FX chips were clocked to 5GHz back in the day and it sure didn't help them with performance much; they just had to ship with water coolers as stock because they overheated like crazy.

Meanwhile an Intel i5 of the time could run faster at half the clock speed. I suspect this'll be a similar blunder but companies reversed in some stroke of irony.


The difference is that Bulldozer (and NetBurst before it) was a bad design. Golden Cove and Zen 4 are both excellent and quite similar core designs with nearly identical IPC so the difference between 5.7 and 6.0 GHz is the difference between being #1 (by a hair) and being #2.


A 5% clock difference can be dealt with using AMD's 3D V-Cache. A hundred megs of cache goes a long way.


It seems that unless you opt for an all-AMD build, this new generation of PCs will be absolute toaster ovens.


AMD is also in the process of becoming a toaster oven. The new Ryzen 9 7950X will use considerably higher power (105W -> 170W) for a smaller die area, which is making people worry about air cooling no longer being enough (previously a Noctua D15 was enough to cool a 5950X). And I really don't want high-end CPUs to require water cooling, since it's less reliable and requires more maintenance.


There's a small silver lining in that you can set hard power limits in the BIOS, and AMD claims it's still ~15% faster when limited to 105W. It would be nice if they just left it at that, but it makes them look worse in benchmarks against Intel since most people won't check the power consumption.


keep in mind that is not idle power consumption. If you are setting a limit like that it's because of heating or power constraints (battery? PSU?)


> 7950x will use considerably higher power (105W -> 170W)

The 5950X uses 180W at full load. Some measure as high as 230W. The 105W figure reported as "TDP" was a mistake.

Not sure why the specs were a big lie but this is well documented online.

> air-powered cooling being not enough

If you want to turbo to 4.1GHz or better on all cores on the 5950X, you have to spend about $60 on a cooler - water or air. The AIOs perform well. A big AIO will give you 4.5GHz. No turbo air cooling - 3.4GHz - is a waste of money.


I can definitely confirm this, I tried to air cool my 5950X without much luck. Enabling PBO I've seen it go over 200W, which a 240mm rad handles very well.


Huh. Interesting. I put a noctua heatsink (dual fan) on mine but I am guessing this is why my overclocking curves weren't that great. Any chance you have the rough before/after numbers and how much did you splurge for the water cooling?


Those are base-clock TDPs, not the boost TDPs (which AMD calls PPT)... it's 170W base/230W boost power, and those chips are allowed to boost for an unlimited duration (which of course has pluses and minuses).


The Noctua NH-D15's air cooling has long been on par with or surpassing most AIOs, and even all but the high-end custom loops. https://www.youtube.com/watch?v=PeQX1uhb0iQ


Not really. The NH D15 is an impressive cooler, but it falls behind 280 AIOs. Even ones cheaper than itself.

https://www.gamersnexus.net/hwreviews/3571-arctic-liquid-fre...


Yeah I admit that there are cheaper water coolers than Noctua that perform better. But the main issue I have is with water cooling itself (can burst or leak if unlucky or with poor maintenance, has a more limited life span, etc...)


Intel power scaling works very well. My 12th-gen 12700K says it is drawing 660mW at the moment, and its complete silence is consistent with that estimate. If there's some power level that you prefer, you can just enter it in the BIOS and leave it that way.

Personally, I do not pretend that CPUs are light bulbs. If my CPU could draw 1000W for 10ms and that made short tasks like web page rendering twice as fast, that's a trade I would happily take. The short-term power consumption of CPUs is pure benefit to the user, and the rarer sustained tasks that run all the cores flat out for more than 1 second are always going to level off at about 125W because of the long-term cooling situation.


Intel CPUs have better idle power consumption than even the latest Ryzen CPUs. It caught me off guard when I switched from Intel to AMD and the idle draw of my PC went up by a significant amount.

Given that our computers spend more time idle than at 100% peak load, my AMD CPU draws more power (and therefore heats up my room more) than my Intel setup did. That wasn't an outcome I was expecting, but then again I was only looking at peak, not idle, numbers at the time.

I really hope AMD can start bringing their idle power consumption down in this next gen.


IIUC, this is mainly because the IO die was made on 14nm. With Zen 4 moving to 7nm for IO die, I think lower idle power draw is expected. We'll see by how much soon.


It’s not just the IO die, but also IF on AM4 seemingly not supporting power management - it always runs at full speed, unsurprisingly a bus that fast burns a lot of power.


In Ryzen Master, my Ryzen 9 5900X is showing 11-13W CPU Power, 17W SOC Power.

It's largely idle, just lots of background stuff/open programs (VS Code, Discord, Excel, Firefox, Teams, Edge, MySQL Workbench, Thunderbird, Messenger, Outlook.)

PPT is 30% of 142W = 42.6 W. (Not sure what PPT stands for.)

EDIT: https://www.gamersnexus.net/guides/3491-explaining-precision...

> Package Power Tracking (“PPT”): The PPT threshold is the allowed socket power consumption permitted across the voltage rails supplying the socket.


This is true; it seems like the Infinity Fabric limits how low AMD CPUs can idle. Intel invested pretty heavily in power-saving tech during their quad-core era.

One other thing that hurts AMD here is the X570 chipset. It seems to be a hack job, basically the CPU's I/O die installed upside down and run as a chipset. IIRC it uses about double the wattage of the X470 chipset it replaced.


It seems quite unlikely that the idle power draw difference between Intel and AMD is doing anything at all to your room. The chiplet design is hurting AMD's idle power, but it's still going to be far from a "large" difference. Your monitor is probably using more power than an idling Ryzen CPU is.


Over in Japan, Intel CPUs are sometimes fondly(?) called Idlemaster, which is a pun and reference to Idolmaster[1], for their superior idling performance. :V

[1]: https://en.wikipedia.org/wiki/The_Idolmaster


I see the same thing in Intel vs AMD servers; the AMD ones draw something like 3x the power at idle.


Can you give some details (components and consumption)? I'm interested in the topic, and I've done some wall measurements as well, in the past and present.


My 5600G idles at 22W whole-system (wall power). However it has the cheaper chipset, not the X-prefixed one, no video card at the moment and just one NVMe SSD for storage. It does have 4 x 16 GB RAM sticks, so you could go even lower with 2 x 32, I guess.


AM4 APUs are totally different from their CPU brethren and are much better behaved in this area, because they’re socketed versions of the mobile chips.


Interesting. My 3950x draws (wall power) approximately 67/68W in idle. I'm on a Bxxx chipset and a 6600 XT, which should both consume little in idle mode, and 3 SSD, which consume little.

I haven't read much, but a range of around 40-60W seems fairly normal at idle for a modern system (both Intel and AMD). I remember a former system of mine consumed 32-33W though; I think it was an AMD 6800K.


Well I picked the 5600G because it was advertised as 65 W (well and because it allows me not to decide on a video card yet). Your CPU is advertised as 105 W. TDP is a lie these days but the lying should be proportional :)

At a quick google, the 6600 xt idles at 4W with one monitor but at 20 W with two! Plus whatever extra power the motherboard needs to keep that pcie slot active.

Two extra ssds may consume little but that can be like 4-5 W more. It all adds up.


I have a 3950X and a 5950X. I thought it was the other components at first, but then someone pointed me to the official Ryzen Master software. It shows CPU package power, which has a persistently high floor. Idle temps are also higher on the same cooler. It's a well-known phenomenon on the forums when I searched.


FYI some programs really fuck with this, even at "idle". On my Ryzen 5 3600, it idles at a few watts, but if I open Steam and just let it sit on my library, the CPU draws 15 watts.


My 3900X is also very inefficient when idling. It never goes below 50W according to HWiNFO64. My entire PC is surely above 100W at idle.


Interesting, I'm currently running Windows 10 with a Ubuntu desktop in a VM and multiple web browsers open with lots of tabs. According to Ryzen Master, my 3900X is consuming ~25-30 watts.

Edit: Libre Hardware Monitor shows package consumption around ~55 watts which looks suspiciously close to CPU + chipset power consumption.


Does this apply also to laptop parts?


No, it doesn't. Unlike the desktop parts, the mobile chips are a single die; it's the separate IO die that seems to be the primary reason for the high idle draw. Current-gen AMD mobile chips are pretty much even with Intel at idle and significantly more efficient under any load metric.


Yes, and even moreso for the laptop/mobile CPUs because those generally tend to have more E-cores than P-cores compared to the desktop CPUs, which further reduces power use in exchange for less absolute performance.


Every comparison I've seen between Ryzen 5000/6000 laptops and Alder Lake laptops has AMD winning battery life by a wide margin. Intel may want you to believe their E cores are enough to win the efficiency battle, but in practical use, they aren't doing enough.


Why do we focus so much on TDP or max boost? How often and for how long are you running your CPU at max? I'd like to have the performance there when I need it but for most of the day I am sitting near idle.


> How often and for how long are you running your CPU at max?

On my workstation? 8-16 hours at a stretch is common, several times a week.


What are you doing that’s pegging your CPU at 100% for 8-16 hours? Or do you just turn off throttling?


Microsoft Teams.


What happens in the bedroom stays in the bedroom, and we don't kink shame here. :P

I would imagine some machine learning/development or video editing. Or playing dwarf fortress.


If someone is doing ML or video editing and the software is not using the GPU for most parts of the workflow, I'd like to invite that person to 2022.

Now Dwarf Fortress, that's another beast with no cure...


> If someone is doing ML or video editing and the software is not using the GPU for most parts of the workflow, I'd like to invite that person to 2022.

That sounds like a pretty strong statement, so I decided to try it out on some hardware from the last 5 years (an AMD GPU with VCE 3.0 and a 6/12 core/thread AMD CPU), in particular, encoding the same video: 1080p, 30fps with similar quality settings.

Here are the results for various encoders (using ffmpeg, through Handbrake and/or Kdenlive):

  How File         Size   Time
  CPU h264         105 MB 03:15
  GPU h264_amd_vce 198 MB 03:22
  CPU h265          99 MB 07:44
  GPU h265_amd_vce 361 MB 04:36
  CPU mpeg4        102 MB 02:07
  CPU mpeg2         72 MB 02:04
  CPU vp8           55 MB 06:13 (low CPU usage, <50%)

Seems like the quality settings don't actually mean much across different encoders, so these results aren't that conclusive for comparing the codecs against one another. However, one can surmise that the GPU isn't an order of magnitude ahead of the CPU for video encoding, at least on mid-tier consumer-grade hardware.

That probably changes on more specialized or recent hardware (something newer than VCE 3.0), or things like the aforementioned ML.
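If anyone wants to reproduce this kind of comparison, the harness is basically trivial. A sketch in Python, assuming ffmpeg is on your PATH and that whatever encoders you list actually appear in "ffmpeg -encoders"; note that hardware encoders (h264_vaapi, h264_amf, h264_nvenc, ...) usually need extra device/upload flags that aren't shown here:

  import os, subprocess, time

  def encode(src, encoder, out):
      # default quality settings -- same caveat as above, they aren't comparable across encoders
      cmd = ["ffmpeg", "-y", "-i", src, "-c:v", encoder, out]
      t0 = time.time()
      subprocess.run(cmd, check=True, capture_output=True)
      return time.time() - t0, os.path.getsize(out) / 1e6   # seconds, MB

  # "sample_1080p30.mp4" is just a placeholder for whatever clip you're testing with
  for enc in ["libx264", "libx265", "mpeg4"]:
      secs, mb = encode("sample_1080p30.mp4", enc, "out_" + enc + ".mp4")
      print(f"{enc:10s} {mb:6.0f} MB  {secs:6.1f} s")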


A proper hardware encoder will absolutely smoke CPU encoding. Switching from OpenGL software rendering to NVENC/VAAPI reduced power consumption considerably for me. Nvidia and Apple's hardware-based decoders are both top-notch, with Intel's also being fairly decent. Like the other commenter said, I've heard mixed things about the AMD decoders (not to insinuate that their GPUs/iGPUs are bad, though).

As for machine learning, most metrics right now suggest that conventional GPGPU calculations scale with power consumption. If you want high-performance ML without specialized hardware, your only choice is high-wattage hardware.


AMD's hardware encoders are also notoriously bad. Maybe the situation is better on some Nvidia cards.


A possible answer would be running Android Studio; another one, piles of Electron apps.


Electron apps.


ok, I know you're being snarky, but that legitimately made me laugh.


Discord Helper (Renderer), Skype Helper (Renderer) and Slack Helper (Renderer) are the processes most likely to make my laptop unresponsive so i don't see how he's snarky there :)


aka electron, electron, and electron :)


One word, bro: Docker!


torrenting linux isos


It must be IO bound.


Maya or Blender batch rendering


Yes, this will only be useful for servers, workstations, and performance nodes. I heavily doubt this will be useful for generic home computing any time soon.


I collect and rip Blu-Rays. Whenever I buy something big (like a complete TV series box set), it usually means my 5900x will be churning away in Handbrake non stop for a few days straight. It makes my basement a bit toasty.


But given that space is cheap and transcoding means you're necessarily compromising image quality, why bother re-encoding?


Because I have a lot of shows and space isn't that cheap, especially on mobile devices.

Video compression has improved a ton since the introduction of Blu-Ray in 2006. I can cut the average size of a 45 minute TV show from ~8-10GB on the disc to ~2GB with minimal or no perceptible quality loss. This makes a huge difference when I go to load up my iPad with movies and TV shows before I travel.

I do not bother re-encoding anything on a 4K disc, I just rip them straight to my NAS and strip out foreign audio and subtitle tracks.


HEVC can produce extraordinary quality for the size (especially on cartoons/anime) but man it is a fucking CRUNCH to get it there. It makes even x264 look easy by comparison.

And AV1 is a whole 'nother level past that... minutes per frame basically. But great size/quality.

Just as a casual observation, you should look at hardware AV1 encoders once they come out, because they'll beat software HEVC with a hardware (read: fast) encoder. Of course you also have to have support in the devices to play it back... which can be a problem with HEVC as well.


I may switch to AV1, but not until decoding support is ubiquitous within my device ecosystem. The big one is my iPad, I don't want to cut my battery life in half by watching movies with a software decoder during a long flight.

Seeing as the M2 and A16 both still lack AV1 decoders, I might be waiting a little while, which is fine since it gives encoders and processor speed more time to catch up to the speeds I currently get with x265.


Which drive are you currently using? I bought the external Archgon drive but never got to the flashing part.


Want to be friends?


TDP is correlated with consumption, so a higher TDP is indicative of how much the CPU will burn in general.


Completely false: idle and low-load power consumption has little to do with TDP. Case in point: Ryzen.


Post your exact numbers. I've done wall-power measurements with different Ryzen models.


HN is full of Intel bears / AMD bulls so they're going to latch onto whatever they can complain about.


CPU temps will spike and the computer turns into a revving jet engine. Opening a program is all it takes. You shouldn't have to have the biggest Noctua cooler on the market to keep a CPU below 90°C. They're meant to be quiet.


Kind of hard to say they're "meant to be quiet" if opening a program is all it takes to make them... not quiet.


big numbers good. we need big numbers. big numbers better than small numbers. big numbers please monkey brain. monkey brain happy = more sales = more money = bigger number in bank. bigger number in bank good


Seriously, reading semiconductor news these days feels like watching Dragon Ball Z. I won't even be surprised if someone announces a CPU that runs at over 9000MHz.


Daily, for a couple of hours at least. Max TDP is basically the only interesting metric, because that's what you have to design your power delivery and cooling for. Average power draw is only interesting for your energy bill.


Almost no one is capable of the nuance to actually compare these on real workloads without buying one.


For a whole minute before throttling.

Warning: required power station not included.


Properly cooled workstations don't throttle, ever.

I run simulations which use 100% of all cores for endless hours.

You just gotta use a decent thermal paste and cooling fan. In my case it's water cooling so I don't even hear the fans.


Warning: small lake required for proper cooling not included


Haha. It's a closed loop, just plug and play. The radiator is 14cm x 14cm and fits nicely inside the case.


Which model do you use? Just realized that I've only seen or heard about 120mm (single fan) AIOs :)

Briefly looked 140mm AIOs up, and have only found Corsair XR5 so far. These 140mm rads seem like a rare-ish thing. Did you randomly stumble upon one of those or is it, like, a new trendy thing?



I think an underappreciated thing in these conversations is that +200W of TDP of top end products nets a very small increase in actual performance, especially if you're not maxing out all 24 cores. The "halo" models are there purely for the enthusiast that want the absolute fastest CPU regardless of other tradeoffs not those concerned about things like air cooling or electricity cost. E.g. the 13700k is a 125W/253W CPU. The 13700T is a 35W/~105W? CPU. They have ~identical single thread performance and are within <25% on 24 core performance.

What the article doesn't cover is the actual performance uplift but that seems a given being an article about this newly rumoured model not an overview of 13th gen performance as a whole.

I love my power efficient M1 macbook and I love my number crunching Zen 3 CPU. The 2 SKUs I have aren't serving the same market and where the architectures overlap the differences aren't as profound as comparing extremes from each family would lead you to believe.


Yeah, basically unless x86 laptop hardware shows up that can match the M1's battery life/efficiency/performance, gigahertz and 13900K-class unicorn horns and all that is just marketing. Generations and "X% better than last generation" are all PowerPoint / tech-press bullet points.

The M1 produced a bigger end-user boost than the introduction of SSDs. We're essentially a year in on the M1, and there is no comparable x86 product yet, and I don't really see one on roadmaps.

Microsoft remains too incompetent to make a usable desktop OS, and their hardware forays are always spectacular failures, so they won't provide the hardware leadership. That leaves Intel and AMD, and both appear to be in head-in-sand mode.

And I am no mac zealot. The only thing I like about my M1 work laptop is the battery life, but it's such an incredible leap over any other laptop experience that it is the gold standard. I'd love a linux laptop that was comparable, x86 or ARM, but if the M1 is an A+, all other laptops are basically at C or C- grade.


If your only guiding metric is laptop perf/watt then absolutely the M1 is ahead of the pack. I'd call things like the 6800 at least B grade for that class though - close calls in both single/multi with each having a win and power efficiency not all that far behind for doing so. It'll be interesting to see Zen 4 based mobile CPUs vs the M2 in '23, the M2 didn't really make much of a move but Zen 4 did.


Does the "box" include a 2kW industrial chiller like last time?


My NUC ECE board can reach 5ghz. But I throttle all cores down to 3.2 GHz so it can run without the helicopter sounds from all the fans running full tilt. This sounds the same.


I would be pretty amazed if this 6ghz "stock" clock can be achieved with just air cooling.


Modern air coolers are extremely performant. The popular Noctua NH-D15 can handle these TDPs and more without problem, and has been available for years.


Wait, what? I've been researching this lately, and have been coming to the conclusion that the NH-D15 can NOT handle peak heat from Intel's 12700K.

The NH-D15's max cooling ability is slightly below the 12700K's max thermal output. So if you're running a 12700K, you can get away with it, because the likelihood that you're running the 12700K at max thermal output for long periods of time is pretty slim, and if you're running under max, then you're covered.

I don't remember TDP numbers off the top of my head, but if the 13700 has significantly higher thermal output, and it's going to be in the same case as the next-gen video cards with their higher output? That doesn't seem feasible for the NH-D15 anymore.


Heck, I have a cheap Coolermaster air cooler (I think it was $30) on my i9-9900K. I've run Prime95's max heat torture test on all 8 cores and my CPU will hover around 65 C, well below the thermal throttle threshold.


Either you mean 65C above ambient or something's wrong with your machine; unless you're running liquid nitrogen there's no way that chip's only at 65C under the max heat torture test. Here's Tom's Hardware getting 90C on the blend test (MUCH less heat than max heat) with a 240mm AIO. https://www.tomshardware.com/reviews/intel-core-i9-9900k-9th...


Hmm...you might be right...

I just tried the test again, and with the max heat test, I was in the 95-100 C range and it was throttling to 4.3 Ghz.

Now I'm wondering exactly what I did. It was years ago when I did this, right after I got my CPU.


Best guess, maybe you unintentionally only ran it on 1 thread?


Doubtful.

I did find an old message on Discord though where I talked about it:

> Ran the CPU torture tests in Prime95 on my new i9 with this Cooler Master Hyper 212 air cooler. When doing the Small FFTs test which generates maximum heat, yeah, it does suffer some thermal throttling. It'll run for about 30 seconds at full speed, then it'll enter a pattern where it throttles down to 3.6 Ghz for about 1/2 second, then jumps back up to 4.5 Ghz for 2 seconds, then throttles again. For the other two tests, it happily just hums without throttling at about 60 C. But even when suffering from thermal throttling, it's still faster than my old i7-3770k.

Thing is, right now, if I do the "Large FFTs" test, it's hitting 90C, whereas before it was only 60C.

I've had this CPU now for 4 years, and I have 3 cats and my computer is only a few inches off the floor. I wonder if I just need to dust out the cooler.


Fwiw 90C on Large FFTs is much more in line with what I'd expect. Curious if you can actually get it down in the 60s somehow


I'm sure a big air cooler like the Noctua NH-D15 would be enough to stop throttling.

Maybe Intel will include a beefier stock cooler in the box?


Intel's stock coolers were always a joke and people usually switched them out for something else. It might be good for Intel to bring some decent stock coolers to the entry-to-mid-level models, like AMD does.

But it really doesn't make sense to include a stock cooler with the i9 lineup, since that monster of a CPU will probably defeat every mid-level air cooler and would require a beefy Noctua or something similar...


Just one though, so some lucky individual will have the fastest 13th Generation Intel CPU.


how on earth does this not melt all matter in the known universe?

Forgive my layman's knowledge, I am just a humble software person, but isn't the equation of power vs. speed exponential, which is why CPU speeds topped out around 4GHz and we moved to multi-core processing? What sorcery exists that lets us suddenly break that barrier?


Lots and lots and lots of energy. The TDP of my first i7-920 was 95 watts, and rumors are this new Intel chip will come in at almost 3x that.


This processor's TDP will likely be 125W just like its predecessor and the generation before that and the generation before that, too. It's a practical design point that desktop cases, coolers, and motherboard power circuitry can hit.


Power = Capacitance * Voltage ^2 * frequency ; not exponential


Thank you! I just had a simple graph in my head. Now that there is something I can conceptualize, is it:

Power = Capacitance * Voltage ^(2 * frequency)

or

Power = Capacitance * (Voltage ^2) * frequency


In general you can often work out the answers to questions like this by considering units. In this case, we don't need to figure out what Watts / Farads is(1), we can see that Volts^Hz is not going to give us anything well-behaved.

(1)[m^4 kg^2 s^-7 A^-2]



The energy stored in a capacitor charged to voltage V is 0.5*C*(V^2). A chip charges and discharges that capacitance, burning that energy 2*(frequency) times per second, so P=C*(V^2)*F.

Edit: forgot a half


It's

Power = Capacitance * (Voltage ^2) * frequency
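And the reason clock pushes still feel exponential in cost is that more frequency usually also means more voltage. With made-up but plausible numbers: if 5.5 GHz needs 1.25 V and 6.0 GHz needs 1.40 V, then

  P_new / P_old = (1.40 / 1.25)^2 * (6.0 / 5.5) ≈ 1.25 * 1.09 ≈ 1.37

i.e. roughly 37% more power for about 9% more clock, which is why the last few hundred MHz are so expensive.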


I look forward to TSMC fabbing it for them.

In all seriousness, hasn't Intel been hanging on to their current architecture for way too long at this point? IIRC Ryzen consumes less power and does more per Watt on the high end and ARM is eating them up on the low end. It feels like Intel is just trying to squeeze out a little more from a much-delayed 10nm process and their existing architecture.

It sort of reminds me of Pentium 4 just outstaying its welcome.

Genuinely curious: what's on the horizon for Intel as their next big change and not something that's just a marketing clock speed boost?


Current architecture, no - this big+little architecture just started with the last generation, like a year ago I think? However, if you mean process node, then yes, I think this is on the same one they’ve been using for awhile. But they have 2-3 more nodes coming in the next 3-4 years. If they don’t screw those up, they should be ok. That’s a big if, though.

They’re also changing to a chiplet-based architecture with their next Xeon line which will be interesting.


They finally started moving forward in the 11th gen, where they backported the core originally meant for 10nm to 14nm. The 12th gen had a nice IPC boost; pity they didn't keep AVX-512 enabled, IMO.


Arm is absolutely nowhere. Apple is very good, but that's not an Arm-designed core.

Also you're wrong, it's really not an existing architecture unless you talk in extremely broad strokes.


Incidentally, the chip is code-named Preshott 2

Snark aside, I have a couple of 3700x machines and an itch. Kind of split between upgrading my gaming machine (not entirely worth it) or changing the mini-ITX and severely undervolting a 7700x.

Or I can just wait for Zen 5, which was my original plan, but as I said, I've got the captain's itch.


Apparently this is a stopgap until Meteor Lake, which will use the "Intel 7" process, hopefully improving on power efficiency. However Meteor Lake isn't scheduled until 2nd half of 2023.

These 13 gen are Raptor lake, which uses the same process as Alder Lake.


Meteor Lake is on Intel 4.


Doubling processor speed over 20 years is still exponential.


Literally halt and catch fire.


... and will it be a 400 W cpu or a 600 W cpu?

Where have I seen this "more megahertz" strategy at Intel? Right, with the Pentium 4.


I can't wait to see the RPCS3 benchmarks. We are so close to perfection with the 12900K already.


This is literally crazy. To think just a few years ago we could barely get 3Ghz. With this and AI breakthroughs I think the next decade will be crazy.


I'm betting 99 percent of devices using this chip will be used 99 percent of the time for internet access, meaning the bottleneck is now bandwidth.


Intel still playing the GHz game, not having learned anything from the Pentium 4?


They are losing the process manufacturing game so need to make up the lost ground somewhere.


Golden Cove (Alder Lake) was a massive uplift in performance-per-clock over the previous generations. This is also related to the fact that Intel was iterating on Skylake for most of the time until ADL, and Skylake itself is barely more than Sandy Bridge with a few generations of iterative tweaks. I believe Raptor Cove (what will be in 13th gen) is not that big of a change from Alder Lake; it's the same but with some refinements and mainly larger caches.

Furthermore, the P-core/E-core divide means that they can make the P-cores quite large and inefficient, as E-cores will pick up the heavily-threaded tasks. So while a Zen core is significantly smaller than a Golden Cove core, Gracemont is much smaller still.



No, AMD and Intel are still clubbing each other over the head with IPC gains.

Alder Lake was Intel's first proper response to Zen: a huge IPC boost.


Every few years either AMD or Nvidia comes out with some chips that blow away the competition and everyone claims that the chip war is dead. Then the gap slowly closes again until they are sort of neck and neck until one of them does the same thing.


> everyone claims that the chip war is dead

Who claimed this?


Yeah, but how much does the gap actually close in each of those cycles?


And use fifty gigawatts of power, or?


Nice, that will be my next build!


Tejas and Jayhawk live!!!


Are we back to the GHz wars?


1 Planck meter! 12 GHz!


this chip will cause a blackout in the neighborhood


Does it come with an included nuclear power plant?


At what TDP?


No more than 1kW


Even if that were true, there'd be a niche for these. There are still tasks for which single threaded performance dominates.

Not sure it'd be a big enough niche to save Intel's market share though.


I can finally have the greatest Minecraft server ever


Dwarf fortress as well, though not a server unless you play over SSH.


Nice and cheap for us Europeans!


Might as well be productive while we heat our homes this winter!


Resistive heater with a cheeky twist!


Nearly 100% efficient!


You can also boil water for tea while you game!


finally! doom at full speed


Clock rate limits are directly due to physical heating limits on-chip with silicon. That's why clock speed stopped being an automatic gimme of scaling, and it likely never will come back as a systematic thing because the laws of physics can't be changed.

Cores were the direct answer to this limit. But then Amdahl's Law limits what cores can do.

What this story means is someone decided to trade off thermal effects for clock rate. The primary effect of heat is reliability - that is, how many years of operation can you expect with the final product. Heat kills the life span of transistors, metal and oxide.

Honestly there's nothing special about hitting 6 GHz - it's not some magical thing. Magical thinking people!!


I read the headline as 60 GHz and raised my eyebrows, then reread it: just another non-news item. CPU tech has been extremely boring in recent decades.


I wouldn't say that. At least in the last few years it's been way more exciting than before, since AMD is competitive again and Apple uses ARM chips with a whole new tradeoff strategy. The only boring part is Intel, IMO.


Given the fact that x86 instructions are not generally completed in a single cycle, how much of this increased clock do we actually expect to translate into faster performance? Presumably we're not expecting a 50% performance increase here.


X% higher clock gives X% better compute performance, regardless of the number of clock cycles each instruction takes, at least as long as the workload isn't bottlenecked elsewhere (e.g. memory).


Most x86 instructions do translate into a single uop that executes in a single cycle. Going from 5.5 to 6.0 GHz should be around 9% faster; maybe 10% with the larger L2.


When are we getting good alternative to M1?


If battery life is what you're after, looking at a 1.5 generation (soon to be 2.5) old 35W Ryzen that gets 11 hours of battery life should give you clues. Ryzen 7000 mobile should be announced in January, and those will utilize TSMC 5nm.

https://www.laptopmag.com/features/how-amd-ryzen-whooped-int...

https://www.tomshardware.com/news/amd-ryzen-4900hs-battery-l...

Or 0.5 generation ago 15W Ryzen getting 17-20 hours...

https://youtu.be/An3OpQ7v0rs?t=546



A fork called Fandaniel Linux is sorely needed. It doesn't even need to have anything.


When you buy AMD instead of Intel.


AMD is worse in every regard


That's a pretty bold claim that you're going to need to back up. AMD has been beating Intel in performance-per-watt for years now, and both are regularly trading blows when it comes to fastest consumer chip.

And that's not even talking about EPYC which is pretty thoroughly trouncing Xeon in just about every metric.


probably never.


Intel and AMD are waving the white flag. They're bowing to Apple and admitting that they can't compete in the laptop arena.


This comment is a bit like saying Tesla is admitting they cannot compete in entry-level electric cars because they announced the Cybertruck's maximum performance figures.

Meanwhile, 9 out of 10 laptops sold... contain Intel or AMD chips (or presumably Qualcomm), rather than Apple Silicon.

https://macdailynews.com/2022/01/31/apple-takes-10-share-of-...

Now if the question was... is Apple Silicon much more efficient... than an enthusiast level desktop CPU?


How is this "waving the white flag." That usually signals defeat, this is an attempt to build hype.


Can you run this chip on a laptop? How will they hope to compete with Apple's M2Pro/Max?

Answer: they won't. It'll be like phone chips where Apple is 2+ years ahead of the game for perf/watt.


It sounds like Samsung's marketing on the phone side. Wasn't there a time when they competed on having more cpu cores? And they were caught overclocking them when benchmarks were run, leading to power consumption and heat that was unsustainable in normal usage?

At least Intel is doing this with a desktop CPU. If you can afford the electricity bill generated by said CPU and the A/C to cool the room.



