
Compared to the AMD Radeon Pro 5500M, the base GPU in the 16" MacBook Pro: https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...

Compared to the AMD Radeon Pro 5600M, the +$700 upgrade in the top of the line MacBook Pro 16": https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...

Note that the Onscreen numbers are capped at 60fps on OS X, so ignore any Onscreen results at 59-60fps.



Considering that there’s no difference between the 1050 Ti in the OP and the 5500M that PragmaticPulp posted, I’m inclined to say this test sucks. Userbenchmark.com shows there should be a substantial (38%) improvement between those two. Take these early results with a HUGE grain of salt, because they smell fishy.

https://gpu.userbenchmark.com/Compare/Nvidia-GTX-1050-Ti-vs-...


Well, two things there.

1: Userbenchmark.com is terrible and nobody should use it for anything. At least their CPU side of things is hopelessly bought & paid for by Intel (and even within the Intel lineup they give terrible & wrong advice); maybe the GPU side is better, but I wouldn't count on it.

2: The real question there isn't "why is the 1050 Ti not faster?" it's "how did you run a 1050 Ti on MacOS in the first place, since Nvidia doesn't make MacOS drivers anymore and hasn't for a long time?"


> Userbenchmark.com is terrible and nobody should use it for anything. At least their CPU side of things is hopelessly bought & paid for by Intel (and even within the Intel lineup they give terrible & wrong advice); maybe the GPU side is better, but I wouldn't count on it.

To provide some elaboration on this: their overall CPU score used to be 30% single-core, 60% quad-core, 10% multicore. Last year, around the launch of Zen 2, they gave it an update. Which makes sense; the increasing ability of programs to actually scale beyond four cores means that multicore should get more weight. And so they changed the influence of the multicore numbers from 10% to... 2%. Not only was it a blatant and ridiculous move to hurt the scores of AMD chips, you got results like this, where an i3 beats an i9: https://cdn.mos.cms.futurecdn.net/jDJP8prZywSyLPesLtrak4-970...
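To make the weighting change concrete, here's a minimal sketch with made-up subscores (the per-chip numbers are hypothetical, and the 40/58 single/quad split for the new weighting is my guess for illustration; only the 10% -> 2% multicore change comes from the above), showing how the same results flip once multicore barely counts:

    # Hypothetical subscores on a 0-100 scale; not real UserBenchmark data.
    # "few_cores" = high-clock 4-6 core chip, "many_cores" = 12-16 core chip.
    chips = {
        "few_cores":  {"single": 95, "quad": 92, "multi": 40},
        "many_cores": {"single": 88, "quad": 86, "multi": 100},
    }

    def composite(scores, w_single, w_quad, w_multi):
        # Weighted overall score from single/quad/multicore subscores.
        return (scores["single"] * w_single
                + scores["quad"] * w_quad
                + scores["multi"] * w_multi)

    for label, weights in [("old 30/60/10", (0.30, 0.60, 0.10)),
                           ("new 40/58/2", (0.40, 0.58, 0.02))]:
        totals = {name: composite(s, *weights) for name, s in chips.items()}
        winner = max(totals, key=totals.get)
        print(label, {n: round(t, 1) for n, t in totals.items()}, "->", winner)

With the old weights the many-core chip edges ahead; with the new weights the identical subscores put it clearly behind.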

And it looks like there was some suspicious dropping of Zen 3 scores a week ago, too.


The one that really made it screamingly obvious is that the description of the Ryzen 5 5600X still somehow recommends the slower 10600K: https://cpu.userbenchmark.com/AMD-Ryzen-5-5600X/Rating/4084

And they added some "subjective" metric so even when an AMD CPU wins at every single test, the Intel one can still be ranked higher.

There's a reason they've been banned from most major subreddits. Including /r/Intel.


Why should I care about a subreddit? They are all probably moderated by the same people. It could be that one person got offended and happens to be a mod.


I don’t see that as evidence of blatant bias for Intel. The site is just aimed at helping the average consumer pick out a part, and I think the weighting makes sense.

Most applications can only make use of a few CPU-heavy threads at a time, and a system with 18 cores will not make any difference for the average user. In fact, the 18-core behemoth might actually feel slower for regular desktop usage, since it’s clocked lower.

If you are a pro with a CPU-heavy workflow that scales well with more threads, then you probably don’t need some consumer benchmark website to tell you that you need a CPU with more cores.


But lots of things do use more than 4 cores, with games especially growing in core use over time. Even more so if you want to stream to your friends or have browsers and such open in the background. To suddenly set that to almost zero weight, when it was already a pretty low fraction, right when zen 2 came out, is clear bias.

> In fact, the 18 core behemoth might actually feel slower for regular desktop usage since it’s clocked lower.

It has a similar turbo; it won't.


The number of processes running on a Windows OS reached 'ludicrous speed' many years ago. Most of these are invisible to the user, doing things like telemetry, hardware interaction, and low-level and mid-level OS services.

A quick inspection of the Details tab in my Task Manager shows around 200 processes, only half of which belong to the browser.

And starting a web browser with one page results in around half a dozen processes.

Every user is now a multi-core user.
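If you want to reproduce that count on your own machine, here's a minimal sketch using the third-party psutil package (assuming it's installed); it groups running processes by executable name, so the browser's share is easy to see:

    # Requires the third-party "psutil" package: pip install psutil
    from collections import Counter

    import psutil

    # Count running processes, grouped by executable name.
    names = Counter(
        (proc.info["name"] or "<unknown>")
        for proc in psutil.process_iter(["name"])
    )

    print("total processes:", sum(names.values()))
    for name, count in names.most_common(10):
        print(f"{count:4d}  {name}")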


Re #2, the Nvidia web drivers work great if you're on High Sierra.


Regarding 2: I think that none of those benchmarks were run on MacOS. Their benchmark tool seems to be Windows-only: https://www.userbenchmark.com/ (click on "free download" and the MacOS save dialogue will inform you that the file you are about to download is a Windows executable).


The gfxbench link in the OP comparing the M1 vs. the GTX 1050 Ti says that the 1050 was tested on MacOS. That's what I was referring to.


1. Today I learned something new. Still, can’t let great be the enemy of good. It may be imperfect, but it’s the source I used. Do you have a better source I can replace it with?

2. That’s a good question and I don’t have an answer for that.


Sure, great is the enemy of good and all that, but the allegation here and in other threads is that these benchmarks are bad. Or, worse, inherently and deliberately biased.

As for a better source, I don't know of one with the M1 being so new, but that's no reason to accept bad data, if this benchmark actually is as bad as others here are saying.


Please don't use UserBenchmark for anything. The site is so misleading that it's banned from both r/amd and r/intel.


Looks pretty good against Intel's new Xe Max:

https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...

It's not the 4-6x in raw graphics improvement advertised but at 10W vs Xe Max's 25W for just the GPU, the M1 getting 50% more fps, that's still 3.75x in perf/watt.


I wouldn't brag when comparing Metal to OpenGL.

That aside, let's really not forget that this is an HBM type architecture. As a complete package it seems awesome, but we can argue for ages about the performance of the GPU cores with no end result.


> this is an HBM type architecture

You'd think so, but it seems most people[1] think it's just LPDDR wired to the SoC using the standard protocol, inside the same package. (Though it might use more than one channel, I guess?)

[1] e.g. in the spec table here: https://www.anandtech.com/show/16235/apple-intros-first-thre... - an interesting thing in the Mac mini spec table is also the 10x downgrade of the Ethernet speed.


LPDDR channels are weird. https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de... shows "8x 16b LPDDR4X channels"

Which would be the same width as dual channel DDR4 – 8x16 == 2x64 :)
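Back-of-the-envelope, that bus width also gives you the peak bandwidth; the LPDDR4X-4266 transfer rate below is an assumption based on the linked AnandTech coverage rather than anything official:

    # Total bus width: 8 channels x 16 bits = 128 bits, i.e. 2 x 64-bit DDR4 worth.
    channels, bits_per_channel = 8, 16
    bus_width_bits = channels * bits_per_channel

    # Assuming LPDDR4X-4266 (4266 MT/s), theoretical peak bandwidth:
    transfers_per_second = 4266e6
    bandwidth_gb_per_s = bus_width_bits / 8 * transfers_per_second / 1e9

    print(bus_width_bits)                  # 128
    print(round(bandwidth_gb_per_s, 2))    # ~68.26 GB/s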

Also, it's completely ridiculous that some people think it might be HBM. The presentation slide (shown in both AnandTech articles) very obviously shows DDR-style chips, with their own plastic packaging. That is not how HBM looks :) HBM would also noticeably raise the price.


Damn, I would happily take the 10-20% performance hit to avoid having the laptop turn into a jet engine as soon as I connect it to a monitor.


You can have that trade-off already by disabling turbo boost.


Unfortunately it’s the GPU that causes the issue, not the CPU.

Whatever bus video runs over is wired through the dedicated GPU, so the integrated GPU is not an option with external monitors connected. That by itself would be fine, except that, for whatever reason, driving monitors with mismatched resolutions maxes out the memory clock on the 5300M and 5500M. This causes a persistent 20W power draw from the GPU, which results in a lot of heat and near-constant fan noise, even while idle. As there isn’t a monitor in the world that matches the resolution of the built-in screen, the clocks are always maxed (unless you close the laptop and run in clamshell mode).

The 5600M uses HBM2 memory and doesn’t suffer from the issue, but a £600 upgrade to work around it is lame, especially when I don’t actually need the GPU; you just can’t buy a 16" without one.

Disabling turbo boost does help a little, but it doesn’t come close to solving it.


My memory is hazy on this, but I did come across an explanation for this behaviour. At mismatched resolutions or with fractional scaling (and mismatched resolutions are effectively fractional scaling), macOS renders the entire display to a virtual canvas first. This effectively requires the dGPU.

Your best bet is to run external displays at the MBP’s resolution, and because that is not possible/ideal, you are left with the choice of running at 1080p/4K/5K. macOS no longer renders crisply at 1080p, so 3840x2160 is the last remaining widely available and affordable choice, while 5K is still very expensive.
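As a concrete example of that virtual canvas (a sketch of the commonly described HiDPI behaviour, assuming a 4K panel set to "looks like 2560x1440"):

    # macOS HiDPI fractional scaling, as commonly described: the desktop is
    # rendered at 2x the chosen "looks like" resolution, then downsampled
    # to the panel's native resolution.
    panel = (3840, 2160)          # native 4K external display
    looks_like = (2560, 1440)     # chosen scaled resolution

    backing = (looks_like[0] * 2, looks_like[1] * 2)
    extra = (backing[0] * backing[1]) / (panel[0] * panel[1])

    print(backing)                # (5120, 2880) virtual canvas
    print(round(extra, 2))        # ~1.78x the panel's pixels rendered per frame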


Hardly anyone makes 5K displays - I have a pair of HP Z27q monitors made in late 2015 that are fantastic, but I had to get them used off eBay because HP and Dell both discontinued their 5K line (Dell did replace it with an 8K, but that was way out of my budget).

Part of the reason for 5K’s low popularity was its limited compatibility: early 5K monitors required 2x DP 1.2 connections in MST mode. Apple’s single-cable 5K and 6K monitors both depend on Thunderbolt’s support for 2x DP streams to work. I’m not aware of them working as PC-compatible monitors at native resolution yet.

I love 5K - but given a bandwidth boost I’d prefer 5K @ 120Hz instead of 8K @ 60Hz.
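Rough numbers on why, ignoring blanking overhead and DSC and assuming 24-bit colour, so treat these as ballpark only:

    # Raw uncompressed pixel rate: width * height * refresh * bits per pixel.
    # Ignores blanking/overhead and DSC; assumes 24-bit colour. A single DP 1.2
    # link carries roughly 17.28 Gbps of payload, hence the 2x MST for 5K @ 60Hz.
    def raw_gbps(width, height, hz, bpp=24):
        return width * height * hz * bpp / 1e9

    print(round(raw_gbps(5120, 2880, 60), 1))    # 5K @ 60Hz  -> ~21.2 Gbps
    print(round(raw_gbps(5120, 2880, 120), 1))   # 5K @ 120Hz -> ~42.5 Gbps
    print(round(raw_gbps(7680, 4320, 60), 1))    # 8K @ 60Hz  -> ~47.8 Gbps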


I am a bit curious to know why this specific problem appeared in various Nvidia lineups at the beginning of the decade, and is reappearing now.


You should be able to easily downclock and undervolt the GPU persistently.


I thought the 5300M was the base GPU. That's what I have anyway.

https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...


We need this compared with the RX 580 running in the Blackmagic eGPU.

That’s the most relevant GPU comparison, given it is the entry-level, Apple-endorsed way to boost Mac GPU performance.

It also helps in understanding the value of the 580 and Vega on pre-M1 Macs.


Also the M1 is built on TSMC 5nm.

The AMD Radeon Pro 5600M is built on TSMC 7nm.


Looks like there are interesting "offscreen" optimizations that might need to be re-implemented for the M1, IIUC.


Offscreen benchmarks just mean that they are run at a fixed resolution and not limited by vsync. These benchmarks are better for comparing one GPU to another. Onscreen results can be artificially limited to 60fps and/or run at the native resolution of the device, which can hugely skew the results (a laptop might show double the benchmark speed just because it has a garbage 1366x768 display).
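As a toy illustration of that skew (everything here is made up: a purely fill-rate-bound workload and an arbitrary pixel throughput, with 1080p assumed as the fixed offscreen resolution):

    # Toy model: assume a purely fill-rate-bound workload, so achievable fps
    # scales inversely with pixel count. The throughput figure is made up.
    pixels_per_second = 250_000_000

    for width, height in [(1366, 768), (1920, 1080), (3840, 2160)]:
        fps = pixels_per_second / (width * height)
        print(f"onscreen at native {width}x{height}: ~{fps:.0f} fps")

    # Offscreen: every device renders the same fixed-size frame, so the score
    # no longer depends on whatever panel happens to be attached.
    print(f"offscreen at 1920x1080: ~{pixels_per_second / (1920 * 1080):.0f} fps")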


I think those are the same link.



