Hacker News

M1 is using standard LPDDRx. It's not "very high performance". It uses a different interconnect -- that's it.

I guarantee that 64GB doesn't cost anywhere near EUR 300.

You might be thinking of HBM[2] which has a wider I/O path and costs more.




> M1 is using standard LPDDRx. It's not "very high performance".

AnandTech disagrees with you: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

Besides the additional cores on the part of the CPUs and GPU, one main performance factor of the M1 that differs from the A14 is the fact that it’s running on a 128-bit memory bus rather than the mobile 64-bit bus. Across 8x 16-bit memory channels and at LPDDR4X-4266-class memory, this means the M1 hits a peak of 68.25GB/s memory bandwidth.

Later in the article:

Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before.


Anandtech is comparing M1 vs A14. It's high performance for a cellphone part.

Dual-channel DDR3 or DDR4 also has a 128-bit bus. DDR4-4266 is clocked on the high side for most laptops, sure, but it's hardly unusual.

Run the numbers and you get the exact same throughput figure as for M1, which isn't surprising, because we're just taking width * rate = throughput.

So I'll repeat my assertion, downvotes be damned: the memory on the M1 is not special. The packaging and interconnect are interesting. They might reduce latency a little; they probably reduce power consumption a lot. But there's nothing special about the memory itself. The computer you're on right now probably has the same memory subsystem with different packaging.


> Anandtech is comparing M1 vs A14. It's high performance for a cellphone part.

That’s where they started, but their conclusion was beyond that.

Did you miss the part where they said "the fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before"?

This isn’t only about A14 vs M1.

It’s not that LPDDR4X-4266-class memory is special; it’s been around for a while. What is special is that the RAM is part of the SoC package and, due to the unified memory model, the CPU, GPU, Neural Engine and the other units all have very fast access to the same memory.

This is common for tablets and smartphones; it’s not common for general purpose laptops and desktops. And while Intel and AMD have added more functionality to their processors, they don’t have everything that’s part of the M1 system on a chip:

* Image Signal Processor (ISP)

* Digital Signal Processor (DSP)

* 16-core Neural Processing Unit (NPU)

* Video encoder/decoder

* Secure Enclave

There’s no other desktop like the M1 Mac mini that combines all of these features at this level of performance at a $699 price point.

That is special.


> Did you miss the part where they

I don't think that's notable, sorry. I would expect that of any modern CPU.

> it’s not common for general purpose laptops and desktops

Well, yeah, because "memory on package" has major disadvantages. You (the laptop/desktop manufacturer) are making minor gains in performance and power, and you'd need to buy a CPU-plus-memory package that doesn't exist on the open market. Apple can do it, but they were already doing it for the iPhone, and they must do it for the iPhone to meet space constraints.

I think unified memory is the right way to go, long term, and that's a meaningful improvement. But as you point out, there is plenty of prior work there.

> they don’t have everything that’s part of the M1 system on a chip

They actually do! The 'CPU' part of an Intel CPU is vanishingly small these days. Most area is taken up with cache, GPU and hardware accelerators, such as... hardware video encode and decode, image processing, security and NN acceleration.

Most high-end Android cellphone SoCs have the same blocks. NVIDIA's SoCs have been shipping the same hardware blocks, with the same unified memory architecture, for at least four years. They all boot Ubuntu and give a desktop-like experience on a modern ARM ISA.

> There’s no other desktop ... at the price point of $699

Literally every modern Intel desktop does this.


I've seen latencies when pointer chasing (in a relatively TLB friendly pattern) of 30-34ns. Have you seen similar elsewhere?


https://news.ycombinator.com/item?id=25050625

showed https://www.cpu-monkey.com/en/cpu-apple_m1-1804

which determined the M1's memory is LPDDR4X-4266 or LPDDR5-5500. If those memories are not high performance, what is?


You can't do what you do on a desktop on a laptop, not even a good one

Who cares if an M1 consumes less energy than a candle if I can buy 64GB of DDR4-3600 for 250 bucks and render the VFX for a 2-hour movie in 4K?

Another 300 bucks buys me a second GPU

When I deliver the job I put aside another 300 bucks and buy a third GPU

Or a better CPU

Vertical products are an absolute waste of money when you're chasing the last bit of performance to save time (for you and your clients) and don't have Elon Musk's budget

The M1 changes nothing in that space

Which is also a very lucrative space where every hour saved is an hour billed doing a new job instead of waiting to finish the last one to get paid

You can't mount your old gear on a rack and use it as a rendering node, plus you're paying for things you don't need: design, thermal constraints, a very expensive panel (a very good one, but still attached to the laptop body, and small)

So no, the M1 is not comparable to a Threadripper, it's not even close, even if the Threadripper consumes a lot more energy

When I see the same performance and freedom to upgrade in a 20W chip, I'll be the first to buy it!

https://www.newegg.com/corsair-64gb-288-pin-ddr4-sdram/p/N82...

Then there's the remaining 92% (actually 92.4%) of the market that is not using an Apple computer and will keep buying non-Apple hardware

Even if Apple doubled their market share, it would still be 15% vs 85%

How people on HN don't realise that 90 is much bigger than 10, and that it's not a new laptop that will overturn the situation in a month, is beyond me


> You can't do what you do on a desktop on a laptop, not even a good one

Ummm... ok. But my point was that not all RAM is equivalent.


Not all cars are equivalent

I guess you don't drive a Ferrari or a Murciélago

And does it really matter to have a faster car if you can't use it to go camping with your family because space is limited?

That's what an Apple gives you, but it's not even a Ferrari, it's more like an Alfa Duetto

It's not expensive if you compare it to similar offers in the same category with the same constraints (which are artificially imposed on Macs, as if there were no other way to use a computer...)

But if you compare it to the vast number of better configurations that the same money can buy, it is


>You can't do what you do on a desktop on a laptop, not even a good one

Yeah… no, those days are over. The reviews clearly show the M1 Macs, including the MacBook Pro, outperform most "desktops" at graphics-intensive tasks.

>So no, M1 is not comparable to a Threadripper, it's not even close, even if it consumes a lot more energy

Um… nobody is comparing an M1 Mac to a processor that often costs more than either the M1 Mac mini or MacBook Pro. However, the general consensus is that the M1 outperforms PCs with mid- to high-end GPUs and CPUs from Intel and AMD. Threadripper is a high-end, purpose-built chip that can cost more than complete systems from most other companies, including Apple. However, that comes at a cost in power consumption, special cooling in some cases, etc.

>Who cares if an M1 consumes less energy than a candle if I can buy 64GB of DDR4 3600 for 250 bucks and render the VFX for a 2 hours movie in 4k. Another 300 bucks buy me a second GPU

The MacBook Pro has faster LPDDR4X-4266 RAM on a 128-bit wide memory bus. The memory bandwidth maxes out at over 60 GB/s. And because the RAM, CPU and GPU (and all of the other units in the SoC) share the same package and memory controllers, memory access is extremely fast.

From AnandTech; emphasis mine [1]: "A single Firestorm achieves memory reads up to around 58GB/s, with memory writes coming in at 33-36GB/s. Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before."

It can easily render a 2-hour 4k video unplugged in the background while you're doing other stuff. And when you're done, you’ll still have enough battery to last you until the next day if necessary. According to the AnandTech review [1], it blows away all other integrated GPUs and is even faster than several dedicated GPUs. That's not nothing, and these machines do it for less money.

>vertical products are an absolute waste of money when you chase the last bit of performance to save time (for you and your clients) and don't have the budget of Elon Musk

>The M1 changes nothing in that space

This is not correct… seeing should be believing.

Here's a video of 4k, 6k and 8k RED RAW files being rendered on an M1 Mac with 8 GB of RAM, using DaVinci Resolve 17 [2]. Spoiler: while the 8k RAW file stuttered a little, once the preview resolution was reduced to 4k, the playback was smooooth.

[1]: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

[2]: https://www.youtube.com/watch?v=HxH3RabNWfE


The M1 beats low end desktop GPUs from a couple of generations ago (~25% faster than the 1050ti and RX560 according to this benchmark [0]). Current high end GPUs are much faster than that (e.g the 3080 is ~5 times as powerful as a 1050ti).

Don't get me wrong - this is still very impressive with ~20W combined (!) power draw under full load, but it definitely doesn't beat mid- to high-end desktop GPUs.

(This is largely irrelevant for video encoding/decoding though as you can see - as that's mostly done either on the CPU or dedicated silicon living in either the CPU or the GPU that's separate from the main graphics processing cores.)

[0] https://www.macrumors.com/2020/11/16/m1-beats-geforce-gtx-10...


How much does a 3080 cost? Could you build a complete computer around one for $1000?


You're missing the point. I'm not trying to argue about which system is better, I'm just saying that the comment I'm replying to is saying incorrect things about GPU performance. I'll answer your question anyway though:

You could build a complete desktop system including a GPU that's more powerful than the one in the M1 for ~$1000, but certainly not a 3080. They're very expensive, and nobody has any in stock anyway.

An RX 580 or 1660 would probably be the right GPU with that budget. (Although you could go with something more powerful and skimp on CPU and RAM if you only cared about gaming performance.)


- A 3080 costs > $750. Good luck buying one; I would if it weren't out of stock. On the other hand, a GTX 1050 mobile (roughly the performance class of the M1's GPU) can easily be found on eBay for < $50

- Yes, you totally can. The best thing is that with a $1k entry-level machine you can start working on real-life projects that have deadlines and start earning money that will let you upgrade your gear to the level you actually need, without having to buy an entire new machine. The old components can serve as spare parts or to build a second node. You don't waste a single penny on things you don't need.

Though, it's true, you can't brag to friends that it draws only 20 watts at full load and that the warmth of the aluminium body is actually pleasant

It's a big sacrifice, I understand it.


> The reviews clearly show the M1 Macs, including the MacBook Pro outperform most "desktops" at graphics-intensive tasks.

They don't!

Cut the BS

> Here's a video of 4k, 6k and 8k RED RAW files being rendered on an M1 Mac

Blablablabla

That's not rendering



