Apple unveils M1, its first system-on-a-chip for portable Mac computers (9to5mac.com)
1365 points by runesoerensen on Nov 10, 2020 | 1346 comments




Completely unconfirmed speculation incoming:

There's a solid chance that the logic board is exactly the same on all of the Macs announced today and the only difference is the cooling solution. If you play around with the Apple Store configurator, the specs are all suspiciously similar between every new Mac.

https://www.apple.com/shop/buy-mac/macbook-air

https://www.apple.com/shop/buy-mac/macbook-pro/13-inch

https://www.apple.com/shop/buy-mac/mac-mini


At Apple's volume and level of system integration, it doesn't make sense to do assembly sharing at that level between different models. Presumably the SoC package is the same between the different products, but binned differently for the Air, Pro, and Mini. The actual logic boards would be custom to the form factor.


Not just that. At 5nm there will also be yield problems, i.e. they will bin the best-yielding dies into the high-end parts and the worst-yielding ones into the low end.


This is undoubtedly why they launched the Mac Mini today. They can ramp up a lot more power in that machine without a battery and with a larger, active cooler.

I'm much more interested in actual benchmarks. AMD has mostly capped their APU performance because DDR4 just can't keep the GPU fed (which is why the last two generations of consoles went with very wide GDDR5/6). Their solution is obviously Infinity Cache, where they add a bunch of cache on-die to reduce the need to go off-chip. At just 16B transistors, Apple obviously didn't do this (at 6 transistors per SRAM cell, there are around 3.2B transistors in just 64MB of cache).
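A quick back-of-the-envelope check on that 3.2B figure (a rough sketch assuming plain 6T cells, ignoring tag arrays and peripheral circuitry):

    # transistors needed for a 64MB cache, assuming plain 6T SRAM cells
    bits = 64 * 1024 * 1024 * 8
    print(bits * 6 / 1e9)   # ~3.2 billion, about a fifth of M1's ~16B budget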


Okay, probably a stupid question, but solid state memory can be pretty dense: why don't we have huge caches, like a 1GB cache? As I understand it, cache memory doesn't give off heat like the computational part of the chip does, so heat dissipation probably wouldn't increase much with a larger chip package.


NAND flash is pretty dense, but way too slow. SRAM is fast but not at all dense, needing 6 transistors per bit.

For reference: https://en.m.wikipedia.org/wiki/Transistor_count lists the largest CPU as of 2019, AMD's Epyc Rome, at 39.54 billion MOSFETs, so even if you replaced the entire chip with SRAM you wouldn't quite reach 1GB!

DRAM would be enticing, but the details matter.
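To put a number on that ceiling (a sketch under the same 6-transistors-per-bit assumption, with zero overhead for anything else):

    # max SRAM capacity if Epyc Rome's entire transistor budget were 6T cells
    transistors = 39.54e9
    capacity_bytes = transistors / 6 / 8
    print(capacity_bytes / 2**30)   # ~0.77 GiB, still short of 1GB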


NAND is nonvolatile, and the tradeoff for that is write cycles. We have an in-between in the form of 3D XPoint (Optane); Intel is still trying to figure out the best way to use it. It currently acts like an L6 cache after system DRAM.


Well not just Intel. Optane is a new point in the memory hierarchy. That has a lot of implications for how software is designed, it's not something Intel can do all by itself.


SRAM is 6 transistors per bit, so you're talking about 48 billion transistors there, and that's ignoring the overhead of all the circuitry around the cells themselves.

DRAM is denser, but difficult to build on the same process as logic.

That said, with chiplets and package integration becoming more common, who knows... One die of DRAM as large cache combined with a logic die may start to make more sense. It's certainly something people have tried before, it just didn't really catch on.


> DRAM is denser, but difficult to build on the same process as logic.

What makes it so difficult to have on the same chip?


I don't know the details, but the manufacturing process is pretty different. Trying to have one process that's good at both DRAM and logic at the same time is hard, because they optimize for different things.


Or, well, how about putting all your RAM on the package, as Apple says they are doing with the M1?


Cost. Area is what you pay for (at a given transistor size).


The bigger the cache, the slower it gets.

So RAM is your 1GB cache.


Are you referring to latency due to propagation delay, where the worst case increases as you scale?

Would you mind elaborating a bit? I'm not following how this would significantly close the gap between SRAM and DRAM at 1GB, since an SRAM cell itself is generally faster than a DRAM cell, and I understand that the circuitry beyond the SRAM cell itself is far simpler than DRAM's. Am I missing something?


Think of a circular library with a central atrium and bookshelves arranged in circles radiating out from the atrium. In the middle of the atrium you have your circular desk. You can put books on your desk to save yourself the trouble of having to go get them off the shelves. You can also move books to shelves that are closer to the atrium so they're quicker to get than the ones farther away.

So what's the problem? Well, your desk is the fastest place you can get books from but you clearly can't make your desk the size of the entire library, as that would defeat the purpose. You also can't move all of the books to the innermost ring of shelves, since they won't fit. The closer you are to the central atrium, the smaller the bookshelves. Conversely, the farther away, the larger the bookshelves.

Circuits don't follow this ideal model of concentric rings, but I think it's a nice rough approximation for what's happening here. It's a problem of geometry, not a problem of physics, and so the limitation is even more fundamental than the laws of physics. You could improve things by going to 3 dimensions, but then you would have to think about how to navigate a spherical library, and so the analogy gets stretched a bit.


Area is a big one. Why isn't L1 measured in MB? Because you can't put that much data close enough to the core.

Look at a Zen-based EPYC core: 32KB of L1 with 4-cycle latency, 512KB of L2 with 12-cycle latency, 8MB of L3 with 37-cycle latency.

L1 to L2 is 3x slower for 16x more memory, and L2 to L3 is roughly 3x slower for another 16x more memory.

You can reach 9x more area in 3x more cycles, so you can see how the cache scaling is basically quadratic (there's a lot more execution machinery competing for area with L1/L2, so it's not exact).

https://www.7-cpu.com/cpu/Zen.html
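Plugging those numbers in (a rough sketch; real latencies also depend on associativity, banking, and wiring, not just reachable area):

    # capacity and latency ratios between the quoted Zen cache levels
    caches = [("L1", 32, 4), ("L2", 512, 12), ("L3", 8192, 37)]  # name, KB, cycles
    for (n1, kb1, c1), (n2, kb2, c2) in zip(caches, caches[1:]):
        print(f"{n1}->{n2}: {kb2 / kb1:.0f}x capacity for {c2 / c1:.1f}x the cycles")
    # L1->L2: 16x capacity for 3.0x the cycles
    # L2->L3: 16x capacity for 3.1x the cycles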


I am sure there are many factors, but the most basic one is that the more memory you have, the longer it takes to address that memory. I think it scales with the log of the RAM size, which is linear in the number of address bits.


Log-depth circuits are a useful abstraction, but the constraints of laying out circuits in physical space impose a delay scaling limit of O(n^(1/2)) for planar circuits (with a bounded number of layers) and O(n^(1/3)) for 3D circuits. The problem should be familiar to anyone who's drawn a binary tree on paper.
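A toy comparison of the two scaling laws, ignoring all constant factors (so only the growth rates are meaningful):

    # idealized log-depth addressing vs. physical sqrt(n) wire-delay scaling
    import math
    for label, bits in (("1 MiB", 2**23), ("1 GiB", 2**33)):
        print(label, math.log2(bits), round(bits ** 0.5))
    # log2 grows 23 -> 33 (~1.4x); sqrt grows ~2896 -> ~92682 (~32x)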


With densities so high, and circuit boards so small (when they want to be), that factor isn't very important here.

We regularly use chips with an L3 latency around 10 nanoseconds, going distances of about 1.5 centimeters. You can only blame a small fraction of a nanosecond on the propagation delays there. And let's say we wanted to expand sideways, with only a 1 or 2 nanosecond budget for propagation delays. With a relatively pessimistic assumption of signals going half the speed of light, that's a diameter of 15cm or 30cm to fit our SRAM into. That's enormous.
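The arithmetic behind those figures (same half-speed-of-light assumption as above):

    # distance a signal covers within a small propagation-delay budget
    speed_m_per_ns = 0.5 * 3e8 * 1e-9   # half the speed of light, in meters per ns
    for budget_ns in (1, 2):
        print(budget_ns, "ns ->", round(speed_m_per_ns * budget_ns * 100), "cm")
    # 1 ns -> 15 cm, 2 ns -> 30 cm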


Well, the latest AMD Epyc has 256 MB L3 cache, so we're getting there.


But any given core only has access to 16MB, soon to be 32MB.


In Zen 1 and Zen 2, cores have direct or indirect access to the shared L3 cache in the same CCX. In the cross-CCX case, the neighboring CCX's cache can be accessed over the in-package interconnect without going through system DRAM.


This!

When I started with computers, they had a few KB of L2 cache and L3 did not exist. Main memory was a few MB.


With the DRAM this close, it can probably be very fast. Did they say anything about bus or bandwidth?


AnandTech speculates on a 128-bit DRAM bus[1], but AFAIK Apple hasn't revealed rich details about the memory architecture. It'll be interesting to see what the overall memory bandwidth story looks like as hardware trickles out.

[1] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
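If the 128-bit guess is right, a rough peak-bandwidth estimate looks like this (a sketch; the 4266 MT/s LPDDR4X-class transfer rate is an assumption, not a confirmed spec):

    # theoretical peak bandwidth for a hypothetical 128-bit bus at 4266 MT/s
    bus_bits = 128
    transfers_per_second = 4266e6
    peak_gb_per_s = (bus_bits / 8) * transfers_per_second / 1e9
    print(round(peak_gb_per_s, 1))   # ~68.3 GB/s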


Apple being Apple, we won't know much before someone grinds down a couple chips to reveal what the interconnections are, but if you are feeding GPUs along with CPUs, a wider memory bus between DRAM and the SoC cache makes a lot of sense.


I am looking forward to compile time benchmarks for Chromium. I think this chip and SoC architecture may make the Air a truly fantastic dev box for larger projects.


That’s what “binned differently” means btw.


Interesting that the commenter knew the process but not the terminology.


As someone who works with a lot of interdisciplinary teams, I often understand concepts or processes they have names for but don't know the names until after they label them for me.

Until you use some concept so frequently that you need to label it to compress information for discussion purposes, you often don't have a name for it. Chances are that if you solve or attempt to solve a wide variety of problems, you'll see patterns and processes that overlap.


Seconded.

It’s often valuable to use jargon from another discipline in discussions. It sort of kicks discussions out of ruts. Many different disciplines use different terminology for similar basic principles. How those other disciplines extend these principles may lead to entirely different approaches and major (orders of magnitude) improvements. I’ve done it myself a few times.

On another note, the issue of “jargon” as an impediment to communication has led the US Military culture to develop the idea of “terms of art”. The areas of responsibility of a senior officer are so broad that they enter into practically every professional discipline. The officer has to know when they hear an unfamiliar term that they are being thrown off by terminology rather than lack of understanding. Hence the phrase “terms of art”. It flags everyone that this is the way these other professionals describe this, so don’t get thrown or feel dumb.

No one expects the officer to use (although they could) a “term of art”, but rather to understand and address the underlying principle.

It’s also a way to grease the skids of discussion ahead of time. “No, General, we won’t think you’re dumb if you don’t use the jargon, but what do you think of the underlying idea...”

Might be a good phrase to use in other professional cultures. In particular in IT, because of the recursion of the phrase “term of art” itself being a term of art until it’s generally accepted. GNU and all that...


Where can I learn more about the US Military culture's "terms of art" idea?


> How those other disciplines extend these principles may lead to entirely different approaches and major (orders of magnitude) improvements.

Fascinating. Would you happen to have any example off the top of your head?


But then how will developers exert their local domain dominance to imply an even greater breadth of knowledge when patronizing other devs? /s


I have always assumed this term was in widespread and general use in the US business culture. Is that not the case?


This gets even more fun when several communities discover the same thing independently, and each comes up with a different name for it.

My favorite is the idea of "let's expand functions over a set of Gaussians". That is variously known as a Gabor wavelet frame, a coherent state basis [sic], a Gaussian wave packet expansion, and no doubt some others I haven't found. Worse still, the people who use each term don't know about any of the work done by people who use the other terms.


Reminds me of the Feynman story about knowing something vs knowing the name of something :-)


Reminds me of self-taught tech. I’ll often know the name/acronym, but pronounce it differently in my head than the majority of people. Decades ago GUI was “gee you eye” in my head but one day I heard it pronounced “gooey” and I figured it out but had a brief second of “hwat?” (I could also see “guy” or “gwee”.) It’s, of course, more embarrassing when I say it out loud first...


First time I went to a Python conference in SV, more than a decade ago, I kept hearing "Pie-thon" everywhere, and had no idea what the hell people were talking about.

It took me a solid half hour to at last understand this pie-thingy was Python... in my head I had always pronounced it the French way. Somewhat like "pee-ton", I don't know how to transcribe that "on" nasal sound... (googling "python prononciation en francais" should yield a soundtrack for the curious non-French speakers).


I thought numpy was pronounced like it rhymes with bumpy for a year or so.


Picture 18 year old me in 1995, I got a 486SX laptop as a graduation present out of the blue from my estranged father. I wanted to add an external CD-ROM to it so I could play games and load software for college, and it had a SCSI port. I went to the local computer store and asked the guy for a "ess see ess eye" CD-ROM drive, he busted out laughing and said "oh you mean a scuzzy drive?" Very embarrassing for me at the time but that's when I learned that computer acronyms have a preferred pronunciation so I should try to learn them myself to avoid future confusion.


> Very embarrassing for me at the time

It shouldn't be; it should be a badge of honor of sorts - it points to somebody reading to expand their knowledge beyond what is available in oral form around them, so kudos to them!


It's even more visible in non-English speaking countries. In Poland: first everyone says Java as Yava and after a while they start to switch to a proper English pronunciation. Many times it divides amateurs from professionals, but I wouldn't really know, because I don't work with Java.



Not the one I was thinking of but same point :) https://fs.blog/2015/01/richard-feynman-knowing-something/


Great story, yes. But there's no such thing as a "halzenfugel" in German as far as I can tell as a native speaker. Even www.duden.de, the official German dictionary, doesn't know that word ;-0


That's OK, AFAICT there's no bird called the "brown throated thrush" either.


As a native English speaker and middling foreign-language speaker of German, "halzenfugel" sounds to me like a mock-German word that an English speaker would make up.


Hah, good to know. However, unless you are talking to people from the same domain, it's usually a better approach to spell out things instead of relying on terminology. Concepts and ideas translate much better across domains than terminology.


I think a bunch of people learnt a new thing from your comment, so it is a good one.

I hope my reply didn’t come out as gatekeeping, it was genuinely just to help put a name to a thing.


May just have skimmed GP and missed it.


Well then that was a good explanation because I didn’t know that!


That's not how yield works. Yield is the number of functioning chips that you pull out of a wafer.

I think what you are trying to refer to is frequency binning.


That's only partially true.

For example, AMD sells 12 and 16 core CPUs. The 12 core parts have 2 cores lasered out due to defects. If a particular node is low-yield, then it's not super uncommon to double-up on some parts of the chip and use either the non-defective or best performing one. You'll expect to see a combination of lasering and binning to adjust yields higher.

That said, TSMC N5 has a very good defect rate according to their slides on the subject[0]

[0] https://www.anandtech.com/show/16028/better-yield-on-5nm-tha...
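For intuition on how defect rate turns into yield, here's the classic Poisson die-yield model (a sketch; the die area and defect densities below are illustrative assumptions, not TSMC's or Apple's actual numbers):

    # Poisson yield model: probability that a die has zero defects
    import math

    def die_yield(defect_density_per_cm2, die_area_cm2):
        return math.exp(-defect_density_per_cm2 * die_area_cm2)

    # illustrative: a ~1.2 cm^2 die at a few plausible defect densities
    for d0 in (0.05, 0.10, 0.20):
        print(d0, round(die_yield(d0, 1.2), 3))
    # 0.05 -> 0.942, 0.10 -> 0.887, 0.20 -> 0.787
    # salvaging partly defective dice (e.g. 7-GPU-core parts) recovers some of the rest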


Which is likely why there are some "7 core" GPU M1 chips.


Yep, for the MBA. I think for devs that can live with 16GB, the 7-GPU-core MacBook Air, at $300 less, is very interesting compared to the MacBook Pro.


Plus, defects tend to be clustered, which is a pretty lucky effect. Multiple defects on a single core don't really matter if you are throwing the whole thing away.


Is that not what the parent comment said? I thought "binning" referred to this exact process.


Isn’t that what “binning” means?


If you compare the M1 Air and Pro, the only difference seems to be the addition of the Touchbar, 10% better battery life, and a "studio" speaker/mic on the Pro.

https://www.apple.com/mac/compare/

I assume the addition of a fan on the Pro gives it better performance under load, but there doesn't seem to be a hugely compelling reason to not just get the Air.


I think they got it wrong. I would pay money to NOT have the touchbar.


I got one of the MBPs with the touchbar this year after holding out for many years (changed jobs so had to change laptop). Man, it is so much harder to do volume changes, and so far there has been zero benefit for me.


Not what you're looking for, but I'll mention it anyways just in case:

It's possible to set the default touch bar display to only ever show the expanded control strip (System Preferences > Keyboard > Keyboard > Touch Bar shows: Expanded Control Strip). In that mode you tap volume up and down instead of using a volume slider.

Again, I know you're looking for physical keys (aren't we all) but it's better than nothing.

I've been using the MacBook Pro 16 (with a physical esc key plus a touch bar) and I think it's a pretty good compromise between me, who wants physical keys, and Apple, who wants to push the touch bar.

The other thing that kept happening to me: I would accidentally tap the brightness button when reaching for ESC. For that, you can "Customize Control Strip..." and remove individual buttons, so that there's a big gap on the touch bar near the ESC key so that stray taps near ESC don't change the brightness.


I realise I'm an outlier here but I actually have grown to like the touchbar.

It's often unused, yes, but when I fire up Rider for my day job it automatically flicks to the row of function keys and back depending on which app has focus and it fits quite nicely for me between work and entertainment (I like having the video progress bar if I'm watching something on the laptop). Maybe I'm just strange but the non-tactile function keys didn't really bother me much either.

In any case, I could live without it, which is probably not a roaring endorsement in any case, but I'd rather have it than not.


I like it as well, especially in applications like CLion/IntelliJ which have tons of keybindings I keep forgetting because they are different between Linux and macOS. The context-sensitive touch bar is actually very useful in these applications for things like rebuilding, changing targets, stepping through the debugger etc. without having to use the mouse.

There's a lot of things to complain about with Apple products, but if you ask me there's been enough touch bar bashing by now and people should just get over it. It's pretty useful in some situations, and IMO has no real downsides, especially now that the esc key is a real physical key again. Why all the hate?


Physical F keys are still useful so why not both?


If you hold Fn you get the traditional row of function keys, which seems like a pretty good tradeoff. And if you really hate that, you can simply configure the touch bar to always display them, in which case literally the only downside is that they are not physical keys anymore. Do people really touch-type the function keys so heavily that this becomes an actual annoyance and not just an ideological one?

Adding an extra row of physical keys that do the same thing as the row of virtual function keys, at the expense of trackpad size and possibly ergonomics (harder to reach the touch bar), doesn't make a lot of sense IMO.


You can’t hit them without looking at the bar, because you have nothing to index your fingers on.

The touchbar is the second worst thing Apple has ever done in the history of the Mac, following closely on that abomination of a “keyboard” they used from 2016-2019.


I liked the keyboard too.


And if you ask me, there has not been enough touch bar bashing...

I’ve opted to buy a 2015 MacBook Pro this year; it might even be easier to get over Apple than over the touch bar...


The only time there would be enough touch bar bashing is when Apple listens and gives users an option to have function keys instead.


Pretty sure they just did. The MacBook Air is virtually identical now save for the absence of a fan and the Touch Bar.


My guess - that you couldn't get a Macbook without the TouchBar. I'd like to be able to choose between a Macbook with or without a TouchBar, but with otherwise entirely identical specs.

I've been holding out on upgrading my 2013 MBP (mostly out of frugality) to a newer version, mostly due to the butterfly keys and the TouchBar.


Yup, me too. Especially when debugging in Xcode, the TouchBar shows a bunch of useful functions.


Those are F keys on a regular keyboard, and a few million of us have developed the muscle memory to use them over, say, the last thirty years that they’ve been assigned to those F keys in most IDEs.


I’m with you: I enjoy the TouchBar, and it’s genuinely useful for some apps I use


This is one option, but it still suffers from my main complaint about the touch bar -- it's way too easy to accidentally activate something. Therefore, I have my touch bar set to activate only while I'm holding down the FN key.

I will not remove that safety key until they make the touch bar pressure sensitive so that "buttons" on it only activate with a similar amount of force that was required to activate the tactile buttons they replaced. Until then, I consider it a failed design improvement.


I need my ESC, so I'm glad it's there. As for the rest of the keys on the top row, I was not in the habit of using them except in vim, where I hooked them up to some macros I had written. For them, I kind of like the touchbar now, because I have the fake keys labelled with sensible names. (No more trying to remember that I have to hit F3 to do such-and-such.)

I've also found the touchbar pretty useful in zoom calls, because my zoom client shows keys for common actions.

All in all, I think a physical escape key plus the touchbar is a slight win. I would not pay more for it, but I have reversed my previous opinion that I'd pay more not to have it.

I suspect these new machines are going to be quite nice, although I won't buy one for a while since I purchased a mbp a few months ago.


I don't understand why they don't put the physical keys AND the touchbar in. There is so much space on the 16" model they could easily fit it in and shrink the obscenely large trackpad just a touch.


I think that the trackpad needs to be bigger. Doing gestures is much easier with a big pad.


Especially on the 16 there is no excuse, they could really have a function key row and a touch bar :( more than enough space for them.


I ordered my 13" touchbar MBP with the UK keyboard layout. Adds an extra key to the right of left shift (mapped to tilde), letting me remap the key that is normally tilde on a US keyboard to ESC.


I've been really happy with the following mod for the last couple years of TouchBar usage:

https://community.folivora.ai/t/goldenchaos-btt-the-complete...

Fully customizable while being much better for muscle memory by giving you exactly what you want where you want it, gives you icon-shortcuts to script, and still allows you to have as much dynamic functionality / information as you like. So, for example, mine looks roughly like this:

- Fullscreen

- Bck/[play|pause]/Fwd

- CURRENTLY_PLAYING_SONG

- AirDrop

- ConfigMenu

- Emoticons

- (Un)Caffeinate

- (Dis)connectBluetoothHeadphones

- (Dis)connectMicrophone

- (Un)muteVol

- VolDown

- VolUp

- ScreenDim

- ScreenBright

- Date

- Time

- Weather

- Battery%

CURRENTLY_PLAYING_SONG playing shows the album cover, song name, and artist, but only shows up if there IS something playing. Same with AirDrop, which shows up only if there's something that I could AirDrop to, and then gives me a set of options of who to AirDrop to. The Emoticon menu opens an emoticon submenu on the TouchBar with most-recently-used first.

That all fits fine into the main touchbar, with other dynamic touchbars available modally (ie, holding CMD shows touchable icons of all the stuff in my Dock (my Dock is entirely turned off)), CTRL shows LOCK AIRPLAY DO_NOT_DISTURB FLUX KEYBOARD_DIM/BRIGHT, etc. ALT shows me various window snap locations.

Edit: BetterTouchTool also replaced a bunch of other tools for me. Gives you the same kind of tools for scripting eg Keyboard macros, Mouse macros, remote-control via iPhone/Watch etc with a lot of reasonable defaults.


I've heard a lot of complaints about the touchbar. The loss of tactile feedback is a fair one, and admittedly removing the escape key was a terrible idea. I recently upgraded to a machine with a touchbar, learned quickly why the default settings are pretty bad, and then quickly found BTT and set it up. The touchbar is not a revolutionary innovation, but it definitely improves functionality in some cases, and it's fun to mess with. Oh, and a button to "toggle mic in zoom" actually solves a real problem.

The people who complain about the touchbar functionality must not be putting any effort at all into it. I customize so many other things on my system, regardless of the OS. Why would a new hardware feature be any different?

I didn't know about this GoldenChaos thing though, thanks for that.


> The people who complain about the touchbar functionality must not be putting any effort at all into it.

I would say that people who complain about the uselessness of F-keys must not have put any effort at all into using them.

Upon getting my MBP 2016, I spent numerous months trying to make the TouchBar useful; from customizing the contents where apps allowed it, to BTT.

What it came down to is that things worthy of a keyboard shortcut are things I want to be able to do fast, reliably, and instinctively. I don't want to search for the button on the TouchBar – I'm using the keyboard, it needs to be as natural as typing, without the need to look down at it. I have a screen already, I don't need another one on my keyboard.

I firmly believe TouchBar can't even come close to the same realm of usefulness as F-keys, much less being worth the price hike it imposes. Function keys are twelve, tactile, solid, free, reliable(1) buttons for keyboard shortcuts; TouchBar is a touchscreen that sometimes(2) works.

> a button to "toggle mic in zoom" actually solves a real problem

I haven't used Zoom, but if it's a decent-ish Mac app, it either already has a keyboard shortcut to toggle microphone, or you can set one in Keyboard Shortcuts, in System Preferences.

(1) as far as anything is reliable on the butterfly keyboards.

(2) same story as butterfly keyboard – if it's even slightly sporadic, it is a shitty input device.


That's fair. Personally, I used the tactile function keys for exactly six things (brightness up/down, volume up/down, mute, play/pause). Those six functions are now available on my touchbar. They're no longer tactile, which, yes, is a minor inconvenience. I wouldn't use a touchscreen for touch-typing code, of course. But for a handful of buttons way off the home row, it doesn't affect me much.

In exchange, I get to add other buttons which can also display customizable state. Yes, zoom has a global shortcut to toggle the mic. The problem is, the mic state is not visible on your screen unless the zoom window is visible. This is a frustrating design flaw IMO, which really should be addressed in some consistent way across every phone/video app. But, it's not, and so I need to pay attention to my mic state. My touchbar button toggles the mic, and displays the current mic status. I would imagine that every phone/video chat app is similar.


I don't understand how Apple missed the boat so completely on haptic feedback for the touchbar, considering their mastery of it on trackpads and touch screens.

I use this app alongside BTT; it attempts to supplement haptic feedback via the trackpad haptics. It's nowhere near as good as a real haptic solution would be, but it does provide some tactile feedback when pressing buttons: https://www.haptictouchbar.com/


I have a haptic touchbar and I’m pretty sure it was enabled out of the box with GoldenChaos on BTT, but maybe not.


Did you notice you can hold and slide to change volume? You don’t need to hit the volume slider where it appears. Same with brightness. Totally undiscoverable gesture.


Yup - I love this feature but I'd guess based on people I've shown it to that no more than 20% of users are aware of it.


Exactly this. I tried so hard to like it (since I paid for it), but I have found 0 good use cases for it.

I would assume that macOS sends at least some basic usage data for the touch bar back to Apple HQ. I wonder how often it is actually used... and I would love the hear the responsible product manager defend it.


Same situation, my trusty 2012 rMBP finally gave up the ghost and I had to get a new one with this icky touch bar. It's useless to me and makes it harder to do everything. My main complaint is that I am constantly bumping it when I type, leading to unexpected changes in volume and brightness.


Oh yeah I forgot that. I keep hitting the chrome back button in the touch bar all the time. In the beginning I was not sure what was happening then I realized it was the touch bar.


My problem with the touchbar is that I tap it accidentally while typing all the time. It needs to be like another centimeter away from the keys.


Or use the same haptic feedback as the touchpad.


Haptics plus requiring a bit of pressure to register a press, just like the trackpad.



That's actually one of the things I like better on the touchbar. Just press down on the volume icon and slide.

I continue to be disappointed about the lack of haptics though. It's such a strange thing to not have when they've proven to know how to make very convincing haptic feedback. It works very well in both the Apple Watch and the MacBook's trackpad.


You CAN configure it to show the old-style mute/down/up with the touch bar, so you are not relegated to the ultra-shitty slider. No replacement for a tactile switch, but at least you are not stuck with the default arrangement.


Easiest way is to press and hold the Touch Bar on the volume control button and slide your finger left or right–that way you barely need to look at the Touch Bar.


You can use the touchbar to skip commercials on Youtube in Safari.

I love it.


Instead of press and hold, it's press, hold, and drag. Definitely annoying when it freezes, but when it's working it doesn't seem that much different.


The main difference is that I need to look down at the keyboard to operate the touchbar. With the keys I can rely on muscle memory.

Also I think every device which makes sound should have a physical mute control. The worst is when I want to mute, and the touchbar freezes, and I have to go turn the volume down with the mouse.


I intentionally took a performance hit by moving from a Pro to an Air almost entirely for this reason (although the low power and light weight are pleasant benefits). I'm glad that the new Air still has F-keys with Touch ID; but I'm flabbergasted that they're still not making the Touchbar optional for the Pro series, given how polarizing it's been, and the underwhelming adoption of new Touchbar functionality by third-party developers.


They brought back the escape key which is what really matters.


Honestly, I think it's only polarizing here and among some developers.


I mean, if you think about other professions that use the MacBook Pro, they don't need it either. Are video professionals using it to scrub through video? Nope. For any audio professional it's useless. No one who is serious about their profession would use it.


I’m serious about my profession and I use it daily. Wonder what that says about me.


Please tell me what it can do that hot keys and just generally knowing a program can't do? I'm honestly interested.


I'd be curious to know what portion of their user base for the Pro series are developers. Anecdotally, quite a lot of devs seems to use Macs; but I have no idea what that fraction is, relative to the rest of their market.


The touchbar is probably why I'm getting the Air for my next upgrade, and not a Pro.


Honestly, the idea of a soft keyboard is a great one, particularly one as well integrated into the OS as the Touch Bar is. However, while I have a Mac with the Touch Bar, I never ever ever ever use it intentionally, as I spend 90% of my time on the computer using an external keyboard.


Just put that control panel at the bottom of the screen. It would still be close enough, and it would be out of the way of my accidental touches.


My daughter loves it. It's her emoji keyboard


As someone using a hackintosh considering a real Macbook, what's so wrong about it?


There are loads of rants out there that are easy to find, but personally it's mostly: you can't use it without looking at it to make sure the buttons are what you think they are (nearly always context-sensitive, often surprising when it decides it's a new context), and where you think they are (can't go by feel, so you need to visually re-calibrate constantly). Button size and positioning varies widely, and nearly all of them have a keyboard shortcut already that doesn't require hand movement or eyes (or at worst previously had an F-key that never moved).

The main exception being things like controlling a progress bar (mouse works fine for me, though it's a neat demo), or changing system brightness/volume with a flick or drag (which is the one thing I find truly better... but I'd happily trade it back for a fn toggle and F keys). But that's so rarely useful.


When I watch non-HN type people use it, they like it. They never used Fn keys in the first place.

I just hated the lack of ESC key (which they brought back, though my Mac is older). I have no muscle memory for any other key in that row.


I think the touchbar was my favorite part of my old MBP, specifically because of the contextual buttons that are always changing.

I'd probably pay a little extra to get one on future non-Mac laptops, but not too much extra.


Yeah, most people I know almost never use F keys (except perhaps F1 for help). They leave it on the media-keys mode... which is the same as the touchbar's default values, but without needing to know what mode it's in.

With the physical media keys, if they want to mute, it's always the same button. Pause music, always the same button. They're great in a way that the touchbar completely destroys.

(and yes, I know you can change this setting, but if we're assuming non-techy-users we also generally have to assume default settings.)


Honestly, I hardly ever used the function keys either. As a result the Touch Bar doesn't really bother me -- but neither does it seem the slightest bit useful for the most part.


A lot of non-HN people also type while looking at the keyboard, and some with a single finger from each hand.


It's just not useful. The context-aware stuff is too unpredictable, and I'm never looking at the keyboard anyway so I have never learned it. So the touchbar ends up being just a replacement for the volume and brightness keys, but a slow and non-tactile version of them


For me at least (and I'd imagine most of the other folks who hate it) - I had the key layout memorized. If I wanted to volume up/down/mute I could do it without taking my eyes off the screen. With the touchbar ANYTHING I wanted to do required me to change my focus to the touchbar - for the benefit of...?

I'm sure someone somewhere finds it amazing, but I have no time for it.

To me it's no different than volume controls in a car. I've been in a cadillac with a touchbar for volume, and a new ram truck with a volume knob - there's absolutely no room for debate in my opinion. One of these allows me to instantly change the volume 100% up or down without taking my eyes off the road. The other requires hoping I get my finger in just the right spot from muscle memory and swipe enough times since there's 0 tactile feedback.


For me it hides the things I use all the time (media and volume controls) to make room for application specific controls that I never use.

If it was more customisable I wouldn't mind it, but the apparent inability to force it to show me the things I actually want is annoying.

I can imagine there are some people for whom the application specific buttons are useful, but for me they are not worth it for what they displace.


Not sure if it's helpful for you but you can customize the behavior by going to your System Prefs > keyboard settings and toggling "Touch bar shows:".

I did this on like day 2 of having my MBP for what sounds like the same reason you want to. The setting I have turned on is "Expanded control strip" and I never see any application-specific controls, only volume, brightness, etc.


Omg, thank you. Somehow I'd missed that setting.


Check out BetterTouchTool if customization is holding you back.


FYI you can customize it, and force it to always display certain controls.

I had the exact same frustrations as you. Took me 10 mins digging into settings to figure it out. Now I have my touchbar constantly displaying all of the controls that are buttons on the Air (ie a completely over-engineered solution to get the same result)


The latency on the touch bar is terrible. Perhaps 1/2 second to update when you switch apps, for example!


It doesn't provide tactile feedback.


https://www.haptictouchbar.com/ is a great app I use, provides haptic feedback for the touch bar.



They do such a good job of this on the iPhone that it is quite mystifying why they didn't here.


In addition to what everybody else said: because the touch bar is flat and flush with the body chassis, I find it's very easy to press accidentally when you press a key in the row below. E.g., accidentally summoning Siri when pressing backspace, or muting the audio when pressing "=". And then you're forced to look down and find the mute "button" to fix it.


No haptic feedback, mainly. A lot better with a real escape key, but still.


Hahaha, two-time Air user here; the touchbar is a big NO.


Air doesn't have a fan, so if you want consistent performance, you have to buy the touchbar.


Maybe some kind of clip-on, thin, third party cooling solutions will become a thing?


So true! I hate the touchbar. If I had to change my laptop today, I'd buy the Air just because I hate the touchbar.


I think it’s good actually


My Y-series Pixelbook with passive cooling performs as well as a U-series laptop from the same generation -- until I start a sustained load. At that point, the U-series systems pull ahead. Actively cooled Y-series systems get pretty close in lots of applications, only falling short due to having half the cache.

If you are doing lightweight stuff where the cores don't really need to spin up, then they'll probably be about the same. Otherwise, you'll be stuck at a much lower base clock.


Surely they have different clock rates, but Apple isn't referencing clock rates in their marketing material for Macs with M1 chips.


Yeah, I am surprised that so many people are ignoring this. If the two computer models have identical chips, then why does Apple even bother putting a fan in the Pro?

To me the fact that the Air is fanless and the pro has a fan would indicate to me that they have different clock rates on the high end. I am sure the Air is capped lower than the Pro in order to make sure it doesn't overheat. It is probably a firmware lock, and the hardware is identical. But once we do benchmarks I would expect that the pro outperforms the air by a good margin. They added a fan in the pro so that it can reach higher speeds.

Surely the Air is capped so that users don't ruin their computers by running a process that overheats the computer.

But of course Apple doesn't want to reveal clock speeds. The best they can give us is "5x faster than the best selling PC in its class". What does that mean? The $250 computer at Walmart that sells like hotcakes for elementary-age kids that need a Zoom computer, or the Lenovo ThinkPad Pro that businesses buy by the pallet? Who the hell knows.


They said in the presentation that it's for sustained workloads. I get the impression they're the same peak clock speed, but the Air throttles sooner.


The fan is only there for sustained loads. They all have identical chips (aside from the 7 core gpu option Air). They all hit the same pstates and then throttle accordingly due to temperature.

The MBP and Mini are there for people who want maximum sustained performance.


I recently got an Air after using a MBP13.

Aside from the loud fan and slow performance, which should be fixed in this release, my biggest complaint is that they only have the USB-C ports on one side of the laptop.

Really obnoxious when the outlet is in a difficult spot.

Unclear whether the new MBP13 also has this problem...

Edit: the new M1 MBP13 has both USB-C ports on the same side. No option for 4 (yet). Ugh.


The two-port and four-port 13" MacBook Pros have been separate product lines since their introduction. This new M1 MBP only replaces the two-port version. Presumably the higher-end one will share an upgraded processor with the 16".


I'm confused of their new pricing scheme / spec tiers for Macbook Pros.

There's no more Core i7 for the MacBook Pro 13. You have to go to the MacBook Pro 16. I'd rather get a Dell XPS or another Core i7/Ryzen 7 ultrabook.

So now, spec-wise, Macbook Air and Macbook Pro are too close.


I'm guessing the MBP13 is now a legacy model, being refreshed more to satisfy direct-product-line lease upgrades for corporate customers, than to satisfy consumer demand.

Along with the MBP16 refresh (which will use the "M1X", the higher-end chip), we'll probably see the higher-end MBP13s refreshed with said chip as well, but rebranded somehow, e.g. as the "MacBook Pro 14-inch" or something (as the rumors go: same size, but less screen bezel, and so more screen.)

And then, next year, you'll see MBP14 and MBP16 refreshes, while the MBP13 fades out.


These are transitional products so it makes sense. I'm looking forward to see the replacement of the iMac Pro and Mac Pro. Will be interesting to see what those include.


Addendum: just got an email from the Apple Store Business Team about the MBP13. Here's the copy they're using (emphasis mine):

> "Need a powerful tool for your business that’s compatible with your existing systems? Let’s talk. We’ll help you compare models and find the right Mac for you and your team."

That's a corporate-focused legacy-model refresh if I've ever seen one.


Specs don’t tell you the thermal story. You can buy an i9 in a thin laptop and feel good about it until it throttles down to 1GHz after 30s.

The MBP should be built with better thermals to avoid throttling since you might be running simulations or movie encoding all day. The air should throttle after a certain amount of time.


And prioritize function over form? I think you just want to buy a PC.


This has to be the funniest take on the release of a whole new CPU architecture.


I agree, I use a small form factor desktop box which fits in my messenger bag.


    There's no more core i7 for Macbook 13
Sure there is – you just have to select one of the Intel-models and customize the processor.


You are correct. They hid the option.


it's there, you just have to configure it


I think the MacBook Air is very compelling, and that's why it got more screen time. Unless you want to run your system at >60% load for extended periods of time, the MacBook Air should be great.


Pro has one extra GPU core as well.


The base model Air has 7 GPU cores instead of 8, but the higher models have all 8 cores. Seems to be +$50 for the extra GPU core.


Note that it's a 7-core GPU only for the 256GB SSD model.


I am curious - my 2017 12" MB is an absolutely awesome machine (fast enough for casual use and light development while absolutely quiet and lightweight), but a 30+ degree summer day is enough to push it so close to its thermal envelope that it soon throttles down to circa 1 GHz during normal (browser) use.

So, the sustained performance might make quite a difference for pro use.


It shouldn't really throttle to 1GHz - it's because Apple puts low-quality paste in it, and sometimes because of dust.

My MacBook from 2014 is still excellent, but it started throttling to 1GHz with video encoding.

After going to a repair shop and telling them about the problem, they put high-quality thermal paste in it for about 100 USD and the problem disappeared. Now I get 100% CPU no matter what I throw at it, pretty incredible for a computer from 2014!

Just fyi..


Thanks for your experience, but the passively cooled 12" MacBook is really a different thing. Basically, it isn't a 2.5 GHz CPU that throttles, but a 1.1 GHz CPU which can boost itself for a very short time, and then it runs into its thermal limit and returns to 1.1 GHz sustained performance.

And on hot day, that boost might last 30 seconds and then that's it.


My guess is different clock speeds; also, the base Air has one GPU core less...


And this might be the old production trick where one part of the chip fails QA, so they disable it and sell it as a cheaper part.

The GPU parts might be the tightest silicon and highest rate of failure so this approach reduces waste.


By "trick" you mean the only approach every chip-maker has been following for decades? Literally every single one. It's called binning.


The gp is using “trick” not with the nefarious connotation, but more along the lines of “hack” or “clever idea”.


yeah I guess I should have said 'hack'


I think it's more of a proper approach than a hack.


I think there’s one more notable difference: the M1 MBP also has a brighter screen at 500 vs 400 nits. Both have P3 gamut and are otherwise the same resolution.


The Pro screen is 500 nits vs 400 nits on the Air.


Battery is not so important post-covid.

Air is clearly the better value option, if you really want to get one of these.


I guess the cooling lets them tweak the CPU clocks accordingly? Wonder if we can hack the Mac mini with water blocks and squeeze out higher clocks. The memory limitation makes it a dud though.


Wouldn't be surprised if the cooling solution was serialised and the system detected whether the cooling was originally programmed for the particular unit, like they do now with cameras and other peripherals (check iPhone 12 teardown videos). I bet the logic would check the expected temperature for the given binning and then shut down the system if it is too cool or too hot. Apple knows better than the users what hardware should work with the unit.


For a while, the fan was broken in my 2017 MacBook Pro 13". Didn't spin at all. The MacBook never complained (except when running the Apple hardware diagnostics). It didn't overheat or shut down unexpectedly. It just got a bit slower due to more thermal throttling.

I expect it would work the other way, too. Improve the cooling and performance under load would improve.


https://www.youtube.com/watch?v=MlOPPuNv4Ec

This is a video from Linus Tech Tips that demonstrates that no matter how much you cool it, they've physically prevented the chip from taking advantage of it.

And if it could be fixed with software, they would have worked out how; they're into that kind of tweaking.


Intel chips, on the other hand, are designed to work with a varying degree of thermal situations because they don't control the laptop it is put in. In this situation, Apple could potentially get more creative with their approach to thermals because they control the entire hardware stack.


Sure, if by "more creative" you mean handicap the CPUs by not delivering them any power because they know they can't cool them.

https://www.youtube.com/watch?v=MlOPPuNv4Ec


Intel processors use Intel designed throttling solutions... which exist to keep their own processors from overheating because they have no control over the final implementation.

These new M1 laptops are the first laptops that have complete thermal solutions designed by a single company.

As an example, there is the potential to design a computer with no throttling at all if you are able to control the entire thermal design.


> As an example, there is the potential to design a computer with no throttling at all if you are able to control the entire thermal design.

This is not true. A laptop needs to work in a cold room, in a hot room, when its radiator is dusty, etc. If your CPU is not willing to throttle itself then a company with Apple's scale will have machines overheating and dying left and right.

For a computer to never _need_ to throttle, either (1)the cooling system has to be good enough to keep up with the max TDP of the CPU, or (2) you "pre-throttle" your CPU by never delivering it more power than the cooling system could handle. Apple refuses to accept solution 1, so they went with solution 2. If you watch the video I posted, it shows that even when there is adequate cooling, the new macbooks will not deliver more power to the CPU. In effect, the CPU is always throttled below its limit.


If Apple actually did that Louis Rossman would be out of a job.

No, not in the sense that the cooling lockout would make him unable to fix MacBooks - he clearly has the industry connections to get whatever tools he needs to break that lockout. Yes, in the sense that many Apple laptops have inadequate cooling. Apple has been dangerously redlining Intel chips for a while now - they even install firmware profiles designed to peg the laptop at 90C+ under load. The last Intel MBA had a fan pointed nowhere near the heatsink, probably because they crammed it into the hypothetical fanless Mac they wanted to make.

Apple actually trying to lock the heatsink to the board would indicate that Apple is actually taking cooling seriously for once and probably is engineering less-fragile hardware, at least in one aspect.


So, essentially their new Macbook line is a glorified iPhone/iPad but with a foldable display (on a hinge)?

Not too far-fetched when you see the direction MacOS is headed, UI-wise. And it sounds nice, but if it means that repairability suffers then we'll just end up with a whole wave of disposable laptops.


To be fair to apple, people keep their macbooks for years and years, keeping them out of landfill longer. They are well made and the design doesn't really age. Written on my 2015 Macbook pro.


To be fair to the rest of the world, this comment is written on a 20 year old PC. It has had some component upgrades, but works like a champ after 20 years.


If you keep replacing failed/failing components or give needed upgrades to the system every few years, is it fair to call it 'working like a champ for 20 years'?


I'll take it a step further. Is it fair to even call it the same system after 20 years of changes?

Like the Ship of Theseus thought experiment, at what point does a thing no longer have sufficient continuity to its past to be called the same thing? [1]

[1] https://en.m.wikipedia.org/wiki/Ship_of_Theseus


Some parts are kind of like tyres on a bike, just need to be replaced from time to time, it doesn't mean the bike is bad or not working like a champ.


Yeah, but it does mean it is no longer the same bike. If you replace every part of a bike, even one at a time over years, it is no longer the same bike. So it all depends on what GP means by "replacing some parts". Is it entirely new computer in a 20 year old case? Or is it a 20 year old computer with a couple sticks of RAM thrown in?

Regardless, I have a hard time believing a 20 year old computer is "working like a champ". I've found that most people who say their <insert really old phone or computer> works perfectly have just gotten used to the slowness. Once they upgrade and try to go back for a day, they realize how wrong they were. Like how a 4k monitor looks "pretty good" to someone who uses a 1080p monitor every day, but a 1080p monitor looks like "absolute unusable garbage" to someone who uses a 4k monitor every day.


Definitely not if the metric we care about is keeping components out of landfills.


I don't understand the landfill argument here.

A typical "Upgradable" PC is in a box 10 times the size of the mini. If you upgrade the GPU on a PC, you toss out an older GPU because it has pretty much zero resale value. Typical Apple hardware is used for 10-15 years, often passing between multiple owners.


It's a shame we don't have charities that would take such parts and then distribute them to less fortunate countries. Ten years ago, a ten-year-old graphics card would no longer be quite usable, but now a 10-year-old card should work just fine for most tasks, except more advanced gaming.


I don't see the point. There is nothing to put it into. It's far cheaper to just ship modern CPUs with integrated graphics which will be faster and more efficient than that 10 year old GPU. The era where computer components were big enough for it to make sense for them to be discrete parts is coming to a close.

This is particularly true on the lower end where a 10 year old part is even interesting.


I thought you could donate any part of a computer and then people could sort and match, but I think you're right.


If only two parts got replaced, then landfill mass was reduced.


Why do I think of Trigger's Broom when I read this?


Apples and oranges. I've never kept a laptop for five years.


That's only applicable to MacBooks made up to 2015.


I guess I'll throw my 2016 MBP out then.


You probably will before I throw out my 2010 MBP thanks to easily replaced parts.


To me it looks more like they swapped the motherboard out with their own, keeping the rest of the hardware the same.

With RAM and SSD already soldered to the motherboard, repairability can't really get much worse than it already is.


It's not difficult to replace RAM or an SSD with the right tools (which may be within reach of an enthusiast). The problem is that you often cannot buy spare chips, as manufacturers can only sell them to Apple, or that they are serialised - programmed to work only with that particular machine - so the unit has to be reprogrammed after the replacement by the manufacturer. I think they started doing this after rework tools became affordable for a broader audience. You can get a trinocular microscope, a rework station and an oven for under $1000 these days.


You can get a screwdriver (allowing you to replace RAM and SSDs in most laptops, including older macs) for $5. There's really no excuse for them to do this all the while claiming to be environmentally friendly.


Depends on the model. My 2012 mbp15r uses glue and solder, not screws. Maxed out the specs when I got it, which is why it's still usable. Would've been delighted for it to have been thicker and heavier to support DIY upgrades and further improve its longevity while reducing its environmental impact, but that wasn't an option. Needed the retina screen for my work, bit the bullet. Someday maybe there will be a bulletproof user-serviceable laptop form factor w a great screen, battery life and decent keyboard, that can legally run macOs... glad to say my client-issued 2019 mbp16r checks most of those boxes. /ramble


Something like ATX standard but for laptop shells would be awesome - imagine being able to replace a motherboard etc, just like you can with a desktop PC.


Intel tried this more than a decade ago. The designs were as horrible as you might imagine, and a few OEMs did come out with a handful of models and parts.

As I recall, consumers didn’t care or wouldn’t live with the awful designs that they initially brought out. I don’t remember. I remember thinking I wouldn’t touch one after seeing a bunch of engineering samples.


Maybe it was too early for this kind of thing. I could imagine today such shell would be much slicker.


Except the RAM is in the M1 package now. Pretty good excuse, I'd say.


Is it? I thought only the memory controller is in the chip, not the memory itself.


The M1's RAM is integrated into the SoC package. But it's still separate RAM chips, not actually on the same die as the M1.


Mmm... it's certainly better than they had before. But really they ought to be designing repairable machines. If that makes them a little slower then so be it.


My 2007 MBP, yes. I don't think that's true of my 2017 MBP, nor my 2012 MBA.

It's been years since Apple did away with this stuff, and nobody expected them to suddenly allow after-market upgrades.


Serialized components should be illegal, frankly.


There are good privacy and security reasons that someone might want serialized components.


Sure, but you add the option to ignore the serialization, or options to reset the IDs as part of the firmware or OS. That way the machine owner can fix it after jumping through some security hoops, rather than requiring an authorized repair store.

Mostly because it's doubtful that state-level actors (or even organized crime) won't just pay off an employee somewhere to lose the reprogramming device/etc. Meaning it's only really secure against your average user.


I don't believe those reasons are more important than open access and reducing the environmental impact of planned obsolescence, outside of the kind of government agencies that are exempt from consumer electronics regulations anyway.


Surely there is a better (and I'd bet, more effective) way to handle environmental regulations than mandating specific engineering design patterns within the legal code.

Perhaps instead, it might be a better idea to directly regulate the actions which cause the environmental impact? i.e. the disposal of those items themselves?

Engineers tend to get frustrated with laws that micromanage specific design choices, because engineering practices change over time. Many of the laws that attempt to do so, backfire with unintended consequences.

It is quite possible that your solution might be just that -- many industries with high security needs are already very concerned with hardware tampering. A common current solution for this is "burner" hardware. It is not uncommon for the Fortune 500 to give employees laptops that are used for a single trip to China, and then thrown away. Tech that can give the user assurance that the device hasn't been compromised decreases the chance that these devices will be disposed of.

As a side note, I don't think serialized components is even one of the top 25 factors that does(/would) contribute to unnecessary electronics disposal.


I think resetting instead of bricking doesn't compromise security, but it saves a burner laptop from ending up in landfill. I get your point, but I think a company would have to demonstrate that e.g. serialising meets a particular business need distinct from planned obsolescence. It could be part of the certification processes that products have to go through before being marketed.


Based on what legal principle should they be illegal?


In practice, such a law could resemble right-to-repair bills like the one recently passed in Massachusetts, which requires auto manufacturers to give independent repair stores access to all the tools they themselves use. A bill like this for consumer electronics could practically ban serialized components, even without mentioning them explicitly.


Illegal, no. Taxed extra.


Why beat around the bush? If the function of the extra tax is to stop producers from implementing planned obsolescence, then why not just stop them directly and require that components are not serialised etc. as part of the certifications products need to go through? If you add a tax, all you do is restrict access to such products for people with lower income.


The point is to push the market in the correct^Wdesired direction without outright banning anything. Non-serialized would be cheaper, hence more accessible. There are use cases where serialized parts are desired (e.g. if I don't want somebody swapping my CPU with a compromised part).


Normally I prefer nudges to bans, but I'm not sure they work on giant monopolies. Unless the tax were high enough to have no chance of passing, Apple would dodge it or write it off as cheaper than being consumer-friendly.


With Apple Silicon, the RAM is not even on the motherboard. It's integrated into the SoC package!


I don’t think that’s true for M1.


Yes, it is true of the M1. RAM is integrated into the M1 SoC package (but not on the same die).


> So, essentially their new Macbook line is a glorified iPhone/iPad but with a foldable display (on a hinge)?

This isn't something new. Since day 1, the iPhone has always been a tiny computer with a forked version of OS X.

> but if it means that repairability suffers then we'll just end up with a whole wave of disposable laptops.

Laptops have been largely "Disposable" for some time. In the case of the Mac, that generally means the laptop lasts for 10-15 years unless there is some catastrophic issue. Generally after that long, when a failure happens even a moderate repair bill is likely to trigger a new purchase.


You won't be able to go beyond the built-in P-states, which in the end is a power limit, not a thermal one.


8GB and 16GB configurations seem more than enough..


I had a quad-core Mini with 16GB in 2011. Almost 10 years later we should be much further, especially as the Intel Mini allows up to 64GB. (Which you probably would use only if you upgraded the memory yourself).


We're not any further in terms of capacity per dollar, but we are advancing in terms of speed.

The M1's memory is LPDDR4X-4266 or LPDDR5-5500 (depending on the model, I guess?) which is about double the frequency of the memory in the Intel Macs.

Apparently, this alone seems to account for a lot of the M1's perf wins — see e.g. the explanation under "Geekbench, Single-Core" here: https://www.cpu-monkey.com/en/cpu-apple_m1-1804

Bleeding-edge-clocked DRAM is a lot more costly per GB to produce than middle-of-the-pack-fast DRAM. (Which is weird, given that process shrinks should make things cheaper; but there's a DRAM cartel, so maybe they've been lazy about process shrinks.)


Not all types of processes shrink equally well.

Apparently DRAM and NAND do not shrink as well because in addition to transistors in both cases you need to store some kind of charge in a way that is measurable later on - and the less material present, the less charge you are able to store, and the harder it is to measure.


> The M1's memory is LPDDR4X-4266 or LPDDR5-5500 (depending on the model, I guess?) which is about double the frequency of the memory in the Intel Macs.

That's a high frequency, but having two LPDDR chips means at most you have 64 bits being transmitted at a time, right? Intel macs (at least the one I checked), along with most x86 laptops and desktops, transfer 128 bits at a time.
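
Back-of-envelope, treating those numbers as MT/s and with the bus widths below as pure assumptions for illustration (Apple hadn't published the M1's memory bus width at this point):

    # Peak theoretical bandwidth = transfers/sec * bytes per transfer
    def peak_gbs(mt_per_s, bus_width_bits):
        return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

    print(peak_gbs(2133, 128))  # ~34.1 GB/s: a 2133 MT/s, 128-bit Intel-era config
    print(peak_gbs(4266, 64))   # ~34.1 GB/s: LPDDR4X-4266 if the bus really is only 64 bits wide
    print(peak_gbs(4266, 128))  # ~68.3 GB/s: LPDDR4X-4266 if each package gets its own 64-bit channel

So doubling the transfer rate is a wash if the bus width is actually halved; the headline win only shows up if the two packages run as independent 64-bit channels.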

> Apparently, this alone seems to account for a lot of the M1's perf wins — see e.g. the explanation under "Geekbench, Single-Core" here

That's a vague and general statement that site always says, so I wouldn't put much stock into it.


No virtualisation -> I’m guessing no Docker.

Am I missing something?

Mind you, with 16GB, Docker won't be that useful.


Why do you think there is no virtualisation? Apple showed Linux running in a VM during WWDC already.


I missed that, I assumed virtualisation was dependent on Intel VT.

Then again I would have expected them to have discussed it as much as the video editing.

I am guessing that they’d need a M2 type chipset for accessing more RAM for that. Or maybe they’ve got a new way to do virtualisation since that is such a key thing these days.

Edit: thanks for pointing that out though, that’s why I mentioned it

https://developer.apple.com/documentation/virtualization

https://news.ycombinator.com/item?id=23922846

And the mentioned Virtio here:

http://www.linux-kvm.org/page/Virtio

How well this fits in with current virtualisation would be interesting to find out; I guess this will be for a later version of Big Sur, with a new beefier M2 chip.


Are they virtualizing x86 though? Having Docker running arm64 on laptops and Docker x86 on servers completely nullifies the main use case of Docker imo.


But you can run Docker on arm64 servers!


The intel Mac Mini is still available with the same 8GB in its base model, but configurable up to 16/32/64. RAM is definitely the biggest weakness of these new Macs.

On iOS they can get away with less RAM than the rest of the market by killing apps, relaunching them fast, and having severely restricted background processes. On Mac they won't have that luxury. At least they have fast SSDs to help with big pagefiles.

With the unified memory shared by the GPU, your 8GB computer doesn't even get its whole 8GB as main system memory.

When the touchbar MBP launched in 2016 people were already complaining that it couldn't spec up to 32GB like the competition. Four years later, and it's still capped at 16GB.

Hopefully they can grow this for next year's models.


And the Intel Mac Mini had user-replaceable RAM. Tired of fan noise and slow response, I went from a 4 Thunderbolt 2018 MacBook Pro with only 8GB of RAM to a 2018 Mac Mini with 32GB of RAM (originally 8GB, bought the RAM from Amazon and upgraded it).

The difference was incredible


8GB ram is just soul crushing - even for basic office workloads. I need 16GB minimum.


What in a basic office needs 8GB RAM?! I used Word 6.0 under Windows 95 with 64MB of RAM!

Have you looked at Activity Monitor to see what is butchering your memory?!


Well, for starters, the most obvious answer, which is Office 365. Have you glanced at the RAM use of those apps, ever?

Second: web browsers, which can easily grab 5-10GB by themselves or even more if RAM is available.

So in other words: everything.


Probably Chrome.


I'm idling at 18gb right now and doing what I consider to be next to nothing.


It doesn't make sense for the system not to 'grab' a big chunk of your RAM. That is what it is there for. You want stuff to be preloaded into RAM so you can access it quickly if needed. You only want to leave some of it free so that if you launch a new application it has breathing room.

For example Chrome will scale the amount of RAM it reserves based on how much you have available.


> It doesn't make sense for the system not to 'grab' a big chunk of your RAM. That is what it is there for. You want stuff to be preloaded into RAM so you can access it quickly if needed. You only want to leave some of it free so that if you launch a new application it has breathing room.

Cache is excluded from just about any tool that shows RAM use, at least on desktops. If the ram shows as in use, the default assumption should be that it's in active use and/or wasted, not cache/preloading.

> For example Chrome will scale the amount of RAM it reserves based on how much you have available.

Which features are you thinking about that reserve ram, specifically? The only thing I can think of offhand that looks at your system memory is tab killing, and that feature is very bad at letting go of memory until it's already causing problems.
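
For what it's worth, you can see that used-vs-cache distinction on your own machine with something like the snippet below (psutil's "available" figure already accounts for reclaimable cache, which is why it's usually the number worth watching rather than "used"):

    import psutil  # third-party: pip install psutil

    vm = psutil.virtual_memory()
    print(f"total:     {vm.total / 2**30:5.1f} GiB")
    print(f"used:      {vm.used / 2**30:5.1f} GiB")       # excludes most reclaimable cache on Linux
    print(f"available: {vm.available / 2**30:5.1f} GiB")  # what new processes can realistically claim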


That seems like a hell of a lot of RAM for next to nothing.

I'm not a mac user but that seems ridiculous. I'd be investigating what's hogging it all.


I build my desktops with a lot of ram.

I have chrome, Firefox, photoshop, vs code, docker and a few other things running. As a kid I had to manage RAM. As an adult, I buy enough RAM to not need to think about it.

I was committed to buying an M1 on day one. I won’t buy a machine with only 16gb of RAM.


I'm the same, my current desktop has 32GB, but still I'd be pretty concerned about 18GB in use with nothing running.


The 2018 Intel Mac Mini has user-replaceable RAM. The 2014 mini has fixed RAM.


Another note on the Mini and MacBook Pro (in higher end SKUs) - these both used to have four USB-C ports, and now only have two. The Mini at least keeps its a pair of USB-A ports, but on the MBP you're back on the road to dongle-hub-land.

I'm betting this is due to Thunderbolt controller and PCIe lane capacity. They couldn't do four Thunderbolt ports with the M1 SoC, so they dropped the ports. Having four USB-C ports but only two supporting Thunderbolt would be a more obvious step back from the previous MacBook Pro. This way people can just blame it on Apple doing Apple things, instead of seeing a technical limitation.


Yes, based on my experience on a mac, I would not buy any mac with less than 32gb ram (I personally use 64gb and it's so much nicer)...

Yes, it seems crazy, yes it's a lot of ram, but I like to be able to run VMs locally and not have to boot up instances on AWS (insert provider of choice), I like to keep tabs open in my browsers, I like not to have to close apps when I'm using them and I like my computer to be snappy. 64 GB allows that 16 doesn't, 32 barely does.


Having read a bit more about the new M1, I really think it is designed and specced for the new Air. The RAM is on the package, which makes it very performant, and 16GB is a reasonable limit for an Air-like computer. The Mini got cheaper and more powerful, so it is not a bad trade-off. I strongly assume that there will be variations/successors to the M1 which are going to support more memory and also more IO (more USB 4 ports, more screens).


From their schematic, the two DRAM modules were directly on the SoC - possibly to improve bandwidth etc. So it looks like this cannot be upgraded / replaced. That said, it might be worth it to look past the specs and just use your applications on these machines to see how they perform. SSD storage is much faster these days and if the new OS has decently optimized paging, performance will be decent as well.


Also, lack of 10gbe is a big let down...


16GB limit with the latest MBP M1 13inch seems a big downer, I will wait for 16 inch MBP refresh now.


You have to factor in possible memory management improvements with the M1 chip, and ability to run iOS apps instead: https://twitter.com/jckarter/status/1326240072522293248


That is fine with the Air. But for a small desktop computer not to support more than 16GB in 2021? Its predecessor allowed up to 64GB (and possibly more with suitable modules).


They've also halved the 2 base SSDs to 256/512

I thought with the last update they'd finally seen the light and moved to 512/1tb, now we're back with the silly 256gb.

If you factor in having to upgrade ram to 16gb and ssd to 512 it's only £100 shy of the old price. Good, but not as good as it looked to begin with.


You can get an external M2 USB 3.1 Gen 2 (10Gbps) enclosure plus 1TB M2 SSD for $130 and a 2TB for $320. That makes the 16GB Mac Mini 256GB a decent buy at $970 imo.

https://www.amazon.com/dp/B07MNFH1PX


For the mini sure, but it's a massive pain having an external drive for a laptop. I use one all the time and as well as being waaaaay slower even with a good drive, I lose it all the time.


Yeah it's not feasible at all to use external storage on a two port laptop. Dongles that allow you to plug in power and monitor are still just not reliable enough for storage, the only reliable connection I can get on my 2 port MBP is with a dedicated Apple USB-C to A adapter.

Shocked they're still selling the two port machine, it's been nothing but hassle for me as someone who has to use one.


That's why I'm waiting to upgrade my laptop.


I've still got my 2013 MacBook Pro. Still chugging along.

I'm hoping it can wait for v2 of the MacBook Pro 16"

Apple's second version of everything is always worth the wait.


For pro users, the fact that 32GB isn’t even an option is pretty surprising


My guess is that the next wave will be Pro.

And they will have significantly upgraded CPU/GPUs to match the memory.


But it's right there in the name: 13" MacBook Pro


The 13" 'pro' has never really been a 'real' pro. They were/are always spec'd with less cores than the 15"/16" and never had dedicated graphics.


There are two lines of 13" MacBook Pro, the two-port and four-port versions. The two-port always lagged behind the four-port, with older CPUs, less RAM, etc. The four-port (which has not yet been replaced) is configurable to 32GB of RAM.


Entry level 13" MacBook Pro is for prosumers.

Think web developers, photographers, bloggers etc.


Web developers and photographers are the opposite of 'prosumers', kind of by definition. Plus, think of the size of a full res photo coming out of a high-end phone, never mind a DSLR.


Most of the professional photographers that I work with have PC workstations with 64gb to 256gb of RAM. Retouching a 48MP HDR file in Photoshop needs roughly 800MB of RAM per layer and per undo step.


Old undo steps could be dumped to SSD pretty easily.

And while I understand that many people are stuck on photoshop, I bet it would be easy to beat 800MB by a whole lot. But so I can grasp the situation better, how many non-adjustment layers do those professional photographer use? And of those layers, how many have pixel data that covers more than 10% of the image?


From what I've seen, quite a lot of layers are effectively copies of the original image with global processing applied, e.g. different color temperature, blur, bloom, flare, hdr tone mapping, high-pass filter, local contrast equalization. And then those layers are being blended together using opacity masks.

For a model photo shoot retouch, you'd usually have copy layers with fine skin details (to be overlaid on top) and below that you have layers with more rough skin texture which you blur.

Also, quite a lot of them have rim lighting pointed on by using a copy of the image with remapped colors.

Then there's fake bokeh, local glow for warmth, liquify, etc.

So I would assume that the final file has 10 layers, all of which are roughly 8000x6000px, stored in RGB as float (cause you need negative values) and blended together with alpha masks. And I'd estimate that the average layer affects 80%+ of all pixels. So you effectively need to keep all of that in memory, because once you modify one of the lower layers (e.g. blur a wrinkle out of the skin) you'll need all the higher layers for compositing the final visible pixel value.


Huh, so a lot of data that could be stored in a compact way but probably won't be for various reasons.

Still, an 8k by 6k layer with 16 bit floats (which are plenty), stored in full, is less than 400MB. You can fit at least eleven into 4GB of memory.

I'll easily believe that those huge amounts of RAM make things go more smoothly, but it's probably more of a "photoshop doesn't try very hard to optimize memory use" problem than something inherent to photo editing.
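
For what it's worth, both figures floating around this subthread fall straight out of the same arithmetic; the channel count and bit depths here are my assumptions, not anything Photoshop-specific:

    # Uncompressed size of one fully-populated layer, in MB
    def layer_mb(width_px, height_px, channels, bytes_per_channel):
        return width_px * height_px * channels * bytes_per_channel / 1e6

    print(layer_mb(8000, 6000, 4, 4))  # 768.0 -> roughly the ~800MB/layer figure (RGBA, 32-bit float)
    print(layer_mb(8000, 6000, 4, 2))  # 384.0 -> roughly the "less than 400MB" figure (RGBA, 16-bit float)

Which is really the point of disagreement here: the per-layer cost is dominated by the pixel format the editor chooses, not by anything inherent to a 48MP image.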


So why are you blaming the end user for needing more hardware specs than you'd prefer because some 3rd party software vendor they are beholden to makes inefficient software?

Also, your "could be stored in a compact way" is meaningless. Unless your name is Richard and you've designed middle out compression, we are where we are as end users. I'd be happy if someone with your genius insights into editing of photo/video data would go to work for Adobe and revolutionize the way computers handle all of that data. Clearly, they have been at this too long and cannot learn a new trick. Better yet, form your own startup and compete directly with the behemoth that Adobe is and unburden all of us that are suffering life with monthly rental software with underspec'd hardware. Please, we're begging.


Where did I blame the end user?

> Also, your "could be stored in a compact way" is meaningless. [...]

That's getting way too personal. What the heck?

I'm not suggesting anything complex, either. If someone copies a layer 5 times and applies a low-cpu-cost filter to each copy, you don't have to store the result, just the original data and the filter parameters. You might be able to get something like this already, but it doesn't happen automatically. There are valid tradeoffs in simplicity vs. speed vs. memory.

"Could be done differently" is not me insulting everyone that doesn't do it that way!


You must not rate photographers very highly if you are mentioning them in the same sentence as "bloggers, etc".


I should wait for a 64 GB option. I've already got 16 GB on all my older laptops, so when buying a new gadget RAM and SSD should have better specs (you feel more RAM more than more cores in many usage scenarios).

It was surprising to see essentially the same form factor, the same operating system, and not much to distinguish the three machines presented (lots of repetition like "faster compiles with Xcode").

BTW, what's the size and weight of the new Air compared to the MacBook (which I liked, but which was killed before I could get one)?

Seeing two machines that are nearly identical reminds me of countries with two mainstream political parties - neither discriminates clearly what their USP is...


I don't think today's computers were aimed at those kinds of usecases.


Apple has a "missing middle" problem.

They have a ton of fantastic consumer-level computing devices, and one ridiculously-priced mega-computer.

But there are many of us that want something in the upper-middle: a fast computer that is upgradable, but maybe $2k and not $6k (and up).

(The iMac Pro is a dud. People that want a powerful desktop generally don't want a non-upgradable all-in-one.)


Apple's solution for upgradability for their corporate customers, is their leasing program. Rather than swapping parts in the Mac, you swap the Mac itself for a more-powerful model when needed — without having to buy/sell anything.


Apple has a missing-middle _strategy_


Apple doesn't care about your upgradability concerns on the notebook lineup. Once you get past that, it has traditionally done fairly well at covering a wide spectrum of users from the fanless MacBook to the high-powered MacBook Pros.


I have a late-2013 13" MBP with 16GB of memory. Seven years later I would expect a 13" MBP to support at least 32GB. I can get 13" Windows laptops that support 32GB of memory. The Mini is a regression, from 64GB to 16GB of memory. The only computer worth a damn is the new MBA.


Pretty sure my 2014 ish 13inch MBP with 16gb and 512 storage cost me around £1200, today speccing an M1 13inch MBP to the same 6 year old specs would cost almost £2000.

Seems absurd.


Wait just a bit and I'm sure your concerns in this area will entirely disappear.


They already disappeared, I switched to Windows in 2019.

I use MacStadium for compiling and testing iOS apps. I was wondering if the ARM machines would be worth a look, but they are disappointing. If I was still using Macs as my daily driver, I would buy the new MBA for a personal machine.


But 16gb is what I had in a computer 10 years ago.

I was just window shopping a new gaming rig, and 32gb is affordable (100 bucks), 64gb (200 bucks). Cheap as shit, what’s the hold up?


The memory is on package, not way out somewhere on the logic board. This will increase speed quite a bit, but limit the physical size of memory modules, and thus the amount. I think they worked themselves into a corner here until the 16”, which has a discrete GPU and a reconfiguration of the package.


A little bit up it was shown that the memory in the M1 is 5.5GHz DDR5. https://news.ycombinator.com/item?id=25050625

Can you please provide the link to 64GB DDR5-5500 for $200? I'd love to buy some too!


It's fair, but if they choose fast but expensive and unexpandable technology, possibly the choice is a failure from some perspectives. I think most people who buy the mini prefer RAM capacity over a faster iGPU.


https://www.newegg.com/p/pl?d=64gb+ram

I guess DDR 500 runs you 350. It looks like Apple charges you 600 for ddr 400 32gb. I don’t know, what am I missing here?


Can you actually link to a product, not a search ? Because none of the items coming up there are DDR5-5500, they're all DDR4-3600 or worse, as far as I can see.


I guess I was wrong, everything is ddr4.


I’m confused. The link is for DDR4, it’s all too slow, and Apple doesn’t offer a 32GB M1 option at this time.


A new processor architecture. Wait a couple months and you'll probably have the computer you wanted released too.


The DRAM seems to be integrated on the same package as the SoC.


I went to Apple's website right after I finished watching the keynote with the intention of buying a new Mac mini ... the lack of memory options above 16GB stopped that idea dead in its tracks though.


Also no 10G networking option. The combination of those feature exclusions makes it a dud for me; I don't want to have a $150 TB3 adapter hanging off the back, not when previous gen had it built in.


I bet “pros” never bought it and it’s only been viable as a basic desktop. Probably nobody ordered the 10 gigabit upgrade.

I bet they’re only upgrading it because it was essentially free. They already developed it for the developer transition kit.

I commend the idea of a small enthusiast mini desktop like a NUC but I don’t think the market is really there, or if they are, they’re not interested in a Mac.


I think it is notable the new mini’s colour is light silver, rather than the previous dark ‘pro’ silver. Presumably there will be another model a year from now.


Over the years the mini has had a variety of different shades and designs. I wouldn't read too much into it.


I bet “pros” never bought it.


16GB can be limiting for some workflows today, and doesn't give you much future-proofing (this RAM is permanent, right?)


Yes, it's in the SoC (or SiP now).


How do they get the ram into the SoC? Is it like a massive die?



Don’t touch Xcode then, it welcomes you to paging hell.


With that fast SSD, do you notice paging in Xcode? Would it be worth the extra $300 or however much Apple asks for extra 8GB of ram in the US store?


yes, you notice paging, even with 'fast' SSD.

or perhaps it's not 'paging', and just dumb luck I hit and see beachballs on multiple new higher-end macbook pros regularly.


It's not normally paging but thermal throttling, which involves the machine appearing to 'spin'; it's actually just the kernel keeping the cycles to itself, which typically gives you beachballs as a side effect.

And one tip is to use the right-hand USB-C ports for charging, not the left-hand ones, as for some reason or other they tend to cause the machine to heat up more...


The right-hand ones are the only ones that can drive external monitors (on mine anyway). I feel like I'm the only one that has this - I had a MBP 2019 - first batch - and I thought I'd read that one side was different than the other re: power. Power works on both sides, but monitors won't run from the left USB-C ports. But it's not documented anywhere. :/

thx for tip.


I just tried plugging my monitor into a right hand socket on my 2019 MBP, and it worked fine for me.

On my machine it is true that charging on the right is better. Charging on the left spins up the fan.


Just a thought, but maybe everyone should be appalled at that extra $300. And the lack of upgradability on a Pro machine, especially.


You're talking to Apple customers. Being gouged is a way of life for them.


The wording in the event supports this. Particularly when speaking about the Mini's fan "unlocking" the potential of the M1 chip.


This makes sense.

Most likely this is why the CPUs are all limited to 16GB. It's likely when they unwrap the 16 inch MacBook Pro, it will open up more configurations (more RAM in particular!) for the 13" MacBook Pro and hopefully the mini.


Going into the event, my thinking was that they'd have two aims:

1. "Wow" the audience given the anticipation without a full overhaul of the range. 2. Deliver some solid products that enable the transition while being effective for non-enthusiasts.

From my viewing they hit both. I expect they'll fill in the range next fall with bigger upgrades to the form factor.


I agree, it almost feels like they are going to have 3 main M-series CPUs. This one. One for the iMac and higher-end MBPs. And perhaps a third for the high-end iMac/Mac Pro.


RAM limits are pretty easy to explain. 16GB chips cost disproportionately more and use more power.

I wonder if they use 2 4GB chips or 1 8GB chip in the low-end SKU?


It's even easier to explain than that. The RAM is integrated into the CPU. While there are a few SKUs here, Apple only designed and built one CPU with 16GB RAM. The CPUs are binned. The CPUs where all RAM passed testing are sold as 16GB, the 8GB SKUs had a failure on one bank of RAM.

There are no 32 or 64 GB models because Apple isn't making a CPU with 32 or 64GB of RAM yet.


It looked to me that they were placing 2 DDR4 modules beside the chip.

https://images.anandtech.com/doci/16226/2020-11-10%2019_08_4...


Well maybe? Huh. I don't know now, certainly looks like it.


The logic board probably isn't the same, but the SoC [probably] is identical, and with it a lot of the surrounding features/implementation. My own speculation as well of course :)


I doubt the logic board is the same. It’s just that the M1 integrates so much.


Actually that was what I noticed in the video as well. The Mac mini has a huge amount of empty space. And the only difference was the cooling unit fitted on top.


There used to be a DVD player and a hard drive in there, there has to be spare room.

When comparing the mini to other SFF computers, be sure to note that the mini has the power supply built in, though, whereas most of the others have it external.


If you look at the power supplies it's 30 vs 60 Watts, definitely interested to see what kind of TDP Apple targets with these machines.


They've stated that they target 10W in the Air, the cooling system in the Pro is identical to the Intel so probably 28W ish, and the Mac Mini specs say maximum 150W continuous power, but that probably includes a few tens of watts of USB and peripherals.


2 port Pros usually have 15 watt parts and only a single fan. Of course, PL2 goes far above 15W.


M1 chip page shows that 10 watts is the thermal envelope for the Air


So I get the whole performance-per-watt spiel, but if they're targeting the same 10W as with Ice Lake Y-series[1], it's gonna be hot during continuous workloads like screen sharing, since they've seemingly decided to get rid of even the "chassis fan" they've had on 2019 Air.

[1] https://ark.intel.com/content/www/us/en/ark/products/196596/...


Worth noting that Intel doesn't actually honor those 10W listings, and often boosts above it for significant portions of usage.


This would make sense given the pricing, too.

For example, the $1,249 air is very similar to the $1,299 pro. The MBA has a bigger SSD, but the MBP has a bigger battery, isn't throttled (i.e. has a fan), and also has the touchbar (which may be controversial, but for the sake of comparison, remember that it comes with a manufacturing cost).

It seems reasonable that these are priced similarly. Of course, the machine I want is the MBP with the bigger SSD and no touchbar :)


The max RAM in all 3 is only 16GB :(


I noticed the slides kept saying "Up to" 8 GPU cores.

That left me wondering if there are different variants of the M1 with different core counts.

(Note: It always said 8 CPU cores)


Looks like there's two variations of the Air: one with 7 GPU cores and one with 8 GPU cores.

https://www.apple.com/shop/buy-mac/macbook-air


According to Apple's website the Macbook Air appears to only have 7 active GPU cores. I suspect that chips in the Air may be binned separately and may or may not support the higher clock speeds even with the active cooling of the Mac Mini and Macbook Pro.


7 on the base model, 8 on the upgrade. You're probably correct that this is a binning thing.


According to the tech specs on the Apple site, the Air is available with 7 or 8 GPU cores. All other new Macs have 8.


They did the exact same thing with the A12X and the A12Z. 7-core vs 8-core GPU is the only real difference between them.


My guess is maybe this is a yields thing for Apple Silicon? They use the same chips for Air and Pro, but shut off a faulty gpu core that didn't pass QA? Or a temperature thing.


One of the slides mentioned that the Air is limited to 10 watts, though. I wonder if it does have the same SoC but is nerfed to stay within 10 watts.


The two differences are cooling and that the base Air appears to receive binned processors with 7 GPU cores.


The MBP now only has 2 USB-C/Thunderbolt ports which would support this theory.


That's the same as previous low-end MBP.


That's right, but the 'regular' one had 4. I've already seen a pro user (in music production) complain about this.

But my point here is that the fact they are both the same supports the theory that the logic board is the same on both models.


They’re continuing to sell the 4 port Intel 13” Pro.


Maybe they'd launch a more expensive 4 USB-C/Thunderbolt ports model with their more powerful chip (and up to 64/128GB memory) like they did with the earlier MBP13s.


"Testing conducted by Apple in October 2020 using preproduction MacBook Air systems with Apple M1 chip and 8-core GPU, as well as production 1.2GHz quad-core Intel Core i7-based MacBook Air systems, all configured with 16GB RAM and 2TB SSD. Tested with prerelease Final Cut Pro 10.5 using a 55-second clip with 4K Apple ProRes RAW media, at 4096x2160 resolution and 59.94 frames per second, transcoded to Apple ProRes 422. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Air."

This is relevant. This means that the performance increase vs Intel is using the extremely throttled 1.2 GHz i7 as the baseline.


According to the Anandtech benchmark the A14 (what's in the latest iPhone) is 8% below the Ryzen 5950X. M1 is likely faster than that.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

Quote:

"In the overall SPEC2006 chart, the A14 is performing absolutely fantastic, taking the lead in absolute performance only falling short of AMD’s recent Ryzen 5000 series.

The fact that Apple is able to achieve this in a total device power consumption of 5W including the SoC, DRAM, and regulators, versus +21W (1185G7) and 49W (5950X) package power figures, without DRAM or regulation, is absolutely mind-blowing."


This is stupefying to me. Like, when Anand did the A12 or A11 reviews, I wrote:

"The craziest thing is how close [to 1/56th of a Xeon 8176] they get at 3W. That's the part that's staggering. What could Apple do with, say, laptop thermals? Desktop thermals? The mind creeps closer to boggling." -- me, 5 October 2018

Guess we're finding out.


Would this be a good chip for a data warehouse?


I think so, especially with the specialized hardware in there for machine learning (although in a data center you'd probably opt for specialized hardware like discrete GPUs). There are some players on the market for ARM servers as well.

That said, Apple won't make their chips available to the greater market. They might use variations of it in their own data centers, maybe.


> That said, Apple won't make their chips available to the greater market.

I wonder if this could potentially be a good time to bring back Xserve...


They definitely should. Even if the profit on the device itself wouldn't be decisive, chip making is all about production volume. Using Apple Silicon first for all the Apple-owned cloud servers and then offering it in Xserve units would be very profitable and could reduce the power draw of data warehouses a lot.


The question marks for me would be:

1) Is Apple interested (and capable) making macOS into a realistic choice for servers again?

2) If not, is Apple comfortable with selling a box that most people will put Linux on? Would they put resources into a Bootcamp-esque driver package?


The biggest question is: does Apple want to bring back the Xserve? If yes, there are several ways. If not, no good proposal will sway them :) But one thing was interesting during the AS announcement at WWDC. Federighi stressed how good a hypervisor macOS is. That was probably first of all a comment about virtualization on the new Macs, but it could also allow installing any operating system as a guest on a cloud server. Many cloud services run inside VMs.


The only reason for that statement is to silence those who would raise a stink over the loss of Bootcamp.

"See, you don't need native Linux on the machine, just use our hypervisor & Docker/VMs"


Yes, that was the context of the statement. But it also would give an answer how customers would use an AS Xserve.


The market for Mac OS specific services would be vanishingly small; everybody else can just use Linux. Apple's not going to make a play for a commodity market.


This is what boggles my mind. Like, are those benchmarks really comparable? I mean, Intel and AMD have been doing this far longer than Apple, and it seems as if Apple just came in and bent the laws of physics that Intel and AMD have been limited by.


Apple has been at it for 12 years (they bought PA Semi in 2008). The talent they brought onboard with PA Semi had been producing processors since 2003.

Experience counts, but Apple is pretty old-hat at this by now as well.


Apple has been designing chips since the A4 in 2010. Samsung, TSMC, and other manufacturers with more experience actually build the chips for Apple, and this M1 is no exception, being manufactured by TSMC.


Isn’t that just the ARM vs x86 difference? They’re completely different architectures, each with their own benefits and drawbacks.


With the size of modern chips the instruction set has very little to do with an overall chip design. Most of it is a balancing act between power, performance and area for things like caches, controllers, optimizations, accelerators and so on. Then there are also factors like verification complexity and the fab technology you are choosing.


I would have thought so too, but why are there no ARM based competitors coming out with similar numbers? Or are there?


Graviton2 I expect would stack up nicely.


One factor seems to be the vertical integration. It seems like these days there are a bunch of large companies succeeding by having the users of the hardware involved tightly with the designers of that hardware. Tesla is definitely doing it. I think Google is doing it with machine learning. And Apple has definitely been doing it with their phones and tablets. The feedback loop is tight, valuable, and beneficial. No need to make big compromises for backward compatibility, either, just update the software.


Personal opinion, no insider info

I think it's a good CPU, but I don't think it'll be a great CPU. Judging by how heavily they leaned on the efficiency, I am pretty sure that it will be just good enough to not be noticeable to most Mac users.

Can't wait to see actual benchmarks, these are interesting times.


You skipped the keynote parts where they say 2x, 4x, 8x faster... and they say faster than every other laptop CPU and faster than 98% of laptops/PCs in their class sold last year? They said it several times. But I agree... let's wait for the benchmarks (btw, iPads already outperform nearly all laptop CPUs).


> You skipped the keynote parts where they say faster 2x, 4x, 8x... and they say faster than every other laptop CPU and faster than 98% of laptops/PCs in their class sold last year? They said it several times.

The important part there is "in their class".

I'm sure the Apple silicon will impress in the future, but there's a reason they've only switched their lowest power laptops to M1 at launch.

The higher end laptops are still being sold with Intel CPUs.


That's not what they said during the keynote.

They said the Macbook Air (specifically) is "3x faster than the best selling PC laptop in its class" and that its "faster than 98% of PC laptops sold in the last year".

There was no "in its class" designation on the 98% figure. If they're taken at their word, its among every PC laptop sold in the past year, period.

Frankly, given what we saw today, and the leaked A14x benchmarks a few days ago (which may be this M1 chip or a different, lower power chip for the upcoming iPad Pro, either way); there is almost no chance that the 16" MBPs still being sold with Intel chips will be able to match the 13". They probably could have released a 16" model today with the M1 and it would still be an upgrade. But, they're probably holding back and waiting for a better graphics solution in an upcoming M1x-like chip.


This probably says more about the volume of low end PCs being sold than about the performance of the air.


If you believe that, then you've either been accidentally ignoring Apple's chip R&D over the past three years, or intentionally spinning it against them out of some more general dislike of the company.

The most powerful Macbook Pro, with a Core i9-9980HK, posts a Geekbench 5 of 1096/6869. The A12z in the iPad Pro posts a 1120/4648. This is a relatively fair comparison because both of these chips were released in ~2018-2019; Apple was winning in single-core performance at least a year ago, at a lower TDP, with no fan.

The A14, released this year, posts a 1584/4181 @ 6 watts. This is, frankly, incomprehensible. The most powerful single core mark ever submitted to Geekbench is the brand-spanking-new Ryzen 9 5950X; a 1627/15427 @ 105 watts & $800. Apple is close to matching the best AMD has, on a DESKTOP, at 5% the power draw, and with passive cooling.

We need to wait for M1 benchmarks, but this is an architecture that the PC market needs to be scared of. There is no evidence that they aren't capable of scaling multicore performance when provided a higher power envelope, especially given how freakin low the A14's TDP is already. What of the power envelope of the 16" Macbook Pro? If they can put 8 cores in the MBA, will they do 16 in the MBP16? God forbid, 24? Zen 3 is the only other architecture that approaches A14 performance, and it demands 10x the power to do it.


Not all geekbench scores are created equal. Comparing ARM and x86 scores is an exercise in futility as there are simply too many factors to work through. It also doesn't include all workload types.

For example, I can say with 100% confidence that M1 has nowhere near 32MB of onboard cache. Once it starts hitting cache limits, its performance will drop off a cliff, as fast cores that can't be fed are just slow cores. It's also worth noting that around 30% of the AMD power budget is just Infinity Fabric (HyperTransport 4.0). When things get wide and you have to manage that much wider complexity, the resulting control circuitry has a big effect on power consumption too.

All that said, I do wonder how much of a part the ISA plays here.


M1 has 16MB of L2 cache; 12MB dedicated to the HP cores and 4MB dedicated to the LP cores.

Another important consideration is the on-SoC DRAM. This is really incomparable to anything else on the market, x86 or ARM, so it's hard to say how this will impact performance, but it may help alleviate the need for a larger cache.

I think it's pretty clear that Apple has something special here when we're quibbling about the cache and power-draw-per-core differences of a 10 watt chip versus a 100 watt one; it misses the bigger picture that Apple did this at 10 watts. They're so far beyond their own class, and the next two above it, that we're frantically trying to explain it as anything except alien technology by drawing comparisons to chips which require power supplies the size of sixteen iPhones. Even if they were just short of mobile i9 performance (they're not), this would still be a massive feat of engineering worthy of an upgrade.


AMD's Smart Memory Access was recently announced. In unoptimized games, they're projecting a 5% performance boost between their stock overclock and SMA (rumors put the overclock at only around 1%).

The bigger issue here is bandwidth. AMD hasn't increased their APU graphics much because the slow DDR4 128-bit bus isn't sufficient (let alone when the CPU and GPU are both needing to use that bandwidth).

I also didn't mention PCIe lanes. They are notoriously power hungry and that higher TDP chip not only has way more, but also has PCIe 4 lanes which have twice the bandwidth and a big increase in power consumption (why they stuck with PCIe 3 on mobile).

It's also notable that even equal cache sizes are not created equal. Lowering the latency requires more sophisticated designs which also use more power.

https://www.amd.com/en/technologies/smart-access-memory


Really interesting analysis.

So you expect the M1 MBP will outpace the more expensive Intel 13 MBP they are selling for more money!!

How would that not destroy sales of their “higher end” intel MBP 13?

If so do you think it will be better across most workload types?


I don't see why they would care that they are destroying sales of the intel MBP 13. Let consumers buy what they want - the M1 chip is likely far higher profit margins than the intel variant, and encouraging consumers to the Apple chip model is definitely a profit driver.

Some people especially developers may be skeptical of leaving x86 at this stage. I think the smart ones would just delay a laptop purchase until ARM is proven with docker and other developer workflows.

Another consideration - companies buying Apple machines will likely stay on Intel for a longer time, as supporting both Intel and ARM from an enterprise IT perspective just sounds like a PITA.


Docker runs in a VM on MacOS anyway, it’s not running on bare metal. Is instruction-set even relevant in that world?


That uses hardware virtualization, which is very much dependent on the architecture. Running an x86 Docker image on an M1 would take a significant performance penalty.


Thank you for posting this! I was so confused when folks were saying 16GB of ram was too little.

I run Linux on all my machines, and even running many (5-10) containers, 16GB was plenty. I now understand a bit better.


My Linux laptop locks up every now and then with swapping when I'm running our app in k3s; three database servers (2 mysql, 1 clickhouse), 4 JVMs, node, rails, IntelliJ, Chrome, Firefox and Slack, and you're starting to hit the buffers. I was contemplating adding more ram; 64 GB looks appealing.

I would not buy a new machine today for work with less than 32 GB.


Apple is notorious for releasing products that cannibalize their own products, I don’t think that concern would dissuade them.

There are likely many people who are not ready to switch yet either.

It has been a while since the ppc/x86 transition, but I want to say it was a similar situation then


The first 20” Intel iMac was released in the same chassis as the G5 iMac it was replacing, with 2-3x the CPU speed. I believe they continued to sell the G5 model for a brief while, though, for those that needed a PPC machine.


I think that’s the point? They want to show that M1 decimates comparable Intel chips even at lower price points.

This release is entirely within Apples control, why would they risk damaging their brand releasing a chip with lower performance than the current Intel chips they are shipping. They would only do this at a time when they would completely dominate the competition.


Impossible to know until we get the hard numbers.

But, just looking at A14 performance and extrapolating its big/little 2/4 cores to M1's 4/4: in the shortest tl;dr possible, yes.

M1 should have stronger single-core CPU performance than any Mac Apple currently sells, including the Mac Pro. I think Apple's statement that they've produced the "world's fastest CPU core" is overall a valid statement to make, just from the info we independent third parties have, but only because AMD Zen 3 is so new. Essentially no third parties have Zen 3, and Apple probably doesn't for comparison, but just going on the information we know about Zen 3 and M1, it's very likely that Zen 3 will trade blows in single-core perf with the Firestorm cores in A14/M1. Likely very workload dependent, and it'll be difficult to say who is faster; they're both real marvels of technology.

Multicore is harder to make any definitive conclusions about.

The real issue in comparison before we get M1 samples is that it's a big/little 4/4. If we agree that Firestorm is god-powerful, then we can say pretty accurately that it's faster than any other four-core CPU (there are no four-core Zen 3 CPUs yet). There are other tertiary factors of course, but I think it's safe enough; so that covers the Intel MBP13. Apple has never had an issue cannibalizing their own sales, so I don't think they really care if Intel MBP13 sales drop.

But, the Intel MBP16 runs 6 & 8 core processors, and trying to theorycraft what performance the Icestorm cores in M1 will contribute gets difficult. My gut says that M1 w/ active cooling will outperform the six core i7 in every way, but will trade blows with the eight core i9. A major part of this is that the MBP16 still runs on 9th gen Intel chips. Another part is that cooling the i7/i9 has always been problematic, and those things hit a thermal limit under sustained load (then again, maybe the M1 will as well even with the fan, we'll see).

But, also to be clear: Apple is not putting the M1 in the MBP16. Most likely, they'll be revving it similar to how they do A14/A14x; think M1/M1x. This will probably come with more cores and a more powerful GPU, not to mention more memory, so I think the M1 and i9 comparisons, while interesting, are purely academic. They've got the thermal envelope to put more Firestorm cores inside this hypothetical M1x, and in that scenario, Intel has nothing that compares.


> But, the Intel MBP16 runs 6 & 8 core processors, and trying to theorycraft what performance the Icestorm cores in M1 will contribute gets difficult.

Anandtech's spec2006 benchmarks of the A14 [0] suggest the little cores are 1/3 of the performance of the big ones on integer, and 1/4 on floating point. (It was closer to 1/4 and 1/5 for the A13.) If that trend holds for the M1's cores, then that might help your estimates.

[0] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...
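
Taking those ratios at face value, a deliberately crude additive estimate of multicore throughput in "big-core equivalents" looks like this (real scaling is never perfectly additive, and the M1's clocks and caches differ from the A14's, so treat it as a rough upper bound):

    # Naive additive throughput estimate, in units of one big core
    def big_core_equivalents(big_cores, little_cores, little_ratio):
        return big_cores + little_cores * little_ratio

    print(big_core_equivalents(2, 4, 1/3))  # ~3.3  A14 (2+4), integer ratio
    print(big_core_equivalents(4, 4, 1/3))  # ~5.3  M1 (4+4), integer ratio
    print(big_core_equivalents(4, 4, 1/4))  # 5.0   M1 (4+4), floating-point ratio

So in this rough model the M1 behaves like a 5-ish-big-core chip, which fits the intuition above that the interesting comparisons are the 6- and 8-core parts in the MBP16 rather than quad-cores.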


Many people will get the other model because they specifically want an Intel machine. I would expect it to be replaced in the coming months.


> Not all geekbench scores are created equal. Comparing ARM and x86 scores is an exercise in futility as there are simply too many factors to work through. It also doesn't include all workload types.

The meme of saying that Geekbench is not a useful metric across cores, or that it is not representative of real-world usage, and therefore cannot be used needs to die. It’s not perfect, it can never be perfect. But it’s not like it will be randomly off by a factor of two. I’ve been running extremely compute-bound workloads on both Intel and Apple’s chips for quite a while and these chips are 100% what Geekbench says about them. Yes, they are just that good.


In the case of Infinity Fabric: high speed I/O links generally consume gobs of power just to wiggle the pins, long before any control plane gets involved.

In this case, it's high speed differential signaling, and that's going to have a /lot/ of active power. There's a lot of C*dv/dt going on there!


I’m pretty sure the amount of cache on the chip was on one of the slides; according to Wikipedia it’s 192 KB for instructions and 128 KB for data.

To me it seems pretty unlikely to be that important, because if you can have 16GB of memory in the chip, how hard can it be to increase the caches by a fraction of that?


A normal SRAM cell takes 6 transistors. 6 * 8 * 1024 * 1024 * <total_mb> is a big number.

Next, SRAM doesn't scale like normal transistors. TSMC N7 SRAM cells are 0.027 µm² while N5 cells are 0.021 µm² (1.35x); meanwhile, normal transistors got a 1.85x shrink.

I-cache is also different across architectures. x86 uses 15-20% less instruction memory for the same program (on average). This means for the same size cache that x86 can store more code and have a higher hit rate.

The next issue is latency. Larger cache sizes mean larger latencies. AMD and Intel have both used 64KB L1 and then moved back to 32KB because of latencies. The fact that x86 chips get such good performance with a fraction of the L1 cache points more to some kind of inefficiency in Apple's design. I'd guess AMD/Intel have much better prefetcher designs.
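
Plugging numbers into the 6-transistors-per-bit formula from the start of this comment (raw cell count only; tag arrays, sense amps and control logic add more on top):

    # Transistor count for the raw 6T cells of an SRAM array
    def sram_transistors(cache_mb, transistors_per_bit=6):
        return transistors_per_bit * 8 * 1024 * 1024 * cache_mb

    for mb in (1, 16, 32):
        print(f"{mb:>2} MB -> {sram_transistors(mb) / 1e9:.2f} billion transistors")
    #  1 MB -> 0.05 billion (~0.3% of a 16B-transistor chip)
    # 16 MB -> 0.81 billion (~5%)
    # 32 MB -> 1.61 billion (~10%)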


> 6 * 8 * 1024 * 1024 * <total_mb> is a big number.

No, when your chip has many billions of transistors that’s not a big number. For 1 MB that’s about 0.3%, a tiny number, even when multiplied by 1.85.

Next, the argument is that x64 chips are better because they have less cache, while before, the Apple chips supposedly couldn’t compete because Intel had more. That doesn’t make sense. And how are you drawing conclusions about the design and performance of a chip that’s not even on the market yet anyway?


With 16MB of cache, that's nearly 1 billion transistors out of 16 billion -- and that's without including all the cache control circuitry.

Maybe I'm misunderstanding, but the 1.85x number does not apply to SRAM.

I've said for a long time that the x86 ISA has a bigger impact on chip design and performance (esp per watt) than Intel or AMD would like to admit. You'll not find an over-the-top fan here.

My point is that x86 can do more with less cache than aarch64. If you're interested, RISC-V with compact instructions enabled (100% of production implementations to my knowledge) is around 15% more dense than x86 and around 30-35% more dense than aarch64.

This cache usage matters because of all the downsides of needing larger cache and because cache density is scaling at a fraction of normal transistors.

Anandtech puts A14 between Intel and AMD for int performance and less than both in float performance. The fact that Intel and AMD fare so well while Apple has over 6x the cache means they're doing something very fancy and efficient to make up the difference (though I'd still hypothesize that if Apple did similar optimizations, it would still wind up being more efficient due to using a better ISA).


I’m sure you are well versed in this matter, definitely better than I am. You don’t do a great job of explaining though, the story is all over the place.

I’ll just wait for the independent test results.


That's 16GB of DRAM. Caches are SRAM, which has a very different set of design tradeoffs. SRAM cells use more transistors so they can't be packed as tightly as DRAM.


16GB is 500 times as much as 32MB. Cache memory cells are not 500 times as large as normal memory cells.

If 32MB were too hard they could have easily gone for 1MB.

But they didn’t and that’s a pretty good indicator it doesn’t make a lot of difference.


The Ryzen 9 is a 16 (32 hyperthreads) core CPU, the A14 is a 2 big / 4 little core CPU. Power draw on an integer workload per thread seems roughly equivalent.


Last time I checked, the TDP/frequency curve of Apple's chips was unimpressive [0]. If you crank it up to 4GHz it's going to be as "bad" or even worse than a Ryzen CPU in terms of power consumption per core. There is no free lunch. Mobile chips simply run at a lower frequency, and ARM SoCs usually have heterogeneous cores.

>but this is an architecture that the PC market needs to be scared of.

This just means you know nothing about processor performance or benchmarks. If it was that easy to increase performance by a factor of 3x why hasn't AMD done so? Why did they only manage a 20% increase in IPC instead of your predicted 200%?

[0] https://images.anandtech.com/doci/14892/a12-fvcurve_575px.pn...


Isn’t this a RISC vs CISC thing though? AMD and Intel use a lot of complicated stuff to emulate the x86 instructions with a VM running on a RISC core. Apple controls hardware and software including LLVM so they compile efficient RISC code right from the get go. It’s their vertical integration thing again.


You've got a point on the optimization part; it's difficult to compare both chips when you're running completely different OSs, especially when one of them runs a specially optimized OS like iOS.

The closest it could get I think would be running a variant of Unix optimized for the Ryzen.



Don't just downvote me, explain. I'm asking because I'm interested. Why is my hypothesis wrong?


My point was this is marketing spin.

There's no way to say what "faster" means or what 98% they used. Is it faster than 98/100 models? or faster than 98% of the 261M laptops sold in 2019?

"faster than 98% of laptops sold last year" is a nice quotable soundbyte that will be spread without any of those details.

I'm actually a big fan of Apple's products. I'm sure all the numbers, R&D, and benchmarking will prove the M1 to be impressive.


I don't know much about this stuff but if all this is true then I don't see what's holding AMD / Intel back. Can't they just reverse-engineer whatever magic Apple comes up with, and that will lead to similar leaps in performance?


This wording is very hand-wavey. They can and should give a direct comparison; as consumers we don't know what the top 2% of sold laptops are, or which is the "best selling PC laptop in its class". Notice it's "best selling" and not most powerful.


> "faster than 98% of PC laptops sold in the last year"

Keep in mind that the $3k MBP is probably part of the 2% in the above quote. The large majority of laptops sold are ~$1k, and not the super high end machines.


"98% of laptops sold" is not a useful metric.

If you buy a Ferrari, they won't market it by saying their latest car is better than 98% of cars sold last year.

it is not a good metric to compare performance of budget laptops where sales is going to be higher to premium laptop.


Just a while ago Apple said the Air was in a class of its own, so....


So 2% of the airs sold last year are equal to or better than this.


The vast majority of PC laptops sold are cheap low-end volume computers.


Yeah, I just thought about all those 499/599 Euro (factory-new) laptops that are being sold in Germany. They are okayish but nothing I'd like to compare great systems against.


Really good observations - I suspect that the 16" MBPs may be in the 2% though.

Plus, given the use of Rosetta 2, they probably need a 2x or more improvement over existing models to be viable for existing x86 software. Interesting to speculate what the M chip in the 16" will look like - convert efficiency cores to performance cores?


My guess is that they'd still keep the efficiency cores around, but provide more performance cores. So likely a 12 or 16 core processor, with 4 or 6 of those dedicated to efficiency cores.

The M1 supposedly has a 10W TDP (at least in the MBA; it may be spec'd higher in the MBP13). If that's the case, there's a ton of power-envelope headroom to scale to more cores, given the i9 9980HK in the current MBP16 is spec'd at 45 watts.

I'm very scared of this architecture once it gets up to Mac Pro levels of power envelope. If it doesn't scale, then it doesn't scale, but assuming it does this is so far beyond Xeon/Zen 3 performance it'd be unfair to even compare them.

This is the effect of focusing first on efficiency, not raw power. Intel and AMD never did this; it's why they lost horribly in mobile. Their bread and butter is desktops and servers, where it doesn't matter. But, long term, it does; higher efficiency means you can pack more transistors into the same die without melting them. And it's far easier to scale a 10 watt chip up to use 50 watts than it is to do the opposite.


I'm old enough to remember the MacBook Pro intro (seems a long time ago!) when Steve Jobs said it's all about performance per watt.

My only worry about the systems with more cores (Mac Pro etc) are about the economics for Apple of making these chips in such small volume.

PS Interesting that from Anandtech the M1 has a smaller die area than the i5/i7 in the Intel Airs so plenty of room for more cores!


>This is the effect of focusing first on efficiency, not raw power. Intel and AMD never did this; it's why they lost horribly in mobile.

If you want a more efficient processor you can just reduce the frequency. You can't do that in the other direction. If your processor wasn't designed for 4Ghz+ then you can't clock it that high, so the real challenge is making the highest clocked CPU. AMD and Intel care a lot about efficiency and they use efficiency improvements to increase clock speeds and add more cores just like everyone else. What you are talking about is like semiconductor 101. It's so obvious nobody has to talk about it. If you think this is a competitive edge then you should read up more about this industry.

>Their bread and butter is desktops and servers, where it doesn't matter.

Efficiency matters a lot in the server and desktop market. Higher efficiency means more cores and a higher frequency.

>But, long term, it does; higher efficiency means you can pack more transistors into the same die without melting them.

No shit? Semiconductor 101??

>And it's far easier to scale a 10 watt chip up to use 50 watts than it is to do the opposite.

You mean... like those 64 core EPYC server processors that AMD has been producing for years...? Aren't you lacking a little in imagination?


Have you considered being less snarky as you careen your way through that comment without really understanding any of it? The efficiency cores on Apple’s processors handle tasks that the main, more power-hungry cores aren’t necessary for, which is a profoundly different situation from a typical server that is plugged in and typically runs near capacity all the time. Honestly, besides being against the rules, the swipes you make against the commenter for “not knowing about the industry” are just shocking to see considering how much you missed the point that was being made.


Without actual real-world performance tests from neutral 3rd parties, all of this really needs to be taken with a grain of salt. Never trust the maker of a product; test for yourself or wait for neutral reviewers who don't have a financial incentive.


Remember that the Intel CPUs have shitty thermals. 1.4 GHz with turbo up to 4, etc.

Smacking that with single cores on a 5nm process is probably pretty easy.


The reason is almost certainly that the lowest-power laptops are where the volume is, and it makes absolute sense to go for that part of the market first.

I'm really impressed by what they've done in this part of the market - which would have been impossible with Intel CPUs.


I think the main reason is discrete GPUs. I’m guessing they are currently working on PCI Express for M1. That or a much faster GPU on a different version of the SoC.


Based on what Apple says. Is there an independent cross platform benchmark which is actually relevant and that says the same?


No, because it hasn't shipped yet.


For how often they repeated "faster", I was surprised to see them announce no benchmarks with some sort of baseline or context. Faster than what, at doing what?


A number of times when they said that I thought, "but that is a GPU-heavy workload". It's not clear if they were comparing to existing Macs or to more recent Ryzen/Intel Xe machines, both of which have very significant GPU uplifts with their integrated GPUs.

AKA, I suspect a big part of those perf uplifts evaporates when compared with the laptops that are appearing in stores now.

And the iPad benchmarks also remain a wait-and-see, because the perf uplifts I've seen are either Safari-based or synthetic benchmarks like Geekbench, both of which seem to heavily favor Apple/ARM cores compared with more standard benchmarks. Apple's perf claims have been dubious for years, particularly back when the Mac was PPC. They would optimize or cherry-pick some special case which allowed them to make claims like "8x faster than the fastest PC", which never reflected overall machine perf.

I want to see how fast it does on the gcc parts of SPEC.


They are comparing to the MacBook processors which they kept as ancient 8th gen Intel quad cores.


Never rely on first party claims. Always wait for 3rd party benchmarks.


Did they say what they meant when they said faster?

I saw some graphs that looked unrealistically smooth, with two unlabeled axes.


They literally benchmarked the slowest 10th Gen i7 CPU on the market.

And I got so excited about it...


Which means that, according to Apple, it's better than the CPUs in the other laptops it's competing against. That slow i7 is typical for thin+light 13" machines. I fail to see how this is a bad thing or even sneaky marketing.


Intel's marketing is more at fault here. i7 CPUs range from slower 4-core 1.8GHz thin-and-light parts to high-end desktop 8-core 3.5GHz ones.

When the name i7 is mentioned, people are more likely to think of one of the high-end desktop processors rather than the mobile ones.


If they targeted exclusively the consumer or media editor market - sure.

But then they throw in code compilation... that's where they stop being honest.


They also said it's the fastest laptop CPU?


"in its class"


Classic Apple claim


So you're disappointed that the MacBook Air doesn't beat the CPU and GPU performance of a hulking "desktop replacement" style notebook?


The complaint is that Apple interprets “in its class” to mean literally the slowest mobile processor of the prior generation made by Intel. It’s not even “fastest in the class of low-priced value notebooks for everyday use” which is what a reasonable person might interpret that to mean.


The Pro is quoted as 2.8x the 1.7 GHz i7.

Not Intel's fastest chip, but if these benchmarks are correct, that's a big speed bump, and the higher-end Pros are yet to come. How fast does it have to be in this thermal envelope before you get excited?


I'd wait for actual benchmarks just judging by the amount of parachute clauses in the disclosures. Power consumption numbers should be great but I'm guessing it falls flat on some compute tasks that can't leverage custom instructions.


From Anandtech:

"Apple claims the M1 to be the fastest CPU in the world. Given our data on the A14, beating all of Intel’s designs, and just falling short of AMD’s newest Zen3 chips – a higher clocked Firestorm above 3GHz, the 50% larger L2 cache, and an unleashed TDP, we can certainly believe Apple and the M1 to be able to achieve that claim."

Not sure what the custom instructions you're referring to are.


How many cores does that have, and how many cores does the i7 have though?


Why is this a mystery or rhetorical question? Look on the website? It's public:

M1

> Apple M1 chip

> 8-core CPU with 4 performance cores and 4 efficiency cores

> 8-core GPU

> 16-core Neural Engine

Intel

> 1.7GHz quad-core Intel Core i7, Turbo Boost up to 4.5GHz, with 128MB of eDRAM

https://www.apple.com/macbook-pro-13/specs/

https://www.apple.com/shop/product/G0W42LL/A/refurbished-133...

Apple's claim:

> With an 8‑core CPU and 8‑core GPU, M1 on MacBook Pro delivers up to 2.8x faster CPU performance¹ and up to 5x faster graphics² than the previous generation.

https://www.apple.com/shop/buy-mac/macbook-pro/13-inch-space...

Fine print:

> Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip, as well as production 1.7GHz quad‑core Intel Core i7‑based 13‑inch MacBook Pro systems, all configured with 16GB RAM and 2TB SSD. Open source project built with prerelease Xcode 12.2 with Apple Clang 12.0.0, Ninja 1.10.0.git, and CMake 3.16.5. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

> Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip, as well as production 1.7GHz quad‑core Intel Core i7‑based 13‑inch MacBook Pro systems with Intel Iris Plus Graphics 645, all configured with 16GB RAM and 2TB SSD. Tested with prerelease Final Cut Pro 10.5 using a 10‑second project with Apple ProRes 422 video at 3840x2160 resolution and 30 frames per second. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.


The 2019 MBP 13 supposedly uses an 8th gen, 14nm Intel part (14nm is 6 year-old technology).

A more fair comparison would be Tiger Lake (20% IPC improvement) on Intel's terrible 10nm process. The most fair comparison would be zen 3 on 7nm, but even that is still a whole node behind.


> The 2019 MBP 13 supposedly uses an 8th gen, 14nm Intel part (14nm is 6 year-old technology). A more fair comparison would be Tiger Lake (20% IPC improvement) on Intel's terrible 10nm process. The most fair comparison would be zen 3 on 7nm, but even that is still a whole node behind.

I'm not sure exactly what your point is. The fact is, it IS on a better node. That's probably a big part of the performance picture. But as a user I don't care whether the improvement comes from architecture or node. I just care that it's better than the competition.


I agree, but the real question becomes whether users would prefer 11th gen Intel processors or zen 2/3 over what they're shipping now. Would you take AMD seriously if they started talking about how their 5950 beat an ancient 8th gen Intel processor? I expect a higher standard from hardware makers and don't give a free pass to anyone over that kind of shenanigan.


We'll see third party benchmarks in less than a week and then this discussion is moot.


Do you really? It’s nice if it’s faster (and a hard sell if it’s slower) but really most people aren’t going to do the same things on a Mac they are going to do on a Windows laptop. You can’t run Windows or Windows software on these Macs. And it’s really no use to compare for instance Office for Windows to Office for Mac.


It depends on how you define fair. The base level M1 MBP is the same price as the previous generation Intel MBP and has the same amount of ram and storage. So in terms of a generation-on-generation comparison, it's absolutely fair for Apple to position the benchmarks that way.


Yes, Wikipedia says it's i7-8557U which has 8 MB cache.

So comparing a

4-Core, 5nm, 16 MB cache, 10 watts TDP vs

4-Core, 14nm, 8 MB cache, 15 watts TDP.

https://ark.intel.com/content/www/us/en/ark/products/192996/...


4 + 4 vs 4 hyperthreaded I think. Not sure I follow the point though.


The new Apple chip has more cores, so faster performance doesn't necessarily mean faster single-core performance. Faster on a parallel benchmark doesn't necessarily translate into faster performance for most real-world scenarios. That is all. That being said, I do believe Apple has very competent chip designers.


From Anandtech

Apple claims the M1 to be the fastest CPU in the world. Given our data on the A14, beating all of Intel’s designs, and just falling short of AMD’s newest Zen3 chips – a higher clocked Firestorm above 3GHz, the 50% larger L2 cache, and an unleashed TDP, we can certainly believe Apple and the M1 to be able to achieve that claim.

That's per-core performance.


Also unclear is what they even meant when comparing performance to the “highest selling PC”; I’m not even sure analysts could tell you what the top-selling PC on the market is.


Only for the Air; the M1 Pro is tested against the equivalent i7 Pro, so I assume it's not throttled as much.


Sorry to ask, what is the "Fabric" module in their architecture? Also, is the "Neural Engine" some neural network specific processor?


The "Fabric" is the interconnect between different components on the chip. It's what the different units use to communicate and transport data.

The "Neural Engine" is a special compute unit specifically for machine learning. The iPhone Camera app uses it for semantic segmentation, for example.


Fabric is a common term for describing on-chip interconnects in a system-on-a-chip.

It's all the wires and buffers and pipeline registers and arbiters and decoders and muxes that connect the components together.

It's dominated by wires, typically, which is probably how it came to be known as a fabric. (Wild speculation on my part.) I've been hearing that term for a long time. Maybe 20 years?


The Neural Engine in iPhones is used for things ranging from photography enhancement to facial recognition for the unlock and Siri.


I wondered this too. In my experience, "fabric" typically refers to the programmable logic region in a FPGA-based SoC (such as the Xilinx Zynq). I'm assuming the M1 does not have this, so I'm not sure what it means in this context.


I think Apple has done an amazing job of pulling off their own silicon. At Sun I got to be involved peripherally in the effort that created SPARC and it was way more work than I had suspected. So kudos to Apple's design, engineering, and QA teams!

I am also noting that they have killed the "ARM SoCs will never match what Intel can do" story line. That always reminded me of the "Microcomputers will never be able to do what minicomputers can do" story line that everyone except DEC and HP seemed to realize wouldn't hold up over time. "Never say never" is a good motto in the computer business.

That said, as the integration continues apace, re-usability and repairability reach all-time lows. When you consider that you could take a 1970's minicomputer, disassemble it to the component level, and create an entirely different (and functional) computer out of those components, you get a sense of how quaint that notion seems these days. I doubt anyone will ever desolder an M1 chip and put it onto a new board of their own design. And that reminds me more of the IBM "die on substrate" modules that they started using in mainframes which made them basically nothing more than low grade gold ore if they no longer worked as a mainframe.


I agree about the upgradability/repairability. However, if you look at it like a cell phone, maybe it makes more sense. No one has ever complained that you couldn't upgrade the RAM or the CPU in an iPhone. In order to upgrade the hardware of your cell phone you have to purchase another one. I'm guessing Apple is looking at the Mac hardware in the same light. "Want more power? Buy a new laptop!"

The more interesting question is how does this affect the Mac desktops? It would be a shame to throw out a whole Mac Pro machine just because it only has 64GB RAM. Then again, this event did release the Mac Mini, which is a desktop and does not have upgradable RAM or storage or CPU. Hmmmm...


> I'm guessing Apple is looking at the Mac hardware in the same light. "Want more power? Buy a new laptop!"

How often do phone users actually upgrade their phones for more power? I mostly use my computer to develop software, play video games, edit media, and stream videos. I mostly use my phone to text, play music, and access the internet.

Almost every PC purchase I've ever made was driven by a need for better performance, but I've never upgraded my phone for that reason. I've upgraded my phone for a better screen, a better camera, a bigger battery, software support, new hardware features (4G, biometrics, USB-C, etc)... but never because I felt like I needed a faster processor or more system memory.


The iPhone has a very smooth upgrade process. You turn on the new iPhone next to your old one, it prompts you, and voila. It's like upgrading the hardware of your current phone.

I'm unsure if we're at a point where the same is feasible for Macs. e.g. I have configured my MBP a lot: the .zsh file, home directory, and endless other configuration.

I would not be surprised if they ditch the kernel completely and make everything an app (much like a Chromebook).


You can do something similar with Macs. Using Migration Assistant I was able to transfer all my data (.zsh, git repos, configs, Docker containers, everything.) Took less than an hour. In fact, I did this because I needed to upgrade my RAM


Yup, did this when I upgraded from a 2013 MacBook Air to a 2018 MacBook Pro. Everything worked great except for homebrew, which unfortunately had to be nuked and re-installed; but I can't really blame Migration Assistant for that.


I thought the parent comment was talking about a hardware upgrade on the same laptop. :)


I don't know about the new model but the 2018/2020 MacMini does have upgradeable memory.


iMac as well.

Also, the previous Mac Mini has 64G max memory and not the laughable 16G.


I think that is a good observation, perhaps at some point the motherboard becomes the swappable part?


It could be done, but companies would rather you buy a complete computer again and again. Same with cars. That way they sell more stuff and earn more. With the right environmental marketing tricks, like the recent one with chargers and the recycling-robot lies, the environmental impact is hidden.


Unlikely, it'll be as expensive as just buying a new one and it's unlikely Apple would offer full motherboard replacements easily.


I think that the future of computer customization is in virtualization. Instead of having a single very expensive machine to upgrade, we just go grab 30 servers from a bulk seller and use something like Xen or Proxmox to use them all as a single machine with 1 virtual CPU, 1 virtual disk and 1 virtual network interface. Got another computer? Just add it to the pool. So while computers are getting harder to disassemble, it's also getting easier to pool them together to share compute, network and storage.


Apropos SPARC, one of the most interesting features of that ISA (apart from mainstreaming register windows) was the tagged arithmetic instructions. Does anybody know why they didn't have much success in later ISAs?


Apple mentions TensorFlow explicitly in the ongoing presentation due to the new 16-core "Neural Engine" embedded in the M1 chip. Now that's an angle I did not expect on this release. Sounds exciting!

Edit: just to clarify, the Neural Engine itself is not really "new":

> The A11 also includes dedicated neural network hardware that Apple calls a "Neural Engine". This neural network hardware can perform up to 600 billion operations per second and is used for Face ID, Animoji and other machine learning tasks.[9] The neural engine allows Apple to implement neural network and machine learning in a more energy-efficient manner than using either the main CPU or the GPU.[14][15] However, third party apps cannot use the Neural Engine, leading to similar neural network performance to older iPhones.

Source: https://en.wikipedia.org/wiki/Apple_A11#Neural_Engine


I can't see how something that tiny can compete in any meaningful way with a giant nVidia type card for training. I'd imagine it's more for running models that have been trained already, like all the stuff they mentioned with Final Cut.


Not all NN models are behemoth BERTs, U-Nets or ResNets. Person detection, keyword spotting, anomaly detection... there are lots of smaller neural nets that can be accelerated by a wide range of hardware.


For an 18-hour-battery-life computer (MacBook Air) that now doesn't even have a fan, it's a completely different market segment from where Nvidia cards dwell.


Yeah, I would imagine it's intended for similar use cases as on iOS - for instance image/voice/video processing using ML models, and maybe for playing around with training - but it's not going to compete with a discrete GPU for heavy-duty training tasks.


It is just not meant to be a training device, so comparing it with data center or developer GPUs is useless. Faster inference for end users is what Apple mentions, and it's the only use case where this hardware makes sense.


Isn't it better to rent a cloud with as many GPUs as necessary for a time needed to train the model? I don't know state of things in ML.


Not necessarily.

It can be surprisingly cost-effective to invest a few $k in a hefty machine (or several) with some high-end GPUs to train with, given the exceedingly high price of cloud GPU compute. The money invested up front pays for itself in roughly a couple of months.

The "neural" chips in these machines are for accelerating inference. I.e. you already have a trained model, you quantise and shrink it, export it to ONNX or whatever Apple's CoreML requires, ship it to the client, and then it runs extra-fast, with relatively small power draw on the client machine due to the dedicated/specialised hardware.


For productionizing/training massive models, yes.

But in the development phase, when you are testing on a smaller corpus of data, to make sure your code works, the on-laptop dedicated chip could expedite the development process.


I agree with the parent poster that it's probably more about inference, not training.

If ML developers can assume that consumer machines (at least "proper consumer machines, like those made by Apple") will have support to do small-scale ML calculations efficiently, then that enables including various ML-based thingies in random consumer apps.


Cloud GPU instances are very expensive. If you get consumer GPUs not only do you save money, you can sell them afterwards for 50% of the purchasing price.


Curious to hear responses to this too..


Sure, inferencing doesn't need floating point instructions, so NVIDIA will stay the only real solution for desktop/laptop based model training for a long time.


Can anyone who knows about machine learning hardware comment on how much faster dedicated hardware is as opposed to, say, a vulkan compute shader?


On the NVidia A100, the standard FP32 performance is 20 TFLOPs, but if you use the tensor cores and all the ML features available then it peaks out at 300+ TFLOPs. Not exactly your question, but a simple reference point.

Now the accelerator in the M1 is only 11 TFLOPs. So it’s definitely not trying to compete as an accelerator for training.


That depends entirely on the hardware of both the ML accelerator and the GPU in question, as well as model architecture, -data and -size.

Unfortunately Apple was very vague when they described the method that yielded the claimed "9x faster ML" performance.

They compared results using an "Action Classification Model" (size? data types? dataset and batch size?) between an 8-core i7 and their M1 SoC. It isn't clear whether they're referring to training or inference, or whether it took place on the CPU or the SoC's iGPU; no GPU was mentioned anywhere either.

So until an independent 3rd party review is available, your question cannot be answered. 9x with dedicated hardware over a thermally- and power constrained CPU is no surprise, though.

Even the notoriously weak previous generation Intel SoCs could deliver up to 7.73x improvement when using the iGPU [1] with certain models. As you can see in the source, some models don't even benefit from GPU acceleration (at least as far as Intel's previous gen SoCs are concerned).

In the end, Apple's hardware isn't magic (even if they will say otherwise ;) and more power will translate into higher performance, so their SoC will be inferior to high-power GPUs running compute shaders.

[1] https://software.intel.com/content/www/us/en/develop/article...


It's kind of sad that AMD spent years to get mediocre TensorFlow support and Apple walks in with this. It really shows how huge Apple is.


In fairness, it's been possible to convert a TensorFlow model to a CoreML model for a while, and in April TensorFlow Lite added a CoreML delegate to run models on the Neural Engine.

https://blog.tensorflow.org/2020/04/tensorflow-lite-core-ml-...

So don't think of it as Apple walked right in with this so much as Apple has been shipping the neural engine for years and now they're finally making it available on macOS.


But they needed to; MacBooks are simply not an option if you want to train models. I don't expect crazy performance, but it would be great if MacBooks were an option again, at least for prototyping / model development.


Who mentioned training? Most of these chips are only any good for inference. A wonderful symphony with the Apple computing mantra, of course.


Totally agree.

Besides, can the neural engine be used to speed up other tasks?


Could this also be remedied by Apple supporting Nvidia GPUs again? And then you plug in a beefy eGPU?


I think it is interesting that Apple is at Nvidia's mercy again with the Nvidia-ARM deal. Hopefully they get their shit together.


I think this is for inference, not training, even though they use the term machine learning. They seem to just mean running models built with machine-learning approaches.

TensorFlow includes support for inference.


My thinking was along the same lines as yours, but the way Apple framed it seems to suggest that the M1 accelerates model training and not just inference. Here's the actual quote: "It also makes Mac Mini a great machine for developers, scientist and engineers. Utilizing deep learning technologies like tensorflow ... which are now accelerated by M1". It should be pretty straightforward to test this, though: install tensorflow-gpu on a Mac Mini and see the result. I suspect TF's latest branch should also indicate which GPUs are supported. Curious to hear more thoughts on this.
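
For what it's worth, a quick way to see what TensorFlow actually exposes on one of these machines is just to list its physical devices (assuming a stock TF 2.x install; whether an M1-accelerated build reports anything beyond "CPU" is exactly the open question):

    # Quick check of which devices a TensorFlow 2.x install can see.
    import tensorflow as tf

    print("TensorFlow version:", tf.__version__)
    for dev in tf.config.list_physical_devices():
        # A stock build typically only lists CPU devices; an accelerated build
        # would be expected to show more.
        print(dev.device_type, dev.name)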


They do not have any hardware combination which can actually support even modestly GPU-intensive training, sadly, hence all the touting of running models instead of training them.


Yeah, I was confused at that implication. I don't think these are designed for training!

(If they are or can be, I'm interested)


> (If they are or can be, I’m interested)

Exactly. Currently I am training my models using Google Colab and then exporting the model to run on my MBP. Would be interesting if I could do it locally

Another interesting thing is that (if this is for training) this will become the only accelerated version of TensorFlow for macOS, as there are no CUDA drivers for the latest macOS and AMD ROCm only supports a Linux runtime.


I’m hoping the M1 can be used for prototyping with small data sets, then final training on Google Colab with complete data sets.


Is this just opinion? Maybe they are designed ALSO for training. I wonder if these things can replace Nvidia graphics cards for training? The neural core has a LARGE area on the chip design, similar to the GPU area.


Yes, we need to see stronger evidence that the new MBPs can handle large amounts of ML training using TF or CreateML, so we don't have to get NVIDIA machines/laptops.


Would love to get GPU accelerated learning in Lobe (http://www.lobe.ai).


The lack of 32GB options on any of the models announced today is a shame. Or not, I guess, as it means I won't be buying anything today.

M1 looks impressive, hopefully it scales as well to higher TDPs.

Pretty much what I expected today with all the low power models getting updated.

With no fan I expect the Air will throttle the CPU quickly when using the high performance cores. Calling it an "8-core CPU" is a little cheeky IMHO but I guess everyone does it.

Looking at the config options it seems the CPU/SoC is literally the "M1" and that's it. No different speeds like before, so apparently the same CPU/SoC is in the Pro and the Air? I guess the fans in the Pro will give the high-performance cores more headroom, but it's still kinda odd. I wonder if that will carry over to the 16" MBP next year?

I am disappointed to see the design is identical to what we currently have. I was hoping we would see a little refinement with thinner screen bezels, perhaps a slightly thicker lid to fit in a better web cam (rather than just better "image processing") and FaceID.

Overall I am disappointed due to 16GB max RAM and the same old design. I guess we will see the bigger machines get updates sometimes between March and July 2021.

Also are we calling it the M1 CPU or the M1 SoC?


> I am very disappointed to see the design is identical to what we currently have

Yeah one possible explanation I can think of is that they're not 100% confident in how rollout will go, and they want to avoid the situation where people in the wild are showing off their "brand new macs" which are still going through teething issues. There's less chance of souring the brand impression of M1 if they blend into the product line better.

Alternatively, maybe they want to sell twice to early adopters: once for the new chip, and again in 12-18 months when they release a refreshed design.


I think it could also be that a lot of people will be 'calmed' by the design they're used to when you're trying to convince them of a new chip the average consumer might not understand


I don't know about that, I think the confident move would have been to release the new chips along with a big design update.

As you say, I think the average consumer doesn't understand the difference between an Intel chip and an Apple chip, and will probably not understand what if anything has changed with these new products.

I would say developers would be the group most anxious about an architecture change (which is probably why this announcement was very technically oriented), and developers on average are probably going to understand that design changes and architecture changes are basically orthogonal, and thus won't be comforted that much by a familiar design.

On the other side, average consumers probably aren't all that anxious due to the arch change, and would be more convinced that something new and exciting was happening if it actually looked different.


> developers on average are probably going to understand that design changes and architecture changes are basically orthogonal

If the reaction here to the Touch Bar is representative, perhaps they didn't want the M1 announcement to be overshadowed by whining about new features.


Probably a good mix of both. No need to send out a completely new design along with the new chips, plus the designs are really not that outdated. In any case these are squarely targeted at the "average consumer" market - lower budget, keeps computers for a very long time, places significant value on the higher battery life.

Less reliant on perfect compatibility of enterprise software, which I am sure is something they want a little more time to sort out before committing their higher end lineup.


> No 32GB options on any of the models announced today is a shame. Or not I guess as it means I won't be buying anything today

I think this was the fastest way to get something out quickly and cover as many SKUs as possible.

We'll need to wait for the next/larger SKU models to get out of the 16GB / 2TB / 2-port limitations. Maybe they should call those devices the MacBook Pro Max.

I wonder how many more SKUs will be needed. Obviously they are going to minimize them, but will they be able to cover Macbook Pro 16", iMac, iMac Pro and Mac Pro with only one (unlikely) or two (likely) more SKUs?


When Apple does eventually produce larger than 16GB / 2TB / 2-port Macs I hope they don’t name them MacBook Pro Max. As long as they are called Macs this naming convention seems silly. I’d prefer that we don’t have any Max Macs.


> I am very disappointed to see the design is identical to what we currently have

Lots of moving parts in a single generation is risky for scheduling, and changing CPU architecture is a big enough change for one generation.


Maybe they really really don't want professionals to buy the 1st gen M1 Macs, because we have the toughest use cases and will shit on it the hardest if something goes wrong. Lack of 32GB seems like a pretty clear signal.

The Mac has turned into a very nice luxury-consumer business. It is a status signal even if it's just used to check email and watch YouTube. It's a pretty smart idea to let these users be the gen 1 guinea pigs.


Also, just two ports on a Pro model. C'mon, there is plenty of room on both sides. Maybe it looks elegant, but then the cable mess ends up on the desk, thanks.


This MBP refresh replaces the lower-end MBP which also only had 2 ports.

This is a small detail that is easy to miss.

The "higher-end" MBPs with 4 ports are only availabe with the intel CPUs for now


Those 8 extra PCIe lanes eat a lot of power. It's worth noting that the Intel processor they used in the 2019 MBP 13 had 16 PCIe lanes (I think the chipset added more too), but this chip probably has 10-12 at the absolute most.

The "we integrated thunderbolt" bit is also implying an untruth that nobody else does that. Intel did that last year with Ice Lake. AMD will do that next year when they upgrade their IO die.


It was just mentioned in passing, and they did not imply that the others did not. It’s just the answer to an obvious question about these devices: “Does it support Thunderbolt?” “Yes, it’s integrated into the M1”.

You are reading too much into this.


Are there any other ARM devices with Thunderbolt? I am not aware of any.


No, but that's not relevant. Their x86-to-M1 graphic shows all the things merging together.

The point is that it's mostly Apple's fault to begin with. They chose to use outdated processors without integrated Thunderbolt. They chose to add their massive A10 die instead of a normal, optimized SSD controller (not to mention this screws over data recovery and SSD replacement). They are exaggerating at best about RAM, as the chips merely sit beside the core. The only thing they should really claim is the IO die, and even that's nothing too special.


> Also are we calling it the M1 CPU or the M1 SoC?

The M1 Chip: https://www.apple.com/mac/m1/


> Or not I guess as it means I won't be buying anything today

Honestly, people who don't use their computers as Facebook machines or have very specific things they want to do with the ML engine should probably stay away from Arm Macs for a couple years. There's bound to be plenty of software that doesn't work or doesn't work well, Docker will have ~limited usefulness until more and more people publish Arm images, etc. Plus I'm waiting for the typical Apple move of screwing early adopters by announcing that next year's model is 3x as fast, $200 cheaper, etc.


Hmmm... What is the day-1 ability for, say, a systems programmer to run VSCode/golang development locally? It'd be neat to find out. The fact that they don't say doesn't tell me much: either it's "not a problem" or it's "such a problem we won't talk about it".


The Visual Studio Code team tweeted that they are targeting a release for the Insiders channel by the end of November.

https://twitter.com/code/status/1326237772860981249?ref_src=...


Both should work in Rosetta for now and I expect them to become native in the next few months.


> Plus I'm waiting for the typical Apple move of screwing early adopters by announcing that next year's model is 3x as fast, $200 cheaper, etc.

They aren’t screwed; the early adopters adopt the new model early, by definition. They get the new model and then move on to the next new model.

I know several of these bold individuals.


I didn't watch the event, but if you have an Intel MacBook, will everything keep working as it is now? Even when upgrading to Big Sur?


Yeah, there’s no change for Intel Mac owners really. Given Apple will still be selling Intel Macs through 2021 and likely 2022, I’d expect them to be fully supported through something like 2026/7 (though your exact model may vary depending on its current age).


Such a good point. I wonder how many folks don't actually realize there's a different ecosystem for Docker x86 and ARM images.
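
If you want to check what you're actually running, the image metadata records the target architecture. Here's a small sketch with the Docker Python SDK (assumes `pip install docker`, a running daemon, and that the image has already been pulled; "alpine:latest" is just an example):

    # Sketch: inspect which architecture a locally available image was built for.
    import docker

    client = docker.from_env()
    image = client.images.get("alpine:latest")
    print(image.attrs["Architecture"])  # e.g. "amd64" vs "arm64"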


Will Apple do transparent emulation (x86-on-ARM) or just not run x86 binaries?


Rosetta 2 is an x86_64-to-ARM64 translation layer.

https://developer.apple.com/documentation/apple_silicon/abou...

Some things still won't run though. Among other issues, there's a page size difference (16 kb vs 4 kb) that bit Chrome (since fixed): https://bugs.chromium.org/p/chromium/issues/detail?id=110219...


> Some things still won't run though. Among other issues, there's a page size difference (16 kb vs 4 kb) that bit Chrome (since fixed): https://bugs.chromium.org/p/chromium/issues/detail?id=110219...

Rosetta 2 on shipping hardware uses 4kb pages; it was only the DTK that was 16kb-only.
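
For anyone curious what their own process sees, here's a small Python sketch (macOS-specific; the sysctl.proc_translated key is how Apple documents detecting Rosetta translation, and it simply doesn't exist on older systems or other OSes):

    # Sketch: report the page size this process sees and whether it runs under Rosetta.
    import ctypes
    import ctypes.util
    import resource

    print("page size:", resource.getpagesize(), "bytes")

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    translated = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(translated))
    # Returns 0 on success; nonzero means the key doesn't exist (non-Mac or older macOS).
    err = libc.sysctlbyname(b"sysctl.proc_translated",
                            ctypes.byref(translated), ctypes.byref(size),
                            None, ctypes.c_size_t(0))
    print("running under Rosetta:", err == 0 and translated.value == 1)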


Especially since it’s unified memory between the CPU and GPU - no wonder they were showing off mobile games on a desktop device


13" has always had unified memory.


I'm very curious to see some real benchmarks. I have no doubt that they've achieved state of the art efficiency using 5nm, but I have a harder time believing that they can outperform the highest end of AMD and Intel's mobile offerings by so much (especially without active cooling). Their completely unit-less graph axes are really something...


Read the fine print at the bottom; the claims they are making seem very..... not backed up by their testing. The chip they say they beat, which supposedly makes them the best in the world, is in a last-gen MacBook with an i7.


Their claimed 3x faster is massive even if the baseline is low. And they also achieved that vs. the Mac Mini's 3.6GHz i3. So I don't think fastest single-thread performance is an outlandish claim.

Historically speaking, Apple doesn't underdeliver on their claimed numbers. I'm excited to see what they'll do with a full desktop power budget.


Which was severely underpowered because the cooling solution was knee-capped. This is partly Intel's fault, but they made their last-gen so underwhelming that this _had_ to be better


If they could outperform AMD/Intel today, they would've offered 16" MBPs / iMacs / Mac Pros. Today they have power-efficient chips, so they converted their low-end machines, plus the Mini so that companies porting their software can start migrating their build farms.


I don't see why they need to rush into the higher end. Let ARM chips prove themselves at consumer scale. Enterprise is going to be wary of the new tech and potential compatibility issues. Proving it in the consumer market makes far more sense, and they really get to flex the benefits in something like a MacBook Air. Fanless, silent, and huge battery-life increases are really tangible upgrades.

A beefy chip in an MBP 16 will be exciting, and I'm sure they are working on that, but it will make far more sense once ARM is established in the "normal" consumer models.


They probably beat AMD/Intel on perf/power efficiency, which is why it makes sense for MBA and 13" MBP.

The smaller machines are also likely held back by cooling solutions, so if you have Intel beat on power efficiency in a tiny form factor, you can boost your clock speed too.


Considering that AMD also beats Intel on power/perf by a wide margin, and that Apple compared their results against Intel CPUs, I wouldn't be surprised if their power/performance were close to AMD's (they do have heterogeneous CPU cores, which of course is not the case in traditional x86).


Apple's perf/watt is much higher than AMD. Anandtech has the A14 within a few percent of the 5950X on single threaded workloads. Power consumption is 5 watts (entire iPhone 12) versus 49 watts (5950X package power).

Normalizing for performance (say A14 vs 5950X at 4.5GHz instead of 5GHz) would close the gap somewhat, but it's still huge. Perhaps 4x instead of 10x - those last 500MHz on the AMD chip cost a ton of power.

Of course, none of this is particularly surprising considering Apple is using both a newer process and gets nearly 60% higher IPC.

[1]: https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


You are comparing a desktop processor that's been tuned to boost as hard as thermals will allow to a mobile phone CPU?!?

AMD has a laptop series of CPUs; comparing the M1 to those is the only relevant comparison - the rest is apples to oranges.


AMD's Smart Access Memory would like to have a word. I'd note that in unoptimized games, they're projecting a 5% performance boost between their stock overclock and SAM (rumors put the overclock at only around 1%).

https://www.amd.com/en/technologies/smart-access-memory


I don't understand what that has to do with power/performance ? Did you mean to reply to someone else ?


Yes.

I would say on your topic though that Intel has a hybrid chip that might interest you. Lakefield pairs 4 modified Tremont cores with 1 Sunny Cove core.

https://www.anandtech.com/show/15877/intel-hybrid-cpu-lakefi...


I think we will see how this pans out in practice - but power efficiency is not something I associate with Intel - they just seem too far behind in manufacturing process to be competitive.


I find it interesting, but I'm not really interested in that market segment. Atom competes with the A53 on most things. I want a little more power.

I'd like to see 2 Zen cores paired with 4 updated Jaguar cores.


We don't even have the new Zen 3 mobile chips yet; if they are anything like the new desktop parts we just got last week, then everything Apple just put out is FUBAR.


I'm hoping Zen 3 delivers, because the 4000-series mobile chips show great promise but aren't available in anything premium (MacBook Pro level) and have supply issues.

That being said, Apple has an advantage in that they control the entire stack, and on the power-efficiency side they have experience with mobile CPU design: they can do things like heterogeneous cores and optimise for them throughout the stack. I think it would be challenging for x86 to do the same.


They can probably outperform them, but only for a few seconds/minutes. That's why they put a fan in the Pro.


Which actually sounds like a good fit for a dev machine, where you only compile once in a while, but do not sustain high loads


*looks at 1-hour compile-test runs on a Threadripper* Yes, of course, great fit.


It’s a great option for office work.

Need a day off? Have some errands? Hit the compile button.


lolwut. you mean let your CI/CD on a real computer/server compile it much faster so you can keep working?


Seems silly to do that compile/test on a laptop instead of a server.


Well they're advertising XCode so


if you have a server for on demand compiles why would you buy a macbook over el cheapo linux machine? besides vanity ofc


OSX and overall better dev experience?


Where's dmesg, kvm, namespaces, cgroups, and package management built in?

What about inodes and cpu spikes even running node on osx? Let alone doing that inside a container.

Nah. OSX is long gone for "better dev experience". Linux always wins.


Not at the company I work for.


On the one hand I'm kind of jealous, but on the other it's really fun :D


> better dev experience

what constitutes a better dev experience?


From just off the top of my head

- Same text editing shortcuts that work consistently throughout all apps in the system including your editor/IDE, browser, and terminal.

- iTerm2 having so called "legacy full screen mode" that I use exclusively. (I've been searching for something similar for windows/linux for quite some time).

- The general system polish and usability. This is not directly "dev experience", but it's something you interact with while doing development throughout the day and it's just hands down a lot better than anything linux has atm.


> - iTerm2 having so called "legacy full screen mode" that I use exclusively. (I've been searching for something similar for windows/linux for quite some time).

Is that about disabling animations? That can be done globally on windows and linux desktop environments.


No, it's about window being in a non-exclusive fullscreen mode. It's fullscreen, but at the same time can be on the background underneath other windows. Looks like this:

https://imgur.com/a/YNzcA0w

Essentially, I always have terminal in the background even when I'm not working acting as my wallpaper.


Ah, that's basically a frameless maximized window. It requires some tinkering but there are tools to force that behavior on other applications; for both windows and various linux window managers.


A mac server?


Which don't exist?


Exactly.


My compiles take like 8 minutes at worst.


You'd hit thermal throttling way before 8 minutes I'd guess.

A lot of my iPhone app builds are pretty quick though, < 1 minute.


I did a lot of development on a pixelbook. Things were mostly fine for development, but compiling took 2-3x as long due to throttling.

Our unit tests took around 30 minutes on a fast system though and were simply not worth running outside of the specific tests I was writing.


So if I understand correctly, they have the exact same chip in the Air as in the Pro? Will better thermals make that much of a difference? Is there really that big of a reason to get the Pro over the Air at this point?


Better thermals will make a big difference for anything where you need high power for a long period, like editing video for example. Anything where you only need shorter bursts of power won't see as big of a difference.

But better thermals will probably mean they can run the CPU in the Pro at higher base clock speed anyway, so it will probably be faster than the air all around - we'll have to wait for benchmarks to know for sure though.


The thing is that even new Ryzen cores start shining at >20W, while ARM SOCs (especially with 5nm) can do very well at these low power envelopes.


> (especially without active cooling)

It seems likely the higher end of the performance curves are only attainable on the systems with active cooling. Or at least only sustainable with active cooling. So the MacBook Air would still realize the efficiency gains, but never the peak performance on that chart.


Someone will probably Geekbench it by next week.


Apple's A series chips have been decimating the competition in phones for years now, what makes you think they can't do the same in this space?


The fact that, as in most industries, it's easy to catch up and very hard to pull ahead once you have. It's very telling that Apple didn't compare their M1 with something like AMD Zen 3.


They don't use AMD processors in their current products. It would have been a meaningless comparison to their customers.


Because Zen 3 has been out for a week


Max 16GB of RAM on these new machines is really not that great to be honest. The mini supported up to 64GB before.


Wow, that's actually a pretty big limitation. I guess it's tough to do 64 GB with their on-package unified memory.

I wonder if they're working on a version with discrete memory and GPU for the high end? They'll need it if they ever want to get Intel out of the Mac Pro.


This would seem to point toward a tiered RAM configuration that acts somewhat like Apple's old Fusion Drives: On-package RAM would be reserved for rapid and frequent read/write, while the OS would page to discrete RAM for lower priority. Discrete RAM would act as a sort of middle ground between on-package RAM and paging to the SSD.

Then again, maybe their in-house SSD controller is so blazing fast that the performance gains from this would, for most applications, be minimal.


Apple doesn't like writing to their SSDs much, to prevent wear-out.


Let's think about that a little bit. If the RAM is fast and the SSD is fast and the virtualization options are limited, then this is good enough?

Or, inspire me. Which processes really require occupying and allocating bigger blocks of RAM?

I personally don't want to purchase another machine with 16GB RAM, but that's mainly because I want the option of having a powerful Windows guest or two running at the same time. But if you take out that possibility, for now, what if the paradigm has changed just a tad?


SSD latency is still several orders of magnitude higher than RAM latency. Having similar magnitudes of total throughput (bandwidth) isn't enough to make RAM and SSDs comparable and thus remove the need for more RAM. Random access latency with basically no queue depth is very important for total performance. Certainly, SSDs are far better at this than hard drives were... but, SSDs still serve a different purpose.

Intel's Optane SSDs are based on a completely different technology than other SSDs, and their low queue depth latency is significantly better than any other SSDs out there, but good luck talking Apple into using more Intel stuff just when they're trying to switch away, and even then... just having some more real RAM would be better for most of these creator/pro workloads.


I have a project that won’t compile on systems with less than 32 GiB of RAM, and I refuse to refactor the hideously overgrown C++ template magic that landed me here.


I suspect "Apple silicon" will not really be very suitable for software engineering.


Their performance claims are the very essence of vague, but Apple sure seems certain it will be great for software engineering. I'm curious. I won't be convinced until we get some real data, but signs seem to point that way. What makes you strongly suspect it won't be great?

    Build code in Xcode up to 2.8x faster.
    
    [...]

    Compile four times as much code on a single charge, thanks to the game-changing performance per watt of the M1 chip.
source: https://www.apple.com/newsroom/2020/11/introducing-the-next-...

I have a hunch it will be adequate for single-threaded tasks and the real gains will come for multithreaded compilation, since its superior thermals should enable it to run all cores at full blast for longer periods of time without throttling, relative to the intel silicon it replaces.


Not everyone's codebase is an over-bloated mess


I've been building an over-bloated mess on Apple silicon for months now; it's been quite good at it actually.


For now, most developers use MacBooks, and tools like VS Code already have an Apple Silicon build.


Saying that "most developers use MacBooks" requires a very different understanding from mine of what the words "most" or "developers" mean.


You are downvoted, but you are right. In Asia (esp. India, Indonesia, the Philippines & China) but also in the EU, I see 'most developers' walking around with PC (Linux or Windows => mostly Windows, of course) laptops. I would say that by a very large margin 'most developers' on earth use Windows machines.

The most vocal and visible (and rich) ones have MacBooks though; I guess that's where the idea comes from.


I'm guessing some took issue with the possibly poorly phrased "our understanding [...] of what [...] 'developers' mean [differ]". It can be read as me dismissing people that use Macs as "not real developers", where my intent was to counteract the opposite: people dismissing others that might be using Windows as "not real developers" because otherwise they would be using Macs, which is circular logic that I have heard expressed in the past. And I say that as someone who has used Macs at work for the past decade.


According to the Stackoverflow Survey 2019, 45% use Windows, 29% use MacOS, 25% use GNU+Linux


It's actually from the 2020 survey; in 2019 Windows was at 47.5% and in 2018 at 49.9%. Unfortunately this metric apparently wasn't tracked in 2017, so we'll never be sure if it was above 50% back then.

So currently the majority of responding devs use a (mostly) POSIX-compliant operating system. That is actual food for thought.

Sources: https://insights.stackoverflow.com/survey/2018#technology-_-... https://insights.stackoverflow.com/survey/2019#technology-_-... https://insights.stackoverflow.com/survey/2020#technology-de...


I suspect your suspicion is going to be very wrong.


    Which processes really require occupying and 
    allocating bigger blocks of RAM?
It's not uncommon to work with e.g. 50GB+ databases these days.

They don't always need to be in RAM, particularly with modern SSD performance, but if you're using a laptop for your day job and you work with largeish data...


Virtualization and containers. Especially if you want to run an Electron based code editor next to it.


Containers on Mac still rely on virtualization, don't they? Will the new CPU arch have native virtualization support? Because if not, I suspect the virtualization layer might break with the translation to and from x86, and/or might take a pretty significant performance penalty.

A wild unsubstantiated guess of course, at this point (or rather, a worry of mine).


Containers on Mac still rely on virtualization, but Apple said at WWDC (and showed a demo) of Docker for Mac running on Apple Silicon, and of Parallels running a full Debian VM. Both were running ARM builds of Linux to run properly, and Apple added virtualization extensions to the new SoC so it doesn't have to resort to emulation (if the software running is ARM and not x86).

https://developer.apple.com/documentation/hypervisor


They must be... there's no chance they're wiping out their Intel lineup with machines that max at 16GB of RAM. Especially not for the Mac Pro.


I suspect/hope they are for the 16” MacBook Pro, which is still Intel-based.


They only launched their lower-performance machines today: Air, Mini, two-port Pro.

So that's the context in which to interpret the RAM they offer.


The context is the Mac Mini previously supported 64GB of RAM. In fact it was a pretty powerful machine, with options for a six core i7, 10Gb Ethernet, and as I said 64GB RAM. Now it's neutered with GigE and 16GB RAM, despite having much better CPU and GPU performance.


You can still buy an Intel Mac Mini with 64GB RAM and 10Gb Ethernet. If you don't like the M1 Mini, buy the Intel one.


But then you get worse CPU and GPU performance. What a choice.


SSD performance with their integrated IO controller might close the gap here, the same way that pretty fast storage on their iPhones makes the lack of RAM there not so debilitating.

But yeah, agreed that not having a 32GB option is somewhat disappointing.

(sent from my MacBook Pro with 32GB of RAM)


This is an interesting line of inquiry. In the gaming console world, a similar principle has been explicitly espoused by Sony in the decision to only go with 16GB of RAM (a smaller factor increase than any previous generation) for the Playstation 5, as the speed of its custom SSD architecture should in theory make that 16GB go a lot further than it would have with a more traditional design.


PS5 shares memory with the GPU, doesn't it? Which is different to just using RAM to load games.

I play modern games on a machine with an RX 5700 XT 8GB video card and 8GB of RAM without issues.... So maybe there's not much need for more memory...


And then PS5 load times are worse than Series X. Whoops.


That's when playing backwards-compatible games (PS4 games on PS5, and Xbox / 360 / One games on Xbox Series X) as Microsoft spent more time optimizing their console to play old games. PS5 loads next-gen titles significantly faster than the Xbox Series X due to the 2.5x faster SSD.


"significantly faster than the Xbox Series X"

You have every right to support your Sony stocks, but those are blatant lies, no?


That's for reasons that I would expect the technical readership of this site to understand.

Please take this shallow retort back to Kotaku.


mmmmm yeah, I will let you all be the guinea pigs here and check back next month.

I am very curious about the performance.

If the macbook air is editing 6k videos and scrubbing smoothly, I am optimistic.


Heh, I have no intention of being in the initial round of beta testing! I explicitly upgraded to an ice lake machine this summer in order to get the best Intel Mac while waiting for the dust to settle.

But I do believe the dust will settle, and I'm optimistic about the future of Apple Silicon.


While a valid workload for a segment, I would be a bit skeptical of generalizing the extraordinary video editing capability to general purpose computing, as those things are facilitated heavily by hardware decoders.


Agreed, but they did mention what sounded like impressive code compilation times in the presentation too, which is pretty salient for most of us here.

Of course, we'll need to wait for third party verification in the end.


Totally--was not diminishing the A14Z (er, M1) at all--just trying to moderate expectations from the quite astonishing video editing capability to a "measly" ~2x. Even the two-year-old iPad Pro A12X kicks ass on Clang benchmarks (see Geekbench), although I am not sure how sustained the performance is in that thermal envelope.


Out of curiosity, were we saying "thermal envelope" before the Apple presentation today?


I don't want my SSD being killed by being used as a swap device, especially as it's not replaceable anymore. My 16GB MBP can write 200-300GB a day if I'm pushing it hard. It doesn't bother me too much because I can just replace it with a bog standard NVMe drive. But if I'm going to buy a new Mac, I want lots of RAM.
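(If you want to see how hard your Mac is leaning on swap right now, the vm.swapusage sysctl is one way to check; total lifetime SSD writes need a SMART tool, but current swap usage is a few lines of C. A minimal sketch, assuming the xsw_usage struct from <sys/sysctl.h>:)

    // Minimal sketch: print current macOS swap usage via the vm.swapusage sysctl.
    #include <stdio.h>
    #include <sys/sysctl.h>

    int main(void) {
        struct xsw_usage swap;
        size_t len = sizeof(swap);
        if (sysctlbyname("vm.swapusage", &swap, &len, NULL, 0) != 0) {
            perror("sysctlbyname");
            return 1;
        }
        printf("swap total: %.1f MB, used: %.1f MB\n",
               swap.xsu_total / 1048576.0, swap.xsu_used / 1048576.0);
        return 0;
    }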


I’ll believe that when I see it. For like ten years now marketing gremlins have been breathlessly saying “these SSDs will be so fast they can replace RAM!” and it’s never even close.


For reference, I have a maxed 2018 mini I've upgraded twice -> 32GB then to 64GB.

The Amazon price for 2x32GB modules is lower than it has ever been as of today ($218.58) [1], and I have had no problem making full use of that memory in macOS. [2]

[1] https://camelcamelcamel.com/product/B07ZLCC6J8

[2] https://i.imgur.com/BDWrGw3.png


A bit more: I also run the Blackmagic RX 580 eGPU driving the Pro Display XDR. The system outperforms the current entry Mac Pro in Geekbench 5.

So I definitely have eyes on the new mini. My suspicion is the RX 580 will still provide 2x or more the graphical power of any of these machines.


And it costs an extra €224 to go from 8 to 16GB. This is ridiculously expensive.


Any option in Euros is about 30% more expensive than buying the same in the US or Hong Kong (which is pretty crummy, but possibly related to taxes and the cost of doing business in the EU).

EDIT: I don't mean VAT/sales tax, I've considered sales taxes in the comparison, but also exchange rates of $1.18/1€. The difference is almost exactly 30%. It looks like the cost of doing business in the EU is much higher, and/or Apple chooses to price their products however they want by region.


Most of the US prices don't include tax... EU prices include VAT...


I did include CA sales tax (in Los Angeles) of 9.5%, vs the VAT-inclusive price in France/Germany/Italy, and the difference _after_ including those was about 30% higher in the EU. Germany was the lowest, probably because they temp. reduced their sales tax from 19% to 16%.

I can actually get a round-trip economy flight (pre-COVID and now) to LA just to buy a Mac mini, and save about $400. It's really.. unfortunate.


It's not just sales tax, there's also import taxes no?


That's true, I'm comparing local purchase prices with currency conversion rates. The rest depends on your personal situation and tax jurisdiction.


No what I mean is that Apple pays import duties to bring the computer into the EU from its point of manufacture. This further increases the cost compared with the US or a duty-free zone like Hong Kong. 30% overall increase of price is what I would expect for import duties + VAT, so Apple isn’t overcharging here.


Oh ok, gotcha. I’m not trying to blame any entity here, just pointing out the price difference that’s larger than the sales tax difference between the EU and CA. Interestingly HK prices for iPhones are almost exactly California prices (within a couple of dollars) even though there is no sales tax in HK - that’s probably on Apple.


No import duty either. HK is considered an international transshipment zone with no tariffs.


Pop down to Switzerland. More than USA, less than Eurozone, and 7.7% VAT.


There's no import duty on computers into Germany.


A maxed-out Mac mini is $1699 pre-tax in the US and €1917 in Ireland. That's a €1558 pre-VAT price, or $1838 including the mandatory 2-year warranty.

$139 is NOT a 30% difference.


1917€ would be about $2262, and $1699 pre-tax would be about $1699*1.095=$1860 with tax, or about $402 more expensive.

I'm comparing retail pricing in France (with a 2-year EU warranty) to business pricing in the US (with a 1-year warranty). That comes out to 1949€ including VAT in France, or about $2299 USD, vs $1749 USD including Los Angeles/CA sales tax with business pricing.

About 31% more.


IMHO it's misleading to compare the gross prices due to different tax rates. JAlexoid's comparison is a much more honest representation of how much Apple marks up prices in Europe. I wouldn't expect Apple - or anyone - to eat the 10% sales tax/VAT difference.


Right, but I’m not trying to determine Apple’s markups. I just want to know where to buy a new Mac for myself. I don’t care who gets the markup, Apple, the gov’t, whatever.

Since you use words like “misleading” and “honest”, I wonder what your agenda is. Maybe you’re an Apple shareholder who’s allergic to any perceived criticism.


Then your argument is with the typical EU VAT tax model, not Apple.

It's not Apple's fault that our governments decided to tax the crap out of everything.


Don't forget VAT. I think US prices don't include taxes as it's state dependent.


It's not quite as insulting as their $800 2TB SSD.


You could get the smallest SSD (a laughable 256GB) and then add one or two external 10Gbps NVMe M.2 SSDs at very low cost and with adequate performance.


I mean, if it's an SLC chip, that could explain the price.


The fastest drive available today is the WD SN850: https://shop.westerndigital.com/en-us/products/internal-driv...

The consumer price for 2TB is $450. $800 is absurd.


This has always been par for the course for computer manufacturers. I used to spec out Dell PCs for the school I worked at. At the time, retail cost for a 1TB HDD was the same price as a 512GB SSD. But Dell was charging double for the SSD.

For me personally, I used to buy the lower-end models with small HDD and RAM, then upgrade them. But that's no longer an option with these machines.


Apple price is always 2x consumer price.


Also if it comes with $600 in cash stuffed inside, that would explain the price. But it's neither of those things.


It's not, although it is ridiculously fast. But most people don't need ridiculous storage performance; they just need the space.


It's not, Apple most likely use TLC 3D NAND. There's no definitive source but the only remaining SLC drives being made are enterprise (and a few at that), and I think MLC is the same by now. (Even Samsung's Pro series moved to TLC)


Holy cow, that's true. New 2020 Mini with a 16GB RAM, seriously?


Especially head scratching when you consider that a lot of folks still have 2012 Mac minis with 16GB of RAM.

I think 16GB is the bare minimum for a professional machine. Apple clears the bar here, but doesn't exceed it.

Maybe next year's machines? As a first product, I think it's good enough. And the performance gains elsewhere--if what Apple says is true--are actually pretty radically impressive.


It's possible to do development on 8 GB, you just can't use Electron apps.


To be fair, most of us doing dev on our machines probably have a Slack client or Mattermost app floating in the background.

...And we might even be doing dev in something like VS Code in the first place.

We all like to dunk on Electron, but it's kind of become part of the furniture at this point, for better or worse.


Maybe we could use the iPad version of Slack for better efficiency.


This is a really great point, and it does highlight a key advantage of Apple Silicon: this kind of thing will now be an option on Apple's new computers, in a way it wasn't before.


Yeah, until Slack throws a fit and decides that desktop users don't deserve to use it.


You can avoid Electron if you try hard enough. I use Slack through a native client, for example; before that I was using browser tabs.


Try hard enough, haha. I don't have a single Electron app, and yet have never purposely avoided them either.


Which native client? Slack for desktop is Electron.



I'm not stupid, dude; I know there are ways to avoid it.

Sometimes they're not worth it. Electron's existence itself is proof of "sometimes it's not worth it."


I'm learning Flutter on my Windows 10 desktop. I have Android Studio and the emulator open, plus Firefox, Chromium and Brave.

That's 14GB of RAM in total. My combined browser RAM use is about 3GB. The Flutter project is barely 20 lines of code. The PC was started about 2h ago.

I'd feel much better with 32GB on a dev machine (to be fair, my .NET projects require much less RAM than this).


Three browsers open is not a requirement for the vast majority of folks.

There are probably a number of useless Windows services that could be shut down as well.


> Three browsers open is not a requirement for the vast majority of folks.

Every developer who makes frontend things for the web should have a minimum of three browsers (Chrome, Firefox, Safari, but maybe others as well) open any time they're testing or debugging their code. That's quite a lot of developers.


Nah, most of us go browser by browser I think. I think I've been going with two browsers open at most the last 5 years, and at my current employer I'm so reliant on browserstack that I don't even have anything other than chrome running.


And 16GB is fine for that. That's not what is taking up the majority of the grandparent's RAM.


I have 1500 tabs open in Firefox, plus CLion, PyCharm, a few Jupyter kernels eating 5-15G, a few apps running in background - it’s often nearing 32G on a 32G 7-year old Mac and sometimes goes into swap space. I personally won’t consider anything less than 128G as a main machine at this point (and it’s a pity that you can’t swap upgradable RAM on iMacs for 256G).


That's a little radical...

People used to tell me that Java development was resource-hungry... but I somehow manage to build systems with 16GB. I didn't even go for 32GB on my new laptop.


...Can I ask why you have 1500 tabs open?


Stuff that’s easier to keep as open tabs (in a tree format via TreeStyleTab) and eventually close those trees when they are no longer needed, as opposed to bookmarking everything and eventually accumulating a bajillion of irrelevant bookmarks. E.g., I’m doing house renovation so I might have a good few hundred tabs open just on that topic which will eventually get closed.


On most browsers you can organize bookmarks in folders/tree structures. You could then delete folders/trees of bookmarks at a time, eliminating this "accumulating a bajillion of irrelevant bookmarks".


I know. Been there, done that. To each his own, I guess. An open tab is an open tab, if I close it, it's gone forever unless I bookmark it which I would very rarely do. A bookmark is an inverse, it's going to stay there forever unless you clean it up and manually delete it. In my experience, a few hundred more open tabs beats ten thousand dead bookmarks, and closing tabs is easier than cleaning up bookmarks.


Tabs being "open" doesn't mean they're loaded into ram.


Atom, Chrome and Firefox on a 2017 8GB 13" MBP - no issues in using it for development.

I upgraded it only because my keyboard died.


I guess I mean more that if you're going to buy a brand new dev machine in 2020, you shouldn't buy anything with less than 16GB of RAM.

You can still be productive right this moment on 8GB of RAM (you're proving it!), but the march of time will inevitably start to make that amount feel less and less tolerable, if it isn't already.

Personally, when I buy a dev machine, I'm generally trying to look at the next few years of ownership. Will 8GB of RAM cut it in 2023? Doubtful. 16GB? Yeah, a little safer. But 32GB would make me feel better.


What kind of development? Firefox, Spotify, Slack, WhatsApp, any JetBrains IDE, an iOS Simulator, and an Android Simulator will have my 16GB MBP swapping like crazy.


I run a few Electron apps daily within 4GB RAM, it's (fine - 1); =~ almost fine lol.


They're not advertising any 16GB models on their web site. It's all 8GB. Which you're not going to be able to upgrade.

I’m out


You can choose 16GB for +$200.


Ah yes just seen that.

8GB of RAM for £200? They can piss off. I paid £240 for 64GB of decent stuff in my desktop.


This is super interesting. I personally wouldn't consider an 8GB or 16GB laptop this year as my daily driver, but it's true that the performance gain from extra RAM beyond 8GB is marginal, especially for average audiences and especially when their performance is only measured externally.

Like, you might get super frustrated, develop mental health issues; not that corporate cares. Expenditure goes down, ROI might even slightly improve, so why bother?


> you might get super frustrated, develop mental health issues

Uh, what? Is your comment literally "because I don't have enough RAM in my computer my mental health will decline"?


Yes, literally?

I'm talking about 4GB DDR4, non-SSD office machines still in production that are a borderline crime against humanity.


But you were just talking about 8 GB machines with fast SSDs…


You have to admit, when you're having a busy day with looming deadlines and your machine starts chugging because it can't handle the Excel docs/dev work going on, it feels like the worst thing ever.

The kind of company that can't afford to give you the latest stuff is more likely to have those kinds of days all the time too, so it feels even worse. Management even rides your ass because you can't make it work...

Part of "making it" as a software dev in a lot of countries is getting a machine to your specs, not having a machine specced out for you by IT.


My 100 Chrome tabs easily consume 8GB


I agree. I was going to order one but 16GB is not enough for me. I guess I'll wait until 32 is available.


I know use cases are different, but I can't imagine 95% of people needing more than 16GB of RAM, especially with macOS "unified memory". I have IntelliJ, PyCharm, Firefox, Slack and a MongoDB instance running on my Windows laptop and it's only using 10GB. Who knows how much of that 10GB could be reduced.


"unified" memory is bad not good. It's agree with GPU.


My guess is that they are using up to 2 HBM2 memory stacks (from the picture). Each is limited to 8GB. If they were to go to HBM2e in the M2 they could get up to 2x24GB. The biggest advantage of HBM is lower power per bit, as the signals are all on the package running at a lower frequency.

The memory market is getting fragmented, with Nvidia seeming to have moved from HBM2 to GDDR6X (Micron being the only supplier), LPDDR for super low power (cell phones), and DDR4/5 for the rest of the market...


After more consideration it is much more likely to be LPDDR4X just like the recently released A14 chip. It would seem unlikely that they would have developed a brand new memory interface.


Are they using HBM2 memory? I keep waiting for AMD to do that in a cpu package.


16GB is fine for mobile and low end. I'm a developer and struggle to fill that amount of memory, even with VMs running.

I guess they made a cost trade-off for these machines, which will not be carried forward to the high end. Perhaps a new "M2" chipset, with discrete RAM and graphics. Next year?

It would be amazing if they could bring ECC for a decent price as well, time will tell.


I'm a rails dev and I constantly struggle with 16GB. Once you start up rails, background workers, webpack, vs code, MS teams, a database, plus your web browser you very quickly run out of memory.


Wasting that much is a choice.

I develop Django and unfortunately have to run a 4GB Windows 10 vm in the background for work, still have plenty of ram and don’t even have swap enabled.


It's not a choice, because not a single item on that list can be closed and still let me do my job. So rather than petitioning to replace 10 years of existing software development, I'll ask for a 32GB laptop.


Doing mobile dev on my 16GB MBP causes it to constantly swap, and on my 32GB desktop I often get very close to using all 32GB. I wouldn't buy a new dev machine with less than 64GB.


I do Android stuff occasionally and never got close to that. I suspect you’ve made some choices to use that much, and it’s your prerogative. Hardly necessary however.


And it's unified memory too, shared with the graphics.


The Intel GPUs shared RAM too


Which makes it worse; the Intel Macs look more appealing than these.


Depends what you need it for I guess, and how much differently ARM uses RAM than Intel. Anyone know?

Also how much OS X optimisation can help with that. The iPad Pro is a lot faster than most laptops with, what, 6gb memory? And iPhones always seem to have a lot less RAM than Samsung phones but still run faster.


iOS doesn't use swap; it just kills background apps when it needs to devote the whole RAM to a single app, and iOS apps are designed with that in mind.

macOS doesn't get to use any of those tricks.
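(To make the "designed with that in mind" part concrete: apps can subscribe to memory-pressure notifications and shed caches before the OS gets aggressive, and macOS apps can do the same even though the OS won't kill them for ignoring it. A minimal C sketch using libdispatch's memory-pressure source; the actual response is of course app-specific:)

    // Minimal sketch: react to memory-pressure events via a libdispatch source.
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void) {
        dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
        dispatch_source_t src = dispatch_source_create(
            DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0,
            DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL, q);

        dispatch_source_set_event_handler(src, ^{
            // A real app would drop caches / image buffers here instead of printing.
            printf("memory pressure, level mask: 0x%lx\n",
                   dispatch_source_get_data(src));
        });
        dispatch_resume(src);
        dispatch_main();  // keeps the demo process alive waiting for events
    }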


It's mostly the same, perhaps a little bit less.


It could even be advantageous


Also no more 10G ethernet.


And no way to buy a previous Mini with 10Gbe/64GB now either?... (from Apple)


At least in the US Store, the Intel Mac Mini is still for sale:

https://www.apple.com/shop/buy-mac/mac-mini/3.0ghz-intel-cor...


Getting one via their refurbished store is an option, if you're lucky and they have what you want: https://www.apple.com/shop/refurbished/mac/mac-mini


iPhone has typically had lower specs while outperforming its competitors in benchmarks. I think the integration goes a long way toward efficiency.


But Mac OS X isn't iOS. The relative memory usage should be the same between an intel Mac and an ARM Mac. Which means for power users this is extremely disappointing.


That's not obvious at all. The memory hierarchy is very different in an Apple SOC compared to a typical PC architecture. For example, I don't think there is the equivalent of a system level cache in a PC.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...


Power-users generally won't be purchasing the lowest/mobilest end for work purposes.


Some of us power users don't have a choice in what computer our work provides, and many companies just buy the absolute base-level MBPs: "'cos it's Pro, right? Says it right there on the box, so it's good enough."


> But Mac OS X isn't iOS.

Not yet. MacOS has been moving towards becoming iOS for a long time now. Now with iPad and iPhone apps running natively on them, they only need to lock a few more things down until it becomes an iPad with a windowed interface.


I'm expecting 16" MBP and iMac to support more along with an M1X chip or something along those lines.


It'll most likely ship some time in July, so could be an M2


Probably M1Z, like the iPad Pro.


That's it for the M1 crop of machines, higher-powered ones will go higher...


I bought a Mac mini 2 weeks ago and I was sweating when they announced the new Mac mini, not anymore now that its only max 16GB. Apple has effectively killed expandable storage and memory in all of their line up. No doubt in the future they will offer larger memory options when they release the 16" Macbook pro. Need 32GB a year later down the road for your mini? Just buy a new one!


I'm with you on the sweat factor as I just bought a new iMac 5K, but FYI it has a purpose-built expansion slot for RAM that surprised and pleased me. I bought it with the least RAM possible from Apple and maxed it out via an upgrade I bought from Newegg for like 40% of the price.


The current Intel Macbook Pros aren’t expandable either, though. 16GB is a disappointment for me too.


This is the first CPU/ SoC and the first SKUs. It's likely (almost certain) 32GB options for the 13" MBP will come out when the 16" MacBook is launched. I'm not so sure about the Mac mini, but considering they are just dropping the same chip on all three devices, I suspect the mini will have a higher end option as well.


The choice to do on-package RAM makes it hard. They limited themselves to just two dies here, so maybe a real "pro-focused" machine can have 32 GB later. I guess those of us that need the extra ram will have to wait for Apple to release the M2 with DDR5 support and then re-evaluate our options. But for now these are a hard no for me.


what is interesting to me is they all use the same chip but don't reveal their operating frequency. I am hoping I am just overlooking this.

I expect the Pro to run faster as it has a fan to support it but how much faster than the Air would it be?


Yeah this is surprising considering how much they talk about how great the processor is for ML, too...


The mini still does support 64GB of RAM.

They’re still selling the intel mini which supports up to 64GB, so that option has not been taken off the table if you need the RAM. This is just an additional option with a different trade off. Faster RAM but less of it.


Not all RAM is used the same, and the SSD is on par with RAM speeds from a decade ago. Probably won't need it. An iPad Pro does magic with only 4GB.


Right? "Oh we can't give you a 64GB MBP because Intel can't use low power dram yet". Launches Apple Silicon with 16Gb.


Did you expect they would launch every single configuration on day 1?

Apple said the transition is going to take 2 years. This is day 1. You can still buy Intel based Macs with 64GB RAM. When Apple phases those out and you still can't buy 64GB Macs, then you can complain.

What are the chances that happens?


I confess that, now I've looked at all the details, I can't argue with what they've done. It seemed to me that it was daft to launch an MBP13 with less RAM than the Intel one, but in fact the M1 is so far ahead in every other respect that it would be foolish not to give people that option.

I might have to get an Air for myself, just to have an ARM based computer again, like it's 1988.


It just seems silly though. If you want a Mac Mini with 64GB of RAM you're forced to take the slower CPU and GPU. If you want a 13" MBP with 32GB RAM it will cost you $500 more and also get you a slower CPU and GPU.


It doesn't make sense because we're only seeing a part of the picture right now. Once they release the rest of the product line on their own architecture, it will come together. (Well mostly, Apple has been known to leave big holes in their offerings)


> Launches

They purposefully launched with consumer-level hardware. There is no way that the "real" pro machines will not let you ratchet all those specs up.


The 13" MacBook Pro has an option to upgrade to 32.


I just checked the store configurator and there is no option to upgrade the Pro with M1 to 32, which is consistent with the presentation.


Intel's MacBook Pro 13" is still available for purchase with 32GB upgrade. The M1 is capped at 16GB.

edit: spelling


The M1 MacBook Pro starts with 8GB and has an option for 16GB. I was talking about the Mac mini which in the Intel version has an option for 64GB (at +$1000 it's not exactly cheap).


The 2 lower machines on the selection page are the Intel ones, and they are the only ones with a 32GB option.


I don't see that.

You might have looked at the 16" one, which did not get updated today.


Really, 16GB max? I find it very strange how Apple went from targeting professionals, seeing the strategy work from the ground up for many years, but now somehow want to pivot to hobbyists/others instead of professionals.

Could someone try to help me figure out the reasoning behind Apple changing the target customer?


My guess is the pros are going to be the tock to the consumer tick in the Apple silicon upgrade cycle. It takes months before pros are comfortable upgrading MacOS to begin with, and it will probably be a year or two before they're comfortable that pro software vendors have flushed out their bugs on the new architecture.

Basically I'm guessing that no one who wants more than 16GB on their professional machine was going to upgrade on this cycle anyway. We'll see


Since about 2-3 years back, the film professionals I hang around with are all starting to dump Apple hardware in favor of PC workstations now, as they are tired of paying large amounts of money for something that can be had cheaper and with less restrictions. Especially when it comes to price vs quality for displays and hard drives that Apple sells.

I think today's presentation is just confirming what we've known for a while, Apple is pivoting away from professionals.


Same thing in the audio engineering world. It’s crazy how quickly it’s pivoted from Mac to PC in such a short timeframe.


While I use it for gaming, I cannot fathom anyone using Windows 10 professionally.


I think it depends a lot on your job. I used to do a lot of Cadence Virtuoso work in Windows (now I've moved to Linux), but the fact that it was running on Windows only mattered once, when I set up the tool. From then on I was full screen in the CAD tool, and my day-to-day was basically identical to my flow now. I imagine for a lot of professionals, like myself, it's the application suite that matters, not the OS.


Not sure what industry you're in (professionally) but at least in creative industry it's either Windows or OSX. And if you need really powerful hardware, it's either Windows or Linux (with custom software), at least that's what I'm seeing in my circles.

Although OSX used to be popular with the indie-professional scene, it's disappearing quicker than it appeared years back.


It is the leading OS for enterprise users by miles


Probably a saturated market. "What got them here won't get them there." They continuously need to show growth. As you said, the strategy of focusing on Pro users worked. It's now time to focus on not-so-Pro users... those who don't need matte displays, 32 gigs, or fancy keyboards, and would much rather use a colorful Touch Bar than another row of buttons.


This is their 13" model. Their 15" model is targeted for professionals.


There is no 15" model anymore and the 16" model has not been announced with an M1 yet.


> ... 16" model has not been announced with an M1 yet.

That's the point the above poster was making.

This is Apple's first CPU, there is a reason Apple said the transition will take 2 years. More powerful CPUs with more RAM for the 16" MacBook, the iMac, and the Mac Pro lines will be coming later. Some of those CPUs will likely be available for the 13" and the Mini.


I mean, "Pro" is in the name, I expect Pro-level specs, haha :P


It was a long time ago that MacBook Pro meant "MacBook for professionals"; other makers have now equalized and sometimes even surpassed Apple when it comes to producing quality hardware at a reasonable cost.


Right, but the person I'm responding to says the 15" model is for professionals -- I argue the whole line of MacBook Pro is "for professionals" :)


Nothing they launched now is for "professionals". The RAM is a joke.


Apple believes that there are different kinds of professionals. An artist that uses a lower-powered computer is still a professional, of course, just not a software engineer.


Both have a 16 GB limitation.


That's not true. The 16-inch model remains Intel and supports up to 64GB of memory.


They haven't announced a 15/16inch Apple Silicon Mac yet.


Oh indeed. Disregard my comment then, I've mixed something up.


There isn't a 16" MacBook Pro with Apple Silicon yet.


And macOS doesn't support Linux-style compressed RAM either, at least as far as I know.

[1] https://www.kernel.org/doc/html/latest/admin-guide/blockdev/...


One search away and you can prove yourself wrong. In fact they have had it since 10.9.


Cool. I stopped using it in 10.6. Glad macOS has caught up.


Caught up with the 1990s when RAM and disk compression were popular on Macs and PCs.

https://en.wikipedia.org/wiki/Stac_Electronics

Interestingly enough Stac was apparently co-founded by Nvidia/Stanford's Bill Dally. The story of them being "sherlocked" by MS-DOS 6.0 and successfully suing Microsoft is interesting as well.


That was before the current NS-based OS and so doesn't count


It does compress RAM, although I’m not sure if you mean something different here.

https://arstechnica.com/gadgets/2013/10/os-x-10-9/17/
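(You can watch the compressor at work, too: vm_stat prints "Pages occupied by compressor", and the same counters are available programmatically. A minimal C sketch, assuming host_statistics64() with the HOST_VM_INFO64 flavor:)

    // Minimal sketch: read macOS memory-compressor counters (same data as vm_stat).
    #include <mach/mach.h>
    #include <stdio.h>

    int main(void) {
        vm_statistics64_data_t vm;
        mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
        kern_return_t kr = host_statistics64(mach_host_self(), HOST_VM_INFO64,
                                             (host_info64_t)&vm, &count);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "host_statistics64 failed: %d\n", kr);
            return 1;
        }
        printf("pages occupied by compressor: %u\n", vm.compressor_page_count);
        printf("compressions: %llu, decompressions: %llu\n",
               (unsigned long long)vm.compressions,
               (unsigned long long)vm.decompressions);
        return 0;
    }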


The M1 is basically what Apple would call the A14X / A14Z if it were on an iPad Pro.

So they decided to reuse the A14X / M1 across all three products, and the only differentiation is their cooling and TDP. The MacBook Air is 10W TDP, and both the Mac mini and MacBook Pro are likely in the ~35W range.

They did mention the MacBook Air's SSD performance is now twice as fast, so this isn't exactly an iPad Pro with a keyboard. That is great, except I suddenly remember the 2019 MacBook Air actually had a slower SSD than the 2018 MacBook Air: where the 2018 could read at 2GB/s, the 2019 could only do 1.3GB/s. So even at 2x / 2.6GB/s it is still only slightly better than the 2018. And compared with modern NVMe SSDs, this is barely good enough.

Pricing is kept at $999, with the same old ridiculous upgrade pricing for RAM and storage. Although they did lower the education pricing to $899, a little bit better than the previous ~$829. But for the MacBook Pro, you are essentially paying $300 more for a fan and a Touch Bar. And the Pro is still limited to 16GB of memory (because it is now using the LPDDR RAM as used on the iPad).

I guess may be this is exciting for many, but extremely underwhelming to me.

A Quote from Steve Jobs:

“When you have a monopoly market share, the people who can make the company more successful are sales and marketing people, and they end up running the companies, the product people get driven out of the decision making forums. Companies forget what it takes to make great products. The product sensibility, the product genius that brought them to this monopolistic position is rotted out... The people running these companies have no conception of a good product versus a bad product. They've got no conception of the craftsmanship that's required to take a good idea and turn it into a good product. They really have no feeling in their heart about wanting to really help the customers”

And a small rant and wish of mine: dear Tim Cook / Apple, please stop saying you LOVE something. There is no need to tell me that, because if you did love something, we would know. Steve never said anything along those lines, but we all know he cared more than any of us could even imagine.


I was hopeful they'd weigh less. A 12.9-inch iPad Pro weighs 1.4lbs; the 13.3-inch MacBook Air M1 is 2.8lbs. I'm sure there are reasons, but there are companies, LG for instance, that make 2lb 13.3-inch notebooks and 3lb 16-inch notebooks. No, I don't want an LG, but I was hopeful for a 16" ARM MacBook Pro that weighs 3lbs. Now I'm pretty confident it will be the same as an Intel MacBook Pro 16" at 4.4lbs (heavy enough that my messenger bag leaves marks on my shoulder and gives me muscle pains if I have to lug it around all day). Somehow, getting it down to under 3lbs removes that pain. Maybe because the total includes the power supply, which is also bigger.

To put it another way: they could have taken an iPad Pro, expanded the screen to 13.3 inches, added a keyboard, and put macOS on it instead of iOS, and it would probably be 1.8lbs. I don't know what the tradeoffs would be, but I was excited by that possibility. It didn't happen though.


The 12.9” iPad Pro plus Magic Keyboard weighs 3lbs. Subtract a little weight for the wrapper and you are basically equivalent to an air with screen + keyboard + trackpad.


not according to apple's home page.

    iPad Pro 12.9"   1.391 lbs
    Magic Keyboard   0.5 lbs
    Total            1.891 lbs
https://www.apple.com/shop/product/MLA22LL/A/magic-keyboard-...

Apple doesn't list the weight of an iPad Pro but wikipedia does

https://en.wikipedia.org/wiki/IPad_Pro#Fourth_generation#Mod...


Wrong Magic Keyboard. Look at the one for iPad Pro 12.9”.


> LG for instance, that make 2 lbs 13.3 inch notebooks and 3 lbs 16 inch notebooks

Those are the Gram series, right? They use a magnesium alloy chassis that feels like flimsy plastic. They are light, but they feel like something that's going to imminently break compared to Apple's aluminum unibody.

In my view, the Gram trades off too much stiffness and chassis robustness for weight to be palatable to a non-niche audience.

I thought I was in the niche of users that prized weight over all else but I had concerns that a light on-the-go laptop that was so flimsy would last. I returned my Gram due to the fact that I just couldn't get used to Windows (after using Linux and Mac for >10 years) so the build robustness and weight ended up being a moot point


I agree the LG feels bad. I don't agree that Apple doesn't have the talent to solve this and make 30% lighter laptops.

As for weight, carrying a 4.4lbs notebook to and from work and to meetings never bothered me. Carrying the same notebook for days on end as a digital nomad, especially in hot and humid weather did.


The LG grams go out of their way to be lightweight and they prioritize it above just about everything else. What you get is a magnesium alloy (?) chassis that feels very flimsy, a screen that flexes a LOT, and a battery that is far smaller than the chassis can support [1] to meet their weight goals.

Apple isn't really hellbent on making the lightest laptop and don't really have the luxury of creating SKUs just for these niche needs.

[1] https://www.ifixit.com/Guide/LG+Gram+15-Inch+Repairability+A...


You know, honestly, I think it's just time to stop making comments like this and admit that Apple isn't making products that you are interested in.

Stop letting them live rent free in your head.

It's a consumer product. WHO. CARES.

For the average user, these are fantastic. A laptop that lasts beyond the number of waking hours in a day and has no noisy fans with basically untouchable performance in that form factor. And YOU'RE STILL MAD...about the vocabulary that Tim Cook speaks with compared to Steve Jobs, who has been scientifically verified as being a completely different person. (I highly doubt Steve never said anything about loving Apple products, either)

In other words, Steve Jobs has been dead for almost 10 years, it's time to leave his personality cult. If you liked what he said better, it's probably because he was one of the best salespeople who has ever lived - that doesn't change what the product actually IS.

And oh my goodness, stop getting worked up about specs and industry-standard price segmentation practices. 99% of customers DO. NOT. CARE. The VAST majority of MacBook Air buyers buy the base model and don't need anything more than that. macOS is perfectly usable on 8GB, heck my parents are using 4GB on theirs and it works fine - yes, with Catalina. No joke, you need to open up something like 40 chrome tabs before it matters. RAM requirements have completely stagnated for anything but the most exotic workloads.


I have to agree. While I'm sure the processors themselves are great, the anemic RAM and storage provided on the base models (8GB in 2020, seriously?) are outrageous. Especially considering that the M1 chips should be much cheaper for them than the pricey Intel processors.


That quote has summed up what's been going on at Apple for quite a while now. Under Tim Cook we've seen what was once a product line with each archetype having its place, with meaningful differences and care put into each, fragment into many different tiers designed to milk a few more hundred dollars while also trying to reduce the BOM at every step.

Long-time Apple fans who would in the past have championed how much care and vision Apple puts into its products now champion how working this way is so important, because apparently building shareholder value is something an end customer should care about.


> But for MacBook Pro, You are essentially paying $300 more for a Fan and Touch Bar.

More than that. Better screen. Bigger battery. Better speakers. Better mics (although I don't know how much I would buy this one as 'studio quality' is silly). A "USB Power Port". The fan will make the MBP perform without throttling a lot longer than the air.

The price difference is minor for all that IMO.


Is the screen better on the MBP? I see "2560-by-1600 native resolution at 227 pixels per inch" for both:

https://www.apple.com/macbook-air/specs/ https://www.apple.com/macbook-pro-13/specs/


500nits vs 400nits is only difference I'm spotting.


Steve Jobs called things "insanely great". Stop mythologizing him.


So there is an interesting trend here in the comments because HN is very developer focused. Everyone seems to say "well this does not work for my XXX because of RAM, etc." I would say yes, no surprises there! With this line up Apple is fine with that, because the Air and 13" are not targeted at you. This is the laptop for my wife or kids that use it for school, web, some music and video, nothing at a professional level. That battery life will make both of them very happy.

The reality is that they very likely focused on chip yield versus market segment for what they could ship today. For the professional user I would very much expect to see follow-on products with more RAM and cores in 1Q21 (MacBook Pro 16" and iMacs).


I'm a dev and I have the air as my personal, side-project computer, and I have already ordered the new one.

I use vim to develop primarily in python/js. So my need for performance is super low.


Hah, this reminds me of college. I remember some of my first CS courses, everyone either had a brand new mbp or an ALIENWARE super beefy laptop.

I had a 300 dollar 7" netbook. Turns out that unless you're doing deep learning, etc. (which you might as well just run remote IMO), a bottom of the line intel atom processor can handle the workloads that a CS program throws at it no problem. I mean, in reality you're doing things like solving sudoku puzzles or making some syscalls 95% of the time.

Admittedly, I ended up buying a desktop because I wanted the screen real estate when things got more complex, but I thought it was funny how many people had this conception that they would need to run their ray tracer from CS101 on their i7 with 32gb RAM


CS courses are the least demanding of all on computers.

Usually the other courses, when it comes to computers, need either an office suite or specific software, both of which are vastly more demanding than the CS toolchain, which has been mature since the '90s.

That is, unless the course somehow uses Java, which practically forces you to use an IDE that is full of bloat.


gosh, I just remembered my first IDE I used in class, Dr. Java.

I honestly thought that the ability to interact with java code without all the boilerplate was a great way to introduce people to coding. That being said, I'm sure languages and interpreters like python's were pretty mature in 2010 too.


NetBeans, Eclipse and VS run just fine on my 2009 Asus laptop with 8 GB; IntelliJ and Android Studio, though, I can forget about even trying.


The code I wrote for a company I cofounded and sold was written on a seven inch netbook as well. I wouldn't do it today, but I got to tell ya spending some years writing code on a tiny screen really does make you a better programmer. It's hard to have long lines or excessively long functions when your screen is that small.

I loved that little netbook.

It also saved my bacon when I forgot my decryption strategy for my uploaded bitcoin wallet and I remembered I kept a backup of the wallet on that thing that was under a pile of books in my closet.


Some people want an all-in-one computer.

They don't want a separate desktop for heavy tasks.


I was going to point out something like that but figured I would get shouted down. For me, I check in to Bitbucket and Jenkins pulls the build and runs it on a rack full of servers dedicated to the process... and yes to vim. :)

One of my colleagues placed an order for the Air about 10 seconds after you could.


Yes, developers with cloud backed infra have the most to gain from this architecture change.

All we need is huge battery life (check), better heat management (check), and snappy enough performance for our thin client machine (check). We're not training the next GPT on our laptop anyways.


I ran RubyMine and Rails on the Air for three years back in 2012-15. It was faster than the Pro at the time as measured by our unscientific execution of our entire test suite. The biggest difference came from SSD vs HDD of the Pro. For most use cases a reasonable amount of memory (8GB) and for developers (16GB) should be enough. The fact that Electron apps are gaining popularity and chromium is a memory hog is my only concern about future proofing it. I’m assuming even there the faster SSD performance might make paging seamless and unnoticeable.


It is literally called "Pro"; if that is not for professionals, then what is?

Its $1,000+ price tag is pretty steep if it's exclusively targeted at the consumer market.

Also, the wife and kids have plenty of alternative devices, including the iPad, for basic school, web and multimedia; the value prop for just that market is not convincing enough for me.

Why would I buy an iPad and an MB Pro if they are basically going to do the same type of work?


They have pro versions of their phones. It doesn’t mean anything anymore.


True, it does not seem to mean much at Apple any more.

While Apple's phones are not really "pro", I can see a phone being designed primarily for the professional market, like the BlackBerries and Nokia Communicators of old.


Not to be snarky, but if $1000+ is steep then you are not Apple's target market. :) They have phones that cost over $1000, so $1000 for a laptop seems to fit.

For me, we just went all in on the Apple ecosystem because it is simple and the least invasive privacy wise. I do tech for a living so when I get home the last thing I want to do is debug anything. I think in the last 5 years the number of times my family has asked me for help to resolve something is under 5. This was not the case when we had Windows systems, Windows media centers, etc.


For a $1,000+ price to be scalable, Apple cannot rely on just the consumer market of wealthy people in tech earning six-figure salaries and buying for personal use. They need the professional market, which is my point. [2]

I have considerably fewer problems troubleshooting Chromebooks for the technically challenged than I have ever had with macOS or Windows combined. [1]

[1] TBF, Windows has improved; S Mode, seamless upgrades, etc. have helped its user experience. It is funny that Windows is considered the more complex OS; its user-friendly practices, like strong backward compatibility and a focus on GUI-based workflows, shot it to the mainstream in the '90s. Times have changed indeed.

[2] They are not going to lose the professional user base overnight; there will be stickiness. Anecdotally, I bought the first 2017 Touch Bar MBP. For 10-15 years Apple had delivered good hardware for me, but that is the last Apple machine I will buy for a while. There is some stickiness in the professional market, but brand loyalty is very limited: if a device does not fit your needs, you find one which does. You may feel sad, but you still need to make money, so you move on.


Welp, I guess all those years of development I’ve been doing on my MacBook Air were nothing professional.


Of course it is possible to use almost any computer for work. That doesn't change the type of users that the particular model is designed for though.


The responses here don't seem to really acknowledge what a rabbit Apple has pulled out of the hat: 2-3x the performance from both a CPU and GPU perspective, whilst being far more energy- and thermally efficient. Of course this was kind of known and rumoured, but now we know.

Finally, real competition for Intel. If only these chips were available to others; they are literally running laps around the competition.

Curious where Apple's pro offerings will go, and whether these will be available as servers...


>Real competition for Intel

AMD would like a word.


I'm really worried that AMD has won the battle, but lost the war in terms of x86 being replaced by ARM in the next decade.


There is no danger to AMD in the PC market from ARM. Apple will not sell their CPUs to other PC vendors, and other ARM chipmakers aren't even competitive with Intel, let alone AMD. What this shows most of all is that the instruction set is not what matters; it is all about the core architecture. Intel is still dragging the ghost of Skylake along, while AMD has something better with Zen, Apple has their A cores, and everyone else is forced to use ARM reference cores because they don't have the resources to design their own. Only Nvidia has the ability to upset things.


No danger? It seems a very precarious position over the next five to ten years. If Apple find success with their strategy, then money is going to flow into the development of ARM chips from Microsoft, Amazon, Google, Nvidia, etc for their own products/servers.


I doubt this. Apple may end up with a massive competitive advantage over other laptops. I doubt Microsoft and PC OEMs are just going to stand on the sideline and watch Apple eat their lunch.

I would expect there to be an arms race on... ARM. And maybe MS suddenly stops treating Windows on ARM as a hobby project.


Well, it is not like x86 was homegrown at AMD either; it is not unreasonable to think they will be able to build great products on ARM.


Nowadays, performance per watt matters nearly as much as raw performance, so the M1 is the performance king for mobile and SFF. I'm excited to see how Apple Silicon does in the power-hungry desktop/workstation space.


I think it's more of a X86 vs ARM battle and not particularly Apple vs Intel. AMD will suffer too. And Graviton, Nuvia, Ampere are all competing against X86 incumbents.


Are servers important anymore? Seems to me that servers are made to automatically scale on AWS, and that is it. 8vCPU may be equivalent to 1x M1, it doesn’t really matter as long as you can have 4 of them.


Yes, greater performance means greater density. That lets you do more with constant everything else. Look at the uses for HPC if you want ideas for applications.


Absolutely.

More performance/watt in the server room ultimately translates to cloud providers being able to offer more performance/dollar to users.


The thing that makes me fairly bullish on this transition is these are the low end devices. It will be interesting to see what they replace the 16" MacBook Pro, iMacs and Mac Pros with.


Apple dug Intel's grave.


> Finally real competition for intel.

Unless Apple is going to sell these chips for other laptops, desktops and servers the only competition going on here is Apple competing with itself. Besides, AMD has been competing with Intel very well for the last couple years and have now pretty much taken over the market with their latest CPUs.


The most important feature of an M1-based Mac will likely be OS support into the distant future. I'm still using a 2012 rMBP, which was the first-gen retina version, and it's held up much better than any other computer I've ever bought, partly due to OS support into 2020. I imagine Apple will stand behind this new generation for a long time as well.

The new M1 SOCs max out at 16GB RAM, which seems like a major limitation, but the timing and latency of this integrated RAM is probably much better than what you could otherwise achieve. Meanwhile, improved SSD performance will probably have a larger impact on the whole system. I remember when I bought a 15k RPM hard drive ca. 2005 - it was like a new computer. Upgrading the slowest part of the storage hierarchy made the largest difference.

One slight disappointment in the Mac mini is the removal of two USB C / Thunderbolt ports and no option for 10G ethernet vs. the Intel model. An odd set of limitations in 2020.

Overall, at the price they're offering the Mac mini (haven't really considered the other models for myself), I think it's ok to take the plunge.

- Sent from my Dell hackintosh


> Overall, at the price they're offering the Mac mini (haven't really considered the other models for myself), I think it's ok to take the plunge.

I thought the same. I actually wonder whether the low prices aren't due to the App support being extremely limited at this point (basically only first party stuff)


Applications for intel macs will still run by way of Rosetta 2, so I wouldn't say " App support [is] extremely limited."


It sucks that the SSD is (presumably) not upgradeable anymore. Apple generally charges a ton for the larger SSD offerings, but it used to be that you could just buy a base model and replace the thing later.


Apple positions itself as being on the side of the consumer, and while they never really justified their soldered-down RAM (on laptops), one _might_ argue that it reduced failure rates. It's interesting that IBM discovered that soldering RAM to the individual compute modules in the Blue Gene/L (ca. 2004-2007) did improve reliability, in part because they had 2^16 modules in one cluster. I don't really buy that argument for laptop RAM, and especially not for SSDs, but I'm not sure if there's anything that can push back against Apple other than plain old competition, which they're trying to distance themselves from as much as possible, of course.


They're on the side of a certain type of consumer (doesn't even consider upgrading hardware down the line), and with these product launches they're making it more clear than ever that they don't want you to purchase their products if you don't fit into being that type of consumer.


Also, good luck recovering data from a water-damaged laptop. I have also seen some MacBooks with broken SSDs on eBay, sold for cheap. A soldered SSD is a ticking time bomb for your data.


Use external drives. They're cheaper and way larger.


And slower. And one bumped cable away from corrupted data.

No thanks, not for my boot drive.


NAS is a very established solution that is not expensive at all to set up at home.


Try editing 4K video from your NAS over 1GbE, because the mini no longer supports 10GbE.


You still have local storage...

This is the same concept as L1 -> L2 -> L3 Cache -> RAM -> Disk.

When you need something, bump it up, when you don't, move it away.


I'd prefer to store my hot raw video footage on my disk, and my preferred way to do that is by buying a $350 2TB nvme instead of paying Apple $1000 or whatever for the privilege.


I understand that it's less convenient, but you can actually buy that 2TB NVMe SSD as an external drive too. It should operate in the same ballpark as decent to higher-end internal NVMe SSD options.


Not slower, with USB/SSD speeds today.

Do you need a 512GB boot drive?


It only has two ports, and multiport dongles are not reliable enough for external storage; you can only trust single USB-C to USB-A dongles for that, so it means giving up a port.

I have a base-level 2-port MBP from work, and doing any work with large files has been a nightmare because of this: constantly juggling power/monitor and storage.


Out of curiosity, how many people are actually constrained by 16GB of RAM? What applications are you using it for that 16GB vs 32GB is actually a deal breaker for you?

Thinking about your average end-users, like most of my family, 8 to 16GB is about all they need for their systems (and if software were better written they'd probably need less). So is this specific work like machine learning? Video and image processing?


Software development.

If you have a microservices architecture or are using Kubernetes then you will easily need more than 16GB. I have 128GB on my desktop purely so that I can have my full platform running in the background whilst also doing development.

Also if you're doing data engineering e.g. Spark then you will likely want more than 16GB.


Adobe Premiere Pro barely works on 32GB, let alone 16GB. Perhaps not "normal people", but it is a Pro device. And bear in mind, due to the UMA, that 16GB is shared between GPU and CPU.

I have 74GB on my system (10GB of which is GPU) so I can run my dev env (Kubernetes, PGSQL, Visual Studio), data science, machine learning and do 6K video editing. But then, there's also zero chance that I would consider doing this on a laptop.


> Adobe Premiere Pro barely works on 32GB, let alone 16GB

Which is why I use Final Cut Pro. It was a little sluggish from time to time on my Air with 8GB of RAM, now on my mid-2015 MBP with 16GB of RAM it never stutters or slows down. Editing 1080p.


People running electron apps are likely constrained. Maybe with the ability to run iOS/iPadOS versions we can ditch things like the desktop version of Slack.


You think the iOS version of Slack is native?



Yeah fairly certain it is, despite not feeling that way from a UX perspective.


Yeah that's why I was asking, I use it and it doesn't feel native :)


On my 2015 MBP with 16gb I currently have 20-ish tabs in Chrome, Scrivener, GitKraken, Capture One, Slack, WhatsApp, Messenger, Books and Xcode open. The only things that's really bogged down is Capture One.


Now that the processors are different, there will be cases where we will need to run multiple VMs to use Linux or to use Windows etc. Then having more RAM will be very helpful.


I'm not in the market for Apple stuff but if I were, developing locally: SQL server drinks all the memory you throw at it and then some. Occasionally spinning up a VM.


I bookmarked this thread and keep coming back to it to see if any situations would apply to me. I’ve been fine with 16gb on Ubuntu with a Dell XPS 13 for VS Code/Python/PostgreSQL for what most people call CRUD work. Android development has been a little painful with both Android Studio and device emulation slowing things down but still workable. Otherwise I’ve had no problem with web app style work on large projects. Honestly I’d prefer 32gb for those things (and to “future proof”) but I need a Mac dev environment for early 2021 and will probably go with the MacBook Pro 16gb unless the reviews or benchmarks on things I actually do look poor.


I often use more than 16 GB of RAM. Generally, I have 3-4 Windows 10 VMs running which each take at least 4GB to be stable. So if I have 4 running that is all the RAM already.


Running 3-4 Windows 10 VMs is not a standard MacBook workload (at least not for a 13" MacBook).


I was running a Windows VM on my laptop earlier this year (before some of the telework issues got worked out). While, yes, you need at least 4GB/VM, by the time I had two running the issue wasn't RAM. Laptop CPUs just can't keep up with that workload IME. Not unless you get, maybe, the latest 16" MBP (if restricting ourselves to Apple's hardware).


> Generally, I have 3-4 Windows 10 VMs running

What are you doing that needs them all actually running simultaneously? Disk-backed VM pause/resume is extremely fast when your SSD's sequential read/write performance is measured in GBps.

I have a hard time imagining a workflow that actually uses multiple systems all at once, so to me what you're saying sounds like "Well of course my RAM gets filled if I fill it on purpose."


That's assuming that your SSD is doing nothing else at the time that it is pausing/resuming.

Often it is. And so whilst SSDs are fast, they still aren't fast enough to avoid slowing everything else down under high load.


Haha. I will admit that it is sometimes just for convenience, but when testing networking code, or writing things to interoperate with large system I do really need 3-4 VMs running.


I develop, and make music with rather substantial sample libraries, but 16GB is still comfy. 32GB is not worth it, IMO.


I think the storage size is more important in that regard.


It would be crippled within several years. Software always consumes more — browser, messaging, calls, games.

I bought a OnePlus 3 (6GB RAM) in 2016 and it's still going strong. Computing power has not changed much; the limiting factor is RAM.


8GB is still quite good for me. For container stuff there are servers.


I run 128GB of RAM on my desktop and I will never go back.

I don't want to be forced to close my complete dev. environments to work on another project; or play with a ML project.

The cost of additional RAM is marginal, to me it's the way it should be. Your time matters, your productivity matters, why not max it out?


On a laptop, the power cost of additional RAM is not marginal, so it's not only a matter of $.


This comment reminds me of the classic "640K ought to be enough for anybody" quote.


It's not. The point is that 16GB of RAM is actually enough for 99% of people. I see many more downsides in the lack of upgradeability after buying it, and in the cost of the SSD and RAM factory upgrades, than in the lack of a 32GB option, which will probably follow when they make the new revision of the SoC for the 16-inch laptop and replace the higher-tier 13-inch Intel models (which they are still selling with 32GB, if you really need that amount of memory).


I think the point is about RAM inflation into the future, rather than the size of the present-day market segment that uses large amounts of RAM.

When I buy an Apple computer, I pay a premium but I plan to use that computer for several years.


For me it's just Chrome. About 100-300 tabs at any given time.


An 8 core processor on a Macbook Air that is also energy efficient? That is truly impressive. I never thought I would consider using Macbook Airs after all the years of using Macbook Pros, but Apple surprises me once again.


It's 8-core, but they're 4 performance and 4 low-power cores, so it's not your normal 8-core chip. It's more like a big.LITTLE chip.


Anyone know how you interact with these cores as a developer/user? Say if I'm running some C code with OpenMP parallelism, can I bind it to three of the fast cores?


Binding to specific cores is not exposed to userspace, but you can influence which kinds of cores it's likely to be run on by setting thread priorities and QoS classes.
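For anyone curious, here's a minimal Swift sketch of what that looks like with GCD and pthread QoS. The queue labels and the work closures are just placeholders, and whether work actually lands on the performance or efficiency cores is ultimately up to the scheduler:

    import Foundation

    // macOS doesn't let you pin threads to cores, but QoS classes hint the
    // scheduler: higher-QoS work tends to go to the performance cores,
    // .background work to the efficiency cores.
    let compute = DispatchQueue(label: "compute", qos: .userInitiated,
                                attributes: .concurrent)
    let housekeeping = DispatchQueue(label: "housekeeping", qos: .background)

    compute.async {
        // CPU-bound work you want prioritised
    }
    housekeeping.async {
        // deferrable work that can run on the small cores
    }

    // For an existing pthread (e.g. one created by an OpenMP runtime),
    // you can set the QoS of the calling thread directly:
    _ = pthread_set_qos_class_self_np(QOS_CLASS_UTILITY, 0)

So for OpenMP code, the closest you can get is setting an appropriate QoS on the threads the runtime spawns, rather than binding to "three of the fast cores" explicitly.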


The macOS SDK exposes a processor affinity API that you can program against.

There isn't an option like taskset on Linux to pin or move tasks among specific cores, nor anything like what's exposed in Linux's sysfs.


With ARM, yes, and you can also selectively turn cores on and off. For example, when travelling with my Pinebook Pro I turn the big cores off to drastically improve battery life. However, it's up to Apple to expose this functionality, and we all know how much control Apple wants you to have over the computers you "license" from them.


WRT the Pinebook, I didn't know it could do that. Can it be done at runtime or do you have to change it at boot time?


They can be onlined or offlined at any time.

To offline a core:

    echo 0 > /sys/devices/system/cpu/cpuN/online

To online a core:

    echo 1 > /sys/devices/system/cpu/cpuN/online

Where cpuN is 0-4. Keep in mind there's always one core you cannot disable, since it has to process interrupts.


That sounds awesome. Sounds really fun to hack around. I might buy a Pinebook


Very cool, thank you.


Seems pretty clear they're using big.LITTLE-style low-power and high-power cores on the same chip.

No fan, however, is impressive.


But it can't sustain max performance. That is reserved for the MacBook Pro with a fan.


4 high-power and 4 low-power cores.


At 9:15 of the keynote they claim that the "high-efficiency" cores are as powerful as the outgoing dual-core MacBook Air's cores. Seems pretty good to me.


Agreed, but I wish I knew the numbers behind those statements.


I can't shake the feeling that they're trying to pull a fast one. For example, the Mac mini: 3x faster CPU than the old one, and 6x faster in graphics. At the same time, 5x faster than the "top selling PC desktop". What is that top-selling PC desktop, if it's essentially at or below the level of a 2018 Mac mini? Is that desktop at the same price level?

Also: people who use graphs without numbers on the axes should be shot on sight.


The slide doesn't say it, but the speaker said "in the same price range."


I honestly expected more. While compute performance and power efficiency seems to be really good, the M1 chip is apparently limited to 16GB RAM, 2TB storage and 2 Thunderbolt/USB4 controllers.

Comparing the new 13" MacBook Pro with the previous one that's a regression in every aspect as the Intel-based ones allow up to 32GB RAM and 4TB storage and offer 4 Thunderbolt 3 ports.


> While compute performance and power efficiency seems to be really good, the M1 chip is apparently limited to 16GB RAM, 2TB storage and 2 Thunderbolt/USB4 controllers.

In other words, M1 is for the majority of consumers.

    - power efficiency
    - 16GB RAM
    - 2TB storage
    - 2 Thunderbolt/USB4
That looks like a really good feature set for ordinary consumers. My guess is that there's going to be 2 "Pro" variants for a high powered laptop and a professional workstation.


> My guess is that there's going to be 2 "Pro" variants for a high powered laptop and a professional workstation.

My guess is that they'll introduce one other ARM-based chip for Macs in 2021, targeted at all remaining Macs (except for the Mac Pro). I believe the differences in performance can be achieved with a single chip simply by binning and different power envelopes.

I expect (or rather I hope) that chip then supports at least 64GB of memory, 8TB of NVMe storage and 4 Thunderbolt 4 ports.


I'm thinking we might see a M1X which might add a couple cores to all the compute units in the SoC.


If every model had the 16 GB I'd say it's a really good feature set for ordinary consumers but a new MacBook Pro having 8 GB of shared memory seems quite limiting even just for ordinary browsing. It's a great step but I think the 2nd gen of this is going to be what knocks it out of the park.


Seems like Apple is relying on macOS's (arguably good) memory compression and fast SSD swap performance to get high margins from the 8GB RAM while still not crippling regular users' light workloads.


I definitely feel like that's the case. I'm still on an 8GB 2015 MBP and it still works pretty well for multiple windows of Chrome + VSCode + node servers + spotify etc even at 1-4GB swap. The only thing that really bothers me is 4k performance and not being able to run multiple Docker containers (especially ones that needs a lot of RAM).


Let's be honest: Apple has not really expanded its share of the PC/laptop market in a long time. Ordinary consumers are not going to care about this, especially with the Apple premium tacked on. I think combined it runs about 10%, +/- year-to-year variation.


It'll be interesting to see what the market share actually is now.

Tim said at the beginning that 50% of Mac buyers were buying their first Mac. Apple's quarter that just concluded at the end of October was massive for the Mac: $7.1B, which is 22% YoY growth. And 50% of those Macs went to first-time Mac buyers. That is insane growth in new users for a 36-year-old product.


They only upgraded the entry level Macs, as expected. The previous 2 port 13" MBP was also limited to 16GB RAM.

The 4 port 13" MBP will most likely get upgraded early next year and also support more RAM. Apple did announce the transition will take them two years, not sure why people are surprised that their high end hardware is not updated yet on day one.


I don't think the devices' RAM is limited to 16GB by anything other than choice.

Think about it. If you want to introduce a new architecture, you can sell a bunch of entry-level devices to pandemic-constrained consumers during the holiday season. Their take-up of the product will spur development and flesh out the software ecosystem for Apple Silicon devices.

Once you cultivate the software ecosystem, more advanced users (professionals with very specific needs) can migrate toward the new architecture.

They're basically using Christmas to slingshot the new chip into the mainstream.


It's not a regression if you compare like with like: https://www.apple.com/mac/compare/?modelList=MacBookPro-13-M... (M1 “two ports” compared with Intel “two ports” and “four ports”)

I assume there will be a “four ports“ Apple Silicon version down the road.


So I guess Windows won't run on M1, so none of my Windows software/Steam games will work either. Will software developers start dropping support for Mac now that they'd need to maintain an ARM version as well? It's not clear how long Apple will support emulation with Rosetta before cutting it off. What's the Linux landscape look like for this chip? It's great that there's performance and battery life improvements, but Microsoft said the same things about WindowsRT and their move to ARM in 2012, and we know how well that turned out.


Windows has an arm build, so I wouldn't count it out entirely.

Edit: Never-mind, it looks like they are not porting bootcamp.


Apple said that only virtualization will be supported on their ARM chips. Locked through the bootloader (although I wouldn't be surprised if exploits are released to load up Linux/Windows).



That blows my mind. Microsoft has had an ARM version of Windows for years. I would have thought they would have leapt to support Bootcamp on these new Macs. That being said, I bought my MBP with 64GB of RAM to run Windows VMs in Parallels, but I no longer have any need to run software that works only under Windows. I think the market for Bootcamp is basically just gaming now, and the new consoles are really eroding the performance reasons for choosing a Windows box for that.


> That blows my mind. Microsoft has had an ARM version of Windows for years. I would have thought they would have leapt to support Bootcamp on these new Macs.

Why, and for whom, though? Yes, there is an ARM version of Windows, but there is pretty much no software for it, and most of the Windows ARM software (Office?) likely already has iOS versions and will have upcoming Mac ARM versions.

I have the feeling that 99%+ of Bootcamp users use it for x86 software.


At WWDC Apple said Bootcamp usage had fallen from 15% of users down to 2%.


> That blows my mind. Microsoft has had an ARM version of Windows for years.

And people went crazy over MS trying to mandate UEFI secure boot on these systems.

Now Apple does the same... And the result? Absolute silence. A world-wide mass of Apple fans suddenly not giving a fuck.

Talk about double standards.


I am totally relying on my memory, but didn't Craig Federighi say that you will be able to disable secure boot (at WWDC, either in the keynote or in an interview with MKBHD)?


Yes, the boot loader is not fully locked down. It'll presumably take a while to figure out how to run other software on it, but it should definitely be possible


Even if ARM Windows was supported by Apple, none of the games have ARM versions so it wouldn't help.


I'm pretty sure they'll use the same boot chain as in iPhones/iPads where you're not allowed to replace the operating system on the device.


If I was Microsoft I'd offer some kind of Windows in the cloud for Mac users.


It's happening with Azure Virtual Desktop.


It just needs packaging in a way that doesn't require calling sales for assistance nor trying to navigate Azure.


For the first time in a while I'm actually excited to see what is possible when the hardware and software are built by the same company.

These CPUs aren't meant for professionals but 18 to 20 hours of battery without all the heat from Intel CPUs is really nice.

Shame they didn't mention clock speeds


Current Intel ones supposedly have 10-12 hours of battery, and many times I find that they don't even get to 3 hours: leave open a browser tab that uses WebGL or an app that uses the discrete graphics card, and you are doomed. At least on the 16", I'm very disappointed with the battery life.


Do I need to worry now that a walled-garden company - with all its advantages - pulls ahead of open ecosystems?

I only use Linux at home. Will I be doomed to inferior H/W going forward?

This is not a rhetorical question or a flame... It's an honest question... Is Apple pulling ahead of everybody else and soon going to be our only option?


Your situation won't be worse than what would exist had Apple not moved so far ahead of the others. In fact, Apple moving ahead only encourages others to up their game. So you're better off, due to Apple, even if you don't buy their machines.


That's an excellent point, I didn't consider that.


It does appear Intel has become rather complacent in recent years, more competition can only be a good thing.


I doubt it. When they talk about performance, they always state it in terms of compute per watt. In absolute numbers these CPUs and GPUs will still be less performant; just my guess.

On top of that, for the foreseeable future I think a lot of professional software will stay on the x86 architecture, simply because most of it must also be available on Windows.


I'd agree, but Apple's position with the iPad likely will ensure enough professional developer mindshare? Most enterprise apps I use typically have a fairly built out version for the iPad (and most only support Windows otherwise). I agree that developers themselves will likely stick with x86 architecture to write code on for the most part given that's what servers will run.


Don't forget that AMD is still improving its CPUs and moving ahead of Intel.


I actually worried about this because of Windows on ARM only supporting ARM apps installed via the Microsoft Store which banned competing browsers like Firefox. You can run x86 apps with emulation without the Microsoft Store but why even bother with an ARM device then?

Turns out Windows on ARM is a farce and never had a chance in the first place. The available SoCs were crap, no hardware manufacturer actually launched ARM laptops in significant quantity, apps had to be ported to UWP, etc... It's basically Windows RT again and it was a total failure at that.


> Will I be doomed to inferior H/W going forward?

H/W is more than performance alone.

Personally I would consider a closed platform which can only run a single proprietary operating-system inferior, no matter what performance it has to show for.


The iPhone-like speed of waking from sleep is itself enough to convince many of us to go for the M1 MacBooks.


How long does it take the current gen of macbooks to wake from sleep, ~1 second?


Takes some seconds in my experience. And about 10 in total for both of my external displays to be displaying useful stuff. Definitely annoying.


The displays always seem to be the challenge.

With my Mac Pro, as soon as I wake it, I can see the LED on my monitors are 'alive', but it still takes that same 10s or so to be displaying data.

Also on macOS, if you have an always-on VPN, that can absolutely cause wake time challenges.


It's pretty instantaneous already.


With two external displays plugged in, 10-15 seconds, sometimes more. Without any, 1-2 seconds.


By the time I've lifted my screen into position, it's ready for me to log in. It's very quick. I've never felt held up by it.


I genuinely would love some info or statistics on this. As far as I can remember, laptops wake from sleep almost instantly to the lock screen. Is it a longer wait if one wakes directly to their desktop?

I'm with you in not fully understanding the benefit. Maybe this is a technology that is hard to imagine, but is difficult to go back from (60Hz, Retina displays).


I think there's a conceptual difference. Macbooks wake up after a brief pause of a second or two; phones act like they never went to sleep. There's this perception of locking your phone being a zero-cost thing, which isn't quite true of putting your laptop to sleep. I assume this is the gap they're talking about bridging.


They had that in 2007. In 12 years this somehow became a selling point.


Because they lost it somewhere along the way.


I'm interested to see the benchmarks vs. comparable AMD systems. Some of the claims, like 2x performance increase on the MBP are impressive, but intel laptops have been absolutely trounced by AMD 4000-series laptops of late.

Also will be interested to see the benchmarks of the integrated GPU vs. discreet GPU performance.


Anandtech posted some comparisons of the A14 against Zen 3 today, which may be an interesting comparison: https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

Seems like the A14 is within 10-20% of the desktop 5950X in single threaded workloads. The M1 will probably close the gap with higher clock speeds. AMD will probably still be ahead on multithreaded workloads until Apple releases a chip with 8 high-performance cores.


Didn't they claim it's the fastest per-thread performance in the world?


Yes, Anandtech benchmarked the A14 chip from the iPhone. The M1 is as far as we can tell an A14X-like chip with much higher thermal headroom within a Macbook/MacMini for higher frequencies and has more L2 cache (12mb v 8mb), in addition to more CPU/GPU cores. So if Apple can get frequencies up 10-20% they'll be equal in single thread to Zen3.

A14 has some important and kind of crazy architectural differences like being 8-wide and having double Zen's per-cycle throughput. It's a very different chip from anything else on the market.


Psst…you want to use "discrete".


Depends on what they're rendering with that GPU…


Also, the MB Air has no Touch Bar, but includes a fingerprint scanner. I'm sure a lot of Hacker News will be pleased with that, despite it not being in a machine with the Pro moniker.


I was so hoping the Pro might have the same option, but no.


Touch ID has been on the MacBook Air since 2018.


The 13" Macbook Pro lineup looks pretty weird on their store now, at least in the UK.

The M1 with 8GB RAM and 256GB SSD is at £1299, with a 512GB SSD option at £1499. To bump the RAM to 16GB is another £200(!), so £1699 for 16/512.

The Intel options are £1799 for 16/512 or £1999 for 16/1024. So on the Intel side they seem to have cut out 8/256 options, and then they've re-introduced them for the M1 only - which makes the M1 look like it has a much lower starting price. It is cool if 16/512 M1 is £100 cheaper with more power and better battery life though. Intel can of course still increase the memory to 32GB (for £400 lol).

The other downside is only 2 ports on the M1 ones, Intel have 4.


>so £1699 for 16/512.

My 2014 13inch MBP has 16GB and 512 storage and I'm almost certain it only cost me £1200 at the time. Back then we considered the storage upgrade expensive because SSD was still considered new tech... Today my desktop machine has a 2TB NVME m2 SSD in it, £285 on Amazon.


Yea, my 2015 13" MBP is 16/256 and I'm pretty sure it was close to £1000 with a small student discount. I have no idea when the price went up so much since then or why, but it definitely stops a midrange Pro being a no brainer for me as it was at that time.


I'm a little surprised that there's so little mention here regarding this being a possible precursor to the end of the hackintosh.

One comment mentions the possibility of using old ARM tablets in a hackintosh build, but this seems to run against one of the two reasons for using hackintosh over Mac directly - performance. Cost would definitely be a benefit, but I'm less convinced of the value here.

I imagine though that this is literally the end of the hackintosh entirely. With control over basically all the hardware, there are no more workarounds or backward-compatibility aspects to lean on. Hell, even external GPUs aren't supported as yet.


If you didn't see the event, they've put the new chip into the new MacBook Air, MacBook Pro, and Mac Mini that will start shipping soon.


* 13" Macbook Pro. No word yet on the 16"


Next week, to be exact.


November 17, to be exact exact.


The M1 looks very promising; looking forward to real-life reviews. The biggest winner might be the Air: going to 4+4 cores and longer battery life while dropping the fan sounds like the perfect portable laptop. It will be interesting to see how much performance difference the fan in the MB Pro makes. The Mini at a lower price point is also great news. When introduced, the Mini brought a lot of new users to the Mac at $499. While it is still far from that, bringing the starting price of the Mini down is great, especially with the fast CPU/GPU.

The big letdown are the upgrade prices for RAM and SSD. Even at half the prices Apple would make a decent profit. As a consequence, the excitement about the affordable entry prices quickly vanishes and many people might not get the amount of storage they should - or they go somewhere else. At least for the Mini, you would probably only upgrade the RAM and buy a TB SSD.

For the Mini and the MB Pro, not having a 32GB option hurts. These are machines which should be made for real workloads.


All the Macs looked quite impressive! It was well worth waiting for these (at least for me). But I was disappointed that the maximum RAM is 16GB. I would’ve preferred a 32GB option for better future proofing (especially with web applications needing more and more memory).

Edit: Considering the fact that the RAM won’t be upgradeable (it’s part of the SoC), this limitation is a big bummer. What may be worse is that all these machines will start with an 8GB RAM configuration option at the low end, which isn’t going to age well at all in 2020.


It's wild to me. I still love my 1st gen MBP Retina and it has 16GB memory.


Same! Mid-2012 rMBP. Turns out I could upgrade the AirPort card with a used one from ebay for $20, as well as the SSD (although that was maxed out at 1TB with mSATA). Eight years later, it's still a good computer.


> Considering the fact that the RAM won’t be upgradeable (it’s part of the SoC)

While I agree that RAM won't be upgradeable (as it hasn't been in the new models of the past few years), are you sure that the RAM is part of the SoC? I believe what they labelled "DRAM" in the M1 schematics is very likely the L3 cache instead.

Adding RAM to the SoC would make little sense from a cost and yield perspective. I also believe that 16GB of DDR4 memory are much larger than the "DRAM" part of the SoC.


It's on the SoC package not the die, just like in phones.


Thanks for pointing out. My bad.

Here is a picture of the SoC with the 16GB of DDR4 RAM: https://www.apple.com/v/mac/m1/a/images/overview/chip__fffqz...

And here is the picture which confused me and tricked me into thinking the OP talked about the RAM being integrated into the CPU (although upon closer inspection that picture also seems to cover the whole SoC package): https://www.apple.com/v/mac/m1/a/images/overview/chip_memory...


These are all targeted towards consumers. My last three personal Macs have had 8GB of ram and they were fine.

My work machines on the other hand were all specced with 32GB.


Especially since the Intel-powered 13" MacBook Pro was configurable up to 32GB.


Was that for just the MB Air or all of them? I missed that part.


13" Pro is also 16GB.


Yes:

https://www.apple.com/macbook-pro-13/specs/

“Memory 8GB unified memory Configurable to: 16GB”


How much of this 8GB of basic configuration will be available for user's processes? Kernel+GPU will take quite a bit.


All of them, including Mini.


32GB is an option on the pro


No, it is not. And 16GB is $200 more.


Only the Intel Pro, not the M1 Pro.


No, it isn't.


I'll bet you'll get what you're looking for in the 15"/16" Macbook Pros when they come along.


The pricing is pretty impressive.

Shame there's no 16-inch Pro though. Surely they need to update that quickly, because who is going to want a 16-inch Intel Mac now?


Rumors were it was just behind the initial rollout (though those rumors missed the new Mac mini, AFAIK).

Signs point to another event in January, I'd expect it there with a heftier SoC.


Yeah, strangely no-one predicted the mini, despite them already making a tonne of them for developers. Bit of a no brainer really.


Yes, but they seemed pretty ambivalent about the mini for a long time. Probably easier from a SKU perspective though than coming out with new iMac versions. (Even if iMacs overall outsell Minis--which I'm guessing they do--individual models may not.)


I think they're targeting a more powerful SoC at the iMac, or at least they didn't want to announce new iMacs without being able to replace the whole range.


Me and other people who want to run windows VMs.


I guess you'll get a good deal now at least.


After seeing how big a downgrade these machines are over their intel counterparts (16 GB max RAM, max 2x thunderbolt, max 1 external display on the laptops, no 10 GbE on the mini), I'd absolutely buy an Intel machine now to tide me over until Apple can catch up in a generation or two


I'm also disappointed by 16GB when my current laptop has 64GB. However, Apple mobile devices have a history of being way underpowered on RAM specs and outperforming in real usage.


That's their typical approach. They released the iPad 1 with only 256MB, then the iPad 2 came out shortly after, while the iPad 1 became literally unusable after the next software update. That's the lesson. I am certain they will upgrade the CPU, webcam, RAM, and connectivity in the next version very soon.


I'm going to be pretty sad if they don't keep releasing Intel MBPs, I need to at least be able to run x86 Windows in a VM on my laptop.


Pretty sure most MBP16 are sold with 16GB RAM


Anyone who needs more than 16gb memory (me, sadly)!


IIRC, the 16 inch MacBook Pro had a discrete GPU. Maybe that’s why?


I mean because they're going to bring out another one really soon, that will, presumably, be drastically better. I'm guessing (hoping) that beefing up the graphics is why the 16 is coming later.


Impressive is not the word I would use when it's £200 for 256GB more storage and another £200 for 8GB more RAM.


This all looks very promising but unless a solution for running VMs with an x86 OS/apps materializes my 16" MBP might be my last :-/


While not impossible, I would guess that something like that would probably be unusably slow.


I didn't see any precision attached to their "the GPU has 2.6 TFLOPS of throughput" claim. 2.6 TFLOPS at what precision: FP16, FP32, something else? Nvidia labels their Tegra AGX Xavier part as 5.5 TFLOPS (FP16) and AMD's APUs say 1.8 TFLOPS (FP32).

They also say "up to" 3.5x the CPU performance (total performance) of previous-generation Macs, then make references to a Mac with a 1.2 GHz quad-core Core i7 (MacBook Air) and a quad-core 3.6 GHz Core i3 (Mac Mini).

I would like to see some independent benchmarks personally...
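As a back-of-the-envelope check on my part (assuming FP32, the usual 2 FLOPs per fused multiply-add, and roughly 1024 FP32 ALUs across the 8 GPU cores; neither the precision nor the ALU count is something Apple has confirmed), the claimed figure would imply a clock of about:

    \text{clock} \approx \frac{2.6\,\text{TFLOPS}}{1024\ \text{ALUs} \times 2\ \text{FLOPs/cycle}} \approx 1.27\,\text{GHz}

If 2.6 TFLOPS were an FP16 number instead, the implied FP32 throughput would be roughly half that (assuming the common 2:1 FP16:FP32 rate), which is exactly why the missing precision matters for comparisons like the Xavier one.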


If you go to the MacBook Pro 13" page (https://www.apple.com/macbook-pro-13/) and click on "See how M1 redefines speed" there are some more numbers for specific use cases (Xcode project build, Final Cut Pro ProRes transcode, etc). The footnotes for these numbers say the tests were done with the maxed-out previous-generation 2-port MacBook Pro:

> Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip, as well as production 1.7GHz quad-core Intel Core i7-based 13-inch MacBook Pro systems, all configured with 16GB RAM and 2TB SSD.

I think it's a fair comparison, since the new M1 MacBook Pro 13" is a 2-port model that replaces the existing 2-port models, while Apple still sells Intel 4-port models.

Similar situation on the MacBook Air page. The tests are done with the maxed-out previous-generation MacBook Air:

> Testing conducted by Apple in October 2020 using preproduction MacBook Air systems with Apple M1 chip and 8-core GPU, as well as production 1.2GHz quad-core Intel Core i7-based MacBook Air systems, all configured with 16GB RAM and 2TB SSD.


Is there any information on what type of software will run?

Will it run things like R, Julia, Postgres, Docker (which I can work with reasonably well on my late 2013 MBP) and will we see any of the speed improvements or will some of the packages that work fairly well on the Intel-based models just not work?


I'm sure a good amount of things won't work on ARM the first time around.

But I also think they'll be fixed quickly. RIP my dependency packages...


There's a translation layer, so I'm hopeful everything will just work.


At what speed? As far as I understood, all the presented performance increases are for optimized apps. I guess this is also where Apple sees the primary customer base.

For developers, scientists, etc., it would be interesting if it improves any of their typical use-cases – memory is one big limitation (I don't necessarily need 32GB on the laptop, that's what I use a workstation for, but after using my current Macbook for 7 years, I don't want a new laptop that feels limited after a few years)


Generally reasonable speeds.


Funnily enough, in the Macbook Pro section of the presentation:

"Developers can compile up to 4x code"


...on a single battery


..without a fan


As a technical person, the cartoonish performance graph is kinda infuriating. It's so infantilized I feel embarrassed and insulted. Errg!


So, do all of these new Macs have the same SoC across the board? Or will there be slight differences, like the MacBook Pro having a higher-binned SoC, or the Mac mini and MacBook Air having fewer graphics cores?


The Mac Mini and MacBook Pro have fans where the MacBook Air is fanless, so I assume there are some differences in at least boost clock and probably core clock too. I wouldn't be surprised if we see higher core count M1X chips next for the larger MacBook Pros and iMacs, and maybe even the Mac Pro.


Maybe not the on-paper clock-speed, but likely the effective clock due to throttling


If you look on the order page they're all described the same, but the Air has a different option on the GPU side of the chip: 7 vs 8 cores.

It sounds like that is possibly the only binning they did, where one GPU core is disabled? Perhaps the 13" ones can run faster & more efficiently or something but they're not saying that.


That is interesting. Both Mac Mini versions have an 8-core GPU. I wonder why they made the distinction for the MacBook Air.


There's some binning happening as the Macbook Air has a 7 core GPU instead of an 8 core one, but that's the only difference it looks like. Which would explain why Air has no fans but the MBP and Mini have fans.


The cheaper MacBook Air has a 7-core GPU instead of an 8-core one.


My uninformed guess is that those are chips where one of the 8 cores failed in testing, but allowing them to be used in low-end machines means they won't be scrapped completely. Seems incredibly unlikely that they produced a distinct 7-core GPU variant just for that machine.


I wonder if the Air gets the chips with one faulty GPU core? They could just disable the faulty one and use the remaining seven. </idle-speculation> Edit: Whoops, I see I was not alone thinking that thought.


Pro has fans unlike air, so it’ll probably run faster


For MacBook Air and MacBook Pro "Simultaneously supports full native resolution on the built-in display in millions of colours and: (Only) One external display with up to 6K resolution at 60Hz". Mac Mini supports 2 displays, one via thunderbolt the same as above plus a second 4K display on the HDMI port only which I'm guessing internally uses the same output as the laptop displays.

That’s a little disappointing - probably won’t be getting this first generation model :/ Apparently eGPUs are not supported either (https://9to5mac.com/2020/11/10/new-apple-silicon-macs-with-m...) although perhaps there will be some "traditional" USB display adapters that would work?

Use case being docking the machine to my desk the majority of the time. I'd like at least 2 and preferably 3 display outputs to match my current setup.


> Although I can imagine some USB video card or eGPU option fixing that issue (use case being docking the machine to my desk the majority of the time)

seems like external GPUs are not supported https://www.theverge.com/2020/11/10/21559200/apple-m1-macboo...


The onboard graphics performance seems to be impressive (1050 Ti to 1060 range). I wonder if Valve and Epic will start compiling games for ARM. This MacBook could be my first gaming laptop; who would have thought?


I would guess that neither Valve nor Epic will do that; it is up to game devs. The Mac is a bad gaming platform, and dropping 32-bit x86 support from macOS didn't help: https://www.macgamerhq.com/opinion/32-bit-mac-games/


I think it is clear they dropped 32bit x86 support so it would be easier to develop Rosetta 2 (The x86 emulation layer). I don't really see why they would drop support otherwise.


Epic in particular might not.


> The onboard graphics performance seems to be impressive

I agree, although caveat emptor until independent benchmarks drop.

> I wonder if Valve and Epic will start compiling games for ARM

There are plenty of other problems besides lack of raw processing power that prevent the mac and macOS from being good gaming platforms. And of course, Epic and Apple are fighting it out over App Store policies, so they're unlikely to do each other any favors.


Unity and Unreal already run on iOS so tons of games built on those engines should work well.


Apple M1: 192KB I-Cache, 128KB D-Cache, 12MB L2 Cache

AMD Ryzen 9 5950X: 32KB I-Cache, 32KB D-Cache, 8MB L2 Cache

Isn’t this difference huge? What am I missing here?


The L2 cache on the M1 is the last level cache (LLC), where the 5950X has a 64MB L3 cache for LLC. Also I'm not sure we know yet how much of the chip is using that L2, it might be more than just the four high perf cores.

A closer comparison is probably Intel's 10900K which has a 20MB L3.


TSMC 5nm versus 7nm. Process advantages win most chip fights.


Anandtech did a write-up and concluded: “ Apple claims the M1 to be the fastest CPU in the world. Given our data on the A14, beating all of Intel’s designs, and just falling short of AMD’s newest 5950X Zen3 – a higher clocked Firestorm above 3GHz, the 50% larger L2 cache, and an unleashed TDP, we can certainly believe Apple and the M1 to be able to achieve that claim.

This moment has been brewing for years now, and the new Apple Silicon is both shocking, but also very much expected. In the coming weeks we’ll be trying to get our hands on the new hardware and verify Apple’s claims.

Intel has stagnated itself out of the market, and has lost a major customer today. AMD has shown lots of progress lately, however it’ll be incredibly hard to catch up to Apple’s power efficiency. If Apple’s performance trajectory continues at this pace, the x86 performance crown might never be regained.”


I am somewhat skeptical about total performance claims as many notebook manufacturers have been moving to ARM for efficiency and not total performance.

Current top of the line for notebooks would be the Qualcomm 8cx (ARM, 7? watts) and AMD 4800U (x86-64, 15 watts TDP), from a quick search around. Would be interesting to see independent reviewers benchmarking those 2 in comparison to Apple's first in-house processor.

Here are some spec comparisons for the time being:

- https://www.cpu-monkey.com/en/compare_cpu-apple_m1-1804-vs-a...

- https://www.cpu-monkey.com/en/compare_cpu-apple_m1-1804-vs-q...


this has a pretty good analysis https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

This is far from Apple's first in-house processor, and they were already way ahead of other ARM chips before the M1


Perhaps it is due to less data copying and picking the right processor for the job in Apple's benchmarks? Most CPU benchmarks wouldn't be testing the same performance characteristics if the matrix-heavy computations were done by the GPU, etc.


Both the Qualcomm 8cx and AMD Ryzen 4800U have integrated graphics.


I note that the official press release from Apple does not mention ARM even once...


Maybe they want to own the brand and if they move to another chip they can do so with more consistent (and confusing) naming?

This is, depending on how far back you want to go, the 4th change: 6800 > 68k > PPC > Intel > Arm... The idea that there will be a 5th one eventually isn't unreasonable.


What happens when Apple deprecates support in macOS for these machines in 7 or so years?

Is there an upgrade path to install Linux on them like you currently can with Intel-based Macs? Or are these SoCs like every other ARM SoC that requires a custom kernel and bootloader to get Linux running on them?


You don't get the privilege of controlling your Apple hardware for the most part so probably not easily.


You can turn off boot security.


A locked bootloader is only part of the issue.

The other part is most ARM SoCs don't have an enumerable bus, and thus require customized kernels to run Linux. As a result, they almost always require explicit Linux support from the vendor.

There are millions of ARM devices out there that will never run Linux, or will only ever support an old and outdated fork of Linux, and will never have mainline support in the kernel.

Are we going to see that vendor support for Linux from Apple? Or will these Macs end up like iPhones and iPads where no one will be able to get Linux to support them?


I doubt Apple will do anything here, but there are always developers who care enough to make this happen.


I really hope we will be able to. All the mentions of "security" in the keynote were making me cringe.


You will, they promised it during WWDC. (And it's available on the developer hardware they released.)


Source?


https://developer.apple.com/wwdc20/10686. (Also, this is a thing on the DTK.)


Seems there are two variants of the M1 chip in the MacBook Air, one with 7 GPU cores and one with 8. Wonder why that is the case.

https://www.apple.com/macbook-air/specs/


Probably to get better yield. If one of the GPU cores has a defect, they can still use the die. https://en.wikipedia.org/wiki/Product_binning#Semiconductor_...


Maybe they only manufacture the 8-core version and bin the ones that have defects?


No mention of processor speeds, just like iOS devices. This is the new normal now for Macs.


I think it's a much more useful metric. At the very basic level, I don't know what 2GHz or 5GHz will mean for my computer aside from bigger is better, but anyone can understand "2x faster."

Beyond that though, processor speed is increasingly useless as a single metric. This thing has eight cores, half high-performance and half high-efficiency; GPUs are everywhere and doing seemingly everything; and RAM is always important. The speed cannot be summarized in Hz, but in standardized tests and "The stuff you care about is way faster now."


Do you really, REALLY think that the marketing pitch is measuring what "you care about" and not cherry picking conditions where the big colorful "X" number on screen is higher?


Faster at WHAT? There is a world of difference between apps that depend on single-core performance and tasks that can easily be spread out over 8 cores.


Processor speeds are not necessarily a good comparator anyway given that things like caches and core counts are a thing.


To first order processor speed is a very good indicator of performance when comparing similar core count products.


If the CPUs being compared are of similar generations (e.g. Intel 10th gen vs. Intel 11th gen, etc.), I agree. But trying to compare across distinct generations is a bad idea.


If you use it, and it is faster, do you care what the numbers are?


I want to know what I'm buying, like I want to know and compare the screen and DPIs of displays; or how much RAM I have.

Clock speeds are another metric that would cost Apple nothing to disclose.


Not complaining. Those are exactly my thoughts too.


if you hear "3x times faster" would you like to know the baseline or just blindly trust that the metric is correct? if you don't care, they would never show the "3x times faster" metric at all, but they do for a reason


It’s pretty nice that the stats don’t list the CPU’s clock rate. We can finally enter an era of less misleading indicators of performance, in the sense of what the overall experience will be. That’s what really matters.


This press release has the vaguest, most misleading and content-free performance claims that have ever been printed.


They're pretty much in line with any Apple press release.


All the right ideas, just the execution is lacking. No new designs, no touchscreen and no mention of how you could combine multiple M1s into a powerful desktop machine. Prediction: it’ll take years again.


Did I understand correctly that MacBook Pro and Air will share the same M1 CPU?

Previous models had a massive delta in CPU performance based on using low power (Air, no fan) or medium power (Pro, with dual fans) Intel chips


I think there's still a no-fan vs. fan distinction, so they'll still be able to run the Pro harder, but it looks like it. I half suspect the Air was the one they wanted to release, and they just didn't want the lineup to have no Pro faster than an Air.


Yes, but they only replaced the lowest-end 13 inch pro at this point, and are still selling the higher-end Intels. The performance versions will follow next year.


Same CPU line, but doesn't mean they will have the same specs. Pro will definitely have its cores run faster/hotter.


Is there any indication at this point that they'll make that clear? It seems kind of strange to buy a computer without really knowing what you're buying.

I guess Apple didn't officially say what Intel processors they have in their computers, but at least you could look it up and know what you're getting.


They have also never shared this info for iPhones/iPads, so no reason to believe they will do it for Macs going forward. But we should be seeing third party benchmarks soon enough.


For those looking for some form of TDP estimate for the M1 in a relatively thermally unconstrained form factor:

Apple's web site lists the following for the Mac Mini:

"Maximum continuous power: 150W"


No FaceID, interesting.

I thought they might use FaceID as a "Pro" only feature like they do on the iPads but nope, they also released an updated 13" MacBook Pro without FaceID.


I presumed they were holding all the good updates (Face ID, thinner bezels, a modern webcam, a 14-inch screen) for this update to push adoption, but yeah, it seems like we'll have another 6 months or a year before the 13" resembles a modern laptop.


Do I understand it correctly, that memory is packaged together with the CPU? So no memory upgrades ever? And what does that mean for the max memory configuration?


How many people still upgrade their own memory? I honestly don't believe it's a use case anymore, even with the hostile design making it difficult.

Would you add a feature to an application or hardware if you knew that less than 1% of your users would make use of it?


I placed an order this morning for a new MacBook Pro to replace my 2016 with Touch Bar, but after humming and hawing all day, I cancelled it. Since I'm working from home these days without any prospect of that changing, I've decided to get a new M1 Mac mini instead. It's actually going to eliminate the need for this Elgato Thunderbolt dock I have, and really the only thing I'll miss is Touch ID. Some things allow me to use my Apple Watch instead, so that'll have to do.

At the end of the day, after four years, I want to be buying a replacement laptop that feels even moderately evolved from what I already have. I have a 12" MacBook that I love, so that will be my on-the-go computer for the times I need that.

With the money I'm saving going with the mini, I'm upgrading my daughter's iPad and still have some left over after that.

I'll re-evaluate in the future when hopefully Apple puts some more energy into the laptops. I do actually like the Touch Bar, but I would expect that after four years they could at least add proper haptic feedback. As well, having the USB-C ports on only one side (and the wrong side for me) feels like a downgrade from my 2016 model, which has two on each side.


The mini... such a great surprise and so welcome. It doesn't have internal expandability, but otherwise, it looks like a fantastic affordable desktop option.


The word "fast" appears 16 times on the page.


One for each CPU. Maybe it's a code...?


I think it's code for: we're definitely, positively, conclusively faster than the Latest PC Laptop Chip, for sure.


I've been using my MacBook Pro Retina ever since it was first released in 2012 and haven't been impressed enough to upgrade since then. It's on its last legs though, and this upgrade looks exciting for a change. I hope they release the updated 16-inch soon. Not thrilled they insist on keeping the Touch Bar, but other than that this seems like they are headed in a good direction.


I keep waiting for each new generation to offer more RAM than my 2015 MBP which has 16GB and is still going strong. How else can I run a bunch of docker containers?

But every new release is the same or worse for memory. Is there something about these new architectures that I'm missing which should negate my desire?

Why can't I get a machine with 32GB RAM for under $3k or whatever they are now?


I don't know if this will be a big success, but if it is, I think ARM will take over a large amount of server/cloud market.


> but if it is, I think ARM will take over a large amount of server/cloud market.

Seeing as how Apple has zero presence in the server/cloud market, how do you figure? Are you hoping they bring back xserve?

Otherwise I don't see how Apple having a custom power-efficient chip in a laptop is going to do anything at all in the server/cloud market?


If my dev environment and build pipeline are all ARM, I might as well get an ARM-based server in production, if they are available at a reasonable price.


The argument has been that lack of credible Arm desktop / laptops has held back Arm in the cloud. If you can't compile on your desktop then it's much harder to develop / test etc. Now you can buy a Mac and use that to develop for AWS Arm etc.


I don't see your cloud infrastructure throwing everything away and wholesale switching platforms just because you bought a new macbook air, though.

This is all assuming you're doing anything at all that depends on the ISA and pushing prebuilts of it somewhere. Otherwise it's not like deploying Java, Python, JavaScript, etc... changes at all here.


You've gone from "no impact at all" to "won't throw everything away"

Removing barriers to developers building software on the desktop for the Arm ISA will have an impact on deployment in the cloud. How much? Who knows, and it's not measurable, but it will.


It's equally plausible, if not more likely, that the impact here is just that Apple laptops are no longer viable for those developing cloud applications where the ISA is relevant.


I've just bought an Intel MacBook Pro so I can continue to develop for x86 in the cloud!

Both things can be true though.


That’s also possible. But at that point I won’t define the new chip as successful.


Not throwing things away. x86 will continue to exist for sure. It's just that if ARM is popular with developers, the barrier of code that sometimes doesn't run on ARM will eventually be removed. Once that barrier is removed, some advantages of ARM will shine, such as better energy efficiency and more cores (so when you buy a cloud server you are more likely to get dedicated cores instead of vCores).


> some advantages of ARM will shine, such as more energy efficient, more cores

ARM doesn't have any such benefits. The ISA has minimal impact on power efficiency, performance, or even core counts.

Apple's specific CPU has so far been pretty good, but that isn't the result of running ARM nor does it translate to other ARM CPUs. The specific CPU implementation matters here, not the ISA.


Sorry, but ARM does have an advantage as a 10-year-old ISA vs one that still carries the baggage of 40-plus years of development. The exact size of that advantage isn't clear and may be small vs implementation-specific features and process differences, but it's still there.

Plus, there have been features of Arm implementations that have clearly given them a power efficiency advantage vs x86, such as big.LITTLE, which is now coming to the Mac and up until now has not been a feature of x86 implementations.


There are other manufacturers that are building ARM servers these days. AWS even went so far as to build their own chip: https://aws.amazon.com/ec2/graviton/


Will the presence of ARM silicon over Intel mean these macbooks will be poorer developer machines?


Depends what you are developing for - if you are developing for mobile then by that standard it's significantly better.

If you are developing for web... it depends what you are writing and how you are deploying it, but it probably won't have much impact, depending on your toolchain.


For app and web development, it will likely be on par or better. For systems development, there could be some issues, at least in the near term. As of now, many Homebrew core formulas are not working on Apple Silicon, especially ones related to compilers.


Not if the servers are also running ARM.


A part that seems interesting to me is the "unified memory" architecture

At first it seems like just a cost/power/size saving measure, like not having dedicated RAM in a discrete GPU

But the CPU-vs-GPU RAM distinction means there's a cost to moving work to the GPU, as data has to be copied. That leads to the GPU only being used in cases where you can either do the whole job purely on the GPU, or queue up batches of work so that the cost of copying data can be amortised over the faster parallel GPU.

They sort of hinted at it in the presentation, but the unified memory architecture potentially allows more flexible distribution of tasks between CPU and GPU and "Neural" cores, because the data doesn't have to be copied.
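As a concrete (if hedged) illustration of what that buys you in Metal: with a shared-storage buffer the CPU writes directly into the same allocation the GPU reads, so there's no explicit upload step. A minimal Swift sketch, assuming a Metal-capable device (the buffer size and contents are just placeholders):

    import Metal

    // On a unified-memory system, a .storageModeShared buffer is one
    // allocation visible to both CPU and GPU, so no staging copy is needed.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device available")
    }

    let count = 1024
    let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!

    // The CPU fills the buffer in place...
    let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
    for i in 0..<count { values[i] = Float(i) }

    // ...and a compute command encoder can bind the very same buffer as a
    // kernel argument, e.g. encoder.setBuffer(buffer, offset: 0, index: 0),
    // without any blit or copy back and forth.

On a discrete GPU you'd typically stage the data through a separate private buffer and a blit; skipping that round trip is the flexibility being described here.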

I wonder if this is then potentially something compilers will be able to take more advantage of, automatically.

It will be interesting to see how useful the "Neural" cores are for non-ML workloads.


I'm actually surprised by the price. I thought it'd be more expensive. I'd easily have paid another $250 for 2 ports on the other side, and even more for 32GB of RAM. A better camera + 14" screen would also be nice. Basically I was expecting to pay between $3-3.5k for a new MacBook :-)


Wow, 5nm, memory, GPU, and CPU on the same SoC. The RX 560 has the same 2.6 TFLOPS as the GPU on this chip. They say 4x more efficient at 10W and 2x more powerful than Intel.


RX 560 has about 100 GB/s memory bandwidth, how does that department stack up?


A 560 is low end in 2020


16GB max it looks like.


huge own-goal on Apple's part. It would be literally impossible for me to use this machine to do my job. I guess I'll have to wait until M2.

Edit: If it wasn't clear, this was not a joke. I develop a relatively heavyweight service on the JVM, and between my IDE, the code I run, and all the gradle build daemon stuff, I regularly use up more than 32GB. Often over 50GB. (Although some swap is tolerable, having the majority of my resident set being swapped at any given time means things get very slow.)


How many Slack instances does your job require?!


16GB hasn't been enough for bigger development stacks for a while.


Especially if you need to spin up some VMs and do compilation there.


You're aware that these are Apple's low-end machines right? They haven't released a high-powered M-series Mac yet.


Is there even a JVM available for darwin-armv8 ?


There is in development, but it recently stopped working due to increased codesigning requirements :(


If there isn’t yet, I’d be shocked if there won’t be one soon.


I don't understand this. Does the chip make up for it? Do I have to wait until the next generation for 32gb?


The RAM is on the chip package, so presumably they just want to iterate a bit first before just throwing RAM at it.


To put the 3.5x faster CPU into perspective.

I added various machines to my hash benchmarks. My MacBook Air with an i7 CPU is about 2x slower than a desktop CPU or an AMD laptop CPU, and has about the same speed as my ordinary Cortex-A53 phone. That's mostly the 2 vs 4 GHz difference.

Their latest A14 chip is described here https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de... and with its 3GHz it's entirely plausible that it is 3x faster than the MacBook Air or my current A53 phone.

https://browser.geekbench.com/ios_devices/ipad-air-4th-gener...


The chip seems to be the A14 + 2 extra high-performance CPU cores + 4 extra GPU cores. That's cool, but that's not a great leap for Apple. The real news here is that macOS now runs on ARM processors. Keep your old tablets around, you may soon be able to turn them into a hackintosh.


Maybe we can also get MacOS emulation working on Raspberry Pi soon.


I'll be waiting for the 16" MBP which will hopefully have a reasonable RAM option. 16GB is fine for normal MacOS use I suppose, but I'm also curious how this virtualization situation will play out. Having Windows on boot camp and VMWare is a nice thing to have.


FYI Boot Camp won't be supported, only virtualization of ARM operating systems.


Amazing. When Steve announced the transition to Intel, he also mentioned the transition to Mac OS X and how that should set us up for at least the next 20 years... That was almost 20 years ago... I can see a new OS for the Mac soon that maybe brings OS X even closer to iOS?


> Amazing. When Steve announced the transition to Intel, he also mentioned the transition to Mac OS X and how that should set us up for at least the next 20 years... That was almost 20 years ago... I can see a new OS for the Mac soon that maybe brings OS X even closer to iOS?

You're a little confused. Those were two very separate transitions. Mac OS X came out in 2001. Intel Macs came out in 2006.


In his announcement of the Intel transition, he referred back to the change from OS 9 to X and how it would be good for the next 20 years...


That's Big Sur. It's actually versioned as Mac OS 11 and brings macOS much closer to iOS in terms of feel and looks.


Anyone care to explain how they got to 5nm so quickly?


It's 7nm with some improved process. Or 10nm with 2 improvements, or 14nm with 3 improvements, etc.

The process names are completely detached from reality in terms of actual transistor feature size. The only thing we can be reasonably certain of is that 5nm has some kind of improved density over 7nm.



Simple. They booked 100% of TSMC.


They paid TSMC the most and bought up nearly all their capacity.


Juicing apples.


By shoveling loads of cash at TSMC. Does anyone know how much? It's got to be in the millions/billions.


So, real question: is Apple aggressively marketing this as "Apple Silicon" instead of ARM just a marketing gimmick, or have they done something substantive that makes this more special than regular ARM processors?


Well, it IS different to a rebadged Cortex big.LITTLE solution. They use the architecture but reimplement the CPUs themselves. Further, they have a bunch of their own cores on their chip.


Gotcha, this is the first real information I've been able to find. Thanks!


I (and my increasingly decrepit 2014 Macbook Pro) was hoping these would be compelling, and hoo boy, they are. First time in years I've actually wanted a new Macbook, as opposed to accepting what they have on offer.


Wait, they pulled out the FAN?


For sure this was expected, but the fan was actually pulled out in the 12" MacBook 2015 edition (dual-core Intel Core M processor).

  https://apple.stackexchange.com/questions/176391/which-macbook-doesnt-have-any-fans/176393


I was hoping they'd bring this form factor back with the ARM chips, it was one of my favorite machines.


none of the 12" macbooks had fans, and the early Air's didnt, but the later ones did.


And the latest Intel MBA doesn't have a heat pipe connecting the fan to the CPU.


Does your iPhone have a fan? It's a supercharged phone chip.


If you think about it, it has the biggest cooling surface in any notebook. It's the aluminium case.


which is pressed against my legs =(


You are holding it wrong, you aren't supposed to use a laptop on your lap :)


Mac Mini still uses a fan (higher clock ???) but not the Macbook Air.


Not exactly revolutionary. I'm typing this on a fanless Intel 7th generation "Kaby Lake" laptop where the CPU's TDP is only 6W.


This is the best part


...for portable Mac non-general-purpose computers...

How much are you willing to bet that there will be next to no public documentation for this SoC, just like with all the other tablet/phone ones out there? This is a scary turn of events - I still remember when Apple introduced their x86 transition and focused on how you could also run Windows on their "branded PCs", and the openness and standardisation was much welcomed... but then Apple slowly started to change that, and now they are on their way to locking down everything under their control.


Does anyone have any clue how/why Apple's architecture is so different from everyone else's, and, if it's so good for power/performance, why others aren't following the trend? Even if it's not applicable to x64 due to the different instruction set, it should still apply to other ARM vendors.

I'm referring to the Anandtech article's claim that they have an 8-wide decode (vs. Zen 3's 4), an estimated 600+ entry ROB, and a 192KB instruction cache. Note I'm not a CPU expert, but I know these are hugely different from other high-performance chips out there.


I've been using an older 2016 MacBook Pro (1st with TouchBar) and was excited about something faster. Given that most people are working from home now, and the fact that the new MacBook Pro is virtually identical physically to what I got four years ago, I'm actually considering just getting the Mac mini to save about CAD $1,000. I also have a 12" MacBook which I adore, and so the times I do need something portable, it doesn't get more portable than that. (And it's surprisingly capable for my dev environment.)


All I care about is the hardware acceleration of video codecs.

If a $1400 M1-powered MacBook Pro can edit and cut 8K Canon RAW Light and All-I HEVC from the Canon R5, it's very attractive to me.


I can't seem to find mention of the clock frequency of the chip anywhere. Am I missing something or is Apple withholding this information for the time being?


Hrmn... Does it matter? Without IPC, does clock rate tell us much? The real-world benchmarks will be very interesting to see.


So I have to assume that the GPU is another PowerVR offspring?


It's supposedly developed internally; Imagination, of course, claims that Apple stole their IP. The reality is likely somewhere in the middle.


I'm totally underwhelmed but maybe if they really are extremely fast I'd consider one. I guess we'll have to wait for the pro machines next year.


Same. Specs are underwhelming, but performance may be better than expected.


Wow, is this the beginning of the end for x86/64 architecture? Are PCs and all going to move to ARM? I feel this could be the start of a shift.


Same price as previous MB Air - $1k. That's a HUGE selling point to me. I was ready to see $2k for their newest kit.

They're looking for marketshare gains.


It was to be expected because they save a lot of money that they were paying to Intel. It was estimated that they could shave something like $100 per computer by switching to an in house chip.


They are saving the money, but they're just keeping it. You save nothing.


That's not clear. They likely spent about the same total on components, just with that ~$100 going to worthwhile components instead of the Intel tax.

We will see when we get a full teardown


Mac Mini saw a price drop.


I'm curious to know why you expected 2k. With more vertical integration, the cost normally goes down. Why would the cost go up for apple here?


I guess I was expecting a high margin, top-end product, typical of Apple.

Some people would have paid more for iOS apps on their Mac.

Given this is an architecture shift, I guess it seems to make sense to test it out with a midrange product.


No PCIe, meaning no eGPUs. How did they look past that?

Are there any M.2 slots in these, or will they just be set in stone with the storage they have?


> No PCIe, meaning no eGPUs. How did they look past that?

With ease, I imagine. eGPUs are niche at best, and user-serviceable hardware and GPU vendors are two of Apple’s least favourite things. Macs’ support for eGPUs, such as it is, has always struck me as a strange aberration. I say this as the owner of an eGPU, which I’ve given up any hope of using outside Boot Camp.


They're not niche at all when your presentation mentions Machine Learning work and has Cinema4D as one of the examples of pro-apps you'll be running on this machine.

Either it's a machine that will serve those professionals, in which case it needs to be able to access a real GPU somehow or it's not a machine for those "Pro"-fessionals, in which case why make it part of your sales pitch.


The internal storage they have is set in stone in the Intel machines.


I found it interesting, and curious, that Apple has included hardware specifically for "machine learning" on the M1 chip.

It's hard to imagine that anyone will be training ML models on their Mac laptop. So presumably, this must be some kind of accelerator for executing models which have been trained elsewhere... right??

What do they expect those models will be used for?


These benchmark scores claim that the only CPU faster than the Apple A14 is the recently released Ryzen 9 3950X:

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

What's the catch here? Did Apple just blow x86 out of the water?


The catches are: 1. single-threaded performance only; 2. sustained load


You mean the 5950X, which makes this even more impressive.


Yes, I mistyped...


When they say "[faster than the] latest PC laptop chip", what exactly do they mean? Intel? AMD?

EDIT: added [faster than the]


Probably Tiger Lake-U. I definitely believe M1 is faster.

Apple has a history of pretending things like Nvidia or Ryzen don't exist when it suits them so I'm sure there will be gotcha benchmarks down the line.

Apple also compared against "best-selling PCs" several times, but the best-selling PCs are the cheapest junk so obviously Macs will be faster than those.


At the bottom of the macbook-{air,pro} page:

> with up to 2.8x faster processing performance than the previous generation [2]

> Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip, as well as production 1.7GHz quad-core Intel Core i7-based 13-inch MacBook Pro systems, all configured with 16GB RAM and 2TB SSD. Open source project built with prerelease Xcode 12.2 with Apple Clang 12.0.0, Ninja 1.10.0.git, and CMake 3.16.5. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.


That's... kind of weak. How many other perf tests did they throw away before taking this one because it showed so well? I guess we'll see the real-world benchmarks when people get their hands on them.

Geekbench is not a _great_ benchmark, but it's common enough that we could use it to roughly compare.

EDIT: Apparently there are Geekbench results that are unofficial that suggest it's faster than current MBPs, but we'll have to see.


https://www.apple.com/newsroom/2020/11/apple-unleashes-m1/ down at the bottom says:

“World’s fastest CPU core in low-power silicon”: Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip and 16GB of RAM measuring peak single thread performance of workloads taken from select industry standard benchmarks, commercial applications, and open source applications. Comparison made against the highest-performing CPUs for notebooks, commercially available at the time of testing."

So, "Comparison made against the highest-performing CPUs for notebooks, commercially available [one month ago]". I guess there could be wiggle room on interpreting "highest-performing", but this seems pretty good.


Presumably, both.


Will the MBA have a different "M1" chip than the MBP? And the Mac Mini? Seems like they're all using the same processor.


I think they're using the same M1 processor, but the cooling solutions are different.

The Mac Mini and MacBook Pro will be able to sustain higher speeds for longer periods — this is already the case with the Intel versions of them.


Pros: * Amazing chips. Probably faster than both the i9 MacBook Pro 16 and the iMac Pro, and GREAT battery life! * Finally a better camera and WiFi 6 * Same price

Cons: * Won't run all apps at full speed for now (temporary) * Memory expansion is expensive and limited to 16GB (temporary)

Most limitations are temporary, I think these are amazing products unless you want more than 16GB memory.


You probably want two newlines between those bullet points.


Also looks like it only has 2 plugs


I think the big news for developers is: (1) iOS apps run on M1 desktops, and (2) x86 apps are supported and sometimes even run faster than on Intel.


These Macs will increase Apple's desktop/laptop userbase by a huge margin. I expect a lot of converts within 2 years.


I was thinking about building a new white box media server, but now I'm thinking I should get the mac mini instead.


Huh, the form factors for the Macs seem identical. I expected them to be thinner or something. Fanless is nice for the Air. But I'm really waiting to see what they do on the high end with the top MacBook Pro and even Mac Pro. Will we see an Apple discrete GPU? I guess we'll have to wait a year or two for that.


They left form factors the same when they switched to Intel. I'm sure it helps reassure people that there's a continuity of the platform.


Apple seems to usually change one major thing at a time, internals or externals/form factor, but rarely both.


They just announced a 13" Pro. Same form-factor as before, still just the M1 inside. It does have a fan, unlike the Air.


Not exactly the same: fewer ports.


Bought the Air at first, was concerned about CPU throttling and switched to the Pro.

To be honest I'm a bit skeptical about performance here.

I'm going to assume the Air is underclocked to prevent overheating.

Anyway, considering I've had my Mac mini for about 8 years, I jumped at buying this. Looking forward to getting some 4K!


Hey guys,

Does anyone know if the M1 would be faster for GPU-bound work than the 2020 MacBook Pro 16-inch with the AMD Radeon 5600? I just spent over $4K and got it delivered yesterday. Thinking of returning it if it doesn't compare favorably on GPU performance to the new M1 13-inch MacBook Pro. Insight/advice?


Could the link be changed to Apple's more detailed press release? https://www.apple.com/newsroom/2020/11/apple-unleashes-m1/


I wonder what the chances are of running Linux on these. Probably slim to non-existent, but they would make an excellent Linux machine. I've been waiting for an AMD laptop with Thunderbolt/USB4 for GPU passthrough for ages. Sadly we're still far away from gaming on ARM.


This is the result of a decade of intense research and economies of scale with their A* architecture in iPhones and iPads. People liked to criticize Apple for putting overpowered processors in their devices and not having the software to leverage them. The day is finally here!


Reading between the lines of their announcement, it seems that they'll do well in performance per watt and battery life, which is where ARM is traditionally good. They will not do well in raw performance for workstations or workstation-replacement laptops.


Wonder if they could pop one onto a PCIe card for existing Mac owners to use as a coprocessor for a VM.



It would be a nice upgrade for my hackintosh


Very curious when a 16" ARM Mac will surpass the latest 2020 Intel i9 Mac. Hopefully 2021.


I expect this CPU is already faster than the i9 in most metrics.


What are you basing this on?


Not OP, but

> Apple claims the M1 to be the fastest CPU in the world. Given our data on the A14, beating all of Intel’s designs, and just falling short of AMD’s newest 5950X Zen3 – a higher clocked Firestorm above 3GHz, the 50% larger L2 cache, and an unleashed TDP, we can certainly believe Apple and the M1 to be able to achieve that claim.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


Will the comparisons (the M1 has a 3.5x faster CPU than the previous MBA, plus similar comparisons for the MM and MBP) help us figure out the actual difference in processor performance between the 3 systems?


So what to do if a powerful work machine is needed now: buy the current MBP 16" or go to a ThinkPad? Disappointed they didn't even upgrade the Intel CPU on the MBP 16" to the newer version.


What newer version are you thinking about?

Intel 11th gen H series are nowhere near available.


Would be interesting to see single-threaded performance (/processor speed); I assume this falls way short of the advertised "3x faster" number.

Also, only two Thunderbolt ports, down from four!


I would expect single-threaded performance to be extremely good if this is in line with how Apple typically designs their processors. Also, you're comparing it to the wrong computer–this is clearly aimed at being a replacement for the 2-port MacBook Pro.


I'm not a Mac user, but what does this mean for virtualization? Does it spell the end of Macs being used by developers who need to run x86 VMs in them?


I don't understand how you're supposed to develop server-side x86 code on this machine. Isn't this an important market for the Mac?


A few months ago I built a personal Linux server and use it via ssh and VS Code's remote development feature, which mounts the server filesystem and does port forwarding, so it feels as if it's a local machine even when I'm working outside my home network. My usual workflow doesn't change at all, and I get the benefit of a fast server while using a laptop form factor. The new ARM laptop is probably perfect for this setup, letting the server build for the Intel arch whenever you need it.


Cross compilation is a thing, JFYI. You don't need to be on an arm64e system to generate an ARM binary today. This is the case for Xcode, gcc, Go, etc.
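
For example (a minimal sketch, assuming Apple's clang toolchain with both SDK slices available; the file and output names are just made up for illustration):

  /* hello.c -- trivial program to illustrate cross compilation. From either
   * an Intel or an Apple Silicon Mac you can build both slices and glue them
   * into a universal binary, roughly like this:
   *
   *   clang -arch x86_64 -o hello_x86_64 hello.c
   *   clang -arch arm64  -o hello_arm64  hello.c
   *   lipo -create -output hello_universal hello_x86_64 hello_arm64
   */
  #include <stdio.h>

  int main(void) {
      printf("hello\n");
      return 0;
  }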


rosetta is apparently really fast. just compile to x86.

If you normally boot into x86 linux, you might be in a tricky spot. Emulation will be your only bet.


Any indication on how the performance of the MBP with the M1 compares with the more expensive MBP with the i5?


So no AirTags were introduced? There were rumors that they would release them during the November event.


Any ideas how virtualization works?

Can I run Windows VM?


Unlikely, or very slowly, because Windows x86 would need CPU emulation.


There are arm64 builds of Windows.


Microsoft even released a Surface device with an ARM processor.


People generally want to run Windows because of its huge software catalogue. That's thrown out of the window if you use the arm64 version of Windows. Pun intended.


They aren't generally available, only to OEMs.


I think Apple said last time that you'd be able to run ARM Windows, not x86 as of yet.


AFAIK, virtualization will work, but dual booting will not work.


Yeah, rosetta will support x86 emulation. Might be slow though.


But that's for macOS binaries. Not for VMs.


"With the translation technology of Rosetta 2, users will be able to run existing Mac apps that have not yet been updated, including those with plug-ins. Virtualization technology allows users to run Linux"

https://www.apple.com/newsroom/2020/06/apple-announces-mac-t...


Nowhere does this mention virtualizing x86 VMs.

In fact, the virtualization framework does not seem to have anything to do with rosetta at all. Note that those are two different sentences, and they appear to be not related.


Virtualization of ARM Linux.


VERY curious to see if Apple Silicon will support PyTorch or other common GPU-based ML libraries.


Will existing programs run on this?


Yes, they have an Intel emulator (Rosetta I think they called it) and are pushing developers to produce "universal apps".


Rosetta 2 is the newest version. From what the presentation made it seem like, the universal apps are just an export from Xcode; developers were exclaiming that it took 10 minutes to do the universal app build.


Universal apps have existed since the transition from PowerPC. For most apps it's simply recompiling in Xcode and it creates a fat binary with both architectures.

Any apps that rely on specific intel libraries will have a bit more work to do.


Ah that distinction was not made in the presentation. Thanks for this.


Here are more details on Rosetta: https://www.theverge.com/21304182/apple-arm-mac-rosetta-2-em...

It actually depends on the app and what it's doing. For example, they said Photoshop wouldn't be available on it until early 2021.


They said Photoshop won't be available as a universal binary until 2021. It wasn't clear if it would run on Rosetta 2 in the meantime.


"Universal apps" that only run on OSX and Apple's mobile OS I presume?


Universal apps run on both Apple Silicon (ARM) processors and Intel processor-based Macs.


Oh, so if I run Windows/Linux on the Intel based Mac I can run these Universal apps? Sounds very un-Apple-like if true.

If not true, then my original comment seems to still stand.


No, "Universal" apps in this context means these apps can run in Mac OS, either on an Intel Mac or on an Apple Silicon (ARM) Mac. Nothing at all to do with iOS, iPadOS, Windows or Linux.


Isn't that misleading marketing? I know that companies can call their new efforts whatever they want, but if someone sells a "Universal keyboard" that only works with Windows, isn't that just straight up misleading marketing?

The only thing that comes close to being "universal apps" would be applications that run in a browser.


An M1-based Mac will run:

* Universal apps (also known as "fat binaries") that run natively on Intel and M1 Macs

* ARM-native Mac apps

* Intel-based Mac apps via Rosetta 2

* iOS/iPadOS apps (developer’s choice)

* Unix/BSD command line apps (Vim, tmux, bash, zsh, etc.)

* Linux via the built-in hypervisor

* Java

* Electron apps (VS Code, Slack, etc.)

* Windows via virtualization

* Web apps (Service Workers, WASM, push notifications, etc.) on Safari, Chrome, etc.

Safari is already a universal app; I'm sure Chrome, Firefox, etc. will follow shortly.


Yes, it's backwards compatible, but older programs will not take full advantage of the chip yet.


No? It’s not x86. Compatibility is emulated.


Does anyone have any idea why they do not offer a 32 or 64 GB RAM option, and max out at 16 GB?


Would anyone happen to have an ELI5 of how the M1 chip achieves greater power efficiency?


It uses the ARM architecture, which is much more power efficient than the x86 architecture used by Intel.


My expectation for the #AppleEvent was actually a MacBook Pro without the Touch Bar.


I don't get it.

If we have the same CPU in the MacBook Air and MacBook Pro, why would I get the more expensive "Pro"? Can someone explain how the Pro is faster than the Air with the same CPU?

Also, the "Windows Guy" bit is a bit lame IMO. I have two MacBooks and one custom built PC. The PC is faster than both MacBooks combined.


As one has a fan and the other doesn't, they are probably clocked differently, and the cooled one might sustain full power indefinitely. The Pro also has a larger battery, better speakers and microphones, and the Touch Bar.


"and the touch bar" that is not exactly a selling point :)


Lets call it a "differentiation point" :)


Indeed, I'm considering getting an air specifically because it doesn't have the touch bar.


Me too.


same here


Yeah, haha. This was the reason I actually got Air instead of Pro last year.


Does it also come without touch bar?


No, but at least it has an esc key.


No.


It's already the case that the Macbook Pro and Air have the "same" CPU - 1068NG7 and 1060NG7 are physically the same die, but with different power limits.


Thermal Throttling ;-) Also the GPU is up to 8 cores so I guess some of them will be turned off in the Air.


> Can someone explain how is Pro faster than Air with same CPU?

Active cooling.


The price difference is pretty tiny anyway. Fan means it can keep that performance up for more than a minute or two.


Because of all the other components which are different? A laptop isn't just a CPU.


The article mentions that the new M1 contains the CPU, GPU, memory, I/O, etc. in a single chip.


> If you read the article

Accusing people of not reading the article is against the rules here.

> CPU, GPU, memory, I/O

Screens, batteries, form-factors.


Yep. Like more USB ports....


Almost certainly faster clock speeds on the MacBook Pro.


It'll be way faster with the fan to cool it.


Better cooling. Faster SSD. Faster and more memory.


Discrete GPU


The 13" MacBook Pro doesn't have a discrete GPU. They're all using the integrated GPU of the M1 chip. We have no idea how well it'll perform in real-world benchmarks yet.


Seems that you'll also need the Pro to get 16GB of RAM or a 1-2TB SSD.


No way, the current Air has 16gb ram.


How will this affect the prices of their high-end models? Will they be cheaper?


What is the value proposition of Air vs Pro? Touch Bar? 18 vs 20 hours of battery?


I don’t know if this was already mentioned but... omg... M1 does not support eGPUs.

The Mac is becoming just a more powerful iPad with an integrated keyboard, apart from the software and design differences.

I can't believe how stupid and backwards that is, and I would classify myself as a fanboy.


i'm just glad they still kept the headphone jack


Will be interesting to see how these fare for developer workloads, but dropping 2 USB ports on one side of the laptop is annoying.


Would this basically be an APU but with ARM cores?


16GB max? Really? I bought my mini as minimal as I could except for the RAM, since too little RAM is a real limitation. Xcode. Video. At least there should be an option.


M1 - the gpu performance claim is questionable


Not really. They already have class-leading GPUs in the iPad and iPhone.


Do the M1 computers support Docker Desktop?



I did not see it mentioned, but I hope an app comes with Big Sur that lets us see which apps will work with Rosetta 2, through Finder or some sort of report.


Not a mac user, but at WWDC they showed you could check in your task manager / process viewer.


If it's anything like the transition from PowerPC architecture to Intel, you'll be able to see this information by selecting the app package in the Finder, and invoking the "Get Info" window (cmd-I) or palette (option-cmd-I).
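
And if you want to check from code rather than the Finder, Apple documents a sysctl for the Rosetta translation environment that reports whether the current process is running translated; a rough sketch:

  /* Returns 1 if the calling process is running under Rosetta translation,
   * 0 if it is native, and -1 if the answer can't be determined. */
  #include <errno.h>
  #include <stdio.h>
  #include <sys/sysctl.h>

  static int process_is_translated(void) {
      int ret = 0;
      size_t size = sizeof(ret);
      if (sysctlbyname("sysctl.proc_translated", &ret, &size, NULL, 0) == -1)
          return (errno == ENOENT) ? 0 : -1;
      return ret;
  }

  int main(void) {
      printf("translated: %d\n", process_is_translated());
      return 0;
  }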


"This affords faster performance on Mac computers using M1 versus separate CPU, GPU, RAM, and other components"

Are they really saying this vs dGPU?


I'm sure I heard them specifically compare it to other laptops with integrated graphics.


And very likely the Intel integrated graphics in existing Macs (aka not Xe). The AMD integrated graphics are easily 3x faster than Intel's in many benchmarks across similar product lines. Which means it's probably roughly the same as the AMD product lines. Large parts of the presentation's perf improvements are probably GPU related.

Of course I'm viewing this with a healthy dose of skepticism, having been around in the PPC days, when you would think from Apple's marketing that the PPC-based Macs were massively faster than your average PC. In reality they were OK, but rarely even the fastest device across a wide swath of benchmarks, mostly sort of middling.


Just tell me if I need to buy one?


It states the following.

> the end of Intel inside Apple notebooks and desktops

Is this really a good idea for Desktops? Notice the graph is for laptops.


I'm just curious why developers end up using Macs when they can install Linux on custom hardware with better specs.


I tried running Linux (for a year) before I gave up and switched to macOS. It was lacking in too many areas (high-DPI support, reliable WiFi, ease of use, application support).

If you only live in the terminal then I'm sure it's fine but as a desktop it's not there yet.


> why do developers end up using macs

To develop apps for macOS and iOS.


Agreed, but besides developing apps for macOS or iOS?


I’m about to build a new Linux desktop. I came dangerously close to pre-ordering a mini instead - RAM and I/O is what held me back from that impulse buy.

Reasons to not use Linux: getting hardware working right is a PITA - note that I already have a powerful desktop PC, but I haven’t been successful in making it work reliably with Linux after several attempts. I’m going to build a new one, specifically researching user experience with Linux for each part because it really is a minefield. I’m also going to have to buy a new 4K monitor, because while Linux DEs can handle high DPI, they really struggle with mixed DPI.

The desktop experience on macOS is quite nice, and you don’t need to worry about whether apps will work most of the time. It handles mixed and high DPI beautifully - you don’t have to think about it and you don’t have to mess with X.Org configs.

I like the idea of Linux, and I am going to move forward with it, but I do that knowing I’m paying a heavy price in my time to do so.


For many, the human interfaces offered by macOS running on Apple hardware are unmatched.


The same reason as everyone else, to run macOS.


Linux is an awful UX.


So... will Homebrew work on it?


With a few hiccups, but it's coming along nicely.


I see no reason why it shouldn't. Most of Homebrew is Ruby script (which is architecture-agnostic), and most of the software installed through Homebrew can be recompiled.


I bought a new Macbook Pro 13 with i7 chip and 32GB RAM, 3 weeks ago. I hope I'm not missing anything...


If the camera of the MacBook Pro 13 is as shitty as it is today, it's not worth it.


How many years has the 13” MBP been stuck at 16GB RAM now? 7?


People keep saying this, but with super fast SSDs and I/O, is it really necessary? Especially in a small laptop. You have to remember more RAM means more power consumption and also a longer time to sleep/wake.


Like negative 2? There have been models with more RAM than that for a while.


Does anybody know if it can run Windows?


How uncompetitive will their igpu be?


Perhaps you should wait until people get their hands on it before forming an opinion on it…


Did I state an assertion or did I ask a question?


You asked a loaded question, so both, really?


I hate the Touch Bar


I think this will rock; just not right away.

I'll get one when the M2 has worked things out.


If the mac is on an arm SoC now why the heck does anyone have to tolerate iOS on their phone?


End of year 2020, and still you can have a Macbook Pro with 16 GB of RAM at most. Why not 32 yet?


You can have a 13" with 32GB or a 16" with 64GB. Just not with Apple Silicon.


So Apple will be shipping 5nm chips while Intel is barely able to produce 10nm? Must hurt to be Intel right now.


While it certainly hurts to be Intel right now, be aware that the generation notation does not mean what it used to. The names just mark new generations, not actual transistor size. Intel's 10nm is roughly equivalent to TSMC 7nm, and Intel 7nm should be equivalent to TSMC 5nm. So yes, Intel is behind, but not as much as it might seem.


If what you say is correct, Intel appears to be a generation and a half behind right now. Isn't that the kind of lead Intel used to have that made them untouchable?


used to be. But no more.


I don't think that it's an apples-to-apples comparison between TSMC's and Intel's processes. I believe Intel's 10nm is closer to TSMC 7nm, but you're right, they're behind either way.


And just like that, within a year pretty much all of Apple's lineup will become the best available platform for AI inference, since each and every one of the new machines will have a TPU.


If you can't buy it on Mouser or Digi-Key, then it is another attempt at expanding the walled garden. If for some reason your CPU dies in your PC, you can just buy a new CPU without having to replace the entire computer. I bet that will not be possible with an Apple product. They are focusing on forcing consumers to buy new products instead.


OK, 8 cores, but in a big.LITTLE config, so probably 4x A53 + 4x A72. Nothing revolutionary here IMHO. Where is the 16-core A72 SoC we all want?


Apple hasn't used off-the-shelf ARM cores since like the early 2010s. They make their own custom CPUs. Only the ISA is ARM.


so how do they differ from the A72, A53 etc?


Apple designs their own cores.



Uh, is there something specific I should be looking for? I’m familiar with ARMv8-A already.


A53, A72, etc. are all based on that architecture. So what is Apple adding?


Apple basically makes their own thing based on the architecture as well.


their own thing. OK, got it.


really, do you have a reference?


Uh, I guess I figured this to be fairly common knowledge so the easiest thing I can point to is Wikipedia, which names their custom cores: https://en.wikipedia.org/wiki/Apple_A14. They’ve been designing these since the A6 in 2012 (“Swift”).


Such a damn shame they went with ARM and not AMD Ryzen. Mac quality has really suffered the past few years, and ARM is just the nail in coffin for me.


I wonder if they're going to try and pull the same 'no specs' thing that they do with the iPhone/iPad.

I think I want to know how much RAM is in my MacBook Pro, whereas I don't really care how much is in my iPhone.

edit: Store is LIVE, and has lots of specs!


They have pretty decent specs in the store, although there's nothing about clock frequency or TDP to explain the performance difference between the Air and Pro.


They didn't; the pricing page shows exactly how much RAM you're paying for.


I’m really worried that we have finally reached the point in computing where you can buy a machine that is fast/efficient, or you can buy a machine that is private.

Apple sends the hash of every binary you execute to Apple in current (and presumably future) macOS, and the changes in macOS 11 mean that Little Snitch will continue to work for all processes, except OS/system processes.

This means that it may be impossible to disable the telemetry without external filtering hardware.

This situation is extremely troubling to me, because Stallman and Doctorow have been predicting just this for about twenty years.

Today, it’s here.

I really hope these things can boot OSes other than macOS, or that a way to do such can be worked out.


My AMD Ryzen running Linux is pretty fast, efficient and private. No? Granted, it's rather hard to use in Starbucks, being a desktop.



I dropped the Apple ecosystem many years ago, but the pricing never ceases to intrigue me.

230€ for 8GB to 16GB RAM. (and RAM is shared between CPU and GPU)

230€ for 512 to 1TB SSD.

They sure know how to milk their customers.

So while the starting price seems surprisingly low, an acceptably specced 13 pro comes in at 2139€. And that's without knowing how the GPU will hold up.

The presentation is brilliant marketing though, and so much better than the competition. As long as you disable your brain for claims like "Universal apps are the fastest and most powerful apps you can get!", and don't mind that they never mention specifics like which chips the benchmarks actually compared against, or which game it is that runs faster under emulation than natively.


If Apple is so good at chip design why don't they just pivot to that? Then we can all standardize on the single (objectively superior) software stack that is Android. There is a lot of money to be made with licensing and they wouldn't need to run a huge supply chain anymore. It doesn't seem right that the best architecture in the world is locked in to a very specific ecosystem when the whole world could be benefiting from their innovations.



