I remember each of the architecture transitions (M68k->PPC, PPC->x86, x86->ARM), and perhaps the most vivid recollection I have of each one is that it made perfect sense at the time. In other words, it wasn't a surprise move by Apple, and I remember widespread speculation for years beforehand in the broader community (Apple users and otherwise).
PPC offered tremendous promise in the early 1990s at the same time Motorola's evolution of M68k was lackluster at best - so it made perfect sense that Apple would need to migrate away from M68k, and PPC was the optimal choice so as not to compete directly with hardware used by the PC market.
And when Intel/AMD were innovating tremendously in the early 2000s while IBM was lagging with bringing out aggressive updates to their PPC line, the switch to Intel seemed inevitable.
Even the idea that Apple should develop their ARM mobile chips into chips for future Apple products had been discussed heavily in my circles since about 2016 (around the same time Intel's offerings started to stagnate). The earliest I remember was this blog from 2011: https://www.mattrichman.net/apple-and-arm-sitting-in-a-tree/.
Oh, it started way before 2016; I remember it being discussed in some circles in 2009/2010 already, at least on hardware-industry enthusiast forums. Remember, Apple bought P.A. Semi in 2008 and partnered with Samsung Foundry. And a Mac tablet was expected in 2009, before we knew it as the iPad in 2010. There were talks/rumours of, and a push for, Intel becoming a foundry in 2011, and later in 2013 they announced Intel Custom Foundry. [1]
There were lots and lots of signals in the background that Apple was serious about their chip strategy all the way back in 2009. It wasn't so much about switching to ARM as an ISA, which is somehow what most people focused on, but about switching to self-designed and fabbed SoCs for cost efficiency. And that was before the internet knew anything about TSMC or semiconductor foundries.
[1] I still remember I gave Intel another year to see if they would improve their strategy, but BK was simply a complete failure. I decided to bet against Intel and bought AMD at below $3 a share.
Yes, we were all speculating that PA Semi's PWRficient PPC chips were the future of laptop chips on Macs. I remember being very excited about that on the MacRumors and ArsTechnica forums.
Apple always wanted to make their own chips (see Alan Kay’s famous quote), they just lacked the volume.
There was that moment when a friend had a Dual G5 PowerMac only to have it trampled in performance by the new Core Duo Mac Mini. That was when we knew Apple had made the right move. I called the G5 tower "Steve's shame"; you could just tell, every time those fans kicked into top gear, that there was a sense of shame that that thing even shipped.
The M1 felt like that all over again. I didn't really get that feeling during the 68K to PPC era but that also wasn't handled as gracefully.
Didn't Apple start shipping water cooled overclocked CPUs just to get some sort of a speed bump when the CPUs topped out? Was that the G5 you mentioned?
Yes, Apple shipped the top tier PowerMac G5s with water cooling, including the top-of-the-line quad-core (2 x dual core) models. I don't think they were overclocked, they just ran extremely hot at stock speeds.
And the M1 MacBook was quieter, lighter and I could actually sit it on my lap without worrying whether it would prevent me from having little Scarface74’s.
Apple didn't come out with high-end ARM Mac laptops until later. Yes, from a laptop (a portable computer) I loved being able to plug it in overnight and then use it all day at a customer's site without worrying about battery life.
> And the M1 MacBook was quieter, lighter and I could actually sit it on my lap
Yeah this is key. In a laptop that’s meant to be used as a laptop, designing for maximum power at all costs isn’t the winning move. M1 was nice in that it’s plenty powerful while also not making the laptop’s fans scream and being able to survive away from an outlet for several hours despite doing “real work”.
In short, balance, which is still surprisingly hard to come by in x86 laptops. If you want a 10 lb monster desktop replacement, a gaming laptop that does the 2016 Apple thing and crams too many watts into too thin a chassis, or a really weak ultraportable with mediocre battery life, you've got plenty of options. If you want reasonable performance with great battery life and silence, however, you're restricted to a tiny handful of options.
I feel like most people with those needs aside from a handful of corporate purchasers are going to be building their own towers anyway, which Apple is going to have a hard time competing with even if they fixed the current issues with the Mac Pro towers.
It would be interesting if Apple took a crack at an M-series HEDT variant that’s allowed to chug electricity freely though, even if that’d be extremely niche.
I don't disagree with anything you said. If you note the comment to which I was replying, I am addressing the specific point that the M1 was a revolutionary boost in performance, which it was not. Apple's Pro stuff (Intel) at the time was a year behind everyone else on specs. When their Pro stuff came out - sorry, but even the M2 isn't "Pro" anything.
What I am doing is comparing top of the line from Apple to top of the line, period. And consistently, Apple is always behind on workhorses for getting work done. They are better in your "light, quiet, slightly longer battery" category. At 50% more cost. For the free laptop from work, I want the most powerful portable thing available. For personal life, no way I'm paying the Apple tax. So - no use case in my life. I do always get my wife iPhones and MacBooks though. She teaches languages to little kids and I don't want to spend my time helping her with tech stuff, so the Apple tax is worth it. But never for the hardware - just for the walled garden and Fisher-Price UI for kids.
The Precision is not light, but that's because it's thick metal that you can run over with a car, and extremely durable. No one complained about a ThinkPad being built sturdy. For the use case you describe, an XPS with an i7 at the time was super light, got over 10 hours of battery life if you just did regular office work (no compiling or large data processing), and had similar specs. I used to take that with me when I'd travel internationally. It absolutely did not get hot - the key was to turn off Turbo Boost when on battery by setting max CPU state to 99%.
The thing is, if we compare price points, you could have a Precision for the cost of an M1, and now you have desktop power on the go. It's not super-light, but it's still light and thin enough to put in a shoulder bag and not get neck pain.
No one complained pre-2020 about a laptop that had bad battery life and ran loud and hot, because both Macs and Windows machines were using x86.
Just like no one complained about the clunky Nomad before the iPod was introduced. Then only CmdrTaco complained (deep cut. It’s a 20+ year old reference)
> an 8-core Xeon,
> and the battery went about 8-10 hours
I find that very hard to believe. Also, how much did this "laptop" weigh, and how thick was it? Presumably it was in an entirely different market segment and not competing with the MacBook Pro. It's almost like comparing the M1 to a desktop...
The XPS was the equivalent Dell product, and I don't recall it being particularly better than a Mac back in 2018-2019.
Apple was much more open about future plans in the early 90s. Apple talked about the PPC transition long before hardware was available and said they were switching to PCI for the second generation of PPC Macs before the first generation ones ever shipped, or slightly thereafter.
But it seems clear that Apple is going to have the same problem with ARM that it had with PPC. It's already falling behind on the high end. The ARM Mac Pro is laughably incapable compared to high end x86 processors, and they still have no GPU story to compete with Nvidia on the high end.
I'm sure Apple could design something capable of competing with high end Intel CPUs. They just don't have the stomach for it because the market for really high end Macs is so small.
Any time this comes up I try to entertain the idea of using a much faster computer without macOS, and then realize that the OS is ~70% of the complete user experience for me. I am not sure what percentage of users are like that. I have PTSD just thinking about the whole multivendor fuckery and what it means to have hardware devices with drivers that barely work most of the time. After using a computer on a daily basis for three decades I have no idea why people waste time with bad software other than it being a work requirement. I still use Windows for work and the amount of time wasted on that platform is insane. The majority of staff at my current company agrees that Windows is a dead end. The most notable 'innovation' MS has in Win11 is the removal of adjustable scrolling direction. I seriously thought it was a joke, somebody at MS trolling people for even considering using Win11. Apparently this is OK.
Anyways, back to the subject: if Apple started shipping Raspberry Pis tomorrow I still wouldn't care, and I'll keep buying Apple until they ruin macOS, in which case I'll switch to a job that does not require a computer.
As a happy Mac convert: the drivers are not perfect there either (video is especially buggy, and Sonoma only made it worse). On Windows I personally experienced driver issues very rarely, and mostly on machines I built myself (the story for OEM-qualified configurations is even better), and Windows does plenty of things better than the Mac UI, in my opinion.
> I have PTSD just thinking about [...] what it means to have hardware devices with drivers that barely work most of the time.
Same, but for my MacBook Pro. The Bluetooth implementation is borked and I can't even use a single device properly - I have to constantly unpair/pair my headphones. If I use 2 devices there is some stuttering in the sound. But if I use 3, all devices stutter unless I kill some Bluetooth daemon; then it works normally for 2-5 seconds and goes into bork mode again.
But then the software is awful too:
1. Switching between workspaces takes up to a full second before the new workspace becomes active even after disabling all animations
2. Some windows just randomly decide not to show the three control buttons, and it becomes impossible to close them without messing with the process.
3. For simple screen recording I have to open QuickTime Player. Then the screenshot tool becomes a screen recording tool with no apparent way to return it to being a screenshot tool.
And these are just the ones I experienced this/last week. Don't get me started on the mouse getting stuck on a secondary monitor or disappearing completely, and other shitty UX experiences I've had being forced to work with macOS for the past 2 years. Can't wait to move away and not look back.
Can't comment on the Bluetooth issue, since Apple devices have the fewest BT issues for me by far. But for screen recording there's the cmd+shift+5 shortcut (or the Screenshot app) to do the same thing without opening QuickTime Player.
It is the only platform giving me these Bluetooth headaches, and the reason my nice BT headset sat in the closet for almost a year.
Regarding the screenshots, I've just learned these shortcuts, but if I open the screenshot tool using CMD + Spacebar + "screenshot", it often just defaults to screen recording and I see no option to switch that.
If they developed a competent eGPU implementation (currently entirely unsupported on Apple Silicon Macs as far as I know), then that and another tier or two of CPU options fitted into the Studio line sounds more like Apple's style. The trashcan, for all its faults, was still pleasantly small, and a small, quiet workstation bolstered by exclusively external expansion strikes me as ever-so-Apple.
And of course we keep the Mac Pro above that, meant for almost nobody.
You're right, they don't appear to be chasing these markets aggressively right now, but some of the stuff they're doing suggests that might change.
In terms of maximum RAM, I also wonder if their unified memory is a factor. There's only so much you can cram into a single SoC. I suppose they could support off-chip stuff, but something something bandwidth.
> PPC offered tremendous promise in the early 1990s at the same time Motorola's evolution of M68k was lackluster at best
Apple entered into a trio with IBM & Motorola to develop the PowerPC in the early '90s.
But at that time, there already was MIPS around.
Does anyone know why this PowerPC project was started despite that? Especially if, per the article, the main goal was to move from CISC to RISC?
Even if the MIPS CPUs around at the time had performance/W issues (I've no idea, yes or no), improving the technology for an existing ISA would be easier than inventing an entirely new architecture, no? (Plus all the software tools.)
And there was also SPARC then.
So why reinvent the wheel, vs. improve MIPS (or SPARC) & use that? Not-invented-here syndrome? Licensing issues?
PowerPC was not really an entirely new architecture, it was a downsized implementation of the POWER architecture which IBM had already been shipping in workstations for a few years.
The AIM alliance also wanted to end the Wintel duopoly. Adopting SPARC would have helped out Sun Microsystems, which I believe was the leader in workstations at the time.
MIPS was to be used as one of the 'standard' reference platforms for the Advanced Computing Environment[0] or Advanced RISC Computing (?) (the other being X86), that would run Windows NT.
IBM, and a lot of systems, had been 68K. Though the architectures were different, a lot of 68K assumptions were maintained in the PPC, so porting was easier.
Others were a natural evolution once the original CPU looked like a dead end, even though PowerPC did move on and some embedded 68k parts still exist. But that Steve's salesmanship could carry the platform that long is a miracle. The later struggle with Wintel was hard, as Intel really had improved a lot.
Still, it was a surprise when Steve said they had a secret project keeping NeXT - sorry, OS X I meant - running on Intel as an option.
To my memory, it was known that Apple had been maintaining an x86 build of OS X for some time, so it wasn't completely out of left field.
But, it still felt surprising, if only for their intense marketing painting Intel processors as slow and their PowerMacs as "supercomputers," which still hits for its brazen absurdity. Then to turn the narrative on its head in one keynote!
By complete coincidence, I also wrote a post about Mac transitions a few days ago - with perhaps a bit more emphasis on the architectures that almost made it into the Mac - which may be of interest.
I actually commented on your post a few days ago, with the same sentiment - this is a repost from the Medium article I showed you, but I'm trying to build up my audience on Substack :)
Can I be very cheeky and ask if you'd be interested in recommending each other's publications? It seems like we have similar niche audiences (albeit my work is a little more iOS-centric).
Amazing willingness on Apple's part to switch architectures significantly 4 times and leave behind so much code. I know Apple has done some clever stuff with translation and emulation, but this is so different from the Microsoft Windows approach to backward compatibility, where you can upgrade from Windows 1.0 all the way to Windows 11: https://www.youtube.com/watch?v=nW4rk3gFOxM
Apple made perfectly good choices but history did not come out in their favor. So they had to keep switching and 3rd parties had to follow (if they wanted to stay on the Mac).
Microsoft ended up on what became the dominant CPU that just happened to continue to rocket up in performance for decades. So although Windows is capable of running on other processors, it was never much of a market and most of the software never came along.
I wouldn't say it just happened to be. There's been an orgy of synergies, mostly but far from exclusively with Windows, that put and has kept x86 at a crossroads of cost, performance, platform openness, and compatibility across vendors and iterations.
I have a hard time imagining what it would look like if MS put all of their weight into an arch transition, but I have a hunch it would come with the most ridiculous compatibility layer we've seen so far.
You’re right. Intel had the money to push the performance on their chips because of all the sales to DOS/Windows users. Still neither company knew which platform was going to be dominant when they started.
We don’t know if Motorola could have done what Intel did given similar resources. Maybe they could have.
I fear MS will continue to struggle with ARM. The problem with Microsoft is they just don’t control enough big applications. There are just way too many little programs out there that people depend on.
Each time Apple moved the performance difference was enough to cover the cost. Plus it was clear you HAD to switch if you wanted to stick with the Mac.
PC users aren't going to lose Intel; it will still be a choice. No one can beat Intel's best on desktop. Unless they can convince Nvidia and AMD to come along with native drivers, games will suck. Anything else that doesn't move will depend on a very high speed emulation layer.
Unless battery life can win the day, it’s gonna be a tough fight. And you know Intel is not going to go down easy.
The big difference is that Apple has, throughout most of its history, been vertically integrated in a way where they control the whole stack. MS on the other hand has to deal with a bazillion IHVs that are lazy as all heck about updating their drivers.
The drivers are the smaller problem IMHO. Companies use 20-year-old software which a) is difficult to port to a different arch, b) is no longer developed, c) has lost its source code.
I feel like Microsoft chooses to deal with a bazillion IHVs. Surely they have and had the resources to go vertical, but they do not seem to want to get their hands as dirty as Apple’s in hardware business.
They choose to do this, because they understand this relative openness and dedication to backwards compatibility is their major selling point for many well paying (institutional) customers.
Yes, they could choose to go the Apple way, but ... they would likely lose the customers mentioned above, and would likely still not be as good as Apple in Apple's game.
With the right hardware combinations, the only break you'd theoretically have in the System/MacOS/macOS history from 1984's System 1 to 2023's macOS Sonoma 14.1 is from MacOS 9 to MacOS X 10.0. OS X was actually a completely new operating system and oftentimes you had both OSes side-by-side on the same disk to either dual boot or to run OS 9 stuff in Classic mode under OS X.
(With full disk encryption, T2 chips and other various low-level hardware changes this might not actually be feasible under Apple Silicon chips, but at least through the Intel days you could have upgraded a system with hardware changes in the same way.)
Probably the most significant rewrite was away from the original 68K assembly, championed, not surprisingly, by Jean-Louis Gassee, who also opened up the hardware side.
Once the code was in C (which was being standardized), they could work on porting across CPUs. The huge mistakes of Copland and Taligent were largely due to trying to also jump into some variant of C++ and object-oriented language at the OS level. Of course, that was at the same time Linus stuck with C for Linux and made much more relative headway.
This also gave Apple a lot more leverage in the application space. They spun applications out to Claris originally because the apps were such a pain point, but once C and MPW settled they could duck MS O$$ice leverage/fees with their own more integrated approach (and sell Works + round pink blobs to creatives et al). That made it feasible for Steve to ink the deal with Bill to keep Microsoft apps relevant on Macs, when most of the other vendors were giving up on that 10% of PC share.
When I started at Apple in late 1995 there was still plenty of 68K assembly in the code base. Sometime around when the push to PowerPC began, the color picker code landed in my lap. While mostly in C, the RGB and HSB "wheels" were rendered using hand-coded assembly. (This allowed the user on a lowly Macintosh II to drag the slider in real time to adjust brightness or what have you and get a responsive color wheel updating.)
Having never written assembly it was fun pulling down a book on 68K assembly and determining how to do each instruction in straight C.
I remember one odd instruction — some kind of mask-and-shift-bit instruction — that I was unable to find documentation for in the book (pre-StackOverflow obv). Fortunately someone had a Motorola 68020 manual and the instruction I was looking for was there. It had not occurred to me that there were newer instructions for the '020 and that the color picker might use one. (After the fact it occurred to me that in the Macintosh line an '020 or better was the minimum CPU for color support).
A bit confused here. I am not sure about the "coded in C" part … OS X is … but is OS 9 Pascal plus assembler?
To me, there was a struggle of transition, but the strategy was to contain, extend/swap, then extinguish (since today is Halloween, let us quote the Halloween papers) in order to kill the whole OS 9 code base. Everything from assembler had to go … as HP once said, if you do not eat your own baby, others will ….
Among these two major transitions plus the one just announced, as Steve said in the video I quoted above, it is the brain transplant of the OS that is the hard part. Not the hardware, as they start with emulation, and the CPU power of PowerPC and Intel moved them along "easily" (and ARM, but we do not know yet, given Qualcomm's low-end target). The whole discussion of the CPU is sort of misguided. It is not the CPU ….
Really the hardest part is the OS part: OS 9 to OS X. When the base is actually Objective-C … hence NeXTSTEP (the real OS X) could move across hardware and run on Intel. Apple just had to maintain it secretly for years.
Yes, OS X could always run on Intel. It was just OS 9 and all this Carbon etc. stuff that was the baggage that had to be left behind. That is a decade of work. But not to migrate OS 9.
For ARM, as pointed out by others, NVIDIA is an issue. Given NVIDIA can be a trillion-dollar firm … I think an Apple car may not be the best next move. Apple AI for the rest of us might be. How to make that transition ….
> I am not sure about the "coded in C" part … OS X is … but is OS 9 Pascal plus assembler?
Not to any significant extent. The leaked System 7 code I've seen was mostly C and assembler, with a couple of isolated Pascal files in specific areas (mostly Apple Events and CommToolbox). I can't imagine they added much more Pascal after that.
With the rise of networking and multimedia this was also an incredibly chaotic time and I suspect some of the problems with the big OS rewrites stem from that. Microsoft did a better job getting something useful shipped. I don't know if it ever got documented but I'm curious whether there was any serious push to build on the nanokernel work incrementally rather than do the big rewrite.
Linus stuck with C because Linux is a UNIX kernel clone, and Linus hates C++ no matter what; thus C.
Plenty of other OSes were quite successful using C++: BeOS (which only failed due to the NeXT acquisition; it could have gone the other way), Symbian, IBM mainframes that eventually started adding C++ alongside PL/S and PL.8, on Windows many of those C APIs are actually C++ code exposed as extern "C" {}, and many other examples.
I'm curious, why is C++ bad at the OS level? It seems like having stuff like classes as improved structs would be helpful as long as you avoid pitfalls like virtual functions/STL containers. C++ compiled without the STL and specific features like virtual seems like entirely an improvement for writing an OS.
OSes have limits on when you can do some stuff - for example allocating/deleting things in an interrupt handler (and the associated code that interacts with it). Mostly you code around this by just not doing it if you possibly can - but C++ might be doing this behind your back. And when you do need to do it you will likely use a special allocator and associated heap that knows how to interact safely with ISRs - just doing a 'new' without being able to say "where" and "how" is a problem. And then you have to be able to delete/free it the correct way too.
Essentially it means that an important chunk of the language and standard library is off limits in an important part of an OS, and lots of evil bugs are available to people new to the game who just blindly new stuff (or let the compiler do it behind their back).
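To make that concrete, here's a minimal sketch (mine, not from any particular kernel) of the "special allocator" idea: a fixed pool an ISR can draw from without ever touching the general-purpose heap or taking a lock. The name IsrPool and all the details are assumptions, just to show the shape of it:

    // Hypothetical fixed pool for ISR-safe allocation: lock-free, no heap calls.
    #include <array>
    #include <atomic>
    #include <bit>
    #include <cstddef>
    #include <cstdint>

    template <std::size_t BlockSize, std::size_t BlockCount>
    class IsrPool {
        static_assert(BlockCount < 32, "free list is a single 32-bit bitmask");
        struct Block { alignas(std::max_align_t) std::byte data[BlockSize]; };
        std::array<Block, BlockCount> blocks_{};
        std::atomic<std::uint32_t> free_mask_{(1u << BlockCount) - 1};
    public:
        // Safe to call from an interrupt handler: no mutex, no global heap.
        void* allocate() {
            std::uint32_t mask = free_mask_.load(std::memory_order_relaxed);
            while (mask != 0) {
                std::uint32_t bit = mask & (~mask + 1u);  // lowest free block
                if (free_mask_.compare_exchange_weak(mask, mask & ~bit,
                                                     std::memory_order_acquire))
                    return &blocks_[std::countr_zero(bit)];
            }
            return nullptr;                               // pool exhausted
        }
        void deallocate(void* p) {
            auto idx = static_cast<std::uint32_t>(
                static_cast<Block*>(p) - blocks_.data());
            free_mask_.fetch_or(1u << idx, std::memory_order_release);
        }
    };

    // Usage sketch: placement-new into the pool instead of a bare `new`.
    //   static IsrPool<64, 16> pool;
    //   void* slot = pool.allocate();
    //   if (slot) { auto* msg = new (slot) SomeMessageType{}; /* ... */ }

The point is that the "where" and "how" of every allocation becomes explicit, instead of whatever a bare new decides to do behind your back.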
RAII, templates and inline functions instead of macros, safer enums, stronger typing than plain C since fewer implicit conversions are allowed, namespaces instead of prefixes, classes as modules with proper invariants, safer strings and arrays.
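A quick, purely illustrative sketch (mine; the IRQ names are made up) of a few of those points in an OS-flavoured setting: a namespace instead of a name prefix, a scoped enum with no implicit int conversion, and RAII so the unmask can't be forgotten:

    #include <cstdio>

    namespace irq {                                   // namespace instead of an irq_ prefix

    enum class Line : int { Timer = 0, Uart = 1 };    // enum class: no implicit conversions

    class MaskGuard {                                 // RAII: mask in ctor, unmask in dtor
    public:
        explicit MaskGuard(Line l) : line_(l) {
            std::printf("mask line %d\n", static_cast<int>(line_));
        }
        ~MaskGuard() {
            std::printf("unmask line %d\n", static_cast<int>(line_));
        }
        MaskGuard(const MaskGuard&) = delete;         // invariant: exactly one unmask
        MaskGuard& operator=(const MaskGuard&) = delete;
    private:
        Line line_;
    };

    } // namespace irq

    int main() {
        irq::MaskGuard guard(irq::Line::Timer);       // critical section starts here
        // ... work with the line masked; it is unmasked automatically on scope exit
    }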
The 6501 was pin compatible with the 6800, but had its own instruction set. After the lawsuit, the pins changed and we got the 6502. This meant that if you enabled 6800 support on the Apple 1 you could drop in a 6501 and use it with software made for the 6502. If you were really ambitious you could use a 6800 but then you'd have to write a new monitor ROM. There's no evidence that this was done at the time, but about 10 years ago someone did it: https://www.youtube.com/watch?v=ag6pWUhps7U
The 6800 wasn't a drop-in replacement on the Apple 1. Installing one required some hardware modifications to the board. (It's unclear if anyone who owned one ever did this.)
Also Apple has used lots of CPU architectures over the years in various devices (e.g. time capsules, various dongles, etc).
Surely they must have RISC-V deeply embedded in various locations invisible to the end user, both for cost and experience.
They could possibly even be used deeply in the bowels of the M and A devices doing some random housekeeping in the storage system or who knows what.
In a funny way the term “CPU” is returning to its mainframe roots with so many functional units harnessed into a system, yet at the same time having strayed so far from its original usage (with multicore, non-linear systems) as to have become meaningless.
> The M1 chips have a unified memory architecture shared between GPU and CPUs. This is a masterstroke for performance.
No it isn't. It's a nice optimization to have, but it doesn't radically change performance. If unified memory were that drastic, AMD's APUs from 2012 to 2015 wouldn't have been so forgettable, and that's where heterogeneous compute with unified GPU/CPU memory was pioneered.
Also every phone SoC has unified memory, and yet only Apple's can compete with Intel/AMD x86 on performance. M1's performance ain't from unified memory (especially since mixed CPU/GPU compute on the same data is very rare)
Neither AMD's APUs nor the Windows Display Driver Model supported unified shared memory until very recently. They would explicitly carve out memory at boot for graphics use, and physical pages could not be shared between GPU and CPU. They'd still make use of apertures to copy data in/out of the respective memory ranges.
That is simply not true. Kaveri (2014) had pointer passing between CPU & GPU. I don't know where you're getting that pages couldn't be shared from, but it's not accurate.
For all intents and purposes, the Apple II was an entirely separate product family. There was very little software compatibility between the two, and the II existed alongside Macintosh for nearly a decade.
Lost me two paragraphs in, due to gross inaccuracy:
> Intel 8088 : 8/16-bit microprocessor — 8-bit registers with a 16-bit data bus.
Uhh... the 8088 is a 16-bit CPU. Its external bus is 8 bits wide to save on pins...
It does not get better further in:
> 20-bit memory addressing range — supports 640kB of RAM.
It supports 1MB of addressable space, which the system integrator may allocate as they desire. 640kB is something that PCs did, but Apple was in no way tied to that.
> Intel 8088 : 8/16-bit microprocessor — 8-bit registers with a 16-bit data bus.
> Uhh... the 8088 is a 16-bit CPU
The 8086 / 8088 have eight registers, four general ones (AX, BX, CX, DX) each of which is made up of two 8-bit registers (AL, AH, BL, BH, CL, CH, DL, DH), so the article is not completely incorrect about 8-bit registers (although I'd agree it's misleading to say that) but is definitely incorrect about the 16-bit data bus.
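For what it's worth, a tiny illustration (mine, not from the thread) of what "AX is really AH and AL" means, done with plain shifts so it's unambiguous:

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint8_t ah = 0x12, al = 0x34;            // the two 8-bit halves
        std::uint16_t ax = static_cast<std::uint16_t>((ah << 8) | al);
        std::printf("AX = 0x%04X\n", ax);             // prints AX = 0x1234
    }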
I used to tease people about the sudden change of Apple's stance on Intel CPUs at the time of the transition. Before the transition, Apple mocked x86 and insisted that PowerPC was indeed much faster than x86. There were even proponents of Apple who said the same thing. All of a sudden, Apple changed its mind and praised x86 as the superior one.
Not much has changed. We are now hearing the praise of Apple Silicon even though in practice it is nowhere near as good as they want you to believe. If you have a workload that will never extend beyond what Apple has planned and are willing to eat the insane markup on options, yes, it is ok for the efficiency. But otherwise, this is a waste of time on an unstable software platform.
I was an Apple user in the PowerPC era, and I remember very well how bad it got. And I have been burned too many times by software deprecation and unnecessary changes that make things worse. I am still salty about Aperture, which I regularly recommended to friends/clients; iWork got destroyed from a UI standpoint and did not even improve on anything (it still performs like shit for big files, and document formatting preservation is even less reliable than Word's across versions); Apple Music is so bad that I would rather use Spotify (if I really have to use a Frankenstein web-client application, I would rather use the performant one).
I am still an Apple user in a way: I've got a Mac Mini, iPhone, and Apple Watch, and previously had an iPad that got stolen but never bothered to replace (the pricing is ridiculous, and the low end has a non-laminated display that makes it annoying to read on, which was my primary activity on it). But considering the strategy, those are pretty much the last things I'll have from Apple.
Apple only cares about itself; it is unhealthy to get into a long term relationship with them. Especially as innovation slows down, stability and longevity are more important than ever.
But I guess we will keep hearing about how awesome Apple Silicon is. I'll wait for an open-market competitive version, thank you very much (provided there is any advantage at all, which I sincerely doubt).
It was for quite a while. Then Intel pulled ahead by going out of order and solving many hard engineering challenges that PowerPC processors did not solve as well.
I appreciate articles like these because they're useful for people who don't know the background and details and aren't interested in a deep dive, but it'd be nice if the author spent a little more energy on details and correctness.
For instance, 20 bits of address space supports 1 megabyte (2^20 = 1,048,576 bytes); the 640K limit was a consequence of the IBM PC's design. Also, Apple would've considered the 8086 over the 8088, considering the need for memory throughput to have a responsive, fully bitmapped display.
The addressing of the 68000 wasn't as critical for the Mac as the article says; the Mac's designers took shortcuts that made 4 megs the limit for the first-generation 68000 Macs because it was easy and they had lots of space, although it really wasn't the limit, because the Mac Portable can have up to 9 megabytes with the original 68000's 24 bits of address.
The Pentium didn't compete with the m68040. The Pentium came out after the PowerPC 601. The 80486, which made it to 100 MHz from Intel (higher from other vendors), competed with the m68040.
I think some steps were missed between the power-hungry and heat-generating PowerPC and the MacBook Air. I don't remember any history showing the Air as something that Apple was trying to, but couldn't, make. The issue was that the G4 was good for its time, but there was never a proper G4 successor because the G5 was too power hungry and too hot. If there was some MacBook Air project that never got off the ground during the PowerPC years, I'd love to hear about it.
The comment about performance-per-watt of x86 versus PowerPC was only true of the G5, it's worth noting.
"Superscalar Architecture" is in the section of the article talking about, "What made Intel x86 CPUs so much better?", even though Motorola, Intel and PowerPC had been superscalar since the '90s. Intel definitely improved their superscalar implementations, but they certainly weren't unique ("to get superscalar architecture working effectively" is not how I'd put it).
The author wrote, "I had to know what actually caused Intel’s x86 architecture to be so far ahead of its competition.", but never mentions AMD anywhere in the article. If AMD hadn't seriously outperformed Intel and forced Intel to actually compete, and if AMD hadn't forced the creation of 64-bit x86, then Intel certainly would not have been an option for Apple. The Pentium 4 had the same heat and power issues as the PowerPC G5. Intel ended up cancelling the Pentium 5 and instead going ahead with a much modernized Pentium III core for their Core release, which made up the initial release of Intel Macs, because AMD were eating their lunch.
There are some other things, but they're mostly minor. It's a well written collection of information.
> Intel ended up cancelling the Pentium 5 and instead going ahead with a much modernized Pentium III core for their Core release which made up the initial release of Intel Macs because AMD were eating their lunch.
Intel based the Core on the Pentium M, because the Pentium 4 sucked. AMD happened to be competitive at that point in time, but that doesn't mean that without AMD, Intel would have been blind to the fact that the Pentium 4 didn't achieve Intel's own expectations - NetBurst's ability to scale up frequency (they targeted 10 GHz) and keep wattage/heat in check simply did not work out.
"The Pentium M represented a new and radical departure for Intel, as it was not a low-power version of the desktop-oriented Pentium 4, but instead a heavily modified version of the Pentium III Tualatin design (itself based on the Pentium II core design,"
Yes, the MacBook Air was much more about being able to completely remove the DVD drive, SSD storage able to replace a traditional hard disk, and the battery formed of space efficient flat cells rather than a series of cylinders.
This one is as infamous as it is useful. Modern CPUs have the same instruction, but without the built-in loop (fused multiply-add).
Likewise, modern CPUs have dedicated CRC instructions, ARM Thumb 2 famously has an instruction for case statements (jump tables, instructions TBB/TBH), and many more.
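For anyone curious what that looks like in code, here's a small sketch (mine, not from the thread): std::fma computes a*b + c with a single rounding, compilers typically lower it to the hardware FMA instruction on targets that have one, and looping it by hand gives you roughly what the VAX POLY instruction did internally (Horner's method):

    #include <cmath>
    #include <cstdio>

    // Evaluate c[0] + c[1]*x + ... + c[n-1]*x^(n-1) as a chain of fused multiply-adds.
    double poly_eval(const double* coeffs, int n, double x) {
        double acc = coeffs[n - 1];                   // highest-order coefficient
        for (int i = n - 2; i >= 0; --i)
            acc = std::fma(acc, x, coeffs[i]);        // acc = acc*x + c[i], one rounding
        return acc;
    }

    int main() {
        const double c[] = {1.0, 2.0, 3.0};           // 1 + 2x + 3x^2
        std::printf("%g\n", poly_eval(c, 3, 2.0));    // prints 17
    }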
The VAX was way ahead of its time, but failed to deliver on performance due to not having a pipelined or out-of-order architecture as is industry standard today.
My other favorite instructions on the VAX: the CRC instruction, the CASE dispatch table operations, and, glory of all glories, the _3-parameter_ operand + operand + destination instructions.
Frankly, I tried to go into depth on this because it was really interesting. I am embarrassed to say it was tough to find sources outside Quora and Reddit. It's possible this HackerNews comment parent might become the canonical source :)