I think this article paints a rosy picture of the PowerPC. I was a Mac user and owned G3 and G4 Macs (and a PowerPC 603 Mac). It wasn't a happy time that suddenly came to an end with the G5 and a decaying relationship. IBM and Motorola had been struggling to keep up with Intel for a long time. Apple kept trying to spin it and the next-great-thing was always just around the corner...the problem is that Intel kept getting there faster and cheaper.
Apple would talk about the "MHz-myth" a lot. While it's true that MHz doesn't equal performance, Intel was doubling the PowerPC's performance most of the time. The G3 saw Apple do OK, but then Intel went back to dominating in short order. The PowerPC never matched Intel again.
It was really bad. People with Windows computers just had processors that were so much more powerful and so much cheaper.
You can say that Apple always charges a premium, but not too much today on their main lines. Apple simply doesn't sell low-end stuff. Yes, a MacBook Pro 2GHz costs $1,800 which is a lot. However, you can't compare it to laptops with crappy 250-nit, 1080p screens or laptops made of plastic, or laptops with 15W processors. A ThinkPad X1 Carbon starts at $1,553, and that's with a 1080p display rather than 1600p, 400 nits rather than 500 nits, 8GB of RAM rather than 16GB (both soldered), and a 15-watt 1.6GHz processor rather than the 28-watt 2GHz part. Heck, for $1,299 you can get something very similar to the ThinkPad X1 Carbon from Apple (though with a 1.4GHz processor rather than 1.6GHz) - $250 cheaper!
The point of this isn't to say that you can't get good deals on Windows computers, that there's no Apple premium, or even that Apple's fit-and-finish justifies what you're paying. This is to say that I remember things like the original iMac with CRT display, 233MHz G3 processor, 13" screen (when people wanted 15-17" screens), and an atrocious mouse going against Intel machines for half the price with nearly double the speed and better specs on everything other than aesthetics. Things were really bad trying to argue that someone should spend $1,300 for an iMac when they could get a Gateway, eMachine, Acer, etc. for $600 with a 400MHz processor rather than 233MHz. A year later, Apple's at 266MHz while Intel has released the Pentium III and is cranking it up from 400MHz to 600MHz that year.
Yea, you can point to $700 laptops today and say, "why buy an Apple for $1,300?" Sure, but at least I can say that the display is so much better (500 nits vs 250 nits and retina), it's lighter than those bargain laptops, fit-and-finish is so much better, etc. At least I'm not saying, "um, no...all those benchmarks showing the Windows machine twice as fast...um...and the mouse is so cool because it's translucent...you get used to it being terrible." It's very, very different from the dark days of 2000.
Plus, today, a price premium doesn't seem as bad. Back in 2000 when you thought you'd be upgrading every 2-3 years, you'd be shelling out a lot more frequently. If performance doubled every 18 months, 3 years later you'd be stuck with a computer running at 1/4th the speed of something new. With the slowdown in processor upgrades, paying for premium hardware doesn't seem like throwing money away in the same way.
The article also paints the RISC architecture as superior. I'm not a chip expert, but most people seem to say that while RISC and CISC architectures have a different history, modern CPUs are hybrids of the approaches without huge advantages inherent in their ideology. Frankly, if Intel were able to get down to 7nm and 5nm, Apple might not be looking at ARM as strongly.
I think it also paints Apple as some sort of more demanding customer. In some ways, sure. Apple likes to move things forward. However, it's not like MacBooks are that different from PC notebooks. The difference is that Apple has options. They can move to another architecture. Windows manufacturers don't really have that. Sure, Windows on ARM has been a thing, but Microsoft isn't really committed to it. Plus, Windows devs aren't as compliant when it comes to moving architectures so a lot of programs would be running slowly under CPU emulation.
The big issue is that Intel has been stuck for so long. Yes, they've shipped some 10nm 15-watt parts and even made a bespoke 28-watt part for Apple. It's not enough. I'd argue that PC sales are slow because Intel hasn't compellingly upgraded their processors in a long time. It used to be that every 18 months, we'd see a processor that was a huge upgrade. Now it's 5 years to get that upgrade.
There's a trade-off between custom products and economies of scale. With the iPhone using so many processors and TSMC doing so well with its fab, Apple now kinda doesn't have to choose. Intel has been charging a huge premium for its processors because people were locked into the x86 and it takes a while for new competition to happen. Their fabs have fallen behind. It looked like they might be able to do 10nm and move forward from that, but that doesn't seem to be working out too well for them.
The transition from PowerPC to Intel was about IBM and Motorola not being able to deliver parts. They were falling behind on fabs, they weren't making the parts needed for Apple's product line, and it was leaving Apple in a position where they simply had inferior machines. The transition from Intel to ARM is about Intel not being able to deliver parts. It wasn't simply a short time when they couldn't deliver enhancements, but a decently long trend on both accounts. Apple knows it can deliver the parts it wants with its own processors at this point. The iPhone business is large enough to ensure that and they can make laptop parts that really fit what they're trying to market. Intel got Apple's business because they produced superior parts at a lower price. They're losing Apple's business for the same reason.
> This is to say that I remember things like the original iMac with CRT display, 233MHz G3 processor, 13" screen (when people wanted 15-17" screens), and an atrocious mouse going against Intel machines for half the price with nearly double the speed and better specs on everything other than aesthetics. Things were really bad trying to argue that someone should spend $1,300 for an iMac when they could get a Gateway, eMachine, Acer, etc. for $600 with a 400MHz processor rather than 233MHz. A year later, Apple's at 266MHz while Intel has released the Pentium III and is cranking it up from 400MHz to 600MHz that year.
There are a lot of inaccuracies in your memories. The original iMac came out in 1998. At the time Apple’s G3 processors were very competitive with everything offered by Intel:
And you were not getting double-performance machines for half the price of the iMac. You are also incorrect about the performance situation one year later, which would be 1999. Yes, that was the year the top-of-the-line Pentium III came out at 600 MHz, but it was also the year that the top-of-the-line Power Mac G4 came out at 500 MHz (a machine I owned; it was delayed because yields on the 500 MHz model were poor, and we were offered a 450 MHz part as a replacement). The G4 500 was superior to the P3 600 in many benchmarks, and crushed it in others thanks to the AltiVec vector unit:
I just linked to one pretty biased site here (first that came up on Google). But your extraordinary claims need some source because I don’t remember it like that at all in 1998 and 1999. PPC was a solid contender throughout the late 90s.
Also the original iMac had a 15” screen, not 13”. I had an iMac too.
I do remember it like GP. The G4 Power Mac came out in '99, and while at 500 MHz it could beat a 600 MHz Pentium, Intel also released an 800 MHz part that year. AMD would release a 1 GHz K7 within the year as well.
So yeah, the G4 was perhaps winning the IPC battle, but Intel and AMD were more than making up for it with higher frequencies.
That's exactly when Motorola dropped the ball (as the article mentions) - a year before they were still more or less head to head, but once AMD & Intel started reaching for the 1 GHz barrier they left Motorola & IBM behind.
Somehow the various RISC vendors managed to remain competitive for a bit longer at the top end[0] - maybe it was easier to compete at the high workstation/server/supercomputer end than at consumer hardware?
"We are starting to see some great games come back to the Mac, but this is one of the coolest I've ever seen...this is the first time anybody has ever seen it, the first time they've debuted it...Halo is the name of this game, and we're going to see, for the first time: Halo."
> I'd argue that PC sales are slow because Intel hasn't compellingly upgraded their processors in a long time. It used to be that every 18 months, we'd see a processor that was a huge upgrade. Now it's 5 years to get that upgrade.
I think it's Moore's Law approaching its limits. The direction of chip improvements isn't core speed but the number of cores, power usage, etc., which do make a difference, but for most folks it doesn't feel like an improvement the way, say, doubling the frequency every 18 months did. I have an old laptop and it keeps up quite nicely after 8 years...
This is me today. I'm typing this on an 8-year-old MacBook Pro. First-gen Retina. 4 cores and 16GB of RAM. I want to upgrade, I really really do. I have a 16" with 8 cores and 64GB of RAM at work, but I can't bring myself to purchase one for myself since I've been telling myself I would wait for 10nm.
The first 14nm processors started shipping in the 15" in 2015 - 5 years ago.
My current 8-year-old MacBook has a 22nm processor. I would never have thought 8 years ago that Intel would have managed only a single node shrink since then.
> I have a 16" with 8 cores and 64GB of RAM at work
I'm jealous and genuinely curious: where do you guys work that your employers can afford to get everyone such expensive machines?
I've been a dev in the EU for 8 years now, and at most places I've worked or interviewed (not FAANG) the machines you get are some cheapo HP/Lenovo/Dell, with only the executives having Apple hardware.
I never understood why companies in the West cheap out on hardware so much. Compared to the cost of office rent and employee salaries it's a drop in the ocean; they could buy everyone MacBooks or Ryzen towers and it wouldn't even dent their bottom line.
Silicon Valley startups are pretty much, hey what do you want as far as laptop specs go? At my company new hires can pretty much order anything they want (which is mostly MacBook Pro max CPU and RAM) and one or more monitors, no issue. Heck we have one or two crazy people with Windows laptops :) Old employees are welcome to refresh at 2 years no issues. I am on my 3rd laptop in 5 years. In my case I travel so I have swapped MacBook Pro for a MacBook then a MacBook Air. I have the same Apple 4K LG on my desk at home and work paid for by the company. Same mechanical keyboards also.
This is pretty typical here.
Heck we swap build servers every 6-12 months based on test speed. We buy one of every new CPU and do a test run of auto build and ptest. If it is reasonably faster we order a new rack and replace the old boxes. Power and cooling is way more $$ than the HW. Every month we do not need a new cage is a win. We are deploying AMD Epyc now with 10G-T + 4x1G (not network limited in our test, just segmentation to test DUTs) with a core of 100G for fileservers and 25G for services. File servers are TrueNAS: SSD shelves for test, and spinning rust for build artifacts. We run ~1000 containers on a single server in ptest (scaling that was fun to figure out ... hint you have to play with networking stack ARP timers - strace is your friend).
Big companies too. It’s too important not to. I’ve been in companies where employees constantly complain about their machines, and it legitimately causes people to leave their jobs. I’ve seen people offer to buy their own laptop, if they were just allowed to use it.
In a place where talent is as competitive as the bay, you wouldn’t survive making people use subpar machines.
There are a few places that kind of go 'to the nines' for employees and give adequate and even overpowered workstations. I think the crowd that gets that treatment is slightly over-represented on HN.
But most businesses here are the same: you're lucky to get any nice feature over the 'same laptop that sales gets', which is barely more than a Chromebook. And getting an external monitor that's not the cheapest bulk-buy model was also pretty hard to do (I had a friend in marketing who helped me get a larger display with better colors at that place).
Or you go self employed and get your own fancy workstation since you know it’s easily worth your money in the long run. Don’t work for people that don’t understand this.
The rule of thumb is that a typical dev here is $250k fully loaded (benefits, office, etc). When you are trying to hire and you are competing with Google and Facebook, that extra $2K on the laptop kit is a rounding error. It's hard to hire as FAANGs are a sure thing money-wise.
I am in the UK, and I've only found startups willing to spend money on developer hardware. The larger companies seem to pick up low-end Lenovos for developers, and better i7 Lenovos for those managers who would never use them.
Though I would gladly dump the MacBook Pro 16" I currently have to use for work in an instant for a high-end Lenovo/Dell. Apart from macOS being extremely flaky these days (why does Spotlight only seem to pop up 50% of the time?), I don't understand why they don't provide a proper ISO layout and instead give us some form of ANSI that has a dedicated key for the dancing alien (§), and why they hide the #, which as a Pro developer I use all the time.
That it also spends its entire time overheating so it burns my lap is just the icing on the cake.
No, switching which side it charges from did nothing for me or any of my colleagues.
But when I used a MacBook Pro about 10 years ago the machine overheated all the time and burnt my lap. They are just a shitty design, but they look pretty.
If you are being paid $30k, a $2K upgrade would be worthwhile if it resulted in a roughly 6% increase in productivity. At $60k it would be justified by about a 3% increase. Maybe your company just has no meaningful way to measure or understand what affects productivity.
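The break-even math is just upgrade cost divided by fully loaded annual cost. A trivial sketch in C, using the figures thrown around in this thread purely as illustrative assumptions (and ignoring that the machine lasts more than a year, which only makes the case stronger):

    #include <stdio.h>

    /* Productivity gain needed for an upgrade to pay for itself in one year. */
    static double break_even_pct(double upgrade_cost, double annual_cost) {
        return 100.0 * upgrade_cost / annual_cost;
    }

    int main(void) {
        printf("$2k upgrade at $30k/yr:  %.1f%%\n", break_even_pct(2000, 30000));
        printf("$2k upgrade at $60k/yr:  %.1f%%\n", break_even_pct(2000, 60000));
        printf("$2k upgrade at $250k/yr: %.1f%%\n", break_even_pct(2000, 250000));
        return 0;
    }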
Hm, when I worked for a public lab in France we got the "pro" line Dell laptops. We actually got complaints from users that the software was slow because we only ever tested it on high-end machines. Later, when working for a startup, they gave us a choice of a reasonable workstation - though that was early on, when the company was doing well.
I think if you have any way of escalating then the best thing is to come up with numbers, such as "a faster computer would let me compile the program in 20 less seconds, which is this much time earned", or "a better screen would not require me to have an external monitor".
Now, I think that to get a 16" MBP for work requires quite a specific use case, because it's really a machine one should use only when it's the only computer they have. For the same price I think you could get a faster desktop + a more portable laptop.
I do contracting for a professional services outfit, and they gave me the Windows corporate laptop (Dell Latitude) and also sent me a MacBook Pro 16", which my manager had to explicitly ask for.
The only difference? I can receive encrypted emails only on the Windows laptop due to some software not being available for the Mac.
I do all my development on macOS, and if spending $4k on a machine means I am more productive, can get work done faster, it's a return on investment that pays back multiple times.
Do note that laptop refreshes in my past companies (current is ~2 years) have been on average every 2.5 year... so it's not like I get a new laptop yearly.
That Dell Latitude is frustrating to use. The trackpad is absolutely atrocious, the display is dim, the keyboard is really mushy and causes pain in my hands even when I use it for short periods of time...
Had a similar situation (a Windows and a MBP) and just putting VMWare and Windows 10 on the MBP solved pretty much all the problems of having to lug around two machines.
There's a Citrix setup as well, which while slow works fine for the one or two times a month that I need access to encrypted email... so I haven't carried around the Windows laptop.
Most of the top Finnish software consultancies have a (basically) unlimited budget for your main laptop and give you freedom to pick whatever you want.
My company is working on upgrading all of us to that config. Currently we mostly use the 2013 13", and people get upgraded when that one burns out. I'm debating asking for a halfway trade, getting a nice Thinkpad X1 and being able to use my favorite linux tools on it.
I'm in the US, but I've been in a similar position to you; my last job gave me a tower with 8GB of RAM and a Core 2 Duo running Windows 7 32-bit. Utterly useless. I had to sneak Ubuntu onto it when nobody was looking. They couldn't even tell the difference.
I ended up having the last laugh when every computer in the office got wiped because somebody decided running the mail server on our ActiveDirectory server was a good idea, and also thought that leaving ports open on the mail server was a good idea.
My boss is very nice. He treats every person as an individual. Also we are fairly small.
Last place I worked I got an HP Windows desktop and told to remote in from a 12" laptop whenever I needed to work from home or show off something in a meeting. That place also wanted to knock down three single-person offices so they could fit 10 developers in the same space. And for the last month or so I had a developer working next to me on my own desk since management didn't prioritise us developers over HR.
My current workplace and my previous are both in the public sector in Norway.
The big subjective improvements appear to have been in screen resolutions/sharpness and in SSDs. An 8-year-old LCD will likely have dimmed substantially.
On the other hand if you like it, you like it, and cheap is beautiful.
It won't :) It has an LED backlight. Older LCD screens often had fluorescent backlights, which did degrade noticeably with hours of use. According to Wikipedia, LEDs have been the most popular backlight in LCD screens since 2012.
The idea that LEDs last forever is a myth. LEDs degrade over time. They actually list it on the spec sheet. For example, an L70-rated LED with a 25k-hour life will produce 70% of the light it produced when new after 25k hours.
Recently I replaced 4 Asus monitors with LED backlights that were produced in 2014 and 2015. Asus says 300 nits. I tested them when I was calibrating their replacements and they were at 110-120 at full brightness.
Monitors color shift and dim as they age... that’s why hardware calibrators exist.
Yes, Moore's Law is hitting its limits: 5nm, for example, is only about 25 silicon atoms across. At that scale, quantum electron tunnelling will ruin your life.
So if we want to increase our chips' performance any further, we need a fundamental change in our technology. And since a silicon atom is about 210 pm across and a carbon atom (graphene is made from carbon) is about 170 pm, we need to change our architecture, as shrinking transistors will not be possible anymore. I mean CISC/RISC, dropping x86 support and so on.
Blaming Moore's Law is not fair, since Apple, AMD, NVIDIA, Qualcomm, HiSilicon,... have delivered reasonable improvements over the last 3 years while only Intel is stagnant.
To be fair, most of that is them catching up to where Intel already was. No one seems to actually be fabbing meaningfully smaller feature sizes (or higher layer counts) than what Intel is stagnating at.
> To be fair, most of that is them catching up to where Intel already was. No one seems to actually be fabbing meaningfully smaller feature sizes (or higher layer counts) than what Intel is stagnating at.
Eh, Intel's 14nm is 37M transistors/mm2. TSMC and SS are both up to 52M/mm2 at 10nm, and 92M/mm2 at 7nm. Both Apple and AMD's latest gen stuff is on TSMC's 7nm process _today_. Yes, Intel's 10nm is at 101M/mm2, but until they can get mass production on that they're falling substantially behind.
If you look at long-term trends, transistor density has kept pace (it has slowed down consistently but not dramatically over the years); the big difference is that it no longer gives you as much of a performance boost as it used to.
The difference is that ARM has been able to deliver desktop-grade performance at power levels that are suitable for use in an iPad.
Intel and AMD might be able to deliver somewhat higher performance by throwing a whole bunch of cores at the problem, but they do so at a much higher cost in power requirements. And it would be easy enough to design ARM machines with an equal number of cores (or even way more), and still have much lower power requirements.
Intel stagnated, and has high power requirements. ARM has caught up, and has much lower power requirements.
Sure, but that's them having a competent (well, less incompetent) ISA and microarchitecture; that they've made better use of the transistors available, not that they've achieved better feature density than what would be expected from where they are on the Moore's Law curve.
Also Intel and AMD have not delivered higher performance via more cores; they delivered lower price for (say) 64 cores worth of performance, by putting them all on the same chunk of silicon (edit: or at least in the same package). (There are some slight improvements in inter-processor interrupt and cache-forwarding latency, but if that's a performance bottleneck, the problem is bad parallelization at the code level.)
> Sure, but that's them having a competent (well, less incompetent) ISA
Have you looked at the encoding of Thumb-2 (T32) and particularly A64 (their newly designed instruction set for 64 bit)? Their instruction encoding is in my opinion much more convoluted than x86.
> they delivered lower price for (say) 64 cores worth of performance, by putting them all on the same chunk of silicon.
Arguably AMD did the exact opposite - lower prices via splitting a processor into multiple pieces of silicon. (Chip prices scale exponentially with area at the high end.)
Well, my point there was that a 64-core CPU is not (significantly) higher performance than 64 single-core CPUs, so multi-core is - if an improvement at all - a price improvement, not a performance improvement, but fair point about the price-vs-area scaling.
> The transition from PowerPC to Intel was about IBM and Motorola not being able to deliver parts.
Actually, it was about nobody wanting to deliver a Northbridge for Apple.
I interviewed at Apple in this timeframe and was stunned that they used Northbridge ASICs with synchronizers everywhere. No clock forwarding to be found.
This kills memory and graphics performance dead.
On top of that the support ASICs were using more power than the CPU!
Once Apple switched to x86, they could use the Northbridge and Southbridge chips that everybody in the universe was using.
>The big issue is that Intel has been stuck for so long
My memory isn't so good, and I never owned a P4, but according to Wikipedia in August 2001, they released one at 2GHz on 180 nm. Almost 20 years later, my laptop i7 is running at...2.1GHz (base)? And 14 nm. That's kind of mind boggling. I think it would be interesting to read/write an article comparing the two in depth and what performance benefits you get from the newer chip.
Except that instead of 1 core, it now has 4 cores and a GPGPU as part of it. GHz aren't everything; the problem is that most programs are still written single-threaded.
The frequencies may not have increased beyond 2-3 GHz but the speed has still got faster because modern processors are much smarter and are able to do more work per cycle. They have all sorts of fancy tricks to do that - speculative execution, hyperthreading, branch prediction, etc.
There's a very widely used measure in the academic community: instructions per cycle (IPC). IPC boils down to how many instructions you can actually complete per clock cycle once you account for memory, caching, etc.
IIRC, that's maybe a 16x improvement (32x if you count 32->64 bit). Which accounts for less than half of the (orders of magnitude of) improvement we should have got from Moore's law.
(More cores aren't a performance improvement; if you were willing to deal with non-serial execution, you could have just bought 32 Pentium Fours; putting them all on the same chip is convenient (and cheap), but as a price/performance improvement, it's all price, no performance.)
> (More cores aren't a performance improvement; if you were willing to deal with non-serial execution, you could have just bought 32 Pentium Fours; putting them all on the same chip is convenient (and cheap), but as a price/performance improvement, it's all price, no performance.)
That's only true if you only consider ALU throughput for performance, but in terms of real world performance, where the interconnect between cores and memory is hugely significant, a multicore processor has many advantages over a rack of otherwise equivalent single-core NUMA nodes.
My guess is that there are now a lot of forms of hardware acceleration of specific things that make your daily experience seem faster, but I haven't seen them catalogued and put in perspective with measurements.
I haven't read about the P4 and NetBurst in a long time, but if my memory serves me right, the P4 is usually at less than 1 IPC due to its very long pipeline (31 stages IIRC) that is very prone to pipeline stalls. Modern processors can also do many things faster. I think the P4 takes ~110 cycles for integer division, while a modern CPU can do it in ~30-40 cycles. And IIRC, the P4 cannot flush the division unit, so if branch/jump prediction is wrong and a division is speculated, it has to wait until the division unit finishes its computation before it can resume execution.
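To make the frequency-vs-IPC point concrete, here's a rough back-of-the-envelope sketch in C. The IPC figures are illustrative assumptions (the P4 number echoes the "less than 1 IPC" claim above), not measurements:

    #include <stdio.h>

    /* Back-of-the-envelope: effective throughput ~= clock frequency * IPC.
     * The IPC values below are illustrative assumptions, not measured data. */
    int main(void) {
        double p4_ghz = 2.0,  p4_ipc = 0.8;   /* hypothetical NetBurst-era core */
        double new_ghz = 2.1, new_ipc = 3.0;  /* hypothetical modern laptop core */

        double p4_gips  = p4_ghz  * p4_ipc;   /* billions of instructions/second */
        double new_gips = new_ghz * new_ipc;

        printf("P4-class core: %.1f GIPS\n", p4_gips);
        printf("Modern core:   %.1f GIPS\n", new_gips);
        printf("Per-core speedup at roughly the same clock: %.1fx\n",
               new_gips / p4_gips);
        return 0;
    }

And that's before counting more cores, wider SIMD, and the much faster caches and memory subsystem.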
The biggest advantage of RISC-derived designs is easy-to-parse instructions. The problem with x64 is not the number of instructions, as they can be thought of as macros anyway, but all the different instruction lengths and encodings. This makes decoding a bottleneck and a source of overhead that ARM and other newer designs do not have.
Easy-to-parse instructions? I mean, Thumb-2 is very much a variable-length encoding; okay, there's no equivalent AArch64 instruction encoding, but basically all AArch64 implementations still support Thumb.
All the modern research suggests very much that the decode stage isn't a significant difference nowadays; instruction density is increasingly significant as CPUs become ever faster compared with memory access.
The problem is that this calculation takes more cycles, and you do not know where the next instruction starts until it completes. It serializes what should be a parallel process. X64 chips use crazy hacks like caches and tables to work around this, but these add more transistors and power consumption.
In processors, I-cache and decode consume a disproportionate amount of power relative to their size. I'd also note that all the latest high-performance ARM chips include an instruction decode cache because the power cost of the cache plus the lookup is lower (and much faster) than a full decode cycle. Of course, there are diminishing returns with cache size where it becomes all about bypassing part of the pipeline to improve performance despite being less power efficient.
x86 instruction size ranges from 8 bits to 120 bits (1-15 bytes). Since common instructions often fit in just 8 or 16 bits, there are power savings to be had due to smaller I-cache size per instruction. That comes at a severe decode cost though as every single byte must be checked to see if it terminates the instruction. After the length is determined, then it must decide how many micro-ops the instruction really translates into so they can be handed off to the scheduler.
ARM breaks down into v7 and v8. The original Thumb instructions were slow, but saved I-cache. Thumb-2 was faster with some I-cache savings, but basically required 3 different decodes. ARMv8 in 64-bit mode has NO 16-bit instructions. This reduces the decoder overhead, but obliterates the potential I-cache savings. No doubt that this is the reason their I-cache size doubled.
RISC-V is not being discussed here, but it is the most interesting IMO. The low 2 bits of an instruction tag it as 32-bit or 16-bit (there are reserved bit schemes to allow longer instructions, but I don't believe those are implemented or are likely to be implemented any time soon). This fixed bit pattern means that length is statically analyzable. 3 of the 4 patterns are reserved for 16-bit use, which reduces the instruction size penalty (effectively making them 15-bit instructions). The result is something 3-5% less dense than Thumb-2, but around 15% MORE dense than x86, all without the huge decode penalties of x86 or the multiple modes and mode-switching of Thumb. In addition, adding the RVC instructions reduces I-cache misses almost as much as doubling the I-cache size, which is another huge power consumption win, while not having a negative impact on overall performance either (in fact, performance should usually increase for the same I-cache size).
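For the curious, here's a minimal sketch in C of why RISC-V length decoding is statically analyzable, assuming the standard RVC rules (the reserved longer-than-32-bit encodings are ignored, since no common core implements them):

    #include <stdint.h>
    #include <stdio.h>

    /* Length of a RISC-V instruction, from its first 16-bit parcel alone. */
    static int rv_insn_length(uint16_t first_parcel) {
        if ((first_parcel & 0x3) != 0x3)    /* bits [1:0] != 11 -> 16-bit compressed */
            return 2;
        if ((first_parcel & 0x1c) != 0x1c)  /* bits [4:2] != 111 -> standard 32-bit */
            return 4;
        return -1;  /* reserved longer encodings, not handled in this sketch */
    }

    int main(void) {
        printf("%d\n", rv_insn_length(0x0091));  /* c.addi x1, 4 -> 2 bytes */
        printf("%d\n", rv_insn_length(0x0093));  /* low half of a 32-bit addi -> 4 bytes */
        return 0;
    }

Contrast that with x86, where you can't know an instruction's length without walking its prefixes and opcode bytes one at a time.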
I think the whole idea of measuring computer screens as 1080p, 900p and such is completely idiotic. Computer screens should have some sane DPI values and scale the resolution according to the size. This is how it was before the attack of 16:9 screens, and Apple is the only company that is still following that old approach. Even the much-praised System76 has the same crappy 16:9 screens.
I'm tempted to agree with you, but at the same time, I think the appropriate value for a sensible DPI depends on screen size. Because smaller screens are typically held closer to the user, it's kinda fair to say a 4k TV and a 4k phone screen have the same resolution, in the sense that if both are at a distance so the screen takes a reasonable fraction of your vision, they will have the same level of visible detail. Within a device category that may be less true, but between categories resolution seems like a reasonable measure.
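One way to make that precise is pixels per degree of visual angle, which folds DPI and viewing distance into one number. A quick sketch in C, with assumed (not measured) densities and viewing distances:

    #include <math.h>
    #include <stdio.h>

    /* Pixels spanned by one degree of visual angle at a given pixel density
     * (ppi) and viewing distance (inches). Small-angle approximation. */
    static double pixels_per_degree(double ppi, double distance_in) {
        const double one_degree = 3.14159265358979323846 / 180.0;
        return ppi * distance_in * tan(one_degree);
    }

    int main(void) {
        /* Assumed figures: a ~460 ppi phone held at 12", and a 55" 4K TV
         * (~80 ppi) viewed from about 8 feet. */
        printf("Phone: ~%.0f px/deg\n", pixels_per_degree(460.0, 12.0));
        printf("TV:    ~%.0f px/deg\n", pixels_per_degree(80.0, 96.0));
        return 0;
    }

The two land in the same rough range, which is the point: at typical viewing distances, very different panels can resolve about the same level of detail.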
> The transition from PowerPC to Intel was about IBM and Motorola not being able to deliver parts...The transition from Intel to ARM is about Intel not being able to deliver parts...Apple knows it can deliver the parts it wants with its own processors at this point.
I think this is the key reasoning. There's something really interesting happening here. In the past, when Apple transitioned from Motorola to PowerPC Apple wasn't big enough to design and fab their own chips, this was also true when they moved from PowerPC to Intel.
However, Apple has some choices here, and I think the decision comes down to long term supply-chain risk:
1) Switch to AMD. Their processors are blowing the doors off of Intel, at better prices. They aren't having the same process problems and their high-end components are fantastic. However, history has shown AMD surge ahead for a bit, then Intel, and back again. Apple probably doesn't want to risk this happening after they engage in some huge volume discount contract. More importantly, neither Intel nor AMD are winning in the lower power vs performance segments.
2) Become their own fabless designer. The risks are enormous. What if they can't keep pushing the performance envelope against Intel/AMD? What if their fab partners can't keep their processes moving forward? What if they fail to make this architecture jump (again)? But it gives them better supply chain control and increases their vertical integration.
In some sense it points to a weakness of highly vertically integrated companies...as a model it makes their entire product lines dependent on every component being able to progress. If any component lags, the entire product line suffers. So outside suppliers, who have multiple customers to please, become key sources of risk and it will become the instinct of the company to move the riskiest parts of its supply chain in-house.
If Apple is unable to keep advancing ARM chips in terms of performance (regardless of power) it will be a problem for them. But one final advantage of building their own is that they can obfuscate this component from the rest of the market and make comparisons on cores/GHz/etc virtually impossible. It's a bit like how Apple really doesn't even advertise how much RAM is in their portable devices.
> Apple wasn't big enough to design and fab their own chips, this was also true when they moved from PowerPC to Intel.
Note there were rumours going around about whether or not Apple was going to buy PA Semi (or at least contact them) for their PWRficient CPU (implementing PPC).
Ultimately they did buy PA Semi in 2008, though for the engineering talent rather than the chip, and they've since made all of the iPhone/iPad CPUs.
The fact that today's low-power chips also happen to have smaller die sizes is an artifact of path dependence. The primary market for low power was battery-powered devices, of which phones are by far the most numerous. So the low-power chips started there and didn't have much to do. As those have gained sophistication over time, they have also grown in die size.
Xeon and server chips generally want to maximize memory bandwidth--and they make a whole series of architectural tradeoffs to accommodate that.
Phone chips generally want to maximize power efficiency and basically don't care about memory bandwidth at all. They effectively don't want to turn on the system memory or flash, period, if they can avoid it. One way to do that is to cache things completely in local on-chip RAM.
Computer architects will make completely different tradeoffs for the two domains.
> Intel got Apple's business because they produced superior parts at a lower price
A big problem was that IBM was not designing low-power chips for laptops, and laptops were (and are) a major part of Apple's business.
POWER6 (not PowerPC) hit 5 GHz in 2007 and POWER has remained competitive - Power10 will be described at Hot Chips in August. Of course (perhaps consistent with "Hot Chips") these are not low-power architectures.
“Apple would talk about the "MHz-myth" a lot. While it's true that MHz doesn't equal performance, Intel was doubling the PowerPC's performance most of the time. The G3 saw Apple do OK, but then Intel went back to dominating in short order. The PowerPC never matched Intel again.
It was really bad. People with Windows computers just had processors that were so much more powerful and so much cheaper.”
Don’t know how much I would agree with that. I just took a look at the Megahertz Myth portion of the 2001 Macworld Expo video¹ where Apple compares an 867 MHz G4 processor with a 1.7 GHz Pentium 4 processor, and while I don’t know how valid Apple’s argument is in that video (don’t know enough about how processors work to judge), from the information they give, it does seem plausible that the 867 MHz G4 could outperform the 1.7 GHz Pentium 4 in some scenarios. Frequency certainly isn’t everything, especially when we’re comparing two different CPU architectures.
There were moments where PPC performance was acceptable or better than Intel's, but they were brief and far between, and for most of its life the PPC was far behind Intel.
Take the 867MHz G4 you mentioned. There might have been some applications where the G4 was beating a 1700MHz Pentium 4, but at the time of the demo the top-of-the-line from Intel was 1800MHz and they released a 2GHz only a few days later. A year later Intel was shipping 2.8GHz parts, and Apple was selling 1.25GHz G4s. So whatever architectural lead the G4 enjoyed, Intel was eroding it with faster clock scaling.
This does not even mention the mobile space, where in 2003 Intel was shipping the Pentium M, not the Pentium 4, and it was the Pentium M which derived from the Pentium Pro/II/III and foreshadowed the Core product line. The G4 had no architectural advantages over the Pentium M. Apple's mobile products were stuck on the dead-end G4 for years.
I owned a Mac of some kind throughout the PowerPC era, but it was only because I had to run Mac applications. There wasn't anything good about them except that on rare occasions you got to see the AltiVec unit really go crazy. Most of the time you just got to marvel at how slow and expensive they were compared to the other PC on your desk.
The G4 was where the rot started to set in, but people are forgetting about the 601, 603 and 604, which had themselves several years of history in Apple designs. The 604 in particular was a real piledriver for some applications and easily competed with x86 of the same era, and the G3's integer performance was even better (its main Achilles heel was a fairly weak FPU, but this wasn't a major issue at the time for its typical applications).
I ran a 12-inch PowerBook as my main computer for 4 years while studying computer science (i.e. doing some relatively intense assignment projects on it). While the CPU was slower, I'd argue that the OS at that time made up for it - using 10.4-10.6 compared to XP and Vista was a breeze. I had 740MB of RAM that was very much under my control; the only background task I sometimes had to look out for was the search indexer. The OS X Terminal was light years ahead of the crappy Windows terminal at the time. The OS-wide PDF support was put to good use to create good-looking reports.
Now the tables have turned though. macOS software has suffered greatly while MS has embraced Linux and the terminal. Win10+WSL+Windows Terminal+VS Code today is the superior toolchain IMO because it gives you access to the package managers that will also run on your target servers.
The problem is that both Apple and Wintel were fighting for consumer customers; people who were proficient enough to word process, email, and browse, but not proficient enough to understand chip architecture. They just see a spec and assume higher is better. If you have to start from a position of arguing that your opponent's advantage is a myth, you're already at least a step behind.
Moreover, Apple had already lost that war in the enterprise market. Who cares if PowerPC was really faster? Intel was a cheaper chip that did the job. Everyone in my customer service, accounting, HR, etc. departments can have a PC for much less than a Mac.
I ported computation-heavy Mac OS application code to the PowerPC architecture, and benchmarked the results for the client. I do not have the graphs handy, but I can't agree with the first section here entirely. It is true that the performance did not live up to expectations, but overall it was a win.
There was so much money, vanity and spin at that time, bringing millions of consumers into the world of computing, multiplied by media and unscrupulous marketers. I do not know what a consumer would expect in those days, depends on who you asked and their vantage point. Every camp was guilty of exaggeration I would say.
I don't believe it's going to be a 100% transition either way in the foreseeable future. Apple will move their low-end computers to ARM and keep their high-end ones on x86. That way they will have leverage over both sides whenever they want to get something out of them. "Hey, Intel, you know those expensive 10nm Xeons we wanted for the Mac Pros cheaper than you wanted to sell them? Would be too bad if we went with ARM this generation" "Hey, TSMC, know those CPUs for high-end MBPs? Give 'em to us for cheap or else."
I’d be a bit surprised about that. Apple likes to keep things like this unified because it drastically cuts costs across the board. Additionally the chip design is going to share a lot with their mobile variants. I would expect the entire lineup to be replaced for laptops. It’s less clear for things like the Mac Pro line (or if they’ll even bother continuing that line).
> Apple would talk about the "MHz-myth" a lot. While it's true that MHz doesn't equal performance, Intel was doubling the PowerPC's performance most of the time. The G3 saw Apple do OK, but then Intel went back to dominating in short order. The PowerPC never matched Intel again.
My memory is more like they leapfrogged each other from time to time. The first generation of Power Macs such as the 6100 absolutely spanked contemporary PCs (100 MHz 486s and 50 MHz Pentiums, if I remember correctly). It was by no means obvious at that time that Intel would catch up.
What killed PPC was the interested parties (IBM, Motorola, Apple) squabbling over CHRP, and Motorola being unwilling to work on a part that would fit the thermal envelope Apple wanted for laptops. The fundamental architecture of PPC is sound, or at least, sounder than that of x86.
The problem is that the G4 problems break the narrative about the PPC->Intel transition resembling an Intel->ARM transition. The reason the G4 stagnated was because Motorola was heavily focused on embedded processors and didn't prioritize the Mac. There's a similar risk with moving the Mac to A-series, Apple-developed ARM processors because Apple themselves have been prioritizing mobile devices over the Mac in recent years, leaving a significant risk that desktop-class and even laptop-class processors from Intel and AMD will once again leave them behind in performance.
> Intel machines for half the price with nearly double the speed and better specs on everything other than aesthetics.
Back then it was because the monitor and hardware were carefully calibrated - the gamma was strange but the colors were spot on (assuming correct ambient lighting). An Apple made a lot of sense for visual artists of all disciplines, they still generally have this edge today.
What I don't get: what, other than politics, keeps Apple from going AMD?
Apple is already working with AMD when it comes to dedicated graphics chips, and people have done Hackintoshes with AMD parts for ages... so why go the (risky) ARM route instead?
Apple spends a lot of money at TSMC (about 75% of TSMC 7nm production was for Apple chips in 2018, according to a link below). AMD also makes its chips at TSMC and TSMC fab is a limited resource. Instead of paying AMD to make chips at TSMC, Apple can just make them itself.
Why does Apple do anything? I think one factor that is always a concern for them is control. They switched to Intel for more control over their product lines.
Same issue here - they’ll switch to ARM for more control. AMD has some good hardware, but they’d just end up switching one outside chip vendor for another. So while a move to AMD would be “safer”, it still wouldn’t offer any more control over their products.
It's ultimately an intermediary step. Apple already has the tech for making multi-arch software, it's already a part of the toolkit, and they already design and sell ARM chips in their other product lines. It's not a no-risk scenario, but it's a low risk transition from a company that's managed two architecture migrations before.
The Mach-O object file format supports fat binaries and has run working executables on ARM, SPARC, PA-RISC, PowerPC 32-bit, PowerPC 64-bit, x86 and x86_64, first under NeXT and eventually under Apple. It's what every implementation of Mac OS X, iOS and its derivatives uses today, and there's nothing stopping Apple from supporting other architectures down the line other than that they probably don't want to or need to. If they decided they wanted to revitalize Mac OS X Server and ship it with POWER9 or RISC-V CPUs, they could do that. Not sure why they would want to, but it's an option.
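As a rough illustration of how simple the fat container is, here's a minimal sketch in C that walks the fat header and lists the architecture slices. The field layout mirrors <mach-o/fat.h> (everything in the header is stored big-endian); this is a sketch, not production code, and it ignores the newer 64-bit fat variant:

    #include <stdint.h>
    #include <stdio.h>

    /* fat_header: magic, nfat_arch; then nfat_arch entries of
     * fat_arch: cputype, cpusubtype, offset, size, align (all 32-bit, big-endian). */
    static uint32_t read_be32(FILE *f) {
        unsigned char b[4];
        if (fread(b, 1, 4, f) != 4) return 0;
        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    }

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        if (read_be32(f) != 0xcafebabeU) {  /* FAT_MAGIC */
            printf("not a (32-bit) fat binary\n");
            fclose(f);
            return 0;
        }
        uint32_t nfat = read_be32(f);
        printf("%u slice(s):\n", (unsigned)nfat);
        for (uint32_t i = 0; i < nfat; i++) {
            uint32_t cputype = read_be32(f), cpusubtype = read_be32(f);
            uint32_t offset  = read_be32(f), size = read_be32(f), align = read_be32(f);
            (void)cpusubtype; (void)align;
            printf("  cputype 0x%08x  offset %u  size %u\n",
                   (unsigned)cputype, (unsigned)offset, (unsigned)size);
        }
        fclose(f);
        return 0;
    }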
If they're going to end a working business relationship with a supplier anyway and they have CPUs that outclass their current supplier in CPU performance (an A13 in a $400 iPhone SE outclasses the Xeons in their $6K Mac Pros in single threaded performance), they might as well go the whole hog, skip the intermediary step of switching suppliers (which might mean signing some kind of multi-year deal during which Intel might start outclassing AMD, so that's not without risk either) and run their own designs through TSMC fabs.
Apple has always been that company that likes to own the whole widget, or as much of it as possible. If they didn't, they might have switched to Windows NT in the late 90s rather than buying NeXT. They don't make everything that goes into a Mac or iPhone, they don't even fab the CPUs in their iPhones, just design them, but they figured out which parts they can control the designs for and which they can source from others and made it work by progressively integrating more of the design work in-house.
I am more worried about what it means to us consumers.
While Apple producing ARM-based systems might mean slightly cheaper machines, if it comes at the cost of giving up control to customize your hardware and software, I will never be looking at another iDevice.
We already have to face the bullshit of soldered RAM and soldered SSDs and locked app stores, and now I fear that with ARM chips we will be faced with an even more locked-down, iOS-like macOS with installs only possible from app stores. And of course, everything linked to iCloud to leech off and data-mine our personal information. The "secure" chip, and perhaps a locked bootloader, will ensure that we won't be able to install any other OS on it, and Apple could even remotely cripple the device with it.
(As you can tell, I am not at all enthused by this move. And mark my word, this is what Apple will do eventually.)
Soon we shall find out. I'm actually feeling optimistic. I'm not sure we're going to get that particular signal on Monday, but if Apple doesn't decide to lock down the OS completely with an ARM transition, I don't think they ever will, and the more pressing concern will be whether Apple decides to keep the Mac line at all down the road.
That said, I keep virtual machines of operating systems that interest me up to date. I'm between 9front and Haiku OS as my eventual replacement, and I still might switch to a ThinkPad running 9front for my next computer regardless of what Apple announces. I'll still have much of what I value on a Mac in my iPad, and my laptop is essentially for writing, programming and backing up other hardware.
That PCs are the exception that proves the rule, whose existence can be traced back to the point where IBM's legal team wasn't able to kill what Compaq had set free.
The 90s Apple, like everyone else, had its own vertical integration in software stack, network protocols and hardware. Also, not every model had internal expansion slots; what we bought was what we got for the device's lifetime.
Naturally the more expensive LC and Quadras had enough internal bays, given their business purposes.
The PowerPC failure seems to be on IBM and Motorola. However, this time it's designed by Apple and manufactured by TSMC. Would this combo make any difference?
Well, when you design your own CPU, you can't blame the people that made your CPU without blaming yourself; that's just how it is. IBM and Motorola were ultimately their own businesses, and had their own interests that weren't entirely aligned with Apple's.
So if somewhere down the line Apple makes the CPUs in all of their Macs and can't compete, they have no one to blame but themselves. Right now they're not really failing to compete though, as their computers are stuck in the same holding pattern everyone else who depends on Intel is. They could switch to AMD parts, but they're trading one horse out for a horse of the same breed that's maybe a bit younger and prettier.
So if they're angling to axe their relationship with Intel down the line, at least for CPUs, why bother switching suppliers temporarily when they can switch their existing supplier out for something they designed in-house when they're ready and skip the intermediary step?
All reports I've seen over the past month seem to indicate that time is upon us. There's enough smoke that unless Apple quietly passed a memo to the WSJ by the close of business yesterday saying they're not announcing anything of the sort on Monday, the time is most likely now (well, Monday for the transition announcement, likely 2021 for the first shipping products).
> You can say that Apple always charges a premium, but not too much today on their main lines. Apple simply doesn't sell low-end stuff. Yes, a MacBook Pro 2GHz costs $1,800 which is a lot. However, you can't compare it to laptops with crappy 250-nit, 1080p screens or laptops made of plastic, or laptops with 15W processors.
Last time I checked, Apple was selling laptops with an Intel i3 processor packing 8GB of RAM and a 128GB SSD for $1,300.
Apple's cheapest laptop carrying more than 8GB of RAM is selling for around $2,300.
You can argue that you like Apple's gear, but the myth that they are not way overpriced simply doesn't pass any scrutiny.