Getting a new GPU architecture into the Linux kernel is a daunting task, especially for one that's likely as complicated as AMD's. The AMD driver takes up a sizeable chunk of the Linux kernel [1], although that figure includes lots of autogenerated files and many different product generations. A top-notch, high-performance driver is certainly a tall order without documentation, although I expect a driver more in the ballpark of the nouveau driver is in the realm of possibility in the long term.
I can't help but smile at the prospect that Torvalds' pessimism here - while warranted - creates an incentive to prove something to both Torvalds and Apple...
Last time I pointed out the AMD driver size, somebody on HN mentioned that it's common to copy the entire driver and make the modifications necessary for each specific model/generation of hardware. Not all of it is strictly necessary; a lot of it is there simply by virtue of the code being copied each generation.
A decent chunk of it also comprises definitions of names/addresses/values that are (presumably) automatically translated from some authoritative description. This seems to be optimized for portability and consistent mapping to the input, not optimized for size, so there's a lot of stuff like [1]:
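The linked file isn't reproduced here, but as a rough, hypothetical sketch of the pattern (register names and values invented for illustration, not taken from the real amdgpu headers), it looks something like this:

```c
/*
 * Hypothetical sketch only -- these names and values are made up.
 * The real autogenerated headers contain tens of thousands of lines in
 * this shape: one #define per register offset, per bit-field mask/shift,
 * and one enum entry per legal field value, mirroring the hardware
 * description they were generated from.
 */
#define mmGFX_EXAMPLE_CNTL                 0x1234
#define GFX_EXAMPLE_CNTL__MODE__SHIFT      0x0
#define GFX_EXAMPLE_CNTL__MODE_MASK        0x00000003L

/* Every legal field value gets an explicit name, even when the encoding
 * is just 0, 1, 2, 3..., because other fields are 1-indexed, skip
 * encodings, or reserve some of them. */
enum gfx_example_mode {
	GFX_EXAMPLE_MODE_DISABLED = 0x0,
	GFX_EXAMPLE_MODE_LOW      = 0x1,
	GFX_EXAMPLE_MODE_HIGH     = 0x2,
	GFX_EXAMPLE_MODE_RESERVED = 0x3,
};
```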
This might seem silly and redundant, and there are arguably better ways to express it, but the explicit approach makes a lot more sense if you skim the other definitions and see that some are 1-indexed, some have discontinuities, some have non-obvious reserved or otherwise special values, some have a totally distinct meaning for each value, etc. Some of the definitions do look questionable, but on the whole I don't think any amount of macro magic could actually make these definitions small; the feature set of a modern GPU really is massively complex.
That makes me wonder: is there some tool for C that does dead-code elimination on the source code rather than on the target binary? Something that'd strip out all these unreferenced constants?
It certainly wouldn't be useful for most projects, because most code that's unreferenced is only temporarily so. But in this codebase it's a more permanent thing, with the driver for any given model guaranteed to never need, e.g., register addresses for hardware it doesn't have.
Sizeable chunk in the source tree. The binary driver built from that is only loaded when the hardware is present...
"Or all in, Linux 5.9 comes in at roughly 27.81 million lines distributed among some fifty-nine thousand source files.
...
For a while now the AMDGPU kernel graphics driver has been around 2+ million lines of code making it the largest in-tree kernel driver. With Linux 5.9, it comes in at 2.16 million lines of code plus another 247k lines of code comments and another 109k blank lines... Or up to 2.51 million lines of code is the AMD DRM driver code including AMDKFD, PowerPlay, DC, and all the kernel code ultimately making up the AMD Radeon support on that driver (but not the older Radeon DRM driver -- that older Radeon driver is at around 157k lines of code). "
If you build your own kernel, you can choose between hundreds of options, including building many components as optionally loaded modules. It's all configured through an ncurses interface.
> We strongly suspect that by the time enthusiasts could reverse-engineer the M1 SoC sufficiently for first-class Linux support, other vendors will have seen the value in bringing high performance ARM systems to the laptop market—and it will be considerably easier to work with the more open designs many will use.
That's true, but will they have the volume of Apple's laptops? There's a pretty decent demand for those. Nobody cares whether users of some no-name ARM chip can run Linux.
The USA is only 5% of the world's population; what happens there is largely insignificant outside of it.
Android is on 75% of European phones, the CEO of the Fortune 500 company I work for uses an Android phone, the same goes for the vast majority of management there, and many highly influential politicians own Android phones. That's simply a spurious correlation.
Macs are much more popular among developers than they are in the general population, and the programmer population has an outsized say on what software runs where.
> That’s true, but will they have the volume of Apple’s laptops?
Also, alternatives will generally not match the quality of Apple's laptops. And the M1 has unique features[1] that other vendors are unlikely to replicate anytime soon.
In the last 2 years my family completely switched away from iPhones and has replaced most of our Macs with Windows machines. I am the lone holdout, and when this laptop dies I will probably switch as well.
It makes me sad. I used to love Apple. But they became focused on locking down a closed ecosystem and have consistently made the user experience worse. I hate Windows. But Apple has made themselves obnoxious enough that I no longer trust their upgrades, and do not want to be locked in to their fragile hardware choices.
How are their hardware choices fragile? Do you mean build quality, or do you think their ability to build their own chips will fail them and that the chips in all the iPads and iPhones haven't cut it?
How are you not locked into the hardware choices of any laptop maker? Are there realistic, upgradable laptops out there? Or are you switching to a desktop system?
Speaking personally, 3 experiences locked in my decision.
1. Having my phone break, one day out of warranty, with a factory defect. And then being unable to even set up an appointment, because that phone was required for 2-factor authentication on my Apple ID. (Which I don't even remember having set up.) I lost all data.
2. Having keys fall out of my computer, which meant losing it for weeks. After that, every article I've read about how Apple has tried to lock down their hardware and make it harder for independent shops to repair has added to my concern that I'll have no options if it dies.
3. The Catalina upgrade was bad. A friend whose whole life was saved to iTunes found he was unable to migrate his extensive collection. His purchased movies, TV shows, songs, etc. were over a terabyte, all gone. As for me, I was bitten by the Catalina-and-Microsoft-Teams-won't-play-together bug described in https://answers.microsoft.com/en-us/msoffice/forum/all/macos.... If you look at the thread, people are still reporting problems a year later. I cannot run Xcode, and can only share my desktop if Teams is full screen.
You pay a premium for Apple hardware. You used to get a premium experience. The "premium experience" that I have been getting sucks. And not on hypotheticals, but on actual bad experiences that I have personally had.
The last few years have been a bad time for Apple product quality. Terrible keyboards, broken backlights, software updates that lock down the system, and so on. They've definitely lost their quality edge.
That keyboard was terrible. Jony Ive definitely went off the rails, and this was a good example. They admitted it was crap and don't ship it anymore.
I don't remember anything about broken backlights being a thing? Can you point me to an article or documentation where this was widespread?
I assume by updates locking down the system, you're talking about things like Gatekeeper. In what ways has this actually impacted you? What can't you run, and are these apps that have had any active developer attention in years?
You say they've lost their quality edge. Do you mean quality control coming out of their factory? That their defect rates have gone up? Or do you mean their products are being poorly designed?
None of Apple's mistakes have impacted me because I refuse to purchase broken hardware powering a walled garden.
> are these apps that have had any active developer attention in years?
I've got apps I run in PCem because DOSBox won't cut it. Why should I stop using software I prefer simply because someone has made something similar, wrapped in Electron, running atop a hideous tower of dependencies and bloat? Hilariously, emulating a 90s PC running FreeDOS takes less memory and CPU than using some modern alternatives.
> Do you mean quality control coming out of their factory? That their defect rates have gone up? Or do you mean their products are being poorly designed?
Backlight failures are both design and factory quality. The app lockout was software quality and design. The keyboard was design.
> But they became focused on locking down a closed ecosystem
That's who they've always been (post Woz, at least, which is a long time). They've always wanted to own the whole thing, from OS to hardware. There are advantages to that, but freedom for people to experiment, tinker, and run what they want isn't one of them.
Considering that Apple's closed ecosystem is like living on the Globex Corporation campus, and everybody else's is... well, like post-monorail North Haverbrook, can you fault Apple for wanting to maintain the control that keeps the UX quality so high?
`sudo spctl --master-disable` to disable Gatekeeper, and `csrutil disable` in Recovery Mode to disable rootless/System Integrity Protection. Just like that, it's a Mac from a decade ago in openness.
It's also immediately a Mac from a decade ago in security. Heavy-handed or not, that is something people tend to forget (and no, it's not perfect security, but definitely more secure than allowing root at runtime).
Any particular laptop to recommend? I am looking for a new laptop and haven't found anything similar to the MacBook Air (I hate the touchbar) for a similar price (the 16GB of RAM model). Anything similar is either a lot more expensive or the build quality is a lot worse.
I travel and move around a lot (not so much now, but I still move around my country and to the office), so portability is very important to me. Another advantage of the MacBook line is that I can carry one Anker dual USB-C charger and charge everything I have.
If you're serious about using Linux as a backup, you can't be very tied in with the Apple software ecosystem. So if Apple concerns you that much, why even buy an M1 MacBook?
Personally, I find an ARM machine with 16 GB and better-than-desktop performance very compelling for compiling everything for all my devices. I hope the excitement somehow spurs more full ARM motherboards from other manufacturers.
The performance is not better than desktop. Single-threaded performance is on par with Intel's and a bit less than the 5950X (not significantly, though): https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste.... And because a desktop can have many more cores, multi-threaded performance is simply way below. Of course, when/if Apple increases core count in newer chips they may catch up in this department. Also, a desktop can have way better cooling. For example, my current 16-core AMD desktop with 128GB RAM sustains 4.2GHz without any throttling, with the temperature at 60C.
There is no guarantee other ARM manufacturers will even approach what Apple has done, and this is evident in how other ARM implementations perform versus the A-series chips.
Apple has the ultimate advantage of designing both the OS and the chip, and that level of integration is obviously not being matched by anyone else in the industry.
I am not dismissing the possibility of an ARM desktop that performs better than a similarly priced CISC system, but I do doubt the level of difference Apple was able to achieve.
I think the point is that if you can't install Linux running natively, you can't escape "undesirable behavior" by the Apple-supplied OS. For example, if you want to ensure your laptop isn't sending Apple information about what you're transmitting over the network, running a VM sandbox in OSX doesn't eliminate that possibility.
Right now VMs aren't running on the M1. The hypervisor support would only provide ARM VMs, so you couldn't drop an x86 Ubuntu VM on it. Apparently it is coming, but it'll be a bit yet.
Apple’s share of the desktop and laptop market is minuscule. Competition already exists. Now, it may be that your use case is niche. But niche use cases don’t get mainstream support, and never have.
With huge capital costs, and patent entanglements. It's not something that some Randian hero can just do. The "free market" never precisely exists. Adam Smith himself was very cognizant of this.
Which is a shame. It would be really interesting to compare, to see exactly how much this is a "great chip" and how much it's a "great vertical integration of chip and software".
For the CPU side virtualized Linux (which has already been done) should tell us within a couple of percentage points, for the GPU side I'm not sure how much can be vertical integration vs "this is what the hardware does with a mature driver".
> Torvalds, of course, can already have an ARM based Linux
> laptop if he wants one—for example, the Pinebook Pro. The
> unspoken part here is that he'd like a high-performance
> ARM based laptop, rather than a budget-friendly but
> extremely performance constrained design such as one finds
> in the Pinebook Pro, the Raspberry Pi, or a legion of
> other inexpensive gadgets.
I believe he currently uses an AMD Threadripper setup - there would be nothing stopping him from using something like the PineBook Pro as a "dumb terminal" and having his builds done off-machine. It would mean being dependent on an internet connection, but it wouldn't have to be a fast one, as he would only need to send deltas to the build process. (This would be on top of other mechanisms like CI.)
The PineBook Pro has quite a reasonable keyboard, is quite light, and gets decent battery life. He could reasonably use it all day without issue. Re-flashing it can be as simple as throwing a new SD card into it, which could also help in terms of security whilst traveling (re-flash it when you get to your location).
I would very much tell him and other people to seriously consider such a machine.
Plenty of folks here, dare I say most that use it at all, use it to censor posts and comments bearing opinions or facts that they just don’t want to read about.
Every laptop's battery life numbers are measured when it has a fresh battery, just halve the numbers or something if you want to know what it'll be like in many years.
Apple uses the ARM ISA, which everybody can license, with their own implementation. What should they be forced to open? Something about the GPU? I doubt anything about Apple is really bleeding edge. It's the vertical hardware-software integration which makes wonders.
If AMD (which already owns a good GPU architecture) chose to make an OS, it could get the same performance.
The ISA is open but the chip is custom and very different from any of the standard designs ARM Holdings has (or anybody else for that matter). Not exactly an "architecture" to open like OP said but just pointing out the CPU isn't some standard design with nothing to open up simply because it licenses the ARM ISA. That's like saying your book is off the shelf because it used an English dictionary for the words.
The vertical software integration creates many small wins that aren't insignificant overall, but it's not why they are nearly 2x ahead of Qualcomm, using the same ISA at the same power level, in third-party, userspace, computation-only benchmarks. There isn't any vertical integration in those; the raw performance is just better.
For another take on this: Chrome vs Safari benchmarks put Chrome at ~90% of the speed in web benchmarks, and this is Chrome's first stable release of an M1-native version. At the same time, that's 45% faster than Chrome on my AMD 3950X. I remember when Apple added the JavaScript rounding instruction to their chips, people speculated that's why Safari was now so fast; it turns out a Safari dev chimed in that they hadn't started making use of it yet - the new chips were just faster than before.
> just pointing out the CPU isn't some standard design with nothing to open up simply because it licenses the ARM ISA
The implementation is not an ARM reference design; it's Apple's own. Why should they be forced to open it up? How are they limiting their competitors?
> they are nearly 2x ahead of Qualcomm using the same ISA at the same power level in 3rd party userspace computation only benchmarks
I'm not sure whether you're speaking about the M1 or the A14 here. The A14 is just a bit faster than other mobile SoCs, certainly not 2x.
If you're speaking about the M1... well, what are you comparing it against from Qualcomm? Do they have a similar product, with a similar target and power envelope? The recently announced Snapdragon 8cx Gen 2 from Qualcomm is around 6-7W TDP, which is about 50% that of the M1.
> There isn't any vertical integration in those, the raw performance is just better.
I'm unconvinced. We need to consider the 5nm production process, the everything-on-SoC approach, and the fact that Intel is selling more-or-less the same CPUs as 5 years ago. The Ryzen 7 4800U is quite close to the M1 both performance-wise and TDP-wise. Now, take a 4800U, make it 5nm, build a SoC around it that includes the memory, and build a finely tuned OS - aka "vertical integration". Do you really think you could appreciate a performance or thermal difference?
Not the original commenter who proposed it, but I could see a case for splitting Apple into separate hardware and software companies (among other possible divisions). Complete vertical integration, where you're the only option for the hardware, software, app payment, account, browser, and so on, is certainly creating a captive market segment which has only been growing in total size. How you think this should or shouldn't be broken up wasn't what I was really talking about, though; just that the CPU IP itself is significant Apple-specific IP that _could_ be broken off - definitely more than just a licensed ARM ISA with a GPU attached. E.g. I imagine that if Microsoft could simply purchase the M1 instead of the SQ2 (a not-as-customized variant of the 8cx Gen 2) for its Pro line it would, but the only reason that's not possible is that Apple the software company wants to sell you macOS and its App Store cuts and so on.
I was referring to the M1 at hand, but actually the A14 is arguably in the 2x range as well, certainly not "barely ahead", as much as I love my Android with a Snapdragon. Qualcomm has been in this "ARM laptop" space, for lack of a better term, with things like the 8cx and now the 8cx Gen 2. While these Qualcomm laptop devices aren't completely blown out in multi-core (though it is a significant loss), Qualcomm's single-thread performance isn't really that much better than the mobile chips it's based off of. Which, to be honest, is also true of the M1 - it's not ridiculously better than the A14 by any means... the A14 just already trounced the 8cx even though it's for a lower segment. Hell, I'd rather have an A11 from 2017 in my brand new Android phone.
I'm not really going to speculate with you on "what ifs" when we have actual quantitative data from third-party benchmarks saying general-purpose code ported by third parties gets nearly the same performance uplift as Apple got with its own software. Actual data tells more of the story than I could ever spin from personal theory.
As for the future, Zen 4 going to the same TSMC 5nm process isn't going to close the wattage gap by any means, but we'll have to see if AMD can make larger overall gains in single-thread performance. Apple's challenge with the M1X/M2 is going to be "does it scale to more cores" to compete with the likes of the *950X or Threadrippers as they move towards workstation hardware, but that's a more tried-and-true challenge than trying to get x86 cores to run faster on less power. I don't want to count AMD out, and I'm actually typing this from a 3900X with a 5950X on order, but I can tell you right now it's noticeably faster to e.g. browse the web on my fiancée's A14 iPad than on my liquid-cooled 3900X with a 3090, and as Chrome has shown, that's not because of magical software integration just because it all has the same company name attached; it's because the CPUs run the workload better.
(Disclaimer: maybe I haven't fully understood what you meant, I'm assuming you're claiming Apple ought to be split up somehow and/or limited in its advantages)
> Hell I'd rather have an A11 from 2017 in my brand new Android phone.
Because of its single-thread performance, I suppose?
> Complete vertical integration [...] is certainly creating a captive market segment which has only been growing in total size
I can't say I agree with the captive market argument. The problem with Apple is that a) it sells a lot and b) its products are probably the best ones.
But we can't say there's no alternative to Apple products - there are a lot. They're just not as good as Apple's.
Tesla is the best electric car. Is anybody thinking about splitting it up and claiming they're anti-competition? Some few luxury car brands (BMW, Mercedes, Audi) command an awful lot of the high-end car market, and offer some features that cheap cars don't have (e.g. Mercedes PRE-SAFE). Should they be split up?
A monopoly is something different. We wouldn't blame Rolls Royce for making the "best cars" at the highest price level.
Of course, if Apple sold their phones for $50 and their M1 laptops for $300, there could be a case for pushing other solutions out of the market. But this is NOT going to happen; the cheapest Apple phone on the market (iPhone SE) is 2x the price of a decent midrange Android phone, and the cheapest Apple M1 laptop (MB Air) is 2x/2.5x a cheap windows/linux option.
It's also unlikely because unlike Intel, Apple doesn't sell their chips to other companies and thus can't illegally deal or try to maintain monopoly power over those companies.
I think it would be better if the chips could remain in-house only, but hackintoshes and hackiniphones had to be legal. The OS/userland interfaces are incalculably wider, so those are the more important ones to disembargo to encourage competition.
I know I would be much happier buying Apple hardware to run free software if I didn't feel like I was feeding the walled garden (because Samsung-made hackiniphones don't pay 30% to Apple).
The only ARM IP Apple actually used was the ISA. The microarchitecture in the M1 -- and all the A-series chips -- is 100% custom.
No other gadget maker has the internal engineering chops to do what Apple has done. And it'll be a few years before ARM releases a chip architecture that comes within spitting distance of the M1's performance -- by which time Apple will have released an even more powerful chip.