Thunderbolt is really an unsung hero here. It is surprisingly nice to be able to move various components around my desk that would have otherwise sat in a huge tower hogging all the PCIe slots they can find.
You don’t need it if you use llamacpp on Windows, or if you compile it on Linux with CUDA 13 and the correct kernel HMM support, and you’re only using MoE models (which, tbh, you should be doing anyways).
What does MoE have to do with it? Aside from Flash-MoE, which supports exactly one model and only on macOS, you still need to load the entire model into memory. You also don't know which experts are going to be activated, so it's not like you can predict which ones need to be loaded.
> Nvidia's recent GPUs are more power-efficient than Apple Silicon in raster, training and inference workloads.
I think you can do better than the proverbial Apples and Oranges comparison.
In terms of total system, "box on desk", Apple is likely to remain the performance per watt leader compared to random PC workstations with whatever GPUs you put inside.
CUDA 13 on Linux solves the unified memory problem via HMM and llamacpp. It’s an absolute pain to get running without disabling Secure Boot, but that should be remedied literally next month with the release of Ubuntu 26.04 LTS. Canonical is incorporating signed versions of both the new Nvidia open driver and CUDA into its own repo system, so look out for that. Signed Nvidia modules do already exist right now for RHEL and AlmaLinux, but those aren’t exactly the best desktop OSes.
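For reference, a hedged sketch of what that setup looks like in practice: llama.cpp built with its CUDA backend, then run with CUDA unified memory enabled so weights can overflow GPU VRAM into system RAM. The `GGML_CUDA_ENABLE_UNIFIED_MEMORY` env var and build flag names reflect recent llama.cpp releases; verify against your checkout, and the model path is obviously a placeholder.

```shell
# Sketch, not a definitive recipe -- flag names may differ across llama.cpp versions.
cmake -B build -DGGML_CUDA=ON
cmake --build build -j

# Unified/managed memory lets the model spill past VRAM; needs driver + kernel HMM support.
GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 \
  ./build/bin/llama-cli -m model.gguf -ngl 99 -p "hello"
```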
But yeah, right now Apple actually has price <-> performance captured a lot if you're buying a new computer in general.
> For example, when my Windows gaming machine comes out of hibernation my ethernet controller insists that there's no connection. I can't convince it otherwise except by disabling the device and re-enabling it. I can't figure out where I might find information that tells me why this is happening, so I just wrote a powershell script to turn it off and then on again. I bet some Windows IT dork could figure it out in 30 seconds
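For what it's worth, the "turn it off and on again" script the quoted commenter describes is only a few lines of PowerShell using the built-in NetAdapter cmdlets. The adapter name `"Ethernet"` is an assumption; check yours with `Get-NetAdapter` first, and run elevated.

```powershell
# Hypothetical sketch of the disable/re-enable workaround (requires admin).
# Adapter name is an assumption -- list yours with: Get-NetAdapter
Disable-NetAdapter -Name "Ethernet" -Confirm:$false
Start-Sleep -Seconds 2
Enable-NetAdapter -Name "Ethernet" -Confirm:$false
```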
Windows and Linux dork here (heh). It has to do with how various computer manufacturers implemented the Sleep and Hibernate states (S3/S4), how they've resisted implementing a common standard at the hardware level, and how Microsoft eventually gave up arguing and patched around it with their own Modern Standby system in the S0 state.
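You can see which of these states a given Windows machine actually supports with a one-liner; machines that list "Standby (S0 Low Power Idle)" instead of "Standby (S3)" are on Modern Standby:

```powershell
# Lists the sleep states this hardware/firmware combination exposes.
powercfg /a
```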
Tbh, though, the only computers I've ever seen Hibernate work well on are Macs. Every x86 computer usually has some sort of issue with it, except for maybe business laptop models (eg HP's Elitebook line).
> Tbh, though, the only computers I've ever seen Hibernate work well on are Macs. Every x86 computer usually has some sort of issue with it, except for maybe business laptop models (eg HP's Elitebook line).
This has always been my experience, going back I'd say at least to the early 2000s on cheap laptops, and all the way back to the earliest days of sleep and hibernate on desktops, where sleep just doesn't matter that much.
When I started dabbling in boot code around 2006, I read a bunch of the specs and one of them was ACPI, which I only scratched the surface of.
I think until then it had just not occurred to me that a modern paged protected OS would even want to call into any code supplied with the computer, vs. having it come from a driver disk, or be built in to the kernel where everyone can see it.
The whole idea of a bytecode interpreter running random code supplied by a fly-by-night system builder is a little unsettling.
"If Apple Business were a real revenue source, if they charged luxury prices for a luxurious business support experience, they could pay for developers to fix their stuff. Instead, Apple Business is a free side hustle for Apple, a hobby."
I'm wrestling with something similar to this right now in Linux. The only real player that charges "enough" to have an "absolutely zero tolerance for base OS breakage" approach to OS development is Red Hat. Ubuntu LTS is more widespread, but only really because it's $0 even for large businesses, and that's honestly reflected in it sometimes having hardware breakage during a version's initial two-year mainstream support run. Having Windows' business-backed level of "doesn't break" on hardware is rare on Linux.
* Predictability - reducing the number of unknown factors that could cause a person to have issues using their computer. Reminds me of how a secretary I supported was somehow able to install Google Desktop back in the day, and how that caused a massive argument between my boss and theirs when her computer needed to be re-imaged. Most IT-approved programs are known to store user data in known locations on a computer, which makes backups and restorations very easy. Stuff like Google Desktop did not do that, which meant likely breaking someone's workflow in the re-image process.
False. Local races directly determine the day-to-day laws and rules you live under way more than a POTUS could effectively decree. I don't know about you, but I sure enjoy having reliable electrical, water, and sewer systems.
In the US, we have the ability to either confirm or change a significant chunk of our Federal government roughly every two years via the House of Representatives. The argument here is that we, theoretically, could collectively elect people that are hostile to domestic mass surveillance into the House of Representatives (and other places if able) and remove pro-surveillance incumbents from power on this two year cycle.
The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:
1) Lack of term limits across all Federal branches
and
2) A general lack of digital literacy across all Federal branches
I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well-dressed government contractor offering a magical black-box computer solution to the fear of domestic terrorism (regardless of whether it's actually occurring or not)?