Normally, you would see this screen that allows you to switch storage modes
And back when PCs were far more open, good old IDE was always an option too.
Ever since BIOS became EFI, and flash ROMs started getting much bigger, it seems they've not been adding functionality but removing it slowly. The excuse is often "security" (against the user), and "legacy" (the oldest interfaces are also the most widely understood and stable).
That said, I'd stay away from the "prebuilt" manufacturers like Dell, Lenovo, HP, etc. if you want configurability. They've always had far fewer options in their BIOS than equivalent offerings from "enthusiast" or "gamer" oriented companies, although in the laptop space it's harder to avoid them.
I still hope to some day see Postgres ported to run directly on a RAID controller. eBPF already runs on NICs, but we need more things of that sort. I suppose Synology has a bit of an analog of this in that their NASes can run Docker images on their end, which makes better use of their gigabit Ethernet connection. But that's basically a whole second computer.
The next challenge, as we've already seen with Apple Silicon, is going to be trying to outperform SoCs, and if you can't, you get pushed into making SoCs yourself. I think there will still be Raspberry Pi and other vendors with open SoCs, and even RISC-V boards being integrated into Framework laptops, but as time goes on we are most likely going to be stuck with SoCs on devices, because memory and everything else just keeps moving closer to the rest of the silicon.
The Raspberry Pi's SoC famously boots from its GPU first, then its CPU.
The GPU firmware is a binary blob (not open source) thanks to Broadcom...
Broadcom... that same hardware mfgr that makes your Debian install so much extra fun... (well, if you care about networking and integrated controllers...)
Well yeah, a pre-built with no regard for repairability will always end up more tightly packed than something that needs to fit a standardized battery, etc. And in Framework's case, an entire GPU.
That's pretty much why phones never had a chance for this idea until very recently (and AFAIK, those recent examples are still not commercially available).
This used to be a thing; I remember my father excitedly configuring a made-to-order laptop from ZipZoomFly[0] back in the day. I think that the market wasn't kind to them though: the ecosystem around replaceable laptop parts never matured to the point where it was competitive with the proprietary designs, and standards constantly changed because of the form factor's constraints, so the dream of just replacing a single part never materialized.
Closest thing to that dream now is the framework laptop, which does have replaceable parts.
Resellers of Clevo barebones offer a fair bit of flexibility to spec the system to order. It's not full freedom to mix and match, but still quite flexible. The price is that it is far less sleek, bulkier and heavier than most other laptops.
What the parent poster was talking about was not the old hardware IDE interface, but the emulation of the IDE interface within the SATA controller, which exists for compatibility with very old operating systems which understand the old IDE interface but not the newer SATA interface. Since nearly all modern operating systems understand the SATA interface (AHCI) natively (that is, without having to install any extra drivers), that compatibility mode is not very relevant anymore.
Across multiple laptops, the Windows 11 ISO doesn't have Wi-Fi drivers. And since we're in hell, you can't finish the setup process and actually use your computer without an Internet connection; otherwise you could just finish setup and then run an installer for the Wi-Fi drivers.
Luckily you can use an Android phone to create an Internet bridge which will allow you to proceed.
However, the OEMs will automatically install CrapWare (we should start calling it this) even on vanilla Windows installs.
Compared to my last few CachyOS installs where everything just works out of the box. I don't need to provide my email address to use my computer.
Honestly, as long as I can disable Secure Boot and install Linux it's OK. The moment I can't do this I'll be using legacy hardware (or import a laptop from the EU, where this is banned).
Edit: Hopefully the EU will make secure boot an option we can disable...
To be fair, normal people don't care. My friend needed a new laptop, and since she's not too worried about the latest specs I just picked up what I could find at Target for $300. She is never going to reinstall. When Windows messes itself up in a year or two she'll probably just buy another $300 laptop.
If this isn't explained during the setup process, it doesn't count. Magic undocumented incantations are not a user interface. Normal users will never discover this.
I've done my share of installing all kinds of OSes and flashing Android ROMs, and I feel like I re-discovered Ctrl+C/Ctrl+V all over again with this revelation.
This is why you have to pay me to use Microsoft products. Their user experience is contentious and continually degrading. The only nice thing I have to say about Microsoft is that when I retire, I will no longer be forced to engage with them and their enshittification.
They are even trying to push out local user accounts on the embedded/IoT version of their OS. The company I work for is even having me port their product's host OS from Windows to Linux so they don't have to deal with Microsoft's hostile user experience.
Because they closed the one where you use a fake email and get the password wrong.
Because they closed the one where you could fake not having an internet connection.
Because they removed the workflow to set up a local account. After they used dark patterns and misleading language to convince people to set up MS accounts instead.
Only on Windows Home, though. Windows Professional has none of that bullshit and I've done many installs.
Anyone who is this passionate about computers should be buying a Professional license anyway, namely to get proper access to Group Policy and other things peasants won't care for.
OP here, I run a computer shop that handles consumer computers. People bring me machines they purchased from electronics stores, so I mainly have to contend with what is out there. I looked into getting into a partnership with a reseller, but since I don't really have much revenue it always seems like a zero-sum game. Sometimes customers want me to purchase laptops for them, and this particular model I had trouble with was one of them. I'll admit I'm not very well versed in the areas I should be focusing on when purchasing OEM products, because I am mostly a system builder for gamers. If anyone has any recommendations for getting system licensing and hardware from third parties that's not too much of an upfront expense, I'm all ears.
>Windows Professional has none of that bullshit and I've done many installs.
Windows Professional has a lot of bullshit. Yesterday I lost 3 hours because, after a crash, Windows would run an "online checkdisk" and die before completion. Pulling the ethernet cable solved the issue. (This machine does not and will not have a Wi-Fi connection.)
IIRC Windows 11 doesn't even have WiFi drivers for older Microsoft Surface tablets. I've got a Surface from ~2017 and I had to do a dance to get a fresh install on there.
We’ve taken to recommending Rufus in our setup guides rather than the Windows installation media tool due to the lack of recent Wi-Fi drivers in vanilla Windows.
> We are slowly being boiled to the point of no return. We need some kind of consortium for users to represent our interests otherwise in 10 or 20 years we will have very limited choice when it comes to computer technology.
For servers purchased in bulk by hyperscalers, OpenCompute has done a great job of coordinating owner requirements for hardware and firmware delivered by ODMs and silicon vendors in the server supply chain. Founded by Facebook, OCP built on the pioneering ethos of whitebox servers at early Google Search.
To create a similar organization for clients, one would need to pool enough buying power to influence the supply chain for "PC" (x86/Arm) client devices. OCP bypassed Tier 1 OEMs and worked directly with Taiwan ODMs. This worked because hyperscalers could implement custom firmware and do their own support. The closest existing client vendor might be Framework, which has not yet managed open-source firmware. Plus Clevo (coreboot) OEMs. DMTF [2] might have some interest.
A possible baby step towards open clients would be an OCP reference design for a "privileged access workstation" to access security-sensitive administrator web consoles for hyperscaler clouds. Include both a discrete TPM and an open silicon root of trust (OCP MS Caliptra or Google OpenTitan are both open firmware). AMD OpenSIL has promised OSS client firmware by 2026 and AMD mini-PC boards are everywhere. The building blocks are present for a high-integrity reference client with open firmware, under control of the client-owning cloud customer.
With suitable licensing, anyone from Tier1 PC OEMs or small vendors could create custom derivatives of the OCP reference client IF they retain mandatory core properties like open firmware and open silicon RoT. We could intentionally reboot the accidentally-open IBM PC ecosystem, at least for the small niche of secure cloud administration. If the OCP reference client is successful, it could motivate a new client-focused org for open client hardware.
Although on the subject of hyperscaler hardware and restricted closed ecosystems, AMD EPYC processors have that nice feature which allows OEMs to permanently vendor-lock the processor to their hardware on first boot, so you can't take a chip from a Dell server and stick it in a non-Dell server for example. If you're buying a random surplus EPYC processor you might not even be able to know which vendor it's bound to until you try it.
At some point, EU "circular economy" rules will need to look at secure mechanisms for transfer of decommissioned hardware ownership for sale on secondary markets. This also applies to hyperscaler server recycling, and even old Apple hardware that could be made to work with alternate operating systems.
If AMD EPYC CPU policy is enforced by PSP firmware, then it can be negotiated by customers in a large enough, non-OEM, buying pool.
It's not just EPYC, Lenovo makes heavy use of PSB (Platform Secure Boot) locking for their desktop/workstation machines too (TR or Ryzen Pro). The PSB feature is enabled by default so when installing a new CPU the user is prompted to lock it at every boot until the feature is disabled in BIOS.
The locking is handled on-chip, it's permanent, and it leaves the CPU working only on the OEMs motherboard/BIOS. This poisons the used parts market. But worse, these systems are always just one BIOS update away from not booting the fused CPUs anymore (e.g. signing the firmware update with new keys).
AMD included this in their CPUs because large OEMs like Lenovo and Dell asked for it. There's no extra security being offered by this feature, despite the OEMs throwing the word around every time to cover up that it's only securing profit.
Sure, but what if the motherboard (or some other proprietary and expensive component) has a major failure? Then you're stuck with a locked CPU.
Generally, the used market values standard form factor stuff more, so you'll frequently see people running a supermicro/etc server with a low end CPU wanting to upgrade to a better CPU from the same gen. So, pulling CPUs to sell to the enthusiasts in the used market from proprietary servers (HP, Dell, etc) when they go EOL has been pretty standard practice for at least a decade. There then ends up being a big pile of undesirable proprietary motherboards/chassis, and low-end CPUs for them. Both eventually get e-wasted, sometimes with the motherboards/chassis getting parted out.
The reasoning for this is pretty simple. Shipping an entire server is expensive - at least $100 CAD, sometimes more. Shipping a motherboard is cheap (sub-$30), and a CPU basically free (~$10). A proprietary server cannot be upgraded down the line (whereas a standard SSI-EEB chassis can have its motherboard swapped for a newer one), which decreases its value further.

If someone has a standard chassis, they can buy any standard motherboard; to sell your proprietary one, you have to find someone with a proprietary chassis and a dead/missing motherboard, a very small market (supply vastly outpacing demand). For someone to want to buy your proprietary whole server, they'll have to be willing to accept that the chassis is junk whenever they want to upgrade past its generation of hardware, which is a relatively small market. Resale value in a few years will also be terrible, because no one wants old hardware they can't upgrade. The market has a pretty hard cap on value for old hardware.

All of this put together means that proprietary whole servers are worth (maximum_price_for_used_server * proprietary_undesirability_multiplier) - (shipping_cost), while a standard motherboard is worth (maximum_price_for_used_server) - (shipping_cost) and an unlocked CPU is worth (maximum_price_for_used_cpu) - (shipping_cost). It can sometimes be the case that selling an unlocked CPU from a proprietary server nets more money to the seller than selling the entire server, depending on era/brand/specific CPU.
OP here, is OpenCompute mostly for server-grade hardware? I feel like the enthusiast market is being left behind by all these solutions that the average computer user doesn't need.
That reminds me of how Dell used (still uses?) different PSU/motherboard power connections, and at one point the physically matching connectors with different pinouts meant Dell PSUs would fry regular motherboards or vice versa.
This sounds like more of the same, a kind of uncaring that shades into hostility.
> and at one point the physically matching connectors with different pinouts meant Dell PSUs would fry regular motherboards or vice versa.
I think it's more than "uncaring" - since they just shifted the pinout 3 pins over from standard ATX, got rid of the 3.3V and added another 2 5V lines: https://www.vogons.org/viewtopic.php?t=59959
I think dropping the 3.3V and -12V connections is a valid choice, but they probably should have found a different connector to use so the incompatibility was obvious.
They could have also dropped those rails by just dropping them, and leaving the connector the same. It’s pretty unlikely they needed the extra current on the 5V rail.
> I think dropping the 3.3v and -12v connections is a valid choice,
More than just a valid choice, in the long run even 5V is going to be dropped. Intel's ATX12VO standard has only 12V coming from the power supply, all other voltages are generated on the motherboard when necessary.
A lot of Dell design is slightly more proprietary. At their scale, the slight cost or efficiency gains they pick up are probably worth the extra design changes. At work we use all Dells and, generally speaking, the changes lead to a nicer technician experience. And sometimes standards suck: Dell E-series docks were drastically more reliable and less fragile than USB-C and I miss them terribly.
For what it's worth, I'm not sure the consumer is hurt too badly either: because of their sheer scale, aftermarket Dell parts and clone parts are extremely cheap. Any Dell part number turns up tons of off-brand replacements on Amazon and eBay.
The biggest downside to Dell's approach is e-waste. When they change a design, otherwise serviceable hardware isn't useful when working with newer models. Speaking of those wonderful old E-series docks, we had to replace them all simply because they don't sell laptops with the connector anymore.
The AHCI / RAID switch the author is describing is only relevant to drive controllers which support SATA disks - AHCI is the protocol that's used to interact with a SATA controller. This switch has no effect on NVMe drives, and never has; the fact that the BIOS control for it even mentioned NVMe is odd.
The "RST storage driver" (i.e. Intel Rapid Storage Technology) is only required for systems which support SATA RAID using an Intel integrated RAID controller. It is not required to use NVMe devices. I wouldn't expect Dell to provide this driver for systems which don't use SATA disks, since it wouldn't do anything.
It certainly wouldn't surprise me if there were some weird trick required to perform a clean install of Windows on these machines. But I don't think the author has identified it correctly, and I find their theory that this is a deliberate effort to "control the user experience" unconvincing.
I think there is a newer version of RST based on NVMe and it has the same problem: if the system is in RST mode the SSDs aren't visible. Dell even has a recent article about this exact problem: https://www.dell.com/support/kbdoc/en-us/000188116/intel-11t...
Also, you can reboot into safe mode on Windows 10/11 and even 8/8.1, and before it reaches Windows initialization, enter the BIOS/EFI, toggle the AHCI/RAID/RST mode flag, save and exit BIOS/EFI, continue booting Windows to safe mode, install drivers if they aren't automatically installed by Windows, then reboot to exit safe mode. Apparently safe mode re-initializes the HAL similarly to the first install reboot OOBE mode, but normally HAL doesn't like you doing this in normal boot mode.
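For reference, the variant of this trick I usually see written up forces the safe-mode boot with bcdedit instead of the boot menu; a rough sketch from an elevated command prompt, not necessarily the exact steps described above:

    :: queue the next boot into minimal safe mode
    bcdedit /set {current} safeboot minimal

    :: reboot, enter BIOS/EFI setup, flip RAID/RST to AHCI, save and exit,
    :: let Windows come up in safe mode, then remove the safe-mode flag:
    bcdedit /deletevalue {current} safeboot

    :: reboot once more; Windows should come up normally on the AHCI driver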
> enter the BIOS/EFI, toggle the AHCI/RAID/RST mode flag
He said (with screenshots) that flag doesn't exist. Maybe he didn't look hard enough for the flag and didn't look hard enough for the driver though. I got a "I don't want to fix my laptop; I want to complain" vibe from the blog post.
I suspect that Fast Startup and/or dirty bits on partitions are causing the EFI to disable those modes, and/or secure boot and/or Windows 8/10/11 "optimized settings" checkboxes causing the mode to not display or otherwise not be available, as I allude to in another thread on this post. Dell is not the only offender in this, as I've seen even more locked-down EFIs than Dell's that won't even allow you to disable secure boot.
Yes, I would agree! I also agree with you that OP didn’t seem to be troubleshooting as much as venting, but from the Reddit thread that someone else posted, OP seems sympathetic and responsive to helpful commenters, so I hope that they see this HN post if they still need it, which was unclear from the Reddit thread.
You can switch into AHCI mode for NVMe drives. The point is you cannot install Windows clean without either having the RST driver, if your storage device is set to a RAID configuration in the BIOS, or switching into AHCI mode, which my article describes. Since the Dell laptop model I described did not have an option to switch from RAID into AHCI, I was unable to install Windows clean without the RST driver. And that driver is not included on Dell's website.
IDK about misguided, but I just went to the Dell website, went to support, looked up the Inspiron 16 Plus model 7640, and promptly found an Intel RST driver [0].
I am not disputing your experience at all, but it is weird to me that we'd need an RST driver to install Windows with an NVMe device; RST is a SATA/AHCI RAID tech, and while it does also do something with the 'optane memory' accelerators that Intel used to sell, I wouldn't have expected it to also make NVMe drives visible to the installer. My own experience is not current at this point, though.
I was also able to find that driver and posted it in another thread. I can fully believe that it wasn't available when OP searched for it, but maybe they just didn't find it due to Dell's support site being somewhat difficult to navigate for people who aren't used to troubleshooting Dell computers and especially clean installing or dual booting them.
One workaround is to install in AHCI mode and then use the safe mode method to switch the storage mode out from under Windows, install missing drivers for the desired boot mode under safe mode if they aren't installed automatically by Windows, then reboot to exit safe mode.
However, OP doesn't seem to have the option to switch to anything other than RST, which could be due to BIOS settings and/or Windows settings, as well as flags on your partitions that cause Windows to boot in read only mode. You might need to toggle some settings for legacy/uefi compatibility mode, reset BIOS/EFI to (un optimized for Windows 8/10/11) default settings, disable secure boot temporarily, etc to let the EFI allow you to toggle the boot mode on some Dell BIOSes, however, especially in OP's case where they appear to not have any alternate drive modes besides RST in BIOS/EFI. In especially bad cases, you might even need to update the BIOS to enable these modes and/or wipe the EFI partitions and possibly the entire boot drive, as sometimes the BIOS/EFI is inserting itself in the boot process via "dirty bits" and/or Fast Startup mode.
This must be a difference between the methods we are using to look up drivers. I'm using dell's support site and typing the dell service tag to look at all the drivers. When I click on storage drivers section, I see nothing in the RST department. Your link seems to point to a search tool to find RST drivers which I did not know about.
I did dell.com -> support -> drivers & downloads -> type "Inspiron 16 Plus" instead of the service tag and pick the 7640 model from the drop down -> select 'Driver' for download type and 'Storage' for the category.
I'd be pretty disappointed if I put in an actual service tag and nothing showed up. I wonder if it's showing you stuff that's been released since your specific laptop left the factory? This RST driver is dated November 29.
I'm a bit lost in all this, except for kind of thinking that it shows x86 is a giant mess.
First, doesn't Windows know how to use NVMe drives as NVMe drives, without going through some weird SATA compatibility layer? Second, if I'm reading you right, Windows also has bundled drivers for RST. It's just that the installer doesn't.
Is that right? Are you saying that the Windows installer doesn't know how to use storage interfaces that the final installed Windows system would know how to use, and furthermore that those include storage interfaces that might be used for the boot drive? So in order to install, you have to get an RST driver and somehow load it into the installer, but after that the system will work?
Because that sounds like a Windows problem more than a BIOS problem. You're installing your OS on reasonably vanilla storage[^1]. Why would you expect to have to download a driver from Dell at all?
[^1]: Sort of vanilla. I don't know about this "RST" nonsense and am suspicious of anything that might make it hard to move a drive, or indeed an entire RAID array, to another computer or controller intact.
> Are you saying that the Windows installer doesn't know how to use storage interfaces that the final installed Windows system would know how to use, and furthermore that those include storage interfaces that might be used for the boot drive? So in order to install, you have to get an RST driver and somehow load it into the installer, but after that the system will work?
After reading the Dell documentation... yeah, that's exactly the situation. A laptop only has one drive BTW.
Presumably the author really is missing a driver; I doubt he'd have missed being able to install without it. If he really does need such a driver, then the exact name of it, or the details of Dell's BIOS options and whether they help, sound fairly incidental to the underlying story.
Your criticism may be reasonable, but does it really cut at the heart of the issue? Also, some of these options are occasionally oddly named, so let's not ignore the possibility that the article's author is right on this.
> Your criticism may be reasonable, but does it really cut at the heart of the issue?
The article hardly provides any support for its "closed ecosystems" thesis beyond this anecdote. Whatever the situation is on this hardware (and wmf's sibling comment points out that there might indeed be something odd going on), it seems far more likely that it's the result of sloppy engineering by Dell and/or Microsoft, rather than a deliberate plan to restrict user choice (or something).
I'm the OP. Yes, I did not provide much evidence in that specific article, and I'm getting to the point, based on responses, that there might be some irregularities between search methods for model-specific drivers, so I'm willing to admit I might be wrong about this. But the security features included in consumer hardware BIOSes these days, and the way OEM manufacturers are designing OEM products, lead me to believe that we are moving in a direction that is more closed and locked down, which is not good for the enthusiast community and, in the long term, bad for society. We can even see this with other system build vendors like ASUS. I have published other articles [0] related to what I believe is a slow move towards locked-down consumer hardware.
PCs are only open because IBM failed to keep it closed, that wasn't the plan.
They became the exception among 16-bit systems in regard to openness, and while it contributed to the PC taking over everything else, OEMs want their margins back.
No they didn't; read up on Compaq's clean-room reverse engineering, the lawsuit, and IBM's failed attempt to regain the PC market with MCA and PS/2 after losing it.
"By June 1983 PC Magazine defined "PC 'clone'" as "a computer [that can] accommodate the user who takes a disk home from an IBM PC, walks across the room, and plugs it into the 'foreign' machine".[7] Demand for the PC by then was so strong that dealers received 60% or less of the inventory they wanted,[8] and many customers purchased clones instead.[9][10][11] Columbia Data Products produced the first computer more or less compatible with the IBM PC standard during June 1982, soon followed by Eagle Computer. Compaq announced its first product, an IBM PC compatible in November 1982, the Compaq Portable. The Compaq was the first sewing machine-sized portable computer that was essentially 100% PC-compatible. The court decision in Apple v. Franklin, was that BIOS code was protected by copyright law, but it could reverse-engineer the IBM BIOS and then write its own BIOS using clean room design. Note this was over a year after Compaq released the Portable. The money and research put into reverse-engineering the BIOS was a calculated risk. "
Copyright law and clean room design being the magic words.
Followed a couple of years later with,
"Compaq, IBM Reach Broad Patent-Sharing Agreement"
Last week there was a "Switched Back to Windows After over 10 Years on Linux" thread ( https://news.ycombinator.com/item?id=42496032 ), and this story shows one of the reasons I prefer Linux to Windows.
This issue goes back a long way, and the blame goes to a combination of OEMs and Intel (and AMD) wanting to sell RAID solutions without needing real RAID hardware, and the Intel CPU and chipset teams playing along in ways that they really should not have.
In the AHCI era (and earlier), drives connected to a SATA controller, and there were three ways this could work:
a) The controller was just a controller. Perhaps it appeared as an AHCI device over PCI. No funny business, and the OS could talk to the drive more or less directly via the controller.
b) Hardware, or at least hardware-ish, RAID. A RAID controller speaks SATA to the drives, and the OS speaks some protocol to the controller. It's possible for the protocol to be obnoxious and to require obnoxious drivers and/or management software, but at least it makes sense.
c) Software “RAID” that pretends to be hardware. The CPU really does speak AHCI to the drive, but the vendor has decided to make it pretend to be a high end vendor thing and to integrate it with BIOS. But this is a mess, since the controller really is AHCI. So some hack is done to prevent the OS’s native driver from noticing the AHCI devices and instead let a (generally very bad) vendor “driver” that is actually a full RAID stack claim the devices. This could be as simple as firmware asking the AHCI controller not to report AHCI compatibility. Intel has also enabled this through multiple generations of disgusting kludges.
Enter NVMe. Unlike SATA, there is no controller. NVMe drives are PCIe devices. So the choices are different:
a) Vendor does nothing except boot support. NVMe drives show up on PCIe just like anything else would.
b) Vendor has an actual RAID controller. It’s a device that speaks PCIe to the NVMe devices and, itself, acts as an NVMe (or AHCI) device as seen by the OS. This could work fine, but it’s unlikely to be as fast as the drives themselves (NVMe is fast).
c) A truly atrocious hack, again enabled by Intel, in which firmware can ask the Intel PCIe hardware to straight up lie to the OS about what devices are connected. The hardware will try to identify NVMe drives that it’s supposed to hide (which is itself a mess — these drives are all actually just PCIe devices, and there is no reliable way in general to figure out which devices are supposed to be hidden). Then some magic “RAID” driver will do some other kludge to talk to the NVMe devices behind the OS’s back and pretend to be a disk itself. Of course it works poorly.
In any case, I wonder if the OP’s machine is actually an example of (b), where the NVMe drives, presumably connected to a fancy backplane, are genuinely connected via PCIe to an actual RAID controller, not the CPU or chipset. If so, a firmware option for “native NVMe” would make no sense, and the actual correct solution is for the OP to arrange for the Windows installer to use the right driver. This has been officially supported, usually in some annoying way, since at least Windows NT 4.0 and probably for quite a bit longer.
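If it is (b), the "annoying way" today is either the "Load driver" button on the installer's disk-selection screen pointed at a USB stick holding the .inf files, or slipstreaming the driver into the install media beforehand. A sketch of the latter with DISM, where the paths, image index, and driver folder are all hypothetical:

    dism /mount-wim /wimfile:D:\sources\boot.wim /index:2 /mountdir:C:\mount
    dism /image:C:\mount /add-driver /driver:C:\rst-driver /recurse
    dism /unmount-wim /mountdir:C:\mount /commit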
Scenario C is more likely the culprit. I have seen multiple examples of prebuilt PCs and laptops defaulting to software RAID mode for reasons unknown, and they did not always have a toggle just like in OP’s case.
The only time I have come across scenario B was with a VAIO laptop from around 2011. The machine was advertised to come with “fastest SSD on the market” which turned out to be four (!) off the shelf eMMC modules in RAID 0 through a hardware controller. As janky as it sounds, OS compatibility was never an issue because the controller was a fairly common model with well established driver support rather than some bespoke mystery.
The article pretty clearly lays out that it's at least functionally (b). The problem is Dell doesn't publish the drivers necessary for the Windows installer on its website. You can only reinstall Windows from the recovery partition or via online download through an EFI program, similar to Apple Recovery's online re-installation. Those install methods include all the Dell bloatware and telemetry settings cranked up to 11.
If Dell really wants, they can use Windows Platform Binary Table (WPBT) to install their bloatware on a clean install of windows too. I think most OEMs don't use that for all of their bloat though.
One trick that I have discovered that does wonders in a wide array of scenarios is to run the OS installation inside a virtual machine with a real drive connected.
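For example, on a Linux host with QEMU/KVM you can hand the target disk to the installer as a raw block device; a rough sketch (device path, ISO name, and memory size are placeholders, and a UEFI/Windows 11 guest will additionally want OVMF and TPM bits):

    # boot the Windows installer ISO against the machine's real NVMe drive
    sudo qemu-system-x86_64 -enable-kvm -cpu host -m 8G \
        -drive file=/dev/nvme0n1,format=raw \
        -cdrom Win11.iso -boot d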
Yes, it’s unfortunate that Dell didn’t put this specific driver on their website for this specific model. However, I think the simplest explanation is that Dell is getting lazy with their driver coverage on their website, not some conspiracy to slowly boil the industry alive.
Use the common tools for exporting a driver from the OEM media if you can’t find it online.
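On a machine that shipped with the OEM image, PowerShell's Export-WindowsDriver does this in one shot; a sketch, with an arbitrary destination folder:

    # elevated PowerShell on the OEM install: dump all third-party drivers
    Export-WindowsDriver -Online -Destination C:\ExportedDrivers

    # or point it at a mounted offline image / recovery partition instead
    Export-WindowsDriver -Path D:\ -Destination C:\ExportedDrivers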
The Dell website does provide the RST drivers. For the Inspiron Laptop 16, the RST drivers are found under the driver category, not the firmware category (which is what the screenshot shows in the article). For the desktop, the driver appears under Recommended drivers.
That being said, I recently did a dual boot install (on an MSI motherboard) with Windows and Pop!_OS and the Linux install went much smoother than Windows. For Windows I had to get the RST drivers and run the setup exe in extract mode in order to get drivers that worked during initial install. And Windows also didn't even include an Ethernet driver by default.
This is only true for very sophisticated computing. From my point of view, general purpose computing is alive and every bit as free as the good old days. The only barriers are points of view.
Here is just one example of computing that has no limits on problem solving. The only limits are scale, and what good is scale anyway? Problems solved at scale are too general and overly complex for most individuals' problems.
I consider myself a mediocre geek and a big fan of the look and feel of the DELL Optiplex 7020 SFF. Without diving into too many details, I typically buy around 3-4 of them yearly (i5 CPU, 16 GB RAM, 256 GB disk) and add an additional 1 TB drive. I always replace the OS with Ubuntu.
Do you have any recommendations for an alternative machine? I’m looking for a solid, reliable workhorse that could serve for years. My current DELL setup has never failed or let me down, and I’d like to find something within a similar budget that I can rely on for constant use for years.
There's no such thing as "fast enough" for Rust compilation. I'm putting the AMD CPU, 96 GB RAM, and a fast NVMe all to good use and still have time for a sword fight.
It's probably about $3M to design, manufacture, and bring to market a modern PC mainboard with semi-custom firmware today. There's no money in making an open and hackable system today because the dozens of customers simply won't make back the up-front cost. Look at Raptor Computing: super open, super focused on hackability, but selling in the dozens to hundreds of units because their customer is a niche market.
When they came out with the POWER9 systems originally the price to performance wasn’t that bad. It’s been a while since then. But also they seem to have known their market so priced it to hopefully be a sustainable business to recover the engineering costs.
Losing money to enter a market requires a lot of money. This generally only happens in a big company, which is the current market where no one cares about being open or hackable, or VC funding, which has historically been anti-open and anti-hackable in the long run. VC money has made first or second generation products which are in that direction but longer term has not had a great record.
I think Raptor’s plan was/is the right one but the market of consumers who will pay extra to get that openness is quite small.
Framework is close. They’ve had a LOT of VC money.
It boggles my mind that the entirety of the hardware world cannot accept the simple concept of making computer hardware capable of booting and running <latest open source operating systems literally purpose-built to take advantage of general hardware standards the Hardware Manufacturers WERE ALREADY MAKING for Microsoft Windows (TM)>
I mean, it's not like Linux didn't spend 30 frikkin' years specifically aping hardware that carried the WIntel brand from the IBM PC with MS-DOS up to WindozeWhatever13 or whatever they are up to now...
I cannot fathom the strange incentivization schema that would cause the entirety of the existing mass-retail OEM computer production to be "Windows ONLY" and actively contributing to literally selling LESS of their own units via efforts to lock their own hardware against running your own damned OS upon it!
I buy hardware I can run Linux on. I have no use for Microsoft Windows in any capacity for any purpose, period. end of story.
Hardware Manufacturers please take note! (I remain puzzled... Dell even sells some preloaded Linux laptops, IIRC... what gives, Dell?)
> Dell even sells some preloaded Linux laptops, IIRC... what gives, Dell?
What gives is that the Developer Edition gives them permission to make the other models 110% Windows-only because "if you want Linux just buy the Developer Edition". In their minds there is no reason to make the normal edition support Linux.
Who do you buy from when they're all bad? I don't think buying a Linux laptop is a serious option even for most enthusiasts. The software, hardware, and battery life just aren't there.
It is quite a bit worse in practice depending on what you do.
For example, with web browsers, most Linux ones (including Chrome and Firefox) disable much of hardware accelerated graphics on Linux by default because of "unstable graphics drivers". If I remember correctly, Chrome just disables it across the board, while Firefox has a whitelist of drivers considered stable which is basically just Intel.
In my testing, unless you muck around with these settings, you can easily lose something like an hour of battery life compared to Windows if all you do is just browse websites (and I'm not even talking about anything fancy here; my automated test was literally just scrolling the main Reddit feed).
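If you do want to muck around, the knobs I mean are roughly these (pref and flag names move between browser versions, so treat them as examples rather than a recipe):

    # Firefox about:config
    media.ffmpeg.vaapi.enabled = true    # VA-API video decode
    gfx.webrender.all = true             # force GPU compositing even off the whitelist

    # Chromium/Chrome launch flags
    chromium --ignore-gpu-blocklist --enable-features=VaapiVideoDecoder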
I had one of the early Project Sputnik laptops (XPS 13 with Ubuntu preloaded) and loved it. Now I use a Framework which is fine, but not anything amazing.
I'm thinking either Framework or Lenovo. It might be interesting to see what good arm laptops will be available next year when I might finally decide to upgrade my current machine.
> The software, hardware, and battery life just aren’t there.
Writing this from a Framework 13. I've been using Linux laptops for the past 15 years, and this is the first time a Linux-friendly machine has checked all my boxes (including excellent battery life).
I think it's just some lingering propaganda of those who oppose adoption of Linux. Things are pretty usable and are there for those who want to use them. Not necessarily Dell though - haven't used them in a long time.
> Who do you buy from when they're all bad? I don't think buying a Linux laptop is a serious option even for most enthusiasts. The software, hardware, and battery life just aren't there.
I'm writing to you from a $250 laptop running Mint (with 11 hours of charge remaining) to tell you that this is pure, unadulterated FUD from 20 years ago.
Lenovo certifies their systems for use with both RHEL and Ubuntu [1], which means that most mainstream distros will work on contemporary, high-end business ultrabooks that are widely regarded as some of the best in their class. Arch has a lush, green compatibility matrix [2] for these laptops too. Once a year on average I buy a four-year-old T series and throw Mint on it for a friend or relative and everything works out of the box. This has been the case for at least a decade. If you don't want to buy a Thinkpad then the acclaimed XPS 13 is one of many [3] Dell laptops that comes pre-loaded with Ubuntu. Others in this thread have also pointed out the numerous vendors that ship Linux on rebadged barebones hardware.
We need to cut it out with the "desktop Linux is hard" meme. It's a counterproductive mind virus. It hasn't been true since the Bush administration and almost always amounts to some hand-wavey comment like the above. The hoops you have to jump through with modern Windows a la Group Policy hacks, TPM requirement bypass, selecting the exactly-correct LTSC version, etc. are consistently greater than anything you have to deal with on modern desktop Linux in my experience.
> Don’t buy shitty products, or soon there will be no non-shitty products
The only way to fight enshittification is government regulation.
Otherwise, for the 0.1% of the market able to make an informed choice, there remains the other 99.9% of consumers ready to buy the cheapest thing and then shrug or whine.
I think we need more details: do you mean TVs aren't getting worse, or are? Why? What examples do you have?
I think LG's webOS is nice - it's consumer friendly, but as a developer, you can whip something up and install it on your TV easily in an afternoon. OTOH, I believe not all the APIs are documented, as some of the interfaces provided to/from Netflix/Prime Video/Disney+ don't appear to be possible with documented APIs, and of course, from a hardware perspective, they are completely locked in.
If capitalism means manipulating people to make choices that are against their own best interests or eliminating other choices that have better utility, then I’m on board.
But really, capitalism properly managed shouldn't be about figuring out how to profit by making things worse; it should be about profiting by solving problems.
When a business profits by creating problems rather than solving them, it is a parasite on society. Such businesses are effectively criminal enterprises, and should not be allowed to exist.
The whole point of capitalism is for a minority with the means to profit off of the majority without the means. It doesn't care about the quality of products as long as they make money. Since owning all the means of production means owning the power, capitalism will never self-regulate: it doesn't have the interests of the majority in mind.
"Capitalism properly managed" isn't capitalism anymore, so if that's what we want (and I agree with you) then let's use proper terminology. Workers-owned coops, democratically-chosen regulations, communities governed by and building for themselves,federating between each other, the stuff
Dell's rack servers are some of the rare servers that refused to die under continuous heavy load (HPC).
Their R815s, after eliminating the early failures (the leading edge of the bathtub curve), just trucked on. I used to run an OpenStack cluster on top of them until last April or so. At the end, either the CPU power regulators died, or their RAID cards just called it quits. No other errors after 10+ years of service.
We also have their newer CPU and GPU servers. They just work. Scream occasionally due to high load, but not overheating or dying.
Counterpoint: I've been running a Dell R430 rack server for nearly ten years -- in that time a single SSD in an 8-disk RAID 10 array has failed.
Maybe I got lucky?
Similarly, my last three laptops have all been Dell Precisions -- the only issue, until I switched to the Intel integrated GPU, was Nvidia on Linux (black screens, laptop attempting liftoff due to GPU heating issues) causing periodic grief.
Also, for the author, Dell Precision provides advanced BIOS options out of the box, something that their consumer line of laptops probably doesn't offer.
We had 4 Dell Itanium racks (circa 2003) all fail with the same exact power-supply overvoltage issue over the course of 4 months. Maybe we got unlucky.
Dell's business laptops are pretty solid, as are their tiny form factor PCs. I'd advise against Dell in regards to anything geared towards consumers, but there are many good things in their business offerings.
Previous gen Dell laptops at my workplace had issues with expanding batteries. Newer ones don’t, but are made from really cheap plastic, and have bad battery life.
I would definitely avoid their laptops and get Thinkpads instead.