Linux has a different model than Windows: it doesn't really have separate drivers. This has a number of upsides and downsides; let me mention just one.
Windows used to have a really bad reputation for stability. I'm sure a lot of this was Windows's own fault, but a lot of it was badly written third-party drivers that crashed.
You might remember that Nvidia does have a driver for Linux. The approach they took isn't really supported, which is one of the reasons graphics drivers are always such a nightmare under Linux.
You are likely not even using the Nvidia GPU or its driver (nouveau) if your ThinkPad is new, or you are using only the Nvidia GPU if you have discrete mode. For older designs with odd wiring and Optimus, nouveau is very problematic even for basic use with external screens. Problems range from screen tearing (reverse PRIME, no solution) to instability, poor performance, and fundamental usability issues such as the screen not coming back on after suspend. Not that this is nouveau's fault; it is a reverse-engineered driver, after all.
AMD should, in theory, be quite good these days, especially with a recent kernel. Note that the AMD PRO drivers are generally not recommended despite the word "pro" in the name. The newer AMD drivers are all free and merged upstream continuously; PRO (AFAIK) is the older proprietary stack with specific optimisations for things like CAD, but otherwise it doesn't have much use these days.
These days I use AMD, and I just haven't had to think about it. Before this I had Nvidia, and even there the open drivers were fine until I wanted to play games.
Interestingly, Windows improved its stability problem through a combination of moving drivers into userspace and formal verification. Unlike on Linux, when your video driver crashes it usually just results in a momentary blank screen instead of a core dump.
I would not say that's true. A crashed GPU driver on Linux will most likely leave you with a locked-up screen that does not accept any user input.
On the contrary, Windows GPU (WDDM) drivers are restartable and can recover from crashes without even restarting the user's graphical apps. This has been so since at least Windows 7.
One upside is simply the fact that you don’t need to hunt down drivers and install them.
I’ve had to install wifi drivers for a Windows machine before. Hunting for the right driver, downloading it and transferring it from my phone was quite a pain, as opposed to the wifi just working on Linux.
That's not entirely true. It works that way because your distro maintainer built a kernel with nearly every module turned on to support the widest range of hardware out of the box.
I've found that it still happens almost every time I try to install a consumer Windows edition on server hardware, or server Windows on consumer hardware, even for Intel Ethernet. And it's often necessary to edit an INF file so that a perfectly functional driver won't refuse to load on such an "unsupported" configuration.
It does support separate drivers in the form of out-of-tree kernel modules, typically managed with DKMS (Dynamic Kernel Module Support). But this is about default upstream support, and good upstream support is always preferable to a chaotic DKMS mess.
You could've just used DKMS and gotten the kernel module on 4.18 and such [1] (unless you run Fedora, which bakes the driver into the kernel; then you've got to recompile your own kernel...).
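For readers who haven't used DKMS: the workflow looks roughly like the sketch below. The module name, version, and dkms.conf values are placeholders, not the actual driver discussed above.

```shell
# Hypothetical out-of-tree module; name and version are made up.
# The source lives in /usr/src/<name>-<version>/ with a dkms.conf like:
#
#   PACKAGE_NAME="hid-example"
#   PACKAGE_VERSION="1.0"
#   BUILT_MODULE_NAME[0]="hid-example"
#   DEST_MODULE_LOCATION[0]="/kernel/drivers/hid"
#   AUTOINSTALL="yes"
#
# Register the source, build against the running kernel's headers, install:
sudo dkms add -m hid-example -v 1.0
sudo dkms build -m hid-example -v 1.0
sudo dkms install -m hid-example -v 1.0
# With AUTOINSTALL="yes", dkms rebuilds the module for every new kernel.
# That rebuild is also where the pain lives: each kernel upgrade triggers
# a compile that can fail whenever an internal kernel API changes.
```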
Heck, you could've used the Apple Magic Trackpad 2 on previous kernels; it's just that multitouch wouldn't work. And multitouch is the whole point of the device.
FWIW, the device doesn't work out of the box with Windows either.
They are drivers, as others have mentioned. However, I disagree with those calling it stupid or a mistake. Since Linux and most of its drivers are open source, developing Linux together with its drivers is really not that bad of an idea in my opinion. It allows the API to evolve without making your old hardware stop working. This is not a theoretical concern: a lot of old hardware I had, such as scanners, never worked beyond Windows XP because the drivers were simply not compatible.
The lack of a stable ABI is a downside but also a benefit of this model. The ABI can always be improved, but often at the cost of breaking so-called out-of-tree drivers (Nvidia's proprietary driver, VMware's networking and VM monitor drivers, etc.).
Linux doesn't have a stable driver interface: there's no reliable way to write a binary driver that works across several different kernel versions. The way you are "supposed" to do it, according to the kernel developers, is to open-source your driver code and convince them to merge it into the kernel itself.
Yes, this is a stupid design. They do it because it means they don't have to worry about backwards compatibility or API stability at all, which is a nice thing not to have to worry about. But it comes at the cost of bad hardware support in Linux (and difficult-to-update Android phones).
They do it so that anyone can improve the state of the drivers, not just programmers with apple.com email addresses working in secrecy and dumping a shit binary over the fence.
It is a driver; it's just released with this kernel version. Linux bundles most drivers, unlike, say, a microkernel, which would hand the job off to userspace.
True. I would just add a few notes for non-Linux users.
- Drivers are bundled with the kernel but loaded dynamically on request, i.e. supporting more devices doesn't make the kernel any bigger or slower. The Linux kernel, mostly static in the beginning, became more and more modular, and today, save for developers or early adopters, kernel rebuilds are very rare among users. Embedded boards aside, I don't recall having rebuilt a single kernel since 2.6 on normal PCs.
- Having drivers bundled with the kernel solves the problem of that piece of old hardware whose driver disk we lost and whose manufacturer's site resolves to nowhere because they're no longer in business. Caring about older hardware seems of no importance in the desktop PC business, but it's not uncommon in the industrial world, where one happily trades a 100x speed loss for a 10x reliability gain and there's still a lot of old, perfectly functioning iron out there.
- Drivers are brand-free. Unless specified, they support the chipset, not the hardware brand, and they definitely don't bundle other junk. That is one of the plagues of the Windows ecosystem, where 5 cards from 5 different manufacturers using the same chipset all come with their own driver packages and an associated ton of rubbish, because the vendors fight to splatter their name on your desktop. Under Linux, if you have 5 cards from 5 manufacturers you need one small driver, and all software interfacing to the standard device driver can use all 5 cards. There's no such thing as "one card, one driver, one software". (Big exception for expensive niche proprietary hardware, of course.)
I don't see how including a driver inside the compiled kernel image can NOT make the resulting image bigger than leaving it out.
External modules (on the fs) won't, but we're talking about the bundled ones, right?
I'm old enough to have played the embedded engineer using floppyfw on a spare 486 board screwed on a piece of wood along with a 1.44 fdd and a power supply cannibalized from somewhere. That was my embedded development system nearly 20 years back:)
A bit later I ditched the floppy in favor of a spanking new 4 or 8 MB (megs, not gigs) flash parallel ATA "diskonmodule".
Just checked, I'm surprised that the floppyfw page is still there.
https://www.zelow.no/floppyfw/
Speaking of small distros, I gave both DietPi and Tiny Core Linux a try on virtual machines and was amazed at how well they perform.
Bundled doesn't mean statically linked; it would be plain dumb, if not impossible, to build every driver out there into the kernel image. Nowadays just about everything is dynamically loaded, so it doesn't impact the kernel size and doesn't waste memory or CPU cycles unless loaded, which happens only when necessary (e.g. when a USB device is plugged in).
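This distinction is exactly what the kernel's .config encodes: `=y` compiles a driver into the image, `=m` builds it as a separate loadable module. A toy fragment, with the option values made up purely for illustration:

```shell
# Write a sample .config fragment (hypothetical driver selections):
cat > sample.config <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_E1000E=m
CONFIG_SND_HDA_INTEL=m
# CONFIG_DRM_NOUVEAU is not set
EOF
# Only the =y entries grow the kernel image itself; each =m entry
# becomes a separate .ko file, loaded on demand:
echo "built-in: $(grep -c '=y$' sample.config)"   # prints "built-in: 1"
echo "modular:  $(grep -c '=m$' sample.config)"   # prints "modular:  2"
```

So "bundled with the kernel" usually means "shipped as `=m` modules alongside the kernel package", not "linked into the image".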
While it's true that a microkernel would hand the job to userspace, this is unrelated to Linux's bundling of drivers.
Linux, unlike the Windows kernel (which is also not a microkernel), does not have a stable kernel API for drivers. This means that drivers that live outside the tree have to play catch-up to every change that the kernel devs make.
When a driver is in-kernel, the person that made the changes fixes the driver as part of their change.
So Linux really encourages drivers to become open-source and submit for inclusion in the mainline kernel, just to avoid the maintenance hassle.
Why aren't these things a driver?