Linux has a different model than Windows: it doesn't really have separate drivers. This has a number of upsides and downsides; let me mention just one.
Windows used to have a really bad reputation for stability. I'm sure a lot of this was Windows' own fault, but a lot of it was bad third-party drivers crashing.
You might remember that Nvidia does have a driver for Linux. The approach they took isn't really supported, which is one of the reasons graphics drivers are always such a nightmare under Linux.
If your ThinkPad is new, you are likely not even using the Nvidia GPU or its driver (nouveau), or you're using just the Nvidia GPU if you have discrete mode. For older designs with weird wiring and Optimus, nouveau is very problematic for even basic use with external screens. Problems range from screen tearing (reverse PRIME, no solution) to instability, poor performance, and fundamental usability issues such as the screen not coming back on after suspend. Not that this is nouveau's fault; it is a reverse-engineered driver, after all.
AMD should, in theory, be quite good these days, especially with a new kernel. Note that the AMDGPU-PRO drivers are generally not recommended despite the word "pro" in the name. The newer AMD drivers are all free and are being merged upstream continuously; PRO (AFAIK) is the older proprietary driver with specific optimisations for things like CAD, but otherwise it doesn't have much use these days.
These days I use AMD, and I just haven't had to think about it. Before this I had Nvidia, and even there the open drivers were fine until I wanted to play games.
Interestingly, Windows improved its stability problem through a combination of moving drivers into userspace and formal verification. Unlike on Linux, when your video driver crashes it usually just results in a momentary blank screen instead of a core dump.
I would not say that's true. A crashed GPU driver on Linux will most likely leave you with a locked-up screen that does not accept any user input.
On the contrary, Windows GPU (WDDM) drivers are restartable and can recover from crashes without even restarting the user's graphical apps. This has been the case since at least Windows 7.
One upside is simply the fact that you don’t need to hunt down drivers and install them.
I’ve had to install wifi drivers for a Windows machine before. Hunting for the right driver, downloading it and transferring it from my phone was quite a pain, as opposed to the wifi just working on Linux.
That's not entirely true. It works that way because your distro maintainer built a kernel with nearly every module turned on to support the widest range of hardware out of the box.
I've found that it still happens almost every time I try to install a consumer Windows edition on server hardware, or server Windows on consumer hardware, even for Intel Ethernet. And it's often necessary to edit an INF file so that a perfectly functional driver won't refuse to load on such an "unsupported" configuration.
It does support separate drivers in the form of out-of-tree kernel modules (e.g. via DKMS). But this is about default upstream support, and good upstream support is always preferable to a chaotic DKMS mess.
You could've just used DKMS to build the kernel module on 4.18 and the like [1] (unless you run Fedora, which has the driver baked into the kernel; then you've got to recompile your own kernel...).
Heck, you could've used the Apple Magic Trackpad 2 on previous kernels. It's just that the multitouch, which is the great thing about the device, wouldn't work.
FWIW, the device doesn't work out of the box with Windows either.
They are drivers, as others have mentioned. However, I disagree with those calling it stupid or a mistake. Since Linux and most of its drivers are open source, developing Linux together with its drivers is really not that bad of an idea in my opinion. It allows the API to evolve without making your old hardware stop working. This is definitely not a theoretical concern: a lot of old hardware I had, such as scanners, never worked beyond Windows XP because the drivers were simply not compatible.
The fact that there is no stable ABI is a downside but also a benefit of this model. The ABI can always be improved, but often at the cost of breaking so-called out-of-tree drivers (Nvidia's proprietary driver, VMware's networking and VM monitor drivers, etc.).
Linux doesn't have a stable driver interface. There's no way to write a driver that works on a few different versions of Linux. The way you are "supposed" to do it according to the kernel developers is to open source your driver code and convince the kernel developers to merge it into the kernel itself.
Yes, this is a stupid design. They do it because it means they don't have to worry about backwards compatibility or API stability at all, which is a nice thing not to have to worry about. But it comes at the cost of bad hardware support in Linux (and difficult-to-update Android phones).
They do it so that anyone can improve the state of the drivers, not just programmers with apple.com email addresses working in secrecy and dumping a shit binary over the fence.
It is a driver. It's just that it is released for production with this version. Linux bundles most drivers, unlike say a microkernel, which would hand the job off to userspace.
True. I would just add a few notes for non-Linux users.
- Drivers are bundled with the kernel but they're loaded dynamically when requested, i.e. supporting more devices doesn't make the kernel any bigger or slower. The Linux kernel, mostly static in the beginning, became more and more modular, and today, save for developers or early adopters, kernel rebuilds are very rare among users. Embedded boards aside, I don't recall having rebuilt a single kernel since 2.6 on normal PCs.
- Having drivers bundled in the kernel solves the problem of that piece of old hardware whose driver disk we lost and whose manufacturer's site resolves to nowhere because they're no longer in business. Caring about older hardware seems of no importance in the desktop PC business, but it's not uncommon in the industrial world, where one happily trades a 100x speed loss for a 10x reliability gain and there's still a lot of old, perfectly functioning iron out there.
- Drivers are brand-free. Unless specified, they support the chipset, not the hardware brand, and they definitely don't bundle other junk, which is one of the plagues of the Windows ecosystem, where 5 cards from 5 different manufacturers using the same chipset all come with their own set of drivers and an associated ton of rubbish, because the vendors fight to splatter their name on your desktop. Under Linux, if you have 5 cards from 5 manufacturers you need one small driver, and all software interfacing with standard device drivers can use all 5 cards. There's no such thing as "one card, one driver, one software". (Big exception for expensive niche proprietary hardware, of course.)
I don't see how including the driver inside the compiled kernel image can NOT make the resulting Linux OS bigger than not including the compiled driver.
External modules (on the filesystem) won't, but we're talking about the bundled ones, right?
I'm old enough to have played the embedded engineer using floppyfw on a spare 486 board screwed onto a piece of wood along with a 1.44 FDD and a power supply cannibalized from somewhere. That was my embedded development system nearly 20 years back :)
A bit later I ditched the floppy in favor of a spanking new 4 or 8 MB (megs, not gigs) flash parallel ATA "diskonmodule".
Just checked, I'm surprised that the floppyfw page is still there.
https://www.zelow.no/floppyfw/
Speaking of small distros, I gave both DietPi and Tiny Core Linux a try on virtual machines and was amazed at how well they perform.
Bundled doesn't mean statically linked; it would be plain dumb, if not impossible, to build every driver out there into the kernel image. Nowadays just about everything is dynamically loaded, so a driver doesn't impact kernel size and doesn't waste memory or CPU cycles unless it's loaded, which happens only when necessary (e.g. when a USB device is inserted).
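To make the "loaded only when necessary" part concrete, here's a rough sketch of the shape of a bundled USB driver (the vendor/product IDs are made up and the whole thing is a simplified illustration, not a real driver). The MODULE_DEVICE_TABLE line is what lets the module be loaded automatically when a matching device is plugged in:

    #include <linux/module.h>
    #include <linux/usb.h>

    /* Hypothetical IDs; a real driver lists the chips it actually supports. */
    static const struct usb_device_id example_ids[] = {
            { USB_DEVICE(0x1234, 0x5678) },
            { }     /* terminating entry */
    };
    /* Exported as a device-table alias so the module is auto-loaded on hotplug. */
    MODULE_DEVICE_TABLE(usb, example_ids);

    static int example_probe(struct usb_interface *intf,
                             const struct usb_device_id *id)
    {
            dev_info(&intf->dev, "example device plugged in\n");
            return 0;
    }

    static void example_disconnect(struct usb_interface *intf)
    {
            dev_info(&intf->dev, "example device removed\n");
    }

    static struct usb_driver example_driver = {
            .name       = "example",
            .id_table   = example_ids,
            .probe      = example_probe,
            .disconnect = example_disconnect,
    };
    module_usb_driver(example_driver);

    MODULE_LICENSE("GPL");

Until a device matching that table shows up, none of this code is in memory.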
While it's true that a microkernel would hand the job to userspace, this is unrelated to Linux's bundling of drivers.
Linux, unlike the Windows kernel (which is also not a microkernel), does not have a stable kernel API for drivers. This means that drivers that live outside the tree have to play catch-up to every change that the kernel devs make.
When a driver is in-kernel, the person that made the changes fixes the driver as part of their change.
So Linux really encourages drivers to become open-source and submit for inclusion in the mainline kernel, just to avoid the maintenance hassle.
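As a rough illustration of that catch-up (simplified, not taken from any real driver): out-of-tree code tends to accumulate version conditionals like the ones below, whereas an in-tree driver is simply updated in the same commit that changes the API.

    #include <linux/version.h>
    #include <linux/init.h>
    #include <linux/module.h>

    /* Hypothetical out-of-tree module coping with an upstream API change. */
    static int __init example_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 19, 0)
            /* use the newer interface introduced in this (hypothetical) release */
    #else
            /* fall back to the old interface on earlier kernels */
    #endif
            return 0;
    }

    static void __exit example_exit(void)
    {
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");

Multiply that by every kernel release a vendor wants to support and the maintenance cost adds up quickly.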
Some clever brain thought drivers should be part of the kernel. I personally think that's what has prevented Linux from conquering the desktop, because GPU vendors have no stable API to build upon. That is also the reason Android phones get no updates. It was a horrible decision.
Oh, THAT'S why Android gets so few updates... (pardon the sarcasm). It's all because of those damn Linux renegades, ruining software for the rest of us. Somebody ought to stop the Linux bullies preventing those poor corporations from updating their products.
> It's unreasonable to expect OEMs to keep updating the code every time the driver API is broken, especially when you have dozens of mobiles.
Nobody expects that. Once drivers are in the mainline, the OEMs don't have to do any work to keep them from getting broken when Linux devs decide to do the kind of refactoring and systemic improvements that are impossible on Windows. All Android OEMs have to do is git pull, make, and send the new OS image off to the same QA processes any other update needs before deployment.
The drivers aren't where the important trade secrets are. Those are all in the hardware or in the proprietary firmware that the driver has to upload to the hardware, or in a userspace blob as with AMD's GPU drivers. The code that actually runs in the kernel on the host CPU doesn't reveal any valuable IP given the way most hardware is designed these days.
They don't have to. They just need to open-source their drivers and mainline them, don't they? People will gladly maintain what Qualcomm et al. aren't willing to; they just need to abandon the binary-blob bullshit.
Yes, very unreasonable. I'm sure it's an enormous expense. Better to keep those devices 10 security updates behind to make sure the bloatware still works. I mean, they already paid, right? Why bother updating.
Neat news. I've been thinking about trying one of those out for a while now - I really want to find a pointing/clicking device other than a traditional mouse as my hands age, but nothing's properly worked yet.
On macOS, they are absolutely awesome. The Magic Trackpad 2 has a motor for haptic feedback. Applications like OmniGraffle use this to give a subtle vibration when you are dragging an object and it aligns with another object. So you can feel the alignments when you’re dragging.
I've been a fan of Apple's trackpads for a long time, but I was still surprised at how much the addition of haptic feedback improved them. Just getting away from the hinged click mechanism was nice, but being able to adjust in software the force required to click is a really great feature, and it doesn't require support at the application level.
Yes, I'm aware. I'm just clarifying that Force Touch, which is a separate but easily confused feature, requires support from applications. "Basic clicks" are handled by the OS, and the amount of force necessary for one isn't particularly important to apps.
I use the external Apple trackpad exclusively, it is in my opinion the only viable pointing device right now. Why Apple is the only company making these is beyond me.
I find a classic clicky-button mouse to be superior to trackpads (or Apple's weird no-button mice) and I am extremely happy that the whole industry isn't blindly copying Apple at this point.
Do you mean superior to trackpads generally or have you specifically tried and rejected the Magic Trackpad 2?
I ask because like the parent poster I love the Trackpad 2 on macOS but I hate most other trackpads when using them with Windows/Linux. Meanwhile I also hate using most mice I've tried on macOS including the Apple ones but am fine with mice on Windows/Linux.
I imagine there are probably tracking speed and acceleration settings that would make the unpleasant combinations work better but I usually give up after making a few simple tweaks.
I wasted a lot of time tweaking low-level Synaptics parameters in the config files when trying to make the Magic Trackpad work well for me on Linux, and I think I've exhausted my lifetime supply of patience for excessive pointer setting adjustments.
I still wonder how well it'd work without Apple's software to drive the thing. Apple's trackpads are always less magic when using Windows, for instance.
Yes, isn't it crazy how this seems to be something that is soooo hard to duplicate? I never feel the need for a mouse on my MacBook; switch to Windows or Linux and I immediately set out to find my mouse. (In contrast, external mice are just a pain on macOS: I feel the acceleration is way off, like I'm mousing through sticky mud, and the methods to adjust it were removed 5-6 versions of macOS ago.)
It actually works very well with large screens. I use a Magic Trackpad with my 27" iMac.
> I feel the old ball type was superior in this regard, you could rely on momentum and just give it a flick for large and quick movements
That's exactly how the pointer works on macOS, both for traditional mice and trackpads. That's what makes the Magic Trackpad work for a large screen: I can do a little flick to move the pointer across the screen, and I can do it with just a finger, not my whole wrist/arm.
At work, I started off using a mouse for my multi-monitor setup, but quickly switched back to a Magic Trackpad. I think it scales rather well, because macOS provides a nice acceleration curve to cross the screen quickly while still allowing for precise movement.
On the topic of the mouse, I suspect you (along with nearly everybody else I know) are used to Windows-style tracking. Me, I'm used to macOS-style acceleration, so using mice on other machines always feels inaccurate to me.
But back to the topic, it is rather interesting how hard trackpads are to replicate on other systems with the same level of quality as Apple's own trackpads under macOS. I suspect that's why one of the recent Windows 10 updates brought in consistent APIs for trackpads and multi-touch gestures.
If Microsoft are only just getting around to it now, I've my doubts there'd be any consistency on the matter over in Linux land. Not so much for lack of trying, but in a land of multiple desktop environments, each with their own ways of doing things, and the uncertainty of whether to keep improving X or focus everything on Wayland … well, I doubt using the Magic Trackpad 2 under Linux would be all too pleasant.
It's getting better, especially since the Ubuntu people took an interest, given that it's the only option on recent GNOME even under X11. There's even a gsettings option to disable tap-dragging, my personal pet peeve!
> But back to the topic, it is rather interesting how hard trackpads are to replicate on other systems with the same level of quality as Apple's own trackpads under macOS. I suspect that's why one of the recent Windows 10 updates brought in consistent APIs for trackpads and multi-touch gestures.
That's just as subjective as the regular mice. Personally I find their touchpads miserable to use compared to the competition.
I'm being absolutely genuine when I say "what competition"?
PC notebooks still ship with tiny, unresponsive things that make two-finger scrolling a mess and multi-touch gestures drop — apparently input, the one thing one does all the time with a computer, is the place to cheap out on.
On the external front, I've got a brand new wireless Logitech multi-touch trackpad right next to me whose responsiveness still doesn't hold a candle to what I was using on a PowerBook G4 back in 2004.
Then again, it's really hard to compare. Like I say, the hardware isn't what does the magic, it's the software. macOS has had multi-touch trackpad support done right since one of the later versions of 10.4, so we're talking something like 2006 or 2007. Microsoft doesn't seem to have taken the whole thing all that seriously until relatively recently with last year's Precision Trackpad hardware spec and APIs, and I suspect that's because as soon as Microsoft started doing their own Surface hardware, they realised (better late than never) that you can't rely on third parties to get this stuff right, so we'll see what comes of that.
> PC notebooks still ship with tiny, unresponsive things that make two-finger scrolling a mess and multi-touch gestures drop — apparently input, the one thing one does all the time with a computer, is the place to cheap out on.
Exactly those. I'd take them over Apple's syrupy mess any day.
Can you try to explain things more usefully than "syrupy mess"? Do you find that you cannot adjust the tracking speed to be fast enough for your liking? Do you want more or less acceleration? Are you experiencing unusually excessive input latency? Is it the physical texture of the trackpad surface that bothers you?
I replaced my 2016 MacBook Pro with a T480s and the touchpad is okay (running Ubuntu). I don't miss the old touchpad so much for pointing, but the two-finger right-click doesn't work for me on Ubuntu (tapping does, but not clicking), it's still the old-fashioned hinged design, and it doesn't have inertial scrolling, which is a real drawback. But the keyboard is so much better and the keycaps won't break and fall out. So it's still a win for me; I don't want to support Apple as a company and their planned obsolescence.
OT: out of curiosity, did you get the one with the FHD or the WQHD screen? I'm having nightmares trying to set it up with an external monitor and scale the UI properly in GNOME.
WQHD, but just for the color gamut. Using GNOME with 2x scaling and xrandr scaling to 1.25 to get an effective 1600x900 (I think). I don't have an external monitor.
Touchpads are great for clicking and scrolling, especially if you set them up to 'click' with just a tap. Alas, they're much worse for dragging, so e.g. graphic design is problematic. Even with dragging via double-tap-and-drag, supported by macOS, it's a big nuisance, especially dragging outside the touchpad area (which you can do by changing fingers mid-drag).
I've heard good things about vertical mice: they're supposed to keep the lower arm from being rotated unnaturally, and the hand position seems to be more relaxed. But I still have doubts about the clicking itself, as that seems to be the root of RSI.
In my experience, staying away from computers altogether works much better for the hands than any devices I've tried, which is not good news for a computer geek.
> Alas, it's much worse for dragging, so e.g. graphic design is problematic.
What kind of dragging operations are you talking about? Drag and drop operations and box selection are no more difficult on a Mac trackpad than with a mouse, and running out of trackpad area was pretty rare even on the pre-Touchbar generation before they almost doubled the size of their trackpads. Drawing might be harder, but neither device is at all good at that task; that's what Wacom is for.
I tried this recently and got fairly bad pain in my thumb (bottom towards the palm area). I’ve since switched back to just using my old vertical mouse and the pain is gone. It’s a shame because I did get used to it and quite like using the ball.
Maybe try a Kensington Expert Mouse [1]. I use these a lot. I'd suggest the wired version - the wireless one seemed to interfere with my Bluetooth keyboard (probably the keyboard's fault though).
I've used the Kensington SlimBlade for years and really love it. The only difference from the Expert Mouse is that instead of a scroll ring you just spin the ball in place. Completely eliminated the RSI I was getting in my mouse hand and is my favorite hardware for mouse-like functionality.
I've actually played quite a few FPS games with a Kensington trackball.
It takes some getting used to, but if you set the ball up so that a 180° rotation corresponds to the cursor travelling the width of the screen it seems to work well.
Disable acceleration on a trackball – whatever your use.
Linus is amazing. Is anyone else a bit unnerved that the entire tech world seems to hinge on the good instincts of this one guy? I hope there is another benevolent dictator to take over once he's gone.
Linus took a break during the previous release cycle and much of his work was taken up by Greg Kroah-Hartman. At this point the kernel development process is very established and hierarchical, and it seems to be in a good place for the long term.
I would not be worried in the slightest if GregKH had to take over. Not that I'd want Linus to leave, just sayin'. There's a few other good candidates among the lieutenants as well.
Thanks. I knew of Ts'o, of course, through his impressive work on the RNG, and was vaguely aware of some scandal involving him. Under the CoC, as I understand it, personal attacks are forbidden, and surely dredging up something from 2011 out of context counts as that.
Oh, and my use of "sharp" in my original comment was obv unintentional, how funny
Let's not go too far. Linux is still a hobby operating system grown into some bloated piece of crap that even its inventor is depressed about ;)... But yeah... if people are still afraid of BSD then you are somewhat right...
Hundreds of companies and professional software projects are dependent on Linux. I’m not sure I could call it a “hobby project”, no matter what its origins were.
Currently writing code targeting the perf_event_open system call. It's the nastiest thing I've ever had the displeasure of working with. clone() is similarly "interesting".
Glibc really does a lot of good work to hide the mess underneath.
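For anyone who hasn't seen it: there's no glibc wrapper, so you fill in a large perf_event_attr struct and invoke the raw syscall yourself. A minimal instruction-counting sketch in the spirit of the perf_event_open(2) man page example (error handling mostly omitted):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            struct perf_event_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.type = PERF_TYPE_HARDWARE;
            attr.size = sizeof(attr);
            attr.config = PERF_COUNT_HW_INSTRUCTIONS;
            attr.disabled = 1;
            attr.exclude_kernel = 1;
            attr.exclude_hv = 1;

            /* No libc wrapper: call the syscall directly.
               pid = 0, cpu = -1 means "this process, any CPU". */
            int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }

            ioctl(fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            /* ... the code you actually want to measure ... */
            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

            uint64_t count = 0;
            read(fd, &count, sizeof(count));
            printf("instructions: %llu\n", (unsigned long long)count);
            close(fd);
            return 0;
    }

And that's the easy single-counter case; event groups, sampling and the mmap'd ring buffer are where it gets properly nasty.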
Have you ever had the (dis)pleasure of porting to Windows? It's a pile of hot garbage that keeps on accumulating because of that oh-so-precious backwards compatibility; every single idiosyncrasy from thirty years ago lives on forever.
Yeah, I wrote Windows code for 10 years, and while it has its warts, I will say the ETW subsystem is much better thought out. The ntdll way of abstracting syscalls is also a lot nicer and something Linux should consider.
The biggest problem with Linux is it doesn't have a coherent design philosophy. So some subsystems are nice and others are horrendous. Knowledge of one subsystem may lead to misleading assumptions about another part of the kernel.
An example: the kernel supposedly doesn't have threads; they are just processes that share an address space. But of course other parts do in fact need to understand that there is one coherent bundle of threads composing this abstract idea of a process. So some places differentiate between thread IDs and process IDs and others mix them. Windows has its inconsistencies, but not with something as fundamental as a process.
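A small user-space illustration of that split (compile with -pthread; the raw syscall is used because glibc only gained a gettid() wrapper relatively recently): every thread reports the same getpid(), but each one has its own kernel task ID, and different kernel interfaces want one or the other.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Each thread is a schedulable kernel task with its own ID; getpid()
       returns the thread-group leader's ID, i.e. what user space calls
       the process ID. */
    static void *worker(void *arg)
    {
            (void)arg;
            printf("thread: pid=%d tid=%ld\n", getpid(), syscall(SYS_gettid));
            return NULL;
    }

    int main(void)
    {
            printf("main:   pid=%d tid=%ld\n", getpid(), syscall(SYS_gettid));
            pthread_t t;
            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            return 0;
    }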
You’ve obviously never tried to write performant I/O logic.
To see what I mean, try using epoll to manage a set of network connections. They apparently didn't consider the case where you have more than one CPU and also want to handle more than one network connection. Also, if you do get it to work without crashing on stale fds, you'll find it bottlenecks on a spin lock.
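For what it's worth, the usual mitigation I'm aware of is one epoll instance per worker thread, with the shared listening socket registered using EPOLLEXCLUSIVE (kernel 4.5+), or splitting the load across sockets with SO_REUSEPORT, though neither makes the locking overhead disappear. A rough sketch of the per-thread registration (error handling trimmed):

    #include <sys/epoll.h>
    #include <stdio.h>
    #include <unistd.h>

    /* One epoll instance per worker; EPOLLEXCLUSIVE wakes only one waiter
       per incoming connection instead of the whole herd of threads. */
    static int setup_worker_epoll(int listen_fd)
    {
            int epfd = epoll_create1(0);
            if (epfd < 0) {
                    perror("epoll_create1");
                    return -1;
            }

            struct epoll_event ev = {
                    .events = EPOLLIN | EPOLLEXCLUSIVE,
                    .data.fd = listen_fd,
            };
            if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
                    perror("epoll_ctl");
                    close(epfd);
                    return -1;
            }
            return epfd;  /* the worker loop then epoll_wait()s on this */
    }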
If you want to save some time and jump to the current state of the art, use DPDK or some other user space network driver + IP stack to completely bypass the kernel. :-(
No matter what we think Linux is or has become, it is still better than Windows. I was a Windows guy for many years and have no desire to return to the winblows world.
Linux is so close to convincing me to move (especially once I moved to Pop!_OS), but there are a few things that it is simply bad at. This is especially true for laptops.
My laptop is always hot enough to burn my balls off. This is after disabling my dedicated GPU, disabling turbo boost and underclocking the cpu.
Consequently the battery life is abysmal as well.
Lastly, trackpads seem to be worse on Linux across the board.
Those things are major sticking points on almost all Linux laptops.
I have been able to find near-perfect replacements for everything else. (Apart from some MS Office and enterprise software, but I can't blame Linux for that.)
Run “sudo powertop” and probably “man synaptics” (depends on the hardware) to find out how to fix power consumption and the trackpad, respectively.
In particular, in Ubuntu 18.04 (and maybe all of GNOME) they removed mouse/trackpad acceleration, so using pointing devices feels like drinking a pot of coffee and then working with your hand immersed in thick mud. There's a config file option / CLI to fix that somewhere. Same with the non-existent palm rejection.
I want a system that JUST WORKS. I don't have enough time in my day, with all of my other responsibilities, to figure out why something is slightly broken.
Corporate VPNs are the other one.
That's why I really don't mind running macOS at work: it works well, and the defaults are sane.
Laptops contain a lot of proprietary tech, and not all of it is well supported. It's not all bad, though: Chromebooks run some version of modified Linux just fine (and the trackpad of the original Pixel works great). My Dell M6800 (Ubuntu 'certified') is well supported and runs cool. It's age-old but often ignored advice to first consider one's needs and then choose fitting software and hardware. Instead people just go out and buy cheap or shiny and then wonder what to do with it ;-}
FWIW my nvidia laptop also became a heater running Ubuntu (which Pop!_OS is based on) but has been running like a cool dream since I installed Manjaro (Xfce, out-of-the-box non-free driver setup).
I have to say that in e.g. https://lkml.org/lkml/2018/12/22/221 he is extremely polite compared to 1 year ago. Of course it may sound harsh to some but I can see him trying.
I think that mail is entirely positive for the Linux kernel users. Clear unequivocal language expressing the policy and stance of the project. I can’t see any reasonable way to temper the harshness without removing a key element of the message.
Used it since 10.04 against over 100 servers and I agree it worked really well.
95% of the time it had the packages I needed, and the remaining 5% only meant the stuff I wanted to use was too new to have made it into that LTS's packages.
Unless you need 10-year-grade stability, at the cost of package count and freshness, and go with RHEL/CentOS, I don't think there's much of a reason not to use the Ubuntu Server edition.
Ubuntu Server is nice due to the commercial support that's available from Canonical, but Debian is also very widely used on the server. And with the newer releases (Debian Stretch and later) it's gaining potential as a desktop OS too.
What does decentralised mean in this context? There are possibly thousands of companies/projects/orgs releasing their own versions of the Linux kernel.
Plus, as a user, I trust Linus more than any decentralised process I can think of.
>Plus, as a user, I trust Linus more than any decentralised process I can think of.
Can you make a compelling argument for why that is? Is it that you believe Linus has your personal well being in mind or you are unfamiliar with decentralised "processes"?
edit: Questioning why someone trusts a person they never met over transparent processes that mitigate risks should not deserve downvotes. If we control what can be discussed, we also control what can be known...
> Can you make a compelling argument for why that is? Is it that you believe Linus has your personal well being in mind or you are unfamiliar with decentralised "processes"?
Yes: the project is successful, as successful as an open-source project can be. It has been working fine since 1992.
Why would you bring up my personal well-being? Does a decentralised process (whatever that actually means; it hasn't been defined at all yet) take my personal well-being into consideration???
Linus may have the final say on a lot of things, particularly regarding policy surrounding contributions, but "unchecked" isn't so much an overstatement as a joke. Vast numbers of contributors, security auditors, and generally interested hackers keep close eyes on the Linux kernel.
You should explain what you mean by decentralization. I think you are mixing concepts here and not bringing much to the discussion. What does "decentralization of open source" mean?
It seems you are saying that having multiple contributors and auditors keeps Linux safe. I wouldn't really call this "decentralization"; you can have a centralized process that is audited by many people.
I've tried to explain but the comment was blocked because I was replying too quickly.
Decentralisation has three dimensions: Political, Logical and Architectural. In the case of Linux, I argue that the decentralised nature of open-source software development is what guarantees its safety and usefulness and not the individual merits of any one participant (person or company).
I think I'm not saying anything contentious when I say that Linus's employer gets no special treatment in terms of the Linux development roadmap, that most contributions are voluntary, and that no one needs to ask permission to download the code and fork the project; this is logical and political decentralisation.
I believe this quote is appropriate given that some users seem to be unaware of how the kernel development process is decentralised:
"Instead of a roadmap, there are technical guidelines. Instead of a central resource allocation, there are persons an companies who all have a stake in the further development of the Linux kernel, quite independently from one another: People like Linus Torvalds and I don’t plan the kernel evolution. We don’t sit there and think up the roadmap for the next two years, then assign resources to the various new features. That's because we don’t have any resources. The resources are all owned by the various corporations who use and contribute to Linux, as well as by the various independent contributors
out there. It's those people who own the resources who decide."
- Andrew Morton on the kernel process
If someone has that many problems with Linus they're free to fork and maintain their own kernel source. If all someone wants is support not offered by the Linux kernel project, then there are plenty of options there as well: RHEL, SUSE, etc.
Absolutely agreed. That's one of the ways that political decentralisation in open-source manifests itself.
I've made the argument elsewhere in this thread that it's the decentralised nature of open-source that makes it safe to use and build on and not the personality or behaviour of any one person. Do you agree?
Can you make a compelling argument for why you would trust a random collection of unverified people on the Internet over someone with a well-known reputation that doesn't include anything bad (aside from being a bit of an arse occasionally, but at least he is honest about that!)?
That's not an argument. You're just naming a game theory concept. A compelling argument would at least explain why and how it applies to this situation.
Ok then, my argument is that decentralisation is a very ancient concept and that game theory exists because people with different world views, interests and goals have to negotiate with each other to survive.
Decentralisation is the process of ensuring there is no single point of failure and of minimising imbalances in power.
For instance, in my country, decentralisation would mean we had independent state law (like the US) and that most of our decision-making structures weren't physically and politically centred in the nation's capital. In technology, we can take the Linux kernel maintenance process as an example of how decentralisation makes it possible for Chinese, US and EU companies to fund the development process and use the software without (too much) fear.
No, but I am always trying to learn new topics and ever since I got professionally involved in blockchain I have picked up a couple of econ books but I wouldn't falsely represent myself as an econ expert. Why do you ask?
If you read the kernel mailing lists it's abundantly clear that Linus cares deeply about the quality of the kernel and rejects bullshit and corporate politics. He's exactly the kind of person I want to maintain a piece of technology that I rely on and trust every day.
I trust Linus because he put the user at a higher level of priority than anything else at a time when few people did, and over the years this decision has made userspace rock solid for me.
I replied to a comment about trust and decentralisation that are both topics I am very interested in, and I believe you replied to my comment. I fail to see the malice...
I agree with you, so I would like it if you could please show me what criticism I made exactly. I can't find one in this thread and don't understand why you suspect me of being malicious or my intent of being destructive.
I wholeheartedly disagree. Being angry (and making it known) at someone who broke the "first rule of kernel development" and then tried to hide that fact is completely acceptable. Attacking said person with personal insults isn't. I don't see any of the latter here, compared to Linus's past reactions to similar situations anyway.
What I meant is this: Linus is able to express himself just as well as before, but with different language, without the swearing.
Actually, instead of hot like before, this Linus is cold. Instead of raging, he coldly expresses his disappointment. In some ways, this feels more brutal than his old style.
Criticizing the mistake as "complete garbage" is great. You don't even need to read anything more to sense Linus is annoyed like he always is when shit hits the fan like this.
I agree with your sentiment, but in this case it’s not a personal attack. Linus, the person in charge, has standards for how certain choices should be made. The prevalence of Linux makes any breaking changes dire.
Linus identified that there was an unacceptable action taken and is being stern so that there can be no ambiguity in the future as to what the accepted action ought to be.
I believe that’s what I said, no? Maybe my phrasing wasn’t clear, but I said “I don’t see any of the latter here” with latter being “personal attacks”.
> Linus identified that there was an unacceptable action taken and is being stern so that there can be no ambiguity in the future as to what the accepted action ought to be.
Most of Linus's posts are polite, but the ones that get shared are the other ones. If you want to trigger him, just break the kernel ABI and then defend your decision; this never fails.
Anyway, it looks too polite, like he is filtering the email through a "polite" version of xkcd's Simple Writer: https://xkcd.com/simplewriter/
If I’m understanding you correctly, you are saying the time off allowed him to realize he could get the same point across without being as caustic as he was previously with Mauro. If so, I’m not sure why you are being downvoted.
> Best burn here is him flatly telling Eric that he won't be pulling anything from him until he fixes his attitude.
It's not the contributor's personal attitude that's in question here but his attitude towards breaking user space, so it's not even ad hominem or anything.
> Eric, I want to make this 1000% clear: there are no user space bugs.
> If it used to work, then user space was clearly doing the right thing.
> The fact that you tried to several times claim it was buggy user space is a serious breach of trust.
I find Linus' wording remarkably polite. Surely you're not suggesting that Linus should accept breakage of user space out of sheer politeness?
Erm, no. I was never suggesting that. Maybe I expressed myself imprecisely, then.
What I meant by attitude is exactly what you meant: the nonchalant attitude towards breaking userspace, and thinking that it's okay if no one notices.
Which is when Linus gets the most pissed of all.
Linus was polite, yes, but he expressed his anger and disappointment in a great way. Feels more brutal than his old style, actually.
Yes, before he sounded like a moody hacker you had to appease to get your code accepted into his pet project, now he comes across as the head of a multinational corporation who just ended your career.