As an aside, the part about how ROMs are writeable via manufacturer tricks is what makes "amateur" reverse engineering of drivers a potentially costly affair.
You never know when you pop a hidden write trigger, and subsequently fill the ROM of your expensive hardware with garbage.
I recall reading about such an incident involving a DVD burner, where the OEM had reused a seldom-used signal (stupid on their part, but still) as the trigger for a firmware update...
I believe that was Plextor drives, although I can't find a reference.
Anyway, smart manufacturers these days put some sort of signature on the firmware (even just a CRC will do), and don't write updates to flash unless the signature checks out. That makes it much more difficult to accidentally trigger an update.
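A toy sketch of that kind of check, assuming a hypothetical firmware image whose last four bytes are a CRC-32 trailer over the payload (the field layout here is invented for illustration; real vendors each have their own format):

```python
import zlib

def firmware_crc_ok(image: bytes) -> bool:
    """Accept the image only if its trailing 4-byte little-endian
    CRC-32 matches a CRC computed over the preceding payload."""
    if len(image) < 4:
        return False
    payload = image[:-4]
    stored = int.from_bytes(image[-4:], "little")
    return zlib.crc32(payload) == stored

def make_image(payload: bytes) -> bytes:
    """Append the CRC-32 trailer the checker above expects."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

good = make_image(b"\x90" * 64)           # well-formed image
bad = good[:-1] + bytes([good[-1] ^ 1])   # one flipped bit in the trailer
assert firmware_crc_ok(good)
assert not firmware_crc_ok(bad)
```

With a gate like this in the update path, random garbage written through an accidentally-triggered update almost never validates, so the flash write is refused. (A CRC only guards against accidents, of course; a cryptographic signature is needed to stop a deliberate attacker.)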
It's quicker to write a stupid enhancement request than it is to write a response explaining why that idea is bad. Project outsiders are not entitled to a well-argued response on why something is going to stay the way it is. They are entitled to nothing unless they are paying for support.
That said, these rants themselves take much more time to prepare than a simple "we're not going to do that, sorry". Theo strikes me as someone who takes joy from hurling insults, rather than simply being busy and therefore curt.
The initial poster really does deserve some rebuke. People should not be posting messages/requests that require time to review, if they haven't even spent a small amount of time to check if their opinions/requests have any basis in reality.
This kind of OP is terrible for any community. It's one thing to not understand something ... it's another thing to write messages as if you deeply understand the problems in the space. OP is making all kinds of elaborate requests about what the project should do ... based on what? What made him decide to make these requests? Think about that. It's just ridiculous that someone could sniff some little bit of scent somewhere about something, and then make sweeping prescriptions about what should be done for the project.
I find it slightly amusing that even his relatively harmless answers to relatively harmless questions are full of hate and rudeness. And can still be informative!
In this case at least, I can empathize.
But I have no sympathy for this and it certainly isn't one of the more constructive ways the matter might be handled.
Right, all I see is him coming off as an arrogant dickhead. The initial email from the user was nicely written and expressed the point clearly. De Raadt just started insulting the guy out of the blue. Yes, he did make a mistake and a bad point (we all do sometimes, what's the big fucking deal?), but he was very courteous.
Is it cool these days for open source OS/kernel development efforts to be run by these self-indulgent egotists?
It's not at all courteous for idiots to wander into mailing lists and effectively steal a minute or more from every reader of the list. Now multiply this by the number of idiots in the world.
-- someone who has had to triage an email list for an open source project
The real self-indulgent egotist is the person writing the initial email to the mailing list. At the very least he/she could have spent the time to research facts. The majority of Theo's response is debunking things that could have been answered with a minimal amount of effort.
It boils down to the ol' pragmatic-versus-ethical debate again.
a) Pragmatism dictates that in order to use the `new shiny' we use non-free firmware.
b) Ethical concerns dictate that no amount of `new shiny' is tempting enough to compromise our ideals.
I guess there are points in between and maybe these are poles in the debate with yer Stallman-types tending towards (b) and yer Torvalds-types tending towards (a).
I'd like a world without scary binary blobs and the Stallman in me chides me on my bad decision-making and lack of character. On the other hand, ooh look at the new shiny.
I think this is news because we had open hardware (or at least, we had fully-documented GPUs that did not require firmware blobs), and with Skylake we will not.
Almost surely it did, since software is eating the world.
However, the question I care about is "does the original manufacturer have more power over the physical device I paid for than I do". If my Ivy Bridge GPU has on-board firmware, that I can't change, well, Intel probably can't change it either. Or if they can, they haven't documented it and so the kernel driver doesn't provide the facility and so effectively they can't, short of NSA-type hackery, an answer I am happy enough to truncate to "no".
However, for Skylake the answer to "Does the original developer have more power over the device I paid for than I do" is "very obviously yes" and no amount of handwaving or approximation will suffice.
This is not to mention that even if it were open source, it is not likely that the tool chain used to create it is even available, let alone open source. Firmware blobs are just a fact of life, for now at least.
Intel has licensed GPU designs from third-parties before, but typically only for their low-power (Atom) chips where they just didn't have the technology in-house.
Here, we're talking about a new micro-architecture for Intel's premium product line; I would be very surprised to hear Intel licensed anything in the design from third-parties. If Intel wanted their tool-chain to be available, they could make it so.
> GuC is designed to perform graphics workload scheduling on the various graphics parallel engines. In this scheduling model, host software submits work through one of the 256 graphics doorbells and this invokes the scheduling operation on the appropriate graphics engine.
I'm highly suspicious this idea is patent encumbered.
I am willing to tolerate custom firmware in hardware that lives behind an IOMMU and under an OS that uses it. That will at least make it unlikely that something scary lives in there and collects my data. Without an IOMMU in use, the entire RAM is fair game, and, with some creativity, the network card buffers to sneak the data out.
I've never understood this. Isn't it always possible to detect unsanctioned network traffic? Sure, most people don't look. But all it takes is one person to spot an errant packet and a malicious actor's cover is blown. What purpose could possibly be served by putting code into firmware that gives itself away by generating network traffic?
Hacking firmware is another matter. But a vendor wittingly distributing malicious firmware that generates network traffic? It doesn't make sense. Of course, it's different if it's for some sensitive piece of machinery and the vendor has been compromised. But then, if you're buying sensitive parts, maybe you should be extra cautious to ensure they operate as intended. Consumer hardware, though? I'm not seeing it. Call me naive or not tin-foil-hatty enough :)
You don't have to create additional packets to transmit additional information. You also don't have to transmit by default. So the detection model isn't "anyone dumps packets", it's "someone actively being monitored knows exactly what to look for".
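A toy illustration of a covert channel that rides on traffic that would be sent anyway, by reusing the IPv4 Identification field (bytes 4-5 of the header). The packet count and sizes are unchanged, so there are no "extra" packets to spot. This is a teaching sketch, not a working exfiltration tool: it operates on a bare 20-byte header, and a real packet would also need its header checksum recomputed.

```python
def embed_byte(ip_header: bytes, secret: int) -> bytearray:
    """Hide one secret byte in the low byte of the 16-bit IPv4
    Identification field (header offset 5, network byte order).
    Returns a modified copy; the header length is unchanged."""
    out = bytearray(ip_header)
    out[5] = secret & 0xFF
    return out

def extract_byte(ip_header: bytes) -> int:
    """Recover the hidden byte from a header tagged by embed_byte."""
    return ip_header[5]

# Tag an otherwise-ordinary (here: zeroed, 20-byte) header.
header = bytearray(20)
tagged = embed_byte(header, 0xA5)
assert extract_byte(tagged) == 0xA5
assert len(tagged) == len(header)   # nothing extra on the wire
```

Since many OS network stacks already fill the ID field with values that look arbitrary, a receiver who isn't specifically correlating those values against the host's normal behavior has little to notice.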
x86 system management mode is not something you can turn off. It is there to protect motherboard firmware code that runs on the CPU. That code does things like emulate PS/2 mouse and keyboard. See chapter 34 in [1].
SMM is not for enterprise sysadmins. They use other hardware and software for systems management.
> The future is manufacturers—of devices ranging from phones, to laptops, to cars—being the centralized ops staff for all the devices they make.
That's open to so many forms of abuse. Extra-judicial punishment by government agencies and companies, hackers wiping your devices for the lulz, disgruntled employees, ex-spouses...
It's really too bad, but Linux has failed as a consumer OS. As an example, most of us use laptops, and Linux power management on a laptop is horrendous, with battery life of 50% or less compared to Windows. And the consumer desktop UIs are a joke, and the programmers have no interest in improving them in a consistent, strategic manner. There is no distinctive driving force with a vision. It's no wonder graphics chip manufacturers don't see it as worth their time to provide source code.
I am sure everyone is tired of this argument. But keep hoping - we love you for it.
But on the server side it's great. The majority of my servers run some form of Linux. I used to run FreeBSD but have moved to Linux.
Android is a Linux/Java-ish OS, with some conventional Linux non-UI userland thrown in and abstracted by the middleware layer.
The entire Android UI layer is open source, from the graphics stack to the window manager to the UI widgets. A commercial venture run by an autocrat developed it, but there it is, with a permissive open source license. I'd argue that Android 5.x is the most elegant and sophisticated UI and app environment available today.
I would also argue that alternative open source Android distros like CyanogenMod are more relevant to more users than Ubuntu.
The main "problem," if you want to call it that, is that Android doesn't play nice with legacy Linux desktop software. You can't easily have Android and a Linux desktop in the same device. If you WANT it badly enough, I could point you to some people who could deliver exactly that. It takes some doing to have a merged UI and graphics stack other than by using an emulator, but it's been done.
If you're looking for consistently improving desktop with a good design vision, Gnome 3 is a good choice. I've been using it for about 3 years and it has never been better.
That's kind of your personal point of view. Highly subjective matter. I, for instance, like xfce — it's not perfect, but far better than anything else I tried. Some people, I know, like KDE. I tried to get used to it for half a year to "get a feel of it", but never understood how it's even usable.
I mean, you'd better not pass off opinions like that one as "a clever lifehack".
Also, Android doesn't exist, and doesn't work as a consumer OS?
On a more serious note, I can't really understand the claim about a big difference in power-saving between Windows and (GNU/)Linux. It's not been an issue on my (admittedly rather old) devices: one ThinkPad T420s (whose battery could use a replacement, due to age) and an ageing netbook.
I've never run tests, but I've heard about "bad Linux battery time" many times, so I really wonder if it's true. And if it is: is it fixable, or is it some almost-inherent flaw?
[1] https://marc.info/?l=openbsd-misc&m=143354954711286&w=2
[2] https://marc.info/?l=openbsd-misc&m=143355112811564&w=2