Let's go to an alternative universe where Hurd was successful in the '90s and reached the kind of widespread usage that Linux has today.
You're Western Digital in 2008 and you're making a TV set-top box called the WDTV-Live. I own one of these in the real-life universe. It runs Linux, which is awesome, because that means I can SSH into it. It runs an Apache server in my home. It can download from Usenet or torrents. I can control it via SSH instead of using the remote control.[0]
In this alternative universe, WDLX is going to use Hurd instead of Linux, because for this small device it will certainly have better performance on their underpowered MIPS chip. And they're not going to ship anything besides what they have to, because this is a small embedded computer.
What happens to that homebrew community when they ship a microkernel with proprietary servers for everything, and nothing else? It's going to be profoundly difficult to develop on this. You might already see this if you own a Chromebook or a WDTV -- missing kernel modules mean that you simply can't do anything without compiling your own kernel. Couple this with Secure Boot and you're locked in.
I'm no expert on these things; most of this is based on brief research from years ago. If you think I'm wrong, please tell me why -- I'd love to be proven wrong. But for the time being, I believe widespread adoption of microkernels would be very anti-general-purpose computing.
The idea of the Hurd is that any user is able to run whatever server they want - GNU has been concentrating on microkernels not because they're the new hotness but because they believe it's a good architecture for more openness.
So presumably in this hypothetical case you'd be able to upload and run whatever additional servers you needed on the WDTV. You might say "but they might make it impossible to log in and do that", but they could have done the same under Linux just by not running sshd - however, they didn't.
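To make that concrete, the Hurd's mechanism for this is the translator: any user can attach a server to a filesystem node they own, no root required. Roughly, going from memory of the classic /hurd/hello demo (so treat the exact flags and output as illustrative rather than gospel):

    $ settrans -ca ~/hello /hurd/hello   # create the node and attach the hello translator
    $ cat ~/hello                        # reads on the node are now served by the translator
    Hello, world!
    $ settrans -g ~/hello                # ask the active translator to go away

The point being that on a hypothetical Hurd-based WDTV you could, in principle, attach your own filesystem or network servers the same way -- assuming the vendor leaves you a shell at all, which is the same caveat as sshd on Linux today.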
Your example doesn't make much sense because Hurd is the servers. The microkernel component itself is GNU Mach.
Shipping an embedded appliance with a microkernel and proprietary servers again makes no sense, because it's akin to rewriting userspace from scratch on top of the base VMM, scheduler and disk I/O. Just for a TV set-top box?
> And they're not going to ship anything besides what they have to, because this is a small embedded computer.
It happens already, to keep hardware costs down. The whole point of Linux is that you can pick and choose which userland services to ship... (SSH being a userland service)
> ship a microkernel with proprietary servers for everything, and nothing else?
What's to stop them now? Effort. It costs real money to create proprietary programs from scratch. One of the reasons they would have chosen Linux in the first place is that half the work is done for them (decoding libraries, network stacks, hardware interfaces, communications daemons).
I suppose you've never run into an Android device that didn't run ssh out of the box, or had a locked bootloader? Perhaps not distributed with full sources so you could easily modify the system?
This is the reason for AGPL/GPL3 -- not much of an argument against modular software/kernels.
> What happens to that homebrew community when they ship a microkernel with proprietary servers for everything, and nothing else?
In order to do what you did on your WDTV-Live, you flashed the firmware with a new image; otherwise you wouldn't have been able to. So even in the case of a microkernel, you would just flash over the pre-installed image with a new one (voiding the warranty).
But to get to the point: do you have any idea how much effort it would take for a corporation to write a reliable httpd server with Apache's capabilities, plugins, testing and support? Then write their own update system, DHCP client and so on? It would take a huge amount of $$$ and time, and most of it would probably be buggy. So either way they would have gone with Free Software if they wanted to stay in the current price range.
You'd achieve the same thing by building a proprietary userland, from the C library up, on top of the Linux kernel. As far as we know, no one has bothered (on a large scale) because it would be a massive waste of time and money.
The closest you'll find to this alternate universe is Android.
Where/how do the proprietary servers come in for a kernel that doesn't want to allow them (and therefore would go through no special effort to make them possible)?
Remember that the reason you can link against glibc is because it's LGPL and not GPL. The LGPL was created for a reason. There's also a reason why when the decision was made to release Java under the GPL, Sun explicitly added a linking exception. It's because that isn't something you just automatically get for free.
Isn't this dependent upon the microkernel's license? Wouldn't it be possible to use an open-source license which also explicitly forces servers which use it to also be covered under the same license?
But isn't that basically always the tradeoff - if you want security, you play by the rules of the company that built it? Is there even a theoretical way out of this?
Is it because you know that usage decisions of software are not based on technical merits? Or do you not want to be proved wrong? Or something else?