They emphasised architectural elegance over short-term practicality - i.e. do it right rather than do it quickly. The problem with operating systems is that you need a minimum set of working functionality before the system is at all useful to people who aren't developing the operating system itself. You need a way of storing data, a reasonable number of commands to run, editors, etc. On the hardware side there was immense turnover in the late eighties and nineties. Processor architectures soared and waned, it wasn't uncommon for CPU speeds to double, memory became steadily cheaper, peripherals changed (mice, CD-ROMs, floppies changed size, Zip drives, tape drives), and popular hardware buses changed (ISA 8/16 bit, ADB, VLB, PCI, PCMCIA, USB, serial, parallel). Just keeping up, without changing your operating system's core functionality, was a lot of work!
The GNU project tended to work very cathedral-like, while Linux was very bazaar-like. The latter meant people could support and update the kernel for their own devices ("crowdsourcing" in today's terminology). http://www.catb.org/esr/writings/homesteading/cathedral-baza...
To give you an idea of how many shortcuts Linux took: the original kernel used a single large address space, with each process slotted into 64MB of it. That left you with a maximum of 63 processes. A context switch just involved changing protections to enable the appropriate slot. This is far more lightweight than how Linux and other operating systems do it now, with no such process limit but a far more heavyweight full address-space switch. Back then it made Linux really fast at process switching. This is an example of "worse is better": http://en.wikipedia.org/wiki/Worse_is_better
What the detractors didn't realise is that the initial simplistic Linux implementation could evolve away its constraints; they were instead betting on up-front "intelligent design".