Wow, from the queue presentation this OS looks even less usable than DOS (MS aka Q). Why would the CLI terminate when it runs a programme? How do you run more than one programme at once? If each core polls the run queue, does that mean it busy-cycles when there is no more work to do? Won't this waste energy? (A sin in the HPC world.) Does it do the same for IO? IO request completion on a slow and/or busy disk could take as long as 50ms; for a 2.5GHz CPU that is 125 million clock cycles. Will the CPU busy-cycle through this as well?
If you want a simple OS that fits a lot into a small space, why not get a copy of the Lions' Commentary and translate the Edition 6 kernel into asm? If you wanted to go gung-ho you could add a simple BKL, demand paging, and a network stack to complete the job. You could probably do all of the above and keep it somewhere close to 20,000 lines of assembly (excluding drivers).
If you are looking for an education, doing the above will probably serve you just as well as starting from scratch.
TAOCP and the Lions' Commentary? The Edition 6 kernel into asm? Why?
You're working hard to sound old-school, but these are just random allusions. TAOCP is not especially relevant to OS programming, and translating _all_ of an OS into asm to make it "fit a lot into a small space" is just... plain... stupid...
You will make the code smaller (as compared to gcc -Os) in a few places; you will likely give yourself a hernia maintaining it, and you are likely to get considerably worse performance than a decent compiler will.
To expand on that: it's fun to take a look at the assembly code "gcc -O3" produces. It's absolutely wild: even after staring at it for a while, I often have no idea what clever tricks gcc used to turn the C code into that assembly code. It does a better job of making fast, compact code than I suspect I ever could.
There was a time when C compilers generated crappy assembly code, because that was easy for the compiler writers. That time is long past, with exceptions for a very few situations where the compiler misses a trick that a human can do.
Well, sometimes it's fun. Other times it's utterly horrible. gcc doesn't always do a wonderful job and the maintainers sometimes just don't seem to care - induction variable optimizations and gcse were broken in 4.2 and 4.3 and the issue was left unfixed, with clear examples of obvious FAIL on simple and performance-critical inner loops.
However, this doesn't apply so much to OS code, which doesn't have nearly as much opportunity for micro-architectural shenanigans either way. Typically something like -Os does a good job of handling operating system style code.
Hand-coding OS asm remains a fine example of pointless tedium for the most part. There are clear places where you absolutely need asm, of course, but writing stuff in asm that can be perfectly well done in C is like cleaning out the barracks room bathroom with a toothbrush.
This was one of the early innovations of UNIX (high level language use), so there's something wildly anachronistic about this guy's suggestion. It's like suggesting that you rebuild grandpa's Studebaker so that it can be drawn by a team of horses.
I guess I didn't elucidate my ideas properly. My point was _if_ you wanted to write an asm OS and learn from it, following in the steps of the masters might be more productive and enlightening. The PDP-11 is sufficiently different from modern architectures, and the scheduler as written in Lions relies on undefined CPU behaviour to work, so it wouldn't be a straight rewrite; there is plenty to learn here. It would also mean you would avoid plenty of unnecessary dead ends.
My dream is to write a cloud programming language that compiles high level code to a low-level kernel and deploys. I abhor waste, so efforts like this are a great start. I look forward to more!
Some years ago a friend of mine and I built a "self-propagating" system to speed up server deployment. Every server inside the datacenter would periodically look for hardware it wasn't previously aware of in its subnet, and then it would try a few known exploits to root the box, and run our shim. The shim would install all the software we wanted on the box, apply patches, reboot the machine, bring it online and register it so that all the other machines knew about it.
It was kind of a fun experiment, but in the end didn't save us enough time to maintain it for very long - the real time sink was getting the hardware bootable and into the datacenter to begin with. (This was pre-"cloud" days, I guess if we were doing it again now we might have used it for a little longer.)
> My dream is to write a cloud programming language that compiles high level code to a low-level kernel and deploys.
I'm curious as to what you want to gain from that compared to running a custom compiled Linux kernel with your program as init or as the only running process.
Your dream idea will work for CPU-bound programs, but it seems the majority of applications are bound by factors other than the CPU. The waste frequently comes from waiting for I/O to complete.
They mean proprietary in the sense of "we built one ourselves". You can download source from the links at the bottom of the page: http://www.returninfinity.com/pure64.html
The distribution zip actually just contains a binary pure64.sys file and sources for their example kernels. The sources for Pure64 are not included. Also, no licensing information is included.
There seems to be some "64-bit" buzzwordism going on here. There's nothing special about 64-bit; it's just a different processor mode. Writing 64-bit code is no different than 32-bit.
AMD64 also added a lot more registers, and has some extensions to the instruction set. It also guarantees a baseline for features like SSE2, since all CPUs supporting AMD64 are relatively modern. And having a larger address space to work with may affect some design decisions.
So while you can write 64-bit code just like 32-bit code, it's a bit like saying that writing C++ is no different than writing C.
64bit allows and/or requires different designs to be efficient. While the difference is usually negligible in user mode, 64bit in a kernel is rather different.
This reminds me of MenuetOS (http://www.menuetos.net), which, although a bit larger than BareMetal OS, still manages to cram an unholy amount of functionality into a couple of megs.
FWIW, I found a local privilege escalation vulnerability (arguably just a bug allowing apps to trash the kernel, since it's single-user anyway) in Menuet32, and I'm not optimistic about the network stack. I would have expected ping of death to work, but it doesn't even seem to support IP packet fragmentation.
These throwback OSes are cute, but there would be hell to pay if they ever took off.
They're great for teaching, though, where a full OS carries too much overhead. A minimal OS is a great way to show how one works, what the essentials are, and even what weaknesses it has.
It's amazing, when you look at a simple operating system like TinyOS, just how simple some of the things are. (Once you start actually hacking on the OS code, you realize that they're also very finicky and easy to mess up if you don't know what you're doing. And hard to debug.)
Could be useful if it did something new - support virtual apps, manage restartable/persistent state processes, something. Just another thread/memory/interrupt kernel? Why?
I read the link in the article, so I'll just paste what it says:
* High Performance Computing - Act as the base OS for a HPC cluster node. Running advanced computation workloads is ideal for a mono-tasking Operating System.
* Embedded Applications - Provide a platform for embedded applications running on commodity x86-64 hardware.
* Education - Provide an environment for learning and experimenting with programming in x86-64 Assembly as well as Operating System fundamentals.
For the first use, it seems like it'd be nice if there were some way it could reuse existing hardware-compatibility work done on projects like the Linux kernel or the BSD kernels. Even if you want a single-tasking system with little in the way of OS services, you still usually don't want to be debugging the quirks of every hardware device, or writing your own driver every time you get a machine with a new SATA controller or NIC.
Lions' Commentary on UNIX 6th Edition, with Source Code http://en.wikipedia.org/wiki/Lions%27_Commentary_on_UNIX_6th...
The Art of Computer Programming http://en.wikipedia.org/wiki/The_Art_of_Computer_Programming