The machine simulator is known as the eXperimental String Machine (XSM). It is an interrupt-driven uniprocessor machine. The machine handles data as strings. A string is a sequence of characters terminated by '\0'. The length of a string is at most 16 characters including '\0'. Each of these strings is stored in a word. (Refer Section: Memory) The machine also interprets a single character as a string.
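A minimal sketch in C of how such a word might be modeled (the type and function names here are my own illustration, not from the XSM spec):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model: an XSM word holds a NUL-terminated string of
 * at most 16 characters, including the terminating '\0'. */
#define XSM_WORD_SIZE 16

typedef struct {
    char s[XSM_WORD_SIZE];
} xsm_word;

/* Store a string into a word, truncating so that '\0' always fits. */
static void xsm_store(xsm_word *w, const char *src) {
    strncpy(w->s, src, XSM_WORD_SIZE - 1);
    w->s[XSM_WORD_SIZE - 1] = '\0';
}
```

A single character is then just the one-character string, e.g. `xsm_store(&w, "A")`.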
What an... odd architecture. Fun for experimentation, I'm sure, but I think it's too different from contemporary CPUs to give a good taste of what the "real" thing is like.
Reminds me of Nock (the abstract machine for Urbit). Nock handles data as either base-two arbitrary-precision integers (i.e. bitstrings) or cons cells. There's no string type or integer type; both are just untagged streams of ones and zeroes. Basically, they're what some languages call "symbols" or "atoms" or "interned strings": where in, say, Erlang you'd have either the atom 'hello' or the list-of-codepoints "hello" or the binary-string <<"hello">>, in Nock the VM would only know that it's holding onto the number 448378203247 (i.e. 0x68656c6c6f—the bitstring ASCII representation of the characters for "hello".)
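The "untagged bitstring" view can be shown in a few lines of C, packing ASCII bytes most-significant first to match the 0x68656c6c6f example above (the actual Nock/Hoon cord encoding may differ in byte order; this just reproduces the arithmetic in the comment):

```c
#include <assert.h>
#include <stdint.h>

/* Interpret a short ASCII string as one unsigned integer by
 * concatenating its bytes, most-significant byte first. */
static uint64_t atom_of(const char *s) {
    uint64_t n = 0;
    while (*s)
        n = (n << 8) | (uint8_t)*s++;
    return n;
}
```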
In Nock, instead of beta reduction privileging certain types (e.g. symbols, closures) as having special meanings at the head of a cons cell, Nock just defines a mapping between integers and functions. Effectively, Nock is a Lisp-2: the f in (f x) and the f in (g f) have different meanings.
In fact, in Nock, both the meaning of the f in (g f), and the meaning of the f in (g (f x)), are entirely determined by g—you could think of this as every Nock function being a hygienic macro, though it's more than that. This seems dangerous, but Nock is supposed to be a "VM target language" only: the source languages that compile to it are left to define their own "platform semantics" by restricting and making guarantees on how arguments to functions in those languages will be evaluated (eagerly/lazily, early/late-bound, pre-beta-reduced like a function or post-beta-reduced like a macro, etc.) They're also left to define dynamic/lexical scoping rules, by leaving g the responsibility of resolving the f—and the x, if it wants—in (g (f x)).
The issue is that the simulated "processor" is actually interpreting asm source code on the fly. Direct string parsing on an FPGA, including string-to-int conversion (i.e. "1234" -> 1234), would make it extremely "fun"
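For reference, the kind of on-the-fly decimal parsing the simulated processor has to do looks like this (a minimal sketch; a real simulator would also need to handle whitespace, overflow, and malformed input):

```c
#include <assert.h>

/* Turn the word "1234" into the integer 1234: accumulate decimal
 * digits left to right, with an optional leading minus sign. */
static int parse_int(const char *s) {
    int sign = 1, n = 0;
    if (*s == '-') { sign = -1; s++; }
    while (*s >= '0' && *s <= '9')
        n = n * 10 + (*s++ - '0');
    return sign * n;
}
```

Trivial in software, but doing this combinationally per fetched instruction is exactly what would make an FPGA implementation painful.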
Should use x86/ARM/MIPS so that students get real world experience. You might never have to write an OS, but knowing x86 assembly will help you debug and optimize code.
Also, if we're talking about theoretical "students," quality is often more important than subject. Academics often recommend undergraduates "choose your course by the teacher, not the subject." Same principle here, if they work on something fun and they have fun, they will learn more.
Or http://pdos.csail.mit.edu/6.828/2014/xv6.html -- the os used in a similar course at mit (based on Unix v6, as in ye olde Lions book, but updated to use modern C and modern hardware).
x86 assembly, however, can also lead to existential crises. 'I have to do WHAT? And they did it this way because of a decision made when designing the 286? Why am I in this field again?'
"Project XOS or eXperimental Operating System is a platform to develop a toy operating system. It is an instructional tool for students to learn and implement OS data structures and functionalities on a simulated machine called XSM (eXperimental String Machine)."
XOS is used at the National Institute of Technology Calicut, India to teach Operating System principles [0]. The OS Lab essentially consists of students designing and implementing a kernel for XOS, starting from scratch:
"Cross compiler, debugger, file system interface and other supporting software tools are provided. The student will implement the scheduler, memory management, process management, exception handling and file system management functions of the OS."
"Lehman and Yao's high concurrency B-Tree in C language with Jaluta's balanced B-link tree operations including a locking enhancement for increased concurrency"
Most people enticed by this headline probably actually want xv6 (a modern simple implementation of UNIX v6) or {buildroot, OpenEmbedded} (frameworks for generating Linux distros).
That's the first "toy architecture" I've seen that attempts to define a PCI-like plug-and-play scheme... and a set of device classes that are suspiciously reminiscent of USB. :-)
What's really unique is the option to use Notch's DCPU-16 or a MIPS-like TR3200 as the main processor. The whole system has a good "real computer" feeling to it, which I think is especially important for educational use.
Yeah, it gets details from both. Also, we took a look at Amiga's Autoconfig and the Z80 and 8086 interrupt handling schemes.
The TR3200 itself is inspired by MIPS and ARC (a simplified SPARC CPU for use in computer architecture courses).
There is no TLB, cache, ring modes, or MMU, so IMO it's far simpler.
Trillek is an open-source game inspired by Notch's 0x10c. The idea is an open-world space game where you have a computer that you can do anything with.
As for Trillek, I see that now. However, if you looked at the project page alone without these references, it's very off-putting given the lack of a concrete plan and the flamboyant language.
How are you handling libc? Do you have your own (or is this even an issue with Lua)? I've been thinking of putting a Common Lisp implementation on bare metal (initially thinking of SBCL, but I think MKCL may be an easier challenge), but haven't found the time to start yet.
I've grabbed the missing routines from musl libc (and culled some parts of the Lua standard library for the sake of reducing the number of requirements, e.g. io.*), and used venerable Doug Lea's malloc for memory allocations.
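For anyone curious, the core of wiring dlmalloc up on bare metal is giving it a memory source; dlmalloc can be pointed at a custom sbrk-style hook. A sketch of such a hook over a fixed static arena (the arena size and function name here are illustrative, not from this port):

```c
#include <assert.h>
#include <stddef.h>

/* Bare-metal sbrk()/MORECORE-style hook: hand out slices of a fixed
 * static arena, returning (void *)-1 on exhaustion, which dlmalloc
 * treats as allocation failure. */
#define HEAP_SIZE (1 << 20)          /* 1 MiB arena, illustrative */
static char heap[HEAP_SIZE];
static size_t brk_off = 0;

static void *my_sbrk(ptrdiff_t incr) {
    if (incr < 0 || brk_off + (size_t)incr > HEAP_SIZE)
        return (void *)-1;
    void *p = heap + brk_off;
    brk_off += (size_t)incr;
    return p;
}
```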
The stock Lua 5.1 has the fewest dependencies, so I picked it - the idea is to practice and then eventually get LuaJIT running on bare metal.
You can use the NetBSD rump kernel [1] to do this relatively easily - I have run LuaJIT on Xen using this and there is now a bare metal implementation too.
At my University (Imperial College London), our operating systems coursework was built around a toy OS called Pintos (http://en.wikipedia.org/wiki/Pintos), which is written in assembly and C and runs on x86. The project was to fill in the blanks in the implementation (more advanced scheduling, memory management, swapping etc) and I think most people thought it was one of the most fun projects we had. Being more realistic wasn't really a problem as the really arcane parts were already implemented.