Say what you will about TempleOS, but HolyC is actually brilliant. The same language is used for the OS, programs, and shell, the entire OS is JIT-compiled (modify the source and reboot to make changes), and it's just generally a really different way of doing things. You can also put vector graphics in source code.
I agree with you completely. The idea of a single language to rule the entire system is brilliant, and is something I wish others would put their attention on - e.g. distro builders who compose a "Lua-only" stack, or a "Python-only" stack, and so on. Such things exist in Dockerland easily enough these days, but one wonders what it would be like if the 'modern desktop OSes' had this hit-one-key-and-access-code-anywhere mentality. It has so much appeal over the current worm bucket o' mystery that is the modern OS 'distribution'/'package'/'bundle'/'module collection'.
Smalltalk-80 and its modern implementations follow this idea. Typically, the IDE/development tools come with the runtime environment, and they let you browse the system's code as well as inspect existing instances of objects. There's also a primitive built-in form of a VCS that allows viewing earlier method versions and reverting to them. Finally, the underlying VM is written in a mixture of C and either Smalltalk or something resembling it.
It could even be C with something like the Tiny C Compiler. It can compile C at such ridiculous speed that it makes compile times practically irrelevant. I've never even been able to throw enough at it to know how fast it actually runs on a modern computer: SQLite and Lua both report a timing of 0.001 seconds (and I think that covers both compiling and linking).
Emacs has what you called the hit-one-key-and-access-code-anywhere mentality, and I think part of the reason Emacs users tend to use it for much more than just editing text is exactly the appeal of an OS with that mentality.
This is what makes http://www.redox-os.org/ an interesting project. It's a full-stack Rust OS, and Rust is probably the most advanced language you can use right now that can also bootstrap itself from ASM.
Nice, I will give it a try. I've installed TempleOS in a VM before and played around with it. I really like the concept of a graphical console, being able to literally put images right into source code, and the C++-like shell that uses some kind of JIT compiler. It's different, and interesting.
As someone who grew up with a C64 and programmed it, I can totally appreciate this. The C64 had its ROM BASIC mapped at the same addresses as part of the physical RAM.
I developed software called CNet BBS back in the day; the software, written in assembly, would constantly switch the BASIC ROM interpreter in and out to simulate a multitasking environment, allowing BASIC programs to run alongside a mini OS.
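That banking trick can be sketched in a few lines of C. This is a simplified model of my own making, not CNet's actual code: on the C64, bit 0 (LORAM) of the 6510 processor port at $0001 selects whether a read of $A000-$BFFF sees the BASIC ROM or the RAM underneath it (the real machine also involves HIRAM and other signals), and writes always land in RAM.

```c
/* Simplified sketch of C64-style memory banking: the BASIC ROM
 * at $A000-$BFFF shares addresses with physical RAM.  The LORAM
 * bit (bit 0 of the processor port at $0001) decides which one
 * a READ sees; writes always go to the RAM "under" the ROM.
 * The real C64 banking logic also involves HIRAM, CHAREN, and
 * the KERNAL/character ROMs - omitted here for clarity. */
#include <stdint.h>

static uint8_t ram[65536];
static uint8_t basic_rom[8192];   /* mapped at $A000-$BFFF */

static uint8_t cpu_read(uint16_t addr)
{
    int loram = ram[0x0001] & 0x01;          /* LORAM bit set? */
    if (loram && addr >= 0xA000 && addr <= 0xBFFF)
        return basic_rom[addr - 0xA000];     /* ROM visible   */
    return ram[addr];                        /* RAM visible   */
}

static void cpu_write(uint16_t addr, uint8_t val)
{
    ram[addr] = val;   /* writes always reach the RAM below */
}
```

Flipping that one bit is what lets a supervisor swap the interpreter in to run a BASIC program, then swap it out to reclaim the full 64K for its own code.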
I absolutely love the sound of his keyboard!
I wonder if TempleOS could find practical applications in the embedded world as an RTOS.
I'm currently working with Jacinto J6 processors, which are multi-processor devices: they have two ARM A15 cores and four ARM M4 cores (called IPUs). My work involves running Linux and Android on the A15 cores while bringing up a hardware monitor on the M4 cores. I looked at several possibilities such as uClinux, FreeRTOS, and a commercial product called uVelOCity by Green Hills.
Something like TempleOS might fill this need, for what appears to be a market of CPUs needing a small RTOS running alongside another OS.
The link posted mentions that the OS was developed to provide a programming experience similar to the C64's. Are there any learning resources available, like there are for the C64? Thank you!
"I capped the line-of-code count at 100,000 and God said it must be perfect, so it will never be an ugly monstrocity. It is currently 80,590 lines of unblemished code. Backward compatibility is not promised."
"I wrote all 119,580 lines of TempleOS over the last 12.5 years, full-time, including the 64-bit compiler."
Do you have any current or distant ideas on implementing basic GPU support in TempleOS? Virtualization has gotten to the point where GPU passthrough is beginning to be feasible/viable.
PS. I think this is an amazing project. I'm grateful you accepted the challenge in writing it. :D
Because God said so. That and the aim of the system is to create something akin to the Commodore 64 in which the entire system is open to the user. http://www.templeos.org/Wb/Doc/Charter.html#l1
"Graphics operations should be transparent, not hidden in a GPU."
Is GPU programming opaque? I've not done any low-level stuff. I imagine it's a whole bunch of different API calls, but based on triangles rather than points and lines?
Could one write a simple GPU in code, for example, I wonder?
You'd be surprised just how many layers of abstraction sit between getting something done 'outside' the context of the GPU, across the CPU/GPU bridge, and getting it done on the GPU in a modern 3D stack these days. You can do things any one of a number of different ways - pass off a blob of data for rendering, or write shader programs that get compiled for the GPU when the app requires them. Those compilers are not open (shader compilers are an arcane and highly contentious realm of IP-rights-holders in a very competitive and volatile industry), and often the hard work of a 3D developer is spent moving existing assets (code/resources) from one 3D-pipeline fashion-runway du jour to the next.
It is pretty arcane.
That said, of course you can write a software renderer and simulate a fair amount of the work that the GPU will usually offload from the CPU - and in many cases this has been successfully applied - e.g. the Emulation world - to the task of maintaining legacy binary assets in lieu of having source code to port. The emu guys have amazing stats in that regard.
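To make the "simple GPU in code" idea concrete, here is a minimal software rasterizer of my own: it fills one triangle into a framebuffer using edge functions, the same inside/outside test hardware rasterizers are commonly described as using. The framebuffer layout and function names are made up for the sketch; a real renderer would add clipping, interpolation, depth testing, and so on.

```c
/* Minimal software "GPU": fill one triangle into a framebuffer
 * using edge functions.  A pixel is inside the triangle when all
 * three edge functions agree in sign.  Illustrative only - no
 * clipping, no sub-pixel precision, no attribute interpolation. */
#include <stdint.h>

#define W 64
#define H 64
static uint8_t fb[W * H];   /* one byte per pixel */

/* Twice the signed area of triangle (a,b,c); the sign tells
 * which side of edge a->b the point c lies on. */
static int edge(int ax, int ay, int bx, int by, int cx, int cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

static void fill_tri(int x0, int y0, int x1, int y1,
                     int x2, int y2, uint8_t color)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int w0 = edge(x1, y1, x2, y2, x, y);
            int w1 = edge(x2, y2, x0, y0, x, y);
            int w2 = edge(x0, y0, x1, y1, x, y);
            /* accept either winding order */
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (w0 <= 0 && w1 <= 0 && w2 <= 0))
                fb[y * W + x] = color;
        }
}
```

Everything a fixed-function pipeline does is, at bottom, loops like this plus per-pixel math; the GPU's value is doing millions of them in parallel per frame, which is exactly the performance/watt gap mentioned below.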
Well, with how modern GPUs, their drivers, and APIs work, you write code to do something and then the driver "decides" on the "best" way of doing it.
Display drivers can replace entire shaders and modify pretty much any instruction, and they often do so, if for nothing else than to make the code actually work. Game developers have not only shifted a lot of the "performance" optimization burden to the GPU vendors, they also quite often ship completely non-compliant (and nonfunctional) graphics code. There have been plenty of gem posts on gamedev.net, including multiple AAA studios that botched their code so badly on multiple titles that, if there weren't already a generic fix in the driver, nothing would display at all - by not calling D3D::Create or D3D::EndFrame/BeginFrame properly, or at all in many cases.
Overall, the majority of the driver codebase today is abstraction and fixes; only about a third of it is actual API implementation.
So this is one reason why GPU driver code is necessarily closed :( to save face.
There are some software-based graphics cores out there, and one or two VHDL/FPGA efforts, but the performance-per-watt gap between those and mainstream GPUs is laughable.
Here's hoping AMD's vision of being more open in the future really works out - and that Vulkan is at least mildly sane with regard to openness.
Because the current state of things reinforces ideas like "the GPU is opaque". Architecturally, graphics processing is not a magic box, and while it would take a long time to fully understand, it's not technically insurmountable; the current status quo with drivers just makes it seem so.
How much performance would a pure and thus leaner driver gain, I wonder.
A fair bit of backwards compatibility will also be involved, like with x86 CPUs. And even then, the same fixups are required to run OpenGL on DirectX-oriented cards, I believe.
Considering that even with APIs that have somewhat good internal compliance testing, like DirectX, developers still ship utterly broken and un-optimized code - none.
This is also why Vulkan will probably not succeed (at least not how people think it will).
The last thing that, say, Nvidia wants is to maintain a codebase of 3-4M LOC, 50-60% of which exists just to get games running at all, or running well.
With how the current market works, the driver is the "secret sauce" that GPU makers use to compete, and it is just as important as the hardware itself (or even more so in some cases).
Can we also undead the actual TerryADavis account? With maybe some kind of one-off flag attached to him that says, in not so many words, "this man is a talented hacker, but also a diagnosed schizophrenic so when he goes off the rails he gets a break that you don't" ?
We created vouching for cases where accounts post bad comments as well as good ones. As far as I can tell it works as intended, indeed better than expected.
This rather reminds me of the "Coral Castle" in Florida. Basically one guy went nuts after getting dumped by his girlfriend, so he built a fortress-sized structure out of blocks of limestone (not actually coral). It's quite a goofy little tourist attraction now.
If only all people who lost their minds could devote all their time to a harmless quest that doesn't hurt anybody.