There are three files of x86_64 assembly in the ZIP for PilOS, totaling about 2000 lines, dedicated to booting and other low-level details, as usual.
From a quick look I think that two of the files (the big ones) are mutually exclusive, so it may really be closer to about 1000 lines of x86_64 assembly to get it minimally working.
Wow, I thought (setq foo (1 2 3)) was a typo because the inner literal list is not quoted. Strangely, a list whose first element is a literal number is auto-quoted, evaluating to itself.
I have been a heavy Lisp user since the late 1970s and I don’t recall seeing this before.
That said, picolisp is cool. I just installed it in 10 seconds. BTW, it does not include a readline clone, so alias picolisp to wrap it in rlwrap.
That seems like an awful "convenience" -- isn't it far easier to remember that all literal lists must be quoted than to remember which are auto-quoted under which circumstances?
Interestingly, in PicoLisp, if the car of a list evaluates to a number (but isn't a literal), it's taken to be the address of an fsubr (this is how all of the builtins are implemented; if you evaluate their symbols, you'll see they're bound to integers). If you want to do this with a literal number, you have to quote it, like ('1 2 3).
Industry. Most of my day-to-day is now working with low-footprint embedded stuff, where C is king, and anything else needs months of vetting. Picolisp just isn't a nice fit.
This is a tangent from the post, but the thread may attract minds who are glad to correct me.
The post shows an example where code calls into shared-object files from picolisp.
Something I have been wondering: why are the inspection options for SO files so weak? I'd expect to be able to introspect an SO and get back function and struct documentation, plus clear type information for each function.
Relevant stack overflow question, https://unix.stackexchange.com/questions/61940/introspection.... "For C-type functions, you'll only get function names, not argument types or return values. For C++ functions, you'll get mangled names - pipe the output of that command through c++filt to get function names and argument types (still no return values though)."
You could extend ELF to fix it. Alternatively, the compiler could do it: GCC could run an early preprocessor round to pick up metadata and type information, then freeze that metadata into blobs in the code area. With that, you could have a standard set of introspection methods for anything built with that compiler.
I have been programming on Windows recently. In some situations, in order to build against a DLL you need an accompanying "lib" file. If you do not have one, you can perform a circus to create one from the DLL.
Mostly because on all mainstream OSes you see today, C is the lowest-common-denominator ABI that a lot of the code is written in, and for historical reasons C does not require or expect any extra information. Thus, it's not standard.
Which is why some systems have tried to add their own layer of extra metadata, such as COM on Windows and GTK's GObject introspection. I don't know how these systems store their metadata, but ELF at least is flexible enough that you should be able to store your metadata in extra sections of the SO.
However, (and I'm just speculating here, hopefully more informed people will correct me) there's also a risk that part of the tool ecosystem doesn't know what to do with the in-ELF metadata and accidentally removes it: "strip" comes to mind, the tool that removes extra symbols and debugging information from binaries and libraries.
> I don't know how these systems store their metadata, but ELF at least is flexible enough that you should be able to store your metadata in extra sections of the SO.
Windows executables and dynamic libraries are quite flexible and have always allowed for resource and metadata sections.
COM uses type libraries, which have been replaced by .NET Metadata on UWP (basically COM Runtime reborn, .NET genesis).
gobject-introspection generates metadata from source code, stores it as XML files in $PREFIX/share/gir-1.0 and as binary files in $PREFIX/lib/girepository-1.0
And yeah, seems like strip is one of the reasons why they don't store it in the ELF. But also: cross-platform compatibility (GTK does support Windows and macOS).
You can't even tell whether a given symbol looked up with dlsym is a function or data!
Re: > In some situations, in order to build against a DLL you need an accompanying "lib" file. If you do not have one
.lib files are not required to use a DLL. A DLL is opened with LoadLibrary and symbols can be resolved with GetProcAddress. Those import .lib files are just some quirk of Microsoft Visual C.
> You can't even tell whether a given symbol looked up with dlsym is a function or data!
On many modern unixes you can! Functions will usually be in a "text" section (marked execute or read+execute) and data will usually be in a "data" or "bss" section (marked read-only, or read-write).
The dl* family of functions don't expose this information, but after getting a symbol you can check the protection bits on the page the pointer is on (using a system-defined mechanism).
(Although despite planning to use it for some personal project I haven't got around to it yet...)
It parses C/C++/Objective-C into JSON metadata. It uses clang/LLVM so the parsing/etc should be very accurate.
So, to implement your idea, you could just embed this JSON into an ELF section. (Or, if you don't like JSON, convert your JSON to some other format, such as S-expressions or protobuf or whatever.)
I guess in the Windows world COM was supposed to fix this. There was a special interface (IDispatch), mainly for VB, which would expose introspection so things could be discovered at runtime.
Header files are
(a) in no way connected to the .so files,
(b) not necessarily available,
(c) incomplete information since build information (e.g. compiler, command line options) is missing, and
(d) much harder to parse than necessary.
You don't need special compiler options to use a shared object. If you got it as a binary, there is nothing to compile.
The incomplete information in header files isn't the compiler command-line options, but the nuances of the API semantics. Say a structure is passed and the structure contains pointers: who owns the structure and must free it? Who owns the pointers inside it? Or here's a function argument that's a callback: which parts of the API can safely be invoked from that callback? That size argument, is it bytes or array elements? And so on.
Thanks. Yes, I was after an experience along the lines of Python's inspect module, or Java reflection, but against native-code shared objects. And I'm incredulous that it wasn't standardised decades ago in the major toolchains.
Another poster highlighted Stephen Kell's work, and this is exactly what I had in mind. He highlights obstacles with mmap in one of his videos. I did not recognise these issues until I watched his presentation, but I now see that they are difficult hurdles inherent to the problem.
Tons of pushback. But at the end of the day, the people who moan don't manage the databases.
I stick with what is available in CentOS + EPEL, which includes SBCL, and use ODBC for most of my work.
The common tooling for open-source databases is varied. A lot of what makes MariaDB and PostgreSQL happen is C, shell scripts, and Perl. A lot of third-party tools are still PHP.
Rather than "more", it might be a symmetry relationship: picolisp seems to try to give a simple small Lisp for giant computers, whereas ulisp tries to provide a giant Lisp for small computers.
Neat! How would you differentiate picolisp from, say, Clojure (which also interoperates well with Java libraries)? When would you use picolisp vs Clojure?
One thing about PicoLisp that might stick out for people coming from CL, Scheme, or Clojure is that it's dynamically scoped (like most Lisps, historically), so you can't make closures the way you would in a lexically scoped Scheme.
There are 3 flavors of picolisp: the 32-bit version written in gcc-specific C, the "Ersatz" version written in Java, and the 64-bit version written in its own high-level assembly language that can target x86-64, PPC, ARM64 and "emulation" in C.
The Ersatz version has good Java integration. The 64-bit version has some more recent Java integration:
Clojure compiles to JVM bytecode, so it should run much, much faster than interpreted picolisp. Clojure works on Windows too, while picolisp needs Cygwin, I think. Picolisp is really simple from a language point of view. It has a small group of core users who think very differently about software. There aren't many picolisp libraries, but you can hook into C and Java libraries when needed. It has an integrated Prolog-like DB. Clojure runs Walmart during Black Friday online sales... I would not recommend that with picolisp.
"Ersatz" PicoLisp runs anywhere Java runs; that was its original reason for existence -- to be used in bootstrapping the compilation of PicoLisp, IIRC.
However, nowadays, I think Ersatz PicoLisp, AKA the Java version of PicoLisp, gets too little love. It's an interesting way of accessing the JVM, just like Groovy or Clojure. Also, I don't have benchmarks, but PicoLisp is pretty damn fast owing to minimal data structures, and I would expect that to spill over into the Java implementation.
Alexander has a "Need for Speed" article where he shows it's in the neighborhood of CLISP, but CLISP isn't really all that fast. Comparing to SBCL would be more useful, as it is very fast.
Alexander Burger has some free books out on it, but a lot of it still requires more work to really learn it than I can put in right now. I'd love to see a book on building really small business apps with it though.
The 32 bit version should compile pretty easily if you use gcc rather than clang. Unfortunately it uses variable sized arrays, which clang says will never be supported, and the 32 bit code is unlikely to get a rewrite.
The 64-bit version, however, I'm currently trying to get working on macOS. The 'normal' 64-bit version won't compile, as it targets x86_64 ASM in a GNU as dialect. macOS, even when using gcc, appears to use clang's as, and I haven't found a GNU as which supports Mach-O. There is an 'emulated' 64-bit version which I've been working to get running with Alex, but it currently seems to hang indefinitely on some of the unit tests and I haven't yet had time to establish which bit.
That's very interesting. I'll quote a short summary from 2014:
> PilMCU is an implementation of 64-bit PicoLisp directly in hardware. A
> truly minimalistic system. PicoLisp is both the machine language and the
> operating system:
> * Memory management is trivial, just the Lisp heap and the stack
> * The built-in database is extended to hold a "file system"
> * One SSD per database file for mass storage
> * "Processes" run as tasks and coroutines
> * Events (timing and interrupts) via a 'wait' instruction
> * Complex I/O protocols are delegated to peripheral chips
> The final hardware can be very lightweight. Low transistor count and
> power consumption. No overhead for an OS. It is conceivable for a later
> stage to put many interconnected CPUs on a single chip.
> At present, we have it running in the Verilog simulator, and in an
> emulator (adaption of the PicoLisp 'emu' architecture).
https://picolisp.com/wiki/?PilOS