Author here. Lisp-Stat is no April Fools' joke. The project is real; it's only a coincidence that it was mentioned here shortly before the 1st of April.
The Overview [1] and About [2] pages provide the best background to the project. Also see Tierney's paper, Back to the Future: Lisp as a Base for a Statistical Computing System [3].
Although inspired by Tierney's XLisp-Stat, this is a reboot in Common Lisp. XLisp-Stat code is unlikely to run except in trivial cases, but existing XLisp-Stat libraries can be ported with the assistance of the XLS-Compat [4] system. One of the goals was to make porting easier so as to have a ready-made ecosystem of sorts. To that end I've also collected all the XLS code that is still readily available and put it on GitHub. A summary of what's there is available on the website [5].
In developing the system, I wanted to avoid the 'lisp curse' [6] and picked existing libraries where possible, developed what didn't exist, and documented them all in an attempt to make the learning curve somewhat less steep. It's now to the point where I can use it in my own work and thought the broader CL community might find it useful.
Your comment makes it seem like Tierney et al. only authored XLisp Stat, and you wrote a "reboot in Common Lisp" called Lisp Stat. That is not the case.
Tierney et al. authored both XLisp Stat and Lisp Stat. You took the code and modified it. There's nothing wrong with that, but you also present this modified code as Lisp Stat, a project "inspired" by XLisp Stat. You gave this code a new license, too.
I wonder what's the purpose of using "Lisp Stat" and "Symbolics" as project/company names, if not to mislead others. If that was not your intention, consider being more explicit about what is authored by you, and what is authored by others, and avoid using the same name for your project as that of the historical project.
Lisp totally works for scientific/statistical computing! Julia, for example, is commonly called a Lisp. And S/R/&c. are also very lispy. As is APL, to a lesser extent.
Aside from the language features, some of the libraries in Julia make it really useful for statistical computing. One really cool library I am trying to use more and more in Julia is the Measurements library [1]. With the multiple dispatch system in Julia it's super easy to integrate into most problems, and it can let you estimate error bounds on the values your programs produce. Super important for scientific applications.
I am hoping that in the future I can mix this in with some auto-diff problems to get uncertainty bounds on estimation problems with minimal fiddling with covariance matrices. Right now performance is the only obstacle to integrating the library into pretty much any problem :(
This is great, I hadn't seen this before. Semantic Lawful / Syntactic Chaotic seems like a fair characterization of Julia. There's too much syntax for hardcore lispers to be entirely comfortable but almost all of the syntax is just method calls in disguise, so it's semantically pretty clean.
I've never understood what makes R lispy. I think R is a very flexible and interactive language, more so than Python, but does that mean it is lispy? Pardon me, I don't know enough about languages to pin it down.
The "Metaprogramming" section of Advanced R by Hadley Wickham is a good introduction, although it emphasizes the author's own library and their own (intelligent but opinionated) approach, rather than the core language primitives: https://adv-r.hadley.nz/metaprogramming.html
For really powerful demonstrations of what R metaprogramming can do, see the dplyr and data.table packages.
Personally, I've used it to implement things like shorthand lambdas, the Clojure threading macro, and a function composition operator that constructs and evaluates an expression of nested calls to keep runtime overhead down to a minimum.
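A minimal sketch of how such a shorthand lambda could be built on top of expression capture (the helper name `lam` and the fixed argument name `.x` are made up for illustration, not taken from any existing package):

    # Hypothetical shorthand-lambda constructor built on expression capture.
    # lam(.x * 2) grabs its argument unevaluated and wraps it as function(.x) .x * 2.
    lam <- function(body) {
      body <- substitute(body)                    # capture the code, not the value
      as.function(c(alist(.x = ), list(body)),    # formals: .x ; body: the captured expression
                  envir = parent.frame())
    }

    double <- lam(.x * 2)
    double(21)                  # 42
    sapply(1:3, lam(.x ^ 2))    # 1 4 9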
R macros are sometimes more powerful than true Lisp macros, in that there is no separation between the macro expansion phase and code execution. This is possible because R function arguments are lazily evaluated. The argument is a special "code object" that represents an expression. If you want to capture or change that expression, you can do so freely inside the function. So there is no need for macros, just functions that can capture unevaluated expressions.
Incidentally this also makes R very hard to optimize, since any function can do this at any time.
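To make the mechanism concrete, here is a minimal sketch of a plain function that captures its argument unevaluated and chooses where to evaluate it (the name `with_data` is made up for illustration; base R's `with()` behaves in essentially this way):

    # A plain function that captures its unevaluated argument.
    with_data <- function(expr, data) {
      code <- substitute(expr)                 # the argument as a code object
      cat("captured:", deparse(code), "\n")    # it can be inspected or rewritten here
      eval(code, envir = data, enclos = parent.frame())
    }

    df <- data.frame(x = 1:5, y = 6:10)
    with_data(x + y, df)    # x and y resolve inside df: 7 9 11 13 15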
> on a somewhat larger scale, the Lisp user community was not growing much either, and much of what was done in Lisp was now done in newer (and leaner) interpreted languages such as Perl, Python, or Ruby. All in all, the developments did not seem to indicate a healthy state of affairs.
The good news is, Lisp is reawakening and developers are shying away from Python :)
> S was rapidly becoming the lingua franca of statistics.
> We were not teaching a marketable skill.
Surprisingly, there are no more reasons. The rest of the paper is dedicated to his love of Lisp and his regrets:
> I very much liked the idea of taking a general purpose programming language, such as Lisp, and adding the statistics on top as a library or a set of plug-ins. There are now various more or less satisfactory attempts to put S/R on top of Java, Perl, and Python but they have the major disadvantage that they involve mapping two competing systems.
> putting the scientific software on top of a general purpose programming language (in the same way as we used to put subroutine libraries such as IMSL on top of FORTRAN and C) is still a good idea, maybe even a better idea than developing special-purpose little languages.
> It is also important not only to standardize on a language, but even on a GUI and a runtime system. That is why it is unfortunate that we still have two major and incompatible implementations of the S language, S-PLUS and R.
etc etc
> The main difference between Lisp-Stat and S/R is that between a set of commands added to a large and popular general purpose programming language and a special-purpose little language for statistical computing. My personal opinion is that it is unfortunate that the statistical community made the choice that it made, because more was given up than was actually gained. [at his time of writing. And now?]
This brings back memories. I used xlispstat for my engineering thesis on pruning of neural networks in 1992. I enjoyed it very much. I still have Luke Tierney's book LISP-STAT. I also used S-PLUS, a predecessor of R, which was also very good, but not open source.
On a quick scour of the source code at https://github.com/Lisp-Stat/lisp-stat, I can see that there's a `Copyright (c) 1991 by Luke Tierney` on `base/variables.lisp` in the initial commit. Interestingly, this code is released under the Microsoft Public License, which includes the text: "Copyright Grant- Subject to the terms of this license, including the license conditions and limitations in section 3, each contributor grants you a non-exclusive, worldwide, royalty-free copyright license to reproduce its contribution, prepare derivative works of its contribution, and distribute its contribution or any derivative works that you create" which would imply that the answer to the GP's question needs to be "yes".
Note: I have no idea who Luke Tierney is or what his contributions to this area might be, which is a failing on my part.
[1] https://lisp-stat.dev/docs/overview/
[2] https://lisp-stat.dev/about
[3] https://www.stat.auckland.ac.nz/~ihaka/downloads/Compstat-20...
[4] https://github.com/Lisp-Stat/XLS-compat
[5] https://lisp-stat.dev/docs/contributing/xlisp/
[6] http://www.winestockwebdesign.com/Essays/Lisp_Curse.html