There's a bit of subtext in the title that may be worth drawing attention to. (I dunno, maybe it's all obvious to everyone interested enough in this stuff to be reading the dissertation. But perhaps not, so here goes.)
In the 1970s, after their paper introducing Scheme, Sussman and Steele published a series of papers showing how to implement a bunch of things "on top of" a functional-programming substrate, many of them having titles or subtitles of the form "Lambda: the ultimate X". (The first one was "Lambda: the ultimate imperative", the second "Lambda: the ultimate declarative", others "Lambda: the ultimate GOTO" and "Lambda: the ultimate opcode".)
The idea of these papers wasn't merely "look, you can implement X in Lisp", but something more like "You might be inclined to think of X as simple and primitive and cheap, and Lisp as sophisticated and complicated and expensive -- but that's mostly prejudice, an accident of the particular hardware we happen to have built first, and using something like Lisp as your lower level and implementing these things on top is better than doing it the other way around".
So writing something -- especially something about a Lisp implementation -- with a subtitle of the form "X: the ultimate Y" where X is not lambda is pointedly one-upping that.
Especially as the thing that $vau is allegedly providing the new best way to do is ... function abstraction, which is exactly what lambda is for.
The idea is that in a function call, the arguments are normally evaluated before the call, as usual. But if a formal argument in the function's declaration is prefixed with an ampersand, then the corresponding argument is not evaluated at the call site; instead it is packaged up as a zero-argument lambda. That argument can be called later using the normal function-call syntax, but there is also syntactic sugar: the * operator calls a zero-argument function.
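Roughly, in Python terms (a sketch of the idea, not of the language above): the caller wraps the argument expression in a zero-argument lambda, and each use of * becomes a call, re-evaluating the expression every time.

```python
# Sketch of the &-parameter idea: the caller wraps the argument in a
# zero-argument lambda ("thunk"); the callee forces it with a call,
# which is the analogue of the * operator.
def twice(thunk):
    return thunk() + thunk()   # each "*" re-evaluates the argument

calls = []
def noisy():
    calls.append(1)            # side effect proves re-evaluation
    return 21

print(twice(lambda: noisy()))  # 42
print(len(calls))              # 2 -- noisy ran once per "*"
```

The side effect makes the call-by-name behavior visible: the argument runs once per use, not once per call.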
Further, when a variable is evaluated, you get both its value and a reference to it. This lets a variable be evaluated before it is assigned to, which sounds weird but works with the above: you can write a "set" function that does the same thing as assignment:
# Define set function
fn set(&left, right) {
    *left = right
}
# Set x to 10
set x 10
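The value-plus-reference idea can be mimicked in Python with a (getter, setter) pair over shared storage; make_cell and set_fn below are hypothetical names standing in for what the language does automatically.

```python
# Sketch: "evaluating a variable" yields a (getter, setter) pair, so a
# plain function can perform assignment through its argument.
def make_cell(value):
    box = [value]
    return (lambda: box[0],                 # read the variable
            lambda v: box.__setitem__(0, v))  # write the variable

def set_fn(left, right):
    getter, setter = left
    setter(right)              # the analogue of "*left = right"

x = make_cell(0)
set_fn(x, 10)                  # the analogue of "set x 10"
print(x[0]())                  # 10
```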
Here is how you could define switch using primitives (if the last formal argument is prefixed with an ampersand, then any extra arguments are treated the same way, but can be accessed only via argv):
fn switch(&val) {
    var rtn, x, v = *val
    x = 1
    while x < len(argv) - 1 {
        if *argv(x) == v {
            return *argv(x + 1)
        }
        x += 2
    }
    if x < len(argv) {
        return *argv(x)
    }
    return void
}
Example use (8 arguments passed to switch; each set of braces is an argument):
Right now the delayed arguments are evaluated in the closure of the function call (so they are lexically scoped). The wrap/unwrap concept from this paper gives me ideas on how to make this more versatile...
> Further, when a variable is evaluated, you get both its value and a reference to it. This allows variables to be evaluated before being assigned to in assignments. This sounds weird, but it works with the above. You can make a "set" function which does the same thing as assignment
Have you read Strachey's "Fundamental Concepts in Programming Languages"[1]? You seem to have rediscovered lvalues.
Well, the language is not LISP, meaning that the source is immediately compiled (to byte code in this case) and provides no access to an AST that can be manipulated.
It's interesting to see how far you can get without eval.
The evaluate operator would require an environment parameter in order to avoid capture issues in the same way as passing down the lambda.
If you're passing around a piece of code with an environment, you effectively have a lambda already: one that is condemned to slow interpretation.
Without the environment passing needed for proper reference semantics, the implementation wouldn't be correct call-by-name; it would not pass Knuth's man-or-boy test.
As you can see, the make-cbn-val constructor packages up the argument as an object (a "thunk") that contains a zero-argument lambda for getting the value and a one-argument setter.
Inside a call-by-name function, references to all the call-by-name arguments are converted according to the pattern X -> (cbn-val X), using a lexical macro.
(cbn-val ...) syntax is treated as a place thanks to the (defplace (cbn-val thunk) ...) definition.
In the following, the (inc a) applied to the call-by-name argument a turns into (set-cbn-val #:g0098 (succ (cbn-val #:g0098))): get the value, find the successor, store the value.
An empty function is defined, named by a gensym. Two steps later in the progn, this function binding is overwritten with the real function. A macro called foo is generated which provides the syntactic sugar for calling foo, automatically converting the arguments to thunks. The syntax (foo a b c) turns into (cbn #:g0091 a b c), which is another macro operator for doing call by name.
The actual function starts at (lambda (#:g0092) ...). The (let ((foo)) ...) part simulates another Algol feature: returning a value is done by assigning to the function name. Our function returns nil because it doesn't assign anything to foo.
The make-struct call is an expansion of the new macro operator; that comes from the object system:
11> (macroexpand-1 '(make-cbn-val x))
(new cbn-thunk
get (lambda () x)
set (lambda (#:g0104)
(set x #:g0104)))
In action: watch x get incremented by foo:
12> (defvar x 10)
x
13> (foo x)
nil
14> x
11
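The transcript can be mimicked in Python with the same thunk shape (hypothetical names; the TXR version uses a struct plus a macro-generated place):

```python
# Python analogue of make-cbn-val: a thunk bundling a getter and a
# setter over the same storage, so an increment can read through it
# and write back through it.
class CbnThunk:
    def __init__(self, get, set):
        self.get, self.set = get, set

def inc(thunk):
    # the analogue of (set-cbn-val g (succ (cbn-val g)))
    thunk.set(thunk.get() + 1)

env = {"x": 10}
t = CbnThunk(lambda: env["x"], lambda v: env.__setitem__("x", v))
inc(t)
print(env["x"])                # 11, as in the transcript
```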
Of course, this is objectionably hacky because foo isn't a function; we can't pass it around or anything.
The goal was only to solve the Rosetta Code "Man or boy" task, with a focus on actually emulating the language features to some degree of fidelity, so that the program could truly be a transliteration of Knuth's code into Lisp expressions, almost node for node.
I would like to see a mature language using fexprs. They are what I first thought about when I heard the word homoiconicity. Macros seem inelegant in comparison (since there is a separate layer of code that operates on code, and also you don't get self-modifying code, just compile-time transformations).
If something like Symbol -> AST -> Closure is a type you accept for lambda, I guess vau's type would be the same except with two Symbols: one for the argument (which, unlike lambda's, isn't evaluated before being passed to the closure), and one for the dynamic environment, which you'll need in order to evaluate the argument.
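A toy rendering of that shape in Python (entirely illustrative names, not Kernel's actual semantics): an "operative" receives its operands unevaluated together with the caller's environment, so even if can be an ordinary binding rather than a special form.

```python
# Toy operative evaluator: symbols are strings, calls are lists.
# An operative gets its operands unevaluated plus the dynamic
# environment; everything else is an ordinary applicative call.
def evaluate(expr, env):
    if isinstance(expr, str):              # symbol lookup
        return env[expr]
    if not isinstance(expr, list):         # self-evaluating literal
        return expr
    op = evaluate(expr[0], env)
    if getattr(op, "operative", False):
        return op(expr[1:], env)           # operands passed unevaluated
    return op(*[evaluate(a, env) for a in expr[1:]])

def if_(operands, env):   # 'if' as an operative -- no special casing
    test, then, alt = operands
    return evaluate(then if evaluate(test, env) else alt, env)
if_.operative = True

env = {"if": if_, "lt": lambda a, b: a < b, "yes": "YES", "no": "NO"}
print(evaluate(["if", ["lt", 1, 2], "yes", "no"], env))   # YES
```

Only the chosen branch is ever evaluated, because if_ itself decides what to pass back into evaluate.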