This doesn't quite hit the mark. The example given in Klisp will work with an arbitrary number of arguments. What looks to be the second of two named arguments, e, is actually the dynamic environment from which $and? is called.
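For concreteness, here is a rough Python sketch of why that works — all names here (`ev`, `Operative`, `and_op`) are my own toy inventions, not Klisp's: an operative receives the caller's dynamic environment plus its operands *unevaluated*, so an `$and?` analogue is naturally variadic and short-circuiting.

```python
class Operative:
    """Wraps a Python function of (env, *unevaluated_operands) -- a toy fexpr."""
    def __init__(self, fn):
        self.fn = fn

def ev(expr, env):
    # Tiny evaluator: strings are symbols, other atoms self-evaluate,
    # tuples are combinations.
    if isinstance(expr, str):
        return env[expr]
    if not isinstance(expr, tuple):
        return expr
    op = ev(expr[0], env)
    if isinstance(op, Operative):
        return op.fn(env, *expr[1:])       # operands passed unevaluated
    args = [ev(a, env) for a in expr[1:]]  # ordinary call: evaluate operands
    return op(*args)

def and_op(env, *operands):
    # Evaluates operands left to right in the caller's env; stops at first falsy.
    result = True
    for form in operands:
        result = ev(form, env)
        if not result:
            return False
    return result

env = {"$and?": Operative(and_op), "x": 1, "boom": Operative(lambda e: 1 / 0)}
print(ev(("$and?",), env))                    # → True (no operands)
print(ev(("$and?", "x", 2, 3), env))          # → 3 (all truthy, last value)
print(ev(("$and?", "x", 0, ("boom",)), env))  # → False (boom never evaluated)
```

The last call is the point: the operative sees `("boom",)` as data and simply declines to evaluate it, which no ordinary function could do.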
Special Forms in Lisp [0] by Kent Pitman (1980) is about FEXPRs vs. MACROs:
> It is widely held among members of the MIT Lisp community that FEXPR, NLAMBDA, and related concepts could be omitted from the Lisp language with no loss of generality and little loss of expressive power, and that doing so would make a general improvement in the quality and reliability of program-manipulating programs.
> There are those who advocate the use of FEXPR's, in the interpreter for implementing control structure because they interface better with certain kinds of debugging packages such as TRACE and single-stepping packages. Many of these people, however, will admit that calls to FEXPR's used as control structure are bound to confuse compilers and macro packages, and that it is probably a good idea, given that FEXPR's do exist, to require compatible MACRO definitions be provided by the user in any environment in which FEXPR's will be used. This would mean that a person could create a FEXPR named IF, provided he also created an IF MACRO which described its behavior; the FEXPR definition could shadow [12] the MACRO definition in the interpreter, but programs other than the interpreter could appeal to the MACRO definition for a description of the FEXPR's functionality.
But, of course, if you just write the IF macro, it's pointless to then write a FEXPR, because the interpreter can use the IF macro just fine. Nobody is going to write every operator twice in a large code base, first as a FEXPR and then again as a macro. It's extra development work, plus extra work to validate that the two behave the same way and stay in sync.
Kent is writing very theoretically there and being very generous to the idea.
Single-stepping through macro-expanded code is perfectly possible. There is no debugging disadvantage to stepping through a macro-expanded control-flow operator compared with an interpreted one. In both cases, the single-stepping interpreter can know the source code location where the argument expressions came from and jump the cursor there, providing visual stepping.
Not to mention that compiled code can be stepped through in a source code view; countless programmers have been doing this in C and similar languages for decades. Given that we can write an if statement in C, compile it, and step through it in gdb, the position that we need a FEXPR to do the same thing in a Lisp interpreter is rather untenable.
but not a macro in the sense Common Lisp knows them... FEXPRs are first-class objects. They can be passed around as values of variables, whereas macros cannot.
The point isn't that eval or fexprs are good to use all over the place. It's interesting as an exercise in language design to see what is possible when they are available.
And to the fact that fexprs operate on second-class data. It's still a win that they are first-class objects: it means you can dynamically pick which fexpr (or applicative operator) to call on a set of arguments, which, like you said, can be selectively eval'd.
As far as the FEXPR itself is concerned, exactly the same thing is possible with (some-fexpr arg1 arg2 ...) as with (some-function env 'arg1 'arg2 ...). It's just that you have syntactic sugar there in not having to quote the arguments to suppress their evaluation.
The enabler of interesting semantics is not the FEXPR but the env: that the environment is available to the program itself, reified as an object. We can write code which somehow receives this env as an argument and then use it in eval. (Then it's basically an afterthought that we can put such code into functions, hook them to operator names, and have the interpreter dispatch them for us, and automatically pass them the environment.)
Given access to the environment, we can explore questions like, "what if we dynamically build a piece of code, say, based on some external inputs, and then evaluate it in the environment where it can see the local variables of the current function?"
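Python offers a direct (if read-only) analogue of that question: `eval` with no explicit environment arguments sees the calling function's local variables. A toy sketch — `discounted_total` and the formula string are made-up for illustration:

```python
def discounted_total(price, qty):
    # Imagine this string arrived as external input at run time.
    formula = "price * qty * 0.9"
    # eval with no env arguments uses the current frame's locals,
    # so the dynamically built code sees 'price' and 'qty'.
    return eval(formula)

print(discounted_total(10, 3))  # → 27.0
```

The local environment is implicitly available here; a reified environment object would go further, letting you carry it away and evaluate against it later, or mutate the bindings.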
Ultimately, this sort of thing is entertaining bunk, which could be why it disappeared: the evaluation-semantic equivalent of Escherian impossible waterfalls and such puns and ironies. (I just coined a term: trompe d'eval.)
Or, maybe the ancient Lispers were wrong; was there a tiny baby hiding in the bath water? Was it really just chauvinism (our main program is research into better compilers, and whatever gets in our path is to be pushed aside)?
Possibly, the Algol people and lexical scoping had an influence: lexical scopes encapsulate and protect. You don't want to reveal run-time access to the environment, which breaks the doctrines of lexical scoping by allowing a function to peek into or mutate another's environment, if only it receives that environment as an object. That would have been repugnant to the Wirths and Dijkstras of that heyday.
We have a less powerful version of this in the lexical closure, which binds a specific piece of code to a specific environment without revealing that environment as an object. The closure is reified; the environment isn't, being considered something lower-level that remains hidden under the hood (and subject to myriad implementation strategies which make it hard to model as a cohesive object).
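Python closures illustrate that contrast — `make_counter` is a made-up example: the code-plus-environment bundle is a value you can pass around, but the environment inside it never appears as an object you could hand to eval.

```python
def make_counter():
    count = 0            # lives in the closure's hidden environment
    def bump():
        nonlocal count
        count += 1
        return count
    return bump          # the closure is reified; count's environment is not

c1, c2 = make_counter(), make_counter()
print(c1(), c1(), c2())  # → 1 2 1 (each closure carries its own environment)
```

Each counter's binding of `count` is private and encapsulated; nothing outside the closure can look it up by name or rebind it, which is exactly the protection lexical scoping promises.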
Great response. I admit I am interested in fexprs because of the syntactic sugar; not having to quote arguments means code can look more like words at the top level, and smaller functions can deal with how they are interpreted.
As far as the search for compilers is concerned, I think what is considered powerful notation should be kept around, even if it's tough to compile at the moment.
If you have a compiled Lisp with an interpreter also, adding fexprs creates the interesting possibility the fexprs themselves may be compiled.
Suppose the Lisp is bootstrapped in some other language, like C or assembler. The special operators in the interpreter are written in C. If you write the IF operator in C, and that operator itself needs an if operator, it uses the C if statement or ternary operator. (Obvious, right? No level confusion.)
If you add FEXPRs, they are themselves interpreted code: interpreted code controlling the interpretation of code. If you write an IF FEXPR and it needs an if operator, and you use IF, then you get infinite regress/recursion: while trying to interpret IF, the IF FEXPR calls itself, and then runs into the same situation, calling itself again, ...
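A Python caricature of that level confusion, with hypothetical names: a conditional implemented in terms of the *host* language's conditional bottoms out, while an "interpreted" conditional that uses itself to branch regresses forever.

```python
import sys

def builtin_if(test, then_thunk, else_thunk):
    # Like the C-coded IF special form: the host's own conditional does
    # the branching, so there is no level confusion.
    return then_thunk() if test else else_thunk()

def fexpr_if(test, then_thunk, else_thunk):
    # Like an IF FEXPR whose own body needs IF and uses IF:
    # it calls itself to branch, and never bottoms out.
    return fexpr_if(test, then_thunk, else_thunk)

print(builtin_if(True, lambda: "yes", lambda: "no"))  # → yes

limit = sys.getrecursionlimit()
sys.setrecursionlimit(100)    # keep the inevitable blowup small
try:
    fexpr_if(True, lambda: "yes", lambda: "no")
except RecursionError:
    print("infinite regress")
finally:
    sys.setrecursionlimit(limit)
```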
If the Lisp has a compiler and macros, then you can write an IF macro and use it to compile that FEXPR. Then, when the interpreter evaluates an IF form, it dispatches a compiled function. When that function needs IF, it's just running compiled code, not recursing any more; the IF FEXPR is only needed for interpreted code.
FEXPRs can do some "impossible things", and if you want to do those things fast, compiled FEXPRs could be useful.
In fact, that ought to work not just in Racket, but in any Scheme implementation that conforms to R5RS or later (or even R4RS plus appendix).
The Klisp authors picked a pretty bad example for demonstrating the power of fexprs. You can think of them as being first-class macros in a way [0].
Joe Marshall demonstrated that fexprs can be divided into two distinct classes: safe and unsafe [1]. He showed that all safe fexprs can be implemented as macros with no loss of expressiveness. (An unsafe fexpr is one that relies on metacircular fixpoints, whatever that means.)
[0]: That's not exactly true. Macros are syntactic transformers, whereas fexprs are procedures that can syntactically modify and selectively evaluate their arguments in a given environment. Despite this semantic difference, there's a very large overlap in their use-cases.