Technical Issues of Separation in Lisp Function Cells and Value Cells (1988) (nhplace.com)
85 points by adgasf on Oct 26, 2016 | 47 comments



My favorite quote:

    We feel that the time for such radical changes 
    to Common Lisp passed, and it would be the job 
    of future Lisp designers to take lessons from 
    Common Lisp and Scheme to produce an improved 
    Lisp. 
I don't know if it is truly indicative of a general outlook, but the implication I read is that future Lisps are expected and encouraged.

An open question I have, and if this is covered somewhere already, I'd love to see it. It is interesting to me that while Lisp was looking to condense namespaces dramatically, it seems many other languages went the opposite route. Is that a trend I just don't understand, or is there something inherent in Lisp that favors fewer namespaces?


I dunno about expected and encouraged. Neither really, nor discouraged though. Really the point was that this was written in the context of a particular standardization effort and was to say "this is out of scope". As it happens, RPG and I disagreed completely on whether Lisp1 or Lisp2 was a good idea. Most readers read this like a Rorschach, assuming that the article confirms their own beliefs, not stopping to think that the article just presents two sides of an issue. In the context of the design process, Lisp2 won out (and I happen to be happy about that). But it is reasonable and appropriate for both to thrive if there are users who like those sorts of things.

I created the Lisp1/Lisp2 terminology as a dodge because we started out writing this paper using terms something akin to "Scheme-like" and "CL-like", and Scheme was winning for reasons unrelated to the issue at hand. I wanted people to separate their warm fuzzies for Scheme from the particular design choice, as I really think there are very strong and reasonable reasons to have a Lisp2. The net result was that in the context of CL, those of us liking Lisp2 successfully argued that it was an unnecessary change to the language from a stability standpoint.

This paper is called "Technical Issues..." because the committee document (titled "Issues...") was longer and went into other issues that RPG didn't think were pertinent to formal publication.

I frankly don't understand (or perhaps don't agree) with any claim that Lisp was looking to condense namespaces. By convention and John McCarthy's official request, Lisp is the name of a family of languages, not of a particular language. Lisp has no preference. It spans languages with broadly varying points of view.

Ironically, CL is really at least a Lisp4, by the way. block tags and go tags are legitimately different namespaces. They just messed up the discussion so we left those out. Whether you also count the type/class homes as a namespace is something that doesn't fit neatly into the terminology so is the reason I say "at least a Lisp4". Left to personal subjectivity. -kmp


I'm not trying to start a flamewar here but could you go into why your preference is with Lisp2 instead of Lisp1?

A lot of commenters here have a strong preference to Lisp1's so I'd be happy to hear a different point of view.

(Disclaimer: I'm mostly a CL user and appreciate it being a Lisp2.

Also, nice to see you here! I always really liked your posts in comp.lang.lisp.)


Not having to name your list variables "lst" is a big one.


Yes, I am with Grue on this one. The overcrowding of namespaces means the very common case of receiving a thing that is known by its type (list as "list" or "string") occludes a common constructor that you might want and makes you spell it badly. I really very strongly prefer proper spellings of things, but moreover I think this is "natural" in a way that is neglected in this discussion.
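A minimal sketch of that point in CL (PAIR-WITH-LENGTH is an invented name for illustration):

```lisp
;; In a Lisp-2, a parameter named LIST does not occlude the function
;; LIST, so there is no need to misspell it as LST.
(defun pair-with-length (list)
  (list (length list) list))

(pair-with-length '(a b c))  ;; => (3 (A B C))
```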

Simplicity can be measured in different ways, and there is a tendency in the Scheme community to think of simplicity as a measure of the size of a formal semantics. But two things come to mind about that metric: (1) I have often claimed that small languages make big programs, and big languages make small programs. So to some degree it's the case that tiny languages mean you have to laboriously reconstruct as a program or library what the language did not let you do in its zeal to not offer functionality. (2) Human beings are not designed in the Scheme way. The natural languages include no language at all, out of many hundreds, in which words have only one meaning and do not enjoy contextual distinctions. So in my view, simplicity can also measure the lack of dissonance between the program model and the brain model. My brain is, I believe, well-adapted to understand word meanings differently for noun and verb, and Scheme affirmatively chooses not to rely on that, leaving my brain bothered by the lack of ability to use its natural mode and forced to use what seems more cumbersome. I don't want to reach for more words, I want to use obvious context. And I claim that this is at least a valid way of thinking, if not uniquely valid. I'm not trying to disallow the way others think (or think they think, we being poor at actually introspecting sometimes), but to say that the way others think is no more or less valid than my own.

This is, as Aerique implies, somewhat a religious matter and not to be warred over. So I don't want to provoke debate on whether my way is THE right way, only A right way among MANY simultaneously right ways.

At another level, though, this is the very essence of what it is to be a CL user, or a member of the CL community, at least in my mind: The language expressly accommodates and encourages a pluralistic society in which multiple paradigms are simultaneously supported. I don't want to accuse the Lisp1 community of being a bunch of intolerant folks, but I will note that in my mind it's hard to escape the sense that it bears a striking resemblance to that. It's a community that wants people to learn the Scheme way, not a community that wants to accommodate the various ways people naturally think.

I keep coming back to the old joke "There are two kinds of people in the world: People who think there are two kinds of people in the world, and people who don't."

CL has preferred casifications, but expressly goes out of its way to accommodate others. It has ways of thinking about loops in various paradigms. It has macros and read syntax things for letting you override almost any decision that we could figure a way for you to override. There are places where we do a poor job or an incomplete job, but that's more an artifact of the energy and funding than of design. There was pressure to reduce out the redundancy and we opted not to.

People think in multiple namespaces. We know that when you license a production you can create a license (British respelling notwithstanding, it is possible to use the same word in different contexts without confusing whether it is a noun or a verb). In Spanish, a normal speaker would not flinch at the sentence "Como como como." (I eat how I eat, where the middle "como" is "how" and the outer two are "I eat".) These are natural, and so simplicity in this context, at least for me, is being able to write things the way I think.

The imagined need to crowd out these names doesn't really come up in CL. Nouns and verbs mostly operate in different orbits and don't interact, and that feels pretty natural. Languages are more about ecologies than about à la carte features, and there's a danger in liking a feature and thinking it can just be injected into an ecology and will behave either as expected or even just pleasantly.


Although I'm with you on the issue of some Lisp-1 people being parochial, we can argue that "como como como" is an example of something bad that we don't necessarily want in a programming language. Not everything natural, in natural language, is good in a computer language. Never mind that a word can be a noun and verb in different ways: in natural languages, even if the word has the same role, like noun, it can have different "bindings" at the same time in that same space due to homonyms. Do you want the same symbol to have several completely unrelated global bindings in the same space, the dispatch being resolved based on semantic context (perhaps not even known until run-time)? Ouch.

Simply the fact that you have exactly two namespaces, in each of which there can only be exactly one binding for a symbol at a given lexical level, is different from natural languages.

I'm convinced that Lisp-2 and Lisp-1 have merit, and made a Lisp dialect that offers both, in a reasonable way that manages to be relatively clean.

An ideal Lisp dialect supports the reasonable request of him or her who wants (list list) to Just Work, and it supports that programmer also who has a function-valued variable f and just wants (f x y) to work. That ideal, if taken too literally, is contradictory, but an acceptable compromise is to have [f x y] work, where [] changes the evaluation of atomic forms that are bindable symbols to Lisp-1 style (utterly, with deep support from the macro-expander and evaluation semantics).


I am totally comfortable with funcall. It is my preference, not something I bargain down to. I don't have a need to have (f x y) "just work". You needn't agree. I acknowledge differing tastes here. But you need to, as well. It is not proper to speak as if we obviously have the same goals. We do not. It is not that we have the same goal and are differing on tactics. We have actually different goals. That is important to see or else you cannot wrap your head around why the issue played out as it did.

As you say, notational games can be played. And there are other approaches. Just because people have different goals doesn't mean that's the end of discussion. But an honest discussion must recognize the legitimate desires of both sides without disparaging one side. Getting inside another's head is important.

I don't pass functional arguments a lot. CL programmers often don't. Where Scheme would pass functions, CL often uses keyword arguments. In sort, for example, we customize behavior not by passing some function but by passing a list of keywords. There are places we pass functions, but it is not our ordinary business. So calling it out, by doing (funcall list...) rather than just (list...) gives a signal that something unusual is happening. If you plan for this not to be unusual, you will go a different way. But we do not all aspire to pass functions at every point. I like having it for certain purposes, but doing things in other ways as well.
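A small sketch of that keyword style (the data here is invented for illustration):

```lisp
;; Where Scheme style might pass a custom accessor function, CL often
;; customizes one general function with keyword arguments instead:
(find 5 '((1 a) (5 b) (9 c)) :key #'first)  ;; => (5 B)
```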

For another take on a middle ground than the character syntax you describe, see my http://www.nhplace.com/kent/Half-Baked/spiel/index.html


> Do you want the same symbol to have several completely unrelated global bindings in the same space

Absolutely. Yes.


Are you sure? Global, in the same space? As in X having three different bindings as (say) a variable, at the same environmental level ("top-level")?


Thanks. Not sure how notifications work here, but I replied to your comment as a reply to Grue below this, so check the page.


My claim for the Lisp community wanting to condense is strictly colored by my (possibly incorrect) view that every Lisp since has been a Lisp1.

I have not personally taken too deep of a dive on Lisp1/Lisp2. Reading your response, I get the impression that this was actually a somewhat contested choice. I have to admit, I'm not clear on why I would personally care. Are there hard reasons to prefer one over the other? (In particular, I have not seen any advantages for either way explored from a user perspective.)

I'll have to read this particular paper in more depth, so apologies if the answers I'm asking for are in this article.


My own take on it, in a nutshell, is that while Lisp-1 has a certain elegance which is attractive, it makes the macro hygiene problem much worse (sections 6 and 13). I've seen it claimed that there's never been a really good solution for this in Scheme, though I've never written much Scheme and haven't attempted to evaluate this claim personally. In any case, I'm very much accustomed to how one does things in Common Lisp and don't mind either its Lisp-2 namespace-crossing operators ('function' and 'funcall', section 5) or its "low-level" macros.
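For readers new to the terminology, a quick sketch of those two namespace-crossing operators (INC and *F* are invented names):

```lisp
;; Crossing between the function namespace and the value namespace:
(defun inc (n) (+ n 1))

(defvar *f* #'inc)   ;; FUNCTION (#'): function namespace -> value
(funcall *f* 41)     ;; FUNCALL: value -> call position; => 42
```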

I also tend to think that far more electrons have been spilled over this issue than it deserves. It's an engineering decision with tradeoffs either way; there's no god-ordained right answer. I'm sure that if I were trying to decide whether to write a large system in Scheme, its being a Lisp-1 would be the least of my concerns.


Hygiene is still needed in CL, but is accomplished in different ways that are not so explicit. Packages and namespaces end up doing most of the heavy lifting, so CL's macro system, while it looks more raw than Scheme's, is still pretty robust. And many of us just find it simpler because it involves a lot less mechanism. (Odd that in that isolated realm of macro processing, the Scheme system is complicated and CL is not. You never get anything for free--you just trade one thing for another. Laws of thermodynamics and all that. They're hard to escape.)
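A sketch of that "by hand" hygiene using the standard GENSYM (SWAP is an invented macro for illustration):

```lisp
;; GENSYM keeps the macro's introduced binding from capturing any
;; variable in the caller's code:
(defmacro swap (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (let ((x 1) (y 2)) (swap x y) (list x y)) => (2 1)
```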


I did find that most all of my questions were directly addressed in the paper on a slow read. I should have picked up on them in a fast read.

Your second paragraph seems to hit it on the head, though. Since most of my exposure was to Scheme and Racket, I have not looked deeply at Lisp2s. However, having switched to Emacs and written some elisp, I have yet to really be annoyed by the fact that it is a Lisp2.


> future Lisps are expected and encouraged

That's right. There are two reasons that Common Lisp never got revised:

1. AI winter hit and the money dried up

2. Common Lisp as it stood when it was first released turned out to be a very (very!) good design, in particular because it's almost infinitely flexible. You can implement almost anything within CL because you have control over every part of the code-processing process. You can modify the lexer by making new readtables, and you can change the way the AST is processed by writing macros. There is almost nothing you can't do in CL. (There are a very few things you really can't do without changing the standard, but even those are generally possible using vendor-specific extensions, or by making small tweaks to an implementation.)
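A sketch of the readtable point, assuming a fresh image where #\[ and #\] are unclaimed:

```lisp
;; Adding read syntax: make [a b c] read as (LIST a b c).
(set-macro-character
 #\[ (lambda (stream char)
       (declare (ignore char))
       (cons 'list (read-delimited-list #\] stream t))))
(set-macro-character #\] (get-macro-character #\)))

;; After this, the reader turns [1 (+ 1 1) 3] into (LIST 1 (+ 1 1) 3).
```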

FWIW, here's my take on bringing CL into the 21st century:

https://github.com/rongarret/ergolib

and the beginning of an associated book:

https://github.com/rongarret/BWFP

> is there something inherent in Lisp that favors fewer namespaces

Not really. It's pretty easy to make a Lisp2 act like a Lisp1 when you want it to (https://news.ycombinator.com/item?id=12801372). Much harder to go the other way.


>You can implement almost anything within CL because you have control over every part of the code-processing process.

Almost. There is one big missing piece: a standard code walker. If CL had a standard code walker that could correctly walk through any piece of standard CL code, it would fully deliver on the promise of Lisp, and you could build almost anything on top of it with ease.


True, but 1) there are open-source code-walkers that do a pretty good job, 2) many implementations expose their internal code walker in one way or another and 3) you can do somewhere between 90 and 99% of what you'd want to do with a code walker using MACROLET.
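A sketch of point 3, with invented names (AREA, WITH-DOUBLED-AREA):

```lisp
;; MACROLET as a poor man's code walker: the implementation's own
;; macroexpander visits every form in BODY, so any literal (AREA ...)
;; call below gets rewritten without us walking the code ourselves.
(defun area (r) (* pi r r))

(defmacro with-doubled-area (&body body)
  `(macrolet ((area (r)
                ;; reach the global function explicitly, since the
                ;; local macro now shadows the name
                `(* 2 (funcall (symbol-function 'area) ,r))))
     ,@body))

;; (with-doubled-area (area 1)) calls the global AREA and doubles it.
```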

My pet peeve is that ((...) ...) is a syntax error, so you can't implement Scheme in Common Lisp without violating the standard. But see http://www.flownet.com/ron/lisp/combination-hook.lisp for an example of how easy it is to hack at least one implementation to support this.


The lack of mandatory tail calls and call/cc also make implementing Scheme in CL hard, although the former is actually present in most mainstream implementations. Various flavors of continuations can be implemented with macros, but they require coding changes and are not as general as Scheme continuations.


We tried to do a standard code walking / macro introspection thing, but it had so much detail we realized we were going to get wrong that we backed out of it rather than freeze something bad. It's appropriate to a level 2 standard, just as the MOP was.


It's not the only thing I would add, but I agree.

The other two pieces I wish had been standardized are coroutines, and the internal syntax used by backquote.


Well, many have argued that Clojure is that future lisp. (It's a lisp-1, incidentally)


Yeah, Clojure was what I would expect to be the popular frontrunner here. Evidently, a few years ago Arc would have been considered.

My point was more that it intrigued me that the view given was not that one or the other was correct and everyone should move to it. More that both sides had lessons learned that could be taken to the next iteration.

This seems different from most other language discussions which are about building on more and more to the given language. Maybe I'm just reading the tea leaves, as it were, incorrectly.


Arc is also a Lisp 1.

There's also Mark Tarver's Shen (http://www.shenlanguage.org/). I don't know whether it's a Lisp 1 or 2, but I wouldn't be surprised if it's also a Lisp 1. I have read about Shen but not programmed in it; I see it's added quite a few features absent from Common Lisp and the newer Lisps, which I think are worth having.


I decided to read up on Shen, and followed some reddit threads about it, and the consensus seems to be that it is a very inefficient language and many of its abstractions are not actually portable between platforms, i.e. the language changes a bit depending on where you use it.


I read the article some time ago.

I have my own Lisp, Emblem, which I released well over ten years ago. (There were few takers. I have improved it considerably since then, but not made a re-release.) I decided that it shouldn't be gratuitously different from Common Lisp. However, it is a Lisp 1. The reasons are: (1) my programming style was never inconvenienced by the lack of separate function and value cells when programming in Scheme instead of Common Lisp; and (2) Emblem uses a world (i.e. an image) to store its compiled code in, and an extra, rarely used, cell on every symbol increases the image size.


In TXR Lisp, I implemented a design which achieves a harmony between the separation of function/value bindings, and their union. You can program in both styles in the same scope.

http://www.nongnu.org/txr/txr-manpage.html#N-02DC9E04

TL;DR: The underlying Lisp dialect is a Lisp-2. However, when a form is denoted by square brackets, Lisp-1-style evaluation applies to each of its positions: every position is evaluated as a form, and any form which is a symbol is evaluated in a conflated single namespace. This rule does not recurse into the forms: any nested forms also have to use square brackets to work the same way.

Thus we can do:

  [mapcar cons '(1 2 3) '(a b c)] ;; no (fun cons)

  [f arg] ;; f is a variable
And so on.

This works by making [...] a syntactic sugar for (dwim ...) where dwim is a special operator. This operator is deeply integrated into the language (both macro-expansion semantics and evaluation), which is why it is able to change the semantics of name lookup.

A rule I imposed is that macros are not allowed in this notation:

  [let ((a 3)) ...] ;; nonsense: let not recognized as operator here
so it actually implements the Lisp-1 mantra literally: "evaluate all positions of the form equally". There is no exception for recognizing a function-like macro in the leftmost position of a form, but not doing so in other positions.


It gets tricky. Bracket/dwim forms can designate access to objects, because sequences are funcallable. So we can do:

  (set [a b] c)
What does that look like?

  1> (sys:expand '(set [a b] c))
  (let ((#:g0138 (sys:lisp1-value
                   b))
        (#:g0137 c))
    (sys:lisp1-setq
      a (sys:dwim-set (sys:lisp1-value
                      a) #:g0138
                      #:g0137))
    #:g0137)
The a and b forms turned into (sys:lisp1-value a) and (sys:lisp1-value b). These secret operators bring in the name lookup semantics anywhere it is required, disembodied from the dwim operator. Also sys:lisp1-setq is used to store the updated sequence back into a. sys:setq can't be used because a could be a function binding in the given scope.

There are hacks you can do in Common Lisp to achieve a bit of a Lisp-1 style, but not to such detail. See this piece of documentation under labels and flet:

http://www.nongnu.org/txr/txr-manpage.html#N-0209307D

"Furthermore, function bindings introduced by labels and flet also shadow symbol macros defined by symacrolet, when those symbol macros occur as arguments of a dwim form."

A test case for this is the following, which must produce (1 1 1):

  (let ((g (lambda (x) 0)))
    (symacrolet ((f g))
      (flet ((f (x) 1))
        [mapcar f '(a a a)]))) ;; Lisp-1 f: flet shadows symbol macro
However, this must produce (0 0 0):

  (let ((g (lambda (x) 0)))
    (symacrolet ((f g))
      (flet ((f (x) 1))
        (mapcar f '(a a a))))) ;; now Lisp-2 var reference


For those who are more used to non-lisp scripting languages, I've found the awkwardness of functions in a lisp-2 (Emacs Lisp) compared to a lisp-1 (Scheme) to be similar to the awkwardness of functions in PHP compared to e.g. Javascript or Python.


PHP's functions are interesting. You can always use variables as functions, but the reverse isn't true. So, `$foo()` is fine, as is `bar($foo)`, but you can't do `$foo(bar)` or `baz(bar)`.


C would be exactly the same way, if functions didn't implicitly take their own address, so that the & operator had to be used:

   var();     // okay
   func(var); // okay
   var(func); // nope: &func required
   func2(func); // ditto

C++ is like this with respect to member functions, which require the & operator to be used:

   foo(myclass::staticfun); // error
   foo(&myclass::staticfun); // good
(As it is, even calls in C work via a function pointer being involved. In foo(arg), foo is a primary expression denoting a function (if that is how foo is declared in this scope). This primary expression produces a pointer-to-function. The (arg) postfix operator then applies to the pointer.)


You can do `$foo('bar')` and `baz('bar')` though. Which is horrible ;)


In fact, come to think of it, you can do `$foo(bar)` and `baz(bar)`, but only because undefined constants evaluate to a string of their name by default. (Yes, it's awful.)


But isn't that what you have to do to make baz($foo) work also? I mean, how do you put a value into foo so that baz($foo) calls bar? I'm guessing, using foo = 'bar', no?


> I mean, how do you put a value into foo so that baz($foo) calls bar? I'm guessing, using foo = 'bar', no?

You could, but that makes $foo a string, which is horrible. Since PHP has actual function values, it would be much nicer to use them, and hence do:

    $foo = function($x) { return bar($x); };
This is annoying though, since we're forced to perform an eta-expansion of bar just to avoid the syntax error that `$foo = bar;` gives.

It's like writing `0+$n` to avoid getting a syntax error when writing `$n`; completely unnecessary, and most likely a bug in the parser.

Unfortunately this isn't always possible either, since it depends on the signature of bar. A more flexible example would be:

    $foo = function() { return call_user_func_array('bar', func_get_args()); };
This still doesn't preserve things like default arguments though, and we're forced to use a string as a function too (although at least it's encapsulated, so it can't cause us to encounter random 'bar' strings in our context and wonder whether or not they're meant to be strings or functions)

:(


But we started with the problem baz('bar') being horrible; so can't that be substituted here too then?

  baz(function($x) { return bar($x) })
it's just verbose and indirect: we're making a wrapper (which looks like it may be an extra object at run-time) just to get around a quoting problem.


Yes, that's what I meant when I said:

> This is annoying though, since we're forced to perform an eta-expansion of bar just to avoid the syntax error that `$foo = bar;` gives.

> It's like writing `0+$n` to avoid getting a syntax error when writing `$n`; completely unnecessary, and most likely a bug in the parser.

Maybe we differ on where the problem lies; "a quoting problem" sounds like you think the script is at fault; I would say that the PHP language/implementation is at fault.


Why would functions in a lisp-2 be more awkward than in a lisp-1?

PHP has been too long ago for me to understand your comparison.

Would you please explain it differently?


In a lisp-1:

  (define inc (lambda (n) (+ n 1)))
When referring to `inc` (assuming no shadowing in a different scope), `inc` refers to a function object, so we can do:

  (define g (lambda (f) (f 3)))
  (g inc) ;; yields 4
In a lisp2, it won't work the same way:

  * (defun inc (n) (+ 1 n))
  * (defun g (f) (f 3))
  ; in: DEFUN G
  ;     (SB-INT:NAMED-LAMBDA G
  ;         (F)
  ;       (BLOCK G (F 3)))
  ; 
  ; caught STYLE-WARNING:
  ;   The variable F is defined but never used.
  ; in: DEFUN G
  ;     (F 3)
  ; 
  ; caught STYLE-WARNING:
  ;   undefined function: F
  ; 
  ; compilation unit finished
  ;   Undefined function:
  ;     F
  ;   caught 2 STYLE-WARNING conditions
Instead we can define it like:

  * (defun g (f) (apply f '(3))) ;; specifying the parameters as a list
  * (g 'inc) ;; get the symbol for inc, as a variable it has no binding
  4
  * (g inc)
  debugger invoked on a UNBOUND-VARIABLE in thread
  #<THREAD "main thread" RUNNING {10028F6AF3}>:
    The variable INC is unbound.
There are some benefits, though it's been so long since I've used either Scheme or Common Lisp that I can't clearly articulate them; I recall at the time preferring lisp2 (or lispN, really).


It is not hard to make a Lisp2 act like a Lisp1 when you want it to, e.g.:

    (defmacro define (name&args &body body)
      (let ((name (car name&args))
            (args (cdr name&args)))
        `(defun ,name ,args
           (flet ,(mapcar (lambda (arg) `(,arg (&rest args) (apply ,arg args)))
                          args)
             ,@body))))
    
    ? (DEFINE (F X) (X 1))
    F
    ? (f '1+)
    2
It is much (much!) harder to make a Lisp1 act like a Lisp2.


I'm pretty sure that the lispy way to write the example would be:

    (defun g (f) (funcall f 3))
    (g #'inc)
Which I think is actually prettier than the Scheme:

    (define g (lambda (f) (f 3)))
    (g inc)


Thanks. Couldn't remember funcall but I knew there was a better way.


In any language with multiple namespaces, anything inhabiting some non-primary namespace (i.e., not the matrix containing normal variables) feels second class. Emphasis on "feel", and hence "awkward" over "technically limiting".

Additionally, it makes the transition to lambda calculus more difficult to grok, which somewhat hinders understanding. It's pretty trivial to make some simple rewrite rules from (most of) Scheme to lambda calculus: it's a two-hour project at most. Doing this for CL is much less intuitive/elegant, possibly making it more difficult for those with that sort of theoretical background.


It is expressly not the goal of CL to be all things to all people, and yet to be many things to many people. Scheme has a kind of fixed set of things it definitely wants to be and sacrifices others, it just does it in a different shape so that the particular examples you pick are easy.

One way I sometimes conceive it is that in any given language there are a certain number of small expressions and a certain number of large ones. Differences in semantics don't make things non-computable (which is why Turing equivalence is boring) but they change which expressions will be easily reachable. There are certain things Scheme wants to be able to say in not many characters, and different things CL does. Neither is a flawed design. But they satisfy different needs. It's possible to dive into either and be fine. As others have pointed out here, it's not as big a deal in practice as it seems like in theory. What matters in practice is to have an intelligible and usable design, which both languages do. But to assume that the optimal way to say something in one language should stay constant even if you change the syntax and semantics of the language is to not understand why you would want to change the syntax and semantics of the language.


In Lisp-1 dialects such as Scheme, macros are not in the "matrix containing normal variables".

You cannot say with a straight face that "all elements of a form are evaluated equally and then the rest values are applied as args to the first value" because the counterexample (let ((a 42)) a) doesn't work that way.

The Lisp-1 has to treat the leftmost position specially to determine whether let is a macro to be expanded or a special operator. That will not happen in a form like this (list 3 let 4).

In the TXR Lisp dialect (which provides Lisp-1 and Lisp-2 style evaluation), I fix this. In the Lisp-1 style forms, macros are not allowed. So [let ((a 3)) a] is a priori nonsense. The let symbol's position is evaluated exactly like the other positions, without being considered a macro (other than a symbol macro, which all the other positions may be).

The combination of Lisp-2 and Lisp-1 in one dialect let me have a cleaner, purer Lisp-1 in which that half-truth about all positions of a Lisp-1 form being equally evaluated is literally true, always.

Lisp-2 for macros and special ops, Lisp-1 for HOF pushing: beautiful. (list list) works, no funcall anywhere, and even if let is a variable that holds a function, [let 42] calls the damn thing:

  1> (let ((let (lambda (n) n))) [let 3])
  3
The let operator is not shadowed:

  2> (let ((let (lambda (n) n)))
       (let ((a 'b))
         [let a])) ;; let var visible across inner let
  b
Basically, as far as I'm concerned, this whole Lisp-1 versus Lisp-2 squabbling is an obsolete debate and solved problem (by me).


There are reasons to think second-class isn't always bad. Function objects aren't second class but the privileged position of the function namespace means you can watch what's being stored there at storage time, which happens infrequently, and can jump to things faster, which happens comparatively more frequently.

But also, there was a very interesting proposal to ISO that did not survive in which variable number of arguments were handled by an alternate namespace with very specific operations that were understandable to a compiler. You could promote them to the regular namespace if you needed to, but the compiler could do nice things with the stack if you kept them in their more limited arena where it could figure out what you were meaning to do with them. That proposal got voted down, and I was one who didn't like it, but I came to think it was less of a bad idea than I had thought when I saw some of the confusions that came up with managing rest lists on the stack in CL, which are very hard to manipulate and know for sure when they need copying and when not. First class implies that there's probably a halting problem in the most general case of a compiler trying to do code analysis on what you're doing with a thing. Often compilers can recognize sufficient idioms that this doesn't come up in practice, but second class spaces can lead you in certain ways to do things that are better.

Each paradigm has its value.


> Why would functions in a lisp-2 be more awkward than in a lisp-1?

Jtsummers explained this quite well (as does the article).

> PHP has been too long ago for me to understand your comparison.

> Would you please explain it differently?

PHP has a function namespace, which is used by C-style function definitions (similar to 'defun'):

    function foo(x) { return x; }
and a separate value namespace, which is used by anonymous functions (similar to 'lambda'):

    $bar = function(x) { return x; };
Both sorts of function can usually[1] be called directly, e.g.

    foo(42);   // Returns 42
    $bar(42);  // Returns 42
Passing around functions as values is tricky though. It's easy for the lambda-style values:

    array_map($bar, [0, 1, 2, 3])
For named functions, we have to provide a string containing the function's name, e.g.:

    array_map('foo', [0, 1, 2, 3])
This is very similar to the quoting of names in a lisp-2 (although using strings seems much uglier to me).

[1] See https://news.ycombinator.com/item?id=8119419 for some (historical?) counterexamples


It is probably more accurate to say referencing functions is more awkward. Just doing a standard function call looks the same in both. Specifically (function arguments). However, in a Lisp1, referencing the function is to just name it. In a Lisp2, referencing the function requires you to use a special syntax.

So, as an example, passing it as an argument to another function, in Lisp1: (other-function function), in Lisp2 (other-function #'function). Note the #' in before function.

There are a few other restrictions, they are all in the linked page.

That make sense?



