A good question for any "macro" systems like this is something like the following:
1. Suppose you've written a function 'foo' in your language that does something useful (e.g.: partitions a sequence, validates a map, or pretty much anything really).
2. Some time later you want to write a macro 'bar'.
3. Can you use function 'foo' when writing 'bar'?
If the answer to 3 is "no" then you don't really have Lisp-like macros.
This is what makes Lisp macros so powerful. It's not just that you have a way to mangle abstract syntax trees. If that's all you want then yeah, you can write a parser and template language to do it, like this thing or sweet js, but it's not the same.
Lisp macros are beautiful because there's no real divide between "writing code" and "writing macros". It's all just code. You don't have to worry about which things are available in code-land and which are available in macro-land because they're the same place. There's just "the language" which you extend and mold with functions and macros woven together as necessary.
Contrast this with a language without real Lisp-like macros, like Clojurescript. If I want to write the 'bar' macro in Clojurescript I need to think "wait 'foo' is a Clojurescript function so now I need to port it back into Clojure land before I can use it in this macro because macros in Clojurescript live in Clojure-land not Clojurescript-land". I need to think about this for everything I call inside a macro.
(Admittedly the situation isn't as bad in Clojurescript because it's fairly close to Clojure in syntax and it's possible to cross-compile code so it lives in both "lands", but the ugly divide is still there.)
Common Lisp, Clojure, Scheme, Wisp, Julia, etc have Lisp-like macros. Sweet JS, Clojurescript, C, etc don't.
These types of macros are very similar to Scheme's syntax-rules macros without the hygiene.
Viewing Lisp macros as an arbitrary function from syntax to syntax is fine, but such macros are not very Scheme-y. It's very difficult to have such macros and guarantee they're hygienic, which is why syntax-rules limits you to matching patterns and producing templates -- it effectively limits the kinds of functions which can be macros.
Other types of Scheme macros (which are non-standard), like explicit renaming and syntactic closures, require programmers to opt-in to hygiene. syntax-case (which is in R6RS) allows programmers to break hygiene by jumping through hoops, but is otherwise similar to TFA's system as well.
So if Scheme is a Lisp and these macros are like Scheme macros, I don't think it's inaccurate to call them Lisp-like macros. But it is imprecise.
As an aside, I've written a compiler for a Lisp-like language. Speaking from experience, getting the compiler to answer "yes" to 3 is non-trivial (even though it's dead-simple in an interpreter). I suppose that's why Racket has all the phase distinctions it has.
For something similar, take a look at Terra [1], the "low-level counterpart to Lua". It's based on Lua, but has "low-level" functions that are JIT-compiled to machine code using LLVM. It supports calling Lua macros from low-level functions, or you can use Lua code to write complex high-level frameworks (e.g. Go-like object system [2], Java-like object system [3]).
It's still in development, so there are parts of the system that don't work very well (e.g. global variables in compiled executables cannot be statically initialized), but it's a very interesting project.
It would be great to be able to integrate this into the Clang pipeline so that Clang based auto completion and error detection would not trip over it but correctly evaluate the resulting source and use that. Then, it would be easy to integrate this into an existing project workflow. Otherwise it would limit usability due to a lack in tooling support, I'd guess.
Clang already does that for the C preprocessor, although that is probably much easier to do since it is not Turing-complete [1].
I actually started out trying different parsers (Clang, pycparser), but they validate too much. cmacro uses a home-spun parser because it's meant to let you write things like:
Yeah, I'm guessing the parsing would be a problem for integrating with other actual C-language parsers. It looks like the parsing is completely open-ended, i.e., there is no way to tell if you've completely parsed the macro without knowing the macro definition.
Maybe taking a page out of Rust's book and adding slightly more syntax to the macro would help. If the parser knows that a macro is any identifier followed by a ! and then enclosed in braces, it could simply ignore the contents and continue parsing.
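To sketch that idea (a toy, not cmacro's actual parser): once invocations are syntactically marked, the parser can skip a macro body by brace counting alone, without understanding the contents. `skip_braced_block` is a made-up helper name, and a real version would also need to skip string literals and comments:

```c
/* Given s[start] == '{', return the index just past the matching '}',
 * or -1 if the braces are unbalanced. Nested braces are handled by
 * keeping a depth counter. */
long skip_braced_block(const char *s, long start) {
    long depth = 0;
    for (long i = start; s[i] != '\0'; i++) {
        if (s[i] == '{') {
            depth++;
        } else if (s[i] == '}') {
            depth--;
            if (depth == 0) return i + 1;
        }
    }
    return -1; /* ran off the end without closing */
}
```

With that, a parser seeing `name! { ... }` could call `skip_braced_block` at the opening brace and resume ordinary parsing right after it.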
> Because a language without macros is a tool: You write application with it. A language with macros is building material: You shape it and grow it into your application.
"Lisp isn't a language, it's a building material."
...in a positive sense, because he was also quoted describing Lisp as "the greatest single programming language ever designed", though I bet this was not in a Lisp vs. Smalltalk comparison :)
Here is another quote, this time from Alan J. Perlis:
> Pascal is for building pyramids -- imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for building organisms -- imposing, breathtaking, dynamic structures built by squads fitting fluctuating myriads of simpler organisms into place.
This is a standard Lisp idea, but I think it's kind of off base.
Lisp has this idea of creating a language in which you can solve your problem. That's fine, but the Lisp people mean something rather different than the rest of us do.
When I "create a language to solve a problem" in, say, C++, I create some nouns (objects), some verbs (methods), maybe some adjectives (other objects or flags) and adverbs (more flags). Then I can write my application using that "language", but using normal C/C++ syntax.
When Lisp people say "create a language to solve a problem", they mean "write completely different syntax". The article gives some examples.
Why would I want to do that? Well, I might want to explore a new paradigm - some new thing like aspect oriented programming, say. I don't have to wait for a language that implements it, I can just do it myself. This is great for academic research projects, but not nearly so great for production code. (Bringing new people up to speed becomes a much longer process, if your code outlives the original developers.)
But I might have to create new language features just to get anything done in Lisp. One example is LOOP. It's a macro, because Lisp without macros doesn't have very good looping ability. "It's a building material" partly in this sense: It's not a very good tool. It's not all that usable until you add the parts that, in most other languages, you already get.
Now, does LOOP do more than a C-style for or while loop? I doubt it, but perhaps it does it more neatly. But C gives you 90% of what you need without a macro, whereas Lisp gives you 10% without a macro. If you need more in C, you can do it, even if it's a bit clumsy. But in Lisp without macros, it's horribly clumsy all the time.
(Yes, I know that there are Lisp and Haskell types who seem to regard writing a for loop over a container as a great waste of programmer time. I think that they are mistaken.)
You can do some amazing things with Lisp macros. Paul Graham gives the example of writing an extension language for Viaweb that Lisp macros turned into Lisp code, which was then run on the server. That's really slick (though in the current situation, you have to watch out for security issues unless you validate that file very carefully). But if he had chosen another approach, what would change? The macros have to turn the file syntax into Lisp syntax. If he had written an ordinary parser, he would have had to do the same. The only difference is that, by using macros, he let the Lisp compiler do the grunt work of the parsing.
TL;DR: Lisp needs macros to be usable. I'm not convinced that C does.
> When I "create a language to solve a problem" in, say, C++,
Most of the time you just add words to a language by adding verbs/adjectives/adverbs. It's not a new language.
> One example is LOOP. It's a macro, because Lisp without macros doesn't have very good looping ability.
That's backwards thinking.
LOOP is a macro, because macros are the way to implement code transformations in Lisp. Lisp has other iteration constructs, which are implemented as functions (MAP, REDUCE, ...).
Common Lisp is also its own compilation target. So at the very bottom there is only one iteration construct provided: GOTO. The rest are macros/functions on top of that. The language a compiler needs to understand is thus very small. The built-in extension mechanism then allows almost arbitrary code transformations. This is used by the language implementation AND the user. The user/developer has access to the same facility to implement code transformations in applications or libraries.
This is great for production, since you don't have to wait for some committee or a benevolent dictator to implement the language feature you need. Instead of waiting for a new iteration facility for months or years, you can implement it in an afternoon. Productivity goes up. You don't need to wait for tools generating code or more compact notations reducing boiler plate code - just implement it yourself. You also don't need external preprocessors - just use Lisp. You can also debug/extend your code transformation using the same development tools, instead of maintaining external preprocessors. It also enables complex programs to have comparatively small code bases. Which often makes maintenance easier.
TL;DR: Lisp gives the user more expressive power and trusts them.
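To make the bottom-layer point concrete in C terms (a sketch, not anything from the thread's projects): `while` itself desugars to a conditional goto, and that one primitive is enough to build every richer iteration construct on top of.

```c
/* Sum 1..n two ways: with the built-in `while`, and with the goto form
 * that `while` conceptually desugars to. A compiler core that only
 * understands the goto form can still express every loop. */
int sum_while(int n) {
    int total = 0;
    while (n > 0) {
        total += n;
        n--;
    }
    return total;
}

int sum_goto(int n) {
    int total = 0;
loop_start:
    if (!(n > 0)) goto loop_end;
    total += n;
    n--;
    goto loop_start;
loop_end:
    return total;
}
```

Both return the same sum for any n; the difference is purely which layer of sugar you write in.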
In Lisp, you can write it in an afternoon. But in Lisp, because the language (sans macros) doesn't give you much, there are a bunch of things that you have to build that way. In C, because the base language gives you more, you mostly don't have to write the features you need yourself. (Granted, this doesn't work if your goal is to reduce boilerplate code to zero...)
> But I might have to create new language features just to get anything done in Lisp. One example is LOOP.
LOOP is a standard part of Common Lisp, so whatever it is it can't be an example of how Lisp-as-it-comes isn't good enough and you need macros to bring it up to standard.
> Now, does LOOP do more than a C-style for or while loop? I doubt it
In the sense in which a C-style for or while loop doesn't do more than goto, you're correct. Otherwise, of course you're incorrect; LOOP does a lot more than for and while do. (Random example: (loop for i from 1 to 10 collect (* i i)) makes a list of squares. No instance of C's for or while does anything like this.)
> Lisp needs macros to be usable. I'm not convinced that C does.
Could you give an example of something that C lets you do "out of the box", that Lisp (let's say specifically Common Lisp) doesn't, but that Lisp could be made to do using macros? I don't think I can think of any; there are things you can do better in C than in Lisp (e.g., close-to-the-metal bit-twiddling, or interfacing with other things written in C) but they don't have anything to do with macros.
LOOP is a macro, right? It's a standard macro, sure. But the point isn't standard vs. non-standard, the point is macro vs. non-macro. Without any macros, Lisp can't do loops very well. That's my point - Lisp without macros isn't very useful.
> Random example: (loop for i from 1 to 10 collect (* i i)) makes a list of squares. No instance of C's for or while does anything like this.
for (int i = 1; i <= 10; i++) list.push_back(i * i);
Yeah, it's C++ using STL rather than straight C. (I suppose you could argue that C lacks a list type and therefore isn't very useful... but I could create a linked-list struct and write a push_back function. For that matter, I could just have
int list[11];
for (int i = 0; i <= 10; i++) list[i] = i * i;
I don't think that list-vs-array is really relevant to the relative power of the loops in C and Lisp.)
> Could you give an example of something that C lets you do "out of the box", that Lisp (let's say specifically Common Lisp) doesn't, but that Lisp could be made to do using macros?
Well, macros produce Lisp code, so there is nothing that you can write in a macro that you cannot write in straight Lisp. But in non-macro Lisp, loops are really painful to write - the LOOP macro is part of the standard for a reason.
> But the point isn't standard vs. non-standard, the point is macro vs. non-macro.
But why?
Imagine an alternate world in which all the things in the Common Lisp standard that are currently defined to be macros are special forms instead. I claim that (1) in that world, the criticism you're making would not apply, and (2) that world's Common Lisp is (a) not better in any way than our world's and (b) actually slightly worse, because code-walking macros would be more painful to write. Doesn't that indicate that there's something wrong with your criticism?
What I'm not seeing is how the fact that some standard features of Common Lisp are defined to be implemented via macros is a weakness, which you seem to be arguing it is.
On LOOP: Sure, you can use a C for loop to build a list of squares (but, note, it doesn't in fact do the same thing as the Lisp code and couldn't possibly, because (1) the Lisp code is an expression and a for loop is a statement but not an expression, and (2) that invocation of LOOP makes a new list and returns its final value rather than appending to some specific list that's already been created as your kinda-sorta-parallel code does). But that's what I meant about goto. You can use goto to do what a C for loop does. If you wouldn't say "Does for do more than goto? I doubt it" then you shouldn't say "Does LOOP do more than for? I doubt it".
> But in non-macro Lisp, loops are really painful to write
But "non-macro Lisp", if by that you mean something like "Common Lisp, with all the features defined to be implemented as macros taken out", is a thing that doesn't exist and that never would exist. (If for some reason you were implementing a Lisp without macros, and if you actually intended it to be a good general-purpose language, then of course you would include some decent looping facilities.)
So this is not an example of something that Lisp doesn't let you do "out of the box"; LOOP is right there in the box. As you say, LOOP is part of the standard for a reason.
Well, my initial claim was that Lisp needs macros in a way that C/C++ doesn't. You are now stating that Lisp could have been written in a way that it didn't need macros to be usable. And you're right, it could have been. But I don't think that that's the point.
The point is not that implementing things as macros is a weakness. It's that the language (as written) needs those macros. And C, as written, does not.
But I suppose from your point of view, my argument is a distinction without a difference...
I understand what you are saying. It's more of a claim that C/C++ does not need this project, as macros are not a necessity of the language.
I think the one thing this project could provide (though perhaps not, given that it lacks the function-call access inside macros that Lisp macros provide) is to take that 10% of functionality that C/C++ does not provide and make it much more easily accessible. Of course this is always a difficult situation to discuss when first learning about Lisp macros, because so much "standard" functionality already exists, but I assume the usual arguments apply here.
Many tutorials for learning Lisp macros use the idea of "Let's say you wanted a statement (when <test> <expression>)." It's something that doesn't exist yet, and the developer can add it in during development. You do not have to wait for someone else to implement it in your compiler.
This is a case where Lisp _needs_ macros to implement it, and where C/C++...can't do anything. Yes, you can make a function that is similar to the format required, but that is kind of the point. It's a hack to get the functionality, not something that looks natural.
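For concreteness, here's roughly what that hack looks like in C. `WHEN` is a made-up macro name for illustration, not anything from cmacro:

```c
/* A C approximation of Lisp's (when <test> <expression>): runs `body`
 * only when `test` is true. The do { ... } while (0) wrapper makes the
 * expansion a single statement, so it composes safely with if/else. */
#define WHEN(test, body) do { if (test) { body; } } while (0)

/* Example use: clamp negative values to zero. */
int clamp_nonnegative(int x) {
    WHEN(x < 0, x = 0);
    return x;
}
```

It works, but the commas-and-parentheses calling convention is exactly the "doesn't look natural" complaint above: it's a function-call costume, not new syntax.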
I agree, there's that 10% in C/C++. And, you're stuck. But working around that 10% is an annoyance, not crippling. (In over 20 years as a professional C/C++ developer, I have never hit something that I couldn't write. "when <test> <expression>", for example. Yes, I can't write that exact syntax, but I don't care. I can get the same result, though perhaps slightly more clumsily.)
But Lisp without macros is crippled.
But, as lispm and gjm11 pointed out, Lisp could have been implemented with those macros as special forms instead, so Lisp wouldn't have to be crippled without macros.
My main point: I think the Lisp crowd overestimates how limiting it is for other languages not to have macros.
Well, yes, it is. You can turn Common Lisp (as it is) into a "language that doesn't need those macros" simply by going through the spec and changing everything that says, e.g., "macro DEFUN" to "special form DEFUN". The result would be a language that's no more powerful, no easier to implement, and marginally less convenient to use.
So how can "needing those macros" be a weakness, if the minimal change that makes the language not "need those macros" improves nothing and actually makes the language a little worse?
Normally, the lack of tooling support is the first thing that turns me off to attempts such as this to adjust fundamental aspects of the coding pipeline.
In the context of C, however, you're up against a language (the preprocessor) that itself has only bare-minimum tooling support, so you've got a leg-up over similar projects in that regard. :)
So I tried installing this on a Debian box; did anyone else have issues compiling? I apt-got sbcl and flex, but make failed with "asdf-linguist" not found until I swapped lines 46/47 in the Makefile. Now it looks okay:
[saving current Lisp image into cmc:
writing 5952 bytes from the read-only space at 0x20000000
writing 4000 bytes from the static space at 0x20100000
writing 49545216 bytes from the dynamic space at 0x1000000000
done]
In Lisp one usually does not write something like (begin (set! a (+ a 1)) a), while in C this is pretty common, so this is an extra issue that might be addressed separately.
I think it does. In your implementation of `square`, you could use `gensym` to first create a new variable to store x in. Then you would write a second statement that evaluates to that variable times itself.
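The same double-evaluation problem shows up in plain C macros, and GCC/Clang statement expressions let you fake the gensym fix (non-standard C; the macro names here are made up):

```c
/* Textbook macro: the argument text is pasted in twice, so
 * SQUARE_BAD(next()) calls next() twice. */
#define SQUARE_BAD(x) ((x) * (x))

/* GCC/Clang-only version: a statement expression with a fresh local
 * variable plays the role gensym plays in Lisp, so the argument is
 * evaluated exactly once. It can still capture a caller variable that
 * happens to be named _sq_tmp -- the clash gensym rules out entirely. */
#define SQUARE_OK(x) ({ __typeof__(x) _sq_tmp = (x); _sq_tmp * _sq_tmp; })

/* Side-effecting argument to make the evaluation count observable. */
int eval_count = 0;
int next(void) { eval_count++; return 3; }
```

With gensym the compiler manufactures the temporary's name for you, so there is no `_sq_tmp` left to collide with.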
C++. Too many other languages get wrapped up in ideology. D comes pretty close. I can't get past Rust's unreadability. I'd like to give Nimrod a try... especially since it can be compiled to C and presumably integrated into C++ projects.
[Edit] Read Nimrod tutorials and ported a few toy programs. It's encouragingly clean and doesn't seem to shy away from features... I wonder if Alex Stepanov knows that, in Nimrod, "if you overload the == operator, the != operator is available automatically and does the right thing" ;)
Yes on both counts. The problem is that I can type faster than I can think. I don't believe that terseness inherently improves readability or encourages better design. I have felt this way since Perl.
Transcribing that took a few seconds of thought, and all the terseness of a few symbols has accomplished is to obscure a poor choice of API and ownership semantics. That's just one symbol... how horrific can we make something without noticing with just a few more params and symbols?
Why is that a poor choice of API and ownership semantics? That's entirely natural. You use a range in order to provide the maximum convenience to your callers, and you use unique ownership to provide efficient memory management and reduce the amount of copying of data that has to be done when the backing store is resized.
(Also, your equivalence is not quite correct: `&[T]` is more like a Boost range. In particular your transcription makes it look like the original code only accepted Vec<T>, which is not the case.)
> That's just one symbol... how horrific can we make something without noticing with just a few more params and symbols?
You've pretty much covered all the type-level symbols in Rust, except for * for unsafe pointers.
If &[T] is somewhat generic already, why even have 'Drawable' in the signature?
Also, doesn't having a borrowed array of unique references to Drawables mean the elements of the array are either now implicitly borrowed, or I have to borrow each of them before they're accessed? Just knowing the symbols doesn't make the semantics clear. In C++ all smart pointers are values in their own right. In my example I have a reference to an array of smart pointers, and there's no magic.
> If &[T] is somewhat generic already, why even have 'Drawable' in the signature?
`&[T]` isn't generic: it's a bounds-checked slice, represented as a pointer and a length.
Presumably `Drawable` is in the signature so that methods specific to `Drawable` can be called.
> Also, doesn't having a borrowed array of unique references to Drawables mean the elements of the array are either now implicitly borrowed, or I have to borrow each of them before they're accessed?
They work like anything else: if you want to take a reference to a Drawable, you borrow it.
> Just knowing the symbols doesn't make the semantics clear.
Yes. Also true for C++'s symbols; e.g. `&`.
> In C++ all smart pointers are values in their own right.
Same in Rust.
> In my example I have a reference to an array of smart pointers, and there's no magic.
A good test for me is whether the simplest possible implementation works:
template <typename Range>
void draw_all (Range r) {
for (auto e : r) { draw(e); }
}
There are no magical symbols here at all. It's not efficient, but it's completely memory safe... breaks with unique_ptr though, which is a good indication for me that unique_ptr is the wrong choice. Here's the less safe 'borrowing' version:
template <typename Range>
void draw_all (Range const& r) {
for (auto const& e : r) { draw(e); }
}
and a sane compromise:
template <typename Range>
void draw_all (Range r) {
for (auto const& e : r) { draw(e); }
}
The reason for use of a unique pointer is that if the caller had an array of unique pointers to Drawables, then you want the caller to be able to call draw_all() without recreating the array.
You could come up with a generic function that doesn't require unique ownership (for example, one that takes an Iterator<&Drawable>), but the function in that example wasn't generic because making everything generic just in case is overengineering.
You seem pretty confused about how borrowing works. Borrowing is tangential to that function.
I'm not confused about borrowing. What I'm saying is that borrowing (plain refs in C++) is a performance hack in both languages. In my final version above, I have assumed copying a range is cheap, and copying a Drawable is expensive and unnecessary... seems perfect to me, and it will not compile when passing a vector of unique_ptrs, which is what I would want.
References are not a performance hack; they're a fundamental way to avoid lots of moves, which obscure algorithms and cause a lot of mutation. (In Rust they are memory safe.)
Anyway, if you wanted a generic reference-taking version:
fn draw_all<I:Iterator<&Drawable>>(iterator: I) {
for drawable in iterator {
drawable.draw();
}
}
And a generic move version:
fn draw_all<I:Iterator<Drawable>>(iterator: I) {
for drawable in iterator {
drawable.draw();
}
}
Internal moves make no sense. You can't move an object out of a container that has invariants, like an ordered map. This is why it only makes sense for the caller to move a container in to the function. This is efficient. I've edited my 3rd version appropriately, as moving from a const& was clearly bogus.
template <typename Range>
void draw_all (Range r) {
for (auto const& e : r) { draw(e); }
}
I've totally lost track of what we are arguing about. Anything containing a unique element has to be logically unique itself, right?
In C++, which defaults to value semantics, it's required that you move your container if it contains a non-copyable (unique) element. So you only need to move into the draw_all function in this case, which is why taking the range by value is not just efficient, but semantically correct. If the caller moves into the function, then when it returns the caller will no longer own any elements. The caller's vector will be empty, and the elements themselves will still be unique, having never been copied, moved, or "borrowed".
If borrowing isn't a performance hack, then why not make everything you're ever likely to borrow shared? I'd argue anything you're drawing is shared between the draw routine and the caller. Drawing a distinction just because the caller is suspended, seems like an impediment to future change if, for example, you later switch to a coroutine or an asynchronous/threaded operation. Copying the range and sharing elements gets you this for free.
In summary, 'draw_all' as specified was a bad API because:
* It restricted the type of range/container passed to it
* It had unnatural ownership semantics (borrowing a box of unique things without saying you're borrowing those things is weird).
* The implementation, as was, required further borrows which were only implied. In C++ you take everything straight away.
> * It restricted the type of range/container passed to it
Yes, it did, but as I mentioned before, it's overengineering to make everything generic that could possibly be generic.
> * It had unnatural ownership semantics (borrowing a box of unique things without saying you're borrowing those things is weird).
No, it's not, it's quite natural. `&[&Drawable]` is not a subtype of `&[~Drawable]`, so if your caller has an array of `&[~Drawable]`, then they would have to recreate the array to pass it to that function.
> * The implementation, as was, required further borrows which were only implied. In C++ you take everything straight away.
I don't understand what this means, but in any case C++ and Rust don't differ substantially on ownership/reference/move semantics.
Why is it a problem to "require further borrows which are only implied"? Borrows are compile-time only, and they conceptually happen anyway in your C++ implementation. It's just enforcing the correct ownership pattern in the compiler.
I don't see why either is right or wrong, but 'foo' is the reference and vector is the type. It seems logical to me the C++ way, but I'm not overly fussed.
Different parts of the type are in different places, merely for the space saving of being able to replace "var" or "let" with the type, like "int". While it made sense when C was created, this is horribly wrong this many years later.
It's an interesting time for native development, with all these languages making serious attempts to dethrone C++ when it comes to combining high and low level in one language.
I'm still using C++ myself until the dust settles a bit, but I've found that my C++(11) code is trending toward being less stateful and more functional-ish.
It's cool to see things changing in this space after feeling like it would be old style C or C++ forever.
There are many people here, from many backgrounds, with many different perspectives. Almost all languages have proponents here (with COBOL the possible exception), and all languages have detractors here.
Look, I don't have time for your childish pedantry. There were entire articles on the front page of HN blaming C for having poor language design on the issue of "goto fail". Now there is an article promoting macros in C, no less.
HN may not be one person, but does have a front page as a result of aggregate behavior.
Why, I think he raised a very important point. HN is not one person. People have different backgrounds and different needs. Sometimes, C macros are the best you have. Just because they're generally to be avoided doesn't mean they should never be used.
C macros are a cute but dangerous hack. But this software implements Lisp macros for C. Lisp-style macros are safe and nonhacky. The main feature that allows this is `gensym`, which lets you store values in variables that are guaranteed to not name-clash.
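A quick C illustration of the name-clash problem gensym rules out (`BUMP_IF_SMALL` is a made-up macro, not from cmacro):

```c
/* A macro whose expansion introduces a helper variable named `limit`. */
#define BUMP_IF_SMALL(x) do { int limit = 100; if ((x) < limit) (x)++; } while (0)

/* No collision: n goes from 5 to 6 as intended. */
int works(void) {
    int n = 5;
    BUMP_IF_SMALL(n);
    return n;
}

/* Collision: the caller's `limit` is shadowed inside the expansion, the
 * condition becomes `limit < limit` (always false), and nothing is
 * bumped -- the macro silently does the wrong thing. */
int captured(void) {
    int limit = 5;
    BUMP_IF_SMALL(limit);
    return limit;
}
```

With gensym, the helper variable gets a freshly generated name the caller cannot possibly collide with, so the second case can't happen.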
Also, the “goto fail” issue had nothing to do with macros. So people might hate C’s block syntax while loving other features such as macros. People can hate parts of a language and like other parts. Like the author of the book JavaScript: The Good Parts, who likes JavaScript, but recommends against using certain features of it.