This is a really good video. Here is a simple timeline of what it is all about, so people can jump to what they would find interesting:
23:43 "It really is incredible that we can go close to one order of magnitude" (to C)
26:00 Lisp code is faster than optimized C (quantum) programs when running on the QVM. For non-trivial benchmarks (...) {Lisp faster} by an average of 40%.
30:00 quick explanation of quantum machine code
33:06 What are the team dynamics of using Lisp? "Condescension: Lisp's number one enemy" (fun part)
"Some time between 2 and 6 decades ago, Lisp invented approximately all of the popular things in programming, some of which are just starting to make an appearance [long list of features follow]"
Really fun part follows.
35:41 things Lisp programmers tell the other programming teams
36:40 Interesting, beautiful comparison of using Lisp versus doing calligraphy with calligraphy pens.
41:21 Presenting the internship program at Rigetti. Stressing the importance of first having SBCL, Emacs, SLIME, Paredit, and Quicklisp set up before thinking about what writing productive Lisp looks like.
43:33 Comments from interns after the internship, favorable to Lisp, as well as an unfavorable one. (Fun part here)
51:00 Explanation of what quantum computing is all about
YMMV but for what it's worth I became frustrated with smartparens when I couldn't figure out how to do something very simple. I think it was traversing down into the next sexp. I got different behavior when my cursor was right up against the sexp versus one space before.
Also whereas paredit really pushes me to have a mental model that the lisp source code forms a tree (e.g. up, down, left, and right being the main operations), I found smartparens drove my mental model back to treating the code a string of characters.
Interesting. I wonder if it has to do with default settings, or something like that. I've been using bbatsov/prelude[+] for several years now, and Smartparens (per prelude's config) has always behaved for me like a better ParEdit.
This was a much more enjoyable video than I was expecting. The audio quality really had me tempted to skip, so I encourage others to not let that turn them away.
The quote, which I'll paraphrase, that "Lisp is not freed from having to justify why you would use it" is huge, and the section before it, about how condescension is a terrible enemy of the language, is one I hope more people contemplate.
I apologize for the sound quality. I'm still learning how to do this stuff, and one thing I forgot to check is whether the final audio sounded good in headphones. On my laptop's speakers, I couldn't hear that there was only one channel. Checking that is now in my post-production checklist.
Unfortunately, while the lapel microphone I use normally produces good sound, I needed to do some level setting first. That's also in my checklist now. And I didn't expect the amplifier's sound to be picked up so strongly by the lapel mic. That may be something I'll have to solve using equipment.
I promise that I'm working on getting better at this.
Many thanks to Philip Greenspun, of Greenspun's Tenth Rule [1], for giving me the video camera that I've been using.
By the way, this talk was co-hosted by the Bay Area Lisp & Scheme Users Group [2], of which I'm a co-organizer. You can see videos of prior BALISP talks at [3]. If you're in the area, please join us. We encourage you to give a five-minute lightning talk on anything interesting you've been working on that is related to Lisp.
Thank you so much to Robert Smith and Rigetti for giving and hosting such a terrific talk.
b) http://realmofracket.com/ . Great book; I learned a ton, especially when I went through it with my 11-year-old daughter.
4. The great minds of Lisp are at Racket (Ever read Little Schemer?)
5. Racket keeps getting better and better, with great modern ideas. It might have been born in academia, but it has awesome real-world legs. Just look how easy it is to deploy your program after you are done.
6. Best documentation system of any language. The documentation is code and it makes for great documents. http://docs.racket-lang.org/
Racket isn't nearly as fast as SBCL, though. However, now that Chez Scheme is open source, Racket is in the process of leveraging that runtime. Also, I don't know for sure, but I would guess deployment is easier with SBCL, as it can build a standalone executable (though a very large one, due to saving the running image).
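For reference, building that standalone binary in SBCL is a one-liner (assuming your entry point is a function named `main`; the name is illustrative):

```lisp
;; Saves the current Lisp image as an executable that calls MAIN on
;; startup. The file is large because it contains the whole image.
(sb-ext:save-lisp-and-die "app"
                          :toplevel #'main
                          :executable t)
```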
I guess it's all based on perspective. Compared to C, D, Nim, etc., it's huge. However, the mathematical model my company exports is 10-22 MB alone in text or binary form (no code), so yeeeaaa, I see what you're saying :)
One of the most popular Common Lisps is Steel Bank Common Lisp (SBCL) [0], which has some degree of compile time type checking and compile time type inference.
There's a website article called "Clojure Sucks" written by a Common Lisp user who uses a Symbolics Lisp machine; it has some interesting insight, although it is very aggressive.
You can mostly combine 3 & 4 on that list. Racket (formerly PLT Scheme) has moved beyond Scheme in many ways (language features, batteries), but is still exceptionally similar.
It's the only lisp that can interface seamlessly with any JS library you want. Just `npm i leftpad && LUMEN_HOST=node lumen` and type `(require 'leftpad)`.
$ npm i leftpad
$ LUMEN_HOST=node lumen
> (require 'leftpad)
function
> ((require 'leftpad) "foo" 5)
"00foo"
Other lisps are nice, but they all try to build their own ecosystems instead of using existing infrastructure. So if you want to do webdev and run into a problem with a library, your only option is to fix it yourself or write your own, since most people don't use lisp for webdev.
That brings us to Clojure: the prima facie "lisp for webdev". It's a good lisp, but it forces you into non-optional immutability. That's a feature for some and a burden for others. And it's very difficult to transpile into JS. In an era where size and speed matter due to bandwidth concerns, this is a severe limitation.
That said, all of the lisps are a delight to use in their own way. Racket is fun to wrestle with, mostly to coerce it into doing what you want. SBCL is neat for doing archeology in -- you can run all kinds of interesting old programs. Elisp is fun because you can extend your editor to do anything you can imagine. Arc powers the website you're reading this on, and its underlying ideas are worth internalizing.
This is wonderful. Thank you for sharing! I've been putting off a JS coding project for some interview, which I now think I'll write in Lumen. Another pleasant surprise: it was written by the HN moderators.
Immutability is optional in Clojure, though it is the default. Also, I'm confused about what you mean by size and speed mattering; ClojureScript does better than most by going through the Google Closure compiler for minification and dead code elimination.
Lumen seems interesting, but at a quick glance the GitHub repo has zero information on how to install it and get it running. Is it an npm package/Lua rock as well?
(edit: it's all in the repo.)
Follow-up: I don't see much in the way of interop documentation... Pointers?
The cool thing is, there's no interop. It's literally JS or Lua. Think of it like CoffeeScript -- there's no "interop" between CoffeeScript and JS. It's just JS.
You can see what each expression compiles to by passing it through (print (compile (expand ...)))
> (define-macro see (x)
`(print (compile (expand ',x))))
(macro: function)
> (fn (x) (+ x 1))
function
> (see (fn (x) (+ x 1)))
function (x) {
return x + 1;
}
The best way to learn it is to read test.l and mess around with the expressions while running `make test` to see what breaks.
If you have questions, be sure to reach out or post them here. The maintainer is also quite responsive to opening new issues.
By interop I mean how to interact with existing functions (fine, most of the above covers that) and syntax for accessing data structures (arrays, tables, etc.).
For the free ones, there are already plenty of answers.
For the commercial ones, that would be Allegro Common Lisp and LispWorks, both offering a complete "Lisp Machine"-like experience, given that they have been around for a few decades.
Every time I see one of these links or videos, I feel the urge to learn Lisp. But after some time, I lose the motivation. I think that is because I don't know concretely what benefit learning Lisp will provide me. Does anyone have any suggestions?
Don't force it. It will come at the right time. Enjoy your spot on the programming map, it's no use to learn Foo if it means you'll suffer your day job or can't get enough value out of it.
That said, lisp is a goldmine / rabbit-hole crossover. As others said:
- opens for a hackable tool mindset, use lisp on lisp to make it do what you need [1]
- as said in this talk, if you want a DSL, you don't need a parser. Only later, if you need others to write it without '(lisp syntax), do you add one. And if everyone is happy with Lisp, you just skip that entirely.
- it's often highly interactive, and value oriented. You get to "touch" the data a bit like material. Nowadays REPLs are common, so it doesn't seem special, but it was for 30 years.
- the lists mindset is different from mutable arrays, saving you an immense amount of time in the first phase (by avoiding the vast majority of stupid bugs). When you need speed, you can tailor; also, Lisp implementations are performant (SBCL is in the top 10 of all implementations, IIRC)
- the functional mindset also opens up a lot of new ways to think about problems, and leads to other languages like ML/Haskell too
- similarly, you often get a step into the logic programming world (Prolog, Kanren), which is very mind-opening about "programming computers"
It's not perfect, and if abused it can yield stupid code, too cryptic because you used too much weird notation or structure; this is a balancing skill you get to learn. Lispers are not that asocial; they know when to stay human-readable most of the time.
> it's often highly interactive, and value oriented. You get to "touch" the data a bit like material. Nowadays REPLs are common, so it doesn't seem special, but it was for 30 years.
REPL's are common, but they're rarely as useful in other languages. Even relatively basic (to Lisp programmers) things are almost universally missing, for example:
[1]> (defun f (x) (1+ (g x)))
F
[2]> (f 4)
*** - EVAL: undefined function G
The following restarts are available:
USE-VALUE :R1 Input a value to be used instead of (FDEFINITION 'G).
RETRY :R2 Retry
STORE-VALUE :R3 Input a new value for (FDEFINITION 'G).
ABORT :R4 Abort main loop
Break 1 [3]> :r3
New (FDEFINITION 'G)> (lambda (x) (* x 2))
;; Now that we gave an fdefinition for G, our (f 4) call can continue, so it
;; resumes execution and completes. Until now, it was just paused--no stack
;; unwinding unless we ask for it.
9
[4]> (g 10)
;; Because we used STORE-VALUE, it went ahead and saved the value we gave in
;; G for future use. If we had used USE-VALUE, it would have finished the (f
;; 4) call using that definition, but G still wouldn't be fbound after.
20
> lists mindset is different from mutable arrays
Mutable arrays are ubiquitous in Lisp. Explicitly cdring down lists is mostly only done in introductory textbooks; in practice, most people will use MAP, REDUCE, REMOVE-IF-NOT, etc. instead, which all work just as well on arrays or lists, or they might use an imperative construct like DO, LOOP, or ITER (also note that conses are mutable as well; it's not at all unusual to setf a car or cdr or call NCONC). It's not terribly different from what you would do in, e.g., modern Java (aside from the part where none of it requires special support from the implementation and it could all be implemented in user code).
I would also add that learning Lisp will teach you a lot about object-oriented programming. Even things that Java programmers use extensions for and give names like aspect-oriented programming just come built in as part of Lisp's stock object system. But then, Lisp's stock object system is also extremely flexible, especially given the pseudo-standard metaobject protocol. Quoting from Wikipedia about the MOP book:
"In his 1997 talk at OOPSLA, Alan Kay called [The Art of the Metaobject Protocol] "the best book anybody's written in ten years", and contended that it contained "some of the most profound insights, and the most practical insights about OOP", but was dismayed that it was written in a highly Lisp-centric and CLOS-specific fashion, calling it "a hard book for most people to read; if you don't know the Lisp culture, it's very hard to read"."
map / filter over list is different from arrays. Even in the 2000s we had to hand-write Java Iterators and fear mutation while iterating... that sort of thing.
> map / filter over list is different from arrays.
How?
CL-USER> (remove-if-not #'evenp '(1 2 3 4 5))
(2 4)
CL-USER> (remove-if-not #'evenp #(1 2 3 4 5))
#(2 4)
CL-USER> (loop for x in '(1 2 3 4 5) when (evenp x) do (format t "~&~a~%" x))
2
4
NIL
CL-USER> (loop for x across #(1 2 3 4 5) when (evenp x) do (format t "~&~a~%" x))
2
4
NIL
Even a function like:
(defun print-evens (sequence)
(loop for i below (length sequence)
for x = (elt sequence i)
when (evenp x)
do (format t "~&~a~%" x)))
Works for both, although it's considerably more efficient for arrays.
> Even in the 2000s we had to hand write Java Iterators and fear mutation while iterating .. that sort of things.
And even in Lisp, the standard specifies "The consequences are undefined when code executed during an object-traversing operation destructively modifies the object in a way that might affect the ongoing traversal operation."
In non-Lisp languages, mutable arrays often came without map / filter and weren't generic, and the languages didn't have lambdas, so you end up writing imperative loops and potentially mutating elements in place because it's tempting, changing the paradigm right away
> In non lisp languages, the mutable array often came without map / filter
What languages do you know that have map and filter for lists, but not for arrays? Lisp has them for both, C++ has them for both, Java has them for both; Java didn't used to have lambdas or map, but it had standard lists long before it got either of them. Several decades before Java existed, Lisp had mutable arrays with map and filter. Supporting higher order functions has nothing at all to do with arrays vs lists.
> didn't have lambdas so you end up writing imperative loops
Lisp does have lambdas and imperative loops are still extremely common in Lisp code.
I'm not criticizing lisp at all here. And Java had lists but no functional API on top of them. Processing ArrayLists is extremely different from (mapcar #'f '(....)) in idiom and paradigm. You can do it, but you'll have to do anonymous-inner-class gymnastics, and you'll end up writing loops before you know it. And my point is: it sucks.
> Java had lists but no functional API on top of it. Processing ArrayLists is extremely different from (mapcar #'f '(....))
My point was that the functional API is the key difference, not arrays vs lists. Java in 2000 had lists and no functional API, whereas Lisp in 1970 had mutable arrays with a functional API available for use on them. Mapping over a list in Lisp looks like `(map 'list #'f '(...))` (The first argument is the return type; MAPCAR is just a list-specific version of MAP), mapping over a Java-style array looks like `(map 'list #'f #(...))`. Lists vs arrays makes no real difference (if they were in variables instead of literals, you couldn't even tell if it were an array or a list by looking at that call). Java in 2017 similarly has a functional API that can be used on lists or (much more commonly) on mutable arrays.
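A quick sketch of that point, runnable in any Common Lisp:

```lisp
;; MAP takes the result type as its first argument, so the same call
;; works unchanged on lists and on vectors (Java-style arrays).
(map 'list #'1+ '(1 2 3))    ; => (2 3 4)
(map 'list #'1+ #(1 2 3))    ; => (2 3 4)
(map 'vector #'1+ '(1 2 3))  ; => #(2 3 4)

;; The filter-style functions are equally agnostic:
(remove-if-not #'evenp #(1 2 3 4))  ; => #(2 4)
```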
As the sibling post illustrates, it really isn't. And this is by far one of the biggest strengths of lisps that many languages are finally getting. The structure of your program often doesn't have to change just because you modified the underlying representation of some data.
Now, also pointed out by the sibling post, modifying the underlying driver of your logic is a bad idea. Just like having a printer print on its own circuitry would be a bad idea.
It is one of the very few languages where the source code is made of a data structure that the language is very good at manipulating.
Thus, writing code that generates code, be it at runtime or at compile-time, is downright easy in Lisp. This opens up enormous possibilities not found in other languages.
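A minimal sketch of code generating code (the macro name is made up for illustration):

```lisp
;; DEFINE-ADDER expands, at macroexpansion time, into a DEFUN form:
;; the macro body is ordinary Lisp that builds a list, and that list
;; is then compiled as code.
(defmacro define-adder (name n)
  `(defun ,name (x) (+ x ,n)))

(define-adder add3 3)  ; expands to (defun add3 (x) (+ x 3))
(add3 10)              ; => 13
```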
Also, in regular programming languages, your code executes only at run time. In Lisp, or at least in Common Lisp (and probably in Scheme as well), there are three times: read time, compile time, and run time, and your code can selectively run at any of the three. This also opens up lots of possibilities.
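A small sketch of those three times, assuming a conforming Common Lisp:

```lisp
;; Read time: #. makes the reader evaluate the form while parsing,
;; so the timestamp is baked into the source as a literal.
(defvar *read-at* #.(get-universal-time))

;; Compile time vs. run time: EVAL-WHEN selects which phases a
;; top-level form runs in.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defparameter *limit* 100))
```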
Another feature is that it is really suitable to interactive development in a way that isn't matched by any other language except for Smalltalk. It allows, for example, changing the definition of a function while your code is running.
Now, two features that are particular to some Lisps:
Scheme has continuations. Continuations are like magic teleportation within your code. They are very powerful.
Common Lisp has the Common Lisp Object System (CLOS), possibly the most powerful OOP system out there. It rocks!
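One taste of what makes CLOS unusual is multiple dispatch (the classes here are toy examples):

```lisp
(defclass cat () ())
(defclass dog () ())

;; The applicable method is chosen by the classes of BOTH arguments,
;; not just the receiver as in single-dispatch OOP.
(defgeneric meet (a b))
(defmethod meet ((a cat) (b dog)) :hiss)
(defmethod meet ((a dog) (b cat)) :chase)
(defmethod meet ((a cat) (b cat)) :purr)

(meet (make-instance 'cat) (make-instance 'dog))  ; => :HISS
```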
The problem for me is that I've learned to avoid macros in other languages. It seems to me that when you've got a fairly complex program, you need a fairly complex mental model of what the program "is" to understand it. But if you introduce macros, or code writing code, then you have a much more complex mental model since now even what the code "is" can change. What is it about lisp that makes this an easier sell?
It is true that even in a Lisp, unfettered use of macros can result in write-only code, so you do have to be careful about what kind of cognitive burden you are imposing upon people who will have to read your code.
With that said, I tried to learn about metaprogramming in a number of other languages, and Lisp is the first one with a metaprogramming model that I could actually grok, because the meta-language is equivalent to the language itself. Lisp macros are more or less just plain old code that manipulates data structures.
When I'm working on a lisp project, I give myself a budget of a very small number (often just 1 or 2) of magic macros that I can build for it. Other than that, I only use macros in the core library, or well-known third party libraries that other people are likely to be familiar with.
When the "atoms" of your language are causing whatever problem you're solving to be difficult to express, then it may be wise to introduce new "atoms" to make the idea easier to express.
The new atoms are supposed to communicate intent faster and more concisely, and decrease the burden of understanding what's going on. That's the role of an abstraction, especially syntactic abstractions.
>The problem for me is that I've learned to avoid macros in other languages.
Macros in other languages are significantly unlike Lisp macros. You should take a look at the Practical Common Lisp book (available for free online) to get an idea of how you can easily leverage Lisp macros for more readable, succinct, easy-to-understand code.
>But if you introduce macros, or code writing code, then you have a much more complex mental model
In Lisp, you use the different tools available (macros, CLOS, readtables, etc) to achieve a simpler translation between the problem domain and your actual source code.
If you are using them to create more convoluted, harder-to-understand code, then "you're doing it wrong", just as you can, for example, write rather clean C code versus code that would win the IOCCC (International Obfuscated C Code Competition).
Another free Lisp book, one that focuses on macros -- and is quite readable -- is YC-founder Paul Graham's "On Lisp".
There's a download page [0] (for the book and code) at paulgraham.com as well as the following on a description page [1]:
On Lisp is a comprehensive study of advanced Lisp techniques, with bottom-up programming as the unifying theme. It gives the first complete description of macros and macro applications. The book also covers important subjects related to bottom-up programming, including functional programming, rapid prototyping, interactive development, and embedded languages. The final chapter takes a deeper look at object-oriented programming than previous Lisp books, showing the step-by-step construction of a working model of the Common Lisp Object System (CLOS).
As well as an indispensable reference, On Lisp is a source of software. Its examples form a library of functions and macros that readers will be able to use in their own Lisp programs.
Maybe someone can help me understand. I now get homoiconicity, but the example he uses doesn't seem to explain why the macro is needed. He explains a problem that can be solved by writing a function. Instead he presents what seems to be, in effect, an unevaluated function... so what's the point?
I agree that the macro example is not chosen well. Macros are just functions that take their arguments unevaluated and output code, so each macro call could be replaced by a function call where the arguments are wrapped in a list and the function executes the code directly instead of generating it.
Then the only reason you'd have to use macros is if you want to do something at compile time. Like generating specialized code so the compiler can optimize it, instead of reinterpreting the describing data on each invocation. Or like reading the database schema to generate accessors.
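A hedged sketch of that last idea: instead of reading a real database schema, this hypothetical macro takes the column list directly and generates the accessors at compile time:

```lisp
;; Generates one accessor function per column, named ROW-<COLUMN>,
;; so lookups are fixed NTH calls rather than runtime name searches.
(defmacro define-row-accessors (&rest columns)
  `(progn
     ,@(loop for col in columns
             for i from 0
             collect `(defun ,(intern (format nil "ROW-~a" col)) (row)
                        (nth ,i row)))))

(define-row-accessors id name email)
(row-name '(7 "Ada" "ada@example.org"))  ; => "Ada"
```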
Macro examples are often poor, and my advice is to always favour functions, since then you are working with actual values. For instance, with-open-file should be a higher order function, not a macro as it is in CL.
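For the curious, the higher-order version is easy to sketch (CALL-WITH-OPEN-FILE is not standard; the name is illustrative):

```lisp
;; Same resource safety as WITH-OPEN-FILE, but the "body" is an
;; ordinary function value you can build, store, and pass around.
(defun call-with-open-file (thunk pathname &rest open-args)
  (let ((stream (apply #'open pathname open-args)))
    (unwind-protect
         (funcall thunk stream)
      (close stream))))

;; Usage:
(call-with-open-file (lambda (s) (read-line s)) "/etc/hostname")
```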
That leaves a few other uses: macros as compiler extensions for generating fast code (see "Paradigms of Artificial Intelligence Programming" for a lovely example of a parser generator). But heed warnings about premature optimisation.
It leaves macros for novel binding strategies. List comprehensions, do-notation, destructuring-bind, pattern matching, CL's LOOP and macros that anonymously bind a pronoun like "it".
I have no shame in using macros for really succinct control structures where delaying evaluation with a lambda would just look gross (see "or" and "and" macros), but maybe that's just a cry for lazy evaluation :P
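The classic anaphoric example is AIF from On Lisp, which binds its test result to the pronoun IT:

```lisp
;; The LET is invisible at the call site; IT is captured deliberately.
(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

(aif (position #\@ "user@host")
     (format nil "@ at index ~a" it)
     "no @ found")
;; => "@ at index 4"
```

No function could do this: the then-branch must be left unevaluated until IT is bound.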
Lisp has several time phases that may or may not interleave, so yeah it can be confusing.. but keeping things simple and mostly only thinking about macros at compile time or read time can still go a long way. Ultimately an important and practical high level effect is macros let you create new syntax, not just computation at compile time to save some function calls at runtime. Want infix math? You can have infix math: https://github.com/rigetticomputing/cmu-infix Going crazy with lexical nesting (similar to callback hell, but imagine each callback needs to close over the environment it's called from, not just defined in)? A one-liner might help: https://fare.livejournal.com/189741.html Want powerful looping constructs, powerful OOP, pattern matching, etc? Of course you can stick with simple macros that are little more than function wrappers, but you can do a lot more.
Lisp teaches you that there's always a better tool for the job than something that's already in your toolkit. More than any other language I've worked with, Lisp makes it incredibly easy and low-friction to write domain-specific languages to solve the exact problem you're working with.
That's not always a good thing -- it can make working with a foreign codebase difficult -- but it's definitely a powerful concept when applied correctly.
Lisp seems to take the exact opposite approach as Go. The power of languages like Lisp appeal to me, so I have a hard time understanding why people want a language that intentionally limits itself. Read Graham's book and he's talking about how macros are great for writing maintainable code because you can make it both short and very readable because it's close to the domain.
But then you hear the arguments in favor of Go maintainability, and it's about copy/pasting being preferable to abstraction and not having too many ways to do things, so everyone's code looks familiar. Also that the extra LOC are more readable to Go maintainers than powerful abstractions. That surprises me, because languages like Lisp, Smalltalk, Ruby and Haskell are all about powerful abstraction capabilities so you can express yourself exactly as you need instead of writing a lot of boilerplate.
That surprises me, because languages like Lisp, Smalltalk, Ruby and Haskell are all about powerful abstraction capabilities so you can express yourself exactly as you need instead of writing a lot of boilerplate.
There is a mindset that's popular among programmers which says, "I cannot understand what anything does except by knowing all about its internals." Languages like Go let the programmer keep that mantra instead of understanding the things they use based on a description of external behavior.
Complex features, lots of syntax, and many ways to do things aren't a problem for devs experienced in a particular language. The real trouble comes when those experienced folks leave the project for something more challenging and you start with newbie devs who have no experience; they feel demotivated when they come across tricky features and abstractions that require months of conditioning. At that point, all you want is to make a meaningful contribution in a reasonable amount of time to keep your motivation up. The beginning is the hardest part.
That is the theory; the practice is the factory-factory pattern, design pattern books, code generation frameworks, IDE plugins, error handling libraries... all to work around the language's limitations.
So when a new person comes onto the project, there is this spaghetti of workarounds in place.
Plenty of typechecking primitives are available, and CL itself is strongly, but dynamically, typed. Now, the Lisp way strongly favours interactivity in programming, so by default those typecheck helpers are not too convenient to use, and a lot happens at runtime. But:
- CL standard defines a pretty decent type hierarchy (not Haskell-level decent, though).
- While not required by the standard, good CL implementations make use of typing for optimization and safety checks at compile-time. SBCL is particularly great in that domain, employing a solid type inference engine.
- CL allows you to override optimization/safety levels at very small granularity - even sub-function level - with (declare (optimize ...)) forms - like, you can e.g. drop (declare (optimize (speed 3) (safety 0))) inside a loop inside a function, to optimize just this particular section of code.
- Even though CL type-related primitives aren't very convenient (they generate a bit of line noise in the code), the macro system gives you all the power you need to hide it under whatever syntactic sugar you like. There's nothing stopping you from writing (or finding a library defining) e.g. a macro:
(declaim (ftype (function (real (integer 0 100) t) rational) foo))
(defun foo (x y z)
  (check-type x real)
  (check-type y (integer 0 100))
  (the rational (progn ... some code ...)))
which will give you both runtime checks and, with compilers like SBCL's, plenty of compile-time type checks too.
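As a sketch of that syntactic-sugar point, here is one hypothetical such macro (DEFUN/TYPED is made up; the declarations it expands into are standard):

```lisp
;; Expands into the DECLAIM + CHECK-TYPE + THE boilerplate, so the
;; call site reads like a typed function definition.
(defmacro defun/typed (name typed-args return-type &body body)
  (let ((arg-names (mapcar #'first typed-args))
        (arg-types (mapcar #'second typed-args)))
    `(progn
       (declaim (ftype (function ,arg-types ,return-type) ,name))
       (defun ,name ,arg-names
         ,@(loop for n in arg-names
                 for ty in arg-types
                 collect `(check-type ,n ,ty))
         (the ,return-type (progn ,@body))))))

(defun/typed add-clamped ((x real) (y (integer 0 100))) real
  (+ x y))
```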
--
I see "do the right thing" in the context of that epigram as meaning that you can choose to do the right thing without having to make compromises for syntax and semantics, as Lisp will happily let you remove any and all boilerplate with its macro system.
"Do the right thing", in this context, goes beyond just the presence or absence of typechecking.
But regarding type checking, Lisp (at least Common Lisp) is strongly typed. Really, very strongly typed (for example, it will complain about putting a "byte" in a "character" array, or about using an "array" when a "simple-vector" was expected... Lisp is very nitpicky about types!), but the type checks happen mostly at runtime. Some checking also happens at compile time, even more if you intentionally include type declarations. (Type declarations are part of the ANSI Common Lisp standard.)
> Some checking also happens at compile time, even more if you intentionally include type declarations. (Type declarations are part of the ANSI Common Lisp standard.)
Just a note that this is entirely implementation-dependent. SBCL, for instance, is very good about using type declarations as correctness checks (those that can't be statically verified transparently degrade to runtime assertions) but their exact behaviour isn't specified in the standard; for example, implementations are free to take them as declarations that the programmer knows things the implementation doesn't and to trust them, which could cause weird bugs if they're not correct.
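A concrete sketch of that trap (exact behavior varies by implementation and safety settings):

```lisp
;; With high speed and zero safety, an implementation may trust this
;; declaration blindly. Calling (UNSAFE-INC "oops") is then undefined
;; behavior rather than a clean type error.
(defun unsafe-inc (x)
  (declare (type fixnum x)
           (optimize (speed 3) (safety 0)))
  (the fixnum (1+ x)))
```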
Emacs, some of the most reliable software ever written, was written in a language with no typechecking -- in Lisp! Emacs never crashes, although it is configurable by the user in almost any way -- also in Lisp.
>Lisp seems to take the exact opposite approach as Go. The power of languages like Lisp appeal to me, so I have a hard time understanding why people want a language that intentionally limits itself.
Me too.
I don't understand how people in HN vouch for restrictive languages.
People don't want powerful features because they can be misapplied, making a mess. These people aren't worried that they will make a mess of their own code, they are worried that they will have to deal with someone else's mess.
What happens as you add more developers to a code base, with larger variance in ability and favored abstractions is going to be important in some cases, and irrelevant in others.
Imagine that you're joining a project. It's been worked on by a team of 100 people for a decade. Half of those people were below-average programmers. Many were newbies in the language, and some were newbies to programming. And you're going to get to try to maintain this code.
Now, do you want it to be written in a restrictive language, or in one that gives developers the ultimate amount of freedom?
Having worked both with big Java projects written by newbies, and on a large Common Lisp codebase that's older than I am (29), I don't really see much difference. Large projects are large, they always require time and effort to get into. That said, my current experience is that:
- A dumb language and simple code in a big project means lots and lots of code. On such codebase, my biggest issue is keeping track of how things fit together, because there's just so much of it (hint: they don't; people writing it can't keep track of all that stuff either).
- A powerful language and complex code in a big project means dense code. Like in that Lisp codebase, where I dealt with big macro-writing-macros, I would spend an hour with a macrostepper, trying to grok what a single line of code does. But once I did, that line of code (and similar lines in other places) were not a problem anymore, and they compressed what would otherwise be thousands of lines of boilerplate.
Which one I like more? I don't know. Big, old codebases suck, that's a fact of life. But I lean a bit towards "more Lispy" than "more Java-y", because it makes me feel I'm using my brain to actually think, instead of just tedious bookkeeping.
>Many were newbies in the language, and some were newbies to programming.
I wouldn't join this project, no matter what the language is.
Imagine it's a popular language: JavaScript or C code, for example. There are no true namespacing/package/module facilities in JS or C. It would be even worse than the theoretical nightmare you think Lisp would be. (Lisp has extensive namespacing facilities; code can be contained within modules that don't clash.)
If it were Java, you would see wrongly applied design patterns, leading to over-complicated, hard-to-maintain code.
No, thanks. I wouldn't accept, no matter the language.
Newbies to programming should be educated and trained, not incorporated directly into an important project.
>Then you will never be employed, because that describes virtually every software project at a for-profit company.
I have eight years of software development experience (at a for-profit company), most of them as software development director in charge of a 15+ person team. So I reaffirm what I said: newbies should be trained first and only afterwards included in projects, and that's what I made sure happened on the team under my command.
They deserve to be trained first, in an environment where they can make mistakes freely until they feel confident.
Learning a functional language has been eye-opening, but I've always struggled with Lisp. It was Erlang that finally clicked with me.
I would suggest, if you haven't already learned one, to find some FP language that looks interesting. Elm, perhaps, or Elixir. I'm not a fan of "hybrid" languages like Scala; finding one that hews closely to the FP ideas will be more useful, I think, in learning how to think functionally.
Any new language that doesn't operate exactly like the ones you already know can give you new ideas on how to program. Don't get bogged down in the expectations others have; just find one that piques your interest and dive in.
While I agree with you, it's important to note that the defining trait of Lisp isn't that it's a functional language.
It can be functional, just as it can be object oriented or procedural, but those labels matter less to what Lisp is than does the intense focus on things like metaprogramming, in my opinion.
That is far from the defining trait of lisp. The association probably came about because it was easy to pass functions around in lisp. However, I find lisp is at its most powerful when you understand some of the imperative abstractions that are available to you.
Around 60 years ago, Lisp pioneered functional programming and was the only thing that supported it at all. Up until the mid-90s or so it was still far and away the most popular language with meaningful support for closures and higher-order functions. That hasn't been true for about 20 years now, but for a long time anyone interested in using that stuff probably learnt it from Lisp, and Lisp was probably their best bet for getting to use it.
There was always more to Lisp (I read a cute essay from the mid-60s about how to balance assignment-and-goto style programming with recursive-pure-function style programming in Lisp), but older people making that connection isn't unreasonable, or younger people who've only heard older people talk about it.
60 or so years ago, many of the main tricks of functional programming today were far too expensive in terms of memory to actually be used. So, I find this claim somewhat hard to take at face value.
Moreover, early lisps were far more up front about their imperative abstractions, something we try our damnedest to hide from folks nowadays, in ways that are actually hard to fully explain. You used to be given an array of functions not just as a programmer, but as a user of the machine. The "side effects" of the functions were the point of them: they literally made the machine do something.
So, yes, functional has always been a defining element of lisp. I can fully support that statement. The defining element, though? I have a hard time supporting that one.
Yes, I agree. If you've ever read old Lisp code, it's full of PROG, assigned variables, and jumping around with gotos. McCarthy even said: "LISP also allows sequential programs written with assignment statements and go tos. Compared to the mathematically elegant recursive function definition features, the 'program feature' looks like a hasty afterthought. This is not quite correct; the idea of having sequential programs in LISP antedates that of having recursive function definition." Mind, if you've ever read modern Lisp code, it's full of LOOP and ITER. Lisp has always been largely imperative.
As for what makes Lisp "different," I think it was true in about 1960 that it was mostly the functional support. It hasn't been for a long time, but people who don't know Lisp assuming that it still is isn't completely unfounded.
> More, early lisps were far more up front about their imperative abstractions.
I will say that I have no idea how someone could look at Common Lisp and not realise it was largely an imperative language unless they had some major preconceptions going in.
My reading comprehension on first waking up was terrible. I thought we were disagreeing, but I can see you were clearly adding to my statement, not contradicting it. :)
Love that quote from McCarthy, btw. I have not seen that before. Is there more in the context of where that came from?
I'd seen this link before, but clearly hadn't read it as well as I'd thought I'd done in the past. Likely I was too unfamiliar with the techs to really grok what I was reading at the time. Thanks for sharing!
I used StarLisp on the Connection Machine 1, and it was such a good fit for using the hardware. I didn't get too much time on that project, but StarLisp let me work offline on code before getting access to the hardware.
What are you saying "not really" to? I wasn't claiming that lisp didn't allow you to use hardware.
Rather, I was claiming that most of the hallmarks of functional code in today's programs weren't possible on older hardware. Specifically, many of the "functional data structures" that people are growing to love nowadays were decidedly not possible with so little memory.
But what is that contradicting? Lisp certainly ran on older hardware.
My assertion is that "functional" is not the defining feature of lisp. My evidence is that much of modern functional programming was not done for a large part of its history. My claim is further that many modern idioms couldn't be done on older hardware. Not that lisp couldn't run there, but that modern practices couldn't, regardless of language.
Functional is certainly a feature. Even a prominent one. Just not a defining one.
Doesn't that still support what I'm saying? Specifically, was it a defining idea, or the defining idea?
I've never claimed that functional was not a facet of lisp. Just that it is not the facet.
Now, the claim I'm making that requires the most evidence is that many of the common functional data structures people learn of today would not have been feasible on older hardware. I will not claim that they are too slow. I will claim that they are a bit too memory-intensive for older hardware.
So, immutable lists are not the same as cons lists, since most implementations allow direct modification of cons lists. (Scheme bucked this trend, and people laud it for that. But it was certainly not the norm early on.)
Similarly, the highly branched vectors and other data structures popular in Clojure and friends were just too unfriendly to that hardware. Could they have been done? Possibly, but the mutable structures had massive advantages on the hardware of the time. (Arguably, they still have advantages; nowadays we just get to make more tradeoffs between bleeding performance and maintenance.)
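To make the memory argument concrete, here is a minimal sketch (in Python, purely for illustration; the `cons` and `to_pylist` helpers are made up for the example) of the kind of immutable, structurally shared list being discussed. Every cell is a separate heap-allocated pair, which is exactly the per-node overhead that would have hurt on memory-starved hardware:

```python
# An immutable cons list built from tuples. Prepending shares the
# entire tail, so each new "version" of the list costs one extra cell.

nil = None

def cons(head, tail):
    """Build an immutable pair (a cons cell)."""
    return (head, tail)

def to_pylist(cell):
    """Walk a cons list and collect its elements into a Python list."""
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out

xs = cons(1, cons(2, cons(3, nil)))
ys = cons(0, xs)        # a "new" list; xs is untouched and fully shared

print(to_pylist(xs))    # [1, 2, 3]
print(to_pylist(ys))    # [0, 1, 2, 3]
print(ys[1] is xs)      # True: structural sharing, no copying
```

The sharing is what makes immutability affordable at all; the per-cell allocation overhead is the cost side of that trade.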
I'm definitely open to the idea that what I'm saying is flat-out wrong. I doubt I've reached my quota on stupid claims for my lifetime. And if my reading of your post was as off as one of the siblings', where you weren't trying to contradict my claim but strengthen it, my apologies. :) And my thanks for sticking with the thread.
Note that Clojure is not really a Lisp dialect, so you will need a different tutorial if you want to try a Lisp like Scheme, Common Lisp, Racket, TXR Lisp, etc.
Clojure simply differs too much from the Lisp languages. For example, take into account the "atom" keyword and its meaning in Lisp versus Clojure, and also the way lists are used in Clojure; see "cons", for example.
These are not frivolous differences -- atoms and conses are the key building blocks of the Lisp language!
Take any Lisp book from 1958 onwards: not a single example program will work in Clojure. Most would need a complete rewrite, because the concepts are different.
Do you like Python? Python is basically simplified Lisp. Common Lisp is Python plus first-class lexical closures (rather than second-class) plus true multithreading plus a real compiler so it runs much faster.
Plus parentheses rather than indentation to delimit expressions; parentheses are much more versatile once you get used to them.
Python is in no way a Lisp. Python is not homoiconic, does not have first-class identifiers (symbols), and does not have full support for dynamically loading code (https://news.ycombinator.com/item?id=14666300). All that makes Python much closer to BASIC than to other dynamic programming languages. I think of Python as a BASIC with an object system and a couple of incorrectly borrowed ideas from Scheme (lexical scoping and first-class functions). There is not much "simplified" there compared to Lisp 1.5, just less features with more complexity.
Well, the comparison by Peter Norvig makes Python and Lisp look pretty similar (https://norvig.com/python-lisp.html, also linked below in this thread).
In some cases, you can get around no-first-class-identifiers in Python by using strings and getattr(object, symbol) or locals()[symbol]. (What are other use cases of first-class identifiers, other than making some function arguments or macros look prettier?)
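The workaround described above can be sketched concretely (the `Config` class and its fields are invented for the example). `getattr` and `setattr` cover dynamic attribute access by name, though the string that names the attribute carries no identity or binding of its own, which is part of what "second-class" means here:

```python
# Emulating "symbols" with plain strings, as described above.

class Config:
    host = "localhost"
    port = 8080

cfg = Config()

# Read an attribute whose name is only known at runtime.
field = "port"
value = getattr(cfg, field)   # equivalent to cfg.port
print(value)                  # 8080

# Write one the same way.
setattr(cfg, "host", "example.com")
print(cfg.host)               # example.com
```

Note that renaming the attribute silently breaks every string that named it; nothing ties the string "port" to the attribute the way a first-class symbol would.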
And when Norvig made that claim in front of McCarthy, McCarthy disagreed.
> Peter bravely repeated his claim that Python is a Lisp.
> "Yes, John?" Peter said.
> I won't pretend to remember Lisp inventor John McCarthy's exact words which is odd because there were only about ten but he simply asked if Python could gracefully manipulate Python code as data.
> "No, John, it can't," said Peter and nothing more, graciously assenting to the professor's critique, and McCarthy said no more though Peter waited a moment to see if he would and in the silence a thousand words were said.
> What are other use cases of first-class identifiers, other than making some function arguments or macros look prettier?
You are completely missing the point. Not having identifiers be first-class objects with unique identity is how Python ended up being unable to reload code properly.
>Do you like Python? Python is basically simplified Lisp. Common Lisp is Python
I am very experienced and proficient with Python. Common Lisp goes way, way beyond what Python brings to the table. Take CLOS for example and compare it with Python's OOP facilities. CLOS is light years ahead.
Other differences (among many): Python is a high-level language. In CL, you can be high-level and low-level at the same time. For example, in CL you can disassemble your code to machine language and apply optimization directives and declarations to produce the shortest code, which can approach C speeds if done right.
That said, I still like Python a lot.
>Python is basically simplified Lisp.
... and I thank you for this phrase, perhaps I can use it whenever I need to justify my usage of Common Lisp at my workplace.
The sad part is that Ruby and Python could have taken some lessons from Lisp on how to have a dynamic language with a good toolchain for generating native code.
Thankfully, we now have Julia as yet another Algol-Lisp attempt.
This video was awesome, and entertaining as well, Mr. Tarballs-are-good/Symbol1cs/Stylewarning ("stylewarning" as a twitter ID made me ROFL). In particular, the comparison between calligraphy and Lisp programming was a very good one.
I've sent you a LinkedIn request, by the way. I'm the bald one in business suit.
It's funny that I've been downvoted for this comment, given that I've made a living writing Common Lisp for most of my career (including working with the presenter in this video at one of the companies he describes). Of course Python is nowhere close to CL. (If anything, Javascript is closer because it has first-class closures.) But I've found that if you tell people the whole truth up front -- rather than letting them discover it for themselves -- it comes off as condescending.
People probably just thought you missed the biggest differences. None of your three would come to mind if someone asked me the main differences between CL and Python (especially real multithreading, considering it's not in the standard even if bordeaux-threads works everywhere), and I'm not close to being a CL expert. I might get to 'performance' eventually if I had to list a bunch.
If you 'let people discover the truth' then you get a person who reads a couple chapters of SICP when they're 16, thinks they know Lisp, dismisses it as a cool idea language but not practical (compared to the more familiar and productive Python, Java, PHP..), and maybe just maybe ~10 years later they see an example like https://news.ycombinator.com/item?id=12222404 and say "I never knew Lisp, what have I missed out on?" But they could have been told what they were missing out on in the beginning! "Condescending Smug Lisp Weenies" may be Lisp Enemy #1, but I think Enemy #2, not far behind, is probably "People who think they know Lisp, but actually don't, write it off due to incorrect assumptions".
That said, I do think there's a certain something common to Lisp and Python that draws people to use both. e.g. Norvig. I think his page at https://norvig.com/python-lisp.html is a pretty balanced comparison.
The only other languages that come close to Lisp with a Python-like syntax are Stanza [1] and Nim [2].
Stanza feels like a Lisp with infix syntax. The closeness to Lisp explains why Stanza makes a semantic difference between "f(x)" (which is a function application) and "f (x)" (which is a sequence of two elements, f and x). Unlike Nim, which is strongly typed, Stanza allows you to mix typed and untyped data freely.
Having started in Python land, and having begun to wander into the borders of Lisp land, I found this talk covers the experience of the transition very well. More importantly, it provides great suggestions for the social aspects of getting new users over the hump (with happy intern comments as well; I would like to hear the unhappy ones too).
Overall a good talk about the realities of one way to use lisp successfully in a 'modern' software development environment.
32:50 "Despite the odds, I've used Lisp at no less than 7 companies, 5 in which the Lisp ended up in the product that was actually delivered to the customer."
Sorry about that. Our microphone/recording rig wasn't set up in time, and we had to resort to a battery-powered lapel mic. We will try to at least duplicate the channel.
I hope that the audio is at least relatively clear, and not muffled and incomprehensible!
Indeed. There is nothing even coming out of the right channel; I had to check another audio source to verify something wasn't broken. To top it off, comments are disabled on the video, so there is no way to give this feedback.