I've been hearing claims my entire programming career about how Lisp is supposedly "superior" to mainstream programming languages, but I've never seen a concise code example that actually demonstrates this.
For instance, it's easy to demonstrate how Rust is superior to C: Just show a short piece of code where an array is returned from a function. In C, this will involve raw pointers and manual memory management with all associated safety and security pitfalls, whereas in Rust you just return a `Vec` and everything is taken care of. Simple, obvious, real-world superiority.
What would such an example look like for Lisp? I'd love to see 10 lines of Lisp that show me something that:
Lisp is absolutely superior and you should absolutely learn it but you'll never get concise examples as to why. Only jokes and anecdotes.
How do you fix a waterlogged smartphone? Put out a bowl of rice, which attracts an Asian guy who will repair it for you.
Languages have pedigrees. If you pretend like your company is enamored with javascript you'll get people who love fedoras and call themselves Ninjas. Big teams, single function libraries, lots of code shipped - move fast and break things. Cats pawing at Macbook keyboards. Mumble rap.
If you pretend you love Haskell you'll attract mathematicians in elbow patches. Great stable code will sporadically appear once every couple of years seemingly at random. Genius solutions to neat problems that have nothing to do with what the company is actually trying to accomplish. Ents. Classical music.
If you pretend to love lisp you'll attract people who read PG essays and will quit to start their own companies. Maybe they'll help you close out some tickets in Jira before they bounce if they can get your Rube Goldberg monstrosity working on their laptop. Honey badgers and hamsters. U2.
If you pretend to love latin you might get elected PM.
If you actually learn a few orthogonal languages to cover the very finite amount of paradigms you'll eventually come to realize they are all crap.
I genuinely like Rust, but it's pretty amazing that you managed to write a comment on this article and make it about Rust. Peak HN :-)
Re Lisp's superiority - Lisp was certainly a superior language when it was devised many decades ago, but over time much of its comparative advantage has been absorbed by other languages. This trend accelerated starting in the 90s, when productive, GC'd scripting languages like Python rose to prominence.
I can iterate dozens of times between #2 and #3 in the time of a single incremental production build in rust (debug builds are not useful for gathering profiling data).
Generally speaking it takes less than a second to recompile and load a source file, and it can be done while the program is running.
SBCL can get much faster results than what I see there via use of SIMD procedures. I think in some tests it beat Rust on spectral-norm calculations.
I haven't used VS in a decade, but you needed to restart your program after changing the code.
Also, C++ linkage, while far faster than Rust's, isn't the fastest thing, and compilation can be slow too (particularly with the popularity of header-only libraries).
Ten years ago, VS already had the "Edit and Continue" feature. But it's not very good compared to what you get in a Lisp environment. It's slow, there are restrictions on where/when it can be used, and it's possible for the edit operation to fail.
> you needed to restart your program after changing the code
Wow. No. This is a really basic debugging feature that's been around for over a decade. You can also drag-and-drop the instruction pointer to another line, watch and change variables in real time, etc. I swear, every LISP-related thread has someone touting ancient features as if they are somehow new or unique.
What's your "etc."? In Common Lisp I can pretty much redefine the WHOLE compiled program at runtime; boasting about redefining variables to someone using Lisp is a big LOL moment. As far as interactivity and debugging are concerned, besides Smalltalk, it's not nearly as good in any other language as it is in Common Lisp.
I don't think you know just how modular a Common Lisp environment is. Sure, other languages have some hot reloading that sometimes works; you can change the value of variables, maybe even jump back and forth in your stack trace. But rarely do they do any of this as well as Common Lisp, where the only things that don't dynamically update when you change something are struct instances and macros. Functions, variables/values, and classes (including live instances) can all change as you want without restarting anything. When something goes wrong, like an error or exception, you can modify some code, restart somewhere on the stack, ignore it and continue execution, or substitute a dummy value or a placeholder function.
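To make that last part concrete, here's a minimal sketch of the restart machinery (safe-parse and use-dummy are made-up names for illustration): on a parse error the caller, or you in the interactive debugger, can pick a restart that substitutes a dummy value instead of unwinding the whole program.

```lisp
;; A restart point: on failure, offer to return a dummy value instead.
(defun safe-parse (s)
  (restart-case (parse-integer s)
    (use-dummy ()
      :report "Return 0 instead."
      0)))

;; A caller can pick that restart programmatically; interactively,
;; the debugger would list it as one of the available resolutions.
(handler-bind ((parse-error (lambda (c)
                              (declare (ignore c))
                              (invoke-restart 'use-dummy))))
  (safe-parse "not-a-number"))   ; => 0
```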
Some form of interactivity with the debugger is common in my experience. The level of interactivity you get with Common Lisp is not.
That's fair. In this case the cave is called "embedded development."
I found my old VS CDs and the latest version I own is .NET 2003, so I only missed that feature by a single release. I couldn't truly be a smug lisp weenie without some snide remark about how nice it is that blub developers finally got something not quite as powerful almost 40 years after Lisp could do it, so just consider me to have made such a statement.
Does this include replacing all existing instances of this class with new instances? Otherwise, how is this different from what we can do in any other dynamic language like python or javascript?
We all know about memoize, but let's say I want to define a global hash-map, where the keys are actual pieces of code and the value the result that would be evaluated when executed. Something like this:
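(The definition itself didn't survive above; here's a minimal sketch of what it might look like, assuming an EQUAL-keyed hash table and EVAL - the names *code-cache* and cache are mine:)

```lisp
(defvar *code-cache* (make-hash-table :test #'equal))

(defun cache (form)
  "Return the cached result of FORM, evaluating it only on the first call."
  (multiple-value-bind (value present-p) (gethash form *code-cache*)
    (if present-p
        value
        (setf (gethash form *code-cache*) (eval form)))))
```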
Which can then be used like this in a trivial way, just passing code because code is data:
(let ((path "~/foo")
      (tags (cache `(git-tags ,path))))
  (format t "Tags of ~a~% ~{- ~a~%~}"
          path tags))
Sure, something like this is possible in other languages, but having done macros in languages such as Rust and Nim, it involves such a verbose and syntax soupy way of dealing with the AST that I don't feel like reaching for those abstractions that often. I'd rather just write boring code, and most consider this a feature.
Nim has `quote do:` with a kind of ghetto quasiquoting as well as genAST (and other things) to lessen the burden, but it is always simpler to write boring code (and better unless you have a burning need for The Fancy).
One way to rephrase objections to "all those parens" of Lisp is that the most common style of using it makes it necessary to "write boring code 'in AST'", if you will, and not even in a very nice, commonly accepted 2-dimensional tree notation.
I always wonder how different the history of prog.langs would be if early on one of the many indent/offside rule based 2-D notations had become popular with "boring code" writers in Lisp and not eschewed by "fancy macro writers" in Lisp.
> I always wonder how different the history of prog.langs would be if early on one of the many indent/offside rule based 2-D notations had become popular with "boring code" writers in Lisp and not eschewed by "fancy macro writers" in Lisp.
That sounds interesting but is hard to search for, have you got an example?
This is the latest for Scheme according to Wikipedia's Offside Rule article [1]:
http://srfi.schemers.org/srfi-119/srfi-119.html
I have not read this "Wisp" spec lately, but IIRC it has many back references to prior attempts, at least in the Scheme community; not sure about the Common Lisp community.
EDIT: To elaborate on my `quote do:`, this is a little macro to avoid doing many tedious code repetitions:
macro strp(sVars: varargs[untyped]): untyped =
  result = newStmtList() # strip some string vars; assume new scope
  for sV in sVars: result.add(quote do: (let `sV` = `sV`.strip))
Maybe that's one man's "syntax soup", but I don't think it's so bad. (My 3 letter idents are probably worse!)
The static typing of Nim (rather than gradual typing defaults like Lisp or Cython) tends to make beginner programs less "performance cringe" (as long as they compile with `-d:release -d:lto`!).
Considering Lisp was here first, shouldn't the real question be "why use Rust/C++/Python when there's Lisp?" You can't even create a real closure in Rust.
I'd love to see 10 lines of Rust that showed me something that:
#![allow(arithmetic_overflow)]
fn main() {
    let x = 1073741823;
    println!("x = {}", x*3);
}
# cargo build && cargo run
thread 'main' panicked at 'attempt to multiply with overflow', src/main.rs:4:24
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
To be fair, Rust is improving over time, as of I think last year you now have to explicitly have that first line to allow the overflow? This behavior is somewhat annoying to replicate in Lisp if you aren't familiar with type declarations and suppressing the debugger:
(defun main ()
  (let ((x 1073741823))
    (declare (type (signed-byte 32) x))
    (format t "x = ~a~%" (the (signed-byte 32) (* x 32)))))

(handler-case
    (main)
  (simple-type-error (e)
    (format *error-output* "Panicking because of ~a~%" e)
    (uiop:quit 1)))
# sbcl --script main.lisp
; file: /tmp/main.lisp
; in: DEFUN MAIN
; (THE (SIGNED-BYTE 32) (* X 32))
;
; caught WARNING:
; Derived type of (* COMMON-LISP-USER::X 32) is
; (VALUES (INTEGER 34359738336 34359738336) &OPTIONAL),
; conflicting with its asserted type
; (SIGNED-BYTE 32).
; See also:
; The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
; caught 1 WARNING condition
Panicking because of Value of (* X 32) in
(THE (SIGNED-BYTE 32) (* X 32))
is
34359738336,
not a
(SIGNED-BYTE 32).
A bit more work and you could muffle the compilation time warning too. As for how important this is, I'unno, personally I prefer to have my math Just Work by default -- (expt (expt 2 64) 64) or #I((2^^64)^^64) -- and I like by default being given the chance to fix things and continue/restart via the debugger rather than panic.
This is a trick question, because 10-line programs are trivial. Please show me a high-performance, hardware-accelerated 3D game engine written in Lisp. Or a web browser.
Lisp has a rich history in games and graphics that you are not apparently aware of yet -- here's a few mostly Common Lisp Game Engines and resources to get you started (though GOAL was a Scheme variant):
How about an IDE with a base codebase of 1.5M LOC and tons of 3rd-party packages (Emacs)? Most modern programming is not about performance; it is about managing complexity.
> This is a trick question because 10 line programs are trivial.
I mean, OP asked for a 10 line program showing something that can't be done easily in Rust and actually matters, so he seems to believe 10 line programs aren't trivial.
That said line counts are an awful metric because I could just cram an entire library definition in a single line as if it were minified JS.
These questions are so silly. Nobody's written a web browser in Rust, Python, Ruby, Java, Clojure, Haskell, Erlang, TypeScript, JavaScript, etc., but that hasn't stopped anybody from using them.
I learnt Lisp sometime around 1986 (from library books and, later, on the Sinclair QL, LOL). I do love it in some ways, but I found distribution of final code to be an issue. I don't think it's suitable for all tasks.
> but found distribution of final code to be an issue
in 1986 ?
> don't think it's suitable for all tasks
To this day I'm yet to see an example of a software engineering problem that Lisp is not suitable to solve. If you have such an example ready I would be genuinely curious; otherwise it's just a thought from a very common and lazy misunderstanding of what Lisp actually is.
I'm talking about computational problems. Non-examples include things like business sense/requirements, availability of programmers, availability of ready-made libraries/frameworks, or claims of apparent unsuitability in large teams.
The hard part of a web browser is the rendering engine, which is why it was raised as a challenge. Sticking a Lisp front end on an existing rendering engine doesn't answer the challenge.
Then he didn't ask the right question. Nyxt is a browser in every sense of the word. Anyway, there is no reason Lisp can't be used to build a good rendering engine.
I'd say the most satisfying experiences I have with Lisp are a cross between clean abstractions and light code golfing. To call out specific approaches, I'd highlight anaphoric macros, code walking via (symbol-)macrolet [1], and the freedom to control when expressions are evaluated (generally at read time, compile time, or run time) for optimization [2][3].
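For anyone unfamiliar, a sketch of the anaphoric style mentioned above - the classic AIF from On Lisp, which captures the test result under the name IT:

```lisp
(defmacro aif (test then &optional else)
  `(let ((it ,test))          ; anaphora: THEN may refer to the result as IT
     (if it ,then ,else)))

(aif (assoc :b '((:a . 1) (:b . 2)))
     (cdr it)                 ; => 2, without naming or repeating the lookup
     :missing)
```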
Yeah, there is nothing complicated about returning a static array in C.
C is a much simpler language than many people assume -- of course, it can get really hairy and complicated especially when you need to do dynamic memory management, but that's not what the GP was asking for in this case.
> that's not what the GP was asking for in this case.
You really think that when they mentioned the dynamically allocated `Vec` in Rust (which has fixed-sized arrays) and "manual memory management" in C, what they were actually talking about was a static array?
> whereas in Rust you just return a `Vec` and everything is taken care of
The downside is that you just performed a hidden heap allocation. If automatic memory management is desired, a garbage-collected language might have been the better choice in the first place.
Neither solution uses heap allocation; both pass the array data by value on the stack (or packed into registers) and should compile to the same assembly code (ABI differences aside).
That's also one of the rare cases where the Rust code actually turns out simpler and more readable than the C equivalent ;)
I have no idea what you can or can't easily do in Rust. Here is something many languages can't do succinctly, without closures and code-as-data. From Paul Graham's book, On Lisp, modified to work with Lisp code as keys.
(defun make-dbms (db &key (test #'eql))
  "Make a database; DB should be a list. TEST determines what keys match."
  ;; Three closures in a list to make a database.
  (list
   ;; lookup
   #'(lambda (key)
       (rest (assoc key db :test test)))
   ;; add
   #'(lambda (key val)
       (push (cons key val) db)
       key)
   ;; delete
   #'(lambda (key)
       (setf db (delete key db :test test :key #'first))
       key)))
(defun lookup-dbms (key db)
  "Return the value of an entry of DB associated with KEY."
  (funcall (first db) key))

(defun add-dbms (key val db)
  "Add a key and value to DB."
  (funcall (second db) key val))

(defun del-dbms (key db)
  "Delete the entry of DB associated with KEY."
  (funcall (third db) key))
which uses a Lisp to define itself. This means roughly that if you understand enough Lisp to understand this program (and the little recursive offshoots like eval-cond), there is nothing else that you have to learn about Lisp. You officially have read the whole language reference and it is all down to libraries after that. Compare e.g. with trying to write Rust in Rust where I don't think it could be such a short program, so it takes years to feel like you fully understand Rust.
Indirectly this also means that lisps are very close at hand for “I want to add a scripting language onto this thing but I don't want to, say, embed the whole Lua interpreter” and it allows you to store user programs in a JSON column, say. You also can adapt this to serialize environments so that you can send a read-only lexical closure from computer to computer, plenty of situations like that.
Aside from the most famous, you have things like this:
1. The heart of logic programming is also only about 50 lines of Scheme if you want to read that:
3. The object model available in Common Lisp was more powerful than languages like Java/C++ because it had to fit into Lisp terms (“the art of the metaobject protocol” was the 1991 book that explained the more powerful substructure lurking underneath this object system), so a CL programmer could maybe use it to write a quick sort of aspect-oriented programming that would match your needs.
> 3. The object model available in Common Lisp was more powerful than languages like Java/C++ because it had to fit into Lisp terms (“the art of the metaobject protocol” was the 1991 book that explained the more powerful substructure lurking underneath this object system), so a CL programmer could maybe use it to write a quick sort of aspect-oriented programming that would match your needs.
In addition to that, Lisps are sufficiently flexible that before CLOS itself was developed people were extending Lisp in Lisp to try out different object oriented models that fed into what became CLOS. That's hard to accomplish in most other languages, if it's even possible without going to a third party tool or digging into the compiler itself.
So, the first thing is, what's the essential difficulty versus the noise: Lisp has two features which enable the evaluator to work. The first is quotation, which means that you can put a little apostrophe before some code and it saves that code as a data structure, rather than implicitly evaluating it. The second goes by the overfancy name "homoiconicity": that data structure is actually just a singly linked list, consumed by "car" ("first") and "cdr" ("rest") operations.
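Both features take about thirty seconds to demonstrate at a REPL:

```lisp
'(+ 1 2)          ; => (+ 1 2)   quotation: code saved as data, not run
(car '(+ 1 2))    ; => +         homoiconicity: it's just a linked list
(cdr '(+ 1 2))    ; => (1 2)
(eval '(+ 1 2))   ; => 3         and it can be handed back to the evaluator
```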
You can focus on understanding those two aspects pretty quickly, and then there are many tutorials about building your own Lisp that will get you there in an afternoon, if you'd like.
Or, you can take the royal road to this expression, in all of its cheesy 1980s glory:
It's lecture 7A where this specific program gets explicitly dissected by Sussman, wearing a fez. If he seems familiar, you might have watched Strange Loop 2011’s “We Really Don't Know How To Compute”, https://youtu.be/HB5TrK7A4pI
Watch Will Byrd's MiniKanren Uncourse. The topic sounds off, but he takes a few videos to get there, making a Lisp on the way, explaining this, or a variant of it. Should be within the first 4 videos.
Lisp is both low-level and high-level at the same time. Common Lisp has more than the power of modern Python and Go combined, and the language is concise - it has about a tenth of Python's size.
Just as examples, Common Lisp supports arbitrarily long integers, low-level bitwise operations like popcount, and rational and imaginary numbers, as well as, say, POSIX file operations and easy calling into C functions. It has things like list comprehensions, pattern matching, and dictionaries, and has had full Unicode support for a long time.
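A few of those at the REPL, for the skeptical:

```lisp
(expt 2 100)         ; => 1267650600228229401496703205376  (exact bignum)
(logcount #b101101)  ; => 4    (popcount, a low-level bitwise operation)
(+ 1/3 1/6)          ; => 1/2  (exact rational arithmetic)
(* #c(0 1) #c(0 1))  ; => -1   (complex numbers: i * i = -1)
```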
It supports both procedural and functional-style programming, and is, like Rust, a child of the language families which stem from the lambda calculus, where everything is an expression. The latter is an extremely valuable property because you can replace any expression in Lisp code with a function call or its value, even if it is an if expression.
It still has facilities which other languages do not have, like built-in support for symbols, which are used similarly to interned strings and keywords in Python.
At the same time, Common Lisp is extremely mature. For example, it is possible to define and use condition handlers, a generalization of exceptions that is especially useful in library code. Or, while other languages have only local and global variables, nothing in between, Lisp lets you define special variables: global values that can nevertheless be rebound within the scope and call stack of a particular function call, much as environment variables can be inherited and changed in the sub-processes of a program.
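A small sketch of that special-variable behavior (the variable name *log-level* is made up):

```lisp
(defparameter *log-level* :info)   ; a "parameter": global, but rebindable

(defun log-line (msg)
  (format t "[~a] ~a~%" *log-level* msg))

(log-line "starting")          ; prints [INFO] starting
(let ((*log-level* :debug))    ; rebound for this dynamic extent only,
  (log-line "inside"))         ; prints [DEBUG] inside
(log-line "done")              ; prints [INFO] done -- old binding restored
```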
And it compiles to quite fast native code, which Python can't.
Here is an introduction to Racket, which is a dialect of Scheme - I think it shows quite nicely the uniformity and simplicity of Lisps: https://docs.racket-lang.org/quick/
Lisp is functional programming. You can leverage the functional programming paradigms to write more correct code.
Also Lisp makes it easier to not repeat yourself. It's shorter to create functions, even macros, leading to the ominous "DSL" rabbit hole.
Another superpower is that it has no predefined keywords or operators. So you can redefine everything to whatever unicode you want. Including other languages and writing systems. It's a lot harder to write a programming language/compiler that uses another natural language idiomatically using something more similar to C or Python.
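For instance, since operator names are ordinary symbols rather than reserved words, nothing stops you from defining Unicode-named ones (a toy sketch):

```lisp
(defun √ (x) (sqrt x))                 ; a function named by a Unicode symbol
(defmacro λ (args &body body)          ; a Greek-letter alias for LAMBDA
  `(lambda ,args ,@body))

(√ 16)                                 ; => 4.0
(mapcar (λ (x) (* x x)) '(1 2 3))      ; => (1 4 9)
```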
In about one line of Lisp we can make an object with referential cycles in it, without declaring that we would like to abandon safety. We can have that object printed in a notation from which a similar object will be recovered, with the same cycles in the same places. All of this matters in practice.
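Roughly that one-liner (spread out for readability), with *print-circle* telling the printer to emit the #n= read labels that preserve the cycle:

```lisp
(let* ((*print-circle* t)
       (x (list 'a 'b 'c)))
  (setf (cdddr x) x)           ; tie the tail back to the head: a cycle
  (prin1-to-string x))         ; => "#1=(A B C . #1#)" -- READ restores it
```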
My relatively amateur take is that the REPL and the debugging experience seem powerful. I would like to know how they compete with other languages.
The REPL is the center of everything, and it enables you to change functions on a running program. Tracing a function (showing its arguments on every call) is as simple as calling (trace function-name).
If a program crashes, it does not really crash: it enters the debugger, which offers possible resolutions, including redefining the function that failed.
The REPL can trivially show you the assembly code of individual functions. You can also add declarations for each function with hints for the compiler (and then check the size of the resulting assembly).
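For instance, in SBCL (the declarations are optional hints; SQUARE is a made-up example):

```lisp
(defun square (x)
  (declare (type fixnum x)
           (optimize (speed 3) (safety 0)))   ; hints for the compiler
  (the fixnum (* x x)))

(trace square)          ; from now on, every call logs arguments and result
(square 7)              ; prints the trace entry, returns 49
(untrace square)
(disassemble 'square)   ; dumps the native assembly for just this function
```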
I believe this series of articles highlights some of the debugging features.
I can program my numerics-heavy code in SBCL with much better interactivity and debugging than Python can offer, and with much, much better performance.
As far as writing the actual code goes, Lisp syntax allows me to do structural editing, which to me is just on another level.
But as with all things in life, you should try before you buy.
"For instance, it's easy to demonstrate how Rust is superior to C: Just show a short piece of code where an array is returned from a function. In C, this will involve raw pointers and manual memory management with all associated safety and security pitfalls, "
Wrap the array in a struct and return the struct by value; C won't let you return a bare array (even through a typedef), but a struct containing one copies out fine. One pitfall: it's not resizable.
Well, you need to remember that LISP was already a thing before 1960. The only competitor at the time was FORTRAN. Even C was introduced more than 10 years later.
LISP had garbage collection and was designed for symbolic manipulation. Given that programs were just lists of symbols, it was fully meta from the beginning. This gave it a raw power that was decades ahead of its time. Even nowadays, this malleability gives Lisp languages the power to provide as libraries things that in most languages would require modification of the language itself. For example, there is a library for Clojure (a popular modern Lisp which runs on the Java and JavaScript virtual machines) that adds type support. Think about that. Yes, I know, Mypy adds types to Python, but in the case of Clojure the language and runtime didn't need to be touched at all for the library to work. You can basically do whatever you want as a library, because the core of the language is so powerful.
Another distinct characteristic of Lisps was the possibility to treat programs as living things you can just "talk to" through the REPL. Programs are developed in a more "conversational" way than with languages such as Java, Rust, etc. Some would say that this is a superpower and others would say that it's of marginal value. It just depends on personal preferences.
Now, we are in 2022. Obviously, languages have evolved a lot and they are ridiculously more advanced than FORTRAN. Lisps don't have a clear killer feature that can't be found in some other language, and they actually lack some convenient things. There is no type system for Lisps that is practical, convenient and well supported by tooling. That is a big disadvantage in an era where programs are big beasts normally built by several people, mostly gluing together a bunch of libraries with big APIs that you need to explore somehow. Typed languages with accompanying IDE tooling (i.e. having a language server) offer a much quicker and more effective way to develop than spending the day reading API docs.
Now, should you learn a Lisp in 2022? Well, I think there are some advantages of doing so. They have a lot of historical value and their simplicity and power are quite instructive, I'd say. Playing a bit with some Scheme (or Racket) can be very fun. If you are curious about Lisps and also functional programming, I'd suggest learning some Clojure. It's a very nice language and it really changes how you think about things, especially if you haven't been doing "hard" functional programming before.
Clojure's general approach and concrete libraries such as Reitit, Malli or Specter really can change how you look at things and give you a deeper understanding of other characteristics of your other languages of choice. It's a bit like learning some Japanese if you are a German or French speaker: it can help you understand, for example, how unnecessarily complex your verbal system is, and how unnecessarily complex the Japanese numbering system is. If you are a fish, it's difficult to understand what water is unless you get out of it. Maybe Lisps can be this breath of fresh air.
SBCL probably does more type checking than one thinks. It catches many useful type errors and warnings, especially since we get them instantly, right after compiling a function with a keyboard shortcut.
Then we have the new Coalton library, that brings ML-like type checking on top of CL.
(and yes CL still has killer features, and no one brings all of them together!)
Amazing! Ferris is loved and Tölva's art at that site is so nice and funny. Very cool and appropriate that she's also managed to have "Rust" as part of her name.
Hylang is a great way to dip your toes into Lisp style languages IMHO since you have the entire python ecosystem at your fingertips. It was very eye opening to rewrite some scripts in Hy. I'm not sure if I'm ultimately a fan of lisp, but I had a lot of fun learning Hy a while back after discovering it in another HN post.
>I doubt anyone uses hy in production for completely other reasons, would love to be proven wrong.
When I was working on Hy, almost 10 years ago, there were a couple of Hy core devs who had managed to push some Hy code into production. I won't name names, but a known company used Hy when processing firewall rules (if I recall correctly).
I doubt that code is still being used, and I'm not aware of anyone beyond a couple of previous Hy core devs that managed to push Hy into production.
There are plenty of reasons not to add a transpilation step to production code:
* Looking at stack traces is suddenly so much harder and involves reverse-engineering the transpiler unless your transpiler is sophisticated enough that it can de-transpile the stack traces back to the source language.
* You're adding a step to your build pipeline.
* You're taking on the risk of bugs or security issues that may exist or be introduced to the transpiler itself.
* Your developers need to learn both the source and target languages.
* If the transpiler project is abandoned, you have to begin maintaining it yourself, or you need to de-obfuscate your transpiled code enough to make it your canonical code.
I think that anyone who used CoffeeScript in the late 2000s / early 2010s understands the pain of using a transpiled language that wasn't worth the trouble.
Not to say that one should never transpile, or that Hy's not worth it or not production ready! Just that, there are things to consider, and how much you need to consider them depends on what your "production" is.
Hylang doesn't add a separate step to your build pipeline: it hooks into Python's import machinery, so .hy files are compiled to Python bytecode on import. Stack traces aren't that useful in Lisp, but the bytecode refers to the actual source code positions.
Multiple languages are less of an issue when you have a small team or a team of one.
I'd be willing to use Hylang in production. I wouldn't go all-in, but rather pick something that looks like it needs a DSL. You can still interface with Python code interchangeably whenever you want, so there really is not much pain to abandon the attempt completely.
If you write your code in Python, many team members or new hires will already know Python and hence be able to dive right in to the code base.
If you write your code in Hy, many people are going to have no idea. Small chance they already know it. A minority of people will have some background in Lisp, and if you already know both Python and another Lisp, learning Hy should be straightforward. But for a Lisp-naive developer, it is a big speed bump.
Also there is a lot of existing tooling around Python code – static analysis, etc – which might not be able to handle Hy. Sure, it can process the transpiled Python, but that produces a lot less pleasant experience for the developer. You really want native Hy tools, which are going to be behind native Python tools.
Giving people the ability to create more useless, unsupported DSLs, in a world where people thought YAML dialects in the CI were a good idea, will damage Python in the long run.
And I say that as someone who has wished several times for macros in Python because the syntax was lacking.
I'm inclined to agree. As much fun as syntactic macros in Python would be, and despite all the doors they would open, they would really kick up the potential complexity. Heck, folks were complaining about pattern matching and the walrus operator.
I mean maybe the council will go with it, but I'm bearish.
I do really like the idea of jit macros and zero-overhead decorators though.
I feel the same way. I'd love to use macros to implement dataclasses. But it does open the door for unreadable code.
I'm torn on the issue: I don't want to be restricted from improving dataclasses in what would be fairly obvious ways, but I'm sure the feature would be abused. But maybe this falls under the "we're all consenting adults" guideline.
Zero-overhead exceptions are on the way, so it would be natural for decorators to be next, even if it's likely way harder to implement given that decorators can basically do anything side-effecty.
I don't think you're likely to see zero overhead decorators any more than "zero overhead software" in general.
Take dataclasses: it adds a bunch of synthesized methods to a class, and needs to use "exec" to create them. How could that be zero overhead?
To help with this, I've proposed moving knowledge of dataclasses into the Python compiler, but even I don't think it's a great idea. For example, it would leave attrs in a disadvantaged position, and I don't want to do that.
Another option would be to move dataclasses before the code generator. Syntactic macros are one way to do that. Then there really would be zero overhead when loading the cached .pyc files. That's their appeal to me. Of course there are lots of other things that could be done with syntactic macros, which is both good and bad.
I’m contemplating pushing for it in 3.12, but it’s probably more than a year’s worth of work to implement and shepherd the PEP through. So 3.13 is more likely.
And I’m not sure it won’t get shot down anyway. It’s a big step for Python.
Much like Clojure is lisp on the Java virtual machine, Hy is a lisp on Python. There are definitely plenty of ways in which lisps vary and I've heard Hy tends to follow some Pythonisms more closely than other lisps may follow their host languages.
Hy (or “Hylang”) is a multi-paradigm general-purpose programming language in the Lisp family. It’s implemented as a kind of alternative syntax for Python. Hy provides direct access to Python’s built-ins and third-party Python libraries, while allowing you to freely mix imperative, functional, and object-oriented styles of programming.
Hy is great fun - the batteries of python and the expressiveness of lisp. I've written a few scripts here and there, and I've had a fantastic time with it.
Hylang is fun and everything, but they make a lot of breaking changes in minor releases. It was "(import [module [function]])" in 0.20 and now in 0.25 it's "(import module [function])" and the old code doesn't compile anymore.
I wonder why the syntax isn't just standard Common Lisp, with a switch to turn on certain (or all) Hy features, so one wouldn't need to change anything. It's a bit confusing: partly Python, partly Common Lisp, and then a “!” thrown in for defmacro …
Why Hy? Hy (or “Hylang” for long; named after the insect order Hymenoptera, since Paul Tagliamonte was studying swarm behavior when he created the language)
I'm spending my HN karma to make this petty comment that if the first sentence of your docs immediately opens up with the story of how you named your project, I instantly don't want to use it or read any more.
It's like when you open a cooking recipe on a website and you get the author's entire life story
I don't get it. I am a Smug Lisp Weenie. Lisp is not syntactic sugar - the only reason it looks the way it does is metaprogramming. Those parens are not mere syntax - they indicate the underlying structure of the code: a tree, which can be manipulated by Lisp macros.
Why would anyone pretend to program in Lisp without any benefit? Also, Python is the worst imaginable engine for running Lisp. [correction: I see there are some kind of macros...]
I'm not sure what this comment is getting at, to be honest. That is the benefit: you can write macros and manipulate the syntax in a way you never could in Python.
I believe homoiconicity is more about the syntax than the underlying implementation - fans of homoiconic code use it to treat code as data (macros and eval), rather than to reason about performance or gain transparency into the underlying implementation (see: Clojure).
Think of it as Clojure is to Java: even without embodying the whole Lisp ethos, it gives you a Lispy base for writing Python code, plus extra niceties (the threading macro `->`, IIRC).
Clojure is more than just a Lisp for the JVM – it is a Lisp for the JVM with a big focus on immutable data structures – whereas traditionally most Lisps put mutability first instead. There are other Java-based Lisps, such as Armed Bear Common Lisp (ABCL), which are more traditionally Lisp-like in this regard. I think Hy is more of a traditional Lisp, since it shares Python's native focus on mutable data rather than trying to foreground immutability.
That! Clojure is an elegant experiment with immutability. The implementation details of Clojure's data structures are a pure joy to look at. The JVM is an unfortunate choice as I see it, but it's a pretty solid system benefiting from decades of research on JIT, GC, etc., so not entirely unreasonable as a back-end.
I have few good things to say about Python, other than it sort of works. I classify it in the big-sack-of-stuff languages, along with Perl and shells. Not much elegance or style.
Clojure is definitely more than an experiment. It processes many millions of dollars daily at banks and even at Walmart. It might not be used a lot, but "experiment" is not accurate at all. You didn't flesh out your comment about Java: there also used to be a C# (CLR) target, and there is still a JavaScript one. I think the JVM's fast memory allocator is particularly well suited to the memory demands of immutable data structures, and I find the JVM a very robust and mature platform. It's also a great gateway: being able to interop in corporate environments is a way to get traction. I'm curious which other language you think would be better, given the traits Hickey designed for.
I wasn't trying to gatekeep anyone (and I make no claim to be a "Clojurist"–my Clojure experience is rather minimal). I was making a point about categorisation, taxonomy.
There is a certain category of Lisp-like languages – of which Hy, Fennel and LFE are good examples – which take an existing language, and provide a Lisp-like syntax for it, but generally keep the semantics reasonably close to that of the underlying language. Beyond the syntax, the other main addition tends to be a Lisp-style macro system. Maybe we might call them "veneer Lisps", since they put a Lisp veneer on another language, but beneath the surface it is largely the same.
Clojure isn't a veneer Lisp, because its semantics are quite different from Java – Java is primarily about mutable data, and Clojure-style immutable data structures aren't the mainstream Java approach. A JVM Lisp which dropped Clojure's emphasis on immutability could be a veneer Lisp. Armed Bear Common Lisp and Kawa Scheme are examples of mutability-oriented JVM Lisps, but they don't belong to the category of veneer Lisps either, since they are ports of pre-existing languages to the JVM, and their mutability comes from those pre-existing languages not a desire to conform to Java/JVM semantics.
I'm not saying there is anything wrong with immutability, or that Clojure's focus on it is a mistake, or that one ought to either prefer (or avoid) "veneer Lisps": I'm attempting descriptive taxonomy, not prescriptivism.
No, Python does not have tail recursion optimization, because Guido decided that he wanted to preserve stack frames for better tracebacks. The language team may revisit that decision someday.
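A quick illustration of what that missing optimization means in practice (CPython; the function names and the 100,000 depth are just for demonstration):

```python
def count_down(n):
    if n == 0:
        return "done"
    return count_down(n - 1)   # a tail call, but CPython still pushes a frame

try:
    count_down(100_000)        # far beyond the default ~1000-frame limit
except RecursionError:
    print("stack overflowed: no tail-call elimination")

# The usual workaround is to turn the tail call into a loop by hand:
def count_down_loop(n):
    while n:
        n -= 1
    return "done"

print(count_down_loop(100_000))   # done
```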
I assume you're referring to an optimization, because one can write tail recursion in any language that supports subroutines.
> because one can write tail recursion in any language that supports subroutines.
If only! For anyone else who got saddled with some FORTRAN 77 code, you may or may not have recursion available even with subroutines. It wasn't required by the language standard, but some implementations supported it. Not the one I "got" to use (and quickly moved on from) a while back, though.
In those cases, you may wish to use a dynamic programming technique. A bit of caching might be a greater optimization than tail call elimination. The `@functools.lru_cache` tool is very easy to experiment with.
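For example, memoized Fibonacci is the classic toy case - the deep-recursion caveat still applies, but the cache collapses the exponential blowup:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Naively this recursion is exponential; with the cache,
    # each value of n is computed only once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))           # linear number of calls instead of ~2**90
print(fib.cache_info())  # hits/misses show the memoization at work
```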
They made a highly redundant traceback display much more pleasant a couple years ago, writing "the same thing 1,998 more times ..." or something like that.
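If I recall correctly, that's the `[Previous line repeated N more times]` collapsing in the `traceback` machinery (added around Python 3.6). A small demonstration:

```python
import traceback

def boom():
    boom()                      # unbounded recursion

try:
    boom()
except RecursionError:
    tb = traceback.format_exc()

# Rather than ~1000 identical frame lines, the formatter collapses the run
# into a single summary line:
print("[Previous line repeated" in tb)   # True
```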
That's good news ... I thought they had added it but were forced to remove it. I think this will make Hy a bit easier to use, especially for those who are more comfortable with CL or Scheme than Python :)
For instance, it's easy to demonstrate how Rust is superior to C: Just show a short piece of code where an array is returned from a function. In C, this will involve raw pointers and manual memory management with all associated safety and security pitfalls, whereas in Rust you just return a `Vec` and everything is taken care of. Simple, obvious, real-world superiority.
What does such an example for Lisp look like? I'd love to see 10 lines of Lisp that show me something that:
1. I can't easily do in, say, Rust.
2. Actually matters in practice.