I love lispy languages (Clojure being the one I'm most familiar with), but it's hard to give up static types.

Anyone here worked on a large lisp codebase? How big was it? How big was the team? What’s the secret sauce?




I have been using Common Lisp professionally for over a decade. Largest codebase was probably ~100k lines for an embedded optical fingerprint analysis and booking system for US law enforcement agencies. The largest Lisp team I've worked on was about 30 people for satcom hardware.

Every time I've described to people on the internet what makes Lisp nice, I've been met with a riposte that such aspects aren't unique to Lisp (be it interactive/incremental development, macros, efficient machine code, editor integration, whatever), or that a laundry list of features means nothing and that Lisp's value should instead somehow be proven through its (hitherto comparatively dismal) popularity with employers and software engineers. I myself have definitely given up on an academic approach to proselytizing the language, and instead have just focused my energy on building applications and libraries people (especially employers) find useful, at smaller cost and in shorter time than other people are able to provide.

It's like classical music. I can't convince you such music is incredibly rich and pleasant, especially not by talking about the harmonic theory of common practice. Instead, you just have to listen and agree or disagree, I think.

And Lisp certainly has no secrets; it's old as can be and extraordinarily and thoroughly documented. Countless people (myself included) have spilled ink on the (alleged) virtues of Lisp. It's all out there.

When it comes down to it, it's an incredibly flexible language useful for prototyping and rapid development, while at the same time supporting everything you need to transform your application into a rock solid, robust, static binary. All of Lisp's features balance in such a way that developer time and energy are prioritized and consequently optimized.

P.S. Like a sibling comment mentioned, you can have Haskell-level static typing in Common Lisp with Coalton: https://github.com/coalton-lang/coalton
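
For a taste, here is a minimal Coalton sketch (assuming the library is loaded; add-one is just an illustrative name). The declared type is checked statically, Haskell-style:

  (coalton-toplevel
    ;; The declaration below is enforced at compile time.
    (declare add-one (Integer -> Integer))
    (define (add-one n)
      (+ 1 n)))

With that in place, applying add-one to a string is rejected with a compile-time type error rather than failing at runtime.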


Common Lisp is just tastefully designed. Part of it is the language itself, but also the libraries and the culture around it. Even tiny things like hyphens-in-variable-names versus camelCaseVariableNames have knock-on effects, like encouraging descriptive names. It's hard to distill it into just one feature.


Hyphenated-names also make it much easier to see and remember whether you split something that might be a compound word: did I call it hotDog or hotdog? hot-dog jumps out as more distinct and is harder to mix up with hotdog.

Plus, the canonicalization of symbol names as ALL-CAPS makes it easy to pick out SYMBOLS among the prose of documentation.

edit: and people make fun of it for having like six equality functions, but at least it's only got one false value
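
A quick illustration of that spread of predicates, and of NIL as the sole false value (a sketch, not an exhaustive list):

  (eq 'a 'a)                      ; => T    object identity
  (eql 1.0 1.0)                   ; => T    identity, plus same-type numbers/chars
  (equal (list 1 2) (list 1 2))   ; => T    structural, for lists and strings
  (equalp "AB" "ab")              ; => T    looser still: case-insensitive strings
  (= 1 1.0)                       ; => T    numeric equality across types
  (string= "ab" "ab")             ; => T    type-specific comparison
  (if nil :yes :no)               ; => :NO  NIL is the one and only false value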


I don't think it's just the features of Lisp, but rather how they interact.

One thing I realized about Doom (1993 video game) is that -- sure, it's got scary monsters and cool weapons, but what really makes the game awesome is how these elements interact with each other. For example, an Imp has just enough health to be taken down with one shotgun blast, so in the early stages of the game, getting anything stronger than the pistol will be a priority when you encounter a pack of Imps. And a Cacodemon is very dangerous, but it has a high chance to stun, so you can pretty safely kill it by stun-locking it with a rapid-fire weapon like the chaingun. The way the monsters behave, together with the terrain, allowed the developers to combine just a few elements to create a huge variety of combat situations that pretty much required the use of every single weapon, and a variety of skills and strategies from the player.

And, not only did this make Doom a more exciting game to play, it opened up an enormous possibility space for user-generated content that a dedicated fan base is still exploring nearly three decades later.

Lisp is like that. It's not just that it has a REPL, or macros, or homoiconicity, a fast compiler, incremental development, or that the evaluator is itself built into the runtime -- it's that it has all those things, they compose nicely, and they interact in such a way that it opens up a tremendously huge possibility space to even a single programmer that other languages do not. Could you countenance writing an expert system in Java? It's possible, but it would be tedious work.

And Lisp's dedicated fans are still exploring that space today, six decades later.


I completely agree. Many people look at a language as the sum of its features, but it really isn't.

Also, I think Lispy syntax is key to REPL ergonomics. It means you can, more often than not, take a chunk of code and trivially insert it into the REPL.


Yeah, I absolutely agree. It really isn't about the laundry list, but how the parts come together to form a whole.

But I understand that, to many, it still sounds handwavy and nondescript. The Doom comparison is great; I'll keep that in my pocket.


How do you go about producing static binaries for SBCL?


Load your application up in SBCL and call `save-lisp-and-die` with the options you want (like what the `main` function should be).
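
A minimal sketch (the binary name is a placeholder, and main is whatever entry point you define):

  (defun main ()
    (format t "hello from a standalone binary~%"))

  ;; Writes an executable image named "my-app" whose entry point is MAIN.
  (sb-ext:save-lisp-and-die "my-app"
                            :toplevel #'main
                            :executable t)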



Like this, it's easy: https://lispcookbook.github.io/cl-cookbook/scripting.html#wi... (save-lisp-and-die run from a terminal)


You don't. As others noted, you can save a core image. Cross-compiling, tree shaking, and other such things don't exist in the SBCL world, and (from what I've read in the past) the people developing it have little or no interest in those features.

If you need those sorts of features paired with Common Lisp, you could give Embeddable Common Lisp (ECL) a try. Otherwise, if any Lisp language will work for you, CHICKEN Scheme and Gambit Scheme both compile to C.


> save a core image.

which can be made executable. So saving an image is the way to create an executable.

> the people developing it have little or no interest in those features

The demand for these features from the SBCL user base also does not seem high. Had they been needed, the development could have been sponsored, as, for example, the support for Apple Silicon was sponsored by users.

Though other CL implementations provide different deployment options:

ABCL compiles to JVM code.

ECL can compile to C or portable byte code. It also has been used for Android and iOS apps.

LispWorks & Allegro CL have tree shakers and can generate shared libraries. LispWorks also has runtimes for Android and iOS.


SBCL certainly produces an executable containing fully native machine code. You can ./run it just fine.

I consider how it arranges that code (the "image" you speak of) to be merely an implementation detail that the user is not concerned with.


> tree shaking

The technique of 'tree shaking' was originally developed for lisp.


And? When I last checked, SBCL didn't support it. Are you implying that has changed?

Neither did SBCL support cross-compiling. To the best of my knowledge, all you can do with SBCL is save an image.

When I was looking into this, I read through many discussions where the developers appeared hostile to the idea of implementing these features. A common refrain I encountered was that tree shaking is unnecessary because the core images are compressed.


You can delete data from the image if you want. We used to do it: scrub all doc strings, inlining data, etc. It's trivial.
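
For instance, a sketch of docstring scrubbing before saving the image (standard CL, no SBCL extensions needed):

  ;; Remove every function docstring from the running image.
  (do-all-symbols (s)
    (when (fboundp s)
      (setf (documentation s 'function) nil)))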

In 2022, removing a few megabytes of data doesn't seem worth the maintenance cost of the feature. In other words, for most users, it's a non-problem.


Tree shaking isn't data removal, it's dead code removal.

And presuming he doesn't really want what he says he wants is kind of in bad taste. You don't know his requirements.


As far as I'm concerned, as a Lisp programmer, code is data. :)

Yes, and what exactly constitutes "dead code"? Tree shakers rely on heuristics that can fail in practice. There is provably no way to remove functions from a Lisp image and guarantee your application still works. LispWorks has a "tree shaker", but it has umpteen options to tame it so your application doesn't inadvertently break.

Fortunately, it is possible to remove these functions if you, an application developer, so please. Nothing stops you. It's not even difficult. But SBCL has not implemented these (potentially fallible) heuristics for you, because (and only because) an insignificant number of users would find shaving a little binary size off to be valuable at the expense of program correctness.


In 2022, if you're linked to libc.so, it's probably substantially bigger than your Lisp image, and so that's where you'd want the tree shaking. Good luck!

The kernel supports manual tree shaking: "make menuconfig" and prune away.


SBCL supports saving compressed images so even without tree shaking, standalone executables are small. Not a problem.

I pay for a maintenance contract and licensing for LispWorks that does support tree shaking.


I wonder if there isn't a generally distributed congenital allergy to parentheses.


I suppose it depends on what you mean by "static types" since you can optionally specify types in Common Lisp. SBCL even does precise type checking[1]:

"If a variable is declared to be (integer 3 17) then its value must always be an integer between 3 and 17. If multiple type declarations apply to a single variable, then all the declarations must be correct; it is as though all the types were intersected producing a single and type specifier."

[1] http://www.sbcl.org/manual/#Handling-of-Types
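
A small sketch of what that looks like in practice (the function name is illustrative):

  (defun scale (x)
    (declare (type (integer 3 17) x))
    (* 2 x))

  ;; Under SBCL's default safety settings the declaration is checked:
  ;; (scale 20) signals a TYPE-ERROR, and compiling a call site with a
  ;; constant out-of-range argument produces a compile-time warning.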


I would like to add that optimizations may also be performed in addition to type checking (type checking, by the way, depends on the safety optimization level). What's also handy is that you can specify the optimization levels (space, safety, speed, debug) per function or per file (or in a `locally' block).

Function return types may also be specified; with Serapeum's `->' it can be as simple as (-> f (fixnum fixnum) boolean), placed right above the function (see the sketch below).

Classes and structs let you specify types for slots.

Expressions may be typed with the `the' special operator, e.g. (the fixnum (first list)).

P.S. CL has gradual typing.
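
A sketch combining these pieces (assuming Serapeum is loaded; fixnum-add is an invented name). Serapeum's `->' declaims the function's ftype, covering argument and return types:

  (serapeum:-> fixnum-add (fixnum fixnum) fixnum)
  (defun fixnum-add (a b)
    ;; Per-function optimization settings; SAFETY 1 keeps runtime checks on.
    (declare (optimize (speed 3) (safety 1) (debug 0))
             (type fixnum a b))
    (the fixnum (+ a b)))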


Dang. I didn’t know sbcl did this. I’ve never used it, but now I want to try. Thanks!


Type systems at least as rich as Hindley-Milner are pretty much table stakes in 2022. Common Lisp's type system is not as rich as Hindley-Milner. I believe it doesn't even support user-defined parametric types (so it's kind of like pre-generics Go in that regard). It lets you type function args, but it doesn't incorporate the args' types into the function type.

So no, Common Lisp's type system, even with static type checking enabled, is not adequate for software engineering in the present day. Coalton, however, solves many of these problems.


What languages in common use in 2022 support a Hindley-Milner type system? This sounds a bit like pointless academic gatekeeping…


Basically, any statically typed language with parametric polymorphism, so Ada, C++, Java, C#, Rust, TypeScript, Python with static-typing support, post-Delphi Pascal, and recently Go all qualify.


Parametric polymorphism != Hindley-Milner, not even basically. H-M is a construction for describing and ascertaining types of terms in a lambda calculus. Many of those languages don't even idiomatically achieve polymorphism through generic types (though of course it's possible), but rather through subtype and ad hoc polymorphism. I don't think it's accurate to imply their designs orbit H-M.

I don't want to belabor the technicalities. Your original point was about "table stakes" for type systems in order to be relevant for modern-day software engineering. While I agree a language that attempts to incorporate fashionable computational constructs (like variable binding and lambda terms) ought to derive its type system from something H-M-esque (if only because it's so well understood), it's certainly not a requirement for the language to be taken seriously, and such systems are certainly not without significant peril in the presence of mutation constructs. I honestly think C, vanilla Common Lisp, and pre-generics Go are sufficiently disparate examples of that.


Type inference is hinted at in a 1934 paper by Haskell Curry on "functionality" (i.e. types) for combinatory logic. Max Newman (Turing's mentor and leader of the Colossus project at Bletchley Park) published a paper in 1943 giving the first algorithm for type inference for the lambda calculus that I know of.


Comparing Go generics to those of the other languages is putting on some very rose-colored glasses. The primary design goal of the current generics implementation appears to have been discouraging their use. I hope they'll iterate on it, but at this point they're basically painful and annoying at best.


Table stakes for what? Fucking around with types for 6 months while someone builds a competitor in a weekend?


You might want to look into Coalton, a statically typed, functional language on top of Common Lisp: https://github.com/coalton-lang/coalton

That said, Common Lisp, while dynamically typed, is also strongly typed: many type errors are caught at runtime. You can also declare types and get some limited form of static type checking in SBCL.


I've worked with several teams, 10+ devs combined, on a few large codebases in Clojure.

It works just fine. Very often, the context of your changes is local to a handful of functions, most of them pure and simple to test.

I'm actually surprised how well it works compared to Java, for example, where long-distance dependencies can bite you in surprising ways, even with refactoring tools.

Clojure does have types, and you can add compile-time type-checking as an option, but it often just gets in the way.

I don't miss types for modelling a domain at all. Maps/sets/vectors/lists are sufficient. Granted: your names had better be good, and you'd better use something like spec/malli etc. on the boundaries of your app/module for safety.

If bugs happen, it's typically due to nil-punning and bad tests. The nice thing: those are easily fixed. There's nothing complicated in Clojure.

What I can't go back to: the compile, run, fail loop (vs. read, eval, print, loop). The REPL allows you to live inside your app, and it is just beautiful.


There was a thread about the Franz Lisp founder where he explains that the DARPA DART project was done in C, about 30M LoC, and failed hard, while the Lisp implementation ran nicely and was easy to fix in hours, not days.

I'm not an experienced Lisper, but it's not the first time I've read this kind of report.


That sounds interesting! Google doesn't find any mention of that, only https://en.wikipedia.org/wiki/Dynamic_Analysis_and_Replannin... that says DART, far from failing hard, was a huge success. Do you have a link to the thread?


here https://news.ycombinator.com/item?id=31044510

BTW, I too couldn't find traces of the Lisp side of this project after a 15-minute search. I was curious about the fix times and the Lisp codebase size.


Thanks! Right, so the successful version was the one written in Lisp. Okay, that's actually understandable. At that time, managed languages were not widely used, and C is really not a good choice for that kind of project. If those are the two options, I can certainly see how Lisp would do better.


I built a low-latency financial application in SBCL using genetic programming. It runs about 75k lines.

I have a bunch of small projects totaling about 70k lines. These include:

  * Simple library for application logging. Provides time-stamp options, file-naming options, and others.
  * Pull apart an Excel spreadsheet to generate an exception report.
  * Download images from Montana roadside cameras (15-minute cycle) and Glacier Park cameras (5-minute cycle). Includes crude detection of stars for dark images.
  * Convert RTF format to text to enable reasonable text-based search. Used in scanning all household documents.
  * Parse Burp logs to prepare for statistical analysis.
  * Configurations as s-expressions.
  * Parse and report ham radio logs from trlog, wsjtx, and others.
  * Create makefiles for LaTeX projects.
  * Command launcher to simplify working with many emacs projects, starting x-terminals for other systems, monitoring application status, and tracking NTP sync of multiple systems.
  * Very small program to bump the version number as part of Lisp program builds.
  * Generate invoices from journal entries produced by https://github.com/wglb/for-hire-tools
  * Generate a full penetration-test report in LaTeX from findings, glossary entries, and vulnerability counts.
  * Simple API utils package used for web penetration tests.
  * Pinboard API access used to generate pages for https://ciexinc.com/blog/solarwinds-articles/
  * Library to read and dissect pcap files. (Unfinished)
And countless tiny replacements for shell scripts.


Out of curiosity, what sorts of processing were you doing for burp and penetration test data? I'm in security, and reporting on tests is always something of a pain point.


So the Burp log analysis gathered certain statistics about cookies and URL segments. Nowadays there might be ways to do that within Burp. It could also look for secrets in bodies.

The report one combines all the elements of a pen-test report, including descriptive articles (e.g., what are CSRF, clickjacking, secure headers, and the like) and one page for each vulnerability. These pages include severity, impact, CVE and CWE links, and whether the vuln is closed, open, or risk-accepted. A tally of vulns by severity is included. At the end are a glossary of terms and an index; a table of contents, of course, is at the beginning.

One improvement would be a pie chart of vulnerabilities.

The Lisp program generates LaTeX and references the article document class. It vastly simplified generating the report as the test went along: the document was one unit, and no paring was required.
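
A hypothetical sketch of that style of generation (the function name and fields are invented for illustration):

  ;; Emit one vulnerability page as LaTeX.
  (defun emit-vuln-page (stream title severity status)
    (format stream "\\section{~a}~%" title)
    (format stream "\\textbf{Severity:} ~a \\quad \\textbf{Status:} ~a~%~%"
            severity status))

  ;; Example: (emit-vuln-page *standard-output* "Reflected XSS" "High" "Open")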


Very cool, thanks for the info!


Some of these will end up on GitHub, but it will take a little time.


Nice. I suspect I'll mostly stick to taking inspiration, rather than code. I am looking for a solution for an eventual replacement for our team's reporting tool, but would be more likely to simply buy something since writing a replacement isn't my expected role. ;)


One advantage of lisps is that you work at run time. So, you can access runtime information in your editor, which makes up for some of the limitations of dynamic types: static types propagate information ahead of time that a Lisp can discover through introspection.
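
For instance, in Common Lisp the live image can be interrogated directly from the REPL (a small sketch using only standard functions):

  (apropos "STRING-TRIM")           ; find symbols whose names match
  (describe #'string-trim)          ; inspect the live function object
  (documentation 'mapcar 'function) ; look up a docstring
  (inspect *package*)               ; interactively walk any live object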


Have you tried Typed Racket (https://docs.racket-lang.org/ts-guide/quick.html)? You can add static types, and the compiler will complain if there is a mismatch. (The types are also used by the compiler to optimize the code, so compilation is slower than for (untyped) Racket, but the final code may be faster.)


If you use Typed Racket, you get static typing checked at compile time ^1

1. https://docs.racket-lang.org/ts-guide/quick.html


There are a number of companies with hundreds of Clojure developers, and I know of several Clojure companies with codebases in the hundreds of thousands of LOC. Most larger projects I know of use ancillary systems to validate map attributes via spec, Schema, Malli, etc.

Statically typed libs and programs still seem to have issue trackers full of bugs, and the issue trackers for dynamically typed programs are not (in my experience) dominated by type-related problems. That is, static types don't actually help you as much as you might first assume at writing correct programs.


This is purely my speculation, but I think it is OOP and dynamic types together that are the toxic mix.

Lisp codebases tend not to do it as much (CLOS notwithstanding). Data tends to be flatter and simpler. Data shape is in theory just as hard to refactor properly, but IME Lisp projects just do a lot less of that.

Of course there are also alternatives like gradual types or libs like spec in Clojure.


I haven't. But my hunch is it works best for rapid prototyping with small teams.



