The Tragedy of the Common Lisp, Or, Why Large Languages Explode (esdiscuss.org)
144 points by error54 on June 18, 2015 | 134 comments



One cool thing about Lisp is that you can easily embed new languages in it, and those languages can be small and beautiful. For example, I have a Python-esque FOR macro that uses an iterator protocol, and a universal binding macro that subsumes all of Common Lisp's binding constructs (LET, LET*, LABELS, FLET, MULTIPLE-VALUE-BIND, etc.) So for me, Common Lisp has actually shrunk without losing any functionality. This is not possible in languages without macros. Such languages are indeed doomed to either grow forever, or change in non-backwards-compatible ways (e.g. Python3). But with macros you can shrink a language as well as grow it. This is one of the reasons Common Lisp continues to thrive.
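
Roughly, usage looks like this (a sketch from memory; the clause syntax may differ from what is actually in the library):

  ;; One FOR macro, driven by an iterator protocol, standing in
  ;; for DOLIST, DOTIMES, MAPHASH and friends:
  (for x in '(1 2 3) collect (* x x))   ; => (1 4 9)
  (for c in "abc" do (princ c))         ; strings iterate the same way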

[UPDATE]: You can find my code here: https://github.com/rongarret/ergolib

Also, I forgot to mention another language-shrinker included in that library: REF. REF is a universal de-referencer that subsumes NTH, ELT, SLOT-VALUE, GETHASH and probably a few other things that I can't remember right now. It also lets you build abstract associative maps (a.k.a. dictionaries) with interchangeable implementations (see the DICTIONARY module), which lets you get rid of ASSOC and GETF.
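
Again, a rough sketch of the idea (see the repo for the real interface):

  ;; One accessor subsuming NTH, ELT, GETHASH, SLOT-VALUE, ...
  (ref '(a b c) 1)          ; like NTH/ELT    => B
  (ref #(10 20 30) 0)       ; like ELT/AREF   => 10
  (ref table :key)          ; like GETHASH, TABLE being a hash table
  (ref obj 'slot-name)      ; like SLOT-VALUE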


I don't know Lisp, so correct me if I am wrong. The whole idea of embedding your own language sounds pretty much the same as growing the language, except that you are doing it yourself, in a non-standard way.


If you build one new feature that subsumes N>1 old features then you're effectively shrinking the language because you never need to use those N old features any more. You can then support legacy code with macros that compile the old features into the new feature. At that point you've actually shrunk the language. You can even have those legacy support macros give compile-time warnings that the old, now-deprecated feature is being used and that the code ought to be changed. In fact, you can even easily write tools that will do this translation automatically. So yes, you really can shrink the language.
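
The generic legacy-support wrapper takes only a few lines of portable CL. A minimal sketch, assuming the new, more general macro already exists under some name:

  (defmacro define-deprecated-macro (old new)
    "Make OLD expand exactly like NEW, but WARN whenever a use
  of OLD is macroexpanded (i.e., at compile time)."
    `(defmacro ,old (&rest args)
       (warn "~S is deprecated; use ~S instead." ',old ',new)
       `(,',new ,@args)))

  ;; If, say, a new MY-BIND subsumes the LET-like forms in a code
  ;; base (both names hypothetical):
  (define-deprecated-macro legacy-let my-bind)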


But even then, you have only shrunk the language for you - unless you can get everyone else to use your new feature. But if several such things are proposed, typically none of them gains traction, and so they all stay as just individual solutions.

Which is still fine - for the individuals who write them. But the language, as generally used, does not shrink.


Obviously anyone who chooses not to use a particular macro will not reap the benefits that macro provides. That doesn't change the fact that it's possible to write language-shrinking macros in Common Lisp, and so it's ironic to choose Common Lisp as the poster child for how languages inevitably grow without bound once they reach a certain level of complexity.


Indeed. Lisp is a poster child for how you can have one standard put out in 1994, and everything is reasonably fine 21 years later.

I really respect that about Common Lisp: the committee had the good sense (lacking in C and C++ and their ilk) of going their separate ways when they got the job done.

Good sense in a way that is related to good taste, I should add.


The way Common Lisp is generally used is by writing macros that shrink the language at higher levels. It's used that way because it can be. Whether one considers the lack of such capability in other languages a handicap or a feature doesn't change that it is common Common Lisp practice. There aren't Common Lisp shops that don't use macros, since Common Lisp includes standard macros.


>unless you can get everyone else to use your new feature.

When you write functions or classes you are extending the language in a similar way, and everyone else has to use that extension. You are just not used to macros, and your opinion is based only on your habits.


That sounds cool, but I have a deep-seated conviction that programming language design and implementation should be done by someone smarter than me.


Yet you're designing your language whenever you write a function.


I am ashamed to say that I only had this incredibly important insight very recently, after 20 years or so of programming.

I think it is one of those leveling up moments - a thing that once you know allows you to write better code at all times.


There are many articles on Lisp that explain that in Lisp you can try out many things in a few evenings and find a suitable solution. This way, you don't have to be smart enough to produce an ideal, general design in one go; you will find it through many iterations. And this is what makes Lisp different: these iterations are very cheap.


Like lisper :)


I completely agree. That's the reason I usually use libraries rather than roll my own for many tasks.


It is very easy, and DSLs do boost productivity dramatically, so not designing languages is a form of self-harm.


The 'standard' thing is both a gift and an issue. You're free to fulfill your needs, but so is everyone else. The community needs to be mature and sensitive, sharing good ideas, not building silos.


Both rigidly defined and flexibly defined languages appear to have their issues. I'm not much of a historian of Lisp but my basic understanding is that it's great for lone developers, but a pain for larger teams, and the flexibility of the language is the reason behind both of these.

In the Go language the tool gofmt formats the source code to certain standards. I haven't heard a single developer complain about gofmt, developers seem to appreciate that it takes away the bikeshedding over style, there's one established way to lay out your Go code and that's to use gofmt.

Now if bikeshedding can happen over something as trivial as code layout, imagine what's going to happen if you make it trivial to change the language. Someone didn't design a regex library the way you like it? Code your own. So will everyone else. You end up with thousands of developers reinventing the wheel, all building their own versions of the same thing, that aren't necessarily compatible.

CL now has its own package manager (Quicklisp). I'd be curious to know if this has had an impact on fragmentation; that's CL's best bet for reestablishing itself as a language that's growing in use.


> a pain for larger teams

I'm calling out this nonsense; "larger teams" of Common Lisp programmers are a myth. If such a thing happens, they will make it a priority to get along so that their good luck persists as long as possible. :) :)

> there's one established way to lay out your Go code and that's to use gofmt.

There is basically one established way to lay out Lisp, and even Vim's Lisp mode gets it about 95% right. (The main issue is not quite having the correct vocabulary, out of the box, of what symbols denote forms that have a body requiring sub-form indentation rather than list element alignment.)

  (operator (with maybe some args)
    (and a)
    ;; here is a header comment
    (body of forms)  ; margin comment
    (indented by one level such as two spaces))

  (function call with many arguments
            split across several
            lines lined up with the first argument)

  '(quoted list
    split across lines
    lining up with first element
    (oh here comes a nested
     list))

  ;;;
  ;;; three semi's for block comment
  ;;;

  #(vector literal
    follows quoted list)

The above represents the majority of what you need for writing nice looking Lisp.


You've missed my point. I used gofmt as an example of a tool that exists because programmers like to do things multiple ways, even when it can be beneficial to do things in a standardised way.

The formatting of Lisp is not what I was referring to, it was the flexibility of the language. Lisp encourages writing DSLs. DSLs by their nature are opinionated. The only way to stop the language going in lots of directions at once is to have a canonical implementation of supporting features to work against. My point was that without a package manager, this focus was not as strong as it should be, but there's a hope that with standardisation on packages the language would grow with a sense of direction.

To use another example, look at how many implementations of CL and Scheme there are, why does all this fragmentation exist? What barriers are in place for coders working on extending a smaller subset of implementations?

http://www.cliki.net/Common+Lisp+implementation

http://community.schemewiki.org/?scheme-faq-standards#implem...

For what it's worth, I'm not saying this is a Lisp-only trait (for another example in the computing world, just look at how many Linux distros exist), but the simplicity and flexibility of Lisp does lend itself to fragmentation. Whether this is a good thing or not is a matter of debate.

Out of interest, what impact have you noted since Quicklisp began being used?


> To use another example, look at how many implementations of CL and Scheme there are, why does all this fragmentation exist?

For Common Lisp the reasons are: competition, different licenses, create proprietary property to sell, different runtime implementation strategies, different compilation strategies.

Basically the same reason why there are several different Java implementations.

Common Lisp was created because it was clear that there was fragmentation of Lisp, but that there shouldn't be any for the basic language. Originally the plan was also to standardize on some libraries, but the whole thing ran out of steam. But you can read the Common Lisp standardization group's discussions on various topics.


Back to Quicklisp: should or could there be an effort to finish the work related to libraries?


Quicklisp discourages people from re-inventing the wheel because it makes the pre-existing wheels trivial to access. No one in their right mind would try to reinvent regexps in Common Lisp (except perhaps as an exercise) because CL-PPCRE is just so fucking awesome. There's a CL library for just about everything nowadays, and most of them are very high quality.
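
For the unfamiliar, using it looks roughly like this (loaded via Quicklisp; return values quoted from memory):

  (ql:quickload :cl-ppcre)

  (cl-ppcre:scan-to-strings "(\\d{3})-(\\d{4})" "call 555-1234")
  ;; => "555-1234", #("555" "1234")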


That's what I meant about being mature. Lisp literature spends a lot of time explaining when to use macros and when not to. Don't be silly and write something that won't be a game changer just because you want regex literals expressed like #r#.... instead of (rx ...). I'm pretty positive any seasoned lisper can sense when abstractions must be added.

Btw, large teams may exist because tools don't solve the problem easily.

ps: It would be interesting to ask the CL community about Quicklisp social effects.


I don't think it's much different than the library situation with most other programming languages.

The one big difference is that Lisp libraries can add new control structures to the language in ways that most languages' libraries can't.

For example, in Python, I can fetch a website using the stdlib's http.client, or I can use the more convenient functions from the Requests library.

In Lisp, I can iterate and loop using the built-in "loop" construct, or I can use the more convenient control structures defined in the Iterate library.
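
For instance (the second form assumes the Quicklisp-loadable iterate library, shown package-qualified for self-containment):

  ;; Built-in LOOP:
  (loop for x in '(1 2 3 4) when (oddp x) collect (* x x))
  ;; => (1 9)

  ;; The same loop with the Iterate library's control structure:
  (ql:quickload :iterate)
  (iterate:iter (iterate:for x in '(1 2 3 4))
                (when (oddp x) (iterate:collect (* x x))))
  ;; => (1 9)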


The standard way to embed your own language is to use a Lisp to begin with.

Any Lisp language is effectively designed to provide standard tools and mechanisms for adding new language constructs so that you can write your own language without having to write either a full implementation from the tokenizer to the interpreter or use non-standard glue to hook an existing scripting language into a compatible C runtime.


The key idea is to grow layers of your own DSLs embedded in a Lisp. It is a bottom-up process of growing layers of abstraction, similar to building a house (but, paradoxically, without blueprints), which some people call 'exploratory programming'.

PG, in his "On Lisp" book, explained this process in detail. In some sense, those who never read it cannot claim to be a Lisp programmer.

So, you are not extending the language; you are growing several new ones (layers) according to the domain at hand.


"In some sense, those who never read it cannot claim to be a Lisp programmer."

Yes, it's well known that - in some sense - there were no Lisp programmers until 1993.


This is a good one!) The insight was there for a long time - DSLs were popularized in SICP and AMOP, and CL's looping expressions and SETF/GETF are the most famous - but PG is good at concise writing and he walked his talk with the Arc language.)


Don't get me wrong, I liked On Lisp a lot. I just think it's important to recognize that much of it is a distillation of pre-existing understandings and traditions.


Not necessarily. If you do not allow a fallback to your host language from your DSL, it's possible to narrow it any way you like.


Can you disable a part of language reliably? For example, is there a way to ban `setf` in a package/file?

If not, what's the point of pretending that a part of the language does not exist if anyone can break this status quo anywhere?


Sure, just create a new package and don't import CL:SETF into it.
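
A minimal sketch:

  (defpackage :shrunken
    (:use)                                    ; inherit nothing
    (:import-from :cl #:defun #:let #:lambda #:quote
                      #:car #:cdr #:cons #:+))

  ;; In SHRUNKEN, (setf ...) now names an undefined SHRUNKEN::SETF,
  ;; so the feature is simply absent.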


This allows you to shrink the language syntactically, but not semantically. For instance, you can make it so SETF isn't visible from your own package, but you can still call out to functions in other packages that will modify lists, so it's not suitable as a way to reduce the conceptual overhead of "things you have to worry about when using lists".
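
For example, even with CL:SETF unimported, any package that can reach CL:RPLACA can still mutate conses:

  (let ((cell (cons 1 2)))
    (rplaca cell 99)   ; mutates without any SETF in sight
    cell)              ; => (99 . 2)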


If you really want to go hard-core you can unintern all the symbols in the common-lisp package.


In Haskell, there are language pragmas that enable extra features for a given module, but sometimes can also remove them (completely forbidding unsafe functions, disabling record syntax, and the like).


Haskell is really the kind of exploding language being discussed in the article though. While it has simple toggle switches for the end user, each of the underlying "extensions" is really a large modification to the compiler. There's still a monolithic parser, and authors of language extensions need to take into account all the other extensions to make sure they play nicely together. It's like they've heard of the open/closed principle but don't know how to apply it.

Specific extensions (like Safe/Trusted), or disabling record syntax etc are demonstrative of the problem - they're quite specific and implemented as part of the compiler because the language itself does not have the means of selectively enabling/disabling features via runtime code.

Kernel[1] has a much more interesting model based on first-class environments. One can selectively expose parts of the language to any execution context by creating a first-class environment containing only the required bindings, then execute some code in that environment. It provides a function ($remote-eval (code-to-exec) environment) for this purpose.

To give a very trivial example of this, let's say I want to expose only the numeric operators +, -, etc. to some "safe calculator" context. I can simply bind these symbols to their kernel-standard-environment ones and be done.

      ($define! calc-env ($bindings->environment (+ +) (- -) (* *) ...))

      ($remote-eval (+ (* 2 3) 1) calc-env)

Trying to do something like unsafe "read-file" in place of (+ (* 2 3) 1) will result in a runtime error because the binding read-file doesn't exist in that environment.

There's much more interesting stuff to be found in Kernel if you like the "small core" based approach to computing. Kernel is more schemy than scheme. Compiler hacks like "macros" are an example of something you wouldn't want lying around in your exploding language when you can implement them trivially with first class operatives. And why would you want quote implemented as a language primitive? Yuck!

[1]: http://web.cs.wpi.edu/~jshutt/kernel.html


> So for me, Common Lisp has actually shrunk without losing any functionality. This is not possible in languages without macros. Such languages are indeed doomed to either grow forever, or change in non-backwards-compatible ways (e.g. Python3). But with macros you can shrink a language as well as grow it.

Not true. If a language has adequate general constructs then you can replace language features with ordinary code written in the language. E.g. in Scala no-one uses "return" any more, because you can get the same functionality (in a better / more consistent way) using library types - just ordinary classes with ordinary methods, no macros needed.


Ok. Replace an optimising static BNF compiler macro with "classes" and "methods".


A lot of things that seem like they would need compiler support actually don't. Look at Spire.


With macros you've got compiler support for anything you can imagine. And with macros you implement your DSLs as compilers, not interpreters, which is a huge advantage: you've got static verification, you've got any performance you like, and compilers are much, much easier to implement than interpreters. So what's the point in doing the wrong, slow and error-prone thing instead of the fast, easy and robust one?

If you're talking about this Spire ( https://github.com/non/spire ) then it's using generics (i.e., poor man's macros) and some real macros too, apparently.


> With macros you've got a compiler support for anything you can imagine.

Most of what you can imagine isn't useful or maintainable. Good tools should have more structure to guide you.

> And with macros you implement your DSLs as compilers, not interpreters

I'm not suggesting interpreters. If anything I'd say that macros - running arbitrary code at compile time - are more interpreter-like than what I'm describing.

> it's using generics (i.e., poor man macros)

Well if you're going to define every useful language feature as "macros" then of course you need macros to implement anything! But most of us consider generics to be different from macros.


> Good tools should have more structure to guide you.

It's not possible to have more solid and strict structure than with macro-based DSLs.

> I'm not suggesting interpreters.

You do. Either macros or interpreters. There is no other way.

> are more interpreterlike than what I'm describing.

You did not describe your approach to the problem yet. How would you implement an optimising BNF-based eDSL without macros? Spire and similar things are a totally different topic.

> But most of us consider generics to be different from macros.

Most people have absolutely no idea what metaprogramming is and how to implement DSLs properly. I would not refer to an opinion of a crowd.


> You did not describe your approach to a problem yet. How would you implement an optimising BNF-based eDSL without macros?

Let's get more concrete. What does the business requirement look like? Do you mean "a language that happens to be expressed in BNF", or are you asking for a DSL for expressing languages which itself looks like BNF?


Let's imagine you want to embed parsers into your language. Choose any parsing algorithm you like, but parsers must be defined in a BNF-like syntax.

You can go the slow and buggy way (Parsec and the like), or you can compile your embedded BNF into a nice, verified, optimised implementation.
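
To make the shape of the macro route concrete, here is a deliberately tiny toy (not real BNF and not optimising, and every name is invented), showing rules being compiled into plain functions at macroexpansion time rather than interpreted:

  ;; Grammar elements: a character matches itself, a symbol names
  ;; another rule, (or ...) tries alternatives. A rule compiles to
  ;; an ordinary function: (string, pos) -> new pos, or NIL.
  (eval-when (:compile-toplevel :load-toplevel :execute)
    (defun compile-element (e s pos)
      (etypecase e
        (character `(and (< ,pos (length ,s))
                         (char= (char ,s ,pos) ,e)
                         (1+ ,pos)))
        (symbol `(,e ,s ,pos))
        (cons   ; assumed to be (or alt1 alt2 ...)
         `(or ,@(mapcar (lambda (alt) (compile-element alt s pos))
                        (rest e))))))
    (defun compile-seq (elements s pos)
      (if (null elements)
          pos
          `(let ((next ,(compile-element (first elements) s pos)))
             (and next ,(compile-seq (rest elements) s 'next))))))

  (defmacro defrule (name &rest elements)
    `(defun ,name (s pos) ,(compile-seq elements 's 'pos)))

  (defrule digit (or #\0 #\1))       ; digit ::= "0" | "1"
  (defrule two-digits digit digit)   ; two-digits ::= digit digit

  (two-digits "10" 0)   ; => 2 (consumed both characters)
  (two-digits "1x" 0)   ; => NIL

A real version would add repetition, semantic actions, grammar analysis and so on, but the point is that all the work happens at expansion time, where static checks and optimisation passes can hook in.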


> Let's imagine you want to embed parsers into your language. Choose any parsing algorithm you like, but parsers must be defined in a BNF-like syntax.

Doesn't sound hard to do in (macroless) scala. Just create objects and methods with appropriate names - and for the optimization part just ensure everything is lazy and preserves the structure so you have the AST available in the language and can do your optimizations at that level (which doesn't have to mean interpreting - we can use the type system to perform these computations at compile time[1]). The syntax will probably end up being slightly differently punctuated from actual BNF, which is a tradeoff for having syntax that follows the ordinary rules of the language and is accessible to e.g. an IDE.

I can agree that languages need to be able to perform complex transformations at compile time. But this doesn't have to be exactly the same kind as the compiler does itself, and as long as the language provides a sufficiently lightweight way of constructing an AST "in" the language, I think it's worthwhile making an explicit distinction between such ASTs and the AST of the language itself.

[1] I can imagine you objecting that this is just a macro system by another name, but it isn't (except in the trivial sense of turing equivalence). It has a different grain: it's more natural to create companion trees that mirror the structure of the AST exactly, and less natural to transform the AST by moving nodes around. And any such companions are explicitly distinct from the "original" tree, and the structure naturally lets you see both.


No, it's not easy; it's a barely usable hack. Coding anything at the type-system level is like coding in Brainfuck or Unlambda, while with macros I can use whatever fancy DSLs I already have implemented for nice, declarative compiler construction.

Can you stop in the middle of your translation from BNF to low-level code and dump a bunch of nice .dot files plus TeX documentation for the grammar? No. Your type system cannot do it, and you certainly do not want to do it at runtime; it's something to be done exclusively at compile time.


I think we definitely found the Common Lisper.


> in Scala no-one uses "return" any more

Huh??? Does Scala even have a "return" statement? I can't find it in the docs.


There is: http://scala-lang.org/files/archive/spec/2.11/06-expressions...

The general consensus is: don't use it.


Ah, there it is. OK, so I'm still confused.

> you can get the same functionality (in a better / more consistent way) using library types

RETURN is a control construct. How do you emulate a control construct using types?


In a language with first-class functions, control constructs can be replaced by polymorphic methods - see Smalltalk for the ultimate version of this. In Scala the idiomatic way to do "short-circuit the rest of this in some cases" is usually for/yield with a type like scalaz's \/.


I've worked in Clojure for years and have no clue what you're even talking about


Do you have the code for those macros somewhere online?



Thanks for sharing.


Thank you


There is a great book about Lisp by Christian Queinnec titled "Lisp in Small Pieces", or LiSP. Here is a little excerpt from it.

"There are subjects treated here that can be appreciated only if you make an effort proportional to their innate difficulty. To harken back to something like the language of courtly love in medieval France, there are certain objects of our affection that reveal their beauty and charm only when we make a chivalrous but determined assault on their defenses; they remain impregnable if we don't lay siege to the fortress of their inherent complexity. In that respect, the study of programming languages is a discipline that demands the mastery of tools, such as the lambda calculus and denotational semantics. While the design of this book will gradually take you from one topic to another in an orderly and logical way, it can't eliminate all effort on your part."

Lisp has some of the best literature of any programming language. Anybody who really cares about the craft of programming should make use of the wisdom therein.


Seeing this comment on HN, a site made by PG, a Lisp book author, is amusing. I have to admit that while I don't do Lisp day-to-day, probably my favorite Lisp book(s) were written by him. The practical hacker in me prefers Python, Java and C.

But the elegant hacker in me? Prefers Lisp. And PG captured that in his writing.


IDK where this "elegant but impractical" trade-off idea comes from. CL is a practical language. Consider LOOP, FORMAT, multiple values, or the standard method combination in CL.

Meanwhile, the only practical thing about Python is that it has more libraries. Never mind the fact that Python's scoping is misdesigned even in Python 3!


It was painful watching Common Lisp happen; too many competing interests, and commercial stakes. Unfortunately languages went a different direction. (I also mourn Smalltalk's "loss", but Java had much more money coming into it.)

Lisp50 went into this somewhat (http://www.nhplace.com/kent/Papers/cl-untold-story.html) and, unrelated, was a freakin' awesome good time. I sat next to Guy Steele for one talk but was too in awe to even say anything.

(Anecdote about same: I IMed a friend and said "I'm sitting next to Guy Steele" and he replied "Cool, ask him who Guy Steele is." Damn kids.)


I really did not enjoy Lisp50. I was surprised to discover such a disconnect between the old-school and new-school Lispers. Guy Steele and co really didn't seem to have any interest at all in what people have done with Common Lisp these past 20 years or so. That is a pity because I had always really valued the perceived continuity of the Lisp community.


I think Clojure's reception was great--the oldies, overall, were very positive.


Agreed. That was classy. But that is also a sign of them having no interest in the modern Common Lisp community :).


I read it differently; I think they thought it was an interesting direction, but only that, another direction. The oldbies are pretty CL-oriented, although obviously many of them have moved on (e.g., Fortress for GLS).


Though the 1994 ANSI standard is 1153 pages, Common Lisp somehow doesn't feel large. A lot of it is library pieces that can be understood more or less on their own and work independently. Somehow you can know the language well, without reading 1153 pages cover to cover. If you see anything in someone's code which is standard, but which you don't know well (or at all), it's not going to throw you a big curve ball.

C is a "small" language and is pushing 700 pages now.

Projects written using small languages tend to use lots of extensions. So do projects in larger languages, too; they use some subset of the core language, probably a small one, and then other libs which address problems not covered in the language at all.

How big is Perl? How much of CPAN should be included in that measurement? If the answer is "none", how realistic is that? Do you know Perl if you don't know any CPAN module?

How about Scheme? The base standard is small. But then there are SRFIs. It seems disingenuous not to count them. And then there are implementations and their environments and extensions, which projects depend on. What better represents "Scheme size"? The R6RS document, or some measure of the size of, say, Racket?


I think the post was more about syntax, not the standard library. Functions and methods are reasonably easy to look up; syntax much less so.

Also try googling for some unknown syntax.


> I think the post was more about syntax, not standard library

If that is so, it has no point.

The bulk of the 1153 pages of the Common Lisp standard is in fact describing a standard library, so if the definition of a "large language" is one that has a large core syntax, excluding the standard library, then Common Lisp is in fact a small language.

Most of the syntax of a typical Lisp dialect (Common Lisp included) takes the form of a standard library. If you see an unfamiliar syntax, it consists of a form with an unfamiliar symbol in the leftmost position:

   (unfamiliar-symbol ... stuff (you (do not)) understand)

You search your help resources for "unfamiliar-symbol".

The lexical syntax ("read syntax" in Lisp terms) is quite small. It consists of elements like what constitutes a symbol token, what numeric and other constants look like, and other such elements. Stuff like:

  #(this is a vector)
  #c(3.0 4.0) ;; complex number 3.0 + 4.0i
  `(quasi ,quote)
  '(quoted list)
  package::symbol


In the case of Java it is not that Java is a "large language", it is that to get anything useful done with Java you need to know about Maven and Spring and Log4J and Apache Commons Logging and SLF4J (because if you're using a lot of libraries surely all of those will be in use.)

That is, it is the complexity of the ecosystem, not of the language.


Yes, it's a problem I call the Java Jungle - and it's something that happens (and will happen) to every successful language, I believe. Therefore it's not enough to ditch it and start again - devs need to figure out how to manage complex communities and library/architecture options. Although I'm not entirely convinced Java is going to be the one that really figures it out.

It's probably Node or Go or Rust that will finally get it right (they already get it right tacitly acknowledging that it's okay to couple to linux, therefore it's okay to be native, and that the correct unit of deployment is the whole damn server image.)


You know if you're going to downvote, the least you can do is respond and say what you think is wrong. Otherwise, you're just being a jerk.


+1... We could have learned something from a response.


When I was learning Java recently in anticipation of a job programming Java - I was surprised by this reality. The core of Java is remarkably simple to learn - there's not really all that much to it.

The complexity is indeed in all of the libraries and build frameworks and well intentioned but silly HammerFactoryFactoryFactoryFactories.


I think the case can be made that Java was too simple. Its inability to express very much within itself is what led to the explosion of external tools to make it "better", or indeed, "work".

I think it was deliberately designed as a simple language to be used by large groups of people in simple ways, but it failed so epically at that goal, precisely because it was too simple, that it destroyed the entire idea of building a language deliberately for large corporate use. (Note that it has grown a lot since then; it had to.) Go's the first language I've seen since Java try for that niche. I've said it before: in the short term Go may be stealing from Python and Node, but in the long term, Java's the one that needs to be worried about Go.

Edit: Literally six minutes later, my feeds produce for me: http://www.businessinsider.com/google-go-update-from-jason-b...


> I think the case can be made that Java was too simple. It's inability to express very much within itself is what led to explosion of external tools to make it "better", or indeed, "work".

Yes, Java (IIRC) originally aspired to be a small, simple language with a few honest constructs that everyone would understand and use. Of course—speaking of CL!—this is Greenspun's Tenth Rule at work. To be fair, that's not necessarily to say that blowing the syntactic budget on a for construct is a good decision, and there are very good reasons to let language design take place in a marketplace of extensions rather than in a centrally-mandated core language. But if the idea is that mandating a simple language means the language as people use it will necessarily be simple and uniform, then no.

(Doing my crazy-man turn for a moment: this is just one manifestation of a much wider problem. The idea that pushing unavoidable but unwelcome complexity (or unreliability or untrustworthiness, or things like only-partial support for interfaces) in-band is equivalent to making it somehow go away is the great all-pervading madness that afflicts computing. "As simple as possible, but no simpler"...)


I've said Java's model of abstraction is both incomplete and insufficient; it's very frustrating.

It was definitely designed for the lowest common denominator.


> In the short term Go may be stealing from Python and Node, but in the long term

Go is a niche language. As a niche language it will perform well in its niche, but it will never be as big as Java or C#. Go's total lack of expressiveness makes it unfit for a wide range of applications.

Java is rigid, but I think version 8 makes it more enjoyable. But it will not make all the terrible Java core APIs and frameworks go away. They are still here.


Java was also a niche language. People never learn unfortunately.


Java, or, how "software engineering" can kill a decent enough language with incredible complexity.


I blame a lot of that on the type system. It's almost very nice, but in such a way that makes compensating for the "almost" very painful.


"software over-engineering"


I think this is a very fair way to characterize Java's problems. The syntax is annoying but generally lets you get tons of shit done in a reasonable manner. It lacks the features of a stronger type system like Haskell's or Scala's, but you can get pretty far.

The ecosystem on the other hand can be totally befuddling, maven, gradle, the dozen or so DI frameworks and the various codebases that seem to use all of them, choosing between Apache or Google Java libs (or both!), etc.

Scala on the other hand suffers from an explosion of language features that means you either get a ton of shit done because you love Scala or you get nothing done. I'm not sure which is better anymore since I work in both on a daily basis but it's a different tradeoff.


> The syntax is annoying

That's a matter of perspective, no? I'm not fond of symbols, so reading Ruby's super-terse code makes me choose watching a movie on Netflix over that.

> The ecosystem on the other hand can be totally befuddling

You mean slightly better than Python, which keeps re-inventing the (half) wheel? :D

Maven is used by the majority of projects, with Android projects as an exception because Google pushed hard for Gradle.

For DI frameworks: Spring is the majority winner with Guice/CDI in second place.

Apache vs Google Guava only because Guava came in late, and both are just nice small libraries (not frameworks). Older code within the codebase might have already used the Apache Commons lib and newer code within the _same_ codebase will more likely use Guava where it fits (I/O is an area where Apache has the better library).

We should also compare this situation with various Auth & Auth lib for Rails/NodeJS project :).

So shrug ... Java has been around longer; at most there are usually 2 competing libraries for a certain area, and the better one tends to win (again, it depends on your perspective what "better" means: some prefer Maven over Gradle).


> Scala on the other hand suffers from an explosion of language features

This isn't what I feel. Scala's feature set is small, but powerful. Some examples:

- Scala has no notion of static methods, unlike Java. Every reference or value is an object, every function call is a method call, with Scala's OOP being much, much closer to Smalltalk than any of the C++ inspired bastardizations tend to be

- Scala doesn't have special syntax for certain types, like Java's plus operator for Strings

- Scala's support for variance is much simpler and at the same time more powerful than Java's use-site variance by means of wildcards (I've met no Java developer that can tame Java's wildcards, I'm sure they are out there, I just haven't met them)

- speaking of variance, Scala's type-system has Null and Nothing and AnyVal and Unit; in Java you've got "void" as a special construct, in Java the primitives are special, in Java "null" has special and unexplained treatment, in Java Nothing is surely there somewhere in the implementation, but you can't use it ;-)

- Scala-async is a library, instead of a language feature like in C#

- Slick is a library, instead of a language feature like Linq in C#

- Scala's for comprehensions are much more general purpose than the foreach construct in Java, or than for comprehensions in Python, which means that Scala doesn't need new constructs for dealing with async stuff or with (god forbid) monads

- Scala's traits are much better and I might say easier to understand than Java 8's default interface methods

- Scala does not have side-effecting keywords such as break or continue

- Scala does not have the special indexing syntax of arrays

- Scala does not have special syntax for building arrays or maps, as the basic language is enough for doing that in an expressive way

- Scala does not have operator overloading, or operators for that matter; as in Scala the operators are just plain methods

And then indeed, we can talk about things like pattern matching or case classes, which in my opinion add tremendous value. But you know, static languages need features in order to be usable / expressive and cannot be minimal in the way that Scheme or Smalltalk are. For example people complain about implicit parameters, however implicit parameters happen anyway in any language (e.g. undocumented dependencies, singletons) and at the very least in Scala you can document those dependencies in the function's or the constructor's signature and have it statically type-checked and overridable. Plus implicit parameters allow one to work with type-classes and compared to Haskell, in Scala a type-class is just a plain interface and its implementation is just a value. And also the CanBuildFrom pattern is not a type-class and isn't possible in Haskell. So such a small feature such as implicit parameters yields tremendous power.

I could probably go on, just wanted to point out that Java's simplicity and at the same time Scala's complexity is entirely misleading. And also, I happened to introduce many rookies to Scala and by far the biggest hurdles are posed by exposure to new concepts or design patterns, brought by functional programming of course. Even explaining Future is problematic, a standard library thing that otherwise leaked into Java and many other languages as well.


I would agree with you, except there's no way to fit type erasure into that sentiment. Type erasure is a complex solution to an easy problem, done purely out of laziness and a broken sense of what "backwards compatible" should mean. The moment you try to do anything "interesting" with generics, you realize the sham that they are and start passing around `Class<T>`, which is exactly what you would have done before generics anyway.


Type erasure kills the language. Otherwise, it would have a bright future.


Isn't that the same with _any_ ecosystem?

Ruby => Rails (most of the time...)
Python => Django
NodeJS => ExpressJS

Ruby => RubyGems + Rake + Bundler
Python => (finally something ... static) pip
NodeJS => NPM
Browser JS => Bower

NodeJS tries to be as simple as possible but at the end of the day, you need to use/download/learn libraries with different quality/documentation level and different API-feel/code-style.


Java libraries tend to have "magic" that alters the language semantics. E.g. Spring's dependency injection breaks your reasoning about how an object is constructed. Hibernate breaks your reasoning about when object fields can change. Tapestry breaks your reasoning about basically everything. There's a difference between a library that follows the rules of the language and a framework that changes them.

(admittedly to a certain extent I've heard the same said of rails)


> Isn't that the same with _any_ ecosystem?

It is!


Except it isn't. Attributes of languages include their documentation syntax and extension API. Getting this right makes a remarkable difference for how easily a person can grok a new library or extension and make it useful. A good language has a common "language" overall, not just code syntax.


Which language(s) do you think do a particularly good job of documentation syntax and extension API?


That may once have been true, but with every release of Java the language and standard library grow. More features mean simplicity is lost, and Java is huge.


The language hasn't changed very much since inception. Java 8 was probably the biggest of the changes with Lambdas. But even so Java 8 looks a lot like Java 1 at the language level.

The standard library has always been bloated. Hopefully Java 9 and Project Jigsaw will break it up into smaller, manageable chunks. Really, a lot of it needs to be burned away for new stuff to grow.

The tough part for Java will be when they have to break backwards compatibility to move the language and VM forward. If not done carefully, they will have another Python 3 on their hands.


I think lisp could benefit from a small core and building out a standard library. You could pack all the features it needs (packaging, lexical/dynamic scoping (defvar), let/lambda, defun/defmacro, multiple values (via values, multiple-value-call), setf (w/ setf expansion), simple arithmetic, declare/declaim/proclaim, maybe a few more) into the core and have standard libraries: cl.bind (multiple-value-..., defparameter, etc), cl.math (sin, cos, etc), cl.clos, cl.collections (arrays, hash tables), cl.io, etc etc.

I think this would clean things up a lot, still preserve the spec (aside from documenting what's in which libs), and make things more approachable.

Shoving everything into the "common-lisp" package works but it's cumbersome and you have to have the entire language sitting there to use anything.
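
A sketch of how far the existing package system already gets you (the cl.* names here are just this proposal's invention, nothing standard):

  (defpackage :cl.core
    (:use)
    (:import-from :cl #:defun #:defmacro #:let #:lambda
                      #:setf #:quote #:function #:values)
    (:export #:defun #:defmacro #:let #:lambda
             #:setf #:quote #:function #:values))

  (defpackage :cl.math
    (:use)
    (:import-from :cl #:sin #:cos #:expt #:sqrt #:+ #:- #:* #:/)
    (:export #:sin #:cos #:expt #:sqrt #:+ #:- #:* #:/))

  ;; An application pulls in only the layers it wants:
  (defpackage :my-app
    (:use :cl.core :cl.math))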


I don't have the exact quote/source right here handy, but I believe that was Guy Steele's intention with Scheme.


He gave a brilliant talk about it: "Growing a Language"[1].

The idea is to make languages that grow—ones that provide a small, uniform core that can be extended by the user. Ideally, these extensions feel like first-class citizens: things added by users should feel on par with built-in language features.

It's still one of the best technical talks I've ever come across.

[1]: https://www.youtube.com/watch?v=_ahvzDzKdB0


He even humorously implied that a growing language shrinks, by replacing many specific constructs with a single general one.


https://en.wikipedia.org/wiki/Scheme_(programming_language)#...

This was a concept for R6RS (not sure what happened, apparently some controversy with it) and R7RS has attempted (succeeded?) to go in this direction.


IIRC R6RS was deemed to have changed too much for not much reason while abandoning backwards compatibility. Thus, R7RS was split into small/large specs, and largely builds on R5RS.


R6RS was the systemd of language standards. It went against the very philosophy of the language it purported to standardize, and was basically a prescriptive standard based on a few influential individuals' notion of what Scheme "should" be.

That's another reason why I remain unswayed in my detestation for systemd: I'd seen this movie before and I don't like how it ends.


If that's so important, why has nobody done it? CL is a programmable programming language.

It would take less than an hour to separate those core 25+ symbols in Common Lisp into a core package and then separate the other symbols into other packages. Common Lisp has fewer than 1000 symbols.


Actually, I think I might do it. After thinking about it for a few minutes, you're right, the work is somewhat minimal. Then you have a standard library. Instead of :use'ing :cl, you'd just use what you want out of the lib.

If it caught on, implementations could use it for hinting when compiling, and everything would be backwards compatible by just using :cl again.


I would like to do this too. Would you like me to join this effort? I want to propose to write up a CDR that standardizes a hierarchy. Please send me a mail! :)


I think nobody's efforts there have really caught on because Common Lisp as a community is hopelessly conservative.

At least Lisp as a whole isn't -- Clojure is a hell of a lot simpler than CL, though even it now has someone with a "let's separate things out into different packages" project (called Dunaj -- I actually think it's a pretty good idea, but nobody's really talked about it).


This exists; it's called the R7RS standard for Scheme. It basically divides Scheme into two languages: a small core with a basic module system, and a more extensive language with a robust standard library packaged as modules.


I'm glad someone said this. I'm an occasional JavaScript programmer, and I looked over ES6 last night and was surprised by how large it's become. And I learned that ES7 is already on the way.

That said, most of the features seem nice, and many are borrowed from stable languages like Python, so perhaps it's not too much. I'll have to try it and see.

It made me wonder what Crockford is up to, and what he thinks of this.

https://github.com/lukehoban/es6features

http://es6-features.org/


Yeah, I share the linked author's opinion of ES6 -- it's good stuff, but it's also a dangerous direction.

Part of me thinks that maybe what's needed is an updated version of "use strict" -- "use es6" or whatever -- that would let you use the new features, but also prevent you from using some deprecated features, to keep the surface of the language somewhat smaller even as new stuff gets added.


That was seriously considered some years back and thrown out as likely to cause poor adoption and poor intermingling of language features.

http://www.2ality.com/2014/12/one-javascript.html


For many years I was fiercely against ES. With ES6 I'm starting to change.

Your suggestion makes sense and I applaud it. Something like

"use strict es6"

would make our lives easier. Backwards incompatibility here serves a goal.


It's expanding the language, but adding such sorely needed features.

I've been working in it for a little while now, and egads is it painful to go back. Block scoping, arrow functions, and destructured assignments are all a godsend.


In one of his more recent talks, Crockford said that most of the new features in ES6 had not yet been proven to be good parts. Fairly damning, I think.



The plan for Common Lisp originally was to have a "core" and a standard library. From Daniel Weinreb's blog post "Complaints I’m Seeing About Common Lisp":

    It’s just too big. Actually, the real problem is that the core of the language is not cleanly separated from the built-in libraries.  The Common Lisp designers had originally intended to do this separation, but there wasn’t time enough.

https://web.archive.org/web/20100706204555/http://danweinreb...

(Daniel Weinreb was, among other things, one of the designers of Common Lisp.)

Zach Beane has some more information on this at https://xach.livejournal.com/319717.html

EDIT: "time enough", in the quote, may seem strange; after all, work began in 1984 and the standard was finalized in 1994. But remember that many stakeholders were companies with jobs to do, and they had to assign employees to the design/standardization work at real costs for said companies.


I like the author’s remarks and philosophy about keeping JavaScript small, but I thought the opening was remarkably uncharitable. The specific person and the specific feature are quite irrelevant to the point he’s making here.

I am left with some admiration for his goals, but also a great deal of trepidation about ever suggesting anything or even talking about JavaScript.next. Will I be the next one called out by name if I make the mistake of asking whether traits might be a good addition to JavaScript?


For what it's worth - the authors know each other from before, and, knowing both parties, Mark did not intend any offense.


I came back to note that Mark has subsequently clarified that he meant absolutely no slight against Kyle. He is a gentleman.


Interestingly, hardly anybody uses Algol, Smalltalk, Pascal and early Scheme anymore, while people still use Common Lisp. Perhaps "being small and beautiful" is actually a bad thing for a programming language?


I didn't realize that Scheme had become bloated. I haven't looked at it in years, and thought it was still the basic language described in SICP.


See now the R7RS, which is split into a small and a large standard (the latter still in progress). R6RS was largely ignored by the community, and there wasn't huge growth prior to it.


Common Lisp actually has a core of a mere twenty-five "special operators". You can think of everything else as standard library.
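
Which is why the rest can live in libraries. A trivial illustration (MY-WHEN is a made-up name): a WHEN lookalike is just a macro over the special operators IF and PROGN:

  (defmacro my-when (test &body body)
    `(if ,test (progn ,@body)))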


ECMAScript 6 is a shame; there's a lot of stuff added that is unnecessary.

"let" is unnecessary. JS now has two kinds of variable scoping! "var"'s hoisting is annoying, sure, but we don't need two kinds of variable scope. If you want to scope something to a block, you can just use an IIFE.

"class" is unnecessary at best. JavaScript has a bunch of ways of constructing objects to choose from, and that's not a problem. Why lock users into one paradigm, and obscure what's actually happening underneath? This will just confuse people when they have to deal with code that doesn't use "class" syntax or the OOP model it presents.

Object property shorthand is confusing. Why the hell is {bar} equivalent to {bar: bar}? Isn't that a set literal (Python, math)? Why isn't there the colon, if it's an object? What the hell? Try explaining that to newcomers.

Computed property names look weird and are misleading. You'd logically expect {[1+1]:2} to be an object with an Array (coërced to string?) key, because [] is an Array literal. But instead it means "compute this expression". In which case, why isn't it ()? That's what you'd intuitively expect. I've tried to use () before and was surprised it didn't work, even.

Method properties, e.g. { foo(a,b) { ... } }, are unnecessary given => functions.

All that being said, I think ES6 has some quite positive additions. Maps, sets, tail-call elimination, =>, modules and symbols are all very important and useful features JavaScript really needed.


>If you want to scope something to a block, you can just use an IIFE.

Now try doing that in a loop that you want to break out of. Edit: To save you the trouble - https://github.com/babel/babel/issues/644

>"class" is unnecessary at best. ... Why lock users into one paradigm?

It canonicalizes one of the popular ways of doing classes (the other being the same but without `new`).

>Why the hell is {bar} equivalent to {bar: bar}? Isn't that a set literal (Python, math)?

{ ... } in JS has never meant set literals. It does however mean objects (dictionary literals) which is also how Python uses it.

>You'd logically expect {[1+1]:2} to be an object with an Array (coërced to string?) key, because [] is an Array literal.

[] has also always been used to index objects and arrays, so using it when generating the object with keys and values follows as an extension of that.

>Method properties, e.g. { foo(a,b) { ... } }, are unnecessary given => functions.

Arrow functions capture lexical this, which method properties do not. Compare `({ x: 5, foo() { return this.x; } }).foo()` with `({ x: 5, foo: () => this.x }).foo()` Arrow functions also do not have an arguments object.


> Now try doing that in a loop that you want to break out of.

Fair point, although this can be worked around. Though it raises the question of why you need block scoping anyway. If you have a function large enough to need it, you should probably break it down into smaller functions and compose them.

> It canonicalizes one of the popular ways of doing classes

But there are other popular ways, and this way new users will have the underlying details hidden from them, meaning they'll encounter problems later. It's also potentially misleading.

> { ... } in JS has never meant set literals.

Yes, but it's never been { a, b } - there's always been a colon. Python also uses {} for dictionaries, but with colons. Having { a } magically use the variable's name as a key name, and also use the variable's value, is unintuitive. { a, b } in another language would be an array (C, C++) or a set literal (Python, mathematics). Nobody would expect it to do what it does here in ES6.

> [] has also always been used to index objects and arrays, so using it when generating the object with keys and values follows as an extension of that.

I suppose that makes some sense, but we don't use [] for string keys in literals.

> Arrow functions capture lexical this, which method properties do not.

Oh, right, good point.


I think the jury is still out on `class`. I can say that `class` is somewhat "dishonest" both the sense that it makes the language more complicated, under a guise of simplification; and in the sense that it lures developers from classical languages into thinking that JavaScript has classes in the same manner, when `class` in JS is just sugar.


I see this line of thinking a lot and I think it's a mistake. Are classes in other languages consistent with each other? Clearly not, so why is this distinction made here? ES6 classes ARE classes, it's not just sugar, that is what they are.


No, it's just syntactic sugar. They're not a basic construct of the language. JavaScript has objects and it has prototypes on objects. ES6 classes are sugar over this. Classes in other languages, however, are the basic construct.


Why is this distinction important when it comes to actually using the language?


Because when you run into OOP that doesn't use the sugar, you'll have trouble if you'd only learned "class", rather than what it's syntactic sugar for.


The common fallacy about simple languages.

Yes, the language might be simple to understand, but then the result is that the complexity lands on the shoulders of developers and an ever-increasing library of workarounds to compensate for missing features.

Hence why every simple language that achieves mainstream use ends up becoming like the ones it intended to replace.


My perspective could be very wrong, but isn't this a sort of "quality is in the eye of the beholder" issue? I get that building a silo isn't very beneficial to others. But isn't building a monolithic library just as destructive? I don't think anything is perfect, even if it's perfectly executed.


Here's a title for a rebuttal in case anybody wants to write it. ;)

The Tragedy of ISLISP, Or, Why Small Languages Implode


Whenever I encounter people arguing about Python3 I am reminded that I still miss Python1.


Python is also suffering in this regard. It has moved from being "a language that fits in your head" to a language where very few people on the planet know most of what's in it.


You hear this "safe subset" thing about the syntax-rich languages all the time, and I really don't get it. I really do use all of C++. There are some pseudo-deprecated bits and some extremely esoteric bits I don't hit, but beyond that I do wind up jogging most pieces of the language and standard library. Pretty much the same with my usage of Perl. I've never written a source filter or a format, but otherwise I use a lot of the "niche" features regularly to great effect.

This whole sentiment comes from people who work on large rotating teams with enough inexperienced people, I guess? Sorry, learning a language properly takes a couple years or more. The features aren't wrong or bad, your team just doesn't know the tools well enough. You can't play modal jazz with the big boys until you can do your scales. I guess have fun doing your pop medleys with "simple" languages.


The Arc language (which runs this site) is a remarkable attempt to "fix" what went wrong with CL. It lacks a decent runtime and native code compiler (it offloads everything to mzscheme, the way clojure did to JRE) but it is already more than a proof of concept.

The problem is that there are no more DoD or other grants for creating new Lisps (particularly due to the Java mass hysteria and the prevalence of packer's mentality).

BTW, making something similar to SBCL (everything written in itself, except for a tiny kernel written in C) for Arc (a core language, without the kitchen-sink syndrome) is of moderate difficulty compared to the meaningless piling up of more and more Java crap.



