Why I love Common Lisp and hate Java (2012) (kuomarc.wordpress.com)
121 points by pmoriarty on Nov 3, 2018 | 165 comments



It is a recurring theme for people to point out how verbose a hello world program is in Java:

    class HelloWorldApp {
        public static void main(String[] args) {
            System.out.println("Hello World!");
        }
    }
For argument's sake, let's compare it not to Lisp but to Python instead:

    print("Hello World!")
Once you get proficient in a language and start writing larger programs, your perspective can change entirely. What appears to be verbose in the Java version starts to make sense. For example:

* In Java the top level only contains classes and interfaces. All functions must be put inside a class, and I believe this design simplifies conceptual understanding. Whereas in Python, you have a distinction between top-level variables and functions versus fields and methods bound in classes.

* Python always executes the top level, so you end up with awkward idioms like: if __name__ == "__main__": main(sys.argv)

* Static typing versus dynamic typing. We all know this debate, so I won't repeat it. (This is regarding void and String[].)

* The "public" keyword might seem like noise, but in Python you denote private by prefixing the name with one or two underscores.

* Do you really want print() to be so easily accessible in the top-level namespace? It seems to me that the bigger an application gets, the worse it is to easily litter the codebase with print statements.

* You still eventually need to learn what "@staticmethod" means in Python. Java just forces this learning early on.


Totally agree with you - I like Java precisely because it's so verbose. I'd rather have to type 20% more than spend 20% more time figuring out other people's code.

And honestly, I always feel that the Hello World example is kind of biased. It's more an example of Java's way of launching a program being verbose, because every program requires a certain minimal structure. But for the rest of the Java language outside the main method, the only real "verbosity" comes from having to declare types, which is pretty much just a requirement of any statically typed language and isn't specific to Java itself.

I've written reasonably large amounts of code in Python, Javascript, Ruby, Racket, StandardML, etc. Every language has its own little forms of "beauty" - chunks of code that are particularly short and pretty to write in those languages. And I do get a certain amount of satisfaction doing trickier operations in a couple lines in languages like Python. But I always end up preferring Java's more verbose style because I don't want to have to think about the correct syntax to maximize my code's stylishness.


I would say that Java has ~50% more code than Python due to types and braces. But I agree with you that it's worth it, because the compiler can check that a programmer satisfied a constraint, instead of relying on informal conventions or comments.

You can have a statically typed language without verbosity, via type inference (Haskell, newer Java, etc.). I wouldn't say I enjoy Java's verbosity, but I would take this over shooting myself in the foot because Python does little to protect me from my own silly type/name mistakes.

I do lament that Java falls short on being able to declare tuples, lists, and dictionaries easily.
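For what it's worth, newer Java narrows that gap a bit; here's a rough sketch, assuming Java 10+ for var and Java 16+ for records (a tuple still needs a named type), with made-up names:

    import java.util.List;
    import java.util.Map;

    class InferenceDemo {
        // Java 16+ record standing in for a tuple; the name Pair is made up
        record Pair(String key, int value) {}

        public static void main(String[] args) {
            var names = List.of("Ada", "Grace");        // inferred as List<String>
            var ages = Map.of("Ada", 36, "Grace", 45);  // inferred as Map<String, Integer>
            var p = new Pair("Ada", 36);
            System.out.println(names + " " + ages + " " + p);
        }
    }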


Most Haskellers write out the types even though they can be inferred. In a pure language, signatures are extremely close to documentation (you know exactly what a function will do by its signature).

Java requires more code than Python or Haskell because it's extremely imperative and statement based. Python on the other hand includes a lot of functional, more expressive idioms like concise list comprehensions.


Types aren’t documentation, no matter how many Haskell users delude themselves thinking they are.

Documentation is more than a function definition. Types don’t explain rationales and how to use functions and programs. Types don’t give proper examples.


Types are really good documentation. I can tell I'm not deluded by thinking so because I can sit down with a Haskell library containing zero examples and write code that uses it. If your assertion was correct, that would be impossible. But it's not only possible, it's easy. Perhaps there is more to types than you realize.


Alice: Hey Bob, want to try my nifty new function?

Bob: Sure, Alice, what does it do?

Alice: It takes an object and it returns an object.

Bob: But what does it actually do?

Alice: I told you. It takes an object and it returns an object.

Bob: But what does it do with the object? What's it for? Why should I use it?

Alice: Hey, I just gave you the full documentation for my function. You have everything you need. Now go use it.

Bob: Um, no thanks. I like to know more about what I'm getting myself in to.


Thankfully, Haskell supports much nicer types than only `Object`


Alice: Hey Bob, want to try my nifty new function?

Bob: Sure, Alice, what does it do?

Alice: It takes a string and it returns an integer.

Bob: But what does it actually do?

Alice: I told you. It takes a string and it returns an integer.

Bob: But what does it do with the string? What's it for? Why should I use it?

Alice: Hey, I just gave you the full documentation for my function. You have everything you need. Now go use it.

Bob: Um, no thanks. I like to know more about what I'm getting myself in to.


I think you're criticizing a language you have no experience in. Haskell development is type-driven. You start by defining and constraining your types until you get a DSL to write signatures in. See Edwin Brady's Type-Driven Development with Idris for more on type-driven development.

By the time you've added type constraints through typeclasses, selected types named after the domain, and selected a suitable name for the function, it's possible to write against the signature with no knowledge of the body or comments.

You're also forgetting that signature includes the name of the function.

reverse :: [a] -> [a] is easy to understand by its name and type.


I didn't criticize Haskell. I only criticized a statement about its type system.

> You're also forgetting that signature includes the name of the function.

That's what I wanted to point out. The name of a function is even more important than its types. Types alone seldom give you the complete picture of what's going on in a function.

Of course the name can be wrong whereas types can't, but the name is less likely to be wrong than a comment.


I suggest reading Type-Driven Development with Idris by Edwin Brady. I don't think you have enough experience writing Haskell to understand the outcome of type driven development.

In Haskell you start by defining your domain as types, and constrain those types with typeclasses. By the time you get to writing signatures, the implementations can almost be inferred (in Idris they actually can be inferred, the code literally writes itself).

In an imperative language with side effects, examples and documentation are a must because behavior is hidden in the body of a function; the signatures lie.

A journeyman Haskeller can write code that is completely self documenting. No examples are required, because the types coupled with the purity of the language allow us to tell the whole story.


This is (probably often) true for all those who can load a few dozen arbitrary type signatures into their active memory and start drawing conclusions. For the rest of us a few examples of how to do common things would help a lot. Please include documentation.

I have experienced what you're talking about with Haskell but it was a lot of uncomfortable work. But it's "technically" true that the types often describe how to use a library, sometimes so well that errors cannot happen.


Types are documentation. Documentation isn't types.


It's better documentation than no documentation


> Most Haskellers write out the types even though they can be inferred.

I don't think that's true. Idiomatic Haskell has explicit declarations only for top level functions and uses type inference for everything else.


I often copy-paste the inferred type into top-level declarations, so in a sense it is true


>In a pure language, signatures are extremely close to documentation (you know exactly what a function will do by its signature).

The same can be true of tests. I've started generating API documentation from mine: e.g. https://hitchdev.com/strictyaml/using/alpha/scalar/email-and...

I tend to think of types and tests as attacking the same problem from opposite directions.


> StrictYAML can validate emails (using a simplified regex)

Public Service Announcement: don't validate emails with regex. Every time you do, god kills a kitten.


Just checking for the @ symbol is fair game. Anything beyond that is a bad idea.


Agreed :)


> Most Haskellers write out the types

Maybe that's partly because Haskell can easily become a bit too elegant. Between all the currying and combinators, the type helps to understand code "top down", i.e. when you haven't studied and memorized all the "bottom up" component parts.


Tuples especially, and more specifically how the language supports their unwrapping.

for idx, entry in enumerate(list)

vs. list iterators in Java, where you have to repeat complex types at least three times and end up with about 5-10 times more code, when roughly twice as much would suffice to achieve static type safety. The lack of typedefs, and the resulting drive towards dummy classes, exacerbates this problem further.
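To make that concrete, a hedged sketch of the pre-var Java equivalent of the enumerate loop above (the names and element type are made up):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class IterationDemo {
        public static void main(String[] args) {
            // The element type gets spelled out in the declaration, the construction,
            // and again inside the loop body.
            List<Map<String, Integer>> entries = new ArrayList<Map<String, Integer>>();
            entries.add(new HashMap<String, Integer>());

            // Closest analogue of: for idx, entry in enumerate(list)
            for (int idx = 0; idx < entries.size(); idx++) {
                Map<String, Integer> entry = entries.get(idx);
                System.out.println(idx + ": " + entry);
            }
        }
    }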


Java is verbose, but it isn't expressive. In Java you talk a lot without ever saying anything. In an expression based language like Lisp or Rust, you can more compactly write your ideas into code, even though Rust might still be verbose syntactically like Java.


The kind of verbosity you describe isn't really the issue in my view. The contentious verbosity is theKindWhereVariableNamesLookLikeThis and "fluent" programming style that turns "assert(testRes == expected)" into "assertThat(testRes).isEqualTo(expected)". It's excessive and removes value most of the time.


Well the fluent style is more verbose, but has the advantage of telling you more information about what went wrong. You get a report telling you that "x was not equal to y" rather than "expected true". The verbosity is way more helpful to me.
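A small sketch of that difference, assuming JUnit 5 and AssertJ are on the classpath (the class and method names are made up):

    import static org.assertj.core.api.Assertions.assertThat;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class AssertionStyleDemo {
        void check(String testRes, String expected) {
            // Plain assertion: on failure you mostly learn that the condition was false.
            assertTrue(testRes.equals(expected));

            // Fluent assertion: on failure the report includes both values,
            // roughly "expected: <expected> but was: <testRes>".
            assertThat(testRes).isEqualTo(expected);
        }
    }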


or you can use a language with macros, then a testing library can report the code being evaluated and the values in it.


Even Java has testing libraries that provide optional failure case messages with asserts to provide specific context where necessary.


Swift is statically typed and a "Hello, World!" in Swift is just print("Hello, World!").


So going back to the original commenter's points: how does Swift handle top-level code? Where is it legal for me to write print("Hello, World")?


Anywhere outside of things like a class or structure definition, if you have a single-file project. In a multi-file project, only in main.swift.


> I'd rather have to type 20% more than spend 20% more time figuring out other people's code.

If that were the trade off, it would be a no brainer. But instead it feels like double the code, for no clarity (or negative clarity because the noise becomes obfuscation).


Can't you get all the same advantages of Java with less verbosity in languages like Go, Swift, Rust or Haskell?


These criticisms look to me like you are trying to use your Java programming style in Python.

Prefer to use modules and tables of functions as an organising principle instead of classes.

Use inheritance rarely, and method overriding almost never. Otherwise you can make a really big mess (see Django's Form class hierarchy).

This sort of bad design where you have the flow of control jumping up and down the inheritance hierarchy happens in Java too, but the `abstract` keyword and static typing makes it easier to avoid.

In Java, make your code regular, so someone can understand the piece they're looking at without too much context.

In Python, make your code interesting. Factor out the unimportant repetitive parts, so someone can see a lot of the big picture all at once.

Incidentally, I'm quite happy with print() in the top-level namespace. Java's logging situation (5 widely used logging frameworks and a 6th to abstract over them all) is not better.


> Django's Form class hierarchy

I personally liked it. It is a little overwhelming at first, but after you get used to it, it's actually quite powerful.


> Do you really want print() to be so easily accessible in the top-level namespace? It seems to me that the bigger an application gets, the worse it is to easily litter the codebase with print statements.

Yes? It's great for debugging, which is generally what you'd use a print for anyways in a large application.


You really should be using a logging framework instead though, if you're working on a large application.


Except you don't want to have your logger in all places: unit tests, early in startup lifecycle when guice or spring haven't wired everything up yet, etc. Printing to stdout is simple and ubiquitous.


> Except you don't want to have your logger in all places

Yes, I do.

Now, I may have to work with a language/ecosystem that makes that awkward, sure, but that's a problem, if an incredibly common one (Julia is one of the few languages that avoids it.)


You use logging for production, but if you just want to see what a value is when you run your program in the IDE and might not have a logger imported, print is handy.


Coming from python I like to set a breakpoint, and dive in with pdb. It's super easy to navigate through the stack frame and poke around to see what all the variables are. I get more out of inspecting state from inside of pdb than I do out of print statements.

On the other hand, I tend to use only print statements when I'm roughing in a program in an iPython Notebook. This gives a sort of visual documentation when navigating a library that I'm new to, and allows me to glance back at a verbose description of otherwise opaque data structures.


If you're running from an IDE and want to inspect values, why not just set a breakpoint?


Doesn't work with multithreaded applications and works poorly with non-deterministic behavior, so sometimes you just have to print lots of info and see if anything interesting pops up. It also requires knowledge about tooling for the language you're working on right now, which as a polyglot you might not care to learn.


Nah there's debugging for that


... which you can build using a LoggingFrameworkFactory.


It's java.util.logging.Logger.getLogger(...), and normally you'd have "java.util.logging.Logger" imported.
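A minimal sketch of what that looks like next to a plain print, assuming java.util.logging:

    import java.util.logging.Logger;

    class LoggingDemo {
        private static final Logger LOG = Logger.getLogger(LoggingDemo.class.getName());

        public static void main(String[] args) {
            LOG.info("wired up");           // routed through handlers, levels, formatters
            System.out.println("wired up"); // goes straight to stdout
        }
    }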


What's even better than print for debugging is using a debugger...


I disagree. Most of the time, I find a debugger just slows me down. It's super helpful in some cases, but good logs can pinpoint problems far before a debugger can. Also, building in debug mode can change everything, so you may not even catch your bug, especially if it's concurrent in nature.


This idea that debugging with print statements is superior to using a debugger is simply false. Learn to use the debugger for your platform; it will pay huge dividends throughout your career.

I regularly see pairs of println debuggers debate and speculate while the guy with the debugger drills straight down to the issue and fixes it.


I think it is a fallacy to choose either printing or debugging. I forgot where I read this, but the two techniques are fundamentally different. A debugger lets you stop execution and examine data structures at one point in time. Printing lets you accumulate a log of one particular data structure over a span of time. I think these techniques are complementary and have different effectiveness on different problems.


Yep. They call it “tracing” in debugging and a good debugger will be able to directly catch and log the values of any variable at a particular line of code.

By using logging instead you’re reimplementing years of good work done by engineers before you.


>I find a debugger just slows me down

First you have to add the useless print statements, and after decades of programming I find a debugger vastly superior for 'debugging' compared to printing. (Back in the day, BASIC didn't even have a debugger.)

Also printing is utterly useless for highly concurrent code, as printing alters memory visibility, usually adds global sync, etc.


Attaching a debugger will also change the behavior of concurrent code.


This would depend on the language/compiler/linker - take Java (which the article is about). Attaching a debugger does nothing prior to adding a breakpoint.

The breakpoint would cause the method to be deoptimized, executed in the interpreter. Removing the breakpoint would allow the method to be optimized again.

Now obviously during stepping, the thread would be blocked and not highly concurrent. However, print statements themselves alter the concurrency.


Let's face it - concurrency sucks to debug in general.


So does the REPL, to be fair.


I think there are two use cases here: I was referring to debugging during development, and a lot of the replies are about troubleshooting an active prod system.

Of course we all hope for well thought out logging to troubleshoot issues we're seeing in prod.

I'm referencing a pattern I see with junior devs who simply use "printf debugging" in development instead of learning to use a debugger properly, even with distributed systems.


> a pattern I see with junior devs who simply use "printf debugging" in development

Whereas I see this pattern more with senior engineers.


I don't use printf. I tie a pin to an assertion of the expected result, and watch it with an oscilloscope.

Get on my level, normies. /s


If I have easy access to a debugger. It's extra work to hook things up to a debugger, and there are certain restrictions that may apply (attaching too late if the process launch is not under our control, the program may behave differently, etc.). If I do have access to a debugger, often I will just do "printf debugging" there by setting a breakpoint and adding an action to "p someVariable; c". Usually I treat my debugger as a sort of IPython for statically compiled languages, to mess around with and inspect values as programs are executing.


I have yet to come across a situation in which it is not worth the effort to figure out how to attach a debugger to a piece of code I'm modifying.


Again, attaching a debugger is occasionally not helpful. For example, if you're trying to figure out why your program isn't loading certain plugins at launch, your attempt to attach the debugger may happen after this step occurs. So you don't get to debug this process.


Or if an issue happens in your staging environment but not locally. That happened to me just yesterday, and a simple print statement gave me the information I needed to resolve the issue.

I probably could have attached a remote debugger, and executed the relevant function a few times until my request got routed to the right process in the cluster, but that honestly would have taken me more time than just committing the print statement and letting CI take it away.


That seems like a good thing to use logging for, instead of a print statement.

But you're right, figuring out a dev / prod discrepancy in already-running code is a case where a debugger is not as useful.


Exactly, and in the replies it is really obvious what actually colors this debate.

In many cases there is no debugger available for someone's favorite platform. And so they "hate debugging". Go programmers, javascript people (where you can't do client->server debugging, but really, really have to), ...

How many Java programmers don't use debuggers ? How many C# developers ? Those languages have excellent debuggers. Python, C/C++, ... decent at best. Go/Javascript/... dismal debugging support.


> Javascript/... dismal debugging support.

Eh? I thought JavaScript debuggers were pretty good.


I always compare my React debugging experience to something like GWT debugging from 5 years back ... and I find it very lacking indeed.


Doesn't quite work when debugging a distributed system, though. You end up adding a lot of "distributed print", AKA logging.


Fair enough, but attaching multiple debuggers across several interacting components with conditional breaks gets me there faster than incrementally inserting progressively more print statements, in a dev environment. Proper logging is a given for doping out problems in a production system, which you then verify and correct in dev.


Logs aren't a bad thing and they aren't going away. Whoever lets you set up debuggers on prod should be fired.


Sometimes having a debugger on prod is the right answer.

"The Remote Agent software, running on a custom port of Harlequin Common Lisp, flew aboard Deep Space 1 (DS1), the first mission of NASA's New Millennium program. Remote Agent controlled DS1 for two days in May of 1999. During that time we were able to debug and fix a race condition that had not shown up during ground testing. (Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem. The story of the Remote Agent bug is an interesting one in and of itself.)

"The Remote Agent was subsequently named "NASA Software of the Year"."

http://www.flownet.com/gat/jpl-lisp.html

http://ti.arc.nasa.gov/m/pub-archive/176h/0176 (Havelund).pdf


No one said otherwise, and the comment you replied to specifies "in a dev environment". The language you are using is unnecessarily combative: this is likely to inhibit the adoption of your ideas.


Only with python. With LISP it's the common case to debug prod.

I would rather argue that such incompetent managers who speak such nonsense need to be fired.


Smalltalk folks will debug in production too. Although to be fair, Smalltalk folks will write whole programs in debug in development.


Sure. Arguably Smalltalk is a LISP, just with non-Lisp syntax. It actually was prototyped in lisp first.


Agreed, great logs are great.

My comment explicitly stated dev environment, not prod, for debugger usage.


There is much more to logging than print statements. I have yet to run into production code which uses vanilla stdout print statements for logging.


> In Java the top level only contains classes and interfaces.

This design makes sense from the standpoint of conceptual purity, but Java has never been a conceptually pure language. It distinguishes value types and object types, for instance. Not allowing free functions is really just a poor design choice.

Worse, the proliferation of single-method interfaces (which are really just functions) and static methods (which are in most cases really just free functions) really prove that free functions are useful. Java just makes it harder to use them.
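A small sketch of both workarounds mentioned above (all names are made up):

    // A single-method interface: effectively a function type in disguise.
    interface Transformer {
        String apply(String input);
    }

    // A utility class of static methods: effectively free functions with a namespace.
    final class StringUtils {
        private StringUtils() {}

        static String shout(String s) {
            return s.toUpperCase() + "!";
        }
    }

    class FreeFunctionDemo {
        public static void main(String[] args) {
            Transformer t = StringUtils::shout; // a method reference fills the interface
            System.out.println(t.apply("hello"));
        }
    }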


>Not allowing free functions is really just a poor design choice.

I don't see why. The verbosity is pretty minimal and you enforce the availability of a privacy scope you wouldn't have had, i.e. private static methods and members that your public statics can use.

Although, pure functions are nice if you can't trust the code you're calling won't surprise you.


I think this has to do with naming. Naming is generally hard, with free functions you must name the function, in Java you must also name the class.

If you could show some examples of single method interfaces whose naming makes sense, I'd be delighted!


>If you could show some examples of single method interfaces whose naming makes sense, I'd be delighted!

IDisposable with a single Dispose() method?


Fair point, though I'd smack you for IDisposable :) Why not just Disposable?


The "top level" problem is real in Python and in Lisps; but in practice it's not a huge issue (if you really don't like __name__ == "__main__" you can create your main app as a separate script to your library). main in Java is as magical as __name__ in Python.

Python now has type hints and good static checking libraries so you can add as much type checking as you want. Java's type system is atrocious (c.f. Haskell, OCaml); it doesn't check nulls (a common bug), and making simple constructs (tuples, unions) often requires a new class file of boilerplate.

The HelloWorldApp is a bad fit for objects. All the functions are static. This confused me for a long time learning Java. You can get a very long way in Python without knowing @staticmethod.

The static nature of Java does make it easy to know where to look for things and introspect code; there tends to be less magic.


Java's insistence on one class per file, naming files with class names, and placing everything in a class is extremely helpful in industrial software engineering. When you pile up devs with different backgrounds, some bright, some bad, the rules that force everyone to follow some standard are great. This also helps with creating IDEs and refactoring tools. If your area is to play with math, go use whatever you want. Java is not that; its sweet spot is different. Why is this link here? There's literally nothing smart or useful in it.


There is a lot more structure in this Java snippet though. At equivalent structure, the Python code isn't that much shorter.

It's just convenient for a scripting language to be able to print stuff without any code structure. Something that Java enforces.

I've concluded it's because Java isn't designed for scripting that it doesn't allow scripting practices in its code bases. I just started learning Java, and I gather that if I use a class that could throw an exception I have to catch it (or propagate it to my class's user, it seems). In this particular test I knew it wasn't gonna throw, but it wouldn't compile: I was opening a file I knew existed for a quick test - instead of 1 Python line I got 7 Java ones. But with exception/catch structure.
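Roughly the shape being described; a hedged sketch assuming Java 11+ (the file name is made up), versus Python's one-line open(...).read():

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    class ReadFileDemo {
        public static void main(String[] args) {
            try {
                String content = Files.readString(Paths.get("known-to-exist.txt"));
                System.out.println(content);
            } catch (IOException e) { // checked: must be caught or declared
                throw new RuntimeException(e);
            }
        }
    }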

It's an enterprise language, I'm all for the code structure enforcement. Help a dev catch an exception, so many things could already go wrong on larger codebases ;)


From the perspective of a python programmer, I must say this is an interesting list. I'm rather convinced by your points on print and public.

On the other hand,

"__name__ == __main__" is just a convenience for debugging and small scripts. I wouldn't use it in real programs.

I don't use @staticmethod, or find it useful. If I want a function, I use a function.


> I don't use @staticmethod, or find it useful. If I want a function, I use a function.

In principle, it makes sense to "bundle" functions that are strictly related to a class within the class itself (i.e. as static methods). Of course, the benefits versus top-level functions are only noticeable if your codebase is big enough.


What are those benefits?


Of course it's just better organization of the codebase and a less-polluted namespace. Also, if you're writing a library, it may affect the API you give to the user. If you move a function to a static method, the end user only needs to import the desired class.

But again, they are "usability" benefits. I'm no CPython expert but I don't think that a static method would have noticeably worse (or better) performance than a function.


> Do you really want print() to be so easily accessible in the top-level namespace?

Not necessarily, but I definitely want functions to be accessible in the top-level namespace. Not everything is an object.


> In Java the top level only contains classes and interfaces. All functions must be put inside a class, and I believe this design simplifies conceptual understanding.

Strongly disagree. (Traditional) Java has no way to represent a function or a class as a first-class entity and it suffers for it. In Python functions and classes are plain old values (and indeed objects) and any value can live at top level in the module.

> * Python always executes the top level, so you end up with awkward idioms like: if __name__ == "__main__": main(sys.argv)

How is that any more "awkward" than the Java way? It's fewer lines of ceremony in all cases as far as I can see.

> * The "public" keyword might seem like noise, but in Python you denote private by prefixing the name with one or two underscores.

So Python has a) a better default (it's not that Java doesn't have a default visibility level, it's just such a spectacularly useless one that it never sees use) b) a concise symbolic syntax for something that's used extremely commonly, which is the right approach: http://www.lihaoyi.com/post/StrategicScalaStyleConcisenessNa...

> * Do you really want print() to be so easily accessible in the top-level namespace? It seems to me that the bigger an application gets, the worse it is to easily litter the codebase with print statements.

Big applications will always have their own frameworks and no possible default set of imports makes sense for big applications (Java's default imports from java.lang.* get in the way for big applications too). But a good language should work for small scripts as well as big applications, so having print in there by default makes sense. (I can sympathise with having a no-default-imports option or something like Haskell's custom preludes).

> * You still eventually need to learn what "@staticmethod" means in Python.

My 10+ years of Python say otherwise. You really don't.


You wouldn't be able to write linear scripts in Python if it were so restrictive, or execute the interpreter from the command line with the -c command. You'd be forced to use OOP patterns for trivial code, whereas Python as it is now doesn't require any OOP at all. I really don't understand why you'd want to force everything into OOP doctrine.


You can have your cake and eat it though, thanks to Kotlin.

It is way more concise than java but does not become ambiguous because of it.


But “forcing learning early on” is exactly the problem. Approachable languages lead you up their learning curves in small, easy steps. The more knowledge you require newbies to swallow in one bite, the greater the number that will choke.

(Not that anyone would call Lisp approachable either, of course.)


Agreed. When asked, I keep saying that Java is not the best "first language" to teach to a person, because it overwhelms people with concepts from the get-go.

A normal person who's sincerely trying to learn, when shown the Hello World will ask about everything - what is this "class" and why it's needed? What does "public", "static" and "void" mean, and why they're written. What "System.out" is? Not to mention, the package declaration at the top of the file.

All of those things have reasons for them, but when explaining this to a novice, you'll have to initially handwave most of those reasons away. That seems very uncalled for, especially when the basic abstraction you're trying to teach is that "computer executes simple instructions top to bottom, one at a time, and everything is built out of those simple instructions".


Agreed. I went back and read the original discussion on HN about this blog post and one commenter phrased it as such:

You need to learn and memorise 15 facts to fully grok the Java Hello World snippet (including how to compile and run it), for Python it's only four facts.

That's a big difference in cognitive load.


> Python always executes the top level, so you end up with awkward idioms like: if __name__ == "__main__": main(sys.argv)

And that's more "awkward" than the idea of a "public static void main" and a MANIFEST file because?


I've made an attempt at a java scratchpad/REPL that guesses what imports you want. It allows just print by itself but Dump("Hello World!") is better in most cases as it handles collections well and outputs to tables/HTML as well as text.

Example: http://jpad.io/example/1y/test-different-output


IMO, it's a balance of entropy with maintainability. The amount of info in a line of Java is likely lower than in a line of Lisp, so the cognitive load is likely lower and code reviews of diffs are spread out over more lines. Generally less value per line, but more decomposed.


Having done both Java and Lisp for years (both professionally), I agree that in general Lisp requires more cognitive effort to read per line, but I disagree about the reason.

Cognitive load vs. verbosity can be visualized with something like a Laffer curve - sure, it's easier to read a couple lines of Lisp than one line of APL, but when you get into situations where one line of Lisp (or Python) gets expanded into 30 lines of Java, then the former is much easier. Terse languages make you pay a cognitive cost for unpacking meaning; verbose languages make you pay for keeping track of what all these lines of code are doing. Ever had this situation where you read two pages of a book and realized you didn't really read anything, and have to re-read it (and 3 pages prior) to understand the point? That's Java in a nutshell, especially pre-1.8.

As for Lisp and cognitive load, the difficulty comes from a) dynamic typing, and b) macros. The former makes bad naming choices much more painful, as you can easily lose track of what the code is doing, and to what data. The latter are fine when written well; when written badly, your problem suddenly explodes in complexity as you have to peel off a layer of syntactic abstraction and fix the underlying mechanism.


And all the verbosity disadvantage almost goes away anyway once you get outside default functions and stop using globals


The languages that tend to become popular are simply languages that appeal to the lowest common denominator. Java was the language invented for the "common programmer", and people find that comforting. Now it's Go; as Rob Pike once said of it, "They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."

That's why it's always a battle to push languages like Lisp, which offer mathematical elegance, over languages that are a soup of half-baked ideas that steer programmers away from anything that takes more than a few days to master. For those of us who aspire to do better, we should continue to try to bring the from-first-principles approach of functional programming to the mainstream.


Not gonna lie, to me that's a lot of words to just say "people who use popular languages are beneath me because they simply can't understand the 'beauty' that mathematical-esque functional languages offer".


The issues mentioned are largely addressed by jshell and lambdas.

Regardless, his issues mostly boil down to "Java is not a REPL-first and Lambda-first language". This is like me "hating" my cat because it's not a dog.

I find many beginner programmers like REPL langs. When I talk to them, though, it becomes apparent that they've never worked on a code base with 100k+ lines with tens of colleagues.

One guy actually suggested that systems should never be that big! What do you say to that? I just smiled.


I think many Haskell, OCaml and Scala programmers would disagree heavily with you.

Repl driven development works very well with pure functional code and at no point did I feel an increase in loc begin to make it tedious or whatever.

The repl approach perhaps doesn't work as well in Java because Java and its ecosystem encourage writing messy jumbled code built upon unsafe abstractions.


I've recently come around on Java (although I'll still rarely choose it), mostly due to the idea that "all things get complicated, so what matters is how you manage that complexity".

Java started to "click" when I realized how much you build objects out of other objects like layers of an onion, just adding a bit at each step, and it clicked a bunch more as I understood Spring's bean system, which itself clicked more when I began to understand that the object-based dependency graph is basically the equivalent of the top-level Ruby script. All the code describes how X and Y relate to each other, and then you go to the bean definitions to find out what X and Y are. That's actually pretty neat.

I still prefer the "simple script" approach you can do in Ruby; and I'm loving the way Scala is kind of Ruby, but in Java, and so (and to relate to the actual content) inherits a lot more from Lisp.

So, PS, author, if you're reading this, seriously give Scala a try. It's been pretty fantastic so far for me!


Java has been gradually eating all the good parts of Scala. So far, it's eaten lambdas, map / reduce / fold functions, option types, raw string literals, the concept of a built-in REPL, type inference for local variables, and default methods in interfaces (which is kind of like mixins). Pattern matching and type classes are on the menu, but not yet formally merged into Java.
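For what it's worth, a quick hedged sketch of several of those features together in modern Java (10+; the names are made up):

    import java.util.List;
    import java.util.Optional;

    interface Greeter {
        String name();
        default String greet() { return "Hello, " + name(); } // default method, mixin-ish
    }

    class ModernJavaDemo {
        public static void main(String[] args) {
            var words = List.of("lambda", "map", "fold");        // local type inference
            var total = words.stream()
                             .map(String::length)                // map
                             .reduce(0, Integer::sum);           // fold/reduce
            Optional<String> first = words.stream().findFirst(); // option type
            Greeter g = () -> "Ada";                             // lambda fills the interface
            System.out.println(g.greet() + " " + total + " " + first.orElse("none"));
        }
    }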

So what does Scala have left? Operator overloading, which tends to lead to unclear code. "Implicit," which should give any former C++ programmer PTSD flashbacks. A compiler which is pretty slow. The ability to embed XML into code (why?). A collections library which is nightmarishly complex. (See https://yz.mit.edu/wp/true-scala-complexity/ )


> Java has been gradually eating all the good parts of Scala.

Hooray! I remember the first time I poked at Java, back in 2007, and I hated it, in large part because it was missing these. It's fantastic that it's eaten these good things.

> So what does Scala have left?

Style/syntax and community?

I don't know enough about either language to debate on the points you've brought up; what I do know is that Scala feels a lot more like writing Ruby/JS/Python than writing C++/C#/Java.

I've only just started writing real class definitions in Scala, and the way that "constructors" work really drives home this similarity and difference; the kind of code you write to define the class looks to be the exact same kind of code you write to define everything else.

I also haven't poked into the larger Scala community, but I can't imagine it's all that similar to the larger Java community. Even if Java reaches some kind of feature parity with Scala, I bet this difference remains, and it'll come across in the libraries, common usage patterns, help on the internet, and conventions.

Again comparing to Ruby, the effect that "Matz is nice" and (when you add in Rails) "convention over configuration" have goes beyond code features into what it's actually like to use and work with the language day in and day out.


> Java has been gradually eating all the good parts of Scala. So far, it's eaten lambdas, map / reduce / fold functions, option types, raw string literals, the concept of a built-in REPL, type inference for local variables, and default methods in interfaces (which is kind of like mixins).

If you adopted Scala 5 years ago, you could have had all the good stuff Java has today, 5 years ago. If you adopt Scala today you can have all the good stuff Java will be getting in the next 5 years, today.

> Pattern matching and type classes are on the menu, but not yet formally merged into Java.

How many years or decades will those "formalities" take? Java language enhancements have a history of taking much longer than originally claimed.

> Operator overloading, which tends to lead to unclear code.

The problem with traditional operator overloading is that you have to memorize which symbol corresponds to which magic method name (I can never remember what method * calls in Python, for example). Scala avoids that because it doesn't actually have operator overloading; rather, operators are just normal methods that you call in the normal way.

> The ability to embed XML into code (why?).

Already moved into an optional library, being dropped entirely in the next version.

> A collections library which is nightmarishly complex. (See https://yz.mit.edu/wp/true-scala-complexity/ )

Replaced in the next version. (And the complex parts were only ever for doing things that are completely impossible in any other language).

> So what does Scala have left?

Higher-kinded types (which let you represent secondary effects in a uniform way - obvious things like validation or async, but also custom effects like database transactions or an audit trail). Uniform representation of structures/records (via Shapeless) that lets you traverse data structures in a normal, type-safe way.

Put that together and you get a language where you never need the magical frameworks that (real-world) Java needs. No reflection-based runtime serialization that you have to control via a type registry separate from the language type system. No reflection-based database mapping that you have to control via another, subtly different type registry. No AOP decorators where you rename a method and it magically stops having a database transaction applied. No magical annotations for your http routes that tell it which string class name to load at runtime for serialization, where if you want your web threads to do something before/after you use yet another magic method or registry. No magical autowiring framework instantiating all your classes for you.

Just plain old functions and values. Standard language features (plus one macro in shapeless that might as well be part of the language given how much it's used) for all of the above. One standard syntax (for/yield) for effect management that all the language tools understand. Libraries for things that would be language features in most other languages, but without having to resort to the free-for-all of custom macros.

That's what Scala has. If Java catches up to that one day, great! But without higher-kinded types, and without some kind of uniform representation of records/case classes/data classes/what-have-you, it will never get close.


> The problem with traditional operator overloading is that you have to memorize which symbol corresponds to which magic method name.

That's not even in the top 5 problems with operator overloading.

> How many years or decades will those "formalities" take? Java language enhancements have a history of taking much longer than originally claimed.

Almost certainly a shorter amount of time than it would take to migrate our codebase away from Java. Oracle has actually been moving pretty fast with Java recently, for better or worse.

> [XML is...] Already moved into an optional library, being dropped entirely in the next version.

And this illustrates another problem: Scala is constantly breaking backwards compatibility. Which is a real-world problem, unlike "I want the latest gee-whiz language feature."

> Put that together and you get a language where you never need the magical frameworks that (real-world) Java needs.

Scala has plenty of "magical frameworks": scalaz, akka, shapeless, the list goes on.

Newer Java libraries like Jackson don't need a type registry (or at least, not one that you have to manually set up).

I won't try to defend J2EE or Hibernate, but also, I don't use them.


> That's not even in the top 5 problems with operator overloading.

What other problem is there? Some libraries define functions with stupid names and that's a problem, but it's a problem you have without operator overloading too.

> Almost certainly a shorter amount of time than it would take to migrate our codebase away from Java.

I rather doubt that; a typical codebase has a half-life of what, 5 years? Which is less time than many of the Java features you listed have been delayed for.

> And this illustrates another problem: Scala is constantly breaking backwards compatibility.

Hardly. One major compatibility break in the language's entire history, eight years ago; removal of XML literals will be part of the second, and is only happening after they've been a) universally agreed to be a bad idea and b) deprecated for four years and counting. Yes it's not quite the extreme levels of backwards compatibility that Java offers, but it compares favourably with most languages out there.

> Scala has plenty of "magical frameworks": scalaz, akka, shapeless, the list goes on.

Maybe some parts of akka (I don't use it), but certainly not scalaz or shapeless: they're libraries, not frameworks, and there's no magic (no reflection, annotations or anything like that, aside from the one macro I mentioned), just ordinary values and ordinary functions that you call in the ordinary way (even the macro behaves like one).

> Newer Java libraries like Jackson don't need a type registry (or at least, not one that have to manually set up.)

Jackson is the example I was thinking of actually, it absolutely does have a (magic, reflection-based) registry that controls how things get serialised. See e.g. https://github.com/FasterXML/jackson-datatype-joda#registeri... .


> So what does Scala have left?

This is why I think long term Groovy is better positioned than Scala, even if short term Scala has more of the mindshare. Groovy actually offers something fundamentally different, while Scala is perpetually trying to compete on Java's home turf. If people really want a pure, statically typed language Kotlin is pretty good and has less impedance mismatch with Java.


> If people really want a pure, statically typed language Kotlin is pretty good and has less impedance mismatch with Java.

I see this the other way around: Java genuinely has eaten most of the good parts of Kotlin, because Kotlin is positioned as a small enhancement over Java rather than a language that brings major improvements as Scala does. Adopting Kotlin means paying much the same cost as adopting Scala, but for much less in the way of benefits.


This is why Kotlin is such a hit for Android devs, who are stuck with only half the features of Java 8 (at best).


What fundamentally different does Groovy offer?

What do you mean by "Scala is trying to compete on Java's home turf"?


When Apache Groovy was first released, it offered closures and dynamic typing on the JVM ecosystem, but nowadays Java has lambdas and inferred typing, so Groovy doesn't really have much fundamentally different to offer anymore. And since version 2.x, Groovy is also "trying to compete on Java's home turf" because it has static-typing annotations, but Kotlin and Scala are probably better choices if you want static typing on the JVM because it was baked into them from the get-go instead of being bolted on as in Groovy 2.x.


Groovy is a truly dynamic language (with all the pros and cons that come with that). So you can put it into contexts such as interactive scripting where Java will always be suboptimal and solve problems Java doesn't even want to solve. On the other hand, every single good idea that Scala comes up with will eventually be co-opted by Java, because there is no reason for that not to happen. It is just a matter of time.


When higher kinded types come to Java, please ring me up!


Touche - yes, I agree that Java is probably never going to attempt to introduce that!


Implicits are a sharp sword. Either very good or very bad depending on the people using them.

Scala doesn't stand still either. Dotty / Scala 3 will bring many refinements and new features like union types, opaque types and implicit function types.


It's ironic that the author claims that Common Lisp is easier to learn than Java. One of the most common pro-Lisp, anti-Java arguments I hear is that Java is too easy to learn, so that a lot of people who aren't actually very good at programming can get Java jobs. This means that the Java world is populated with not-so-bright people, while the rarefied Lisp world is full of elite wizards.


Java doesn't contain conceptually difficult features. In Rust you have to learn lifetimes and borrow checking, in Haskell Monads... Java just has a huge amount of relatively simple features. Lisp is middle of the road conceptually, but is a small language.

So in that sense I agree with you. Java is favored by people who struggle to learn harder concepts. That's why it's the usual target of the lowest common denominator programmer and loved by large enterprises that can churn Java programmers like mackerel.


Beside the REPL, another impressive feat of Common Lisp is meta-programming. You can write software to write software. Say what? Yeah, that was my initial reaction too, but Common Lisp's macro system allows you to write functions that return code snippets. It completely redefines the word "macro". Or more accurately, re-redefines, since Lisp is much older than MS Office, which is what most people associate macros with, sadly.

The article could be improved by including an example. Pick one in which Java makes the problem to be solved tedious, verbose, repetitive, or error-prone and where Lisp macros do not. Bias the example as much as possible to a problem where Lisp can shine. It should, however, be a real problem most programmers are likely to face in some form.

Then present the source for both solutions.

This approach would even help those who don't use Java regularly, because they could try to implement the same functionality in their language of choice and compare it to the Lisp implementation.


Here is something from Clojure that stuck with me as particularly elegant:

The -> macro.

  (-> {}
      (add-person 'servbot)
      add-age
      add-info
      add-to-db
      notify-status-by-email
      log-user-addition
  )
So the threading operator is exactly what you ask for. It inserts the result of the previous expression as the first argument of the next expression. Note this still supports situations where an expression can take more than one argument.

Additionally, there is the ->> operator, which inserts the previous result as the last argument in the next expression, and a library that includes the "magic wand" operator "-<>", a la

  (-<> {}
      (add-person 'servbot)
      (add-age 21 <>)
      (add-info 'servbot <> 21)
      (add-to-db 'my-db-handle 5432 <>)
      notify-status-by-email
      (log-user-addition 'dev 'email <> 'paint-by-numbers)
  )
This particular set of macros seems entirely trivial until you use it, and then it changes your entire perspective of what functions do since your ability to express the computation pipeline changes completely.

Edit: Update for formatting.


The Clojure standard library has had the 'as->' macro for a while, which serves the same purpose as the "magic wand". It doesn't need to use '<>' as the identifier, but I usually choose to do so because I learned about the magic wand library before I learned about the 'as->' macro.

  (as-> {} <> 
      ...
      (add-age 21 <>)
      ...)


Implementing async/await. You can accomplish that entirely via macros in Common Lisp, whereas in Java you would need to do bytecode transformation via a javaagent (which, shockingly, I just discovered somebody has actually done: https://github.com/electronicarts/ea-async).
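Without that kind of transformation, the stock alternative in Java is explicit CompletableFuture chaining; a rough sketch with made-up method names and stand-in bodies:

    import java.util.concurrent.CompletableFuture;

    class AsyncDemo {
        // Stand-ins for real async I/O; the names and bodies are hypothetical.
        static CompletableFuture<String> fetchBody(String url) {
            return CompletableFuture.supplyAsync(() -> "{\"count\": 42}");
        }

        static CompletableFuture<Integer> parseCount(String body) {
            return CompletableFuture.supplyAsync(() -> 42);
        }

        public static void main(String[] args) {
            // What await-style code would write as sequential lines, plain Java
            // expresses as an explicit thenCompose/thenApply chain.
            fetchBody("https://example.com")
                .thenCompose(AsyncDemo::parseCount)
                .thenApply(n -> n * 2)
                .thenAccept(System.out::println)
                .join();
        }
    }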

That being said, I write Java for a living, and I don't find it terribly verbose, outside of the lack of sum types, pattern matching, and typeclasses.


Isn't that really hard to implement correctly in Common Lisp? Your macro would need to perform a continuation-passing style transformation on arbitrary code that could involve jumps, error, etc.


You don't need to do CPS transformations if your language supports first-class continuations. Look up `call/cc`.


Indeed, but Common Lisp doesn’t!


I remember being struck by a snippet of pseudocode in a discrete mathematics textbook[0]:

https://pastebin.com/L2dwawgu

So basically being able to run an arbitrary number of dependent, nested loops. Now imagine you want to run arbitrary code in the interior, rather than just incrementing a count.

It's a simple idea. And it can be expressed so succinctly in just a few lines of pseudocode in a freshman-level computer science textbook.

And yet I wouldn't want to touch this in Java. The lisp macro to write this isn't much more verbose than the pseudocode itself.

[0] Rosen's Discrete Mathematics and Its Applications, 7th ed.

Edit: Traded formatting for pastebin
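For comparison, a hedged Java sketch of one way to get an arbitrary number of dependent, nested loops with arbitrary code in the interior (recursion rather than a macro; the exact loop bounds from the textbook are a guess here, and the names are made up):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Consumer;

    class NestedLoops {
        // Runs `depth` dependent loops, start <= i1 <= i2 <= ... <= i_depth <= n,
        // and calls `body` with the current indices at the innermost level.
        static void nest(int depth, int n, int start, Deque<Integer> indices,
                         Consumer<Deque<Integer>> body) {
            if (depth == 0) {
                body.accept(indices);
                return;
            }
            for (int i = start; i <= n; i++) {
                indices.addLast(i);
                nest(depth - 1, n, i, indices, body);
                indices.removeLast();
            }
        }

        public static void main(String[] args) {
            nest(3, 4, 1, new ArrayDeque<>(), idx -> System.out.println(idx));
        }
    }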


Inline some data structure. JSON, interestingly, embeds trivially in Lisp. Not so much in Java. Especially once you learn the quasiquote.


I would love to see the reverse. I've never seen a Java codebase that I thought was pleasant to work with, but maybe I'm just not looking hard enough.


You're really underestimating the power of Java's boring workhorse methodology. If you're looking for love there's a lot of other languages that are more pure and more beautiful for certain use cases.

However, there's a reason Java is in the top 3 most popular languages. It's pragmatic, it scales, it plays nice with tooling, and the language itself is pretty passable.


> However, there's a reason Java is in the top 3 most popular languages. It's pragmatic, it scales, it plays nice with tooling, and the language itself is pretty passable.

I wonder if those are true reasons. I currently believe in the following set of reasons, in the order of importance: runaway feedback loop of popularity (popularity -> jobs -> popularity), JVM is cross-platform, low cognitive effort per line of code written.


I'm not asking for anything fancy - just an example of a Java codebase that is "good" for whatever definition of "good".

I'm currently dabbling in Android development, and the few Java libraries I've dealt with, especially Android's own APIs, have been a thorough exercise in frustration, to put it mildly.

Where are the good Java codebases hiding?


I agree that the Android APIs are not good.

Have you looked at JDK itself?


If you are running JDK 9 or later, you can use jshell:

jshell> "Hello World!"

$1 ==> "Hello World!"

https://openjdk.java.net/jeps/222


Java is not really designed for a REPL though. I've used REPL-like environments in Swift long before this, and it simply never feels right IMHO compared to a real dynamic language.

And Swift is a much better designed language for this than Java.


Thank goodness for jshell. The only thing holding Java back was its lack of a repl and now it has one, we can embrace Java as the useful concise dynamic succinct language it really is.

I don’t think the argument in the linked article is very good but this apology is worse. It doesn’t really show any merit of Java (more that the repl in Java is much less powerful than in eg CL) or argue against the spirit of the article.


Turns out C# also has a REPL now, it's called "C# interactive". I've found out just now, spurred to look for it by your post.


Ummmm yes & no. It has a REPL, but it sure isn't easy to use and very few C# coders actually use it. Contrast that with a dynamic language like Python where people basically live in the REPL. Common Lisp takes this to a whole new level. You really can't equate C#'s REPL to Common Lisp's. This is one of those things where the sum of the thing is greater than its parts. Note that I'm not saying C# isn't a decent language, just very very different than what you get with languages like Smalltalk & Lisp.


for those running JDK 8 or earlier, you can also script Java through Rhino >> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rh...


And lambdas were introduced in 8 making nearly all of the author’s issues mute.


I'm curious, how much Common Lisp have you written yourself. Because what I'm reading between the lines is that you consider modern Java to be pretty much equal to Common Lisp in power, and that doesn't make much sense from here.

Take lambdas for example, the amount of arbitrary exceptions you have to keep in mind while using them makes it almost not worth the effort. It's still the same old crappy Java; only with shiny, barely working marketing gimmicks duct taped to the side.


*moot

You're right, but to be fair the article is almost 7 years old, and Java has evolved a lot.


That's the thing though... they aren't real lambdas. They're just syntactic sugar for an interface. If you watch the talk by the head of Java right now at the Clojure conference, he talks about how they're essentially trying to tack on low hanging fruit in hacky ways to keep the language up to date. But secretly it's a mess.


What about closures? Also, those lambdas are actually classes that try very hard to look like lambdas, but aren't actually.


What do you mean "aren't actually" ? Anything that behaves according to the rules of a lambda is one — they can certainly be implemented with classes.


There are quite a few complications with Java lambdas, and their abstraction bleeds out in several edge cases. This is a result of their class implementation and would not exist if functions had first-class support in the language and bytecode (there are complications there too, because they didn't want to change anything at the bytecode level for lambdas).

https://dzone.com/articles/java-8-lambas-limitations-closure...


Lambdas in Java aren't really that complicated. If you understand anonymous classes, you understand lambdas.

That article just compares Java's closures with Javascript's closures. The "limitation" is that Java's closures can only access the final variables of the enclosing scope. But the article agrees that this limitation "can be considered negligible" (seriously, read it-- that's the conclusion.)

Also, starting in Java 8, you don't have to explicitly declare things "final" to use them in anonymous classes or lambdas. They just have to be effectively final, meaning they are not mutated.
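A small sketch of what "effectively final" means in practice:

    import java.util.function.Supplier;

    class CaptureDemo {
        public static void main(String[] args) {
            int base = 10;                          // never reassigned: "effectively final"
            Supplier<Integer> ok = () -> base + 1;  // compiles without an explicit `final`

            int counter = 0;
            counter++;                              // reassigned, so no longer effectively final
            // Supplier<Integer> bad = () -> counter;  // compile error if uncommented

            System.out.println(ok.get() + " " + counter);
        }
    }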


> But the article agrees that this limitation "can be considered negligible" (seriously, read it-- that's the conclusion.)

Working both in Java and Lisp professionally, it sure as hell isn't negligible. It changes the way you write code. Lambdas in Java are an 80% solution. Java 8 really did make this language finally bearable, sometimes pleasant to work with. 80% solutions are good, but the remaining 20% is not "negligible".


A Turing-complete macro system is seen as a positive thing, but if someone dares to mention Java's annotation-based code generation and DI, everyone complains about "too much magic".



Nit-picking a specific claim made in the article, but I honestly don't believe Lisp would be any less confusing to new programmers than Java. In fact, I think it would probably just confuse them more.


FTA :

>>> But for a skeptic like myself — who spent 4 months intensively practicing Tai-Chi in Beijing, just to confirm the existence of “qi”

Has anybody here experienced this chi thing? My tai chi teacher can surely demonstrate it, but I have a hard time understanding it with my occidental way of looking at physics... My understanding is that it's more a state of mind that allows the body to move in a very specific way.


One big thing I notice about Java (and its ancestor, C) is that it's designed to make it easy to pinpoint exactly what went wrong, when something goes wrong. Lisp, on the other hand, is designed so that you can get it right the first time.


My biggest gripe with Java is its insane backslash-enhanced regular expressions. No other language seems to require this ridiculous butchering of what brought me into programming.
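i.e. the double escaping you get because regexes live inside ordinary string literals; a small illustration:

    import java.util.regex.Pattern;

    class RegexDemo {
        public static void main(String[] args) {
            // The regex \d+\.\d+ has to be written with every backslash doubled:
            Pattern p = Pattern.compile("\\d+\\.\\d+");
            System.out.println(p.matcher("pi is 3.14").find()); // true
        }
    }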


Java 12 is going to add raw string literals which will fix this problem. https://openjdk.java.net/jeps/326


I tried reading the Java Language Specification once, but it was such insane garbage I stopped and never gave it another look.


I had the diametrically opposite experience: I read the Java Language Specification and the Java Virtual Machine Specification back in 1996 - after many years of programming in (pure object-oriented, dynamically typed, invented by Alan Kay) Smalltalk. These two specs changed my course in IT forever... More than 22 years later, I have no regrets about reading them...


I read parts of the JLS and I found the text quite down to earth compared to the respective specifications of C, C++, and ECMAScript (JavaScript). Oh and as for scripting languages like PHP, Python, and Ruby? They don't have a standard because the implementation is the spec!



An old lisper would have used '(Hello World) over "Hello World". We are dealing with symbols mostly, not strings.


Why are we still discussing it?

Java is used in most companies in the world. Lisp, on the other hand, is not advised for use in production.

To me it looks like a long-lost battle


Lisp is the best language in the world. ^^



