Jodd – The Unbearable Lightness of Java (jodd.org)
177 points by datalist on Jan 17, 2022 | 232 comments



I remember the days when the Spring framework was advertised as a lightweight alternative to Enterprise JavaBeans (EJB); now Spring has outgrown the pretense of being lightweight, and I don't know when that happened. A year and a half ago, I got back to working with Java and Spring Boot, and I was overwhelmed by the prevalence of annotations in Spring Boot.

To cope with all this, I wrote this little project: https://github.com/MoserMichael/ls-annotations

It's a decompiler that lists all annotations, so it becomes easier to grep a text file to detect the dependencies between annotations.

It uses the ASM library (https://asm.ow2.io/) to inspect the bytecode files and extract the class signatures, along with the declarations and references of annotations, from a classpath or from class files within a directory structure. A limitation/feature is that it inspects already-compiled bytecode files.
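The same idea can be sketched with plain reflection on already-loaded classes (the real tool reads raw `.class` files through ASM's `ClassReader`/`ClassVisitor` instead, so nothing has to be loaded; the annotation and class names below are made up for illustration):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical annotation and annotated class, for illustration only.
@Retention(RetentionPolicy.RUNTIME)
@interface Component {}

@Component
class BeanUnderInspection {}

public class AnnotationLister {
    // Emit grep-friendly "annotation -> class" lines for one class.
    static List<String> listAnnotations(Class<?> cls) {
        return Arrays.stream(cls.getAnnotations())
                .map(a -> a.annotationType().getName() + " -> " + cls.getName())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        listAnnotations(BeanUnderInspection.class).forEach(System.out::println);
    }
}
```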


Indeed. I have a dozen or so microservices supported by my team. Most are Spring Boot; a couple of them I wrote myself with plain Java and embedded Tomcat. Needless to say, the Spring Boot stuff is rather complicated for such simple business functionality. Errors are indecipherable, swamped by thousand-line framework exception traces. But it being the "enterprise standard" framework, all projects must be moved to this turd of a framework.


And in 99% of cases that little microservice will suddenly need thread pooling, logging, some more advanced DB management, or, God help us, some random messaging service, and you are back to re-implementing the myriad features of Spring in a shittier way.

It is not an accident that things like Ruby on Rails are popular. These are well-tested toolboxes with a solution for almost every conceivable problem. There are cases where such a toolbox is not needed, but for business applications those are not numerous.


I don't think people have any issue with the fact that Spring is batteries-included. It seems to me (and this is my personal experience too) that the large amount of abstraction and indirection through annotations makes the code very hard to parse. It makes it hard to build a mental graph of how it all works together.


My experience is that it's not only the usage of annotations, but the way Spring handles/implements those annotations which is confusing.

As an example, Micronaut[1] also uses annotations a lot, but their implementation is a lot easier to reason about, because there is less indirection with proxy objects and other weird stuff that Spring uses.

Micronaut does not implement nearly as many annotations as Spring though, which basically means less functionality pre-built. I'm not sure that's a bad thing, but it could be.

[1] https://micronaut.io/


I found the real challenge to be that it's very difficult - if not impossible - to determine how Spring functions simply by reading code and using it. In simpler libraries or frameworks, I normally just read the source to understand how I should be using it. With Spring, I've had to spend a lot of time reading and re-reading docs to understand what's going on.

I think that this is sometimes a hard shift for developers who otherwise have spent their lives with an ability to puzzle out the constructs that they come across.


I absolutely understand it, but I think the correct, although a bit inconvenient, approach is the one you mentioned — properly learning the framework either through docs or other materials.

Way too many developers try to write Spring (but also JPA and many other useful but complex tools) by trial and error, which, let's be honest, is not a good tactic even if one can easily inspect the source. (The recently posted Microsoft blog post, "even if the precondition doesn't do anything, you still have to call it", comes to mind.)


You're, of course, right that you just need to study and learn a complex framework. Basically the same way that you learn a new programming language: they're all different and have different behaviors and idioms (pass-by-copy vs. pass-by-reference, etc).

However, there's another dimension here, and while it's not totally unique to Java, it's definitely present in larger magnitude in Java, in my experience. There are two parts:

1. None of these frameworks are 100% consistent. I haven't used Spring{,Boot} in years, but I can tell you that JPA/JDBC are full of little "surprises" and rough edges, like handling nullable database columns. If you are not careful with your annotations, you'll get a `0` value for your `int` field instead of the `null` that was in the database. You can then go for quite a while before you figure out that's what happened. Similarly, JacksonXML has all kinds of little gotchas when it comes to date-time types and timezones, primitives and null-ness, etc.

2. Most projects have more than one of these complex frameworks. See above. I listed JacksonXML and JPA/JDBC. Odds are that you have AT LEAST these three frameworks (including Spring) in your Java project, which means you have to study all three and learn all of their intricacies before being confident in the code you write. That's on top of learning how to write half-decent Java, which is hard enough with its type-erased generics, bug-prone null-handling, and very verbose class definition syntax. If it were just one thing, I'd be sympathetic and tell people to just RTFM. But, unless you plan on only writing Java code for the next decade+, I have come to believe that it's probably not worth it.
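The nullable-column surprise from point 1 comes down to plain Java semantics: a primitive field simply cannot represent SQL `NULL`, so a mapper that skips a null column leaves the default `0` behind (JDBC itself has the same convention: `ResultSet.getInt` returns `0` for `NULL`, and you have to remember to call `wasNull()` afterwards). A minimal sketch, with a made-up entity:

```java
public class NullColumnPitfall {
    // Hypothetical mapped entity; field names are illustrative.
    static class Account {
        int balance;          // primitive: SQL NULL silently becomes 0
        Integer balanceBoxed; // boxed: SQL NULL stays observable as null
    }

    public static void main(String[] args) {
        // Pretend the mapper saw a NULL column and therefore set nothing:
        Account a = new Account();
        System.out.println(a.balance);      // 0 -- indistinguishable from a real 0
        System.out.println(a.balanceBoxed); // null
    }
}
```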


I don’t think that it is any easier in other languages either — object serialization, conversion between language’s object/json/xml/etc, and database access with object relational mapping is just complex. You can make the trivial way trivial, but you have to expose the hard ways as well and that will not be pretty either way. For what it's worth, Java has a really high quality ecosystem for all these things - I would be really interested in what you think of as an alternative. Sure, there are alternatives as well within the JVM ecosystem (though with much smaller user bases), but .NET is famous for having worse copies of Java libs, Node.js is in my opinion terrible for enterprise-scale applications, there are Erlang, Ruby, PHP and whatnot, and sure enough you can make good web applications in any of them, but I really don’t buy that any of them would be easier.

Also, java is solid as a language. Sure, it is not the most modern one, but it is reasonably productive, has great tooling, is very performant and perhaps most importantly, it is observable in a very fine way.


> I don’t think that it is any easier in other languages either — object serialization, conversion between language’s object/json/xml/etc, and database access with object relational mapping is just complex. You can make the trivial way trivial, but you have to expose the hard ways as well and that will not be pretty either way.

I agree that these things are just complex. But, what's interesting to me is that I fully agree with your second sentence and I see it as an indictment against the Java ecosystem around these operations. The common Java frameworks, IMO, serve to make the trivial stuff even more trivial, but then make the complex stuff even more complex. It's exactly the opposite of what I want.

Let's look at JacksonXML for serialization.

Here we have a framework that uses runtime reflection to more-or-less guess how to (de)serialize an object. Figuring it out at runtime is a tough engineering choice already, because it pretty much immediately means that you're going to have to figure out a caching system for the types you've already analyzed, so that performance isn't terrible. And we all know how hard caching is.
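Incidentally, the JDK ships a building block for exactly that kind of per-class cache, `java.lang.ClassValue`, which is roughly what such frameworks end up reinventing; a small sketch of memoizing a reflective analysis:

```java
public class ReflectionCache {
    static class Pair { int a; Integer b; } // a type to analyze

    // Memoize an "expensive" per-type analysis (here: counting fields).
    static final ClassValue<Integer> FIELD_COUNT = new ClassValue<Integer>() {
        @Override
        protected Integer computeValue(Class<?> type) {
            // Normally runs once per class; later get() calls hit the cache.
            return type.getDeclaredFields().length;
        }
    };

    public static void main(String[] args) {
        System.out.println(FIELD_COUNT.get(Pair.class)); // 2
    }
}
```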

But, on top of that, Java uses type-erased generics, so you can't actually reliably use runtime reflection to figure it out! But the compiler certainly won't tell you it's a problem, because Jackson will try to (de)serialize anything you throw at it. You don't even need any annotations for most stuff. It "just works" (TM)... until it doesn't.
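That erasure problem is demonstrable with nothing but the standard library: at runtime a `List<String>` and a `List<Integer>` are the same class, so reflection alone cannot tell a deserializer what the element type was (which is why Jackson needs the `TypeReference` workaround, an anonymous subclass that smuggles the generic type through to runtime):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Both erase to java.util.ArrayList; the element type is gone.
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // true
    }
}
```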

So, if you use generics or inheritance or any non-trivial mapping, you have to write a custom serializer. Okay, that's no big deal.

But then you realize that JacksonXML will IGNORE time zone information on an incoming serialized date field that is encoded as an ISO8601 string and just use the current JVM system time zone. Because why the fuck not, I guess?

So, in other words, Jackson makes already-trivial things a little less verbose, it makes non-trivial things a pain, and it even makes some things that should be trivial into a pain.

I can play the same game with JPA/JDBC. In particular, it also does really stupid things with time zones and date-time types. It also can't really handle complex types for similar reasons to JacksonXML.

> For what it's worth, Java has a really high quality ecosystem for all these things - I would be really interested in what you think of as an alternative.

My favorite languages to work with at the moment are Rust, Swift, and Kotlin. All three have way better serialization stories than Java. Rust has serde, which is like JacksonXML, except it's compile-time and your types have to implement a Serialize "trait" (interface). I truly think I'm being honest when I say that I've NEVER had a runtime serialization (type) error in my Rust projects that have used Serde. If it compiles, it works. The same is almost true of Swift and Kotlin (with kotlinx.serialization), except I do think I've encountered some runtime issues in both of those (IIRC, there was a surprising limitation with Swift Dictionary keys needing to be Strings or something). Kotlin's approach is my least favorite of the three.

When it comes to ORM/SQL stuff, I've been using a query builder in Rust that's a delight to use. Basically, when you execute a query, you use the type system to indicate the type you expect the returned rows to be. As long as the type implements a `FromRow` trait, it will "just work" or return an error value or crash (you can choose to call a "safe", error-returning, function or a "crashy" version of the function that assumes success and crashes otherwise). Rust has ad-hoc tuple types, and the library implements its `FromRow` trait for all standard types as well as all tuples up to 13 or so elements, so often you can just write something like `let (id, name) = query<(u32, String)>.execute();` and it will just work. If the `id` column is `null`, then the call will FAIL instead of doing something insane like just making up data (like JDBC returning `0` instead of `null` for nullable int columns).

The Rust query library I'm using is the perfect example for the point you made earlier about making trivial things trivial. This library does require a little bit of boilerplate to implement `FromRow` for your custom object types. It's not really worse than JPA for the simple cases, but it's WAY ahead when it comes to dealing with the non-trivial cases.

> Also, java is solid as a language. Sure, it is not the most modern one, but it is reasonably productive, has great tooling, is very performant and perhaps most importantly, it is observable in a very fine way.

Credit where it's due: Java does have phenomenal tooling and it's very fast (except when you use the frameworks that we're discussing...). I think I'd lump in the observability with the phenomenal tooling.

But, no, I wouldn't call it a solid language. It's far too primitive and bug-prone for writing robust applications: the null issue doesn't really need to be explained; add the ease with which we can leak resources from Closeable things, the awkwardness of the type system (e.g., being unable to implement an interface for types you don't own, it being extremely tedious to define "newtypes" like a `NonEmptyString`, etc), the incompatible-yet-ubiquitous use of runtime reflection and type-erased generics, etc.
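To make the "newtype" complaint concrete, here is roughly the ceremony it takes to wrap a `String` so that emptiness is unrepresentable (a `record` would shorten this somewhat, but the wrapper still can't be used where a `String` is expected):

```java
import java.util.Objects;

// The full ritual for a value type that merely forbids empty strings.
public final class NonEmptyString {
    private final String value;

    public NonEmptyString(String value) {
        Objects.requireNonNull(value);
        if (value.isEmpty()) {
            throw new IllegalArgumentException("must be non-empty");
        }
        this.value = value;
    }

    public String value() { return value; }

    @Override public boolean equals(Object o) {
        return o instanceof NonEmptyString
                && value.equals(((NonEmptyString) o).value);
    }
    @Override public int hashCode() { return value.hashCode(); }
    @Override public String toString() { return value; }
}
```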

I'm sure you're a Java expert, but I'd wonder how many years you think it took you to get to the point where you feel like you aren't bitten by all of the things I've described in this comment. If the answer is more than 1, then I'll go ahead and assert that Java is not a good programming tool for the domain in which you work. I've been working with JVM languages for about 5 years now, and I'll say that I'm now familiar enough to avoid these traps most of the time, but holy shit- it should not have taken nearly that long.


Thanks for the non-flame-baity answer! Hopefully I wasn’t too emotional in my previous reply, because it unfortunately does happen from time to time.

Regarding Jackson and JPA the only thing I can tell about these is that their age shows, and they come from a domain and age where the (in my opinion, bad) POJO and Java Beans conventions originate. So I fully agree that things could be much better, and hopefully there will come a renaissance replacing these tools with modern java equivalents, that don’t rely on runtime magic as much, and will use the modern datetime APIs by default, etc. Serialization is especially in need of a huge revamp, hopefully records will make it much better.

Regarding ORMs, have you by chance tried JOOQ? You may prefer it over JPA.

Also, just a small note on Rust - I find it to be an excellent language, but I really don’t think it fights in the same domain as Java. Systems programming is fundamentally different. So writing a huge business application in Rust (or in C++, equivalently) is a suicide mission in my opinion — initial write time may indeed be low for an experienced Rust dev, but with the often changing client requirements that mandate quick changes touching everything, the low level details that leak into the high level view of the app will slow one down (now you also have to change the memory model because this lifetime has to be extended, etc). But I only mention that as an explanation for why Rust is not a replacement for the huge, ever-living business app domain (at least for me)

So all in all, I think that Java fights a good fight. It remains religiously backwards compatible, which is painful at times but is perhaps the biggest value there is; and it is improving at a huge pace with records and sealed classes (giving us algebraic data types); upcoming `with`ers will provide good syntactic sugar for “modifying” immutable objects, and full-blown pattern matching is coming, built on top of these. But the real deal happens under the hood: Loom will make blocking code magically non-blocking, and Valhalla tries to heal the rift between primitives and objects and will provide an excellent performance boost. So the part I actually like and defend about Java is this one, not the historic baggage it comes with. But I also work on CRUD apps, and that’s not an exciting domain no matter what.


> Thanks for the non-flame-baity answer! Hopefully I wasn’t too emotional in my previous reply, because it unfortunately does happen from time to time.

I didn't pick up any high emotions, but I get it. For some reason, I get fiery about this stuff, too. I don't know if it's that I get equally worked up no matter what I'm arguing about, or if it's worse because I'm passionate about computers and programming.

> Regarding ORMs, have you by chance tried JOOQ? You may prefer it over JPA.

I haven't used it, but I've read their docs and API. It looks great, albeit very large. It also preserves some of the... conventions... from JDBC and JPA that I find egregious, like converting null to actual values when mapping query results (https://www.jooq.org/doc/3.15/manual/sql-execution/fetching/...). At this point I have to assume that Java devs actually prefer this behavior, but I think it's crazy- if I expected a non-null int and I read out a null, I want to crash- not pretend like I got a valid int...

> Regarding Jackson and JPA the only thing I can tell about these is that their age shows, and they come from a domain and age where the (in my opinion, bad) POJO and Java Beans conventions originate. So I fully agree that things could be much better, and hopefully there will come a renaissance replacing these tools with modern java equivalents, that don’t rely on runtime magic as much, and will use the modern datetime APIs by default, etc. Serialization is especially in need of a huge revamp, hopefully records will make it much better.

> Also, just a small note on Rust - I find it to be an excellent language, but I really don’t think it fights in the same domain as Java. Systems programming is fundamentally different. So writing a huge business application in Rust (or in C++, equivalently) is a suicide mission in my opinion — initial write time may indeed be low for an experienced Rust dev, but with the often changing client requirements that mandate quick changes touching everything, the low level details that leak into the high level view of the app will slow one down (now you also have to change the memory model because this lifetime has to be extended, etc). But I only mention that as an explanation for why Rust is not a replacement for the huge, ever-living business app domain (at least for me)

At the risk of coming off as a combative asshole, I'm going to pick on you for second because it's relevant to the part about Rust.

In your previous comment, you asserted that serialization and ORM/data-access is just as difficult in every other language as it is in Java. You also asserted that Java has a "really high quality ecosystem for all these things".

But, here you're acknowledging that serialization and data mapping are "showing their age", follow "bad" conventions, "could be much better", shouldn't rely on runtime magic, and are in need of a "renaissance".

Even though I wasn't advocating for Rust as a great fit for enterprise web apps, I will say this: I think you're rationalizing. You've already made up your mind that Java is good for application development and has a good ecosystem. But I think I can make a solid case that Java is a primitive, unexpressive, bug-prone, language with a large-but-mediocre ecosystem (where the most widely used parts need a "renaissance").

Even after I argued (convincingly, I assume, since you changed your expressed opinion about serialization and data-mapping in Java) that Rust is better at both serialization and data-mapping, you're asserting that Rust would be a suicide mission for a large scale business app. Well, it's apparently better at two of the most fundamental parts of any web app, so I think it's looking pretty good as far as suicide missions go.

Hell- Java doesn't even have single-thread concurrency!

Rust's type system allows us to express more elaborate abstractions with less code than Java (enums vs. sealed classes, type classes vs. adapters and decorators).

Rust makes concurrent code safe. No need to remember to use mutexes or to make inefficient copies/clones of "immutable" classes- if you write code that would cause a data race, it just won't compile. In Java, you'll just get bugs and corrupt data.

In Rust we'll never get NPEs.

So, I don't agree with your assessment at all. Writing a large enterprisey business app in Rust will likely run faster, have fewer bugs, use less memory, and even scale out better. If you're hosting your app on a cloud provider, it'll cost you less money to operate as well.

I think that you just want to believe that Rust would be worse than Java, and I think the cargo cult agrees with you. But, having done significant work in both languages, I think that's incorrect. My Rust code is generally an order of magnitude less maintenance than my Java and Kotlin code have been. Furthermore, it took me LESS time to become truly proficient at Rust than it has to become proficient (as in writing relatively few bugs the first time) in Java and Kotlin.

Some day, Valhalla will land and Java will get some things that other languages have had for a while. And eventually, Java may even become a solid language for its major use-case. But today is not that day. And today we have languages that are already better. Literally everything you listed (sum types, records, pattern matching, and non-blocking) already exists in Rust and Swift (and Scala, and Kotlin).


> So, I don't agree with your assessment at all. Writing a large enterprisey business app in Rust will likely run faster, have fewer bugs, use less memory, and even scale out better.

That's true, but which is more flexible for the "ever changing living business app domain" the GP is alluding to? You seem to keep ignoring this part; flexibility matters. In Rust it's easy to code yourself into a corner and spend lots of time rewriting stuff over and over.


Fair. You're right that I didn't address that concern.

I guess the problem is that I don't know what we mean by "flexible". The GP did mention lifetimes around the same part of their comment, so I assume that there's some concern about business requirements changing in some way, and that Rust's lifetimes would get in the way of adapting to code to meet the new requirement.

Is this also what you mean by "code yourself into a corner"? Or are you thinking of a superset of that?

When we say "flexible" are we talking about the language being opinionated about the style of code we write or are we talking about the language making it harder to be agile in the face of requirement changes? It sounds like we're talking about the latter.

First, let me repeat myself that I don't believe Rust is the ideal enterprisey web app language. There's almost no reason that a web-app benefits from a language not having garbage collection or automatic ref-counting (like Swift).

But, I'm not backing down from my assertion that Rust is still probably a better web app language than Java, if we're willing to ignore Java's ecosystem's 30 year and billions of dollar head-start for niche, vendor-specific, libraries. Or, phrased another way, just because FooCorp gave you a jar file to connect to their smart sex swing, that doesn't make Java a better language in a fundamental sense, even if it does force your hand from a business and engineering perspective.

So, since I don't actually know what we're talking about with "flexibility", I'll just ramble about a few things.

First, lifetimes. Lifetimes are scary. But, I honestly don't see how or why lifetimes should be an issue in a high-level application, like a web app. If you have any specific scenarios, examples, or lived experiences, please share. Let me explain some of my experience with writing a couple of web services in Rust, with respect to lifetimes.

I've been using an http server called Actix-Web when I do Rust web stuff. It uses the same architecture style as Vert.x: it spins up N reactors (where N = number of CPUs by default) and each reactor is single-threaded and concurrent, which means that once a Request is routed to an available reactor, it never leaves that thread. This means that there are no complex lifetime issues with handling a Request- all of your logic can be single-threaded and treats the Request as having "static" lifetime (the Request outlives your handler function, so as far as your function knows, the Request lives forever. The caveat is that you borrow its content such as headers, uri, etc and would need to take copies if you wanted to send them elsewhere). I've never had non-trivial issues with lifetimes when it comes to the basics of request processing.

The SQL query builder I referred to in a previous comment gives us a transaction object with a legit lifetime, because it uses RAII to close the transactions and to return connections to the connection pool. The only time this has caused me grief was when I was trying to be clever by implementing a type class around transactions for some reason that I don't even remember now. I don't see how or why a changed business requirement would require us to extend a transaction's lifetime explicitly.

A further point: it's fairly easy to leak resources in Java because you can't do RAII except with the try-with-resources stuff. But, you can easily forget to use try-with-resources and leak. Or, on the opposite side, since an object can still be referenced after you close it, you could pass an already-closed connection around and cause an error far from where the connection was first obtained and/or closed. In Rust, such mistakes would never compile.
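The use-after-close half of that is easy to show with a stand-in resource (`FakeConnection` below is hypothetical, playing the role of a JDBC connection): the type system happily lets the closed object travel on, and the failure surfaces wherever it happens to be used:

```java
public class UseAfterClose {
    // Stand-in for a real connection; name and behavior are illustrative.
    static class FakeConnection implements AutoCloseable {
        private boolean closed = false;
        String query() {
            if (closed) throw new IllegalStateException("connection is closed");
            return "row";
        }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        try (conn) {                 // try-with-resources closes it here...
            System.out.println(conn.query());
        }
        // ...but the reference is still perfectly fine to the type system:
        try {
            conn.query();            // blows up far from where it was closed
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```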

Really, in an idiomatic Rust app, I would expect that the only place where you'd see explicit lifetimes is from RAII. Everything else is either going to be plain-old-data or some kind of ever-living actor/service. I'd be surprised to know that a high-level app is actively managing lifetimes of pretty much anything.

I'm not saying that it's impossible to end up with some ugly function signatures because of lifetimes. I can imagine writing a function that takes two parameters with independent lifetimes. But, I don't know why it would limit your agility.

Moving away from lifetimes.

Rust does preclude certain designs and architectures. You can't really do self-referencing structs (easily/simply/whatever), so you're not going to see a complex web of sibling objects referencing grand-parent objects, referencing the town they live in, referencing the grand-child objects. In this sense, yes, Rust is less flexible, and if you try to write Java style code in Rust, it's going to be painful. But does this make a Rust app less agile? In my opinion, no. Sure, you need to write your code in a different style than you would with a different language. And, sure, there's a learning curve to writing "good" Rust code, but are you willing to tell me that there isn't a learning curve to writing enterprise Java app style? The millions of pages printed and watts burned by people teaching and learning Gang Of Four design patterns, and Domain Driven Design, and Clean Architecture would suggest otherwise. Then the millions of watts burned on StackOverflow posts about == vs .equals(), and how static methods work with inheritance, and how to implement a generic interface for multiple types (you don't), and what the difference is between DAOs and Repositories and Services, etc, would also suggest otherwise.

In fact, here are some things that have made my Rust code MORE agile:

* You know how people praise static typing as allowing more confidence in refactoring? The idea of doing a big refactor of a Python or JavaScript code base makes me break into a cold sweat. Rust's type system is way stricter than Java's and I'm much more confident that when I refactor Rust code, I won't accidentally introduce a race condition or resource leak.

* If I want to extend a type with new functionality, I don't even have to own that type. Or, if I do, I don't even have to change the original file. I can define a new trait *and* write the implementation of that trait near the code that uses it. How do you do it in Java? You write your new interface and then write a wrapper class that delegates to the original class. Except now, you can't use that wrapper class in place of the original- you have to convert back and forth. Not so in Rust. Much more "flexible", IMO.

* modules > packages for namespacing and visibility.

* traits allow me to define/require "static" methods on implementing types.

* If you have two interfaces, Foo and Bar, in Java, and you want to write some code that does something special for a type that is both Foo and Bar, what do you do? A generic method can be bounded with `<T extends Foo & Bar>`, but as soon as you want a plain variable, field, or return type of that combined shape, you have to define a new interface called FooBar that extends Foo and Bar, and then go find every class that implements both Foo and Bar and change them to implement FooBar instead. In Rust, I can just write a function: `fn do_stuff<T: Foo + Bar>(o: T)`. Done. Didn't have to define a new type, didn't have to touch old stable code, etc.

* I can implement a generic trait for multiple type parameters (eat that, Comparable<T>!).

All of the above have allowed me to add or change functionality with minimal added code and minimal regressions.
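One footnote on the Foo-plus-Bar point: for a generic method parameter specifically, Java can use an intersection bound (`<T extends Foo & Bar>`) without a new interface; the named FooBar interface only becomes necessary for variables, fields, and return types. A sketch (names are illustrative):

```java
public class IntersectionBound {
    interface Foo { default String foo() { return "foo"; } }
    interface Bar { default String bar() { return "bar"; } }
    static class Both implements Foo, Bar {}

    // An intersection *bound* on a type parameter: no FooBar interface needed.
    static <T extends Foo & Bar> String doStuff(T o) {
        return o.foo() + o.bar();
    }

    public static void main(String[] args) {
        System.out.println(doStuff(new Both())); // foobar
    }
}
```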

Java being flexible is a truism, IMO. It's not flexible. We've just mastered it to the point that we don't even try to do things that we know are impossible, but are totally reasonable to want to do. We've gotten so used to its restrictions and limitations that we don't even see them anymore, or we just pretend like it's actually better this way.


Rust is great for reliable production systems, for sure, but for "let's quickly prototype this new feature" it’s too strict. Imagine figuring out a perfect algorithm and spending a few hours implementing it, just for the borrow checker to tell you it won’t pass.

When these new features start to queue up, I’m happy to have leaks as long as I get to try out ideas quickly (you harden them later). And it’s hard to convince Rust to let go.

Maybe it's my inexperience with Rust; I definitely need to give it a second try for more than three weeks, but I haven't had a good reason to do so.

I don't like Java at all and prefer Clojure when on the JVM, but as you said, the Java ecosystem (the libraries get the job done) and the GC are definitely good reasons to pick it for webapps.


> Rust is great for reliable production systems, for sure, but for "let's quickly prototype this new feature" it’s too strict. Imagine figuring out a perfect algorithm and spending a few hours implementing it, just for the borrow checker to tell you it won’t pass.

> When these new features start to queue up, I’m happy to have leaks as long as I get to try out ideas quickly (you harden them later). And it’s hard to convince Rust to let go.

I don't know. This is starting to feel like moving the goalposts.

The first person I replied to claimed that Java's ecosystem is high quality and that serialization and data-mapping is not only good in Java, but that it's not better in any other language.

I showed that both of those claims are false.

Then they claimed that Rust, being a systems language, was not suited for enterprise apps with evolving requirements. And you kept me honest about addressing the evolving requirements part.

I explained how Rust apps will run better, scale better, be more robust and bug-free, AND allow us to better adapt to changing requirements than Java.

And now, of course, it's some other reason that Rust can never work.

I'm sorry for the snark, but it always seems to be the EXACT same common talking points over and over again when it comes to Rust nay-saying, and after having worked with Rust on-and-off over the last 4-5 years it just gets exhausting hearing about all of these hypothetical things that don't happen in real life, from people who haven't actually used Rust in a real project, but somehow seem to know what it's good and bad at (and what a coincidence! It's bad at the exact thing they're working on, and the language they chose for the task was definitely the best choice! I'm happy for them, but it's discouraging to know that I'm the only person who ever makes the wrong choice sometimes.).

I mean, it's literally always even the same words. It's always "prototype" and "quick and dirty" and "that darn borrow checker!". You'd think that everyone on HN and Reddit were Thomas Edison with all of the "prototypes" they're writing.

I mean, what exactly do you think is going to happen in your quick prototype Rust code? A reference to a piece of data is going to come out of nowhere and sink your experiment? Hell no. At worst, you're going to type `.clone()` and `.unwrap()` a bunch of times to take copies instead of references and crash on any errors, and call it a day. If this is experimental-prototype-whatever, then what the heck do you think you're doing writing a bunch of fancy lifetimes and cross-thread mutable data sharing?

This shit doesn't happen.

You know what does happen when I try to "prototype" in a "good" prototyping language like Python or JavaScript? I spend a bunch of time re-running the same code over and over and over until I stop forgetting or misspelling what keys are on the dictionary I passed to the function, or accidentally passing arguments in the wrong order.

When I "prototype" in Java, I can't figure out if my algorithm sucks or not because I accidentally wrote `o1 == o2` instead of `o1.equals(o2)` or I forgot that getting a `null` from a Map could mean that the key has no entry in the Map *or* that an actual `null` value was inserted into the map for that key. Or, I get an NPE because I accidentally mixed up `int` and `Integer` somewhere.

I've already spent too much time on this. If you decide to give Rust another shot some day, that's great. If not, that's fine, too. There are probably other languages that will give better bang for your buck, too, like OCaml or Scala (3). I like Clojure as well, even though I prefer my static types. Clojure is at least a well-designed and consistent language. Java is a hot-mess.

EDIT: Also, if the borrow checker won't let your algorithm pass, it almost definitely means your algorithm has a race condition that you might not have realized. "You're welcome for not letting that crap eventually end up in prod" - Borrow Checker


Fair, I'll shut up until I have more real-world experience with Rust. One thing I remember well that really annoyed me was the extremely slow compile times (this was two years ago, maybe it's much better now), since I value fast feedback. I understand that for some the compile-time cost is worth it and doesn't bother them.


If you have two interfaces, Foo and Bar, in Java, and you want to write some code that does something special for a type that is both Foo and Bar, what do you do? It's been a while, but if I remember correctly, you have to define a new interface called FooBar that extends Foo and Bar, and you have to go find every class that implements both Foo and Bar and change them to implement FooBar instead. In Rust, I can just write a function: `fn do_stuff<T: Foo + Bar>(o: T)`. Done. Didn't have to define a new type, didn't have to touch old stable code, etc.

Since Java 8 there are static and default methods in interfaces.

Curious, which Rust SQL query builder are you referring to?


> Since Java 8 there is static and default methods in interfaces.

That's not relevant to the part of my post that you quoted...

I was describing a hypothetical situation where you already have two interfaces, but would like to have functionality that only makes sense for an object that implements BOTH of those interfaces. The only way to do that in Java is to write a THIRD interface that combines the two and then go through and change your implementations to implement that new interface instead of the two separate ones.

However, in the bullet point above, I mentioned static methods on traits in Rust, which are different from static methods on interfaces in Java. In Java, a static method on an interface is just a function on the interface itself. In Rust, a trait can declare that an implementing type must have a static method matching the signature. This is because Rust traits are type classes, while Java interfaces are just object interfaces and cannot constrain the implementing type itself.
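To make the Java side of that concrete, here is a minimal sketch (all names are made up): a static method on an interface is just a function namespaced under the interface, and it places no requirement on implementing classes.

```java
// Hypothetical example: Parser's static method belongs to the interface
// itself; implementors neither inherit it nor are required to provide one.
interface Parser {
    // Instance method: every implementing class must provide this.
    int parse(String input);

    // Static method: effectively a free function scoped under "Parser".
    static Parser defaultParser() {
        // Parser has exactly one abstract method, so a matching
        // method reference can serve as an implementation.
        return Integer::parseInt;
    }
}

public class StaticMethodDemo {
    public static void main(String[] args) {
        // Called via the interface name only; no implementor involved.
        Parser p = Parser.defaultParser();
        System.out.println(p.parse("42")); // prints 42
    }
}
```

Contrast with a Rust trait, where a required associated function would have to be supplied by each implementing type.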

> Curious, what rust sql query builder are you refer to?

I have been mostly using mysql_async (https://docs.rs/mysql_async/latest/mysql_async/), but recently started playing with sqlx (https://github.com/launchbadge/sqlx). I guess "query builder" isn't the right way to describe them, but I'm not sure what else to call them...


Looks like an IDE / language server that unrolls the annotation generated code might be what's missing here. I like my abstractions to be hidden, but I like to be able to peek under the hood. That's one of the problems of C++ templates, sometimes I want to look at the expanded code.

The GNAT Ada compiler has an option to output much-simplified code. Not compilable Ada, but very inspectable unrolled, expanded code. Makes for a great teaching tool. Aaaaaaah, so this generic mechanism does that!

Edit: link https://docs.adacore.com/gnat_ugn-docs/html/gnat_ugn/gnat_ug... look up -gnatG[=nn]... Good stuff.


> Looks like an IDE / language server that unrolls the annotation generated code might be what's missing here.

I think you're missing the point.


Lucky me, there were libraries available for mail, JMS, Kafka, logging, and so on. Also, implementing things in a shittier way than Spring is a difficult feat to achieve for my modest skillset.

However, there were some impossibly complex requirements, like a thread pool, and with great effort I was able to find a solution in the standard JDK:

`ExecutorService exec = Executors.newFixedThreadPool(st.threads);`

Spring could have greatly simplified this code I guess.


> It is not an accident that things like ruby on rails are popular.

Now if we could have that without the magic (neither from annotations nor from open classes), and with strong type safety and proper sum types... That'd be great!


> I have a dozen or so microservices supported by team.

Why do you need a dozen microservices? Why not use role-based monoliths? Why not keep your "microservices" as independent modules, pack them as one app, and let that app configure itself with the proper set of services and dependencies according to config or CLI parameters?..


Just a developer there, so not calling the shots on overall architecture. And yes, we probably don't need 80% of that crap, but I'm not in a position to make them see reason.


I'm in an enterprise and have successfully lobbied people to use anything other than Spring. Such organizations and teams while rare do exist so don't give up hope! (or just move to a more progressive org)


> I'm in an enterprise and have successfully lobbied people to use anything other than Spring.

I was in a team that used Spring Boot for a greenfield project. The documentation was great, there was tons of help on Stack Overflow (as it's Spring), and the consideration given to testing was first class. Deployment was also easy, as we just created a fat JAR. No application server necessary.

It was a great place to work.


I have also been in multiple organizations where Spring was used, including "modern" Spring Boot and greenfield projects, and people who knew every nook and cranny of Spring.

I don't agree with any of the things you bring up.

Spring documentation is and has always been poor and the sheer volume of outdated documentation (let alone ways to do the same thing) makes it needlessly difficult to find an answer to any given question.

Differences between the real app receiving real HTTP requests and the Spring test application context with Spring HTTP tests result in tests misleadingly passing when things are actually broken (or vice versa).

This is different to e.g. DropWizard, where you actually boot the app (no different to how it boots in a real env, i.e. no "test application context") and make real HTTP requests to it (not some watered-down fake Spring HTTP test requests).

Ability to deploy a standalone jar without the need for an application server is hardly unique to Spring.

Add in the horribly, horribly ingrained, unidiomatic ways people use Spring (e.g. sprinkling field autowiring all over the place instead of using constructor injection) etc etc, and every codebase is quickly completely ruined vs if it had been implemented in literally any other framework.

But! Fortunately for you, the Java community has made their choice, and it's not going to change - Spring is the default and correct option, and anyone who uses any other framework is just stubborn and wrong.


> Spring documentation is and has always been poor and the sheer volume of outdated documentation

Spring documentation is excellent. I had to learn Spring as a PHP developer, so I put the documentation onto a Kindle and read it. It's also versioned, so you don't need to read out of date versions:

https://docs.spring.io/spring-framework/docs/

> This is different to e.g. DropWizard, where you actually boot the app (no different to how it boots in a real env, i.e. no "test application context") and make real HTTP requests to it (not some watered-down fake Spring HTTP test requests).

Spring Boot allows you to write full application tests that will boot it up on a random port with the @SpringBootTest annotation, as is covered by the excellent documentation:

https://spring.io/guides/gs/testing-web/

> Add in the horribly, horribly ingrained, unidiomatic ways people use Spring (e.g. sprinkling field autowiring all over the place instead of using constructor injection)

You can use whichever.

> anyone who uses any other framework is just stubborn and wrong.

Not at all. Spring Boot is just a great solution, that's all. It's got strong support from a company, lots of documentation, and first-class support for testing. It also allows you to easily swap out different underlying technology, e.g. you can switch from Jetty to Tomcat, or from Liquibase to Flyway.

It's a disservice to persuade businesses to use smaller projects that don't have a comparable level of support, or flexibility.


> It's a disservice to persuade businesses to use smaller projects that don't have a comparable level of support, or flexibility.

This is why we use Spring. We have confidence that it'll be supported long-term and will continue to have backing from a whole host of companies.

It's slightly different in a large enterprise environment where a service might be fairly straightforward - take in input, spit out output to some data store. But for the kind of work we do - end-to-end webapps - we'd be doing our clients a great disservice to sell them on a microframework with our homespun implementations of security, transactions etc. bolted on top.


> we'd be doing our clients a great disservice to sell them on a microframework with our homespun implementations of security, transactions etc.

One of the benefits of Spring Boot was that Pivotal test and ensure all the components work together, so you can upgrade safely. It made keeping things up-to-date much easier.

There's no business value in trying to knit it all together yourself, if you can pass the work onto somebody else who is paid to do it.


> @SpringBootTest

Oh what do you know, yet another annotation...

I'm literally looking at the docs for @SpringBootTest

    @SpringBootTest
    public class SmokeTest {

        @Autowired
        private HomeController controller;

        @Test
        public void contextLoads() throws Exception {
            ...
        }
    }

This is a case in point of what I'm saying - I highly doubt this actually spins up a full-fledged app.

+ where is the controller coming from? It's not instantiated anywhere. What about its dependencies? If I add one, this test will still compile, when in reality the tests will all fail (or at least, you would imagine most would). This field auto-injection approach encourages adding arbitrary dependencies with zero thought to anything - who cares, everything will just compile anyway. Meanwhile, a sane framework would force you to pass in dependencies, forcing you to be smart about how you structure things and refusing to compile if you add a dependency to a controller but not to its test.

Also, what does @SpringBootTest actually do? How does it work? What if I need to do <anything that doesn't align with the simplest use case that the Spring devs cater to>? Who knows! "Just add the annotation, it's easy!" (but not simple - it's extremely, and extremely needlessly, complex)

> You can use whichever

Yes, and that is wrong. Even the Spring devs themselves are now trying to put the field-injection cat back in the bag in favor of the correct constructor-injection approach. They're not succeeding.
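For concreteness, the constructor-injection point needs no framework at all; a plain-Java sketch (class names are hypothetical): when the dependency arrives through the constructor, adding a new one breaks compilation of every caller, including tests, instead of silently leaving a field null.

```java
// Hypothetical controller with its dependency made explicit.
class GreetingService {
    String greet(String name) { return "Hello, " + name; }
}

class HomeController {
    private final GreetingService greetings;

    // Constructor injection: the dependency cannot be forgotten or null.
    HomeController(GreetingService greetings) {
        this.greetings = greetings;
    }

    String home() { return greetings.greet("world"); }
}

public class ConstructorInjectionDemo {
    public static void main(String[] args) {
        // A test (or a DI container wiring via the constructor) must supply
        // the dependency explicitly; this line stops compiling the moment
        // HomeController grows a new constructor parameter.
        HomeController controller = new HomeController(new GreetingService());
        System.out.println(controller.home()); // prints Hello, world
    }
}
```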

> Not at all

You say it's not wrong not to use Spring and... go on to say it's wrong to use anything but Spring. Classic Spring dev mentality, and a completely disingenuous portrayal, as if other frameworks like e.g. DropWizard aren't composed of well-maintained, well-documented components. In reality, unlike Spring, frameworks like DropWizard are a collection of some of the best tools for each job, both simple and easy, whereas Spring is just Spring, Spring and more Spring, and while it's easy, it's not simple - there's a lot of magic.

Anyway, we're not going to agree, and as I said, very close to 100% of Java devs and Java shops are 100% committed to Spring, so you win.

I'm just glad throughout my career I've found a few orgs who have been through the Spring grinder and realized the emperor really does have no clothes and have been open minded enough to look outside of the Spring bubble and try something else.


> Also, what does @SpringBootTest actually do? How does it work? What if I need to do <anything that doesn't align with the simplest use case that the Spring devs cater to>? Who knows! "Just add the annotation, it's easy!" (but not simple - it's extremely, and extremely needlessly, complex)

You can read the documentation to find out what every annotation does.

https://docs.spring.io/spring-boot/docs/current/reference/ht...

There are several different annotations that you can use to test each particular "slice" of your application, ranging from a fully embedded server, to just testing the controller layer without the embedded server, to just testing the data layer without controllers or servers.


What do you recommend instead?


A good first framework to wean yourself off Spring is DropWizard. But there are countless others, and they all do more or less the same thing, except much better, and with far less complexity, than Spring.


> Errors are indecipherable being swamped by thousand line framework exception trace

Don't you just look at the top lines?


That works if you're lucky and the exception happens directly in your code. If not, you're gonna have about 30 lines of interceptors and generators and whatnot before you get to your class. If you call ClassA::foo from ClassB::bar, you'll get about 5 lines of interceptors between foo and bar in your stack trace. Debugging is also a nightmare in IntelliJ, as step-into and step-out will go through all those interceptors.


Debugging bothered me too. But in the end I configured IntelliJ to skip all the interceptor classes while stepping in. Had to add a whole bunch of classes and packages for that, though.


I used Spring Boot and just set debug breakpoints, that way you don't have to step into and out-of, you just press "play" and it moves to the next breakpoint.


Another big difficulty is handling dependencies, as Spring Boot brings in gRPC, JPA, JDBC, and countless other libraries. One really needs a dedicated team to figure out all these issues!


Very true! But management is sold on "best practices" from VMware suits. So any practical difficulty is just an excuse for not learning the latest next-generation technologies.


It brings in whatever you use. Also, it has a goddamn webpage where you can click together what you want to use, and it will create an initial project for you with the chosen build tool and whatnot. It hardly gets easier than that.


This isn't true. Spring Boot by itself brings in very little, you can however _add_ GRPC, JPA and JDBC support by adding a library and Spring Boot will even autoconfigure it for you.


IBM WebSphere Application Server took several minutes to start or stop. Deploying a war file took another 10-30 seconds. And you had to restart the application server sometimes.

A Spring Boot application with a few controllers starts in 2 seconds on my outdated laptop.

Spring is lightweight, compared to old tech.

Not the most lightweight, that's for sure. A simple Java web server that uses the socket API to serve requests starts in a few milliseconds. That's the bar.
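As a rough illustration (the comment talks about the raw socket API; the JDK's built-in com.sun.net.httpserver gets the same effect with a bit less code), a framework-free server might look like this. Handler and response contents are made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Sketch of a plain-JDK web server: no frameworks, essentially
// instantaneous startup.
public class TinyServer {
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // Serve a fixed body on every request to "/".
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080); // serves "hello" on http://localhost:8080/
    }
}
```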


Spring Boot is not the same thing as Spring, and is indeed recapitulating all of the mistakes of EJB (I guess it's been long enough that the new generation of developers doesn't know about the problems). You can still use vanilla Spring though.


> Spring Boot is not the same thing as Spring

Spring Boot is an opinionated way to configure Spring applications.

> recapitulating all of the mistakes of EJB

Which mistakes is it repeating?


> Spring Boot is an opinionated way to configure Spring applications.

If by "configure" you mean "make arbitrary changes to the behaviour of". Spring Boot adds a bunch of spring-boot-specific stuff that can't be replicated in vanilla Spring and isn't supported for use outside Spring Boot (e.g. @ConditionalOnMissingBean and friends are explicitly not supposed to be used in non-boot Spring configurations). This is a long way away from just sugar for an ordinary Spring configuration.

> Which mistakes is it repeating?

- Huge and incomprehensible

- Components can refer to other components in ways that are completely invisible in the code

- No way to understand your application's behaviour by just looking at your code, because it can vary drastically depending on the things that are instantiated by the container at runtime. (In EJB this was container-provided services, in Spring Boot it's configurations that are automatically instantiated if they're present on the classpath, without the application ever referring to them at all)

- In practice applications depend on implementation details of the framework and cannot safely upgrade or migrate


Spring Boot has remoting like EJB did??


Well, Spring has multiple fundamental problems. They've chosen the wrong language and the wrong techniques. Runtime metaprogramming is sick and slow.

Though that doesn't mean that anything is wrong with the JVM as a platform.

spring-core can be replaced by, essentially, several hundred lines of code: https://izumi.7mind.io/distage/index.html And in fact those lines can do a lot more than Spring.


It is always like that, eventually the revolutionaries become the government they set out to replace and the wheel of time turns again.


Annotations are a band-aid. They easily turn what would otherwise be compile-time errors into runtime errors. The advantage of using them is that you have to write less (repetitive) code.

I prefer code without them. They add magic. I don't like magic in my code.


I honestly think the big issue here is using Java for these use-cases. I know that sounds flame-baity, but I'm being sincere.

Java is a very primitive language. For the vast majority of its life, it's basically been C + basic classes + garbage collection.

As a result, it's very verbose, which is totally fine for a low-level language. But, when building large, high-level, business apps, it's just a weird fit. I think that's why we see all of these annotation-heavy band-aids on top of Java (Lombok, Spring, JPA, etc)- it's because Java is actually not the right tool for the job, but instead of migrating (or inventing) a better tool, we just sunken-cost-fallacy ourselves to death.


I disagree here. Having GC, a VM, streams, and a big stdlib makes it quite high-level.

It's not very terse (like Ruby maybe), but modern Java is terse enough.

To keep a language small is a good thing: less to remember, easier to join the team. Go, Elm, Reason/ReScript, LISPs all go that route.

Java misses some things badly. Like being able to have a reference to a method (Jodd has a library fix for this). Or like sum types and pattern matching.

But I'm more bitten by features that Java has than what it has not. Overuse of Exceptions (instead of sum types) and Annotations are my biggest pains.

You see a lot of Java's shortcomings properly being addressed in Kotlin. Like the getter/setter story. And "standards" like Bean and XML config have given Java a bad rep.


> I disagree here. Having GC, a VM, streams, and a big stdlib makes it quite high-level.

I used to think that, too. I probably won't convince you otherwise, and it really doesn't matter how you or I categorize the language, but I think a solid argument can be made that Java's abstraction power is almost zero, especially if you consider versions older than two or three years (before records, switch expressions, sealed classes, etc). I also think that the need to differentiate between boxed and unboxed primitives, the lack of any concept of immutability/const, primitive synchronization tools like mutexes and raw threads, etc., make a compelling case that Java is not well-suited for thinking at a high level of abstraction.

Think about how much code it required in Java to create a value type with four fields before records. You needed to list all four fields and their types in the class body, then you need to list all four fields and their types in the constructor signature, then you need to write `this.foo = foo` four times in the constructor body. Then, depending on your conventions and preferences on mutability, etc, you'll need to write getters and/or setters for the four fields. Then you need to write a custom `equals()` implementation. Then you need to write a custom `hashCode()` implementation. Then you need to write a custom `toString()` implementation.

I hope you don't have to update that class, either, because forgetting to change your `equals`, `hashCode`, and `toString` will cause bugs.

There's basically no universe where you can convince me that this shouldn't be considered low-level programming.
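To make the comparison concrete, a sketch with illustrative names (the hand-written version is abridged; a faithful one would also need toString and more care around hashCode):

```java
// Pre-Java-16: a simple value type, written and maintained by hand.
final class PointOld {
    private final int x;
    private final int y;

    PointOld(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int x() { return x; }
    int y() { return y; }

    @Override
    public boolean equals(Object o) {
        return o instanceof PointOld p && p.x == x && p.y == y;
    }

    @Override
    public int hashCode() { return 31 * x + y; }
}

// Java 16+: one line; equals, hashCode, toString, accessors, and the
// constructor are generated and stay in sync when fields change.
record Point(int x, int y) {}

public class RecordDemo {
    public static void main(String[] args) {
        System.out.println(new Point(1, 2));                         // Point[x=1, y=2]
        System.out.println(new Point(1, 2).equals(new Point(1, 2))); // true
    }
}
```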

> To keep a language small is a good thing: less to remember, easier to join the team. Go, Elm, Reason/ReScript, LISPs all go that route.

I agree! Small/simple is great. But look at how expressive Reason/ReScript/OCaml is/are compared to Java. Same with LISP. They aren't huge languages with endless features being added on all the time, but they allow for much more high-level programming than Java, IMO.

> Java misses some things badly. Like being able to have a reference to a method (Jodd has a library fix for this). Or like sum types and pattern matching.

To be fair, though, this is not what Java was designed for. Java was initially an object-oriented language. Sum types and pattern matching are not OO. Object-orientation was supposed to be about black-boxes sending "messages" to each other while maintaining their own internal state and invariants. In "true" OOP, you wouldn't have a sum type, because you'd have an object that would exhibit different behavior depending on the implementation.

Granted, we're moving away from hardcore OOP as an industry (thank goodness). But, I'd argue that the "problem" isn't Java's lack of sum types and pattern matching, but rather that we're trying to make Java into something it isn't. We should just use a different tool. I'm not in my garage trying to attach a weight to the end of my screwdriver to make it better at driving nails- I'm going to my toolbox to grab a hammer, instead.


Agreed with many points. So Java is then somewhat in the middle.

OTOH, let's consider Rust. It is in my book a low-level lang, close to the metal (hence Rust?). It has a muuuuuuch better feature set compared to Java (IMHO). But it is geared at low-level work, so no VM and certainly no GC out of the box... By your def, Rust'd be a high-level lang: which is cool. I like your def :) But I still def'd high-level slightly differently: more in terms of the ability to program close to the machine, or more in abstractions.

> Sum types and pattern matching are not OO.

Traditionally not often found in OO, but otherwise verrry much compatible with OO.

> In "true" OOP, you wouldn't have a sum type, because you'd have an object that would exhibit different behavior depending on the implementation.

I think this is more about tradition than "trueness". I cannot return an Either<Error, Result> from Java. That sucks. Many have used Exceptions to fix it, but that sucks even more. I'd say OO is compatible with sum types.

> But, I'd argue that the "problem" isn't Java's lack of sum types and pattern matching, but rather that we're trying to make Java into something it isn't.

This always happens. And some langs are better suited for that than others. I work on a Java codebase currently and welcome those features, and actively consider moving the whole show over to Kotlin. Kotlin to me is like a typed Ruby. And in Ruby many libs are in C (in Kotlin then many libs'd be in Java).

I think OO and FP bite each other. You cannot have both. See Scala. It becomes way too big as a language and lacks idiomatic ways of doing things. But one can have a lot of FP in an otherwise OO lang (see Kotlin, for instance).


> OTOH, let's consider Rust. It is in my book a low-level lang, close to the metal (hence Rust?). It has a muuuuuuch better feature set compared to Java (IMHO). But it is geared at low-level work, so no VM and certainly no GC out of the box... By your def, Rust'd be a high-level lang: which is cool. I like your def :) But I still def'd high-level slightly differently: more in terms of the ability to program close to the machine, or more in abstractions.

I'm not a super clever person, but I once made a quip that I was pretty proud of, and I've repeated it online a few times:

"Rust is the highest level low-level language I've ever used. Java is the lowest level high-level language I've ever used."

Of course, the hardest part of all of these discussions is agreeing on what the words we're using actually mean. So what does "high level" and "low level" mean when it comes to programming languages? Are they mutually exclusive, or can a language be both? Is there such a thing as a "middle level"?

I don't have a great objective definition. Basically, I see "low level" approximately meaning "I have to think a lot about computery shit" and "high level" as "My code mostly looks like domain logic". There's a lot of wiggle room in there, for sure.

But I'm curious to challenge you more on what you mean by "close to the metal." Is being close to the metal somehow about abstraction-ability of the language, or is it a euphemism for some languages just being inefficient with computing resources? And, specifically, the things that make Rust closer to the metal than Java. I think the "obvious" answer is that Java runs in a virtual machine and has garbage collection, whereas Rust has neither of those. But I'm going to push back on those "obvious" high-level features.

First of all, Java-the-language has no idea that it's running in a virtual machine. I could, hypothetically, write a compiler for Rust that spits out JVM bytecode- would that make it a high-level language? Probably not.

As for garbage collection, I'd agree that compared to manually allocating and de-allocating memory space, garbage collection certainly allows us to think in a higher level of abstraction by letting us ignore details about how and when our data come to exist in our program. But (safe) Rust's approach to memory allocation is pretty far from manually allocating and de-allocating blocks of memory from the OS (which is basically a VM, itself, isn't it?). Rust largely allows me to ignore how much memory I might need for a String or a vector of data.

Now, it would be crazy for me to claim that Rust's memory model is as high-level as something with a garbage collector. After all, in Rust we have to think about borrows, Sized vs. unsized types, and sometimes have to actively think about lifetimes.

But, I will claim that Rust's memory management is still a big step up the ladder of abstraction from C. So, if being "close to the metal" is about needing to think less about nitty-gritty computer stuff, then Rust isn't as close to the metal as our gut instinct might say it is.

On the other hand, in Java, I still need to think about specific computery stuff when choosing between boxed and unboxed primitive types. I need to think about bits and bytes when choosing Short vs Int vs Long, rather than having a default Integer type that can be arbitrarily big or small. I need to think about mutexes and threads and thread-pools for concurrency/parallelization. I need to worry about stack overflows. That's all true of Rust, too, of course, but my point is that both of them require putting a lot of thought into non-domain concepts while programming.

Java did recently get records and sum types, so its abstraction ability has gone up substantially from where it was just a few years ago.

Rust has async/await for concurrency that can even be used in single-threaded contexts. Java doesn't even have that yet.

Rust has type classes. Java does not.

Rust has easy-to-implement newtypes. Java does not.

Rust has (im)mutability as a language concept. Java does not.

Rust has data copying as a language concept that actually works. Java has Clone.

Rust has hygienic macros that can be used to extend the language, create DSLs, and reduce boilerplate. Java has annotations that can be used to reduce boilerplate- mostly with a runtime cost and runtime errors.

So, which language is more capable of higher levels of abstraction? Honestly, it's probably Rust. Which language requires you to think more about memory stuff? Rust- but I think it's less of a lead over Java than most people would guess.

Which is closer to the metal? I don't know. Rust runs faster.

> Traditionally not often found in OO, but otherwise verrry much compatible with OO.

> I think this is more about tradition than "trueness". [snip] I'd say OO is compatible with sum types.

We'll probably have to agree to disagree. Of course sum types can exist in a language that touts itself as OO, but using them extensively is just not OO. It's literally inside-out from OO. If you look at a language like Smalltalk, even True and False are objects, and there are no if-statements. Rather, True and False are both sub-types of Boolean, and Boolean requires the methods ifTrue and ifFalse. True implements ifTrue to perform any action that was sent as a parameter, and implements ifFalse as a no-op. You can imagine False's implementations. So, some object sends you a Boolean and you call the ifTrue method of the Boolean with an action to be performed if the Boolean feels like it (it feels like it when it's a True :p). In true/extreme/hardcore/pure OOP, you wouldn't even have or use if-statements when implementing logic - it's polymorphism all the way down.

Is that useful or practical? I don't think so. But that's why I claim that sum types are not OO. I also claim that if-statements aren't really OO. And boolean is a sum-type.

> I cannot return an Either<Error, Result> from Java. That sucks. Many have used Exceptions to fix it, but that sucks even more.

Yes, you can. Java now has sum types anyway, but you could always implement a generic class that could be in either one of two states, with whatever methods you need to check its state and extract the data. It's awkward and janky, but this is Java - what isn't awkward and janky?
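For what it's worth, with sealed interfaces and records (Java 17+) plus pattern matching in switch (Java 21), the janky version becomes fairly direct; a sketch with illustrative names:

```java
// A minimal Either built from a sealed interface: the compiler knows
// these are the only two cases, so a switch over them is exhaustive.
sealed interface Either<L, R> {
    record Left<L, R>(L error) implements Either<L, R> {}
    record Right<L, R>(R value) implements Either<L, R> {}
}

public class EitherDemo {
    // Return an error value instead of throwing an exception.
    static Either<String, Integer> parse(String s) {
        try {
            return new Either.Right<>(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return new Either.Left<>("not a number: " + s);
        }
    }

    public static void main(String[] args) {
        // Exhaustive pattern matching over the sealed hierarchy.
        String msg = switch (parse("42")) {
            case Either.Left<String, Integer> l -> "error: " + l.error();
            case Either.Right<String, Integer> r -> "got " + r.value();
        };
        System.out.println(msg); // prints: got 42
    }
}
```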

> I think OO and FP bite each other. You cannot have both. See Scala. It becomes way too big as a language and lacks idiomatic ways of doing things. But one can have a lot of FP in an otherwise OO lang (see Kotlin, for instance).

I agree with the first premise, but I disagree that Kotlin has successfully added FP stuff to an OO language. I think that Scala has done a much better job of being FP and OO, actually.


Java has had method references for almost a decade. Pattern matching also was recently released.


> Java has had method references for almost a decade.

So how can I pass a method to another method? (without using lambdas)

> Pattern matching also was recently released.

I know, great improvement (if it comes together with proper sum types). One less reason to switch to Kotlin.


someObj.someMethod(Some::reference);
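Expanded into a compilable sketch (names are made up): the receiving method declares a functional-interface parameter, and callers pass any matching method by reference, no lambda required.

```java
import java.util.function.IntUnaryOperator;

public class MethodRefDemo {
    // The parameter type is a functional interface; any method whose
    // signature matches int -> int can be passed by reference.
    static int applyTwice(IntUnaryOperator f, int x) {
        return f.applyAsInt(f.applyAsInt(x));
    }

    static int increment(int x) { return x + 1; }

    public static void main(String[] args) {
        // Passing methods, not lambdas.
        System.out.println(applyTwice(MethodRefDemo::increment, 5)); // prints 7
        System.out.println(applyTwice(Math::abs, -3));               // prints 3
    }
}
```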


The pernicious thing about Spring is that there appear to be 15 different ways to do the same thing. Everyone's idea and enhancement request was thrown in. Plus, things got left in that should have been deprecated after better ideas came along or the Java language improved to allow new techniques.

I'm glad someone is working on a lightweight replacement for Spring. I had some ideas on a lightweight DI framework but never got around to it.


> the pernicious thing about spring is there appears to be 15 different ways to do the same thing.

Oh gods below this. I was half wondering if I was writing Perl with how much TMTOWTDI was floating around in the cesspool of Lombok and Spring.


I use http://sparkjava.com in my hobby project. It mostly does what I want, but I had to hack it a bit to be able to stream responses. It's also crazy fast and about as lightweight as these things get.


Spring is certainly a divisive topic, and I think it's hard for people on different sides to fully understand each other's experiences.

I have used Spring for years. Yes, there are some things I don't like about it, for instance Spring Boot's overeager auto-configuration, but it provides an unparalleled level of flexibility and productivity. I have never encountered a behavior in Spring that I couldn't understand by reading the source and then change to be what I want. Spring is absurdly flexible, and you only need to use the parts that you want.

A few years ago, I decided to try an alternative and wrote an app in Vert.x with no Spring. It worked fine, but it was a hell of a lot more work than leveraging the Spring ecosystem. I later rewrote it using Boot, and it works better, is easier to understand, and uses less code.

Have you seen Spring Data JDBC? It's such a good idea that saves so much boilerplate and I'm not aware of anything else like it. It threads the needle between rolling your own SQL and descending into the hell of a full on ORM.

Anyway, the closest I can come to understanding why people hate Spring so much is to consider my own opinion of Rails. I don't like Ruby and I don't like Rails. I hate all of the magic and I don't want to learn it. But I'm sure that, like Spring, it's enormously productive if you do understand what it's doing and how to use it.


I think you hit the nail on the head with your reflection on your attitude with respect to Ruby on Rails.

From my point of view, Java is an anemic language, and the "cure" appears to be introducing a bunch of annotation-magic frameworks (Spring + JacksonXML + Hibernate/JPA/JDBC/whatever + Lombok?) that each have their own magic and inconsistencies, to the point that your Java code is more of a configuration file than actual logic (which sounds great), but with the downside that you don't actually know where anything is implemented and have little idea of what can fail and where.

As a "polyglot" dev, I just don't have the time or patience to learn all of the magic on top of the language itself.

On the topic of Vert.x, it's definitely a different philosophy than Spring, as you experienced. I'm honestly not sure what domains Vert.x would be superior in, but it seems like it's way overkill for your typical mostly-crud backend app. Vert.x is less of a framework and more like a "build-your-own-framework" toolkit.


Spring Data JDBC is, by default I believe, backed by a full on ORM, that being Hibernate.

I'm open to different opinions on this, but I dislike Hibernate because of the complexity and the pain it causes when trying to do simple things. Hibernate, and Spring's use of it, is a leaky abstraction. When running into bugs in a flow as simple as read SQL row into POJO -> update POJO -> save POJO to DB, using Spring JPA repository interfaces, I find myself needing to know about Hibernate internals: the persistence context, how it relates to transactions, when objects are merged vs. saved, and whether they are already in the persistence context or not. Plus Hibernate's docs suck, in my opinion.

One time we hit a bug in Hibernate. This was within the last two years, using a newer version of Spring Boot. We read a row from SQL in a transaction. Later, in a whole different transaction, we read the same row, with a `@Lock(LockModeType.PESSIMISTIC_WRITE)` annotation on the query method. We were using MSSQL, and this sends table hints like "(updlock, rowlock, holdlock)", so essentially we wanted exclusive access to this row for the length of the transaction. But the data we were getting back didn't make any sense. We could see the SQL query with the table hints hitting the SQL server, but Hibernate was giving us a cached POJO!? If we "evicted" the POJO before we queried, then it worked right. Again, this was at the very beginning of a fresh transaction. Wtf.


This is not correct. You're thinking of Spring Data JPA [1]. Spring Data JDBC [2] does not use any Hibernate nonsense.

[1] https://docs.spring.io/spring-data/jpa/docs/current/referenc...

[2] https://spring.io/projects/spring-data-jdbc


Ah, I see. Thank you!


I haven't really touched Java in a while but I don't get why you'd want a lightweight DI container.

You can just build your object graph and pass dependencies manually if you want a lightweight approach, no? That's just the way people do it in most languages.


I think there are a lot of Java developers who have just never worked without a DI framework, and just don't have a grasp of how simple it can be to write code without one.


As someone who hated Java, used it for a few years, and now occasionally misses it...

I only miss DI. I miss being able to say "this system depends on these external things" and having a consistent, convenient way of sharing/swapping/testing those components and dependencies.

The solution in other languages? Unstructured globals, deep argument passing, or monkey patching with mocks?!

Yea, I can write simpler code without DI... By ignoring a bunch of stuff.


You can do DI without a framework.

If you write classes with final fields, with a constructor that takes the class's dependencies, and don't use static fields to hold mutable data, you are doing DI. Just call `new` yourself, instead of having the framework do it for you.


When you have lots of things-that-create-things-that-create-things, this gets tedious really fast. DI frameworks exist because they result in a lot less code that does nothing but pass dependences along.

This reminds me of SQL/ORM debate. "Just use SQL!" Sure, until you get tired of typing the same SQL over and over and realize you can cut out most of that crap by adding an ORM.


The trick is to not encourage that many things-that-create-things-that-create-things. That's a uniquely Java problem.

https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...


If you take the single responsibility principle even as much as half-seriously, the problem domain more or less decides which things will create which things. If your software platform can't support that, you get spaghetti mess when programmers inevitably build workarounds.


You know, you hear Java folks repeat things like that a lot, while Go programs just tend to stay simple and readable. It's either the culture or the language causing the problem. shrug


I did a fair bit of work in Go at Pivotal. I found Go anything but readable - a comical amount of boilerplate (especially around error handling), incredibly wordy constructs for simple tasks like making http requests, and the language is almost overtly hostile to functional programming (no generics!).

I use Go as a "better C". Though I'm honestly disappointed with even that. My current company, we built an image processing service in Go. It performed poorly and had poor stability (the imagemagick bindings appear to be half-baked). I rewrote it in Java and it is faster, more stable, and the code is much cleaner.

Honestly, the next time I need a "better C", I'll probably pick up Rust or D.

YMMV.


> I did a fair bit of work in Go at Pivotal. I found Go anything but readable - a comical amount of boilerplate (especially around error handling), incredibly wordy constructs for simple tasks like making http requests, and the language is almost overtly hostile to functional programming (no generics!).

Are you saying that Java is better about any of that?


Yes, absolutely. Java has had a competent implementation of generics since 2004 (Java 5) and really embraced functional programming in 2014 (Java 8). Any application of significance will require more LoC in Go than Java, hands down.

Just compare Java streams with Go container classes. Go's aren't typesafe (though that will hopefully change when generics are officially released) and almost every operation requires imperative code. And endless `if err != nil return err` every time you want to call a function - which actually destroys useful stack information.

I won't apologize for the crap Java code out there - but you can write crap in any language. Modern Java is capable of producing pretty, svelte code.


Fair points. I haven't worked with Go in a few years, and I remember hating it when I did, but I feel like I remember hating Java more. It's possible that part of the Java hate is not from the language itself, but from the ecosystem.

Can you elaborate on Java streams vs Go's containers? I assume you mean things like List and Heap in Go? I'm not sure why you'd compare those to Java's stream API rather than Java's collections. In any case, I do agree that Java's standard library has WAY better collections than Go does, and Go doesn't have the excuse of wanting a minimal standard library.

However, I'll push back a bit on the complaint that working with Go's containers/collections/whatever requires imperative code for everything. Now, I'll remind myself that one of your original points was that Go was "almost overtly hostile to functional programming" and I retorted to imply that Java was just as bad at all of the things you mentioned. I'll concede that Java isn't actually quite as hostile toward functional programming as Go. But I'll move the goalposts a bit and claim that supporting a few functional programming patterns isn't inherently good and doesn't automatically make a language better.

> And endless `if err != nil return err` every time you want to call a function - which actually destroys useful stack information.

I agree and disagree. I'm one of the few people who still thinks that checked exceptions are a good idea for a language. I have my complaints about how they're implemented in Java, but I think the concept is still a good one, and I honestly think that even the Java implementation of checked exceptions is mostly fine. The issue, IMO, is with training and explaining when to use checked vs. unchecked exceptions and how to design good error type hierarchies.

Go's idiomatic error handling is mostly stupid because Go doesn't have sum types. But, I'd argue that if you are wanting stack information, it means that you shouldn't be returning error values at all- you should be panicking. Error values are for expected failures, a.k.a. domain errors. You can and should attach domain-relevant information to error values when possible, but generally, there shouldn't be a need for call-stack information. A bug should be a panic.


Here's a Java example that sums the populations of a list of Countries:

    int population = countries.stream().mapToInt(Country::getPopulation).sum();
The Go implementation:

    var population = 0
    for _, country := range countries {
        population += country.Population
    }
It gets more perverse if you need to flatMap, or transmute components of map types, etc. If you want even more power, take a look at https://github.com/amaembo/streamex. This sort of container manipulation is bread and butter for business processing. I use it every day, sometimes with a dozen operations. This (with liberal use of `final` values) makes for some pretty functional-looking code.
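For what it's worth, here is roughly what the flatMap case could look like, using a hypothetical Country type that holds a list of cities:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapDemo {
    // Hypothetical domain type, just for illustration.
    record Country(String name, List<String> cities) {}

    public static void main(String[] args) {
        List<Country> countries = List.of(
            new Country("France", List.of("Paris", "Lyon")),
            new Country("Japan", List.of("Tokyo"))
        );
        // flatMap flattens the nested lists into one stream of cities.
        List<String> cities = countries.stream()
            .flatMap(c -> c.cities().stream())
            .collect(Collectors.toList());
        System.out.println(cities); // [Paris, Lyon, Tokyo]
    }
}
```

The equivalent Go today is a nested pair of loops plus a manually grown slice.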

I'll grant you the Kotlin or Scala version is slightly more compact. But not fundamentally different, like the Go version.

I (and pretty much every language designer in the post-Java era) disagree with you about checked exceptions, but that's a whole different thread...


The go version looks perfectly fine to me (saying this as someone who uses clojure every day) ;)

Something else to consider is performance, in most implementations the for loop is going to be more efficient.


That's exactly my complaint: most languages have eager, mutable, non-persistent collections because they were not designed with functional programming in mind.

Then FP became the hot new shit, so they all added some of the lowest hanging fruit so that people can say absolutely weird things like "I do FP in C#". The problem is that the majority of these implementations just eagerly iterate the collection and make full copies every time. So, you're much better off with a for-loop.

To be fair to GP, though, Java has legit engineering behind it, and the way they did it was to introduce the Stream API, which is lazy sequences, and they made the compiler smart enough to avoid actually allocating a new Stream object per method call (which is what the code nominally does, IIRC: each method wraps the original Stream in a new Stream object that holds on to the closure argument and applies it on each iteration).


If you really want to go wild, take a look at https://www.vavr.io/ (formerly Javaslang). You can make programming in Java as functional as you want.


I have to admit, that looks pretty slick.

Have you used it? I'd be curious to hear how well it works in practice.

It seems like the only "big" things Scala has over this is its implicits (which so many people hate, but have been really improved in version 3) and its for-comprehension syntax.

It's so interesting to see a bunch of projects converge on really similar things. You look at Scala, at this Vavr stuff, and at Kotlin + Arrow.kt, and they're implementing all of the same stuff over Java.


Ah. You know what? I forgot that the Java implementation of these concepts isn't stupid like it is in some other languages (except, what the heck is mapToInt? Some optimized version that makes a primitive array, I guess? Yucky; I wish the compiler could just figure that out).

So, I concede that Java's addition of the stream API is a legitimately good example of adding an aspect of functional programming to an otherwise very non-FP language.

But, let me go off on my tangent, anyway. ;)

It's not that you need to convince me that functional programming is great. It's just that I find that consistent and coherent designs tend to work well and that kitchen-sink or be-everything-to-everybody approaches tend to be good at nothing and mediocre-to-bad at everything.

MOST languages that have tacked on the low-hanging fruit of FP (map, filter, etc combinators on collections) have done it in a really sub-optimal way.

JavaScript, for example. JavaScript has eager, mutable, non-persistent, arrays as the default collection data structure. When they added map, reduce, filter, etc to Array, they added them in the most naive possible way, which means that doing something like your example above (map-then-sum), would create an entire extra array with the same number of elements as the original, and would end up looping both arrays once. So we have ~2N memory usage and 2N iterations where we really should just have an extra 8 bytes to hold the sum and iterate over the array once (N iterations).

Same thing with other languages like Swift and Kotlin.

Kotlin maybe deserves an asterisk because it has Sequence, which mostly works like Java's streams. However, there are two issues: it still offers the eager operations on iterables, instead of forcing us to use a sequence/stream to access them, and you have to be careful with Sequences inside suspend functions. In your Java example, we're theoretically allocating a new Stream object with every combinator call, BUT we "know" that the compiler is smart enough to avoid those allocations, and the resulting code will be about as fast as a hand-written for-loop. With Kotlin's suspend functions, we can very easily thwart the compiler's ability to do that: if you use a Sequence chain inside a suspend function and call another suspend function as part of that chain, that's a yield point, and the compiler can no longer optimize away the allocation of the intermediate Sequence object(s).

So, my point is that designing a language with some initial philosophy and then trying to borrow from, frankly, incompatible other philosophies usually leads to sub-optimal implementations and/or APIs. Again, though, Java's streams are a good counter example to my claim.
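One way to see that fusion in action: Java streams pull each element through the whole pipeline one at a time, so no intermediate collection is materialized. A small illustrative sketch:

```java
import java.util.List;

public class FusionDemo {
    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3);
        // The peek calls trace evaluation order: each element flows through the
        // entire pipeline before the next one starts, so there is no
        // intermediate list sitting between map and sum.
        int sum = nums.stream()
            .peek(n -> System.out.println("map sees " + n))
            .mapToInt(n -> n * 10)
            .peek(n -> System.out.println("sum sees " + n))
            .sum();
        System.out.println("sum = " + sum);
    }
}
```

Contrast that with JavaScript's `arr.map(f).reduce(g)`, which builds the full mapped array before the reduce ever starts.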

> I (and the pretty much every language designer in the post-Java era) disagree with you about checked exceptions, but that's a whole different thread...

Indeed it is! :) I'm willing to be the black sheep, and die on that hill, though (too many metaphors?). And, honestly, I don't think it's as unanimous as some people claim. I see returning monadic error values as isomorphic to checked exceptions, and several languages have gone that route since Java: Scala, Swift, and Rust, to name a few. Kotlin's lead dude, Roman, simultaneously claims that checked exceptions were a terrible mistake, but then also advocates for using sealed classes for return values when failure is expected or in the domain, which sounds a lot like what checked exceptions are supposed to be used for. TypeScript can't have monadic error handling because of its design philosophy of being a thin layer over JavaScript, but many in that community have embraced using union types for return values instead of throwing Errors.
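For illustration, here is a sketch of that sealed-class style of expected failures in modern Java (17+); the domain names are made up:

```java
// Requires Java 17+ (sealed types, records). All names are illustrative.
public class ResultDemo {
    sealed interface ParseResult permits Ok, Invalid {}
    record Ok(int value) implements ParseResult {}
    record Invalid(String reason) implements ParseResult {}

    // Expected (domain) failures come back as values, not exceptions.
    static ParseResult parsePort(String s) {
        try {
            int p = Integer.parseInt(s);
            return (p >= 1 && p <= 65535) ? new Ok(p) : new Invalid("out of range: " + p);
        } catch (NumberFormatException e) {
            return new Invalid("not a number: " + s);
        }
    }

    public static void main(String[] args) {
        for (String s : new String[] {"8080", "99999", "abc"}) {
            ParseResult r = parsePort(s);
            // The sealed hierarchy plays the role of a checked signature:
            // the caller cannot pretend the failure case doesn't exist.
            if (r instanceof Ok ok) {
                System.out.println("port " + ok.value());
            } else if (r instanceof Invalid inv) {
                System.out.println("error: " + inv.reason());
            }
        }
    }
}
```

Squint and this is a checked exception with the control flow made explicit, which is roughly the isomorphism claimed above.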

Cheers!


Yeah, mapToInt is annoying because the primitive/object dichotomy in Java is annoying. No question about it, it's a wart on the language. Though it does offer some optimization abilities, so the dichotomy is not completely meritless - it's easy to understand why the language designers did it this way. Maybe project valhalla will fix this someday, I don't know. In the mean time, it's not a fatal flaw.


I imagine that is because Go is not used for applications of the same breadth as Java.

Go is typically structured with many relatively small binaries. Each binary can be relatively self-contained.

The way I've seen Java used, it typically has fewer binaries, with each binary bundling many services. Many of those include clients for services owned by a different org at the company, where that org can just provide a Guice module that sets up the client to call their service, and anything that needs it can easily inject it.

I still hate Java but, damn, I see why it's used at B I G companies.


As mentioned, Go is not used for ENTERPRISE APPS^TM — Java programs can really hold up under insane abstraction and complexity.

Also, Go has really poor abstracting capability, which may be fine for small code bases where abstraction is a detriment, but abstractions are the only way to handle complexity. If the logic is spread out over many different places (or, God save us, copied code!), a new programmer will have much more trouble picking up what the hell is supposed to happen.

In the extreme case, compare reading assembly to a high-level language. Sure, each instruction is trivial in the former case, but you have no idea what the whole does.


So, COBOL? That's an argument that works in a historical moment of Java legacy, until it doesn't. Monzo is an example of a bank writing everything in Go.


Good luck with that for them..

Java is in the unique position of having excellent performance (a state-of-the-art GC, a very good JIT compiler) and observability, with no-overhead real-time options. With multiple implementations of a standard and one of the top three biggest ecosystems, it is nothing like Cobol. You can say it will be legacy for three decades to come, but it will not die. Hell, it is improving at a never-before-seen speed.


Now you seem to have switched the conversation from "Java projects tend to be overly complex" to "Java is great". Common talking point, and a lot of people will agree with you, but it's pretty much unrelated to the topic.


My original point was that abstractions are not evil, hell, without them we would only have calculators, not computers.

Go's limited abstraction power can be an advantage (as per its creator, not my words: you can throw as many bad developers at a project as you want), but it is a disadvantage as well, because you end up with logic in scattered places, copied verbatim, etc., hindering maintainability, understanding of the original intent, new-dev onboarding, everything.


These words are bandied around a lot by people outside the Go community, while the people who end up actually using Go a lot tend to say it's the most readable code they've worked with in their lives. shrug


Is that why Google has a DI code generation tool for Go (Wire)?


That thing that's largely not used? Sure, some person who signed Google's onerous employment contract wrote that. Look what other stuff those people are pushing https://cloud.google.com/open-cloud/ and see how it's all "enterprise solutions" while the community adoption is at 694 projects importing wire: https://pkg.go.dev/github.com/google/wire?tab=importedby


When I debug a well written C/C++ code usually the callstack is about 10 levels deep.

When I debug a well written Java code usually the callstack is about 50 levels deep.

It's not because of the Single Responsibility Principle.


Why? With a well-known framework like Spring, you get the benefit that any Spring developer instantly knows the conventions (which is not true of your in-house conventions, where I will have to hunt down where some class comes from, usually through some ugly abstraction that is buggy as well); less code means fewer opportunities to introduce bugs and less to maintain. Annotations are basically a declarative DSL for a significant chunk of your code base.

I really don't see any cons, other than a slight learning curve (and yeah, sure, "developers" who just bash keys will have trouble understanding what an annotation does, and blindly copy-pasting annotations can be dangerous, but they will also fk up regular code as well..)


> Sure, until you get tired of typing the same SQL over and over and realize you can cut out most of that crap by adding an ORM.

And adding an ORM isn't either/or. You can still use native SQL when necessary.


Yep, but if you have to change 5 constructors to get a new dependency to where it needs to be, calling `new` yourself starts to suck.


> Just call `new` yourself, instead of having the framework do it for you.

But at that point, why would I want to?

There are reasons I wouldn't want to, but there is no inherent value, to me, in manually calling new.


by calling new yourself you get a sane stack trace when something is misconfigured. that alone is worth the tiny additional amount of code in my book.


How is that worth it? You pretty much only have to look at the topmost exception, or at worst the causing one. Whether it has 100 lines after it or 3 doesn't matter, not in the slightest.


But how do you handle configuration then? At some point you want a user-facing UI where the available features (which are generally classes) are listed and the user can choose the feature, say which log backend is enabled, without having to change code - that's the whole point of it. (And the most tedious code to write by hand - a complete waste of time)


> But how do you handle configuration then?

In the main method, then you can pass the configured values wherever you need to when new-ing classes.

> At some point you want a user-facing UI where the available features (which are generally classes) are listed and the user can choose the feature, say which log backend is enabled, without having to change code - that's the whole point of it. (And the most tedious code to write by hand - a complete waste of time)

I consider DI a valuable pattern, but I've never experienced anything close to this need.


What happens with proxied classes? My ClassWithTransactions is actually a subclass of the written one auto-generated by Spring. I can’t inject a new instance of that manually.

And you may say that you don’t need Aspect Oriented programming, but the usual handling of transactions in many other languages without some meta-programming is.. to not handle transactions. Putting a single annotation over a method is imo a very elegant way to handle this needed functionality.


This is all considerably more abstraction than I have wanted or needed when writing Java. When handling transactions, I’ve passed around the same connection before committing.


The point of the transaction in this context is that both the database(s) and the business logic/state stay in sync. I don’t think that naive attempts will be logically correct.


What do you consider naive? JDBC supports transactions, I’ve had success with that.


As I mentioned, you will have to roll back not only the database, but the relevant application state as well. This is really error-prone if repeated enough times, or the flow of control is through many different methods, etc.


By application state, do you mean something like an in memory cache? I would prefer having no such state in the first place, and have all meaningful state in the DB to be pulled out or mutated as needed.

I recognize that what you're advocating for makes sense in some applications; I just wanted to point out that I haven't felt the need for it in my eight years of software development.


> I consider DI a valuable pattern, but I've never experienced anything close to this need.

Literally every non-toy software I had to develop in my life required that lol


Do you seriously allow users to configure which logging framework is used through a UI? Perhaps I’m misunderstanding what you’re saying.


in my case it's more often which audio, gamepad or graphics backend, but yes, I actually had the log backend configuration request once! (They wanted to choose whether to log to text files or to websockets depending on the case, for a GUI app; there was an explicit requirement that the entire software could be configured and used with only a mouse, no keyboard, so many configuration menus were needed.)


My company chose exactly this design for several of our microservices. It is now almost universally considered to have been a mistake.


Why? What happened?


It very quickly becomes an unmaintainable mess once the service grows past a certain size.


But, how, exactly?

The only bad things I can see are:

1. Constructors with many parameters

2. Needing to pass a dependency many levels deep

But, I would still think that those are not big deals (what do you have, 40 parameters or something?) and that the explicitness can be helpful. Isn't it good to know that the top level service depends on your email-sender dependency from just looking at its code instead of needing to analyze its code and every single object under it?


Which is the standard way to do DI in Spring as well? It will just be called via reflection instead.

But frankly, how will you call that new if it depends on a class which is a singleton, another which has some more complicated scope so it may or may not have to be reused? DI is not only about calling new..


Another useful feature of Spring is aspect-oriented programming (like when we manage transaction boundaries with @Transactional).

Spring takes care of that, but doing it manually (and without dynamic proxies) would add to the verbosity.


Is what you're thinking of equivalent to deep argument passing? I've seen it done where you pass around a global Factory object that can provide dependencies. It's basically rudimentary DIY DI.


It's really very simple, no you don't need to pass around a factory object.

You just have a class or classes that construct and wire all of your singleton objects and pass the required dependencies into their respective constructors as necessary.

Here is a contrived example of what the wiring code might look like for a web app that uses a database.

    public static void main(String[] args) {
        MyConfig config = readConfigFile();
        DatabaseConnection dbConn = new DatabaseConnection(config.dbHost(), config.dbPort());
        UserDao userDao = new UserDao(dbConn);
        UserController userController = new UserController(userDao);
        List<Controller> controllers = List.of(userController);
        WebServer webServer = new WebServer(config.listenPort(), controllers);
        webServer.runAndBlock();
    }


How is it better to make a dev write out that plumbing and others reread it? I’m made of meat, so I want to automate everything we safely can.


Advantages:

- Don't need to depend on a DI library, makes code more modular and portable.

- Faster application initialization time.

- Easy to navigate and understand relationships between classes, good IDE support.

- Easier to break apart and test parts of the application.

- Easy to understand, don't need to learn the intricacies of a complex DI framework.

I'm not saying there is no place for DI frameworks, although I do think they are overused.


> don’t need to learn

I know it is a nitpick, but I see this way more often than I should as a main reason to prefer alternatives.

Finding out what gets injected is not particularly hard, especially when only the basic capabilities of Spring's DI are used. In that case it will almost always be the single implementing class of the given type.


So if you are making any kind of reusable design, you cannot just annotate your classes with @Component anymore. Instead you will write an @Configuration (like Spring Boot's auto-configuration) that at its discretion may pull in some more general (not @Configuration-annotated) reusable configuration. Since some classes will be considered implementation details, you won't want to expose them in Spring's dependency injection container (since that is equivalent to making them public: people will inject them and depend on them!). So instead you will only create them inside your own @Configuration and pass them directly when producing a @Bean from a method.

Congratulations, your @Configuration is manual dependency injection. That is easy enough. Why did we need inversion of control over the dependency injection in the first place? It isn't immediately obvious to new engineers which aspect of @Autowired is dependency injection and which is inversion of control. Many of us don't see much benefit to the inversion of control if you are taking care of your application's hygiene in the first place.


A @Bean method is a signal that a class is so complicated that Spring can’t figure out how to create a valid instance after @Import or @ComponentScan. For limiting use, package-private types and methods are better than creating components yourself and reinventing pieces of Spring like @Profile and @Value and @Scope.


You’re reading and writing the “plumbing” no matter what you do. So, does it matter if you are writing a config file or Java code?


Unless WebServer is the only class that needs dependencies, you're either going to have to pass those dependencies repeatedly from class to class or you're going to have a global factory that provides the dependencies to everybody.


Yep. Once you get too many arguments, what you usually do is create some kind of Context class that bundles all of them and just pass that around everywhere.


I'd say that the only one of your listed solutions-in-other-languages that is actually a valid solution is deep argument passing.

And I fail to see why it's a problem. If your FooService depends on a BarService, which depends on a BazService, and BazService needs a database connection, then that means your FooService really does also depend on a database connection. Hiding that information, to me, seems like a mistake. Can you articulate why one would prefer not to have FooService explicitly require that database connection, or am I inadvertently arguing against a straw man? If so, please correct me, because I'm asking sincerely.

Of all the time I spend thinking about my code and writing code, I truly can't say that adding a dependency and having the compiler complain until I fix a bunch of constructors has really caused me that much grief. And I'm not going to pretend that it has never been the case that I've had to fix 20 constructors.


Ultimately, I think this is going to come down to preference.

I would prefer not to have to fix 20 constructors.

It's tedious and time consuming. The intermediate classes that _do technically depend on FooService because BarService does_ - the intermediate classes don't care! It clutters the code everywhere else for minimal benefit.

Manually, you see all your dependencies just shy of main where the binary initializes them all and starts passing things down. In DI, you have a module file somewhere with them all.


Definitely a preference thing- no doubt.

But thank you for responding anyway.

(As a clarification, in case it's needed: I obviously didn't LOVE it when I had to update 20 ctors after changing a somewhat fundamental "service" to need a new dep. My point was that, even as painful as that was, it wasn't that bad and it's usually much less bad than that.)

I guess the (philosophical) difference comes to this statement:

> The intermediate classes that _do technically depend on FooService because BarService does_ - the intermediate classes don't care!

I can definitely understand what you're saying there, but it's interesting to me that I don't see it that way. I think I'm just less pragmatic and more... "academic" (?) about how I read and understand my own code. If X depends on Y and Y depends on Z, I'm comfortable with X explicitly depending on Z because I imagine "inlining" Y's functionality in X. Either that or you turn Y into an interface and then X only depends on IY. But, my brain just likes the explicit continuity I guess.

Cheers!


The solution in other languages is to use a DI framework written for them. Which one doesn't have any? In .NET, the basic DI interface (imports/exports etc.) is even part of the standard library as System.Composition.


You say that, and I usually agree, I mean, constructor args are the simplest form of DI.

But then, working in a complex codebase, I introduce a new dependency that is instantiated early in the tree and used by two disparate classes rather deep in the tree, and suddenly I'm changing 10 different constructors just to get the new dependency where it needs to be.

The tree of constructors is where DI shines as an alternative.


That REALLY depends on the size of your codebase. When it’s small, no need for a DI framework. But when it grows large, it becomes quite a pain, and a DI framework is nice, eliminates a bunch of boilerplate with every code change.


A good DI framework just saves you from having to spell out all the glue code; or at least minimize that. DIY dependency injection is indeed a useful skill to have with other languages. Unfortunately, it's not what a lot of people do with other languages because they simply don't know that it would help them.

Particularly in the javascript world there seem to be a lot of people struggling to write good, testable code mainly because they make the rookie mistake of not separating their glue code from their business logic. Basically they have bits of code that initialize whatever and they need to put it somewhere and it ends up in the wrong place and before you know it, it becomes impossible to isolate any functionality such that you can actually test it easily without booting up the entire application. Add global variables to the mix and you have basically an untestable mess.

I still use Spring (but very selectively). They've added multiple styles of doing DI over the years, which is confusing. The latest incarnation of that uses neither reflection nor annotations and is very similar to the type of code you'd write manually if you had the time to clean it up and make it nice to use. Another benefit is that it enables native compilation, which with the recent release of spring-native is now a thing people do. Spring is large and confusing but the DI part is actually pretty easy to use. If you've used Koin or Dagger on Android, it's similar to how that works.


I've used Spring DI. I understand the argument for it, when building bigger applications, though it invariably brings its own complexity too.

What you say about compile-time DI to allow native images makes me feel like we've almost come full circle. I'm still not convinced you need automatic DI at all for smaller services.


Sure, and eventually you end up rebuilding an [ad hoc, informally-specified, bug-ridden, slow] DI container because:

* Static references become a tangled mess, and you start wanting some structure around that.

* You have to answer "how does ABC component get access to DEF?" for increasingly difficult combinations of ABC and DEF.

Excepting Spring, pretty much all Java DI containers are lightweight.


Why do "static references become a tangled mess"? In my (limited) experience with runtime DI libraries (albeit in Go) they turn clear, IDE- and debugging-friendly code where the compiler tells you at compile time if you got it wrong ... into a hard-to-debug magical soup.

With static, using-the-language dependency injection, isn't the question of "how does ABC component get access to DEF?" answerable with the normal IDE/language tooling, rather than some magical library's way of doing it? You can just find the calls to a constructor and look at the arguments.

My experience is based on my bad experience with runtime DI libraries, and is definitely biased against them, but I must be missing something here.


There are lots of reasons why static references are undesirable, but some of the more serious are:

* Static dependencies make testing harder, no question about it. This is mitigated in dynamic languages like Ruby by mocking statics. While you can actually do this in Java with PowerMock, avoiding mocks entirely is even better. If you can't use a real object, use a fake that implements the relevant interface.

* Statics mean singleton, and that invariant often changes as a product matures. It's very easy to go from "the database" to "a database", and when you have 500 places getting "the database" it's very hard to make that evolution.

* Statics make it very hard to maintain module boundaries, because every static is its own exported interface. In a long-running project, binding tends to get tighter and tighter as every module reaches out for the statics of other modules.

Sure, folks can write bad code with DI systems too. And I'm no fan of Spring - not because of the DI, which is fine, but because of the need to wrap everything else in the universe and now you have to understand both how the underlying thing works and the way Spring changes it. But something like Guice or Dagger is just the right amount of glue to hold a system together, without getting in your way.
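The "the database" vs "a database" evolution mentioned above can be sketched concretely (hypothetical names; not any particular framework's API):

```java
public class StaticVsInjected {
    public static class Database {
        // "THE database": a process-wide singleton baked into every caller.
        static final Database INSTANCE = new Database("prod");
        final String name;
        public Database(String name) { this.name = name; }
    }

    // Static style: 500 call sites like this and you're stuck with one DB.
    public static String reportStatic() {
        return "report from " + Database.INSTANCE.name;
    }

    // Injected style: callers work against *a* database; swapping in a
    // replica, a shard, or a fake is just passing a different instance.
    public static String reportInjected(Database db) {
        return "report from " + db.name;
    }

    public static void main(String[] args) {
        System.out.println(reportStatic());                          // report from prod
        System.out.println(reportInjected(new Database("replica"))); // report from replica
    }
}
```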


Just a note: I overloaded (excuse the pun) the word "static": I didn't mean the "static" keyword, but "statically compiled/typed". So it doesn't mean singletons, just that you pass dependencies as explicit arguments to constructors and functions.


I do not understand what distinction you're making. In the world of DI, you still have typed constructors and factory methods. It's not like Guice turns Java into Ruby. The only difference is that you don't have to chain together constructor boilerplate - in fact, the static types determine the injections.

Do you object to passing interfaces vs concrete types? That is a wholly orthogonal concern; you make the choice to extract interfaces with or without DI.

Maybe you have an example?


> but I must be missing something here.

As a Java developer for all my career, here is my take. In the Java world there is this cultish cottage industry of "frameworks" for all sorts of work. Most Java developers are not expected to write plain code with the standard JDK plus some external libraries. Creating an object via "new Obj()" might cause the programming universe to collapse, so a DI framework is a must for enterprise Java developers.

If I were to tell people at work that Go has an inbuilt HTTP server where we could implement a few handlers and have a basic service running, they would not be shocked that an HTTP listener could be this simple, but rather ask "Does Go bundle a weblogic/websphere/tomcat/netty server with it, or else how can it work?" Same with testing: no understanding of how or what is to be tested, but everything about JUnit/Mockito/Mockster/SpringUnit or whatever.

There is no requirement for understanding basic concepts for testing, client/server, dependency, error management etc. So even basic functionalities are understood in terms of a branded framework. This is their main frame of reference.


Just to echo, this is similar to my experience. The Java development culture that I have known in the workplace is extremely coddled by its frameworks.

It is hard for them to open the terminal and execute their application JAR from the command line--only ever from the IDE. Oh wait--they need Tomcat/Apache & a few hours of dealing with classpath issues.


That's just a problem with developers who do Java as their 'nine to five' job and don't have any interest or passion to really find out how stuff works. I've met a lot of those people and there's no reason they can't contribute if the project is set up to accommodate it.

On the other hand there are the enthusiasts (like you, I presume) who like tinkering and using the language to the fullest. Any successful project needs at least a few of these people, but they can also go overboard by building a lot of custom functionality where any standard library could have been used.

While I'm also an advocate for increasing knowledge of the systems you're working with, it's no 'sin' to use some libraries. For instance, for your HTTP server example, it's quite easy to just listen on a socket and respond to a request. But you want parallelism, so you need a thread pool. And queues, configuration and error handling. That will escalate quickly, so why not pick whatever Java servlet implementation handles most of that complexity and - more importantly - is already production tested, so it won't fall over when you deploy it live. And then there's stuff like OpenID Connect or SOAP (yes, it still exists) where you can 'plug in' an implementation on some servers, so you can get work done instead of worrying about getting all the implementation details right for some complex protocol.


> but they can also go overboard by building a lot of custom functionality where any standard library could have been used.

Well, I am kind of recommending using standard libraries. And for the servlet implementation I am using embedded Tomcat, as I mentioned in another comment. What I am not doing is generating gratuitous scaffolding of a dozen packages and innumerable classes because that is the "best practice".


I’m a little confused. Why is creating a new Java object via “new” a sin? After all, it’s right there in Chapter 1 of any Java tutorial.


Just to clarify, it is not my opinion. It is the groupthink of enterprise Java programming, where reading a Java tutorial itself would be an obscure thing. Everything has to be looked at from a "framework" perspective. The framework says 'new' is bad, so it is bad; dependencies have to be constructor/setter injected by a DI container, so that's how it has to be.


No framework says `new` is bad. Feel free to grep for new in any framework application. But if one uses DI, then do use it for classes that ought to be injected. Inside methods, of course, one can and does use `new` many times over.


In my experience most DI just interferes with being able to use the IDE to track down instantiations. I've worked on projects that basically have these fancy runtime things to answer questions that could be answered by the IDE if it weren't so obscured. I remember one project we had a fancy thing to generate a graphviz graph, and it was like neat, but we could just use find all references if we just called new.

The dumb thing is that most of the time only one type is ever injected. It's all hypothetical flexibility, which has a cost but no benefit.


> I remember one project we had a fancy thing to generate a graphviz graph, and it was like neat, but we could just use find all references if we just called new.

Ha. Calling `new` would either be an absolute enterprise Java sin or obscure arcana. Some people are quite proud of converting compile time errors to runtime exceptions. Because, you know, "best practices" and all.


It definitely has a benefit at scale. I’m not sure what application you were developing, but I’ve lost count of the number of times a single implementation was changed into an interface because the client wanted the n+234th little change in this special star constellation. With DI you don’t have to write any more code, and you can even use different implementations per environment (@Profile), so this is not accidental complexity in most cases. Sure, if you need a 100-line web server that prints hello world it is overkill, but: the correct tool for the job.


"static" in java DI can refer to setting global variables with singleton instances. E.g. java logging libraries usually do this so that you don't have to DI everywhere you want to log. In Go, some packages do this like flag and log. In Rails, this is so common that it replaces DI entirely, but I usually didn't feel like rails suffered from it.

I think what you are referring to is just manually doing DI. I.e. you defined constructors that accept dependencies and then call them all in a main function. I think this is tolerable if your codebase is structured for it. In typical java codebases, it gets ugly really fast. IMO this is caused by a general proliferation in the number of classes (due to class-per-file among other reasons), as well as a tendency to never use "static" DI. As an extreme example, if you needed to inject a logging dependency, then almost all code would need to be part of the DI graph. In a typical web backend, you might DI the sql connection pool. This causes basically all code to need DI since it either uses sql or has a transitive dependency that uses sql. IMO injecting the connection pool is not useful since it's not useful to write tests where you inject anything other than a real sql connection.


Ah, that explains the confusion. Yeah, I didn't mean static as in the "static" keyword in Java / C++, but as in specified in the statically-typed code. Defining constructors that accept dependencies -- exactly. Ah yeah, I can see how Java exacerbates things here with the one-class-per-file rule -- ugh.

Go doesn't require one class (well, type or struct in Go) per file, and has much more flexibility in how you build packages as a result. I think it's a good thing that dependencies like the logger and the database are passed around explicitly: I've learned the hard way that "explicit is better than implicit" even when it means a bit more boilerplate.


Dependency injection does not have to be dynamic, it can totally be done at compile time. Boost DI is an example: https://boost-ext.github.io/di/


There’s plenty of Java frameworks that are compile time too. Quarkus, Avaje Inject, etc all do their wiring at compile time.


This. It’s really exhausting to read this never-ending wheel reinvention. Sure, anyone can use simpler non-Spring frameworks, and other “non standard” frameworks and libraries, for 1/10 or 1/100 of the functionality, and get 10-100x the bugs and much less or zero support. But we need netty! And then when you add thread pools, jdbc, logging, etc? Yep, you’ve reimplemented Spring. Just use Spring, spend the time to learn it and reap the rewards.


As someone who dealt with a ton of Spring in the recent past, I completely disagree.

First of all, thread pools are part of the standard library. Spring adds little to no value on top of it.

Second, reinventing some of that stuff is absolutely worthwhile, because Spring's library design/implementation is not very good.

Finally, when I had the opportunity to start a new Java project, I opted to not use Spring. I finally had a server that started up fast, took less code than a Spring project, was easily navigable in an IDE, and whose code was generally easier to follow. It was also easier to write tests for.

One thing I learned is that people seem to underestimate just how thin Spring's abstractions are over stuff in the library, servlets, etc. Most of what Spring does is wrap things in a bean interface so they can be used with DI (which is something I’ve never found any value in).


Well, you are missing one of the great features of the Spring framework: converting compile time errors into runtime exceptions.

Jokes apart, you are absolutely right about non-Spring based services. I did the same using plain Java + embedded Tomcat for some services. No cargo-cult-like endless decorative packages and classes. Exactly the same result as you observed: less code, fast to start and vastly improved error management.


> Converting compile time errors into runtime exceptions.

Heh. Stealing this. Short, sharp, undeniable.

I usually say something snarky like "Spring is an exception obfuscation framework" or "...flow of control obfuscation framework".


> Heh. Stealing this...

Yeah, please popularize it. In my case I am unable to make management see reason. If more devs become vocal about it and make it a trend, it will be a good thing to happen.


Same here. I've gotten rid of all that stuff, it's just layers and layers of indirection that contribute nothing.

One rule that has also helped me a lot to keep my code clean and make it easier to debug is to fail as much as possible in the constructor. So when you call new MyThing(), you will either get a usable object or it'll throw an exception. Further method calls are expected to work. Of course this is not doable for everything, but it sure helps keep the methods clean and not have them throw various exceptions.
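A small sketch of that fail-in-the-constructor rule (hypothetical example):

```java
import java.net.URI;

public class FailFast {
    public static final class Endpoint {
        private final URI uri;

        // Validate everything up front: you either get a usable object,
        // or construction throws. Later calls can assume a valid state.
        public Endpoint(String raw) {
            URI parsed = URI.create(raw); // throws IllegalArgumentException on garbage
            if (parsed.getHost() == null) {
                throw new IllegalArgumentException("endpoint needs a host: " + raw);
            }
            this.uri = parsed;
        }

        public String host() { return uri.getHost(); } // no surprises expected here
    }

    public static void main(String[] args) {
        System.out.println(new Endpoint("https://example.com/api").host()); // example.com
        try {
            new Endpoint("not a url");
        } catch (RuntimeException e) {
            System.out.println("rejected at construction: " + e.getMessage());
        }
    }
}
```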


Could you share your Spring/Spring Boot alternatives? Are they Java based? I'm doing backend stuff with Spring Boot and I would like to test alternatives. Spring boot is not that difficult to work with, but I would like to test a "simpler" solution.


I would recommend Quarkus, MicroProfile, or Micronaut.


I've had similar experiences. A few years ago, I wrote a small service in plain Java, no frameworks as part of a quick change to improve performance. It worked and we all moved on. Later it was converted to Spring Boot and it slowed way down.


> But we need netty! And then when you add thread pools, jdbc, logging, etc

java.util.concurrent has thread pools, and pretty damn decent ones at that; JDBC is part of the very standard JDK; logging is part of java.util.logging. Why do you need netty (which is not part of Spring either way)?

In over 23y of working with Java, I have never needed spring.


Agreed. Stuff like this feels like magic for magic’s sake, and as someone who has had to operate services that use these DI frameworks, they are a big pain.


How about learning about the tools you use beforehand?

You don’t get into a car without any knowledge about it and blame it for being magical.


I read all of the documentation for the service that I operate--per my other comment (https://news.ycombinator.com/item?id=29973282), often things aren't well-documented and it's crucial to be able to look at the source code to figure out what something does (e.g., how is a configuration parameter used? what are its valid permutations?). However, when the source code is obscured by gratuitous complexity, it imposes a high cost on the user, and in the case of DI frameworks that gratuitous complexity comes with no discernible benefit (a car offers me something of value to justify its learning curve). Personally I'm of the opinion that a person shouldn't have to be a seasoned Java developer to use so many tools that are implemented in Java (or any other language, for that matter).


What does “operate” mean in this context? I’m genuinely curious why one should consider this comment to be anything other than low-effort flame bait.


Operate means “run the service”. If you need to configure things which aren’t well documented, it’s nice to be able to look at the code, but the DI frameworks obscure the code path. This is pretty straightforward; no idea why this would seem like flame bait—I didn’t even realize this was something people held deep emotional attachments to.


Plumbing the construction of your object graph manually does not have a particularly high cost/benefit - most of your services are singletons that depend on each other, and it's already clear enough which ones depend on which others without repeating yourself. A very basic "here's a bag of classes that depend on each other, wire them all together and then let me pull out the instances by type" is often worthwhile for avoiding all that boilerplate, even if it does break the rules of the language a little. Something like Picocontainer or even Guice is pretty good IME.
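That "bag of classes, wire them together, pull out by type" helper really can be tiny. A toy sketch (hypothetical, not production code: first constructor wins, no cycles, no scopes) that resolves by constructor parameter types:

```java
import java.lang.reflect.Constructor;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TinyContainer {
    private final Set<Class<?>> registered = new HashSet<>();
    private final Map<Class<?>, Object> singletons = new HashMap<>();

    public TinyContainer register(Class<?>... types) {
        registered.addAll(Arrays.asList(types));
        return this;
    }

    // Resolve by type: reuse the singleton if present, otherwise pick the
    // first constructor and recursively resolve each parameter type.
    @SuppressWarnings("unchecked")
    public <T> T get(Class<T> type) {
        Object existing = singletons.get(type);
        if (existing != null) return (T) existing;
        if (!registered.contains(type)) throw new IllegalStateException("not registered: " + type);
        try {
            Constructor<?> ctor = type.getDeclaredConstructors()[0];
            Class<?>[] params = ctor.getParameterTypes();
            Object[] args = new Object[params.length];
            for (int i = 0; i < params.length; i++) args[i] = get(params[i]);
            T instance = (T) ctor.newInstance(args);
            singletons.put(type, instance);
            return instance;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot construct " + type, e);
        }
    }

    // Demo graph: Service depends on Repo.
    public static class Repo { public String name() { return "repo"; } }
    public static class Service {
        final Repo repo;
        public Service(Repo repo) { this.repo = repo; }
        public String describe() { return "service->" + repo.name(); }
    }

    public static void main(String[] args) {
        TinyContainer c = new TinyContainer().register(Repo.class, Service.class);
        System.out.println(c.get(Service.class).describe()); // service->repo
    }
}
```

Libraries like Picocontainer or Guice are essentially this idea plus the edge cases (ambiguity, cycles, scopes, interfaces) handled properly.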


Not using Java but generally in OO languages I ended up passing forward dependencies in grouped and themed classes like LoginDependencies, InboxDependencies etcetera. Everything under an Interface so you just mock whatever you need. Never ran into serious issues.

Of course logging might be static but “true” dependencies like networking classes never are.


I want lightweight, and compile time. But I'll take compile time only if need be.

In terms of lightweight, I have never needed to use the @Alternative binding [0]. Nearly all of my needs are met by being able to define "this is a singleton, this is a dependency that you should always inject a new instance of, and this is a property."

But it's surprisingly hard to find DI that limits itself like that. The DI in Micronaut and Quarkus are probably the closest to my ideal. Compile time, and only implement a subset of CDI etc.

[0]: https://netbeans.apache.org/kb/docs/javaee/cdi-validate.html


Here is the talk - good stuff: Dead-Simple Dependency Injection

https://www.youtube.com/watch?v=ZasXwtTRkio


Replacing DI with Free Monads is not what I'd call simple. It's not even possible in a type safe way in most(?) languages.


I really like the Go-like simplicity of these libraries, without the cursed architecture astronautics from the 2000s.

In general it's interesting times for Java. With all of language improvements from Kotlin/Scala, and upcoming Go-like concurrency it really feels like a renaissance for the language.


"upcoming Go-like concurrency" can you elaborate on this?

Java will have CSP at the language level? I find it hard to believe.


Project Loom’s virtual threads (without dedicated OS threads and stacks), which will hopefully relieve devs from manually doing a CPS transform of procedural code into chains of futures for thread pool workers to complete.


> it really feels like a renaissance for the language.

So is this the 2nd or 3rd Java renaissance?


I'd say that first Java renaissance is Java 5 with generics. Second Java renaissance is Java 8 with lambdas. IMO third Java renaissance will be with re-introduction of green threads.


I’m actually looking forward to Valhalla more than Loom I think.



Thanks! Macroexpanded:

The Unbearable Lightness of Java - https://news.ycombinator.com/item?id=20063945 - May 2019 (6 comments)

Jodd – The Unbearable Lightness of Java - https://news.ycombinator.com/item?id=9278704 - March 2015 (108 comments)

Java lightweight framework - jodd - https://news.ycombinator.com/item?id=4084498 - June 2012 (33 comments)


I wonder if someone can recommend a lightweight http server library? I like Javalin but it's based on Jetty, which is a fully JavaEE compliant framework and includes support for things like OSGI which I don't need. With the whole Log4j situation, I'm re-evaluating some of the libraries I've previously relied on.


If you have Java 11+ I presume you can't get any simpler than a standard library module:

https://docs.oracle.com/en/java/javase/17/docs/api/jdk.https...


This comes from way before SE 11. I was using it in 7. The doc says it comes from 6. https://docs.oracle.com/javase/7/docs/jre/api/net/httpserver...
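For the curious, a minimal sketch against that built-in module (path and response are arbitrary; port 0 asks the OS for a free ephemeral port):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class MiniServer {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // default executor; pass your own thread pool for real use
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer s = start(0);
        try (var in = java.net.URI.create(
                "http://localhost:" + s.getAddress().getPort() + "/hello").toURL().openStream()) {
            System.out.println(new String(in.readAllBytes())); // hello
        }
        s.stop(0);
    }
}
```

No dependencies, a handful of lines - though, as noted downthread, it's a long way from a production-grade server.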


Oh neat. I erroneously thought it might have come in part with the http module in 11.


That server is nowhere near production ready.


Vert.x

It's built on top of Netty but has some additional niceties that make it more practical to use. It's also one of the fastest things out there: https://www.techempower.com/benchmarks/#section=data-r18&hw=...


We looked around since we wanted to move off Tomcat and decided on Netty: https://netty.io/

I'm not on the engineering team so can't speak to the cost/benefit, but it seems to have been a pretty successful transition.


EDIT - it seems maybe I was wrong here

Netty copies the response body when sending to each client, so it's not as lightweight as I'd hoped, I've found. For streaming large response bodies, it does not work well. I haven't found a good Java alternative yet (probably will switch to C++ and uWS...)


Netty core is about as close to the metal as networking gets on the JVM. Its abstractions are built over a zero-copy capable byte buffer, and there is generally a lot of care taken to avoid copying where possible. I haven't used the websocket codec, but I'm sure the maintainers would welcome a patch that removes unnecessary copying.


Here's what I was thinking of, under "Vert.x Memory Usage": https://www.tikalk.com/posts/2018/04/30/vertx-memory-usage-w...

Quote: "But how does Netty do things so fast ? One of the reasons is that it is using native memory pool to store network buffers. If you did some file reading or network action with Vert.x you probably used io.vertx.core.buffer.Buffer class. This class is actually a wrapper around Netty io.netty.buffer.ByteBuf class. Why am I telling you all this ? Let assume that you have a service where clients are downloading 20Mbyte files. Netty will have to allocate at least 20Mbyte for every connected client."

Although this may be an issue with how Vert.x is using Netty. I have to dig into it more.


FWIW, here is a fairly minimal example[0] of broadcasting over websockets reusing the same buffer.

I'm not very familiar with vert.x (not a netty expert either), but I think the author of that article is ascribing blame to the wrong place.

[0]: https://github.com/juggernaut/netty-websocket-broadcast-exam...


Not using websockets, but thanks! I have to look into it again. Right now the service is working fine so I haven't been motivated to work on it again. (in-memory CDN based on vertx-web + Caffeine + custom on-disk LRU cache)


> zero-copy capable byte buffer

This is similar to .NET Standard 2.1 Span<T>?



Very Sinatra like: https://sparkjava.com/


Also not actively maintained, sadly.


https://github.com/NanoHttpd/nanohttpd

A bit outdated and not actively maintained, but it's truly small.

If you like async stuff, take a look at Helidon.


If you’re on Kotlin, consider http4k

It can use netty, undertow, and others under the hood


OkHTTP or netty.


Okhttp is a client, but a good one.


My bad, absolutely, not sure where my head was.


Knock-Knock who's there? ... Long Pause ... Java!


Looks nice and clean. It does seem to be maintained by a single person (at least the JSON subproject [1]), which will be a major turn-off for adoption by an "enterprise".

[1] https://github.com/oblac/jodd-json


First thoughts: the JSON subproject seems to be very unprincipled. The documentation covers general usage through a few examples, but it doesn't really give you a good idea of the semantics of the library. It appears to scan your objects using reflection for things that it determines to be fields (what are the criteria?), but for some reason does not serialize collection types by default because "This plays well with some 3rd party libraries (like ORM) where collections represent lazy relationships". The library is configured by modifying the state of global objects, which is just a disaster waiting to happen.


"just a disaster waiting to happen" - if I was the maintainer, I would appreciate a test that demonstrates the failure scenario.


This is great. Java BADLY needs to shed weight and verbosity and in general just catch up with the times.

Having used not only traditional Java and Spring (including "modern" Spring boot) but also alternatives, like eg DropWizard, I MUCH prefer the alternatives.

DropWizard in particular seems to me a more neutral collection of some of the best tools for each job, and it's both simple and easy.

Spring is just Spring, Spring and more Spring, and while it's "easy", it's not simple- there's a lot of magic.

I'm glad to finally be in a team where people are open minded enough to look outside the Spring bubble. TBH these days, we don't even use Java anymore, we use Kotlin + Arrow which is amazing.


Java’s verbosity/abstraction problem is unfortunately not due to the language or libraries at this point as much as it is the programmers - the hardest thing to change. You need only to look around this thread to see Java programmers who can’t imagine writing a useful application without a DI framework that supports runtime implementation swapping, or aspect oriented programming.


Agreed! I lost patience with the backwards Java community long ago, there's no arguing with them, they refuse to even consider trying anything other than what they're used to, so what's the point. Better to move on and leave them to it.


One thing that gives me hope is that the actual language designers have the right view on it, and I think are guiding the community in the right direction without explicitly condemning the way a lot of things are currently done (which would be a political nightmare).

For instance, records are a step away from mindless getters and setters - but rather than just add the syntactic sugar of properties, they introduced immutability as well.


For sure, it's great that people like Josh Bloch, Brian Goetz (and many others) have been very aware of and trying to address all the problems and endorse (and enforce) solutions to them (for decades at this point)

but as much as I hate to say it it feels like it's a bit too little too late :/

and there's still this bizarre situation where seemingly most of the Java community is still living in the 90s

oh well ¯\_(ツ)_/¯


Looks good.

Is there a "Getting Started" guide or a list of examples anywhere? I'm on mobile so may have missed them. All I could see were links to the separate component docs.


It seems every project has its own documentation (powered by GitBook), for example https://lagarto.jodd.org/, https://http.jodd.org/ etc.


Nothing can be light forever unless it is opinionated


Looks nice, and reminds me of the ecosystem around Quarkus. I have two questions:

1) Is this compatible with GraalVM? I'm mostly asking this out of curiosity.

2) Is it using "modern" Java features? Records, pattern matching, optionals.


A lot of this looks like functionality offered by other, more popular libraries. Jodd JSON looks functionally (and syntactically!) similar to Jackson, but Jackson has a lot more users:

https://mvnrepository.com/artifact/org.jodd/jodd-json

https://mvnrepository.com/artifact/com.fasterxml.jackson.cor...


Literally every common enterprise problem has a java library over a decade old. Jodd seems to be aiming to be lightweight and fast, not solve new problems necessarily.


Very impressive that all of this is maintained by a single person in their free time! His blog (only Serbian, sorry) is at https://oblac.rs/


Speaking of lightness: is it just me, or is the Java folder-per-namespace thingy a huge turn-off when it comes to lightness?


Why exactly? I think the two concepts are meaningfully merged. For simple programs you don't need multiple namespaces, so you have a single folder; for more complex ones, tree hierarchies are good for both namespaces and folders.


You're not forced to use packages; there's the anonymous (default) package for simple small apps.


Is this the true Java framework we were promised Spring would be?


>Book book2 = new JsonParser().parse(json, Book.class);

why not: JsonParser().parse<Book>(json)


the `<Book>` generic type doesn't translate to anything at run time, so you cannot actually parse the JSON out as a Book, unless you already knew it was going to be a Book. The parse() method cannot be generic over all possible inputs as is - unless the user also passes in the `Book.class` parameter!


You can use something like `new JsonParser<Book>(){}.parse(json)`

Not saying that's a good idea, though.
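The reason the `{}` matters: erasure removes type arguments from instances, but an (anonymous) subclass's *generic superclass* is kept in the class file and is readable by reflection. A minimal sketch of that "super type token" mechanism (my own Token class, not Jodd's API):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class TypeToken {
    // Erasure removes type arguments from instances, but the generic
    // superclass of a subclass survives in the class file metadata.
    public abstract static class Token<T> {
        public final Type type;
        protected Token() {
            ParameterizedType sup = (ParameterizedType) getClass().getGenericSuperclass();
            this.type = sup.getActualTypeArguments()[0];
        }
    }

    public static class Book {}

    public static void main(String[] args) {
        // The trailing {} creates an anonymous subclass of Token<Book>,
        // which is what lets the constructor recover Book at runtime.
        Token<Book> token = new Token<Book>() {};
        System.out.println(token.type); // class TypeToken$Book
    }
}
```

This is the same trick behind Jackson's TypeReference and Guava's TypeToken.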


Type erasure.


JsonParser() is Scala syntax; not sure how you'd accomplish that in Java. And as for parse<Book>, Java's generics are erased at compile time.


The parent just missed the new keyword, and generics can work based on return type as well with the above syntax. It has nothing to do with erasure; we are at compile time.


I have used https://sparkjava.com/ when I still did Java some years back. It was as thin as they come and a real joy to get started and going.



