> The bottom line is, no one ever really used inheritance that much anyway
If you think that, you have no idea how much horrible code is out there. Especially in enterprise land, where deadlines are set by people who get paid by the hour. I once worked on a java project which had a method that called a method that called a method, and so on. Usually, the calls were via some abstract interface with a single implementor, making it hard to figure out what was even being executed. But if you kept at it, there were 19 layers before the chain of methods did anything other than call the next one. There was a separate parallel path of methods that also went 19 layers deep for cleaning up. But if you followed it all the way down, it turned out the final method was empty. 19 methods + adjacent interface methods, all for a no-op.
> The bottom line is, most people don't have any original opinions at all and are just going with whatever seems to be popular.
Most people go with the crowd. But there's a reason the crowd is moving against inheritance. The reason is that inheritance is almost always a bad idea in practice. And more and more smart people talking about it are slowly moving sentiment.
Bit by bit, we're finally starting to win the fight against people who think pointless abstraction will make their software better. Thank goodness - I've been shouting this stuff from the rooftops for 15+ years at this point.
I don't think inheritance is always bad - sometimes it's a useful tool. But it was definitely overused, and composition and interfaces work much better for most problems.
Inheritance really shines when you want to encapsulate behaviour behind a common interface and also provide a standard implementation.
E.g.: I once wrote an RN app which talked to ~10 vacuum robots. All of these robots behaved mostly the same, but each was different in a unique way.
E.g. 9 robots returned to station when the command "STOP" was sent; one would just stop in place. Or some robots would rotate 90 degrees when a "LEFT" command was sent, others only 30 degrees.
We wrote a base class which exposed all the needed commands, and each robot had an inherited class which overrode the parts which needed adjustment (e.g. sending LEFT three times so it's also 90 degrees, or sending "MOVE TO STATION" instead of "STOP").
> I don't think Inheritance is always bad - sometimes it's a useful tool.
I can only think of one or two instances where I've really been convinced that inheritance is the right tool. The only one that springs to mind is a View hierarchy in UI libraries. But even then, I notice React (& friends) have all moved away from this approach. Modern web development usually makes components be functions. (And yes, javascript supports many kinds of inheritance. Early versions of react even used them for components. But it proved to be a worse approach.)
I've been writing a lot of rust lately. Rust doesn't support inheritance, but it wouldn't be needed in your example. In rust, you'd implement that by having a trait with functions (+ default behaviour), then have each robot type implement the trait. E.g.:
  trait Robot {
      fn stop(&mut self) { /* default behaviour */ }
  }

  struct BenderRobot;

  impl Robot for BenderRobot {
      // If this is missing, we default to Robot::stop above.
      fn stop(&mut self) { /* custom behaviour */ }
  }
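To make that concrete, here's a minimal usage sketch (RoombaRobot is a made-up second robot that doesn't override anything):

  struct RoombaRobot;

  // No override here, so RoombaRobot inherits the trait's default stop().
  impl Robot for RoombaRobot {}

  fn main() {
      let mut a = RoombaRobot;
      let mut b = BenderRobot;
      a.stop(); // runs Robot's default behaviour
      b.stop(); // runs BenderRobot's custom override
  }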
> The only one that springs to mind is a View hierarchy in UI libraries.
I'd like to generalize that a little bit and say: graph structures in general. A view hierarchy is essentially a tree, where each node has a bunch of common bits (tree logic) and a bunch of custom bits (the actual view). There are tons of "graph structures" that fit that general pattern: for instance, a data pipeline where data comes in on the left, goes out on the right, and in the middle passes through a bunch of transformations linked together in some kind of DAG. Inheritance is great for this: you just have your nodes inherit from some kind of abstract "Node" class that handles the connection and data flow, you implement your complex custom behaviors however you want, and it's very easy to make new ones.
I'm very much in agreement that OOP inheritance was horrendously overused in the 90s and 00s (especially in enterprise), but for some stuff, the model works really well - and works much better than e.g. sum types or composition or whatever for these kinds of things. Use the right tool for the right job, that's the central point. Nothing is one-size-fits-all.
> But what do those functions return? Oh look, it's DOM nodes, which are described by and implemented with inheritance.
Well of course. React builds on what the browser provides. And the DOM has been defined as a class hierarchy since forever. But react components don’t inherit from one another. If the react devs could reinvent the DOM, I think it would look very different than it looks today.
This is starting to look a lot like C++ class inheritance. Especially because traits can also inherit from one another. However, there are two important differences: First, traits don't define any fields. And second, BenderRobot is free to implement lots of other traits if it wants, too.
If you want a real world example of this, take a look at std::io::Write[1]. The Write trait requires implementors to define two methods: write(data) and flush(). It then has default implementations of a bunch more methods built on top of write and flush - for example, write_all(). Implementors can use the default implementations, or override them as needed.
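To illustrate, a minimal sketch (CountingWriter is a made-up type): implement the two required methods, and the default methods come along for free.

  use std::io::{self, Write};

  // A writer that just counts bytes instead of storing them anywhere.
  struct CountingWriter {
      count: usize,
  }

  impl Write for CountingWriter {
      fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
          self.count += buf.len();
          Ok(buf.len()) // report the whole buffer as "written"
      }

      fn flush(&mut self) -> io::Result<()> {
          Ok(()) // nothing is buffered, so nothing to do
      }
  }

  fn main() -> io::Result<()> {
      let mut w = CountingWriter { count: 0 };
      w.write_all(b"hello")?; // write_all() is a default method built on write()
      assert_eq!(w.count, 5);
      Ok(())
  }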
How does one handle cases where fields are useful? For example, imagine you have functionality that fetches a value and then caches it, so that future calls don't have to fetch it again (it's resource-heavy, etc.).
  // in Java because it's easier for me
  public interface HasMetadata {
      default Metadata getMetadata() {
          // this doesn't work, because interfaces don't have fields
          if (this.cachedMetadata == null) {
              this.cachedMetadata = fetchMetadata();
          }
          return this.cachedMetadata;
      }

      // relies on implementing class to provide
      Metadata fetchMetadata();
  }
But then you have the getters, setters, and field on every class that implements the functionality. It works, sure, it just feels off to me. This is code that will be the same everywhere, and you're pulling it out of the common class and implementing it everywhere.
But if there's a lot of classes that implement the same thing, then not duplicating code makes sense. And saying "it's an implementation detail" leads to having the same code in a bunch of different classes. It feels very similar to the idea of default implementations to me; when the implementation will be the same everywhere, it makes sense to have it in one place.
So to be clear about your example: You have a whole lot of different - totally distinct - types of things, which all need to have the same logic to cache HTTP requests? Can you give some examples of these different types you're creating? Why do you have lots of distinct types that need exactly the same caching logic?
It sounds like you could solve that problem in a lot of different ways. For example, you could make an HTTP client wrapper which internally cached responses. Or make a LazyResource struct which does the caching - and use that in all those different types you're making. Or make a generic struct which has the caching logic. The type parameter names the special individual behaviour. Or something else - I don't have enough information to know how I'd approach your problem.
Can you describe a more detailed example of the problem you're imagining? As it is, your requirements sound random and kind of arbitrary.
From a very modified version of something I was working on recently, but with the stuff I couldn't do actually done here (and non-functional code because of that, but it shows the idea):
  public interface MetadataSource {
      Metadata metadata = null;

      default Metadata getMetadata() {
          if (metadata == null) {
              metadata = fetchMetadata();
          }
          return metadata;
      }

      // This can be relatively costly
      Metadata fetchMetadata();
  }

  public class Image implements MetadataSource {
      public Metadata fetchMetadata() {
          // goes to externally hosted image to fetch metadata
      }
  }

  public class Video implements MetadataSource {
      public Metadata fetchMetadata() {
          // goes to video hosting service to get metadata
      }
  }

  public class Document implements MetadataSource {
      public Metadata fetchMetadata() {
          // goes to database to fetch metadata
      }
  }
Each of the above has a completely different way to fetch its metadata (e.g., Title and Creator), and each of them has different characteristics related to the cost of getting that data. So, by default, we want the interface to cache the result so that:
1. The thing that _has_ the metadata only needs to know how to fetch it when it's asked for (implementation of fetchMetadata), and it doesn't need to worry about the cost of doing so (within limits of course)
2. The things that _use_ the metadata only need to know how to ask for it (getMetadata) and can assume it has minimal cost.
3. Neither one of those needs to know anything about it being cached.
I had a case recently where I needed to check "does this have metadata available" separate from "what is the metadata". And fetching it twice would add load.
Here's my take on implementing this in rust. I made a trait for fetching metadata, that can be implemented by Image, Video, Document, etc:
  trait MetadataSource {
      fn fetch_metadata(&self) -> Metadata;
  }

  impl MetadataSource for Image { ... }
  impl MetadataSource for Video { ... }
  impl MetadataSource for Document { ... }
And a separate object which stores an image / video / document alongside its cached metadata:
  struct ThingWithMetadata<T> {
      obj: T, // Assuming you need to store this too?
      metadata: Option<Metadata>,
  }

  impl<T: MetadataSource> ThingWithMetadata<T> {
      // Takes &mut self so the fetched value can be stored in the cache.
      fn get_metadata(&mut self) -> &Metadata {
          if self.metadata.is_none() {
              self.metadata = Some(self.obj.fetch_metadata());
          }
          self.metadata.as_ref().unwrap()
      }
  }
It's not the most beautiful thing in the world, but it works. And it'd be easy enough to add more methods, behaviour and state to those metadata sources if you want. (E.g. if you want Image to actually load / store an image or something.)
In this case, it might be even simpler if you made Image / Video / Document into an enum. Then fetch_metadata could be a regular function with a match expression (switch statement).
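A rough sketch of that enum version, with stub types standing in for the real ones:

  struct Metadata { title: String }
  struct Image;
  struct Video;
  struct Document;

  enum Source {
      Image(Image),
      Video(Video),
      Document(Document),
  }

  // One plain function with a match replaces the trait plus three impls.
  fn fetch_metadata(src: &Source) -> Metadata {
      match src {
          Source::Image(_) => Metadata { title: "from the image host".into() },
          Source::Video(_) => Metadata { title: "from the video service".into() },
          Source::Document(_) => Metadata { title: "from the database".into() },
      }
  }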
If you want to be tricky, you could even make struct ThingWithMetadata also implement MetadataSource. If you do that, you can mix and match cached and uncached metadata sources without the consumer needing to know the difference.
Isn't this essentially the generic typestate pattern in Rust? In my view there is a pretty obvious connection between that particular pattern and how other languages implement OO inheritance, though in all fairness I don't think that connection is generally acknowledged.
(For one thing, it's quite obvious to see that the pattern itself is rather anti-modular, and the ways generic typestate is used are also quite divergent from the usual style of inheritance-heavy OO design.)
In this example, ThingWithMetadata does the caching. image.fetch_metadata fetches the image and returns it. It’s up to the caller (in ThingWithMetadata) to cache the returned value.
But part of the goal is to not need the caller to cache it. Nor have the class that knows how to fetch it need to know how to cache it either. The responsibility of knowing how to cache the value is (desired to be) in the MetadataSource interface.
The rule is that you can't cache a value in an interface, because interfaces don't store data. You need to cache a value in a struct somewhere. This implementation wraps items (like images) in another struct which stores the image, and also caches the metadata. That's the point of ThingWithMetadata. Maybe it should instead be called WithCachedMetadata - e.g. WithCachedMetadata<Image>.
You can pass WithCachedMetadata around, and consumers don't need to understand any of the implementation details. They just ask for the metadata and it'll fetch it lazily. But it is definitely more awkward than inheritance, because the image struct is wrapped.
As I said, there's other ways to approach it - but I suspect in this case, using inheritance as a stand-in for a class extension / mixin is probably always going to be your favourite option. A better approach might be for each item to simply know the URL of its metadata, and then have your networking code handle caching on behalf of the whole program.
It sounds like you really want to use mixins for this - and you're proposing inheritance as a way to do it. The part of me which knows ruby, obj-c and swift agrees with you. I like this weird hacky use of inheritance to actually do class mixins / extensions.
The javascript / typescript programmer in me would do it using closures instead:
> The rule is that you can't cache a value in an interface, because interfaces don't store data.
Right, but the start of where I jumped into this thread was about the fact that there are places where fields would make things better (specifically in relation to traits, but interfaces, too). And then proceeding to discuss a specific use case for that.
> A better approach might be for each item to simply know the URL to their metadata.
Not everything is coming from a URL and, even when it is, it's not always a GET/REST fetch.
> but I suspect in this case, using inheritance as a stand-in for a class extension / mixin is probably going to always be your most favorite option
Honestly, I'd like to see Java implement something like a mixin that allows adding functionality to a class, so the class can say "I am a type of HasAuthor" and everything else just happens automatically.
I don't see how that solves the problem. It seems like Video will need to keep its own copy of CachedMetadataSource, which points back to itself, and go through that to access its metadata in the getMetadata implementation it makes available to its users. At that point, it might as well just cache the value itself without the extra hoops. The difficult part isn't caching the value, it's preventing every class that implements MetadataSource from having to do so.
It would be the other way around. You wouldn't pass around the underlying suppliers directly, you'd wrap them. But if you must have state _and_ behavior, then `abstract class` is your friend in Java (while in Scala traits can have fields and constructors, so there is no problem).
The commenter used inheritance and thought it was fine. Probably not necessary to re-write in Rust just to be able to say that it doesn't use inheritance while being functionally the same thing.
> And yes, javascript supports many kinds of inheritance
Funny you mention it, since JavaScript has absolutely no concept of contracts, which is one of the most important side-effects of inheritance. Especially not at compile time, but even at runtime you can compose objects willy-nilly, pass them anywhere, and the only way to test if they adhere to some kind of trait is calling a method and hoping for the best.
At least that had been the case till ES6 came around, but good luck finding anyone actually using classes in JavaScript. Mainly because it adds near-zero benefits, basically just the ability to overwrite method behavior without too much trickery.
Inheritance is not the only way to share behavior across different implementations — it's just the only way available in the traditional 1990s crop of static OOP languages like C++, Java and C#.
There are many other ways to share an implementation of a common feature:
1. Another comment already mentioned default method implementations in an interface (or a trait, since the example was in Rust). This technique is even available in Java (since Java 8), so it's as mainstream as it gets.
The main disadvantage is that you can have just one default implementation for the stop() method. With inheritance you could use hierarchies to create multiple shared implementations and choose which one your object should adopt by inheriting from it. You also cannot associate any member fields with the implementation. On the bright side, this technique still avoids all the issues with hierarchies and single and multiple inheritance.
2. Another technique is implementation delegation. This is basically just like using composition and manually forwarding all methods to the embedded implementer object, but the language has syntax sugar that does that for you. Kotlin is probably the most well-known language that supports this feature[1]. Object Pascal (at least in Delphi and Free Pascal) supports this feature as well[2].
This method is slightly more verbose than inheritance (you need to define a member and initialize it). But unlike inheritance, it doesn't require forwarding the class's constructors, so in many cases you might even end up with less boilerplate than using inheritance (e.g. if you have multiple overloaded constructors you need to forward).
The only real disadvantage of this method is that you need to be careful with hierarchies. For instance, if you have a Storage interface (with the load() and store() methods) you can create an EncryptedStorage interface that wraps another Storage implementation and delegates to it, but not before encrypting everything it sends to the storage (and decrypting the content on load() calls). You can also create a LimitedStorage wrapper that enforces size quotas, and then combine both LimitedStorage and EncryptedStorage. Unlike traditional class hierarchies (where you'd have to implement LimitedStorage, EncryptedStorage and LimitedEncryptedStorage), you've got a lot more flexibility: you don't have to reimplement every combination of storage, and you can combine storages dynamically and freely.

But let's assume you want to create ParanoidStorage, which stores two copies of every object, just to be safe. The easiest way to do that is to make ParanoidStorage.store() call wrapped.store() twice (a sketch of this follows after this list). The thing you have to keep in mind is that this doesn't work like inheritance: if you wrap your objects in the order EncryptedStorage(ParanoidStorage(LimitedStorage(mainStorage))), ParanoidStorage will call LimitedStorage.store(). This is unlike the inheritance chain EncryptedStorage <- ParanoidStorage <- LimitedStorage <- BaseStorage, where ParanoidStorage.store() will call EncryptedStorage.store(). In our case this is a good thing (we can avoid a stack overflow), but it's important to keep this difference in mind.
3. Dynamic languages almost always have at least one mechanism that you can use to automatically implement delegation. For instance, Python developers can use metaclasses or __getattr__[3], while Ruby developers can use method_missing or Forwardable[4].
4. Some languages (most famously Ruby[5]) have the concept of mixins, which let you include code from other classes (or modules in Ruby) inside your classes without inheritance. Mixins are also supported in D (mixin templates). PHP has traits.
5. Rust supports (and actively promotes) implementing traits using procedural macros, especially derive macros[6]. This is by far the most complex but also the most powerful approach. You can use it to create a simple solution for generic delegation[7], but you can go far beyond that. Using derive macros to automatically implement traits like Debug, Eq, Ord is something you can find in every codebase, and some of the most popular crates like serde, clap and thiserror rely heavily on derive.
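To make point 5 concrete - the everyday derive case, where the macro writes the trait impls for you:

  // #[derive] generates the Debug, Clone and PartialEq impls at compile time.
  #[derive(Debug, Clone, PartialEq)]
  struct Point {
      x: i32,
      y: i32,
  }

  fn main() {
      let p = Point { x: 1, y: 2 };
      println!("{:?}", p.clone()); // Debug and Clone, both generated
      assert_eq!(p, Point { x: 1, y: 2 }); // PartialEq, generated
  }

And here's the promised sketch for point 2, in Rust for consistency with the rest of the thread - so it's manual composition rather than Kotlin-style delegation sugar, and the names and backup-key scheme are made up. Note that store() goes to whatever object it wraps, not to "the next class in a hierarchy":

  trait Storage {
      fn store(&mut self, key: &str, value: &[u8]);
      fn load(&self, key: &str) -> Option<Vec<u8>>;
  }

  // Stores two copies of everything in whichever Storage it wraps.
  struct ParanoidStorage<S: Storage> {
      inner: S,
  }

  impl<S: Storage> Storage for ParanoidStorage<S> {
      fn store(&mut self, key: &str, value: &[u8]) {
          self.inner.store(key, value);
          self.inner.store(&format!("{key}.backup"), value); // the second copy
      }

      fn load(&self, key: &str) -> Option<Vec<u8>> {
          self.inner.load(key)
      }
  }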
To my mind, the challenge is not "sharing behavior"; it is "sharing behavior in a way that captures human-understandable semantics and makes code easier to reason about instead of harder."
I suspect part of the problem of inheritance is that it is a way to share behavior that some humans, especially visual thinkers who understand VMTs, find easy to reason about.
In my experience verbal thinkers struggle with inheritance, because it requires jumping between levels of abstraction and they aren't thinking in terms of semantic units. I have found that books like Refactoring can help bridge the gap, but we have to identify it as a gap to be bridged and people have to want to learn this new skill.
And then on the flip side you have people who try to use it just as a way to de-dupe code, even when it doesn't reflect a meaningful semantic unit.
> In my experience verbal thinkers struggle with inheritance, because it requires jumping between levels of abstraction and they aren't thinking in terms of semantic units.
This is too dismissive of the criticism. The problem with inheritance is it makes control flow harder to understand and it spreads your logic all over a bunch of classes. Ironically, inheritance violates encapsulation - since a base class is usually no longer self contained. Implementation details bleed into derived classes.
The problem isn’t “verbal thinkers”. I can think in OO just fine. I’ve worked in 1M+ line of code Java projects, and submitted code to chrome - which last time I checked is a 30M loc C++ project. My problem with OO is that thinking about where any given bit of code is distracts me from what the code is trying to do. That makes me less productive. And I’ve seen that same problem affect lots of very smart devs, who get distracted building a taxonomy in code instead of solving actual problems.
It’s not a skills problem. Programming is theory building. OO seduces you into thinking the best theory for your software is a bunch of classes which inherit from each other, and which reference each other in some tangled web of dependencies. With enough effort, you can make it work. But it almost always takes more effort than straightforward dataflow style programming to model the same thing.
I do not believe "it makes the control flow harder to understand" is as universal as you claim.
If used badly, any flow tool (including if-statements) can be confusing. But "it can be complicated" doesn't mean we shouldn't use the tool when it is appropriate. One of the reasons I like Java Enums is that they provide much more structured guidance on what communicative inheritance looks like.
But we may also disagree on what "productive" means in the context of writing software.
The "taxonomy of code" you are dismissing is I believe what Fred Brooks describes as the "essential tasks" of programming: "fashioning of the complex conceptual structures that compose the abstract software entity".
It's not that I don't sympathize with your concern: being explicit and clear about "what the code is trying to do" is why TDD is popular among OOP programmers. But the step after "green" is "refactor", where the programmer stops focusing on what the code is trying to do and refines the taxonomy of the system that implements those tasks.
To me (as a Java programmer) inheritance is very useful for reusing code and avoiding copy-paste. There are many cases in which decorators or template methods are very useful, and in general I find it "natural" in the sense that the concepts of abstraction and specialization can be found in plenty of real world examples (animals, plants, vehicles etc).
As usual there is no silver bullet, so it's just a tool and like any other tool you need to use it wisely, when it makes sense.
As a full stack developer whose current job is mostly Java on the backend - at least for the last 8 yrs: I'm not aware of anything you would lose by switching to interfaces with default implementations over inheritance... And that's the usual argument: use composition over inheritance.
But would switching to interfaces with default implementations fix any of the complaints that people have about inheritance? In my mind, they're pretty much equivalent, so it seems to me that anything you can do with inheritance that people complain about, you could also do with interfaces and complain about it in the same way.
1. A class can be composed out of multiple interfaces, making them more like mixins/traits etc vs inheritance, which is always a singular class
2. The implementation is flat and you do not have a tree of inheritance - which was what this discussion was about. This obviously comes with the caveat that you don't combine them, which would effectively make it inheritance again.
Yeah there can be a ton of derivative and convenience methods that would either have to be duplicated in all implementations or even worse duplicated at call sites.
Call them interfaces with default implementations or super classes, they are the same thing and very useful.
> The reason is that inheritance is almost always a bad idea in practice.
It's just slightly too strong of a statement.
I'm working in a very large Spring codebase right now, with a lot of horrible inheritance abuse (seriously, every component extended a common hierarchy of classes that pulled in a ton of behavior). I suspect part of the reason is that the Spring context got out of control, and the easiest way to reliably "inject" behavior is by subclassing. Terrible.
On the other hand, inheritance is sometimes the most elegant solution to a problem. I've done this at multiple companies:
  Payment
    + PayPalPayment
    + StripePayment
Sometimes you have data (not just behavior!) that genuinely follows an IS-A relationship, and you want more than just interface polymorphism. Yes you can model this with composition, but the end result ends up being more complex and uglier.
It doesn't have to be all one or the other. But I agree, it should be mostly composition.
There used to be times when language-level composition did not exist, so inheritance was practically all you had. There used to be ugly hacks to implement mix-ins, for example, in PHP (first versions of Symfony used them and did their best to make them not ugly, but they had to devote a whole chapter on how to do them right anyway). I suspect a lot of contention comes from those times — and from the fact that even when you can do better, many folks still have the muscle memory wired to "if inheritance is the only tool you have, everything looks like a subclass".
I like languages where I can have both, and where the language authors are not trying to preach at me.
That is a great example! Abstraction is most useful when it captures the way several things are more-specific versions of a more general thing. At that point it's not just about the functionality: it communicates to the reader. Anyone coming in can now easily answer the question, "what kinds of payments exist?"
> But there's a reason the crowd is moving against inheritance.
I doubt it; the majority of code is in enterprise projects, and they do Java and C# in the idiomatic way, with inheritance.
I'm working on an Android project right now, and inheritance is everywhere!
So, sure, if you ignore all mobile development, and ignore almost all enterprise software, and almost all internal line-of-business software, and restrict yourself to what various "influencers" say, then sure THAT crowd is moving away from inheritance.
Java and C# are already a huge step up from what came before, since they at least introduce the concept of an interface as a distinct thing from a parent class. The fact that you don't notice that is proof that progress does happen, if only slowly.
Objective-C and Smalltalk were always niche languages, at least by comparison to Java and C#, and I think Smalltalk fans underestimate the value of many things.
C++ does not (or at least did not at the time) have a concept of interfaces. There was a pattern in some development communities for defining interfaces by writing classes that followed particular rules, but no first-class support for them in the language.
> C++ does not (or at least did not at the time) have a concept of interfaces. There was a pattern in some development communities for defining interfaces by writing classes that followed particular rules, but no first-class support for them in the language.
Your distinction between "first class support for interfaces" and "C++ support for interfaces" looks like an artificial one to me.
Other than not requiring the keyword "interface", what is it about the C++ way of creating an interface that makes it not "first class support"?
An interface is just a base class none of whose virtual functions have implementations. C++ has first class support for it. The only thing C++ lacks is the "interface" keyword.
The main reason (other than self-documentation) that some other languages separate interfaces from normal classes is that they only support multiple inheritance for interfaces.
C++ doesn't have this restriction, so interfaces would add very little.
Because multiple inheritance causes the diamond problem. C# forces you to solve it by declaring one method as the "canonical" and the others as "explicit interface implementations" (only accessible if the variable/receiver is typed as that interface).
The diamond problem, strictly speaking, only has to be a problem when the common base class has constructor arguments. While a Java-style interface construct makes it easy to prevent, it also imposes much stronger restrictions than the above; it would have been possible to impose only the above restriction. Yes, there are failure cases with separate compilation, but given the way Java dynamically loads classes, that would be similar to the JVM loading a class file that is supposed to be an interface and discovering that it has instead been changed to a class.
Java having an explicit ”interface” construct is one thing I didn’t like about it, because it muddles the notion of a class implicitly having an interface (a notion that clearly exists in C and C++, by way of header files if nothing else) with that construct, while on the other hand there is no a-priori reason to have a distinction between Java’s interfaces and pure abstract classes. Both specify an interface to be implemented. And Java 8+ muddles its concept further by allowing default methods and static members.
The important thing is to distinguish between interface and implementation, and that is relevant to any class, whether it implements a separately defined interface or not.
Java (following Objective-C) does need a differentiation between interface and pure abstract class - this is because it is single inheritance - a class can have any one class to inherit from but it can have many interfaces.
That doesn’t follow. The restriction could have been defined in terms of allowing at most one parent class to be non-pure-abstract. And there are lesser restrictions that would have been conceivable as well. Java is single-inheritance only in the sense of the particular interface–class distinction it makes. For example, since Java 8 you can inherit method implementations from multiple interfaces.
Java's object model is based on Objective-C's so a direct descendant and Objective-C's object model is based on Smalltalk so there is a direct connection there.
I definitely agree that the crusade against inheritance is just a fad and not based on good reasoning. Every time people say "inheritance is garbage that people only use because they learned it in school" it pains me because it's like, really? You can't imagine that it's because those people have thought about the options and concluded that inheritance is the best way to model the problem they are facing?
Contrary to what the hype of the 90s said, I don't think OOP is the ultimate programming technique which will obsolete all others. But I think that it's equally inaccurate to make wild claims about how OOP is useless garbage that only makes software worse. Yes, you can make an unholy mess of class structures, but you can do that with every programming language. The prejudice some people have against OOP is really unfounded.
I think there is a tendency in our industry to externalize imposter syndrome, blaming the tools rather than thinking "huh, I don't understand OOP yet."
Which doesn't mean everyone has to learn to understand OOP, but just because one person doesn't want to doesn't mean no one should.
I’m surprised this is considered a controversial take.
You can write spaghetti in any language or paradigm. People will go overboard on DRY while ignoring that inheritance is more or less just a mechanism for achieving DRY for methods and fields.
FP wizards can easily turn your codebase into a complex organism that is just as “impenetrable” as OOP. But as you say, fads are fads are fads, and OOP was the previous fad so it behooves anyone who wants to look “up to date” to be performative about how they know better.
Personally I think it’s obvious that anyone passing around structs that contain data and functions that act on that data is the same concept as passing around objects. I expect you can even base a trait off of another trait in Rust.
But don’t dare call it what it actually is, because this industry really is as petulant as you describe.
I think every new technology or idea is created because it solves some problems, but in the long run, we'll discover that it creates other problems. For example, transpiling javascript, C++ OO, actors, coroutines, docker, microkernels, and so on.
When a new idea appears, we're much more aware of the benefits it brings. But we don't know the flaws yet. So we naively hope there are no flaws - and the new idea is universally good.
But it's rare to change something and not have that cause problems. It just always takes a while for the problems to really show up and spoil the party. I guess you could think of it as the hype cycle - first hype, then disillusionment, then steady state.
Sometimes I play this game with new technology: "In 10 years, people will be complaining about this on hackernews. What do I guess they'll be saying about it?". For rails, people complain about its deep magic. For rust, I think it'll be how hard it is to learn. For docker, that it increases the size of deployments for no reason - and that it's basically static linking with more steps.
Calling everything a fad is too cynical for me, because it implies that progress is impossible. But plenty of tools have made my life as a software developer better. I prefer typescript over javascript. I prefer cargo over makefile / autotools / cmake / crying. Preemptive schedulers are better than your whole computer freezing. High level languages beat programming in assembly.
It's just hard to say for sure how history will look back on any particular piece of tech in 20 years time. Fortran lost to C, even though it's better in some ways. I think C++ / Java style OO will die a slow death, in favour of data oriented design and interfaces / traits (Go / Rust style). I could be wrong, but that's my bet.
> I think it’s obvious that anyone passing around structs that contain data and functions that act on that data is the same concept as passing around objects.
I hear what you're saying - but there's some important differences about the philosophy of how we conceptualise our programs. OO encourages us to think of nouns that "do" some verb. Using structs and functions (like C and Rust) feels much more freeform to me. Yegge said it better: https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
But let's see in 20 years. Maybe OO will be back, but I doubt it. I think if we can learn anything from the history of CS, it's that functional programming had basically all the right ideas 40 years ago. It's just taking the rest of us decades to notice.
Although Java/C# make you put functions in a class, you aren't compelled to think of a class as a "noun". Just call it "Utils" or something like that. A class is just a thing that you can put functions and / or data in. Use that however you want.
The word "module" comes to mind. A class in Java can be viewed as a software module, of which more than one instance can be created. Sometimes, this is even the best way to view the class. Other times, it's better to view it as a class representing some noun.
A thing like a "comparator" or an "XYZ factory" is not a domain noun, but rather a pluggable code module.
If anything, in C#, you can import the entire class as `using static MyFunctions;` and make such functions top-level. Well, usually you write an extension method instead, since most functions act on some form of data, but you get the idea.
(can also be imported globally with 'global using static ..' in a usings file)
Yeah, the problem with OO isn’t really in the languages. The problem is in the community, and what people consider “best practice”. C#, Java and C++ are all arguably multi-paradigm languages. They give you a lot of flexibility in how you structure your code. C# and C++ support value types. Modern Java has great support for a lot of FP concepts too.
So I agree with you. You can write good C# if you want to. The problem is that a lot of people - for some strange reason - actively choose to make their programs heavily OOP.
Maybe we need to tease "community" apart from language. Let's have Java / C# "A" people (who need at least 10 levels of inheritance, gotta use DI, insist on every character of SOLID - and actually remember and care about the Liskov substitution principle, insisting it wasn't chosen simply because it starts with "L" and makes the acronym sound better - and have never written any code that added any value, only frameworks). Then we can have Java / C# "B" people that care about allocations, hate DI, avoid inheritance, know when they are messing up cache line hits, and even feel slightly bad about using generics.
Something like that, pick your tribe or, even better, be an individual and do whatever (TF) you want.
Yep. That’s why I prefer to criticise OOP (and in particular, inheritance). Not specific languages.
I met this old guy at a conference once, ~15 years ago. He said he didn't get why people say Java is slow. His Java, he said, runs just as fast as C. I asked him to show me his code - and I'm so glad I did. It was amazing. He did everything in one big static class, and treated Java as if it were a funny way to write C. He ignored almost the entire standard library. No wonder his code ran fast. It was basically JIT-compiled C code.
Java isn't the problem. "Java best practices" are the problem. It's a culture thing. Likewise, you can write heavily OOP code in C if you really put your mind to it and write your own structs full of function pointers. But it's not in the culture of the C community to overuse that design.
As they say about OOP, everything is somewhere else.
The only part of inheritance I’ve ever found useful is allowing objects to conform to a certain interface so that they can fulfill a role needed by a generic function. I’ve always preferred the protocol approach or Rust’s traits for that over classical inheritance though.
I'm fine with trait inheritance. (If you want to call it that - it's maybe better to describe it as trait preconditions.)
I'm fine with it because trait inheritance doesn't increase code complexity in the same way C++ / Java class inheritance does. If you call foo.bar(), it's usually pretty obvious which function is being called. And you only ever have to look in one place to see all the fields of a struct.
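For reference, a minimal sketch of what that looks like (Named and Robot are made-up traits):

  trait Named {
      fn name(&self) -> String;
  }

  // "Robot: Named" means anything implementing Robot must also implement
  // Named. It's a precondition on implementors, not inherited state.
  trait Robot: Named {
      fn stop(&mut self);
  }

  fn shut_down<R: Robot>(r: &mut R) {
      println!("stopping {}", r.name()); // Named's methods are available too
      r.stop();
  }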
In C++, it's common to have a class method say "blah = 5;" or something. Then you need to spend 5 minutes figuring out where "blah" is even defined in the class hierarchy. By the time you find it, you have 8 code windows open and you've forgotten what you were even trying to do. And that's to say nothing of all the weird and wonderful bits of code which might modify that field when you aren't looking. Ugh.
> In C++, it's common to have a class method say "blah = 5;" or something. Then you need to spend 5 minutes figuring out where "blah" is even defined in the class hierarchy.
Some of this can be remedied with tooling. A nice "show usages" would solve this. Also, some IDEs have a class browser, where you can see the inheritance tree with all the members.
> And that's to say nothing of all the weird and wonderful bits of code which might modify that field when you aren't looking. Ugh.
Agreed, mutation tends to make everything worse and definitely more complicated.
Mutation is a powerful technique, but needs to be treated with care. Haskell and Rust (and Erlang) amongst others have some interesting approaches for how to recognise the danger of mutations, but still harness their upsides.
Haskell even has quite a few different approaches to choose from, or to mix-and-match.
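For a taste of Rust's approach, a tiny sketch - mutation is allowed, but the compiler forces it to be exclusive and visible:

  fn main() {
      let mut v = vec![1, 2, 3];
      let first = &v[0]; // shared borrow: reading is fine
      // v.push(4);      // rejected: can't mutate while `first` is still in use
      println!("{first}");
      v.push(4); // fine here: the shared borrow has ended
  }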
That is probably because you identify with the Rust tribe. Anything that Rust has is good, while things in other languages seem less good. This is fine - use the innate tribe affinity energy to get better at Rust.
Thanks for the diagnosis but no. I’ve had these opinions for years - since long before rust came along. If we had this conversation a decade ago, I might have made the same argument on the back of Java’s interfaces or obj-c’s protocols - which are both more or less the same concept.
> Usually, the calls were via some abstract interface with a single implementor
What's described here is over-generic code, instead of KISS and just keeping an eye on extensibility instead of generalizing ahead of time. This can happen in any paradigm.
We're all flavoured by our experience. You can for sure make a mess with flat C-style code that uses structs and global functions. But whenever I've seen a mess in C, it's a sort of "lego on the floor" type of mess. Code is everywhere, but all the pieces are uniquely named and mostly self contained.
Classes - and class hierarchies - really let you go to town. I've seen codebases that seem totally impossible to get your head around. The best is when you have 18 classes which all implicitly or explicitly depend on each other. In that case, just starting the program up requires an insane, fragile dance where lots of objects need to be initialized in just the perfect order, otherwise something hits a null pointer exception in its initialization code. You reorder two lines in a constructor somewhere and something on the other side of your codebase breaks, and you have no idea why.
For some reason I've never seen anyone make that kind of mess just using composition. Maybe I just haven't been around long enough.
"But there's a reason the crowd is moving against inheritance"
Yep: it requires skills that aren't taught in schools or exercised in big companies organized around microservices. We've gone back to a world where most developers are code monkeys, converting high-level design documents into low-level design documents into code.
That isn't what OOP is good for: OOP is good for evolving maintainable, understandable, testable, expressive code over time. But that doesn't get you a promotion right now, so why would engineers value it?
> That isn't what OOP is good for: OOP is good for evolving maintainable, understandable, testable, expressive code over time.
Whoa that’s quite the claim. Most large projects built heavily on OO principles I’ve seen or worked on have become an absolute unmaintainable mess over time, with spider webs of classes referencing classes. To say nothing of DI, factoryfactories and all the rest.
I believe you might have had some good experiences here. But I’m jealous, and my career doesn’t paint the same rosy picture from the OO projects I’ve seen.
I believe most heavily OO projects could be written in about 1/3 as many lines if the developers used an imperative / dataflow oriented design instead. And I'm not just saying that - I've seen ports and rewrites which have borne out around that ratio. (And yes, the result is plenty maintainable.)
> This isn't to say Java is bad and Go is good, they're just languages. It's just how they're typically (ab)used in enterprises.
Yeah; I agree with this. I think this is both the best and worst aspect of Go: Go is a language designed to force everyone's code to look vaguely the same, from beginners to experts. It's a tool to force even mediocre teams to program in an inoffensive, bland way that will be readable by anyone.
Yeah, I have seen things like you describe. But I have also seen the same code, copy-pasted a dozen times throughout a codebase and modified over years. That is a much worse situation; the links between the abstractions still exist without the inheritance, but now they are untraceable. At least with inheritance there are links between the methods and classes for you to follow. Without it, you've got to crawl the entire codebase to find these things. OOP is easily the lesser of the two evils; without it, you're doomed to violate DRY in ways that will make your project unmaintainable.
I would even go so far as to argue that a small team of devs can learn an OOP hierarchy and work with it indefinitely, but a similar small team will drown in maintenance overhead without OOP and inheritance. This is highly relevant as we head into an age of decreased headcounts. This style of abandoning OOP will age poorly as teams decrease in size.
Keeping to the DRY principle is also more valuable in the age of AI when briefer codebases use up fewer LLM tokens.
> OOP is easily the lesser of the two evils; without it, you're doomed to violate DRY in ways that will make your project unmaintainable.
Inheritance isn't the only way to avoid duplicating code. Composition works great - and it results in much more maintainable code. Rust, for example, doesn't have class based inheritance at all, and the principle of DRY is maintained in everything I've made in it, and everything I've read by others. It's composition all the way down, and it works great. Go is just the same.
If anything, I think if you've got a weak team it makes even more sense to stick to composition over inheritance. The reason is that composition is easier to read and reason about. You don't get "spooky action at a distance" when you use composition, since a struct is made up of exactly the fields you list. Nothing more, nothing less. There's no overridden methods and inherited fields to worry about.
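E.g., a composed struct in Rust is exactly what it says on the tin (made-up types):

  struct Engine { horsepower: u32 }
  struct Wheels { count: u8 }

  // Everything Car contains is listed right here - no fields hiding
  // somewhere up a parent-class hierarchy.
  struct Car {
      engine: Engine,
      wheels: Wheels,
  }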
I've experimented with GoLang and found the lack of inheritance to be crippling for cases when I want to set a pattern in the code that is to be easily used by other devs with minimal training and a shared definition of behavior. That said, I truly think some mix of inheritance and composition is probably best to avoid the situations we're describing.
I suspect that an experienced golang programmer could solve whatever abstraction problem you have using Go's tools of composition and interfaces. Chatgpt could probably get you started too, if you prompt it in the right way.
Generally, don't treat Go as if it's some bad imitation of C++ or Java. It's a different language. Like all languages, idiomatic Go is its own thing. It looks different to idiomatic Ruby or Javascript or C++ or Perl.
I think of programming languages kind of like pieces of wood. Each language has its own "grain" that you need to follow when you work. If you try and force any programming language into acting like it's something else, you're going against the grain of the language. You'll need to work 10x harder to get anywhere if you try to work like that. Spend more time learning.
It is possible, just look at all of the go packages out there. Also, maybe you don't need to wrap it up as tightly as you think. The "other devs" will use it wrong anyway.
I think you have the consequences of AI exactly backwards. AI provides virtual headcount and will vastly increase the ability of small teams to manage sprawling codebases. LLM context lengths are already on the order of millions of tokens. It takes a human days of work to come to grips with a codebase an LLM can grok in two seconds.
The cost of working with code is much lower with LLMs than with humans and it's falling by an order of magnitude every year.
So if you've got a data object, defined in multiple places in a sprawling codebase, that you want to change, are you going to trust the LLM to find them all, and not miss a single one?
> Why is your data object defined in multiple places in your codebase?
Because that's the negation of my premise which you disagreed with: "Keeping to the DRY principle is also more valuable in the age of AI when briefer codebases use up fewer LLM tokens."
> And why aren't you using your IDE to change them all at once?
It sounds like you're assuming that they're all defined in the same way that you can catch them with a search.