I feel people don't understand what inheritance and (object orientation in general) is useful for, misuse it, and then it gets a bad reputation. It's not about making nice hierarchies of Cars, Fruits, and Ovals.
For me the main point is (runtime) polymorphism. E.g. you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing. And if you want to avoid huge if-else statements, you should put the code for the special cases in the classes, not in each function that operates on them.
You can get this without implementation inheritance, it is also possible to just have something like interfaces. But I do find it very convenient to put common code in a base class.
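A quick sketch of what that looks like in practice (TypeScript here; Shape, Circle, and Square are invented for illustration): the common code lives in the base class, a function takes the general type, and runtime dispatch picks the right specific code with no if-else chain.

```typescript
abstract class Shape {
  abstract area(): number;
  // Common code lives in the base class.
  describe(): string {
    return `${this.constructor.name} with area ${this.area()}`;
  }
}

class Circle extends Shape {
  constructor(private r: number) { super(); }
  area(): number { return Math.PI * this.r * this.r; }
}

class Square extends Shape {
  constructor(private side: number) { super(); }
  area(): number { return this.side * this.side; }
}

// Takes the general type; the right area() is chosen at runtime.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}
```

No function that operates on shapes needs to know the list of concrete shapes; adding a Triangle touches only the Triangle class.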
> I feel people don't understand what inheritance and (object orientation in general) is useful for, misuse it, and then it gets a bad reputation. It's not about making nice hierarchies of Cars, Fruits, and Ovals.
Agree 100%. It starts from the earliest programming course, where we just teach it all wrong; way too abstract (no pun intended).
One point to add to yours: well-executed OOP allows for "structural" flow of control, where the object hierarchy, inheritance, events, and overrides let the "structure" of the hierarchy drive the flow of logic.
I wrote two articles on this with concrete examples and use cases:
I used to be an OO hater until I started playing with Smalltalk. In particular, I worked through the Pharo MOOC (https://mooc.pharo.org/), which teaches you exactly this: designing the hierarchy IS designing the control flow of the program.
That said, Smalltalky hierarchies are a nightmare in most other languages because another key part of the Smalltalk env is the tooling -- staying inside a running system and being able to edit code from within the debugger is absolutely great and keeps you in a flow state much better than any other workflow I've been exposed to (including Lisp + Paredit + SLIME). The result is that editing class hierarchies in blub-y OO languages is usually a massive pain in the ass, while doing it in Pharo or a similar Smalltalk env is fun and painless. This is why you can't practically write Smalltalkly in Python/C#/Java/etc. even if on paper they have all or almost all of the same features.
I think the first part of the first article immediately introduces an anti-pattern: forcing the user to make an instance of a class just to be able to call a pure function. It's just unnecessary noise; either make them static methods, group them in an object literal, or import the module as a namespace.
Adding the "pattern" as a mutable public field is a bit sketchy and would make it show up in intellisense, but hopefully nobody will access/modify it. Making the pattern a `const` at module scope solves that problem, but you handwave that approach away, saying it would "pollute the symbol space for intellisense", which isn't true (non-exported items aren't available outside the module).
The next section on validation is a good example of another anti-pattern. The example of:
    public get isUserValid(): boolean
is not a good way of doing it, because it relies on hopes and prayers that the user of the class remembers to call this. A better signature would be:
    function isUser(input: unknown): input is User
Notice the key difference: you can't get an instance of User without it being valid. There shouldn't be a notion of "yeah I have a User, but I don't know if it's a valid User". Your way allows people to operate on a User, blissfully unaware of whether it is valid or not, hoping that they might notice the right method to call. Instead: Parse, Don't Validate [1]
Note that to do this correctly, you have to either:
1. Separate data and functions
2. If you must use a class, then hide the constructor and provide a static constructor with a return type that indicates the function can fail
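A minimal TypeScript sketch of option 2 (the User fields and validation rules here are assumed for illustration): the constructor is private, and the only way to obtain a User is through a static factory whose return type admits failure.

```typescript
class User {
  private constructor(
    public readonly name: string,
    public readonly email: string,
  ) {}

  // An instance existing at all means the data was valid;
  // there is no "User that might not be valid" state.
  static parse(input: unknown): User | Error {
    if (typeof input !== "object" || input === null) {
      return new Error("not an object");
    }
    const { name, email } = input as Record<string, unknown>;
    if (typeof name !== "string" || name.length === 0) {
      return new Error("invalid name");
    }
    if (typeof email !== "string" || !email.includes("@")) {
      return new Error("invalid email");
    }
    return new User(name, email);
  }
}
```

Callers are forced by the return type to handle the failure case before they ever hold a User.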
Part 2 was an interesting exercise in how the popular OOP patterns from the GoF book vastly overcomplicate code, negatively impact readability, and make following the code feel like Mario (the princess is in another castle).
No need for inheritance, abstract base classes, or any of that complexity; all of it could be done with:
    type ShippingCalculator = () => number

    const shippingCalculators = {
      USPS: calculateUSPS,
      UPS: calculateUPS,
      FedEx: calculateFedEx,
      DHL: calculateDHL,
    } as const satisfies Record<ShippingMethod, ShippingCalculator>
Intellisense works fine, of course, and code navigation (via F12) is straightforward.
> "yeah I have a User, but I don't know if it's a valid User".
This, imo, is one of the big reasons people so easily dismiss OOP. They put whatever data _they think they probably need_ in an object using setters/builders/what have you. This leads to abstractions of data that don't accurately reflect state. They will then let an external entity (service or whatever pattern) manipulate this data. At this point people might as well use something analogous to a C struct where anything can be done to the values. Objects are not managing their own invariants. When you rip this responsibility from an object, then nothing becomes responsible for the validity of an object. Due to this, people wonder why they get bugs and have trouble growing their software.
This also leads to things like "isValid". People don't understand that an object should be constructed with only valid data. The best example I've found of protecting invariants and constructing only valid objects is this strategy in F#:
In psychology, there is an idea of "locus of control".
I think OOP done well and applied in suitable scenarios results in entities that have an internal locus of control; their mutations of state are internally managed.
OOP done poorly has external locus of control where some external "service" or "util" or "helper" manages the mutation of state.
Except inheritance is the premature optimisation of interfaces.
Inheritance forces you to define the world in terms of strict tree hierarchies, which is very easy to get wrong. You may even do a great job today, but tomorrow such properties don't hold anymore.
Regular composition allows the same functionality without making such strong assumptions on the data you are modelling.
> Inheritance forces you to define the world in terms of strict tree hierarchies,
No, it doesn’t.
Inheritance is the outcome of deciding to model some part of the problem space with a tree hierarchy (that potentially intersects other such hierarchies). It doesn’t force you to do anything.
I suppose if there was a methodology which forced you, as the only modeling method, to exclusively use single inheritance, that would force you to do what you describe, but…that’s not inheritance, that’s a bunch of weird things layered on top of it.
It may not force you directly, but I think it's safe to say that when a language focuses on inheritance (e.g. C++), it does not offer good alternatives (e.g. Rust traits). This means that if you want features such as runtime polymorphism, you are _kinda_ forced into inheritance.
True. Modelling the world as a tree oversimplifies it, as there are also lateral and even backwards dependencies, and at a sufficiently complex level abstractions start to leak all over, making it a mess. But at some low to medium level of complexity it might work.
Unfortunately a lot of developers are conditioned so heavily to believe that inheritance is intrinsically bad, that they contort their code into an unreadable mess just to re-implement something that could have been trivial to do with inheritance.
I'm not saying that I like it everywhere; IMHO it's a tool to be used sparingly and not with deep hierarchies. But it's not reasonable to avoid it 100% of the time because we're soiling ourselves over the thought of possibly encountering the diamond problem.
> For me the main point is (runtime) polymorphism. E.g. you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing.
The runtime part is what I dislike. If I have a fruit which is an apple or a banana, I can't pass that to a method expecting an apple or banana. It can only be passed as a fruit.
> And if you want to avoid huge if-else statements, you should put the code for the special cases in the classes, not in each function that operates on them.
This is common OO wisdom that I strongly disagree with. For example, in my program I have a few types (Application, Abstraction, Variable, etc.), and a lot of transformations to perform on those types (TypeCheck, AnfConvert, ClosureConvert, LambdaLift, etc.).
I prefer to have all the type-checking code inside the TypeCheck module, and all the closure-converting code inside the ClosureConvert module. I'd take the "huge if-else" statements inside TypeCheck rather than scatter typechecking across all my datatypes.
> The runtime part is what I dislike. If I have a fruit which is an apple or a banana, I can't pass that to a method expecting an apple or banana. It can only be passed as a fruit.
Heh? An apple is a fruit; you can pass it to any place expecting the latter. Like, this is one half of Liskov substitution.
With generics, you can be even more specific (co/in/contra-variance).
    Fruit* fruit = new Apple();
    ConsumeApple(fruit);  // Doesn't work; requires Apple*
    fruit = new Banana();
    ConsumeBanana(fruit); // Doesn't work; requires Banana*
    ConsumeFruit(fruit);  // Okay, function signature is void(Fruit*)
But why would you want to be able to pass an apple to a function expecting a banana? Doesn't even make sense, the whole point is that the type system forces you to consider this stuff.
If OP wanted to be allowed to do whatever and just have the software fail at runtime, JS is right there.
> But why would you want to be able to pass an apple to a function expecting a banana?
I would never. I know that I'm holding a banana, I want to pass it to a method that receives a banana. But what I can't do is put my banana through a rotateFruit function first, because then Java will forget that it's a banana and start treating it as a fruit.
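For what it's worth, in a language with generics this particular loss of information is avoidable. A TypeScript sketch (Fruit, Banana, and rotateFruit are modeled on the example above, not real code from anywhere):

```typescript
class Fruit { rotated = false; }
class Banana extends Fruit {
  peel(): string { return "peeled"; }
}

// Returns T, not Fruit, so the caller doesn't lose the subtype.
function rotateFruit<T extends Fruit>(fruit: T): T {
  fruit.rotated = true;
  return fruit;
}

const b = rotateFruit(new Banana());
// b is still statically a Banana; calling peel() type-checks.
const taste = b.peel();
```

With a non-generic `rotateFruit(fruit: Fruit): Fruit`, the `b.peel()` call would be a compile error, which is exactly the complaint about Java above.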
Inheritance is just a tool that you use if it fits your software architecture. You use other tools, like generics and interfaces, where it makes sense to use them. I think people want the one true way so they don't have to design their software. And when they use inheritance in a way that does not fit, they blame inheritance.
If you pass your object to another function, you only get static dispatch. What you want instead in this case is a simple instance method. Then yourApple.rotateFruit() would do what you want, and rotateFruit could be an interface method declared in Apple's superclass/interface.
> I prefer to have all the type-checking code inside the TypeCheck module
There are ways to have your cake and eat it too, here, at least in some languages and patterns.
For example, in Go you could define "CheckType" as part of the interface contract, but group all implementors' versions of that method in the same file, calling out to nearby private helper functions for common logic.
Ruby's open classes and Rust's multiple "impl" blocks can also achieve similar behavior.
And yeah, sure, some folks will respond with "but go isn't OO", but that's silly. Modelling polymorphic behavior across different data-bearing objects which can be addressed as implementations of the same abstract type quacks, very loudly, like a duck in this case.
> And yeah, sure, some folks will respond with "but go isn't OO", but that's silly. Modelling polymorphic behavior across different data-bearing objects which can be addressed as implementations of the same abstract type quacks, very loudly, like a duck in this case.
It's less "this is not OO" and more this is not inheritance, which is why a lot of people are saying you can find more elegant solutions (like this) rather than use inheritance for no clear benefit.
> If I have a fruit which is an apple or a banana, I can't pass that to a method expecting an apple or banana. It can only be passed as a fruit.
You can by overriding the method on apple or banana. If your method is on some other object, then yes, you cannot do this unless your programming language supports multiple dispatch.
I feel like Clojure-style multimethods accomplish this better than inheritance. I can simply write a dispatcher that dispatches the correct function based on some kind of input.
This is evaluated at runtime, thus giving me the runtime polymorphism, but doesn't make me muck with any kind of taxonomy trees. I can also add my own methods that work with the dispatcher, without having to modify the original code. I don't feel like it's any less convenient than inheritance, and it can be a lot more flexible. That said, I suspect it performs worse, so pick your poison.
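A rough TypeScript analogue of that multimethod setup (all names here are invented for illustration): a dispatch function picks a key off the argument, and a registry maps keys to implementations, so new cases can be registered without touching existing code.

```typescript
type Animal = { kind: string };

// The open registry: dispatch value -> implementation.
const speakMethods = new Map<string, (a: Animal) => string>();

// Anyone can register a new case, even from another module.
function defSpeak(kind: string, impl: (a: Animal) => string): void {
  speakMethods.set(kind, impl);
}

// Dispatch happens at runtime on the value of `kind` --
// runtime polymorphism with no class hierarchy.
function speak(a: Animal): string {
  const impl = speakMethods.get(a.kind);
  if (!impl) throw new Error(`no method for kind: ${a.kind}`);
  return impl(a);
}

defSpeak("dog", () => "woof");
defSpeak("cat", () => "meow");
```

As in Clojure, the dispatch function could compute anything from the argument, not just a type tag.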
> you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing
What does this have to do with inheritance? This is just a generic function.
> And if you want to avoid huge if-else statements, you should put the code for the special cases in the classes, not in each function that operates on them
This doesn't sound any different from how people usually talk about inheritance.
I am not convinced it gives you anything more than composition does. Composition is very easy to setup and easy to change. And there are certainly functional ways to do runtime polymorphism.
> > you have a function that takes a general type, and you can pass multiple specific types and it will do the right thing
>
> What does this have to do with inheritance? This is just a generic function.
It's a generic function with compile-time compatibility checks.
You can have polymorphism without inheritance and inheritance without polymorphism. The big problem with inheritance is that it is really tricky to write those base classes. Making classes inheritable by default in a programming language is often considered a big mistake, because only a carefully designed and documented class can be safely extended.
If people get it wrong so regularly, what value is it providing as a concept? These concepts are supposed to help us reach something better; if you have to add 30 caveats to every part of it, all it did was hide its own complexity from you instead of managing it for you.
Because the tree is a nice abstraction for some problems. But sometimes you need a collection of pure functions. Sometimes it’s best to think of your objects as data blobs going through a line of map/filter/reduce functions. Not every part of your application is the same; use the right abstraction for the job.
Polymorphism is doable in plain old C with lookup tables and function pointers. If that is the only benefit, what is the point of creating a language where everything is an object?
Presumably one would write an object-oriented version of quicksort to go with their OO C library design and not use the stdlib functions. For example GObject is OOP in C, and I imagine it has collection types with sort methods.
Inheritance is what will bite hard when hand-rolling OOP in C. For one, you can forget about the compiler enforcing substitutability and co/contravariance for you.
> what is the point of creating a language where everything is an object?
I think that's the ultimate culprit in everyone hating inheritance. If it weren't for Java, I think we'd all have a healthier view of OO in general.
I learned OO with C++ (pre-C++11), and now I work at a Java shop, but I'm lucky that I get to write R&D code in whatever I need to, and I spend most of my time in Python.
In C++ and Python, you get to pick the best tool for the job. If I just need a simple function, I use it. If I need run-time polymorphism, I can use it. If I need duck-typing I can do it (in Python).
Without the need for strict rules (always/never do inheritance) I can pick what makes the best (fastest? most readable? most maintainable? most extensible? - It depends on context) code for the job.
Related to TFA, I rarely use inheritance because it doesn't make sense (unless you shoehorn it in like everyone in the thread is complaining about). But in the cases where it really does work (there really is an "is a" relation), then it does make life easier and it is the right thing.
Context matters, and human judgement and experience is often better than rules.
> If it weren't for Java, I think we'd all have a healthier view of OO in general.
>
> I learned OO with C++ (pre-C++11), and now I work at a Java shop, but I'm lucky that I get to write R&D code in whatever I need to, and I spend most of my time in Python.
>
> In C++ and Python, you get to pick the best tool for the job. If I just need a simple function, I use it. If I need run-time polymorphism, I can use it. If I need duck-typing I can do it (in Python).
Those first two you can do in (modern) Java. The third is a mess to be avoided at all costs. Interfaces and lambdas will cover most reasonable use cases for polymorphism.
Yes, you're right. I've done that in high-performance code where I couldn't afford the double function call of a virtual function. I forgot about that.
Control flow is doable with gotos. What benefit therefore is structured programming?
Dynamic dispatch is implemented under the hood with lookup tables and function pointers. Sometimes, it is nice for a language to wrap a fiddly thing in a more abstract structure to make it easier to read, understand, and write.
if i were writing an intro to programming book, i would introduce OO as a means of building encapsulation. i'd only get into inheritance in later chapters.
The core feature of OOP is just bundling functions with the state they process.
When you bundle state and functions together, you can't predict what calling the function will do without knowing both the code and state.
You can say it's 'the real benefit', I guess, but that feels like circular reasoning. It's pretty much the definition of what OOP is, so calling it a benefit feels weird.
Unfortunately, designing systems as a collection of distributed state machines tends to become a maintenance nightmare. Keeping functions and data separated tends to make code better, even when working in so-called 'OOP' languages.
> You can say it's 'the real benefit', I guess, but that feels like circular reasoning. It's pretty much the definition of what OOP is, so calling it a benefit feels weird.
It's a benefit compared to how people were writing code before it became mainstream (for polymorphism: by jumping to data-defined parts of the code and hoping for the best, and for encapsulation: subroutines working on global variables).
You’re describing the strategy pattern, which is probably one of the most practical coding design patterns. Ex: each chess AI difficulty gets its own class, and they all implement a common interface.
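A minimal TypeScript sketch of that (the "AI" logic here is a stand-in, not a real engine):

```typescript
interface ChessStrategy {
  pickMove(legalMoves: string[]): string;
}

class EasyAI implements ChessStrategy {
  // Easy: just take the first legal move.
  pickMove(legalMoves: string[]): string {
    return legalMoves[0];
  }
}

class HardAI implements ChessStrategy {
  // "Hard": pretend longer move strings are better (illustrative only).
  pickMove(legalMoves: string[]): string {
    return [...legalMoves].sort((a, b) => b.length - a.length)[0];
  }
}

// The game is configured with a strategy and never branches on difficulty.
class Game {
  constructor(private ai: ChessStrategy) {}
  nextMove(legalMoves: string[]): string {
    return this.ai.pickMove(legalMoves);
  }
}
```

Swapping difficulty is just constructing the Game with a different strategy object; no if-else on a difficulty enum anywhere.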
Sorry, but right off the bat this is just painting you as an example of "There are N camps, and each camp declares the other camp as wrong."
Yes, that's what inheritance is for you. For others it is something else. Why is your way the one that "correctly understands" it?
The article itself covers this - that some languages have lumped 3 different concepts into one that they call inheritance, leading to the different camps and comments like yours. Your camp is specifically mentioned:
> Abstract data type inheritance is about substitution: this thing behaves in all the ways that thing does and has this behaviour (this is the Liskov substitution principle)
I see that a lot: the alternative to inheritance, when inheritance does make sense, is either code duplication, which is much worse than inheritance, or first-class functions, which many languages don't actually support or don't support efficiently.
Yes. I have a framework for an embedded system that uses various types of sensors. When changing a sensor, instead of rewriting the polling loop for every new case, I can keep looping through ‘sensor[i]->sampleNow()’ and add the specifics in a class inheriting from SensorClass.
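The same shape sketched in TypeScript (the concrete sensors and their readings are invented; only the Sensor/sampleNow() shape follows the comment above):

```typescript
interface Sensor {
  sampleNow(): number;
}

class TemperatureSensor implements Sensor {
  sampleNow(): number { return 21.5; } // stand-in for a hardware read
}

class PressureSensor implements Sensor {
  sampleNow(): number { return 1013; } // stand-in for a hardware read
}

// The polling loop only knows the interface; adding a new sensor type
// means adding a class, never rewriting this loop.
function pollAll(sensors: Sensor[]): number[] {
  return sensors.map((s) => s.sampleNow());
}
```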
But you don't actually care about runtime polymorphism here. You care about polymorphic behavior, which can be implemented in a much more composable way with parametric polymorphism.
You can’t build a dynamic list of objects implementing the same interface in different ways with parametric polymorphism.
As another example, the Unix file interface (open(), read(), write(), flush(), close()) etc. is an example of runtime polymorphism, where the file descriptors identify objects with different implementations (depending on whether it’s a regular file, a directory, a pipe, a symbolic link, a device, and so on).
All operating systems and many libraries tend to follow this pattern: You can create objects to which you receive a handle, and then you perform various operations on the objects by passing the respective handle to the respective operation, and under the hood different objects will map to different implementations of the operation.
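A toy TypeScript sketch of that handle pattern (the names and the two "device" kinds are invented): the caller holds an opaque number, and a table maps it to an object whose implementation of the operation differs.

```typescript
interface Readable {
  read(): string;
}

// Under the hood: handle -> implementation object.
const handleTable = new Map<number, Readable>();
let nextHandle = 3; // 0-2 reserved, in the spirit of stdin/stdout/stderr

function open(impl: Readable): number {
  const h = nextHandle++;
  handleTable.set(h, impl);
  return h;
}

// The operation takes a handle; different objects map to different
// implementations -- runtime polymorphism without visible classes.
function read(h: number): string {
  const obj = handleTable.get(h);
  if (!obj) throw new Error(`bad handle: ${h}`);
  return obj.read();
}

const fileHandle = open({ read: () => "file contents" });
const pipeHandle = open({ read: () => "pipe data" });
```

The caller's code is identical for both handles, which is exactly the property the Unix file API gives you.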
Not without runtime polymorphism. Parametric polymorphism does not imply nor by itself implement runtime polymorphism. I.e. C++ templates, or generics in other languages, provide parametric polymorphism, but not runtime polymorphism.