Inheritance Is Terrible (lionelbarrow.com)
73 points by lbarrow on March 19, 2016 | 94 comments


"On the surface, this looks reasonable."

No, it actually doesn't.

  class Item
    def add_to_cart(cart)
      cart.increase_total(this.price)
This is where the trouble is. It is not the inheritance, it is the add_to_cart method not being in the correct class. If this instead had been

  class Cart
    def add(item)
      # .. add item to some collection of stuffs implementing price().
    end

    def sum
      # .. calculate total sum.
    end
  end
then the whole problem evaporates. If your code reads do_something_to_foo(foo), then very often it really ought to be foo.do_something()

Inheritance can be ok when used sparingly, but this example is not one of those.


Yep. There is no reason the Item class should "know about" the Cart class. The add_to_cart method doesn't belong in it.


>If your code reads do_something_to_foo(foo), then very often it really ought to be foo.do_something()

Personally I'd make an exception when using extension methods in C#. If you make an extension method which looks (schematically) as follows:

    static Item add_to_cart(this Item item, Cart cart) {
        cart.add(item);
        return item;
    }
You can write code that looks like:

    item.add_to_cart(cart)
        .check_availability()
        .do_something_else()
        .etc()
Well, this example isn't the best, but you get the general idea.


You're hand-waving a lot away with "some collection of stuffs implementing price()". That stuff was the actual problem the article was about. The question of how it gets in the cart and how and when the total is calculated is irrelevant.

The actual problem of the article was that if the stuff you put in a cart simply derives from a single base class, then any new stuff-based behaviour (potentially) requires changing all subclasses. For example, if you wanted to add shelf life to perishable items and have the cart maintain the minimum shelf life, then you have to add shelf life to the item class and potentially override it on all subclasses.

An alternative implementation would be to define a Perishable interface with a ShelfLife method and only implement that for perishable items.

    class Cart
      def minimumShelfLife
        items.ofType<Perishable>().minimum(item -> item.shelfLife, ifEmpty: infinity)
This avoids cluttering up the base class with loads of methods which won't be meaningful for most of the types, and which may cause problems when extending things later.
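
Concretely, something like this rough Java sketch (names invented here, not taken from the article):

    import java.util.List;

    interface Perishable {
      int shelfLife(); // days remaining
    }

    class Item {}

    class Milk extends Item implements Perishable {
      public int shelfLife() { return 7; }
    }

    class Book extends Item {} // not Perishable; no stub needed

    class Cart {
      List<Item> items;

      Cart(List<Item> items) { this.items = items; }

      int minimumShelfLife() {
        return items.stream()
            .filter(i -> i instanceof Perishable)
            .mapToInt(i -> ((Perishable) i).shelfLife())
            .min()
            .orElse(Integer.MAX_VALUE); // stands in for "infinity"
      }
    }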

I'm not convinced, but I believe that's the essence of the argument the article intends to make.


> This avoids cluttering up the base class with loads of methods which won't be meaningful for most of the types

I'd argue that shelfLife is meaningful for all Items, and it's unnecessarily presumptuous to assume only Carts will want to know about shelfLife. Plus it's less code to simply add a shelfLife method in Item to return infinity. Later, when you realize expirationDate is what you really want to know you'll have less code to modify to get the logic correct.

In most cases you'll have fewer problems doing things the traditional OO way and sending the expirationDate message directly to the item.


But that's not true as it's not a binary choice. The idea of adding a Perishable interface works just as well whether or not items extend a single base class.

Sometimes cluttering the base class is a better solution if there is a meaningful default. For example, if you start with all non-taxable items and the requirements change for taxable items -- it makes sense to implement in the base class as a Tax property that defaults to zero. Then you don't need conditionals and interface checks whenever you want to display or do tax calculations.
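
A rough sketch of what that looks like (hypothetical names):

    import java.math.BigDecimal;

    // A meaningful default in the base class means call sites need
    // no conditionals or interface checks.
    class Item {
      BigDecimal tax() { return BigDecimal.ZERO; } // most items: non-taxable
    }

    class LuxuryItem extends Item {
      @Override
      BigDecimal tax() { return new BigDecimal("4.20"); } // only taxable items override
    }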


Yes totally agree.

The best way that I learned OO was when it was explained that, in most cases, it's best to treat objects as nouns (i.e. Item, Book, Price, etc.) and object methods as verbs (i.e. do_something() or act_on_this(), etc.)


'item.add_to(cart)' seems to work fine from a grammatical point of view. It's only when we think in terms of ownership ("The cart should own the item") that we choose the right order, no?


cart.add(item) is also grammatical and less conceptually complex than any sort of "add_to" construct.


I concede the example is a bit of a straw man. But I'd challenge you to provide a similarly concise problem and solution where 1) inheritance is useful and 2) there isn't a better solution that doesn't use inheritance.


1. Inheritance is about extensibility. You won't encounter suitable applications in self-contained toy universes where you can almost always rewrite something as some form or another of a typecase statement (whether that's Pascal-like variant records with case statements or ML/Haskell-style pattern matching). See also the expression problem [1].

2. "Inheritance" itself is a fairly fuzzy concept. Do you mean implementation inheritance (for code reuse), polymorphic subtyping (for polymorphism with late binding), do you account for virtual types, etc.?

[1] http://c2.com/cgi/wiki?ExpressionProblem


The classic examples...

Person > Employee > Manager

Shape > Rectangle, Circle

Object > anything

Product > Book, TV

I think these are good examples where inheritance makes a lot of sense. There is commonality between things and there is specialization (overrides) or definitions (abstract methods). It's wrong to say that the concept of inheritance is evil. Like pretty much everything, it can be severely abused (very deep hierarchies, multiple inheritance, etc.). The classic example of OOP abuse is probably something like the MFC library.


I would disagree with regards to the Person chain, mainly because being an employee is a role, as opposed to being a kind of person.


It's hard to evaluate whether or not those examples make sense without some pseudocode showing how the classes are used.


Would you also support Shape > Rectangle > Square? Or are all your super classes abstract?


If we're being cheeky, a square is also a rectangle. Unless I'm misreading your post, I'd expect your superclasses to be as abstract as they need to be and no more:

  Vehicle < Car, Truck, etc.
  Person < Employee, Customer, etc.
There's a pretty obvious logic to them that doesn't really demand much extra thought. I can't think of anyone outside of an irreverent ontologist who would suggest going beyond that unless there's a valid reason for it, like say:

  PhilosophicalBeing < Person < Employee, Customer, etc.


A mutable square is not a mutable rectangle, because a mutable rectangle can change its width independently of its length. This is the classic problem of contravariance: subtyping in the face of mutability.
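
A minimal Java sketch of the breakage (not from the article):

    // Code written against a mutable Rectangle breaks when handed a Square.
    class Rectangle {
      protected int width, height;

      void setWidth(int w)  { width = w; }
      void setHeight(int h) { height = h; }
      int area() { return width * height; }
    }

    class Square extends Rectangle {
      // A square must keep width == height, so the setters can't vary
      // the sides independently.
      @Override void setWidth(int w)  { width = w; height = w; }
      @Override void setHeight(int h) { width = h; height = h; }
    }

    class Demo {
      static int stretch(Rectangle r) {
        r.setWidth(2);
        r.setHeight(10);
        return r.area(); // a Rectangle caller expects 20...
      }

      public static void main(String[] args) {
        System.out.println(stretch(new Rectangle())); // 20
        System.out.println(stretch(new Square()));    // 100: substitution broken
      }
    }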


Good point. Without giving it much if any thought, I made an assumption--that the square is immutable--that probably wouldn't even be accurate in this context. In fact, it's the usual example used to talk about substitutability. Oops.


"better solution" is subjective though.


So when I read this, "Later, however, the business needs to start tracking the royalty paid to the author of a book separately from the rest of the price." I felt the author is conflating "inheritance" with "requirements".

I don't think anyone would argue with this: if you design a system and then change the requirements of what the system should do, you have two choices; build a new system that meets both the old and new requirements, or patch the old system to kind of meet the new requirements.

And I get that this critique is looking at the example perhaps more literally than the message, which is that it is hard to make systemic changes to object oriented systems, but that criticism of OOP is well trodden. Bjarne and others will tell you that it is easy to make reliable and testable additions to an OO system and that is its strength. Refactoring such systems in the face of new requirements will always be hard, and yet the argument goes that if you do so correctly you will be able to re-use tests from the previous system to verify it still works the way it did and write new tests to show that it works the new way as well.

The example of books and movies is a good one because it shows how the initial thinking was that every item has a price, when what should have been considered is that some items are of a type 'media' that has other attributes regular items don't. If you are familiar with the joke[1], it is the dichotomy between a toaster and a "cooker of breakfast foods".

[1] http://www.danielsen.com/jokes/objecttoaster.txt


Why are those things still being rediscovered?

The fragile base class problem has been very well known since the '90s.

Already in the '90s we had programming languages like Component Pascal, whose designers pushed the idea of component-based programming. Basically what is nowadays known as using COM interfaces, regular interfaces, traits, protocols, whatever.

The first edition of "Component Software: Beyond Object-Oriented Programming" is from 2002:

http://dl.acm.org/citation.cfm?id=515228


Inheritance is the first thing I learned when I started learning object oriented programming.


As well it should've been! If you're doing OO, you're kind of missing the reason for it if you aren't aware of inheritance, isolation of concerns, and polymorphism.

Granted, you could have a lot of copy/paste code and NOT use inheritance, but then why use OO in the first place?

EDIT: I mentioned in another comment, and a commenter here correctly noted: it's probably because it's easier to understand, and a fundamental piece of OO. Its actual usefulness in the real world varies, but it is also the stepping stone to understanding interfaces, composition, and other aspects of OO that get used heavily in the real world.


Inheritance isn't necessarily part of OO. Go has radically weaker inheritance mechanisms than classic OO languages, for example.


I'm not sure that's true. I think that Go has much stronger inheritance mechanisms than, say, C# or Java, albeit also more foot-gun friendly.

    package main

    import "fmt"

    type Doer interface {
      DoIt()
    }

    type Foo struct{}

    // Bar embeds Foo; Foo's methods are promoted to Bar.
    type Bar struct{ Foo }

    func (*Foo) DoIt() {
      fmt.Println("Just do it!")
    }

    func Incite(d Doer) {
      d.DoIt()
    }

    func main() {
      b := &Bar{}
      Incite(b) // *Bar satisfies Doer via the embedded Foo
    }
One can transparently invoke Foo's DoIt on instances of Bar or, if Bar had its own DoIt defined, it would override that.

Foot-gun wise, of course, if you convert a Bar to a Foo and pass it to something, you no longer get Bar's version of DoIt.

However, anywhere that accepts a Doer accepts a Bar without decomposing it into a Foo, so Incite will invoke Bar's DoIt and, if Bar didn't define a DoIt method, it would still implement Doer via composition of Foo.

One might argue that this is composition, not inheritance, but although it's implemented as composition internally, from the outside it appears that Bar is-a Doer, even though it doesn't implement DoIt itself.

Usually, composition would require either forwarding methods which disguise the fact that composition is happening, or for external code to actually access the components directly. Go allows something that looks very much like inheritance.

(I'd also note that, in reality, inheritance is internally implemented as composition. What Go makes explicit is what implicitly happens in C++ or Java anyway.)


Weaker might have been the wrong word; it's more just different than anything else. Go struct embedding doesn't do much more than forward a method call to an inner struct:

>There's an important way in which embedding differs from subclassing. When we embed a type, the methods of that type become methods of the outer type, but when they are invoked the receiver of the method is the inner type, not the outer one. In our example, when the Read method of a bufio.ReadWriter is invoked, it has exactly the same effect as the forwarding method written out above; the receiver is the reader field of the ReadWriter, not the ReadWriter itself.

https://golang.org/doc/effective_go.html#embedding


I think the important thing isn't what happens internally, but what it looks like from the outside.

Fundamentally, invoking a (non-virtual) superclass's method in C++ doesn't do much more than forward that method to an inner struct.

    struct Reader {
      int fd;
    };

    struct Writer {
      int fd;
      void Write (int) {
        // *this* points to the Writer, not the ReadWriter
      }
    };

    void writeForMe (Writer *w) {
      // *w* points to the Writer, not the ReadWriter
    }

    struct ReadWriter : Reader, Writer {
    };
The only real difference, aside from the syntax, is that in C++ one can dynamically convert a pointer to a Writer that is part of a ReadWriter into a pointer to that ReadWriter, something Go doesn't support.


I think what he means is that in school we are taught inheritance and nothing else, so many programmers only know its "pros" and not why other techniques like composition can be beneficial.


And correctly so, but the teachers should also have taught that there isn't a single way of doing OOP and what the cons of certain design approaches are.


Right, I'm definitely not the first person to make this argument. But I still see people making these mistakes; that's why I wrote the article.


Apparently there is still quite a lot of room to improve how people learn these subjects.


Here's how I've experienced languages, technologies, and paradigms over the past 20 years of my professional career:

Step 1: settle on technology X because it fits in rather nicely for our requirements.

Step 2: The requirements change a bit. Mostly you can handle it elegantly, but there are some rough edges.

Step 3: More new requirements. Now things are starting to look pretty ugly in places. Evangelist B says "See? I told you technology Y was the right choice! Now look at this mess!"

Step 4: Go with technology Y on the next project.

Step 5: GOTO Step 1


A little OT...but does anyone have insight into the historical reasons why inheritance seems to be the default entry point into teaching/explaining OOP and classes? I remember learning all about inheritance in school but only tangentially about interfaces. And looking at explainers/tutorials in the wild, it seems that many/most start out with the classic Animal > Duck or Thing > Car examples.

I want to chalk it up to inheritance being an inherently easier thing to think about in real-world terms, with a name that has easily understandable connotations, even if some of those connotations are probably harmful in the way they muddle things up via leaky abstraction when thinking computationally. Whereas with mixins/interfaces...those terms have never made immediate sense to me.

And yet when it comes to implementing OOP, mixins/interfaces seem so profoundly more suitable as the go-to strategy that I wish I had learned exclusively about them, with inheritance being left for edge cases. Maybe I just had a bad curriculum or a bad memory when it came to the lessons about interfaces. But if there is a stronger focus on inheritance when it comes to teaching OOP, is it due to the design of C++/Java? Or just the practicalities of teaching OOP? Teaching about classes seems like a good first step in teaching OOP, and teaching about inheritance requires fewer new concepts/syntax beyond defining a class. Whereas mixins/interfaces require introducing the concept of modules.


Interface relationships are a subset of inheritance relationships. If you understand inheritance you understand interfaces. So it makes sense to explain inheritance before explaining interfaces.

There's an analogy here between inheritance -> interfaces and pointers -> object references as in a language like Java. Object references are pointers, in effect, but syntactically pared down so that it's (much) harder to break them. They are less powerful but safer. Similarly, if you have inheritance—particularly the dreaded multiple inheritance—you have everything interfaces can do and then some, but you also have much more responsibility and a lot greater chance of creating bad designs. In both cases, it makes sense when teaching the concepts to teach the fuller, more powerful concept. In day to day work, however, many programmers will make fewer mistakes using the more limited concept.

To say this a bit more clearly, but perhaps more insultingly: inheritance, properly and fully understood, is useful for a massive range of situations, of which mixins and interfaces are a part. I find that relatively few programmers (and I've worked in development for more than twenty years and taught programming at the Master's level for 7: I've taught a lot of programmers) can really grok proper inheritance to this level; therefore when they try to use it they end up shooting themselves in the foot, then blaming the tool. For them it's better to use the more limited tool. But it's still good to know the broader tool.


C++ doesn't have interfaces, so it's quite understandable you wouldn't learn about them if you were learning about C++.

Java-as-taught is often ancient Java, from before enhancements like default interface methods. The only non-annoying way to inherit behaviour in ancient Java is class inheritance. Even modern Java lacks automatic implicit composition a la Go.

Java also encourages large interfaces with many methods. Without any ability to inherit behaviour for those methods, it's often less tiresome to inherit from a class instead. Go, by way of contrast, encourages tiny interfaces with, ideally, one or two methods.

In this example, we might suppose that a quintessential "vintage" Java-esque design wouldn't have a single-method "Priceable" interface. It'd have a CartItem interface with hundreds of methods, all of which you'd have to implement - even if only to stub them out with exceptions. So you wouldn't actually use an interface, because then you wouldn't avoid all that typing.
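
A quick illustration of the contrast (all names made up):

    // Tiny interface: trivial to implement.
    interface Priceable {
      int getPrice();
    }

    // "Vintage" fat interface: every implementor owes every method.
    interface CartItem {
      int getPrice();
      int getWeight();
      String getShippingClass();
      // ...and dozens more
    }

    class Pamphlet implements CartItem {
      public int getPrice() { return 3; }
      // Stubs forced on us by the fat interface:
      public int getWeight() { throw new UnsupportedOperationException(); }
      public String getShippingClass() { throw new UnsupportedOperationException(); }
    }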


C++ doesn't have the interface keyword, but that does not mean you cannot have a class that is a pure interface (all methods abstract).


C++ doesn't have the interface keyword, but it also doesn't have interfaces.

An interface isn't simply a class whose methods are all abstract. It is a distinct kind of type which has no precise analogue in C++ because the language constraints which enable it to exist don't apply in C++. Namely, C++ has multiple inheritance.

To abuse a car analogy, this situation is akin to asking why older drivers weren't taught to use cruise control, and the answer being that many older cars don't have cruise control. Whilst it's possible to "emulate" cruise control by simply maintaining constant speed manually, learning to do that isn't the same as learning how to use cruise control.


Can you tell me what the problem is with multiple inheritance if your superclasses consist only of pure virtual functions?

From [1]:

"Bjarne Stroustrup: I had a lot of problems explaining that to people and never quite understood why it was hard to understand. From the first days of C++, there were classes with data and classes without data. The emphasis in the old days was building up from a root with stuff in it, but there were always abstract base classes. In the mid to late eighties, they were commonly called ABCs (Abstract Base Classes): classes that consisted only of virtual functions. In 1987, I supported pure interfaces directly in C++ by saying a class is abstract if it has a pure virtual function, which is a function that must be overridden. Since then I have consistently pointed out that one of the major ways of writing classes in C++ is without any state, that is, just an interface.

From a C++ view there's no difference between an abstract class and an interface. Sometimes we use the phrase "pure abstract class," meaning a class that exclusively has pure virtual functions (and no data). It is the most common kind of abstract class. When I tried to explain this I found I couldn't effectively get the idea across until I introduced direct language support in the form of pure virtual functions. Since people could put data in the base classes, they sort of felt obliged to do so. People built the classic brittle base classes and got the classic brittle base class problems, and I couldn't understand why people were doing it. When I tried to teach the idea with abstract base classes directly supported in C++, I had more luck, but many people still didn't get it. I think it was a major failure in education on my part. I didn't imagine the problem well. That actually matches some of the early failures of the Simula community to get crucial new ideas across. Some new ideas are hard to get across, and part of the problem is a lot of people don't want to learn something genuinly new. They think they know the answer. And once we think we know the answer, it's very hard to learn something new. Abstract classes were described, with several examples, in The C++ Programming Language, Second Edition, in 1991, but unfortunately not used systematically throughout the book. "

[1] http://www.artima.com/intv/modern.html


I think you're spot on with the lack of a default method in olden days Java. It was certainly the driver behind having an AbstractBaseWhatever class back when. Though there could still be utility in having a base class with some functionality acting on internal state shared by lots of implementations.

Another example like Priceable and CartItem in modern Java is Connection and AutoCloseable (database connection objects). Anything AutoCloseable can be automatically closed after a try-with-resources block is done. And a Connection is that: some specific form of DB connection. You also have a DB ResultSet that is AutoCloseable. So by having one interface extend another, or a class implement the interfaces, you can drive very related functionality on classes that may do very different things.

CartItem could extend/implement the one or two method Priceable, along with all its other methods.
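
For anyone who hasn't seen it, a minimal try-with-resources example (assumes an in-memory H2 database on the classpath; any JDBC driver works the same way):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AutoCloseDemo {
      public static void main(String[] args) throws Exception {
        // Connection, Statement and ResultSet all extend AutoCloseable, so
        // try-with-resources closes each of them automatically, in reverse order.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
          while (rs.next()) {
            System.out.println(rs.getInt(1));
          }
        } // no explicit close() calls needed
      }
    }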


> C++ doesn't have interfaces

Isn't a class with exclusively pure virtual methods effectively an interface?


> A little OT...but does anyone have insight into the historical reasons of why inheritance seems to be the default entry point into teaching/explaining OOP and classes?

Meyer's OOSC doesn't. It explains inheritance 1/3 through the book, after generics (parametric polymorphism) and Design by Contract. Incidentally, OOSC has this paragraph about inheritance that is worth remembering:

"Neither the Open-Closed principle nor redefinition in inheritance is a way to address design flaws, let alone bugs. If there is something wrong with a module, you should fix it — not leave the original as it is and try to correct the problem in a derived module. (The only potential exception to this rule is the case of flawed software which you are not at liberty to modify.) The Open-Closed principle and associated techniques are intended for the adaptation of healthy modules: modules that, although they may not suffice for some new uses, meet their own well-defined requirements, to the satisfaction of their own clients."

(Emphasis in the original.)

The Item class is not an example of a healthy module. It has been written in an ad-hoc fashion, without concerns about its requirements or thoughts about what contracts it should offer to its clients (i.e. how it interacts with other modules). Unsurprisingly, adapting it through inheritance results in breakage.


I'm hazarding a guess- I've never looked into the teaching of CS, just been taught a lot of it.

I think it's because people just starting out are able to grasp the concept that a duck is a bird is an animal, AND that you can reduce the code you write by saying all animals walk, birds fly, and ducks quack. So you are building on from the parent class.

With interfaces you are saying all implementations of animals walking can be different, so you're going to have to write code for each one to this contract. You're not necessarily suggesting that you could reduce the code you write in that case. Then when you get to all other sorts of class composition it gets pretty hard to understand at first.

I think that's the main reason: it's easier to get started with. Also, CS teaching is nothing like real-life development; as you stated, interfaces get used quite a bit more than concrete or even abstract base classes, though you just might have an abstract base class that implements part of the interface.
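
To make that concrete, a tiny Java sketch of the two teaching styles:

    // The inheritance version builds behaviour down the chain:
    class Animal { void walk() { System.out.println("walking"); } }
    class Bird extends Animal { void fly() { System.out.println("flying"); } }
    class Duck extends Bird { void quack() { System.out.println("quack"); } }

    // The interface version only promises a contract; each class
    // writes its own implementation:
    interface Walker { void walk(); }

    class Penguin implements Walker {
      public void walk() { System.out.println("waddling"); }
    }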


platypus. that is to say: the teaching is still bad.


Exactly why I kept coming back to it: it's easier. By the time they get to the platypus they're hopefully in the real world, having picked up some of the additional knowledge required to solve it in an elegant fashion.


The first implementation was weak from the start. By exposing the instance variable directly, you threw away all the power of inheritance and guaranteed a future meltdown. If you had correctly implemented setters and getters from the beginning, you would have been perfectly prepared for a minor and appropriate change to the pricing scheme; such changes are the whole point of encapsulation. You wrote procedural code using an object-oriented language. Don't blame a fundamental design approach for your lack of good implementation.

"a lot more code" - Really?


From the viewpoint of semantics, the problem is that things don't categorize out into a strict tree. The Borges encyclopedia list

http://avery.morrow.name/blog/2013/01/borges-chinese-encyclo...

is actually a list of decent categories (for the most part you can say whether an animal is a member of a category or not), but they are not mutually exclusive, do not tile the space of all animals, etc.


Animal taxonomy is actually much weirder than you would think at first glance. The classic definition of species is the group of organisms which are capable of producing fertile offspring. However, there are also ring species, where you have populations A,B,C where B can interbreed with either A or C, but A cannot interbreed with C. Or in an alternate phrasing, members of B are members of both species A and species C. This obviously becomes intractable beyond a very few interbreeding populations.

So really, even nature favors composition over inheritance.


This is really interesting.

I've been turning this idea over in my head every now and then as a mental exercise (bus ride material). One, it validates my reasoning that rigid single-inheritance trees are a bad representation.

What my mind kept turning to was the possibility of Neanderthals and early humans interbreeding yet being different species that had a common ancestor. It isn't really a tree in that case.

Two, this comment gives my pastime more food for thought!


Biological taxonomy is just not a good analogy. Object behavior hierarchies aren't analogous to any real world thing, IME. They are what they are.


In fact, the tree of life can't be traced back to a single root, because we Eukaryote organisms are the result of a train crash between Eubacteria and Archaea maybe 2 billion years ago.


How do Cobb's Normal Forms fit in here? They are all about relations. I suppose a single inheritance pattern cannot get more complex than first normal form, but I don't really remember this from a long time ago.


"Codd's"


I had trouble following along because I couldn't get over how backwards the code is. Why should a book object have the responsibility of adding itself to a cart? Instead of calling

  book.add_to_cart(cart)
It should be

  cart.add(book)


I think it ruins the whole argument. It is not an argument against inheritance but an argument against bad coupling masquerading as an argument against inheritance.


This would have made more sense if the inheritance were different.

Item (a thing that is sold at a final price and which goes into a shipping container (has physical size / weight data)).

Book (a type of Item) which has a price that decomposes into various costs...

BD_Movie (a different type of Item) which has other features that decompose differently from other Items.

Though an argument could also be made for simply having a generic ShippableItem object which has extra key-value tags in that same application.

Maybe the issue isn't that the ability to use Inheritance is terrible but that the particular design choice of doing so turned out to be a poor fit for expressing the business needs.


My eyes start bleeding when I think too hard about this, but it seems to me that inheritance in these cases is being used as a really bad substitute for properties. Which is to say, a lot of 'things' have flat collections of properties, not a hierarchy of properties.

I suspect this is why LINQ is kinda popular. No one cares if Book inherits Item. They want to query the random collection of crap someone ordered for 'is a book' or 'is food' so they can properly calculate the sales tax in Minnesota.
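
For the non-C# folks, a rough Java-streams analogue of that kind of LINQ query (Book/Food types invented here; records need a recent JDK):

    import java.util.List;

    public class TaxDemo {
      interface Priced { double price(); }
      record Book(double price) implements Priced {}
      record Food(double price) implements Priced {}

      public static void main(String[] args) {
        List<Priced> order = List.of(new Book(12.0), new Food(5.0), new Food(3.0));

        // Query the heterogeneous order for "is food", LINQ-style.
        double taxableTotal = order.stream()
            .filter(i -> i instanceof Food)
            .mapToDouble(Priced::price)
            .sum();

        System.out.println(taxableTotal); // 8.0
      }
    }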


> I'm disappointed to see that Scala has largely preserved Java's inheritance model.

Scala builds on the JVM and exposes the Java standard library. It doesn't have much choice in supporting that model. In theory, Scala could provide additional support for an alternate model, but that would add complexity without taking any away, and would require defining how the two models interact.


That's a fair point. Scala definitely seems to have had to compromise the quality of the language features it offers in order to achieve Java interop.


I'm not sure I get his point. Is it that "A member (field) of a class is not a proper interface"? In Python, that's not really true, since he could have used a property and kept his first design. How is that significantly different from his final solution?


I've added comments to the OP but to summarize:

OO inheritance is useful for codifying a limited set of real-world models. But to truly generalize, we need to stop pretending that it's feasible to make progress without actually investing the effort to codify state v time behavior contracts over software. Today, if anyone bothers to even try to document these behaviors, it's done inconsistently and not in a form that's readily usable by tools that help you through the maze of detail. This is a shame.

Re-usable software requires (a) strong contracts for data I/O (what we call an API) and (b) state v time contracts in _machine readable form_. Conceptually, no hardware system on earth would exist if not for the fact that both (a) and (b) are formally defined. Don't take my word for it: go read up on state-of-the-art SoC design and IP blocks and ask yourself if exchanging _text files_ (IP blocks) is really any different from software. I think it's not.


This can be done in Java just fine:

  interface PriceAble // package-private so it can share a source file with forExample
  {
	int getPrice();
  }

  class Book implements PriceAble
  {
	int royalty = 2;
	int markup = 8;

	@Override
	public int getPrice()
	{
		return royalty + markup;
	}
  }

  class Movie implements PriceAble
  {
	int price = 15;

	@Override
	public int getPrice()
	{
		return price;
	}
  }

  class Cart
  {
	int cartTotal = 0;

	void increaseTotal(PriceAble priceAble)
	{
		cartTotal += priceAble.getPrice();
	}
  }

  public class forExample
  {
	public static void main(String[] args)
	{
		PriceAble movie = new Movie();
		PriceAble book = new Book();
		Cart cart = new Cart();

		System.out.println(cart.cartTotal); //prints 0
		
		cart.increaseTotal(movie);

		System.out.println(cart.cartTotal); //prints 15

		cart.increaseTotal(book);

		System.out.println(cart.cartTotal); //prints 25
	}
  }


Inheritance was wrongly added to the OOP paradigm by convention, not theory. Languages that tried to implement OOP used "classical" inheritance (C++, Java, Pascal, Python, PHP, etc), so it was handy to say the 2 concepts were related when they were never intended to be.

"Actually I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind." — Alan Kay

That's the pre-history.

The more recent history is that classical inheritance fought and fought to avoid multiple inheritance. Interfaces were poor substitutes that still only act as API signatures to match, rather than providing orthogonal-inheritance properties like a logging interface. Naturally, over time, composition-based multiple inheritance has come to exist in almost every classical-inheritance language ... or soon will, as codifying common mixin behavior just allows for better compiler optimization (https://en.wikipedia.org/wiki/Trait_(computer_programming)).

The interesting question is whether classical inheritance or prototype inheritance is superior, and on what KPMs. Javascript, Lua, Erlang, and more have had a lot of programmer time spent on trying to convert the prototypical systems to classical inheritance idioms. Is this because most programmers are poor and the poor programmers reason better with a flawed system? Is it because they can't comprehend prototypical chains clearly? Is it because of the positioning/dominance of prototypical languages in their respective tech niches that they still exist?

There's lots to be said about the weaknesses of inheritance, but it's far from terrible. Programmers are lazy and so far, it's been the ideal paradigm. These blog posts/youtube videos/rants just seem pathetically misguided. Once someone can provide some real data, an opinion based on the interpretation of that data is something worth reading.


>"Actually I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind." — Alan Kay

And that's why C++ traces its origins to Simula, which predates Smalltalk.


I feel like this article attacks a strawman. Obviously there are going to be times when <feature> is not the right tool for the job, in that case you don't universally bash <feature> you simply don't use <feature>.

The problems with inheritance in this article seem largely fabricated to make it look bad, while in reality (IMO) it's a very elegant and useful tool in any programmer's toolbox.


An interesting aspect of Haskell is that it tries really really hard to avoid subtyping, relying instead on other mechanisms. This talk by Simon Peyton Jones touches on it from minute 40 onwards: https://imperial.cloud.panopto.eu/Panopto/Pages/Viewer.aspx?...

"In a language with generics & constrained polymorphism, do you really need subtyping too?"


Haskell has interface-based generics, just like the article wants people to write. It just calls its interfaces "classes", and has a more powerful implementation than Java, one that deals with default code and makes incompatibilities impossible.

Haskell even allows interface inheritance, just like Java, but calls it... well, no name that I'm aware of, but it's there.


I don't understand his comment, "I'm disappointed to see that Scala has largely preserved Java's inheritance model." This directly translates to scala:

    trait Priceable {
      def price: Int
    }

    def addToCart(cart: Cart, item: Priceable) {
      cart.increaseTotal(item.price)
    }

    class Book extends Priceable {
       ...same thing...
    }
The only thing non-idiomatic about this in Scala is the use of a mutable cart.


Traits are great, but Scala still has subclassing. I'll update the article to make it more clear -- thanks for pointing that out.


Subclassing isn't evil. Subclassing which violates Liskov Substitution, now I'm listening.


>but Scala still has subclassing

C++ has goto and labels, and although they are not needed for the most part in modern C++, they are still there. The key here is the programmer does not need to use them.


His example is bad.

    class Item
      def add_to_cart(cart)
        cart.increase_total(this.price)
The fault is really that Item.price isn't defined, so it can't be assumed to exist in Item.add_to_cart. When he later adds get_price to all subclasses, he is really just fixing that bug.

In a language like Java, this problem could never have arisen in the first place, since the compiler would know that Item.price didn't exist.

In a language like Ruby, it isn't a problem. Ruby doesn't expose "fields" and methods differently, so the consumer of Book doesn't need to know that the price "field" is now a method that computes the price. That's proper abstraction.

His interface-based solution solves nothing, because exactly the same sequence of events would have unfolded if he'd originally been using an implicitly defined price field. If he then needed to make that a method, he'd then need to define an interface for get_price and then make all the things that should be Priceable implement that interface[1].

[1] And this bypasses one of the principal advantages of Go's interfaces. You don't say that a type implements an interface; types automatically implement an interface simply by having the appropriate methods.


Try to eat a steak with a hammer, and then argue that hammers are terrible. No shit Sherlock!

Don't use inheritance to solve all your OO problems. Ever heard of Design Patterns?

https://en.wikipedia.org/wiki/Design_Patterns


Why not just bake the various Price components right into Item, alongside the get_price action? Every Item should have a price (i.e. the get_price stub), so I don't think it would be bad design to include it.

If Item includes a Price object that has all the possible Price properties, it could be easily modified as specs change.
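
A rough sketch of that idea (hypothetical names, not the article's code):

    import java.math.BigDecimal;

    // Price holds all the currently known components; unused ones stay zero.
    final class Price {
      final BigDecimal base, royalty, markup;

      Price(BigDecimal base, BigDecimal royalty, BigDecimal markup) {
        this.base = base;
        this.royalty = royalty;
        this.markup = markup;
      }

      BigDecimal total() {
        return base.add(royalty).add(markup);
      }
    }

    class Item {
      private final Price price;

      Item(Price price) { this.price = price; }

      // Every Item has a price; subclasses never need to override this.
      BigDecimal get_price() { return price.total(); }
    }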


I think the problem is that, typically, all the various Price components are not known a priori when Item is first implemented. They become known or realized as more Item subclasses are created. If a software dev group actually knew all of their future requirements before implementing a framework...well, a whole lot of software engineering problems would be mitigated :)


Ok, but it surely can be known (and obviously was, given the get_price() interface stub) that every Item will have a Price, so I stand by my intuition that a Price object could be included in Item.


I remember learning OOP back in the day. Personal projects from scratch took forever because I kept imagining more edge cases which required more and more abstract base classes. And I felt the reason I couldn't work with all this OOP stuff was that I wasn't smart enough or devoted enough to it.

I'm not saying I'm smart, but it's funny now that it at least seems like the whole "modern approach" is anti-inheritance and more about "interfaces" and composition, which essentially is what I did anyway because inheritance OOP got me nowhere.


As mentioned elsewhere, interfaces are really a form of inheritance. They just generally have no implementation code in them.

I also very much understand the thrashing about that occurs when trying to abstract and inherit and compose my classes while making progress on a new idea. I think I solved it early on by just giving up on trying to do everything The Right Way and writing code that works toward the solution. Then as the code evolved I could easily see where code was duplicated and begin grasping where things should share code in some way. It greatly improved my efficiency AND made my code cleaner in the long run.


Many, many programmers don't really understand inheritance, and that fact is terrible. Many who don't understand inheritance think they do, and therefore believe that their bad designs are characteristic of all designs, and that misunderstanding is terrible. Terrible inheritance is terrible. Good inheritance is good.

To enjoy good inheritance, first learn to think in layers of abstraction. If this is hard for you (it's not easy for anyone), either train yourself to do it or don't use inheritance—but then don't criticize inheritance for your own shortcomings.

Second, make each class properly express its proper abstraction. Abstract things like "items" should be very abstract. They might have a price but have no idea how price is calculated. More concrete things like books and movies should have more concreteness, but no more than necessary. Often, "deep" layers of abstraction really are little more than interfaces. All the major languages have ways of expressing interfaces as distinct from classes as such, but there is no distinction between them at heart: an "interface" is just a very abstract class; that is, it offers methods without (or with few) implementations. Whether you use "interfaces" or "protocols" or "abstract classes", your language probably offers ways of expressing "lots of abstraction" as well as ways of expressing "more concreteness". Think in layers of abstraction and then use all the tools available to express that thinking directly.

When requirements, or insights into how the design should work, change, then naturally anything can change, from interfaces to detailed implementations. No tool, language, or technique will prevent change, but a clear mental model of what is being expressed and code that succinctly and directly expresses that mental model is your best basis for coping with change.

Don't blame inheritance. Don't, for crying out loud, praise interfaces when they are simply a purified form of abstract inheritance. Learn to use the tools that have served millions of programmers well for twenty or thirty years and then the tools won't seem so dangerous and strange. Or, if the tools really just don't suit your way of thinking, don't use them; use other tools; but either way don't blame the tools.


Basically CLOS (Common Lisp Object System) implements Flavors' mixins as normal classes. Example:

    (defclass priceable-mixin () ()
      (:documentation "we can calculate a price for something"))


    (defclass standard-priceable-mixin (priceable-mixin)
      ((price :initarg :price)))

    (defmethod price ((i standard-priceable-mixin))
      (slot-value i 'price))


    (defclass shared-priceable-mixin (priceable-mixin)
      ((royalty :initarg :royalty)
       (markup  :initarg :markup))
      (:documentation "the price has two components: royalty and markup"))

    (defmethod price ((i shared-priceable-mixin))
      (with-slots (royalty markup) i
        (+ royalty markup)))


    ; we want to sell books and movies; item is the base class of sellable things

    (defclass item () ())

    (defclass book (shared-priceable-mixin item)
      ()
      (:default-initargs :royalty 2 :markup 8))

    (defclass movie (standard-priceable-mixin item)
      ()
      (:default-initargs :price 10))


    ; there is a cart with items and a total price

    (defclass cart ()
      ((total :initform 0   :accessor cart-total)
       (items :initform nil :accessor cart-items)))

    (defmethod add-item-to-cart ((i item) (c cart))
      (push i (cart-items c))
      (incf (cart-total c) (price i)))


Perhaps the Brian Will anti-OO videos provide more useful examples:

https://www.youtube.com/watch?v=QM1iUe6IofM "Object-Oriented Programming is Bad"

https://www.youtube.com/watch?v=IRTfhkiAqPw "Object-Oriented Programming is Embarrassing: 4 Short Examples"


More like being terrible at using inheritance is terrible.


Inheritance isn't terrible if you design your classes properly.


I find that inheritance isn't really necessary in Python for most cases due to the ability to define magic methods which are basically interfaces. One place I have seen it used effectively is in the Django ORM where it makes sense to use subclassing.


This is a non-issue in C#. In C#, properties have getters and setters. Instead of making a function named GetPrice() you'd simply modify the price getter to include the additional fees.


It's not so much about that specific case; any other side effect will get you in trouble. It's more about how 'extends' introduces 2 tricky features. 1) You get a neat dispatch mechanism: if foo doesn't have a method, check foo's superclass and see what's available there, recursively, until you run out of parents. 2) You can slip a different implementation into some complex system, and it never needs to know.

A nice example might be a DatabaseConnection and a TestDatabaseConnection that doesn't actually talk to a database but logs what was requested. The thing that sucks about inheritance is, you never really have a DatabaseConnection; you wind up having a MySQLDatabaseConnection or a PGSQLDatabaseConnection that take slightly different syntax. It's not really clear if you should make a TestDatabaseConnection for each database permutation, or just make a logging connection.

Anyway, traits can fix this up a little nicer than interfaces and delegates can.

Inheritance is helpful, but it kind of sucks because you only get to use it once, and it mixes two different concepts together: "easy" API extension by tacking on more features, and the cuckoo's egg that lets you slip different implementations in without a recompile. IMHO #2 is really what people are after, and you can get that by just using interfaces for everything.
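
A minimal sketch of the interfaces-for-everything version (names made up to match the example above):

    import java.util.ArrayList;
    import java.util.List;

    // Callers depend on the interface, so a logging test double slips in
    // without any inheritance hierarchy or recompile of the callers.
    interface DatabaseConnection {
      void execute(String sql);
    }

    class MySQLDatabaseConnection implements DatabaseConnection {
      public void execute(String sql) {
        // ... speak MySQL's dialect here
      }
    }

    class TestDatabaseConnection implements DatabaseConnection {
      final List<String> log = new ArrayList<>();

      public void execute(String sql) {
        log.add(sql); // record what was requested instead of touching a database
      }
    }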


You've pointed out a distinction without a difference.

On the back end C# properties are implemented as a pair of CLR methods named get_[PropertyName]() and set_[PropertyName](). The rest is syntactic sugar. So when you "simply modify the price getter", what you're doing is simply modifying the implementation of a method named get_Price().


Right, but the article is arguing about the inconvenience of having to make such methods across all classes that derive from a superclass. If this is inherently provided in the language then I would say that language doesn't suffer from the issue discussed in this article. And the language I'm speaking of is C#.


With respect to Java, getters and setters can be automatically generated by an IDE if you want, so it's not all so bad. And if you get to a spot where your getter or setter needs to do more than spit out the value or assign it for some reason, you're really at the same amount of code.

It is definitely nice in C# not having to LOOK at all the damn getters and setters though.


Let's assign values in verbal direction.

Other ideas, with traits:

> Statistical traits combinations: class City with % Sinjar, Theba, Techonolochikan %

-> better auto tests (33/33/33) -> 3 tests

=> ! data config, not logical structure

class Cluster with % Node1, Node2 %

class Cluster with % 25 Node1, 75 Node2 %

=> class Cluster with %

% = 25 Node1, 75 Node2

> Dna traits combinations: class Dna3 with Θ dna1, dna2 Θ

https://en.wikipedia.org/wiki/Theta


I get the point being made, but the rules probably shouldn't be directly associated with the Item objects in the first place. How would you apply a pricing rule like: "Free shipping if item total is > $20 after discounts applied"?

The general approach I've seen to this sort of thing is to create a stack of rules, and each rule applies its changes if needed. And for that, an array of function pointers will work fine.
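
A rough sketch of that rule-stack idea (hypothetical, in Java rather than with literal function pointers):

    import java.util.List;
    import java.util.function.UnaryOperator;

    public class PricingRules {
      public static void main(String[] args) {
        // Each rule takes the running total and returns an adjusted total.
        List<UnaryOperator<Double>> rules = List.of(
            total -> total * 0.9,                    // 10% discount first
            total -> total > 20 ? total : total + 5  // shipping is free over $20
        );

        double total = 30.0;
        for (UnaryOperator<Double> rule : rules) {
          total = rule.apply(total);
        }
        System.out.println(total); // 27.0: discounted, then qualified for free shipping
      }
    }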



Isn't this the line of thinking that leads to the creation of setters and getters? If all items implement a getPrice() function then the caller doesn't care about what mix of vars adds up to that item's price in the background; return royalties + tax + whatever.


I understand now why a Go feature list I saw recently included "no inheritance". I thought it odd that a lack of something was listed as a feature.


Bad strawman is bad.



