Pretending OOP Never Happened (johndcook.com)
290 points by zdw on May 15, 2020 | 339 comments



Usually when I see somebody arguing against object-oriented programming, I don't see them arguing for functional programming, but instead arguing for procedural programming (like Cobol, Basic, or Pascal). What they usually miss is that procedural programming was abandoned some time around the mid-90s for good reason: you can't realistically develop useful software in a pure-procedural way without introducing a lot of global state. Even if you look at well-designed non-OO code like, say the Linux kernel, you'll see that there are object oriented concepts like information hiding and polymorphism all over the place; they just don't formalize them with the "class" or "private" keywords. Unfortunately, what I see most programmers do is give up on OO design (and never even consider FP) and instead create global state that they call "singletons" to pretend that they didn't just create a global variable. Because, as we all know, global variables are bad, but too few of us actually remember why.
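(To make the singleton point concrete, here's a minimal Python sketch with hypothetical names; the "singleton" is a mutable global in all but name:)

    class AppConfig:
        _instance = None

        @classmethod
        def instance(cls):
            # lazily create the single shared instance
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

        def __init__(self):
            self.debug = False

    # Any code, anywhere, can mutate this shared state:
    AppConfig.instance().debug = True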


> you can't realistically develop useful software in a pure-procedural way without introducing a lot of global state. Even if you look at well-designed non-OO code like, say the Linux kernel, you'll see that there are object oriented concepts like information hiding and polymorphism all over the place; they just don't formalize them with the "class" or "private" keywords.

You are misattributing that to OO programming.

The original edition of Code Complete, written in 1993, makes absolutely no mention of OO programming or any OO principles. However it has a very good discussion of information hiding, and procedural code written in the various styles that it recommends does not create a lot of global state. (And yes, I have worked with such code.)

One of the best things about OO was that it pushed programmers who had not absorbed best practices towards information hiding. However OO encourages combining the principle of information hiding with OO notions of inheritance. Many learned the hard way to prefer composition over inheritance. (Many, unfortunately, never learned that. Just as many procedural programmers never learned about information hiding.)
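To make that concrete, a minimal sketch (hypothetical names) of information hiding in a purely procedural style: state lives in a record the caller treats as opaque and touches only through the module's functions, so no global state is needed.

    # counter "module": callers treat the record as opaque
    def counter_new():
        return {"count": 0}   # internal representation, not part of the contract

    def counter_increment(c):
        c["count"] += 1

    def counter_value(c):
        return c["count"]

    # usage: no globals; each caller owns its own record
    c = counter_new()
    counter_increment(c)
    assert counter_value(c) == 1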


I feel like inheritance is the actual problem with OOP. Trees of types with increasing specialisations just don't describe many real world problems that well.

When you do build programs like this, the code gets more and more rigid until it becomes a maintenance nightmare and pivoting functionality becomes extremely expensive. Sometimes a complex object hierarchy literally stops you from building some new feature.

A lot of mental effort is sucked up in mashing problem spaces into hierarchies, and once there the program is constrained by them.


It has little to do with subtyping per se and everything to do with the way that implementation inheritance, specifically, breaks modularity. The whole idea of inheritance contra composition is that every call to a virtual method-- including calls internal to the class-- goes through a dispatch step, so that the actual code that gets executed depends on whether the object is an instance of a "derived" class, where that method might have been changed in ways that might break any number of expected invariants. This introduces brittleness both in base class code (which generally has to call methods that might get overridden in derived classes) and in the derived class itself (which has no way of knowing what invariants might be expected of it as part of base class code).

Implementation inheritance is often justified as a way of "reusing" code, but as it turns out, we've merely introduced undesired coupling instead of the seamless reuse we might have expected.
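To make the brittleness concrete, here is the classic counting example as a minimal Python sketch (hypothetical names): the base class calls its own virtual method internally, and an innocent-looking override silently breaks its invariant.

    # Base class invariant: count == number of items ever added.
    class InstrumentedList:
        def __init__(self):
            self.items = []
            self.count = 0

        def add(self, item):
            self.count += 1
            self.items.append(item)

        def add_all(self, items):
            for item in items:
                self.add(item)   # internal call goes through dynamic dispatch

    class FastList(InstrumentedList):
        # "Optimization": bump the count in bulk, not knowing that the
        # base class's add_all already delegates to add.
        def add_all(self, items):
            self.count += len(items)
            super().add_all(items)

    l = FastList()
    l.add_all([1, 2, 3])
    print(l.count)   # 6, not 3: the base-class invariant is silently broken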


> every call to a virtual method-- including calls internal to the class-- goes through a dispatch step

You've touched on a whole host of performance issues here too. Not only are we making indirection after indirection when accessing virtual methods and data (my poor cache), but invariably these are all stored in heap memory, so we get the worst possible performance profile baked in.

> the actual code that gets executed depends on whether the object is an instance of a "derived" class, where that method might have been changed in ways that might break any number of expected invariants.

On top of that inheritance makes the concrete type unknowable at compile time just to add to the pain.

> Implementation inheritance is often justified as a way of "reusing" code, but as it turns out, we've merely introduced undesired coupling instead of the seamless reuse we might have expected.

Yes this 100%. Inheritance actually stands in the way of code reuse. If I have a class that is kind of similar to another, but different enough to "require" being another class, oh dear they're now incompatible. So you try to kind of fit it into a subclass, but it makes more sense in another, and now you're spending mental effort on a problem created by architecture and not the actual thing you're trying to solve.

This is really clear in the game development world, where game objects often just don't fit into the inheritance format, even with multiple inheritance. This is why most complex games are built with the entity component system pattern as it focuses entirely on composition.

Perhaps the crux of the issue is that inheritance encourages design by "identity" (mental concept) rather than "attribute" (data). Thus to solve problems we must mentally classify them into some taxonomy as if they are animal species before we even write any code. This is just extra work in exchange for all the issues mentioned.
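As a toy sketch of the composition-first alternative (hypothetical names): instead of a Monster/FlyingMonster/SwimmingFlyingMonster taxonomy, entities are just bags of components, and systems operate on whatever entities carry the data they need.

    # Entities are plain bags of components, no hierarchy.
    player = {"position": [0, 0], "velocity": [1, 1], "health": 100}
    turret = {"position": [5, 5], "health": 50}         # no velocity: never moves
    ghost  = {"position": [9, 9], "velocity": [0, -1]}  # no health: can't be hurt

    def movement_system(entities):
        # acts on any entity with the right components, whatever its "type"
        for e in entities:
            if "position" in e and "velocity" in e:
                e["position"][0] += e["velocity"][0]
                e["position"][1] += e["velocity"][1]

    movement_system([player, turret, ghost])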


> where that method might have been changed in ways that might break any number of expected invariants

It's not supposed to, and if it does, the design is clearly broken. Now the (rhetorical) question is: does it happen often? The not so rhetorical question is: is it possible to make it happen rarely? The even more interesting question is: is it easy / more practical compared to alternative approaches, and if not, then what is the point?

My opinion on SOLID is that there is precisely one hard "principle", and it is worthwhile: the LSP (it derives directly from logic, that's why). I believe that Open-closed is even borderline insane, and that if there is a crazy way to make it not insane, this way is probably applied by so few people that it may well be irrelevant -- most people will try to apply it in ways that will quickly put them at risk of LSP violation, but LSP is far more important (or they will try to apply it in bad places, but that is another story). Plus programs designed with complex hierarchies are often missing the point of execution contexts, and then understanding them is absolute hell -- their original authors sometimes do not understand them themselves. (The rest of SOLID are soft attempts at fixing self-inflicted wounds, sometimes even reasonable if you insist on doing Javaesque / old-C++-esque OOO, but I digress.)

I'm not sure why anybody thought that kind of OOO was a good idea, or that the main characteristic of interesting big programs was the usage of "classes". I even find the suggestion of causation instead of mere correlation dubious; there were already quite a number of big programs before, and what permitted the explosion of program size was more the ever growing capabilities of computers, which arrived, in affordable versions, during the Java-like OOO hype. Besides the simplistic reductionism (we can start with: which classes model entities, which classes model values, which classes are controllers, etc.), which is not too big a deal in practice, modeling with class diagrams is often missing the river in the middle of the forest for some groups of intertwined trees.


What does "OOO" stand for? Usually it means "out-of-order [execution]" but that doesn't make sense here.

The Smalltalk-80 container hierarchy demonstrates that you can get a lot of mileage out of simple single-dispatch virtual methods with inheritance. The Smalltalk-78 system, which you can try at https://lively-web.org/users/bert/Smalltalk-78.html (although it's not working for me at the moment), got a multiwindow GUI with an IDE running usably on an Intel 8086 with 256KiB of RAM, in only about 100 classes and 2000 methods totaling 200 KiB of code. This is not what I would describe as a "big program", but it is a fairly impressive program nonetheless. Seeing that kind of thing is what led people to adopt object-oriented programming.



Yeah I meant OOP but mechanically wrote OOO.


> the derived class itself (which has no way of knowing what invariants might be expected of it as part of base class code).

In principle, with a sufficiently robust type system, the invariants expected by the base class would be encoded in the typing, and the derived class would only be able to narrow them.

(Of course, OOP languages tend to either be untyped or have insufficiently robust type systems for this; robust type systems are more frequently associated with functional languages.)


If the type system already trivially enforces your custom invariants, aren't the types defined by your classes already in it?


> every call to a virtual method-- including calls internal to the class-- goes through a dispatch step

why would you make internal methods virtual though?


Virtual methods should only be internal methods. There's no point in making the public API have virtual methods. That would be a bass-ackwards parody of proper OO design.


It's common to put virtual methods in a public API; indeed, in Java we have the "interface" construct consisting entirely of public virtual methods and, occasionally, named constants. (In more recent versions it can also include default implementations.) In Golang the only virtual-method mechanism works by means of a similar interface construct, which also only contains public virtual methods.

So, for example, Java's Map is an interface type with, for example, .get() and .put() methods; TreeMap and HashMap implement those methods differently. There are similar things in Smalltalk-80 and the C++ STL.

So clearly you think Java, Smalltalk, and C++ are "a bass-ackward parody of proper OO design." I'm interested to hear what systems you consider paragons of proper OO design.


Interface inheritance is broadly fine though, because it specifically lacks the conflation of "base class" and "derived class" code. The interface can be assumed to directly imply some invariants, and it's easy to see of any implementation whether it respects them. Everything is nicely localized.


I agree, but I didn't interpret the comment I was responding to as objecting to implementation inheritance as such, rather the contrary: they're objecting to an enormously wide category of OO designs which includes not only interface inheritance but also most, but not all, uses of implementation inheritance; for example, the Smalltalk-80 design that lumps inject and select into some abstract base class for containers.


Wait, really? I don't know much about "proper OO design" but am forced to use Java at work. I commonly use classes that have e.g. a virtual public getFoo, then someone's made a RealFooGetter that actually talks to the network and a CachingFooGetter that delegates to another FooGetter and maintains a cache. Should I ask my coworkers to stop doing such things?
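(What's being described is delegation, i.e. the decorator pattern; a rough Python sketch with hypothetical names mirroring the Java shape:)

    class RealFooGetter:
        def get_foo(self, key):
            # pretend this talks to the network
            return f"foo:{key}"

    class CachingFooGetter:
        def __init__(self, delegate):
            self.delegate = delegate   # any object with a get_foo method
            self.cache = {}

        def get_foo(self, key):
            if key not in self.cache:
                self.cache[key] = self.delegate.get_foo(key)
            return self.cache[key]

    getter = CachingFooGetter(RealFooGetter())
    getter.get_foo("a")   # hits the "network" once
    getter.get_foo("a")   # served from the cache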


No.

You might find the book Program Development in Java: Abstraction, Specification, and Object-Oriented Design by Barbara Liskov and John Guttag very useful for understanding OOD/OOP.


What in the world are you talking about? The whole idea of a Type/Class Hierarchy is based on publicly accessible virtual methods or more precisely, on dynamic dispatch.


You didn't motivate your claim. Could you elaborate?


> I feel like inheritance is the actual problem with OOP. Trees of types with increasing specialisations just don't describe many real world problems that well.

I firmly agree.

I wrote https://www.perlmonks.org/?node_id=318257 over 15 years ago. I still agree with the fundamental criticism of "OO everywhere" that I offered there.


Thank you for the excellent piece.

Though w.r.t. the speculation in your final "Disclaimer", I'd expect the algebraically inclined to prefer either the elegant combinatorial explosion of a J, or the build-it-yourself language of a metaprogrammed Lisp.


Thanks, enjoyed this. That SICP quote really nails it too.


Agree 110%, and I want to take your notion as a cue to expound a little more from personal trauma :)

I find OOP useful for configuration and library-level APIs (e.g. for injecting dependencies and wrapping things), when there is really zero expectation that a user should have to understand the class definition entirely. But that's about it.

Where it seems to break down for me is when classes are actively used at the application implementation level for general encapsulation. E.g. data access layers. This can be done perfectly fine with modules/packages. Adding inheritance is just asking for trouble in a team setting IMHO.

Trouble comes when there is no clear peer review and style policy to avoid classes for anything other than config or distributing libraries (as separate projects). What ends up happening is a proliferation of subclasses or method overrides when developers are in a rush to ship features without understanding the whole codebase. This is a technical loan with very high interest.

It makes sense rationally in the moment, as classes have an inviting feel to the user: a kind of grab bag of related functionality that is easily introspected (at first). Compare that to searching through docs for all the different namespaces in a package and learning what types their various functions support, which requires more thought and a better grasp of the concepts of the library. A class with a broadly defined purpose, on the other hand, starts to look like a common utility to throw things at. It's a grab bag of stuff that is more amenable to hacking with blinders on, in a way.

Next thing you know the chains of inheritance and method overrides have grown into a very hard to disentangle hydra. The accumulated overhead of maintaining it is compounded by the debugging challenge of knowing where exactly a given instance is coming from and which layers it has been through on its way to a given breakpoint.


The concept of "if you want to touch this code base you have to understand the entire thing" is pretty tough on basically everybody but the people who either wrote most of the code or live in that code base all the time. It would seem like code bases should be workable while only understanding a portion of them, whether because people architected them to make that possible or because the programming language helps facilitate it. In some cases classes and interfaces help with that, but in plenty of cases you end up in one of the child classes trying to figure out how to get out of the hole you are in, where you can't interact with anything anywhere in the code base and you basically have to plumb pipes through lots of places to move data where you want.


Agree completely, that's a really nice summary of the key problems.

The mental effort of building up taxonomies in particular is exactly what put me off inheritance (and fussy type systems in general).

It's too easy to jump down that rabbit hole without considering whether it's a productive activity.


Perhaps an issue is that the well known saying "prefer object composition over class inheritance" isn't followed because a majority of programmers simply don't know how to apply it.

Inheritance is used inappropriately (outside of frameworks) almost everywhere.


This is highly codebase dependent. I have seen code with crazy hierarchies like you say, and ones without. It's possible to "just don't do that".


The concept of information hiding came a bit before that, almost 50 years ago, first being described in a paper by David L. Parnas entitled “On the Criteria to Be Used in Decomposing Systems into Modules”.

I wrote a blog post earlier this year based on this paper, describing how using the information hiding criterion naturally leads to an improved system structure when compared to using a procedural criterion [1]

I'm an advocate for OOP because it greatly favors information hiding and encapsulation, helping to manage the ever-growing complexity of software projects, making "it possible to develop much larger programs than before, maybe 10x larger" (quoted from the linked article)

[1] https://thomasvilhena.com/2020/03/a-strategy-for-effective-s...


Thank you. Every Software Engineer should read Software Fundamentals: Collected Papers by David L. Parnas. People are nitpicking things and just parroting what is written in other books without really understanding the idea behind it all. For example, what is an Interface/Class/Library/Framework/Module (logical and physical) if not Information Hiding at different levels? The concept is the same but the realization is different. OOD/OOP just gives you convenient syntactic sugar to express them. The rest is up to you.


Polymorphism via indirect virtual dispatch basically means you want to treat different kinds of objects differently via the same interface. If you take OOP as programming with objects, such code most definitely is doing OOP; if you take OOP as being Java or some other more specific definition, then it isn't.

When you implement an extended jump table (a table set that extends an existing table def) in C, that is also just inheritance.

There is a good reason the first C++ compiler was implemented as a preprocessor that translated C++ into plain C.
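A toy version of the jump-table view, sketched in Python dicts rather than C structs (names are hypothetical): "inheritance" is just building a new table that extends an existing one, and every call is a lookup through the table.

    # a "class" is a table of function pointers (a vtable)
    animal_vtable = {
        "speak": lambda obj: "...",
        "describe": lambda obj: f"{obj['name']} says {obj['vtable']['speak'](obj)}",
    }

    # "inheritance": copy the base table and override one entry
    dog_vtable = {**animal_vtable, "speak": lambda obj: "woof"}

    rex = {"name": "Rex", "vtable": dog_vtable}
    print(rex["vtable"]["describe"](rex))   # Rex says woof (dispatched via the table)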


These conversations aren't helped by the fact that OOP isn't well-defined, and observations about 'typical' OOP--the banana that has a reference to the gorilla and transitively the entire jungle, or the pervasive abuse of inheritance--are met with "that isn't really OOP, and you can write bad code in any paradigm!" even though other paradigms (for whatever reason) don't share these issues at nearly the scale of OOP. Inheritance and god objects were certainly how OOP was taught at universities and in the predominant literature when I was in school circa 2010. And if these (mis)features aren't part of the definition of OOP, then what distinguishes it from functional and/or data oriented paradigms? Encapsulation? Seems like FP has encapsulation and DO is orthogonal. Certainly localization of state is common to both FP and DO--it's not an OOP innovation.

If OOP is so abstract as to be indistinguishable from other paradigms, then what value does it add?


An object is in some ways a function that defines its own parameters (fields and methods) and arguments (state). This then is injected into another such function.


if it's a function, what are the inputs/outputs? do you mean that in the way of "an object is a poor person's closure"?

in case anyone's unfamiliar, here's one basic way to emulate objects with closures. i think i saw it in SICP (?), reproducing it here because i just love how simple it is:

  // might be easier to first look at
  // the usage example at the bottom

  let Point = (x, y) => {
    let self = (message, args=null) => {
      switch (message) {
        // getters
        case 'x': return x;
        case 'y': return y;

        // some operations
        // (immutable, but that's not required)

        case 'toString':
          return `Point(x=${x}, y=${y})`;

        case 'move':
          let [dx, dy] = args;
          // use our getters
          return Point(self('x')+dx, self('y')+dy);

        // let's get DRY!
        case 'plus':
          let [other] = args;
          return self('move', [other('x'), other('y')]);

        default:
          throw Error(`unknown message: ${message} ${JSON.stringify(args)}`);
      }
    };
    return self;
  };
  
  
  let p1 = Point(3, 5);
  // sending messages
  p1('x') === 3;
  p1('y') === 5;
  
  p1('move', [1, 2])('toString');
  // --> "Point(x=4, y=7)"
  
  let p2 = Point(1, 2);
  p1('plus', [p2])('toString');
  // --> "Point(x=4, y=7)"
  
dynamic dispatch & message passing, just like that! and you can easily do `__getattr__/method_missing`-style dynamism just by looking at `message`.

for the other way around, see how Java lambdas desugar to objects with a "call" method and closed-over variables as members.


> do you mean that in the way of "an object is a poor person's closure"?

Objects are a poor man's closures. And closures are a poor man's objects.

Modern languages have both. And they serve different purposes.


i intended to put "(and vice versa)" in a footnote, but forgot about it! you can see a relic of that in the last paragraph.

though i must say, i'm pretty happy in languages that have closures but don't have objects, as long as there's a nice way to do ad-hoc polymorphism (like traits/typeclasses)


also, i just made a possibly interesting connection with JS methods (and their weirdness). in the above implementation, a message is first class, so you can send the same message to multiple objects:

  let moveUp = ['move', [0, 1]];
  p1(...moveUp)
  p2(...moveUp)
but you can't get a `move` that's "bound" to a particular object¹ – the "receiver" is decided when you "send" the message. which reminds me of how JS methods work: `this` is bound to what's "to the left of the dot" when you call a method, so unlike Python,

  obj.foo(1)
is not the same as

  let f = obj.foo
  f(1) // Error: 'this' is undefined
maybe there's a connection to some language that inspired JS' OO model, with prototypes and all that?

---

¹ well, unless you explicitly wrap it in another closure like

  (args) => p1('move', args)


p1.move.bind(p1)


i know :) the point is that that's not the default (unlike many other languages), and i was wondering about a possible origin for that choice.


btw this could (sorta) be considered a special case of the fact that a tuple/struct (product type) can be represented as a function:

  let pair = (a, b) => (
    (ix) => (
      ix === 0 ? a :
      ix === 1 ? b :
      undefined
    )
  );

  
  let p = pair(3, "foo");
  p(0) // 3
  p(1) // "foo"
i've seen sth like this used as the definition of tuples in a math context. (there are also other ways to do it, like church/scott encoding)

however the object has the extra wrinkle of self-referentiality, because the `self` function is defined recursively


This is a neat example, but wouldn't dynamic dispatch imply polymorphism?


It’s dynamic dispatch because the decision about what code to call happens at runtime (via the switch statement) rather than compile time.


But that's essentially a table lookup, not a vtable lookup, right?

Dynamic dispatch implies that you dispatch based on the class of the receiver. For instance, you can't have a Point3 value that uses Point's implementation of 'x' and its own implementation of 'toString' under this example.


  let Point3 = (x, y, z) => {
    let parent = Point(x, y);
    let self = (message, args=null) => {
      switch (message) {
        // getters
        case 'z': return z;

        // some operations
        // (immutable, but that's not required)

        case 'toString':
          return `Point(x=${x}, y=${y}, z=${z})`;

        case 'move':
          let [dx, dy, dz] = args;
          // use our getters
          return Point3(self('x')+dx, self('y')+dy, self('z')+dz);

        // let's get DRY!
        case 'plus':
          let [other] = args;
          return self('move', [other('x'), other('y'), other('z')]);

        // delegate everything else ('x', 'y', ...) to the parent
        default:
          return parent(message, args);
      }
    };
    return self;
  };


maybe "extremely late binding" would be more appropriate? it's true that usually "dynamic dispatch" means a vtable, but i think this is dynamic dispatch in spirit - in fact "dispatching" messages to implementations is pretty much the only thing a closure-object does :) i chose to write the impls inline, so they're p much impossible to reuse, but that's not required.

> you can't have a Point3 value that uses Point's implementation of 'x' and its own implementation of 'toString' under this example.

you can't do much of anything under this example! i didn't think a whole object system would fit in a HN comment, i intended it to be minimal :)


this is particularly visible in the implementation of `plus` – `other` might not be a Point at all, so `other('x')` and `other('y')` might do anything; which i'd say is textbook (ad-hoc) polymorphism.

i remember reading this somewhere: "branching on values is the ultimate dynamic dispatch" :)


Excellent example, thank you.

Edit: To answer your question, the set of all public methods could be seen as one, but not the only, interpretation of the functional interface.


thank you :) i hope i didn't hijack the thread! but i just love sharing stuff like this

> the set of all public methods could be seen as 1, but not the only, interpretation of the functional interface.

could you describe another interpretation? "the set of public methods"¹ is the only thing i can think of when i hear about an object's "functional interface". it could be a terminology issue though – i'm reading "functional interface" in a general sort of way, but i could imagine it having some specific definition in OO theory.

---

¹ or recognized messages, in a more smalltalk-ish context


The other interpretation has slipped my mind since yesterday, sorry mate. I do think it's an interesting idea that I now see I have no formal understanding of. Cheers.


shame! cheers :)

PS. if you're interested in weird perspectives on this, you might find Coinduction (and "codata") interesting. from wikipedia:

> Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result.

it's a deep rabbit hole, but i remember it being applied to OOP-ish things – "destructors" would roughly correspond to exposed methods.


In perl an object is literally a data structure with functions glued on. Which is kind of the opposite of what you say!

Usually this is a hash (dictionary) which is similar to the object with fields that we all know, but it can just as easily be an array or scalar value.


> These conversations aren't helped by the fact that OOP isn't well-defined, and observations about 'typical' OOP--the banana that has a reference to the gorilla and transitively the entire jungle, or the pervasive abuse of inheritance--are met with "that isn't really OOP, and you can write bad code in any paradigm!"

From Alan Kay (who coined the term)[1]:

> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.

Almost anyone who has dealt with present-day OOP (Python, Ruby, C++, C#, others, or, God forbid, Java) has had the same thoughts as you, I'm guessing. I've come to the conclusion that OO as originally envisioned had a lot of good ideas, and these are surprisingly compatible with and complementary to a lot of the (almost-)pure FP ideas that have gained traction over the last few years.

OOP also had huge mistakes:

- Class inheritance is the biggest one. Kay himself mentions inheritance appearing only in the later versions of Xerox's Smalltalk. No need to really elaborate. Implement interfaces, compose behavior and don't inherit. He has posted some more of his thoughts on this exact topic.[2]

- Java was a giant industry-harming one. Now multiple generations of programmers have been brought up to think that the definition of an object is to write a class (which is not an essential characteristic), declare internal state and then write accessors and mutators for every one of them. We might as well write COBOL. But, having tried to solve problems in Java, I get why it's done. The OO is so weak and awful that I'd rather just crank out slightly-safer C in Java than jump through the horrific hoops. (Just as an aside, I find it sad that Sun had the teams that developed both Java and Self. Self used prototypes instead of classes, which influenced Javascript, but also had a killer JIT that made it super fast and a ground-breaking generational garbage collector. Sun ended up cannibalizing the Self team to assist with Java, and some of the features of the JVM were ports of work that had originally been done for the Self VM.)

- C++ was kind of a mistake, because it introduced the interminable "public" and "private" and now the endless litany of stupid access modifiers. They were needed because C++ had this need to be compatible with C and do dynamic dispatch and have some semblance of type safety (though C is only weakly typed) and have it run (and compile) before the heat death of the universe. C++ isn't as horrible as what came after it though IMO (i.e. Java).

- Xerox horribly mismanaging Smalltalk was another mistake. Xerox was smart enough to realize that Smalltalk was insanely valuable, but instead of realizing that it should be a loss leader/free drugs, they decided to lock it away in the high tower and then let everyone else pillage their research that could be commercialized (GUIs on the desktop, laser printers, Ethernet, among many others). This literally led Apple to develop Objective-C.

I really went down a rabbit hole on this topic around two years ago. Two ideas really helped crystalize the idea for me, both from Kay.

One is that the Internet is an object-oriented system. It communicates via message-passing, it is fault-tolerant, and all of its behavior is defined at runtime (i.e. dynamically bound). We don't need to bring the entire Internet down when someone develops a new network protocol or when a gateway dies or some other problem happens. It continues to work and has in fact never gone down (though early on during ARPANet, there were instances of synchronous upgrades [3]).

The other, deeper one (and the much more open one) is that "data" is error-prone and should be avoided and is, in fact, "the problem" (i.e. the root of the Software Crisis). This sounds preposterous to most programmers.[4] In fact, I didn't even get it when I first heard it. I think the basic idea is that he uses "data" as a physicist or statistician would use it: quantities measured in the world with varying accuracy/precision.

This seemed preposterous or even heretical until I started noticing it in the large codebase I was working on at the time. A lot of the code was in a nominally object-oriented language, but the style was very procedural (i.e. "If a then do X, do Y; else do Z"). I noticed that all of our code was working on large data structures (think a relational database schema), but there was no coherent description of it or what it meant anywhere. If I touched one of the pieces of code, I invariably had to start adding more conditional branches in order to preserve correctness. In order to make any sense of the code, I made a ton of implicit assumptions about what the patterns I saw actually meant, rather than relying on the program itself to communicate this through its code or meaningful constraints on the data.

We would be better served with "information" with less noise and more signal than what we get with data and data structures (think Shannon). The only real success we've had with data is a few encoding schemes (e.g. ASCII/Unicode, 8-bit bytes, IEEE-754, etc.), but that approach clearly doesn't scale beyond a dozen or two data encoding schemes. OO (of the high signal-to-noise variety championed by Kay) at least has a story for this, which is that certain patterns do indeed have predictable characteristics, and complex patterns can usually be composed by a handful of simple ones. I have yet to see a system like this actually in practice, but I can imagine something like it could exist, given the appropriate tools (in particular, a good, state-of-the-art language that isn't some rehash of stuff from the 60s - Erlang might fit the bill).

The ideas of DO as articulated by Mike Acton are antithetical to the ideas of OO (although a well-designed future OO system - including late bound hardware - could probably represent and implement a lot of the techniques that Acton talks about to improve performance and quality). This doesn't mean they're wrong, but I see treating data as the _only_ thing that matters as being fundamentally opposed to the idea that data is actually the root of most evil. People adopting DO should be aware that robustly applying the technique in the long run will require having detailed knowledge of every aspect of the system on every change. (I think it's not a coincidence that it was originally developed for game dev, which often doesn't have the same long-term maintenance requirements as other kinds of software, and certainly not software like the Internet.)

FP and OO are more complementary. Immutability and referential transparency can be very valuable in object systems; encapsulation, extreme late binding and message passing can be very valuable in functional systems (in fact, Haskell and Erlang surpass "OO" languages when scored on some of these characteristics). Scala explicitly embraces the complementarity between the two paradigms.

[1] https://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_ka...

[2] https://www.quora.com/What-does-Alan-Kay-think-about-inherit...

[3] https://en.wikipedia.org/wiki/Flag_day_(computing)

[4] https://news.ycombinator.com/item?id=11945869



> Even if you look at well-designed non-OO code like, say the Linux kernel

but the linux kernel is definitely OO, except vtables are filled by hand instead of with the help of a compiler.

https://www.kernel.org/doc/html/v4.10/driver-api/infrastruct...

structs with data + function pointers are literally OOP objects. see e.g. https://lwn.net/Articles/444910/

> you'll see that there are object oriented concepts like information hiding and polymorphism all over the place; they just don't formalize them with the "class" or "private" keywords.

yes, that's still OO. python doesn't have a private keyword either and that does not make its classes not OO.
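to make that concrete, a small sketch of Python's convention-based privacy (hypothetical names):

    class Account:
        def __init__(self):
            self._balance = 0     # single underscore: "private" by convention only
            self.__audit = []     # double underscore: name-mangled to _Account__audit

        def deposit(self, amount):
            self._balance += amount
            self.__audit.append(amount)

    a = Account()
    a.deposit(10)
    # a.__audit would raise AttributeError, but the mangled name still works:
    print(a._Account__audit)   # [10] -- obfuscation, not enforcement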


Smalltalk doesn't have a private keyword, or private methods, and it's the language the term "object-oriented" was invented to describe.


Procedural code is quite popular in Java as used with IoC.

Global variables are called "Singletons". DI injects a reference to all the globals you need to do your work, so you don't feel so grubby. Often DI injects some kind of accessor to the database, a giant store of global variables.

Procedures are called beans, but you can tell they're procedures because they have a single public entry point and are usually called SomeKindOfVerber with a main verb() or do() or execute() method, they have the procedures they call (sorry, I mean "collaborators") injected, and they themselves are injected to where they'll be called from. Their state is ephemeral, and only exists for the purpose of executing a single flow of logic (this is called "bean scope").

All you need to do to describe your code as object-oriented is convert your procedures into beans, and tada!
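(A sketch of the shape being described, in Python for brevity; all names are hypothetical:)

    class OrderSubmitter:                        # a "SomeKindOfVerber"
        def __init__(self, repository, mailer):  # injected "collaborators"
            self.repository = repository
            self.mailer = mailer

        def execute(self, order):                # the single public entry point
            self.repository.save(order)          # the big shared store of state
            self.mailer.send_confirmation(order)

    # Squint and it's just a procedure: execute(order, repository, mailer).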


> Procedural code is quite popular in Java as used with IoC.

Yep, you get all the downsides of coding in Basic with the bonus of the performance hit you take from all the reflective instantiation.


You don't have to incur the cost of reflection. Dagger2 for instance, is a compile time DI framework. Micronaut is also compile time.


DI frameworks only encourage bloat, e.g., 1-to-1 interface to concrete classes. You know, for "unit testability" or "just in case" we need to implement the interface with a different concrete class later.

They can help simplify what is essentially a problem of complexity that is self-induced by the original decision to use OOP. But it's rare that they don't also introduce tons of additional other complexity, e.g., at runtime. Debugging these applications is one of the most infuriating things I've ever done as a programmer.


What kind of bloat are you talking about here? Binary and/or runtime cost? In the case of Java/JVM (and I believe C#/.NET) the JIT is almost trivially able to inline monomorphic calls, meaning that effectively, these interface calls are free.

Secondly, you don't even need to use interfaces with one-off implementations if you don't want. You can inject your concrete implementation class instances directly. Using interfaces just makes it easier to swap implementations as needed. The unit testability argument is completely orthogonal (and also in the case of the JVM, you can mock classes as they're open by default, so again, you don't even need to use one-off interfaces).

And I don't see how this issue is limited to OOP. At the end of the day, you're going to have to hook up your components and object graph anyway, either manually or using some kind of DI, so the complexity is always there. Even if you don't write interfaces with single implementors, non-trivial programs quickly become extremely verbose to implement. Maybe something like Scala's `implicit` can be of help here.

> Debugging these applications is one of the most infuriating things I've ever done as a programmer.

Again, Dagger and Micronaut do DI at compile time using codegen, so nothing to debug at runtime.


Compile time DI doesn't mitigate huge stack traces or using unnecessary interfaces, which both contribute to bloat.

Non-trivial programs don't have to be verbose. They are generally made so by the addition of unnecessary "features" that, in turn, require even more additions "to reduce complexity."


As I mentioned, you don't have to use unnecessary interfaces with DI (compile time or runtime). You can directly inject your implementation classes. Secondly, huge stack traces are not related to DI either, they're an orthogonal issue.


I agree with a lot of this, although I wouldn't conflate DI references with global variables. They certainly allow for shared state between different components, but the great thing about DI is that you can always create new contexts with a separate set of shared state references.

I also agree that it is kind of funny to constantly see the Verber classes everywhere in Java, but it has at least improved over the last several years with the standardization around Function<T,R>, Supplier<T>, Consumer<T>, etc. Lambda syntax and method references also help. I'd admit the syntax for using a functional style in Java is still a little janky, but it's quite an improvement from where things were back in the day.


I'm being tongue-in-cheek, of course.

The biggest upside I see for automated DI is to get back concision once you've parameterized your code by the code it depends on. Usually this parameterization is done for unit testing purposes.

Direct calls are hard to exclude or intercept for testing, so you call a method on an interface or a base class or an object instead, and now you need to be initialized with a reference which may be replaced for testing purposes. But the requirement for initialization remains in production, even when there is normally exactly one candidate and it might not even require state if it wasn't for all the other references it in turn needs to call.

The great increase in the burden of initialization creates demand for the IoC / DI container. And once you have a container framework which takes care of gluing objects together, why not let it stick transparent adapters (proxies) around the objects too? Proxies for per-request state, multi-threading, transaction management, caching, all that good Common Lisp advice of before, after and around.

The temptation for clever people can be hard to resist, and before you know it your local code has global side effects unknowable to people who don't have the full picture in their head.

I actually blame unit testing, particularly when combined with a static type system. Static binding, in particular: you could mock static calls if dynamic binding like Emacs Lisp was available!

Unit testing is great at the leaves of the call graph, for self-contained things like container libraries, or things which behave functionally, i.e. they transform things.

Unit testing is like arthritis in the middle of the call graph. It makes rearchitecting much harder by baking in assumptions both upstream and downstream, and the easy temptation of mocking usually overspecifies the interaction with collaborators. The parameterization demands DI which encourages too much cleverness, as discussed. And unit tests don't leave you very confident that things will hold together because test mocks and stubs don't necessarily replicate the behaviour of the real things.

I like integration testing higher up the stack, and trying to convert a program as much as possible into a composition of functional elements - things that belong on or near the leaves of the call graph, and leaving as little "middle" as possible to be infected with unit tests.


I agree and might add that having improvements on something imperfect (but a thing that does get real work done) can be worth much more than compsci perfectionism.

It is very hard to do something that is 'good' in technical terms while also being 'good' in product terms. The discussion often doesn't revolve around the practical implications of taking steps toward a commercially uncommon language or paradigm, yet business engineering is pretty much where all the 'stuff gets done'.


You can have your cake and eat it too. For example, react’s functional components are both good in product terms and good in technical terms. C#, JavaScript, python, swift and others have lots of tools for programming in a more functional style.

And java isn’t where all the ‘stuff gets done’ - well, outside of the enterprise programming bubble. JavaScript and (surprisingly) python are the workhorses of the modern startup ecosystem. I can’t speak with certainty about python, but separating logic and data, and using immutability with stateless components and functions is all very common in the JavaScript world.


I didn't intend to sound like I meant that Java is where the stuff gets done but rather that 'stuff gets done' using non-ideal languages, frameworks, tools, context etc.

C, C++, Java, Python, JavaScript, C#, even PHP are not exactly ideal, even in their special niches, but a lot of real world stuff, if not most 'stuff' gets done using that.


As somebody who tried to force OOP onto Rust only to realize how easy things can go when you use a more data-oriented/compositional approach: I think the most important thing to keep in mind is that OOP is only one possible way of abstracting things.

There are other ways as well, and all of them work well for certain problems and not so well for others. Limiting yourself to only one lens with which you look at problems is a bad thing.


> Limiting yourself to only one lens

Well, agree, but in my experience people who abandon OO are usually the ones who are limiting themselves to only one lens - the top-down/procedural way software was developed in the 60's and 70's which is easier to comprehend but ultimately unsustainable for subtle reasons. (not saying that's you, just saying a lot of people do).

Rejecting OO in favor of FP like Paul Graham does? Great! Hopefully you'll make some headway and functional languages/styles will really catch on!

Rejecting OO in favor of one big function that handles everything and a few global variables (like nearly everybody who uses the Spring "framework" abomination in Java)? Maybe spend some time considering why, if it's so obvious, it's referred to as a "bad practice".


I think you and I must be part of different bubbles, because I haven't heard anyone advocating for moving away from OOP toward procedural code. Looking at new languages which are proudly not OO (e.g., Go, Rust, etc), none appear to be advertising that you can write code that looks like 60s/70s procedural code--or at least not in the "pervasive global state" sense that you originally attributed to it. Instead, those languages tend to have first class functions and discourage global state.


> "... languages which are proudly not OO (e.g., Go, ...."

Outside of class based inheritance which aspects of Object Oriented is Go missing? I code Go daily and consider it primarily an OO language.


In general, "OO design", which is to say if you're writing a BookStore application, there are no Book types with "sell()" methods that imply that book instances must have a reference to the payment service and everything else in the application in addition to their plain-old-data fields (note that this is another phrasing of the banana/gorilla/jungle quip). Inheritance and OO design are the distinguishing features of OOP in my mind; if you don't have these, then OOP is indistinguishable from FP or DOP. If Go is OOP, then which languages aren't, and why?


> "Inheritance and OO design are the distinguishing features of OOP in my mind;"

I don't consider inheritance an important feature for a language to be an "Object Oriented Language." Inheritance: none, single, and multiple have been bandied about in the OO world for quite some time now so I understand where you're coming from.

I break out design from the language. While I wouldn't design very deep without knowing the language I was targeting I do consider Go as having a natural bias towards driving me to OO design over, say, functional or procedural. For me Go lends itself naturally to OO design. Sans inheritance, of course.

> "if you don't have these, then OOP is indistinguishable from FP or DOP. If Go is OOP, then which languages aren't, and why?"

Haskell, some Lisps, some Forths, etc. would be more non-OO languages in my mind. C is also a non-OO language to me even though you can certainly code it in an OO style.

I guess I agree with the blog author, OO is simply part of the language landscape these days and isn't going to be supplanted by functional any more than procedural was supplanted by OO. And with my mind-set then I'll say most all of Algol's recent children can be considered OO languages. So for me Go is definitely an object oriented language. But I now understand why you don't.


> I don't consider inheritance an important feature for a language to be an "Object Oriented Language."

I mostly agree except for one catch. How do you extend a class which is encapsulated properly? Often this is the only way I use inheritance because I need to do one or two extra things that rely on the internal state of an object which I can't modify directly. So I'm wondering if there is a good way that doesn't rely on modifying the implementation of the original object to be more flexible.


This is tough to answer without a language in mind. Depending on what access you have you can use interfaces to get run time method binding or composition to get compile time binding. For either of these you'll have to "wire" the needed types together. This is what Go has you doing.

Going back to the "old days" you'd reach for a pattern. Adapter pattern was the most common. Bridge or Facade maybe? But patterns fell out of favor a while back (imo) because they were tough on the coder's mind. A lot of languages added useful bits to create these pattern behaviors more naturally and without reaching for a book. I'd need to see code to be more specific.


I'm not familiar with Go but apparently it has "embedding" which is a little different. Maybe that works. But it's largely playing the same role - a way to extend a class you might not control.

So I'd rephrase that I suppose, perhaps we don't need something called Inheritance, but we do need a way to extend objects we don't control without breaking encapsulation.

As an example if you had a method that outputs XML and you want JSON, you can use an adapter object to call it, read the string, and reformat and pass it through. That works (and I've done things just as bad or worse many times from necessity), but it's definitely more hackish and less efficient than a solution that uses object fields to output JSON natively. Inheritance is my familiar way to do this in cases where I don't control the code (in which case injecting a Writer class of some sort probably makes more sense).


Go's embedding _is_ composition (has-a) in any other OO language but with a little sugar. Embedding allows the compiler to optimize the memory layout for a composed object and allows you to reference the composed object's fields without a qualifier. For example, if a Student class has-a Address {city, state, zip} then you could refer to the city field by coding aStudent.city rather than aStudent.address.city. It's handy but doesn't otherwise change "normal" composition.

I'm not advocating against inheritance. If it works then go for it. I do when I'm coding in languages that have it. This stemmed from a thread discussing whether the lack of inheritance in Go was sufficient to exclude it from being an object-oriented language.


> but we do need a way to extend objects we don't control without breaking encapsulation.

By definition, you can't extend objects you don't control without violating encapsulation. Encapsulation is all about controlling access so that the owner of a class can make changes without breaking downstream code. If downstream code accesses these private APIs, the downstream code may break, regardless of whether the access was via inheritance or composition (as a side note, "protected" makes no sense--who cares if downstream code breaks because it was accessed by inheritance but not by composition?).

> As an example if you had a method that outputs XML and you want JSON, you can use an adapter object to call it, read the string, and reformat and pass it through. That works (and I've done things just as bad or worse many times from necessity), but it's definitely more hackish and less efficient than a solution that uses object fields to output JSON natively. Inheritance is my familiar way to do this in cases where I don't control the code (in which case injecting a Writer class of some sort probably makes more sense).

You can do this just as easily without inheritance. You just need a way to access the member fields--whether you access them via inheritance or otherwise doesn't really matter. E.g.,

    import json

    class Foo:
        def __init__(self, x: int, y: int) -> None:
            self.x = x
            self.y = y

        def xml(self) -> str:
            return f"<foo x={self.x} y={self.y} />"

    # Via inheritance
    class JSONFoo(Foo):
        def json(self) -> str:
            return json.dumps({"x": self.x, "y": self.y})


    # Via composition
    class JSONFoo:
        def __init__(self, foo: Foo) -> None:
            self.foo = foo

        def xml(self) -> str:
            return self.foo.xml()

        def json(self) -> str:
            return json.dumps({"x": self.foo.x, "y": self.foo.y})

    # Simple
    def json_foo(foo: Foo) -> str:
        return json.dumps({"x": foo.x, "y": foo.y})
> I'm not familiar with Go but apparently it has "embedding" which is a little different. Maybe that works. But it's largely playing the same role - a way to extend a class you might not control.

Go's embedding is exactly composition. It doesn't let you access private member data of the embedded class. It's just syntax sugar for delegation. In other words, you could create the composition version of the JSONFoo class like this:

    type Foo struct{ X, Y int }

    func (f *Foo) XML() string { return fmt.Sprintf("<foo x=%d y=%d />", f.X, f.Y) }

    // The compiler will automatically generate this method:
    //
    // func (jf *JSONFoo) XML() string { return jf.Foo.XML() }
    //
    // And for a given instance of JSONFoo, the compiler will convert all instances
    // of jsonFooInstance.X to jsonFooInstance.Foo.X.
    type JSONFoo struct {Foo}

    func (jf *JSONFoo) JSON() string { return fmt.Sprintf(`{"x": %d, "y": %d}`, jf.X, jf.Y) }
Note that a `JSONFoo` isn't a `Foo`. This is an error:

    func printXML(foo *Foo) { fmt.Println(foo.XML()) }

    func main() {
        jsonFooInstance := JSONFoo{Foo{0, 0}}
        // printXML(&jsonFooInstance) // error: *JSONFoo is not a *Foo
        printXML(&jsonFooInstance.Foo) // correct!
    }


Thank you for the examples. I've been in perl too long where a lot of good practice is just a suggestion and not enforced. In particular private vs protected (which is what I was thinking of regarding accessing internals) doesn't really exist.

I tend to lean towards inheritance for functional extensions and composition for everything else, but there are so many ways to skin an OO cat.


I'll be blunt: That Book type design is horrible, precisely because of the problem you are complaining about. The answer is simple: Don't do it that way.

If that's what people think OOP is supposed to look like, no wonder they don't like it.


I suspect "OOP" is taught like that most of the time. Or more precisely, this is taught as being OOP (as a fundamental distinguishing point from "procedural", most often -- ironically, the opposition "OOP" / "procedural" is then also complete garbage)

I've yet to find a non-boring, yet descriptive, consensus alternate definition of OOP; the first thought people have is more Java than Smalltalk, and if you exclude inheritance and everything virtual, then it boils down to syntactic sugar for single non-dynamic dispatch, plus encapsulation. On the other (but still very sweet) hand, is Ruby OOP when you write 10.times { puts "hello" }? I don't know. Or rather: it's completely arbitrary. Even what is most useful, when used cleanly, is not exclusive to "OOP": invariants are also most important, and arguably way better known by practitioners, in FP.

Natural language is inherently descriptive. If there is a better "OOP", you just have to fight to make it prevail, so that people think of it when they hear "OOP". Maybe we can make the bad teachers stop in other ways too. Like not even using the word, and making "alternate" approaches (maybe even what you are calling OOP) fashionable.


I think of OOP as encapsulation, inheritance, and polymorphism (but not necessarily all of them being used all of the time).

My problem with the example is that Book should not have a sell() method. Instead, BookStore should have a sell(Book) method. The machinery for how to send in a credit card transaction does not belong in Book. You still need it - it has to live somewhere - but BookStore is the place, not Book.
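
A rough sketch of what I mean (the fields and the payment gateway are invented for illustration):

    class Book:
        # A dumb data holder -- no payment machinery in here.
        def __init__(self, title: str, price: float) -> None:
            self.title = title
            self.price = price

    class BookStore:
        def __init__(self, payment_gateway) -> None:
            self.payment_gateway = payment_gateway  # the store knows about credit cards
            self.inventory: list[Book] = []

        def sell(self, book: Book, card_number: str) -> None:
            # The transaction machinery lives in the store, not the book.
            self.payment_gateway.charge(card_number, book.price)
            self.inventory.remove(book)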


> OOP as encapsulation, inheritance, and polymorphism

Ugh.

Polymorphism is far from unique to OO, and predates it.

If by encapsulation you mean information hiding, that's as old as the birth of OO. I've also sometimes encountered an alternate definition of encapsulation that more or less means "information hiding in OO", which seems a bit circular to me?

That doesn't mean OO can't have those features, but you didn't say "OOP usually includes loops and conditionals", because it's too obvious to mention.


Exactly this. As someone who abandoned OOP, I find that my programming has improved a lot. But whenever I hear criticism of OO, it's always about this ancient taxonomic structuring (is-a, has-a) idea that no one doing post-90s OOP is doing at all. And it's always proponents of "simpler" procedural paradigms that harp on about this. Weird.


If no one does this taxonomic structuring or god object design, then what distinguishes good OOP from data oriented programming? Also, taxonomic design was popular in code and in pedagogy (in Java, C#, and C++ anyway) when I was in school circa 2010.


That’s pretty much my point. If you strip this away, what is left to distinguish it from other paradigms? Inheritance?


See my reply to tenac. The problem isn't that OOP is bad, the problem is that you're doing it badly. It's not that you misunderstand what the ideas are, either. It's just... really bad design.

You even know what the problems are with the design - you described them quite well. But it's not showing that OOP is horribly flawed. It's just showing you that you need a different design.


I’m pretty convinced that it’s bad design, but many OOP practitioners (including textbooks and thought leaders for a good while) disagree. And even if you agree that all of these facets of bad design (including inheritance hierarchies) are Not True OO, then what is left that distinguishes object- from data-oriented design? What value does “OO” add to our lexicon?


I didn't say that inheritance hierarchies are bad. They're great... if you've got entities that are actually in a hierarchy. I've used a hierarchy in a genetic programming situation, with a Statement being the base class, and a number of derived classes that implemented the available instruction set, with a virtual execute() method. It worked great... for that problem.
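
Roughly, in Python (a minimal sketch -- these concrete instruction classes are just illustrative, not the actual instruction set):

    class Statement:
        def execute(self, env: dict) -> None:
            raise NotImplementedError

    class Add(Statement):
        def __init__(self, dst: str, a: str, b: str) -> None:
            self.dst, self.a, self.b = dst, a, b

        def execute(self, env: dict) -> None:
            env[self.dst] = env[self.a] + env[self.b]

    class Copy(Statement):
        def __init__(self, dst: str, src: str) -> None:
            self.dst, self.src = dst, src

        def execute(self, env: dict) -> None:
            env[self.dst] = env[self.src]

    # An evolved program is just a list of Statements; the interpreter
    # doesn't care which concrete instructions it contains.
    def run(program: list[Statement], env: dict) -> None:
        for stmt in program:
            stmt.execute(env)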

If you're having a hierarchy just to have a hierarchy, that's about as wise as having gotos just to have gotos. Nobody sane would do that today; maybe we'll get there with hierarchy, too.

I'm not sure what your definition of "data-oriented design" is, so it's hard for me to say how OO is different. I'll take a stab at it anyway, but know in advance that my response may be orthogonal to your question.

You've got data - say, data about a book, in the original example. In the structured programming days, that would be an "entity"; now it's an "object". The difference is that, with private data and public methods, nobody can modify the book's data just by having a reference or a pointer to it. (You could kind of do this in C with a source file that operates on the structure, with some of the functions being file-static. But that doesn't keep any other code that has the .h file from modifying the structure without using the functions.)

The philosophical difference with OO might be that OO uses the compiler to enforce that data is only accessed in approved ways. This is a logical extension of static typing. (Of course, for non-static-typed OO languages, it doesn't happen that way. They enforce it at runtime.)


Sounds like OO to you is all about encapsulation. I think that's fine, but lots of paradigms support encapsulation. It's absolutely pervasive in FP and DO programs, for example (even C supports opaque pointers). Do we say any language with encapsulation is also object-oriented?

Or is OO encapsulation + inheritance? One problem is that the OOP community seems split about whether or not inheritance is a defining feature of OOP. And since encapsulation isn't a defining feature, surely it's inheritance. Unfortunately, there's no clear consensus, so OO as a term is not very useful.

> The philosophical difference with OO might be that OO uses the compiler to enforce that data is only accessed in approved ways. This is a logical extension of static typing. (Of course, for non-static-typed OO languages, it doesn't happen that way. They enforce it at runtime.)

This is an interesting distinction; ironically C compilers support encapsulation via opaque pointers while Python doesn't enforce encapsulation even at runtime (it's all based on convention--"consenting adults" and all that).

> I didn't say that inheritance hierarchies are bad. They're great... if you've got entities that are actually in a hierarchy.

I think there are cases where inheritance doesn't overtly bite you, but I've never seen a case where inheritance is actually cleaner than composition (with the exception that in many languages, inheritance is the only way to automatically delegate--they lack something like Go's struct embedding). Specifically in your Statement example, you could have gotten your polymorphism from a Statement interface instead of a base class.
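
In Python terms, that's a Protocol instead of a base class -- a sketch of the interface version (same made-up execute() shape as above):

    from typing import Protocol

    class Statement(Protocol):
        def execute(self, env: dict) -> None: ...

    # No base class: anything with a matching execute() conforms.
    class Add:
        def __init__(self, dst: str, a: str, b: str) -> None:
            self.dst, self.a, self.b = dst, a, b

        def execute(self, env: dict) -> None:
            env[self.dst] = env[self.a] + env[self.b]

    def run(program: list[Statement], env: dict) -> None:
        for stmt in program:
            stmt.execute(env)

You keep the polymorphic dispatch, but no class can reach into another's internals through a shared base class.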

Inheritance seems like just a particular subset of composition and polymorphism--there are cases for which it is appropriate, but why does it need to exist at all when you can get the same benefits from composing the constituent components (components being polymorphism and composition via interfaces/first-class-functions and object-/functional-composition, respectively)? It feels like having an add4() function--it's not always bad (sometimes you really need to add 4 to another number), but the more general 2-argument add() function works just as well in the cases where add4() doesn't overtly bite you, and it's much less likely to be abused.


The problem with your claims now is that they are unfalsifiable. How do we go about proving or rejecting your ostensible claim that this is how OOP programmers say it should be done? The OOP community has a vibrant culture of critiquing bad designs. See DDD and similar efforts.


> How do we go about proving or rejecting your ostensible claim that this how OOP programmers say it should be done?

Off the top of my head you could survey the landscape of OOP books, blog posts, and code bases throughout history. But it doesn’t really matter—even if you don’t believe me and all OOP programmers believe these are design flaws, then my question remains: what distinguishes “good OO design” from data oriented design?


If Go's paradigm is considered OO then what would you consider non OO? Seems like almost every language has the baseline of struct and struct methods (with private and public semantics)


> If Go's paradigm is considered OO then what would you consider non OO?

Classic, pre-object Pascal. C. Very little new in the last few decades, because OOP is a very influential paradigm.


Even though it's not OO, that doesn't stop people adding their own hacked OO to C. I think the Quake source code had a lot of this: structs with function pointers. OO is such a natural fit for games in particular that it finds a way to exist.


The whole GObject ecosystem is hacky object oriented C, to provide another suite of examples.


I personally found hacked OO ala Perl to be the first time I really understood it, even after years of class diagrams at university. The abstraction got in the way. I needed the nuts and bolts.


Yeah, I delved into it when I was working with C, but by that time I had no interest in reproducing inheritance but only interfaces.


Without DI frameworks and IoC class design, good luck testing your OO code.

I also don't get the Spring hate and what that has to do with creating "one big function and a few global variables."


Dependency Injection and Inversion of Control are great design principles that most capable programmers discovered on their own before they were given names. Having a worthless container instantiate everything via reflection and moving what ought to be compile-time errors into the runtime to enforce that is an unnecessary waste.


Taking care of instantiation AND teardown saves devs from doing quite a lot of work.

Do you use Dagger for your DI needs or do you roll your own container?


Instantiation and teardown aren’t hard, or at any rate they shouldn’t have to be. If you roll this stuff by hand, it can often indicate weak areas of the design -- unnecessarily tedious boilerplate that can be cleaned up.

To me, DI frameworks are usually an example of the tail wagging the dog. You make your code more complex, slower and harder to debug just for the sake of making it more testable. But if you avoid leaning too hard on the framework, it’s usually possible to keep the design clean and get good test coverage. In fact you can often get better test coverage, by using real components rather than being tempted into relying on mocks.


...and some of those Most Capable Programmers helped by letting other reuse what they wrote, in form of IoC/DI frameworks! Or are you suggesting that every programmer worth their salt should write their own IoC to suit their needs?


See, that's kind of like saying, "the Most Capable Programmers helped by writing a program, are you suggesting that every programmer should write their own program to suit their needs?" I mean, yes... absolutely. I don't think anybody should write a program that parses an XML file, reads class names out of specific attributes, reflectively instantiates them, then reflectively matches up their methods with other class names and then reflectively associates them, nor do I think that anybody should write a program that scans annotations to match class types to attributes and instantiate them behind the scenes either. I don't think Rod Johnson should have done that, and I don't think anybody should use it. It's pure overhead with no benefit. I do think that programmers should decouple implementation from interface, program to interfaces, and then write a dozen lines of Java code that do, in a type-safe, efficient way, what the eight or ten megabytes of Spring "framework" does a bad job of doing "for you" slowly and poorly.
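
For contrast, the hand-wired version looks something like this (a sketch in Python rather than Java, with hypothetical Repository/Service names):

    from typing import Protocol

    class Repository(Protocol):
        def find_user(self, user_id: int) -> str: ...

    class SqlRepository:
        def find_user(self, user_id: int) -> str:
            return f"user-{user_id}"  # stand-in for a real query

    class UserService:
        def __init__(self, repo: Repository) -> None:
            self.repo = repo  # dependency injected through the constructor

    def main() -> None:
        # The whole "container": construction is explicit, in order,
        # and wiring mistakes are type errors instead of runtime surprises.
        service = UserService(SqlRepository())

That's constructor injection with no framework at all.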


Actually that's what I typically do in Java. There's far too much magic in Spring. Initialization code to read a parameter file and inject values into objects only takes a couple hundred lines. Plus there's huge value in controlling boot and exit sequences directly.

I do the same with object-relational mapping--write a thin layer of classes to wrap INSERT, SELECT, UPDATE, DELETE. Not hard to write, easy to test, and you can always slip into SQL if you run into problems.

Use-case specific code should always be considered as an alternative to general frameworks.


That is not what I meant. Learning Rust was the best thing that ever happened to my Python code, for example. Does this mean I never use OOP patterns anymore? Not at all.

I just think more about how best to structure my code. Sometimes an OOP-like pattern works well, sometimes something else works better.


Rust’s type + trait system seems to completely fix the problems of OOP's hierarchical inheritance.


>object oriented concepts like information hiding

Having read "Software Fundamentals" [1], I think David Parnas [2] would beg to differ on that bit. Not the least because he actually invented the concept.

Parnas wrote extensively on information hiding as the basis for modular program construction, abstract interfaces that provide services without revealing implementation, and other software engineering topics.

[1] https://www.goodreads.com/book/show/1416932.Software_Fundame...

[2] https://en.wikipedia.org/wiki/David_Parnas


Not sure what you're saying. Are you saying OOP does not have information hiding or that the "information hiding" in OO languages should not be called information hiding as per Parnas? Or something else entirely?


I'm saying that just because someone learned about a concept (in this case, information hiding) in a certain context (OOP), that does not mean that this is where that concept originated (which was implied). It's a common mistake I see all the time.

In short, information hiding is a concept from modular procedural programming that was then adopted by OOP, not an original OOP concept.

In general, OOP is really mostly a case of "let's take a bunch of known best practices and encourage (or even enforce) them on the language level". Procedural programming can be as object-oriented (or not) as you like. With minimal amounts of syntactic sugar in the compiler, like e.g. rewriting "foo.bar(...)" to "bar(foo,...)" you won't even be able to easily tell the difference when you see the source code.
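
To illustrate in Python, which happens to make that sugar literal -- a bound method call is just a function call with the receiver as the first argument:

    class Foo:
        def bar(self, x: int) -> int:
            return x + 1

    foo = Foo()
    assert foo.bar(41) == Foo.bar(foo, 41)  # the two calls are equivalent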

While I'm at it, the critique that procedural programming necessarily results in a quagmire of global state is also wrong. It's all up to the programmer. Nobody forces you to use globals. Some OO languages try to force you not to, but usually fail in the face of determined opposition from the coder.

In essence, foo coders will find a way to produce foo code.


Got it. Makes sense.


I read their point as “information hiding is not a concept exclusive to OO.”

Whether or not the original quote meant to imply that I’m not sure, but do see how it could be read that way.


Plain global state is not that bad, actually. What people do to fix global state often makes things worse, and IMO OO is also a terrible attempt.

Mutating coarse-grained or global state is pretty much similar to functional programming approaches. The global state is a big tree; there are functions designed for this type, and there are also functions and types for every fraction of the tree. It's straightforward, elegant, and extensible.

It's basically what Alan Perlis said in SICP: "It is better to have 100 functions operate on one data structure than to have 10 functions operate on 10 data structures."

The only difference is that functional programming always transacts the whole tree instead of mutating it, since it uses immutable algebraic data types.

It's not far from what Redux/Erlang process/Haskell ST Monad etc does.
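
A minimal Python sketch of that style (the shape of the state tree here is invented):

    # The global state is one big tree...
    state = {"user": {"name": "ada"}, "cart": {"items": (), "total": 0}}

    # ...with functions for each fraction of the tree. The pure versions
    # build a new tree instead of mutating in place.
    def add_item(cart: dict, item: str, price: int) -> dict:
        return {"items": cart["items"] + (item,), "total": cart["total"] + price}

    def with_item(state: dict, item: str, price: int) -> dict:
        return {**state, "cart": add_item(state["cart"], item, price)}

    state = with_item(state, "book", 20)  # transact the whole tree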

IMO OO makes it more obscure and too subjective compared to this approach. It makes people lose the whole picture very quickly, and then accidental complexity comes into play in every corner.


bingo. It's not "global state" that's wrong. It's global MUTABLE state.

OOP brought the terrible idea of coupling state and behavior, so of course "global state" would be wrong.


OOP had another advantage over procedural: Often in procedural programming, you'd pass around a pointer to some data structure. If the data structure got messed up (put in an inconsistent state, say), you had no idea what code was responsible. With OOP, you look at the class's member functions, which is a much smaller set.


> If the data structure got messed up (put in an inconsistent state, say), you had no idea what code was responsible. With OOP, you look at the class's member functions, which is a much smaller set.

Yes, this is correct. But data abstraction and encapsulation are found in all object-based code; they're not specific to OOP.


I'm not sure I understand. In your view, what is the difference between "object-based code" and OOP?


Simply using objects in the code doesn't mean one is practicing OOP any more than using functions means one is practicing FP.

That is, it is more about the "how" than the "what".


How is there a smaller set because of OOP? Just filter the functions by parameter type to see which functions may access your pointer.


OOP encourages modifying an object's state via its member functions (if you use data hiding/encapsulation as part of your OOP definition). This means that users of a class instance shouldn't be able to put it into a bad state, unless the member functions themselves can.

This lets you more easily isolate where contract/invariant violations occur. Once addressed in the member functions, either the fix propagates to the callers, or their use is now invalid and becomes a compile or runtime error.
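
For instance (a hypothetical Account class; in a language like Python the privacy is only conventional, but the funneling through member functions is the same idea):

    class Account:
        def __init__(self, balance: int) -> None:
            self._balance = balance  # invariant: never negative

        def withdraw(self, amount: int) -> None:
            # Every mutation goes through here, so when the invariant
            # breaks, this is the only code you need to inspect.
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount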


It seems like there is a whole subject buried in this comment.


Procedural programming was never abandoned, it just doesn't have a hype machine around it. How much software is written in plain C which is neither OO nor functional?


> How much software is written in plain C which is neither OO

And I would assert that every maintainable C program is fundamentally designed in a relatively object-oriented way (functions that operate on structures, polymorphic function pointers, etc.) even if the original author wasn't particularly thinking in terms of classes or inheritance.

C programs aren't functional because C isn't functional (no inherent concept of closures), but Greenspun's tenth rule: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp" usually applies.


>> C programs aren't functional because C isn't functional (no inherent concept of closures)

You don't need closures to do functional programming.

I was recently tinkering in some C++ code and thought "this part should be able to run in parallel." It was simply doing the same operation on a list of objects. It's a complex operation involving some temporary data structures. I had to dig through several layers of functions, but I kept finding that all data was created locally. Functions that return a complex result are passed an object to fill out. It was then that I realized it was written in a functional style - or at least my limited understanding of what that means. The top-level function is now run in parallel, which required no changes beneath that level.
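
The rough shape of it, as a toy Python sketch (assuming the per-item operation really is pure):

    from concurrent.futures import ProcessPoolExecutor

    def process(item: int) -> int:
        # All temporaries are local; nothing shared, nothing mutated.
        scratch = [item * i for i in range(10)]
        return sum(scratch)

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # Parallelizing the top level needs no changes below it.
            results = list(pool.map(process, range(100)))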

Now I prefer that style but like TFA here, I don't think 100 percent functional is a good idea.


I think what you're illustrating there is how vague of a term OO is. If you can apply it to code that was never even intentionally written to be OO then is it that meaningful? Btw OO did not invent encapsulation and polymorphism.


And I would assert that every maintainable C program is fundamentally designed in a relatively object-oriented way (functions that operate on structures, polymorphic function pointers, etc.) even if the original author wasn't particularly thinking in terms of classes or inheritance.

That is in the eye of the beholder.

I can well believe that you would look at the code and decide that the author wrote in an OO way. I also believe that there are plenty of authors of such code who would disagree with your assessment.


I don’t think it is correct to say that those things are OO because they predate the concept and are used in many paradigms. When I think of OO I think of strict encapsulation (via private), complex inheritance, methods bound and defined in classes, single dispatch, etc.


so languages like self and javascript which, even in the modern forms, have no real classes are not oo to you?

i mean, the fact that you can use class style syntax to define object constructors in javascript and the result is fairly coherent, seems to suggest that classes are not inherent to oo.

also, the fact that it's called “object oriented” not “classical”.

other examples of non-oo with strict encapsulation includes haskell, where if you lack the constructor, you can only use it according to the public api. it also uses single dispatch for typeclasses.

complex inheritance is rare in oo - mostly only single inheritance is supported - and most oo design advocates pretty much say "don't use inheritance". so it doesn't seem like it's intrinsic.


It is perfectly possible to use a functional style in an OO/procedural language, as John Carmack demonstrates below. You just lose the guarantees (which is a steep cost IMHO)

https://www.gamasutra.com/view/news/169296/Indepth_Functiona...


One of the main reasons why OOP is popular is not even talked about - the ability for AUTOCOMPLETE in the IDE to work extremely well.

For example, I have an object... what do I do with it? So I hit the '.' on my keyboard and read the list of things that the object can do.

This is why OOP is popular: it allows better tooling. If people want to move beyond OOP then they need to think about how the new style integrates with the IDE, so that the programmer can hit the '.' or equivalent and get a list of useful things to do with the object in question.


This is a common idea but I'm skeptical it's that simple.

Haskell has a powerful enough type system that it can enable "hole-driven development" where you specify the type of your code and then leave a "hole" in the implementation -- anywhere in the implementation, including making the entire implementation a hole! -- and the tools can suggest what should go in the hole, leaving new holes in the code.

Then you can incrementally resolve smaller and smaller holes based on the suggestions you get, and finally you have a completely tool-written program, where you just helped disambiguate when you had to.

This is the '.' on steroids.

Haskell is not popular.


Haskell is weighed down with the problem that by the time I'm five pages into a "Haskell is actually easy, and let me show you how" tutorial, the average reader's eyes glaze over thanks to passages like:

"While >>= and return are the basic monadic sequencing operations, we also need some monadic primitives. A monadic primitive is simply an operation that uses the insides of the monad abstraction and taps into the `wheels and gears' that make the monad work. For example, in the IO monad, operators such as putChar are primitive since they deal with the inner workings of the IO monad. Similarly, our state monad uses two primitives: readSM and updateSM. Note that these depend on the inner structure of the monad..."

Or, possibly because when I make a post on Hacker News about how difficult it is to implement Quicksort in Haskell, the first response is a four-line Haskell implementation of Quicksort, spawning a 50-post subthread which points out that it's not actually an implementation of Quicksort, because it does not have the same performance characteristics as Quicksort, and instead has the performance characteristics of a lethargic water buffalo.

By the end of the thread, the question of whether it is possible to write a Quicksort in Haskell that has the performance characteristics of Quicksort is left as an open research question.


Instead of downvoting the parent, Haskell devotees would do well to recognize that this is a pretty good representation of the hurdle that would-be recruits to Haskell face.


There is a reason why Haskell is not popular. But I don't think the impenetrable passages about Monads are the reason behind it. It is more about what you gain by adopting Haskell and powering through the impenetrable passages. If the payoff isn't blazing-fast runtime performance, fast development, and maintainable code, then Haskell is pointless. And if those kinds of benefits are available, there is plenty of extra brainpower among developers to comprehend the passages.


Agda literally has this "hole-driven development" as a standard feature in its IDE modes. Not only that, but when developing Agda code you get a list of every binding in scope, as well as a prompt of the type that must be provided to resolve the "hole". The expectation is that every part of developing Agda code can be expressed as a kind of "autocomplete", if desired.


> This is the '.' on steroids.

Not really. This is all still painfully manual.

Both type hole development and auto completion are enabled by static types. Haskell could theoretically benefit from all the advantages that mainstream statically typed languages enjoy thanks to IDEA, Visual Studio, or XCode, but the Haskell community has this arrogant attitude that they are above such "pragmatic" tools and as such never took the importance of IDEs seriously.

So when you watch a video on Haskell's awesome type hole development, the developer is often using vim and typing everything by hand. In 2020.


Except VS Code supports Haskell pretty well. And implementing IDEs is hard work; why do that for such a niche language when editors like VS Code suffice?



Haskell is popular in universities, fintech companies, and among the FP crowd at large. Many programmers know of its benefits and simply avoid it because FP is very difficult for many to code in and Haskell is trying to be FP only...


Popular in which fintech companies? I don't believe this is true, but I would be happy to wrong.

Jane Street uses OCaml, but of course that's not Haskell. All the other finance firms seem to use pretty standard C/C++ because they need the low latency.


Standard Chartered uses their own dialect of Haskell. There's some talks on youtube about it.


I've done Haskell contract work at multiple large Wall Street investment banks. It's not like it's their only language - these are large orgs with lots of smart people, sometimes some of them use Haskell.


> One of the main reasons why OOP is popular is not even talked about - the ability for AUTOCOMPLETE in the IDE to work extremely well.

That's just dot-syntax and modules. That has nothing to do with OOP itself. Nothing is keeping you from doing that in functional languages that support proper modules.


Well, generally speaking, you're just describing a search problem: I have this data, what operations can I perform on it? Hypothetically you could do something like: if I type a keycode after a variable, it lists functions that can be run on it. It's just a tooling thing, not really a language design thing.


But their point is -- and I agree! -- that with unfashionable OO languages there’s nothing hypothetical about it, it’s completely mainstream. All the IDEs have done this for years and it works extremely well.


> I have an object... what do I do with it? So I hit the '.' on my keyboard and read the list of things that the object can do.

As a counterpoint (relevant Rich Hickey rant): https://www.youtube.com/watch?v=aSEQfqNYNAc


The poor design of that class aside, I would rather have the specific version than the unspecified one. If I have an HttpServletRequest somewhere in my code, I have a good idea of its structure just by knowing its type. The same cannot be said of a blob of maps and lists.


> If I have a HttpServletRequest somewhere in my code, I have a good idea of its structure just by knowing its type. The same cannot be said of a blob of maps and lists.

Non-trivial data structures always conform to some specification; it doesn't matter if it's Clojure or an OOP language. If you receive an HTTP request over the wire you will have to check it. Once you conform it to the specification you know exactly how it looks, and there's no need to jump through hoops to access the information.


But with proper static typing you don't have to check it because you already know it's correct since its construction. It relieves a lot of mental burden for me to not even have to consider the case that it might be incorrect, as long as I'm dealing with structures internal to my program.

I have a lot of respect for Rich Hickey and I feel like he Understands something that goes straight over my head, but his appreciation for generic, shapeless maps and lists is something I just don't get.


> But with proper static typing you don't have to check it because you already know it's correct since its construction.

Its construction is dependent on it conforming to a specific form. In this case, your statically typed data comes from a parsed string that is then conformed to a specification to ensure that it has a certain shape. In Clojure you would just do the conforming-to-specification part and keep the data generic.

I think the appreciation for generic data has to do with the fact that it makes Clojure code very generic and facilitates a lot of functional composition.


I have a counterpoint: OOP made simple things way too complicated, and that resulted in more jobs in software development. It's the sad truth. Or the happy one, depending on which side of the fence one is on.


I think a mention of Gary Bernhardt's "functional core, imperative shell" talk from 2012 is worthwhile here.

My mind was blown when I first saw it. https://www.destroyallsoftware.com/screencasts/catalog/funct...


I was introduced through his "Boundaries" talk [1] and had a very similar experience. I think a lot of programmers develop this methodology without really formalizing it. But practicing it consistently creates really nice code and easy unit testing. The one possible downside is long argument lists to functions.

[1] https://www.destroyallsoftware.com/talks/boundaries


> Usually when I see somebody arguing against object-oriented programming, I don't see them arguing for functional programming, but instead arguing for procedural (like Cobol, Basic, or Pascal) programming.

That's not what I'm seeing. The usual discourse around OOP is that some parts of it are bad. In particular, overriding concrete methods, and in general Java-like inheritance brings a lot of pain. "Everything is an object" from C# and Java also gets a lot of well-deserved criticism (https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...).

Other parts, like polymorphism, encapsulation, binding data with behaviour - I don't see that much criticism against this. For me, modern languages like Go or Rust (yes, I know it's almost a meme at this point, but still) bring a sane OOP approach: while not being strictly OOP, they take the parts that work, and remove the parts that don't, while at the same time adopting many practices and abstractions from functional languages that proved useful.


There are 4 mainstream programming styles:

* Data Oriented Programming. There is data, and there are functions operating on the data.

* Functional Programming. Variant of DOP, with a strong emphasis on purism and abstract algebra concepts.

* Procedural Programming. Variant of DOP, lacking language level mechanisms or best practices around polymorphic interfaces. Largely obsolete.

* Object Oriented Programming. Everything is an object.

Information hiding and polymorphism are readily available in all styles, except PP. The trouble with OOP is that it insists on two big lies:

* Everything is an object, therefore data is an object, therefore data must be hidden. In fact, data simply 'is' and you can't hide it. Think JSON. There is nothing to hide.

* Everything is a polymorphic object, codified as the popular 'Open/Closed Principle'. In fact, most code is functions that just compute some data and never need N different polymorphic variants. If the business cases ever warrant such variants, simply refactor a polymorphic interface into the code base, as sketched below.
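
Concretely, something like this (a made-up tax example): start with the plain function, and only refactor the polymorphic seam in when a second variant actually arrives.

    # Day one: a function that just computes some data.
    def tax(amount: float) -> float:
        return amount * 0.2

    # Later, when the business really has variants:
    from typing import Protocol

    class TaxRule(Protocol):
        def tax(self, amount: float) -> float: ...

    class FlatTax:
        def tax(self, amount: float) -> float:
            return amount * 0.2

    class ProgressiveTax:
        def tax(self, amount: float) -> float:
            return amount * (0.1 if amount < 100 else 0.3)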


I don't see any difference between DOP and procedural. Saying the difference is best practices and something vague about language mechanisms is akin to saying procedural is different from DOP because people say "do it good". This is even more formless than the criticism of no hard definitions levied against OOP by the same people.


The technical difference is lack of language-level module support, requiring manual management of vtables. It can be done, see the Linux kernel for an example, but it often degenerates into a ball of globals.

To further clarify, one can use DOP style in many languages, including languages that are ostensibly OO. DOP is a style, using data hiding and polymorphism as sparingly as possible, as opposed to OOP style, which strongly encourages using data hiding and polymorphism at every corner.


Okay so it just sounds to me like careful OOP. Discipline just isn't a paradigm. It's an ethic and it's a good ethic. I love OOP, I've always built systems I was proud of with it. But to do that effectively you do need discipline and you need experience in design. Which most people think they can just skirt blindly by writing procedural code in class wrappers and the system will just magically emerge. This results in even more horrific programs when combined with blind TDD. This has always been nonsense. I don't use much OOP nowadays due to my work prioritizing efficiency and speed. Which OOP fails at miserably. But every now and again I'll find myself doing some monomorphism that looks like vtable dispatch without ever touching heap. And I'll kindly look back on my OOP days.


Agreed. Careful/Restrained OOP / 95% FP [the ML branch] / DOP, all names for the same sweet spot. The push back is against hard OOP culture, rooted in confused thinking.

* Everything is emphatically NOT an object. There is immutable data, there are functions and there are modules. These are not interchangeable; learn to use them in the proper context.

* Everything is emphatically NOT a message. There are plain functions, there are polymorphic functions, there are RPCs and there are async messages. These are not interchangeable; learn to use them in the proper context.


The formalism and structure IS the critique. If the tools become about the rules they hinder instead of help.


Yes, two points:

1. To try to force restrictions on someone who doesn't understand the reasons for these restrictions will not magically ensure good results.

2. To try to force restrictions on someone who does understand the reason for these restrictions is just a pointless nuisance.


Personally I've lumped procedural and OOP together into a single concept. Put differently, I consider OOP as a strict successor to procedural. Both are about side-effects and state, OOP just gives you smarter tools for encapsulating and managing that state. FP lets you eliminate some state altogether. My personal philosophy is to eliminate as much state as you can, and then to smartly manage what's left.


State is simply scoped more often.

It’s all just a bunch of functions and memory


Hmmm, I have to push back against that somewhat. Do you have specific examples? I am mostly known for arguing against OOP and for Functional, and when I was doing my research most of the essays I read were aware of the Functional paradigm. Please see here:

http://www.smashcompany.com/technology/object-oriented-progr...


There is a (mistaken?) tendency to see OOP as a step beyond functional programming, probably because it allows for more complicated abstractions.

But it seems to me a more natural scale to use is how the state is managed. In procedural programming state is global, which is messy. OOP cleans it up a bit by splitting global state into pieces and making them local. Functional programming takes this even further by avoiding having to reason about state altogether.


Procedural code doesn't require global state, though it doesn't discourage it either. The mid-point between procedural global and OO encapsulated state is the use of structures/records to contain state, but without any associated (that is, no language enforced association) methods/functions. You instantiate a new state record, and pass that around.

This is a natural way to work on procedural code (in an iterative development fashion) to move from prototype/early code to stable, maintainable code. You create global variables, then encapsulate them in a record (perhaps keeping just one record globally), then you move to passing a record around (permitting multiple states to exist side-by-side). Ideally you'd start with passing the record around, but sometimes you just write code to get the concept implemented and refactor from there.
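
Sketching the end state of that progression in Python (fields invented):

    from dataclasses import dataclass, field

    # The former globals live in a record that gets passed around,
    # so multiple independent states can coexist side-by-side.
    @dataclass
    class AppState:
        counter: int = 0
        log: list = field(default_factory=list)

    def record_event(state: AppState, event: str) -> None:
        state.counter += 1
        state.log.append(event)

    s1, s2 = AppState(), AppState()
    record_event(s1, "login")  # s2 is untouched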


Language-enforced association with methods and functions makes it possible to have full encapsulation in a way that's guaranteed by the language itself. That's not even OO either, just simple object-based code. You even see these same ideas entering FP nowadays, with things like univalent types that enable one to establish equivalent types as "equal", and thus fully interchangeable, no matter how they're structured internally.


Yes, this is true. I'm not arguing that procedural (in the C style) is fantastic, but it can scale decently well, though it requires discipline. There are definitely improvements to be had when you start adding other language features that enforce certain things.


The whole problem with "discipline" is that it's hard to write it down and enforce it in a large-scale project. Language-based mechanisms are inherently scalable (at least if properly designed; and arguably e.g. implementation inheritance isn't) and make it much easier to build software "in the large".


How large a team? I have certainly worked on large-scale projects using procedural languages, and I don't recall any problems with the required discipline.


And having done a fair bit of procedural programming back in the day (F77 and PL/1-G), only the most trivial of programs had everything as global state.


How does functional programming allow you to avoid reasoning about state? Don't you always have to reason about state, no matter what paradigm you use? Otherwise, how are you going to build anything useful if you're not keeping track of any variables?


The following words are bad: shared, mutable, global.

All programs have state. It's worse if the state is shared. It's worse if the state is mutable. It's worse if the state is global. It's the worst if it's all three.

Functional programs still have state. They typically don't have shared state, or mutable state, or (much) global state, though. Those are real wins.


I feel like a lot of functional programming tries to hide state behind an OS/API call of some sort. Sometimes that makes sense. Think of a program that transforms data into SQL calls.

The mirror of that is a driver for a piece of hardware. You've got multiple levels of state going on.


Slight historical anecdote: back when graphical user interfaces were not a thing, Sutherland's team is said to have developed rendering in a fully streamed style. Relying on state too much is also a matter of context; that team didn't really have memory (at least nothing that could store a frame buffer), so they wrote imperative code in an iterative fashion.


As you demonstrate, there's not always a clear-cut distinction between OOP programs and non-OOP programs. Sure, the Linux kernel uses some OOP concepts, but it couldn't be described as fully OOP in aggregate. In my opinion, the best paradigm is no paradigm at all: use the right tool for the right job.


"Use the right tool for the right job" is the final stage of "first, learn the rules, then learn when to break them."

The difficulty with that advice is that it doesn't become applicable until you've gotten maybe 5 years worth of humble pies thrown in your face, for moments where you realize in retrospect that you made the wrong call.

Until then you don't have enough of a body of experience to tell two very different scenarios apart: "things are this way because my predecessors already learned the hard lessons" vs. "things are this way because my predecessors didn't know better."

Paradigms are really useful in practice as a shortcut to implementing 80% of the wisdom until you begin to learn the remaining 20%, and one of the roles of senior engineers is to make sure junior engineers don't mistake the 80% for the 20%.


> "things are this way because my predecessors already learned the hard lessons" vs. "things are this way because my predecessors didn't know better."

This is just Chesterton's Fence again, and has the same refutation: if you're putting up a fence that serves some long-term purpose, it is your responsibility to put signs on it explaining what that purpose is. Otherwise people are right to assume it's one of the overwhelming majority of fences that do not, in fact, serve any purpose beyond obstructing their movement.


if you're putting up a fence that serves some long-term purpose, it is your responsibility to put signs on it explaining what that purpose is. Otherwise people are right to assume it's one of the overwhelming majority of fences that do not, in fact, serve any purpose beyond obstructing their movement.

Your approach absolves you of exercising agency to discern purpose in fences that lack signage, in a real world where plenty of purposeful fences lack signage. Just because it’s someone else’s responsibility doesn’t mean they actually do it, and just because they didn’t put a sign there doesn’t mean you’re blameless if you cause a service outage.

I work with cryptography and it would take a book to put adequate signage on some fences. Nobody’s going to write why they’re using AES GCM every time they use AES in that code.


Sure, that makes a lot of sense, and it's a sensible demand when there are a million people, each putting up one fence.

However, when you write the same architecture over and over, you don't write an explanation each time - for various reasons, a not insignificant one being that there's no obvious place to put it. (I use dependency injection in my code. Where should I explain why - on each class? In a separate file in the project? In the documentation? Should I write documentation, then? What if I write it and others continue to "new" services inside their code?)


There's a good talk here from Uncle Bob, who helped popularise the SOLID principles, about what OOP is and isn't:

https://youtu.be/QHnLmvDxGTY

Towards the end of the talk, take note of the stickers on his laptop.


I don't know much about procedural programming, but I'm curious about your thoughts on Swift protocols, and also how Swift allows classes to be used in a functional way. I wonder if this could mitigate some of those problems?


Design and framework are different things though. You can use OOP design when needed without having the whole world revolve around it.


similarly, there's elm/redux style state management, which is almost explicitly about emulating global state in a functional architecture.

it seems all programming paradigms end up devolving into global state by friday. because too many developers cannot recognise global state, instead recognising only its syntax


Hidden information is entropy. Do you really want more of that in your system?

https://physics.stackexchange.com/questions/29175/why-is-inf...


“OOP” is one of those terms (like most terms) that have a narrow meaning and a broad meaning, and either proponents or detractors can use one of them in arguments:

• OOP, narrow sense (emphasis on “object”): data-with-associated-functions, encapsulation, etc.

• OOP, broad sense (emphasis on “oriented”): organizing your program around objects that model the nouns in the problem domain, “everything is an object”.

A (satirical) illustration of the latter is in the first few paragraphs of https://caseymuratori.com/blog_0015 (from 2014), which gives an example where, to write a payroll system, you first start by designing classes for “Employee”, “Manager”, etc.

A (non-satirical) illustration is in the infamous “TDD Sudoku” series of blog posts (follow the links from http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s... or https://news.ycombinator.com/item?id=3033446) where (in contrast to Norvig's program which just solves the problem) the TDD/OOP proponent ends up with a “class Game”, “class Grid”, “class Cell”, “class CellGroup” (with derived classes “Row”, “Column”, and “Square”), but ends up nowhere. (From Seibel's post: “…got fixated on the problem of how to represent a Sudoku board. […] basically wandered around for the rest of his five blog postings fiddling with the representation, making it more “object oriented” and then fixing up the tests to work with the new representation and so on until eventually, it seems, he just got bored and gave up, having made only one minor stab at the problem”.)

I think we can agree that this sort of object oriented programming can hurt a lot (thinking about objects is not a substitute for solving your problems, though it is tempting), while objects themselves are useful.


> I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea.

> The big idea is "messaging" - that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word - ma - for "that which is in between" - perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.

https://wiki.c2.com/?AlanKaysDefinitionOfObjectOriented


Well, regarding that quote (interesting no doubt), see some important caveats at https://www.hillelwayne.com/post/alan-kay/ which goes into some detail on evolving views about OOP (including that quote). It turns out that Alan Kay did not coin the term “objects”, though he did coin “object-oriented programming”, but even in Smalltalk, messages were only one idea of three: the post concludes that “OOP consisted of three major ideas: classes that defined protocol and implementation, objects as instances of classes, and messages as the means of communication.”


People like taxonomies, grouping things into clades of clades.


I’ve seen this referred to as “satisfying your inner Linnaeus“ and have spent way too much time in that weird place myself.


That phrase was probably my fault.



> one of those terms (like most terms) that have a narrow meaning and a broad meaning

Very relevant: http://slatestarcodex.com/2014/11/03/all-in-all-another-bric...


Best answer so far.


I don't find this article well thought out. One, the writer presents OOP and FP as opposing paradigms, while object-oriented programming is orthogonal to functional programming. You can have a purely functional programming language with objects.

Two:

    100% pure functional programing doesn’t work. Even 98% 
    pure functional programming doesn’t work
Doesn't work for what?

I think pure functional programming has its place, it is not just for everything you want to do. But object oriented programming is also not meant for every domain. It is like saying scalpels are stupid, because you can't build a house with them.


I think it all comes down to one thing: shared mutable state. We can discuss what is functional and what is OOP, but in general functional programming does not mutate data in place. OOP in general does, with encapsulated instance data being operated on by methods of that class.

However sometimes it is hard or impossible to avoid shared state even in functional approaches. So I suppose that is the 98% part. Things like databases, connection pools, files.

Since I have programmed in Clojure it has made me a better programmer even in OOP languages when I use them. I try to avoid self mutation and shared state as much as possible. I know there is a back and forth on what is the better way to program for a lot of things, but to me avoiding shared state is indisputably better for avoiding surprising bugs, and also for making concurrency much easier.


100% agree that shared mutable state should be minimized regardless of the paradigm used, and further agree that it's generally not possible to totally eliminate it. I believe programming in a functional style helps one adopt a mindset of minimizing shared mutable state.

I also believe functional programming gets one to think in terms of function composition. Even in an OOP-heavy language like Java, one can find themselves developing complex operations as a composition of simpler, reusable operations. While Java may not have proper first-class functions nor solid syntax for working with functional interfaces, one can still think in terms of composable operations.


>Since I have programmed in Clojure it has made me a better programmer even in OOP languages when I use them

That is also my experience when visiting different paradigms: they make you a better programmer, because they come with a different way of thinking.

>However sometimes it is hard or impossible to avoid shared state even in functional approaches. So I suppose that is the 98% part. Things like databases, connection pools, files.

True, true, most pure languages have an escape hatch. Or they pretend you model the IO through some clever mathematical contraptions and then run it, you'll never guess it, at runtime. But I feel that is just playing with words. Computers work with mutable state, so the 100% pure functional language doesn't even exist.

> OOP in general does, with encapsulated instance data being operated on by methods of that class.

I agree, but the general case is always a bit dull. There is no reason for an object-oriented programming language not to have immutable objects. Every time you call a method that would normally mutate the object, you get a new one. And then perhaps with something like linear types you can make sure you only use each object once, so you don't spray needless copies all over the place.
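
Python's frozen dataclasses are a cheap way to get exactly that, for what it's worth (a sketch; the Point type is made up):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Point:
        x: int
        y: int

        def moved(self, dx: int, dy: int) -> "Point":
            # "Mutating" returns a new object; the original is untouched.
            return replace(self, x=self.x + dx, y=self.y + dy)

    p1 = Point(0, 0)
    p2 = p1.moved(1, 2)  # p1 is still Point(0, 0)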

I think the axis he should be talking about is imperative programming and functional programming, those are opposites.


> Doesn't work for what?

It implies that FP doesn't work in the real world without IO or without a runtime. When the author talks about "the messy parts that interact with the outside world" he's talking about the "Functional Core Imperative Shell" concept [1].

[1] https://www.destroyallsoftware.com/screencasts/catalog/funct...


Of course lambda calculus, the canonical purely functional language, doesn't work in the real world. It doesn't work because it is just functions, and those need input before they become useful. A function just sitting on paper doesn't compute anything.

However, that is not entirely true; we can perhaps view analogue computers as purely functional computers, albeit ones with continuous functions.

But besides such a weird example, we should not kid ourselves: most imperative languages these days also need a runtime. And if you take that a step further, assembly also needs a runtime. That is, I think, not an entirely honest argument, but still something to think about.

If we pursue this further down then we arrive at physical phenomena that do the computations. Are those imperative or functional? But now we are just playing around.


So then why doesn't "98% pure functional programming" work either as the author puts it?


You can basically get the type of structured programming the author is looking for with fully public structs in a functional language. And the pureness of the functional paradigm is a bit of a strawman. Coming off of mostly C, C++, Ruby, and Python, I exclusively program in functional languages and have been for almost a decade now (wow!), and none of the FP langs (Julia, Elixir, Erlang, React) that I have used have been "purely" functional. They all expose state (using different techniques - bang functions, actors, actors, hooks) which are extremely well calibrated to the needs of their practitioners. While that leads to a bit of fragmentation, it does also mean that the quality of code you release is better, because you're protected from making common mistakes that, in my pre-FP experience, led to much teeth gnashing and hair pulling.


>Doesn't work for what?

>I think pure functional programming has its place, it is not just for everything you want to do. But object oriented programming is also not meant for every domain. It is like saying scalpels are stupid, because you can't build a house with them.

You answered your own question. "It is not just for everything you want to do." He didn't say it was stupid. He said is is not for everything you want to do. You and the author are in agreement.


The objects would be fully public structs only though, yes?

>Doesn't work for what?

Doesn't work for performance, too much copying of immutable state.

If 100% pure means zero mutation, you can't even fit the definition of a program and write output /s


Well, it depends. I work on big clusters with lots of nodes. There you need to copy objects anyway. There is no shared state. I wouldn't like to use C there, for example. In the language I work in, objects are immutable, and that makes it possible for a human to work with the cluster.

And of course there is state, I know that every node is plenty full of state, but that is not the point and it is hidden from me.

But if you writing device drivers, sure then it might not be ideal.

Not all tools are useful for everything.


Immutable data structures and algorithms don't copy immutable state nearly as much as one might assume, and the vast majority of our field doesn't write software that is affected by the tiny performance differences between the underlying implementations of OOP Java versus functional Clojure, or perfectly optimized OOP JS versus naive functional Elm.


Sometimes it's fine and sometimes it's not, but that is why 100% pure isn't feasible. I personally try to take functional ideas into my OOP language as much as possible.


I think people care about performance without thinking about what exactly the bottleneck is. In some cases, it's the thing in front of the keyboard, in which case, you can do plenty of immutable state copying without worrying about it.


> I think people care about performance without thinking about what exactly the bottleneck is

Some of us are just permanently immersed in performance critical stuff. In gamedev, a single millisecond is a significant chunk of my per-frame budget and can stick out like a sore thumb in my profiling flamegraphs as an optimization opportunity / potential cause for hitching.

Pure functional code still has plenty of uses in this environment, but we've also got large chunks of our codebases dedicated to mutating external state (GPUs, Network IO, etc.) in horrifically performance sensitive ways.


> In gamedev, a single millisecond is a significant chunk of my per-frame budget and can stick out like a sore thumb in my profiling flamegraphs as an optimization opportunity / potential cause for hitching

Yes, but what is the relative portion of gamedevs among all devs? These people know who they are and they should absolutely obsess over performance. Everyone else should stop and think first.

Okay, okay. I'll make another carveout for certain classes of HPC. But even here, as the joke goes, HPC is the art of taking a CPU-bound task and turning it into an IO-bound task. I have heard of HPC people optimizing their numpy application to really run like the dickens on an HPC cluster, then scale their deploy from 1 to 40,000 nodes, and have the filesystem lock up for hours on what should be a 15 minute task due to the way Python takes out r/w transactional locks on each and every one of its import statements.


> I'll make another carveout for certain classes of HPC.

And embedded. And anything RTOS. And kernel devs. Video encoders. And even some backend webdev stuff for scaling purpouses. And high performance logging systems. And ...


If you took a dev out of any of those categories, only the rtos engineer would be competent at knowing where the bottleneck lies before profiling.


i am sorry. the larger population of devs care too!

bigger players have conditioned us to think that the status quo is normal. and this is working pretty well on those who haven't been around for a while.

those who have been around for a while are flabbergasted. dreams of seeing software load right away and do things of the past haven't come true for them. note: things of the past (like loading up a text editor or an image!)

the excitement generated by unreal 5 proves that the status quo is not acceptable. we should be expecting much more.


> i am sorry. the larger population of devs care too!

Yeah. My point is, not that they shouldn't care but WHEN they should care. They should build their system, then measure the bottleneck. I would bet that 4 times out of 5 their initial intuitions are wrong.


>a single millisecond is a significant chunk of my per-frame budget

Interesting stuff, is that really a thing? Do you also have tools to measure this? I am genuinely interested, I like working under constraints, it makes programming more interesting. My main constraint is budget and total runtime of a job.


Yep, this is really a thing. Game dev is an example of a realm where performance really matters and you need to tune that code to its target hardware, or some common denominator. VTune is a pretty accessible tool if you want to profile your code on an Intel CPU. Also, RAD Game Tools is a company that makes very useful tools for profiling.

I'm not in the gaming industry but the software I work on has a lot of visualization and physics simulations, so it's similar. It's definitely a different set of constraints, you deal with abstraction a lot less as well.

I'd highly recommend giving something like game dev a try, even if it's just as a side project. Handmadehero.org has great content. It's a breath of fresh air if you're coming from something like web development.


>> Interesting stuff, is that really a thing?

To maintain 30FPS you must finish a frame in 33ms (1000ms/30). To maintain 60FPS you must finish a frame in 16ms. Some of the VR literature I've read has recommended 90FPS or higher to combat nausea - 11ms. A millisecond could easily be 1/10th of your entire frame budget for the whole game's per-frame processing time, to say nothing of individual subsystems. At least we have multiple cores, but some stuff must run on the main thread...

> Also, rad game tools is a company that makes very useful tools for profiling.

arcin is of course referring to Telemetry[1] here, a wonderful tool for generating explicitly annotated flamegraphs that I've helped integrate into multiple codebases at different jobs. Engine-custom[2] tooling is also common, where you're seeing scopes measured in microseconds. There are a bunch of FOSS alternatives[3] of varying quality / functionality - nothing that's put Telemetry out of business for me yet though. "I'll write one myself for Rust someday!" I keep telling myself ;)

[1] http://www.radgametools.com/telemetry.htm

[2] https://www.youtube.com/watch?v=DMO4X9leTG8

[3] https://github.com/jonasmr/microprofile https://github.com/Celtoys/Remotery https://github.com/loganek/hawktracer https://github.com/google/orbit
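
For a rough idea of what the explicit annotation looks like, here's a minimal RAII scope timer in the spirit of those tools (just a sketch with made-up names - the real ones record into ring buffers and ship samples to a viewer instead of printing):

    #include <chrono>
    #include <cstdio>

    // measures wall time from construction to end of scope
    struct ScopeTimer {
        const char* name;
        std::chrono::steady_clock::time_point start;
        explicit ScopeTimer(const char* n)
            : name(n), start(std::chrono::steady_clock::now()) {}
        ~ScopeTimer() {
            auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                          std::chrono::steady_clock::now() - start).count();
            std::printf("%s: %lld us\n", name, static_cast<long long>(us));
        }
    };

    void update_physics() {
        ScopeTimer t("update_physics");  // one bar in the flamegraph
        // ... simulation work ...
    }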


You guys have much cooler stuff to play with than I have for profiling things. Also better visualizations, but that is perhaps to be expected from the game industry.

>Some of the VR literature I've read has recommended 90FPS or higher to combat nausea - 11ms

That explains why VR is so computation heavy. Also curious what the mechanism is that makes you nauseous under 90fps. Perhaps the brain observes you are moving abruptly with lower frame rates, giving a constant sense of acceleration.

About telemetry. It feels weird that they are so flexible about their license; I come from the corporate world, where licensing is set in stone. Is the game industry mostly informal?

>I'll write one myself for Rust someday

You do your game development in Rust? I have an interest in Rust because it has an interesting type system; perhaps it would be nice to make a simple game in it to learn it and see how that goes.

Thanks for your replies.


> Also curious what the mechanism is why you become nauseous under the 90fps. Perhaps the brain observes you are moving abruptly with lower frame rates, giving a constant sense of acceleration.

It seems to be some form of motion sickness... it seems nobody is 100% sure why that happens either? https://en.wikipedia.org/wiki/Motion_sickness#Pathophysiolog...

I'm lucky enough to not suffer from the effects myself... supposedly Habituation can help, so perhaps I just read enough books in moving vehicles as a child?

> About telemetry. It feels weird that they are so flexible about their license, I am come from the corporate world, where licensing is set in stone. Is the game industry mostly informal?

It varies a good bit. Gamedev middleware shops are often small companies not burdened by large bureaucracies, dealing with high price, low volume sales - so they can afford a more high touch, personalized, and perhaps informal approach to sales. They also want to maximize the amount of devs trying their software - even if they don't use it for their current project or company, they might use it for the next, which is more sales - so they're quite happy to hand out evaluation licenses, or talk about missing features / platforms you might be interested in.

Those licenses are partially enforced by software, and for console versions, they might need to verify your lawyers have executed an appropriate NDA with Sony/Microsoft/Nintendo/??? before they even let you download those versions, however. Possibly via posting to shared, restricted access forums.

More standardized pricing systems are also common.

> You do your game development in Rust? I have interest in Rust, because it has an interesting type system, but perhaps it would be nice to make a simple game in it to learn it and look how that goes.

Just for my side projects so far - my professional gamedev work is generally C++ plus a dozen other languages that vary by company for scripting and supporting tools. There's a lot of interest in Rust, though - its type system and borrow checker do a lot to catch the kinds of heisenbugs that plague large high-performance C++ codebases (which in turn lead to crunch, missed milestones, etc. depending on the company culture.) On smaller codebases, the borrow checker feels more like a nuisance than a help until you get quite used to it, but at this point I feel like I have gotten used to it (and perhaps become a better programmer in the pursuit).

In my next job search I'll definitely be keeping an eye out for the chance to play with Rust professionally...


>I'd highly recommend giving something like game dev a try, even if it's just as a side project. Handmadehero.org has great content. It's a breath of fresh air if you're coming from something like web development.

Thanks for the resource :) Going to have a look. I am not a web developer, I am very bad at it for whatever reason. I fled the web a long time ago for the big data landscape. I almost ended up creating physical simulations of boats, but I missed that boat somehow, still a bit sad about it.


> You can have a purely functional programming language with objects.

Yes, but those objects would have to be immutable. Much (probably most) OOP and procedural software works with mutable data structures, which are forbidden by pure FP.


The article is clear to me. Don't chase a dogma; rather, let the data define how you code.


I think if we called it "binding functions to data" instead of "objects", we would have a much different view of how important OOP is. There are certainly times that binding functions to data can be useful, but I think if you tried to tell someone "I'm making a language where all functions must be bound to data" they would think you're crazy. (Even though that's basically what Java is)

There are so many times you don't want to bind functions to the data they operate on, and languages that force OOP always make it painful

The other thing OOP gives you is type taxonomies, but frequently those are an anti-feature. Almost every experienced developer I know thinks inheritance is usually a bad idea.


I'd change "binding functions to data" to "binding functions to types." The "data" is an object of a particular type. The binding to the actual data (object) can take place at compile time or run time, depending on need. IMO that makes it a bit less weird and more just shifting the position of a parameter.


Well, the reason I say binding to data instead of binding to type is that runtime polymorphism on instances is one of the selling points of OOP. I think it's a little overrated, but that is a good reason to use OO if you need that functionality. If you're only attaching functions to a type there isn't much difference between "thing.fn(x)" vs "fn(thing, x)"
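
To make that concrete (a sketch, names made up): without dynamic dispatch the two spellings are interchangeable, and the member-function style only starts earning its keep once you want dispatch on the instance:

    struct Thing {
        int value;
        int fn(int x) const { return value + x; }   // thing.fn(x)
    };

    int fn(const Thing& thing, int x) {              // fn(thing, x)
        return thing.value + x;
    }

    // runtime polymorphism is the part the free function doesn't give you:
    struct Base {
        virtual int fn(int x) const = 0;
        virtual ~Base() = default;
    };

    int call(const Base& b) { return b.fn(1); }      // dispatches per instance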


Having just come out of a data structures class, this is the view that seems the most useful to me. ADTs in general seem to be very well expressed with classes, and I think it is exactly for the reason you mentioned: they are just systems that need to be able to define the ways in which a user is allowed to interact with the data.


I think the key is the ability to have functions bind to data without functions mandatorily being bound to data.


The author seems to be implying that we have a hard choice ahead of us with no middleground: We can either accept object oriented programming, or we can turn to pure FP.

The OCaml community presents a pretty compelling third option:

OCaml is billed as a 'functional language,' but it doesn't do anything to prevent you from performing mutations or executing side effects anywhere you want. It even has builtin syntax for "for" and "while" loops.

Interestingly, OCaml does afford classes and objects but hardly anyone seems to use them. It's not that OCaml objects are weird or difficult or bad in some way. People just choose to write records and functions. Some of those records have functions in them. Some of the functions mutate state.

In the OCaml world, at least, pretending OOP never happened seems to have worked out just fine.


> "The author seems to be implying that we have a hard choice ahead of us with no middleground: We can either accept object oriented programming, or we can turn to pure FP."

I disagree. I dont think that is what the author was saying at all, not even a little bit: "100% pure functional programing doesn’t work. Even 98% pure functional programming doesn’t work. But if the slider between functional purity and 1980s BASIC-style imperative messiness is kicked down a few notches — say to 85% — then it really does work. ... It’s possible, and a good idea, to develop large parts of a system in purely functional code. But someone has to write the messy parts that interact with the outside world."


The problem is the way the author instantly jumps from "98% pure functional programming doesn't work" to "you should use OOP."

In order to bridge the gap, you have to make some pretty terrible assumptions:

If your code is not OO, it must be FP.

If your code is FP, it must be pure. Therefore,

If your code cannot be pure, it must be OO.


> In the OCaml world, at least, pretending OOP never happened seems to have worked out just fine.

... you know that the "O" in OCaml stands for object-oriented... right?


And over 99% of OCaml projects don't use objects at all. You would be hard pressed to find an OCaml developer who would care if objects were just removed from the language entirely.


There isn't really a middle-ground (with regard to side-effects) despite people arguing for or against it - much like those memes where someone declares that they're not 'for' or 'against' abortion.

Either I can rely on the fact that `foo() == foo()`, or I can't.

Either I can rely on the fact that `map f . map g == map (f . g)`, or I can't.
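
In code (a sketch): the first version satisfies `foo() == foo()`, the second silently doesn't, and the map-fusion law breaks for exactly the same reason once `f` or `g` has side effects.

    int pure_foo() { return 42; }            // pure_foo() == pure_foo(), always

    int counter = 0;
    int impure_foo() { return ++counter; }   // impure_foo() == impure_foo() is false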


Care to enlighten me?


The properties you describe regarding pure functions are correct and useful, but those ideas miss the point:

Neither OO nor pure FP are required to build high-quality software!

OCaml developers have been living in this world for decades now and they seem to get by just fine. Further, they do this purely by choice! Their language has had OO syntax for decades and they are happy ignoring it!


> 100% pure functional programing doesn’t work. Even 98% pure functional programming doesn’t work. But if the slider between functional purity and 1980s BASIC-style imperative messiness is kicked down a few notches — say to 85% — then it really does work. You get all the advantages of functional programming, but without the extreme mental effort and unmaintainability that increases as you get closer and closer to perfectly pure.

This is how we (try to) write code where I work. Pure algorithms, immutable data types, single point of change. D is pretty good for this, but could be better; ranges allow you to be pretty expressive when composing operations, but immutable types have a lot of problems still.

It actually combines well with OOP. We tend to mostly use classes to group methods and express domain dependency, rather than manage a tiny state subset. So in maybe half to two thirds of our classes, all the fields are set once on startup and then never changed again.


“Object oriented programming, for all its later excesses, was a big step forward in software engineering. It made it possible to develop much larger programs than before, maybe 10x larger[...]OOP made it possible to write programs that could not have been written before”

Teams were writing million-lines-of-code Fortran programs in the 1980s to simulate the atmosphere, nuclear weapons, etc. Somehow I think these remarks need some qualification.


Yeah, the author seems to rewrite history quite heavily. There are even arguments here that the Linux kernel is OOP, which suggests we need a clear definition of what OOP is, because right now this HN discussion is quite "fluffy".


> That has been my experience. I hardly ever write classes anymore; I write functions. But I don’t write functions quite the way I did before I spent years writing classes.

> And while I don’t often write classes, I do often use classes that come from libraries. Sometimes these objects seem like they’d be better off as bare functions, but I imagine the same libraries would be harder to use if no functions were wrapped in objects.

I noticed this soon after I got involved with Java. Programmers were using classes where I would use methods. They would have objects which did things where I would have passive objects which have things done to them by methods. A specific example might be a Parser class with a parse method, where I would define an entirely passive Grammar object with a parse method; or a Tokenizer object where I would use a tokenize method; or an InputStreamReader class where just an InputStream with a read method would do. I concluded that any class name ending in "er" was almost invariably unnecessary overhead.

I wondered where this came from. All I could think of was object orientation's roots in simulation, where it's entirely appropriate to have objects which do things, and that it was copied from there.
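
A sketch of the contrast (names made up):

    #include <string>
    #include <utility>

    struct Ast {};                            // stand-in result type
    struct Grammar { /* rules, as passive data */ };

    // the "-er" style: an object that does things
    class Parser {
        Grammar g;
    public:
        explicit Parser(Grammar g) : g(std::move(g)) {}
        Ast parse(const std::string& input);
    };

    // the passive style: parsing is just a function over the grammar
    Ast parse(const Grammar& g, const std::string& input);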


This is why I am currently so happy with ES6+/TypeScript. The ability to define strongly typed classes but also write standalone functions really improves the OOP workflow for me. Classes should rarely be used for things that are unique, such as a "parse" method. I recently had to make two modules: one to format a message and one to parse it. Instead of classes, I can just use functions, outside the scope of any class objects.


Sometimes the best function for the job is a free-standing function, e.g. parse(Grammar, Input). I was leaning this way when I read some quotes from Bjarne Stroustrup which touched on the subject. Here is a link to his personal FAQ which sums it up.

http://www.stroustrup.com/bs_faq.html#oop


This is just functional core, imperative shell. Which is to say, keep generic, immutable things generic and immutable. Build libraries to implement ideas and concepts in your domain. And then when it comes time to implement the high-level business logic, use an imperative shell to harness the power of all of those concepts you created. Keep it logical, straightforward, easy to follow, and well organized.

Like the author says, OOP is an organizational tool more than anything, and patterns like "Strategy," "Adapter," "Command," and "Builder" signal specific things to your colleagues that IMPROVE understanding and time-to-grok.
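
A tiny sketch of the shape this takes (made-up example): the core is pure and trivially testable; the shell owns all the I/O.

    #include <iostream>

    // functional core: pure, deterministic, easy to test
    int apply_discount(int price_cents, int customer_age) {
        return customer_age >= 65 ? price_cents * 90 / 100 : price_cents;
    }

    // imperative shell: reads the world, calls the core, writes the world
    int main() {
        int price, age;
        std::cin >> price >> age;
        std::cout << apply_discount(price, age) << "\n";
    }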


Thanks for identifying the paradigm.

I've been working on refactoring some code, and I really like the functional style, so I originally wrote most of the logic in discrete functions. Multiple functions rely on similar dependencies, so each has some unique parameters and then a handful of parameters that are the same for all functions (stuff like a database connection). It seems good that each of these functions can accept those dependencies in the signature, but each of those functions is likely to need the same dependency for the whole pipeline. At some point it looks to me like a small object that can handle the initial setup and management of those dependencies is helpful. Functional core, imperative shell seems kind of like what I'm doing? Others mentioned there is some merit to being able to reduce state from global to local in some form, and OOP may be useful for that case. Recommendations for other ways to handle these situations would be helpful, as it's possible I'm only reaching for an OOP wrapper because that's what I know.
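
One middle ground (a sketch, all names made up): bundle the shared dependencies into one small context struct that each step takes as its first parameter, so the signatures stay honest but the wiring happens once.

    struct DbConnection;   // assumed to exist elsewhere
    struct Logger;
    struct Record;

    // the handful of dependencies every step of the pipeline needs
    struct PipelineCtx {
        DbConnection& db;
        Logger& log;
    };

    // each step stays a discrete function; only the unique inputs vary
    Record load(PipelineCtx& ctx, int id);
    Record transform(PipelineCtx& ctx, Record r);
    void   store(PipelineCtx& ctx, const Record& r);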


A classic worth watching if any of you haven't seen it! https://www.youtube.com/watch?v=yTkzNHF6rMs


It would have been better if OOP actually never happened. OOP is just reaaaaly bad. One of those ideas that look appealing but don't work in practice.

Inheritance is often a scapegoat, but it's not even that big of a problem with OOP. The core ideas behind OOP are misguided:

    Abstraction
    Encapsulation
    Inheritance
    Polymorphism

Encapsulation on such a fine-grained level as each object/class is like a person putting padlocks on each pocket, so that the right hand can't grab things from the left pocket. The granularity of encapsulation in OOP is just sooo impractical. The right granularity for data-hiding is data-stores, layers, APIs, (micro-)services, modules, not each single little bit of data.

Abstractions are costly. They can't be a goal in themselves. Abstractions should be applied only where they are needed and beneficial, where the benefit of adding them outweighs the cost. Similar with Inheritance and Polymorphism - they are abstractions that have a cost. Sometimes worth it, sometimes not.

The object itself is a bad idea. Passing around references to objects (data with attached behavior, instead of POD, plain old data) forces you to scatter your data across many tiny bits, which makes everything way more complex and slow (poor cache locality, layers and layers of indirection). Instead of passing an ID or a bunch of fields, you're now passing data plus some abstractions to manipulate it everywhere. You generally get a graph, and graphs are the most general, and thus most difficult to use, data structure out there. Coordination becomes a nightmare really quickly.

OOP is just busy-work: playing with abstractions and taxonomies while ignoring the fact that your software is supposed to manipulate data to give an expected result, not be a little god-game of modeling the world. It ignores that there are ways of structuring your data that support well what you're trying to do, and that a graph of abstract objects is very rarely the best choice.

And so on... https://dpc.pw/the-faster-you-unlearn-oop-the-better-for-you...


It seems like there's something deficient in the way we tell the story of the history of software architecture (to the extent we tell a story at all) in terms of the name-brand techniques and technologies involved, rather than in the actual layout and organization of actual codebases.

For a while I've assumed that OOP as in C++/Java essentially formalized modular programming in C. In other words, that people were already writing programs whose state was divided into functional areas, with some functions serving as the interfaces between the modules. With a class-based system you can rigidly formalize this; and then OOP as we use the term essentially just reinterprets this formalization as actually creating the architectural paradigm that had already evolved as programs grew.

(This is NOT meant as the one way to sum up the whole world of things identified as or related to "object-oriented programming".)

But I wasn't around at the time...


This is my thinking too. It's really silly to have wars around programming paradigms. There are only a few principles around which we're all arguing:

* How do we make programs that are easy for the machine to execute efficiently?

* How do we make programs that are easy for humans to read and understand?

* How do we ensure, given the maintenance requirements of our programs, that another human who doesn't have the benefit of our experience can safely make changes to our programs without unintended consequences?

Discussions around OO versus Functional versus Procedural miss the point. You can write perfectly maintainable procedural, functional, or object-oriented code. If you're authoring something brand new you have to approach it with a complete understanding of all the moving parts. If you're not there, make a prototype, wait a few days, then go through and re-read it. Anything you don't understand is something nobody else will the first time they approach your code base. Come up with ways to be explicit and to communicate clearly what the intent is. Try to anticipate what things people will be changing often and make those easy things to change. Remember that it's about conveying a representation, not a deep understanding. You want to represent your understanding of the problem space to someone who doesn't have the same level of understanding as you.


> It's really silly to have wars around programming paradigms. There are only a few principles

Well, you say we're having wars around the programming paradigms, I say we're having "spirited debate" around the principles :). I've been working mostly in Java for the past 20 years or so, and I can't help but observe that most people, when they try to put together a Java application, default to a sort of design that looks an awful lot like old Cobol programs did: they have a "datatype" generator (usually automated from XML schemas) and a slew of "utility" classes with mostly static functions that have mostly static data that operate on these datatypes, and as little class instantiation as they can possibly get away with. I've seen this same basic architecture repeated many times across four different employers in two decades. It's always a lurching, monolithic, untestable behemoth that never works reliably and resists any attempt to change. In talking with the original designers, it's clear that there were no principles behind the design besides "it still doesn't work, how do I get this thing to work". If there were clear and adhered to principles like automated testability, you'd end up naturally with an OO (or even better, FP) type design.


Interesting. I guess I've been more fortunate. Most of the Java code I've worked with has involved reasonably well thought out classes. For me that mostly means I can read and understand parts of the codebase in isolation. There are usually a few piles of sometimes ugly utility classes and the occasional mess of deeply nested inheritance that nobody wants to touch. When the latter becomes painful enough, someone usually decides to refactor it, which is often not as hard to do as everyone fears.

It seems to be improving in the last 5-10 years, as most practitioners have found that both of these eyesores can be reduced. DI (sometimes messy itself, but it can be done cleanly) tends to make people rethink those utility classes, and shallow inheritance is now favored, with a focus more on interfaces and composition.


Sort of... But the way this was done in C is still done, and was evolving long after C++ split off.

IMO classes as false separation are the reason we kept C++ and Java out of OSes. ABI compatibility, dynamic loading, etc. all come from the duck philosophy. It is no one's business if your duck thinks it is a duck.


When I learned C, it was drilled into us that the proper way to implement all the data structures and things was with abstract data types.

https://www.edn.com/5-simple-steps-to-create-an-abstract-dat...

And then you have nicely "namespaced" functions that operate on those abstract pointers; your code is isolated from the implementation, and you don't accidentally depend on some implementation detail you shouldn't.

But ultimately this is all just informal OOP. Objects/classes are just a natural way to organize programs.
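
For anyone who hasn't seen the pattern, it looks roughly like this (a sketch, made-up names, written with new/delete for brevity):

    // stack.h - clients see only an opaque handle
    struct Stack;                       // incomplete type: layout stays hidden
    Stack* stack_create();
    void   stack_push(Stack* s, int value);
    int    stack_pop(Stack* s);
    void   stack_destroy(Stack* s);

    // stack.cpp - free to change without breaking clients
    struct Stack {
        int data[256];
        int top = 0;
    };
    Stack* stack_create()              { return new Stack{}; }
    void   stack_push(Stack* s, int v) { s->data[s->top++] = v; }
    int    stack_pop(Stack* s)         { return s->data[--s->top]; }
    void   stack_destroy(Stack* s)     { delete s; }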


I've been recently writing a lot of code in the ECS style https://en.wikipedia.org/wiki/Entity_component_system and I'm a fan. It allows me to concentrate on shit without getting distracted. Code tends to be clustered together with other related things as opposed to having to jump between files and classes and trying to figure out what's getting called.

The only downside is that it's not as popular in the mainstream, and there aren't any ECS-first languages.

However, I believe there is a reason ECS has been dominating the game industry for the last 20 years: it's very flexible, fast, and works nicely with GPUs.
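
A minimal sketch of the idea (this assumes every entity has every component - real ECS implementations use sparse sets or archetypes to handle the general case):

    #include <cstddef>
    #include <vector>

    // components are plain data, stored in parallel arrays indexed by entity id
    struct Position { float x, y; };
    struct Velocity { float dx, dy; };

    struct Registry {
        std::vector<Position> positions;
        std::vector<Velocity> velocities;
    };

    // a "system" is just a function that sweeps the component arrays
    void movement_system(Registry& r, float dt) {
        for (std::size_t e = 0; e < r.positions.size(); ++e) {
            r.positions[e].x += r.velocities[e].dx * dt;
            r.positions[e].y += r.velocities[e].dy * dt;
        }
    }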


My impression of ECS from a distance is that it is antithetical to data-hiding.

That there are good reasons in terms of memory layout to locate all of those values which would have been member variables in large arrays so that external functions can iterate over them efficiently, but that in so doing, private members become impossible.

What am I missing?


You aren't missing anything. But you also aren't asking yourself the meta-question "what is the point of data hiding?"

The answer to that meta-question is "to preserve assumed relationships between data." `std::vector` stores a capacity, and that capacity damn well better correspond to the length of the last array it allocated or you will get a segmentation fault. So you hide it.

The OO thing that ECS is pushing back against is a tendency to group things together solely because they are in one-to-one correspondence, when they are otherwise unrelated. In a game, there is no constraint that needs to be satisfied between a player's HP and their position on the map. So why "hide" this data together in an aggregation?

If you want to permit only a few functions to operate on your big array 'o stuff, you can always do something like

    struct context;  // stand-in for whatever external state the operation needs

    class sensitive {
    public:
      static void allowed_operation(sensitive& s, context& c);
    private:
      static const int N = 64;
      double here[N];
      int be;
      bool dragons;
    };
Similarly, if you need to do some really complicated computation on everything about a player, you can define Objects as aggregations of indices into the arrays 'o stuff (where typically people think of an Object as a big-ass struct of structs, often implicitly due to Inheritance), where the data you care about hiding is not the stuff from the array, but the indices:

    #include <cstddef>
    struct Component { /* position, hp, whatever lives in the big array */ };
    extern Component arrComponent[];  // the shared array 'o stuff

    class Player {
    public:
      Component& GetComponent() { return arrComponent[index]; }
    private:
      std::size_t index;  // the hidden part: which slot belongs to this player
    };


Thanks for the examples shoehorning data hiding into ECS! They are revealing even if only appropriate in esoteric circumstances.

> But you also aren't asking yourself the meta-question "what is the point of data hiding?"

My answer to that question has always been that data hiding is necessary for the sake of modular independence in large systems. This is a principle which applies across all of engineering, not just software — see the "starter motor" example elsethread.

Implementation details need to stay hidden so that you need only concern yourself with local effects when making changes — instead of needing to keep the entire system in your head because any change might impact any tiny detail anywhere at all.

Nevertheless, I agree that in some circumstances it makes sense to expose the data structure as an API, and that ECS offers a compelling approach and set of conventions as to how you would go about that.


> My answer to that question has always been that data hiding is necessary for the sake of modular independence in large systems.

Yes. But not on a granularity of every object. Just like you don't put a padlock on every pocket, to protect your left hand from grabbing stuff from a right pocket.

Hiding data has a real cost, just like inventing abstractions and interfaces for every tiny thing. That's the core reason why OOP software is so bloated and always feels so "heavy".

The right granularity is much coarser: modules, API layers, data-stores. Much closer to the "service" in "micro-service" than to "object".


You are right, it's shoehorning. That's because data-hiding just isn't as much of a concern in ECS.

In ECS the things you talk about are achieved differently.

In ECS, implementation details are local to the systems that process the relevant data. You don't store implementation details in the data you pass to the higher abstraction.


That's exactly the point. This whole data-hiding thing has way too many downsides.

If you consider your app a machine that transforms A to B, you don't want data-hiding in the first place. You want to be honest with the data you have and data you need.

Also in most web apps: Your data will escape your precious objects (mostly as JSON).

And don't confuse data-hiding with having objects in a consistent state (i.e. mutating data). Two different things.


> That's exactly the point. This whole data-hiding thing has way too many downsides.

The usual argument against OOP that I hear is that implementation inheritance is brittle. That, I understand and agree with.

But data hiding being bad? I'm not persuaded. The whole divide-and-conquer aspect of breaking modules into smaller parts requires not exposing unnecessary details as interfaces.

You have a starter motor in a car. If it's only connected to the motor via a couple wires, then it's easy to design a better one and install it. But if it has dozens of wires connecting to every which part of the engine, improving it is much harder.

> Also in most web apps: Your data will escape your precious objects (mostly as JSON).

My "precious objects"? :( Can we please discuss this subject without getting into a flamewar?


Sorry, forgot this wasn't reddit.

Well, will your data escape or not? As long as you are staying in OOP-land, everything is as it should be: Objects passing messages to each other, as politely as possible (trying not to bring in the concurrency aspect)

But then a REST endpoint comes around and wants your data. Now it is out in the open and subject to inspection and change. So what have you gained by using data-hiding? Shouldn't you be able to pass that data around openly in the rest of your code as well?

Maybe I shouldn't say data. I should say values. Immutable ones. Values can be passed around safely. They can be inspected by anyone. Accessed by multiple threads etc. That allows for abstractions that are impossible to achieve with stateful objects.

In OOP we keep our state hidden in the objects and have methods manipulate it, but that means these objects can't be values. They can change anytime. You can't even observe those state transitions (unless you code for them, which gave us bean-style properties...)

To stay with your motor example: If you assemble a motor from values and start it, it will continue to work, even if someone installs a new part. You can't break the motor.

You can even save the motor to disk at any time, load it back and you still have a working motor.

Yes, all of this could be achieved by using objects talking to each other, but it is vastly more difficult (try to snapshot a consistent state of an object graph, for example)
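
In code, the difference is roughly this (a sketch with made-up names):

    struct Motor {
        int rpm;
        double torque;
    };

    // "installing a new part" produces a new value; the original is untouched,
    // so anyone still holding it sees a consistent motor
    Motor with_rpm(Motor m, int new_rpm) {
        m.rpm = new_rpm;   // m is a copy
        return m;
    }

    // snapshotting a value is just copying it:
    // Motor snapshot = current;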


> The whole divide-and-conquer aspect of breaking modules into smaller parts requires not exposing unnecessary details as interfaces.

ECS takes a fundamentally different approach. OOP is about breaking things down, ECS is about building things up.

Like in ECS, it's possible to dynamically toggle whether an entity has a particular field or not.


Data-hiding is a language feature but I do get your point. I'm not sure if I really need data-hiding.

In your example, would some of the values be private while others public? Like in ECS, they are either all private or all public which kinda makes sense.


For the sake of loose coupling, which allows modules to be improved in large systems without having to keep the entire large system in your head, data hiding is necessary as a general engineering principle.

From my perspective, ECS looks like an optimization where you make the engineering tradeoff to sacrifice data hiding. For very good reasons! Memory locality is super important when iterating over elements the way it is typically done in ECS, such as when rendering objects in a game.


The way you do things in ECS is that you query all things that have a particular field. As a result, you don't really need to remember what type the thing you are operating on really has; you are operating on the columns of a DB, not the rows.


Yeah ECS is really easy for organisation. The registry is basically global state though!


It's more like a database. And a database, in some sense, is just global state.


Interesting stuff, but I really miss explanations:

> It (OOP) made it possible to develop much larger programs than before, maybe 10x larger.

Why?

This contradicts my intuition, because the biggest problem is that OOP focuses on state changes (of objects). State change means complexity, so formally, you need to understand the set of all possible states to reason that the program is correct. This is an exponential problem in the number of pieces of state, unfortunately. So how would this enable larger programs? What's the reasoning here? It would mean some even more potent mechanism of OOP makes more state more manageable. The structuring of data does not strike me as that potent -- nice, yes, but, well -- please explain!

> OOP provides a way for programmers to organize their code.

I think it is primarily about organizing data. The organizing of the corresponding code follows. When Java emerged, it did not even have standalone functions -- every piece of code had to be attached to data. In Smalltalk, you could change the definition of the global 'true' or even extend bool to have three values -- definitely data-centric, and introducing unmanageable state...

> 100% pure functional programing doesn’t work. Even 98% pure functional programming doesn’t work.

I believe this. Well, intuitively, I'd say no paradigm works well when applied 100% pure. It would still be nice to see examinations of this or reasoning or proof or at least examples.


> the biggest problem is that OOP focuses on state changes (of objects). State change means complexity, so formally, you need to understand the set of all possible states to reason that the program is correct.

This is just not true. In fact that's the whole point - you encapsulate state. It becomes easier to reason about.

When I write a webserver, do I need to know or care about the internal state of the classes dealing with the TLS protocol? Or do I encapsulate consistency and correct behaviour at that level and then use the interface elsewhere?

I don't need to know everything at all layers.


> I hardly ever write classes anymore; I write functions.

I've never understood why so many people insist on this false dichotomy.

I do both.

These past years, I feel that I've been writing much better functions as my understanding of more advanced concepts has ramped up, thanks to functional programming and the emergence of things like Rx and coroutines, but once I have these functions, I am extremely happy to be able to organize them in classes and modules and leverage various forms of polymorphism to make my code base flexible and easy to maintain.


> I've never understood why so many people insist on this false dichotomy.

> I do both.

Are you talking about methods attached to your classes or functions that are at the root/module level? If it's on a class I would argue it's not a function, since it takes input that's not just the parameters to the function. If it doesn't access properties of the class, then why does it need to be attached to the class?

For example, let's say there is a function that takes instances of two different classes and returns an instance of another class. If you want to move that function to a class, where does it go?

> once I have these functions, I am extremely happy to be able to organize them in classes and modules

I understand the organizing into modules and being able to create interfaces/types, but why classes?


> Object oriented programming, for all its later excesses, was a big step forward in software engineering. It made it possible to develop much larger programs than before, maybe 10x larger.

If that statement is true then OOP was surely the most important advance in software development since the original higher-level languages, such as Fortran, were developed.

My experience agrees with the author's stance that OOP provides significant leverage in developing complex software. What I don't understand is why he then falls into the current fad of seeking to abandon it.

Surely if OOP has been as successful as the author claims then what's needed is not to replace it with another paradigm that rejects all the core principles of OOP but rather an approach that combines the good aspects of OOP with other paradigms allowing the programmer to use the tools best adapted to the problem at hand.


No one ever seems to ask if an ever-larger program is a good idea. The fact that the tools enable it, and that it's the way that's currently taught, still does not answer the question of "goodness".


I don't understand the modern hate for OOP. Perhaps a bad mentor or inheritance zealot is to blame. Inheritance can be thrown in the trash, but otherwise OOP is just a collection of related functions where you can have shared data to work with while not polluting global scope. I don't see how it's so oppressive unless you're trapped in Java land, where OOP is inescapable, as opposed to dynamic languages where you can sprinkle in your OOP with other coding styles as needed.


A relevant quote from Joe Armstrong, creator of Erlang:

"the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

In a nice procedural or functional program, when I need to do something I just write a function that takes the relevant arguments. From the function signature, I can reason about the function, that it only depends on and interacts with those things I passed in. It also means I can reuse that function anywhere else in the code I happen to have some of those things that are the function args.

In the OO approach, even if a function only needs to access a few member variables, I still pass in the whole object every time I declare a member function. This makes the function harder to reason about, as the body of the function could touch any member of the class, and if it has other classes as members then it could rely on any of them too. It also makes the function much harder to use, as even if it only needs X, Y and Z, calling it requires creating a whole instance of that class of which the function is a member, which may require a bunch of other stuff.

In that sense, the OO approach introduces unnecessary coupling, between the logic in the body of a member function and the class members not used in that function. Because I can't use that function's logic without creating instances of the members not used in that function.
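
A sketch of what that coupling looks like (all names made up):

    #include <string>

    struct Renderer { /* heavyweight, unrelated */ };
    struct Database { /* also unrelated */ };

    class Widget {
        std::string label;
        int count = 0;
        Renderer renderer;   // the jungle...
        Database db;         // ...the gorilla is holding
    public:
        // needs a fully constructed Widget, though it reads only two members
        std::string display_text() const {
            return label + ": " + std::to_string(count);
        }
    };

    // the banana: the signature states exactly what is needed
    std::string display_text(const std::string& label, int count) {
        return label + ": " + std::to_string(count);
    }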


I think it's more a hate for patterns than for OOP and, as in your inheritance example, a confusion of the two (similar to "agile" and all the process that comes with "scrum").


I genuinely think that it’s as simple as the grass being greener on the other side of the fence. Most of us work in an OOP paradigm most of the time, so it’s easier to focus on what non-OOP languages would give you rather than what OOP does give you.


There's nothing wrong with objects. Hierarchies of objects, though, ran into the "is-a" problem. Multiple inheritance has even worse problems, especially around order of initialization.

Now that people are mostly over inheritance for the sake of inheritance, it's not so bad. Often, you want to have one class be a member of another, rather than a child of it.

This brings up a common problem. If A is a member of B, A sometimes wants a back-reference to B. This is hard to set up in some languages. It should be made easy, because it removes much of the need for inheritance. For Rust, especially, it's a special case of Rust's back-pointer problem, the one that makes doubly linked lists and trees with backpointers hard. Not having this encourages the creation of structures with too many data elements.


“Pretending X never happened” might make an interesting series. Things that we no longer adhere to (visibly at least) but changed us in ways we can still appreciate


A natural question that arises in this context is: “do those who join the field after the consensus to forget X ever happened behave any differently than they would have if X really hadn’t happened?”


The author seems to equate OOP with "writing classes". So I have to suggest that OOP never happened for them in the first place, at least not in the sense I understand it viz. messaging, encapsulation, and late binding. The design discipline that falls out of this is, for me, domain modelling, not "writing classes", and the value I've obtained thereby is a better fit between the structure of the code and its purpose, with all the opportunities for validation and comprehension by domain experts that follow.

That said, I do agree with the sentiment that multi-paradigm languages offer higher developer productivity than purist languages. I’d still have procedural style in fourth place, behind OO, FP, and Logic.


Maybe it's because I write small software or customize existing mature Enterprise apps. I just don't care about the dogma so much. I think there is a lot to like about OOP. There is a lot to like about functional code. I use both quite a bit with C#. Don't get me wrong though, I'm glad people do think about this and try to make software paradigms better. I just don't find that I am overly hampered by current tool sets, so I don't worry about this stuff.

Most importantly, I think this is just all talk. This is John D. Cook just spitballing with a contemporary and debating things. I bet the other person in the conversation is probably not all that convinced of his own argument and is throwing an exasperated take out there...


I was hoping this post would be about what the world (and our programs) would look like if OOP never happened. But an interesting and quick read nonetheless.


OOP never had one definition anyway, besides maybe "your language has a 'class' keyword" (although some don't!).


and Haskell has a "class" keyword, so I guess there goes that option too


Reading some of the comments here reminds me of an extremely popular language that was designed from the start to encourage the use of a kind of class inheritance. People built complex things with this language, using deep layers of nested inheritance, as was intended. Then they began to discover that, when your project reached a certain level of complexity, it was excruciating to try to figure out what was going on, and impossible to change anything without breaking eight other things far away. New, powerful features are added to this language every year, and are embraced by developers, but they have learned to tame it by avoiding its inheritance abilities, or using them very sparingly. They have even developed novel ways to use its power while largely avoiding inheritance—systems that have their own conventions and terminologies invented by end developers and not envisioned by the language designers.

I’m talking about CSS, where class inheritance is called the “cascade.”


To make my point, let's go back to the very beginning of programming: plugboards.

To create a program you used cables on a plugboard. You had a general-purpose computer that could run multiple programs, but in order to switch programs you had to spend significant time setting up cables. A small change in the program might also require the same thing.

So then we moved to stored program computers, so that we don't have to do that anymore, and the global consensus is that it was a good idea, because it saves time.

Then, we had programs in bare machine language. But that poses various challenges:

1) Having to keep track the state of memory and registers is difficult. So programming languages were created to create an abstraction over them in the form of variables.

2) Programs were full of jumps that became hard to track and maintain at scale. Structured programming was created to create an abstraction over jumps in the form of control structures (sequence/selection/iteration/recursion). Procedural programming was created to group statements into reusable procedures and functions.

3) Then, having variables around became hard to maintain as well. So then structures were created as a way to group variables that are used together.

But then, people understood that some procedures and functions are coupled with structures, and that some structures are supersets of other structures. And that's how OOP was born.

In this mindset, I do not think OOP is a bad idea. I also do not think that the natural consequence of OOP is bad software. The problem is how the paradigm is used, not the paradigm itself.

The problem people are facing now is shared mutable state, which is not only a maintainability problem, but is also problematic in multithreaded software. Functional programming is a viable solution to address that problem.

NOTE: edited based on suggestion, since apparently I got some concepts mixed up.


Just a note, structured programming wasn't about structures in the data sense, but in the logic sense. Organizing program logic in a way that was more structured. Using a pseudocode:

  function unstructured_summation (int lower, int upper):
    int sum := 0;
    label loop:
    sum := sum + lower;
    lower := lower + 1;
    branch_neq lower, upper, loop;
    return sum; // this is assuming there's even a return concept
This is short and simple, so the goto or branch here is fine to reason about. But it doesn't scale very well [0]. Structured programming discouraged these ("discouraged" being a scale from "don't use unless it makes sense" to "don't use at all") in favor of other constructs that were clearer in their intent and easier to reason about (particularly in the days when you couldn't write a piece of code and test it in 10 seconds or less).
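
For contrast, a structured version of the same loop (a sketch; note that, assuming lower < upper, both sum the half-open range [lower, upper)):

  int structured_summation(int lower, int upper) {
    int sum = 0;
    for (int i = lower; i != upper; ++i)  // intent and exit condition explicit
      sum += i;
    return sum;
  }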

[0] I once inherited a Fortran codebase that was essentially written in an unstructured style, many gotos and computed gotos. The guy who wrote it (yes, singular) over a 20 year period knew exactly what it did. But that didn't help me, because he was 70 and retired with a nice pension. The whole thing had to be rewritten using the original as a reference for testing because it was unmaintainable.


This is all awfully vague. Accordingly, all the comments are just talking past each other.

What does "85% functional" mean? How exactly do you measure that?

For that matter, what does "OOP" mean? PG published a note from Jonathan Rees [1] which lists 9 possible (pieces of) definitions.

"Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser."

That's roughly what I'm seeing here, yes.

[1]: http://www.paulgraham.com/reesoo.html


Funnily, ever since I decided to follow the SOLID principles and strive for clean architectures in general, I've been writing more classes. Not that I can in my current job, it being Go, but I make do.

Why can't people let paradigms be paradigms instead of trying to turn them into religions? They're approaches that might or might not be well suited to different problems; yet so many people try to make the problem fit the paradigm they prefer instead of the other way around.

This article is thankfully more nuanced than most on this subject, though.


Before I ever dabbled in what was called "object-oriented" languages, I was happy enough to

#include <stdio.h> ... FILE *myfile;

Isn't myfile really an object?


I've actually run into a couple of people with six-digit SO scores that profess to not know about such concepts as polymorphism.

My jaw dropped. Maybe they were being a smartass, but that's just crazy talk.

Like all dogmatic stances, people just take a "My Way or the Wrong Way" approach.

Most of my engineering is a hybrid of classic (read: "old") techniques, and new, "cutting-edge" techniques.

I write about my outlook on that here: https://medium.com/chrismarshallny/concrete-galoshes-a5798a5...

(Scroll down to "It's Not An 'Either/Or' Choice").

One day, AIs might replace us poor coding schlubs. At that point, we can assume that everything will revert to Machine Code.

Until then, OO is a great way for humans to grok the complexity of software development.

I will always keep referring to this classic joke when I encounter inflexible, dogmatic thinking: http://www.solipsys.co.uk/new/TheParableOfTheToaster.html


I'd say all programmers of any significant experience know the concept of polymorphism, but it's quite possible to know the concept without knowing the word.

They might be using discriminated unions, or type-distinguishing enums on rows or structs with a superset of attributes, but the concept is hard to avoid when trying to generalize behaviour over heterogeneous data.
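
For example, a discriminated union like this is polymorphism in everything but name (a sketch):

    #include <variant>

    struct Circle { double r; };
    struct Square { double side; };
    using Shape = std::variant<Circle, Square>;

    // behaviour generalized over heterogeneous data, no vtable in sight
    double area(const Shape& s) {
        if (auto c = std::get_if<Circle>(&s))
            return 3.14159 * c->r * c->r;
        return std::get<Square>(s).side * std::get<Square>(s).side;
    }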


Good point, you see this so often in C code bases. Didn't even think about that.


A domain specialist writing software for embedded devices perhaps doesn't have a lot of use for polymorphism or functional programming. Not everyone needs to know everything, and that is OK.


True, dat (I believe that Linus is a rabid "straight C" man).

However, these were people answering Swift questions.

Swift introduces "Protocol-Oriented Programming" as dogma, which has some really salient points, but is not the Philosopher's Stone of software development.

EDIT: Added "as dogma"


Well, for one it's not actually new. Protocols are interfaces, sets of messages.

Hmm...that would be message-oriented programming, which is what Smalltalk for example has been all about. First equating OO with heavy use of static typing and inheritance (which wasn't the idea), noting that that doesn't actually work (surprise!) and then "inventing" POP...sigh.

https://blog.metaobject.com/2015/06/protocol-oriented-progra...


No, Smalltalk does not have "protocols", traits or interfaces. Everything is dynamically typed a la Objective-C.


Last sentence first: Objective-C certainly has (optional) static types and protocols[1][2], or as Java calls them, interfaces. In fact Java interfaces were "borrowed" from Objective-C[3]. In fact, Objective-C needs the types of a message's arguments and return value in order to generate correct code in the general case, and since message names can be ambiguous it needs to know the static type of the receiver (or "close enough") in order to know those static types in the message. If all arguments and the return value are objects, then this is not necessary, this "id subset" has some nice properties.[4]

In Smalltalk, the messages an object responds to are called its protocol.[5]

"Every object in Smalltalk, even a lowly integer, has a set of messages, a protocol, that defines the explicit communication to which that object can respond."

Now, that this single protocol for the entire object can be split into multiple meaningful protocols should be obvious, but Alan Kay has also explicitly talked about objects embodying "multiple algebras"[6]

The basic idea of "protocol oriented programming" is that you focus on what an object can do (its interface/protocol) rather than what it is (its class/type), and Smalltalk certainly did that. Whether you check this statically is a secondary and certainly orthogonal concern, but of course there were and are statically typed variants of Smalltalk (such as Strongtalk) and type systems for other Smalltalks.

Traits were introduced in 2003 for Squeak Smalltalk.[7]

[1] https://en.wikipedia.org/wiki/Objective-C#Protocols

[2] https://developer.apple.com/library/archive/documentation/Co...

[3] https://cs.gmu.edu/~sean/stuff/java-objc.html

[4] https://blog.metaobject.com/2014/05/the-spidy-subset-or-avoi...

[5] https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....

[6] http://www.cs.uni.edu/~wallingf/blog/archives/monthly/2014-0...

[7] http://scg.unibe.ch/archive/papers/Scha03aTraits.pdf


Swift did not introduce "Protocol-Oriented Programming". As misinterpreted and broadly applied, it's been present in (among others) Java and Objective-C forever.

As actually intended (listen to Dave Abrahams clarify what he meant on John Sundell's recent podcast) it has much more to do with parametric types and functions that operate on them, and is almost directly inherited from ideas that have been in C++ since the STL.


Not aimed at you, but...

Misinterpreted?

Sorry, but the definition of a protocol, particularly in the context of Swift/Objective-C, has been fixed for around a quarter of a century. And of course, those protocols were what he talked about in his presentation.

If he meant something different, possibly he should have (a) picked a different term and (b) talked about something else at WWDC.


Good points; I can't really disagree.


You guys are both right, but I should have rephrased that "introduces as dogma."

I get a bit discouraged, when smart, energetic engineers (usually young), latch onto the buzzwürd du jour, and declare it to be "The One, True, Path; all heathens and apostates must be cleansed with fire."

I say that, as having done exactly that, myself.

Looking back at my thinking, then, makes me wince.


> I've actually run into a couple of people with six-digit SO scores that profess to not know about such concepts as polymorphism.

Perhaps this is just a symptom of "measure some aspect of a person's output, and that person will optimize for that measurement, no matter the cost to the rest of their output."

> Until then, OO is a great way for humans to grok the complexity of software development.

Particularly in domains where everything is an object, like game development.


But in game development it seems many have been converted to ECS.


Isn't ECS just a specialization of OO?

I thought it was an aggregation (as opposed to hierarchical) model. I've been doing that with OO for decades. It makes a great deal of sense, in some cases. POP has similarities to that, but, in my experience, there is absolutely nothing that can compete with polymorphism and/or inheritance - and, here's the kicker - in some situations.

OO, in my experience, is a lot different from just polymorphism. I think that it's a way of conceptualizing the model. I used to write OO in straight C, in the late 1980s. I called it my "Faux Object" pattern. It even had a rudimentary form of polymorphism (no vtables, just function pointers that could be replaced: a poor man's vtable).
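
Roughly like this, for anyone curious (a sketch from memory, made-up names; it compiles as plain C):

    /* a "faux object": data plus replaceable function pointers */
    typedef struct FauxObject FauxObject;
    struct FauxObject {
        double w, h;
        double (*area)(const FauxObject*);   /* poor man's vtable slot */
    };

    double rect_area(const FauxObject* o)     { return o->w * o->h; }
    double triangle_area(const FauxObject* o) { return o->w * o->h / 2.0; }

    double total_area(const FauxObject* objs, int n) {
        double sum = 0;
        for (int i = 0; i < n; ++i)
            sum += objs[i].area(&objs[i]);   /* "polymorphic" dispatch */
        return sum;
    }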

My experience is that lots of people have difficulty adapting to hybrid design. We learn one tool, and we learn it well. We go to conferences, where we are told that we are über-programmers, because we use this tool.

But try using a screwdriver to drive a nail. Most of my friends that are (building and construction) contractors are quite used to hybrid design. They use one technique and set of tools on the foundation, another on the walls, and yet another on the roof.

Some of these aspects are a large enough field that they can specialize (like framers or roofers), but there are quite a few "general" contractors that are capable of building a house from scratch.

I started off with Machine Language (the original "ML"), and have written device interface code for a long time, and am quite used to a lot of limitations (like "lite" versions of languages for lower-layer work or embedded content). We use the best tools for the task at hand, and don't sneer at others that don't use them.


I am not a game programmer, but I have read about ECS. ECS reminds me a lot more of database theory. We have unique keys (entity IDs) and values, and the goal of ECS systems is to have normalized data such that our functions that iterate over them only access the data they actually need so that things are cache friendly.

(Maybe this is also getting mixed up with things like data oriented design?) I am definitely not an expert in these things so I would welcome any feedback.


One take, from Robin Popplestone, long ago in comp.lang.functional: “… it does seem to me that … OOP represents the discovery by the mainstream community that it is a good idea to associate code with data, but, since they still don't know how to do closures, they have bodged it.”


I don't know much, but I think:

modularity is VERY good

(I think most of the shortcomings of C are a lack of modularity)

functional programming is good (eliminate side effects and people and compilers can assume without making an a** of u and me. Should not be required)

classes are good

(frequently it's the same, but slightly different)

multiple inheritance is bad

(there are other ways to do it)


> I hardly ever write classes anymore; I write functions.

This speaks to me so much.


I wish the author defined the term OOP somewhat. It really means different things to different people.

FWIW I like Bob Martin's definition (encapsulation, inheritance, polymorphism).


This article and the associated comments reminded me of 'Object-Oriented Programming and Essential State'[1] which was posted on here about half a year ago. I'm just going to copy from a thread that I felt got to the crux of the matter with OOP.

----

One thing I hate though, in a language like Java, is when I see "utility" classes with static methods which take objects as parameters, then perform some calculation based entirely on the state of the passed in object. In my opinion, if the object can reason about its own state and return an answer based on that reasoning, that method/logic should be in the object, not elsewhere.

Another thing that bothers me is these static transformation methods which take type A and return type B based on nothing but the state of type A. The languages we're talking about, C#/Java, already provide a facility for this called a constructor.

If the development approach is going to completely remove operational methods from data types then I'd think long and hard about using a language which supports this instead of a language, like Clojure, which does not.

----

> that method/logic should be in the object, not elsewhere

Only if it's involved in preserving some object invariants, or accessing parts of its state that are abstracted away in the public interface. Otherwise, you're breaking encapsulation by putting some logic in the object that doesn't belong there, and making it hard to change the implementation later.

> The languages we're talking about, C#/Java, already provide a facility for this called a constructor.

There are sensible reasons to avoid using constructors for this, at least in the general case.

----

> Only if it's involved in preserving some object invariants, or accessing parts of its state that are abstracted away in the public interface. Otherwise, you're breaking encapsulation by putting some logic in the object that doesn't belong there, and making it hard to change the implementation later.

I've been pondering this thread all day, and I think this is the crux of the issue. Ideally, classes would only encapsulate state and you'd use namespaces/modules to encapsulate functionality. But most Java/C# OOP examples use classes to encapsulate both state and functionality, which gets you stuck in the morass that the article discusses.

----

[1] https://news.ycombinator.com/item?id=21238802


85% FP + 15% BASIC = JavaScript


haha. JavaScript bad


libc has stood for quite a while without OOP. I think a mix of FP and imperative is orders of magnitude easier to reason about than OOP. IMO, FP and imperative semantics are much clearer than OOP's.


OOP in the sense of encapsulation, inheritance, and polymorphism is an academic exercise for students to think about the organization of data and functions into hierarchies. Objects in this case were mistaken for modules. This is only usable by students and librarians.

OOP in the sense of message passing is just a programming language implementation technique, used in Smalltalk, which lets you handle functions of multiple arguments with ease: you basically 'pass' each argument as a message to the receiving object.

That is obviously just a hack to make that feature of nontrivial function calls work. Real message passing obviously has to be asynchronous to even resemble the concept of real-life message passing; Erlang and the Actor model are pure models of message-passing-based OOP.

Somehow, all of these OOP systems are just distractions from solving problems with computers, because you can represent your problem using pure 'data', such as numbers or strings, and compose them into bigger structures with well-defined operations, such as lists, dicts, and tables. Objects can be a purely abstract thing in your mind, or documented in code comments.

Imperative programming obviously doesn't scale beyond one machine because of the von Neumann bottleneck, and it suffers from unmanageable global state. That doesn't mean OOP is the solution.

Functional programming is of two kinds. The first is statically typed with powerful type systems; these have a very limited field of use and a high development cost, and are unfortunately unusable for many real-life problems because of the lack of intuition.

The other, less strict functional languages fare better, but used naively they suffer from the von Neumann bottleneck too, unless they are used in a language-oriented programming paradigm, whereby you use your language only to construct a higher-level language (a DSL) which can abstract over more than just a single machine.

Good examples are Lisps and their ability to define other languages such as Prolog or SQL or Linq.

The only way to make parallel computing possible is to use data parallelism instead of task parallelism, so data-oriented languages such as SQL or Datalog are the future now that Moore's law has reached its end. This poses a strong constraint on the design of programming languages.

SQL, while not suffering from the single-processor bottleneck, unfortunately sucks as a language, because it is not programmable and is nondeterministic in generating physical plans.

Which offers a great opportunity for programming language enthusiasts: build a programmable SQL- or Datalog-like language, because the other paradigms have reached some fundamental constraints and hit a dead end.

Or come up with a new paradigm.


Program = Data Structures + Algorithms

Most OO code is poorly designed, slow, error-prone, and a memory hog; every object you initialise wastes memory. OOP is not going to be there forever. It is a failed paradigm. Hopefully it will be replaced with something more efficient, faster to develop in, and more amenable to proofs of correctness.

Is an object a data structure, an actor, a module, a knowledge frame?

A state machine can rightfully be called state + behavior. Sadly, no programming language has made hierarchical state machines a first-class feature with syntax support, despite them being actual software engineering, unlike what OO bros claim.


For a failed paradigm, there sure is a lot of perfectly successful software written using it.


You can make almost anything "successful" if you try hard enough and spend enough money. If we just got rid of subtyping and mutation (which, imo, are the core "features" of OOP), we would be in a much better place. Though, if OOP just means grouping data together and providing functions that operate on that data, I don't have any problem with that. Most functional code works like that; it just doesn't mutate the data.


Yet it's been massively successful, out in the real world, in ways FP can't really claim.


What do you mean, exactly, by "successful"?


Successful as in slow to use, poorly designed, difficult to extend, and a memory hog. You can't even reuse code between Rails 1 and Rails 2. OO is a joke. Pure OO languages like Java and Smalltalk are already failures. C++, Scala, and Kotlin are not OO languages. Design patterns are not engineering, let alone architecture.

The only success for OO has been UI development; it's a failure everywhere else. OO databases are a failure. ORM is a failure. OO-based distributed computing has failed; we use REST instead. OO-based design is a failure; no one uses UML.

OO-based architects are a waste of time and money. You can replace all the OO architecture nonsense and patterns with code written in Go.


I wouldn't put Java and Smalltalk in the same class; they are very different.

   >Design patterns are not engineering, let alone architecture
   >Sadly, no programming language has made hierarchical state machines a first-class feature with syntax support, despite them being actual software engineering, unlike what OO bros claim
What do you mean by engineering here?


Java is not pure OO, nor is it objectively any sort of failure; it's one of the most widely used languages out there.

As such I don't think the rest of your post is worth consideration.


Java Web Start is such a big success.


Avoiding OOP is hard: even those using "non-OOP" languages like C often end up doing a sort of bad OOP where the first parameter is an explicit this pointer. I would tend to agree with the author's conclusion: it's often not worth trying to remove all instances of a paradigm from your code, especially if you're working in a multiparadigm language. If you stay away from the more problematic aspects (inheritance) while retaining the strengths (encapsulation), I think you've done a good job.
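To make the pattern concrete, a tiny example with made-up names; this is the general shape of C code being described:

    typedef struct Counter { int value; } Counter;

    /* The first parameter plays the role of an explicit "this". */
    void counter_increment(Counter *self) { self->value++; }
    int  counter_get(const Counter *self) { return self->value; }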


If the problem domain naturally maps to an OO approach, then programming it as such is reasonable. On the other hand, if the problem, or its understanding at the present stage, is not OO (for example, few functional relations but tons of properties and states), then forcing it into an OO implementation is not going to bring any of the promised benefits, but most of the overhead.

Alternatively, spending a lot of lead time doing OO analysis in such cases is more like premature optimization. Instead, if the problem does not appear obviously OO, just implement it as procedural/structural, but implement it properly, trying to keep scopes and state as limited as possible. If a better model, perhaps an OO one, emerges from such an implementation, it can be applied in the next iteration.


I agree with you that the biggest strength of OOP is encapsulation that is easily accessible. As for inheritance, I think it is also a good abstraction, but in a very limited number of circumstances. In 10 years as a professional developer I have only ever implemented something using inheritance once. I think the problem with inheritance is that it is far too encouraged, and people fit it onto problems for which it is unsuitable.

EDIT: typo


Very true. Implementation inheritance should be taught as one of the more obscure features of OOP instead of as one of the main features, as it so often is.

But in the end we should be less dogmatic in general. Myself and a lot of people I know started out building big inheritance trees, only to find out that this didn't work so well, so we reduced it to a degree that was useful and manageable. But then you have some very loud dogmatists who scream "this is not OOP" and insist on building factories, and factories of factories, and all the other nonsense. And for some reason the dogmatists often get heard, maybe because they are so loud and convinced of themselves.


> In 10 years as a professional developer I have only ever implemented something using inheritance once.

If you build a library or framework, you'll use it more: there are plenty of literal "is-a" relationships there that don't come up as often in application code. So you probably benefit from inheritance in OOP quite a bit even if you aren't building those relationships yourself.


And now that I think of it, what I was building was a data visualization framework that users could create "plugins" for to handle many disparate types of data. So yeah, frameworks are a great example of cases where inheritance shines, but that is rather niche, and not many people work on frameworks day to day.


I think the reason a lot of people reach for inheritance is because they're never really taught how to decompose a system into smaller moving parts through analysis. At a high level every watch tells time and date and has complications. But the devil is in the details of what watch we're talking about. And I think most classes on OOP would take it as a trivial example and start with a Watch interface and subclass it. But they never really explain how a Watch consists of a Time and Date, with zero or more Complications and a Face.

If you focus on the compositional relationships, you get a lot more mileage out of OOP, which is the real revolution of the past 15 years. And the decomposition is how you arrive at the Single Responsibility Principle and encapsulation. Suddenly you can work with only the set of all Complications, or the Face, or the Time or Date, without worrying about which Watch you have.
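A quick sketch of that decomposition; the Watch, Face, and Complication names come from the comment, while the fields are guesses for illustration:

    typedef struct Time { int hours, minutes, seconds; } Time;
    typedef struct Date { int year, month, day; } Date;
    typedef struct Face { const char *style; } Face;
    typedef struct Complication { const char *name; } Complication;

    /* A Watch *has* these parts; nothing subclasses an abstract Watch. */
    typedef struct Watch {
        Time time;
        Date date;
        Face face;
        Complication complications[8]; /* zero or more */
        int complication_count;
    } Watch;

Code that only cares about the Face, or the set of Complications, can take those parts directly and never needs to know which Watch they came from.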


That isn't "bad OOP". It's how you do non-trivial things in C. Got a pointer to various things and need dynamic dispatch to avoid a god function switch statement? Bang! You've got OOP.


You can simplify a god function switch statement by using goto lines. Cut out the middleman!


And remember kids: you don’t need a bunch of gotos and printfs because you can use enums!


How is this bad OOP? You have a function that operates on a struct, versus having function pointers on a struct that operate on the values of that same struct?

Taking in a pointer to a struct as an argument is much less overhead for the programmer than "OK, this is an 'object', here is why it is doing what it's doing, oh, here is this method, etc., etc."


I'm not sure I agree with this. OOP is really about implementation inheritance and ad-hoc polymorphism based on the same, and it's pretty clear by now that these are not good ideas because of e.g. the fragile-base-class problem, which basically involves violations of modularity.

Simpler object-based paradigms and languages, which feature "classes" and "objects" for encapsulation and data abstraction, plus maybe some interface-only subtyping, are used all the time and work quite well with functional programming concepts.


The main lesson I’ve drawn is that the real problem is dogma: OOP works fine if you aren’t trying to follow some One True Way™ rather than adjusting based on your needs, problem domain, and resources. The same is true of most other styles: FP is prone to getting a certain brand of immutability fanatic who will cause problems until they learn more about computer architecture and get a more nuanced position.

In every case, focusing on dogma avoidance and having reasonable technical debt levels seems to be far more important than the specific language and paradigm.


People think that with FP you end up just copying a bunch of stuff constantly. But in my experience, that's rarely true: structural sharing minimizes copying, so the performance is usually on par with in-place mutation. In fact, in some situations, the immutability guarantees make the program faster than the mutable version.

Moreover, advanced compilers (like GHC and the OCaml compiler) can often turn functional code that allocates and copies into in-place, GC-free code.
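As a toy illustration of structural sharing, a hand-rolled immutable list in C (real persistent collections generalize this idea to trees):

    #include <stdio.h>
    #include <stdlib.h>

    /* Immutable cons list: "adding" an element allocates one node
       and shares the entire existing list as its tail. */
    typedef struct Node {
        int head;
        const struct Node *tail; /* shared and never mutated */
    } Node;

    static const Node *cons(int head, const Node *tail) {
        Node *n = malloc(sizeof *n);
        n->head = head;
        n->tail = tail;
        return n;
    }

    int main(void) {
        const Node *xs = cons(2, cons(3, NULL)); /* [2, 3]              */
        const Node *ys = cons(1, xs);            /* [1, 2, 3], shares xs */
        const Node *zs = cons(0, xs);            /* [0, 2, 3], shares xs */
        /* ys and zs each cost one new node; xs is reused, not copied. */
        printf("%d %d\n", ys->tail->head, zs->tail->head); /* 2 2 */
        return 0;
    }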


Remember, I was talking about dogmatic silliness - the kind of thing where someone thinks it’s some kind of moral failing to use Haskell’s mutable structures, even though they’re there for a reason.

My point was that this seems like the same mindset even in two fairly different domains, and that we're prone to talking about it as a technical problem when it's really more of a social one.



