The author recommends watching Object-Oriented Programming is Bad.[1] I did, and Brian went into a discussion of dependencies (at ~20 minutes) and the implicit state-sharing that destroys real encapsulation and makes OOP a nightmare. He says: "So if we're taking encapsulation seriously, the only real way to structure a program -- to structure objects as a graph -- is not as a free-form graph, but as a strict [tree] hierarchy." We handle message calls from one node to another by drilling messages from the root down through the other node's ancestors. "No one writes programs this way."
We do. It's called Redux. The state is a tree, and each child is immutable and calculated via a pure function of the state and action (called a reducer). A message (called an action) propagates down every branch, and most ancestors remain unchanged, as they don't care about that particular action. So you have an explicit ledger of how state has changed, and figuring out why is trivial. State as an immutable result of pure functions gives you features for "free", e.g. time-traveling through your state. Debugging is a breeze. It's an absolute godsend.
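Concretely, the shape is something like this hand-rolled sketch (illustrative names, not Redux's actual API):

    // State is a tree; each branch has a pure reducer. Branches that don't
    // care about an action return their slice unchanged.
    interface AppState {
      counter: number;
      user: { name: string };
    }

    type Action = { type: "INCREMENT" } | { type: "RENAME"; name: string };

    function counter(state: number, action: Action): number {
      return action.type === "INCREMENT" ? state + 1 : state;
    }

    function user(state: { name: string }, action: Action): { name: string } {
      return action.type === "RENAME" ? { name: action.name } : state;
    }

    // The root reducer drills every action down the whole tree.
    function rootReducer(state: AppState, action: Action): AppState {
      return {
        counter: counter(state.counter, action),
        user: user(state.user, action),
      };
    }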
(I realize the video is from 2016 and his opinions might've changed, but I think the spiel is still valuable here.)
Yes, redux is a much saner way of mutating state, but the pattern is closer to FP than OOP.
Remember Brian isn’t attacking state-management in the front-end or anything specific. He’s attacking the entire paradigm of object oriented programming.
"The Platonic ideal of OOP is a sea of decoupled objects that send stateless messages to one another. No one really makes software like that, and Brian points out that it doesn’t even make sense: objects need to know which other objects to send messages to, and that means they need to hold references to one another. Most of the video is about the pain that happens trying to couple objects for control flow, while pretending that they’re decoupled by design"
What a bunch of baloney. My products often live in this exact platonic world: class instances running in multiple threads send and receive asynchronous messages through an internal pub/sub bus without knowing much about each other at all. All of it works just fine, controlling devices, doing data processing, GPU-accelerated display and whatnot.
OOP is used to name both the original (Smalltalk-like) OOP, which was OK, and the modern OOP (Java-like), which is terrible. This makes it confusing for everyone. Critics of OOP (like myself) usually criticize the Java-OOP, while considering the Smalltalk-OOP to fall under the Actor Model definition in contemporary software.
It's really hard to have these conversations until everyone acknowledges and understands which OOP is being talked about at a given moment.
This has nothing to do with Erlang in particular; I've written these in C/C++/Object Pascal. I implemented my first reliable multicast, pub/sub and state machine engine back in the 90s, when I could not afford TIBCO and the likes ;)
It would be great to see __resumes__ alongside these blog posts. To state with such certitude that OOD is "bad" and a "disaster" takes a lot of chutzpah. I hope it is not necessary to point to existing (inherently) complex software developed using OOD.
As an aside, Incidental vs Inherent Complexity is more correct than "accidental" complexity.
("Accidents" happen by chance and if it keeps happening to you -- keep hitting that thumb with the hammer? -- then possibly try 'habitual' and don't blame the hammer.)
So much philosophy. I wish these types of articles had specific examples with sample code that they consider bad and good.
"Here is a bad implementation of a calculator, it is bad because of A, B and C. Here is a much better implementation of the same calculator. It's better because of X, Y and Z".
Check out this video https://www.youtube.com/watch?v=IRTfhkiAqPw from Brian Will, where he dissects 4 different code examples. I think one of the examples is also some Java code from Robert "Uncle Bob" Martin.
Let's assume you are a beginner motorcycle rider. You could ask somebody to film the end product (i.e. a MotoGP rider's every input to his motorcycle). How helpful would that be to a beginner rider barely past his MSF course?
There are no hard rules when it comes to programming and I am very suspicious of any person claiming they are in possession of ultimate truth.
Take the advice as it is and try to figure out if there is something that may improve your process or if it can trigger thoughts that may make you more knowledgeable.
> Not only are the components doing things their own ways, they’re trying to hide what they’re doing as “implementation details”. The fact that a database query requires a database connection never was an implementation detail.
In my opinion this is an implementation detail. A service exposes an interface for CRUD-type operations. The implementation could be any kind of datasource, whether it be a database, RESTful API, filesystem, or mocked data in memory. Did the author imply that the consumer should be choosing the data source? What about caching? That might be an implementation detail as well -- is it a remote managed cache, a file-persisted cache, or in-memory? What about expiry/invalidation?
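For instance, a sketch of the kind of boundary I mean (hypothetical names):

    // Consumers program against this contract only.
    interface Customer { id: string; name: string }

    interface CustomerStore {
      get(id: string): Promise<Customer | null>;
      save(customer: Customer): Promise<void>;
    }

    // One implementation might wrap a database connection; this one is
    // in-memory for tests. The consumer never learns which it got.
    class InMemoryCustomerStore implements CustomerStore {
      private data = new Map<string, Customer>();
      async get(id: string) { return this.data.get(id) ?? null; }
      async save(c: Customer) { this.data.set(c.id, c); }
    }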
This might introduce additional accidental complexity but in my experience building an OOP system correctly has its benefits. An honest question: can FP solve this in a cleaner fashion, with less accidental complexity?
As a rule, FP is great for separating concerns in independent code. So you can have a great data-access layer, and a completely abstract data layer that doesn't care about the access layer at all. OOP systems nominally do that separation too, but FP usually makes those layers clearer and more independent.
A reduction in total complexity is another, completely unrelated thing. I honestly do not see accidental complexity in your description. Any system will have to deal with all that stuff; OOP and FP will just do it in different parts of the code.
You might not care about the implementation for a single call that only hits the happy path, but as soon as you are making more than one call or having to deal with failures the implementation definitely matters. And I think that FP makes it easier to build composable abstractions on top of the underlying code.
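A minimal sketch of the layering (hypothetical names):

    // Data layer: pure functions over plain values; trivially testable and
    // ignorant of where the data came from.
    interface Invoice { total: number; paid: boolean }

    const outstanding = (invoices: Invoice[]): number =>
      invoices.filter((i) => !i.paid).reduce((sum, i) => sum + i.total, 0);

    // Access layer: does the IO, then hands plain data to the pure core.
    async function reportOutstanding(
      fetchInvoices: () => Promise<Invoice[]>
    ): Promise<number> {
      return outstanding(await fetchInvoices());
    }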
While in theory composability and encapsulation are orthogonal concerns when using OOP, in practice (for at least Java and C#) I find that there's often tension between the two.
VHDL/Verilog does but you're not going to write many apps in that.
At a high level:
1. Brute force with automated tests. Great if you have known datasets and platforms.
2. Work from most constrained hardware first. Easy to say, hard to do. Back in the X360/PS3 days almost everyone screwed this up and developed for X360 first.
3. If you need to do N of the same things fast, use a contiguous array. If you want to enforce that, make the array part of your API. CPU prefetchers are amazing and love predictable memory patterns. (See the sketch after this list.)
4. Rust is one of the few languages that bakes semantics into the language that line up well with modern architectures. Specifically Rust can automatically apply restrict semantics. It also forces you to think about ownership upfront in a way that tends to be performance friendly.
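Point 3 sketched in TypeScript, which can express the layout with typed arrays (illustrative only; the same idea applies in C/C++/Rust):

    // A structure-of-arrays layout keeps all N values of a field contiguous,
    // which gives the prefetcher the predictable pattern it loves.
    class Particles {
      readonly x: Float32Array;
      readonly vx: Float32Array;
      constructor(n: number) {
        this.x = new Float32Array(n);
        this.vx = new Float32Array(n);
      }
      // Linear walk over contiguous memory, no pointer chasing.
      step(dt: number): void {
        for (let i = 0; i < this.x.length; i++) {
          this.x[i] += this.vx[i] * dt;
        }
      }
    }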
Yes. I currently have an FP system where in dev and test the interface kicks to an in-memory multi-state machine, and in lab and prod it goes to libvirt (which has costly network transactions). The modules implement the same API and are swapped out at compile time...
It gets even better: in test, the implementation partitions over checked-out sessions, so that acceptance tests can run concurrently, each test having its own view of the universe in spite of a single state agent.
What I am gathering from this is the author is displeased with the concept of bundling modeling of the application domain with the mutation of the state indicated by the same model properties. This is more-or-less the original ideal of OOP: Put all of your properties as well as the methods to act on those properties into a single, well-defined class.
In my experience as a C# developer, this is not an ideal approach. I assume non-OOP developers have a perspective that this is how we actually go about things. In reality, my strategy is generally to declare POCOs - Plain Old CLR Objects - which effectively just serve to model the business domain as collections of basic properties (basic data types, collections, enums, other model types) without any methods, constructors, attributes, etc.
If you look in the Models folder in any project I currently work on, you will not find a single method declaration in any of the class files. All of the functionality is broken out into crosscutting abstractions such as a Logic or Rules namespace. The general idea is this: Why should I build a model specifically for this one context of usage and have the properties tightly-coupled to their mutators, when I can completely decouple these things and have the models operate with any arbitrary state mutators? In my experience, the modeling of all possible business facts can be done completely independently of how those facts should mutate over time. Another thing to consider is that not all business facts are relevant at all times, but that doesn't mean you can't catalog them all within the same logical business model (e.g. a 'Customer.cs' POCO). Null is a very powerful tool if used responsibly.
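The shape, sketched in TypeScript as a stand-in for the C# layout (names hypothetical):

    // Models/Customer: plain data only, no methods (POCO-style). Null marks
    // facts that aren't relevant in the current context.
    interface Customer {
      id: string;
      name: string;
      creditLimit: number | null;
    }

    // Logic/CustomerRules: cross-cutting functions that work on any Customer,
    // no matter which datasource produced it.
    function canPlaceOrder(customer: Customer, orderTotal: number): boolean {
      return customer.creditLimit !== null && orderTotal <= customer.creditLimit;
    }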
Taking this a step further, you can have a business model project (class library/Nuget) that is shared across multiple projects within an organization. Since you have included no specific implementation in the model classes pertaining to how their properties are read/written, you can use these unencumbered in any situation. Mix in a little bit of JSON serialization and you get a really quick and easy way to distribute a common contract and interoperate between various business systems. These projects can also effectively serve as your principal documentation of the business domain model.
One thing I hate though, in a language like Java, is when I see "utility" classes with static methods which take objects as parameters, then perform some calculation based entirely on the state of the passed in object. In my opinion, if the object can reason about its own state and return an answer based on that reasoning, that method/logic should be in the object, not elsewhere.
Another one that bothers me is these transformation static methods which take type A and return type B based on nothing but the state of type A. The languages we're talking about, C#/Java, already provide a facility for this called a constructor.
If the development approach is going to completely remove operational methods from data types then I'd think long and hard about using a language which supports this instead of a language, like Clojure, which does not.
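To sketch the two styles side by side (TypeScript standing in for Java/C#, names hypothetical):

    // The object reasons about its own state:
    class Order {
      constructor(public lines: { price: number; qty: number }[]) {}
      total(): number {
        return this.lines.reduce((sum, l) => sum + l.price * l.qty, 0);
      }
    }

    // The "utility class" style I dislike -- same logic, but stranded
    // outside the object whose state it depends on:
    class OrderUtils {
      static total(order: Order): number {
        return order.lines.reduce((sum, l) => sum + l.price * l.qty, 0);
      }
    }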
> that method/logic should be in the object, not elsewhere
Only if it's involved in preserving some object invariants, or accessing parts of its state that are abstracted away in the public interface. Otherwise, you're breaking encapsulation by putting some logic in the object that doesn't belong there, and making it hard to change the implementation later.
> The languages we're talking about, C#/Java, already provide a facility for this called a constructor.
There are sensible reasons to avoid using constructors for this, at least in the general case.
> Only if it's involved in preserving some object invariants, or accessing parts of its state that are abstracted away in the public interface. Otherwise, you're breaking encapsulation by putting some logic in the object that doesn't belong there, and making it hard to change the implementation later.
I've been pondering this thread all day, and I think this is the crux of the issue. Ideally, classes would only encapsulate state and you'd use namespaces/modules to encapsulate functionality. But most Java/C# OOP examples use classes to encapsulate both state and functionality, which gets you stuck in the morass that the article discusses.
I think you made a good point: it is easy to shoot yourself in the foot with OOP. It is difficult to shoot yourself in the foot with FP. But when OOP is done correctly it offers useful features.
What you ask for is mostly impossible because you might not know about every possible object you'll be getting your construction params from... and besides, that's horribly coupled. What about circular dependencies?
C# helps you to some degree with extension methods so you get the same instance.Method() syntax.
> Why should I build a model specifically for this one context of usage and have the properties tightly-coupled to their mutators, when I can completely decouple these things and have the models operate with any arbitrary state mutators?
Because otherwise, your domain model is not able to express which states are valid. A core idea behind encapsulation is that by controlling state mutations, it is possible to enforce invariants, thereby making invalid states impossible to construct. This idea is not unique to object orientation; for instance, in the static functional programming community, this is known as "making illegal state unrepresentable" [1].
By creating a "dumb" domain model and allowing business rule modules to do arbitrary mutations, each module is required to know about and enforce the domain model's invariants itself, which dramatically increases the amount of code that can potentially break these invariants, and thus needs to be debugged if something goes wrong.
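A sketch of the difference (illustrative TypeScript):

    // "Dumb" model: nothing stops any module from writing balance = -50,
    // so every module must re-implement the invariant.
    interface AccountData { balance: number }

    // Encapsulated model: the invariant (balance >= 0) is enforced in
    // exactly one place; callers cannot reach an invalid state.
    class Account {
      private balance = 0;
      deposit(amount: number): void {
        if (amount <= 0) throw new Error("deposit must be positive");
        this.balance += amount;
      }
      withdraw(amount: number): void {
        if (amount <= 0 || amount > this.balance) {
          throw new Error("invalid withdrawal");
        }
        this.balance -= amount;
      }
    }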
> Taking this a step further, you can have a business model project (class library/Nuget) that is shared across multiple projects within an organization.
This can work in some situations, but there are good reasons not to do this. Different teams may have differing and sometimes conflicting meanings for some of the terms of the domain (especially for generic concepts such as "user") as well as different data needs depending on the use case. This is why, for instance, Eric Evans’ "Domain-Driven Design" [2] argues for limiting domain model unification to "bounded contexts" [3], of which there may be multiple in the organization, and having explicit translation ("anti-corruption") layers between the boundaries of these contexts.
The problem is that people never properly understood which operations were supposed to go on objects, and they just started accumulating everything that was even loosely related. My favorite example is registering for university courses. So you have a Student object and a Course object... Which one does the register function go on?
Objects should only contain those methods which are required for the invariants of that object. A course registration function is a workflow that involves two types, but does not control their invariants, so it belongs on neither. Most business logic falls into this bucket, and that bucket really works best as a functional paradigm where one can freely recombine the logic to create new workflows.
(The worst answer I've seen is that both end up with a register method, with one just reversing the parameters. So Student.register(Course) just calls Course.register(Student).)
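For what it's worth, the free-function version is short (hypothetical types):

    // The registration workflow touches both types but owns neither's
    // invariants, so it lives in neither class.
    interface Student { id: string }
    interface Course { id: string; seatsLeft: number }
    interface Registration { studentId: string; courseId: string }

    function register(student: Student, course: Course): Registration {
      if (course.seatsLeft <= 0) throw new Error("course is full");
      return { studentId: student.id, courseId: course.id };
    }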
There are well-defined patterns for such behavior. Domain services are introduced to protect invariants _between_ entities. Implementation is simple really. Student “fills out course registration form” (VO) and then Course “completes registration with filled out registration form”. Essentially the behavior of registering for the course is distributed between Student and Course as appropriate.
The point I’m trying to make is that the problem is not inherent in OOP. The problem is that domain modeling is hard. If a developer cannot synthesize the above “solution”, I’m not sure a different paradigm is going to help.
And why does a Course register a student? Why does it know what a "course registration form" is? None of these things have anything to do with the Course object or its invariants... The Course is just one of many parameters into the workflow. Why is it therefore responsible for the actual act of registration?
In real life, the aptly named "student registrar" is responsible for performing the task. But they're not an entity within this system... Rather they are the context in which this system is running. So a non-bound function actually makes perfect sense... It's bound to the top-level registrar context of the system.
In BPMN, the registrar would be one swimlane, and the registration the process, with additional swimlanes for other actors like the student. The registrar still owns the process, but BPMN allows for other actors to also play parts within the system.
A relational view would introduce a Registration which joins a Course and Students. The workflow would create a Registration, which de facto means that an individual Registration cannot be responsible for executing this.
> Most business logic falls into this bucket, and that bucket really works best as a functional paradigm
There is nothing I can see here that calls for FP. You are still mutating the state of the world.
Just use a free-standing function. That's like a method on "The world", which is the object where we should put most functions. Programming is easy. (Not: all the technical decisions behind it.)
I explicitly call out the reasoning for using functional concepts here: recombination into new workflows. For instance, a student may only be able to register for 15 credit hours or less. However, an advisor can register a student for up to 20 credit hours. If done properly, this should be a very simple change within the workflow... Just swap out the verification function that's composed into the workflow.
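Roughly (names are mine):

    // The credit-hour policy is just a function composed into the workflow,
    // so swapping limits is a one-line change.
    type CreditCheck = (hours: number) => boolean;

    const studentLimit: CreditCheck = (hours) => hours <= 15;
    const advisorLimit: CreditCheck = (hours) => hours <= 20;

    function registerForHours(check: CreditCheck, hours: number): string {
      if (!check(hours)) throw new Error("credit limit exceeded");
      return `registered for ${hours} hours`;
    }

    registerForHours(studentLimit, 12); // fine
    registerForHours(advisorLimit, 18); // only fine under the advisor policy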
I still don't see what's functional about this. Functional is about higher-order functions, currying, and so on. I'm not positive most business logic profits from that. What you describe can be achieved by swapping out a number (or more generally, data).
Of course, we can come up with a scheme that requires two validation functions that differ by more than just some number's value. In that case it might be implemented by passing the function as a parameter. But it could also be implemented with a switch, for example, and often that approach is more maintainable. And then, you can also have pointers to functions in conventional programming languages. In general, the need for functional approaches is vastly overblown.
> I still don't see what's functional about this. Functional is about higher-order functions, currying, and so on. I'm not positive most business logic profits from that. What you describe can be achieved by swapping out a number (or more generally, data).
And a discussion about composing functions is a discussion about higher-order functions. Why is that not FP?
I mean, it all boils down to assembly in the end... Why not just do everything in assembly?
I'm finding little to your arguments other than what seems to be a pre-existing bias towards functional approaches being "overblown", along with various alternatives, which of course exist, because every programming paradigm is capable of implementing arbitrary logic.
Some abstractions will help you do certain things. If you find yourself constantly duplicating and tweaking the same axiomatic logic to do different things, functional approaches are generally a more helpful abstraction than others.
I can give you a good reason why not to do assembly. It requires you to allocate storage for data by constantly swapping it in and out of a fixed number of places. It binds you to minute details of a specific architecture that don't matter in 99% of the cases. It doesn't have a facility for subroutine calls (one of the most successful abstractions for sure), requiring the programmer to redundantly encode a calling convention at each call site, which also affects register allocation. It doesn't allow you to give descriptive names to local variables.
I'm not positive that functional approaches can bridge a similarly distinctive gap. But I know that it's easy to get tempted down rabbitholes where we constantly restructure programs without measurable benefit, or make abstractions that we regret later on.
Instead, it rather seems to me like mainstream imperative languages adopt a few simple features where they make sense (or not). For example, sometimes it's nice and elegant to look up items using a predicate function, although usually I'll actually prefer the explicit for loop or such (but yeah, YMMV).
One more problem I see is as follows. It seems to me that assembly->procedural (compiled) languages is a step that improves encoding efficiency in a very "local" way. It's still pretty easy to contain and control these abstractions and make different decisions in other places, and still have the places interact easily enough. With more advanced systems, I'm not so sure. There are so many general, far reaching assumptions how computation should be done (language runtime, code generation, etc) that go beyond the mere constraints of the computer's architecture. I think these lead to a lot of isolation that is not beneficial.
> If you find yourself constantly duplicating and tweaking the same axiomatic logic to do different things,
Point is, I really don't. (I'm not sure what "the same axiomatic logic" is, but I don't find myself duplicating a lot of things). Actually, it often seems to me things are much easier and less redundant if we can just focus on the data and not get a type system or similar in the way. Of course there might be problem domains where things look a little different than I've experienced.
This is the antithesis of OOP. Behavior should be modeled with data, not around data. Given a new use-case (some mutation of state), how can you be sure that an invariant is not bypassed if invariants are modeled at the level of each use-case? Any solution to the above begs the question, as all you will have done is distribute your “domain object” across the filesystem. Your system would be better served by simply putting all the rules and objects into a single file, which brings us back to the original claim.
I really like the notion of separating essential complexity from accidental complexity. You can make brilliant OOP with essential complexity.
Another thing I like is to hide accidental complexity behind a preferred convenience while exposing an API that allows a power user more fine-grained control over that object's accidental complexity.
I also think OOP and FP is a false dichotomy as I use objects functionally every day. And it rocks.
A paradigm/framework analogy: cars are great, but you probably don't need a car ferry just to cross the water. And sometimes a bike is better than a car in a car trailer.
Furthermore, a car necessarily encapsulates its passengers while transporting them from one state to another and may allow one or more passengers to temporarily un-encapsulate while parked at a rest stop along an interstate highway. The car as an encapsulation of passengers has come under criticism by some for allowing this temporary un-encapsulation; however, others consider this a positive attribute and note the behavior is typically only observed shortly after crossing state boundaries and for a limited duration relative to the total time of encapsulation.
OOP is essentially relational programming. It programs relations among objects. The encapsulation should be on the logic of relationship.
Transport is a relationship between passenger, vehicle and state. Some passengers travel by car, others by foot. It's the Transport's responsibility to update the states of the passenger and the vehicle.
Encapsulation of entities alone is sometimes not enough to make a good OOD.
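For example, a sketch of encapsulating the relationship itself (hypothetical names):

    interface Passenger { location: string }
    interface Vehicle { location: string }

    // The relationship object owns the logic of moving both parties
    // together; the entities themselves stay simple.
    class Transport {
      constructor(private passenger: Passenger, private vehicle: Vehicle) {}
      moveTo(destination: string): void {
        this.vehicle.location = destination;
        this.passenger.location = destination;
      }
    }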
Anybody claiming something in programming (lib, programming language, or some other larger concept or abstraction) "gets it 100% right" is probably wrong.
> it doesn’t even make sense: objects need to know which other objects to send messages to, and that means they need to hold references to one another.
Hmm...if your model of OO is one where it doesn't make sense for objects to hold references to each other, then your model of OO is clearly mistaken.
And of course the "origin story" in the Brian Will post is so completely wrong that it doesn't even work when taken somewhat tongue-in-cheek.
Not that there isn't anything to critique in OO; I personally do think it is fatally flawed, but it is the best flawed approach we currently have. However, I think I'd rather read critiques of OO by authors who have actually understood OO.
> The fact that a database query requires a database connection never was an implementation detail. If something can’t be hidden, it’s saner to make it explicit.
This is why I love Lisp’s dynamic variables. It’s the best of both worlds.
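Emulated in TypeScript, a sketch of the dynamic-binding idea (not Lisp's actual mechanism, and not safe across async boundaries):

    interface Connection { query(sql: string): string[] }

    // The "dynamic variable": bound for the dynamic extent of
    // withConnection, restored on exit like unwind-protect would do.
    let currentConnection: Connection | null = null;

    function withConnection<T>(conn: Connection, body: () => T): T {
      const saved = currentConnection;
      currentConnection = conn;
      try {
        return body();
      } finally {
        currentConnection = saved;
      }
    }

    // Callees use the connection implicitly, yet the binding is explicit
    // and visible at the call site.
    function findUser(id: string): string[] {
      if (!currentConnection) throw new Error("no connection bound");
      return currentConnection.query("select * from users where id = ?");
    }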
Really good points made by this article, IMO. I feel like a lot of OO code problems can be the result of misuse of encapsulation. A private member of a class that mutates is rarely meant to be encapsulated; it is just a property of the class.
They couldn't have picked an OOP language people actually use in the industry? I almost feel like they wanted to show off their favorite programming language.
[1] https://www.youtube.com/watch?v=QM1iUe6IofM