
Every objects-first codebase I've seen was terrible. OOP survived mostly because people push hard for it: they think there must be value in overly taxonomic code, but in the end they never seem to get value out of it, only more and more incompatible objects (when I hear "mock object" it's time to run).

In OOP >50% of the LOC is just stupid bureaucrazy: setting up object graphs in the name of "isolation" (the irony), half-initializing fields, conforming to the right interfaces, etc. This is completely meaningless, do-nothing code. Worse, it gives the illusion of removing some contextual dependencies, but the code never seems to work outside of the context it was created in. It's just much harder to read, because the context is files away.

OOP is the wrongheaded idea that a program should be a bundle of many "self-contained" objects. But we're writing ONE program here, not thousands. It then tries to repair this mistake with inheritance (which is at least as bad an idea).

And it makes it really hard to cope with "cross-cutting concerns", which are actually 90% of what we care about, not just a side concern. The complexity is in the edges (i.e. how information is moved and transformed), not in the objects!

OOP mostly survived where performance / architectural scalability is not super important (e.g. Python or similar scripting languages, where it enables dynamic typing). And it survived where the big money is, but not necessarily technical competence (where it enables object.verb-style code completion).

That relates to OOP as in languages like Java - not Alan Kay's idea of OOP, which he emphasizes was very different, but I still don't get what the idea is :p

> A large majority of the code running on our planet today is OOP.

Good example code base?

> It's pretty much the only software paradigm that's survived for that long.

Maybe check your history? Many people are totally happy with procedural programming.




> when I hear "mock object" it's time to run

Do Haskell programmers not create mocks to test external components?

> OOP is the wrongheaded idea that a program should be a bundle of many "self-contained" objects. But we're writing ONE program here, not thousands.

The number of programs isn't the relevant metric. Complexity is. Any complex system is going to trend toward modularity. Modularity requires standard interfaces, which inevitably lead to bureaucracy.

A 1MM line Haskell program is going to be similarly bureaucratic. There are going to be standards you have to adhere to in order to play nice with the rest of the system. That's what typeclasses are, after all.

OOP is traditionally defined by three things: polymorphism, encapsulation, and inheritance.

Polymorphism: Modern non-OOP languages can also be polymorphic, so that's no longer a differentiator.

Encapsulation: You definitely want encapsulation if your data is mutable.

Inheritance: This is the only truly problematic feature, and it's certainly abused, but it has its place. I don't always want to compose and delegate 20 methods when I just want to change the behavior of one.
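
For example (a hedged sketch with a hypothetical Store type, just to make the trade-off concrete):

    class Store {
        String get(String key) { return "db:" + key; }
        void put(String key, String value) { /* write through */ }
        boolean contains(String key) { return true; }
        // ...imagine 17 more methods here
    }

    // Inheritance: override the one method you care about, done.
    class LoggingStore extends Store {
        @Override String get(String key) {
            System.out.println("get " + key);
            return super.get(key);
        }
    }

    // Composition: the one interesting method, plus hand-written
    // forwarding for everything else.
    class LoggingStoreByDelegation {
        private final Store inner = new Store();
        String get(String key) {
            System.out.println("get " + key);
            return inner.get(key);
        }
        void put(String key, String value) { inner.put(key, value); }
        boolean contains(String key) { return inner.contains(key); }
        // ...and so on for the rest
    }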


> Polymorphism: Modern non-OOP languages can also be polymorphic, so that's no longer a differentiator.

Haskell had ad-hoc polymorphism way before Java was a twinkle in its creator's eye. Before Haskell, Miranda (the language Haskell was based on) could have kicked Java's polymorphism to the curb. Neither Java nor OOP invented polymorphism. If anything, they butchered it by introducing subtyping.

> Do Haskell programmers not create mocks to test external components?

The equivalent in Haskell would be having some kind of 'effects' system. An effect system differs from a mock object in that it limits, in its totality, what kinds of interactions can take place. Typically, each layered effect also has a set of laws. Pure interpreters can be written for these effects, but the impure (i.e., real-world) interpreters are not privileged in their consumption of this effect. The pure interpreter also provides a proper implementation, such that you should be able to replace your real program with all pure interpreters, supply all your input at once, and still have a correct program. In other words, a Haskell program is typically polymorphic over which effects it uses in a way that programs in other languages simply aren't.
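
A very loose Java-flavored analogue (hypothetical names, and it captures only the "swap in a pure interpreter" part, not the laws or the totality guarantees): write the program against an effect interface and let tests interpret it purely.

    import java.util.ArrayList;
    import java.util.List;

    // The "effect": the only console interaction the program may perform.
    interface Console {
        void putLine(String s);
    }

    // Impure (real-world) interpreter.
    class RealConsole implements Console {
        public void putLine(String s) { System.out.println(s); }
    }

    // Pure interpreter: output becomes a plain value a test can inspect.
    class PureConsole implements Console {
        final List<String> lines = new ArrayList<>();
        public void putLine(String s) { lines.add(s); }
    }

    // A program written only against Console runs unchanged under either.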

> Encapsulation: You definitely want encapsulation if your data is mutable.

Again, Haskell, Miranda, and Lisp had encapsulation long before OOP came about, and Lisp has mutable data.


I think we're in violent agreement here. AFAICT, the feature sets of OOP and non-OOP languages have converged so much that inheritance is really the last differentiator. Maybe you could throw dynamic dispatch in there, but there's no reason in principle a non-OOP language couldn't add dynamic dispatch.


Mutability is a huge differentiator for some languages.

And dynamic dispatch is already part of Smalltalk and Objective-C, I believe.


Sure, but mutability is orthogonal to OOP.


As fond as I am of Erlang, I still find it hard to picture an immutable OO language.


> I still find it hard to picture an immutable OO language.

Picture an object-oriented procedural language like Java, and then make everything immutable: for every 'void' method, return an updated copy of 'this'; for every non-void method, return a tuple of the updated copy and the value you were going to return. No change to const methods, obviously. And you're done; you can still take advantage of encapsulation, polymorphism, and inheritance without any hoops. You can even do it in Java itself as a style thing without too much effort and only a modest amount of boilerplate. Alternatively, it's not that much work to build this same thing out of lambdas and dictionaries, if your language has those but not objects (adding mutation to that object system would then be trivial; and of course you can build dictionaries out of lambdas, and lambdas out of objects, if need be).
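
A minimal sketch of that style (hypothetical Counter class; the Java 16+ record just stands in for the tuple):

    // Immutable style: every "mutating" method returns an updated copy.
    final class Counter {
        private final int value;
        Counter(int value) { this.value = value; }

        // was: void increment()
        Counter increment() { return new Counter(value + 1); }

        // was: int incrementAndGet() - non-void, so return (copy, result)
        record Result(Counter counter, int value) {}
        Result incrementAndGet() {
            Counter next = increment();
            return new Result(next, next.value);
        }

        // const method: unchanged
        int get() { return value; }
    }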


Write in Scala using only "val"s and there you have it!


Not quite: you can still have a val that points to a mutable object.


> Do Haskell programmers not create mocks to test external components?

Do we create test substitutes, alternate implementations of the same interfaces? Yes. But dedicated mocking frameworks are crazy. In Haskell-like languages, if you want an implementation of interface foo that returns bar when called with baz, you just... write an implementation of interface foo that returns bar when called with baz. If the easiest way to do that in your language is some kind of magical reflection-based framework, something is very wrong with your language.


> If the easiest way to do that in your language is some kind of magical reflection-based framework, something is very wrong with your language.

Languages have strengths and weaknesses. Certain tasks are easy in some languages, and certain other tasks are not. Throwing one's hands up and saying "something is very wrong with your language" because one is not familiar with a technique or tooling popular in another language is immature, IMO.

For example, explain how Debug.Trace works in Haskell to programmers familiar with Java, and they'd call it crazy.


> Languages have strengths and weaknesses. Certain tasks are easy in some languages, and certain other tasks are not.

Agreed, but we mustn't fall into the fallacy of assuming that means no language can ever be better or worse than another. There are good and bad language design choices, and "an implementation of interface foo that returns bar when called with baz" is not some obscure specialized feature, it's the basics of general-purpose programming.

> Throwing one's hands up and saying "something is very wrong with your language" because one is not familiar with a technique or tooling popular in another language is immature, IMO.

I'm very familiar with the techniques and tooling of mocking frameworks. I do not make these claims lightly.


> Agreed, but we mustn't fall into the fallacy of assuming that means no language can ever be better or worse than another. There are good and bad language design choices.

Agreed.

> "an implementation of interface foo that returns bar when called with baz" is not some obscure specialized feature

It isn't some obscure specialized feature in Java either.

    Foo foo = new Foo() {
      public Bar method(Baz baz) {
        return new Bar("bar");
      }
    };
What mocking frameworks do is provide a DSL to describe the behavior of such implementations, use dynamic bytecode generation (not reflection, BTW) to create implementations of the interfaces dynamically, and bind them to simulate various test conditions. What makes the language "worse" for requiring or allowing this?

My Haskell is rusty, but given Haskell pseudocode like:

    main :: IO ()
    main = do
      f <- foo
      if f == 1
        then putStrLn "Got 1"
        else putStrLn "Didn't get 1"

    foo :: IO Int
    -- ...
how would you test that the two branches of main behave appropriately?

This is not a snark; I am truly interested to know how Haskell gets rid of the need to bind alternate implementations of an interface for testing purposes.


> It isn't some obscure specialized feature in Java either.

To the extent that that's true, fine. I certainly see a lot of developers writing something along the lines of:

    Foo foo = mock(Foo.class);
    when(foo.method(any())).thenReturn(new Bar("bar"));
instead of that, not because they need any of the mocking features as such but because it takes up fewer lines on the screen, particularly when there are more methods in the interface. (Partly a cultural problem of having overly large interfaces rather than a language problem per se, perhaps).

> use dynamic bytecode generation (not reflection, BTW)

How is it not reflection ("the ability of a computer program to examine, introspect, and modify its own structure and behavior at runtime")?

> What makes the language "worse" for requiring or allowing this?

Reflection or code generation means stepping outside the language and its usual guarantees - any time the programmer is forced to do it, it's because the language didn't provide a good way to solve the problem within the language itself. It means you can no longer e.g. extract common expressions, because they don't necessarily mean the same thing; if you have some common mock setup code you can't just blindly automatically extract method, you have to think carefully about when the mocks get instantiated and when the expectations are set.

> how would you test that the two branches of main behave appropriately?

> This is not a snark; I am truly interested to know how Haskell gets rid of the need to bind alternate implementations of an interface for testing purposes.

It doesn't - as I said, you still write test implementations of your interfaces. What it does remove the need for is mocking frameworks, which people use in e.g. Java either because implementing the interface the normal way in the language is more effort (not a problem in Haskell), or because they want to test the specific interactions with the object (e.g. "verify that method foo was called twice") because those methods are used for side effects.

Haskell avoids that one by making it easier to represent actions as values; you can use e.g. a free monad to represent actions that will be performed later, so rather than testing that your complex business logic method called deleteUser(userId) on your mock, you instead test that it returns a DeleteUser(userId) value. To a certain extent you can do this in Java too ("the command pattern"), but without higher-kinded types you can't have a standard implementation of e.g. composed commands or standard methods for working with them, so it gets too cumbersome to really do in practice.

Even in Java you wouldn't want to use mocks for testing methods that operate on simple datatypes: to test e.g. a regex find method, you wouldn't pass in mock strings, you'd just pass in real strings and confirm that the results were true or false as expected. A language like Haskell just expands the space of what you can test in the same these-inputs-these-outputs way, by making it easier to represent more things as values.
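
A rough Java sketch of the actions-as-values idea, via the command pattern mentioned above (hypothetical names; the free-monad version composes better, which is the point about HKT):

    import java.util.List;

    public class CommandExample {
        // The action is a plain value, not a call on a collaborator.
        record DeleteUser(long userId) {}

        // Business logic returns descriptions of effects instead of
        // performing them.
        static List<DeleteUser> purgeInactive(List<Long> inactiveIds) {
            return inactiveIds.stream().map(DeleteUser::new).toList();
        }

        public static void main(String[] args) {
            // The test is plain inputs -> plain outputs; no mock, no verify().
            boolean ok = purgeInactive(List.of(7L))
                    .equals(List.of(new DeleteUser(7L)));
            System.out.println(ok ? "ok" : "FAIL");
        }
    }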


> not because they need any of the mocking features as such but because it takes up fewer lines on the screen

Why is this a bad thing? How is this different from, say, using Template Haskell?

> It means you can no longer e.g. extract common expressions, because they don't necessarily mean the same thing; if you have some common mock setup code you can't just blindly automatically extract method, you have to think carefully about when the mocks get instantiated and when the expectations are set.

I have never found the existence of tests using mocks being a hindrance to refactoring in Java. Can you provide a more specific example?

PS:

> How is it not reflection ("the ability of a computer program to examine, introspect, and modify its own structure and behavior at runtime")?

Agree that it is reflection, by that definition.


> Why is this a bad thing?

Because it shows the language could be a lot better. A common, basic task shouldn't be so much easier outside the language (via reflection) than inside it.

> How is this different from, say, using Template Haskell?

It's the same thing. To the extent that people feel the need to use Template Haskell to do basic and common things, something is very wrong with Haskell.

> I have never found the existence of tests using mocks being a hindrance to refactoring in Java. Can you provide a more specific example?

I mean you can't refactor the test itself. Just basic things: if you do expect(methodOne(matches("someComplexRegex"))); expect(methodTwo(matches("someComplexRegex"))) and then try to pull matches("someComplexRegex") out as a variable, you'll break your test (you have to make it a function instead). You can't move an expect() above or below another method call without checking to see whether that was the method it was testing. Individually these things are trivial, but they add up to a chilling effect where people don't dare to improve mock-based tests a little as they work on them, so they end up as repetitive code with subtle variations, just like main code would if you never refactored it.
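
A toy model of that hazard (hedged: no real framework here, just the shape of it; EasyMock-style matchers record themselves somewhere shared as a side effect of each call):

    import java.util.ArrayList;
    import java.util.List;

    // Toy stand-in for a mocking framework's matcher machinery: matches()
    // is not a pure function; each *call* records a matcher on a shared
    // stack the framework consumes during expectation setup.
    public class MatcherHazard {
        static final List<String> recordedMatchers = new ArrayList<>();

        static String matches(String regex) {
            recordedMatchers.add(regex);   // side effect per call
            return regex;
        }

        static void expect(String matcher) { /* stands in for expect(method(matcher)) */ }

        public static void main(String[] args) {
            // Original test: two calls, two matchers recorded.
            expect(matches("someComplexRegex"));
            expect(matches("someComplexRegex"));
            System.out.println(recordedMatchers.size());   // 2

            // "Refactored" test: the extracted variable calls matches() only
            // once, so only one matcher is recorded and the framework's
            // bookkeeping breaks.
            recordedMatchers.clear();
            String m = matches("someComplexRegex");
            expect(m);
            expect(m);
            System.out.println(recordedMatchers.size());   // 1, not 2
        }
    }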


> Individually these things are trivial, but they add up to a chilling effect where people don't dare to improve mock-based tests a little as they work on them

In my experience, I have not come across such effects. People understand the purpose, strengths, weaknesses and limitations of the libraries they use and try not to "cut against the grain".

> people don't dare to improve mock-based tests a little as they work on them, so they end up as repetitive code with subtle variations, just like main code would if you never refactored it.

I understand this is a subjective preference, but I try not to refactor test code too much. I strive to make my test code not have branches ("if-less code", as some people call it). Sometimes this leads to slightly more verbose code, but in the long run I have found it useful for my test code to be rather boring.

----

I now understand the point you are making, and agree with it technically. I don't agree that those technical points lead to the social effect you call out, because I have not come across it.

Overall, Java makes two bad design choices - nullability by default, and mutability by default. But in the codebases I have worked with in the last few years, my colleagues and I tend not to opt in to these defaults. This leads to pleasant, testable codebases to work with. We also enjoy acceptable performance, good tooling, easy-to-reason-about memory usage, a great library ecosystem, etc.


> Overall, Java makes two bad design choices - nullability by default, and mutability by default.

There are a few more, even today: using a weird secondary type system to track what kind of errors can occur (checked exceptions), classes being non-final by default, universal methods (every method in java.lang.Object except possibly getClass() ought to be moved to interfaces that user-defined types have the choice of not implementing), a bunch of syntactic ceremony around blocks (braces required everywhere, "return" being mandatory) which gets even worse once you want to move away from mutability by default, variance at use site only, no sum types, no HKT...

> This leads to pleasant, testable codebases to work with. We also enjoy acceptable performance, good tooling, easy-to-reason-about memory usage, a great library ecosystem, etc.

Sure. There are a lot of good things about the Java ecosystem, and if you see the language as a modest, incremental step over C++ then it is an improvement on that front at least. At the same time I do think ML-family languages - even ML itself - offer a lot of advantages especially if we're talking about them just as languages. In practice I work in Scala and gain most of the advantages of the Java ecosystem but with a language that has most of the advantages of Haskell as well.


> That relates to OOP as in languages like Java - not Alan Kay's idea of OOP, which he emphasizes was very different, but I still don't get what the idea is

It's bad of me to react to a single poster, but I've seen many similar well-reasoned arguments about why OOP is bad that refer to something I would not call OOP -- I would just call it bad programming.

To turn it on its head: there is a lot of terrible code out there. I wouldn't say that code whose authors believe it is OO is measurably worse than any other code I see. What I agree with is that it also isn't any better (which is often a surprise to those authors). I've seen some good OO code. I've seen some good procedural code. I've seen some good functional code. I've seen some good declarative code. But I've mostly seen terrible examples of all of them :-)

One of the very unfortunate things that happened in the late '80s and '90s was the idea that we should write self-contained components that we would somehow plug and play all over the place. Despite the many, many horrible systems we wrote like that, the idea refuses to die. Some people believe that this is OO. They are wrong :-)

We have exactly the same problem with "unit testing". Some people mistakenly believe that a "unit" is a piece of code taken in isolation. Then they think, "Hey... it's isolated... what else is isolated? Oh yeah! Objects!" So an object becomes a "unit" and it's tested in isolation... How do I isolate it? Oh yeah! I'll make fake objects for it to talk to.

Yeah, it's a tyre fire. But even though it is popular and even though it is popularly called OO, I think it's a bit unfair. It would be like saying, functions are procedures that return values so the only difference between functional programming and procedural programming is that you always have to return a value. Yeah, it's super wrong, but it's so simple that you could convince a whole bunch of people it's true and then complain about how useless FP is.

I realise that you realise that Alan Kay's idea of OOP is different, but the key part is "I still don't get what the idea is". When you do find out, it would be nice to hear your reaction. Otherwise it's really just a strawman rant about how terrible mainstream programmers are (and we are.. terrible, that is ;-) ).


No, not a strawman at all. It's a rant about what bad qualities come from commonly accepted methodologies that fall under "OO". If you think that's not the common understanding of OO, you should state what you think it stands for instead and why you think it's still a good idea. You didn't do that.

And pretty much nobody really understands what exactly Alan Kay's vision is, simply because it's rather vague and somewhat removed from practical reality. He seems to want "extreme late binding" (which need not be bad) and asynchronicity (the Actor Model? that can remove control and thus be problematic). He also has some vague idea of extreme parallelism that seems very far removed from computational reality today. I think it's more a philosophical idea of how the physical and biological world could be seen as concurrent processes. I don't want to say his are bad ideas, and he is obviously an extremely intelligent and educated guy, but at the same time he does not seem like an experienced programmer from a practical standpoint - more of a visionary. So that's my idea, and if you have a different one, feel free to head over to the C2 wiki to convince yourself that most people don't really know what he means.


Can I please upvote this 100x? The "is-a" relationship is so much encapsulation gone wrong. Trying to encapsulate functionality in an object cannot represent the multitude of things you want to do with it. Yet an amazing number of even CS grads religiously repeat OOP patterns.


>> A large majority of the code running on our planet today is OOP.

>Good example code base?

The majority of the code running Amazon the retail site and AWS. Large swathes of Microsoft and Google online services. Most of the code running bank back offices and many trading systems. Likely 90%+ of the line-of-business applications run by the Fortune 500. Most desktop GUI programs. Should I keep going?


But was that because of OOP? Or in spite of it? There are many reasons OOP is used in all of these systems which aren't related to its technical "superiority".

If popularity is a valuable metric, then javascript is probably the greatest language ever invented (sigh).


>> A large majority of the code running on our planet today is OOP.

> Good example code base?

It would be easier to give examples of non-OOP crucial programs, for they are far fewer.

OOP codebases:

-- Almost all major AAA games

-- Almost all GUI systems

-- All major browsers

-- Almost all video editors (NLEs)

-- Almost all audio DAWs

-- All of Adobe's suite

-- Almost all 3D and CAD tools

-- All office suites (MS Office, OpenOffice, iWork)

-- All major IDEs (Visual Studio, IntelliJ, XCode, Eclipse)

-- Most of the Windows and OS X standard libraries

-- Clang (and GCC now that it moved to C++? Not sure if they use OOP)

-- the JVM

need we go on?


> Almost all major AAA games

Nah. As far as I hear, most of them moved away from OOP a long time ago. "Component systems" have been hot for more than five years.

> Almost all GUI systems

Come on, what a GUI does vs a non-GUI "realtime" app is pretty trivial. A bit of layouting, routing a few input events. I'm not saying that e.g. building a scene graph or box model would be the wrong thing (there are other ways as well), but yeah... I certainly don't think that inheritance makes GUIs easier. Interfaces/Function pointers/dynamic dispatch? Yes, you might want that for decoupling a few containers of heterogeneous data/behaviour. But that's hardly a monopoly of "OOP", and also you don't need that in most places.

The other projects, I don't know them. Again, I'm not against abstractions per se (the Stream abstraction is one I regularly use, although it does require unwrapping for proper error handling; and I have a constant number of "objects" in each of my programs, namely modules with global data). But OOP culture, especially the objects-first mentality and fine-grained data encapsulation, gluing implementation to pristine data - I believe it does only harm and leads to plain wrong program structure and overcomplicated code to jump into all these boxes. I prefer gliding through arrays :-)


>Nah. As far as I hear, most of them moved away from OOP a long time ago. "Component systems" have been hot for more than five years.

Nope, that's just hype. Good ole C++ still rules the day, except for specialized (and smaller in scope and speed needs) games.

>Come on, what a GUI does vs a non-GUI "realtime" app is pretty trivial.

You'd be surprised. A GUI is not just "call x program with some flags", as you might believe.

Something like an NLE, for example, or even an IDE, can have GUI needs that go far beyond hundreds of thousands of LOC...

(Not to mention that I was referring to GUI libraries themselves, complex beasts on their own, not GUI code as used by applications to build their GUIs).


> Nope, that's just hype. Good ole C++ still rules the day, except for specialized (and smaller in scope and speed needs) games.

Sure, it's C++, and in my perception much C-style C++. And C++ != OOP.

> (Not to mention that I was referring to GUI libraries themselves, complex beasts on their own, not GUI code as used by applications to build their GUIs).

I will admit I haven't written an NLE program, but integrating complex logic with a complex and hard-to-understand GUI framework like Qt, meaning there are a lot of states to synchronize, is a lot of inessential complexity. I'm pretty sure it's much easier when you only use GUI primitives and do most of the coding on your own. E.g. a standard box-layouting and event-bubbling algorithm can't be that much work, and it's MUCH MUCH easier to do it on your own and choose the appropriate structure, instead of writing many helper classes trying to bend the rigid framework.

Taking the example of a NLE program - it has lots of domain-specific state which you must absolutely understand if you want to write such a program. And you absolutely must have a vision how this state should be reflected on the screen. Choosing how to do it is a lot of work. Actually drawing it should be the smaller amount of work, by far. Just separate the state from the GUI library. If you scatter it over thousands of objects, inheriting implementations that you don't own and don't understand - well, of course! that's really hard.


>Taking the example of a NLE program - it has lots of domain-specific state which you must absolutely understand if you want to write such a program. And you absolutely must have a vision how this state should be reflected on the screen. Choosing how to do it is a lot of work. Actually drawing it should be the smaller amount of work, by far. Just separate the state from the GUI library. If you scatter it over thousands of objects, inheriting implementations that you don't own and don't understand - well, of course! that's really hard.

My point was rather that just the essential GUI functionality and interactions for such a program can be very complex.

In other words, when to show this or that, how to structure code to show it fast, how to present it best, etc. is also essential logic (not some afterthought on top of the domain logic), and it can also be super-hard to code (depending on the kind of program).


> how to structure code to show it fast

It's pretty trivial, or at least not any sort of GUI-specific hard problem. There is only a small number of objects on any given screen. Don't rasterize pixel-by-pixel on the CPU, of course.


It's only pretty trivial if you're doing an app that has some forms and does some CRUD and so on.

For anything above that, it can get hairy real fast.


>> Nope, that's just hype. Good ole C++ still rules the day, except for specialized (and smaller in scope and speed needs) games.

> Sure, it's C++, and in my perception much C-style C++. And C++ != OOP.

Sorry, meant to write "good old OOP still rules the day".


> Every objects-first codebase I've seen was terrible.

Does this include Smalltalk and CLOS codebases?


No. I don't know these languages, but color me sceptical.


You should take a good look at CLOS; you'll be colored rainbow-happy instead.


> Alan Kay's idea of OOP, which he emphasizes was very different, but I still don't get what the idea is :p

I think his idea of OOP is something very close to what we call actors these days.

It's a fractal design where it's 'computers' / objects communicating with messages all the way down.



