Hacker News

Out of curiosity, could you share these weaknesses of functional programming and how OOP solves them? Or at least point me to some material that discusses them? I would be very interested. I have programmed in OOP for over a decade and just picked up functional programming in the last 2 years so do not have enough experience with FP to make any definite conclusions yet (although I am really liking what I see so far). Thanks.



I wish I had a good handle on that. There is much to functional programming I need to digest yet. But the hints are there (I noticed this in the timeline of when these various concepts were played with and integrated into standard specs), and in my own programming, I have learned to use the hybrid effectively. What follows is purely my current opinion, and it should include some healthy ignorance. I am but an egg.

If I were to put a finger on it, I would say that the reason OOP arose was that there was general recognition of the need for a rich type system. I got a hint of this in the first couple parts of SICP. Haskell incorporates a kind of type system that is, itself, pretty interesting. CLOS/Moose makes C++ and Java feel antiquated.

In my experience, a lot of us who learned OOP via C++, Pascal, Java, Ada, and the like generally missed the point: type systems. We tend to treat classes as collections of language primitives rather than as combinations of data structures. Specifically, we make do with primitive data types and neglect to constrain them as a type. And "everything must be an object" is an extremist mantra with little value: given that your process image must have an entry point, which is purely a code-flow issue, was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?

Here is an OOP type example I see in the wild: an ID that is a string of four characters, a dash, and three digits tends to be stored as a plain string. This is incorrect; that string is not like other strings, and leaving it bare is blind faith that nobody will misuse it. Wrap it in a type that specifically checks that the ID fits that pattern. It is easy to fall into this kind of mistake in the database management world, too, where an ORM pulls in a VARCHAR2 and does not translate values of that column into an appropriate type in the system, perhaps a type with a restricted domain.
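To make that concrete, here is a minimal Java sketch. The class name ItemId and the exact pattern are my own invention (I am assuming the "four characters" are letters); the point is only that the constructor refuses anything that does not fit the format, so the rest of the program never sees a malformed ID.

```java
import java.util.regex.Pattern;

// Hypothetical wrapper type for an ID like "ABCD-123":
// four letters, a dash, three digits. Construction fails fast
// on anything else, so a valid ItemId is valid by construction.
final class ItemId {
    private static final Pattern FORMAT = Pattern.compile("[A-Za-z]{4}-\\d{3}");
    private final String value;

    ItemId(String value) {
        if (value == null || !FORMAT.matcher(value).matches()) {
            throw new IllegalArgumentException("not a valid ID: " + value);
        }
        this.value = value;
    }

    @Override
    public String toString() {
        return value;
    }
}
```

A method that takes an ItemId instead of a String then documents and enforces its own precondition for free.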

Another example: I have learned to always question whether my method really needs to accept an "int". This is a smell. Could my value be negative? What is the real min/max range of the value? Could the value have a magic value, such as "Number Not Given" (or NULL)?
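As a sketch of what answering those questions can look like in Java (Quantity is a made-up name, and the non-negative invariant is just one possible domain): the type states the real range, and an absent value is modeled explicitly rather than with a magic int.

```java
import java.util.Optional;

// Hypothetical example: instead of passing a bare int around,
// a small type that enforces the actual domain of the value.
final class Quantity {
    private final int value; // invariant: value >= 0

    private Quantity(int value) {
        this.value = value;
    }

    static Quantity of(int value) {
        if (value < 0) {
            throw new IllegalArgumentException("quantity must be >= 0: " + value);
        }
        return new Quantity(value);
    }

    int asInt() {
        return value;
    }
}
```

And "Number Not Given" becomes `Optional<Quantity>` at the call site, instead of a sentinel like -1 that every caller must remember to check.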

Functional programming helps us with code flow. OOP helps us carefully classify the data.


Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data, by conflating two separate issues: reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping). The only widely used OO language (for sufficiently narrow values of "wide" and wide values of "OO") to get that right used to be Objective Caml, and more recently its stepchildren F# and Scala. So it is actually FP that helps you with the classification.
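The classic Java illustration of that conflation (my example, not the parent's): subclassing Rectangle to reuse its fields also makes Square a subtype, even though a mutable Square cannot honor Rectangle's contract.

```java
// Subclassing-for-reuse forced into the subtype relation:
class Rectangle {
    protected int w, h;
    void setW(int w) { this.w = w; }
    void setH(int h) { this.h = h; }
    int area() { return w * h; }
}

class Square extends Rectangle {
    // Keeps the square invariant (w == h)...
    @Override void setW(int w) { this.w = w; this.h = w; }
    // ...but breaks Rectangle's contract that setW and setH
    // are independent, so Square is not a behavioral subtype.
    @Override void setH(int h) { this.w = h; this.h = h; }
}
```

Code written against Rectangle now silently misbehaves: setW(4) then setH(5) on a Square yields area 25, not the 20 that Rectangle's contract promises. The language accepts the subclass as a subtype anyway.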


This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference to reducing code duplication), and for clarity, I think you are referring to the definition of operators on data (class methods, friend methods, and so on). I agree with you that subclassing (for the purpose of reusing behavior), traits (for adding behavior), and the like can be confused with classification to such an extent that modern designs tend to depart from type systems and be used for mere code organization.


"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?"

Far be it from me to defend Java (I hate the damn thing), but: main is just a static method in a class. The class named on the command line is the entry point; main is simply the method the JVM looks for in that class, by convention. You could have a "main" in each class, but only the one in the specified class will be run at startup.
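A minimal sketch of that (class names A and B are arbitrary): both classes compile into the same program, and which main runs depends only on which class name you hand to the java launcher.

```java
// "java A" prints one line; "java B" prints the other.
// Each main is an ordinary static method until the JVM
// picks one as the startup entry point.
class A {
    public static void main(String[] args) {
        System.out.println("entered via A");
    }
}

class B {
    public static void main(String[] args) {
        System.out.println("entered via B");
    }
}
```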


The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation. Or to simply hand-wave the explanation away, claiming it is "too complex" to fully understand without years of rigorous training. Of course I jest. :)



