
OO vs. FP is just a matter of whether you focus on the nouns or the verbs. The counterargument to the OP is that surely a function that manipulates "data" is less powerful and abstract than one that manipulates objects.

For example, take an "interface" or abstract data type like Array, consisting of a length() and a get(i) method. (This is really called List in Java and Seq in Scala.) There may even be an associated type, A, such that all items are of type A. This is very powerful because functions written against the Array interface don't depend on the implementation; we can store the data different ways, calculate it on demand, etc.

The "binding together" Joe is complaining about is binding the implementation of length() and get(i) to the implementation of the data structure, which is surely understandable. The alternative, seen in Lisps and other "verb-oriented" languages, is that there is a global function called "length" which takes an object... er, a value... and desperately tries to figure out how to measure its length properly, perhaps with a giant conditional.
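The "giant conditional" style is easy to sketch. Here's a hypothetical Haskell rendering, where one global len function dispatches on the shape of the value (Value, Str, Vec, and len are invented names for illustration):

```haskell
-- A hypothetical closed universe of values; the constructors are
-- made up purely to illustrate the "verb-oriented" style.
data Value = Str String | Vec [Int] | Pair Value Value

-- The "global length function": one big conditional that inspects
-- the value and decides how to measure it.
len :: Value -> Int
len v = case v of
  Str s    -> length s
  Vec xs   -> length xs
  Pair _ _ -> 2
```

Adding a new kind of value means editing this one function, which is exactly the trade-off under discussion: the dispatch lives in the verb instead of the noun.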

The original OO (Smalltalk) was about message passing rather than abstract data types; just the idea that an object was responsible for responding to certain messages, and that these communication patterns completely characterized the object. This is how we think about modern cloud services, too; it's kind of inevitable. Who would complain that S3's "functions" and "data" are too coupled? Who would ask for a description of S3 in terms of what sequence of calls to car and cdr it makes internally? OO concepts allow a functional description of a system that starts at the top and can stop at any point.

The "everything is an object" philosophy gets a bad rap. It's a big pain in Java, especially, because of how the type system works. Ideally I'd be able to define a type of ints between 1 and 3, an obvious subclass of ints in general, whereas in Java I find myself declaring "class [or enum] IntBetweenOneAndThree" or some nonsense.



You know, I think this misses something.

OO vs. FP is just a matter of whether you focus on the nouns or the verbs.

I am working on a gigantic OO system right now. And the OP is correct. OO sucks. OO is about binding together a bunch of crap and getting it to be slightly, only slightly, less crappy. But it can do that, and for that I am grateful.

FP is about constructing something that is completely elegant from the start. If you can do that, it's great and you will be doing far, far better than OO. Not comparable.

The problem is that so far few have been able to construct elegant, uh, cathedrals. And when you've already got a huge, sinking mess, you can't use FP to fix it. Not comparable again.

Neither is better or worse. But they're wildly different. Now, if someone figures out how to not write messes, FP is simply a win. Of course, it happens that I wrote my mess from the ground up, so I'm pessimistic about the mess-avoiding thing. But hey, it might work.

But I think it is important to say that the two philosophies aren't comparable.


The key word there is gigantic, not OO.

All gigantic systems are crappy, no matter what their underlying language/paradigm is. Slightly less crappy is a win.


The problem is that OO thinking tends to inflate systems, spreading code all over the place even though it logically belongs in one place, and adding object wrappers to things that don't need them. In my experience taking over Python code written by Java developers, I can usually shrink their OO code and make it more reliable by refactoring it into conceptually equivalent functional code wherever it makes sense and falling back on procedural style where appropriate.


In my experience, I'd guess you aren't dealing with a deficiency of OO; after all, Python is an OO language.

I'd bet you are dealing with over-engineering, which is a cultural issue within the Java/J2EE community. And perhaps a lack of closures (I prefer those over list comprehensions, since they are more general), which makes Java needlessly verbose.


Languages are not inherently OO or FP, but they support OO or FP style programming. Python supports procedural programming very well, you'll see lots of "def" and no "class". If you argue that the integers and strings manipulated by a procedural Python program are called "objects" and therefore it is still OO, I shall point you to the C standard which indicates that the integers and strings in a C program are also called "objects".

You can do procedural programming in Java, but you'll have to make all of the functions methods on some dummy class. This is cumbersome, which is the real complaint here. The "everything is a class" mentality is both an issue with the language and an issue with the community, but we tolerate it because they still make useful programs.

Everyone needs a little "re-education" or assimilation in order to switch languages and not write puke-tastic code in languages you don't use every week. A seasoned Java programmer will likely have no trouble writing correct Python code, but you have to wait for a few dozen sleep cycles before the programmer will write idiomatic Python code.


I agree, but I do think that java, in particular, suffers from two distinct problems:

1) A lack of closures, which, as you point out, turns every obviously functional problem into a ridiculous object model.

2) A culture that creates libraries that suffer from over-abstraction, over-engineering and that tend to model a technical aspect of a problem rather than what a non-expert end user of the library would find intuitive.


> Languages are not inherently OO or FP, but they support OO or FP style programming.

I couldn't disagree more. Languages are as they are designed to be. Erlang is FP and Java is OO by design.


If you try to write FP in Python, you're in trouble. (Python has been my main language for several years.)

It does not have "mandatory OOP", but it remains mainly an imperative OO language with some FP goodness. C# is the same; Java is not.


If you try to do your entire program as pure FP then I guess that is true, but it's often possible and beneficial to do a lot of the work in a functional style.

Honestly though, this misses the point. If your language forces you to use an unsuitable paradigm, it's time to use another language if you can.


The solution to gigantic systems is APIs, whether services or objects. Using FP you end up emulating objects, just at a different scale.


"The counterargument to the OP is that surely a function that manipulates "data" is less powerful and abstract than one that manipulates objects."

I don't see how that follows at all. Objects are glorified types. They're a bag of state plus a collection of functions which take that state as an implicit parameter. Polymorphism is "just" a form of delegation, which itself does not require you to glue together data and functionality.

If this were Haskell, you might define a typeclass for a List datatype.

    class List f where
      length :: f a -> Int
      get    :: f a -> Int -> a
      ...
Then implementors for Heap and Array would do something like this in their respective packages:

    instance List Heap where
      length x = ...
      get x i  = ...

    instance List Array where
      length x = ...
      get x i  = ...
And so on. Functions which want a List add a type restriction:

    addListLengths :: (List f) => [f a] -> Int
    addListLengths []     = 0
    addListLengths (x:xs) = length x + addListLengths xs
addListLengths doesn't care whether each List is a Heap or an Array, and calling length on one will do the right thing.
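For anyone who wants to actually run the sketch, here's a self-contained version with a toy instance. The Array here is just a wrapped built-in list, a made-up stand-in for a real array or heap package:

```haskell
import Prelude hiding (length)
import qualified Prelude

class List f where
  length :: f a -> Int
  get    :: f a -> Int -> a

-- Toy implementation: an "Array" that is really a wrapped list.
newtype Array a = Array [a]

instance List Array where
  length (Array xs) = Prelude.length xs
  get (Array xs) i  = xs !! i

-- Works against any List instance, not just Array.
addListLengths :: List f => [f a] -> Int
addListLengths = sum . map length
```

Because addListLengths only mentions the List constraint, dropping in a Heap instance later requires no change to it at all.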

I suspect there are cultural factors at work more than anything else, which does get back to what you say initially: do you prefer to live in the Kingdom of Nouns, or not?

In practice, I don't think there's a List typeclass in Haskell. I suspect people generally just use a bog-standard list. :) If you want a special implementation like a heap, you go find one and use it. I suppose this is one example of a culture of "explicit is better than implicit."

Myself, I write in Java regularly and I just don't generally see a whole lot of value in the List abstraction over ArrayList or LinkedList or whatever. I suppose the reader can see List and take it as a given that it'll behave in some way. That's something.

On the other hand, I suspect the programmers don't think much about a List, either. It's a bit worrisome to contemplate the idea that not only don't programmers know what object they're really dealing with, but they shouldn't know. I highly recommend perusing "Building Memory-efficient Java Applications: Practices and Challenges"[1], which really gets into this stuff at a technical level. I don't know the extent to which this happens in FP-land; I'm mainly commenting on Java culture as I have observed it.

[1] http://www.cs.virginia.edu/kim/publicity/pldi09tutorials/mem...


> In practice, I don't think there's a List typeclass in Haskell.

I'm a Haskell n00b, but there is Functor, Foldable, and friends, right? That's basically what you're talking about, although the methods are generalized map, fold, and so on instead of get and length. It seems like this just supports your point: there's nothing making functions operating on data in an FP language less abstract than objects and methods.
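Right, and with a modern GHC the Prelude's length, sum, and friends are already Foldable-generic, so they work on any container with an instance. A small sketch (Data.Map comes from the containers package that ships with GHC):

```haskell
import qualified Data.Map as Map

-- length is Foldable-generic: one "verb", many container "nouns".
listLen  = length [10, 20, 30]
maybeLen = length (Just 'x')    -- Just counts as 1, Nothing as 0
mapLen   = length (Map.fromList [(1, 'a'), (2, 'b')])
total    = sum (Just 5)         -- sum is Foldable-generic too
```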


> OO vs. FP is just a matter of whether you focus on the nouns or the verbs.

Indeed: http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...


In Haskell I guess it would look like this. I'm still new to Haskell; hopefully someone can improve it?

    data M = M1 | M2 | M3 deriving (Show, Eq, Ord)

    fromM :: M -> Integer
    fromM M1 = 1
    fromM M2 = 2
    fromM M3 = 3

    instance Num M where
      a + b = fromInteger (fromM a + fromM b)
      a * b = fromInteger (fromM a * fromM b)
      a - b = fromInteger (fromM a - fromM b)
      abs      = id      -- abs = abs would recurse forever
      signum _ = M1      -- Num also requires signum
      fromInteger n = case n `mod` 3 of
                        0 -> M3
                        1 -> M1
                        _ -> M2
Is this what you're looking for? :)


A more generic version allows you to parameterize based on the bound:

    newtype BoundedInt b = BoundedInt Int

    class Bound b where
      boundRange :: t b -> (Int, Int)

    fromBounded :: BoundedInt b -> Int
    fromBounded (BoundedInt x) = x

    toBounded :: Bound b => Int -> BoundedInt b
    toBounded x =
      let result = BoundedInt x
          (minb, maxb) = boundRange result
      in if minb <= x && x <= maxb
         then result
         else error $ "Out of range: " ++ show x

    instance Bound b => Num (BoundedInt b) where
      abs = toBounded . abs . fromBounded
      negate = toBounded . negate . fromBounded
      signum = toBounded . signum . fromBounded
      x + y = toBounded (fromBounded x + fromBounded y)
      x - y = toBounded (fromBounded x - fromBounded y)
      x * y = toBounded (fromBounded x * fromBounded y)
      fromInteger = toBounded . fromInteger
This assumes you want exceptions for overflow. Nowadays, you can put numbers in the type system, obviating the need for a "Bound" class, but I'm not familiar with it yet. The implementation above will also silently overflow given large enough bounds.
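For the type-level-numbers version: here's a sketch using GHC's DataKinds and GHC.TypeLits, where the bounds live in the type itself and the "Bound" class disappears. I've also made the constructor total, returning Maybe instead of calling error (all names besides the GHC.TypeLits ones are my own invention):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}
import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- The inclusive bounds are part of the type itself.
newtype BoundedInt (lo :: Nat) (hi :: Nat) = BoundedInt Int

fromBounded :: BoundedInt lo hi -> Int
fromBounded (BoundedInt x) = x

-- Smart constructor: read the bounds back off the type at runtime.
toBounded :: forall lo hi. (KnownNat lo, KnownNat hi)
          => Int -> Maybe (BoundedInt lo hi)
toBounded x
  | minb <= x && x <= maxb = Just (BoundedInt x)
  | otherwise              = Nothing
  where
    minb = fromIntegral (natVal (Proxy :: Proxy lo))
    maxb = fromIntegral (natVal (Proxy :: Proxy hi))
```

Usage: `toBounded 2 :: Maybe (BoundedInt 1 3)` succeeds, while `toBounded 5` at the same type yields Nothing, so the "int between 1 and 3" from the top of the thread is just `BoundedInt 1 3`.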


I'd ask, I suppose, why he wants ints from 1 to 3 in the first place. Why are these semantically important, and what's the intended meaning?

If you just need a set of numbers which only has those three, a helper function with a modulus seems easier. Or an infinite list of [1,2,3]. But it's not clear to me what the use case is, so...
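The modulus helper really is a one-liner. Since Haskell's mod is floored, this wraps negative inputs into {1,2,3} as well (oneToThree is a made-up name):

```haskell
-- Wrap any Int into the cycle 1, 2, 3.
oneToThree :: Int -> Int
oneToThree n = (n - 1) `mod` 3 + 1
```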


Yes, a Lisp-like language could use a giant conditional for a generic "length" function. But that's not how the source code usually looks, and there are Lisp compilers that optimize it so it's not done as a conditional.

In CLOS, it looks like a separate DEFMETHOD for each type you want to define the function on, and these can be in separate modules. DEFMETHOD will just extend the generic function. And the function itself can be optimized with the normal tricks -- things kind of like vtables -- for performance.

The S3 example is a bit facile since (1) S3 objects are like mud, they're all the same and (2) they map cleanly to the OO paradigm since they encapsulate discrete chunks of external state. At the other end we can throw around examples like BLAS and LAPACK, whose functions are much closer to the "functional ideal" (ignoring mutability, here). For example, if I want to solve a linear equation, how do I express that as a single dispatch method? Does it go on the matrix, or on the vector? Or do I have to create a new object from the two called LinearSystem, just so I can invoke a single method?

We think about things in terms of data coupled with behavior, as pure functions, and as dirty generic functions (like the Lisp example). The only real lesson is that people get pissed off at languages that force them to shoehorn everything into one category (like Java).



