> When we drop down to the algorithm level, I think OO can seriously thwart reuse. In particular, the use of objects to represent simple informational data is almost criminal in its generation of per-piece-of-information micro-languages, i.e. the class methods, versus far more powerful, declarative, and generic methods like relational algebra. Inventing a class with its own interface to hold a piece of information is like inventing a new language to write every short story. This is anti-reuse, and, I think, results in an explosion of code in typical OO applications.
I think he has a good point, and as someone who really likes functional languages, I agree with him, but I think there really needs to be a better explanation of functional "patterns" and how to structure large functional programs before it will catch on more.
Also, it's easy to say "object-oriented programming is bad for reuse," but reuse isn't all OO is for (e.g. encapsulation, etc.).
I'd have to go back through the literature, but reuse was not something I heard talked about much at all when I first seriously started doing OO in the 90s.
With that said, reuse with OO today is not in a horrible state (no pun intended). Things could certainly be easier, but I think design is the bigger issue, and the problems I see afflict functional designs as well as OO designs.
> I'd have to go back through the literature, but reuse was not something I heard talked about much at all when I first seriously started doing OO in the 90s.
Maybe it was an '80s thing, but I recall "reusable software components" being touted as the industry savior. OO was all about reuse then.
Encapsulation is also addressed in the article. I really liked his response to that too; it was like a more succinct version of the Steve Yegge WikiLeaks/Java private methods blog post[1] that did the rounds a few months back.
> I think he has a good point, and as someone who really likes functional languages, I agree with him, but I think there really needs to be a better explanation of functional "patterns" and how to structure large functional programs before it will catch on more.
Are there any good existing resources for learning these things?
> generic information manipulation code... requires the capability to generically access/modify/add properties by name/key, ... etc
Java reflection does much of this, though you can't add properties.
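For what it's worth, Clojure's built-in bean function is that idea packaged up: it uses reflection to expose an object's JavaBean getters as a read-only map, so generic map code can work on it. A rough sketch, using java.util.Date purely for illustration:

    ;; bean reflects over an object's getters and returns an immutable map.
    (def d (java.util.Date. 0))

    (bean d)
    ;;=> a map with keys like :class, :time, :month, :year, ...

    ;; Generic reads now work like on any other map...
    (:time (bean d))   ;;=> 0

    ;; ...but the map is read-only with respect to the object,
    ;; which matches the caveat about not being able to add properties.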
He's against types, and only values them for performance. Yet the relational algebra that he admires uses types: the definitions of tables ("schemas"). Admittedly they're flexible types, in that you can change schema definitions (schemas are themselves tables), and you can invent new schemas willy-nilly (e.g. the result of a join). But they are types in that each row (or instance) in a table has a value for each column in the table.
The relational algebra is one of the most successful and enduring innovations in computer science, and it also has a solid mathematical basis. Its dynamic types may be a good guide to the value of types other than for performance.
His argument against using classes/objects is that objects (in mainstream PLs) can't be accessed in a dictionary-like fashion.
But switching to dictionaries wholesale for this reason seems to throw out the baby with the bathwater. The loss of abstraction is real, and could be worked around trivially by making every object respond to a dictionary protocol. This would give you the best of both worlds.
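Clojure's defrecord is one existing take on exactly that: you declare a named type with fixed fields, and instances still answer the ordinary map protocol, so generic map code keeps working on them. A small sketch with a made-up LineItem type:

    ;; A record is a real type with its own fields...
    (defrecord LineItem [sku quantity])

    (def li (->LineItem "ABC-123" 3))

    ;; ...and it also responds to the generic dictionary protocol.
    (:quantity li)           ;;=> 3
    (get li :quantity)       ;;=> 3
    (keys li)                ;;=> (:sku :quantity)
    (assoc li :quantity 4)   ;;=> a new LineItem with quantity 4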
The whole point of OO is that you don't want code reaching into the internal member data of the object. The real reason that algorithmic code reuse is so difficult in C++ and Java is that they lack usable first-class function literals.
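For example, here's a Clojure sketch (made-up data) of what first-class function literals buy you: one generic algorithm, reused by passing the varying behaviour in as a value:

    ;; A single generic "total" algorithm; the measurement is a parameter.
    (defn total [f coll]
      (reduce + (map f coll)))

    (total :quantity [{:quantity 2} {:quantity 3}])
    ;;=> 5

    (total #(* (:quantity %) (:price %))
           [{:quantity 2 :price 10} {:quantity 3 :price 5}])
    ;;=> 35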
Not to mention that the lack of structural typing and mixins makes it even harder to actually achieve reuse. Dynamic languages like Python and Ruby solve this beautifully, and so do Go and Scala on the static typing side.
No, all those languages (except JS) suffer from the fact that they don't give you uniform access. Consider the different forms of access for dict types, array types, and object types in those languages.
Being able to deal with data generically in a first-class manner is very, very useful. In all the languages you've mentioned (including JS), you only get it by writing more or less custom code.
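To make "uniform access" concrete, here's a quick Clojure sketch: the same get works across dict-like, array-like, set, and string shapes, and the sequence functions are just as indifferent:

    ;; One access function across "dict", "array", set, and string shapes.
    (get {:a 1 :b 2} :a)   ;;=> 1
    (get [10 20 30] 1)     ;;=> 20
    (get #{:x :y} :x)      ;;=> :x
    (get "abc" 0)          ;;=> \a

    ;; Sequence functions are equally generic.
    (map inc [1 2 3])      ;;=> (2 3 4)
    (map key {:a 1 :b 2})  ;;=> (:a :b)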
    order.getLineItem(2).getQuantity();

    (get-in order [:lineitems 2 :quantity] 1)
The first is simple enough, but it's essentially a DSL. No code that doesn't know about that API can work with those objects. If I want to guard against null objects I have to re-do that sort of work over and over again.
Contrast with the latter, where the get-in function just treats the subject as a nested associative structure. It can handle the null checking, and it doesn't care what my keys look like.
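To make that concrete, the same path-based functions cover defaults and updates as well as reads. A sketch, assuming order is a nested map/vector in which those keys actually exist:

    ;; Read with a default when any step of the path is missing (or nil).
    (get-in nil   [:lineitems 2 :quantity] 0)   ;;=> 0
    (get-in order [:lineitems 2 :quantity] 0)

    ;; The same path vocabulary works for updates, returning new data.
    (update-in order [:lineitems 2 :quantity] inc)
    (assoc-in  order [:lineitems 2 :quantity] 5)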
I have to admit, the first line looks like better code to my eyes. The only "advantage" of the second one is that it "handles" null checking, which really means that I still need to handle what happens when order is null versus the non-existence of line items or the non-existence of quantity.
The only real advantage I see in the second version is that I can write that code and run a partially completed program, one that, for example, has no notion of orders at all, and it will still run. But I guess this just boils down to preferences.
When I briefly used Clojure, I was very impressed with clojure.set. (Probably the "relational algebra" Hickey mentioned in the interview, with operators like project, rename, join, etc. Stuff that anyone who knows SQL will be familiar with. A good example of what such abstractness buys you.)
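For anyone who hasn't looked at clojure.set: the relations are just sets of maps, so the operators compose with everything else in the language. A small sketch with made-up orders/customers data:

    (require '[clojure.set :as set])

    (def orders    #{{:order-id 1 :customer-id 10}
                     {:order-id 2 :customer-id 11}})
    (def customers #{{:customer-id 10 :name "Ada"}
                     {:customer-id 11 :name "Grace"}})

    ;; Natural join on the shared :customer-id key.
    (set/join orders customers)
    ;;=> #{{:order-id 1 :customer-id 10 :name "Ada"}
    ;;     {:order-id 2 :customer-id 11 :name "Grace"}}

    ;; Keep only some "columns", as in SQL SELECT / relational project.
    (set/project (set/join orders customers) [:order-id :name])

    ;; Rename a "column".
    (set/rename customers {:name :customer-name})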
How much time to spend reading and how much time to spend on programming exercises / programming for yourself? Now that I have a day job, it's hard for me to find the right balance between learning new techniques and stuff and just creating something new with what I already know.
(I read relentlessly, too; English is not my native language though so sorry for the grammar)
I'm not here to convince you otherwise if you dislike reading, but for many it's not as black and white as you present. In fact most of the greatest hackers I've ever met were also among the most voracious readers (Rich Hickey included).
In doing, there is a lot of reading. Every line of code you write you will likely read dozens of times. And if you are writing anything significant, they will be read many more times by others.
I was expecting to see some reference to Fortress, Scala, and ABCL in order to distinguish what's important about Clojure. On the question "Of those books listed, are there any you think every programmer should read?": yes, Fogus, your book should be read by all programmers.
I've used Clojure in production. Specifically, I built a database-driven system monitoring app with a web interface, so I can say it works fine with SQL and in a webapp context.
I love Clojure; I wrote a web crawler in Clojure at my last job. But I still think Rails is the best way to write a webapp if you have a lot of user interaction. If it's primarily a REST API or something like that, then Clojure would work great.