Programming is for Stupid People (whattofix.com)
118 points by DanielBMarkham on Oct 18, 2010 | 67 comments


"Instead we reward people for being smart -- for learning more and more details about trivial libraries that will be deprecated in ten years' time."

I get what the author is trying to convey in this post, but I feel he is unintentionally attacking a strawman.

I don't think many developers set out to memorize small details of libraries -- it's just something they pick up naturally over time with more usage.

My take is: on the surface, knowledge based on experience often seems like memorization. That's because when, say, Alice brings up a better way/library/tool, she's acting as a layer of abstraction.

You don't immediately get to see the internal processing Alice is doing to decide which library to use and why it is the best choice for your specific task. All you see is her output: "here's what you use in this situation". This isn't about knowing trivial details of libraries. It isn't about rote learning. It is the practical application of knowledge borne out of experience.

That experience is what is being rewarded, not the memorization. How to tell the difference? The best developers come with introspection tools: they are able and very willing to explain the reasoning behind their recommendations in detail.


Writing code is such a small part of our jobs anyway. If it were everything, our jobs would be easy.

Most of my time is actually spent answering these questions:

  * Why isn't this working?
  * How could the user fuck this up?
  * How could another programmer fuck this up?
  * What do I want this thing to do?
  * What's the best/fastest/safest way to do it?
No amount of abstraction will ever fully eradicate these questions while leaving a sufficiently general framework to work in - they characterise programming. And they are by no means "stupid".


Well said.

I've read that memory is an integral strength of high performers in fields such as chess and music.

And btw, you can extend this casual dismissal of expertise to knowledge of algorithms and data structures too. Would the author be equally dismissive if somebody pointed out that what he was solving was an application of such-and-such an algorithm?


I think there are two mindsets when it comes to programming: those who relish learning every detail of a particular technology and those who find beauty in simplicity. (Technically there is probably a third, which is just a hack.)

I don't think these two programmers see eye to eye. One looks to apply chains of functions to a problem to simplify the task at hand, while the other looks to creatively attack the problem with a minimalistic approach to writing code. They find beauty in recursion and abstraction, interfacing things so that common logic can produce different results through pluggable interfaces.

They are different mindsets entirely: where one looks to reuse existing library code, the other looks to write as little code as possible through creativity. Both are very valid programmer types and both can achieve mastery; it is more a matter of how one's mind works. I myself am the latter: I love abstracting problems away till they just seem to disappear. I love closures and delegates, interfaces and anonymous functions. These are my utensils of mastery.

One thing I have noticed, though, is that programmers tend to work better with other programmers of the same type. For most of my career I have worked with the same development team. Many of the guys I trained, and then they trained others. We are like a caravan of programming gypsies: one of us would find a position at a start-up or find backing, and the rest would follow in time.

Anyway, after the sale of the last company we were at, I had to go out and find a real job. I did, and the entire team was of the other type; it seems they had amassed in a similar fashion to my group of developers and ended up there. I have to say, for the first time in my life I was a drag on the team. It just seemed that my code and their code never jibed. I mean, it all worked properly, but when I had to work on their code or they had to work on mine, it was just hours of lost productivity. I eventually resigned because it was clear that, for the first time in my life, I was not working out as a developer. I was just mediocre at doing it their way.

Anyway, just food for thought: maybe you are like me, Daniel, and see the elegance of code, while the other guy sees the robustness.


"I think there are two mind sets when it comes to programming, those that relish in learning every detail in a particular technology and those that find beauty in simplicity."

These don't strike me as conflicting traits. I can love succinct solutions and still like knowing minute details about what I'm using. (It seems like a requirement for the best solution).

(You need these traits if you want to 'hack/kludge' in changes to an existing (ugly) system without breaking it; so you can be a terrible hack too!)

You would have to provide proper definitions of simplicity, beauty, etc.


"I think there are two mind sets when it comes to programming, those that relish in learning every detail in a particular technology and those that find beauty in simplicity. (technically there is probably a third which is just a hack).

I don't think that these two programmer see eye to eye. One looks to apply the chains of functions to a problem to simplify the task at hand. While the other looks to creatively attack the problem with a minimalistic approach to writing code. They find beauty in recursion and abstraction, through interfacing things so that common logic can provide different results based on plug-able interfaces."

You've essentially just described the difference between a left-brained person and a right-brained person, although I'm not sure this has much to do with whether you like using libraries or not.

Left-brained people use the language centers of their brain more heavily. That's why they spend so much time trying to come up with the right names, adding comments, and adding documentation while ignoring useful abstractions.

Right-brained people build instinctual models in their heads. That's why they spend so much time thinking out interfaces and how things fit together and so little time thinking about the semantics.

The only thing I'd be careful of is in using terms like "simplicity", "creativity", and "elegance". The left-brained guy thinks his code is just as simple, elegant, and creative as the right-brained guy thinks his is. The difference is what they're measuring these things by.

In general, I think the ultimate goal is for these two types to learn to work together. That can be a great thing for a team, but it can be fairly frustrating.


The only thing I'd be careful of is in using terms like "simplicity", "creativity", and "elegance".

Sure, that is my perspective because it is the kind of code I tend to find compelling, but yes, they are matters of subjective opinion.


The article is decent, but the HN-bait title is a work of art.


Daniel writes primarily for the HN audience I think, so you might say he optimized his choice of words in the title, given his audience.


I was going to post something about that as well, but then I saw that you have already taken care of that.

The article was somewhat interesting, but I didn't feel that the title flowed naturally from the content.


> Things I probably am going to reuse in other projects. [emphasis added]

Fred Brooks said that a "programming product" takes about three times as much work as a plain program (so it becomes a module reusable by anybody). The extra work is in things like generalizing, testing and documenting. He claimed that a "programming systems component" takes about three times as much work again (that is, a module reusable somewhere else), with the work going into things like precisely defining interfaces, and integration testing (pp. 6-7, The Mythical Man-Month). He said all that long before "object-oriented" was coined (even before ADTs), so he wasn't talking about objects, but about code reuse in general.

What's the evidence? We see notoriously few FP libraries, but tens of thousands of Java libraries (as an OO example). Access to that library ecosystem is a major reason JVM languages give for targeting the JVM.

While FP individuals do seem to reuse their own code quite a bit, the code reuse of programming systems products remains mostly theoretical for FP - but standard practice for OO.

I think a more relevant distinction is between component consumers and component producers. The former learns libraries; the latter creates them. The former is closer to the customer and their application domain (where value becomes tangible); the latter is closer to the raw materials, the essence of programming and nature's wild bounty (where value originates).


I'm still very much a novice at FP, but seeing as far as I can through my keyhole view, I think FP offers a different, and in some ways deeper, level of abstraction than OO, providing more reuse to the individual who becomes proficient, but less publishable reusable code.

Here's what I mean by that. The author, in talking about the reusable functions he writes for the kind of tasks he performs, is really creating a domain-specific language (or fragments of one) addressing his programming domain, whatever that is. The extra work involved in making this reusable for other people is daunting, primarily because he can be fast and loose in defining his domain for his own personal use (in fact he may not even reflect on the fact that he is defining a domain), but for his domain-specific language to be useful to others he would have to be far more rigorous in defining the domain, with all the attendant documentation, use-case testing, etc. This level of effort is way beyond the level required to define an OO framework, for instance. Therefore he produces something that provides him a deeper level of abstraction, but is not useful enough to others.
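To make that concrete, here is a minimal sketch in F# (entirely hypothetical: the helper names and the log-processing domain are mine, not the author's) of what such a personal "language fragment" might look like:

    // Hypothetical personal helpers for one recurring domain (log files, say).
    // Each is tiny, composes with the others, and is trivially reusable across
    // one person's own projects; making them safe for strangers would mean
    // documenting every assumption baked into them.
    open System

    let readLines (path: string) = IO.File.ReadLines path

    let splitOn (sep: char) (line: string) = line.Split [| sep |]

    let tryParseDate (s: string) =
        match DateTime.TryParse s with
        | true, d -> Some d
        | _ -> None

    // The fragments compose into a task-specific pipeline.
    let errorTimestamps path =
        readLines path
        |> Seq.filter (fun l -> l.Contains "ERROR")
        |> Seq.map (splitOn ' ' >> Array.tryHead)
        |> Seq.choose (Option.bind tryParseDate)

Note how readLines quietly assumes the caller handles missing files, splitOn assumes single-character separators, and so on; the rigour described above is exactly the work of removing or documenting those assumptions.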


What frustrates me about this article is the false dichotomy of 'smart solutions to problems' vs 'simple solutions to problems' - especially when it comes to C# vs. F#, languages I know well.

The whole point of programming languages is to make it easier for human beings to understand each other's + their own code; a computer could, in theory, accept code via binary tapped in on a front panel; it really doesn't care how it gets told what to do.

The point is that code quality is pretty well the same thing as simplicity - there is no dichotomy here. Like @keyist said, a strawman.

It's also important to remember that there are things a language can achieve that no library simply can; the whole point of much of FP is to make it easier to write code simply. Pattern matching, type inference, tuples, immutability: these are not 'tools for smart people'; they make it easier to write declarative code, which more clearly expresses what the programmer meant rather than how he goes about it.
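A small, hedged illustration in F# (not from the article; the response shape here is invented):

    // Pattern matching over a tuple: the code states the cases directly,
    // the types of 'code' and 'body' are inferred, and nothing is mutated.
    let describe response =
        match response with
        | 200, body -> sprintf "OK: %s" body
        | 404, _ -> "Not found"
        | code, _ when code >= 500 -> "Server error"
        | code, _ -> sprintf "Unhandled status %d" code

    // Usage: a list of (status, body) tuples mapped to messages.
    let messages =
        [ 200, "hello"; 404, ""; 503, "oops" ]
        |> List.map describe

The same logic written as nested if/else with explicit temporaries says the same thing, but buries the what under the how.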

Additionally, I don't think we should cede to the programmer who says 'I'm stuck in my ways and can do it all with libraries, blah'. That's a red flag in my view: a programmer who is not willing to constantly improve and question their work (can I make this better? how do I improve this?) is likely to be the kind of programmer who produces code of a certain (almost certainly low) quality and has no means of improving on it. The 9-5 Bobs.

Let's not mistake inertia + fear of learning something new for wisdom, please.


In defense of the Bobs, I must say that some of us aren't programming just for the love of programming, and you have to be pragmatic.

I'm certain there are a lot of better solutions to what I'm doing, but I'll do them the way I know today, and try to learn a better way tomorrow.

The programmer that doesn't want to learn a new way, ever, yes, that's not a good programmer.


Sure, and I don't claim to be a great programmer by any stretch; rather, it's the willingness to improve and to be open to things, within practicality.

'Smart and Gets Things Done' :)

Although I do like 'Done and Gets Things Smart' - http://steve-yegge.blogspot.com/2008/06/done-and-gets-things...


Instead we reward people for being smart -- for learning more and more details about trivial libraries that will be deprecated in ten years' time.

Having memorized a lot of facts is not what I think of when I hear someone described as smart. I think of the ability to find and synthesize new information. Memorizing facts is often a waste of time that could have been spent working out their implications and deriving new knowledge.


In modern culture, that rote memorization is what passes for "smart" now.

Memorizing facts is always a waste of time.

The biophysics professor that I worked for in college had that attitude. He studied protein structure, but rather than wasting his time memorizing the structures of amino acids, he just worked with them, and eventually the knowledge of their structures became second nature. In biochemistry, the professors tried to make everyone memorize all of those structures... but even the professors didn't get them all correct.

Those same professors couldn't explain basic things like ionization, or make sense of the fact that 0.1 cm = 1 mm.


In modern culture, that rote memorization is what passes for "smart" now.

Not for long. Mobile Google has turned Trivial Pursuit into a typing competition.

I'd also question the use of the phrase modern culture here. In nonliterate cultures memorizing facts didn't just pass for smart, it was smart: An unmemorized fact was an unrecorded fact. In pre-Gutenberg literary culture paper, ink, and literacy itself were still sufficiently rare and expensive -- and entire books so much more expensive -- that it was still an important skill to be able to memorize lots of things.

(Those were the days when books were so costly that universities owned one copy of each book, and "education" consisted largely of copying out books that were read out to you by the lecturer. If you didn't copy down every word of Galen correctly as it was read out, you might be doomed to cite Galen wrong for the rest of your life, because it's not as if you were likely to be able to afford a copy written out by someone else.)

Now that paper is mass-produced by the ream, mass literacy exists, and even visual and auditory memory is heavily augmented by portable digicams and audio recording tech, we can afford to memorize less than ever before in history.


"Not for long. Mobile Google has turned Trivial Pursuit into a typing competition."

I hope you're right.

"I'd also question the use of the phrase modern culture here."

You make a good point.

It reminds me of the Aboriginal navigation technique referred to as "Songlines" or "Dreaming Tracks" -- they didn't have maps and instruments to navigate with, so instead they told stories. With those stories, they could convey not only a route, but also the locations of water holes and foraging grounds, and by using the story as a mnemonic, it was easy to learn and remember as well as to pass on to another.


Memorizing facts is always a waste of time.

Not true. Knowing the APIs you most commonly use instead of having to keep flicking back to the reference makes you much more efficient.

Your professor DID memorize them, he was just smart about how he did it. Instead of memorizing upfront, he did it in bitesize pieces, and was productive along the way. That's what a good programmer does too.


This way of memorising things is called Chunking - http://en.wikipedia.org/wiki/Chunking_%28psychology%29

Chess champions use this technique too. I remember seeing a TV documentary where they got a chess champion to look at a board layout for a few seconds and reproduce it in front of them. They could do this easily for a valid board layout. But when they got people with no knowledge of chess to design the board layout in an invalid manner the chess champion could not reconstruct the board as accurately or as quickly.

I guess a common programming equivalent would be a design pattern.


I think this is just a difference in terminology -- I don't think of this as memorizing, but rather as learning. When I think of memorizing, I think of someone going through a pile of flash cards or something and repeating them over and over again until their contents are a list of facts crammed into the person's head.

That said, we're saying the same thing -- rote memorization without applied knowledge isn't worth anything. Learning by doing is the way to go.


If a scientist doesn't know why 0.1 cm = 1 mm, that person is stupid, memorization aside... actually, if they took the trouble to memorize the SI prefixes it would be a trivial conversion.


Standing in front of a blackboard is known to reduce IQ temporarily but significantly.


After repeated interactions with the biochem profs at JHU, I think that in their case, it wasn't temporary. Maybe it was due to spending so much time standing in front of a blackboard that it just stayed reduced... and they had slaves... er, I mean graduate students doing all of their research for them, so they probably weren't getting a whole lot of intellectual exercise.


I won't refer to subjective views of intellect, but one of the WAIS IQ test's components is the Verbal Comprehension Index. Among other things, it aims to measure the degree of general information acquired from culture by the person and her ability to deal with abstract social conventions, rules and expressions, as well as her vocabulary. To me, each of those three subtextually implies a certain degree of memorization, or accumulation of knowledge if you will.

Alternatively, one could say that knowing your tools is quite important. I mean, I'd consider someone who learns, forgets, and relearns what a screwdriver does not to be the definition of 'bright'.


That might be why people rejected his F# argument.

Or it might be that functional programming was reaching a level of abstraction that didn't make it especially useful for the problem at hand.

One could argue that bringing in a functional language first thing is somewhat akin to "big, upfront design" in the sense that it assumes a level of abstraction before you have complete knowledge of the domain.

The thing is that neither is bad; you do need some abstraction to start with. But both can be taken too far.


In several circles I have heard "smart" defined as knowing facts and "intelligent" defined as a measure of being able to learn. Hence, you can be intelligent but not smart and smart but not intelligent. I think we want a balance of the two with a lean towards intelligence. It is all semantics anyway, but you seemed to take issue with the specific term "smart", and this is probably what he meant.


I have always delineated knowledge and wisdom. Knowledge is knowing something, while wisdom is the application of self-constructed ideas to an event, sometimes drawing upon knowledge (learned information, past experiences) to construct those ideas. knowledge != wisdom. The problem with a word like smart is that it is contextual, so smart can mean knowledge. I mean, a guy like Rain Man could be considered smart in a certain regard even though he cannot apply his ability. But smart can also mean wise: I would look at a guy who does not know a lot of facts but can build a rocket in his back yard as smart. So smart tends to represent a spectrum of intelligence and is a hard word to apply, because people have different measures of what they consider smart.


This article seems to be following the "Microsoft frameworks are for simple programmers" theme that I've been seeing more and more of here on Hacker News.

I don't know if that's bad or good since I've never really had to work in a .NET environment but I would hate for this article to be the embers of a forthcoming flame war.


I don't think that's the point at all. If anything it's "some Microsoft frameworks are too big and complicated" (but others, like F#, aren't, so let's try those).


F# is just another CLR language, so the "Framework" is just as big under F# as it is under C#. It's just a different way of looking at development (specifically, the OCaml way).

.NET is big because it actually contains a lot of useful APIs. I think this is a natural outgrowth of a framework that attempts to be all things to all people.

I'm not a .NET apologist; I use it every day at work and it has its strengths and weaknesses (like everything), but overall it is a fairly well-designed framework. My biggest beef with most .NET development is that you have to do it on Windows...


My biggest beef with most .NET development is that you have to do it on Windows...

I once shocked a Windows "Platform Evangelist" when I told him the only thing I needed Windows for was Visual Studio.


Haha, yeah, that would be typical Slashdot - sneer at VB.NET then go right back to PHP...


.NET gets more respect on HN than you might think. IMO it goes something like Python > .NET > Java.


If Python > .NET, where does IronPython fit in the ordering?


Evidently between those two.


I think this perception is because .NET was developed to compete with Java, and one of the very definite objectives of Java was to make programming "safe" for entry/low-level enterprise programmers.


OO code, on the other hand, is rarely reused.

This is true in situations where OO is done wrong. OO is supposed to help us implement types, and you can do this portably. You can write bad functions, too, that have too much dependency on things specific to your framework. This is not a matter of FP vs OOP but of writing portable code, which neither approach forces you to do.

FP vs OOP is not a real conflict anyway. The two are compatible.


I have yet to be convinced that any development philosophy other than OO is superior when it comes to the UI. I will concede that OO offers little advantage when writing services, accessing databases, and integrating systems. But for the UI, having self-encapsulated widgets that can be extended to add distinct functionality is far superior to procedural, functional or declarative. Having a select box that can be extended to create a filtering select box, adding only the functionality needed for filtering, is a no-brainer. As well, having a UI element's functionality represented by a distinct object helps isolate that code and functionality away from the larger system, with a defined interface so that it can be reused as needed without external dependencies. OO manages the hierarchy of code and its bisection in a manner that is far more comprehensible when it comes to UI, AI and game development. The rest is just preference.
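As a hedged sketch of that claim (plain F# classes, no real toolkit, and the widget names are made up):

    // A hypothetical base widget that owns its options and decides what to show.
    type SelectBox(options: string list) =
        member val Options = options with get, set
        abstract member VisibleOptions : unit -> string list
        default this.VisibleOptions() = this.Options

    // The filtering variant adds only the filtering behaviour; the rest of the
    // widget's contract is inherited untouched.
    type FilteringSelectBox(options: string list) =
        inherit SelectBox(options)
        member val Filter = "" with get, set
        override this.VisibleOptions() =
            this.Options |> List.filter (fun o -> o.Contains this.Filter)

Any code written against SelectBox keeps working when handed a FilteringSelectBox, which is also the "build on my code without modifying it" property discussed further down this thread.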


What exactly are people talking about when they are discussing these differences? You mentioned "procedural" and "declarative"; how is that something to which OO can be in opposition? In other words, how are you programming your class methods, and how are you programming the things that call those methods, in a way that is not "procedural, functional, or declarative"?

OOP is not a philosophy. It is a way to glue data together. OOP is not separate from "procedural, functional, or declarative" programming, so it cannot possibly be "superior". I object to this becoming a religion or a we-vs-them thing.

Here is an example for clarity:

   x = new(Complex);
   set_real_part(x, 4);
   set_imag_part(x, 10);
   imag_part = get_imag_part(x);

Did I program that with OOP? How can you tell? Is it just syntax (because we can rearrange syntax)?

   x = new(Complex);
   x->set_real_part(4);
   x->set_imag_part(10);
   imag_part = x->get_imag_part();
And back to my hint:

   x = new(Complex);
   x = x->set_real_part(4);
   x = x->set_imag_part(10);
   imag_part = x->get_imag_part();
x could be a function.


OOP is not a philosophy

philosophy: any personal belief about how to live or how to deal with a situation; "self-indulgence was his only philosophy"; "my father's philosophy of child-rearing was to let mother do it"

It's not a science; it is a philosophical viewpoint on how code should be arranged. So for me, philosophy best fits the definition of the abstract concept: a thought process enshrined in code, in particular adherence to a philosophically composed structure. I think philosophy fits the bill quite well.

I think you miss the point: functions and data existing outside of a class do not encapsulate the code base. So for your example, what if I write a widget for the UI and I want to give that widget to my friend, but not my project? In your example, I have to package up the functions, which may or may not reside with other functions, and then package up the data, which again may or may not. Further, the API does not travel with the widget, so my friend has to design non-conflicting entry points in his system to ensure that those functions are available to the data for manipulation.

You are introducing external dependencies to a concept that to all intents and purposes can stand alone. OO promotes reusability because it boxes everything you need into a class, and that is why, when applied specifically to the UI, where there is a lot of inherent reusability of items that deviate slightly, it works so well.

External dependencies are the nemesis of isolated components, and your example shows why. You are looking outside the object at how it is used by the system. You are looking at how you call and use that class, not at how it relates to the overall system and keeps itself from becoming intertwined with it; all of that is dealt with inside the class, not outside, because it is inherent to OO.

Inheritance is a whole other aspect of why it is superior in this context, and again it has to do with limiting external dependencies. Being able to extend a code base and suppress existing functionality while adding novel functionality allows one to build on an existing item while still letting the existing items stand independent of that relationship. This gives my friend the ability to build upon my code base while leaving it intact. Therefore, when I update my code base, so long as I do not modify the existing contracts, he can take the upgrade wholesale with no issue and no modification to his existing system. Now, I know all this can be done with functional languages; the point is that it is simpler with OO, and that is the crux of the issue.


For UI elements it is OO hands down.

I was thinking about this assertion on a walk, and I realized that this is just a matter of opinion. I can see it applying if you were programming with Tk. The same goes for Gtk, but when you add Glade, the UI construction becomes declarative, and to be honest with you, I have preferred that style of UI construction over having to figure out my arrangements by pencil and paper. I would say that for 90% of your cases, you are not writing a truly dynamic UI that requires OOP to construct things.

When we talk about responding to events, we respond with functions. Presumably, we bury those functions in random placeholders, like a class dedicated to the callbacks, but they are still just functions that describe how to behave.

When I do web programming, I find myself with a mixed approach: HTML and CSS (declarative), javascript with jQuery (OOP and functional).

jQuery is an interesting use case of the blending of OOP and FP. Consider that when you make changes to objects in jQuery, a jQuery object is returned. Thus, you can chain things like this:

   jQuery(element).action().action().action();

This is actually an improvement over typical OOP as implemented by people who do not understand the benefits of FP, and it reinforces the fact that OOP and FP are compatible, not some kind of opposing forces.

In your example I have to package up the functions

Packaging issues really have little to do with OOP.

Yes. One way to do that is with a class. Another might be a package or namespace. Organization is a social issue. What you described is not a strength of OOP. In fact, the way most people I have seen do OOP is bulky because they are trying to mitigate this packaging issue: they try to put everything into one single class that is easy to share, as opposed to breaking it down to its logical parts with multiple classes.

Take a simple textarea widget, for example. Let's say you want to extend it to have a spell-checking feature. You extend the class to be TextAreaWithSpellChecking. The typical poorly designed resulting class will include the spell-checking code, as opposed to implementing spell-checking in a separate module. It is easier to send the single class than multiple classes, right? Where should the spell-checking live? The answer is not about OOP or FP or other approaches/tools. It is about how you choose to abstract and break things apart.
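A hedged sketch of the "break it apart" option described here (F#; every name is invented for illustration):

    // The spell checker is its own module, reusable with or without any widget.
    module SpellCheck =
        let misspellings (dictionary: Set<string>) (text: string) =
            text.Split [| ' ' |]
            |> Array.filter (fun w -> w <> "" && not (dictionary.Contains (w.ToLower())))

    // The widget composes a checker function instead of growing a
    // TextAreaWithSpellChecking subclass that carries the checking code inside it.
    type TextArea(check: string -> string[]) =
        member val Text = "" with get, set
        member this.Problems() = check this.Text

    // Usage: supply any dictionary (or a stub) without touching the widget.
    let textArea = TextArea(SpellCheck.misspellings (Set.ofList [ "hello"; "world" ]))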


Don't know any jQuery, but that looks like a monad. Glad it's found its way into practice.


Sure, and I get that they are compatible, but I think jQuery may be a bad example. Have you ever tried to write a huge web app with jQuery? I have, and I can say hands down that Dojo is an easier toolkit for developing large apps.

The reason: everything is implemented on the jQuery tree and you have to chain stuff together. So long as you don't have to extend jQuery, it works well, but as soon as you have to provide a large code base with custom plugins to manage the code, it becomes untenable quickly. I know; I have done quite a few of them. Please don't take this as me knocking jQuery. I love it and use it extensively for quick solutions where I am going to use existing functionality, but if you have a problem set that grows beyond what's in the box, or that you can't find in a box, it becomes a massive time sink.

One of the major reasons is that it does not have the concept of jQuery code existing outside of the jQuery tree: you build on jQuery and not with jQuery, and while subtle, it is a huge distinction. It is a monolithic philosophy; you have a hard dependency on jQuery, whereas although Dojo does require the Dojo API to declare and construct objects, they exist independent of the Dojo tree.

It is not my intention to devolve this into a toolkit war, but I wanted to highlight that jQuery exemplifies my assertions: it is great for quick and dirty, there is nothing faster, but it does not provide the depth of a more OO-centric framework for those of us who produce the building blocks that other people use.

I think this post sums up what I am trying to say better than I am communicating it: http://news.ycombinator.com/item?id=1805143

As well, for the sake of clarity: when I say UI development, I am not talking about putting text boxes on a page, grabbing the values, and submitting them to a service. OO provides no advantage over functional in controller or workflow logic. When I talk about OO's advantage, it is specific to new and novel UI components that are packaged for others to use off the shelf in controllers and workflows; in fact I think OO is inferior for workflow. Just wanted to ensure that I provided clarity on where I believe OO provides superior advantages over functional.

In fact, the way most people I have seen do OOP

Sure, people do bad OO; they also do bad functional. The reality is there are more hacks than there are good programmers. It is not an exclusive trademark of OO programmers. I will give you, though, that bad OO is much harder to fix than bad functional. It's the nature of the beast. To quote Uncle Ben: "With great power comes great responsibility."

I realized that this is just a matter of opinion

You are correct, it is my opinion; that is why when I posted my first comment I called it a philosophy. It is all opinion, and some of it relies completely on how an individual developer's mind works. But for me, and most of the developers I know who have been doing UI for a long time, the opinion is that OO provides a superior philosophy. If you find that something else suits you better, by all means use what works for you.


Those are really benefits of data abstraction, not necessarily OOP (which is a specific approach to achieving data abstraction).


Yes, I agree, and that was my point: OO provides the simplest mechanism to achieve that abstraction when it comes to the UI. I know it can be done with the others, but my experience has been that it comes at the cost of added complexity. I see beauty in simplicity, and for the UI, OO brings simplicity to the problem domain.


> This is true in situations where OO is done wrong.

Beware the No True Scotsman fallacy:

http://en.wikipedia.org/wiki/No_true_Scotsman


I have yet to see OO objects live on from one internal project to another. It's always the same cycle: design domain model objects; let them grow/bloat for the purposes of the project; end up redesigning them the "right" way for the next project.


There are objects that persist and are reusable, although often you might not think of them that way.

In python, for example, there are a number of useful and reusable objects. The 'function' object, the 'module' object, the 'str' object, etc..


Assuming we are talking about classes, I am glad someone released Path::Class for Perl, among many other class packages. This is a social problem, not a problem with the technology.


Except to the extent that the technology exacerbates the social problem. Or even just enables the social problem to continue.


I would say it's not just a matter of writing portable code, but writing portable, generic, useful code. Code can be portable but not useful.

I think object-oriented code is slightly more prone to being portable-but-not-useful because it tends to be designed around real-world models for a particular problem, while functional programming tends to prefer generic abstract models like lists, hashes, graphs, and sets.

Also, code can be useful but not obviously so. A method like org_chart.peruse, for example, might be a solidly generic and portable tree-traversing method that you'd never think to use elsewhere.
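A made-up illustration of that point in F# (the type and names are hypothetical):

    // A perfectly generic tree and fold, hidden behind an org-chart name.
    type Tree<'a> = Node of 'a * Tree<'a> list

    module OrgChart =
        // "peruse" is really just a depth-first fold; nothing here is org-specific,
        // but the name makes it look single-purpose.
        let rec peruse f acc (Node (value, children)) =
            List.fold (peruse f) (f acc value) children

    // Usage well outside org charts: summing a tree of ints.
    let total = OrgChart.peruse (+) 0 (Node (1, [ Node (2, []); Node (3, []) ]))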


Yes, they are compatible but you end up writing a lot of

let Write f buffer start len = f.Write(buffer,start,len)

so that you can pass functions independently of the object.
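For what it's worth, a slightly fuller (and hypothetical) version of that adapter, just to show why it gets written: the member call needs a concrete receiver type, and the wrapper turns an instance method into a plain curried function you can partially apply and pass around. A Stream is assumed here; any type with a matching Write member would do.

    open System.IO

    // The adapter: fix the receiver's type so the member lookup resolves,
    // then the method becomes an ordinary curried function.
    let write (s: Stream) (buffer: byte[]) start len = s.Write(buffer, start, len)

    // A higher-order consumer that only knows about plain functions.
    let copyWith (writer: byte[] -> int -> int -> unit) (data: byte[]) =
        writer data 0 data.Length

    let demo () =
        use ms = new MemoryStream()
        copyWith (write ms) [| 1uy; 2uy; 3uy |]   // partial application of the adapter
        ms.ToArray()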


Over in Drupal land we are talking a lot about the direction we want to take Drupal 8; FP vs OOP.

The last couple of posts on this blog are a good read. http://www.garfieldtech.com/


I think this is basically a re-statement of Larry Wall's three virtues of good programmers:

http://c2.com/cgi/wiki?LazinessImpatienceHubris

TFA says programming is for stupid people, but I'd argue he meant lazy.


FTA: FP says that you don't have to have 20 thousand libraries. Or rather, you're welcome to have as many libraries as you want, but you develop those 30-70 functions that you use most often in your job and re-use them. Instead of learning a new problem domain for each project and then working inside of it little piece by little piece, you're creating your own minimal set of symbols that work on all of your projects, thereby making each new project more and more trivial.

Would anybody mind making a non-comprehensive list of these 30-70 functions?

I know the article says they're specific for each job, but I'm sure there's an overlap among most of them.


The attitude of spinning your own mini environment to solve a variety of problems using the exclusive set of idioms known best to you reeks of what has plagued Forth and C++.

I love functional programming, but I don't think this animosity towards using known, tested libraries is all that healthy.


I feel for you -- whenever I try to write about basic algorithms (sorting, etc.) I always get comments that you can do it with a library. Yes, obviously, but I'm teaching an algorithm, not how to use a library!


You still need to know what your library's performance is like under various conditions - large or small numbers of items, whether comparisons are cheap or expensive, how much extra memory it takes up, etc. If you don't understand the library, how are you going to be able to use it, except in relatively trivial cases?


I don't debate library usage (reuse!), but I teach mostly theory, with code snippets to present algorithms concisely.

There are many tools to solve a problem, and I want to educate on a different layer. For instance, in sorting alone there are many algorithms; I don't expect someone to interrupt me with "just use quicksort!" when I'm teaching bubblesort. There are lessons to learn in both, but it's a bit presumptuous (or at least, there are less smug ways) to assume ignorance rather than intention.


Sorry - that was sort of my point, but I didn't put it very well. Your student likely isn't taking any of that into account. One way to get your point across is to ask them a few pointed questions about (e.g.) the data set that they're sorting. If it's relatively small, then the overhead of quicksort might not be worth it. If it's already mostly sorted, then bubblesort is potentially better, and so on.
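For instance, a hedged sketch of why nearly-sorted input favours bubble sort: with the usual early-exit flag, a pass over already-ordered data does no swaps and stops, so the cost is close to one linear scan, whereas quicksort still pays its partitioning and recursion overhead.

    // Bubble sort with the early-exit optimisation. On an already (or nearly)
    // sorted array the outer loop terminates after one or two passes.
    let bubbleSort (items: int[]) =
        let a = Array.copy items
        let mutable swapped = true
        while swapped do
            swapped <- false
            for i in 0 .. a.Length - 2 do
                if a.[i] > a.[i + 1] then
                    let tmp = a.[i]
                    a.[i] <- a.[i + 1]
                    a.[i + 1] <- tmp
                    swapped <- true
        a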


Reminded of Tim Bray's 'Forget the defaults': http://www.tbray.org/ongoing/When/201x/2010/06/29/No-Default...


If you ask an engineer if other engineers should work toward standards or make up their own, they'll tell you every time that engineers should work against standards.

In the programming world, libraries represent those standards and every time you try to roll your own solution to a well explored problem, you're telling the rest of the world to avoid your code.


I don't see the connection between FP, OO and code reuse. Is there proof that using FP automatically leads to higher code reuse? That OO code is not reused?

Anyway, the biggest benefit of OO is abstraction & encapsulation, not code reuse.


Presumptuous, controversial, relevant. What a beautiful HN link-bait title.


I'm not so sure that FP code is so much superior to OO code in terms of reusability as the article implies.



