Wow. Just stumbled upon the following quote and read the corresponding article.
> Programming language comparisons usually focus on the brevity and expressiveness of the language. If the solution to a programming problem has fewer lines and is “easier to understand” or “clearer” in one language than another, that is an argument for using the former over the latter. This comparison is useful if one supposes that the art of programming is primarily concerned with writing programs. It isn’t, of course. It is mostly concerned with debugging programs.
> In summary, then, every concept that object-orientation brings to the table makes debugging harder.
> If you believe (as I do) that the majority of effort a programmer expends is devoted to finding and fixing bugs, then an imperative programming approach will be more efficient (in programmer time) than an object-oriented approach.
In truth, tooling fixes a lot of his issues with OOP.
E.g. "you will have to check the entire code base for any methods which called the setter."
Sounds difficult... But it's really less than a second away, as OOP IDEs have "find references" functionality. Likewise, if you want to know who is setting a bad value, you just set a data breakpoint and the debugger will break when the bad value is set.
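For what it's worth, the data-breakpoint trick has a rough equivalent even without an IDE. A minimal Python sketch (the `Account` class and its `balance` field are my own hypothetical example, not something from the article):

    # Hypothetical sketch: trap the moment a bad value is written,
    # roughly what a data breakpoint gives you in an IDE debugger.
    class Account:
        def __init__(self, balance=0):
            self._balance = balance

        @property
        def balance(self):
            return self._balance

        @balance.setter
        def balance(self, value):
            if value < 0:        # the "bad value" we are hunting for
                breakpoint()     # drop into the debugger at the offending write
            self._balance = value

    acct = Account()
    acct.balance = -5            # debugger stops here, on the exact write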
More fundamentally, I don't agree with his assertion that figuring out where the bug is is the most time-consuming part of debugging. The most time-consuming part, by a very large margin, is setting up the environment to properly replicate the defect, followed by retesting all surrounding code/features that could be impacted by the fix, followed by determining a fix, followed by documenting/management overhead.
The average time for me to find the location of a bug in our codebase is around an hour. Yet I only fix on average a bug a day and it's not because I head to the pub after finding it :p.
> In all seriousness, this is one of the best articles I've ever read. I couldn't agree more with the author.
I'm not sure if you are being serious (you say you are, so Poe's law), but this is a bad article written by someone who I guess must be a novice, who thinks he knows something now but will look back on this 5 years later and cringe; examples...
> From the perspective of debugging, an instance variable is not in the scope of any of the methods on the stack.
He might as well be talking about heap allocated structs in C.
> This threatens to increase the search space, and suggests that avoiding instance variables in favor of passing arguments might be a better strategy where possible.
State should be stack allocated, except when it really is state (even if hidden in a monad).
> if you make the mistake of defining public “setter” methods, then the instance variables with setter methods become effectively global variables: if the value of the instance variable is incorrect at some point, that incorrect value might have been set anywhere in your code.
I'm trying really hard now to not be sarcastic.
> So, to avoid making your object-oriented code harder to debug than imperative code, you need to not use any public instance variables, and also avoid defining setter methods.
How does this guy define imperative code? Definitely not C where all members of a (heap allocated) struct are public?
He then describes the fragile base class problem without really understanding what that means.
> The technical term for a programming paradigm which uses neither inheritance nor instance variables is imperative (or functional) programming.
Ok, I can't stop myself from laughing at this point; I can't really go any further.
You know, we all go through this as programmers. We start knowing nothing, then we get a couple of years of experience and think we know everything....then we get 10 years of experience and realize we still don't know very much and what we thought we knew were just delusions of grandeur.
I won't say much about the rest of the publication. The nice thing about the internet is that anyone can produce something and make it available to read.
> a bad article written by someone who I guess must be a novice who thinks he knows something now but will look back on this 5 years later and cringe
The article's author appears to be Robert Lefkowitz. If it is, then the author transitioned from nuclear physics to programming in the 1970s, has been a speaker at PyCon, seems these days to be writing software in Haskell, Ruby, and Java, gave several talks at OSCon in 2013, and is currently a CTO while also working with corporations to open proprietary code ... among other things.
I won't further belabor the point, but what you just said (and how you said it) is one of my least favorite aspects of HN.
Oh, I have no idea if he's wrong or not. Honestly, programming isn't something I usually think deeply about; I'm sort of tired of all the usual arguments.
But you don't have to assume that someone's a novice to disagree with them. That seems to be a common thing for programmers to do to each other and it's disgusting.
Sean wasn't making points. He was being obtuse and dismissive, and made repeated statements about the author and how much he was laughing at the idea that this was being read by anybody.
I read it: I enjoyed it. I didn't run into anything I didn't understand, but I did feel like the phraseology was chosen to offend rather than inform.
For example, "global variable" was used to describe any memory cell that could change from any point in the program, either directly or indirectly. Obviously any public function (setter or method) can change that value. If the instance variable is static or a member of a singleton, then it really is a global variable, but if the object is simply long-lived it might be equivalent with regards to the impact it has on debugging.
Given that this isn't the ordinary use of the term "global variable", one has to read the article to understand that this is what is intended, but at least the article isn't very long. However because the author chose to overload the term (rightly or wrongly), it makes extracting the essence from a few quotes very difficult.
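To make the overloaded term concrete, here's a minimal sketch in Python (the `Config` class and `set_timeout` are hypothetical names of mine, not from the article): once a public setter exists, any code holding a reference could have written the bad value.

    # Sketch of the article's "setter == effectively global" point:
    # any code that can reach cfg can change the value, so a wrong
    # value might have been written anywhere in the program.
    class Config:
        def __init__(self):
            self._timeout = 30

        def set_timeout(self, value):   # public setter
            self._timeout = value

    cfg = Config()

    def some_distant_function(c):
        c.set_timeout(-1)               # the "anywhere in your code" write

    some_distant_function(cfg)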
However if you read the article: What part did you have trouble understanding?
You don't get to be a good writer via ad hominem attacks either. Having had several interactions with the author, I'd say he has a knack for putting forth controversial theses like the one in the article. I've disagreed with lots of them--but I've always gotten smarter thinking about why I've disagreed with them (and have often been left with the suspicion that he really might be right).
When I was younger I used to read things I thought were wrong and smugly dismissed them as due to some flaw on the part of the author. Luckily I've outgrown that. (Good thing too, as I no longer do embarrassing things like asserting that a multi-decade veteran of the industry must be some novice.)
One thing I think we could have done better with Code Words is adding short bios. Bios add context, which is lacking here. Here's R0ml's bio from 2006[^1]:
r0ml is a software architect and systems designer with over thirty years of experience. For two decades, r0ml worked on Wall Street, developing market data, trading, risk management, and quantitative analysis systems. More recently, as chief technical architect at AT&T Wireless, he drove the improvement of their CRM, ERP, commission, and data warehousing systems. Over the last several years, r0ml has become increasingly interested in open source software strategy at large enterprises, and is a frequent speaker on the topic.
This is the same R0ml that thaumaturgy mentions in a sibling comment.
I've been lucky to meet a lot of very smart programmers over the past 3.5 years working at Hacker School, and R0ml is one of the people I respect the most. He has a lot of opinions about programming, many of them contrarian. I think this comes from his background. His career began in the '70s. He spent the first third of it working in APL, the next third in Smalltalk, and the most recent third in Java. Over the year or so that I've known R0ml, he's planted the seeds of a number of unconventional ideas in my head (program in the database, don't use libraries, write things from scratch, the expressive power of a language is partially derived from what's in its standard library). I'm sure these aren't all good ideas—maybe some of them are terrible—but they've profoundly affected the way I go about my programming. R0ml is on the short list of people I would want to work for if I was not running Hacker School.
I don't bring this all up to refute your opinions–R0ml may be wrong in this case or even in all cases–I mostly wanted to give some context explaining why the tone of your response made me sad and frustrated. The onus is on us to give readers context for what they're reading, and I don't think we did a great job of that, but if you're going to write such a mean spirited response, I think some of the onus is on you to do some research before accusing someone of being a novice and responding in a way that feels at least partially ad hominem.
Around a month ago, I read Programming with Managed Time[^2], the paper that you wrote with Jonathan Edwards for Onward! '14. I thought it was insightful, well written, and contrarian in the best of ways. The first things I thought when I finished it was "I bet R0ml would enjoy that."
I don't mean to compare Code Words to an academic journal (it's absolutely not), nor do I mean to compare R0ml's article to your paper in terms of scope, research, or time put into it (yours obviously took more time and has much greater insight than R0ml's short article). I'm just bummed that one person whose counterintuitive opinions and experience I respect would be so quick to dismiss the opinions of someone else who I respect for the same reasons.
I know that complaining about tone is often used to distract from more substantive issues, but I'm going to risk doing that anyway: Please be nice!
I've given it a whole day to ponder and I apologize for calling this article out. The article was really bad in my opinion (not just disagreeable, but poorly argued), but it should have just remained my opinion. I don't care who wrote this; frankly, I never read bios anyway. It would, however, have prevented me from blathering on about why I thought this article was written the way it was.
I would suggest more editing in the future, not to change messages, but in the way that we always need other people to read our work to provide some external perspective.
The expressive power of a language is definitely based on what's in the standard library. If the C standard library came with default dictionaries and other common data structures (while still allowing you to build your own) it would be a much more expressive language.
I love C, I love that I can keep most of the standard library in my head, but a greater built-in ecosystem would make portable code a lot easier, provide a standard reference to beat, and make the language more useful from the get-go.
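As a rough illustration of that point (my own sketch, in Python rather than C): a word-frequency count is a couple of lines only because a dictionary-like container ships with the language; in portable C you would first have to build or vendor the hash table itself.

    # Expressiveness coming from the standard library: Counter is a dict
    # subclass that does the bookkeeping for you.
    from collections import Counter

    text = "the quick brown fox jumps over the lazy dog the end"
    print(Counter(text.split()).most_common(3))
    # [('the', 3), ('quick', 1), ('brown', 1)]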
> One thing I think we could have done better with Code Words is added short bios
I'm rather glad you didn't.
Sean felt justified not reading it because he believed the writer was "a novice", and I think he would have acted differently if he had known this was an experienced and successful commercial programmer.
But because he didn't, the rest of us can learn something really interesting: that twenty years of research experience cannot immediately recognise itself as Blub relative to twenty years of commercial experience.
Maybe those guys writing software that does things have figured something out as well?
I totally read the article; I'm very surprised it was written by a professional but it wouldn't have changed my judgement of it. I thought it was poorly written and meandering, but I probably should have kept that to myself. I guess we all have different standards.
> Maybe those guys writing software that does things have figured something out as well?
At this point, I really think the article should speak for itself. If you think it says something important or even coherent, then you know, it was for you, not for me.
I don't think you should keep your opinions to yourself. However, when you present your opinions with sarcasm and ad hominem attacks, you are bringing down the conversation to a level that is not conducive to any meaningful discussions. I don't think that was your intention, but I do think it was the result.
"""don't use libraries, write things from scratch, the expressive power of a language is partially derived from what's in its standard library)."""
One of these things is not like the others.
He sounds like he's very charismatic and personable, and probably an OK dude to be around, but that still does not change the fact that this was a big mess of an article.
I think the problem is more that it was written with a specific conclusion in mind, without checking if the logic actually held up in the end. Terminology does get a bit stretched, at best...
>> if you make the mistake of defining public “setter” methods, then the instance variables with setter methods become effectively global variables: if the value of the instance variable is incorrect at some point, that incorrect value might have been set anywhere in your code.
You can end up in the same situation by leaking a pointer to your local int variable into e.g. a global pointer just as easily, without so much as a struct. Or, more frequently in practice, by finding someone else's buffer overflows when coding in C. This is a problem, but it's less of an OOP problem and more of a "mutable data with nonobvious mutators" problem (or an "OOP as practiced distressingly frequently" problem?)
This starts to go away when your memory access is safe, and when your variables (be they objects or not) are constant (which you'd get enforced for free in a purely functional language - and which I enforce by overusing const in C++ - either of which is compatible with OOP). To a lesser extent, it starts to go away when your variables are only referenced locally - but that's left me wondering about previous values all too often.
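For a rough analogue of that const/immutability point outside C++ (my own sketch, using Python's frozen dataclasses, which is a substitution on my part and not something from the comment):

    # A frozen dataclass cannot be mutated after construction, so the
    # question "who changed this field?" cannot arise for it.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Position:
        x: float
        y: float

    p = Position(1.0, 2.0)
    # p.x = 5.0               # raises dataclasses.FrozenInstanceError
    p2 = replace(p, x=5.0)    # "mutation" produces a new value instead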
Indeed, this is enough to stop reading. Implying that two pretty much opposite concepts are one and the same thing, especially while contrasting them with OOP (which is itself, basically, one kind of imperative programming), already means that the author clearly doesn't know what he's talking about.
So I don't know and I don't care who this guy is or how much "experience" he allegedly has, and there's no justification for the fact that every second comment in the thread tries to appeal to authority as if it means something. All we need to know about the article is the article itself, and this article is full of nonsense.
That being said, I actually agree with the conclusions (though these are pretty obvious things anybody with a handful of experience would see). But the whole reasoning behind them is just… hilarious.
Global variables are terribad (imo). Dependency injection is good (imo). I don't think the article is particularly clear with everything it's trying to say, but those two points seem pretty reasonable, no?
I looked up "terribad", since it seemed an unlikely typo. The luminaries at Urban Dictionary told me it was "[t]he act of being terrible and bad at the same time"; I can't for the life of me see how something could be terrible and not also be bad.
> I can't for the life of me see how something could be terrible and not also be bad.
Terrible doesn't necessarily imply bad, it can also imply fear or awe, in which case it's perfectly possible to be terrible and not bad. It's similar in that way to 'awful', which these days is used almost exclusively for something that is bad, but occasionally you'll still find the old usage (awe-filling) appearing.
On the other hand, 'terribad' is the kind of word that fills me with dread (combining Latin and Old English roots like that? The outrage!). I know language is arrived at by consensus, but even if there were consensus on 'terribad', I'd not be joining it.
Since you asked, it's possible that terrible things can be both "good" in the sense of 'not evil' ("he's a terrific guy") and "good" in the sense of 'well done' ("Oz, the Great and Terrible").
This is an interesting perspective, but I think the author has made too much of it. OO is one of several developments that have added ways to put abstraction to work, and they are all double-edged swords, providing additional ways for a confused coder to express his confusion and work around his initial misconceptions of the problem he is attempting to solve.
The problem is not in the programming language features, any more than they are panaceas: the problem is in how we approach problem-solving. Guessing at the solution and then debugging it into acceptability, or 'guard-rail programming', as Rich Hickey has described it, is not the optimal approach in any language.
>This comparison is useful if one supposes that the art of programming is primarily concerned with writing programs. It isn’t, of course. It is mostly concerned with debugging programs.
I don't really get this. If you can write programs that have fewer bugs because your language is expressive and prevents them, then naturally there is a lot to gain from valuing a language primarily by how it works when writing code.
A language that is brief and concise gives you fewer opportunities to make mistakes:
> A language that is brief and concise gives you fewer opportunities to make mistakes
I don't think that's necessarily true. Look at brainfuck!
But in all seriousness, dynamically typed languages could be categorized as "more expressive" than statically typed languages, and generally allow for cleaner interfaces, but they turn compile-time type errors into run-time errors. What's worse is that your tests might not cover the specific conditions that trigger them. So, while you might consider C++ less expressive and concise than Python, at least you'll never try to take the square root of a string.
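A minimal sketch of that trade-off in Python (the `hypotenuse` function is just my hypothetical example): the type error is in the source either way, but nothing complains until the bad call is actually executed, so a test suite that never exercises it never sees it.

    import math

    def hypotenuse(a, b):
        return math.sqrt(a * a + b * b)

    print(hypotenuse(3, 4))       # 5.0

    try:
        hypotenuse("3", 4)        # the type error surfaces only at run time
    except TypeError as e:
        print(e)                  # can't multiply sequence by non-int of type 'str'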
The ease of writing bug-free code and debugging depends on a lot of factors. How well designed is the interface/API? How structured is the code? How good are the available tools? It's not what language you use, but /how/ you use it.
I'm not willing to say that dynamically typed languages are more expressive. It's hard to compare two languages and just say "A is more expressive than B." Usually they just express different things with different degrees of conciseness. (In particular, Brainfuck is not concise. It has a small grammar, but an equivalent print statement has much more room for error, since it takes a longer string of obfuscated code to produce.)
My point is that the _goal_ of expressiveness is still there and still valuable. We need languages that express our intent safely, clearly, and quickly. The better this is achieved, the less debugging you have to do -- because, by definition, the code will express what you want more clearly and with less room for error.
>It's not what language you use, but /how/ you use it.
How you use a language is determined by the grammar of that language. So I don't see any value in your point here, save for trying to dodge specificity and feign wisdom.
I agree with your skepticism of the idea that dynamically typed languages are more expressive. It is not as if we are writing poetry here; we are actually trying to specify, with great accuracy, a virtual machine. Fortunately, we can state for sure that there is no fundamental difference between the capabilities of Turing-equivalent languages; if this were not the case, the arguments over expressiveness would be interminable.
This example is misleading, and I don't think the article addresses the subtleties.
Basically, the third line is not like the other two. Because none of the numbers involved in #3 are Doubles, Haskell will actually use the Integer type, which is basically a bignum (like integers in Python).
The first two are comparing values of type Double, so the instance of Eq for Double is used, which necessarily has some inaccuracies, as shown in the second line, while the last one is using the instance of Eq for Integer. Therefore, the statement that (==) is not transitive for these types is false, because we are talking about different versions of (==) in the first two examples vs. the last one.
The trick is that bare numeric literals have type Num a => a, and have their type inferred by their use. The first two examples cause the 2^53 to take on type Double, otherwise it wouldn't type check (literals with decimals in them have type Fractional a => a, of which Double is the default). The last one doesn't have any expressions with decimals, so it uses the Integer type.
When you specify the types of the numbers used, so that you are testing the transitivity of a single instance of Eq, you will find that transitivity does hold in this case.
    a :: Double
    a = 2^53

    b :: Double
    b = 2.0^53

    c :: Double
    c = 2^53 + 1

    main :: IO ()
    main = do
      print $ a == b  -- True
      print $ b == c  -- True
      print $ a == c  -- True
Turns out I'm too tired to do this tonight. I will try to do this over the weekend or on Monday. I'll try to ping each of you when there is an RSS feed.
In the meantime, we'll tweet about every issue on https://twitter.com/hackerschool, but I know that Twitter isn't RSS and that subscribing to our Twitter feed in order to get the 4 tweets per year that you want to see might be silly.
As somebody who can still only grow a neckbeard, I've been trying to get into functional programming for a while now. Maybe it's because the examples are in Python (a non-crazy-looking language I've written a lot of lines in), but this is the first time I've read an FP article and thought I might enjoy explicitly using this stuff. I've been low-side-effect and anti-global for a while, but filter() is sexy.
    from random import random

    def move_pos(x):
        # each car advances one step with 70% probability
        if random() > 0.3:
            return x + 1
        return x

    def move_cars(positions):
        return map(move_pos, positions)
I do not believe that functional programming benefits from brevity, except at the top layer. Unfortunately, functional programmers often write out expanded expressions that they themselves, having examined them thoroughly, understand implicitly, but that can give the reader too much information all at once.
Though, I have to agree, the FP community really hasn't been able to demonstrate how to do it without losing efficiency, and yet many proponents claim we should abandon all other paradigms. All of these articles seem very biased towards FP.
I enjoy functional programming, but I prefer to work in imperative languages where I have a choice between paradigms.
There do exist object-oriented functional languages (OCaml). Functional vs. OOP is a false dichotomy, though a very common one. I prefer to use functional techniques in imperative languages with simple objects.
From the article Option and null in dynamic languages [1]
"Languages and libraries are defined not by what they make possible (the Church-Turing thesis tells us that), but by what they make easy."
The Church-Turing thesis tells us about what is fundamentally computable; what a library decides to implement is not the same thing.
Libraries are not about what is fundamentally computable, but about abstracting away the mechanics of some problem.
While no library can solve the halting problem, that in no way implies that two libraries expose the same things to the end user of the library, even in a related field.
I can't tell whether you intend to agree or disagree with the quote. Your tone suggests you are disagreeing, but your content to me seems to be in agreement.
Disagreeing, but it was late when I wrote it. I guess my counter-argument would be that what is fundamentally computable and what someone chooses to implement in a library are not the same thing.
With languages that may be a bit closer to the truth, but I would also add things like what the language makes safe (type safety, memory safety etc).
I did, on the whole, like your article though; that small bit just got to me a bit.
I like curated content. In fact a lot of my reading about programming now comes from things like ruby/python/html/javascript/open source/etc. weekly email lists. The internet has too much noise when it comes to programming stuff and the weekly emails cut through the cruft. This will be another set of articles I add to my reading list instead of trying to find gems on HN or r/programming.