Reminds me of what Edsger Dijkstra wrote, "Let me start with a well-established fact: by and large the programming community displays a very ambivalent attitude towards the problem of program correctness. A major part of the average programmer's activity is devoted to debugging, and from this observation we may conclude that the correctness of his programs —or should we say: their patent incorrectness?— is for him a matter of considerable concern. I claim that a programmer has only done a decent job when his program is flawless and not when his program is functioning properly only most of the time. But I have had plenty of opportunity to observe that this suggestion is repulsive to many professional programmers: they object to it violently! Apparently, many programmers derive the major part of their intellectual satisfaction and professional excitement from not quite understanding what they are doing. In this streamlined age, one of our most under-nourished psychological needs is the craving for Black Magic, and apparently the automatic computer can satisfy this need for the professional software engineers, who are secretly enthralled by the gigantic risks they take in their daring irresponsibility. They revel in the puzzles posed by the task of debugging. They defend —by appealing to all sorts of supposed Laws of Nature— the right of existence of their program bugs, because they are so attached to them: without the bugs, they feel, programming would no longer be what it used to be! (In the latter feeling I think —if I may say so— that they are quite correct.)" -- From EWD288
IIRC Dijkstra advocated constructing a proof of your program first. This also involves debugging, mistakes and trial-and-error, just at a different level. Mathematical representations and proofs are powerful because they are general and certain, but they aren't magic and don't write themselves.
It's a lot easier - and a lot more fun - to deal with specific instances that are concrete and can be examined and experimented with on a computer.
Logically, one could also work with proofs on a computer to get the same benefits (e.g. the Coq proof assistant), but for some reason it doesn't seem to be as much fun in practice. Perhaps because the abstract versions don't have a direct impact; they aren't "live" (idk).
Yup, and I had to take two modules on it for my degree. I've never encountered anything so dull in my life! [That was probably because the material was presented in such a dry way; I'm pretty sure anything can be interesting if someone with passion is also a skilled presenter]
It's definitely the presentation. I also studied formal verification in my second year and used it as an opportunity to catch up on missed sleep.
However, the third year I studied how to formally define programming languages and really enjoyed it. It was all about delivery.
I think class size has an effect too - it's much easier to engage with smaller classes. The second year module was mandatory, whereas the third year module was optional.
>Apparently, many programmers derive the major part of their intellectual satisfaction and professional excitement from not quite understanding what they are doing.
This is one of my favorite remarks from Dijkstra, but I found it in a video interview with him. I had no idea there was more context to this. Thanks!
It's interesting that the author wrote the article ironically, but that it still kind of resonates with what Dijkstra observed. Every joke has a kernel of truth.
I tend not to like TDD for a different and serious reason. TDD seems to encourage bad trial-and-error programming practices, where the developer blindly modifies the code until it passes the tests instead of reasoning about correctness based on algorithms and specifications. TDD does not consider that even a program that works correctly in the current environment for every possible set of input data is still incorrect if it violates contracts defined in APIs and protocols. Remember how a correct update to the memcpy implementation in glibc broke programs that incorrectly used it for overlapping memory regions.
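The memcpy point generalizes: code can pass every test you actually run and still violate its contract. A toy sketch of that failure mode in Python (`copy_region` is a made-up stand-in for memcpy, not the real implementation):

```python
def copy_region(buf, dst, src, n):
    """Copy n elements within buf, forward, one at a time.
    Contract (like memcpy's): source and destination must NOT overlap."""
    for i in range(n):
        buf[dst + i] = buf[src + i]

# Passes: the regions don't overlap, as the contract requires.
b = list("abcdef")
copy_region(b, 3, 0, 3)
assert b == list("abcabc")

# A caller violating the contract may *happen* to work today...
b = list("abcdef")
copy_region(b, 0, 2, 3)      # overlapping, but dst < src
assert b == list("cdedef")   # looks fine with this copy direction

# ...while the same violation with dst > src silently corrupts data.
b = list("abcdef")
copy_region(b, 2, 0, 3)
assert b == list("ababaf")   # a memmove-style copy would give "ababcf"
```

Tests written against the first two cases would stay green while the third kind of caller breaks the moment the copy direction changes, which is roughly what happened to the overlap-abusing memcpy callers.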
My problem is more that I never know what to test. I mean, suppose I have some nontrivial code -- one of my latest random projects contains this:
def permute(tpl):
    """Gives the set of valid permutations for this tuple."""
    if len(tpl) == 0:
        yield ()
    else:
        for t in permute(tpl[1:]):
            for n in range(0, len(t) + 1):
                yield t[0:n] + (tpl[0],) + t[n:]
It's a recursive generator-driven permutation engine. Technically I suppose the order of the permutations doesn't matter, so long as they are all distinct and there are n! of them. Is that what I should be testing? That is, should my test code read:
permute_test = tuple(permute((1, 2, 3, 4)))
assert(len(permute_test) == 24)
for i in range(0, 24):
    assert(len(permute_test[i]) == 4)
for i in range(0, 23):
    for j in range(i + 1, 24):
        assert(permute_test[i] != permute_test[j])
...? And if so, how does that help me write the original function? Or am I supposed to test a base case and one or two recursion steps, so that it helps me write the function, but "hard-wires" a particular order?
You're right to want to avoid duplicating your code inside your test. In this case, I'd work out some simple permutations and test against those values, sorting the results so order doesn't matter.
In the doctest, I include a few toy examples, stuff I can verify by hand, and which helps the user understand what I'm doing.
In the unit tests, I try to do things in the style of Haskell's Quickcheck (the greatest testing library out there). Generate some random values and test properties (e.g., len(permute(x)) == factorial(len(x))).
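That property-based style, sketched in plain Python against the `permute` generator from the grandparent comment (no QuickCheck port assumed):

```python
import itertools
import math
import random

def permute(tpl):
    # The recursive generator from the comment above.
    if len(tpl) == 0:
        yield ()
    else:
        for t in permute(tpl[1:]):
            for n in range(len(t) + 1):
                yield t[0:n] + (tpl[0],) + t[n:]

# Generate random inputs and check properties, not one exact ordering.
for _ in range(100):
    n = random.randint(0, 6)
    x = tuple(random.sample(range(100), n))   # distinct elements
    perms = list(permute(x))
    assert len(perms) == math.factorial(n)               # n! results
    assert len(set(perms)) == len(perms)                 # all distinct
    assert set(perms) == set(itertools.permutations(x))  # same set as stdlib
```

Because the assertions describe properties rather than a hard-wired sequence, the test survives a reimplementation that yields permutations in a different order, which addresses the grandparent's worry directly.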
Not to sound snarky, but sounds like you're doing it wrong. You're right that the canonical example of TDD is to write a test that fails and write code until the test passes, but that, IMO, is an over-simplification that ignores the motivation behind TDD. You should write tests whether or not you do TDD, but TDD ensures proper test coverage. If you write tests first, you ensure every line of code is covered by a test.
Otherwise you have to do what I had to last month, which was go back—after the fact—and write unit tests for a large chunk of our code base because we didn't have an automated way of verifying that our logic worked with different (read: non-happy path) data sets.
I hate TDD because I have to actually debug the tests, sometimes more than the actual app itself. Also, there's the issue of a slow test suite, which you have to optimize or it will slow down your development pace.
TDD is only good when it provides more benefit than its costs (debugging time, time waiting for it to run, time spent ripping out code).
That being said, it's better than zero tests. Just don't go crazy writing test cases for every minute scenario.
Unless of course you are writing software that goes into a pacemaker, in which case I would want that tested like crazy for every possible scenario you can imagine including automated fuzz testing. Same goes for databases and quite a few other classes of software.
In short, use common sense.
Personally I love TDD. Not because of the tests, but because the resulting code IS testable, and generally testable code is maintainable and easy to read.
If I knew one pacemaker was developed with TDD and one without it I would probably go for the one without it. Most programs are fine on the Happy Path, and TDD seems like it's focused on expanding the Happy Path vs actually writing correct software.
This could just be internal bias, but without TDD, programmers are more focused on the possible error conditions. I would rather see:
if (((a/2 - 1) + (b/2 - 1)) > (largestInt/2 - 5))
    ... do something
vs.
A large try catch block.
Arguably the second is just as safe, but it's the tests you don't think to run that tend to end up as production bugs, and good tests require a level of paranoia that is more important than methodology IMO.
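A sketch of that contrast in Python (`LARGEST_INT` is a made-up stand-in for a fixed-width machine bound; Python's own ints don't overflow, which is part of the point):

```python
LARGEST_INT = 2**31 - 1  # hypothetical fixed-width bound

def add_guarded(a, b):
    # Explicit precondition, in the spirit of the check above
    # (assumes non-negative operands for simplicity).
    if a > LARGEST_INT - b:
        raise OverflowError("a + b would exceed LARGEST_INT")
    return a + b

def add_catchall(a, b):
    # The "large try/catch block" alternative: if the operation never
    # actually raises, this silently returns an out-of-range value.
    try:
        return a + b
    except Exception:
        return None
```

The guarded version makes the failure mode explicit at the call site; the catch-all only helps when the operation raises, which is exactly the test you don't think to run.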
YMMV; while I don't do TDD, my unit tests are fast because I use mocks (I don't need to optimize, by the way... it's just habit).
YMMV; while people disagree with the use of mocking, I find it valuable for writing unit tests. Does it represent the real-life, production-grade environment? No, but neither does the UAT/TEST environment.
It is meant to be that way: nobody is able to test in a real-life, production-grade environment (real data, real performance measures). Tests are always done in a more controlled situation.
Debugging code (especially others') is one of the most heinous activities on this planet. I have lost years of my life fixing what others have created broken (intentionally and otherwise). Years and special occasions that I can never get back because some moron's mission-critical code decided to break, with the ultimatum that no one could leave for $(Holiday of Choice) until fixed.
It's because of having to debug that I left development and went into a different area of technology. TDD is the only reason I even consider doing ANY development now.
People who enjoy debugging are the ones I hope work themselves into obsolescence.
Personal time lost to fixing problems can always be annoying. However, not all debugging situations are like that.
There is something terribly satisfying about digging into a problem on a production system that rears its ugly head once a month (requiring a system to be rebooted), brainstorming, setting up test scenarios, getting the bug reproduced, finding the source of the problem, fixing it, testing it and seeing that it has been nailed! (oh, and equally satisfying is seeing the days of uptime on said machine thereafter measured in 3 digit numbers :-D)
And the problem I have in mind was someone else's code. I still enjoyed every minute of the debugging process.
I can see why missing a holiday would suck, but you don't see the appeal of having a relatively confined and well-defined problem to solve? I find a good debugging session very soothing after long stretches of greenfield development. Mind you, I don't want to miss a holiday over it, and I'd rather go and build new shit all day most days, but reality is, you have to debug sometimes, and you might as well take whatever satisfaction you can from the process.
I, too, very much enjoy debugging at times. There is something very satisfying about digging deep into some code base and coming out triumphant with that elusive single-line fix. It makes you feel like you are smarter than the code's author.
That said, other times it can be really annoying, too.
However, I have always had the most fun debugging other people's systems. Because under those conditions I'm always billing by the hour, and when I find the problem it's never my mistake.
For my own code, though, I prefer TDD. And these days I still get enough debugging exercise when dealing with third-party systems. Facebook API, I'm looking at you here.
Great article. Between TDD, dotting constraint checks all around my code and actually knowing what I want before I write it, I rarely have to use any debugger.
With respect to people feeling like TDD is a waste of time, this thought is immediately extinguished the moment you see an accidental regression get flagged instantly, resulting in you NOT having to spend hours finding it.
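"Constraint checks" here can be as simple as assertions on pre- and postconditions dotted through the code; a made-up `transfer` example:

```python
def transfer(balances, src, dst, amount):
    # Preconditions: constraint checks at the top of the function.
    assert amount > 0, "amount must be positive"
    assert balances[src] >= amount, "insufficient funds"
    total_before = sum(balances.values())

    balances[src] -= amount
    balances[dst] += amount

    # Postcondition: money is conserved. An accidental regression here
    # gets flagged instantly instead of costing hours of debugging later.
    assert sum(balances.values()) == total_before
    return balances
```

The checks cost a few lines each, and an automated test suite running over them turns "regression" into an immediate assertion failure rather than a hunt.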
Debugging is one of those tasks that you don't really want to have to do either. If you're debugging all the time, you are doing things wrong. I don't find debugging fun - I find problem solving fun (not problems I've created!).
I wasted some time on TDD before: it's slow, robotic, and generates tons of code that never gets shipped. Nowadays, I just read through the code as if it were written by another idiot, understand it, and, you know what, I actually refactor it more often and keep improving it even when it doesn't fix a known bug. You get a better understanding of your code, and then when something goes wrong, it's pretty easy to figure out what the cause could be. However, this approach doesn't fit throwaway projects.
If he's really looking for the satisfaction of debugging as more of a self-morale booster, why not contribute bug fixes to an open-source project? Find a major project, open its issue tracker, and start bug hunting!
I'm sure the maintainers of whatever project he picks will love having someone around to fix bugs, and he both gets the enjoyment he "misses", and gets to hone some debugging skills.
I can kinda relate to having fun fixing bugs, but I think (in addition to some sarcasm) the author is kinda romanticizing debugging. 90% of the time it's no fun, feels like a waste of time and doesn't really provide much satisfaction. Once in a while, though, you do find a really interesting bug and fix it and it feels pretty great!
I can relate to this sentiment. I've always described software development as moments of great frustration followed by great satisfaction. I just love the feeling that comes after you've squashed a nasty bug.
These programs I write, I curse them. So clean and faultless, so hideously optimized. My bugs, my passion, so close at hand, yet out of reach, squandered in the face of my boundless talent.
Debugging is not fun when your program is leaking memory and you're not sure why, and existing code tools don't tell you why. Especially if you're writing drivers. Sure, stepping through python code might be fun, but not low level c/c++ code where you're in a call stack that's 20 levels deep and half the time you don't have source for the parts the code breaks or asserts on.
What about the level of excitement you get when you find a bug in an underlying Fortran library linked in by code building tools because of a random header file someone included without thinking too much?
This is a good thing, isn’t it? So why do I hate TDD?
Because debugging is fun. There, I said it. I love debugging. I think lots of clever people like debugging.
I've been doing embedded development for quite some time, so you know: flipping bits like a boss, a shitload of open terminals, lots of cables everywhere, all the stuff that makes you look like you know what you're doing. But debugging is not fun. It never was. It's a huge waste of my time.
You know what's fun? Having a product that works.
Debugging is a code-monkey job; you don't do it because you're smart, you do it because you were foolish.
At least I personally am much more driven by the fun of solving problems - usually of the low-level kind - and for me, having a product that works isn't particularly rewarding. It's much more rewarding to actually come up with a clever solution to a seemingly abnormal problem. Not to say that TDD would take this away, but I've noticed that the emphasis of problem-solving moves one step upwards to the design/architecture scale, which definitely isn't my cup of tea. TDD is meh for most of the really memorable challenges.
I debug because I'm smart, and I enjoy it, and I frequently find bugs that I didn't know existed by seeing inside a frame as it's executing. Debugging is a crucial skill in many fields, particularly embedded work (which is the irony in your comment), and the insight from breakpoints and a couple afternoons of digging is unique among our tools. I hate sweeping generalizations of how programming should be done, regardless of what they are.
Watch this sweeping generalization:
If you don't love debugging, I don't think you're a seasoned or valuable programmer.
>If you don't love debugging, I don't think you're a seasoned or valuable programmer.
I call bullexcrement.
I've been coding since before a good portion of the people on this board were sparkles in their parents' eyes. Debugging is a huge time-waster and value-sucker.
Now, if you would say "If you're not good at debugging...", I'd agree with that. But saying one isn't a seasoned or valuable programmer if they don't love debugging just shows a level of immaturity.
I've learned that one must ensure one doesn't confuse love of the results given by a thing or action with the love of the thing or action which gets you those results. You're a seasoned debugger if you can successfully debug a complex system, but you're a wise developer if you can do the things which ensure you don't have to do so.
I can't imagine a scenario, throughout history or today, in which debugging would be unnecessary. It's wisdom to realize that, and more wisdom to realize that "debugging" is a nebulous term here that people are nailing down in different ways.
Test-driven development will never replace debugging, and saying otherwise implies a misunderstanding of one of them.
That's quite a generalization. Debugging does not necessarily mean debugging your own code: you might have stumbled upon a compiler bug, or a timing bug with your hardware, or you may be trying to buffer overflow some massive proprietary database so you can take over the world. All of those tasks will involve lots of time looking at memory addresses and hex dumps, but none of them mean the person doing it is a bad programmer.
Why do you need to look at them manually? A computer will compare all that much faster.
If you are a good programmer and the bug is non-trivial, you can write code to catch the bug faster than you can catch it manually. Moreover, you will be rewarded next time, because your code will already be written and ready to use.
Let's call it "exploratory programming" instead of debugging. Yes, your computer will run unit tests. No, your computer will not notice that memory is being corrupted because of a particular sequence of instructions emitted by the compiler. To solve a problem, you have to understand it. And if you're writing code to solve a problem that's well-understood, you should have just downloaded the library instead.
Did you ever see a program that crashes when run but works fine in the debugger (or vice versa)? What will you do in such a case? What will you do after a few such cases?
It dramatically improves your cycle time to debug this kind of issue interactively instead of repeatedly stopping the process, coding up a hypothesis as a test case, running the tests, and then dealing with the positive results and the false positive results.