Hacker News | FutureNerd's comments

Some people here think this is about coding without testing. No, it's an exercise in thinking things through before testing, in the kind of thinking that's separate from testing. It's not meant as an example of what all programming is about, but as a probe of one aspect of programming. So why do it?

Is it okay to write down a guess, then look at it with edge cases in mind, fix and iterate? I think sketching is okay for this exercise as long as you follow through and finish.

The difference is that sketching doesn't tell you what the bugs are, whereas testing can tell you specifically. One's an aid to thought and the other's... an alternative, to put it neutrally.

A couple of people have described case analysis as simulating tests in your head. I hope they actually use more principled thinking than they're letting on.

Iterative coding breaks into two cases: either the task is subjective/aesthetic, like an interaction or layout problem, or it really is an exact task but you don't have it clear in your mind. Everybody recognizes the value of iterating on a fuzzy problem, so put that aside. The question is: if it's an exact problem but you don't understand it well, will test-writing clarify things or obscure them?

Sometimes I iteratively write and run or test a program until I get to a point where I say, "Okay, now do I understand what this is about? Can I explain it thoroughly in full sentences?" Even so I'm sure I've fooled myself with tests.

In real-world coding, you aren't given small problems. That doesn't mean small problems are poor examples. Obviously we should break big problems down into problems as small and comprehensible as possible. Testing doesn't do that for you (although it can create incentives to).

Not to mention that there's no such thing as a complete test suite for a program with an infinite set of possible inputs. You can only test for what seem to be the kinds of bugs the program could have.

The unstated assumption of many here is that somehow, in the process of testing, we'll come upon all the right tests for the program, without any sneaky missed cases. Then you look at the tests and the code and somehow extrapolate, or judge, that you've covered all the bases. But how sure can you be of that if you still aren't sure exactly what the problem is, or if you don't think about bugs and cases very well?

There's a kind of thinking you have to do within and around testing anyway. The write-before-testing challenge tests how well you do that necessary thinking, and how reliably you judge how well you've done. It's an artificial light on a real issue.

Btw, my test file, meant to be easily processed in any language: http://www.mac-guyver.com/switham/2010/04/Binary_Search


No, there's a point to doing the unnatural thing of, "Keep working on it till you're sure you got it right."


I have to say that the usual way of writing binary search recursively doesn't help break the problem down. It's already as broken down as it gets, and you're just tweaking the single case. The loop version wins for clarity.
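To be concrete, here's a minimal sketch of the loop version in Python (my own code, not from the article; the name and return convention are just what I'd pick):

    def binary_search(xs, target):
        # Return an index i with xs[i] == target, or -1 if absent.
        # xs must be sorted in ascending order.
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            # Invariant: if target is in xs, it lies within xs[lo..hi].
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            elif xs[mid] > target:
                hi = mid - 1
            else:
                return mid
        return -1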

I think second runner-up is the recursive version where you use slices... does Lisp have slices?
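Roughly something like this (a Python sketch rather than Lisp; I return only membership, to dodge the index-offset bookkeeping that slices force on you):

    def contains(xs, target):
        # Membership test by binary search on a sorted list, using slices.
        if not xs:
            return False
        mid = len(xs) // 2
        if xs[mid] < target:
            return contains(xs[mid + 1:], target)  # search the right half
        elif xs[mid] > target:
            return contains(xs[:mid], target)      # search the left half
        else:
            return True

It reads nicely, but in Python each slice makes a copy, which is part of why the loop version still wins.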


Loops are just special cases of recursion.

And binary search is definitely not the right operation on linked lists. So even in Lisp you should use a different data structure if you want to do a binary search.


Going back to Bentley, the point was not to code without testing but whether you could think through a small piece of code *before* testing.

raganwald says, "It feels very retrograde to try to write the whole thing out and only then start thinking about [...] edge cases." Nobody said you had to write it before thinking about edge cases! The idea is to think about the edge cases and incorporate that thinking into the code before you actually test it: the question is whether you can be, or rather whether you know the steps to be, that thorough in your head.

One way to think about edge cases, of course, is to write the test code before the function. But after writing the tests, try writing the function correctly before testing it.
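For instance, a few edge cases you might write down first (my own list, assuming a binary_search(xs, target) that returns an index or -1):

    # Cases to pin down before writing binary_search itself.
    cases = [
        ([], 5, -1),            # empty list
        ([5], 5, 0),            # single element, present
        ([5], 3, -1),           # single element, absent
        ([1, 3, 5, 7], 1, 0),   # first element
        ([1, 3, 5, 7], 7, 3),   # last element
        ([1, 3, 5, 7], 4, -1),  # falls between two elements
    ]
    for xs, target, expected in cases:
        got = binary_search(xs, target)
        assert got == expected, (xs, target, got, expected)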

Of course, one of the potential bugs in binary search doesn't occur to people until after they write the code and try it. And that bug is hard to test for; ahem, I don't want to explain why.


I was thinking along eru's lines. The specifics I came up with (sketched in code below):

1) Compress sorted lists by encoding the differences between successive numbers, using a code that's shorter for smaller numbers.

2) Use in-memory merge sort. That way you don't need random access to the lists, and it's overall O(n log n).

3) To merge large lists when memory is already almost full, divide the lists into blocks, and allocate blocks for the merged list as you deallocate blocks from the two source lists.

4) To do this in Python, use the "array" module.
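Here's a rough sketch of 1), 2) and 4), under my own assumptions (a simple varint code for the gaps, bytes stored in an array('B'), and a plain two-way merge standing in for the full merge sort; the block juggling in 3) isn't shown):

    from array import array
    import heapq

    def encode(sorted_ints):
        # Delta-compress an ascending sequence of non-negative ints:
        # each gap is stored as a varint, 7 bits per byte,
        # with the high bit set when more bytes follow.
        out = array('B')
        prev = 0
        for x in sorted_ints:
            gap, prev = x - prev, x
            while True:
                byte, gap = gap & 0x7F, gap >> 7
                out.append(byte | (0x80 if gap else 0))
                if not gap:
                    break
        return out

    def decode(encoded):
        # Yield the original ascending values back from the byte array.
        value, gap, shift = 0, 0, 0
        for byte in encoded:
            gap |= (byte & 0x7F) << shift
            shift += 7
            if not (byte & 0x80):
                value += gap
                yield value
                gap, shift = 0, 0

    def merge(a, b):
        # Merge two encoded lists into one encoded list, reading both
        # lazily so neither needs random access.
        return encode(heapq.merge(decode(a), decode(b)))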

