Productive Programming (gabrielweinberg.com)
139 points by taylorwc on Aug 31, 2010 | 79 comments



"When approaching debugging, I first find out where the problem is, i.e. the exact line of code or config file or whatever that is causing the problem. That should not take long, and before you ask anyone else about it you should know exactly where the problem lies."

For crashes: Get backtraces! Knowing the line and the state of the callstack makes many common errors trivial to fix. Some languages (Java, Python) are helpful and generate them automatically. Others (C, C++) require a debugger. I've seen lots of programmers ignore gdb or Visual Studio and spend minutes with printf() looking for the exact line of a crash. Code crashes all the time. Dealing with it methodically and effectively will add minutes - or even hours - to your day.
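
To make that concrete, here's a minimal sketch (the file name crash.c and the session below assume gcc and gdb):

  /* crash.c - deliberately dereference NULL to show a backtrace */
  #include <stddef.h>

  void inner(int *p) { *p = 42; }        /* crashes here */
  void outer(void)   { inner(NULL); }

  int main(void) { outer(); return 0; }

  /* $ gcc -g crash.c -o crash
     $ gdb ./crash
     (gdb) run
     ... Program received signal SIGSEGV ...
     (gdb) bt     <- prints the stack: inner -> outer -> main, with line numbers */

Compiling with -g is the only prerequisite; after that, the exact line of the crash is one command away.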


Indeed. There's also valgrind, which will let you know if you're stomping over memory you shouldn't: http://valgrind.org/

When students come to me with broken C code, I ask two questions: do you know which line it breaks on, and did you run it with valgrind?
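
For anyone who hasn't tried it, a sketch of the kind of bug valgrind pinpoints immediately (the file name overrun.c is made up):

  /* overrun.c - write one element past the end of a heap buffer */
  #include <stdlib.h>

  int main(void) {
      int *buf = malloc(4 * sizeof(int));
      buf[4] = 7;               /* off-by-one: valid indices are 0..3 */
      free(buf);
      return 0;
  }

  /* $ gcc -g overrun.c -o overrun && valgrind ./overrun
     reports an "Invalid write of size 4" with a backtrace
     pointing at the exact line of the bad store */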


The article was decent, but the debugging section offers no insight at all. Shotgun debugging is a strawman; only the incompetent rely on that. Of course you should identify the exact line causing the problem, and of course it should be fast on average, but sometimes it is exceedingly difficult. Bugs come in three flavors:

Trivial bugs, where the available evidence points directly to a cause. The scope of these increases with experience, but most bugs that come up in day-to-day coding are of this variety.

Deep bugs are reproducible but require bisecting the problem to figure out the cause. Performing optimal bisections requires skill, experience, and raw brain capacity. This is where a good, but let's say not virtuoso, developer can distinguish himself from his less-than-average colleagues in a major way.

Heisenbugs are the most difficult because you can't reproduce them, either because it's a concurrency problem or because of some limitation of the environment the bug occurs in. To solve these bugs, a functional language and pure mathematics may be the best way to get reasonable assurance that a bug is gone, but radical creativity in code instrumentation, or expensive test scenarios with dedicated hardware, may also be necessary.

So, yes, being a productive programmer requires a level of skill where most bugs are trivial. Nevertheless, the other two flavors exist no matter how smart you are, and being productive over the long term means being able to tackle those systematically, even if it takes days or weeks to solve them, and, if all else fails, figuring out a workaround.


Debugging offers a most excellent opportunity to practice science - Observe, Hypothesize, Test, Repeat. And just like regular science, bugs span the entire range of trivial to the head-explodingly-elusive.

Observation in particular is quite a hard practice. On several occasions I've realized that the evidence had been staring me in the face from day one, but my eyes were too clouded to see it. Sometimes it helps if you know the system inside out, but at other times that itself happens to be a blinding disadvantage.


> Observe, Hypothesize, Test, Repeat.

Forget all that stuff. Just improve your code until it tells you the problem and its exact cause; you'll save a lot on the "Repeat" step.
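
In C terms, that can be as cheap as checking invariants with enough context to name the cause on the spot (a sketch; the names here are made up):

  #include <stdio.h>
  #include <stdlib.h>

  /* fail loudly with the cause, instead of guessing and re-running */
  static double safe_divide(double num, double denom) {
      if (denom == 0.0) {
          fprintf(stderr, "safe_divide: denom == 0 (num=%g)\n", num);
          abort();   /* dies here, with the cause already printed */
      }
      return num / denom;
  }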


Sometimes you can find concurrency bugs by reading logs during concurrent execution, along with a lot of sanity checks that throw exceptions when things go wrong. You can also just look really hard at the code and simulate in your head what might go wrong.

IMHO, the hardest bugs are the ones where you have too much data and/or running through the test case takes a very long time. Usually the best way to chip away at these is to try to speed up the test phase by mocking things, or to just do a good old stare at the code until the bug shows up. It's a measure-twice, cut-once approach, except with time: you can either run the test again, or you can stare at the code for twice as long and not have to run the test multiple times.


One of the most interesting bugs I've seen was in a small Lego robot powered by an ARM single-board computer. The robot would work fine for minutes at a time and then suddenly die. When used outside of the arena area it was designed for, the problem went away.

We eventually traced the problem to the metallic tape used to define features in the arena. When the metal roller at the back of the robot touched this after a few minutes of operating, it caused a static discharge which stopped the computer!


In Objective-C/Cocoa, I'm a big fan of using the assert macros to throw exceptions if something happened in programming land that is the fault of the programmer. If the assertion is not true, an exception is thrown, and the class/method/line number is printed to the console with the exception details. This makes certain programmer errors extremely easy to track down.

For example, say you have a method that prints a string, and you "require" that all callers of the method actually provide a string, and that it's not 0 length:

  - (void)printAString:(NSString *)aString
  {
      NSParameterAssert(aString && [aString length]);
      
      NSLog(@"Printing a string: %@", aString);
  }

If |aString| were passed as nil, or 0 length, an exception would be thrown, showing up on the console. Class, method name, and line number, along with the exception, are all printed. Super easy to find!

Asserts are also great to affirm that a method returns only "kosher" results:

  - (NSData *)loadDataFromDatabase:(NSString *)databasePath
  {
      NSData *someData = [aDataBaseClassInstance loadData:databasePath];
      
      NSAssert1(someData, @"The |databasePath| is invalid: %@", databasePath);
      
      return someData;
  }

If |someData| comes back from 'aDataBaseClassInstance' as nil, an exception is thrown: the programmer provided a bad |databasePath| to the database file. The caller of -loadDataFromDatabase: never has to worry about being returned a nil NSData, either. You can crash the app, do @try/@catch/@finally, or whatever you choose.

While you have to be careful where you use NSAssert()/NSParameterAssert() (Obj-C methods) and NSCAssert()/NSCParameterAssert() (C functions), it can make it easier to stay out of debugger land.


With some prior setup you can actually get automatic stack traces in C/C++ for some common crashes (SIGSEGV, or any other signal for that matter).

http://www.tlug.org.za/wiki/index.php/Obtaining_a_stack_trac...
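
The approach there boils down to something like this glibc-specific sketch (note that backtrace() isn't strictly async-signal-safe, so treat this as a debugging aid rather than production crash handling):

  #include <execinfo.h>
  #include <signal.h>
  #include <unistd.h>

  static void segv_handler(int sig) {
      (void)sig;
      void *frames[64];
      int n = backtrace(frames, 64);            /* capture the call stack */
      backtrace_symbols_fd(frames, n, STDERR_FILENO);
      _exit(1);
  }

  int main(void) {
      signal(SIGSEGV, segv_handler);
      /* ... rest of the program; build with -g -rdynamic so
         the frames resolve to readable symbol names ... */
      return 0;
  }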


Also, for Windows/.NET: mdbg.exe is a .NET command-line debugger that can be helpful when there is a crash on a test or client system, as it is portable. Also learn how to take a memory dump and analyze it with WinDbg; it has saved me hours, particularly when analyzing problems with complex applications like webapps running under IIS.

More in terms of tools: if you are tracking down a memory leak, get a memory profiler; don't guess blindly at what is leaking. What can be done in an hour of tracing rooted objects with WinDbg takes five minutes with a profiler. Likewise with performance profiling.


Under Windows you can also get minidumps, which are mini core dumps with the current state of variables.


Debugging is the lion's share of programming. Any tool that can make the process easier, like gdb or valgrind, will increase productivity. In the olden days I was a printf debugger, but things have improved quite a bit since.


One of the most productive programming techniques that I have used (it's saved my ass) is to make sure that most of my code is unit tested. In the short-term you will feel like you're moving much slower. In the long term, it'll pay dividends.
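
Even with no framework at all, the habit is cheap to start. A minimal sketch in C (the function under test is made up):

  #include <assert.h>

  /* unit under test */
  static int clamp(int x, int lo, int hi) {
      return x < lo ? lo : (x > hi ? hi : x);
  }

  int main(void) {
      assert(clamp(5, 0, 10) == 5);     /* in range: unchanged */
      assert(clamp(-3, 0, 10) == 0);    /* below range: clamped to lo */
      assert(clamp(42, 0, 10) == 10);   /* above range: clamped to hi */
      return 0;                         /* silence means green */
  }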


Yeah, BDD/TDD will also remove the necessity of "running the code all the time" like Gabriel espouses... what is better is to test the code all the time. Hit those green marks. Manual run-throughs of code paths are boring, to say the least.

This I guess is less applicable to the web dev/machine learning domains that Gabriel operates in.


Productivity is a topic too complicated to put into a few sentences but there is one tip I'd like to contribute: paper and pen are underrated.

There were an awful lot of tasks I tried to achieve via pure coding and rewriting/rewiring blocks here and there. All they actually needed was some analysis on paper.


This is really true, and I think the reason is that it abstracts the problem away from the implementation, and forces you to think in 'problem-space' rather than 'implementation-space'.

When I sit at my desk coding, I tend to think in detailed, code-specific (C/C++ for me) ways. My thinking is constrained by how I think the code might look.

When I turn off the screens and grab a pen and pad, I think in abstract, mathematical ways. Once I have a mathematical solution, it's easy to turn that into a practical implementation.


For me a whiteboard is a lot better. No idea why, but something about standing up in my office at the whiteboard gets the juices flowing more than a pen and paper.


I think there's some explanation in biomechanics and different courses of blood flow, but even more than that, a whiteboard allows you to share your findings and troubles with colleagues, which is probably why you needed the whiteboard in the first place :)


BIG disclaimer: I have NO formal training.

1. Tools. I generally shy away from tools. I just don't like using anything that makes me more productive when I'm programming. I prefer to type out every line of code, occasionally cutting and pasting from the same program or something similar from the past, but not very often. I want the writing of code to be painstakingly slow and deliberate. Why? Because productivity is not the objective. Becoming "one" with my project is. I may not start as fast as others, but that doesn't matter. It's not how fast you start, it's how soon you deliver a quality product. Like memorizing a speech, cranking out the code by hand makes it "firmware" in my brain. It's not unusual for me to crank out 300 lines of code and then be able to reenter them on another machine from memory. So when it comes time to debug, refactor, enhance, or rework, those cycles go very quickly; that code is already in my brain's RAM and it got there the hard way.

2. Simple Algorithms. Yes! I love this example:

  * EnterOrders 08/31/10 edw519
  *
  return();
I now have a working program - Woo hoo! You say you want more features? No problem. Let's start enhancing it.

3. Debugging. I don't. I've seen 25 different debuggers and I hate them all. Print() is my friend, especially beyond the right hand margin. Better yet, write code that doesn't need to be debugged (See #1 & #2 above.) (Of course, this is much harder with someone else's code.)

4. References. Don't need no stinkin' manual. I have become extremely adept at about 4% of what's available to me, but that 4% accomplishes 98% of what I want to do. (OK, I'll refer to a manual when I need something from the other 96%, but that doesn't happen too often.)

Nice post, Gabriel. Got my juices flowing. We could talk about all kinds of other things too, like variable naming, how to iterate and branch, and never typing the same line of code twice, but this was a nice starting subset.


I'm amazed and a bit saddened whenever I hear sentiments like this. These days more than ever.

The thing is, every single bit of code you write depends on something else that you DIDN'T write. Everything. No exceptions.

And the thing about stuff you didn't write is that you end up making assumptions about it - often without even realising. Sometimes the assumption is simply that the code works. Sometimes, some of your assumptions will be wrong.

Debuggers help you check your assumptions. They're a very useful tool for this - more so than many people who say "I prefer printf" realise.

I apologise if you're not in that category, but did you know that many debuggers have a way to print a message (without stopping) whenever the program hits a certain line? Just like printf, only it doesn't have to be built in to your code ahead of time.
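
In gdb, for instance, that's the dprintf command (a sketch; the file, line, and variable names are made up):

  (gdb) dprintf parser.c:142,"token=%s depth=%d\n",tok->text,depth
  (gdb) run

Every time execution passes line 142, the message prints and the program keeps going - printf-style output with no rebuild.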

There are times when a debugger isn't the right tool for the job, but it's always better to make an informed choice.


> I'm amazed and a bit saddened whenever I hear sentiments like this. These days more than ever.

The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing. If you plan to debug using printf, you must write code that is easy to understand.

Great debuggers let you try to fix code that you don't understand. Worse yet, they make it reasonable to write such code.

Since the person reading the code is never as smart as the person who wrote the code, encouraging simple to understand code is extremely important.


A big disadvantage of printf() in C (or similar non-memory-managed languages) is that printing values can disturb the "noise" on the stack that you were trying to find. That is, if you had a bug due to an uninitialized local variable or buffer overrun, the printf() call could "stabilize" things and cause a bug to stop manifesting.

Better to use a debugger, if you can. Not that I haven't ever sprinkled print's into the middle of a big mass of somebody else's code or framework just to get the initial lay of the land :-)


Uh, the -g compiler flag to enable debugging will create much more disturbance than a few printfs ever could. Debuggers are mostly useless for tracking down race conditions, as they change the timing completely. OTOH, I'd use a low-overhead logging package instead of printfs.


Good one: yes, race conditions are not going to manifest while manually, slowly stepping. You of course need to plan for, and read carefully, code which will be vulnerable to such.

Regarding "-g", at the job a few years ago where we were using C for some in-house jobs, we skipped using "-O" for the most part and simply deployed with "-g" left on. Better safe than sorry for single-use software.


I disagree with almost every sentence in your post, but there's one bit in particular I wanted to focus on:

> The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing.

This isn't true, nor would it be an advantage if it were.

Any fool can stick a printf statement into their code, just like any fool can run it in a debugger. You might get lucky and find the problem, or you might not. Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.


> Any fool can stick a printf statement into their code, just like any fool can run it in a debugger. You might get lucky and find the problem

That's like saying that you might get lucky if you make random changes to program code. Yes, it's true, but ....

> Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.

Except that I was talking about understanding the program, not "what you're doing".

In my experience, debuggers let me find things, or, more often, think that I'm doing something useful, with less understanding than printf. YMMV.


>>Debuggers help you check your assumptions. They're a very useful tool for this - more so than many people who say "I prefer printf" realise.

A long time ago, I used debuggers and printf.

For years, I've instead added tests to check my assumptions -- and I run the tests multiple times an hour.

(I still printf since it works well with the tests if I run a subset of them.)

(Edit: This was for when I write code, not working with others.)


If you're saying tests are an alternative to using a debugger, then I disagree.

Tests are very useful, but they fulfill a different need to debuggers. In fact the two work well together: stepping through a failing test case with a debugger can be a very effective way to find the root cause of a problem.


"Better yet, write code that doesn't need to be debugged (See #1 & #2 above.) (Of course, this is much harder with someone else's code.)"

This is pretty hard with your own code too. :-)

It never ceases to amaze me how often I go back to code I've written and think, "Why did I think this would work for all cases and exceptional conditions?". Then again, it's often hard to remember what I knew a year ago... maybe I wasn't even aware of these cases.


"2. Simple Algorithms. Yes! I love this example:

  * EnterOrders 08/31/10 edw519
  *
  return();
I now have a working program - Woo hoo! You say you want more features? No problem. Let's start enhancing it."

There is nothing like simply getting a "Hello World" up and running as a simple sanity check and starting point.


I like the game programming story about the black triangle to illustrate this point: http://rampantgames.com/blog/2004/10/black-triangle.html


These are called Tracer Bullets in the book "The Pragmatic Programmer" by Andrew Hunt and Dave Thomas. They advocate:

Users get to see something early on.

Developers build a structure to work in.

You have an integration platform.

You have something to demonstrate.

You have a better feel for progress.

As a solo founder I need working code that I can put out of mind while I produce more.


I've always heard "tracer bullet" as a request to a production system that selectively gets much more detailed logging or diagnostic output than usual.


Which is to say you haven't read The Pragmatic Programmer? I highly, highly recommend it. I bought my first copy about 8 years ago, and pull it off the shelf once a year or so to freshen up. I always learn something new every time I do.


I know this as top-down programming.


I share your aversion to debuggers. Everyone is different, and plenty of people like them - but I've always thought that they consume brain cycles that otherwise could be engaging with the code. That's why I like print(); it keeps my head in the code.


When you're debugging code you didn't write, and you don't have time to read and understand that half a million lines because you need to ship in one week, and you have 2 hours to fix the bug, the debugger can come in handy.

Personally, I'm fond of configurable logging, but I'm not under any illusions; logs are like freeze-dried debug sessions with fewer features. Logs verbose enough to contain the relevant information can also get borderline unwieldy and time-consuming to generate and process. The compiler I work with, when hosted in the IDE, can easily produce over 100GB of log output on a complex project test case, so I usually try to keep it down to 1..2GB or so, but that often leaves out the incriminating evidence, so it takes some back and forth... And the reason I'm using log files at all is often because debugging itself gets unwieldy on large test cases.
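
For what it's worth, in C the "configurable" part can be as small as a level check in a macro (a sketch; the names are made up):

  #include <stdio.h>

  enum { LOG_ERROR, LOG_INFO, LOG_TRACE };
  static int log_level = LOG_INFO;    /* raise to LOG_TRACE when hunting */

  #define LOG(lvl, ...) \
      do { if ((lvl) <= log_level) { \
          fprintf(stderr, "%s:%d: ", __FILE__, __LINE__); \
          fprintf(stderr, __VA_ARGS__); \
          fputc('\n', stderr); \
      } } while (0)

  /* usage: LOG(LOG_TRACE, "visiting node %p, depth=%d", (void *)n, depth); */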


When you're debugging code you didn't write, and you don't have time to read and understand that half a million lines because you need to ship in one week, and you have 2 hours to fix the bug, the debugger can come in handy.

We're talking about programming, and that's not programming. That's called piling shit on top of shit.


No, it's what happens when you have code that's been maintained for decades by dozens of programmers, most of whom have since moved on to pastures new. Not all of the code has someone on the team who has full insight into it. Sometimes that code breaks because of changes elsewhere. Yes, it would be nice to fully understand the code before fixing the problem; but that's not always a luxury you have.


If you are clever enough to avoid situations where that is necessary, then it might be said that you are skilled in the career choices of a programmer. However, it does not say anything about your skill as a programmer per se.


I would argue that piling shit on top of shit would qualify as part of the job description for half of the corporate world, programmers and non.


Getting paid to move code around in a text editor != programming.


I use debuggers as ways of inserting print statements into the code, without knowing what exactly I want to print before running it.

So, it goes something like this:

1. Insert `import pdb; pdb.set_trace()` into the code where I think there's an issue.

2. `print foo`

3. Hmm, that doesn't seem right... `print baz(foo)`

4. Ah, I needed to change... tabs over to editor

This style is heavily influenced by the fact that I primarily program in Python and Ruby, where one generally learns the language by typing commands into a REPL rather than by executing a file. When working on a Python project with a bunch of classmates who were good programmers, but used to Java and C++, I found that they considered this approach utterly unintuitive.


I hate to say this, but you're never going to be very good (unless you change this). Someone good will have so many projects that they could never remember every detail of each of them, so every second you've spent burning this into your brain will be wasted.

It's much better to use the tools as far as you can [1], use nice big descriptive names (since you never have to actually type them out this is win-win) and try to be as consistent as possible [2]. For projects I've been the driver on I never have to look at anything. I know how the project is going to be laid out, what the naming scheme will be like, etc. I can e.g. open up the Visual Studio solution file, Ctrl+n, type in the capital letters of the class I know will be of interest (e.g. WRI for WidgetRepositoryImplementation) and off I go. While you're still birthing out your first class I'll already be tightening up my tests.

And there are people vastly better/faster than I am at this.

[1] But be sensible about this. For me, big code generators are a no-go. I use Spring because it (a) doesn't sprinkle its code anywhere I have to see it and (b) regenerates the code every time, so there is no sync issue between changing a source and regenerating a dependency.

[2] Always refactor when you touch code, including making the code consistent with current coding standards.


I hate to say this, but you're never going to be very good (unless you change this).

I don't hate to say this, but responses like yours are the classic example of hn's biggest problem: people talking when they should be listening.

I suppose your opinion could be accurate if we had a flux capacitor and you were making your prediction in 1979. But it's you versus 1,000,000 lines of code deployed, 1,000 successful projects, and 100 satisfied customers. They outrank you.

FWIW, I made 2 posts yesterday: the grandparent, which included intimate secrets of my success and earned nothing, and this little joke:

http://news.ycombinator.com/item?id=1649922

which earned 58 points.

Gabriel Weinberg wrote an interesting post which got my juices flowing and led me to share my experience (experience, not opinion). Then people like you tell me I'm wrong, or worse, that what actually happened couldn't have.

Any wonder why outliers with successes to share are reluctant to share them?


>people talking when they should be listening.

I don't care much for the "no true Scotsman" nonsense, but I don't agree that someone should be listening to outdated advice. If you meant it as a story about the glory years, then OK.

> This grandparent which included intimate secrets of my success and earned nothing and this little joke:

The one where you showed you were good at real estate? Are you here to earn imaginary points or to communicate with like-minded people? You need a couple hundred karma to get downvote ability, but after that? Who cares.

>Then people like you tell me I'm wrong

Your methodology is no longer the most effective way to develop software. One can still do it, but one would be artificially limiting what they're capable of.

EDIT: Edited to remove some of the unnecessary barbs. No one likes to be told they are what's wrong with anything but no one wins a flame war.


"This isn't a dick size contest is it?"

You're the one who started with unsubstantiated claims about how the OP was a poor programmer because he didn't follow your methodology. If edw has managed to find a method that works for him, why shouldn't he share it? You can take it or leave it. If you feel the need, you can also tell him where you think he is going wrong. But there is no need to start by insulting people or telling them that you can code circles around them. Edw may be the worse coder, but you've come across as the bigger dick.


Programming today isn't like programming in the '70s. We're dealing with vastly more complexity now. Edw's old-school methodology may have served him well in his day, but a new person taking this on will never be able to keep up, for the same reason some guy with a mule and plow can't keep up with a modern farm.

>you can code circles around them

My point wasn't that I can code circles around him but that modern practice can code circles around him. Which is something he could take up as well.


Finding a buffer overrun is much easier with a debugger; if you have a deterministic test case, all you need to do is find the address of the corrupted memory (any point after you notice it's gone bad will do), then restart, set up a hardware breakpoint for that address, and continue: usually no more than a modification or two will do the trick.

And if there are too many modifications, a different trick will do. Change the breakpoint to a counted breakpoint with a nice high count, then run until the corruption occurs, then check the pass count on the breakpoint. You can then set the count on the breakpoint to the exact value needed to stop at the corruption, like stepping back in time (i.e. the famous "who modified this variable last" debugger feature).
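
In gdb terms, a sketch of both tricks (the address is made up; watch uses a hardware watchpoint where the target supports one):

  (gdb) watch -l *(int *) 0x60203c    # stop when this word changes
  (gdb) ignore 1 100000               # first run: never actually stop
  (gdb) run
  ... corruption observed ...
  (gdb) info breakpoints              # shows how many times it fired, e.g. 4242
  (gdb) ignore 1 4241                 # second run: stop on the last write
  (gdb) run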


#1,2 - http://news.ycombinator.com/item?id=1649922

#3 - inspired a "printState() div" 10,000px off-page

#4 - Especially when there's more than one way to do it, relying on my little mind saves huge amounts of productive time.


"I also found sleeping on it often magically produces a solution in the morning."

I think this is a great reason to take a nap in the middle of the day if your environment is conducive to it. It works for me as long as the code is the top thing on my mind when I fall asleep.


I think doing yoga poses, walking up a steep hill for 10 minutes, or practicing guitar has similar/related brain benefits. Neurologists: is it epinephrine or endorphin?

Other debugging helpfuls:

- Rubber duck:

http://news.ycombinator.com/item?id=510032

- or just look at your code in a different editor/syntax highlighter. Pull up Ubuntu in VMware Fusion ;-}


This has happened to me on the past two consecutive days (and nights). A problem that I couldn't figure out at 11pm, one that I spent at least 30 minutes poring over Google Groups, searching through APIs, trial and error, etc... After sleeping on it, the solution revealed itself to me in the morning in less than five minutes. Truly phenomenal and hugely rewarding.


The one I remember vividly was a bug in a student project - I was writing a set of tools to "compile" lambda calculus expressions to various sets of combinators and gather statistics about the effectiveness of different sets of combinators (starting with SK).

For some reason any attempt to implement a Y-combinator for recursion was crashing and for a couple of days I had no idea what could be wrong - then sitting on a bus on the way to visit my sister I was looking at a cinema from the bus and the thought came into my head "it's your aggressive re-use of applicative nodes".

Sure enough next day I removed this attempt at an optimization and everything worked!

I've had a few experiences like that, and I now know that if I find a really tricky bug the best strategy isn't to sit there all night looking at it but to go and get a decent night's sleep - there is a very good chance you will know the answer by the next morning.


My scariest one was dreaming a solution. I woke up about 3am absolutely clear what I'd done wrong, scribbled a note and went back to sleep.

Next morning I looked at the note and it was exactly right.


I've done this and actually find it a bit disturbing.

It's great when I can't solve a problem in the evening, get a good night's sleep, and then am able to quickly solve the problem in the morning. Sometimes, though, I actually have dreams about sitting at my desk writing the code that I couldn't come up with during the day. On the one hand it's great to wake up in the morning with the solution to my coding problem. But on the other hand, I'm not sure it's healthy to have my dreams filled with the same thing that I do all day while awake. Whenever that happens I definitely make some extra time for my hobbies that don't involve computers.


Well that's one way to increase your billable hours.


Do you ever fall asleep with a cup of ball bearings in your hand when you are stuck on a problem? I have always wanted to try that ...


I've never heard of that, what's the logic there? You drop the ball bearings as soon as you've fallen asleep, for some reason I don't get?

Serious question.


A story, perhaps apocryphal, about Thomas Edison: he would hold ball bearings in his hand as he fell asleep, and the noise of them falling to the floor would wake him immediately as he drifted off, thus generating the shortest possible power nap.

There's a decent chance that story is actually lifted wholesale from a similar one told about Dali, except it was a spoon instead of ball bearings. I suppose it's the best you can do after melting your alarm clock.


Dali makes a big production out of the way he did it and describes it extensively in his book "50 Secrets of Magic Craftsmanship". It was a key rather than a spoon in that case, though no doubt the details changed in every telling.


Aristotle also apparently did this.


I always heard it was a spoon. Or was that someone else?


During his day, Edison would take time out by himself and relax in a chair or on a sofa. Invariably he would be working on a new invention and seeking creative solutions to the problem he was dealing with. He knew that if he could get into that "twilight state" between being awake and being asleep, he could access the pure creative genius of his subconscious mind.

To prevent himself from crossing all the way over the "genius gap" into deep sleep, he would nap with his hand propped up on his elbow while he clutched a handful of ball-bearings. Then he would just drift off to sleep, knowing that his subconscious mind would take up the challenge of his problem and provide a solution. As soon as he went into too deep a sleep, his hand would drop and the ball-bearings would spill noisily on the floor, waking him up again. He'd then write down whatever was in his mind.

Taken from: http://www.wilywalnut.com/Thomas-Edison-Power-Napping.html


I wouldn't say this is an article on productivity so much as an approach to learning. I find a lot of autodidact programmers learned this way. The compiler is a great TA, after all.

If there was one thing I could add, it would be: take notes. If you can explain to someone else the "whys" and "hows" of your solution to a problem, you've got it down pat. Sometimes we get something to work by googling a quick solution, but we forget to try to understand that solution. This doesn't serve us well. Taking notes helps me to sublimate what I learn on the job into hard-referenceable tomes of knowledge.


In addition, start a blog for the pieces you can make public. Blogs have so many other benefits that you should be doing one anyway (especially if you are also involved in startups).

That said, writing about programming on your public blog is scary because you (usually) know that whatever topic you write on there are people who know more about that topic. You just have to get over that and learn to embrace the benefits of the conversations that ensue. Or write to a private audience.


If you just one day decide to start a blog, chances are it will be a mostly private audience for quite a while. That is disregarding the spammers, of course.


The article mentioned the dangers of clever code and asked for a particular quote he couldn't remember. I couldn't help but be reminded of this one:

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -Brian Kernighan


A bad habit I can get into is thinking about a feature in terms of all its parts and getting mentally stuck trying to put the whole thing together from the start, when, as the article advises, it is much better to focus on getting the critical path of a feature working. The mental block of thinking about the whole can easily lead to procrastination for me, something I am trying to avoid.


It depends on the problem domain; sometimes productive programming is about the best algorithms. For example, if you develop software that plays chess, you can't leave that for later improvements.

I can add some of my own experience (programming since I was 8 years old?); it works within my personal framework:

- Hybrid languages/technologies: for some solutions I need to use jython to develop in python + java libraries that you don't find in other languages (i.e. htmlunit).

- I always have an interactive console open to test stuff while using an editor. Sometimes the language doesn't have a good interactive interpreter; my preference is python (if using .net, then ironpython; if using java, then jython; etc.).

- Since I do a lot of research for customers, research and proofs of concept come first on the list, to reduce the real risk in the project.

- When I need optimization/speed, C/C++ is the language, but I try to glue it with python, ruby, COM, etc. (SWIG is your friend).

- Sometimes debugging is the best option; sometimes it's focusing on the code and trying to find the bugs in my head.

- Otherwise, I agree with Gabriel.


I appreciate the call to find the exact line of code that is causing the problem. I've done a fair number of code reviews where there was a bug, and it was "fixed", but no one knew why. The developer had simply kept changing things until the problem seemed to go away. That would mark the bug as closed, but it always seemed to show up again some months later in a new guise. These days I'm a terrible nuisance, as I always want to know why the bug was happening. In my experience it's not enough to "fix" the bug; you really do need to understand why it was happening in the first place, which is something that not all programmers do.


> I initially try to accomplish X in the simplest way possible by breaking it down into trivial steps. I know there is a famous programming quote that pertains to this process, but I can't find it right now

Maybe this is it?

A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity. -- The Tao of Programming


No, this comment on the blog is the closest so far to what I was talking about:

rjbond3rd: "I'm mangling the quote but what I've heard is: 'Tackle a difficult problem by redefining it as a series of solved problems.'"


One of the most useful mental habits I know I learned from Michael Rabin: that the best way to solve a problem is often to redefine it.

http://www.paulgraham.com/ideas.html


That's kind of the way mathematics expands its knowledge base. Not always, of course. But it's a fine approach. It's an application of the KISS principle.


I can't think of a specific quote about breaking down programs into small steps, but you could be thinking of something from http://paulgraham.com/progbot.html


I don't know if this counts as a quote, but the term of art is "divide and conquer"


No, neither of those are it. I saw it on HN within the last two weeks, but I couldn't find it in a reasonable amount of time.


The quote for #2

What's the simplest thing that could possibly work? http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.htm...



