Ask HN: Do you use print() debugging?
16 points by laughinghan on Nov 3, 2011 | 16 comments
Or puts in Ruby, or System.out.println() in Java, or echo in shell scripting and PHP, or printf() in C/C++, or console.log() or worse, alert() in JS, or the equivalent in whatever language and environment you're in?

I'm primarily a JS developer, and the first debugging tool I reach for is usually alert() or console.log(). I just have to locate a single line that's past the point where something has gone wrong, and then I can very quickly binary search between that line and the event handler to find the offending line and fix it. Of course, the binary search is guided by intuition, and if I don't find the bug after a few alerts I'll start setting breakpoints in the debugger and/or stepping through instead, but somehow I've always seen the debugger as this big, heavyweight sledgehammer I don't whip out unless I'm actually stuck.

I do the same thing in all of the above mentioned languages, and in fact have never used a debugger at all for Ruby or the Pythons, though I assume they exist (nor for shell scripting or PHP, but do those exist?). I consider myself at least a proficient coder, and in particular have substantially used all the above languages (I've also used Scheme in a class, but nothing substantial). One of the best coders I know, who mainly codes in Ruby, also often first tries puts debugging. Yet I've also been ridiculed (in a nice way) by another friend for even suggesting to someone that they try to debug what seems like a simple mistake with a few quick print statements.

In a way I agree with them--print statements are a crude way to debug. But they're quick and dirty and get the job done and aren't hard at all to clean up. So I wonder: how many others debug with print statements first and a debugger second?




Yes, I use it all the time.

Using a real debugger is great, and it gives FAR more power. It also allows me to wander around and check the values of various things without having to re-compile, re-deploy, and re-run the application up to the point where the error is occurring.

But getting the debugger to work takes a few hours of setup. If I'm at someone else's desk helping them debug a problem, it will take hours to get the debugger working, and only a few minutes to add the print statement.

If I'm working at my own desk and I briefly need to make a small change to another application that I don't normally work on, it would take me hours to get the debugger set up and only a few minutes to add the print statement.

If I'm fixing a problem that only occurs in QA (or worse yet, in production) then I have to use logging or a print statement because it would take days to get permission to attach my debugger to QA.

If I'm working on my own usual project, then I use the debugger because it is many, many times more powerful. Unless it's Tuesday, or for some other random reason the environment gods hate me and my debugger isn't working today. Then, depending on how much of a rush I'm in, I invest another few hours into fixing it or I just add a print statement and get on with my work and fix the environment later.

PS: I'm writing this while waiting for my IDE to recompile the world in my 4th attempt to get my debugger working again because my environment broke for no known reason.


I debug primarily with print statements. There are a few times when a graphical debugger has been/would have been easier.

At a Python conference some 10 years ago I asked the same question. About 1/2 the people in the audience who responded said they were also primarily print-statement debuggers.


I debug with printf. I also use nano or pico as an editor and write web application backends in C++. These choices are probably not very common, but who cares?

It's simple enough: they work for me for some reason. If they work for you too, great! If not, keep searching.

I find that debuggers are just another tool, and use them when necessary (not too often). They do not excuse you from really understanding the entire state machine which is your program. Woe to those who try to fix something without first understanding it. It turns out that my well-placed printf/LOG calls tell me what I need to know most of the time.


Not if I can help it.

I mostly do medium-scale (5-50 people) C++ development. There are two main modes of debugging that I fall into: 1) I just wrote some code. Does it actually do what I planned? Let's see where it goes in practice. 2) Some code I've never seen before just screwed up. What the hell just happened? OMG, what the hell is all this stuff?

It helps that games are largely "while(1) doMainLoop();" kinds of apps as opposed to event-driven, asynchronous, multi-process, distributed, interpreted, whatever apps. It also helps that, as much as I can, I work in Visual Studio. The debugger works easily, reliably, continuously. It's faster for me to casually browse through deeply nested data structures than it is to type "print foo.x, print foo.x.bar". Data breakpoints are a godsend. "Break, change a variable and continue" is really nice for tweaking and quick-and-dirty testing. "Break and move the instruction pointer" is great for re-running a rare screw-up or skipping over code that would kill the process you are deep in the middle of debugging.

It seems to me that most of the people who rail against debuggers are either A) working on systems that are not amenable to debuggers, B) stuck with very poor debuggers or none at all, or C) don't need debuggers because they're either working alone on very small code bases or have dedicated many years of their life to absolutely understanding every line of a single, slowly changing code base.


Most of the time, by looking at a backtrace (I primarily debug C stuff, by the way), I already have a good idea what the problem might be. Tracking that down in a debugger is horribly inefficient, especially if optimisations are involved (how many times have you tried to print a local variable, or a parameter, only to be greeted with <this value has been optimized out>?), so another way to see the program flow is required.

printf() does wonders in this area, if you can put the calls in the right places. Over the years, I found and fixed far more issues with printf() than with a debugger.

Granted, sometimes a debugger is necessary, at least until I understand the problem and become able to reproduce it. Then it's printfs again. If I can't reproduce it and only have a core dump to work with, then, obviously, a debugger is the only choice.

But when trying to examine running code, printfs are - in my experience - more straightforward to work with.


By way of background, I'm older than a lot of people here (42) and have been programming since the 1980s. Programming became a hobby for me after I got my law degree.

I still use print statements for debugging because they work - by now, I've switched primary languages three times (BASIC -> Turbo Pascal -> C -> Python) and development environments way too many times to count. I remember when IDEs were new.

My point is not that IDEs suck (they're pretty nice, really), but that unless I'm deep in a project, the time it takes to learn the ins and outs of the debugger/IDE environment probably won't lead to a net increase in productivity.

Also, IDE development is more complicated when the code is running on a server somewhere else.

I work interactively in the Python interpreter when testing/experimenting and then use vi to work on the real code.


I use some sort of logging a lot. In general it works pretty well; however, there are some situations where it can cause problems. Outputting text to the screen or to a file is a relatively expensive operation, so it can screw up timing, because it can be slow or cause context switches (which can get in the way of debugging some multi-threaded issues such as deadlock or resource access).

Back in my C++ days, I finally created a fast "output queue" which would hold my tracing strings; a separate thread would then read those and output them to wherever. That way, the thread doing the tracing would not be _as_ affected by timing issues.
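Roughly this shape, sketched here in Python rather than the original C++ (the trace()/_writer() names are made up for illustration, not from my old code):

    import queue
    import sys
    import threading

    _trace_queue = queue.Queue()

    def trace(msg):
        # Cheap for the calling thread: just an enqueue, no I/O.
        _trace_queue.put(msg)

    def _writer():
        # The slow part (formatting and writing) happens off the hot path.
        while True:
            msg = _trace_queue.get()
            if msg is None:  # sentinel to shut the writer down
                break
            sys.stderr.write(msg + "\n")

    threading.Thread(target=_writer, daemon=True).start()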


Depends on what I'm doing. I tend to think of debugging as roughly split into algorithmic and systemsy debugging: do I think I've messed up an algorithm, or do I have some messed-up software-engineering stuff like component interaction or memory management?

For algorithmic debugging I tend to do mainly printf debugging. In fact usually more like: read and mentally trace the code, and then printf a few things just to make sure they're what I expect them to be.

I use "real" debuggers if it's a more systemsy thing, like a bunch of interacting components, or inexplicable weird crashes.


have never used a debugger at all for Ruby or the Pythons, though I assume they exist (nor for shell scripting or PHP, but do those exist?)

You can remotely debug PHP using the xdebug extension (http://xdebug.org/docs/remote). You'll need a DBGp client too. It's useful for complex logic but I still find myself falling back to var_dump() for quick tests.


I code Perl at my day job and I use print statements. I'm coding in a terminal, and I've never really learned how to use the Perl CLI debugger. Normally I suspect what's broken and just need the print statements to verify it.

In Xcode I use the breakpoints because I'm practically already in the debugger, and I did similar when I had to hack on some C# & VB code.


I used to do that (in Python) but find it's much more efficient to use the logging module. If you're going to put in a print statement, that's likely to be a good spot to put in a debug/error/info/warn log, so when your application is being used you can turn the logging to whichever level you need and see what's going on.
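A minimal sketch of what that looks like (the process() function and its fields are placeholders, nothing specific):

    import logging

    logging.basicConfig(level=logging.INFO)  # switch to logging.DEBUG when you need the detail
    log = logging.getLogger(__name__)

    def process(order):
        log.debug("raw order: %r", order)  # where a throwaway print() would have gone
        if not order:
            log.warning("empty order received")
            return None
        log.info("processing order %s", order.get("id"))
        return order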


For JavaScript I use console.log and "debugger". If you're unfamiliar, including a "debugger;" statement will pause execution and open a debugger (but only if one is available). Very lightweight and handy. This works in Chrome, Firefox, and likely others. Further, you can conditionalize it with if statements.
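The Python equivalent, for anyone reaching for the same trick outside the browser, is pdb.set_trace(), and it can be conditionalized the same way (the condition below is invented):

    import pdb

    def handle_click(event):
        if event.get("target") is None:  # some made-up condition worth stopping on
            pdb.set_trace()  # drops into an interactive debugger right here
        # ... rest of the handler ...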


Depends on the language and what kind of software I'm working on. Building a C# app for WP7? It's trivial to toss in breakpoints and step through the code using Visual Studio.

Doing PHP work for some simple CRUD web app? Echo statements usually do the trick.


I used to use a debugger more extensively when I was writing Fortran. More recently, writing Python, S-Plus and VBA, I've mostly used print (or MsgBox) - it's easier to do with interpreted languages, and stepping through code is also easier.


in general I don't think it makes that much of a difference. find what works for you.

(ruby-debug, for example, doesn't feel like a big, heavyweight sledgehammer, so you might just wanna try it out.)

for me, it depends on the complexity of the problem.

when there's something wrong and I have absolutely no idea where to start (which is a bad sign anyway :) ), using the debugger is easier than scattering print statements all over my code. but when one print() will probably tell me what the problem is, I don't bother using the debugger.

(… and thanks for reminding me to learn how to use ruby-debug.)


My code normally has all sorts of: if(DEBUG) { print $foo }; (with a global boolean DEBUG variable) in it.

Ugly? Probably. Useful ass-saver? Definitely.
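The same pattern in Python, as a tiny sketch (the DEBUG flag and compute_total() are placeholders):

    DEBUG = True  # global flag; set to False to silence the output

    def compute_total(items):
        total = sum(items)
        if DEBUG:
            print("compute_total: items=%r total=%r" % (items, total))
        return total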



