While print-type debugging has a place, the reason there are a lot of articles dissuading the practice is the observed reality that people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools.
This isn't just an assumption I'm making: years of being in developer leadership roles, and then watching a couple of my own sons learn the practice, have shown me in hundreds of cases that when print-type debugging shows up, a session demonstrating how to use the debugger to its fullest is a very rewarding effort. Even experienced developers from great CS programs are sometimes shocked to see what a debugger can do.
Walk the call stack! See the parameters and values, add watches, set conditional breakpoints to catch that infrequent situation? What! It remains eye opening again and again for people.
Not far behind is finding a peer trying to optimize by eyeballing complexity, and showing them the magic of profilers...
> the reason there are a lot of articles dissuading the practice is the observed reality that people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools.
While perhaps this is true of some sort of junior developer, I have both written my own debuggers and still lean heaviest on print debugging. It's trivially reproducible, incurs basically zero mental overhead, and can be reviewed by another person. Any day I break out a debugger is a bleak day indeed.
Profilers are much easier to argue for as it is very difficult for one to produce equivalent results without also producing something that looks an awful lot like a profiler. But in most cases the mechanisms you mention are just straight unnecessary and are mostly a distraction from a successful debugging session.
Edit: in addition to agreeing with a sibling comment that suggests different problems naturally lend themselves more to debugging (e.g. when writing low-level code a debugger is difficult to replace), I'd also like to suggest a third option languages can take: excellent runtime debugging a la Lisp conditions. If you don't have to unwind the stack to catch an exception, if in fact you can modify the runtime context at runtime and resume execution, you quickly get the best of both worlds without having to maintain an often extremely complex tool that replicates an astonishing amount of the language itself, often imperfectly.
I find that which tools I need changes immensely depending on what kinds of projects I'm working on.
When debugging parsers for my toy programming languages print debugging is less helpful and I make heavy use of all the debug tools you mention. The same goes for most types of business logic—writing a test and stepping through it in the debugger is usually the way to go.
But when troubleshooting odd behavior in a complex web app, the inverse is true—there are usually many possible points where the failure could occur, and many layers of function calls and API calls to check, which means that sticking a debug statement prematurely slows down your troubleshooting a lot. It's better to sprinkle logs everywhere, trigger the unexpected behavior, and then skim the logs to see where things stop making sense.
In general I think there are two conditions that make the difference between the debugger or print being better:
* Do you already know which unit is failing?
* Is there concurrency involved?
If you don't yet know the failing unit and/or the failing part of the code is concurrent, the debugger will not help you as much as logs will. You can use logs to narrow down the surface area until you know where the failure is and you've eliminated concurrency; you just shouldn't jump straight to the debugger.
I think we need to differentiate between printf-style debugging and considered, comprehensive logging, ideally with logging levels. While the two might seem to fall under the same umbrella -- both print some sort of textual artifact history of execution -- the latter is long-term and engineered, while the former is generally reactionary.
e.g. LOG(INFO_LEVEL, "Service startup") and printf("Here11") are completely different situations.
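To make that concrete, here's a minimal Python sketch of the two situations, assuming the standard logging module (the logger name and messages are made up):

    import logging

    # Engineered, long-term logging: levelled, timestamped, filterable in the field.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    log = logging.getLogger("billing.service")
    log.info("Service startup")
    log.debug("Loaded %d tariff rules", 42)   # invisible unless the level is raised to DEBUG

    # Reactionary print debugging: no level, no context, meant to be deleted.
    print("Here11")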
Indeed, the very submission is arguing for printf style debugging instead of logging. Like it uses it as the alternative.
Real-world projects should have logging, with configurable logging levels, such that a failing project in the wild can be switched to a higher logging level and you can gather up a myriad of logs from a massive, heterogeneous, cross-runtime, cross-platform project and trace through to figure out where things went awry. But that isn't print debugging, and it isn't the subject of this discussion.
> Indeed, the very submission is arguing for printf style debugging instead of logging. Like it uses it as the alternative.
Yeah, this is a bogus distinction they're drawing. Logging and printf style debugging are the same thing at different phases of the software lifecycle, which means they can't be alternatives to each other because they can't exist in the same space at the same time.
As soon as your printfs are deployed to prod, they become (bad) logs, and conversely your "printf debugging" may very well actually use your log library, not printf itself.
This. Last week at work I was investigating an odd flaky behavior. There was no way to do it with a debugger. I added logging to every suspicious place and ran all 40 containers of our distributed monolith in local Docker. It turned out there was a race condition between consumption of Kafka messages and REST calls.
Have you ever been able to try https://replay.io time travel debugging as an alternative to conventional logging?
Last time I tried it you were able to add logging statements "after the fact" (i.e. after reproducing the bug) and see what they would have printed. I believe they also have the ability to act like a conventional debugger.
I think they're changing some aspects of their business model but the core record / replay tech is really cool.
If your parsers are pure¹, REPR testing and state-transition logging (trying X, X rejected, trying Y, Y is successful with input "abc") will beat any other tool by such a margin that it will feel like they aren't even in the same competition.
1 - If your parsers are not pure, you either have a very weird application or should change that.
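For what that state-transition logging can look like, here's a rough Python sketch with a toy alternation combinator (all names are made up for illustration):

    def alt(*parsers):
        """Try each parser in order, logging every transition."""
        def parse(text):
            for p in parsers:
                print(f"trying {p.__name__} on {text!r}")
                result = p(text)
                if result is None:
                    print(f"{p.__name__} rejected")
                else:
                    print(f"{p.__name__} succeeded with {result!r}")
                    return result
            return None
        return parse

    def number(text):
        return text if text.isdigit() else None

    def word(text):
        return text if text.isalpha() else None

    parse_token = alt(number, word)
    parse_token("abc")
    # trying number on 'abc' / number rejected / trying word on 'abc' / word succeeded with 'abc'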
I do sometimes use print debugging despite having (in my opinion) a decent knowledge of the debuggers in the toolchains I use. Part of it is that you could set a conditional breakpoint for a condition, if you know what it is, but sometimes you're just probing to see what differs from expectations, and putting every expectation into a conditional breakpoint is a pain with most debugger UIs. In theory you could use logpoints instead of print statements, but again the UI for this is often a pain compared to just typing in print. And even when you get to the breakpoint, you'll increasingly run into the dreaded "this variable has been optimised away" in modern languages, and it also doesn't give you a history of how it got there - maybe if rewind debugging were more commonly supported that would help, but it isn't.
Also suspending a thread to peek around is more likely to hide timing bugs than the extra time spent doing IO to print.
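For what it's worth, one middle ground for the conditional-breakpoint pain described above is to write the expectation in code and only drop into the debugger when it's violated; a minimal Python sketch (the function and values are purely illustrative):

    def compute_total(order):
        # Hypothetical stand-in for the code under suspicion.
        return sum(order) - 10

    def handle(order):
        total = compute_total(order)
        # The expectation lives in code: break only when it is violated,
        # instead of configuring a conditional breakpoint in the debugger UI.
        if total < 0:
            breakpoint()   # drops into pdb at this line (Python 3.7+)
        return total

    handle([1, 2, 3])      # total is -4 here, so this run stops in the debugger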
The thing is, in interpreted-language land print debugging has a power no debugger gives you: live debugging on your production instance.
Something is broken in prod, you cannot reproduce it in your test environment because you think it may be due to a config (some signing keys maybe) you can't check. And it looks like someone forgot to put logs around whatever is the problem.
You can either spend multiple hours trying to reproduce and maybe find the cause, or take 5 minutes, shell into one of your nodes, add some logging live, and have a result right now: either you have your culprit or your hunch is false.
With modern JS stacks being somewhat compiled anyway, and the backend often being C# or Java, I don't think that case is very applicable. Not to mention that developers logging into production servers and making whatever changes they want is a huge red flag.
I’ve definitely seen an undercurrent of “I don’t need the crutch of a debugger” sort of attitudes online over the years, never really made sense to me. It can be painful pairing with someone who keeps adding print statements one at a time and repeating the 15 step process to get to them when they could have put in a breakpoint right out of the gate.
I still print stuff plenty, but when the source of an issue is not immediately obvious I’m reaching for the debugger asap.
> It can be painful pairing with someone who keeps adding print statements one at a time and repeating the 15 step process to get to them when they could have put in a breakpoint right out of the gate.
This does sound painful, but this is not what most people who advocate for print debugging are advocating for.
If I'm only going to add one print statement, that's obviously a place where a breakpoint would serve. When I do print debugging, it's precisely because I haven't narrowed down the problem that far yet—I may have ten theories, not one, so I need ten log statements to test all ten theories at the same time.
Print debugging is most useful when the incorrect behavior could be in one of many pieces of a complex system, and you can use it to rapidly narrow down which of those pieces is actually the culprit.
I wouldn't categorize debuggers as a crutch, for "lazy minds", or anything like that. Everyone should use the tools they feel most productive with.
However, at least personally, I've also felt that there was a lot of truth to that Ken Thompson quote. Something along the lines of: "when your program has a bug, the first thing you should do is turn off the computer and think deeply."
Basically, a bug is where your mental model of the code has diverged from what you've actually written. I think about the symptoms I'm observing, and I try to reason about where in the code it could happen and what it could be.
The suggestion in the parent comment that I'm just too stupid to look into or learn about debuggers is so condescending and just plain wrong. I've looked into them, I know how to use them, I can use them when I want to. I simply tend not to, because they don't solve any problem that I have.
Also, the implication that I don't use completely unrelated tools like profilers is equally asinine. Debuggers and profilers are two completely different tools that solve completely different problems. I use profilers almost every day of my career because it solves an actual problem that I have.
Performance optimization is an excellent example of exactly the opposite: why thinking first and building a model of the code before putting in a trace, metric, or log call beats mindless debugging. Interactive debugging has its uses, but they may be fewer than appears at first, and it encourages the wrong things (local focus, irrelevant details). You should first ask yourself how fast the code should be and why (build the model), and only then measure. Learning happens when you get unexpected results. There is limited utility in running profilers without a thought.
You should consider performance/efficiency at all stages. And that consideration should be based on an informed feedback loop where assumptions are validated and proven empirically. What developers think are performance patterns often wildly diverges from reality.
The scenario I gave is when there are performance problems with a developed project (you know -- where a profiler is actually usable, after they already decided on an approach and implemented the code) and the developer is effectively guessing at the issues, doing the iterative optimize-this-part-then-run-and-see-if-it's-fixed pattern. This is folly 100% of the time. Yet it's a common pattern.
Measurement is a necessary step, but if you don't think through the expected results first then it is easy to rationalize any results that a profiler provides (easy to lie to yourself).
I like the solid approach described in Understanding Software Dynamics by Richard L. Sites
I've used both and most of the time I'm still print debugging, because the big advantage of print debugging is that it shows you exactly the kind of information you're looking for and nothing else.
Exactly this. Nobody is looking down on print debugging. Everybody uses it. People are looking down on those that stop at print debugging and never reach for a full debugger.
It's particularly annoying on projects that are set up without considering proper debuggers, because often it's impossible or difficult to use them, e.g. if your program is started via a complicated bash script or makefile rather than directly.
If you use Visual Studio on Windows, you have one nice option for multi-process debugging: https://marketplace.visualstudio.com/items?itemName=vsdbgpla... - auto-attach the debugger to child processes as they are invoked. I've also found this good in the past for debugging some kinds of client/server setup by having a little wrapper program that runs the combination of clients and servers required on your local PC.
All the processes end up being debugged simultaneously in the same instance of the debugger, which I've found to make light work of certain types of annoying bug. You might need a mode where the timeouts are disabled though!
In the embedded world, you use remote debugging (like gdbserver on the target and gdb on a development machine). There are issues like some third party pieces being debugged that are not part of your application, and not built in a way that plays along with your debugging environment. Those pieces may be started not simply by shell scripts, but C code, which hard codes some of their arguments and whatnot. You need networking for remote debugging, but the problem you're debugging might occur before networking is up on the target.
I kinda want to push back against the blanket statement that there are articles pushing back on print debugging. That implies there’s well known mind share thinking about it?
Is it real mind share? Is it bullshit?
Print debugging is the literal pocket knife of debugging.
There are loads of articles discouraging print debugging, and it's a very real thing that people fight against (and for). Print style debugging is the first thing most programmers learn, and for some it absolutely becomes a bad habit.
And to be clear, print debugging and pervasive, configurable logging are very different things, and the latter is hugely encouraged (even with logging levels), while the former is almost always suboptimal. Being able to have your client turn on "DEBUG" logging and send you the logs after some abnormal behaviour is supremely useful. Doing printf("Here!") in one's project is not, or at least not remotely as useful as better approaches.
Print 'here' is very useful. I get a log of how many times that function is called. If I log some data, I see how that data changes over time. Those are powerful tools.
FWIW, many debuggers have facilities to do precisely this. The JetBrains debuggers allow you to set a breakpoint that -- in a non-stopping way -- simply logs every time it was passed, or logs whatever values you want it to log as an expression. So in one potentially non-stopping run you get an output of all of it.
I use a mix of strategies depending on the target platform. Right now, for nearly all of my hobby projects, the target platform is an old processor running on a weird game system without enough memory to run a real debugger and with no ability to expose that debugger's state to the PC. In these cases, I can't even really use printf (where would it print to?) and must instead rely on painting the debug information to the screen somehow. It's a wild and wacky set of techniques.
Of course, I pair this with a modern emulator for the target platform where I can at least see my disassembly, set breakpoints, watch memory values. But when I'm working on some issue that I can only reproduce on hardware, we get to bust out all the fun manual toys, because I just don't have anything else available. On the very worst days, it's "run this routine to crash on purpose and paint the screen pink. Okay, how far into the code do we get before that stops happening? Move the crash handler forward and search. (Each time we do this we are flashing an eeprom and socketing that into the board again.)"
The availability of tools is severely dependent on the runtime and language. With most of my work being in interpreted languages, it's just way easier to either use a REPL or print statements, as getting good debugging to work involves having Just That Particular (often commercial) IDE, Just That Particular Version of the runtime (often outdated), etc. These things frequently break, and before you have gotten it to work again you have spent so much time that using it over a REPL just isn't worth it. I never made the effort to master GUI-less debuggers like gdb, though.
That said, on one project I did have a semi-decent experience with a debugger for PHP (couple of decades back) and when it worked - it was great. PHP didn't have much of a REPL then, though.
Absolutely true that not all runtimes and languages have the same level of tooling. But the state of tooling has dramatically improved and keeps improving.
I use PyCharm for my Python projects, for instance, and it has absolutely fantastic debugging facilities. I wouldn't want to use an IDE that lacked this ability, and my time and the projects are too valuable to go without. Similar debugging facilities are there for Lua, PHP, TypeScript/JavaScript, and on and on. Debuggers can cross processes and even machines. Debuggers can walk through your stored procedures or queries executing on massive database systems.
Several times in this thread, and in the submission, people have referenced Brian Kernighan's preference for print versus debugging. He said it in 1979 (when there was basically an absence of automated debugging facilities), and he repeated it in an interview in 1999. This is used as an appeal to authority and I think it's just massively obsolete.
As someone who fought with debuggers in the year 2000, they were absolute dogshit. Resource limitations meant that using a debugger meant absolutely glacial runtimes and a high probability that everything would just crash into a heap dump. They were only usable for the tiniest toy projects and the simplest scenarios. As things got bigger it was back to printf("Here1111!").
That isn't the case anymore. My IDEs are awesomely comprehensive and capable. My machine has seemingly infinite processor headroom where even a 1000x slowdown in the runtime of something is entirely workable. And it has enough memory to effortlessly trace everything with ease. It's a new world, baby.
Another thing I think leads people to print debugging - which is both a strength and a weakness of the approach:
It minimises the mental effort to get to the next potential clue. And programmers are naturally drawn to that because:
1. True focus is a limited resource, so it's usually a good strategy to do the mentally laziest thing at each stage if you're facing a hard problem.
2. It always feels like the next time might be it - the final clue.
But these can lead to a trap when you don't quickly converge on an answer and end up in a cycle of waiting for compilation repeatedly whilst not making progress.
One of the big benefits of print debugging is that it's useful for the initial "bifurcate the code" process.
If you add a bunch of print statements every few lines, it's easier to run the code and see that you got to the checkpoints at lines 1580 and 1587 but not 1601, than to manually click through a dozen breakpoints and note the last one you passed before the problem occurs.
If you have a "hard crash" that gives you a stack trace, that's less of a need, but often it's something like "this value was 125 when it entered module Foo and came back as 267". Monitoring the expression can sometimes help, but it might also be a red herring (we trap that it got set at the end of function Bar, but then we have to dig into function Bar to find the trace). Printfs can include whatever combination of values is worth reporting at any time.
Yes, any debugger can do all of that, but when trying to spin it up ad-hoc, printfs can be less hassle than trying to pull up the debugging tools and wire it up inside the IDE.
> people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools
not always; sometimes print debugging is much more time efficient due to the very slow runtime required to run compute-intensive programs in debug mode. I'll sometimes forego the debugger capabilities in order to get a quick answer.
Speaking from my own experience I'm not so sure that printf debuggers just have "incomplete knowledge [...] of modern debugging tools". I use printf (or the file-based equivalent, log files) quite a lot, but nobody can accuse me of not knowing good debugging environments.
Also, what's "modern" about "Walk the call stack! See the parameters and values, add watches, set conditional breakpoints"? Those are all things we had many decades ago (for some languages, at least). If anything, many modern debugging environments are fat and clunky compared with some of the ones from way back when. What has greatly improved, though, are time-travel debuggers, because we didn't use to have the necessary amounts of memory a lot of the time.
So please refrain from calling people with different preferences uneducated. [Ed. I retract this bit, though I think it is not unreasonable to associate lack of knowledge with lack of education (not necessarily formal education!) I don't want to quibble over semantics.]
I've never been able to successfully debug anything with a conditional breakpoint or watch, in spite of knowing about these things and trying.
(Well, other than my own conditional breakpoint features built into the code, doing things like programmatically trigger a breakpoint whenever an object with a specific address (that being settable in the debugger interactively) passes certain points in the garbage collector.)
I said such users often have incomplete knowledge. Are there exceptions? Sure. Of course there are.
"Those are all things we had many decades ago"
I didn't claim this is some new invention, though. But as someone who has been a heavy user of debuggers for DECADES, I can say debuggers have dramatically improved in usability and in the scenarios where they are useful.
"So please refrain from calling people with different preferences uneducated."
But...I didn't. In fact I specifically noted that graduates of excellent CS programs often haven't experienced how great the debuggers in the platforms they target are.
We all have incomplete knowledge about a lot of things.
That's a hell of an assumption to make and I don't quite understand your reasoning. Debuggers are complex, true, and often people don't understand their potential. People, however, approach learning code and associated tools from many different directions, backgrounds, assumptions, and biases. If I were to read beyond your words and guess why you're so emphatic, I sense that you're coming from a distinct background (I won't bother guessing) where this is either obscured from you or where you've been allowed to forget.
You’re absolutely right - but it’s worth mentioning that print debugging is the only sanity-preserving way to debug distributed systems (spans are basically super fancy prints) or systems which need to run at full speed (optimized builds) to reproduce bugs… sometimes the easy way is the only way.
Isn't it better to use a logger than print statements for distributed systems? Maybe I'm putting too much logging everywhere, but distributed systems are typically a use case where a bug can appear, be 'fixed' (or 'fix itself'), then re-appear two months later (the heisenbug). In this case, DEBUG=true and relaunching the app with a logger is often better imho (and if your logger is good, it prints to stdout/stderr when your app is launched locally).
Absolutely - if you want to make this distinction. I put logger.debug() and print() in the same bucket; if you're fancy, you've configured your print to emit logs or configured your linter to forbid print calls.
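A hedged sketch of that DEBUG=true pattern in Python, assuming the standard logging module (the logger name and messages are invented):

    import logging
    import os

    # Relaunch with DEBUG=true in the environment to get the verbose output.
    level = logging.DEBUG if os.environ.get("DEBUG") == "true" else logging.INFO
    logging.basicConfig(level=level, format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("checkout")

    log.debug("raw payload: %r", {"order_id": 42})   # only emitted when DEBUG=true
    log.info("order accepted")                       # always emitted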
The problem with debuggers is that they're scoped to one particular technology, and by the time I learn how to use it, I'm already in a new project, doing new things. Meanwhile print is almost universal.
They both have their place. Print debugging is ineffective for complex problems. The full power of debuggers is overkill for simple problems. Neither option is more powerful than slowing down, observing the problem, and just thinking about it. When you throw rubber duck debugging into the mix, all three tend to be fairly evenly distributed in terms of how often they solve the problem, in my experience.
I very much agree with this view. Print debugging is still useful in some cases, for example in game programming where the state of a lot of objects changes rapidly and debugging a single update isn't enough to reproduce the situation.
Some of the trickiest bugs to hunt down are those involving concurrency and non-deterministic timing. In these cases, stepping through with a debugger is not at all what you want, since you may actually change the timing just by trying to observe.
To see the nature of the race condition, just put some print statements in some strategic locations and then see the interleaving, out of order, duplicate invocations etc that are causing the trouble. It's hard to see this type of stuff with a debugger.
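A toy Python illustration of the idea, not anyone's real bug: two threads race on a lost update, and the prints around the read and write are what make the interleaving visible (the race may or may not fire on any given run, which is rather the point):

    import threading

    balance = 0

    def deposit(name, amount):
        global balance
        snapshot = balance
        print(f"{name}: read balance={snapshot}")
        balance = snapshot + amount                 # the lost-update race lives here
        print(f"{name}: wrote balance={balance}")

    threads = [threading.Thread(target=deposit, args=(f"t{i}", 100)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("final balance:", balance)                # 200 if no race, 100 if an update was lost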
Yes, but you can repeat your print-debug-loop once a second, maybe even faster. Hit play and look at the output. Hit play again and see if it changed. It may or may not turn up the concurrency issue.
Stepping through with a debugger will take you at least a minute per cycle, won't turn up the concurrency issue, and will spend a great deal of your daily concentration budget.
I think in this case, because everyone brings up multithreaded examples when saying a debugger isn’t useful, maybe print debugging can lead you towards the path of where to use a debugger efficiently.
I personally think if you can’t use a debugger in a multithreaded codebase, the architecture is bad or one doesn’t understand the code. So yeah, full circle, if print debugging helps one learn the code better, that is only a positive.
I’m so amused about how debuggers have become a debate around here. “Printf vs debugger” is like “emacs vs vi” right now, and it really shouldn’t be. Sometimes I put a breakpoint AT my printf statement.
>printing also will likely impact timing and can change concurrent behaviour as well.
I've had a bug like that and the intuitive way to handle it turned out to be entirely sufficient.
The bug (deep in networking stack, linux kernel on embedded device) was timing sensitive enough that printk() introduced unsuitable shifts.
Instead I appended single-character traces into pre-allocated ring buffer memory. The overhead was down to one memory read and two memory writes, plus associated TLB misses if any; not even a function call. Very little infra was needed, and the naive, intuitive implementation sufficed.
An unrelated process would read the ring buffer (exposed as /proc/ file) at opportune time and hand over to the developer.
tl;dr know which steps introduce significant processing, timing delays, or synchronization events and push them out of critical path
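The original was C inside the kernel; here is only a rough Python analogue of the shape of the technique, a preallocated buffer that trace points append single characters to so the hot path does no formatting, allocation, or I/O:

    # Preallocated once; trace points only touch memory, never format or print.
    TRACE = bytearray(4096)
    _pos = 0

    def trace(marker: bytes) -> None:
        global _pos
        TRACE[_pos] = marker[0]
        _pos = (_pos + 1) % len(TRACE)

    # Sprinkle one-character markers through the code under test:
    trace(b"A")   # e.g. entered the retransmit path
    trace(b"B")   # e.g. took the slow branch

    # A separate reader dumps the buffer at an opportune time:
    print(TRACE[:_pos].decode("ascii", errors="replace"))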
>I appended (...) traces into (...) memory. (...) An unrelated process would read (...) at opportune time and hand over to the developer.
I did something similar to debug concurrent processing in Java: a helper that accumulates log statements in thread-local or instance-local collections and then publishes them, with possibly just a lazySet().
Print logging is pretty good for concurrency IMO because it doesn't stop the program and because it gives you a narrative of what happened.
If you have a time travel debugger then you can record concurrency issues without pausing the program then debug the whole history offline, so you get a similar benefit without having to choose what to log up front.
These have the advantage that you only need to repro the bug once (just record it in a loop until the bug happens) and then debug at your leisure. So even rare bugs become tractable.
I have also seen the print statements added for debugging alter the timing with the same effect on more than one occasion, appearing to “fix” the issue.
This is the exact realisation that made me take a second look at FP around 10 years ago. I haven't looked back since. I certainly couldn't debug concurrency issues in imperative code when I was young and sharp, but at least I tried. Now that I'm old, if I get a concurrency issue, I'll just file a ticket and grab a coffee instead.
For concurrency issues you don't want a debugger or printing, as both are terrible for this; you want a tool designed specifically to detect these issues. I have a custom one, but many other people use Valgrind etc.
This depends a lot on your stack. If we're talking about concurrency issues in a multi-threaded systems-level program, you're probably right and I can't speak to that. But as a web developer when I talk about concurrency issues I'm usually talking about race conditions between network requests and/or user input, and print works fine for those. The timings at fault are large enough that the microscopic overhead of print doesn't change them meaningfully.
The problem with all these 'print debugging is good' and 'print debugging articles are bad' is that none of them provide any context. Print debugging is just one tool in a box full of tools. Pick the right one for the right job. An article that walks through how to make that decision would be very useful. An article that just picks a side in a decades-old debate is just noise.
I'm a CS professor teaching some programming courses. I always make an effort to teach my students how to use a debugger and encourage them to use it, because otherwise they will not even try (print debugging comes naturally, while using the debugger requires more effort). I want them to master it so they can make conscious decisions about what to use.
Then, when I code myself, I use print debugging like 99.9% of the time :D I have the feeling that, for me, the debugger tends to be not worth the effort. If the bug is very simple, print debugging will do the job fast so the debugger would make me waste time. If the bug is very complex, it can be difficult to know where to set the breakpoints, etc. in the debugger (let alone if there's concurrency involved). There is a middle ground where it can be worth it but for me, it's infrequent enough that it doesn't seem worth the effort to spend time making the decision on whether to use the debugger or not. So I just don't use it except once in a blue moon.
I'm aware this can be very personal, though, hence my tries to have my students get some practice with the debugger.
I like debuggers and use them when I can, but folks who say you should only use debuggers tend to not realize:
* Not all languages have good debuggers.
* It's not always possible to connect a debugger in the environment where the code runs.
* Builds don't always include debug symbols, and this can be very high-friction to change.
* Compilers sometimes optimize out the variable I'm interested in, making it impossible to see in a debugger. (Haskell is particularly bad about this)
* As another commenter mentioned, the delay introduced by a debugger can change the behavior in a way that prevents the bug. (E.g. a connection times out)
* In interpreted languages, debuggers can make the code painfully slow to run (think multiple minutes before the first breakpoint is hit).
One technique that is easier to do in printf debugging is comparing two implementations. If you have (or create) one known-good implementation and have a buggy implementation, you can change the code to run both implementations and print when there's a difference in the result (possibly with some logic to determine if results are equivalent, e.g. if the resulting lists are the same up to ordering).
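A minimal Python sketch of that technique, with placeholder implementations (the deliberate bug stands in for whatever you're hunting):

    def sort_reference(xs):
        """Known-good implementation."""
        return sorted(xs)

    def sort_fast(xs):
        """Suspect implementation, with a deliberate bug for the example."""
        out = list(xs)
        out.sort(reverse=True)
        return out

    def check(xs):
        good, fast = sort_reference(xs), sort_fast(xs)
        if good != fast:   # swap in "equivalent up to ordering" logic here if needed
            print(f"MISMATCH for {xs!r}: {good!r} vs {fast!r}")

    for case in ([3, 1, 2], [1], []):
        check(case)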
I think in many scenarios, print debugging wins the cost/benefit analysis - new project where I don't want to learn how to set up a debugger, trying to debug something really custom or specially formatted, etc.
However, if I know I'm going to be working on a project for a long time, I usually try to pay the upfront cost of setting up a debugger for common scenarios (ideally I try to make it as easy as hitting a button). When I run into debugging scenarios later, the cost/benefit analysis looks a lot better - set a breakpoint, hit the "debug" button, and boom, I can see all values in scope and step through code.
I agree with the title that you shouldn't look down on print debugging. You should look down on people who are slow at fixing bugs and who refuse to try tooling to be able to keep pace -- and using print statements can sometimes be a marker of this. But print debugging is just a tool and has its place, and the use of print debugging is not itself an indicator of poor developer performance. If you can find/fix bugs as quickly as I can with a debugger using print statements, then I don't care what you use. If you can do it faster, then I'm going to try to steal your techniques so I can be faster too. If you do it slower, then I hope you would steal my techniques.
Don't care about the tool; care about the performance.
Anecdotally, debuggers are faster than print statements in most cases for me. I've been able to find bugs significantly faster using a debugger than with using print statements. I still do use print statements on occasion when I'm developing something where a debugger is very complicated to set up, or in cases where I'm dealing with things happening in parallel/async, where a debugger is less suited. I'm not going to shame you for using print statements, but I do hope that you've tried both and are familiar/comfortable with both approaches and can recognise their strengths/weaknesses -- something I'm not convinced of by this author, which only outlines the strengths of one approach.
Also not a fan of the manufactured outrage of saying people are being "shamed" for using print statements. Coupled with listing a bunch of hyperbolic articles -- many of which don't even seem to be about debugging but about logging libraries.
(Also as a side note: don't forget if you are using print statements for debugging to check if your language buffers the print output!! You'll likely want to have it be unbuffered if you're using print for debugging)
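In Python, for example, the usual ways to keep debug prints from disappearing into a buffer when the process dies are:

    import sys

    # Flush each debug print explicitly...
    print("reached step 3, state =", {"retries": 2}, flush=True)

    # ...or print to stderr, which is typically line-buffered rather than block-buffered:
    print("reached step 3", file=sys.stderr)

    # ...or run the whole interpreter unbuffered:
    #   python -u script.py     (or set PYTHONUNBUFFERED=1)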
Sometimes there just aren't good enough debuggers available.
Particularly in Rust, the type system is very complex and debuggers often fail to show enough information and dbg! becomes superior.
I mostly use debuggers when I try to understand someone else’s code.
You have one counterexample. How many 0.1x programmers are there using a debugger to blindly change values in the program in the hope it fixes the bug?
I only use print debugging when working on the web, and your mention of console.log makes me think maybe you're in the same boat.
It's an absolutely damning indictment of the developer experience for the web that this is the case. Why aren't our IDEs and browsers beautifully integrated like every other development environment I use integrates the runtime and the IDE?
Why hasn't some startup, somewhere, fixed this and the rest of the web dev hellscape? I don't know.
Don't browsers have some of the best dev tools out there? For example, you can use the `debugger`[0] statement in your JS code to trigger the in-browser debugger when that statement is hit (it's basically setting a breakpoint).
I have always used print debugging, since way before web dev existed. I resort to an actual debugger only occasionally.
Some IDEs do have integrated JS runtimes, so you can use a debugger in the IDE. However since JS runs on browsers and devices out of your control that only works up to a point.
Even printing can have some side effects on the code, by introducing some extra latency that might implicitly fix a race condition, if that’s the bug in question.
Not saying that it's wrong, it's just funny to think that race conditions are hard to debug with any kind of tool, whether debugger or printing.
Talking about print debugging has reminded me of a time I spent two weeks using a sideways form of print debugging to track down a timing bug. It was on an embedded system, and the bug when tripped would take out the serial communications line: at which point I couldn't get any diagnostics, not even print statements!
An in-circuit emulator was unavailable, so stepping through with a debugger was also not an option.
I ended up figuring out a way to be able to poke values into a few unused registers in an ancillary board within the system, where I could then read the values via the debug port on that board.
So I would figure out what parts of the serial comms code I wanted to test and insert calls that would increment register addresses on the ancillary board. I would compile the code onto a pair of floppy disks, load up the main CPU boards and spend between five and ninety minutes triggering redundancy changeovers until one of the serial ports shat itself.
After which I would probe the registers of the corresponding ancillary board to see which register locations were still incrementing and which were not, telling me which parts of the code were still being passed through. Study the code, make theories, add potential fixes, remove register increments and put in new ones, rinse and repeat for two weeks.
Print debugging is the tool most people reach for when they can, but its biggest problem is that you have to change the source code to add the printfs. This is impractical in many circumstances; it generally only works on your local machine. In particular, you can't do that in production environments, and that's where the most interesting debugging happens.
Similarly, traditional debuggers are not available in production either for a lot of modern software -- you can't really attach gdb to your distributed service, for many reasons.
What print debugging and debuggers have in common, in contrast to other tools, is that they can extract data specific to your program (e.g values of variables and data structures) that your program was not instrumented to export. It's really a shame that we generally don't have this capability for production software running at scale.
That's why I'm working on Side-Eye [1], a debugger that does work in production. With Side-Eye, you can do something analogous to print debugging, but without changing code or restarting anything. It uses a combination of debug information and dynamic instrumentation.
Side-Eye is massively inspired by DTrace in some of its raw capabilities and the basic idea of dynamic instrumentation. Beyond that, they're very different. At a low level, DTrace is primarily geared towards debugging the kernel, whereas Side-Eye is about userspace. DTrace's support for the DWARF debug information format used on linux is limited. The interaction model is different - for DTrace you write scripts to collect and process data. DTrace works at the level of one machine, whereas Side-Eye monitors processes across a fleet. In Side-Eye you interact with a web application and you collect data into a SQL database that you can analyze. Side-Eye is also a cloud service that your whole team is supposed to use together over time.
And then there are more technically superficial, but crucial, aspects related to specific programming language support. Side-Eye understands Go maps and such, and the Go runtime. It can do stuff like enumerate all the goroutines and give you a snapshot of all their stacks. We're also working on integrating with the Go execution traces collected by the Go scheduler, etc.
Shame? My perception is that the situation is quite the opposite, in that tons of people online proudly proclaim at any opportunity that they only use print-debugging.
Which in some cases I see as related to a sort of macho attitude in programming where people are oddly proud of forgoing using good tooling (or anything from the 21st century really).
If anyone is making you feel ashamed for using one of the most fundamental, bread and butter debugging techniques, that's a red flag about that person. Not you. If there are better tools available, fine, use 'em. But there is absolutely nothing wrong with tossing out a console log to see what's going on.
There is nothing wrong with using print debugging, but if you are only using print debugging, finding yourself doing a ton of compiler round trips, and don't know how to use a debugger, you are doing yourself a massive disservice.
Print debugging is a disaster. If there's one widespread practice among non-beginner developers that wastes hours of time, it's print debugging. I honestly can't count how often I have seen people, even experienced programmers (usually because they insist on running some bespoke vim setup), refuse to use a graphical debugger to actually step through a program, and instead spend hours hunting down bugs they could have found in ten minutes.
There's a section of an interview with John Carmack (https://youtu.be/tzr7hRXcwkw) where he laments the same thing. It's what the Windows/game development corner of the programming world actually got right, people generally use effective tools for software development.
Agreed, but print debugging should also not be entirely dismissed. It is not the first tool you should reach for, but it is a valuable tool when your more sophisticated tools fail or are not a good fit for the job.
It also ties into the importance of logging. If you know how to do print debugging well you'll know how to do logging well. And while a crash dump is very useful and allows you to inspect the crash with a debugger, only a good log can give you the necessary context to determine what led up to the crash.
Hmm, all those "don't use print" headlines shown in the article seem to be simply click-bait headlines for articles that aren't really shaming print debugging but instead illustrating other debugging tools that some programmers may not know about.
I remember the good old days when I was first learning programming with Applesoft BASIC where print debugging was all there was, and then again in my early days of 8051 programming when I didn't yet have the sophisticated 8051 ICE equipment to do more in depth debugging. Now with the ARM Cortex chips I most often program and their nice SWD interface, print debugging isn't usually necessary. But I still use it occasionally over a serial line because it is simple and why not?
Printing gives you a trace, breakpoints give you a point in time. They are two different things.
The closest between the two is a logging breakpoint, but the UI for them is generally worse than the UI of the main editor and the logging breakpoint has the same weakness as regular print calls, i.e. you've turned the data into a string and can therefore no longer inspect the objects in the trace.
What I would expect from a debugger in IntelliJ is that when you set a logging breakpoint, then the editor inserts the breakpoint logic source code directly inline with the code itself, so that you can pretend that you are writing a print call with all the IDE features, but the compiler never gets to see that line of code.
To me there are three requirements for being comfortable with a team culture of print-debugging.
1. If a breakpoint debugger exists for the stack, it should still be convenient and configured, and the programmer should have some experience using it. It's a skill/capability that needs to be in reserve.
2. The project has automatic protections against leftover statements being inadvertently merged into a major branch.
3. The dev environment allows loading in new code modules without restarting the whole application. Without that, someone can easily get stuck in rather long test iterations, especially if #1 is not satisfied and "it's too much work" to use another approach.
Let's flip that around and replace "debugging" with "coding" for a moment. Imagine you have a new junior developer on the team, and their deliverables seem okay.
Then one evening you discover they've been staying late manually reformatting and reindenting all of their code using notepad before each commit. They explain this is because it's what they know will work reliably, and those other tools gave odd errors on their computer or had too many confusing options or needed some kind of bridge or dependency.
I might be impressed with their work ethic, but I can't just not-care about the problem that has risen into view. (Unless I'm literally counting the days until I move elsewhere.)
Yes, but then imagine saying on the next stand-up that you took a couple hours or a whole day on adding new NATVIS visualizers or GDB pretty-printers, and think of the reaction you'd get. Approval and interest, or eyerolls and comments on work prioritization? That is the difference between the two cultures. It matters at least because use of debuggers gets more effective with investment into support scripts and configs, in ways print-debugging doesn't.
Debugging with print has one feature that debuggers will never match. You can send the binary or code with print/log statements to someone else who is experiencing the problem and get them to run it.
Often I have to debug bugs I can't reproduce. If method 1 - staring at the code - doesn't work, then it's add print/log statements and send it to the user to test. Repeat until you can reproduce the bug yourself or you fixed it.
You can do something equivalent with time travel debugging since you can get the user with the problem to record, then ship you back a recording (or, if they don't want to, send them a debugger script to extract the logging you want).
I think print debugging is a symptom of an underlying problem.
There is nothing bad about print debugging, there is no reason to avoid it if that's what works with your workflow and tools. The real question is why you are using print and not something else. In particular, what print does better than your purpose-built debugger? If the debugger doesn't get used, maybe one should look down on that particular tool and think of ways of addressing the problem.
I see many comments against print debugging that go around the lines of "if you learn to use a proper debugger, that's so much better". But in many modern languages that's actually the problem, you have to invest a lot of time and effort on something that should be intuitive. I remember when I started learning programming, with QBasic, Turbo Pascal, etc... using the debugger was the default, and so intuitive I used a debugger before even knowing what a debugger was! And it was 90s tech, now we have time travel debugging, hot reloading, and way more capable UIs, but for some reason, things got worse, not better. Though I don't know much about it, it seems the only ones who get it right are in the video game industry. The rest tend to be stuck with primitive print debugging.
And I say "primitive" not because print debugging is bad in general, but because if print debugging was really to be embraced, it could be made better. For example by having dedicated debug print functions, an easy way to access and print the stack trace, generic object print, pretty printers, overrides for accessing internal data, etc... Some languages already have some of that, but often stopping short of making print debugging first class. Also, it requires fast compilation times.
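Python has a few of these pieces already, for what it's worth; a small illustration (the state dict is invented):

    import pprint
    import traceback

    state = {"queue": ["a", "b"], "retries": {"a": 1}}

    pprint.pprint(state)        # generic, readable object dump
    traceback.print_stack()     # "how did we get here?" without raising anything
    print(f"{state=}")          # 3.8+ debug f-string: prints both the name and the value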
I think the persistence of print debugging shows the weaknesses of debuggers.
Complicated setup, slow startup, separate custom UI for adding watches and breakpoints.
Make a debugger integrated with the language and people will use it.
You can then pile up on it subsequent useful features but you have to get basic UI right first. Because half of programmers now are willing to give up stepping, tree inspection even breakpoints just to avoid dealing with the crappy UI of debuggers.
Esoteric: I use print debugging on a Lisp Machine using a presentation based Read Eval Print Loop (REPL), similar things would work in some other Common Lisp environments. Presentation based means that the REPL remembers all output and the objects associated with that output.
Printing a list that contains the objects WHAT and WHERE outputs it to the REPL as data, with those objects included. The REPL remembers which object caused each piece of printed output. Later these objects can be inspected, or one can call functions on them...
This combines print debug statements with introspection in a read-eval-print-loop (REPL).
Writing the output as :before/:around/:after methods or as advice makes it easier to later remove all print-output code without changing the rest of the code -> methods and advice can be removed from the code at runtime.
If a good debugger is available, it is a great tool to have. But it is just one out of many tools. Some are more effective than others in different situations.
For example, I rarely used a debugger in my career as an Android driver developer (mostly C), for several reasons.
1. My first step when debugging is looking at the code to build working hypotheses of what sort of issues could be causing the incorrect behavior that is observed.
2. I find assertions to be a great debugging tool. Simply add extra assertions in various places to have my expectations checked automatically by the computer. They can typically unwind the stack to see the whole call trace, which is very useful.
3. Often, the only choice was command-line GDB, which I found much slower than GUI debuggers.
4. Print statements can be placed inside if statements, so that you only print out data when particular conditions occur. Debuggers didn't have as much fine control.
5. Debugging multi-threaded code. Prints were somewhat less likely to interfere with race conditions. I sometimes embedded sleep() calls to trigger different orderings.
There are no good debuggers. They all lack one simple feature: showing what the value of a given watch was across all subsequent calls, in the context of the values of the other watches.
Learning how to use my C debugger felt like a super power. None of my (mid-90s') CS courses even mentioned the existence of a debugger let alone how to use one. First job I had to learn on the fly, and it was one of the most useful tools I picked up post-university.
Print debugging was pretty useless back then because compilation took minutes (a full compile took over an hour) rather than milliseconds. If your strategy was "try something, add a print, compile, try something else, add a print, compile" then you were going to have a very bad time.
People working on modern, fast-dev-cycle, interpreted languages today have it easy. You don't know the terror of looking at your code, making sure you have thought of "everything that you're going to need to debug that problem" and hitting compile, knowing that you'll know after lunch whether you have enough debugging information included. I'm sure it was even worse in the punch card era!
I have long relied on a print-debug function I wrote for python called dump(). You do dump(foo) and it will print out "foo: value". Where the variable name "foo" is magically pulled out of the source code and "value" is a json-dump of its value. So dicts look pretty, like the example below.
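The author's actual implementation and its output aren't reproduced here; a rough sketch of how such a helper can be written (assuming the variable name can be recovered from the call site's source line) might look like:

    import inspect
    import json
    import re

    def dump(value):
        """Print 'name: value', pulling the argument's source text from the call site."""
        caller = inspect.stack()[1]
        src = caller.code_context[0] if caller.code_context else ""
        match = re.search(r"dump\((.+)\)", src)
        name = match.group(1).strip() if match else "value"
        print(f"{name}: {json.dumps(value, indent=2, default=str)}")

    foo = {"user": "ada", "attempts": 3}
    dump(foo)
    # foo: {
    #   "user": "ada",
    #   "attempts": 3
    # }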
This is similar to the "debug f-strings" introduced in python 3.8: print(f"{foo=}"). But it's much easier to type dump(foo) and you get prettier output for complex types.
The article draws a distinction between logging and print debugging, which it should, but in recent work that distinction has been less important to me in practice.
I mostly write Zig these days (love it) and the main thing I'm working on is an interactive program. So the natural way to test features and debug problems is to spin the demo program up and provide it with input, and see what it's doing.
The key is that Zig has a lazy compilation model, which is completely pervasive. If a branch is comptime-known to be false, it gets dropped very early, it has to parse but that's almost it. You don't need dead-code elimination if there's no dead code going in to that phase of compilation.
So I can be very generous in setting up logging, since if the debug level isn't active, that logic is just gone with no trace. When a module starts getting noisy in the logs, I add a flag at the top `const extra = false;`, and drop `if (extra)` in front of log statements which I don't need to have printing. That way I can easily flip the switch to get more detail on any module I'm investigating. And again, since that's a comptime-known dead branch, it barely impacts compiling, and doesn't impact runtime at all.
I do delete log statements where the information is trivial outside of the context of a specific thing I'm debugging, but the gist of what I'm saying is that logging and print debugging blend together in a very nice way here. This approach is a natural fit for this kind of program, I have some stubs for replacing live interaction with reading and writing to different handles, but I haven't gotten around to setting it up, or, as a consequence, firing up lldb at any point.
With the custom debug printers found in the Zig repo, 'proper' debugging is a fairly nice experience for Zig code as well, and I use it heavily on other projects. But sometimes trace debugging / print debugging is the natural fit for the program, and I like that the language makes it basically free to use. Horses for courses.
Print debugging is a great tool in unfamiliar environments. As the article notes, it’s simple and works everywhere.
I do think that it’s worth learning your debugger well for programming environments that you use frequently.
In particular, I think that the debugger is exceptionally important vs print debugging for C++. Part of this is the kinds of C++ programs that exist (large, legacy programs). Part of this is that it is annoying to e.g. print a std::vector, but the debugger will pretty-print it for you.
There are good arguments for both sides, and there is no contradiction. Why shouldn't they both be good tools, depending on the specific case?
I do print debugging most of the times, together with reasoning and some understanding what the code does (!), and I'm usually successful and quick enough with it.
The point here is: today's Internet, with all the social media stuff, is an attention economy. And some software developers try to get their piece of the cake with extreme statements. They exaggerate and maximally praise or demonize something because it generates better numbers on Twitter. It's as simple as that. You shouldn't take everything too seriously. It's people crying for more attention.
If anything the effectiveness/necessity of manually adding print statements to get any feedback about what the program you're working on is doing makes me look down on software development in general.
We are working on a system that could have nearly total visibility, down to showing us a simulation of individual electrons moving through wires, yet we're programming basically blind. The default is I write code and run it, without any visual/intuitive feedback about what its doing besides the result. So much of my visual system goes completely unused. Also debuggers can be a pain to set up, way more reading and typing than "print()"
I still use a debugger for much of my print debugging needs by setting non-suspending breakpoints. This is useful because it allows me to change the printing dynamically, set breakpoints in library code, attach to running processes, and more.
I've used both print and proper debuggers plenty. I tend to lean on print debugging more these days. The thing about debuggers, in addition to often being a headache to set up, is that it usually seems tricky and time-consuming to get them to step to the lines you actually want to examine and skip the stuff you don't. And if you step past something but then later realize it was important, it's time to start over.
It's often faster and easier to set things up to run test cases fast and drop some prints around. Then if there's too much unimportant stuff or something else you want to check on, just switch around the prints and run it again.
Agreed, there are some time-travel debuggers, though they are either only for some programming languages, expensive commercial products, or Linux-only, e.g. rr-debugger [0]. Also there is rerun [1], which is only for image-processing pipeline debugging.
I wish there was something similar like rerun but for code: you record the whole program running and capture all snapshots then stop it running. Now you can analyze all app execution and variables even offline and use any data queries, modify prints without execution and feed it too AI as extra context. I guess RAM would be a big obstacle to make it work since you would either have to capture snapshot at every program modification or some less snapshot but some diffs between what changed.
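As a toy illustration of the idea (and of why the memory cost bites), here is a rough Python sketch that records a snapshot of every local variable at every executed line of one function; all names here are made up for illustration:

    import copy
    import sys

    trace_log = []  # (function name, line number, snapshot of locals)

    def recorder(frame, event, arg):
        if event == "line":
            trace_log.append((frame.f_code.co_name,
                              frame.f_lineno,
                              copy.copy(frame.f_locals)))
        return recorder  # keep tracing inside called frames

    def buggy(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    sys.settrace(recorder)
    buggy(4)
    sys.settrace(None)

    # "Offline" analysis after the run: query the recorded execution however you like.
    for func, line, snapshot in trace_log:
        print(func, line, snapshot)

Even this tiny example stores one dict per executed line, which is exactly the snapshots-versus-diffs trade-off mentioned above.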
Yeah it's a big enough pain point that I'm sure it's been done in at least some places. I'm not really sure it'd be worth the bother and memory consumption etc to set up though. Not when print is dead simple and works everywhere.
Like regular debuggers, it seems to me like something to set up only when print debugging just isn't getting the job done and you think you need something extra to help solve the problem.
> And if you step past something but then later realize it was important, time to start over.
How can you do this using print debugging? For every print statement I add, I can add a breakpoint. Even more importantly, I can see the stack frame and know which functions led to the current one. I can inspect any and all variables in scope, and even change their values if I want to pretend that the code before was fine and proceed further.
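For what it's worth, this kind of inspection doesn't even require an IDE; here is a minimal sketch using Python's built-in pdb (the function and values are made up for illustration):

    def total(prices, discount):
        subtotal = sum(prices)
        breakpoint()  # drops into pdb right here (Python 3.7+)
        return subtotal * (1 - discount)

    total([9.99, 4.50], 0.1)

    # At the (Pdb) prompt:
    #   w                      walk the stack frames that led here
    #   p subtotal, discount   inspect any variable in scope
    #   discount = 0.5         change a value, then `c` to continue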
What I mean here is, with print debugging, the setup is usually a run or test case that you start, that spits out a bunch of text from the prints, and that completes in a second or two. With an interactive debugger, you often end up spending a while stepping around and through things and watching how data flows or changes. Then it can be a pain if you realize something was important after you stepped past it.
Granted, there's nothing really stopping you from using an interactive debugger with frequent short executions, but using print debugging seems to encourage it and interactive debuggers kind of discourage it.
It feels like the author is making the opposite point when they distinguish between logging and print debugging. Logging is the permanent bits of code that either ship with the program or are disabled when the code is built for release. Print debugging is the temporary bits of code that are manually added and removed as needed and are never intended to ship. If that is the distinction being made, then print debugging is problematic, since the developer has to be diligent about removing it once it is no longer needed.
That said, I use print debugging all of the time. It is simply more practical in many cases.
As a ColdFusion developer (it still pays the bills) I've been doing this forever, as Adobe has never really built a good step debugger. Their latest IDE has one, but it's very difficult to set up and is buggy when it does work. BoxLang is modernizing CFML (among other things) and has a nice, working step debugger... https://boxlang.io/
It's always so weird to switch to another language which DOES have a debugger...
I love print debugging. It gives precise and fast information without any prior setup and when I am fully aware of every bit of code I wrote, it never misleads me.
For me print debugging is the best way to work on a script or code if I don't know when debugging will be finished. Most debugging sessions (using a debugger) I ever did were very complex situations where I knew how to trigger the error. And while I am sure that you can save debugging sessions, I just don't need another tool to learn and install on the multiple computers where I test and run my code.
If you're using print debugging in python try this instead:
`import IPython; IPython.embed()`
That'll drop you into an interactive shell in whatever context you place the line (e.g. a nested loop inside a `with` inside a class inside a function etc).
You can print the value, change it, run whatever functions are visible there... And once you're done, the code will keep running with your changes (unless you `sys.exit()` manually)
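A minimal sketch of what that looks like in practice (the surrounding function is just made up for illustration):

    # pip install ipython
    import IPython

    def process(batch):
        for i, item in enumerate(batch):
            result = item * 2
            if result > 10:
                # Opens an interactive shell with `i`, `item`, and `result`
                # in scope; inspect or reassign them, then exit the shell
                # (Ctrl-D) and the loop keeps running with your changes.
                IPython.embed()
        return "done"

    process([3, 6, 9])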
I'd love for someone to explain to me how you can use a debugger across half a dozen languages. Print works on everything from Forth to Python. In my experience you need to learn at least one debugger per language, with a whole bunch of corner cases where they are outright misleading. Has the situation magically changed in the last 10 years?
My only complaint about print debugging is the sheer volume of commented out console.log statements I see across code bases. Or worse, not commented out and happily logging away on prod. Seriously, leave your console open as you browse around — you’ll be astounded by the amount of debug output just rolling along on production.
I have a perfect solution for this, that works for me at least.
Adding a `..` to the end of a variable triggers a macro that changes `val` into something like `print("val:", val) // FIXME: REMOVE`. Then a pre-commit hook makes sure I am unable to commit lines matching this pattern.
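Something like the following pre-commit hook would do it; this is a rough Python sketch under my own conventions (the hook path and exact scripting are assumptions, not necessarily how the parent commenter does it):

    #!/usr/bin/env python3
    # .git/hooks/pre-commit -- refuse to commit leftover debug prints.
    import subprocess
    import sys

    MARKER = "FIXME: REMOVE"

    # Only look at what is actually staged for this commit.
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

    offending = [
        line for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++") and MARKER in line
    ]

    if offending:
        print("Refusing to commit leftover debug prints:")
        for line in offending:
            print("  " + line)
        sys.exit(1)  # a non-zero exit aborts the commit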
That could work, but there is a problem with the last part of this approach:
Not all projects use git, and even for the ones that do:
It is possible to set up a global pre-commit hook in git, but it requires some manual configuration because git does not natively support global hooks out of the box. By default, hooks are local to each repository and reside in the .git/hooks directory.
You could do:
    git config --global core.hooksPath ~/.git-hooks
But... setting `core.hooksPath` overrides the use of local hooks in .git/hooks.
At least you can combine global and local hooks by modifying your global hook scripts to manually invoke the local hooks.
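For example, a global pre-commit hook along these lines (sketched in Python here, though a small shell script is the more common choice; the paths and behaviour are assumptions, and worktree setups may differ):

    #!/usr/bin/env python3
    # ~/.git-hooks/pre-commit -- run global checks, then the repo's own hook.
    import os
    import subprocess
    import sys

    # ... global checks go here (e.g. the debug-print scan above) ...

    # Locate the repo's .git directory explicitly, so we don't recurse into
    # ourselves via core.hooksPath.
    git_dir = subprocess.run(
        ["git", "rev-parse", "--git-dir"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    local_hook = os.path.join(git_dir, "hooks", "pre-commit")
    if os.access(local_hook, os.X_OK):
        sys.exit(subprocess.call([local_hook]))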
Print debugging is how you make software talk back to you. Having software do that is an obvious asset when trying to understand what it does.
There are many debugging tools (debuggers, sanitisers, prints to name a few), all of them have their place and could be the most efficient route to fixing any particular bug.
I've heard that too - I've never had the good fortune to work on a codebase where that was practical, though.
I've never satisfied myself that you can't just make a legacy codebase work that way, given enough effort, but I am not fully convinced it's always a good idea.
In the nightmare kitchen sink of web transpilers and meta-frameworks, yes, just printing it is often way more efficient than trying to make sense of those useless stack traces, or setting up a brittle debugger configuration in your IDE that sooner or later will invariably lack a source map to know what code it should show.
For those doing print debugging in Python, see the screenshots here: https://github.com/likianta/lk-logger
I assume it is not well known, given it only has a few stars. It adds source locations to print and exception outputs.
The article mentions it, but you can sum up print debugging as selectively enabling verbose/trace logging. “We’re about to do X” or “We just did Y, here is Z”.
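In Python terms, that's roughly the difference between leaving DEBUG-level calls in the code and only switching them on when needed; a minimal sketch (module and function names made up):

    import logging

    log = logging.getLogger("billing")

    def apply_discount(total, rate):
        log.debug("about to apply discount: total=%s rate=%s", total, rate)
        discounted = total * (1 - rate)
        log.debug("just applied discount, result=%s", discounted)
        return discounted

    # Silent by default; flip the level to DEBUG only while chasing a bug.
    logging.basicConfig(level=logging.DEBUG)
    apply_discount(100.0, 0.2)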
A debugger gives you insight into the context of a particular code entity - expression, function, whatever.
Seems silly to be dogmatic about this. Both techniques are useful!
Print debugging is great if you are debugging multiple processes, maybe even multiple computers, at the same time. But it can also be a bane, since print debugging directly influences the run of your program, so it can temporarily fix existing bugs or create new ones by its sheer presence.
To be fair, I think in C# one should avoid Console.WriteLine as much as possible, because Console.WriteLine is blocking, which makes API requests slow; instead use a logger like Serilog with async sinks.
Multithreading and locking are the areas where prints, a blinking LED on microcontroller-based stuff, and some other trickery are useful. Otherwise I prefer "normal" debuggers within a powerful IDE.
I mostly use print debugging, and not because I don't know how "real" debugging works. It's just that most of the code I deal with handles data that's flowing in real time. So you stop on a breakpoint, and the app has already broken, because the incoming data and/or whatever consumes the output didn't stop.
The data streams can of course be simulated, and then "true" debugging with breakpoints and watches becomes practical, but the simulation is never 100%, and getting it close to 100% is sometimes harder than debugging the app with print debugging. So with most of this code, I only use the debugger to analyse crash dumps.
Print debugging has no problem inspecting pointers; you just gotta decide ahead of time which pointers you want to inspect and which members of the structures they point to are relevant.