
When I was programming at home on my Atari ST, I thought debuggers were the greatest invention ever. It was wonderful to be able to step through assembler code line by line, instead of looking at BASIC print statement output and guessing what was going on and where. It made life so much easier.

Don't people believe in debuggers any more?




Once you get to a complex enough system, sometimes a debugger just isn't enough.

E.g. I have a multi-threaded, multi-process robot control system. I can't put breakpoints in the controller to debug why the robot misbehaves, because that breaks the control loop timing and the robot faults. Instead, you have to put the time into effective logging tools, so that you can capture the behavior of the running system and translate it into a simpler and smaller example that can be examined offline. Maybe you run those examples under a debugger, but you can probably express what values you want to examine, and when, more cleanly in code than in the debugger, with the significant advantage that it's easier to communicate "run this code and look at the output at step n" than "run this code with these breakpoints and these debugger scripts".

My view at this point is that the conditions I would normally examine in a debugger with breakpoints and stepping are usually so rare (e.g. a few hits in potentially thousands or millions of iterations) that I need to write logic to express what checks I want to make and where, and I would rather write that logic in the context of the program itself than in the debugger.


Of course there are edge cases that don't work with a debugger. I have been there too. Timing-sensitive applications, like controlling a fat (CRT) TV live at the scanline/pixel level, can't be debugged. Physical objects that move above a certain speed can't be live-debugged because of physics. These are still edge cases.


> These are still edge cases.

Arguably, once you are working on a sufficiently complex and mature system with good-enough tooling, tests, and development practices, these edge cases can come to dominate. I don't spend time debugging simple things on their own, because the simple things on their own are generally well tested; the failures that do happen emerge from their combination and integration into complex systems.

That said, I do spend a fair bit of time using a debugger as my first-line response to an issue--but overwhelmingly it's to examine a crash dump of the failure rather than to investigate a live process.


I guess that once you reach a certain level of coding--static verification, strong typing, solid unit tests--you've only got multi-threaded timing Heisenbugs left to find...


And even without that, when you're faced with a bug caused by a large input of some kind, it's often easier to dump a bunch of data and look for what doesn't fit.

I've had two Heisenbugs, though no threading was involved:

1) Long ago: an interface library for going from protected mode to real mode, in what I believe was Borland Pascal (I can't recall for sure where this was relative to the name change). Stepping through at the assembly level, it worked. Run any other way, it might work, but that was unlikely. The only outcomes were a correct result or a segment error; it never returned a wrong answer. The culprit: apparently nobody had used the code, and the whole file was riddled with real-mode pointers declared as ordinary pointers. When asked to copy such a pointer, the compiler emitted code that loaded it into a segment:offset register pair and then stored it. If the segment part happened to be valid, fine; if it wasn't a valid segment, boom. The debugger apparently did not actually execute a single step--it emulated it--except that it did not fail on an invalid segment register value. In any other case the attempt to dereference the pointer would have blown up anyway, but this one wasn't being dereferenced.

2) Pretty recently, in C#. I had a bunch of lazily-initialized data structures--and one code path that checked for the presence of data without triggering the initialization. But the debugger evaluated everything for display purposes, triggering the initializer. There is a way to tell the debugger to keep its hands off, but I hadn't heard of it until I hit this.


Some people just don't believe in tooling full stop. Kind of mind-blowing. They're essentially coding with a fancy notepad.exe.


What’s weird is that debuggers are so advanced now. rr and Pernosco are to regular debuggers as regular debuggers are to inserting an infinite loop into your code.


I used the debugger all the time when I was writing in Pascal (and later Delphi). It was great.

Then I switched to Haskell. No (useful) debugger there.

Now I write TypeScript, and... somehow I never figured out how to do debugging in JS properly. Always something broken. Breakpoints don't break, VSCode can't connect to Node, idk. Maybe I should try again.


Borland Pascal had a problem with too much debug data. By the time that program got retired, I could turn on symbols for only a few percent of the code, set a breakpoint, and examine the situation when it triggered--but not continue execution from that point.



