It writes an invalid instruction at this location. Whatever this instruction is, it just has to be invalid.
On x86 at least, it is a valid instruction: INT3, or CC in hex. There are also the debug registers, which implement breakpoints without modifying any code, although you're limited to a maximum of 4 at once.
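Roughly, a ptrace-based debugger on Linux/x86-64 plants one like this (a minimal sketch; the function and variable names are mine, `child_pid` and `bp_addr` are assumed to come from fork/exec and symbol lookup, and error handling is omitted):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>

    /* Plant an INT3 (0xCC) at bp_addr in the traced child, returning the
       original word so the breakpoint can be undone later. */
    static long set_breakpoint(pid_t child_pid, uintptr_t bp_addr)
    {
        /* Read the word currently at the breakpoint address. */
        long orig = ptrace(PTRACE_PEEKTEXT, child_pid, (void *)bp_addr, NULL);

        /* Overwrite only the low byte with 0xCC (INT3). */
        long patched = (orig & ~0xffL) | 0xcc;
        ptrace(PTRACE_POKETEXT, child_pid, (void *)bp_addr, (void *)patched);

        return orig;
    }

    /* When the breakpoint is hit: restore orig, rewind RIP by one byte
       (the INT3 has already executed), single-step over the real
       instruction, then re-plant 0xCC if the breakpoint should persist. */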
Characterising gdb as a "C debugger" is quite appropriate --- trying to debug asm directly with it is an excruciating experience.
Having recently had the pleasure of having to debug JIT-compiled code with an ABI mismatch, I can't overstate how useful `rr` (https://github.com/mozilla/rr) can be for debugging assembly. The ability to `rsi`, i.e. reverse-step one instruction at a time, is very powerful.
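For anyone who hasn't tried it, a typical session looks roughly like this (the binary name and the address are made-up placeholders):

    $ rr record ./jit-test           # record one full execution
    $ rr replay                      # re-open the recording under gdb
    (rr) break *0x7f3d2c0011a0
    (rr) continue
    (rr) reverse-stepi               # 'rsi' for short: one instruction backwards
    (rr) reverse-continue            # run backwards to the previous stop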
One tool that I started exploring is https://pernos.co/; the ability to do dataflow analysis is super cool. It lets you easily answer the question "How did this value get into this register?"
Yeah, it's useful to have a special-purpose 'software breakpoint' instruction. Arm actually didn't have one until v5 of the architecture, so some older systems or those wanting to maintain compatibility will still use an arbitrary undefined-instruction pattern. The advantage of an architecturally defined instruction rather than picking something invalid at random is (a) software can rely on future CPUs not deciding to use it for some actual feature (b) software can easily distinguish it from a random attempt to execute an illegal instruction and (c) it acts as a "strong convention" that pushes all software (OSes, debuggers, etc) towards using the same mechanism for setting a software breakpoint, which improves interoperability. In the Unix world the distinction is usually surfaced to the debugger as SIGTRAP vs SIGILL.
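In code terms, a Unix debugger's wait loop might tell the two apart roughly like this (a sketch; assumes `child_pid` is already being traced, and the function name is mine):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* A planted breakpoint surfaces as SIGTRAP; a genuinely bad
       instruction surfaces as SIGILL. */
    static void wait_for_stop(pid_t child_pid)
    {
        int status;
        waitpid(child_pid, &status, 0);

        if (WIFSTOPPED(status)) {
            switch (WSTOPSIG(status)) {
            case SIGTRAP:
                printf("breakpoint (or single-step) hit\n");
                break;
            case SIGILL:
                printf("child tried to execute an illegal instruction\n");
                break;
            default:
                printf("stopped by signal %d\n", WSTOPSIG(status));
            }
        } else if (WIFEXITED(status)) {
            printf("child exited with status %d\n", WEXITSTATUS(status));
        }
    }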
Having come from a background of WinDbg (Windows) and DEBUG (DOS) before it, GDB feels very "unergonomic" in comparison. The general verbosity (why do you need an asterisk in front of an address when writing a breakpoint --- as the title of this site so prominently reminds us?), the lack of a regular hexdump command (16-byte hex+ASCII format), and the rather perplexing behaviour of the "disassemble" command (https://stackoverflow.com/questions/1237489/how-to-disassemb...) are what come to mind immediately.
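The nearest equivalents I'm aware of are the `x` command and dumping to a file, though neither is as convenient as a real hexdump command (the addresses and names below are just placeholders):

    (gdb) x/64xb $rsp                               # 64 bytes, hex, byte by byte
    (gdb) x/4xg &some_struct                        # 4 giant (8-byte) words
    (gdb) dump binary memory /tmp/buf.bin 0x601000 0x601100
    (gdb) shell xxd /tmp/buf.bin                    # hex+ASCII via an external tool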
gdb) watch my_ptr
What does this mean? Watch for when the value at the address given by my_ptr changes, or watch for when the value of my_ptr itself changes (i.e. it is modified to point to a different location)?
gdb) watch * my_ptr
Ah ... now it's clear what is meant.
gdb) watch 0xfeedface
hmm, now 0xfeedface is not a variable but literally an address. But wait, is it? What does this mean? Watch the value at memory location 0xfeedface? But that's totally inconsistent with the semantics of "watch my_ptr". So,
gdb) watch * 0xfeedface
and once again, no ambiguity over what is going on and consistent syntax.
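One wrinkle worth adding: for a raw address, gdb usually wants a cast so it knows how many bytes to watch (the address here is just an example):

    gdb) watch *(long *)0xfeedface   # watch the 8-byte word at that raw address

Otherwise the syntax is exactly the consistent one above.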
As for your other complaints: I've been programming for about 33 years in C and C++, and I don't recall ever needing to use a hexdump inside the debugger, or the disassemble command. Which is not to say that they're not important for some work, but they are also not important for all work.
Wasn't that _exactly_ the OP's point? That GDB is a fine C debugger but its logic and UX choices do not make much sense for debugging assembly code (e.g. there are no asterisks in assembly addresses, so no consistency there, etc.).
Names of locations in assembly language are typically constants, similar to global arrays in C. There's no my_ptr, as such; there's an address, and my_ptr is its name. When you watch my_ptr, you want to monitor what's at that address - ``watch *my_ptr'', in gdb terms.
I think GDB is an example of a program that should've been designed around a kind of inverse of Greenspun's tenth rule: since it was going to end up with vague semantics and weird syntax anyway, it should just have been built around a Lisp from the start to control it.
I will rewrite this sentence; "invalid instruction" is too limited. It should be something like "an instruction that traps, and which isn't already used for another purpose".
Syscalls trap but they are already used for another purpose; and INT3/CC are valid instructions.
Your description still has problems on architectures like x86 with instructions of different lengths. In particular, you pretty much have to use a single-byte instruction to do the trap; a multi-byte one runs the risk of clobbering other code that could well be jumped to. That’s why INT3 exists as a single-byte instruction to begin with!
This is a fantastic summary of debugger implementation!
Another great one that actually walks through writing a basic debugger is Eli Bendersky's series[1].
One nitpick:
> It could, and that would work (that's the way the valgrind memory debugger works), but that would be too slow. Valgrind slows the application down 1000x, GDB doesn't. That's also the way virtual machines like Qemu work.
This is use-case-dependent: running a program until you hit a breakpoint will be significantly faster with `int 3`, but running a piece of instrumentation on every instruction (or branch, or basic block, or ...) will be significantly faster with Valgrind (or another dynamic binary instrumentation framework). This is because Valgrind and other DBI tools can rewrite the instruction stream to run the instrumentation inside the same process, versus converting every instruction (or other program event) into a sequence of expensive system calls.
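To make the slow side concrete, here's roughly what the ptrace flavour of "instrument every instruction" looks like: one PTRACE_SINGLESTEP plus one waitpid (several syscalls and context switches) per instruction, which is exactly the overhead DBI frameworks avoid by rewriting code in-process. A sketch, assuming the child already called PTRACE_TRACEME and exec'd; the function name is mine:

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Count executed instructions the slow way: a full round trip through
       the kernel for every single instruction the child runs. */
    static unsigned long count_instructions(pid_t child_pid)
    {
        unsigned long count = 0;
        int status;

        waitpid(child_pid, &status, 0);          /* initial stop after exec */
        while (!WIFEXITED(status)) {
            ptrace(PTRACE_SINGLESTEP, child_pid, NULL, NULL);
            waitpid(child_pid, &status, 0);
            count++;
        }
        return count;
    }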
That's why step-debugging is usually done on unoptimized builds, where there's a clear relationship between the original source code lines and variables and the generated machine code.
But there's a wide "grey area" depending on optimization level where the debug information is more or less off yet the mapping is still "good enough". Debugging on optimized builds can work surprisingly well if you know what to expect (e.g. the debugging cursor might jump to unexpected places in the source code because the line mapping is off, or you can't step into a function because it has been inlined).
I think their question was this: how is it that I can step through and see the expected changes to my variables, given that the optimiser is permitted to re-order/elide/restructure my code?
You will not see the expected changes and stepping through will jump around erratically. Often you will be unable to print the value of a variable because it has been optimised out.
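Concretely, a session on an -O2 build often looks something like this (made-up but typical output):

    (gdb) print total
    $1 = <optimized out>
    (gdb) info locals
    i = <optimized out>
    total = <optimized out>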