I'll simply ask gdb to dump its memory, then I extract my command from its coredump. For example,
$ bash
$ echo $$
4077609
$ while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done
# From now on, this command cannot be stopped, and by now
# the text has been overwritten by new output...
Open a root shell, install gdb.
# gcore 4077609
0x00007ae1b321ceca in wait4 () from /lib64/libc.so.6
Saved corefile core.4077609
[Inferior 1 (process 4077609) detached]
# strings core.4077609 | grep while
......(omit huge amount of text)......
while true ; do echo 1 ; echo 2>/dev/null ; sleep 30 ; done
Studying the source code and calling C functions in a debugger, like the author did here, is a clever and accurate way to solve this problem, and it deserves its place in sysadmin folklore. But I think my brute-force approach, although boring, is equally acceptable. It's also safer: since I never call functions inside the live process, a wrong function call can't crash the program. And if I cannot find what I need immediately, analyzing the coredump safely in a debugger (perhaps on my own machine with more devtools installed, with a cup of tea) is also an option for me.
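For the record, that offline analysis is just loading the core alongside the same binary. A rough sketch - the binary path is an example, and <heap_start>/<heap_end> are placeholders for whatever "info proc mappings" reports:

$ gdb /usr/bin/bash core.4077609
(gdb) info proc mappings                       # locate the heap's address range
(gdb) find <heap_start>, <heap_end>, "while"   # search that range for the text

Each match prints an address you can then inspect with x/s.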
Modern shells are powerful enough to help you remember, if you learn to configure them appropriately. My histories are always saved because each shell instance gets its own HISTFILE, like so:
export HISTFILE=$HOME/.history/${TTY##*/}.$SHLVL
As I use different terminal windows for different tasks, this keeps history files rather concise thematically.
And I let the shell add timestamps too, so I can grep for entries produced during a certain time span:
zsh:
setopt EXTENDED_HISTORY # add timestamps
bash:
HISTTIMEFORMAT="%F %T "
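With timestamps in place, pulling out a time span is a grep (or awk) away. A sketch with made-up dates:

# bash: HISTTIMEFORMAT makes `history` print the timestamps, so:
history | grep '2021-06-01 1[01]:'   # commands run 10:00-11:59 on June 1st

# zsh: EXTENDED_HISTORY stores lines as ": <epoch>:<elapsed>;<command>",
# so the history file itself can be filtered numerically:
awk -F'[:;]' '$2+0 >= 1622538000 && $2+0 <= 1622545200' "$HISTFILE"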
I write perl or shell script files, of course, if it's more than a handful of lines.
It's attached to the preexec hook of https://github.com/rcaloras/bash-preexec, so it runs before every command. This means that everything goes into one easily-greppable file, but it is still separable by PID/host machine. Since my work has me walking around a large facility, often I'll remember where I was when I did something but not exactly when, so I can narrow it down by machine.
You could deduce the working directory from the sequence of commands in your history file, so the working directory is implicitly contained in the history file.
I don't remember an option to save the working directory explicitly. But zsh has a number of hook functions, which can be used to execute shell code before and after each command. So you could use these to write your own special history file, even one per directory if needed. Example:
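A minimal sketch (the log file name is just for illustration):

preexec() {
  # $1 is the command line that is about to run; record it together
  # with a timestamp and the current working directory
  print -r -- "$(date '+%F %T') $PWD: $1" >> ~/.zsh_history_with_dirs
}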
The shell function defined here runs after the command line has been read, but just before it is executed. The argument $1 contains the command line to be executed.
> You could deduce the working directory from the sequence of commands in your history file, so the working directory is implicitly contained in the history file.
I didn't think of that, but I don't think it's always possible. E.g. suppose you run "source somefile", and somefile contains a cd command.
Unfortunately I'm using bash mostly, so I'm afraid the suggestion you gave for zsh doesn't work for me.
By the way, another thing I'm interested in is how people manage their history files over multiple machines.
Does bash have an option to write out the history before running the command? Usually history is written in PROMPT_COMMAND, which runs after the command completes.
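For context, the usual setup is the after-the-fact one; the closest "write it out beforehand" trick I know of is a DEBUG trap, since bash adds the line to its in-memory history before executing it. A sketch, not a dedicated bash option:

PROMPT_COMMAND='history -a'   # the usual idiom: append after the command returns
trap 'history -a' DEBUG       # fires before each command, so the just-entered
                              # line gets flushed to disk before it runs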
Very clever, but the sensible thing to do is never rely on one-liners, lest you end up in this kind of situation. Get that thinking out of your head. Always write the script (that's why it's called "the script" ... ) in a file and use that.
This is why every major project I work on has a ".archive/oneshot/" folder for anything complicated that was used to quickly put out a fire. Not only will I inevitably need to remember months later what happened, but I'll also want to reuse certain bits later.
That ".archive" folder itself is then a personal git repo. Sometimes you just have to put that fire out or get that question answered and its not worth a "proper" solution, but keeping a history is worth it.
Resist the urge to keep it as part of the main project's repo at all costs. It's far too easy for those scripts to become standardized (e.g. by other devs, perhaps) and to take on a life they should never have had.
Exactly. I just wrote a blog post about how I do that: shell scripts in the repo with lots of small functions. They basically form an annotated log of what I did, so I never have to remember long commands.
I can't remember long commands so I always write them down.
Examples of using a JSON API, using a C++ tool uftrace, and hacking on Kernighan's awk:
Admittedly there are a lot of people who don't seem to like reading shell scripts as docs. But you don't really have to read the code -- you just read the function names.
I also added doc comments to Oil, like this:
deploy() {
### doc comment
cp foo bar
}
which you can access with the 'pp' builtin. So eventually those strings could be exposed to autocomplete, etc.
I'm surprised gdb wouldn't automatically consult /proc/PID/exe to read symbols. It seems more reliable than checking argv[0] and then reading the file at that path.
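On Linux you can at least do it by hand, something along these lines (reusing the PID from the article):

gdb "$(readlink /proc/4077609/exe)" -p 4077609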
Sure, but there are tons of Linux-specific things behind #ifdefs in gdb already; this seems like a pretty worthwhile feature that justifies adding one more.
Because Ctrl-Z stops only the currently running command, and when you run "fg" it resumes just that one process. The loop you were in, however, has been abandoned, and it will not continue to execute.
For example, try it with this:
for N in $(seq 10); do echo $N; sleep 1; echo $N; done
You'll see something like this:
$ for N in $(seq 10); do echo $N; sleep 1; echo $N; done
1
1
2
^Z
[1]+ Stopped sleep 1
$ fg
sleep 1
$
Also, you may have things talking to external resources that are sensitive to timeouts, even small ones. You may not be able to cleanly resume execution, and you may cause an entirely new problem.
After so many months, I had no idea what tools I was running in that command line... And therefore, the effect of any signal seemed far more dangerous to me than extracting the actual command from my shell's memory.
This is simply old school sys admin mastery.
This should definitely get into school books.
It's not the solution itself, it's the approach.
Much like the one in "The Martian".
Kudos!
I usually launch persistent one-liners inside a `bash -c "while..."`, just so that if one starts eating resources, I can easily see its original purpose in the process tree.
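Something like this, where the loop body is just a placeholder:

bash -c 'while true; do do_the_thing; sleep 30; done' &

The whole loop then shows up as that bash process's argv in ps or pstree, instead of just whatever piece happens to be running at that instant.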
> Heck, there has to be a way to get to that line. ps aux | grep ... doesn't help - it shows the currently running piece of the command you wrote. You want the whole shebang.
Huh? Why wouldn't it show the original command, since it is still running? There's also an option to print the "tree" of processes (e.g. the original command that ran some script, the other programs started from said script, etc.)
Unless you are running the command via "sh -c", you can't see any shell builtin commands or Unix pipelines by dumping the process tree. For example, with "cat > /dev/null", the redirection is done by the shell and isn't passed to cat's "char **argv"; in the process tree you can only see a "cat" running without arguments (and a shell builtin is invisible entirely). It also gives no information about the control flow of your one-liner shell script (e.g. what does the "for" loop do?), nor about environment variables, etc. You can try to reconstruct some of that by inspecting the process's open files, its environ, and so on, but it's not practical for any complicated combination of commands.
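Easy to demonstrate - reading from /dev/zero just keeps cat alive in the background, and $! is its PID:

$ cat /dev/zero > /dev/null &
$ ps -o args= -p $!
cat /dev/zero
$ kill $!

The "> /dev/null" never appears in the argv; only the shell that set up the redirection ever knew about it.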
The script was running in production, doing actual work. Stopping it - when you don't remember a damn thing about it and how it worked - was not an option I wanted to consider at the time.
The answer to that is simpler: Try doing the Ctrl-z/fg sequence in your bash with this one:
while true ; do sleep 1 ; done
You'll see that after 'fg', the loop ends :-)
Simply put: C-z followed by fg is not bulletproof. Not to mention that I had no idea what I was running in there, and how any signal would impact it... So I wanted to find a safer way to dump what was already there, in my shell's memory.
Anyway, I hope you guys enjoyed reading this regardless :-)