My favourite way to log:
Redirect stdout and stderr (`&>`) into a process substitution (`>()`) running `tee`,
which copies all of it into the log file as well as onto the screen.
`exec &> >(tee "${__DIR}/${DOC_LOCAL}/${LOG_LOCAL}")`
So this one line redirects all script output to a file while also ensuring that the same output still reaches your screen.
It is unstructured only insofar as you allow any command running inside your script to dump its own output.
I run most commands inside my scripts with `> /dev/null 2>&1` and then rely on their exit codes, echoing out structured information with this function:
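The poster's actual function isn't shown, so treat this as a minimal sketch of the pattern being described: all output mirrored through `tee`, commands silenced, and a hypothetical `log_status` helper (my placeholder name, as is the log path) turning exit codes into structured lines.

```shell
#!/usr/bin/env bash
# Placeholder log path -- the poster builds theirs from __DIR etc.
LOG_FILE="/tmp/demo.log"

# Mirror all subsequent stdout and stderr to the log file and the terminal.
exec &> >(tee "$LOG_FILE")

# Hypothetical helper: turn an exit code into a structured status line.
log_status() {
    local rc=$1 msg=$2
    if [ "$rc" -eq 0 ]; then
        echo "[OK]   $msg"
    else
        echo "[FAIL] $msg (exit $rc)"
    fi
}

# Run commands silently; report only via their exit codes.
mkdir -p /tmp/demo_dir > /dev/null 2>&1
log_status $? "create /tmp/demo_dir"

ls /nonexistent_path_xyz > /dev/null 2>&1
log_status $? "list /nonexistent_path_xyz"
```

Note that `tee` runs as a background process here, so output written just before the script exits may land in the file a moment after the script returns.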
PS.
The __DOC_LOCAL and __DIR variables come from the magic variables below.
These variables are a life saver and make directory and file manipulation easy; they set up a kind of top-level context:
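The poster's exact definitions aren't included, but the usual bash idiom for this kind of top-level context looks something like the following; treat it as an illustrative guess, with `__FILE` being my own placeholder alongside the `__DIR` name from the comment.

```shell
#!/usr/bin/env bash
# Common idiom for anchoring a script to its own location.

# Absolute directory of the script itself, regardless of the
# directory it is invoked from.
__DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Name of the script file without its path (placeholder variable).
__FILE="$(basename "${BASH_SOURCE[0]}")"

echo "script dir:  $__DIR"
echo "script file: $__FILE"
```

With `__DIR` fixed like this, paths such as `${__DIR}/${DOC_LOCAL}/${LOG_LOCAL}` resolve the same way no matter where the script is run from.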
Do you just want to log the script's execution, or do you want something more structured? If it's the former, you can redirect the output from within the bash script itself with this (apologies for any condescension, I'm not familiar with your skill tree):
if [ ! -t 1 ]; then
    exec > /my/log/file 2>&1
fi
The if statement tests whether you're at an interactive prompt; if you're not, all output from the script gets redirected to /my/log/file. The poster above is instead redirecting into a process substitution, `>(tee ...)`, which both prints the output and logs it.
It should be noted that often the bottleneck is the terminal itself; try running your script with `> /dev/null` to suppress output and verify that the slow part is actually the script.
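A quick way to check this is to time the same script twice, once with output suppressed. The chatty test script below is my own stand-in for whatever script you're measuring.

```shell
#!/usr/bin/env bash
# Create a deliberately chatty stand-in script to time.
cat > /tmp/chatty.sh <<'EOF'
#!/usr/bin/env bash
for i in $(seq 1 1000); do
    echo "processing item $i"
done
EOF
chmod +x /tmp/chatty.sh

# Timed with terminal output:
time /tmp/chatty.sh

# Timed with output suppressed -- if this run is much faster,
# the terminal was the bottleneck, not the script.
time /tmp/chatty.sh > /dev/null
```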