I saw a note[0] about using a script to activate Redshift in Linux and wondered what other cool, useful, or otherwise interesting scripts you might be using.
I've been using my "dots" script for over 10 years to give me feedback that a long task (like un-tarring a big source tree) is still running, without flooding the terminal with text. I pipe the "verbose" output of a command into the script: tar xvfz kernel.tar.gz | dots
Mine watches for changes in a directory and runs a command on any saved change. I use it a lot for test-driven development: modify source code, hit save, and `make test` (or similar) runs automatically in a terminal.
It might be useful; I'm not entirely sure it suits your case, because my experience with inotify is limited to playing around with it, with nothing substantial coming of it.
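Not the poster's actual script, but a minimal sketch of that watch-and-run loop, assuming inotify-tools is installed (the function name is my own):

```shell
# Re-run a command whenever a file under the watched directory is saved.
# Requires inotifywait from the inotify-tools package.
watch_run() {
    dir="$1"; shift
    while inotifywait -qq -r -e close_write "$dir"; do
        "$@"
    done
}
```

Then something like `watch_run src make test` re-runs the tests on every save.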
I use this in Ubuntu to take screenshots. Ubuntu ships with gnome-screenshot, but its file-saving mechanism is lacking.
So, this will automatically file away your screenshots to your $HOME/Documents/screenshot/ folder, organized by year/year_month/file.png.
Where file.png is in the format yyyy_mmdd_hhmmss.png.
I use it to take an area screenshot of all my research notes, useful comments, gold nuggets, etc. The automatic folder organization files it away nicely, and keeps it organized as the years go by.
Create it and set the execute bit:
sudo vi /usr/bin/area_screenshot
sudo chmod ugo+x /usr/bin/area_screenshot
Then copy the contents below:
#!/bin/bash
screenshot_dir="$HOME/Documents/screenshot"
current_year_dir="$screenshot_dir/$(date +%Y)"
current_month_dir="$current_year_dir/$(date +%Y_%m)"
fileout="$current_month_dir/$(date +%Y_%m%d_%H%M%S).png"
# Step 1: Check for screenshot directory
[ -d "$screenshot_dir" ] || mkdir "$screenshot_dir"
# Step 2: Check year and month directory
[ -d "$current_year_dir" ] || mkdir "$current_year_dir"
[ -d "$current_month_dir" ] || mkdir "$current_month_dir"
# Step 3: Take area screenshot to the current month
[ -d "$current_month_dir" ] && /usr/bin/gnome-screenshot -a -f "$fileout" "$@"
True, but it doesn't organize it further, and over time the screenshots accumulate. I needed a way for it to self-organize, and I found this to be the perfect compromise.
Basically, I write a script (or use an alias) for everything that I do repeatedly which would otherwise require typing more than 2-3 unique characters before using tab completion. The habit just makes life easier (and is fun and relaxing sometimes). I try to use names that are quick to type (like, not the same finger repeatedly) and which are memorable, or grouping things by first characters (if there is a theme).
(This goes with the idea, which I also try to encourage, that any repeated process should be first documented in some rough form at least (like a personal note-base or team wiki), then improved over time, via improving the doc, scripting it, and moving toward full automation based on balancing considerations of cost/benefit over time, YAGNI, and avoiding debt, and ideas from the "checklist manifesto", such as the realization that even smart people can forget important things, drop the ball sometimes, or leave.)
Edit: This also lets me script away differences between platforms, so I can just remember my usual little 1-3 letter command and it takes care of the rest, while the script records the details for reference.
To clarify this slightly (I guess I have an urge to make a speech): it has really seemed to me that, for the medium- and long-term benefit of an organization, and for the peace of mind of everyone involved, anything that is going to be repeated much is worth making reliable and improving/automating, at least a little bit each time it is performed (again: documenting, staying open to peer feedback, and moving toward automation).
This, to me, seems like part of the Golden Rule of working together (treat others the way one would want to be treated), as are things like not leaving useful but hard-to-maintain messes behind for others without some clarification, and not having important processes that only one person knows how to do (so that when they leave it is a crisis), or that no one really knows how to do (so it is a crisis every time).
I've been holding off making scripts for a lot of cases so that I memorize the commands. Do you think that's pointless? I'm a college student now and I've been thinking that I'd be better off not customizing my shell too much so that I can use other computers I end up in front of. (I have nothing against customization in general, as I think my emacs would show)
The most wizardly of the wizards I work with (FAANG, though that correlates less with shell wizardry than people might think) does a lot of his ~scripting in a Vim buffer in tmux and just enters text over one chunk at a time. This is a nice approach because a lot of the time he's starting from bare bones, and always has his basic building blocks very evident so he knows them well. But, when he needs to build up a more complicated script for review, he can clean up whatever he's got.
If you ask me, I'd say don't worry too much about memorizing stuff you don't use often (viva la man page), but it's great to have your common commands memorized rather than aliased for when you're helping someone else out or explaining how something works.
Could you clarify what is meant by "enters text over one chunk at a time" and "basic building blocks very evident"? I use tmux & vim but ... unsure if I follow.
Edit: Do you mean, personal, local notes on tasks, as the notes evolve into scripts? I also keep a lot of personal notes which I can export into a web site (similarly to putting into a wiki, in a very loose comparison, an example being my site at http://lukecall.net or the other at http://onemodel.org), using something like a big, efficient, outline of all my notes. (I would probably put it in org-vim like org-mode, or maybe taskwarrior, if I didn't have http://onemodel.org, which I hope to make much easier to install, sometime.)
Good question. I often (not always) start the custom command name (script or alias) with the same letter as what it abbreviates, and if memorizing were my goal and that didn't help enough to remember them, I might do as you suggest.
But I have enough to remember as it is, so I make backups of my scripts (as with everything personal on my main computer) using tar/gz/gpg/rsync/scp, optical media, etc., so I can refer to them when memory fails. Overall, I'd have to guess it depends on the person and the priorities, and how well one way works for you vs. the other. I currently always use a computer under my control, and over time, enough has stuck in my memory.
I also want to save my hands by making things ergonomic (shorter to type when practical), as the hands are useful in the long-term. :)
I also occasionally make notes for review, to help me remember things (as with anki, or my software at http://onemodel.org ), or try to make mental connections with manual pages I have read, so I know where to look something up again when needed. I also sometimes go to https://man.openbsd.org, put in just a period ("." without quotes), and click the "apropos" button (optionally choosing a section) to see what is available and jog my memory about good tools I have forgotten. Plus, some things get used so often they really do burn into memory: I am not going to forget "ls", but I don't want to type "ls -ltr [...]" very often, and that is what I usually want (to see the most recent entry last), so I aliased it to just "l".
The largest reason I don't do this is because I ssh into a lot of different servers and then won't have my unique aliases or scripts readily available.
Is there an easy way other than copy paste to rectify this?
It's nice if the environment lets one have a (replicated?) shell account, so you have some place to put things (though I don't pretend to know whether that is wise in the environment you work in). I have at times done copy/paste or, if it seems likely to save enough time in a given situation, something like a script full of aliases that I can quickly scp over and source (". someScript") to make them available in the environment (or again, paste).
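For instance, a small alias file (the names here are hypothetical) that can be scp'd to the remote host and then sourced in the remote shell:

```shell
# my_aliases: carry-along aliases; scp this over, then ". my_aliases"
alias l='ls -ltr'       # most recent entry last
alias gs='git status'
```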
The other comment in this discussion (do Ctrl+F for "most wizardly of the wizards") might have a good idea, but I don't understand it yet.
When building a new website, I default to handwriting the HTML, instead of going with a static site generator. More pages lead to more duplication. I wrote a script that recursively goes through the directory, and replaces all occurrences of a string:
fd -e html --print0 | xargs -0 sed -i "" "s/replaceme/withthis/g"
I have a tiny Sinatra app that I use in testing some parts of my Rails app. I either start it in the background or in a separate terminal window (that I close after some time). I have a script that kills a process given its port number (4567):
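A minimal sketch of such a script, assuming lsof is available (4567 as the default port, per the comment; the function name is my own):

```shell
# Kill whatever is listening on the given TCP port (default 4567).
kill_port() {
    port="${1:-4567}"
    pid="$(lsof -ti "tcp:$port" 2>/dev/null)"
    if [ -n "$pid" ]; then
        kill $pid    # unquoted on purpose: there may be several PIDs
    else
        echo "nothing listening on port $port" >&2
    fi
}
```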
There's a bit of scaffolding around things going into a docs folder, some JS and CSS tweaks for the various output formats (ePub, HTML & PDF), but there is a very simple script I use to reduce some friction whilst writing.
It's nice to have a lot of small files. However, you may often want to:
1. Rearrange those files
2. Insert a new section between two other files
So, I hacked together this incredibly tiny script that means anytime I want to add anything, I can immediately get to wherever I'm up to:
#!/bin/sh
# Open the first file under docs/ that still contains a TODO; if there
# is none, create the next numbered file (previous number + 10).
next="$(ag -l TODO docs | sort | head -n1)"
if [ -z "$next" ]; then
    n="$(basename "$(find docs -type f | sort | tail -n1)" .md)"
    n2="$(echo "$n + 10" | bc -l)"
    next=docs/"$(printf "%05d" "$n2")".md
fi
echo "$next"
nano -s 'aspell -c' "$next"
Bonus: It outputs the current filename to the console, so that if I want to add stuff after where I'm currently at, I have the starting number.
The entire scaffold for my books is three scripts, plus some standard CSS, JS, and YAML, which makes setting up a new one to match my sensibilities quick and simple.
A hacky Python wrapper around bluetoothctl that lets me use aliases in place of addresses, so I can write `connect {niss}` to connect my phone instead of `connect 12:6a:78:c2:74:d5` (a reference to one of my childhood favorite books, Startide Rising).
The script then runs "devices", looks for an alias "niss", and substitutes in the corresponding address. I use expect in Python to script it all together.
All commands you enter pass directly to bluetoothctl except aliases in curly braces are replaced. You can use it interactively or pipe to it.
Complex piped stuff may or may not actually work. If stdin isn't a tty the program exits once bluetoothctl reports a success/failure after it gets an EOF on stdin. This means you can write `btctl <<< "connect {my_device_alias}"` and it will exit once it's done connecting or couldn't connect.
If you're using it interactively it only waits 0.01 seconds for results before displaying them and moving on to the next input() (whereas bluetoothctl will asynchronously display more results even while you're typing in a new input) so you may need to spam enter to see results.
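The core alias-to-address substitution could equally be sketched in plain shell (the function name is my own, and this assumes the device name has no spaces):

```shell
# Resolve a bluetoothctl device name to its address, then connect.
# "bluetoothctl devices" lines look like: Device 12:6A:78:C2:74:D5 niss
bt_connect() {
    addr="$(bluetoothctl devices | awk -v name="$1" '$3 == name {print $2}')"
    [ -n "$addr" ] && bluetoothctl connect "$addr"
}
```

Usage: `bt_connect niss`.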
The package's tagline is "collection of the Unix tools that nobody thought to write long ago, when Unix was young", so pretty relevant to the whole topic.
Emacs has something built in just like that, called dired (directory editor): it shows you ls-like output, and (via its wdired editing mode) you edit it like a text file and then save it. It means you can use all your familiar text-editing tools to do what you need to do.
I wrote a script the other day that starts my dev environment. It opens terminal and executes the commands to start the backend and CDN services, then opens another tab for the git directory. After that it opens the frontend and backend codebases in my IDE. I then added the script to my PATH so I can launch it with a command from any directory.
This is for a side project, and being able to launch the dev environment so quickly has made it much easier to start working, as opposed to drifting off to YouTube or Reddit. Your brain will crave the easiest source of dopamine, so anything you can do to make the habits you want to build (like working on a side project) easier will help you immensely!
Back when I still used Xubuntu, I wrote a little script called Highlander that manages launching and switching between programs in my launcher tray. It gives your app-launcher icons the same functionality as OSX dock icons: launch an instance of a program, then, on subsequent clicks, bring that instance into focus. That way you don't get redundant copies of a given program running (and thus the joke of calling it Highlander).
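A rough sketch of the same launch-or-focus idea, using wmctrl and matching on window class (only an approximation of whatever Highlander actually does):

```shell
# Focus an existing window of the given program, or launch one if none exists.
# Requires wmctrl; matching on WM_CLASS is approximate.
highlander() {
    if wmctrl -lx | awk '{print $3}' | grep -qi -- "$1"; then
        wmctrl -x -a "$1"    # raise and focus the existing instance
    else
        "$1" &               # no instance yet: launch it
    fi
}
```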
Automatically re-ssh if a host key changed, and run `monkeysphere s` (SSH keys stored in the GPG keychain) if not already loaded in the gpg-agent (which also supports ssh-agent functions).
lw, which does `ls -l "$(which -a "$1")"`
ew, which does `"$EDITOR" "$(which "$1")"`
newsh, which does `touch ~/bin/"$1" && chmod +x ~/bin/"$1" && "$EDITOR" ~/bin/"$1"`
And habitat (hab) and arch pkgbuild, which use shell scripts as their package DSL... the former I’ve hacked up to screen-scrape package versions (due to the dearth of RDF/metalink usage in release artifact publication) and check gpg keys.
I have one similar to your "newsh" called "new" which does `mkdir "$1" && cd "$1"`
I also have another one called "store" which does `mkdir "${@: -1}" && mv "$@"`. I basically wanted an easy way to move everything into a new directory ("store * backup").
I’ve got a VPN server on a machine which gets a public IPv4 address via DHCP. Once in a while it changes. Instead of using something like DDNS, I just have a script running via cron to get my public address and send it to a GitHub gist.
If VPN stops working, I can always check to see if the address changed. May not be the best solution, but it works for me.
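Something like the following, assuming a personal access token in GITHUB_TOKEN; the gist ID, the `ip.txt` filename, and ifconfig.me as the address service are all placeholders:

```shell
# Record the current public IP address in a GitHub gist (run from cron).
# Uses the GitHub REST API: PATCH /gists/{gist_id} with a "files" payload.
update_ip_gist() {
    ip="$(curl -fsS https://ifconfig.me)" || return 1
    curl -fsS -X PATCH \
        -H "Authorization: token $GITHUB_TOKEN" \
        -d "{\"files\":{\"ip.txt\":{\"content\":\"$ip\"}}}" \
        "https://api.github.com/gists/GIST_ID"
}
```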
I have a bash alias mapped to a single letter, and when I connect my MacBook at any of three desks with multiple monitors, I can cmd-tab to the iTerm window, open a new tab, type the alias, and it will infer which desk I plugged into and fix my monitors: left, middle, right rotated, etc.
I won't give the script, since it's specific to my monitors and layout, but the nugget is using displayplacer and invoking the static, complicated command via a wrapper.
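Purely illustrative (the screen IDs, resolutions, rotations, and function name below are all made up), the pattern looks something like:

```shell
# Pick a displayplacer layout based on which persistent screen IDs are attached.
deskfix() {
    ids="$(displayplacer list | awk '/Persistent screen id/ {print $4}')"
    case "$ids" in
        *AAAA-1111*) displayplacer "id:AAAA-1111 res:2560x1440 origin:(0,0) degree:0" ;;
        *BBBB-2222*) displayplacer "id:BBBB-2222 res:1920x1080 origin:(0,0) degree:90" ;;
        *) echo "unknown desk" >&2 ;;
    esac
}
```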
I have a project that has been going on since 2015, so I ran the above command to see how much code we have. I was expecting our models to be really large; it turns out we have ~(404 * 3) REST endpoints. The scariest thing was the vendor dir.
#!/usr/bin/perl
# dots: print one '.' per N lines of input (default 1000), as a progress pulse.
$| = 1; $i = 0;
if ($#ARGV < 0) { $number = 1000; } else { $number = shift(@ARGV); }
while (<STDIN>) { print '.' unless ($i % $number); $i++; }
print "\n";