Ask HN: Can I see your scripts?
374 points by fastily on Aug 15, 2022 | 294 comments
A few weeks ago, I asked if I could see your cheatsheets (https://news.ycombinator.com/item?id=31928736) and I was really impressed by all the high quality responses!

Today I'm asking about your scripts.

Almost every engineer I know has a collection of scripts/utilities for automating ${something}. I'm willing to bet that HN users have some of the most interesting scripts on the internet.

So that said, could I please see your scripts?

I'll go first: https://github.com/fastily/autobots




Not really a script, but a `.ssh/config` that automatically deploys parts of my local CLI environment to every server I connect to (if the username and IP/hostname match my rules).

On first connect to a server, this syncs all the dotfiles I want to the remote host, and on subsequent connects it updates them.

Idk if this is "special", but I haven't really seen anyone else do this, and it beats, for example, Ansible playbooks by being dead simple.

   Match Host 192.168.123.*,another-example.org,*.example.com User myusername,myotherusername
      ForwardAgent yes
      PermitLocalCommand yes
      LocalCommand rsync -L --exclude .netrwhist --exclude .git --exclude .config/iterm2/AppSupport/ --exclude .vim/bundle/youcompleteme/ -vRrlptze "ssh -o PermitLocalCommand=no" %d/./.screenrc %d/./.gitignore %d/./.bash_profile %d/./.ssh/git_ed25519.pub %d/./.ssh/authorized_keys %d/./.vimrc %d/./.zshrc %d/./.config/iterm2/ %d/./.vim/ %d/./bin/ %d/./.bash/ %r@%n:/home/%r


Man... 20 years doing system administration and I never did that, because I never thought of it.

What a shame!

I owe you a beer at least


You...you mean...I could finally get all my aliases to work everywhere I go?

I'm not crying, you are crying!


That's really cool. I never found it necessary to do this. I'm a little bit liberal in regards to security locally, so I wouldn't want that to transfer to a server accidentally. I just deal with it and get out when I'm not using it.


FWIW though - he explicitly sets up a Match rule for the servers he cares about doing this to, so he oughtn't end up accidentally doing any transferring unless his Match backfires.


I used to have a script named ".ase" (meaning "as someone else") that I'd source when I was doing something for someone else and had become root. I was very careful to make sure it just had safe aliases in it.


Does anyone have something similar to this for exec'ing into kubernetes pods? It's usually not the case that the container will have bash, vim, etc., but there is probably something to make it feel more like home.


You probably don't want either a fully functioning remote shell or a malleable filesystem for injecting one, since that's precisely the kind of environment that is great for infiltrators to make a pod do something it's not intended to.


If you have a common base system, it might be possible to copy/rsync/untar the tools you need and then use them. Ideally you'd want to restart the container/pod once you're done, to ensure the tools aren't left around and their presence doesn't cause other weird issues.
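
For example, a rough sketch with plain kubectl (the pod name, tarball, and paths are placeholders):

    # copy a pre-built tarball of tools into the pod and unpack it
    kubectl cp ./tools.tar.gz mypod:/tmp/tools.tar.gz
    kubectl exec -it mypod -- tar -xzf /tmp/tools.tar.gz -C /usr/local
    # when finished, recreate the pod so the tools don't linger
    kubectl delete pod mypod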


I keep this in my snippet manager to be pasted into a kubernetes pod I want to have the tools I'm used to:

    apt-get update; apt-get install -y tmux git ncdu psmisc iproute2 net-tools curl zsh vim; curl -Ls install.ohmyz.sh | sh; chsh -s $(which zsh); exec zsh
It's not automatic but it only takes a second for me to find it and about 10-15s to run.


What do all the %d's do?



https://man.openbsd.org/ssh_config#TOKENS

Why didn't you look this up yourself?


I have something similar--for deciding whether to try to connect locally or via remote address:

    Match host 10400 exec "timeout 1 nc -z 192.168.1.101 22 2>/dev/null"
        Hostname 192.168.1.101
        Port 22

    Host 10400
        Hostname myhost.duckdns.org
        Port 564


I've been having this in the back of my mind for a couple of years now (funny how brains work sometimes) and now I don't even have to write it! Thank you!

Also: This will be great to combine with chezmoi for bootstrapping workstations - allowing you to do host-specific configuration, templating, and basic secrets injection without fiddling around with USB drives or whatnot.


This is awesome. It terrifies me though.


Super neat! When I did sysadmin work I had a tmux config that did something like this via a keystroke, and the changes were all ephemeral. So via a key binding I could lightly customize a single SSH connection without affecting anyone else.


Very cool usage of LocalCommand; I've never used this one in my config. Thanks for sharing!


Very nice. I owe you a beer.


https://github.com/learnbyexample/command_help/blob/master/c...

Command help, inspired by http://explainshell.com/, extracts help text from builtin commands and man pages. Here's an example:

    $ ch ls -AXG
           ls - list directory contents

           -A, --almost-all
                  do not list implied . and ..

           -X     sort alphabetically by entry extension

           -G, --no-group
                  in a long listing, don't print group names

---

https://github.com/learnbyexample/regexp-cut/blob/main/rcut is another - it uses awk to provide cut-like syntax for field extraction. After writing this, I found commands like `hck` and `tuc`, written in Rust, that solve most of the things I wanted.


Has anybody stumbled upon a similar tool that does this in reverse? For example, I want to find a curl option that has a "redirect" word in its description.


Oh lord this is beautiful!


Ohhhh that's lovely


I will steal that too.


I have a nice little script for managing "MISC" packages, which stands for "Manually Installed or Source Compiled".

https://github.com/tpapastylianou/misc-updater

In full honesty, I'm as proud of the "MISC" acronym as of the script itself. :p

I'm secretly hoping the acronym catches on for referring to any stuff outside the control of a system's standard package management.


MISC is a really good name for these - I've been putting them in ~/src/vendor but I might just move 'em to ~/misc :D


Tbh, I use variants of src and opt for where things get compiled / installed respectively too. I've been treating "misc" as more of a conceptual grouping for "packages whose presence and updates are not tracked in the system by other means" rather than as an installation directory.

E.g., you may note that one of the packages the script checks for is zoom. Zoom is installed as a normal .deb file that I download from the zoom website, and install manually using dpkg, which installs it in the normal system directories. But it has its own cumbersome update check mechanism (which involves clicking a menu in the app), and is not picked up by apt-update because it's not a repo package. So this makes it a good candidate for a misc-updates check, even though it's installed as a normal .deb file. :)
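
For illustration, the installed version of such a .deb can be read back out of dpkg and compared against whatever the vendor's site reports (the package name here is just the example above):

    dpkg-query -W -f='${Version}\n' zoom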


I tend to make install and build scripts for such things, but one of my main things is catching the hash of the latest version, or better yet a GPG key. Gives me a little peace of mind.


I've never thought about this problem until now. Now that I see it, it makes total sense one would want to monitor those packages for changes.

What surprises me is that there seems to be no other way than hacking (cutting, grepping, etc.) each package separately. I wonder how this is handled on machines that use a lot of MISC packages (other than pulling+building every time to automatically have the latest version)?

Also, kudos on the acronym :)


I'm upvoting for the acronym.


That is an awesome and perfectly appropriate acronym! The script is pretty handy too!


This is something I didn’t know I needed. Great acronym as well!


I've pair-programmed a lot this year, and some of my colleagues like the "Co-authored-by: ..." message because due attribution is given regardless of who was in control of the keyboard.

I eventually got tired of writing that manually, so I wrote a small

  git co-commit --thor ...
that works just like 'git commit', except it adds another line to the commit message template.

Placing it in e.g. ~/bin/git-co-commit and having ~/bin in your $PATH will enable it as a git sub-command.

I've never had a use for this before, and I don't think I'll need it much beyond this team, but this was my first git sub-command that wasn't trivially solvable by existing command parameters (that I know of).

https://gist.github.com/sshine/d5a2986a6fc377b440bc8aa096037...


wait wait wait... are you saying that having an executable at ~/bin/git-co-commit will automatically create `git co-commit`? Wouldn't it merely create a 'git-co-commit' command you can access as that user as long as it's in your $PATH? What is integrating it in the `git` program?? Or was it just a typo in your example and you actually type `git-co-commit` and not `git co-commit`?


Git will automatically prefix a subcommand with `git-` and try to execute a command with that name.

https://blog.sebastian-daschner.com/entries/custom-git-subco...

Many "built-in" Git commands are themselves separate executables. My Linux machine has them in `/usr/lib/git-core/`, and my macOS machine has them in `/Applications/Xcode.app/Contents/Developer/usr/libexec/git-core/`.


Yep! All git subcommands are separate binaries. Check out `ls /usr/lib/git-core` (that's the path on my machine). Or you can `locate git-add` or something to check where they're located on your machine. I have a tiny fzf script called `git select` to give me a nice interface for selecting branches I've recently worked on. Just call the script `git-select` and stick it somewhere in your path. In the spirit of this question, mine is here:

  #!/bin/zsh

  alias git_my_branches="git for-each-ref --sort=committerdate refs/heads/ --format='%(HEAD) %(color:yellow)%(refname:short)%(color:reset) - %(color:red)%(objectname:short)%(color:reset) - %(contents:subject) - %(authorname) (%(color:green)%(committerdate:relative)%(color:reset))'"

  selected_branch=$(git_my_branches --color | fzf --ansi --tac | awk '{print $1}')
  if [ "$selected_branch" != '*' ]; then
          git checkout $selected_branch
  fi


I had the same reaction!... but just tested it and lo and behold it works!

    [user@user ~]# cat > ~/bin/git-test-me <<'EOF'
    #!/bin/bash
    
    echo hi
    EOF
    [user@user ~]# chmod +x ~/bin/git-test-me
    [user@user ~]# git test-me
    hi


As others mentioned, yes! If you're curious how it works, I wrote a sort of deep-dive on the Git internals of this: https://nathancraddock.com/blog/2022/custom-git-commands/


Correct. Git will try to run a built in subcommand, and if it doesn't exist, it'll fall back to trying to run `git-some-command`.

Many Linux tools do this. Rust's cargo is another.

In fact, `man` itself can take multiple positional arguments and will concatenate them with a hyphen to perform the lookup.
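
For example:

    $ man git commit   # looked up as git-commit(1)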


Try 'strace git blah', it will be educational.


Have you checked out https://github.com/git-duet/git-duet/ ?

You configure a ~/.git-authors file with people with whom you regularly pair, and use `git duet [author-1] [author-2]` to set primary and secondary commit authors. Env variables set whether you want `Signed-off-by` or `Co-authored-by` trailers.


Thanks for the hint!

There seems to be at least these three advantages over my approach:

  - `git duet` has neat syntax for attributing more than two people.
  - `git duet` lets me enter a mode where it keeps attributing my co-authors.
  - `git duet` keeps the authors in a separate data file, not in the script.
I might consider switching for the next small project. :-)


It also can automatically rotate authors if you've got people eager for attribution. When I was just starting out pairing, it felt really good to join a project and immediately get commits on a new repo.


To make zsh complete these commands:

    zstyle ':completion:*:*:git:*' user-commands ${${(M)${(k)commands}:#git-*}/git-/}
Or specify just the ones you want to be completed:

    zstyle ':completion:*:*:git:*' user-commands foo:'description for foo'
(from /usr/share/zsh/functions/Completion/Unix/_git)


I use a generic version of this.

My .gitconfig has:

  [pretty]
  co-authored-by = Co-authored-by: %an <%ae>
  [alias]
  co-authored-by = log -1 --pretty=co-authored-by --regexp-ignore-case --author
Now `git co-authored-by Tom` generates a Co-authored-by: trailer for the last person named Tom who committed to the repo. Typically I'd just do `:r !git co-authored-by Name` in vim (mapped to \gc to save typing).


What does "--thor" mean?


Looking at the linked script, it indicates which cow orker to credit.

To the OP: You might be able to simplify the script by using `git commit --trailer …`. Or maybe you tried that and it didn't display the message in the editor window satisfactorily?
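
For example, with a recent enough Git (the name and email here are placeholders):

    git commit -m 'Add the thing' \
      --trailer 'Co-authored-by: Thor <thor@example.com>'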


Thanks for the hint, `--trailer` is a nicer solution than overriding the commit template!

For some reason `--trailer` is not available on my system, so I'd need to upgrade git, it seems.


Probably an example of a name, so that the added commit comment line has “Coauthored-by: Thor”


AutoHotkey script: pressing Caps Lock+Left Mouse Button simulates 50 mouse clicks per second until the left mouse button is pressed again:

  ~LButton up::
    if (GetKeyState("CapsLock", "P")) {
      while(!GetKeyState("LButton", "P")){
        MouseClick, left
        sleep 20      
      } 
    }
    return
Useful in situations when you need to click a lot :)

Also a bookmarklet that you use to turn any page dark with a click:

    javascript:document.querySelectorAll('\*').forEach(e=>e.setAttribute('style','background-color:#222;background-image:none;color:#'+(/^A|BU/.test(e.tagName)?'36c;':'eee;')+e.getAttribute('style')))
From here: https://github.com/x08d/222


I use AutoHotkey for short-term automation of programs, helped by this entry for "when I save the Autohotkey config in notepad with ctrl+s, reload the script in AutoHotkey so it works immediately":

    ~^s::
        WinGetActiveTitle, currentWinTitle
            If InStr(currentWinTitle, "AutoHotkey.ahk - Notepad")
                Reload
        Return
Then entries which automate typical Windows keyboard actions, like this one below which triggers on alt+3 and moves around the fields on some specific GUI program which had no other automation support for what I was doing:

    !3::Send {alt down}a{alt up}{Tab}{Tab}{Tab}{Tab}{Tab}{Tab}Position{Shift down}{Down}{Shift up}{Del}{Tab}{Tab}
Workflow becomes:

- have something mildly repetitive to do, notice how to do parts of it with keyboard only.

- right click AutoHotkey icon in taskbar, edit (opens Notepad).

- change some of these automations, alt+1, alt+2, alt+3, ...

- press ctrl+s to save and reload.

- switch back to the program and use the hotkey immediately to begin helping.

- repeat switching to AutoHotkey and the program, tweaking and adding more.

It's amenable to the kind of occasional task which has no easy proper automation, or is a one-off and isn't worth more time to do it through proper interfaces. Things like a vendor support telling you to "go through every affected record and toggle X field off and on again".


My AutoHotkey config is super simple and mainly there to be vimmy in native controls (Alt-K and Alt-J for things like dropdowns), which means I almost never need to touch the arrow keys:

  <!k::Send {Up}
  <!j::Send {Down}
  Capslock::Esc


What do you use this for?


Messed up user interfaces and games, e.g.: https://www.decisionproblem.com/paperclips/


TIL about Paperclips. Curse you for posting that link! At least I managed to release the hypno drones though...


Ah Universal Paperclips... See you in the future then. Also: Kittens game by bloodrizer (and no, you won't be getting out of this one as easily as you might get out of Universal Paperclips, so search with care if you need to do anything productive for the near future.)


Cookie clicker


I use this script, saved as `rerun`, to automatically re-execute a command whenever a file in the current directory changes:

    #!/usr/bin/sh

    while true; do
        reset;
        "$@";
        inotifywait -e MODIFY --recursive .
    done
For example, if you invoke `rerun make test` then `rerun` will run `make test` whenever you save a file in your editor.



There's also modd[0] which allows for many file watch pattern -> command combos to easily be defined & run simultaneously from a modd.conf file.

[0]https://github.com/cortesi/modd


I have two similar commands, both written as zsh functions but easily adaptable to shell scripts, other shells, etc.

# wait on a path and do something on change, e.g. `wait_do test/ run_tests.sh`

    wait_do() {
        local watch_file=${1}
        shift

        if [[ ! -e ${watch_file} ]]; then
            echo "${watch_file} does not exist!"
            return 1
        fi

        if [[ `uname` == 'Linux' ]] && ! command -v inotifywait &>/dev/null; then
            echo "inotifywait not found!"
            return 1
        elif [[ `uname` == 'Darwin' ]] && ! command -v fswatch &>/dev/null; then
            echo "fswatch not found, install via 'brew install fswatch'"
            return 1
        fi

        local exclude_list="(\.cargo-lock|\.coverage$|\.git|\.hypothesis|\.mypy_cache|\.pgconf*|\.pyc$|__pycache__|\.pytest_cache|\.log$|^tags$|./target*|\.tox|\.yaml$)"
        if [[ `uname` == 'Linux' ]]; then
            while inotifywait -re close_write --excludei ${exclude_list} ${watch_file}; do
                local start=$(\date +%s)
                echo "start:    $(date)"
                echo "exec:     ${@}"
                ${@}
                local stop=$(\date +%s)
                echo "finished: $(date) ($((${stop} - ${start})) seconds elapsed)"
            done
        elif [[ `uname` == 'Darwin' ]]; then
            fswatch --one-per-batch --recursive --exclude ${exclude_list} --extended --insensitive ${watch_file} | (
                while read -r modified_path; do
                    local start=$(\date +%s)
                    echo "changed:  ${modified_path}"
                    echo "start:    $(date)"
                    echo "exec:     ${@}"
                    ${@}
                    local stop=$(\date +%s)
                    echo "finished: $(date) ($((${stop} - ${start})) seconds elapsed)"
                done
            )
        fi
    }

# keep running a command until successful (i.e. zero return code), e.g. `nevergonnagiveyouup rsync ~/folder/ remote:~/folder/`

    nevergonnagiveyouup() {
        false
        while [ $? -ne 0 ]; do
            ${@}

            if [ $? -ne 0 ]; then
                echo "[$(\date +%Y.%m.%d_%H%M)] FAIL: trying again in 60 seconds..."
                sleep 60
                false
            fi
        done
    }


Okay it's 2022 and you still don't want to run nepomunk, a holistic semantic filesystem approach never happened and you're stuck with a million files in your Download folder, home folder etc.

So I use this script to give me a nice work environment, based on each day.

Every time you open bash, it'll drop you into today's directory. (~/work/year/month/day/)

When I think about stuff it's like.. oh yeah I worked on that last week, last year, etc - the folder structure makes this a lot easier, and you can just write 'notes' or 'meeting-with-joe' and you know the ref date.

For your bashrc:

  alias t='source /path/to/today'

  t
Now every day you'll know what you worked on yesterday!

  # this is where you'll get dropped by default.

  calvin@bison:~/work/2022/07/10$

  calvin@bison:~/work/2022/07/10$ ls
  WardsPerlSimulator.pl

  calvin@bison:~/work/2022/07/10$ cd ..; ls;
  01  02  04  05  06  07  08  09  10
  calvin@bison:~/work/2022/07$ cd ..; ls
  01  02  03  04  05  06  07
  calvin@bison:~/work/2022$ cd ..; ls
  2021  2022
Additionally you'll get a shortcut: you can type 't' as a bash fn, or go to ~/t/, which is symlinked and updated every time you run today (which is every time you open bash or hit 't'). This is useful if you want to have Firefox/Slack/whatever always save something in your 'today' folder.

https://git.ceux.org/today.git/
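
A minimal sketch of the idea (assuming ~/work and ~/t are free to use; it has to be sourced, not executed, so the cd affects your shell):

    # create and enter today's directory, refreshing the ~/t symlink
    today_dir=~/work/$(date +%Y/%m/%d)
    mkdir -p "$today_dir"
    ln -sfn "$today_dir" ~/t
    cd "$today_dir"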


> Okay it's 2022 and you still don't want to run nepomunk, a holistic semantic filesystem approach never happened

Do you mean Nepomuk, that thing from KDE, which died like 10+ years ago?

> Every time you open bash, it'll drop you into today's directory. (~/work/year/month/day/)

Ok, that's an interesting idea, but isn't this more akin to GNOME Zeitgeist? It was Zeitgeist, not Nepomuk, that aimed at journaling.

> https://git.ceux.org/today.git/

But where is the script? I only see the readme, no code for creating the folder.


> Do you mean Nepomuk, that thing from KDE, which dead like 10+ years ago?

Yeah. I mean I think the pitch when KDE4 launched was like... let's rethink how we deal with our files as less discrete paths and more like easily findable stuff.

> But where is the script? I only see the readme, no code for creating the folder.

whoops, I added it!



Temporal Nix


Which is what exactly?


Not exactly a script, but I have a UserStyle applied to Hacker News (using Stylus: <https://addons.mozilla.org/en-US/firefox/addon/styl-us/>). Here are the best bits:

1. Preserve single newlines that people typed in: Often people hit Return only once, and their intended formatting becomes a wall of text. Hacker News preserves the newline in the HTML.

  .commtext {
    white-space: pre-wrap;
  }
  .commtext .reply {
    white-space: normal; /* fix extraneous whitespace caused by first rule */
  }
2. Vertical lines to visually represent thread depth!

  .ind {
    background-image: repeating-linear-gradient(to right, transparent 0px 39px, rgba(204, 204, 204, 1.0) 39px 40px);
  }


Nice! I hope you don't mind that I just stole these. :D


Similar to most other posters, I have a dotfiles repo. Most of it isn't particularly novel, but I have a light wrapper around `git` that, after a successful clone, adds custom identity information to `.git/config`, so when I commit I won't inadvertently use my work author string instead of my personal one:

https://github.com/dom111/dotfiles/blob/master/bin/git

which when combined with files like:

    $ cat ~/.gitidentities/github.com
    
    [user]
            name = dom111
            email = dom111@users.noreply.github.com
means I don't accidentally leak email addresses, etc.

Also, not entirely related, but I wrote a small tool to add some animated GIFs to scripts: https://dom111.github.io/gif-to-ansi/


I use a custom git alias, `git recent`, almost every day. It shows you the most recent branches you have worked on. This is useful when you're trying to find a branch you worked on recently but forgot its name.

To use this alias, make an executable file called `git-recent` with the following contents and ensure it is in your `$PATH`

    git for-each-ref \
      --sort=-committerdate refs/heads/ \
      --format='%(HEAD) %(color:red)%(objectname:short)%(color:reset) %(color:yellow)%(refname:short)%(color:reset) - %(contents:subject) - %(authorname) (%(color:green)%(committerdate:relative)%(color:reset))'

Here's a redacted example of what the output looks like:

    * fc924a7e68 team/name/feature-2 - save - My Name (4 days ago)
      2fed1acfac team/name/feature-1 - add test - My Name (3 weeks ago)
      4db4d4ac77 main - Remove changes (#22397) - My Name (6 weeks ago)


I have a similar one with slightly different output, however mine appends the top N stashes as well for additional context.

  # Branches
  2022-08-11 13:26:34 -0700 4 days ago    branch-1
  2022-07-13 07:49:36 -0700 5 weeks ago   branch-10
  2022-06-28 23:40:53 +0000 7 weeks ago   branch-8
  2022-06-28 22:47:31 +0000 7 weeks ago   main
  2022-06-28 19:24:10 +0000 7 weeks ago   branch-7
  # Stashes
  stash@{0}:Thu Aug 11 13:17:11 2022 -0700 WIP on branch-3: afa19e7444a Some changes based on morning sync
  stash@{1}:Tue Jul 26 13:25:37 2022 -0700 WIP on branch-5: bd6122e2dfa find() bugfix
  stash@{2}:Tue Jul 12 15:05:31 2022 -0700 WIP on branch-7: 1221d0640c5 linter
Code:

  recent() 
  { 
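      # PURPLE, CYAN, GREEN and COLOR_END are ANSI color variables assumed to be defined elsewhere in the shell config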
      echo -e "${PURPLE}# Branches${COLOR_END}";
      for k in $(git branch | perl -pe 's/^..(.*?)( ->.*)?$/\1/');
      do
          echo -e $(git show --pretty=format:"%Cgreen%ci %Cblue%cr%Creset " $k -- | head -n 1)\\\t$k;
      done | sort -r | head;
      _num_stashes=$(git stash list | wc -l | while read l; do echo "$l - 1"; done | bc);
      echo -e "${PURPLE}# Stashes${COLOR_END}";
      for i in $(seq 0 ${_num_stashes});
      do
          echo -en "${CYAN}stash@{${i}}:${GREEN}" && git show --format="%ad%Creset %s" stash@{$i} | head -n 1;
      done
  }


I use something similar along with fzf, so I can search the branch and switch to it.

git alias:

  recent = "!f() { script -q /dev/null git for-each-ref --sort=-committerdate refs/heads/ --format='%(HEAD)%(color:yellow)%(refname:short)%(color:reset)|%(authordate:short)|%(color:red)%(objectname:short)%(color:reset)|%(subject) (%(color:green)%(committerdate:relative)%(color:reset))'|column -ts'|'; }; f"
zsh function:

  grecent() {
    local branch=$(git recent | fzf --ansi --no-multi --bind "q:abort")

    if [[ "$branch" = "" ]]; then
        echo "No branch selected."
        return
    fi

    local branch_name=$(echo "$branch" | awk '{print $1}')
    echo "git checkout $branch_name"
    git checkout "$branch_name"
  }
But sometimes it screws up the terminal and I have to run `reset` to fix it. It'd be great if someone could help with that.


Just stole this. Thanks a lot!


Not that long ago I was maintaining around 30 Drupal (popular PHP CMS) websites for different clients, on different ISPs.

I made a CLI utility for automating certain operations I was doing all the time: rsync of sources (push or pull), db backup / rollback, copying the local db to the remote server or back, etc. The utility looked for a dotfile in the project directory to get things like the remote server address, remote project path, etc.
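
For illustration, a sketch of what such a dotfile might have held (format and names entirely hypothetical):

    # .deployrc in the project root
    REMOTE_HOST=client1.example.com
    REMOTE_PATH=/var/www/client1
    ALLOW_DB_PUSH=no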

The tool served several purposes:

- Executing auxiliary tools (rsync, mysqldump, drush) with the right parameters, without requiring me to remember them.

- Storing (non-secret) information about the remote environment(s) in the project directory.

- Some dangerous operations (e.g. copying the local db to the remote server) were prohibited unless the dotfile explicitly enabled them. Some sites were only edited on dev and then pushed to production, but some had user data that should never be overwritten.

- When running an rsync of sources, the tool always did a dry-run first, and then required entering a randomly generated 4-letter code to execute... so I would have to stop and think, and didn't deploy by mistake.

This tool is too rough to share with the general public... but I consider it one of my greatest professional achievements, because over the years it saved me a lot of mental effort and stress, and quite a bit of time, prevented me from shooting myself in the foot, and forced me to use proper workflows every time instead of winging it. It required a small investment of time and some foresight... but my philosophy is that my work should be to build tools to replace me, and that was a step in that direction.


These are the stories I love. This is a perfect example of identifying what to automate, too: a repetitive task, with an understandable and maintainable number of input parameters, that you already accomplish with command-line tools.

I dream of doing freelance work (assuming this is what this was) and essentially automating 80% of the maintenance work. There's something just so neat about that to me.


    function ccd { mkdir -p "$1" && cd "$1"; }
Biggest timesaving script I've ever included in my arsenal.


I have this one aliased to mkcd, but yours saves an extra character. I like it!


add in exec functionality and you've got xkcd!


I call it "mcd" (for make and cd) in my Windows cmd and bash shells.


Zsh has the "take" command which accomplishes this.


Correction: Zsh does not have the take command. The Oh My Zsh add-on has the take command.

(Lots of us don't use OMZ)


Very interesting - didn't know that. I always install OMZ after zsh.


Didn't know that! thank you


I use `mkdir -p DIRECTORY_NAME && cd $_` in bash which is not as concise but still DRY.


This is the sort of DRY that is more dogmatic than helpful, IMO.


Quite.

DRY is a good guideline but a rubbish rule.


Not mine, and not ..... really.... serious.... but someone has to mention the greatest work scripts ever :

https://github.com/NARKOZ/hacker-scripts


Related to writing scripts on Mac OS, I highly recommend rumps (https://rumps.readthedocs.io) to show anything you want in your status bar.

For example, I have one script that uses rumps to show how many outdated homebrew packages I have (and also as a convenient shortcut to update those packages in the dropdown menu). I also have a second script that uses it to show a counter for open pull requests that I need to review (with links to individual PRs in the dropdown menu). It's great!

Result looks like this: https://imgur.com/yy6GlYk.jpg


Never heard of rumps, but I have been using xbar (renamed from BitBar) for years and it does the same.

My examples: https://imgur.com/a/SrjG1xe

I basically do all my local management from there: no need to run scripts manually, no need to click around in Finder. I even added a command to quickly copy my email, simply because it's so low-friction to do it.

Hands down the best tool I've ever used


And there goes the rest of my work week. Wow....


https://github.com/p-e-w/argos is a similar project for gnome-shell


Here's a shell script for moving files in git between repos, preserving the history, and following that history through the file being renamed:

     git-move src/afile.c src/bfile.c src/cfile.c ../destination/repo
https://gist.github.com/mnemnion/87b51dc8f15af3242204472391f...


Simpler than I expected. What was your original use case?


Same as the ongoing use case: breaking submodules of a project out into their own namespace, while preserving the edit history.


Bank reconciliation in Xero has no "auto match if reference and date are the same" option, so I (very crudely) scripted one. It'll run until it encounters a mismatch (which, for the company I work for, is basically never). Paste into the console:

    function go() {
        let line = document.querySelector('#statementLines .line');

        if (line) {
            let leftAndRight = line.querySelectorAll('.statement');

            let left = leftAndRight[0];
            let right = leftAndRight[1];

            // hasClassName comes from the page's own JS framework (Prototype.js);
            // on a plain DOM you'd use classList.contains instead
            if (right.hasClassName('matched')) {
                let leftDetails = left.querySelectorAll('.info .details-container .details span');
                let leftDate = leftDetails[0].textContent;
                let leftReference = leftDetails[1].textContent;

                let rightDetails = right.querySelectorAll('.info .details-container .details span');
                let rightDate = rightDetails[0].textContent;
                let rightReference = rightDetails[2].textContent;

                rightReference = rightReference.replace('Ref: ', '');

                // Date.parse already returns a timestamp in milliseconds, so compare it directly
                if (Date.parse(leftDate) == Date.parse(rightDate) && leftReference.toLowerCase() == rightReference.toLowerCase()) {
                    var okButton = line.querySelector(".ok .okayButton");

                    console.log(leftReference);
                    okButton.click();

                    // poll until the matched line is removed from the DOM, then process the next one
                    var waiter = function () {
                        if (line.parentNode == null) {
                            go();
                        } else {
                            setTimeout(waiter, 50);
                        }
                    };

                    setTimeout(waiter, 50);
                } else {
                    console.log("Details don't match");
                }
            } else {
                console.log("Line not matched");
            }
        } else {
            console.log("No line found");
        }
    }

    setTimeout(go, 100);


Just logged in to tell you that I am working on an alternative to Xero and you just validated one of our UI designs.


I switched from Xero to plaintext accounting,[0] and it was a huge step up.

There are several different plaintext accounting tools, but they all support automation like this. I personally use Beancount because I work best in Python.

The other huge advantage is that the "state" of your finances isn't opaque like in Xero. If you realize you've been categorizing certain transactions incorrectly in Xero, it's a hassle to navigate Xero's interface to correct everything, whereas in plaintext accounting it's usually a 2-second find/replace.
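
For instance, a recategorization that takes many clicks in a GUI can be a one-liner on a plaintext ledger (file and account names hypothetical):

    sed -i 's/Expenses:Misc/Expenses:Groceries/g' ledger.beancount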

The downside is that there's a steep learning curve and the documentation is kind of overwhelming, but once you learn it, it's extremely valuable.

[0] https://plaintextaccounting.org/


I just reviewed my own set of scripts in bin and don't feel I've got anything to contribute.

BUT, this thread is so special because it feels like this is the stuff you only get to see when you sit down at a co-worker's desk and watch them type something and then say "WHAT? HOW COOL!"

I miss that part now that it is all remote work. :(


Simple command line utility to display charging (or discharging) rate in watts on Linux. You might have to modify battery_directory and the status/current_now/voltage_now names based on laptop brand, but Lenovo, Dell and Samsung seem to use this convention.

    #!/usr/bin/python3

    battery_directory = "/sys/class/power_supply/BAT1/"

    with open(battery_directory + "status", "r") as f:
        state = f.read().strip()

    with open(battery_directory + "current_now", "r") as f:
        current = int(f.read().strip())

    with open(battery_directory + "voltage_now", "r") as f:
        voltage = int(f.read().strip())

    wattage = (voltage / 10**6) * (current / 10**6)
    wattage_formatted = f"{'-' if state == 'Discharging' else ''}{wattage:.2f}W"

    if state in ["Charging", "Discharging", "Not charging"]:
        print(f"{state}: {wattage_formatted}")

Output:

Charging: 32.15W

Discharging: -5.15W


My ASUS Zephyrus G15 has a BAT0 with power_now (in microwatts) instead of current_now and voltage_now.

I have in my shell history which I occasionally use:

  while true; do echo -n '^[[34m'; date --iso-8601=seconds | tr -d '\n'; echo -n '^[[m: battery contains ^[[31m'; echo "scale=3;$(cat /sys/class/power_supply/BAT0/energy_now)/1000000" | bc | tr -d '\n'; echo -n 'Wh^[[m, '; [ "$(cat /sys/class/power_supply/BAT0/status)" = "Discharging" ] && echo -n 'consuming' || echo -n 'charging at'; echo -n ' ^[[32m'; echo "scale=3;$(cat /sys/class/power_supply/BAT0/power_now)/1000000" | bc | tr -d '\n'; echo 'W^[[m'; sleep 60; done
(With ^[ being actual escape, entered via Ctrl-V Escape, because I find writing the escape codes literally easier and more consistent than using echo -e or whatever else.)

It’ll show lines like this every minute (with nice colouring):

  2022-08-16T01:07:31+10:00: battery contains 76.968Wh, charging at 0W
My PinePhone has an axp20x-battery with current_now and voltage_now, like your various laptops except that while discharging it gets a negative current_now, which makes perfect sense to me but which doesn’t seem to match your laptops (since you add the negative sign manually in your script) or my laptop’s power_now (which is likewise still positive while discharging).


My workstation setup, both for Linux and MacOS, is in the following repository: https://github.com/sirikon/workstation

https://github.com/sirikon/workstation/blob/master/src/cli/c...

For Linux, it can install and configure everything I need when launched on a clean Debian installation. apt repositories, pins and packages; X11, i3, networking, terminal, symlinking configuration of many programs to Dropbox or the repository itself... The idea is to have my whole setup with a single command.

For Mac, it installs programs using brew and sets some configurations. Mac isn't my daily driver so the scripts aren't as complete.

Also, there are scripts for the terminal to make my life easier. Random stuff like killing any Gradle process in the background, upgrading programs that aren't packaged in APT, backing up savegames from my Anbernic, etc. https://github.com/sirikon/workstation/tree/master/src/shell

And more programs for common use, like screenshots, copying Stripe test cards into the clipboard, launching android emulators without opening Android Studio, etc. https://github.com/sirikon/workstation/tree/master/src/bin


A bash script to open up the current git branch in my browser.

Use it every day. It's great because my company has multiple git submodules in any given project, and I can use this to watch for pipeline failures and the like.

  x=$(git config --local remote.origin.url|sed -n 's#.*/\([^.]*\)\.git#\1#p')
  y=$(git symbolic-ref --short HEAD)
  url="https://git.thecompany.com/thecompany/$x/tree/$y"
  open -a "firefox developer edition" "$url"


Me too! Though yours looks fancier - is that jumping to a branch? Lovely.

    glo () {
      xdg-open https://git.thecompany.com/$(git config remote.origin.url |cut -f2 -d: |cut -f1 -d.)
    }


I use git-open [1] for this – highly recommended. It supports multiple remotes and works great with custom hosted servers out of the box.

[1]: https://github.com/paulirish/git-open


I use termdown to run timers in my terminal. Back when we only had a pressure cooker and couldn't afford an automatic rice cooker, I wrote a bash function "rice" that would give instructions for cooking it in the pressure cooker. Kinda silly in retrospect, but it did ease the pain of being broke a bit:

    function alarm_forever() {
        # play one part of the track at a time so that this function can be killed any time
        while :; do
            afplay --time .72 ~/sounds/alarm.mp3;
        done
    }

    function alarm_until_input() {
        alarm_forever &
        pid=$!;
        read  -n 1 -p "$*";
        kill -9 $pid;
    }

    # pip install termdown
    function timer {
        termdown $@;
        alarm_until_input "[Press any key to stop]"
    }
    alias alarm="timer"

    # TODO: ask if user soaked the rice first
    function rice {
        echo "1. Wash rice. Place in pressure cooker with 1-1 water-rice ratio."
        echo "2. Place the pressure cooker on the stove on high."
        read  -n 1 -p "3. When the pressure pot starts whistling, press any key to start the timer."
        termdown --title "Tweet!" 2m
        alarm_until_input "4. Take pot off heat and press any key."
        termdown --title "RICE" 11m
        alarm_until_input "5. Open the pot and stir the rice immediately."
        alarm_until_input "6. Eat!"
    }


I used to clip tons of videos for highlight reels, which was made a lot quicker with this snippet.

  clip_video () {
          ffmpeg -ss "$2" -i "$1" -t "$3" -c copy "$4"
  }

Used like so:

  clip_video filename.mp4 start_point duration output.mp4


I do a similar thing, except for making gifs:

    #!/bin/bash

    if [ -z $3 ]
    then
        echo "usage: $0 file start_seconds duration [scale=600] [fps=15] [crop]"
        exit 1
    fi

    if [ -z $4 ]
    then
        SCALE=600
    else
        # w=iw/2:h=ih/2 half size
        SCALE="$4"
    fi

    if [ -z $5 ]
    then
        FPS=15
    else
        FPS=$5
    fi

    if [ -z $6 ]
    then 
        CROP="crop=iw:ih:0:0,"
    else
        # CROP="600:ih:250:0" full height
        CROP="setsar=1,crop=${6},"
    fi

    rm "${1}.gif" &> /dev/null

    ffmpeg -ss $2 -t $3 -i "$1" -vf  ${CROP}fps=${FPS},scale=${SCALE}:-1:flags=lanczos,palettegen palette.png -loglevel error
    ffmpeg -ss $2 -t $3 -i "$1" -i palette.png -filter_complex "${CROP}fps=${FPS},scale=${SCALE}:-1:flags=lanczos[x];[x][1:v]paletteuse" "${1}.gif" -loglevel error


A bloated script to automate creation of an Arch Linux Qemu VM. The subscript that runs in the VM is useful by itself for setting up a new Arch installation.

https://github.com/trevorgross/installarch/blob/main/install...

It's a personal tool that just kept growing. Probably amateurish by HN standards, but then, I'm an amateur. Yes, I could simply copy a disk image, but that's no fun.


My scripts are usually for work so they don't make sense outside of that.

One I was particularly proud of/disgusted by was one that allowed me to jump around a network with a single command, despite access being gated by regional jumphosts.

You are warned: https://git.drk.sc/-/snippets/107

Another script I wrote was for our devs to get access to MySQL in production on GCP; the intent was for the script to be executable only by root, and to allow sudo access to only this script (that also means ''chmod ugo-rwx gcloud'' though): https://git.drk.sc/-/snippets/98

I have another script to generate screenshots from grafana dashboards since that functionality was removed from grafana itself (https://github.com/grafana/grafana/issues/18914): https://git.drk.sc/-/snippets/66

Another time I got annoyed that Wayland/Sway would relabel my screens on successive disconnect/reconnects (IE my right screen could be DP-1 or DP-7 or anything in between randomly); so I wrote a docking script which moves the screens to the right place based on serial number: https://git.drk.sc/-/snippets/74


> One I was particularly proud of/disgusted by

I can relate! I think it just reflects the nature of the problem space. The script is gnarly because the thing one is trying to do is gnarly. Utility is the driving force, as far as I'm concerned.

The following aren't as gnarly as yours, but served their purpose nicely in that little project's context. I like to put/accumulate project-related automations in a `./bin` in my projects.

https://gitlab.com/nilenso/cats/-/tree/master/bin


Going to use this opportunity to spam ShellCheck, because it has historically saved me dozens of hours catching many silent Bash scripting errors and just making my scripts more robust/warning me of obscure edge cases:

https://www.shellcheck.net/


shellcheck is so good it's even worth installing the 100+ required haskell libraries


I've always just run it as a docker command. Here's what's listed on the github page [1]:

    docker run --rm -v "$PWD:/mnt" koalaman/shellcheck:stable myscript

[1] https://github.com/koalaman/shellcheck


If you're on Arch, you could install `shellcheck-bin` from the AUR.


Jetbrains IDEs have a plugin called `Shell Script` that provides code assistance and warnings. It's compatible with shellcheck too.


I have a script for concurrent web scraping: https://github.com/mateuszbuda/webscraping-benchmark It takes a file with URLs and scrapes the content. For more demanding websites it can use a web scraping API that handles rotating proxies. I add some logic to process the output as needed.


Since everyone here likes scripting, may I suggest, if you haven't already, checking out xbar (https://xbarapp.com/) for Mac and Argos (https://argos-scripts.github.io/) for Linux.

I have used these two on my machines for the last 4 years, writing tons of scripts for myself. Here are a few:

- Displaying internet/internal ip and allow me to click it to put in clipboard

- taskwarrior

- A simple conversion script that takes my clipboard & encodes/decodes base64, hex, and URL encoding, and converts epoch to UTC

- "auto type" my clipboard by simulating keystrokes- particular useful for pasting text into terminal that disable clipboard

- An incident-response switch that triggers a script to take a screenshot every 5 seconds while my mouse moves, reduce the image quality, and save it to a folder in my home drive. Another script GPG-encrypts them at the end of the day, so I can go back and grab a screenshot or review an incident if needed.
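
The clipboard auto-type one can be close to a one-liner on Linux; a sketch assuming X11 with xdotool and xclip installed:

    # type out the clipboard contents as if entered on the keyboard
    xdotool type --delay 50 "$(xclip -o -selection clipboard)"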


Is your incident script available anywhere? Sounds invaluable for postmortems.


Sure - You can find my script here https://gist.github.com/santrancisco/9d14e0105316cfa15f98f0f...

After that, it's just a matter of adding a crontab job to run the archive job every night. Note that I have no way yet to know when the mouse moves on macOS, as xdotool no longer works with Mac, so right now it takes a screenshot of every monitor and resizes it down... it might be too much and could eat up your HDD. I like the *nix version, since I did a dirty job with mouse location, so whenever I take a break from an incident or walk away from my desk, the screenshot script stops.


JS snippet to sort and return Play Store app reviews by helpfulness:

  var nodes = [...document.querySelectorAll('\*[aria-label="Number of times this review was rated helpful"]')];
  nodes.sort((a, b) => (parseInt(b.innerText) || 0) - (parseInt(a.innerText) || 0));
  nodes.map(e => ([
    parseInt(e.innerText) || 0,
    e.parentNode.parentNode.parentNode.parentNode.parentNode.children[1].textContent.toString().trimStart(),
  ]));


Here's one I wrote a few years back, that I'm quite fond of. It turns any arbitrary directory tree with individual executables into a "git [X] [Y]" style shell command.

https://github.com/Mister-Meeseeks/subcmd/blob/master/subcmd


There exists `sd` which does the same! Maybe details differ. See intro in https://ianthehenry.com/posts/a-cozy-nest-for-your-scripts/


sd’s autocompletion looks fun, thanks


Somewhat boring, but I wrote a shell script to tar and gzip my home directory and then rsync it to a NAS drive.

It can be configured to exclude certain directories (.cache and Downloads being likely contenders). Also, it can read in config files so it can back up other directories.

https://github.com/lordfeck/feckback
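
The core idea fits in a few lines; a rough sketch with a hypothetical NAS host and paths:

    # archive the home directory, minus noisy dirs, and ship it to the NAS
    archive=/tmp/home-$(date +%F).tar.gz
    tar -C "$HOME" --exclude='./.cache' --exclude='./Downloads' -czf "$archive" .
    rsync -av "$archive" nas:/backups/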


Dead simple but serves me extremely well, and I haven't seen anyone do it:

    cd ()
    {
        builtin cd "$@" || return $?
        ls --my-usual-flags
    }


I have a very similar alias that’s bound to ‘cl’


I have something similar on windows aliased to d


On macOS this can be really useful: it changes the current terminal to the frontmost folder open in Finder. I don't recall where it came from:

  # Change to the Front Folder open in Finder
  function ff {
    osascript -e 'tell application "Finder"'\
    -e 'if (0 < (count Finder windows)) then'\
    -e 'set finderpath to get target of the front window as alias'\
    -e 'else'\
    -e 'set finderpath to get desktop as alias'\
    -e 'end if'\
    -e 'get POSIX path of finderpath'\
    -e 'end tell';};\
  function cdff { cd "`ff $@`" || exit; };


Old habit breaker:

  git() {
    if [[ "$1" == 'checkout' ]]; then
      echo 'Reminder: Use `git switch` or `git restore` instead.' >&2
    fi

    command git "$@"
  }


What's wrong with git checkout?


It’s overloaded and has recently been made obsolete by splitting it into two new commands. There’s nothing wrong with personally continuing to use it, but adopting the new vocabulary might help make Git less confusing to others.
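
Concretely, the split looks like:

    git switch my-branch       # was: git checkout my-branch
    git restore -- README.md   # was: git checkout -- README.md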


I've often encountered dependency issues on Ubuntu. One time, while dealing with NVidia/CUDA, running `apt-get -y install cuda` complained about some missing dependency. I recursively went through the error messages and installed every missing dependency manually, and it worked, but it took me a long time and a lot of typing.

Then I wrote a script that does that automatically:

    #!/usr/bin/env bash

    main() {
        local package="$1"

        if [ -z "$package" ]
        then
            echo "usage: $0 PACKAGE"
            exit 1
        fi

        install_package "$package"
    }

    install_package() {
        local package="$1"
        local subpackage

        if sudo apt-get -y install "$package"
        then exit 0
        else
            sudo apt-get -y install "$package" \
            |& grep '^ ' \
            | sed 's/[^:]*:[^:]*: //;s/ .*//;' \
            | {
                while read subpackage
                do install_package "$subpackage"
                done
            }
            sudo apt-get -y install "$package" \
                && echo "SUCCESS: $package" \
                || echo "FAILURE: $package"
        fi
    }

    main "$@"


Curious, what's the difference between this and `apt-get --fix-broken install`?


Honestly, I have no idea.

If I recall correctly, I think there were some situations where --fix-broken did nothing for me, but the script did. I don't remember it nearly well enough to guarantee, though.

One difference I'm sure about is that the script marks all recursively installed dependencies as manually installed, which may not be favorable (e.g. if you wanted to remove the top-level package, all the dependencies would not be removed automatically).
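
That said, dependencies can be re-marked after the fact (package name hypothetical):

    sudo apt-mark auto libfoo1   # re-mark a dependency as automatically installed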


Here’s a cat function with highlighting for when I’m working on different platforms and might not have the shell configured for syntax highlighting:

    #!/usr/bin/env bash

    #: cat for shell scripts, source code.
    #: prints text with line numbers and syntax highlighting.
    #: accepts input as argument or pipe.
    function shc() {

    if [ $# -eq 0 ]; then
        # arguments equal zero; assume piped input
        nl | /usr/local/bin/pygmentize -l bash
        # accept piped input, process as source code
    else
        case "$1" in
            -h|--help)
                printf "%s\n" "shc usage:" "           shc [file]" "           type [function] | shc"
                ;;
            -v|--version)
                printf "%s\n" "vers 2"
                ;;
            *)
                if [ -f "$1" ]; then
                    # test anything that isn't expected flags for file
                    cat "$1" | nl | /usr/local/bin/pygmentize -l bash
                    # process file as source code
                else
                    # if not a file or expected flags, bail
                    printf "%s\n" "error; not the expected input. read shc_func source for more details"
                fi
        esac
    fi
    }


Makes operating the AWS CLI against a user with MFA enabled easier:

    #!/bin/sh

    echo "Store and retrieve session token AWS STS \n\n"

    # Get source profile
    read -p "Source Profile [<profile_name>]: " source_profile
    source_profile=${source_profile:-'<profile_name>'}
    echo $source_profile

    # Get destination profile
    read -p "Destination Profile [<profile_name>-mfa]: " destination_profile
    destination_profile=${destination_profile:-'<profile_name>-mfa'}
    echo $destination_profile

    mfa_serial_number='arn:aws:iam::<id>:mfa/<name>'

    echo "\nOTP: "
    read -p "One Time Password (OTP): " otp

    echo "\nOTP:" $otp
    echo "\n"

    output=$(aws sts get-session-token --profile $source_profile --serial-number $mfa_serial_number --output json --token-code $otp)

    echo $output

    access_key_id=$(echo $output | jq .Credentials.AccessKeyId | tr -d '"')
    secret_access_key=$(echo $output | jq .Credentials.SecretAccessKey | tr -d '"')
    session_token=$(echo $output | jq .Credentials.SessionToken | tr -d '"')

    aws configure set aws_access_key_id $access_key_id --profile=$destination_profile
    aws configure set aws_secret_access_key $secret_access_key --profile=$destination_profile
    aws configure set aws_session_token $session_token --profile=$destination_profile

    echo "Configured AWS for profile" $destination_profile


maybe look into AWS Vault?


Thank you!!


This is my autohotkey function, so that Windows gets the same functionality as OSX for cycling through instances of the same app (e.g. multiple firefox instances). Pass 0 or 1 to cycle all apps, or apps on the same desktop.

    !`::CycleCurrentApplication(0)
    !+`::CycleCurrentApplication(1)


    WhichMonitorAppIsOn(winId) {
        WinGetPos, cX, cY, cW, cH, ahk_id %winId%
        xMid := cX + (cW / 2)
        yMid := cY + (cH / 2)
        SysGet, nMons, MonitorCount
        Loop, % nMons
        {
            ; MsgBox %A_Index%
            SysGet, tmp, Monitor, %A_Index%
            withinWidth := (xMid > tmpLeft) && (xMid < tmpRight)
            ; MsgBox % tmpLeft . " -> " . tmpRight . "`t" . xMid
            if (withinWidth == 1)
                return %A_Index%
        }
    }


    CycleCurrentApplication(same_desktop_only) {
        WinGet, curID, ID, A
        curMon := WhichMonitorAppIsOn(curID)

        WinGetClass, ActiveClass, A
        WinGet, WinClassCount, Count, ahk_class %ActiveClass%
        IF WinClassCount = 1
            Return
        Else
            WinGet, List, List, % "ahk_class " ActiveClass
        Loop, % List
        {
            index := List - A_Index + 1
            WinGet, State, MinMax, % "ahk_id " List%index%
            WinGet, nextID, ID, % "ahk_id " List%index%
            nextMon := WhichMonitorAppIsOn(nextID)

            if (same_desktop_only > 0 && (curMon != nextMon))
                continue

            if (State != -1)  ; if window not minimised
            {
                WinID := List%index%
                break
            }
        }
        WinActivate, % "ahk_id " WinID
    }


I use a short bash/perl script to find/replace globally in large files. I have to change the search pattern each time, although I pass in the file name. It's not sophisticated, but it's been very useful.

The reason I like it is it also backs up the original, in case I mess up the regex (happens sometimes...):

   #!/usr/bin/env bash
   perl -i.bak -p -e 's/oldtext/newtext/g;' "$1"


I used to have a bunch of scripts, but I compulsively "clean" my backups too often to keep old stuff around. Here's my "newterm" script, which I use for launching xterm with tmux:

  #!/bin/sh
  
  if [ $(tmux has-session 2>/dev/null; echo $?) -eq 0 ]; then
      if [ $(tmux list-windows -f '#{window_active_clients}') ]; then
          if [ $(tmux ls | head -n 1 | awk '{print $2}') -le 2 ]; then
              xterm -e "tmux new-session -f active-pane,ignore-size -t "0" \; new-window"
          else
              xterm -e "tmux new-session -f active-pane,ignore-size -t "0" \; select-window -t +2"
          fi
      else
          xterm -e "tmux attach -f active-pane,ignore-size -t "0""
      fi
  else
      xterm -e tmux new-session -f active-pane,ignore-size
  fi
  
  if [ $(tmux ls | wc -l) -gt 1 ]; then
      for i in $(tmux ls -F '#S' -f '#{?session_attached,,#S}' ); do
          tmux kill-session -t ${i}
      done
  fi


Err... Maybe next time I'll remember to test before posting! (I fixed an obvious but easy to ignore bug that was in since the beginning to make it "presentable", but didn't test/trace to make sure the new version actually worked correctly!)

  --- newterm    Mon Aug 15 18:09:51 2022
  +++ .local/bin/newterm    Mon Aug 15 18:07:51 2022
  @@ -1,8 +1,8 @@
   #!/bin/sh
   
   if [ $(tmux has-session 2>/dev/null; echo $?) -eq 0 ]; then
  -    if [ $(tmux list-windows -f '#{window_active_clients}') ]; then
  -        if [ $(tmux ls | head -n 1 | awk '{print $2}') -le 2 ]; then
  +    if [ $(tmux list-windows -F '#I' -f '#{window_active_clients}' | wc -l) -gt 0 ]; then
  +        if [ $(tmux ls -F '#{session_windows}' | head -n 1) -le 2 ]; then
               xterm -e "tmux new-session -f active-pane,ignore-size -t "0" \; new-window"
           else
               xterm -e "tmux new-session -f active-pane,ignore-size -t "0" \; select-window -t +2"


Using my GitHub SSH keys to log in as root on my servers: https://gist.github.com/withinboredom/84067b9662abc1f968dfad...

This can be extended easily, even dynamically creating an account if the user is part of an org, or using libnss-ato to alias the user to a specific account.
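
The gist link is truncated, but the usual mechanism for this kind of thing (a sketch, not necessarily exactly what the gist does) is sshd's AuthorizedKeysCommand pointed at GitHub's public-keys endpoint:

    #!/bin/sh
    # /usr/local/bin/github-keys -- wired up in sshd_config with:
    #   AuthorizedKeysCommand /usr/local/bin/github-keys
    #   AuthorizedKeysCommandUser nobody
    # GitHub serves any user's public keys at a stable URL:
    curl -sf https://github.com/some-github-user.keys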


For local documentation of libraries (and languages):

    #! /bin/bash

    remote_file_path=$1

    wget --recursive --level=5 --convert-links --page-requisites --wait=1 --random-wait --timestamping --no-parent ${remote_file_path}
And a couple of zshrc functions which make jumping around my filesystem quite snappy. `jump` is aliased to `j`, and `mark` to `m`

    MARKPATH=~/.marks
    function jump {
        cd -P ${MARKPATH}/$1 2> /dev/null || (echo "No such mark: $1" && marks)
    }
    function mark {
        mkdir -p ${MARKPATH}; ln -s $(pwd) $MARKPATH/$1
    }
    function unmark {
        rm -i ${MARKPATH}/$1
    }
    function marks {
        ls -l ${MARKPATH} | sed 's/  / /g' | cut -d' ' -f9- && echo
    }
    _jump()
    {
        local cur=${COMP_WORDS[COMP_CWORD]}
        COMPREPLY=( $(compgen -W "$( ls $MARKPATH )" -- $cur) )
    }
    complete -F _jump jump
(Totally stolen, and fixed up to work in ZSH)


I did something similar in bash using cd instead of jump.

  export CDPATH=.:~/.marks/
  function mark {
    ln -sv "$(pwd)" ~/.marks/"$1"
  }
I prefix all the "marks" with a symbol (e.g. "@"), then if I do

  $ cd @
then press tab it will list all the marks, or autocomplete if I type more.


I also stole the jump script, but instead of aliasing, I just renamed the function to j instead of jump. I still kept mark as is because I don't use it anywhere near as often as j.


I have a bash function I use to checkout a git branch based on a search string:

  function git-checkout-branch-by-search-string() {
    local maybe_branch_name
    maybe_branch_name=$(git branch --sort=-committerdate | grep "$1" | head -n 1)
    if [ -n "$maybe_branch_name" ]; then
      git checkout "${maybe_branch_name:2}"
    else
      echo "Could not find branch matching $1"
    fi
  }
  alias gcos="git-checkout-branch-by-search-string"
Branches often include things like ticket numbers and project keys, so you can do

  $ gcos 1234
and save some typing.

I have a pair of fixup commit functions, which make it faster to target fixup commits prior to rebasing:

  function git-commit-fixup() {
    git commit --fixup ":/$*"
  }
  function git-add-all-then-git-commit-fixup() {
    git add .
    git commit --fixup ":/$*"
  }
Long function names that are then assigned to an alias can make it easier to find rarely used ones later if you forget them. That is, you can do:

$ alias | grep fixup

to see the list of relevant aliases and the functions they call.

I also have two functions I use like a linear git bisect:

  function git-checkout-parent-commit() {
    local prev
    prev=$(git rev-parse HEAD~1)
    git checkout "$prev"
  }
  function git-checkout-child-commit() {
    local forward
    forward=$(git-children-of HEAD | tail -1)
    git checkout "$forward"
  }
  function git-children-of() {
    for arg in "$@"; do
      for commit in $(git rev-parse $arg^0); do
        for child in $(git log --format='%H %P' --all | grep -F " $commit" | cut -f1 -d' '); do
          echo $child
        done
      done
    done
  }


I use this to generate my site:

    #!/usr/bin/env bash

    for file in `find . -name '*.md'`; do
        output=${file::-3}.html
        if [[ `date -r "$file" "+%s"` -le `date -r "../$output" "+%s"` ]]
        then
            echo "Skipping $file"
            continue
        fi
        mkdir -p ../$(dirname $output)
        echo Generating $output from $file
        cat << EOF > ../$output
    <!DOCTYPE html>
    `cat head.html`
    <body>
    `cat navigation.html`
    <main>
    `pandoc $file`
    </main>
    <footer>
        <span>The content on this page is licensed under the CC BY-ND 4.0</span>
        <a style="float:right" href="/md/$file">Source</a>
    </footer>
    </body>
    EOF
    done;


Not mine and I don’t remember the source, but really useful:

    # Simple calculator
    function calc() {
        local result=""
        result="$(printf "scale=10;$*\n" | bc --mathlib | tr -d '\\\n')"
        #                       └─ default (when `--mathlib` is used) is 20
        #
        if [[ "$result" == *.* ]]; then
                # improve the output for decimal numbers
                printf "$result" |
                sed -e 's/^\./0./'        `# add "0" for cases like ".5"` \
                    -e 's/^-\./-0./'      `# add "0" for cases like "-.5"`\
                    -e 's/0*$//;s/\.$//'   # remove trailing zeros
        else
                printf "$result"
        fi
        printf "\n"
    }


Another version

> bc


Another version would be `python`. The point of the function is to reduce friction by avoiding the opening and closing of a program just to perform a single calculation.


I have my notes in Dendron which is basically a directory of yaml files. I often need to search through the notes so I made the below

  search_notes() {
      input=$(rg -v '(\-\-)|(^\s*$)' --line-number /home/user/some-dir |
          fzf --ansi --delimiter : \
              --preview 'batcat --color=always {1} --highlight-line {2}' \
              --preview-window 'up,60%,border-bottom,+{2}+3/3,~3' |
          choose -f : 0)
      if [[ -n $input ]]; then
          less "$input"
      fi
  }

It uses various Linux utilities including fzf and batcat (https://github.com/sharkdp/bat) to open a terminal with all the places where my query comes up (supporting fuzzy search). Since the workhorses are fzf and ripgrep, it is quite fast even for very large directories.

So I will do `search_notes postgres authentication`. I can select a line and it will open the file in less. Works like a charm!


I run this simple shell script to make daily incremental backups of my home folder using Borg. It works really well, haven't touched it in years [1].

1: https://gist.github.com/adewes/02e8a1f662d100a7ed80627801d0a...


To shake it up compared to other responses: I regularly integrate the PyWin32 library into work scripts. Sometimes you just need a way to automate interactions with Windows in those non-dev jobs.

The most recent was a script that parsed a financial report and generated multiple emails depending on a set of criteria. Then the user could manually review these emails and press send if everything checks out. The goal of the script was to reduce some of the menial work my financial co-worker was doing. I don't have it published on GitHub because it has some internal company info in it. But it worked cleanly, and regularly saves him hours of tedious work.

Also I highly recommend EasyGui library for those quick scripts that need user input from people who are not comfortable with a console/cmd. Helps make different types of popup windows for user input/selection with a few simple lines.


I can only share a part, since the majority of my scripts reveal much about my system structure (I try to open whatever I can, though; the tedious part of open sourcing a script is to make it generic/configurable):

https://github.com/64kramsystem/openscripts

Missed the previous cheatsheet post :) I have a massive collection, which are large enough to be books more than cheatsheets (still, I access and use them as cheatsheets):

https://github.com/64kramsystem/personal_notes/tree/master/t...



I use find and grep a lot in code repos, so I came up with this bash function:

    alias filter_repos_z="grep -ZzEv '/tags|/\.hg/|/\.svn/|/\.git/|/\.repo/|\.o$|\.o\.cmd$|\.depend|\.map$|\.dep$|\.js$|\.html$'"
    function findxgrep()
    {
    find . -type f -print0 | filter_repos_z | xargs -0 grep --color=auto "${@}" | grep -v "^Binary file" | sed 's/^\.\///' | less -F
    }
The "${@}" is the critical bit that allows me to pass arguments like -i to grep. The grep, find and xargs commands all support using a NULL as a file separator instead of whitespace.


Ever used ack?


No, but thanks for pointing out its existence. Homepage:

https://beyondgrep.com/


the_silver_searcher and ripgrep are similar (and inspired by Ack). I switched from ack to the_silver_searcher years ago as it's quite a bit faster (or was at the time anyway, not sure how fast Ack is now), and then to ripgrep because it has the -g option (only search files that match a pattern) that's handy sometimes.
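For reference, -g takes gitignore-style globs, including negations:

  rg -g '*.cpp' -g '!third_party/*' some_pattern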


Favourite topic!

My "Bash Toolkit": https://github.com/adityaathalye/bash-toolkit

My (yak-shaving-in-progress :) "little hot-reloadin' static shite generator from shell": https://github.com/adityaathalye/shite

A subtle-ish aspect is, I like to write Functional Programming style Bash. I've been blogging about it here: https://www.evalapply.org/tags/bash/


My favorite utility is a shell function for quickly attaching to an existing screen session after connecting via ssh (or creating a new one if none exists). It's pretty handy for treating a single ssh connection to a server as if it was a long-lived multi-tab terminal:

  sshcreen () {
      ssh -t "$@" screen -xRR
  }
Works with bash and zsh. Usage is pretty simple:

  $ sshcreen user@example.com
Or for local Docker instances mapped to port 2222:

  $ sshcreen root@localhost -p 2222
Detach the session with CTRL-A + D, reattach by rerunning the sshcreen command you previously used.


This was handy before oh-my-zsh, but omz has this functionality now, so it's no longer necessary.

    pman()
    {
        man -t "${1}" | open -f -a /System/Applications/Preview.app
    }

I like using this in conjunction with pbcopy to quickly generate a random password at given length

    pwgen()
    {
        length=${1:-64}
        charlist='0-9a-zA-Z~!@#$%^&*()_+-=:";<>?,./'
        # quote the list so the shell doesn't glob/split it
        LC_ALL=C tr -dc "$charlist" < /dev/random | head -c "$length"; echo
    }


For pwgen there is a great tool of the same name that does basically that. Just add -1 to get a single password; by default it outputs a whole bunch so you can pick one, which I prefer most of the time.


>pman

Oh man. I didn't know just how badly I needed this in my life, thank you!


If you're using oh-my-zsh, the macOS plugin has "man-preview" that does the same thing, probably a little better. I updated mine to alias pman=man-preview


I use my system setup script: https://github.com/pcho/binfiles/blob/master/bt. It helps me a lot with setting up new VPS when I need or with daily tasks. While using macOS, I also had this as helpers: https://github.com/pcho/binfiles/blob/master/.archive/setup-..., to set up homebrew. And, https://github.com/pcho/binfiles/blob/master/.archive/setup-..., for a bunch of options as many of build from source works fine in both systems. In .archive folder there’s a lot of other scripts that I used, but tried to incorporate them in bootstrap script.

It also uses my https://github.com/pcho/dotfiles, https://github.com/pcho/vimfiles and https://github.com/pcho/zshfiles


  alias makepw='cat /dev/urandom | LC_ALL=C tr -cd A-Za-z0-9,_- | head -c 25; echo'
Any proper password manager will of course be able to supplant tricks like these.


Basically a one-liner version of pwgen.


Mint recently updated the API they use behind the scenes, and it broke the preexisting scripts others had written for importing transactions from a CSV (when you link a new bank, it only imports the past 90 days).

My son opened an account over a year ago, but we didn't sign up for Mint until this weekend, so I ended up writing a new import script for the updated API:

https://github.com/jeradrose/mint-simple-import


I'm not sure bashrc tweaks completely qualify, but considering it involves probably the most convoluted shell script I've ever had to come up with, I'll plug https://github.com/jkern888/bash-dir-collapse. I like having the current directory in my prompt but got annoyed at how long it could get, so I made this to shrink the overall length without completely sacrificing the full path.


I'm looking in my `~/bin` folder on my work machine right now. I have a good few that are very specific to my work.

- Scripts to test our rate limiting for both authenticated and unauthenticated users (was handy)

- API routes changed in a given PR (set of commits since the last interaction with master in reality)

- ssl-expiration-date - Checks the expiration date of a site's certificate

  domain="$1"
  
  echo "Checking the SSL certificate expiration date for: $domain"
  
  curl -vI "$domain" 2>&1 | grep -o 'expire date: .*$'
- test-tls-version - Checks if a website supports a given version of TLS

  domain="$1"
  curl_options=( "--tlsv${2}" --tls-max "$2" )
  
  curl "${curl_options[@]}"  -vI "$domain" 2>&1
There are also some miscellaneous PHP scripts lying around for template-related stuff. PHP makes a great templating language when you need some basic programmatic additions to your output text.

Everything is too coupled to my work to be useful to others, and most of the automation scripts I've written for work are run as cron jobs now and send out emails to the appropriate people. Most of these are written in PHP (we're a PHP shop).


Here's a small script I use often to tag commits with Git.

It shows the current status, lists out the most recent tags, prompts for a new tag and message, and finally pushes.

Everything is colorized so it's easy to read and I use it quite often for Golang projects.

https://github.com/bbkane/dotfiles/blob/e30c12c11a61ccc758f7...


I use a script called `shell-safe-rm` [1], aliased as `rm` in interactive shells, such that I don't normally use `rm` directly. Instead of directly removing files, they are placed in the trash folder so they can be recovered if they were mistakenly deleted. Highly recommend using a script/program like this to help prevent accidental data loss.

[1] https://github.com/kaelzhang/shell-safe-rm


Script I quickly wrote that automatically compiles my tex files on change, and reloads my pdf viewer (mupdf in this particular case). This was written for OpenBSD (TeXstudio wasn't available, and I ended up liking this editor+mupdf approach even more), so I don't know if it perfectly translates to other OSs.

    #!/bin/sh
    
    pdf_viewer="mupdf";
    latex_cmd="pdflatex -interaction=nonstopmode"
    
    if [[ $# -eq 0 ]]; then
        print "No arguments: filename required"
        exit
    fi
    
    filename=$1;
    pdfname=${filename%%.*}.pdf
    
    # initial compilation to make sure a pdf file exists
    ${latex_cmd} ${filename};
    
    ${pdf_viewer} ${pdfname} &
    
    # get pid of the pdf viewer
    pdf_viewer_pid=$!;
    
    while true; do
        # as long as the pdf viewer is open, continue operation, if it gets closed,
        # end script
        if kill -0 "${pdf_viewer_pid}" 2>/dev/null; then
            if [[ ${filename} -nt ${pdfname} ]]; then
                ${latex_cmd} ${filename};
    
                # reload pdf file, only works with mupdf
                kill -HUP ${pdf_viewer_pid};
                touch $pdfname
            fi
            sleep 1;
        else
            exit 0;
        fi
    done;


I sometimes leave a process running on a port for webdev and then try to open a new one resulting in the error, "Error: listen EADDRINUSE 0.0.0.0:NNN", e.g. 0.0.0.0:443.

There are many ways to search for the process, but here's what I use:

   lsof -iTCP -sTCP:LISTEN -P | grep [PORT NUMBER]
Look for port num and kill the process with:

   kill -9 [PID OF PROCESS YOU WANT TO KILL]
Note: if the process is running as the root user, you will need to prepend the above commands with sudo.
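If you do this often, the two steps collapse into one helper, since lsof's -t flag prints bare PIDs (the killport name is made up):

   killport() {
       # -t: PIDs only; -i: match the TCP port; restrict to LISTEN state
       kill -9 $(lsof -tiTCP:"$1" -sTCP:LISTEN)
   }

Then: killport 443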


I often have a need to serve a local directory via HTTP. In the old days the built-in Python webserver was enough, but at some point browsers became more aggressive about concurrent connections and the single-threaded `python -m SimpleHTTPServer` would just get stuck if it received two requests at once.

As a workaround, I wrote a small wrapper script that would enable multi-threading for SimpleHTTPServer.

~/bin/http-cwd , Python 2 version (original):

  #!/usr/bin/python
  import argparse
  import BaseHTTPServer
  import SimpleHTTPServer
  import SocketServer
  import sys

  class ThreadedHTTPServer(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
      pass

  def main(argv):
      parser = argparse.ArgumentParser()
      parser.add_argument(
          "--port", type = int, nargs = "?",
          action = "store", default = 8000,
          help = "Specify alternate port [default: 8000]",
      )
      parser.add_argument(
          "--iface", type = str, nargs = "?",
          action = "store", default = "127.0.0.1",
          help = "Specify iface [default: 127.0.0.1]",
      )
      args = parser.parse_args(argv[1:])
      server_address = (args.iface, args.port)
      srv = ThreadedHTTPServer(server_address, SimpleHTTPServer.SimpleHTTPRequestHandler)
      sa = srv.socket.getsockname()
      print "Serving http://%s:%r ..." % (sa[0], sa[1])
      srv.serve_forever()

  if __name__ == "__main__":
      sys.exit(main(sys.argv))
Python 3 version (necessary for platforms that have dropped Python 2, such as macOS):

  #!/usr/bin/python3
  import argparse
  import http.server
  import socketserver
  import sys

  class ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
      pass

  def main(argv):
      parser = argparse.ArgumentParser()
      parser.add_argument(
          "--port", type = int, nargs = "?",
          action = "store", default = 8000,
          help = "Specify alternate port [default: 8000]",
      )
      parser.add_argument(
          "--iface", type = str, nargs = "?",
          action = "store", default = "127.0.0.1",
          help = "Specify iface [default: 127.0.0.1]",
      )
      args = parser.parse_args(argv[1:])
      server_address = (args.iface, args.port)
      srv = ThreadedHTTPServer(server_address, http.server.SimpleHTTPRequestHandler)
      sa = srv.socket.getsockname()
      print("Serving http://%s:%r ..." % (sa[0], sa[1]))
      srv.serve_forever()

  if __name__ == "__main__":
      sys.exit(main(sys.argv))


How does this differ from the stdlib approach?

    python -m http.server 8080 --bind 127.0.0.1 --directory your_directory


May I direct your attention to the second sentence of my post?

It fixes an issue in the Python built-in HTTP server that causes it to hang under concurrent connections.

First, run the built-in Python web server:

  [term1]$ python -m SimpleHTTPServer 8080
Then connect to it with a client that doesn't immediately send a request, such as `netcat`. This simulates the behavior of modern browsers, which seem to set up a pool of pre-established connections.

  [term2]$ nc localhost 8080
Now try to get a page from the server via Curl (or wget, etc). It will hang after sending the request, because the server's single thread is trying to serve the idle connection.

  [term3]$ curl -v http://127.0.0.1:8080
  *   Trying 127.0.0.1:8080...
  * TCP_NODELAY set
  * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
  > GET / HTTP/1.1
  > Host: 127.0.0.1:8080
  > User-Agent: curl/7.68.0
  > Accept: */*
  > 
In real life, the behavior I saw was that I'd try to connect to the server with Chrome and it would hang after the pages had partially loaded.


I totally missed that! Thanks for the additional detail.

This issue appears to have been fixed: https://github.com/python/cpython/issues/75820


I think this is a problem that's long been fixed; I tested your command and it seems to work as expected for me in Python 3.10 anyway. And I've been using the "python -mhttp.server" frequently for years, and never experienced any of these problems.


Yup, here are mine.

    # Search all directories for this directory name.
    dname() {
        [ $# -eq 0 ] && echo "$0 'dir_name'" && return 1
        fd --hidden --follow --exclude .git --type directory "$*"
    }

    # Search all files for this filename.
    fname() {
        [ $# -eq 0 ] && echo "$0 'file_name'" && return 1
        fd --hidden --follow --exclude .git --type file "$*"
    }

    # Find and replace with a pattern and replacement
    sub() {
        [ $# -ne 2 ] && echo "$0 'pattern' 'replacement'" && return 1
        pattern="$1"
        replace="$2"
        command rg -0 --files-with-matches "$pattern" --hidden --glob '!.git' | xargs -0 perl -pi -e "s|$pattern|$replace|g"
    }

    # Uses z and fzf, if there's a match then jump to it. If not, bring up a list via fzf to fuzzy search.
    unalias z 2> /dev/null
    z() {
        [ $# -gt 0 ] && _z "$*" && return
        cd "$(_z -l 2>&1 | sed 's/^[0-9,.]* *//' | fzf)"
    }


Shorthand to find all files matching a pattern (with optional additional arguments, e.g., -delete, -ls, -exec ..., etc.):

  fndi () 
  { 
      tgt="${1}";
      shift;
      echo find . -iname \*"${tgt}"\* "${@}";
      find . -iname \*"${tgt}"\* "${@}" 2> /dev/null;
      [[ -z $tgt ]] && { 
          echo;
          echo "No target was specified, did the results   surprise?"
      }
  }
Shorthand to find all files containing a pattern:

  fndg () 
  { 
      binOpt="-I";
      wordOpt="";
      caseOpt="-i";
      while true; do
          if [[ -z $1 || $1 =~ ^[^-+] ]]; then
              break;
          fi;
          case $1 in 
              +i)
                  caseOpt=""
              ;;
              -B)
                  binOpt=""
              ;;
              -w)
                  wordOpt="-w"
              ;;
              *)
                  echo "Unrecognized option '${1}', cannot proceed.";
                  return 1
              ;;
          esac;
          shift;
      done;
      if [[ -z $2 ]]; then
          startIn=.;
      else
          startIn='';
          while [[ ! -z $2 ]]; do
              startIn+="$1 ";
              shift;
          done;
      fi;
      [[ -z $1 ]] && { 
          echo "No target specified, cannot proceed.";
          return
      };
      tgt=$1;
      echo find ${startIn} -type f -exec grep $binOpt $wordOpt $caseOpt -H "${tgt}" {} \;;
      find ${startIn} -type f -exec grep $binOpt $wordOpt $caseOpt -H "${tgt}" {} \; 2> /dev/null
  }


My most-used self-written script is probably `syssheet` -- this script displays information about the running system in terms of load (RAM usage, disk usage, load avg, users) and hardware (CPU model, network interfaces...). It is like a crossover between top and inxi. I use it to clearly distinguish what kind of system I am logging into, and deploy it to all of my systems: https://masysma.lima-city.de/11/syssheet.xhtml

There is also a collection of more "obscure" scripts in my shellscripts repository documented here: https://masysma.lima-city.de/32/shellscripts.xhtml.

Another (probably niche) topic is my handling of scanned documents, which arrive as PDFs from the scanner and which I want to number according to the stamped number on the document and convert to PNG at reduced color space: https://masysma.lima-city.de/32/scanning.xhtml


Not mine (I stole it from someone else), but a very useful bash prompt_func to store your entire bash history forever:

8<-----------------------------

  function prompt_func {
    CMDNUM=`history 1 | awk '{print $1}'`
    LAST_CMD=`history 1 | cut -f 3- -d ' '`

    if [ x$LAST_CMDNUM = xwho_knows ]; then
      LAST_CMDNUM=$CMDNUM
    fi

    if [ x$CMDNUM != x$LAST_CMDNUM ]; then
      FULL_CMD_LOG="$HOME/full-history/$(date "+%Y-%m-%d").log"
      echo "$(date '+%H:%M:%S') `munge_pwd` $LAST_CMD" >> $FULL_CMD_LOG
      LAST_CMDNUM=$CMDNUM
    fi
  }
  export PROMPT_COMMAND=prompt_func
  export LAST_CMDNUM=who_knows

  function fh() {
    grep -r --color=NEVER ${*} ~/full-history |
    sed 's/[^ ]* //' |
    sed 's/ \[[^]]\*\]/$/'
  }
8<-----------------------------

`munge_pwd` is another script that does various substitutions on the prompt (specific to how my work directories are laid out) but mostly you can just substitute `pwd` if you don't care about deduplicating stuff like multiple checkouts of the same project.


Not a script, but https://github.com/ellie/atuin does something similar


thanks! i've even searched for something like this in the past and come up blank.


Recursive grep: for every time you know you've written that code before but can't remember the exact syntax. Filters out known build directories that would otherwise make it slow (modify this to your personal use case).

https://github.com/djsamseng/cheat_sheet/blob/main/grep_for_...

  #!/bin/bash

  if [ $# -eq 0 ]; then
      echo "Usage: ./grep_for_text.sh \"text to find\" /path/to/folder --include=*.{cpp,h}"
      exit
  fi

  text=$1
  location=$2

  # Remove $1 and $2 to pass remaining arguments as $@
  shift
  shift

  result=$(grep -Ril "$text" "$location" \
      $@ \
      --exclude-dir=node_modules --exclude-dir=build --exclude-dir=env --exclude-dir=lib \
      --exclude-dir=.data --exclude-dir=.git --exclude-dir=data --exclude-dir=include \
      --exclude-dir=__pycache__ --exclude-dir=.cache --exclude-dir=docs \
      --exclude-dir=share --exclude-dir=odas --exclude-dir=dependencies \
      --exclude-dir=assets)

  echo "$result"


Seems like ripgrep would be the optimal tool for this job.

https://github.com/BurntSushi/ripgrep


Disclosure: I'm the author of ripgrep.

As a sibling comment mentioned, assuming your .gitignore files exclude all of that stuff from your repo, you should be able to just run 'rg "text to find"' to replace all of that. And use 'rg "text to find" -tcpp' if you want to limit it to C++ files.

I had similar scripts for recursive grep like that too. ripgrep replaced all of them.


I have something similar. This is the reason my organisation of source code is primarily by-language at the top level.


    json2yaml() {
        python3 -c "import json,sys,yaml; print(yaml.dump(json.load(sys.stdin)))"
    }
    export -f json2yaml

    yaml2json() {
        python3 -c "import json,sys,yaml; json.dump(yaml.safe_load(sys.stdin), sys.stdout, default=str)"
    }
    export -f yaml2json

    httping() {
        while true; do
            curl $@ -so /dev/null \
                -w "connected to %{remote_ip}:%{remote_port}, code=%{response_code} time=%{time_total}s\n" \
                || return $?
            sleep 1
        done
    }
    [[ ! $(>&/dev/null type httping) ]] && export -f httping

    redis-cli() {
        REDIS_HOST="${1:-127.0.0.1}"
        REDIS_PORT="${2:-6379}"  
        rlwrap -S "${REDIS_HOST}:${REDIS_PORT}> " socat tcp:${REDIS_HOST}:${REDIS_PORT} STDIO
    }
    [[ ! $(>&/dev/null type redis-cli) ]] && export -f redis-cli


Does a PostgreSQL plpgsql function count as a script?

https://gist.github.com/stuporglue/83714cdfa0e4b4401cb6

It's one of my favorites because it's pretty simple, and I wrote it when a lot of things were finally coming together for me (including GIS concepts, plpgsql programming, and a project I was working on at the time).

This is code which takes either two foci points and a distance, or two foci, a distance, and the number of points per quadrant, and generates a polygon representing an ellipse. Nothing fancy, but it made me happy when I finally got it working.

The use case was to calculate a naive estimate of how far someone could have ridden on a bike share bike. I had the locations they checked out the bike, and where they returned it, and the time they were gone. By assuming some average speed, I could make an ellipse where everywhere within the ellipse could have been reached during the bike rental.
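For anyone reimplementing it, the geometry reduces to a couple of lines (my notation, not the gist's): with foci F1, F2 and a travel budget d = speed * time, the reachable region is every point P with |P - F1| + |P - F2| <= d, and the polygon is just that ellipse sampled at regular angles, with semi-axes

  a = d/2, \quad c = \lVert F_1 - F_2 \rVert / 2, \quad b = \sqrt{a^2 - c^2}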


I have a shell function exported called "cheat". It looks for a plain text file in a specific location named after the one argument to cheat() and it just prints the file out. I think it might glob and print multiples out.
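Roughly this shape, from memory (the ~/.cheat path is illustrative):

  cheat() {
      # print every cheat file whose name starts with the given topic
      cat "$HOME/.cheat/$1"* 2>/dev/null || echo "no cheat for: $1"
  }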

And then I have different dotfile repos. I have a base one that I keep so clean I could get a job at Disney with it. That's where most of my scripts live. And then I have locale ones, like -home, -<employername>. Those have overlays so that I can have contextual extensions, such as a cheat database with work stuff. Also, I can keep that dotfile-employername hosted at my employer so that I'm not "crossing the streams". I don't even have to link them, they just autoload based on their location and name.

I don't have to hop systems too much, so grabbing fresh tooling is a twice a year problem. I'm a cli-as-ide dinosaur so I just hide all my seldom-used scripts under a double underscore prefix. __init_tooling will update vim and give me the 8 or 9 plugins I have grown dependent upon, give me a ruby and python environment, etc.

I have a function called "add_word". Every time I see a word I dont know, I learn it, and then I run "add_word <new word> <definition>". It creates a new file called <new word> with the definition and commits it to a git repo hidden away. Every couple years I'll work through the list and see which I remember. I have about a 30% success rate adopting new words, which again, dinosaur here, so, I'll take whatever I can get.
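add_word is tiny; a sketch from memory (assuming ~/.words is the hidden git repo):

  add_word() {
      echo "$2" > "$HOME/.words/$1" &&
          git -C "$HOME/.words" add "$1" &&
          git -C "$HOME/.words" commit -q -m "add_word: $1"
  }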

The dirtiest thing I have is a cheap vault that uses vim and shell automation. I have a grammar for descripting secrets, and I can pass a passphrase through automation to get secrets out. I'm sure it's 100% hackable. I know the first rule of security software is "dont ever try to make your own". So I don't put anything too good in there.


  > Every couple years I'll work through the list and see which I remember.
You'll love Anki, if you've never heard of it.

Desktop: https://apps.ankiweb.net/

Android: https://play.google.com/store/apps/details?id=com.ichi2.anki...


I wasn't the author but if I had to pick one set of scripts that have lowered my anxiety the most it's this: http://www.mikerubel.org/computers/rsync_snapshots/

Long story short: you can use hard links + rsync to create delta snapshots of a directory tree. I use it to create a back up of my important directory trees.
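The core trick from that page, roughly (a minimal three-generation rotation; paths are placeholders):

  # drop the oldest snapshot, shift the rest
  rm -rf backup.2
  [ -d backup.1 ] && mv backup.1 backup.2
  # hard-link copy: unchanged files share inodes with the previous snapshot
  [ -d backup.0 ] && cp -al backup.0 backup.1
  # rsync replaces changed files, breaking only their hard links
  rsync -a --delete ~/important/ backup.0/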

Funny story about this: I had a really old HP "Lance Armstrong" branded laptop that I used for years. The above script was on it and was rsyncing to a separate machine, so it was fully backed up. Because of that, I was actually hoping for the laptop to die so I could get a new one (frugalness kicking in strong here).

My girlfriend at the time was using it and said "Oh, should I not eat or drink over your laptop?" and I responded: "No, please do! If you break it that means I can allow myself to order a new one."


Access Firefox bookmarks from command-line:

  f0() {
    echo 'select moz_bookmarks.title || '"'"' = '"'"' || url from moz_places, moz_bookmarks on moz_places.id = moz_bookmarks.fk where parent = 2;' | sqlite3 /home/user/.mozilla/firefox/twht79zd.default/places.sqlite
  }
  f1() {
    firefox `echo 'select url from moz_places, moz_bookmarks on moz_places.id = moz_bookmarks.fk where moz_bookmarks.title = '"'$1'"';' | sqlite3 /home/user/.mozilla/firefox/twht79zd.default/places.sqlite`
  }
  f$# $1
Execute PostScript programs alone with command-line arguments:

  exec gs -P -dBATCH -dNODISPLAY -dNOEPS -dNOPAUSE -dNOSAFER -q -- "$@"
Tell the IP address:

  curl -s 'http://icanhazip.com/' | cat -v


If you are polite, it works:

alias please='sudo zsh -c "$(fc -ln -1)"' # rerun the last command with sudo (because it failed )

Easier PATH management:

  # nicer path configuration and lookup
  function path {
      if [[ $# -eq 0 ]]; then
          echo -e ${PATH//:/\\n} | sort
      elif [[ "$1" == "--save" ]]; then
          path $2 && echo "\npath $2" >> $HOME/.profile
      else
          if [[ -d "$1" ]]; then
              if [[ -z "$PATH" ]]; then
                  export PATH=$1
              else
                  export PATH=$1:$PATH
              fi
          else
              echo "$1 does not exist :("
              return 1
          fi
      fi
  }


python3 -c "import pyotp ; print(pyotp.TOTP('<MY_TOTP_2FA_SECRET>'.lower()).now())"

This is a shell one-liner that takes the place of an RSA/2FA token/Authy app.
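A non-Python equivalent, assuming oathtool is installed (-b treats the secret as base32):

  oathtool --totp -b '<MY_TOTP_2FA_SECRET>'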



Wrote this simple one to create gzipped tar backups and send them over ssh. A lot faster than rsync if you just need to back up a whole folder on a schedule. It requires pigz, which is a parallel gzip implementation.

Variables:

  $HOSTNAME - the computer hostname
  $TOBACKUPDIR - the local directory you want backed up
  $N_CORES - the number of cores you want to use for compression
  $REMOTEUSER - the ssh user login on the remote server
  $REMOTEHOST - the remote server's IP
  $BACKUPDIR - where you want the file to be backed up to

  #!/bin/bash

  bfile=`date +%F`.$HOSTNAME.tar.gz

  # You can exclude local directories from tar here with --exclude="dir"
  # (a comment inside the pipeline would break the line continuation)
  /usr/bin/tar cvpf - $TOBACKUPDIR | pigz -p $N_CORES | \
      ssh $REMOTEUSER@$REMOTEHOST "cat - > /$BACKUPDIR/$bfile"


This is not mine. I can’t remember where I found it. Apologies for no attribution (Mac):

  #!/usr/bin/env bash

  # Change working directory to the top-most Finder window location
  function cdf() {
      cd "$(osascript -e 'tell app "Finder" to POSIX path of (insertion location as alias)')"
  }


Ooh, that's nice. I have the opposite direction:

  # Open current working directory in Finder
  alias of='open -a Finder ./'


Wouldn't "open ." work just the same?


I have the same, but mapped to ‘F’. Thank you to whoever submitted that to macosxhints.com many years ago.


A bash script + a little elisp magic that leverages Emacs for fuzzy finding on the command line instead of fzf:

https://www.masteringemacs.org/article/fuzzy-finding-emacs-i...


This is embarrassingly simple, but useful.

    manps()
    {
     if [ -z "$1" ]; then
      echo usage: $FUNCNAME topic
      echo This will open a PostScript formatted version of the man page for \'topic\'.
     else
      man -t $1 | open -f -a /Applications/Preview.app
     fi
    }
This is for MacOS. All it does is display a `man` entry as properly formatted Postscript. If you were around in the 80's when we had ring-bound paper manuals, you may remember how much easier they were to read as compared with fixed-pitch terminal rendering of the same page.

Sorry, no Linux version as I rarely have a graphical desktop open on Linux. It should be easy to rig something up with Ghostscript or similar.


Note: On my machine, the path to Preview.app was /System/Applications/Preview.app

Thanks for this tip!


Let's say you have a cronjob YOUR_CRONJOB.php that is slow and that uses a query to get the data it needs to iterate over. Change your SQL WHERE clause to include MOD(id, total_threads) = mod, where mod and total_threads come in as arguments. Toss this into Jenkins and you are set.

This will fork it and wait until it ends.

  #!/bin/bash

  TOTAL=`ps aux | grep YOUR_CRONJOB.php | grep -v grep | wc -l`
  echo "TOTAL PROCESSES ALREADY RUNNING: "$TOTAL

  MAX_THREADS=20
  TOTAL_MODS="$(($MAX_THREADS-1))"
  echo "TOTAL MODS: "$TOTAL_MODS

  if [ $TOTAL -eq 0 ]; then
      echo "RUNNING..."
      for i in $(seq 0 $TOTAL_MODS); do
          echo "Starting thread $i"
          timeout 10000 php YOUR_CRONJOB.php $i $MAX_THREADS &
          pids[${i}]=$!
      done
      echo "FINISHED FORKING"
  else
      echo "NOT RUNNING...."
  fi

  for pid in ${pids[*]}; do
      wait $pid
  done

  echo "OK FINISHED"


A few years ago, I needed a quick way to create Qemu VMs locally for testing some weird software configurations. So I made a script to pull Ubuntu cloud images and clone them into qcow2 disks, then create and register libvirt virtual machines. Part of the "magic" was creating a cloud-config ISO image that would be mounted to pre-seed the VM on first launch. It also pushed my ssh key into the VM so I wouldn't need to use passwords. Janky, but worked well for what I needed.

https://github.com/noahbailey/kvmgr/blob/master/kvmgr.sh
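The cloud-config seed ISO is the generic bit worth stealing; the usual NoCloud recipe looks something like this (not necessarily what kvmgr does; the volume label "cidata" is what cloud-init looks for, and the key is a placeholder):

  # user-data holds the cloud-config that pre-seeds the VM
  cat > user-data <<'EOF'
  #cloud-config
  ssh_authorized_keys:
    - ssh-ed25519 AAAA...your-key-here
  EOF
  echo 'instance-id: vm-01' > meta-data
  # build the seed ISO that gets attached to the VM on first boot
  genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data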


Here is my script gfm-preview [1], which I think is pretty cool since it implements an HTTP server in 50 lines of shell script (ab)use with netcat. What it does is start an HTTP server that serves a rendered preview of a Markdown document, using GitHub's API for rendering GitHub Flavoured Markdown. The page automatically updates when the document changes, using fswatch and HTTP long polling!

[1]: https://github.com/axelf4/nixos-config/blob/e90e897243e1d135...



Why do you deal with QR codes so much?


A shebang-friendly script for "interpreting" single C99, C11, and C++ files, including rcfile support: https://github.com/RhysU/c99sh/blob/master/c99sh

Use gnuplot to plot one or more files directly from the command line: https://github.com/RhysU/gplot/blob/master/gplot


Some zsh functions I can't live without

  mkcd() {
      mkdir -p "$1" && cd "$1"
  }

  mkcdtmp() {
      mkcd ~/tmp/$(date "+%y%m%d")
  }


Oh gosh, that made me so nostalgic - I made such a mono .sh file to boost my system setup a couple of years ago, as elementaryOS at the time did not support upgrades and I had to do a clean install whenever a new release was published: https://github.com/mrmnmly/linux-installation-script/blob/ma...

When I read it today I miss those oversimplified solutions to do stuff :'-)


My dotfiles: https://github.com/BurntSushi/dotfiles

Here are some selected scripts folks might find interesting.

Here's my backup script that I use to encrypt my data at rest before shipping it off to s3. Runs every night and is idempotent. I use s3 lifecycle rules to keep data around for 6 months after it's deleted. That way, if my script goofs, I can recover: https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae...

I have so many machines running Archlinux that I wrote my own little helper for installing Arch that configures the machine in the way I expect: https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae...

A tiny little script to recover the git commit message you spent 10 minutes writing, but "lost" because something caused the actual commit to fail (like a gpg error): https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae...

A script that produces a GitHub permalink from just a file path and some optional line numbers. Pass --clip to put it on your clipboard: https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae... --- I use it with this vimscript function to quickly generate permalinks from my editor: https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae...

A wrapper around 'gh' (previously: 'hub') that lets you run 'hub-rollup pr-number' and it will automatically rebase that PR into your current branch. This is useful for creating one big "rollup" branch of a bunch of PRs. It is idempotent. https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae...

Scale a video without having to memorize ffmpeg's crazy CLI syntax: https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae...

Under X11, copy something to your clipboard using the best tool available: https://github.com/BurntSushi/dotfiles/blob/2f58eedf3b7f7dae...


Here's my dotfiles repository [1], which is used to sync my little scripts and config files between my different systems (Mac/Linux). I first heard about it here [2].

[1] https://github.com/benwinding/dotfiles

[2] https://zachholman.com/2010/08/dotfiles-are-meant-to-be-fork...


Not a script as such, but I did put this together, building on what someone else did:

https://github.com/ianmiell/bash-template

It's a 'cut and paste' starter for shell scripts that tries to be as robust as possible while not going crazy with the scaffolding. Useful for "I want to quickly cut a script and put it into our source but don't want it to look totally hacky" situations.


This is great; thanks for sharing!


  autoload -U add-zsh-hook

  add-zsh-hook chpwd source_env

  source_env() {
        if [[ -f .env && -r .env  ]]; then
                source .env
        fi
  }


be careful what you clone and browse around in with that


Small bash functions that're useful for silly-ego-reasons.

    ## coding analysis
    function lines_coded {
        perl -ne'print unless /^\s*$/ || /^\s*(?:#|\/\*|\*)/' $* | wl
    }
    function lines_commented {
        perl -ne'print if /^\s*(?:#|\/\*|\*)/' $* | wl
    }
And wl is just a small alias (because I used it all the time):

    wl='wc -l'


My macOS bootstrap script to set up any new Mac from scratch. It includes niceties like moving the default screenshots from the Desktop to a more sane location, setting full disk encryption, and setting up privoxy & dnscrypt out of the box. https://github.com/james-see/fresh-mac


With this I can produce a beautifully typeset LaTeX PDF and publish a blog post, right from my text editor.

https://rtnf.prose.sh/prose-sublime-text-integration

https://rtnf.prose.sh/pandoc-sublime-text-integration


I use the following definition in my .profile to be able to replace foo with bar in all text files within a folder.

  replace() {
      grep -rl "$1" . | xargs gsed -i "s/$1/$2/g"
  }
Also, I run Spotify from the command line: https://github.com/hnarayanan/shpotify


> replace foo with bar in all text files within a folder.

Jesus that foot-gun would make Bjarne Stroustrup blush.


With great power comes great holes in feet.


borg-backup.sh, which runs my remote borg backups off a cronjob: https://github.com/Freaky/borg-backup.sh

zfsnapr, a ZFS recursive snapshot mounter - I run borg-backup.sh using this to make consistent backups: https://github.com/Freaky/zfsnapr

mkjail, an automatic minimal FreeBSD chroot environment builder: https://github.com/Freaky/mkjail

run-one, a clone of the Ubuntu scripts of the same name, which provides a slightly friendlier alternative to running commands with flock/lockf: https://github.com/Freaky/run-one

ioztat, a Python script that basically provides what zfs-iostat(8) would if it existed: https://github.com/jimsalterjrs/ioztat


Not sure whether to be proud or ashamed of this, but this script runs in a cron job to monitor my home server’s IPv4 and IPv6 addresses and update them everywhere (DNS, firewall, reverse proxies) if they change: https://gist.github.com/thedanbob/13f88ca8c21cb2ab7904ec5a6e...


If you host your domain in AWS Route 53, I wrote a Python CLI that makes dynamic DNS a one-liner: https://github.com/ericfitz/r53


https://bitbucket.org/mieszkowski/introspect/src/master/

I replaced Plone for my personal use with about 1000 lines of Python: an object-oriented database. The interface is awkward, but if you get past that, the goal was to produce pictures of trees with graphviz.


I started - but rarely update and kinda forgot to push to GitHub - some small scripts and knowledge snippets. One of them is a network/ssh-based distributed unseal mechanism (using Shamir's algorithm) to allow machines to boot and decrypt their OS partition.

https://github.com/maulware/maulstuff


I wrote a silly little script to play a random assortment of music.

    playMeSomeMusicMan() { rg --files -tmusic ~/Music | shuf | mpv --playlist=- }
I also got sick of waiting for Activity Monitor to boot to kill an errant process, so I wrote this one to fuzzy search and kill the selection.

    kp() { ps aux | fzy | awk '{ print $2 }' | xargs kill }


themicrosoftchainsawmassacre.cmd

  TASKKILL /IM outlook.exe
  TASKKILL /IM teams.exe
  TASKKILL /IM onedrive.exe

  timeout /t 2

  TASKKILL /F /IM outlook.exe
  TASKKILL /F /IM teams.exe
  TASKKILL /F /IM onedrive.exe
  TASKKILL /F /IM Microsoft.AAD.BrokerPlugin.exe
  timeout /t 2

  start outlook.exe
  start "" %LOCALAPPDATA%\Microsoft\Teams\Update.exe --processStart "Teams.exe"
  start "" "C:\Program Files\Microsoft OneDrive\OneDrive.exe"  /background


There is a short period between network start and VPN start where all the Microsoft thingies start and want me to log in again. As their SMSes sometimes take hours to arrive, it is easier to just kill and restart them, and let them reuse their existing login.

So I dropped the batch above on my desktop, and click it while the VPN is starting up. In the 4 seconds it takes to kill everything, the network works as it should.



I find this one handy:

  #!/bin/bash
  echo |\
  openssl s_client -connect ${1:?Usage: $0 HOSTNAME [PORT] [x509 OPTIONS]}:${2:-443} 2>&1 |\
  sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' |\
  openssl x509 ${3:--text} ${@:4} 2>/dev/null |\
  sed '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/d'


Here’s one I wrote recently to take a list of domains and enumerate subdomains using sublist3r. Mostly it wraps the latter tool and cleans up the output, but it also enriches the output with dig info.

https://github.com/ericfitz/dominfo

Dependencies: sublist3r (Python), pv (used for progress bars).


My junk drawer: https://github.com/peterwwillis/junkdrawer

The junk I haven't touched in 10 years: https://github.com/psypete/public-bin/src


I work on an M1 MacBook, and a lot of the time the arm architecture breaks dependencies. I have two really basic functions in my .zshrc (should also work for bash):

  # M1 compatibility switches
  arm() { arch -arm64 "${@:-$SHELL}"; }
  x86() { arch -x86_64 "${@:-$SHELL}"; }

This, with the addition of `$(uname -m)` in my $PROMPT, has saved me a lot of time by letting me switch between arm64 and x86_64 architectures.


My philosophy is that if I need a script something is wrong. Unfortunately a lot of things are wrong so I have a lot of scripts


A quick one to display the contents of my scripts:

  $ wat wat
  #!/usr/bin/env bash
  cat `which $1`


I have something similar in Windows:

  c:\Temp>tbt tbt
  @echo off

  type c:\tools\%1.bat

  c:\Temp>



https://codeberg.org/kas/qtime/src/branch/master/shell/qtime...

    $ qtime.bash
    It's nearly twenty-five past two.


Not mind-blowing, but I use it all the time to loop a task a certain number of times. I always forget the bash syntax for loops, so I just wrote a simple command where I can run `loop 10 ./this-other thing and stuff` to loop ten times.

  loop() {
      NUM=$1
      shift
      # note: {1..$NUM} would not expand in bash, hence seq
      for i in $(seq 1 "$NUM"); do
          "$@"
      done
  }


zsh has "repeat" for that, and with "setopt short_repeat" (added in the last release) you can do:

    repeat 10; mycmd


This is a bit of a meta-answer... many years ago I saw that I was not very good at writing error handling code for my scripts, so I (mostly) switched to this:

http://angg.twu.net/eepitch.html

that lets me execute my scripts line by line very easily.


Show kubernetes pods in "unusual states" or restarted 8 or more times:

    kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp -o wide -Lapp \
        | grep -vP "Completed|Terminating|ContainerCreating|Running\s+[01234567]\s+"


Sorted output from ripgrep without sacrificing parallel search (of course, you have to wait for the search to complete before seeing any output):

  function rgs {
      rg --line-number --with-filename --color always "$@" | sort --stable --field-separator=: --key=1,1
  }


I use this one almost every day.

  # Recursively search for keyword in all files of current directory

  grr() {
    grep -rHIn --exclude-dir=.git --exclude-dir=node_modules --exclude=*.min.* --exclude=*.map "$@" . 2>&1 | grep -v "No such file"
  }


Here's my scripts directory from my dotfiles repo: https://github.com/kleutzinger/dotfiles/tree/master/scripts


https://github.com/johnl-m/display-brightness-scripts

I wanted to control my display’s brightness using my keyboard on Linux. Turned out to be pretty easy with ddcutil!


I don't have that many useful scripts online, but one I'm using a bit is for quickly generating static photo albums: https://github.com/DusteDdk/chromogen


Zotero: I `git init`ed my zotero folder and wrote scripts to do daily commits and pushes to a remote host:

https://github.com/jcuenod/zotero-backup-scripts/


Not a script for a specific need, but I have a folder with Bash snippets from where I copy and mix parts of them when writing scripts.

https://github.com/j1elo/shell-snippets


I often type ls after cd, and even if I know the folder contents I don't mind seeing it again. To automatically ls after each cd command, add this to ~/.bash_profile:

  [ -z "$PS1" ] && return

  function cd {
      builtin cd "$@" && ls
  }


My scripts aren't particularly fancy or original, but you can look through everything at https://github.com/randombk/randombk-dotfiles.


System setup and scripts:

https://github.com/mrichtarsky/linux-shared

The repo name is a bit outdated, it works on macOS too. Lots of scripts are missing, will add them soon.


I've started adding some of my shorter scripts to a single repo - https://github.com/curtis86/my-scripts

Will definitely be adding more as I tidy them up! :)


I wrote a small script to stick my local weather (based on IP address) in my tmux status bar.

https://jezenthomas.com/showing-the-weather-in-tmux/


Script? Bash? TLS?

This one will generate any kind of TLS certificate: Root CA, intermediate, mail, web, client-side …

https://github.com/egberts/tls-ca-manage


TL;DR Quick way to create work-in-progress commits:

    #!/bin/bash
    # Perform a work-in-progress commit
    
    # Add everything
    git add $(git rev-parse --show-toplevel)
    
    # If the latest commit is already a WIP, amend it
    if [[ $(git show --pretty=format:%s -s HEAD) = "WIP" ]]; then
      git commit --amend --no-edit --no-verify
    else
      git commit -m "WIP" --no-verify
    fi
I wanted a way to quickly commit everything in a branch without thinking about it. This comes up a lot when I'm working on something and either need to pivot to something else or I want to pull down a PR and verify it without losing my work. I also wanted the option to quickly switch back to that branch, pick up where I left off, and be able to drop it again just as quickly without muddying up the commit history.

This script automatically stages everything and commits it as "WIP". If it detects that the most recent commit was a "WIP" then it amends the previous commit. No more weird stashing just to avoid losing my place.


search history like "hist tsc"

  hist() {
    history | grep $1
  }


Ctrl-r tsc will also work, and give you the ability to refine and run the command directly.


I am using a simple Python script to check terabytes of files:

https://github.com/web3cryptowallet/drive-py


I maintain git alias scripts, such as for shortcuts, metrics, and workflows.

https://github.com/gitalias/gitalias


I lost the script (bash function) when I changed jobs, but it was inspired by a co-worker:

> up

Does a `cd ..` on every keypress except ESC or space.

> up $n

Does a total of $n `cd ..` and (important!) set OLDPWD to the initial directory for proper `cd -`.
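From memory, the `up $n` part boiled down to building the relative path first and issuing a single cd (a rough reconstruction, not the original):

  up() {
      local rel= i
      for ((i = 0; i < ${1:-1}; i++)); do rel+=../; done
      cd "$rel"  # one cd call, so OLDPWD and `cd -` point back at the start
  }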


I've seen a few examples of this function over the years, here's a recent variant of mine (I'm accepting the use of eval here):

    # Provide 'up', so instead of e.g. 'cd ../../../' you simply type 'up 3'
    up() {
      case "${1}" in
          (*[!0-9]*)  : ;;
          ("")        cd || return ;;
          (1)         cd .. || return ;;
          (*)         cd "$(eval "printf -- '../'%.0s {1..$1}")" || return ;;
      esac
      pwd
    }


https://github.com/jawj/IKEv2-setup

Sets up an Ubuntu server as a strongSwan IKEv2 VPN.



Open GitLab MRs from the commandline

https://github.com/helpermethod/mr


GitLab team member here, thanks for sharing!

You can also set Git push options understood by the GitLab server to create merge requests [0] on the CLI.

Sid's dotfiles provide an example in [1]. The workflow is: 1) push, 2) create merge request, 3) set target (master/main), 4) merge when the pipeline succeeds.

  alias mwps='git push -u origin -o merge_request.create -o merge_request.target=main -o merge_request.merge_when_pipeline_succeeds' # mwps NAME_OF_BRANCH

There are more push options, such as setting the MR as draft, adding labels, milestones, assignees, etc. My personal favorite: remove the source branch when the MR is merged. That's a project setting too, but sometimes not set. Using the push options, you can force this behavior and avoid stale Git branches.
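For instance (option names per the push options docs [0]):

  git push -u origin my-branch \
    -o merge_request.create \
    -o merge_request.draft \
    -o merge_request.label="backend" \
    -o merge_request.remove_source_branch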

glab as a CLI tool provides similar functionality to create an MR. Its development has been moved to this project [2].

[0] https://docs.gitlab.com/ee/user/project/push_options.html#pu...

[1] https://gitlab.com/sytses/dotfiles/-/blob/master/git/aliases...

[2] https://gitlab.com/gitlab-org/cli


I keep some of my scripts and cheat sheets here:

https://avestura.dev/snippets


Your snippets... ok, a personal thing, surely useful for you, not that much for me. But: your blog is great!


Thanks!


Mine are mostly here: https://yossarian.net/snippets



This is a really simple script that I use to save a few keystrokes when I'm querying a package.json from the CLI. It depends on JQ. e.g., pkg dependencies, pkg version, etc.

  #!/usr/bin/env sh
  set -o errtrace; set -o errexit; set -o pipefail
  
  if [ -n "${1}" ]; then filter="${1}"; else filter=''; fi
  jq ."${filter}" package.json


This exact idea is what I'm trying to build https://trytoolkit.com for.

Never promoted it, but I've been quietly using it myself to build stuff that I need. Obviously browser-based stuff has limitations, but I found I still get a lot done.


> "fastily/autobots/ubuntu/scripts/bin/vomitSMART.sh"

What a descriptive name :D


A function for sorting contents of a directory by storage consumed (sorry for lack of comments throughout). I must admit, I’m particularly pleased with this one. (This is for MacOS):

#!/usr/bin/env bash

function szup() {

description='
#: Title: szup
#: Synopsis: sort all items within a directory according to size
#: Date: 2016-05-30
#: Version: 0.0.5
#: Options: -h | --help: print short usage info
#:        : -v | --version: print version number
'

funcname=$(echo "$description" | grep '^#: Title: ' | sed 's/#: Title: //g')
version=$(echo "$description" | grep '^#: Version: ' | sed 's/#: Version: //g')
updated="$(echo "$description" | grep '^#: Date: ' | sed 's/#: Date: //g')"

    function usage() {
        printf "\n%s\n" "$funcname : $version : $updated"
        printf "%s\n" ""
    }

    function sortdir() {
        Chars="$(printf "    %s" "inspecting " "$(pwd)" | wc -c)"
        divider=====================
        divider=$divider$divider$divider$divider
        format="    %-${Chars}.${Chars}s %35s\n"
        totalwidth="$(ls -1 | /usr/local/bin/gwc -L)"
        totalwidth=$(echo $totalwidth | grep -o [0-9]\\+)
        Chars=$(echo $Chars | grep -o [0-9]\\+)
        if [ "$totalwidth" -lt "$Chars" ]; then
            longestvar="$Chars"
        else
            longestvar="$totalwidth"
        fi
        shortervar=$(/Users/danyoung/bin/qc "$longestvar"*.8)
        shortervar=$(printf "%1.0f\n" "$shortervar")
        echo "$shortervar"
        printf "\n    %s\n" "inspecting $(pwd)"
        printf "    %$shortervar.${longestvar}s\n" "$divider"
        theOutput="$(du -hs "${theDir}"/* | gsort -hr)"
        Condensed="$(echo -n "$theOutput" | awk '{ print $1","$2 }')"
        unset arr
        declare -a arr
        arr=($(echo "$Condensed"))
        Count="$(echo "$(printf "%s\n" "${arr[@]}")" | wc -l)"
        Count=$((Count-1))
        for i in $(seq 0 $Count); do
            var1="${arr[$i]%%,*}"   # size column
            var2="${arr[$i]#*,}"    # path column
        printf "   %5s    %-16s\n" "$var1" "${var2//\/*\//./}"
        done
        echo
    }

    case "$1" in
        -h|--help)
            usage
            return 0
            ;;
        *)
            :
            ;;
    esac

     if [ -z "$1" ]; then
             oldDir="$(pwd)"
             cd "${1}"
             local theDir="$(pwd)"
             sortdir
             cd "$oldDir"
             return 0
     else
            :
             oldDir="$(pwd)"
             cd "${1}"
             local theDir="$(pwd)"
             sortdir
             cd "$oldDir"
             return 0
     fi


}


Realized I should include example output:

$ szup .

    42

    inspecting /Users/redacted/Documents/backups
    ====================================================
    1.8G    ./pfsense_backups
     90M    ./Alfred
     22M    ./atext
     21M    ./depot_2.0_backup
    6.0M    ./fever_backup_2009-06-30_18.11.27.sql
    5.7M    ./Contacts_bkup_2019-01-13.abbu
    2.1M    ./pocket_backups
    936K    ./simplenote_db_backup_2017-12-13
    692K    ./quiver_backups
    256K    ./pocket_exports
    184K    ./simplenote_db_backup_2017-12-13.zip
    172K    ./pins_exports
     24K    ./reeder_subscriptions_2022-04-26.opml
     24K    ./.DS_Store
     20K    ./rss_02-23-2016.opml
    8.0K    ./reeder_subscriptions_2022-04-26.opml.zip


I do not understand


Here's my script for playing videos in a folder. The command is , (comma)

When you download a video from certain sites, ctime is the time you created the file (so the time you downloaded it), but the video still comes with a timestamp which is saved as the mtime (I'm not sure why this happens; maybe there's an HTTP header for that?), and I presume it's the time when the video was first uploaded to the site?

Here's a favorite of mine: the -h flag of all my scripts simply shows the source code

    $ cat $(which ,)
    #!/bin/sh
    
    #export DRI_PRIME=1
    
    cmd=mpv
    
    param='-fs --msg-level=all=no,cplayer=info'
    filter='/Playing/!d; s/^Playing: //'
    
    order='%C@' # by default, order by ctime
    sort=-n     # (which is a numeric sort)
    reverse=-r  # ... show newer videos first
    depth=      # ... and do it recursively
    loop=       # ... without looping
    
    [[ -n $MYT_MUTE ]] && set -- -u "$@"
    [[ -n $MYT_1 ]] && set -- -1 "$@"
    [[ -n $MYT_REC ]] && set -- -r "$@"
    
    while getopts 1rcmsaRnolugh o; do
        case "$o" in
        1) depth='-maxdepth 1';;  # just the current directory
        r) depth=;;               # recursively
    
            # a video uploaded in 2009 but downloaded in 2015 will have
            # mtime in 2009 and ctime in 2015
        #
        # (note: moving the video to another directory actually bumps
        # the ctime)
    
        c) order='%C@'; sort=-n;;    # order by ctime (download time)
        m) order='%T@'; sort=-n;;    # order by mtime (upload time)
    
        s) order='%s'; sort=-n;;     # order by size
        a) order='alpha'; sort=;;    # order lexicographically
        R) order='random'; sort=-n;; # order at random
    
        n) reverse=-r;; # newer first
        o) reverse=;;   # older first
    
            l) loop=--loop=inf;; # infinite loop
    
        u) mute=--mute=yes;; # no sound
    
        g) filter=; param=;;    # debug
    
        h) ${PAGER-less} "$0"; exit;;
        esac
    done
    shift $((OPTIND-1))
    
    find -L "$@" $depth -mindepth 0 \
         -not -path '*/\.*' \
         -type f \
         -name '*.*' \
         -printf "$order %p\0" \
        | awk 'BEGIN { RS="\0"; srand() } {
            if ($1 == "random")
              sub ($1, int(rand()*100000));
            printf "%s\0", $0
          }' \
        | sort -z $sort $reverse \
        | sed -zr 's/^[^ ]+ //' \
        | xargs -0 $cmd $loop $mute $param 2>&1 \
        | sed "$filter"
    
    
    #     -exec $cmd {} + | sed "$sed"
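Typical invocations, going by the flags above:

    $ ,          # everything under the cwd, newest download first
    $ , -m -o    # upload order, oldest first
    $ , -1 -R    # shuffle just the current directory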


> the video still comes with a timestamp which is saved as the mtime

Do you mean when using youtube-dl/yt-dlp? They have an option for that: --no-mtime


But I actually like this behavior. The idea is that I can pass -c or -m to my script and get a meaningfully different playlist order (either the order I downloaded the videos, or the order they were originally uploaded).

And yes the videos were downloaded with youtube-dl

edit: ohhh it uses the Last-modified HTTP header to set the mtime!! cool! https://unix.stackexchange.com/questions/387132/youtube-dl-a...
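You can see the header for yourself with curl (placeholder URL):

    $ curl -sI https://example.com/video.mp4 | grep -i '^last-modified'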


Oh I see, that is nice. For my use cases I'd prefer they were the other way round - ctime as upload date, mtime as download date.

My downloads do get the upload date written into the video's metadata so I have that if I ever need it.


> My downloads do get the upload date written into the video's metadata so I have that if I ever need it.

How do you do it?


I believe it's the --add-metadata option
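For reference, usage plus a way to inspect what actually got embedded (URL and filename are placeholders, and the exact tag names vary by container):

    $ youtube-dl --add-metadata "$VIDEO_URL"
    $ ffprobe -v quiet -show_entries format_tags downloaded.mp4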


Also, here's an emacs lisp snippet for buffer switching that I never managed to publish, but I just can't use emacs without it. With it, super+left and super+right cycle buffers such that if I'm on a non-asterisk buffer, they cycle only through that kind of buffer (and scratch), while if I'm on an *asterisk* buffer (except scratch), they cycle only through asterisk buffers, skipping a few bad ones. And super+up jumps to a non-asterisk buffer, super+down to an asterisk buffer:

https://github.com/dlight/dotemacs/blob/master/bindings.el#L...

This is brittle-looking, but I didn't know how to test whether a buffer is special other than by checking its name (I'm sure there's a better way). Anyway, I kept looking to see if emacs already had this built in, but apparently not.

Maybe someone will think it is useful so:

    ;; bind keys from a flat list of key/command pairs
    (defun keys (a)
      (when a
        (global-set-key (read-kbd-macro (nth 0 a)) (nth 1 a))
        (keys (cddr a))))


    (keys '(
            "<s-left>" my-previous-buffer
            "<s-right>" my-next-buffer
            "<s-up>" next-regular-buffer
            "<s-down>" previous-star-buffer
    ))

    (defun switch-to-previous-buffer ()
      (interactive)
      (switch-to-buffer (other-buffer)))

    ;; non-nil when the current buffer's name starts with an asterisk
    (defun matching-buffer ()
      ;(string-match "^\\*.*\\*$" (buffer-name)))
      (string-match "^\\*.*$" (buffer-name)))

    (defun my-next-buffer ()
      (interactive)
      (if (matching-buffer)
          (next-star-buffer)
        (next-regular-buffer)))

    (defun my-previous-buffer ()
      (interactive)
      (if (matching-buffer)
          (previous-star-buffer)
        (previous-regular-buffer)))

    (defun next-regular-buffer ()
      (interactive)
      (next-buffer)
      (if (matching-buffer)
          (next-regular-buffer)))

    (defun next-regular-buffer-or-scratch ()
      (interactive)
      (next-buffer)
      (if (and (matching-buffer)
               (not (equal "*scratch*" (buffer-name))))
          (next-regular-buffer-or-scratch)))

    (defun previous-regular-buffer ()
      (interactive)
      (previous-buffer)
      (if (matching-buffer)
          (previous-regular-buffer)))

    (defun bad-buffer (a)
      (member a '("*Backtrace*"
                  "*Completions*"
                  "*Messages*"
                  "*Help*"
                  "*Egg:Select Action*")))

    (defun next-star-buffer ()
      (interactive)
      (next-buffer)
      (if (or (not (matching-buffer))
              (bad-buffer (buffer-name)))
          (next-star-buffer)))

    (defun previous-star-buffer ()
      (interactive)
      (previous-buffer)
      (if (or (not (matching-buffer))
              (bad-buffer (buffer-name)))
          (previous-star-buffer)))


I'd add how I load a terminal environment for different profiles.

In .zshrc I keep as little as possible so it opens fast, but I include the commands that extend it (srcBlah) or let me tweak what gets loaded (editBlah):

For my pet projects I'd use these two:

   alias editSeb="code ~/.sebrc"
   alias srcSeb="source ~/.sebrc"
As you can see, editSeb opens VSCode and srcSeb sources the file in the current terminal.
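The same pattern generalizes to any number of profiles; a minimal sketch (profile names made up):

   for p in seb work; do
     alias "edit_${p}"="code ~/.${p}rc"
     alias "src_${p}"="source ~/.${p}rc"
   done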

   .sebrc

   # Download the audio of the given YouTube URL as mp3
   function ytdl() {
     youtube-dl -x --audio-format mp3 --prefer-ffmpeg "$1"
   }

   # Download audio and re-encode it at 320 kbps
   # (youtube-dl writes to disk by default; -o - streams to stdout instead)
   function ytmp3() {
     youtube-dl -f bestaudio -o - "$1" | ffmpeg -i pipe:0 -b:a 320K -vn "$2".mp3
   }

   # Shows the sizes of the immediate children of the directory at $1
   function dus() {
     du -h -d 1 "$1"
   }

   # Used to opt-out of pre-commit autofixes
   export NO_COMMIT_CHECKS=true

   # Remove macOS metadata files from a volume (or a subdirectory of it)
   function cleanUSB() {
     setopt local_options null_glob  # zsh: don't abort when ._* matches nothing
     volumeName=$1
     subdir=$2
     if [[ "$volumeName" == "" ]]; then
       echo "No volume name given. Nothing to do."
       return 1
     fi
     target="/Volumes/$volumeName${subdir:+/$subdir}"
     rm -rfv "$target"/.DS_Store "$target"/.Spotlight-V100 \
             "$target"/.fseventsd "$target"/.Trashes "$target"/._*
     echo "$target is clean"
   }
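   # usage (volume name made up):
   #   cleanUSB KINGSTON          -> clean the whole volume
   #   cleanUSB KINGSTON photos   -> clean only that subdirectory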

   function blogBackup() {
     rsync -avzh --progress -e ssh root@seb-nyc1-01:/root/blog/db /Users/seb/Documents/blog
   }

   # Toggle hidden files in Finder (killall restarts it)
   alias showHiddenFiles='defaults write com.apple.finder AppleShowAllFiles YES; killall Finder'
   alias hideHiddenFiles='defaults write com.apple.finder AppleShowAllFiles NO; killall Finder'

   # Show ports currently listening
   function openPorts() {
     netstat -p tcp -van | grep '^Proto\|LISTEN'
   }

   # Create a RAM disk on macOS
   function ramDisk() {
     # https://eshop.macsales.com/blog/46348-how-to-create-and-use-a-ram-disk-with-your-mac-warnings-included/
     # raw disk blocks are 512 bytes: 2048 blocks = 1 MB, 2097152 blocks = 1 GB
     quantityOfBlocks=2097152
     diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://${quantityOfBlocks})
   }
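   # when done, detach it with: diskutil eject /Volumes/RAMDisk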

   # Tauri's watcher for source file changes will not stop automatically.
   function killRollup() {
     # the [r] keeps grep from matching its own process
     ps aux | grep node | grep '[r]ollup' | awk '{print $2}' | xargs kill -9
   }

   # X pet project required env var
   export X_TOKEN=blahValue

   function dockerCleanAll() {
       docker stop $(docker ps -aq)
       docker rm $(docker ps -aq)
       docker rmi -f $(docker images -q)
   }

   function dockerCleanVolumes() {
       docker volume rm $(docker volume ls -qf dangling=true)
   }

   alias ll='ls -lah'
   alias gg='git status -s'

   # Creates a timestamped backup of the current branch:
   alias gbk='git checkout -b "backup-$(git symbolic-ref -q HEAD --short)-$(date +%Y-%m-%d-%H.%M.%S)" && git checkout -'


list:

    #!/usr/bin/env sh
    
    find | grep -- "$1"
If you search for a Python or Java package/class name, it still works: grep treats the dots as "any character", so they match the slashes in the path.
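For example, with a made-up Java tree:

    $ list com.example.app.Main
    ./src/main/java/com/example/app/Main.java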

oneline:

    #!/usr/bin/env sh
    tr '\n' ' '; echo
Joins whatever comes in on standard input into a single line.
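For example:

    $ printf 'a\nb\nc\n' | oneline
    a b c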

L, my journaling tool, for whenever I need to get something out of my head or be sure to find it later; since it just appends to a plain text file, I can fix entries after the fact by editing the file directly:

    #!/bin/sh
    set -e

    CONFIG_FILE="${HOME}/.config/Ljournalrc";

    if [ ! -f "${CONFIG_FILE}" ]; then
        mkdir -p "$(dirname "$CONFIG_FILE")"
        printf 'JOURNAL_FILE="${HOME}/Documents/journal.txt"\n' >> "${CONFIG_FILE}"
        printf 'VIEWER=less\n'                                  >> "${CONFIG_FILE}"
        printf 'LESS='"'"'-~ -e +G'"'"'\n'                      >> "${CONFIG_FILE}"
    fi

    L=$(basename "$0")

    usage() {
        cat <<HERE
    Usage:
        $L           - show the content of the journal
        $L message   - add message to the journal, with a date
        $L + message - add message to the previous entry in the journal
        $L -         - add stdin to the journal
        $L + -       - add stdin to the journal, without a date
        $L e         - edit the journal manually
        $L h, $L -h  - show this help

    Config file is in ${CONFIG_FILE}
    HERE
    }

    if [ "$1" = "h" ] || [ "$1" = "-h" ] || [ "$1" = "--help" ] || [ "$1" = "-help" ]; then
        usage
        exit
    fi

    . "${CONFIG_FILE}"


    if [ "$1" = "e" ]; then
        if [ -z "$EDITOR" ]; then
            if   [ -f "$(which nano)" ];  then
                EDITOR=nano
            elif [ -f "$(which vim)" ];   then
                EDITOR=vim
            elif [ -f "$(which emacs)" ]; then
                EDITOR=emacs
            fi
        fi
        exec "$EDITOR" "${JOURNAL_FILE}"
    fi


    # Don't add a new date line
    if [ "$1" = "+" ]; then
        append="+"
        shift
    fi

    if [ "$1" = "-" ]; then
            stdin="-"
            shift
    fi


    if [ -z "$stdin" ]; then
        msg="$@"
    else
        msg=""
        while IFS= read -r line; do
            msg=$(printf "%s\n%s" "$msg" "$line")
        done
    fi

    msg="$(printf %s "$msg" | sed 's|^|\t|g')"


    if [ -z "$msg" ] && [ -z "$append" ]; then
        if [ ! -f ${JOURNAL_FILE} ]; then
            exec usage
        else
            exec "${VIEWER}" -- "${JOURNAL_FILE}"
        fi
    fi

    if [ -z "$append" ]; then
        printf "\n%s:\n\n" "$(date -R)" >> ${JOURNAL_FILE}
    fi

    printf "%s\n" "$msg" >> ${JOURNAL_FILE}



