You can customize LS_COLORS if you want, but I've never been mad at the defaults. Also, I prefer C-style escapes, which let you copy and paste the filename and use it in other commands:
alias ls='ls --color=auto -F -b -T 0 -A'
-F classify entries with (*/=>@|)
-b use C-style escapes instead of quoting
-T 0 do not use tabs for alignment
-A show all dotfiles except . and ..
Plus, I find colors to be insanely useful when my old eyes try to read 'ip' output, so I really do prefer:
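The alias that follows was elided here, but it is presumably the colorized `ip` invocation mentioned elsewhere in the thread; a minimal sketch:

```shell
# Colorize `ip` output (same idea as `ls --color=auto`).
alias ip='ip --color=auto'
```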
I really wish there were a unified environment variable to enable colors for everything, instead of a separate one per tool or having to clutter your bashrc with a trillion aliases.
The default for o+w directories, a bright green background with grey text, is kind of annoying to me. I suppose you could argue that o+w directories should be annoying, so that you are aware of their existence and the risk they pose.
What shade of blue do you have configured? The default is this impossible to read shade of dark blue, so the first thing I do is configure a different, readable shade of not-so-dark blue.
My main aliases are: a more human-friendly ls and an alias of rm which attempts to detect mistakes:
alias l='command ls -Av1h --color=always --time-style=long-iso --group-directories-first'
# -A: show all, including dotfiles, except . and ..
# -v: natural sort of numbers
# -1 (one): list (use -l for long version)
# -h: human-readable sizes
alias rm='command rm -Iv'
# -I: prompt once before removing more than three files or recursively
# -v: verbose
I also redefine $HISTFILE to ensure that every shell gets its own temporary history:
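One way to do that (a sketch, assuming bash and a writable /tmp; the path template is illustrative):

```shell
# Give each interactive shell its own throwaway history file, so
# parallel sessions don't interleave into a shared ~/.bash_history.
HISTFILE=$(mktemp /tmp/bash_history.XXXXXX)
export HISTFILE
```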
https://chezmoi.io is a dotfile manager that runs on multiple OSes (including Windows) while handling differences from machine to machine, allows you to store your secrets in your password manager (so you don't have to store secrets in your dotfile repo), and it even supports the NO_COLOR environment variable. Check it out! Disclaimer: I'm the author.
This seems to almost do what I need, but not quite; I wonder if I'm mistaken or if someone knows that one of the other dotfile tools would suit me.
The major gap: Some files belong to external repositories (not all git), and some of these have contents that cross dot-directories. e.g. project repos A and B both contain material that should end up in ~/.local/share and material that should end up in ~/.config.
I’ve been using chezmoi for over a year now and have had to set up two work computers and a new personal one in that time. It’s made that process significantly shorter and less painful.
You can get similar behavior with bash's built-in history by setting a few options:
# -------- History --------
#increase bash history
HISTFILESIZE=10000000
HISTTIMEFORMAT="%FT%R "
HISTSIZE=10000000
# append to .bash_history after each command
export PROMPT_COMMAND="history -a;$PROMPT_COMMAND"
# append to .bash_history instead of overwriting
shopt -s histappend
# don't allow repeated lines
export HISTCONTROL=ignoredups:erasedups
Curiously, I'm observing in bash 5.0-6ubuntu1.2 that tidily appending the above to the default ~/.bashrc does not work: The variables appear overwritten as intended when inspected, but upon shell exit, the first-set values (from first few lines of .bashrc e.g. HISTFILESIZE=2000) control the update to .bash_history.
Anyone else see this in the bash distro? Seems to be a regression.
The fix of course is to simply delete or hack a '#' into the start of the unwanted lines from the default .bashrc.
Except if you ever accidentally open a shell without those settings set (ex: "bash --norc" to get a clean profile) it will trim your decades-long history.
Your HISTFILESIZE is also probably not enough: my .full_history goes back to 2012 and is 2.5x your limit.
The file would probably fit in the filesystem cache without much trouble, so I guess it would only be a problem the first time.
I had a friend in university who used a refurbished HP workstation with a shitload of RAM as a desktop computer, and on login he would `find | xargs cat > /dev/null` everything in his home directory to warm up the filesystem cache. The kernel would then drop stuff not actually needed/accessed nicely over time.
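The cache-warming idea can be sketched as a small function (the name is mine; reading every file once pulls its pages into the cache):

```shell
# Read every regular file under a directory to pull it into the page cache.
# The data goes to /dev/null; errors from unreadable files are discarded.
warm_cache() {
    find "$1" -type f -print0 | xargs -0 cat > /dev/null 2>&1
}

# Usage (e.g. at login): warm_cache "$HOME"
```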
I like saving all my history, but I also find that it easily becomes a mess doing it in one file. So I split it up per session, and then I merge it all into one file.
# Eternal bash history.
# ---------------------
HISTCONTROL=
HISTFILESIZE=
HISTIGNORE=
HISTSIZE=
HISTTIMEFORMAT="%Y-%m-%dT%T%t"
# read global history
HISTFILE=~/.bash_eternal_history
history -r
# If there's no session history directory, create one
if [ ! -d ~/.bash_history ]; then
    [ -f ~/.bash_history ] && mv ~/.bash_history ~/.bash_history.0
    mkdir ~/.bash_history
fi
# change history file to one for the session
HISTFILE=~/.bash_history/$$
# If it already exists, take a backup
if [ -f "$HISTFILE" ]; then
    mv "$HISTFILE" "$HISTFILE"."$(stat "$HISTFILE" --format='%W')"
fi
# save the current session id + process id
{ export SESSION_ID="$(</proc/$$/sessionid)"; } 2> /dev/null
printf "#$(date '+%s')"'\n# starting session:'$SESSION_ID' process:'$$' in:'"$PWD"'\n' > "$HISTFILE" &&
history -r
# Use trap to save history, not just on logout but also EXIT
trap_exit() {
    # Append session history to global history
    ## Append local session history to its file
    cp -f "$HISTFILE" /tmp/history.$$.0
    history -a
    cp -f "$HISTFILE" /tmp/history.$$.1
    ## Append local history file to global history
    printf "#$(date '+%s')"'\n# ending session:'$SESSION_ID' process:'$$' in:'"$PWD"'\n' |
        cat "$HISTFILE" - >> ~/.bash_eternal_history
}
trap trap_exit EXIT
Great Atuin, that's amazing. Thank you, this is indeed a game changer for having a permanent history. I use ctrl+r for everything, including commands I vaguely remember but know at least some letters of.
It’s indispensable in some workflows. Scouring history can help with otherwise difficult debugging, or with looking up a variant of a complicated invocation that’s not the one suggested by autocomplete.
The best description I found after a not-so-short Google search was buried in https://www.redhat.com/sysadmin/fzf-linux-fuzzy-finder, but the workflow is: I hit Ctrl-R (in bash, but there are bindings for other shells), I type out part of the command I know is in my history, and fzf greps history for commands that match via its fuzzy search algorithm. So if I remember running something that involved a file in a log dir, I type /log/, and it interactively lets me search. If I remember I was using awk with the log file, I can add "awk" to the existing search. Then I use the up arrow to select it and hit enter, at which point it's in my command buffer, waiting for me to either hit enter again or edit the command first.
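Getting that Ctrl-R binding usually means sourcing fzf's shell integration from your bashrc; the exact path depends on how fzf was installed (the path below is what fzf's own install script writes, but distro packages put it elsewhere):

```shell
# Load fzf key bindings (Ctrl-R fuzzy history search) if present.
# Path varies by install method; ~/.fzf.bash is fzf's install-script default.
if [ -f ~/.fzf.bash ]; then
    source ~/.fzf.bash
fi
```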
Might sound complex written out like that, but I promise it's worth the investment to set it up and read the manual.
This is what mcfly does, but it does it all automagically and stores everything in a SQLite database. It's also easy to sync that DB between two computers connected by tailscale/ssh.
The mention of .plan and .project brings back memories. You used to be able to do things like finger dhosek@ymir.claremont.edu and see if I was logged on and what I had to share with the world (which was a long series of quotes that ran several screens). I remember learning that the AMS had shorter hours on Fridays thanks to Barbara Beeton’s .plan file.
> Then I have a .xsessionrc which needs to exist because I now log in through xdm, and the window manager (fluxbox) ends up inheriting that environment. Yep, it doesn't get a .bashrc type thing applied to it. (Not gonna lie - this took a while to figure out.)
I've grown fond of just logging in via the tty and invoking startx... it's always there, even when your graphics drivers fail. I tried messing with XDMs a long time ago when transitioning to i3wm, but it was just extra configuration, noise and inconsistency to sort out, without really gaining anything useful for what is a single-person machine. Simple is good.
> it's always there, even when your graphics drivers fail.
This sentence is jaw dropping to me. Maybe I'm not the target audience for this sort of ultra minimalist, ultra customised environment, but the concept of requiring a workaround because my graphics drivers aren't functional is something I cannot understand. Why would you willingly work in that environment? In the last decade of being a professional programmer, my graphics drivers have _never_ failed me to that level. They're not perfect, but they _never_ fail to initialise.
> the concept of requiring a workaround because my graphics drivers aren't functional is something I cannot understand
It's not a workaround, it's just a perk. When I moved to "ultra customized", as you put it, you don't want to have to configure masses of stuff; i3wm is good for this, it's pretty minimal to configure. An XDM was just more work to configure, with little utility from my perspective. Also, I work on the CLI a lot, so it doesn't feel unnatural to be welcomed by a tty.
And yeah, the final perk: Linux wasn't always so compatible, and even in more recent history nVidia graphics drivers can still be problematic. About 10 years ago, I think, I used Linux on a 2008 MBP (and FreeBSD, actually). Those things had craptastic nVidia GPUs, you know, the batch that came with the first-gen lead-free solder fiasco, when nVidia lied to Apple about their thermal specs before they had a falling out, and everyone got their GPUs underclocked in firmware to push failures out of warranty. I digress. Anyway, the Linux support for any nVidia stuff back then was terrible, and very flimsy on dual-GPU laptops. For Apple it was trickier still, because you needed an Apple-specific gmux switcher before even attempting to boot with the nVidia driver, and if that actually worked, then when you started X you would lose visual access to your tty, because it had no modesetting support... which ironically made using a tty instead of an XDM a disadvantage once the graphics driver failed in X. I sometimes tried to bring it back to life essentially blindfolded on the tty if X died, with mixed success.
Point is, on any laptop with nVidia back then, you usually had to do some level of mucking around on the tty anyway before being able to depend on anything graphical like an XDM... and pray it didn't break on the next driver update. Frankly, right now we are living in a Linux utopia; a lot of hardware "just works".
I remember those times, had the unpleasant experience of having nvidia graphics on my laptops in the past.
It’s been almost ten years now, and I just stay away from nVidia hardware. If a laptop has an nVidia GPU I’ll look for something else (I don’t care about nouveau; if I’m spending money I might as well buy something that just works).
Same here. I think support has become a little better but the attitude is pretty much the same, I just stick to Intel and AMD now. Thankfully there is also way more officially supported hardware these days too.
1) This sort of nonsense is why people don't use these environments
and
2) Just because something worked like that 15 years ago, doesn't mean it's a good reason to still do that now.
I cannot fathom how _anyone_ would tolerate that in a world where Windows and MacOS (and I guess to a certain extent various linux flavors) exist and "just work", for anything other than tinkering.
It's subjective, tty "just works" for me, not you. Just because mac and windows "just works" for you doesn't mean that's true for everyone else's purposes, it certainly isn't for me, they create immense friction for me.
More is not universally better. There is no functional difference for me between starting X via an XDM and via a tty, and as a bonus there is less complexity and _less_ for me to set up... so why would I bother installing one to gain nothing? That doesn't mean I would force everyone to learn how to use a CLI and assume this method is appropriate for everyone.
> in a world where Windows and MacOS (and I guess to a certain extent various linux flavors) exist and "just work",
Lol, I've seen how reliably Windows "works"; I'll stick with Linux, thanks. At least a FOSS OS always lets me find a fix or workaround. Granted, I mostly see it these days when relatives ask for help, so the sample is biased, but that seems fair considering the conversation here.
That's nice, I've had Nvidia based laptops and Linux and any time the kernel was updated there was a good chance I would be dropped to text mode and have to fix things from lynx, or links2.
Same, but I would say I only had these problems because I'm using Arch and living on the bleeding edge. On the boxes where I use a stabler distro (Ubuntu, PopOS, Mint) I never had this problem.
It is by choice though, if you want everything updated to their latest releases you need to live with some breaking changes here and there
Some people seem to prefer systems that break often for some reason.
Had a roommate that ran Arch (and wouldn't shut up about it, of course). He would update the system very often, and very often it wouldn't boot afterwards for some reason. One time he had to update a system that hadn't been updated in a while, and he opted to just reinstall it.
It doesn’t really make sense to me, but I guess some people like it that way?
Please don't lump all Linux users running something that isn't GNOME, KDE, or otherwise mainstream into "likes stuff that breaks". Both the author and I run Debian, the least breaky distro of all time, even less so than Ubuntu.
Linux is literally about user choice, everything but the kernel is up for grabs, and even the kernel if that's what you want.
One also shouldn't form an impression of arch (or any other distro) from the experience of one user. I've had this arch install for over 10 years and don't remember having any issues like that; while hearing plenty of people having issues doing major version updates on more "stable" distros.
Arch specifically attracts people who want to learn how things are put together and mess with their system, which are also the kind of people most likely to break things.
Very well; since you've linked a site claiming that the Linux kernel (an open source kernel used in everything from embedded routers to supercomputers, from Android to Debian) isn't "about" choice, we can rephrase.
The stock PS1 bugged me a bit, so I mangled it down to this
That's not too far from this one, which I use and have seen quite a few others use too. I really hate a lot of the newer defaults that hide the full path for some reason, and I managed to convince a coworker to use it too after he accidentally edited a file of the same name, in a directory of the same name, at a different path:
'\u@\h:\w$ '
Also I should mention that in this era of constant telemetry^Wspyware and tracking, everything you say can and will be used to identify or correlate you, and that does include using non-default configuration. Of course, how much this matters depends on the context in which you say it.
The full directory can make your prompt very long and ugly in some cases. I'm using zsh’s RPROMPT instead: the main prompt on the left displays the basename, and the right-side prompt displays the full path (if it fits on the line).
I have a similar prompt. Having commands alone on a line makes them easier to grab with OP's beloved triple click. Multi-line commands indent right, so I'm more likely to use them. And I added the color after having trouble picking out the boundary between commands with pages of output.
I've moved to two line prompts everywhere as well, the second line being basically just " > "
I've also added inter-prompt spacing. At first that was just an extra \n before the prompt, but I decided I wanted space between the prompt and the start of program output, as well as between the end of output and the next prompt.
I can't remember where I stole the idea from to give credit, and it feels a little wrong but works really well:
For bash, in PROMPT_COMMAND, set a DEBUG trap that writes a newline and then removes itself with "trap - DEBUG".
If it's not obvious: the DEBUG trap fires right after you hit return, but before the command actually runs (also making it a nice place to update xterm/tmux titles). So that gets you your post-prompt newline, but because the trap is removed, it doesn't keep firing for every command in your for loop or whatever.
But PROMPT_COMMAND adds the trap again for the next time.
I also have some weird logic in PROMPT_COMMAND that only adds the pre-prompt newline if the cursor isn't in column 0, because otherwise I was getting unnecessary extra spacing.
So yes, if I hit enter on a plain prompt I get two blank lines before the next one.. so?
But it makes differentiating output so much easier. Combine with colour prompts with timestamps ( and in history too ) and I get at least rough timing on how long tasks take, which is often useful.
It's actually on monochrome terminals (for whatever dumb reason that might be happening) that it helps the most, to the point where now when I see a non customised terminal my reaction is "urhg.. what's all this mess, take it away" <shooing wave of hands>
That and pspg for database/csv viewing make my daily life measurably better.
although it gets progressively richer with heightened context, e.g.:
username if not my account
hostname if ssh'ing
basename(pwd) (not full path) if not ~
HEAD value (ref or sha) if in git or hg
literal rebase & al for stateful git operation in progress
any of ! ≠ ± (untracked, unstaged, and staged changes)
any of literal nix venv when entering specific contexts
at some point I had it as simple as > but some situations had me wish I knew context right away so I progressively added these each time I had a "fuck, I made a mistake and I would not have if I had that bit of context"
I thought I'd go all the way to
;
plan9 rc-style which made sense because "select whole line then paste and run" just works. plus it looks like a wink.
I do have colours in a few select areas, but very limited, so that when there are colours they are very meaningful. Notably, the prompt segments are colour-coded by meaning, since they're dynamic. My vim theme looks like e-ink, merely getting fancy with a shade of gray for comments (which at some point could be inverted, so that comments get the "focus" and code is toned down; very literate programming).
oh and please: no right prompt, ever (nor a $COLUMNS-wide left prompt), as it gets wild when resizing; and no freaking emoji in my output (e.g. I still have that HOMEBREW_NO_EMOJI env var set to an expletive of sorts even though I don't use Homebrew anymore)
Hah, my ~minimalist variant must look outright flashy compared to yours. Thus:
PS1='%U%m%u:%B%30<..<%~%b%% '
I like to know which host I'm on directly from the shell prompt, as well as the trail end of CWD. You could get close to the same with just '%30d' but then I'd lose the visual cues. And it must have been that way for at least 15 years.
I use ... for user and []_ for root, I would rather not fill up terminal space with information I generally know. I really hate when the prompt causes things to wrap and I do not mind putting pwd or the like to use on those occasions I do not know.
> Quick, which of .bashrc, .bash_profile, .profile et al get run for any given type of login you do to a box?
I appear to have just given up and symlinked them together. Anyone have a better idea?
Edit: Actually, my profile apparently has a conditional to detect what shell it's loaded by (which I did remember), and for at least bash a conditional to detect interactive mode (which I did not). So I'm still dealing with this, just not at the file level.
$ENV is for interactive shells, .profile is for login shells. Normal shells source these files according to which of the two flags are set.
To make bash behave in a compatible manner, you need to:
- Delete .bash_profile, so .profile gets sourced instead (works around bash ignoring .profile when .bash_profile exists)
- At the bottom of .profile, call $ENV (works around login bash ignoring bashrc)
- Make .bashrc call or be equal to $ENV
- At the head of $ENV, do an early exit for non-interactive shells (works around bash sourcing the bashrc when invoked non-interactively via SSH or socket, also works around the previous workaround)
In terms of startup logic, bash is the worst shell I've seen so far.
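Putting the recipe above together, a sketch of how the three files could be wired (file names follow the POSIX convention; ~/.shrc as the $ENV target is my assumption, adjust to taste):

```shell
# ~/.profile  (login shells; delete ~/.bash_profile so bash reads this):
#     export ENV="$HOME/.shrc"
#     # login bash skips $ENV on its own, so source it explicitly:
#     [ -n "$BASH_VERSION" ] && [ -f "$ENV" ] && . "$ENV"
#
# ~/.bashrc  (interactive non-login bash; just defer to $ENV):
#     [ -f "$ENV" ] && . "$ENV"
#
# ~/.shrc  ($ENV; shared interactive setup):
#     case $- in
#         *i*) ;;          # interactive: continue
#         *)   return ;;   # non-interactive (ssh command, scp): stop here
#     esac
#     # ... aliases, prompt, etc.
```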
I still haven't figured out where the canonical place is to put modifications to $PATH. Now I just source ~/.profile from .zshrc and .xsessionrc and ignore that some directories sometimes appear multiple times in $PATH.
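One way to tolerate sourcing ~/.profile from several places is to make the PATH additions idempotent; a sketch (function name and example directory are mine):

```shell
# Append a directory to PATH only if it's not already present, so
# sourcing the same profile repeatedly doesn't duplicate entries.
path_append() {
    case ":$PATH:" in
        *":$1:"*) ;;                # already there: do nothing
        *)        PATH="$PATH:$1" ;;
    esac
}

path_append "$HOME/.local/bin"      # example use
export PATH
```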
I think that typically sysadmin-type people seem to have less customization, possibly because they ssh into so many boxes and dotfiles don't usually carry over.
It's a really simple, bare-bones kind of setup. Many spend hours, weeks, months, if not years tweaking their setup. Everything from the shell to the editor gets "riced up"[1] just the way they like it. It's usually beautiful to look at, but if, like me, you work on a large number of servers each day, it's kinda pointless, unless you insist on cloning your setup onto each of the thousands of servers in your company. Personally I configure almost nothing; everything is standard. I do load a few Vim plugins for syntax checking, but that's about it. I don't even care if something is zsh, ksh or bash; I just care that it's not sh, csh or tcsh.
The most customized thing I have is my ssh config. I'm sure I'm missing out on a ton of nice features, and I sometimes regret not learning tools to a larger extent, especially when others show neat little shortcuts or speedy functions; on the other hand, I don't want the hassle.
I feel like there’s a sweet spot where I spend most of my time in a terminal but on a relatively small number of different servers, and that’s why I want things customized just so. It’s worth it because the number of machines is small.
I strongly suspect this is an example of a desirable disfluency. If you make things harder to visually distinguish, your brain takes in the information better because the parts that need to work harder influence attention.
I also disable most highlighting. I find it terribly distracting when coding or doing other menial stuff; my exceptions are grepping and diffing, but those only use 1-2 colors.
I've been gradually dialling back colours in the terminal over the years. I find everything has just become too colourful for no apparent reason; when everything is highlighted, it feels like nothing is highlighted, and it's more distracting than anything... syntax highlighting is the only holdout I've kept so far.
E.g. Docker defaults to dark blue on black for some status states, and it's barely visible on my setup in the daytime. At night it's just incomprehensible; I need to turn off Night Light or ramp brightness up over 150%.
No, I can't tune my environment, it's always some client's machine.
Wow, it has 40 stars! I'm kind of shocked, I think it was way under 20 last time I looked. :) Feeling pretty good about that, if I've helped just that many people, then I feel like writing that was more than worth it!
I learned a lot from Kali Linux's zshrc [0]. But then I'm certainly no expert in customizing my shell environment, as I usually mostly stick with the distro's defaults. Still, now mine is a bit nicer. It also includes the `ip --color=auto` thing, which is mentioned in other comments and which I hadn't known about either.
Anyone else here managing their dotfiles with an Ansible playbook? In my environments, files like .bashrc only contain a few statements that source other scripts, into which I inject / template my actual configuration.
I previously used Puppet, but you usually need root access on the machine to install Ansible or Puppet itself, as they have a large number of dependencies and often add extra system services or user accounts. That is a very heavy tool to install if you only need to update a few files in your home directory and install a couple of packages with sudo.
I do this. Bashrc, SSH key generation and distribution, a few development niceties and networking tweaks, all kept in sync with a Makefile and some ansible.
In hindsight it seems so ridiculous that we appropriated the .dot suffix, but no one made us - we did that intentionally. Later we tried switching to .gv but everyone forgets sometimes.
I'm on so many different machines (mostly some embedded stuff/RasPis that need linux) that I ignore all these bad defaults.
The only thing I find unbearable is vimdiff colors, which have the same or very similar foreground and background, so it's impossible to read. (I don't remember exactly, but I believe it's just on some distros.)
Colours interfere with scannability. They force you to deal with text one colour region at a time, making it harder to take in the whole thing at once.
Scanning is a skill you acquire with age. As you get older, and you more and more replace word-by-word reading with full-page scanning, colour becomes less attractive.
Less attractive doesn't mean worthless. Colour just needs to carry its weight, by providing enough relevant information to compensate for breaking up scanning flow. So used sparingly, colours can be good.
It's just that no one seems to use colours sparingly. You see things like giving ".gz" (and other compressed) files a different colour. I have no need for that, .gz is just one among a thousand file extensions that I know, I can read the file extension myself thank you very much. Even if I had a great need for recognising compressed files, I couldn't rely on colours for that: Colouring is too inconsistent between applications, whereas scanning text and spotting the .gz extension carries over to almost anything.
I was looking at how my younger colleagues were organising their screens, with IDE tool windows taking up most of it, and leaving no more than about a quarter of the screen to the source code editor, and comparing it to my own workspace, where the source code takes up most of the screen. And it struck me that 25 years ago my workspace looked more like theirs today than mine today. The scanning bit is the explanation I came up with.
Wrt. colour, there's the concept of alarm colours in cognitive psychology. That, at least, should be DDG'able. It's the observation that certain colours, mainly red but also yellow, pull at your attention. This is a hardwired part of human cognition. If there are alarm colours present, then it becomes harder to read the rest of the text, because the alarm colour keeps trying to pull you in.
I am not the author, but I imagine it could have started out as just a preference and turned into great hatred as colours kept being introduced in more and more places. I don't mind colour, but I really dislike incorrect colouring (though not enough to put any effort into doing anything about it). Or maybe she really likes colour, but likes to use it very sparingly so it highlights things more effectively. I think there can be plenty of reasons, and I would be unsurprised if her reasons are among the ones I guessed.
Some of us prefer tools which work best without them.
Allow me to repeat my plea for CLI developers to take a little time to read https://no-color.org and ensure their programs honor things like NO_COLOR, npm config set color false, TERM=dumb, INSIDE_EMACS etc.
How do people manage their custom bash profile across machines?
Currently I just add a small if statement: if the custom file with all the little aliases I've put together over the years is present, source it. Is there a better way?
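That conditional is typically just a file-existence test before sourcing; a sketch (the filename ~/.aliases is made up for illustration):

```shell
# Source the personal alias file only if it exists, so the same
# .bashrc works unchanged on machines where it hasn't been copied yet.
if [ -f "$HOME/.aliases" ]; then
    . "$HOME/.aliases"
fi
```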
Some application configs do not allow host name resolution. In those cases I use a symlink to a host-specific config file. For `kitty`, for example, I make a `<hostname>.conf` for each host on which I have an account, then make a local, per-account symlink named `local.conf`, which is included from `kitty.conf`. The same pattern is useful even in configs that do allow dynamic host name resolution, as it is robust against cases where the host name unexpectedly changes.
All dot files are distributed via `vcsh` with the remote `git` repo in a personal `gitea` instance.
I use https://www.chezmoi.io/, which deals with that with templates and/or scripts. It's very flexible, so it can feel overkill, but I'm very fond of it.
I group different kinds of settings into particular files. My .bashrc has `source ~/.rc.d/pyenvrc`, `source ~/.rc.d/gitrc` and so on, which can help you stay organized if you have many custom settings.
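If the list of `source` lines grows, the same idea can be looped over the directory instead (a sketch; the `*rc` glob matching the naming scheme above is my assumption):

```shell
# Source every readable file ending in "rc" under ~/.rc.d.
for rc in "$HOME"/.rc.d/*rc; do
    if [ -r "$rc" ]; then
        . "$rc"
    fi
done
unset rc
```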
I also have a very tiny set of useful functions that I might occasionally use from some scripts. For example:
# Change the current working directory to the running script's directory.
cd_running_script_dir() {
    cd "$(dirname "$(readlink -f "$0")")"
}
# Display an error message and abort the running script.
#
# Arguments:
#   $1: A string with the error to be displayed before aborting.
abort() {
    ERROR=$1
    >&2 echo "${ERROR}"
    kill $$
}
# Return a program's full path if it exists, or display an error
# message and abort script execution.
#
# Arguments:
#   $1: A string with the program's name to be looked up.
get_program() {
    PROGRAM_NAME=$1
    PROGRAM_PATH=$(which "${PROGRAM_NAME}")
    if [ -z "${PROGRAM_PATH}" ]; then
        abort "Error - ${PROGRAM_NAME} is not installed."
    fi
    echo "${PROGRAM_PATH}"
    unset PROGRAM_PATH
    unset PROGRAM_NAME
}
Given the subtle nature of sh, where for example a misplaced space or character can render a whole script wrong, from time to time I add these snippets so I can entirely forget how to do some regular things, just by sourcing `~/.scripts/lib.sh`. It's also a great way to extend your sh knowledge and learn about customizing your environment.
Yes, these are silly functions, but their main purpose is to make sh snippets more readable.
Because I don't like to always display the same wallpaper over and over again my .xinitrc contains:
"${HOME}/.scripts/set-random-background" &
which is:
#!/usr/bin/env sh
# Randomly sets a wallpaper from a directory containing images.
#
# Usage:
# ./set-random-background
# Adjust global settings accordingly.
WALLPAPERS_PATH="${HOME}/.scripts/assets/wallpapers"
set_random_background() {
    FEH=$(get_program "feh")
    if [ -d "${WALLPAPERS_PATH}" ]; then
        WALLPAPER=$(ls "${WALLPAPERS_PATH}"/* | sort --random-sort | head -n1)
        if [ -n "${WALLPAPER}" ]; then
            ${FEH} --no-fehbg --bg-center --bg-scale "${WALLPAPER}"
        fi
    fi
}

main() {
    if [ -n "${BASH_LIB}" ]; then
        . "${BASH_LIB}"
        set_random_background
    fi
}

main