I have most of my shell history back to 2005(?).
Each terminal gets its own new history file.
99% of the time I never look at it, but when I do need it, it has been great. My boss once asked me: "What args and screening file did we use when we made that one-off DB 4 months ago?" I was able to check and confirm it was correct.
Or for personal use: "Where did I move that folder of pictures?"
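A lookup like that is basically just grepping across the files; a minimal sketch, assuming a hypothetical layout where each terminal writes its own file under ~/.histories/ (not necessarily how the poster has it set up, and 'createdb' is only a placeholder pattern):

# search every per-terminal history file for the command in question
grep 'createdb' ~/.histories/*.hist | less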
I opted for a single history across all sessions on any given host. On my main machine, the first of 54,434 entries is timestamped 2020-08-22:12:39; on the machine on which I do most development at the moment (it varies from product to product and release to release), the first of 34,771 entries is timestamped 2023-05-08:11:34.
For the curious, the salient .bashrc bits are:
function _setAndReloadHistory {
    builtin history -a    # append this session's new entries to $HISTFILE
    builtin history -c    # clear the in-memory history list
    builtin history -r    # re-read $HISTFILE, picking up other sessions' entries
}
# preserve shell history
set -o history
# preserve multiline commands...
shopt -s cmdhist
# preserve multiline command as literally as possible
shopt -s lithist
# reedit failed history substitutions
shopt -s histreedit
# enforce careful mode... we'll see how this goes
shopt -s histverify
# timestamp all history entries
HISTTIMEFORMAT='%Y-%m-%d:%H:%M '
# not the default, we like to be explicit that we are not using defaults
HISTFILE=~/.bash_eternal_history
# preserve all history, forever
HISTSIZE=-1
# preserve all history, forever, on disk
HISTFILESIZE=-1
# record only one instance of a command repeated after itself
HISTCONTROL=ignoredups
# preserve history across/between shell sessions...
# ...when the shell exits...
shopt -s histappend
# ...and after every command...
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; } _setAndReloadHistory"
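With HISTTIMEFORMAT set, answering the "what did we run back then" kind of question is mostly a matter of searching; a minimal sketch ('createdb' is just a placeholder pattern):

# in a live shell, history renders the recorded timestamps
history | grep 'createdb'
# or search the raw file; the preceding '#<epoch>' lines are the timestamps
grep -B1 'createdb' ~/.bash_eternal_history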
EDIT: Remembered just after submitting that, since I am on macOS, I ran the command
touch ~/.bash_sessions_disable
back on August 22nd, 2020, to prevent Terminal from saving per-session information. I've never cleaned out ~/.bash_sessions; I suppose I should, but it hasn't been updated since that day.
For those who would like to read more about how a dictator, or for that matter any kind of political leader, stays in power, I found "The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics" [1] a good read on the topic.
Of note is that the required number of supporters can be quite low when you have a government supported by resource extraction.
IPv4 is just inefficiently allocated in general. Why does the world need 10.0.0.0/8 in addition to 192.168.0.0/16? Aren't 65k addresses enough? Is there a private organization in need of 16 million addresses?
Not to mention that AMPRNet (amateur radio) held the entire 44.0.0.0/8 until 2019, when a portion was sold off to Amazon. That may have seemed reasonable in 1980, but now it's just plain crazy.
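For a sense of scale, the private (RFC 1918) blocks differ by orders of magnitude; a quick back-of-the-envelope in the shell, nothing more:

# addresses in a block = 2^(32 - prefix length)
for prefix in 8 12 16; do
    echo "/$prefix -> $(( 2 ** (32 - prefix) )) addresses"
done
# prints 16777216 for /8 (10.0.0.0/8), 1048576 for /12 (172.16.0.0/12),
# and 65536 for /16 (192.168.0.0/16)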
> Is there a private organization in need of 16 million addresses?
Plenty of mobile phone networks have well over 16 million subscribers, and they typically don't have enough public IPv4 addresses for everyone. This has led to really hacky stuff, like using DoD IP ranges as pseudo-private space, or re-using private IP addresses in different regions (which can't be fun to maintain).
Some networks have fixed the problem by using NAT64: forgoing IPv4 altogether internally and translating to public v4 at the edge. (It works surprisingly well; T-Mobile US has been doing it for the better part of a decade.)
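To make the translation idea concrete (a generic NAT64 sketch, not T-Mobile's actual deployment): the usual scheme maps each IPv4 address into the well-known 64:ff9b::/96 prefix, with the v4 address in the low 32 bits.

# embed an IPv4 address into the NAT64 well-known prefix (RFC 6052)
ip=192.0.2.1   # placeholder address
IFS=. read -r a b c d <<< "$ip"
printf '64:ff9b::%02x%02x:%02x%02x\n' "$a" "$b" "$c" "$d"
# prints 64:ff9b::c000:0201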
It's not about the number of addresses available; it's about reflecting the organization's hierarchy in the octets. The 10.0.0.0/8 range is useful for breaking a distributed company's network up so that each major office gets a 10.x.0.0/16 and each department a 10.x.y.0/24.
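As a sketch of that convention (the office and department numbers are made up):

# hypothetical plan: second octet = office, third octet = department
office=3
dept=7
echo "whole company:      10.0.0.0/8"
echo "office network:     10.$office.0.0/16"
echo "department network: 10.$office.$dept.0/24"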
This is why 192.168.0.0/16 is often used for services like libvirtd, Kubernetes and Docker. And the use of the range by those services makes it even more unwieldy to try to put some other LAN in there.
You can work around these considerations if you want, but many people won't. When you're the network engineer responsible for designing a company's networks, you want to keep things simple and robust. When you're called in at 3am on a Sunday because the network is down, you'd better hope your ability to recover doesn't require making a bunch of subnet calculations because you decided to try to use the pool of available IP addresses efficiently.
https://en.wikipedia.org/wiki/Scott_Galloway_(professor)#Bib...