Related: pnut.sh https://github.com/udem-dlteam/pnut just released. It’s a self-compiling C->POSIX shell transpiler that outputs readable shell, aimed at live-bootstrap and reproducible build chains. The shell edition is auditable and can bootstrap a native compiler from just POSIX sh + coreutils. Very much in the spirit of stage0 / compile-from-nothing work.
Nice work, this actually looks great. Of course, it’s only a matter of time before someone drops the XKCD about standards proliferation, so I’ll save them the trouble. Pre-emptive XKCD #927 deployed.
finally feels like Python scripts can Just Work™ without a virtualenv scavenger hunt.
Now if only someone could do the same for shell scripts. Packaging, dependency management, and reproducibility in shell land are still stuck in the Stone Ages. Right now it’s still curl | bash and hope for the best, or a README with 12 manual steps and three missing dependencies.
Sure, there’s Nix... if you’ve already transcended time, space, and the Nix manual. Docker? Great, if downloading a Linux distro to run sed sounds reasonable.
There’s got to be a middle ground: simple, declarative, and built for humans.
Nix is overkill for any of the things it can do. Writing a simple portable script is no exception.
But: it’s the same skill set for every one of those things. This is why it’s an investment worth making IMO. If you’re only going to ever use it for one single thing, it’s not worth it. But once you’ve learned it you’ll be able to leverage it everywhere.
Python scripts with or without dependencies, uv or no uv (through the excellent uv2nix, which I can’t plug enough, no affiliation), bash scripts with any dependencies you want, etc. Suddenly it’s your choice, and you can actually choose the right tool for the job.
Not trying to derail the thread but it feels germane in this context. All these little packaging problems go away with Nix, and are replaced by one single giant problem XD
I don't think nix is that hard for this particular use case. Installing nix on other distros is pretty easy, and once it's installed you just do something like this
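For instance, a sketch assuming Nix is already installed (jq and shellcheck here are stand-ins for whatever dependencies the script needs):

```shell
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p jq shellcheck
# Everything below runs with jq and shellcheck on PATH, provided by Nix
# on first run; nothing gets installed system-wide.
jq --version
```

Running the script the normal way is enough; nix-shell reads the extra shebang lines and drops you into an environment with the listed packages.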
This is a hack, but I still found it helpful. If you do want to force a certain version without worrying about flakes [1], this can be your bash shebang, with something similar for configuration.nix or interactive nix-shell. It just tells nix to use a specific git hash as its base instead of whatever your normal channel is.
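A sketch of that pinned shebang, assuming Nix is installed; the `<pinned-git-hash>` placeholder and the python3 package are illustrative:

```shell
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p python3
#!nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<pinned-git-hash>.tar.gz
# python3 below always comes from the pinned nixpkgs revision,
# not from whatever channel the machine happens to track.
python3 --version
```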
For my use case, most things I don't mind tracking mainline, but some things I want to pin (chromium is very large, python changes a lot, or some version broke things).
I will say this with my whole heart: my Arch Linux install broke and I wanted to try out nix.
The most shocking part about nix is nix-shell (I know I can use it on other distros too, but hear me out): it's literally so cool to install projects for one-offs.
Want to record a desktop? For me that's one of those tasks I do quite infrequently, and I didn't like how on Arch I either had to keep obs around as a permanent dependency of my system or uninstall it every time. Ephemerality was a concept I was looking for before nix, since I always like to try out new software and keep my home system kind of minimalist-ish. Cool: `nix-shell -p obs-studio`, then `obs`, and you've got this.
Honestly, I like a lot of things about nix. I still haven't gone too far into the flakes side of things and just use it imperatively, sadly, but I found out that nix builds are sandboxed, so I hit on the idea of using it as a sandbox to run code from reddit, and I think I am going to do something cool with it (building something like codapi; codapi's creator is kinda cool, if you are reading this mate, I'd love talking to ya).
And I personally feel as if some software could truly be made plug-n-play. Imagine Hetzner offering NixOS machines (currently, I have heard its support is finicky); then we could get something really close to DigitalOcean droplets, plug-n-play, without the isolation that Docker provides. Docker has its own use cases, but managing Docker stuff feels harder to me than managing nix stuff. Feel free to correct me; I am just describing how it feels using nix.
I also wish there were something like a functional lua (does that exist??) to nix transpiler, because I'd like to write lua instead of nix to manage my system. But I guess nix is fine too!
Hi there, Since you mentioned Hetzner, I thought I would respond here. While we do not have NixOS as one of our standard images for our cloud products, it is part of our ISO library. Customers can install it manually. To do this, create a cloud server, click on it, and then on the "ISO" in the menu, and then look for it listed alphabetically. --Katie
Hey Hetzner. I am just a 16 year old boy (technically I am turning 17 on 2nd July haha, but I want nothing from ya haha) who has heard great things about your service being affordable, but I've never tried it because I don't have a credit card / I am a really frugal person at the moment haha. I was reading one of your own documents, if I recall correctly, and it said that the support isn't the best (but I guess I was wrong).
I guess I will try out nix on hetzner for sure one day.
This is really cool!!! Thanks! I didn't expect you to respond. You made my day, whoever responded with this.
THANKS A LOT KATIE. LOTS OF LOVE TO HETZNER. MAY YOU BE THE WAY YOU ARE, SINCE Y'ALL ARE PERFECT.
Hi again, I'm happy that I made your day! You seem pretty easy to please if that is all it takes.
Keep in mind that customers must be 18 years old. I believe that is a legal requirement here in Germany, where we are based. Until then, if you're a fan, maybe you'd enjoy seeing what we're up to. We're on YouTube, reddit, Mastodon, Instagram, Facebook, and X. --Katie
> Packaging, dependency management, and reproducibility in shell land are still stuck in the Stone Ages.
IMO it should stay that way, because any script that needs those things is way past the point where shell is a reasonable choice. Shell scripts should be small, 20 lines or so. The language just plain sucks too much to make it worth using for anything bigger.
My rule of thumb is that as soon as I write a conditional, it's time to upgrade bash to Python/Node/etc. I shouldn't have to search for the nuances of `if` statements every time I need to write them.
An if statement in, for instance, bash just runs any command and then runs one of two blocks of code based on the exit status of that command. If the exit status is truthy, it runs what follows the `then`. If it's falsey, it runs what follows the `else`. (`elif` is admittedly gratuitous syntax; it would be better if it were just an if inside an else block.) This seems quite similar to other programming languages, and like not very much to remember.
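A minimal sketch of that description; `grep` here is just an arbitrary command whose exit status we branch on:

```shell
# `if` runs the command and branches on its exit status:
# 0 (success) takes the `then` branch, non-zero takes the `else` branch.
if grep -q root /etc/passwd; then
    echo "found"
else
    echo "not found"
fi
```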
I'll admit that one thing I do in my shell scripts is avoid "fake syntax": I never use `[` or `[[`, because these obscure the real structure of the statements for the sake of cuteness. I just write `test`, which makes clear that it's just an ordinary command, and also signals to someone who isn't sure what it's doing that they can find out just by running `man test`, `help test`, `info test`, etc., from the same shell.
I also agree that if statements and if expressions should be kept few and simple. But in some ways it's actually easier to do this in shell languages than in many others! Chaining && and/or || can often get you through a substantial script without any if statements at all, let alone nested ones.
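A sketch of that chaining style (the paths are arbitrary examples); note that `a && b || c` runs `c` if either `a` or `b` fails, so it isn't a perfect if/else substitute:

```shell
# run the second command only if the first succeeds
mkdir -p /tmp/chain-demo && echo "dir ready"
# run the second command only if the first fails
test -f /tmp/chain-demo/missing || echo "file absent"
```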
The difference being, as far as I know, that `[[` is the real syntax. From what I remember, this helps avoid a certain class of issues, gives better error messages, and is more certain to be a bash built-in.
What I would worry about more is that it breaks `sh` compatibility.
`test` and `[` are Bash builtins just like `[[` is built into bash. But `[[`'s implementation does some things that actual commands can't do because it gets parsed differently than a normal command.
When Bash sees it, it treats it as something that needs to be matched, like a string, and won't stop taking user input in an interactive session until it sees the `]]` or something else that causes a syntax error. If I write `[` and just hit enter, I get an error from the `[` command, same as if I ran an external one. But if I use a `[[`, I get an error message back from Bash itself about a malformed conditional (and/or it will wait for input before trying to execute the command):
    $ which -a [
    /Users/pxc/.nix-profile/bin/[
    /bin/[

    $ [
    bash: [: missing `]'

    $ /bin/[
    [: missing ]

    $ [[
    > no prompt
    bash: conditional binary operator expected
    bash: syntax error near `prompt'

    $ [[
    > a =
    bash: unexpected argument `newline' to conditional binary operator
    bash: syntax error near `='
The other thing `[[` does is it has you write `&&` and `||` instead of `-a` and `-o`. A normal command can't do this, because it can't influence the parser of the shell running it: `&&` will get interpreted by the shell rather than the command unless it is escaped.
This same kind of special handling by the parser probably allows for other differences in error messages, but I don't write Bash that produces such errors, so I couldn't tell you. ;)
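A small illustration of that difference (Bash only; the variables are arbitrary):

```shell
a=1 b=2
# inside [[ ]], && is parsed by the conditional construct itself
[[ $a = 1 && $b = 2 ]] && echo "both (with [[)"
# with plain `test`, && belongs to the shell, so we chain two commands
test "$a" = 1 && test "$b" = 2 && echo "both (with test)"
```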
> more certain to be a bash built-in
If you want to be sure that you're using a built-in rather than a command, you can use the `builtin` command. But because `[[` is actually special syntax, it's technically not a builtin, so you can't use it this way! Check it out:
    $ builtin [[
    bash: builtin: [[: not a shell builtin

    $ builtin [
    bash: [: missing `]'
The thing that lets `[[` yield more sophisticated error messages in some ways is actually the very reason I prefer to stay away from it: it's special syntax when an ordinary command works just fine. I think the any-command-goes-here-and-all-commands-are-equal structure of if statements in Unix shells is elegant, and it's already expressive enough for everything we want to do. Stuff like `[[` complicates things and obscures that elegance without really buying us additional power, or even additional concision.
Imo, that's the real reason to avoid it. I'm all for embracing Bashisms when it makes code more legible. For instance, I think it's great to lean on associative arrays, `shopt -s lastpipe`, and `mapfile`. They're innovations (or deviations, depending on how you look at it ;), like `[[`, but I feel like they make the language clearer and more elegant while `[[` actually obfuscates the beauty of `if` statements in shell languages, including Bash.
I mean, there are 3 equally valid ways to write an if statement: `test`, `[`, and `[[`. In the case of the latter two, there are a mess of single-letter flags to test things about a file or condition[0]. I'm not sure what makes them "fake syntax", but I also don't know that much about bash.
It's all reasonable enough if you go and look it up, but the script immediately becomes harder to reason about. Conditionals shouldn't be this hard.
You don't need any of those to write an if statement. I frequently write if statements like this one:
if ! grep -qF something /etc/some/config/file 2>/dev/null; then
do_something
fi
The `test` command is there if you want to use it, but it's just another command.
In the case of Bash, `test` is a built-in command rather than an external program, and it also has two other names, `[` and `[[`. I don't like the latter two because they look, to a naive reader, like special syntax built into the shell, like something the parser sees as unique and different and that bears a special relationship to if-statements, but they aren't and they don't. And in fact you can use them in other shells that don't have them as built-ins, if you implement them as external commands. (You can probably find a binary called `[` on your system right now.)
(Actually, it looks like `[[` is even worse than "fake syntax"... it's real special syntax. It changes how Bash interprets `&&` and `||`. Yikes.)
But if you don't like `test`, you don't have to use it; you can use any command you like!
For instance, you might use `expr`:
if expr 1 '>' 0 > /dev/null; then
    echo this will always run
else
    echo this will never run
fi

(The `>` is quoted so the shell doesn't parse it as a redirection; `expr` prints the result and exits 0 when the comparison is true.)
Fish has some built-ins that fall into a similar niche that are handy for simple comparisons like this, namely `math` and `string`, but there are probably others.
If you really don't like `test`, you don't even need it for checking the existence or type (dir, symlink, socket, etc.) of files! You can use GNU `find` for that, or even sharkdp's `fd` if you ache for something new and shiny.
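A sketch with GNU `find` (the path a/b/c is a throwaway example). One wrinkle: `find` exits 0 even when nothing matches, so we check that it actually printed a match:

```shell
mkdir -p a/b/c
# -maxdepth 0 tests the path itself rather than descending into it;
# grep -q . succeeds only if find printed a matching path,
# i.e. a/b/c exists and is a directory
if find a/b/c -maxdepth 0 -type d 2>/dev/null | grep -q .; then
    echo "directory exists"
fi
```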
Fish actually has something really nice here in the `path` built-in, which includes long options like you and I both wish `test` had. You can write:
if path -q --type=dir a/b/c
touch a/b/c/some-file
end
You don't need `test` for asking about or asserting equality of variables, either:
grep -qxF "$A" <<< "$B"
is equivalent to
test "$A" = "$B"
or with the Fish `string` built-in
string match --entire $A $B
The key is that in a shell, every command is truthy or falsey in terms of its exit status. `&&` and `||` let you combine those exit statuses in exactly the way you'd expect, as do the (imo much more elegant) `and` and `or` combiner commands in Fish.
Finally, there's no need to use `test`'s own combining flags for compound conditions. I certainly never do. You can just write
test "$A" = "$B" && test "$C" = "$D"
instead of something like
[ "$A" = "$B" -a "$C" = "$D" ]
If-statements in shell languages are so simple that there's practically nothing to them. They just take a single command (any!) and branch based on its exit status! That's it.
As for readability: any program in any language is difficult to understand if you don't know the interfaces or behaviors of the functions it invokes. `[`/`test` is no different from any such function, although it appears that `[[` is something weirder and, imo, worse.
This is a decent heuristic, although (IMO) you can usually get away with ~100 lines of shell without too much headache.
Last year I wrote (really, grew like a tumor) a 2000 line Fish script to do some Podman magic. The first few hundred lines were great, since it was "just" piping data around - shell is great at that!
It then proceeded to go completely off the rails when I went full sunk cost fallacy and started abusing /dev/shm to emulate hash tables.
E: just looked at the source code. My "build system" was another Fish script that concatenated several script files together. Jeez. Never again.
Historically, my rule of thumb is: as soon as I can't see the ~entire script without scrolling, it's time to rewrite in Python/Ansible. I think about the rewrite, but it usually takes a while to get to it (if ever).
When you solve the dependency management issue for shell scripts, you can also use newer language features because you can ship a newer interpreter the same way you ship whatever external dependencies you have. You don't have to limit yourself to what is POSIX, etc. Depending on how you solve it, you may even be able to switch to a newer shell with a nicer language. (And doing so may solve it for you; since PowerShell, newer shells often come with a dependency management layer.)
> any script that needs those things
It's not really a matter of needing those things, necessarily. Once you have them, you're welcome to write scripts in a cleaner, more convenient way. For instance, all of my shell scripts used by colleagues at work just use GNU coreutils regardless of what platform they're on. Instead of worrying about differences in how sed behaves with certain flags, on different platforms, I simply write everything for GNU sed and it Just Works™. Do those scripts need such a thing? Not necessarily. Is it nicer to write free of constraints like that? Yes!
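A sketch of the kind of difference this avoids: GNU sed accepts `-i` with no argument, while BSD/macOS sed requires a suffix (possibly empty, as in `-i ''`), so this exact invocation is only reliable once you can assume GNU sed everywhere (demo.txt is a throwaway file):

```shell
printf 'colour\n' > demo.txt
# in-place edit, GNU sed style: no backup-suffix argument after -i
sed -i 's/colour/color/' demo.txt
cat demo.txt
```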
Same thing for just choosing commands with nicer interfaces, or more unified syntax... Use p7zip for handling all your archives so there's only one interface to think about. Make heavy use of `jq` (a great language) for dealing with structured data. Don't worry about reading input from a file and then writing back to it in the same pipeline; just throw in `sponge` from moreutils.
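A minimal sketch of what `sponge` buys you: it soaks up all of stdin before opening (and truncating) the destination, which is why `jq ... file | sponge file` is safe. `sponge_like` below is a hypothetical stand-in for the real moreutils tool:

```shell
sponge_like() {
    # buffer everything first, then atomically replace the target
    tmp=$(mktemp)
    cat > "$tmp"
    mv "$tmp" "$1"
}

printf 'hello\n' > notes.txt        # throwaway example file
# read and rewrite the same file in one pipeline
tr 'a-z' 'A-Z' < notes.txt | sponge_like notes.txt
cat notes.txt
```

Without the buffering step, redirecting straight back to notes.txt would truncate it before `tr` could read it.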
> The language just plain sucks too much
There really isn't anything better for invoking external programs. Everything else is way clunkier. Maybe that's okay, but when I've rewritten large-ish shell scripts in other languages, I often found myself annoyed with the new language. What used to be a 20-line shell script can easily end up being 400 lines in a "real" language.
I kind of agree with you, of course. POSIX-ish shells have too much syntax and at the same time not enough power. But what I really want is a better shell language, not to use some interpreted non-shell language in their place.
Nice, if only you could count on having it installed on your fleet, and on your fleet being 100% Linux: no AIX, no HP-UX, no Solaris, no SUSE on IBM Power...
Been there, tried to, got a huge slap in the face.
Been there, done that. I am so glad I don’t have to deal with all that insanity anymore. In the build farm I was responsible for, I was always happy to work on the Linux and BSD boxes. AIX and HPUX made me want to throw things. At least the Itanium junk acted like a normal server, just a painfully slow one.
I will never voluntarily run a bunch of non-Linux/BSD servers again.
At the time (10 years ago) I worked for a company with enormous customers who had all kinds of different deployment targets. I bet that list is a lot shorter today.
I have a couple of projects consisting of around >1k lines of Bash. :) Not to gloat, but it is pretty easy to read and maintain. It is complete as well: I tested all of its functionality and it just works(tm). Were it another language, it may have been more than just around 1k LOC, or more difficult to maintain. I call external programs a lot, so I stuck with a shell script.
I simply do not write shell scripts that use or reference binaries/libraries that are not pre-installed on the target OS (which is the correct target; writing shell scripts for portability is silly).
There is no package manager that is going to make a shell script I write for macOS work on Linux if that script uses commands that only exist on macOS.
That's a shame, as I had reached monk-level Python jujitsu. I can fix any problem, you name it: https nightmares, brew versions vs pyenv, virtualenv shenanigans. Now all this knowledge is a bad investment of time.
Knowing the Python packaging ecosystem, uv could very well be replaced by something else. It feels different this time, but we won't know for a while yet.
Agreed. I migrated ~all my personal things to uv, but I'm sure once I start adopting it widely at work I'll find edge cases where you need to know the weeds to figure out/work around.
I'm unable to resist responding that clearly the solution is to run Nix in Docker as your shell since packaging, dependency management, and reproducibility will be at theoretical maximum.
For the specific case of solving shell script dependencies, Nix is actually very straightforward. Packaging a script is a writeShellApplication call and calling it is a `nix run`.
I guess the issue is just that nobody has documented how to do that one specific thing so you can only learn this technique by trying to learn Nix as a whole.
So perhaps the thing you're envisaging could just be a wrapper for this Nix logic.
> finally feels like Python scripts can Just Work™ without a virtualenv scavenger hunt.
Hmm, last time I checked, uv installs into ~/.local/share/uv/python/cpython-3.xx and cannot be installed globally, e.g. inside a minimal docker image without any other python.
>When Python is installed by uv, it will not be available globally (i.e. via the python command). Support for this feature is in preview. See Installing Python executables for details.
>You can still use uv run or create and activate a virtual environment to use python directly.
Quick search shows Altera held 30% of the FPGA market. That puts AMD’s $50B acquisition of Xilinx (which holds ~50% of the market) in an awkward light. Using some extremely crude math, Xilinx’s fair market value might now be closer to ~$15B.
Did AMD massively overpay, or has the FPGA market fundamentally shifted? Curious to see how this new benchmark ripples into AMD’s stock valuation.
The FPGA market shifted. For a brief moment they were allowed to be on BOMs of end user devices due to the rest of the computing field lagging behind somewhat. That period, as far as I can tell, is over.
My anecdotal example would be high end broadcast audio processors. These do quite a bit beyond the actual processing of audio, in particular, into baseband or even RF signal generation.
In any case these devices used to be fully analog, then when they first went digital were a combination of DSPs for processing and FPGAs for signal output. Later generations dropped the DSP and did everything in larger FPGAs as the larger FPGAs became available. Later generations dropped the whole stack and just run on an 8 core Intel processor using real time linux and some specialized real time signal processing software with custom designed signal generators.
The high-core-count, high-frequency CPUs became good enough, and getting custom-made chips became exceptionally cheap as well. FPGAs became rather pointless in this pipeline.
The US military, for a time, had a next-generation radio specification that specifically called for the use of FPGAs, as that would allow them to make manufacturer-agnostic radios and custom software for them. That never panned out, but it shows the peak use of FPGAs to manage the constraints of this time period.
There’s an interesting middle-ground worth mentioning: a 2-in-1 connector https://www.mouser.com/ProductDetail/Rego-Electronics/845-00... that can take both HDMI and DisplayPort cables, which could be a neat solution for devices juggling both standards.
This reminds me of MIT’s work on femto-photography to see around corners. They used ultrafast laser pulses to bounce light off walls, capturing the reflections from hidden objects. By analyzing the time-of-flight data, they could reconstruct 3D shapes of objects not in direct view.
<https://web.media.mit.edu/~raskar/cornar/>
Finally, LLMs have brought us to the threshold of ancestral simulation. It's like the universe hit 'retry' on humanity, but this time with AI as the dungeon master.