Why Create a New Unix Shell? (oilshell.org)
73 points by tux1968 on Jan 27, 2021 | hide | past | favorite | 62 comments




It's only tangential but switching to fish shell made me about 3 times more productive. Being able to easily recall commands, to have completions displayed as I type, and to have interactivity be a first class citizen of the environment... made working in the shell so so so so much nicer.

Fish has its own quirks, some of which seem unnecessary, but it was a godsend. There is definitely still room for improvement of the old tools, a. lot. of. room.


I switched from fish to zsh once I found this plugin:

https://github.com/zsh-users/zsh-autosuggestions

Now zsh feels just like fish, but with bash-compatible syntax (which was the main day-to-day problem I had with fish, otherwise it's fine, but except for the auto-suggestions I didn't really use any special fish features).


I was always reluctant to change my shell, but recently I switched to fish and it's been a pleasure from day one. I wish I had done it sooner.


I have used fish for many years and have always been puzzled why it is not more popular. If you want an easy and effective shell that gets out of your way, it should be an obvious choice.

Do you have any thoughts on why people don't try it out? Were you also reluctant?


In my case it was the fact that fish's scripting language is incompatible with bash's that made me switch back to zsh (after a few years of using fish). It's not a show stopper (otherwise I would have switched back earlier), but there's been a steady dose of small daily frictions because of it.


I try not to use nonstandard software unless it's something I spend a lot of time on. The shell became very important to my work recently, and only then did I start looking for alternatives to bash and zsh.


I am a huge fan of fish. Oil shell seems kind of different in aim from fish. Fish is trying primarily to be a great interactive shell and to have code which is easy to understand, but not necessarily compact or efficient to write. But for me it doesn't matter. For longer things I use Julia or Go anyway.


Absolutely, hence "tangential". I was only hoping to lend credence to the idea that reinventing things is a good idea with a parallel example. I'm not the target audience of oil shell but I wish anyone who takes on something like that all the luck.

The pushback people give when someone works on projects like these has always baffled me. As if trying to do better isn't worth it.


It always surprises me that people get so much productivity gain out of their editors, shells and keyboards and such.

I write maybe a few dozen lines of code on a good day.


Well, we all do different kinds of work. I'm a scientist; I spend all day looking at and parsing data in a zillion formats, not to mention writing papers in LaTeX. My productivity directly depends on my ability to ingest and make sense of text files and code, and every day is different. A more efficient shell... or more importantly, a shell that frees up brain space for other tasks... makes a huge difference.

I often work with colleagues who ask how I get results so fast. They're very surprised when I say "remote emacs and a solid foundation of shell workflows".

When everything you work with is a text file, getting efficient with the shell just makes sense. Ask a carpenter about their tools and organization and you'll get a similar response.


(Fish doesn't make me 3x more productive, but...) I think a good analogy would be having sharp tools. If you don't have to exert yourself to do simple things, then menial tasks use less mental energy which you can save for your actual work.

Fish manages to pull this off without _any_ configuration burden (I use it completely stock), which is exciting because usually to benefit from shortcuts or macros or other "productivity hacks" you have to become a power user.


their take on config is unique:

> Every configuration option in a program is a place where the program is too stupid to figure out for itself what the user really wants, and should be considered a failure of both the program and the programmer who implemented it.

https://fishshell.com/docs/current/design.html#configurabili...


If you're working a lot in the terminal, then fish's convenient auto-suggestion mechanism really is quite a game changer, and it really "feels" 3x more productive because the prediction is pretty good (each directory has its own command history, which fish checks first for matches). It's like replacing a traditional search box which only shows exact matches after you hit Enter with a fuzzy search box which immediately displays the most likely match while you're typing.

There are plugins for zsh (and maybe other shells) which emulate this feature though.


I am curious what you are doing with the rest of the day? Planning? Meetings?


In my experience, reading code.


Figuring out what lines to write, mostly.


How does fish compare to zsh? I just switched from bash to zsh and am loving it.


Fish’s strength is in having good defaults that make it work well out of the box. Fish’s downside is not being POSIX compatible [1], though its creators would argue they deliberately avoided POSIX weaknesses.

If you already have Zsh configured well then Fish probably wouldn’t offer a huge advantage.

[1]: https://en.wikipedia.org/wiki/Fish_(Unix_shell)#Bash/fish_tr...


For a good default config of zsh I highly recommend oh-my-zsh[0]. I've been using it for years and it's amazing.

[0]: https://github.com/ohmyzsh/ohmyzsh


Fish is a bit like the Mac of shells. Stuff just works, and it's easy to use while flexible. Lots of little details are well polished.

The downsides are similar to Mac. Less standard/compatible with other stuff. Zsh seems more like a traditional shell. Meaning it is far more complex. Fish really aims to keep simplicity.

I have used Fish for many years and swear by it but I am also not the kind of guy who writes long shell scripts. I use regular programming languages for that. For me fish is mainly about having a shell that works well in interactive use.


I completely agree with this.

I recommend Zsh to people: it's very familiar, scripts copied from the internet will most probably work, and there's lots of flexibility to adjust it to their needs.

I personally have a Mac with Fish installed. I don't enjoy the process of sharpening my tools; I'm willing to give up flexibility in exchange for great defaults. It's the same reason why, after I learned how to use a sharpening stone, I ended up buying an electric knife sharpener.


My least favorite fish quirk is that you cannot use stty to remap Ctrl-C. You need to use stty in another shell, then exec into fish to remap it.


Good news: In the upcoming 3.2 release, fish will be less insistent on resetting terminal modes, so things like this (and enabling flow control) will be possible.

(I'm the fish dev who implemented that)


Why do you remap Ctrl-C?


Why wouldn't I remap ctrl-c? It is the shortcut for copy. The letter c is nowhere in the word interrupt.


Interesting, it seems a 3x productivity boost to all programmers and admins would be a no brainer for the industry.


If all programmers and admins had the same needs then that would be a logical conclusion. But we don't, so I'll stick to ksh.


Not everyone who uses a computer is a programmer or an admin. I can't speak to their needs or what fish provides for them.


Oil looks interesting. The documentation is compelling.

I sleep at night by writing stoic bash. If I encounter anything unexpected in a script I don’t try to handle it, I just report it and then crash.

Most of the bash contortions I’ve seen in my life happen when people (myself included) have tried to handle error conditions or ill-specified inputs gracefully.

A bash script is like a pre-flight checklist: a list of things that should work to get something going. Most bash scripts are like this: in general they are sequences of commands to change the system from one good state to another, not to recover it from a bad state.

In the analogy: if something is not ok with your aircraft it’s beyond the scope of the pre flight check to fix it. Don’t use the checklist as a way of trying to automatically detect what the error is in order to fix it.
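In bash terms, that "stoic" style can be sketched like this (the `require` helper is hypothetical, just to illustrate report-and-crash rather than recover):

```shell
# Fail fast: exit on any error and on unset variables.
set -eu

# Hypothetical pre-flight check: if a tool is missing, report
# the problem and crash. No attempt to fix or recover.
require() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "preflight: missing required tool: $1" >&2
    exit 1
  }
}

require sh
require printf
echo "preflight ok"
```

Each `require` line is one item on the checklist; the first failure aborts the flight.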


Great analogy. This is exactly how I use bash and have never thought to articulate it this way.


I think the author realized something important: there are bash-as-a-CLI users and bash-as-a-scripting-language users. I'm firmly in camp one, and I think it's an important difference to realize.

I don't want my CLI to be a REPL for a scripting language. It's something I really disliked about PowerShell when I tried it. It just felt like a dotnet scripting language and not like a CLI.

If Oil provides a scripting interface with structured types etc. I'm all for it. I just don't want to have to deal with it when using the shell interactively. I love it when writing scripts and programs, but not for interactive shell use.


The reason I use fish is purely its nice OOTB shell defaults. I wouldn't script for Fish though, since it's not a standard shell on most OSes.


What if the language is “shell”-ish? Reading the piece, the idea of shell as a REPL seemed attractive.


For me: no. I feel like PowerShell tried exactly that and I don't like it. That may in the end just depend on execution, though.

But PowerShell e.g. prints errors that look like stack traces (it's been a few years, I don't remember exactly) with type information. And while that may be desirable to some, when using bash I'm using a UI as a user, not programming, and I don't want to read a stack trace. I want an error message in English telling me what's wrong.


There is a try/catch/finally mechanism in powershell: https://docs.microsoft.com/en-us/powershell/module/microsoft... However, it is quirky to set up and I don't remember the details now; it doesn't handle errors out of the box, for sure. I would still prefer powershell over bash (or god forbid, cmd) any day.


Sure. Is that how all REPLs behave? I have a feeling there is a lot of latitude for tuning the ergonomics.


Others' experiences may vary, but personally speaking I have to write a fair amount of shell scripts that run on a range of machines, networks, etc. At best it's a pita to change shells on these ephemeral boxes; worst case it's out of my control.

The lowest common denominator ends up being good ol' bash (or sh)! Sure, fish/oil is nice, but it's quite a bit of mental gymnastics to keep them all straight in my head.


That's reasonable, but Oil can still be useful to you as a dev tool, since it implements a very large and sane subset of bash. Some details here:

http://www.oilshell.org/why.html

And something I just released is a better "set -x", which no other shell has:

https://www.oilshell.org/release/0.8.7/doc/xtrace.html

So you could debug your programs with Oil if bash tracing isn't enough (which it often isn't for me).
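For comparison, this is what stock POSIX tracing gives you; its flat "+ " prefixes, with no process or timing context, are the baseline the improved xtrace builds on:

```shell
# Standard tracing: `set -x` echoes each command to stderr,
# prefixed by PS4 ("+ " by default), before running it.
trace=$(sh -c 'set -x; msg=hi; echo "$msg"' 2>&1)
echo "$trace"
```

The captured trace interleaves assignments and commands like `+ msg=hi` and `+ echo hi` with the actual output, which gets hard to read quickly in larger scripts.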

Oil is also very easy to build and install -- all you need is a compiler, a shell, and make, and make will probably go away at some point.


This is why I use the default shell on whatever platform I'm working on. I want to know that my work does not first depend on installing an alternate shell (often not possible, depending on the environment).


I would like to know other people's habits.

For example, I always close browser windows and only keep a few tabs open at once; then I found out I was weird and many people keep huge amounts of tabs around.

My personal habit is to use bash, and always write shell scripts with #!/bin/sh as the first line.

Thing is - I haven't tried other shells (since I was a student) because of "linux superstition". Same superstition that chooses filenames without spaces and prevents putting "." in your path.

Thing is - I think maybe decoupling the user interface from the scripting language should be more of a thing. People don't have as much trouble switching out their terminal emulator (a way of taking up the slack).

Maybe the way to a better UI while maintaining compatibility would be to choose a good "user interface" shell and export SHELL=/bin/bash so scripts still run?


> GNU bash is the most popular shell implementation in the world. It was first released in 1989, and implements the POSIX standard plus many extensions. It's the default shell on most Linux distributions, runs on BSD Unix variants, used to ship on Mac OS X, and runs on Windows.

Note that macOS still ships bash.


I haven’t had a Mac in a few years but iirc they ship zsh nowadays, not bash


Mac never stopped shipping Bash. What they did was stop using it as the default interactive shell; Zsh is now the default. As far as I know, the Mac has never shipped with Fish.


Default shell is zsh. They ship an old pre-GPLv3 version of bash. I just “brew install bash” and then change my user shell in System Preferences to point at it.


They ship a few shells, but the default is zsh these days.


Greetings!

First off, any person who would write either a language or a shell, now or in the future, should read your page, and in fact, should read everything about your Oil shell/language...

It's a laudable effort!

This page is going to my HN favorites, for future review.

What follows next are my thoughts about selected excerpts of text on the page (please don't interpret as criticism, that's not the intent):

>"You can think of a Unix shell in two ways:

1. As a text-based user interface. You communicate with the operating system by typing commands.

2. As a language. It has variables, functions, and loops. Shell programs are text files that start with #!/bin/sh."

[...]

>"Are you reinventing Perl?

It's true that Perl is closer to shell than Python and Ruby are. For example, the perl -pie idiom can take the place of awk and sed. However, Perl isn't an acceptable shell either:

o It doesn't have true pipelines (of processes). I believe you have to "shell out" for this.

o It doesn't have a concept of file descriptor state. How do you write the equivalent of my_shell_func 2> err.txt, where my_shell_func can invoke both functions and external commands?

o Perl has a function called grep(), but the real grep is better for many problems."

PDS: This touches upon a historic problem, which is basically that either:

A) Shell designers work their way "up" to a programming language, in which case some elegant programming language features are either not thought about, or not thought deeply enough about, and thus either not implemented, or implemented poorly / "non-orthogonally", AKA a "Kludge".

Or:

B) Language designers work their way "down" to supporting shell commands / a shell subsystem -- in which case some elegant shell features are either not thought about, or not thought deeply enough about, and thus either not implemented, or implemented poorly / "non-orthogonally", AKA a "Kludge".

Also, languages are typically too heavy in what they require typed for shell-scripters, and shells are typically too light in what they allow (and error checking/debugging features) for programmers writing large programs...

If someone were to design the most ideal programming language which also can be used for shell scripting, they'd have to look at things from both perspectives, and they'd have to create a language/shell -- completely balanced "down the middle" between these two paradigms...

Perhaps the Oil shell/programming language -- is, or will become that middle line...

To that extent, I wish you a lot of luck!

If I were going to go down this path, I'd look at "first principles" (Elon Musk):

The greatest differentiator between shell commands and lines of code in most programming languages is the point you alluded to in your comparison with Perl:

"o It doesn't have true pipelines (of processes). I believe you have to "shell out" for this."

In other words, in most programming languages (unless you are using threads), you are guaranteed linear execution.

In a Unix shell -- because you can run multiple commands that may take varying amounts of time to complete, this guarantee is no longer present...

That would be the fundamental thing to keep in mind when writing this future language/shell...

(A good program to test when writing this language would be a Web-server -- where the base server is a single script, and then when it detects a connection, passes this over to either a single separate command-line invoked Unix program or script comprising multiple such Unix programs linked with pipes, but the central "server" script has to handle a whole lot of simultaneous I/O from multiple such separate programs (it centralizes database access!), and do this in a simple way that script programmers would be comfortable with, while guaranteeing the correct data flows...)

Anyway, wishing you luck in this endeavor!


(author here) Thanks for the comment! I do think you hit on something interesting, and it partly explains why the project is so big :)

To me, shell and Python/JS feel kind of similar, and that was sort of the thesis at the start of the project.

But if you just incrementally add features to Bourne shell, you basically get Korn shell, which is where bash got all the "bashisms" that people don't like.

This paper was surprising to me; the proprietary AT&T ksh was used for GUIs and so forth, AND it was embeddable like Lua:

http://www.oilshell.org/archive/ksh-usenix.pdf

> [ksh] has the capabilities of perl and tcl, yet it is backward compatible with the Bourne shell. Applications of up to 25,000 lines have been written in ksh and are in production use.

> Much of the impetus for ksh–93 was wksh, which allows graphical user interfaces to be written in ksh.

It's a lot more work to make something like Python or JS! Garbage collection is one issue; shells don't have it because they don't have recursive compound data structures or true functions.

So I would say that ksh "failed" because it took a lot of shortcuts. Perl, Python, Ruby, JS, and PHP won. So with Oil I also am upgrading shell, but it was a lot more work since I wanted to make it like the latter languages. It wasn't obvious at the outset how different these things are!

So in a sense Oil is upgrading shell to be more like the languages that won, which I still think is a good idea. It's weird that every computer boots into a programming language REPL, but you're discouraged from learning it because it sucks. There's no reason that language shouldn't be a good one.

---

Yes pipelines are a big deal and the runtime is solid now so they can be enhanced. Ideas here, feel free to chime in: https://github.com/oilshell/oil/issues/843 :)


>"It's a lot more work to make something like Python or JS! Garbage collection is one issue; shells don't have it because they don't have recursive compound data structures or true functions."

An excellent point!

>"Yes pipelines are a big deal and the runtime is solid now so they can be enhanced."

Maybe synchronous (aka, guaranteed linear) and asynchronous (no guarantees) are better ways to look at it...

For example, a simple shell command like 'echo' could be run either synchronously or asynchronously, but if we're assigning the result of it to some variable, it makes sense to run it synchronously, to get that result into the variable before proceeding to the next line of the program...

But, if we're running a disk defragmenting tool, or other process that we don't know when it's going to complete, then it makes sense to run it asynchronously.

Shell commands that are chained together via pipelines may run asynchronously, but the shell may wait for everything to complete to scoop up the output; so you sort of have the entire line that was sent to the shell running synchronously (if you're assigning the result to a variable), even though parts of it are running asynchronously (with respect to one another)...

And of course, via ampersand, you could have the entire line run asynchronous to your program...
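Today's shells already expose this split, roughly: command substitution is synchronous, `&` is asynchronous, and a captured pipeline is synchronous as a whole even though its stages run concurrently. A small sketch:

```shell
# Command substitution is synchronous: the shell waits for echo
# to finish before assigning the result.
result=$(echo "hello")

# & is asynchronous: sleep runs in the background; $! is its PID.
sleep 1 &
bg_pid=$!

# Pipeline stages run concurrently with each other, but capturing
# the output makes the line as a whole synchronous.
count=$(printf 'a\nb\nc\n' | wc -l)

wait "$bg_pid"   # rejoin the background job explicitly
echo "result=$result count=$((count))"
```

What the comment is after, presumably, is making these modes explicit and composable rather than implied by syntax like `$(...)` versus `&`.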

See, I'd almost say that one of the problems is that shell scripting languages have almost no knowledge (and correct me if I am wrong, I might be!) of the pipes they create inside of commands -- an ideal shell language would know about these and be able to hook them for various purposes, and to specify what would be run synchronously and asynchronously...

Or, another weird corner case -- what if you have a shell script that launches other shell scripts asynchronously (in subshells) and they launch other shell scripts also asynchronously -- in deeper level subshells?

How would your language/shell track/handle that case?

How do you communicate/track/determine when/where to assign results to things?

It's sort of like the language/shell must be able to:

A) Explicitly say what is asynchronous and what is synchronous...

B) For asynchronous things, be able to communicate with them, track their status, get results from them when they complete, etc., etc.

Now, of all the programming languages in existence, I think Go is a good candidate for understanding synchronous vs. asynchronous -- but you don't exactly get an "easily scriptable", beginner-friendly shell-scripting language with Go...

Python seems to be the right balance between "ease of use" and "gives you control".

So I'm agreed with all that you've said about Python, and your use of Python in code...


Yes, being able to store the FDs for pipes in variables is something that would make it more explicit and flexible. (It could also introduce deadlocks, though.)

Also, dgsh is along these lines: https://www.spinellis.gr/sw/dgsh/#compress-compare

It appears to use Unix domain sockets, not pipes.

I think I might want to use the Ruby-like blocks for syntax, something like

    grep FOO *.py | wc -l | sort -n

    pipeline {
      pipe :p1  # variables that are pipes
      pipe :p2

      # this syntax isn't great but shows the idea
      grep FOO *.py > &p1
      < &p1 wc -l > &p2
      < &p2 sort -n
    }

-----

Although I think richer pipelines make sense, structured data will probably come first:

https://github.com/oilshell/oil/wiki/Structured-Data-in-Oil


That's another excellent point, by the way...

All Unix (and derivative OS) utilities typically pass unstructured raw data to one another via pipes...

That's sort of a good thing and a bad thing at the same time...

It's great for such things as binary files, binary streams, anything binary in nature...

But equal-and-oppositely, it's terrible if, say, I wanted to take the output of 'ls -al' and put all of those fields into something structured like XML or JSON or BSON or what-have-you.
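As a rough illustration of the pain: turning columnar output into JSON today means hand-rolled parsing. The two-field "name size" input below is made up for the example:

```shell
# Hand-rolled conversion of "name size" columns into a JSON array
# with awk. This is exactly the fragile text munging that
# structured pipelines would make unnecessary.
json=$(printf 'a.txt 120\nb.txt 34\n' | awk '
  BEGIN { printf "[" }
  { printf "%s{\"name\":\"%s\",\"size\":%s}", (NR > 1 ? "," : ""), $1, $2 }
  END { printf "]" }')
echo "$json"
```

One filename with a space in it breaks this parser, which is the whole argument for passing structured records instead of raw bytes.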

I also like your point about FD's for pipes... that's a good idea too.

Yes, I could see deadlocks as possible, but then, maybe yet another extra step of engineering/forethought is necessary -- how do we manage locking in those scenarios? Maybe we could create timers that automatically break a deadlock after a certain number of seconds, and/or put the lock explicitly under program control (like the FD's for pipes), etc., etc.

Anyway, some fascinating stuff!

>"Although I think richer pipelines make sense, structured data will probably come first:

https://github.com/oilshell/oil/wiki/Structured-Data-in-Oil"

Looks like you've been doing your homework! <g>


Performance or correctness. Giving bash/zsh a performant hash-table would trivially empower them without boiling the ocean.


This reminds me: why isn't there a good Python shell? Something with, as the author says, a "domain-specific language for dealing with concurrent processes and the file system", but otherwise basically Python? I'm sure people must have tried, so I guess there are fundamental incompatibilities between the two?


You might be interested in Xonsh: https://xon.sh/


Uh oh. I've always stuck with Bash, because it's guaranteed to be on absolutely everything, and that's valuable for ops work. But this looks so much better! Thanks for pointing it out.


Sometimes I want a more interactive shell. With the native ability to display graphics and view html and PDFs and other docs, without opening a separate GUI program.


Yeah I hope people will build stuff like that on top of Oil. Hopefully this year the embedding story will become clearer, and progress can be made on top.

Some notes here about interactive shell ideas: https://github.com/oilshell/oil/wiki/Interactive-Shell

Some people might think Oil is sort of a text-only or retro project ... but it really isn't, it's a SPEED project. I use shell because it's the fastest to get certain things done.

But other things are faster when done with GUIs. Actually I wrote the last 5 or 6 blog posts in https://stackedit.io because copying and pasting huge swaths of text and links and images is faster with the mouse!

I am a die-hard Vim person for code, but I realized that "thinking and writing" is aided by the mouse and by rich GUIs (but not slow ones, which is hard!). Previewing hyperlinks is also very important.


One thing newbies often ask is why they can't write a shell script which changes the environment or directory of their running shell. The answer, of course, is that you can't, without sourcing it.

People also ask the same question about Windows, and even DOS before it; there you actually could do it from a batch file, since CMD.EXE/COMMAND.COM effectively sources all batch files, it doesn't run them in a subprocess – but the same issue occurs if you write a program in some other language than batch, it runs in a subprocess and so can't modify CMD.EXE/COMMAND.COM's environment. (People resorted to some tricks though, like having a BATCH1.BAT call their program, and then afterwards call BATCH2.BAT, and their program modifies BATCH2.BAT on disk).

Actually on DOS, there is a way to modify COMMAND.COM's environment – COMMAND.COM installs an undocumented interrupt, INT 0x2E, which you can use to send commands to COMMAND.COM to run. So you can actually modify COMMAND.COM's environment. (Only the root COMMAND.COM installs a handler for INT 0x2E; nested ones do not.) Of course, due to no memory protection, there are also nasty ways of doing this, like modifying COMMAND.COM's memory. (And changing COMMAND.COM's directory isn't an issue, since under DOS, the current directory is system-wide, not per-process.)

I was thinking, you could do something like "INT 0x2E" in a Unix shell. Create a Unix domain socket, and the shell listens for commands on it to execute. Put the path to the socket in an environment variable which is inherited by subprocesses, e.g. SHELL_CONTROL. Then, a subprocess can inspect and modify the shell's environment, working directory, functions, aliases, running jobs, etc, by reading/writing the Unix domain socket mentioned in SHELL_CONTROL.
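A toy version of the idea, substituting a FIFO for the Unix domain socket (SHELL_CONTROL is hypothetical, and a real design would need authentication and message framing):

```shell
# Toy sketch of the SHELL_CONTROL idea using a FIFO. A child
# process sends a command over the control channel; the parent
# "shell" evals it in its own process, so the parent's own
# working directory actually changes.
SHELL_CONTROL=$(mktemp -u)
export SHELL_CONTROL
mkfifo "$SHELL_CONTROL"

( echo "cd /tmp" > "$SHELL_CONTROL" ) &  # child sends a command

read -r cmd < "$SHELL_CONTROL"           # parent receives it...
eval "$cmd"                              # ...and runs it itself
rm "$SHELL_CONTROL"
pwd
```

The `eval` is what makes this different from running a script: the change happens in the listening shell's process, not a subprocess.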


Yes there are some ideas discussed here:

https://github.com/oilshell/oil/issues/738

I would call the 2 possible solutions an API or IPC, and what you're describing is basically IPC.

The devil is in the details though... I'm interested in a prototype :) A consumer of the interface will help shape it.


Just wanted to mention https://github.com/nushell/nushell which seems to be doing some very cool stuff.


re: "are you reinventing perl" Oil feels much more like TCL to me than it does perl.




