I rarely use ctrl+R, because most of the time I'm searching based on the command name, which is at the start of the line (unless it's in a pipeline, etc.), so I use this instead in `.inputrc`:
# use up and down arrow to match search history based on typed starting text
"\e[A": history-search-backward
"\e[B": history-search-forward
This is actually a general feature of the GNU readline library, which a lot of programs, including bash, use. For instance, many REPLs like ghci use it too.
Additional tip: if you overshoot and miss the command you wanted by hitting ctrl+R too many times (as sometimes happens when looking for an older command), you can go forward with ctrl+S. [EDIT: As Hikikomori pointed out (thanks!), I got my shortcuts mixed up: ctrl+S is the shortcut for zsh, and the equivalent for bash is ctrl+shift+R]
(ctrl+S and ctrl+R are based on emacs shortcuts, where ctrl+S is search and ctrl+R is reverse search)
For zsh, zaw[1]'s history is insanely useful. I don't really think a neural network is needed. Usually I get to the exact line I wanted with a few keystrokes.
This is such a cool project! Plus a pretty awesome example of using Rust for CLIs. I'm curious about the neural network, though. Is that also written in Rust? Or are you embedding something else?
Just saw the implementation. It goes a little over my head unfortunately, but I am curious about its effects. Did you compare the non-neural-net version with the neural-net version? If so, what differences did you find?
I found that the neural network does a little better than a simple linear regression function for weighting the parameters. I train it on a few months of my own shell history to predict, based on the last commands, what I'll type next.
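To make the "weighting the parameters" idea concrete, here is a rough Rust sketch of a linear baseline like the one described. This is not McFly's actual code; the feature names and weights are invented for illustration:

```rust
// Illustrative only: a linear "weighting" baseline for ranking history entries.
// The features and weights below are hypothetical, not McFly's real ones.
struct Features {
    recency: f64,      // how recently the command was run (normalized 0..1)
    frequency: f64,    // how often it has been run (normalized 0..1)
    dir_match: f64,    // 1.0 if it was last run in the current directory
    exit_success: f64, // 1.0 if it exited with status 0 last time
}

fn linear_score(f: &Features, w: &[f64; 4], bias: f64) -> f64 {
    w[0] * f.recency + w[1] * f.frequency + w[2] * f.dir_match + w[3] * f.exit_success + bias
}

fn main() {
    let weights = [0.5, 0.3, 0.4, 0.2]; // in practice learned from history, not hand-picked
    let entry = Features { recency: 0.9, frequency: 0.2, dir_match: 1.0, exit_success: 1.0 };
    println!("score = {:.3}", linear_score(&entry, &weights, 0.0));
}
```

A small neural net replaces `linear_score` with a learned, non-linear function of the same features, which is where the "slightly better than linear regression" comparison comes from.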
Cool project and well executed. My only complaints are:
1. The GUI should display at the bottom of the screen rather than at the top. I found myself constantly having to jump my eyes from the bottom of the terminal where my command was, to the top of the screen.
2. Show the full commands in the search results. Maybe they could be wrapped? Long commands with changes towards the end of the string get chopped off, making it hard to tell the differences between them in the display.
Nifty! Is the neural network constantly being adjusted from the user's own patterns of history re-use?
This also reminds me of one of my dream features for a shell: full dataflow input/output/provenance histories, on a per-file(-version) basis.
For example: show all the commands a certain file has been read from, show the command(s) that wrote to a file, show true steps by which a certain file was constructed from predecessor files/commands, etc.
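Just to make that idea concrete (nothing any shell does today, and every name here is hypothetical): the shell would need to log, per command, which files were read and written, and keep an index roughly like this sketch:

```rust
// Hypothetical sketch of a per-file provenance index for a shell.
// All types and fields are invented for illustration.
use std::collections::HashMap;

#[derive(Debug)]
struct CommandRecord {
    command: String,          // the full command line as typed
    timestamp: u64,           // when it ran (seconds since epoch)
    read_paths: Vec<String>,  // files the command read from
    wrote_paths: Vec<String>, // files the command wrote to
}

#[derive(Default)]
struct ProvenanceIndex {
    records: Vec<CommandRecord>,
    // Map from file path to indices of records that wrote that file.
    writers: HashMap<String, Vec<usize>>,
}

impl ProvenanceIndex {
    fn add(&mut self, rec: CommandRecord) {
        let idx = self.records.len();
        for path in &rec.wrote_paths {
            self.writers.entry(path.clone()).or_default().push(idx);
        }
        self.records.push(rec);
    }

    // "Show the command(s) that wrote to a file."
    fn writers_of(&self, path: &str) -> Vec<&CommandRecord> {
        self.writers
            .get(path)
            .map(|ids| ids.iter().map(|&i| &self.records[i]).collect())
            .unwrap_or_default()
    }

    // "Show the steps by which a file was constructed": walk back from the
    // most recent writer through the files it read, recursively.
    // (No cycle detection here; a real tool would need it.)
    fn lineage(&self, path: &str, out: &mut Vec<String>) {
        if let Some(rec) = self.writers_of(path).last() {
            for input in &rec.read_paths {
                self.lineage(input, out);
            }
            out.push(rec.command.clone());
        }
    }
}

fn main() {
    let mut idx = ProvenanceIndex::default();
    idx.add(CommandRecord {
        command: "sort raw.csv > sorted.csv".into(),
        timestamp: 0,
        read_paths: vec!["raw.csv".into()],
        wrote_paths: vec!["sorted.csv".into()],
    });
    idx.add(CommandRecord {
        command: "awk -f report.awk sorted.csv > report.txt".into(),
        timestamp: 1,
        read_paths: vec!["sorted.csv".into()],
        wrote_paths: vec!["report.txt".into()],
    });
    let mut steps = Vec::new();
    idx.lineage("report.txt", &mut steps);
    println!("{:#?}", steps); // both commands, in construction order
}
```

The hard part isn't the index, it's capturing the read/write sets reliably (via ptrace, fuse, or similar) without slowing the shell down.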
This is a really nice example of a simple, practical, "non-deep" NN. I'm wondering, are there any good learning resources you could recommend for starting a project like this?
It makes me want to try a personal project that makes use of a tiny lightweight NN as well. I see giant AlphaGo neural nets mentioned so frequently that I forgot they could be lean :)
I read some tutorials on backpropagation for this, but if you want to get into ML in general, I highly recommend fast.ai and Andrew Ng's free Stanford ML course.
I’m still quite new to neural networks. Can you explain the material benefit to using a neural network over a priority queue with a similar weighting system? Are you giving any other input to the neural network than simply the metrics you listed?
P.S. e.g. a polynomial over the metrics.
P.P.S. I imagine you’re using the neural network to tune the weights?
I started with a list of commands prioritized by a linear function of the metrics listed. It did okay, but since I was learning the linear function with back propagation, I figured a "real" network would do slightly better, and it seems to (but possibly only slightly).
Rather, the first one was simply a single-node linear perceptron (a linear function) that I trained with backprop because I could, even though there are better techniques for fitting a linear function. Now that it's a "real" network, backprop is appropriate.
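For anyone curious what a "single-node perceptron trained with backprop" boils down to, here is a minimal Rust sketch (not the project's code, with made-up toy features and labels): in the single-node case, backprop reduces to plain per-example gradient descent on a logistic unit.

```rust
// Minimal sketch: a single-node perceptron (logistic regression) trained with
// per-example gradient descent, i.e. the degenerate case of backpropagation.
// In a history-ranking setting, x would be features of a candidate command and
// the label whether that command was the one actually run; this data is toy data.
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

fn train_step(weights: &mut [f64], bias: &mut f64, x: &[f64], label: f64, lr: f64) {
    // Forward pass: weighted sum plus bias, squashed by a sigmoid.
    let z: f64 = weights.iter().zip(x).map(|(w, xi)| w * xi).sum::<f64>() + *bias;
    let pred = sigmoid(z);
    // Backward pass: gradient of the log loss w.r.t. weight i is (pred - label) * x_i.
    let err = pred - label;
    for (w, xi) in weights.iter_mut().zip(x) {
        *w -= lr * err * xi;
    }
    *bias -= lr * err;
}

fn main() {
    // Toy data: two features per candidate, label 1.0 = "this command was chosen".
    let data = [
        ([0.9, 1.0], 1.0),
        ([0.1, 0.0], 0.0),
        ([0.8, 0.0], 1.0),
        ([0.2, 1.0], 0.0),
    ];
    let mut weights = [0.0, 0.0];
    let mut bias = 0.0;
    for _ in 0..1000 {
        for (x, y) in &data {
            train_step(&mut weights, &mut bias, x, *y, 0.1);
        }
    }
    println!("weights = {:?}, bias = {:.3}", weights, bias);
}
```

A "real" network just adds one or more hidden layers between the features and that output unit, so the gradients have to be propagated back through the extra layers, which is where backprop earns its name.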
Sorry, I agree that was confusing. Online learning is a term in ML for when you train a model incrementally over time. McFly won't connect to the Internet.
Ah, thanks for the clarification! To a non-ML person, this brought back memories of what Canonical was trying to do with search functionality on Ubuntu, by fetching ads over the internet to interleave in local search results.