We'd need a formal language that's a little more amenable than what we currently have to type into the terminal. But dictating to a terminal that could comfortably listen would be great!
Oh yes. I spend much of my time there. To the point where I don't even have a graphical file manager, let alone a desktop environment, installed on my machine.
However, the shell, vim, tiling window managers, and keyboard-navigation plugins for the browser only go so far, and really are only an ideal interface for programming, reading, writing, and manipulating data.
To work on audio or images, or even to use many web apps, I have to accept a less fluid interface, one spatially laid out with visual elements representing actions I might take. There are hotkeys, but 1. they're usually not sensibly laid out, and 2. they don't compose, so I have no choice but to dig around in interfaces (which tend to be flat and modeless).
It seems like I should be able to manipulate image or musical data in the fluid, comfortable, composable, hands-off way I can approach programming tasks or even creative programming, but the tools just aren't there. There's no real high-level interaction language that I've seen for, say, music or graphics. There is no vim for music.
When I edit Clojure code, for example, in vim with vim-sexp or paredit, I'm not editing text. I feel like I'm navigating and editing a structure that my editor and I both have a shared understanding of: a structure with a clear interaction language and representation. Working with graphics is a pain because I feel that I'm manipulating the artifacts rather than controlling inputs into a process. And while music creation software has come a long way, there is a lot of visual noise, buttons that represent actions, and skeuomorphic elements rather than a clean interaction language. I still prefer to at least begin most of my music working with hardware sequencers because they're focused and discrete.
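To make that concrete, here's a toy sketch of what I mean (the forms are made up, and the exact keystrokes depend on the plugin): with structural editing, the unit you grab and change is the expression, not the characters, so the parentheses can never end up unbalanced.

    ;; An illustrative form to edit:
    (defn keep-evens [xs]
      (filter even? xs))

    ;; With the cursor anywhere inside (filter even? xs), a single
    ;; "wrap this expression in (sort ...)" operation acts on the whole
    ;; form -- no selecting text or counting parentheses by hand:
    (defn keep-evens-sorted [xs]
      (sort (filter even? xs)))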
There are some great tools for working with sound (SuperCollider, Overtone), but those are programming environments more so than interactive ones.
I want to be able to describe part of an idea, supply some rough parameters, and get immediate feedback as to what that might look or sound like, with the option of using whatever data I have at hand as an input. Then I want to be able to adjust those parameters, possibly even making their values a function of other elements I've already made, and when I'm happy with the result I want to be able to select over, edit, and build off of the objects I've created.
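For what it's worth, the closest I've gotten is sketching sounds in Overtone at a REPL. It's still typing code rather than the interaction language I'm after, but it does capture the describe / parameterize / adjust loop. A rough illustration (the instrument and its parameters are mine, purely for the sake of example):

    ;; Overtone is a Clojure front end to SuperCollider.
    (ns scratch.sketch
      (:use [overtone.live]))

    ;; Describe the idea once, with rough parameters...
    (definst buzz [freq 110 cutoff 1200 amp 0.4]
      (* amp (lpf (saw freq) cutoff)))

    ;; ...hear it immediately...
    (def b (buzz))

    ;; ...then adjust the parameters of the running sound instead of
    ;; poking at a picture of a knob:
    (ctl b :freq 220 :cutoff 800)

    (kill b)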
Buttons that advertise actions are just noise to me. I want the entire screen to be feedback on the state of my project and the action I'm constructing.
Drag and drop is a drag. Why am I carrying icons across a computer screen? Why am I performing virtual manual labor? Copy and paste is a little better, but I want to see what I've grabbed before I place it. I want to be able to paste in multiple places and edit them all at once. And I want to be able to describe, rather than having to indicate, where I want things to go and how I want them to change.
All the "best" interfaces for creative work begin with the assumption that the user is going to point at things or drag a finger across a touch screen. Hotkeys are there for power users, but they mostly provide quick access to a palette of isolated, uncomposable actions with which you modify the artifact in steps, moving it closer to your vision.
Wouldn't it be nicer to modify the transformation you want to apply in steps?
Poking at things, whether with a finger or a mouse pointer, is almost certainly not the pinnacle of UI. It's an emulation and extension of things people do with their hands (drawing a picture with a pencil, cutting and splicing tape, penciling notes onto sheet music). Lots of where and not enough what.
Lots of showing the computer what you want done, or really doing it manually with the provided tools, rather than telling the machine what to do or describing what you want.
The speech recognition and natural language processing work that's being done is promising. Human languages can be written as well as spoken though. (And written with a lot less ambiguity.)
I'm going to keep clinging to my array of general-purpose buttons that I've built muscle memory for, and do what I can to make tools for it. I feel very strongly that the interactive picture book is not an actual improvement on language-based interfaces, though. It's different, and it's a recent development based on recent technology, but it's really a separate thing altogether, even though society is chasing the trend and talking about it as though it's the best man-machine interface we have to date.
The keyboard sucks. It gives you RSI, it's big, it's noisy. But when you know it, you know it. You can go in any of 101 different directions instantaneously when using it as an array of buttons, and when using it to actually type, well, it's infinite. No feature ever needs to get dropped because its button was wasting space on the screen. Nothing has to waste space on the screen. You can read the docs, refer back to them when needed, memorize what you need and discard the rest, and maybe peruse them again periodically as your software updates.
Well that's my 11 cents. Rant over. I have to confess to being one of those people with a display approaching the size of a desk (that I have no desire to ever touch for any reason).
Sign me up, can't stand all the pointing-and-clicking.