My personal feeling is that if programmers embraced voice input, simply not having to type when issues arise would help. Of course, most programmers will rant about how keyboards are so much more efficient, which they might be until you can’t use one.
Voice input helped me recover in the past. It was a challenge to get a workable setup, and getting back to my old speed took a lot of tweaking and experimenting.
Sadly, the CPU load and the effort of maintaining the setup across ever-changing OSes have been too much. Though I do still maintain easier keyboard shortcuts almost religiously. AutoHotkey and Karabiner FTW.
I've designed Talon (my voice / alt input project) from the ground up for cross-platform scripts/configs. Once I port the OS layer to Windows/Linux, you should be able to reuse almost everything (with app/OS-specific stuff overridden at whatever abstraction layer makes sense).
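The rough shape of that layering, as a made-up Python sketch (this is just the idea, not Talon's actual API):

    import sys

    # Commands that behave identically everywhere.
    BASE_COMMANDS = {
        "copy that":  "ctrl-c",
        "paste here": "ctrl-v",
        "new tab":    "ctrl-t",
    }

    # Only the bindings that genuinely differ get overridden per OS.
    MAC_OVERRIDES = {
        "copy that":  "cmd-c",
        "paste here": "cmd-v",
        "new tab":    "cmd-t",
    }

    def commands_for_platform():
        cmds = dict(BASE_COMMANDS)
        if sys.platform == "darwin":
            cmds.update(MAC_OVERRIDES)
        return cmds

The point is that your grammar lives in the base layer and survives an OS switch untouched; only the thin override layer changes.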
What sort of CPU load are you seeing? I'm able to dictate on battery, with peaks averaging ~20% CPU. (Dragon is a different beast, spiking to >80% any time it's over the noise threshold, so you need a really good mic to get good battery life.)
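If you want to put a number on it, here's roughly how I'd sample a recognizer's load with psutil (the PID below is a placeholder):

    import psutil  # pip install psutil

    def avg_cpu(pid, samples=10, interval=1.0):
        # cpu_percent(interval=...) blocks for `interval` seconds per reading.
        proc = psutil.Process(pid)
        readings = [proc.cpu_percent(interval=interval) for _ in range(samples)]
        return sum(readings) / len(readings)

    print(avg_cpu(12345))  # replace 12345 with the engine's actual PID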
I think that if you made a module that did voice commands for VLC media player, it could be quite popular. I've often wished I could just yell out "VLC pause" instead of having to walk over to the keyboard or mouse.
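A rough sketch of how you could wire that up today, using the SpeechRecognition package and VLC's built-in web interface (the port and password are whatever you start VLC with, e.g. vlc --extraintf http --http-password secret):

    import requests                  # pip install requests
    import speech_recognition as sr  # pip install SpeechRecognition PyAudio

    r = sr.Recognizer()
    with sr.Microphone() as mic:
        print("listening...")
        audio = r.listen(mic)

    text = r.recognize_google(audio).lower()
    if "vlc pause" in text:
        # pl_pause toggles pause/play via VLC's web interface
        requests.get("http://localhost:8080/requests/status.xml",
                     params={"command": "pl_pause"},
                     auth=("", "secret"))  # VLC uses an empty username

You'd wrap the listen/recognize part in a loop for always-on use, but that's the whole trick.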
I can see why people hold out against this; voice programming sounds slower in general, and vastly slower during the ramp-up period.
But I wonder if we'd be better off enabling voice input sooner, before we're in serious trouble, and using it sporadically. Mixing voice and keyboard input as needed looks like a way to substantially reduce typing while still using the keyboard whenever voice is too inconvenient.
This is one of my huge goals with Talon. I want to convince people who have no symptoms that using voice + eye tracking + etc is cool and will help them in useful / exciting ways. I think RSI is a kind of silent epidemic, and we won't ever solve it by only treating people who already show major symptoms.
On the other hand, it's really not much slower with the state of the art, especially when you mix in stuff like eye tracking at a really core level (we're playing with stuff like autocompleting based on a symbol you just looked at!). I'm not far enough along with Talon to be pushing the benchmarking side of things heavily yet, but one early test was about 2/3 the code input performance of a 90wpm typist on the same code, with a lot of obvious places to improve. I think good application of continuous recognition, resumable grammars, and really context-specific helper code can push specific workloads way past what a keyboard/mouse can do (which calls back to the goal of impressing people who aren't injured yet).
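To make the gaze part concrete: the core of "complete the symbol I just looked at" is basically a nearest-bounding-box lookup. Everything here is a toy illustration, not Talon code:

    # symbols: (name, x, y, width, height) boxes reported by the editor
    def symbol_at_gaze(gaze_x, gaze_y, symbols):
        def dist_sq(sym):
            name, x, y, w, h = sym
            cx, cy = x + w / 2, y + h / 2
            return (cx - gaze_x) ** 2 + (cy - gaze_y) ** 2
        return min(symbols, key=dist_sq)[0]

    symbols = [("frobnicate_widget", 100, 200, 160, 18),
               ("frame_count",       100, 240, 110, 18)]
    print(symbol_at_gaze(130, 238, symbols))  # -> frame_count

The interesting engineering is in getting stable gaze data and accurate symbol boxes, not in the lookup itself.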
Then there's the professional app space - e.g. Photoshop, CADD, and video/audio production tools could really benefit from voice workflows (imagine using a pen tablet augmented with voice + eye tracking instead of complex UI).
Here's one for your list that worked for me within a week of starting: dropping wheat from my diet. I noticed you don't even have diet as a category.
For more detail, I was sleeping in two wrist braces every night and doing stretches and strength exercises for an hour or more a day before the diet changes. Now I don't need any exercises or braces.
When I regress for a day or two it comes back, but not as bad.
Weird that you're getting downvoted. The only explanation I can come up with is that some people's gut reaction to testimony against wheat is a result of addiction.
It couldn't possibly be that too many people have seen wheat/gluten blamed for a litany of unrelated ills with no data or scientific backing, supported entirely by anecdotes susceptible to the placebo effect.
I'm afraid of the pressure happening the other way: I think open office plans reduce the chance of people trying voice programming. People in the community specifically talk about working from home or asking for private office space. There's also the Stenomask, which I've heard works great for dictation.
IMO, the current state of voice recognition in general is absolute garbage for anyone who doesn't speak American English natively. Dragon for Windows is touted as having great voice recognition, but it often fails to understand 'page up' / 'page down' and other simple commands. The system built into OS X is OK if dictation full of mistakes that need correcting by hand (i.e., with the keyboard) is something one desires; I can't imagine it ever is. Interestingly enough, the mobile voice recognition features seem to work a little better, but systems like Siri and Google's assistant are completely useless for getting actual work done. The accuracy of all these systems is such shit that I have turned them all off; they are not even useful for dictating a plain English message into Slack, let alone driving a computer and doing productive programming and system administration work. On the other hand, I've seen native speakers do exactly that, which is impressive and also, I admit, makes me jealous, as I don't expect to have access to this technology for many years to come.
Can you join the Slack linked at talonvoice.com or send an email to the address in my HN profile?
I'd probably start by asking for audio samples of you saying a couple of sentences and commands in English.
Then I'd try them locally against a couple of speech recognition engines (roughly like the sketch after this list).
Then we'd talk about potential issues and ways to influence recognition accuracy.
Then I can help you try out Talon for both dictation and command input.
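For the engine-comparison step, something like this is what I'd run locally (SpeechRecognition with its bundled Sphinx and Google backends; the file names are placeholders):

    import speech_recognition as sr  # pip install SpeechRecognition pocketsphinx

    r = sr.Recognizer()
    for path in ["sample1.wav", "sample2.wav"]:  # your recorded test phrases
        with sr.AudioFile(path) as source:
            audio = r.record(source)
        for name, engine in [("sphinx", r.recognize_sphinx),
                             ("google", r.recognize_google)]:
            try:
                print(path, name, engine(audio))
            except sr.UnknownValueError:
                print(path, name, "(no hypothesis)")
            except sr.RequestError as e:
                print(path, name, "(engine error: %s)" % e)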
As an anecdote, I've heard from some of my non-native-English-speaking users that they need to avoid certain word sounds in commands (by changing the command trigger words) because their accent doesn't emphasize those sounds enough for good accuracy.
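The workaround is usually trivial once you find the problem sound: keep the action, rename the trigger. In a made-up dict-style command map (not anyone's real config):

    COMMANDS = {
        "page up":   "key:pgup",
        "page down": "key:pgdown",
    }

    def rebind(commands, old_phrase, new_phrase):
        # same action, new spoken trigger
        commands[new_phrase] = commands.pop(old_phrase)

    # e.g. if a final "-up" gets swallowed by your accent, pick phrases
    # with more acoustic contrast:
    rebind(COMMANDS, "page up", "scroll north")
    rebind(COMMANDS, "page down", "scroll south")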
Anyway, on GitHub I've collected several dozen articles and created a table of things people have tried.
https://github.com/melling/ErgonomicNotes
Here’s the current state of voice programming:
https://github.com/melling/ErgonomicNotes/blob/master/progra...