A lot of blind users don't use Linux because the major screen reader vendors only support Windows. And Windows has historically had a lot more assistive technology built in.
I'm kinda disappointed in the lack of support in the Open Source world for this kind of thing, but I've been guilty of it myself in the past. These days, I always ask, "How does this affect users with no or limited sight? Limited mobility? Limited hearing?" I don't always know how to resolve the problems, but I'm always thinking about it and trying to learn. I recently adopted a rule for myself that I won't publish any new video without complete and accurate captions (the automatic captioning at YouTube is insufficient to satisfy this requirement, but it's wonderful that they make captioning so easy and fast). Likewise, we have a new theme, developed by a third party, that's miles ahead of any UI we've had in the past, but we're not gonna make it the default until we've at least made a clear path for screen reader users to get to a theme that works well for them (our old mobile theme uses a hierarchy of menus and is surprisingly usable with screen readers, so we continue to support it for that reason alone).
Emacspeak has been around for a long time, and it may be something I should try just so I can wrap my head around how one would use our software without being able to see it. I wonder, however, whether it would give me anywhere near a realistic average user experience.
I just had eye surgery, so I had a 48-hour "blackout" period where I tried different screen readers and methods of programming. I have an iPhone and a MacBook Air, and I configured emacs and Emacspeak ahead of time.
Many apps on the iPhone work really well: reading email and sending texts was very easy. Writing email was difficult: I noticed how nonlinear composing email was. Browsing the web was either fine or impossible, with nothing in between.
Some apps had no assistive-technology support and were completely unusable (giving only a "thunk" for feedback). This was very frustrating and is something I will check when evaluating mobile apps in the future.
When someone made a typo, it confused the reader. I couldn't find an easy way to get it to spell things out, and I had to ask a sighted person for help reading something.
Programming was interesting.
CL (with paredit) and K/Q were no more difficult without eyes (although the choice of words for symbols left something to be desired; I would create a special minor mode if I had to do this again).
Java/C/C++ was "okay". I felt like I could do better with more practice, but it was difficult to figure out where the cursor was sometimes.
Python and Haskell were impossible: I kept having to "re-speak" each line and move the cursor around, putting extra spaces in to figure out the indentation level. I could only hold small functions in my head, and I frequently forgot where I was. -1/10, would not recommend Python to blind people.
Kudos to you for undertaking this task. I think a lot of folks would have simply not used their devices for 48 hours rather than learn the assistive tech.
That said, please don't fall into the trap of assuming that your experience is definitive, and of recommending or not recommending things to those of us who may be blind for our entire lives based on your 48 hours of experience. Case in point: I know a number of blind Python developers. I myself did a lot of Python during college, back when Zope was huge and Plone didn't exist. It's no more or less difficult than any other high-level language. Just about every screen reader I've used has the ability to speak line indentation (e.g. "8 spaces def fn(...") and this makes tracking scope easy. Holding a function in your head based on hearing it spoken is a skill that improves with practice. I've coded to varying degrees in every language you've listed, and while some of them do have their unique accessibility challenges (Haskell in particular I find dense because "f . g $ h i" packs a lot of meaning into something with little to break it up), I've learned to aurally parse them over time.
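To make the "speak line indentation" idea concrete, here's a small sketch in Python. The function name and spoken format are my own invention for illustration, not any particular screen reader's output:

```python
def speak_line(line):
    # Announce the count of leading spaces before reading the rest of
    # the line, roughly the way many screen readers report indentation.
    stripped = line.lstrip(" ")
    n = len(line) - len(stripped)
    return (f"{n} spaces " if n else "") + stripped

print(speak_line("        def fn(x):"))  # "8 spaces def fn(x):"
```

Hearing the indentation count up front is what makes scope tracking in whitespace-delimited languages workable by ear.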
FWIW, as a blind person, I imagine that coordinating the motion of a 2-ton block of metal and glass down a strip of asphalt, surrounded by a number of other chunks of metal and glass moving as fast as baseballs and prone to factors like weather and surface conditions, is a very fragile process that terrifies the hell out of me if I really think about it for any amount of time. But I wouldn't recommend that sighted folks avoid driving just because I can't imagine how it routinely gets done. :)
I promise I don't actually mean to speak for anyone other than myself.
I have observed assistive technology before and wondered just how effective it is. I suspect many sighted programmers (like the parent poster) have wondered this as well, and the point of my post was, to use your metaphor, to point out it is a very different experience riding in a car versus driving one.
That said, some parts of your report surprise me.
Do you program in K?
You wrote that you've programmed in all of the languages I mentioned, but you remarked on how dense Haskell seems, and K is much denser than Haskell.
For example, and I'm sorry if your screen reader doesn't like this, in K I can write:
c:{+(.#:'=x;?x)}
Q sounds better on a screen reader to me, but is harder for my eyes to parse:
c:{flip(value count each group x;distinct x)}
Here's a Python implementation of "c" above:
    def c(v):
        h = {}
        for el in v:
            h[el] = h.get(el, 0) + 1
        a = []
        for el in h.keys():
            a += [[h[el], el]]
        return a
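(For what it's worth, the Python could be tightened with the standard library. This is just a sketch; it assumes Python 3.7+, where Counter preserves insertion order, so the output order matches the loop version above:)

```python
from collections import Counter

def c(v):
    # Pair each distinct element's count with the element itself,
    # matching the [[count, element], ...] shape of the loop version.
    return [[n, el] for el, n in Counter(v).items()]

print(c("abcab"))  # [[2, 'a'], [2, 'b'], [1, 'c']]
```

Even so, it remains far more verbose than the K or Q one-liners.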
Now, I don't know if you know K or Q, but I must say that I don't find the Python easy to read, not with my eyes or with my ears. I pointed out earlier that Python functions being longer and delimited by whitespace caused me a great deal of frustration; if you don't know K or Q, I hope this gives you a better understanding of what I'm comparing it with.
On the other hand, if you do know K or Q, I'm wondering if you could explain how experiencing a Python program like that in your environment is preferable to the K or Q example.
GNOME has pretty great screen reader support built in (I think this was work that Sun did before Oracle bought them). The native apps have good screen reader support in my experience; unfortunately, support for the web is lacking (a lot of the new ARIA stuff is unsupported or broken).
There was a really nice lightning talk this year at LibrePlanet about increasing accessibility in free software. One of the key takeaways was that good text is important: all of the widgets should be labeled appropriately; otherwise there isn't a prayer of making the software accessible.
If you're looking for someone to contract with to improve your accessibility, I do lots of web development and would be happy to help. I suspect you have my contact info already (assuming you're the swelljoe I suspect you of being. :)
Hahah...Hi, Nolan. I was actually just talking to my co-founder about whether we could find some budget to hire you. And, perhaps we can work out some kind of barter, as well. I know you've got some of your own projects, maybe I can be useful on one or more of those, in exchange.
I once tried the Oralux Linux distribution, as a non-blind user but without looking at the screen, to see how far I could get.
It was slow going, but I managed to follow the instructions at boot to start an emacs tutorial and get to the point of being able to open, edit and save a text file.
It looks like it's no longer developed, but hopefully there are still active distros that are usable out of the box by the blind.
I am actually working on a large project involving the new 2016 braille standard for the blind, with regard to translation, text editing, and a few other factors. It's nice to see that other people care about this demographic and are trying to help out. I think it's a really valuable experience to go through your own page with a screen reader and see how it feels. a11y is a whole different world of UX.
I've always been surprised by the lack of work put into systems for the blind, because systems for the blind can (sometimes) also be used by sighted people who have to, or would rather, be looking somewhere else.
Its protagonist was not blind, but rather felt that dyslexia prevented using the command line and kept them on the Microsoft stack. In fairness, not using the command line is the only really sad part of it, as far as I'm concerned.
I've recently been working with a blind Windows user, and started wondering if a terminal interface would be easier to use. It's more linear, and you'd need the ability to skip the narration back and forward, but the power of even a simple "ls filenameIAmLookingFor*", versus trying to scroll around a GUI, seems obvious.
It's a wonderful portmanteau, but the all-important question is: does the pronunciation split happen on B + Linux to emphasize "Linux", or Blin + ux to emphasize "Blind"?
Perhaps you're making a crude attempt at sarcasm; if not, I'm curious: did you browse via a braille terminal, or do you think it sounded bland using text-to-speech?