
> "The solution here is to augment our tools with the information a screen reader needs."

So we agree that there should be different interfaces suited to each experience; we just disagree on the implementation details. I submit it's best to build entirely separate experiences, whereas you submit it's enough to take the graphical-first experience and annotate it (using various annotation technologies) so that a screen reader can generate an equivalent experience on the fly. My response to this is:

1. Annotations and metadata (ARIA labels, et al.) make it easier for a screen reader to present relevant information accessibly, but they create an unnecessary coupling between the visual-first frontend and the accessible frontend, when in reality the two are built for different kinds of users.

2. Annotations are a decent starting point, but they are NOT a substitute for building an "accessibility-first" experience, because they are too limited. You can annotate a progress bar with its current value, but you can't annotate a graph in a way that conveys its full meaning. You could, however, have built an entirely separate experience which conveys the same data the graph does, in an accessible manner (given the right tools and frameworks).
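To make the second point concrete, here is a minimal sketch (all names hypothetical, not a real API) of what an accessibility-first alternative to a chart could look like: instead of annotating the visual, the same underlying data series is rendered as a plain-text summary a screen reader can read linearly.

```typescript
// Hypothetical data series that a visual frontend would render as a line chart.
interface Series {
  label: string;
  points: { x: string; y: number }[];
}

// Accessible-first rendering: describe the data in prose instead of
// annotating the graphic. Names and format are illustrative only.
function describeSeries(s: Series): string {
  const ys = s.points.map(p => p.y);
  const min = Math.min(...ys);
  const max = Math.max(...ys);
  const first = s.points[0];
  const last = s.points[s.points.length - 1];
  const direction =
    last.y > first.y ? "rose" : last.y < first.y ? "fell" : "held steady";
  return `${s.label}: ${direction} from ${first.y} (${first.x}) to ${last.y} (${last.x}); range ${min}-${max}.`;
}

const revenue: Series = {
  label: "Monthly revenue",
  points: [
    { x: "Jan", y: 10 },
    { x: "Feb", y: 14 },
    { x: "Mar", y: 12 },
    { x: "Apr", y: 18 },
  ],
};

console.log(describeSeries(revenue));
// "Monthly revenue: rose from 10 (Jan) to 18 (Apr); range 10-18."
```

The point isn't this particular summary format; it's that the accessible experience consumes the data directly, rather than being derived from annotations bolted onto the visual one.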
