Hacker News

The eye tracking sounds like it could be incredibly useful as a 3rd input.

Very cool stuff, thanks for sharing




Beyond replacing the pointer, there is also the idea of using eye tracking on a virtual keyboard with Swype-style entry.


I dunno... "swyping" your eyes around a virtual keyboard seems like it'd cause a lot of strain on your eyes.


There are alternative virtual keyboard styles (typically made for use with a mouse and impaired motor control) which would work better. Fewer "fine" movements.

Combined with predictive typing, you could go pretty far.


Do you have any examples?


Not the OP, but Dasher is a classic example that works very well for a single 2D input like using a joystick: https://youtu.be/0d6yIquOKQ0


I don't know about that, but I'd love to have eye tracking for focusing windows. Since I don't generally maximize windows, I find I'm often typing hotkeys into the window I'm looking at rather than the window my mouse is on (or the last-focused window, depending on your WM).
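Focus-follows-gaze could be sketched in two pieces: a hit test mapping the gaze point to a window, and a short hysteresis delay so a passing glance doesn't steal focus. This is a minimal, toolkit-agnostic sketch; the window-list structure and `GazeFocus` class are hypothetical placeholders, since real window managers expose focus through their own APIs (e.g. EWMH on X11).

```python
FOCUS_DELAY_MS = 300  # hysteresis: a glance shorter than this doesn't switch focus

def window_under_gaze(windows, gx, gy):
    """Return the frontmost window whose rect contains the gaze point.

    `windows` is assumed sorted front-to-back, each a dict with a
    "rect" of (x, y, width, height) in screen coordinates.
    """
    for w in windows:
        x, y, width, height = w["rect"]
        if x <= gx < x + width and y <= gy < y + height:
            return w
    return None

class GazeFocus:
    """Tracks which window should hold focus, with a dwell delay."""

    def __init__(self, delay_ms=FOCUS_DELAY_MS):
        self.delay_ms = delay_ms
        self.candidate = None   # window id currently under the gaze
        self.since = None       # when the gaze arrived on it
        self.focused = None     # window id that actually holds focus

    def update(self, win_id, now_ms):
        """Feed the window id under the gaze; returns the focused window id."""
        if win_id != self.candidate:
            self.candidate, self.since = win_id, now_ms
        elif (win_id is not None and win_id != self.focused
              and now_ms - self.since >= self.delay_ms):
            self.focused = win_id  # gaze dwelled long enough: switch focus
        return self.focused
```

The delay is the interesting knob: too short and focus flickers as your eyes scan across windows; too long and the feature feels laggy.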


That's a great idea! It would really push up input speed. I'm surprised I haven't seen any implementations in the wild yet, although research has been done exploring it [0].

[0] https://pdfs.semanticscholar.org/1170/44feb247a82ab81a30b57f...


Windows 10's Eye Control has something like this, called "Shape Writing". See the second GIF here: https://www.theverge.com/2017/8/2/16087368/microsoft-eye-con...

Long before that, OptiKey had "multi-key selection", which works the same way; see https://youtu.be/HLkyORh7vKk?t=10

In my experience (as a developer of eye gaze interfaces, not an everyday end user) it is often more efficient to have really good next-word-prediction (such as with the Presage engine) combined with single-letter dwells, rather than using "swipe-like" spelling, where you're committed to tracing out the whole word.




