Hacker News new | past | comments | ask | show | jobs | submit login

It seems like you're so close to having a full predictive virtual keyboard (with nothing but dynamically-generated keys).

Have you given any thought to integrating this with some sort of Bluetooth thimble-like button (Makey Makey?) on each finger for untethered typing?
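Decoding input from per-finger buttons like that amounts to chord recognition: each pressed combination of fingers maps to one character. A minimal sketch, assuming five buttons reporting as bits; the chord table here is purely illustrative, not any real layout:

```python
# Toy chord-to-character decoder for five finger buttons.
# Each finger button sets one bit; a chord is the bitwise OR of pressed bits.
# The chord table below is illustrative only, not a real chording layout.
THUMB, INDEX, MIDDLE, RING, PINKY = 1, 2, 4, 8, 16

CHORDS = {
    INDEX: "e",
    MIDDLE: "t",
    INDEX | MIDDLE: "a",
    THUMB: " ",
    THUMB | PINKY: "\n",
}

def decode(pressed):
    """Map a set of pressed fingers to a character, or None if unmapped."""
    chord = 0
    for finger in pressed:
        chord |= finger
    return CHORDS.get(chord)

print(decode({INDEX, MIDDLE}))  # -> a
```

A predictive layer would sit on top of this, ranking the likeliest characters for ambiguous or sloppy chords the same way a virtual keyboard ranks sloppy taps.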

I've written more about this line of reasoning here[1] if you're interested. Feel free to ping me on Twitter if there's any way I can help. Congrats on this awesome project!

[1]: https://news.ycombinator.com/item?id=11223697




The possibilities for how language will work in the future are really exciting! We made the Minuum keyboard, which also explores how machine learning assistance can open up new ways of communicating.

One reason we're interested in visual communication with Dango, though, is that regular text input is already pretty good. Chorded keyboards exist and are much faster, but most people can't be bothered to use them; QWERTY is just good enough. The field is wide open for rich communication with images, though, since nothing out there is particularly good yet.


I might add that this is pretty much exactly what Elon Musk is asking for with his "Neural Lace" idea to merge humans and machines in "symbiosis".

Replace thimble-keys with OpenBCI and you already have it.

Urbit.org sounds like a good fit for the immutable, append-only, content-addressable private keylog (now would be the time for a portmanteau generator). I would love to help in any way to make this happen.
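An append-only content-addressable log like the one described above can be sketched as a hash chain, where each entry's address is derived from the previous entry's hash plus its content, so history can't be rewritten without invalidating every later address. A toy illustration only, not Urbit's actual storage model:

```python
import hashlib

# Toy append-only, content-addressable log as a SHA-256 hash chain.
# Each entry's address commits to the previous address and the content,
# so tampering with any entry breaks verification of the whole chain.
# Illustration only; this is not Urbit's actual storage model.
class AppendOnlyLog:
    def __init__(self):
        self.entries = []      # list of (address, content) pairs
        self.head = "0" * 64   # genesis "previous hash"

    def append(self, content: bytes) -> str:
        address = hashlib.sha256(self.head.encode() + content).hexdigest()
        self.entries.append((address, content))
        self.head = address
        return address

    def verify(self) -> bool:
        """Recompute the chain and check every stored address."""
        prev = "0" * 64
        for address, content in self.entries:
            expected = hashlib.sha256(prev.encode() + content).hexdigest()
            if address != expected:
                return False
            prev = expected
        return True

log = AppendOnlyLog()
log.append(b"h")
log.append(b"i")
print(log.verify())  # -> True
```

The "private" part would come from encrypting `content` before appending; the chain structure itself only guarantees immutability and addressability.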



