Dasher: A Novel Text Entry System For Mobile and Other Devices [video]
6 points by srean on Jan 26, 2011 | 10 comments



This is yet another post on a text entry system targeted at screen-area-limited devices. Rather than posting it as a comment on an old thread, I decided to make it a standalone post.

The video is long, but I did not regret watching it. In it David MacKay, an information theorist, explains how one may input text on space-limited devices. If only I could explain theoretical concepts as lucidly as he does.

What interests me is that Dasher, when adequately augmented, is even more appropriate for inputting programming text than, say, English. Though English has been modeled by (stochastic) context-free grammars, that's only an approximation, whereas programming languages fit them to a T. MacKay mentions that English has an entropy of around 1 bit per letter and that a finger is usually capable of generating 14 bits per second. Source code, I believe, has significantly less entropy than English (this depends on the language, of course).
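A rough way to get a feel for these numbers: estimate per-character entropy from letter frequencies alone. This is only a sketch of mine, and a context-free unigram estimate at that, so it overestimates the ~1 bit/letter figure, which assumes a model that exploits preceding characters. The sample strings are made up for illustration.

```python
from collections import Counter
from math import log2

def unigram_entropy(text):
    """Estimate per-character entropy from letter frequencies alone.
    Ignores context, so it overestimates what a good predictive
    model (like Dasher's) achieves."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * log2(c / total) for c in counts.values())

english = "the quick brown fox jumps over the lazy dog " * 50
lisp = "(define (square x) (* x x)) (define (cube x) (* x (square x))) " * 50

print(f"English sample: {unigram_entropy(english):.2f} bits/char")
print(f"Lisp sample:    {unigram_entropy(lisp):.2f} bits/char")
```

At 1 bit per letter and 14 bits per second of pointing, a predictive interface could in principle sustain around 14 characters per second of English.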

I think Lisp is a good language to target because the actual layout of the source code does not matter and can be automated, and because it is such a thin layer above the AST, making it an ideal match for simple context-free grammars. It is also advantageous that the first token of an s-expression has an overwhelming probability of being drawn from a small set of key possibilities, which keeps the language's bit rate low.
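The point about head tokens is easy to check empirically. Here is a crude sketch of mine (a regex stand-in for a real Lisp reader, run on a made-up sample) that tallies the token following each open paren; in real code bases the distribution is similarly dominated by a handful of heads like `defun`, `let`, and `if`.

```python
import re
from collections import Counter

def head_tokens(source):
    """Return the token immediately following each '(' in Lisp source.
    A crude tokenizer -- real code would use a proper reader."""
    return re.findall(r"\(\s*([^\s()]+)", source)

# A tiny, made-up sample standing in for a real code base.
sample = """
(defun add (a b) (+ a b))
(defun mul (a b) (* a b))
(let ((x 1) (y 2)) (if (> x y) x y))
(defun sub (a b) (- a b))
"""

counts = Counter(head_tokens(sample))
print(counts.most_common())
```

A language model conditioned on "I am at the head of an s-expression" would therefore need very few bits per token in that position.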

Several extensions are possible. For instance, given a code base, one could learn its coding style and adapt the model to it (including the style of identifier names). For new code bases, the model could be learned on the fly.

Finally, I like his analogy. Writing as an activity has had two dominant forms: (i) pushing buttons and (ii) scribbling. Dasher is writing by navigating/driving efficiently through a library of all possible text. One may rearrange that library to suit the style of the intended text.
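That "library of all possible text" picture is essentially arithmetic coding: every string owns a slice of the unit interval whose width is its probability, and Dasher turns those slices into screen space to steer into. Here is a minimal sketch of mine of the interval subdivision, using made-up letter probabilities:

```python
def subdivide(low, high, probs):
    """Split [low, high) among symbols in proportion to their
    probabilities -- the way Dasher allots screen space."""
    boxes, cursor = {}, low
    for sym, p in probs.items():
        width = (high - low) * p
        boxes[sym] = (cursor, cursor + width)
        cursor += width
    return boxes

# Made-up letter probabilities, for illustration only.
probs = {"t": 0.5, "h": 0.3, "e": 0.2}

low, high = 0.0, 1.0
for letter in "the":
    low, high = subdivide(low, high, probs)[letter]
    print(f"after '{letter}': [{low:.4f}, {high:.4f})")
```

The final interval's width is the probability of the whole string, so likelier strings get wider intervals, i.e. bigger targets to steer into.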

The only drawback I see is that it requires your attention, unlike, say, typing. But that might after all be a good thing.

It's released under the GPL and available for free on the iPhone as well: http://www.inference.phy.cam.ac.uk/dasher/MobileDasher.html


> The only drawback I see is that it requires your attention, unlike say typing. But that might after all be a good thing.

I don't see how being distracting can be an advantage for an information entry system. It seems like it would just distract you from the content of the message. Care to elaborate? (I'm curious)

Edit: The other major drawback is that it limits your expression. Suppose, for example, that I want to write the word "Rhinocephant" (it's half rhinoceros, half elephant, and 100% imaginary). Typing that with Dasher would be extremely difficult, because it is not in their language model. On a keyboard (or with a pen) the hardest part is deciding how to spell it. (Unless I'm wrong about typing imaginary words?)


Oh, it's a minor quibble. I can text without looking at the keypad, but here I would have to look at the screen. I probably should take back my confusing comment about drawbacks. About your other point: it becomes particularly easy to enter a word if it fits the language model, but if it does not, it is not much more difficult than picking the letters one by one.

In fact, I just had to try your example out. It turns out 'rhinocephant' isn't a hard word for the default English language model at all: 'rhinoce' and 'phant' turned out to be common substrings.


It's not so much the length as the sound. I'm profoundly deaf and can't really understand video.


I am almost deaf, but only on one side. I was about to suggest the recent discussion on deafness on HN, then I realized you had commented there. I really loved that thread; thanks for commenting on it. I realized that it is so much better to be upfront about the difficulty, especially when meeting someone new. For some reason that had not occurred to me before.

I tried to find a closed-captioned version of the video, but it appears that Google has not enabled automatic closed captioning on all of its channels. Apparently they are rolling that feature out gradually, so maybe sometime in the future the video will have closed captions.

http://googlevideo.blogspot.com/2006/09/finally-caption-play....


No problem -- I didn't realize that was you!

Definitely update us at some point with your progress; I'm always interested in hearing how people have surpassed it and led successful lives.


Thanks for trying!


I don't want to sit here straining to understand a one-hour video, and the site linked in the video description leads to a car-loan ad site.

What's the low-down on this technology?


Oh! Don't worry about that link; I guess the researchers let it expire and someone else grabbed it. This link has more details: http://www.inference.phy.cam.ac.uk/dasher/

One can try it in one's browser here: http://www.inference.phy.cam.ac.uk/dasher/TryJavaDasherNow.h... but watching the video/demo helped me understand what is going on. In fact, I was surprised to find that it's actually easier to use than it looked in the demo.


Here is their Git repo:

http://git.gnome.org/browse/dasher/tree/

The last commit was only a few days ago; that's good.



