In case anyone is interested, I watched the network requests go by to see how it works. Google is doing the detection server-side (no surprise there, but it was smooth enough that I had to check). It receives the drawing area (writing_area_width, writing_area_height), the "ink" (an array of points), and any characters already in the search field. The response contains the best guesses for the input. Curiously, multiple options are returned; I'm not sure how those are used. A separate AJAX request is made for Google Instant.
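For the curious, here's a rough sketch in Python of what such a request payload might look like. The field names writing_area_width, writing_area_height, and "ink" come from the traffic I observed; the stroke encoding, the pre_context name, and the overall envelope are my guesses, not Google's actual format:

```python
import json


def build_handwrite_request(strokes, width, height, pre_context=""):
    """Assemble a guess at the Handwrite request payload.

    `strokes` is a list of dicts, each with parallel lists of x
    coordinates, y coordinates, and timestamps in milliseconds.
    """
    return {
        "writing_area_width": width,
        "writing_area_height": height,
        # Each stroke becomes [xs, ys, ts], mirroring the observed "ink" field
        "ink": [[s["x"], s["y"], s["t"]] for s in strokes],
        # Characters already in the search field, sent along as context
        "pre_context": pre_context,
    }


payload = build_handwrite_request(
    strokes=[{"x": [10, 12, 15], "y": [40, 38, 35], "t": [0, 16, 33]}],
    width=320,
    height=240,
    pre_context="goo",
)
body = json.dumps(payload)
```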
Fun feature. It would be killer with support for non-Latin-based languages. Imagine walking down a street in Japan, seeing a restaurant, and wanting to search for reviews. Oh no! It's written in kanji, and you have a hard enough time reading basic words, much less names. Open up Google on your phone, write the kanji, add the Japanese word for "review" after it, and off you go.
You can already do this on Android in any text field of your choice using the "7notes with mazec" keyboard[0]. The kanji recognition is nothing short of miraculous[1]. It's a little expensive, so check out the free 10-day trial[2] if you need any further convincing.
Thank you for the recommendation. I haven't checked this solution out yet, but it looks fantastic! I have never seen an app with this kind of kanji recognition.
It is indeed one of a kind. I haven't formally studied 書道 (calligraphy), but the ad hoc 行書 (cursive style of writing) I've developed out of laziness is recognized perfectly by it.
If you're interested in good Android apps for Japanese, I would also urge you to check out Aedict[0] and DroidWing[1]. The latter, when combined with EPWING-format dictionaries obtained from a 3rd party source, is (IMO) just as good as any 電子辞書 (which themselves use EPWING-format dictionaries). I personally use the 研究社 新和英大辞典 第5版 and 研究社 新英和大辞典第6版 (they are, surprisingly enough, more than sufficient for classical Japanese and for kanbun), though you can also get J-J dictionaries and domain-specific ones, such as for IT or medicine.
Are there any apps that use a dictionary other than EDICT? EDICT is great if you are trying to read a textbook or something, but if you're trying to decipher manga, anime, or a conversation that took place anytime after 1995, it's nearly useless. Actual denshi jishos tend to do much better, but are really optimized for users that speak Japanese natively.
> Are there any apps that use a dictionary other than EDICT?
Look into the second app I mentioned (DroidWing). Like I said, it supports EPWING-format dictionary files, the same as 電子辞書 use, meaning that comprehensiveness is no longer a distinguishing factor when it comes to 電子辞書 vs. smartphones. The specific dictionaries I mentioned (from 研究社) are considered to be the authoritative E-J/J-E dictionaries, both printed and digital. You can of course get other speciality E-J/J-E dictionaries, or J-J dictionaries, if that's what you desire.
Ah, OK. When I read the description of DroidWing I wasn't sure what the point of converting EDICT to some other format was. I will look into this further.
That's a hard copy, and an old edition at that. The digital version (which is what you need for DroidWing) is only $270[0], or free if your ethics are flexible.
I must thank you as well. Though I love my standalone denshi jisho (Wordtank G55) that's lasted me a good 5 years or so, having an app for this works well if I don't want to bust the thing out... especially since these days I'm not sitting down writing Japanese essays.
Maybe it's because I was still in high school when the modern smartphone revolution came about, but for some reason, I just abhor the idea of a standalone 電子辞書 and see it as a 20th-century anachronism.
When I began studying 古文 and 漢文, despite being told that I would "need" to purchase a 電子辞書, I persevered and was able to (most likely thanks to my technical knowledge, which most other Japanese students admittedly do not possess) replicate all the functionality of a 電子辞書 on my existing smartphone for nearly free. In fact, I often found it superior in many ways, such as handwriting recognition with 7notes being far superior to the write-on-a-tiny-1-square-inch-pad-with-a-stylus approach that most 電子辞書 continue to take. And let's not forget the huge advantage provided by cellular internet access, particularly when it comes to looking up proper names when reading texts such as 三国志演義 (Romance of the Three Kingdoms).
No problem! Have fun in Japan; I can't wait to return for at least a couple of years. I spent a few months there and enjoyed every moment of it. It's not exactly easy for engineers to get jobs there, though, and I have no interest in teaching English :/
It's not easier if you're standing in a forest of 桜 and there's no paper and pencil around, as in the demonstration video. :-)
And even if you do have it written down, what's easier for the user is harder for the computer. The order and direction of strokes are standardized, and capturing this information as the user writes is tremendously valuable.
I've played with handwriting recognition algorithms for kanji before, and if you assume the stroke order and direction are correct, you can make a surprisingly good algorithm with very little code. Detecting a character from an image is much harder.
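To illustrate the point, here's a minimal sketch of that kind of algorithm: quantize each stroke into 8-way direction codes (which leans entirely on correct stroke order and direction) and pick the nearest template character by edit distance. The toy templates and encoding below are my own assumptions, not anything Google uses:

```python
import math


def stroke_directions(stroke):
    """Quantize a stroke (list of (x, y) points) into 8-way direction codes."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        dirs.append(int(round(angle / (math.pi / 4))) % 8)
    return dirs


def edit_distance(a, b):
    """Levenshtein distance between two direction-code sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]


def recognize(strokes, templates):
    """Match input strokes against templates, assuming correct stroke order."""
    def encode(char_strokes):
        # Concatenate per-stroke codes; 8 marks a pen lift between strokes
        out = []
        for s in char_strokes:
            out.extend(stroke_directions(s) + [8])
        return out

    query = encode(strokes)
    return min(templates, key=lambda ch: edit_distance(query, encode(templates[ch])))
```

With a template set as small as {"一": one horizontal stroke, "丨": one vertical stroke}, a slightly wobbly horizontal stroke still matches "一", which is the "surprisingly good with very little code" effect: stroke order and direction carry most of the signal.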
The iOS handwriting recognition, for one, takes this into account. iOS officially only has hanzi (Chinese) handwriting recognition, but it usually works OK for Japanese, too, as long as it's a character that both languages have. But when you run into one of the characters where Chinese and Japanese have standardized on a different stroke order, like 王, it'll never recognize it when written using the Japanese stroke order, no matter how nicely you write it.
The Google Translate app for Android does this. I was recently in Japan, and that app was invaluable. It's also pretty decent at Japanese speech to English translation.
Now THAT would be amazing. Incredibly smart move. What better way to create your own handwriting recognition program than to have all 4 billion people field testing it for you.
Yeah, I've done handwriting with a stylus on Windows and Mac before and had much worse results than I'm getting with my finger on this. Very impressive.
OK, so Google wants to improve its handwriting recognition. Fine. But from a user's point of view, this just tells me one thing:
- Using an on-screen keyboard is such a pain for people that Google changed their HOMEPAGE to accommodate this.
Why does this irritate me?
1) None of the latest generation of smartphones are coming out with a hardware keyboard. The best are always keyboardless.
2) Stylus-based input, the other extremely fast form of text entry, has all but died out (I know some products still exist, but it was essentially dead once capacitive screens won out).
Hence, we've gone backwards from 10 years ago. It was faster for me to send a text/email from my phone 10 years ago than it is today. That is hugely disappointing.
Yes, this exactly. I desperately want to move from my BlackBerry to an Android phone, but all the best phones seem to be keyboard-less. Typing on a touch screen is a major annoyance for anything more than a search string or an SMS.
I'll note that despite the TechCrunch headline claiming Google Handwrite accepts cursive handwriting, it doesn't really. In fact, Google's help pages specifically say to use block printing rather than cursive. I tried some cursive and the recognition didn't work all that well; block printing, however, was generally quite accurate.
I actually see great value in it for times when I'm walking around a conference with my iPad and want to do a quick Google search. I can't really use voice (e.g., Siri), and I can't use two hands to type. Handwriting could be an interesting option.
I wonder, do the latest versions of Android support third-party Wacom styluses? Although I'm not sure Wacom even sells third-party styluses for capacitive tablets that are as accurate as Samsung's S Pen or those old Windows tablets.
The reason I'm asking is that I'd like to buy, say, a Nexus tablet to use with a stylus, but I want the same level of accuracy as Samsung's S Pen or better, without having to buy Samsung's Galaxy Note devices. Do those need a special panel as well to achieve that kind of accuracy, or do they only require that Android have the necessary APIs and support for those styluses?
As anyone who mastered Palm OS's Graffiti can attest, writing on a PDA/smartphone is sometimes a lot more usable than typing, especially typing on a soft keyboard.
Input remains the weak point to many or most of these devices.
The beta version of Swype is really a major improvement. I'd say I can write faster on my phone than on my PC, under certain conditions. The much slower part is copyediting, but that's mostly because sites like HN break the flow with some fancy textarea tweaks.
You're right that note-taking will be one of the killer apps for the next generation of tablets. OneNote for Metro ("OneNote MX") is already available in beta, so I'm guessing Microsoft is chasing that:
From a technical point of view this is remarkable. Seems like the recognition works really well, and that's great.
However, from a usability point of view this is probably awful. I've had an app on my phone for a while now that doesn't do handwriting recognition but just stores the handwritten stuff as images. Using that without a proper pen-like stylus is a pain. Writing with your fingers just doesn't really work. Fingertips are just not accurate enough for this kind of movement, which is precisely why we guide a pen with multiple fingers instead of just one.
Additionally it's much slower than just typing it out, even on the sucky on-screen-keyboard that you're provided with these days.
So it's slow and surprisingly hard to use. And frankly, I don't see styluses making their big comeback next year. People are just too addicted to touching things with their fingers.
I use pattern unlock. But I installed WidgetLocker to give me a primary lock screen where I was able to make one of the swipe positions a Google Now shortcut and another a Google Now Voice Search shortcut, in addition to the two existing positions for the Camera and Unlock shortcuts.
Sometimes opening a keyboard and typing is just too efficient. Sometimes you need a technology that will slow your life down and make it more prone to errors. Google handwrite: why type when you can awkwardly scribble?
I agree with you, but at the same time I think it's great that somebody is at least trying to innovate the tools we have for interfacing with technology. I know handwriting recognition isn't anything new, but the idea that the keyboard is all we will ever need can only contribute to the stagnation of advancement.
I'm still waiting for the day I can telepathically control all my devices.
There could be a lot done with eyeball tracking. We already have setups that can accurately determine where you will be looking a fraction of a second in the future.
Eyeball tracking combined with speech or handwriting translation could be quite powerful. The eyeball tracking could provide a lot of contextual information to make speech/handwriting recognition more accurate.
One's saccade eye movements are very regular in acceleration profile, so it's possible to determine where you're going to look from the start of the saccade movement.
I could not find the original reference, but there is some highly technical information here:
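As a rough illustration of the idea (not the referenced work): the "main sequence" relationship says peak saccade velocity grows roughly linearly with amplitude for smaller saccades, so an early estimate of peak velocity lets you extrapolate where the eye will land. The slope constant and the one-dimensional setup below are made-up simplifications:

```python
def predict_landing(samples, dt, slope=50.0):
    """Predict a saccade's landing position from its early gaze samples.

    `samples` are 1-D gaze positions in degrees, `dt` is the sampling
    interval in seconds. Inverts an assumed linear main-sequence
    relation: peak velocity (deg/s) ~= slope * amplitude (deg).
    The slope value here is a placeholder, not a measured constant.
    """
    # Angular velocities between consecutive gaze samples
    velocities = [(b - a) / dt for a, b in zip(samples, samples[1:])]
    v_peak = max(abs(v) for v in velocities)
    amplitude = v_peak / slope  # invert the main-sequence relation
    direction = 1 if samples[-1] >= samples[0] else -1
    return samples[0] + direction * amplitude
```

This is the flavor of trick that makes the "predict where you'll look before the saccade finishes" claim plausible: the movement is ballistic, so its early profile constrains its endpoint.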
This is totally incorrect; eye tracking is not sufficiently advanced to be in any way useful for entering input. There are "visual keyboards" for people who are paralyzed, which allow them to select a key based on where they are looking. These, while a fantastic tool for disabled people, don't come close to the speed we achieve with a manual keyboard. There's an example here: your eye has to linger for at least a second before the computer is confident in your choice.
And it only takes common sense to realize that we don't look at every key we type, and that's part of what makes typing fast. The quickest typists don't have to look at the keyboard at all. Both handwriting and eye tracking are much, much slower than typing, no matter what. You can be the fastest handwriter on the planet, and a moderately talented typist will still burn the shit out of you. It just takes less time to hit a key than it does to write an entire letterform.
I realize my first comment was kind of mean and sarcastic, but that was because this idea is so completely stupid and not progressive at all that I thought the relatively intelligent community on hacker news would realize this immediately. All my friends and co-workers who saw it were like "this is completely dumb"... immediately.
I understand that people like things that are 'different' and 'progressive', but this particular tool is neither of the above. It's a fun little trick that is totally not practically useful in any way.
> This is totally incorrect, eye tracking is not sufficiently advanced to be in any way useful in terms of entering input.
Actually, you are totally incorrect. If you look back at the thread, I am not proposing eye tracking as a sole means of input, but as a means of providing contextual information to other means of input.
> I realize my first comment was kind of mean and sarcastic, but that was because this idea is so completely stupid and not progressive at all that I thought the relatively intelligent community on hacker news would realize this immediately. All my friends and co-workers who saw it were like "this is completely dumb"... immediately.
Then at least you, or you and your friends, are guilty of sloppy reading of a level I do not expect on HN. Again, this is not proposed as a primary means of input, but as a way to supply contextual information to speech and handwriting input.
Hey, sorry this is so late. I didn't realize what you meant by contextual input or how that could be useful, but you're right: I can't deny such a fuzzy statement. It's somewhat possible this could help something in the future, definitely. Apologies for calling you incorrect. Eye tracking is, however, not efficient or useful as a primary input for typing and will very likely not take a leading role in recognizing word/sentence input to computers.
On the second statement, I meant my original comment about Google Handwrite, the one at the top of this tree, not your comment about eye tracking. So I apologize again; I must not have been clear in the way I stated that, and I'm sorry you took offense.
Well, let's put it to the test. Not at a desk, of course, but while walking, or using one hand while the other's holding a baby, or in the back seat of a taxi.
Edit: I forgot to mention, the test will be carried out by a 62-year-old Egyptian grandmother.