
This sort of thing is surprisingly useful.

A couple of weeks ago I was going to Human Resources on the other side of campus and there was a Chinese family wandering around, obviously lost.

The mother showed me her phone with some Chinese-language map app I'd never seen before. It indicated that there was a shopping mall where we were standing. Obviously, her map app was wrong, since the company has been at this location for 30 years.

But I was able to say to my phone, "Hey, Siri. How do you say 'I'm sorry, there is no shopping center here' in Chinese?" Then I held up my phone for her to see while Siri both displayed the translation on the screen and spoke it aloud. I said a few other hopefully helpful phrases, but she seemed happy with my guidance and did lots of smiling and nodding.

(I assume the article is about the Google version of this. I wasn't able to read the article because Wired popped up so many ads and DIV modals on the screen that there wasn't any actual story text.)




It's useful, but after so many years the state of machine translation and speech recognition in general is still not exactly reliable. It's as if it doesn't have context, or doesn't know how to apply it. I've heard success stories like this before, and experienced a few myself, but most of the time the experience for me is subpar to the point that it gets so annoying and needs so much manual intervention that I gave up, figuring I'll just try again in 5 or 10 years and see if it's any better.

Maybe my accent or pronunciation sucks, but I tried getting Siri to write down text messages about 10 times. Most of the time it was close, but not once were the words 100% correct, and in more than 50% of cases that led to sentences that didn't convey the original meaning. Same for navigation: names of cities (in Europe) seem problematic, like confusing Miltenberg (DE) with Milton in Canada or so. Similar for Google Translate. Our Portuguese taxi driver didn't speak English and was worried about getting us to the airport in time; his phone showed us he was worried about the weather. I get that 'tempo' can mean both, but it's these subtle differences the technology is still missing.


Might be your accent. I use Siri to send text messages all the time while driving and anecdotally it works very well.


My wife's name is Nada. I pronounce it nA-da and Siri says nah-da. If I don't use the Siri pronunciation, it won't find the contact. It took me a while to figure out that workaround.


I have a similar problem with my car's native voice recognition. But Siri gets both the recognition and pronunciation correct. I wish my car had CarPlay.


Maybe it's language-dependent? Dutch isn't spoken by as many people as English for instance, could be the algorithms just don't work well enough yet.


Perhaps an issue with her maps app applying GCJ-02 or BD-09? Apparently the "in china" check for the noise function is a simple bounding box, which includes much of the surrounding countries.
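For context, here's a minimal sketch of what such an "in China" check tends to look like. The bounding-box values are the ones commonly seen in open-source GCJ-02 converters (e.g. eviltransform); the function name and example points are mine, purely illustrative:

  # Crude "in China" bounding-box test, as used by several open-source
  # GCJ-02 converters (values are illustrative, not an official boundary).
  def maybe_in_china(lat: float, lng: float) -> bool:
      """True if a WGS-84 point falls inside the rough rectangle that
      conversion code typically treats as mainland China."""
      return 0.8293 <= lat <= 55.8271 and 72.004 <= lng <= 137.8347

  # The box covers far more than mainland China, so an app relying on it
  # could apply the GCJ-02 offset (a few hundred metres of deterministic
  # "noise") to points in Hong Kong, Taiwan, or neighbouring countries,
  # shifting POIs onto the wrong block.
  print(maybe_in_china(22.28, 114.16))   # Hong Kong -> True, offset applied anyway
  print(maybe_in_china(35.68, 139.69))   # Tokyo     -> False

If the family's app applied (or failed to remove) that offset outside mainland China, a point of interest could easily land a block or two away from where it really is, which would fit the "shopping mall where we were standing" confusion.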




