
I saw a demonstration (about 2 years ago) of a mobile, ARM-powered offline voice recognition platform that was faster and more accurate than Google's.

There was a side-by-side comparison with Google's online voice recognition, and it outperformed Google in both speed and accuracy on a mobile GPU/CPU, complete with an actual learning system. That is closer to a true AI for the Raspberry Pi, not to mention that it addresses privacy concerns.
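
For context, fully offline recognition along these lines is doable today with open-source tools that run on a Raspberry Pi. A minimal sketch using the Vosk library (my choice for illustration; not what the demo used, and the model/file paths are placeholders):

  import wave
  import json
  from vosk import Model, KaldiRecognizer  # pip install vosk

  # Load an offline acoustic/language model from a local directory;
  # small models run on a Raspberry Pi with no network access.
  model = Model("model")  # placeholder path to an unpacked Vosk model

  wf = wave.open("test.wav", "rb")  # 16 kHz mono PCM works best
  rec = KaldiRecognizer(model, wf.getframerate())

  while True:
      data = wf.readframes(4000)
      if len(data) == 0:
          break
      if rec.AcceptWaveform(data):
          # A complete utterance was recognized; print its transcript.
          print(json.loads(rec.Result())["text"])

  # Flush any remaining buffered audio.
  print(json.loads(rec.FinalResult())["text"])

No audio ever leaves the device, which is exactly the privacy win described above.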

If this is a glorified API/cloud adapter rather than a true AI, what is it really?

edit: found it: https://www.youtube.com/watch?v=Fwzs8SvOI3Y




I like the idea of this - doing all the processing/ML client-side. I know Apple has started doing this recently (object/scene recognition & search in the Photos app), and I appreciate that this processing occurs on my device rather than on, say, Google's servers...
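
As an aside, here's roughly what that kind of client-side inference looks like. A minimal sketch using TensorFlow Lite as a stand-in (Apple actually does this via Core ML/Vision, and the model and image file names below are placeholders):

  import numpy as np
  from PIL import Image
  import tflite_runtime.interpreter as tflite  # pip install tflite-runtime

  # Load a quantized image classifier bundled with the app;
  # the photo never leaves the device.
  interpreter = tflite.Interpreter(model_path="mobilenet_v2.tflite")  # placeholder
  interpreter.allocate_tensors()

  inp = interpreter.get_input_details()[0]
  out = interpreter.get_output_details()[0]
  _, height, width, _ = inp["shape"]

  # Preprocess a local photo to the model's expected input size and dtype.
  img = np.array(Image.open("photo.jpg").resize((width, height)), dtype=np.uint8)
  interpreter.set_tensor(inp["index"], img[np.newaxis, ...])

  # Run inference entirely on-device.
  interpreter.invoke()
  scores = interpreter.get_tensor(out["index"])[0]
  print("top class index:", int(np.argmax(scores)))

The same loop over a photo library gives you offline object/scene search with zero server round-trips.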

I have a strong feeling that quite a bit of R&D has been going toward Apple's upcoming chip, which will likely have a custom GPU architecture optimized for deep learning (from which Siri will also greatly benefit) and augmented reality - like the custom HPU in HoloLens. Apple's "Lens" wearable will probably pair via W1 with an iPhone, which will handle most of the processing. Perhaps they'll even have a custom 3D/depth sensor based on the PrimeSense tech they purchased.

We're on the cusp of consumer AR going mainstream, and it's exciting.



