Hacker News

As a deaf person I use Ava [0], which AFAIK uses IBM's speech-to-text service [1] as its backend. I am always impressed by how it picks up on context clues to make corrections in real time and capitalizes proper nouns ("Incredible Pizza", for example). However, it does not work with multiple speakers on a single microphone.

[0] https://www.ava.me/

[1] https://www.ibm.com/watson/services/speech-to-text/
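On the multi-speaker point: the underlying IBM Speech to Text service does expose a speaker diarization option (`speaker_labels`), which tags each word's time span with a speaker id, even on a single channel. A minimal sketch of stitching a labeled response back into per-speaker turns, assuming the documented response shape; the sample payload here is hand-made for illustration, not real API output:

```python
# Sketch: grouping a Watson Speech to Text response by speaker.
# With speaker_labels enabled, the service returns word-level timestamps
# plus a speaker_labels list mapping each time span to a speaker id.
# SAMPLE below is fabricated data following that documented shape.

SAMPLE = {
    "results": [{
        "alternatives": [{
            "transcript": "hello there how are you",
            "timestamps": [           # [word, start_sec, end_sec]
                ["hello", 0.0, 0.4],
                ["there", 0.4, 0.8],
                ["how", 1.5, 1.7],
                ["are", 1.7, 1.9],
                ["you", 1.9, 2.1],
            ],
        }],
    }],
    "speaker_labels": [
        {"from": 0.0, "to": 0.4, "speaker": 0},
        {"from": 0.4, "to": 0.8, "speaker": 0},
        {"from": 1.5, "to": 1.7, "speaker": 1},
        {"from": 1.7, "to": 1.9, "speaker": 1},
        {"from": 1.9, "to": 2.1, "speaker": 1},
    ],
}

def by_speaker(response):
    """Attribute each timestamped word to its speaker, merging runs into turns."""
    spans = {(s["from"], s["to"]): s["speaker"]
             for s in response["speaker_labels"]}
    turns = []  # list of (speaker_id, [words])
    for result in response["results"]:
        for word, start, end in result["alternatives"][0]["timestamps"]:
            speaker = spans.get((start, end))
            if turns and turns[-1][0] == speaker:
                turns[-1][1].append(word)   # same speaker: extend the turn
            else:
                turns.append((speaker, [word]))  # speaker changed: new turn
    return [(spk, " ".join(words)) for spk, words in turns]

print(by_speaker(SAMPLE))
# [(0, 'hello there'), (1, 'how are you')]
```

Whether Ava passes that option through is a separate question; diarization accuracy also degrades with overlapping speech, which may be why the app doesn't surface it.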





