Hacker News

Yep, Star Trek computers understand addressing, and the conversation is modal: one does not need to begin every sentence with the keyword. A first use of the hotword (or implicitly in some cases, like entering a turbolift), combined with a specific tone, makes the computer “open” the conversation. From then on, tone alone is sufficient for the computer to know when it is being addressed. While a conversation is open, context is remembered.

I am flabbergasted that the following hasn’t been an option:

- hey Siri

- yes?

- what are the last three releases from <artist>?

- X Y and Z

- search again without EPs

- W X and Z

- Play the first one

- <playing W>

- thank you Siri

<conversation closed>
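The modal flow above can be sketched as a tiny state machine (a hypothetical illustration, not Siri's actual API): a hotword opens a session, context accumulates across turns, and a closing phrase ends it.

```python
class ModalAssistant:
    """Minimal sketch of a modal voice conversation (hypothetical names)."""

    HOTWORD = "hey siri"
    CLOSER = "thank you siri"

    def __init__(self):
        self.in_conversation = False
        self.context = []  # turns remembered while the conversation is open

    def hear(self, utterance: str) -> str:
        text = utterance.strip().lower()
        if not self.in_conversation:
            if text.startswith(self.HOTWORD):
                self.in_conversation = True
                self.context.clear()
                return "yes?"
            return ""  # not addressed: ignore entirely
        if text == self.CLOSER:
            self.in_conversation = False
            return "<conversation closed>"
        self.context.append(text)
        # A real assistant would resolve references like "the first one"
        # or "search again" against self.context; here we only acknowledge.
        return f"<handling turn {len(self.context)} with context>"
```

Note that everything between the hotword and the closer is handled with the prior turns available, which is exactly what makes "search again without EPs" resolvable.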

Also, with the attention tracking that -already- exists in the FaceID array, the phone can know when it is being addressed and when it's not. You know, just like when you're talking to someone, you usually look at them...



Context is a hard problem, and even the best chatbots don't nail this.



