"Implicit query" is another term for it. On the desktop side, one of the most realized solutions was the Dashboard prototype: http://nat.org/dashboard/
I've long argued that designing multi-modal interfaces is where applications need to go. We're seeing this now with desktop, web, and mobile apps all running against the same APIs, but what it really means is that you're also more likely to be able to design a UI that's really an audio-only interface, whispering to you continuously via a Bluetooth headset, or a wrist-mounted UI, and so on.
It also means you can design and build something useful for desktop users, then mobile users, and then scale even further down for wearable users. Real-time continuous Google searching as a desktop sidebar, or rendered on an iPad or iPod Touch display linked to desktop input as a second screen, would be a start.
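To make the sidebar idea concrete, here's a minimal sketch of implicit query in that spirit: watch whatever the user is currently working with, and quietly re-run a search whenever that context changes, without the user ever typing a query. The get_current_context and run_implicit_query hooks are hypothetical stand-ins, not any real platform or search API; a real sidebar would wire them to a clipboard/accessibility hook and a search backend.

```python
import hashlib
import time

POLL_INTERVAL = 2.0  # seconds between context checks


def get_current_context() -> str:
    """Hypothetical hook: return the text the user is currently working
    with (selection, clipboard, active document, etc.). Here it just
    reads from stdin so the sketch is runnable on its own."""
    return input("simulated context> ")


def run_implicit_query(text: str) -> None:
    """Hypothetical search call: a real sidebar would hit a search API
    and render results. This version only prints the query it would
    have issued."""
    query = " ".join(text.split()[:8])  # crude keyword extraction
    print(f"[sidebar] would search for: {query!r}")


def main() -> None:
    last_digest = None
    while True:
        context = get_current_context().strip()
        if context:
            digest = hashlib.sha1(context.encode("utf-8")).hexdigest()
            # Only re-query when the user's context actually changes,
            # so results update implicitly rather than on every poll.
            if digest != last_digest:
                run_implicit_query(context)
                last_digest = digest
        time.sleep(POLL_INTERVAL)


if __name__ == "__main__":
    main()
```

Hashing the context and querying only on change is the key design point: the search is driven by what you're doing, not by an explicit request.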
"Implicit query" is another term for it. On the desktop side, one of the most realized solutions was the Dashboard prototype: http://nat.org/dashboard/
I've long argued that designing multi-modal interfaces is where applications need to go. We're seeing this now with desktop, web and mobile apps all running against the same APIs, but what it really means is you're more likely to be able to also design a UI that's really an audio-only interface, whispering to you continuously via a Bluetooth headset, or a wrist-mounted UI, or... etc.
It also means you can design and build something useful for desktop users, and then mobile users, and then scaled even further down for wearable users. Real-time continuous Google searching as a desktop sidebar, or rendered on an iPad or iPod Touch display linked to desktop input as a second screen, would be a start.