
This essay is what leads me to think about the possibilities of user interfaces composed of real-world objects that become "smart" via augmented-reality projection. For example, your average tennis ball could become a slider, a knob, or even a virtual storage container with a heads-up display like those produced by Meta.

I wonder if the future of user interfaces is simple but universal real-world controls (a meatspace UI toolkit, essentially) combined with AR. With AR, any surface is a display. The contemporary "pictures under glass" model of UI fundamentally falls out of the limitations of current display technology: displays are a type of surface, but not all surfaces are displays. If most flat surfaces become displays (virtual or otherwise), the space of ideas around user interfaces loses a large coupling, and fundamentally new things should be possible.
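As a toy illustration of the "tennis ball as a knob" idea, here is a minimal sketch of mapping a tracked object's rotation to a slider value. Everything here is an assumption for illustration: the class name, the angle range, and the idea that an AR tracker hands you a rotation in degrees; no real AR SDK is used.

```python
# Hypothetical sketch: a tracked physical object (e.g. a tennis ball)
# acting as a virtual slider. The tracking interface is assumed, not
# taken from any real AR toolkit.

class ObjectSlider:
    """Maps a tracked object's rotation (degrees) to a 0..1 slider value."""

    def __init__(self, min_angle=0.0, max_angle=270.0):
        # 270 degrees of usable travel, like a physical potentiometer.
        self.min_angle = min_angle
        self.max_angle = max_angle
        self.value = 0.0

    def update(self, tracked_angle):
        # Clamp the reported angle to the usable range, then normalize.
        clamped = max(self.min_angle, min(self.max_angle, tracked_angle))
        self.value = (clamped - self.min_angle) / (self.max_angle - self.min_angle)
        return self.value

slider = ObjectSlider()
slider.update(135.0)  # half the travel of the "knob" -> 0.5
```

The point is that the physical object carries no electronics at all; the AR layer supplies both the tracking input and the projected visual feedback.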



You might find the work coming out of MIT's Fluid Interfaces group fascinating.

The "Smarter Objects" (http://fluid.media.mit.edu/projects/smarter-objects) project seems to tackle some of the things you're talking about. This article (http://singularityhub.com/2013/05/20/virtual-and-real-object...) explains it a bit more.


Thanks for this!



