This essay is what leads me to think about the possibilities of user interfaces composed of real-world objects that become "smart" via augmented-reality projection. For example, an ordinary tennis ball could become a slider, a knob, or even a virtual storage container when paired with a heads-up display like the one produced by Meta.
I wonder if the future of user interfaces is simple but universal real-world controls (a meatspace UI toolkit, so to speak) combined with AR. With AR, any surface is a display. The contemporary "pictures under glass" model of UI falls out of the limitations of current display technology: displays are a type of surface, but not all surfaces are displays. If most flat surfaces become displays (virtual or otherwise), the design space of user interfaces loses a major constraint, and fundamentally new things should be possible.