
I see. In theory, like any idea, it sounds cool. I've actually taken real-world pictures around my building to mock up a possible UI at some point.

The issue is that, as far as I know, geolocation (GPS at least) has roughly ±10 meter precision, so placing content at a specific spot is fuzzy.

PS: by placing content at a specific spot, I mean grabbing the latitude/longitude/height and possibly direction from the device, and allowing anyone nearby to see it by pointing a device (smartphone/tablet/glasses) there.

Edit: of course, whether the precision is really an issue depends on what you're "tagging". If you're adding stuff to a town square, ±10 meters doesn't make any difference :-)
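That ±10 m uncertainty can be sanity-checked with a quick great-circle (haversine) distance calculation. This is just an illustrative sketch; the `haversine_m` helper and the sample coordinates are made up for the example:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (in degrees)."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A GPS fix that is off by ~0.00009 degrees of latitude is off by about 10 m:
err = haversine_m(40.0, -74.0, 40.00009, -74.0)
print(round(err))  # ~10 m: huge when tagging a doorway, negligible for a town square
```

So the same error budget that ruins doorway-level tagging is lost in the noise at town-square scale.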




There's an iOS app called "Minecraft Reality" which uses camera tracking to place models in physical spaces. It's not seamless by any stretch (you manually download each model from a list of nearby placements before viewing), and it loses sync if the scene changes too much, but it does a pretty good job of accurately placing the objects. I imagine that taking advantage of 3D sensors like the Leap Motion could make it a lot more robust.


For what it's worth, there are many, many smartphone apps that enable this, some using augmented reality and some not, plus many other research-level prototypes in the literature.

If you've never heard of any of them, perhaps that's because in practice it's not as great an idea as it sounds.


Good to know, thanks! (one less piece of vaporware I have to worry about)



