
In my experience with the Leap, learning the limits of the capture space and the device's sensitivity in each individual application matters more than getting the gesture perfectly right. Another thing people don't grasp immediately is that you're not signalling at the visual medium on the screen to perform tasks; the screen doesn't matter at all. It's a sensory feedback loop adjustment that takes a few hours of tinkering, but once you get it, you get it. The main problems are gorilla arm and the tediousness of some tasks, the latter of which will hopefully be solved in software. Check out the 3D skull explorer app if you get a chance; it's the best implementation of gestures I've seen for the Leap, and actually usable. I could definitely see people who regularly need to manipulate 3D models adopting similar metaphors.


