
Please, for the love of Jef Raskin and Henry Dreyfuss and Don Norman and all that is human factors, no.

The film version of Minority Report was not a model for practical or usable interface design. Millions of years of evolution have built our brains and bodies for interacting with things that provide physical feedback when we touch them. Waving a pencil in the air, "manipulating" an invisible item while watching a screen for visual feedback: these are not good experiences. Even if you discount the "gorilla arm syndrome" that StavrosK quite rightly points out here, the fatigue of trying to perform fine, accurate motions without physical stimuli for your hands and fingers to respond to is significant.

I'm sorry to be a negative voice in the face of innovation, but this really does feel like a technology in search of a problem. What worries me greatly is that it has a remarkably high "cool factor" that would be excellent in short demos, and could be easily pitched to companies looking for a flashy feature to get a leg up on the competition. We were saddled with some dubious decisions at the dawn of the GUI age, and we're just starting to lose them as we enter the Direct Manipulation age of interfaces. Please don't let this concept of feedback-free hand gestures become a paradigm that we're stuck with in the future.




As an embodied cognition guy, I disagree here pretty strongly. Gestures are a powerful form of communication - consider deaf signers, scuba divers, even time-sensitive settings like the military and sports. Gestures likely have a deeper evolutionary history than spoken language, too. They are intuitive. Babies mimic long before they speak or manipulate.


That's an interesting perspective that I hadn't thought of before.

The usual approach to a 3D gestural interface is the kind of thing that's shown in the video for Leap—writing in the air, using mimed actions in space to represent manipulation of objects on a 2D screen, et cetera.

Gestures as an abstraction, like sign language or even the everyday hand gestures we use like flipping the bird, the "A-OK" sign, and such make a lot more sense. If you move away from the idea of using hand waving as a stand-in for direct manipulation of objects, and look at gestures as a form of communication, it's a whole different ball game.

Thanks for that. I still look at the demo video for Leap with fear and loathing, but using that same hardware for a communicative gesture system like you suggest is exciting indeed. Now I'm going to be distracted all day thinking about ways to incorporate hand signs into a UI.


The big question for me is whether gestures began as a proto-language. Consider mimicry and the use of tools in social learning. That's animal cognition too.


Upvoted both of you for civility :-)


> Gestures are a powerful form of communication - consider deaf signers, scuba divers, even time-sensitive settings like the military and sports.

In all of these examples, gestures are being used to interact with another sentient being. We use gestures to talk to people, not to tools.

I think gestures are great, but I don't want to have a conversation with my computer, I want to use it. I want to feel like I'm a craftsman, and not a manager of the work I create on it. (For that same reason, I'm not enthusiastic about voice recognition either.)

That being said, I probably would be excited to use gestures (and voice) for social software that was intrinsically about interacting with other people. Think multiplayer games or video chat.


Yes for communication with humans, but for interacting with a computer? Gestures are imprecise, physically tiring and non-intuitive for the very precise and specific actions we have to do on a PC every day.


Those are based on static symbols and are not used for precise, realtime control of a stateful 2-D system.


Pardon me for a shameless plug here, but I had to jump in, as this is an interesting discussion about something we at Flutter have thought about for a long time. We came up with a simple gesture to play/pause a song (iTunes/Spotify) by asking thousands of people what gesture they would use if they had no mouse, keyboard, or voice. Almost everyone came up with the one we picked. Even for the next set of gestures, we're looking at static human gestures that are micro and intuitive (and don't fatigue you).

When we released the first app, we got countless emails asking for a hand swipe as the next-song gesture. We had felt long ago that even though it seems natural, it is completely impractical in certain contexts (e.g., a coffee shop), and doing a hand swipe 20 times starts to wear you down. It was important to try it that many times, because the same gesture might be used to go to the next slide, next photo, or next album.

Hence, we ended up picking thumbs-right and thumbs-left as the metaphor: flutterapp.com/next
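
(For the curious: a gesture-to-action mapping like the one described might look something like the minimal Python sketch below. The pose classifier and media-key sender are hypothetical stubs for illustration, not Flutter's actual implementation.)

    # Illustrative mapping from static hand poses to media actions.
    # classify_pose and send_media_key are hypothetical stubs.
    POSE_ACTIONS = {
        "open_palm":   "play_pause",  # the play/pause gesture described above
        "thumb_right": "next_track",  # thumbs-right -> next song
        "thumb_left":  "prev_track",  # thumbs-left -> previous song
    }

    def classify_pose(frame):
        """Stub: a real system would run a trained pose classifier here."""
        return frame.get("pose")

    def send_media_key(action):
        """Stub: a real system would emit a media-key event to the OS."""
        print("media action:", action)

    def on_camera_frame(frame):
        pose = classify_pose(frame)
        action = POSE_ACTIONS.get(pose)
        if action is not None:
            send_media_key(action)

    on_camera_frame({"pose": "thumb_right"})  # -> media action: next_track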

We will also hit our 4 millionth gesture this month. That's 4 million times someone has either played or paused a song.

Please try our app and send feedback - would love to get your thoughts on this!

This is one of the most thoughtful discussions I've seen on this subject! Thanks, all, for the stimulating thoughts. I just thought I had something to share...


Fred Brooks agrees. He broadly outlines a design system using gestures in The Design of Design. (I know PHK already exceeded the quota for Fred Brooks references this week; I guess we'll just have to get over it.)


If you look at it, it's sensitive enough to follow finger motions, not just whole-hand motions. So it could even follow your fingers as your hands rest on a desk.

I'm not convinced the lack of tactile feedback is a problem provided there is very good visual feedback. Do you have any studies to this point?

Furthermore, I think basic pointing and pinching are only the beginning of the capabilities this system can provide. More complex hand signals, or even face, body and posture signals could drastically increase the bandwidth of human/computer interaction, even by supplementing a keyboard.
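
(As a thought experiment, here's a minimal Python sketch of pinch detection from fingertip positions. The fingertip data model is hypothetical; any real tracking SDK will differ.)

    # Illustrative pinch detection from 3-D fingertip positions (mm).
    # The fingertip data here is hypothetical, not any real SDK's API.
    import math

    PINCH_THRESHOLD_MM = 20.0  # tips closer than this count as a pinch

    def distance(a, b):
        """Euclidean distance between two (x, y, z) points."""
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    def is_pinching(thumb_tip, index_tip):
        return distance(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

    print(is_pinching((0, 0, 0), (8, 0, 0)))  # True: tips 8 mm apart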


This isn't a study, but any scenario where you might have impaired vision is a case where tactile feedback is necessary.

While I do think tactile feedback is extremely important, I won't discount the ability of humans to adapt. Should a compelling enough system be developed, I imagine anyone who wanted to would learn and adapt to it -- the problem here is getting the vast majority of users to adapt to it.

There's obviously going to be a lot of backlash against a product like this (and equal amounts of excitement). I think the problem is that after such a well-publicized film, everyone assumes tech like this is going after a Minority Report-type paradigm. While they definitely have to (somewhat) pitch it like that, I'm very excited to see the different ways something like this can be used. It definitely seems like a great step forward.


> If you look at it, it's sensitive enough to follow finger motions, not just whole-hand motions. So it could even follow your fingers as your hands rest on a desk.

I agree, you don't need to use your whole arm. It's like playing tennis on the Wii. You don't have to flail your arms around. You just need enough motion for the accelerometers.


There are many good reasons this is a useful product: demoing 3D models, education platforms, music performance... I love my keyboard when working, but aren't the Wii and Kinect among the most successful gaming platforms? If this has a good API, there's no reason it can't fill a niche need.
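
(To make the niche concrete: here's a minimal Python sketch of the kind of thing a good API would make trivial, driving a 3D model viewer from hand position. The stream of hand x-positions is hypothetical, not any real SDK's output.)

    # Illustrative sketch: turning a 3D model as the hand moves sideways.
    # The hand x-positions (mm) are hypothetical tracking input.
    DEGREES_PER_MM = 0.5  # sensitivity of the rotation

    def yaw_updates(hand_x_positions):
        """Yield model yaw angles (degrees) as the hand moves left/right."""
        yaw = 0.0
        previous = None
        for x in hand_x_positions:
            if previous is not None:
                yaw += (x - previous) * DEGREES_PER_MM
            previous = x
            yield yaw

    # The hand drifts 40 mm right -> the model turns 20 degrees.
    print(list(yaw_updates([0, 10, 25, 40]))[-1])  # 20.0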


I agree that the lack of touch feedback is a terrible experience. But this is nevertheless an amazing piece of technology that opens new doors to a lot of applications (in the sense of "applying" it, not software). We've seen what the community around Kinect has been able to build with much less powerful sensors. I'm excited to see what people might imagine and achieve with this one. Moreover, nothing prevents you from coupling this impressive sensor with another device that provides other kinds of sensory feedback. And why narrow it down to a control device? Why couldn't it be used to gather other kinds of input? Monitoring activity? Assisted vision? Who knows what people have in mind for it?


The way I see it, it's an innovative interface that costs us (users and companies) nothing to test. If it's not intuitive or useful, people won't use it, and either the company adapts to the users or it folds. Simple as that.


I can imagine a world that uses this or similar technology, holograms, and some sort of soft electrical shock to manipulate 3D interfaces.

Don't write it off. This might be Xerox PARC's mouse, waiting for Jobs.


"There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy."

Yes. I saw an obvious use for this in our products, and it would solve a known problem we have, so I immediately fired off an email to them. The sooner I can get a few of these in my hot little hands the better.

Thank you to whoever submitted this link!


How about for presentations? You're already using your hands and talking naturally; why not incorporate that into your talk?

I wouldn't use it as a keyboard replacement, but for other tasks that naturally fit into this type of human input.
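
(A minimal Python sketch of what that could look like: detecting a horizontal hand swipe and mapping it to slide navigation. The window of recent hand x-positions is hypothetical tracking input.)

    # Illustrative swipe detection for advancing slides during a talk.
    # recent_x is a hypothetical window of hand x-positions (mm), newest last.
    SWIPE_DISTANCE_MM = 150.0  # travel needed to count as a deliberate swipe

    def detect_swipe(recent_x):
        """Return 'next', 'previous', or None from recent hand positions."""
        if len(recent_x) < 2:
            return None
        travel = recent_x[-1] - recent_x[0]
        if travel > SWIPE_DISTANCE_MM:
            return "next"
        if travel < -SWIPE_DISTANCE_MM:
            return "previous"
        return None

    print(detect_swipe([0, 60, 120, 180]))  # next: hand moved 180 mm right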



