Leap Motion: Amazing, Revolutionary, Useless (hanselman.com)
173 points by kevin_morrill on Aug 14, 2013 | 135 comments



Believe it or not, this is a century-old problem.

The theremin is an electronic musical instrument played by waving your hands in the air. It works by detecting RF capacitance between a pair of antennae and the player's body. You can see the theremin being played at the YouTube link below.

Playing the theremin is incredibly difficult, due to the lack of tactile feedback. The human body is very poorly equipped to point precisely at an arbitrary position in free space. Only a handful of players can achieve anything better than squeaky science fiction noises and even virtuoso players struggle constantly with intonation. Modern theremin technique depends on a system of discrete hand gestures, which reduce the player's dependency upon coarse proprioception.

If the Leap Motion is to have any real utility, it will need phenomenally sophisticated software that interprets intent from hand motion rather than simply passing the hand location through as raw input. The human body simply isn't capable of making the kinds of movements that the designers of the Leap Motion seem to expect, even with a great deal of practice.

https://www.youtube.com/watch?v=Ptq_N-gjEpI
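
To make that concrete, here is a minimal sketch of the kind of filtering such software needs at the very least (pure Python; all names here are hypothetical and this is not the Leap SDK): smooth the raw positions, then ignore motion smaller than natural hand tremor before treating it as intent.

    # Minimal sketch: treat raw hand positions as noisy, and pass motion
    # through only once it looks intentional. Hypothetical, not Leap SDK.
    class IntentFilter:
        def __init__(self, alpha=0.3, dead_zone_mm=2.0):
            self.alpha = alpha              # smoothing factor, 0..1
            self.dead_zone_mm = dead_zone_mm
            self.smoothed = None

        def update(self, raw_xyz):
            """Feed one raw (x, y, z) sample in mm. Returns the filtered
            position, or None while movement is below the tremor threshold."""
            if self.smoothed is None:
                self.smoothed = raw_xyz
                return raw_xyz
            # Exponential moving average damps high-frequency jitter.
            self.smoothed = tuple(self.alpha * r + (1 - self.alpha) * s
                                  for r, s in zip(raw_xyz, self.smoothed))
            # Dead zone: drift smaller than hand tremor is not intent.
            dist = sum((r - s) ** 2
                       for r, s in zip(raw_xyz, self.smoothed)) ** 0.5
            return self.smoothed if dist > self.dead_zone_mm else None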


Thank you for the insight. Really interesting. But popular sci-fi movie features like waving hands in the air, touch screens parallel to the floor, and transparent monochrome screens (not HUDs) just seem so obviously wrong to me, and I don't understand why so many people fail to see it.


I wonder if you could add some tactile feedback with some precision fans? A grid of fans below your hand and in front of it could provide a gradient of subtle pressure. Or maybe a fan on some sensitive servos that track each fingertip?


That's pretty much exactly the idea behind the "Aireal" project from Disney Research:

http://www.disneyresearch.com/project/aireal/


As I understand it, a problem with the theremin is also that its sensitivity depends on humidity and anything else that affects capacitance. Capacitance also depends on the shape of the hand (and arm position, and body position, etc.), so it doesn't translate directly to linear distance coordinates. That makes it hard to build reliable muscle memory. The Leap, in principle, should be able to avoid these problems, although so far a lot of the software written for it fails at this.

Also, a theremin is always on; you can't, for example, turn detection on or off based on the number of fingers you hold up.
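
That difference suggests an obvious "clutch" design. A rough sketch (the per-frame data shape here is invented, not the actual Leap API): only treat motion as input while a chosen hand pose is held.

    # Sketch of a finger-count clutch: tracking is live only while
    # exactly one finger is extended. The frame dicts are hypothetical.
    def gated_pointer_events(frames):
        """frames: iterable of dicts like
        {'extended_fingers': 1, 'tip_xyz': (x, y, z)}."""
        for frame in frames:
            if frame['extended_fingers'] == 1:
                yield frame['tip_xyz']   # detection "on"
            # any other pose: detection "off", emit nothing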


Alternatively, Sheldon and his theremin :-)

https://www.youtube.com/watch?v=XPL8u8gJL0A


I think the article is a bit unfair. I've been playing around with a Leap for a few days and am suitably impressed.

What works very well with the device is coarse movement, especially relative hand movements. What doesn't work so well is finer gestures (1/100th-of-a-millimeter motions of all 10 fingers? Ha).

This app is a perfect example: https://airspace.leapmotion.com/apps/cyber-science-motion

You can use your hands, kept flat, to spin around / zoom a 3-D rendering of a human skull. You can also point at specific elements on the skull. Both of these coarse gestures work great, and the experience is incredible.

Unfortunately, the app also has a "click" gesture to pick apart elements of the skull: you click by spreading out your thumb and then folding it back in. It works terribly, as this fine gesture is detected maybe 50% of the time. It should simply have been left out.

I showed this app to my dad, who's a doctor, and he was blown away. He was visibly excited about the potential for a device he can use to spin around CT and MRI scans in the operating room without having to touch a mouse/joystick - currently he has a person doing this for him to keep things sterile, and this can sometimes be frustrating.

The Leap, at least in its current incarnation, reminds me a lot of Google Glass. Both Google and Leap, along with their proponents, say the devices are going to change the world. Maybe, maybe not. Neither device works as flawlessly as what you see in the heavily edited demo videos. But both can be invaluable in certain specialized fields, today, as long as folks are realistic about what can be done with them.


> I showed this app to my dad, who's a doctor, and he was blown away. He was visibly excited about the potential for a device he can use to spin around CT and MRI scans in the operating room without having to touch a mouse/joystick - currently he has a person doing this for him to keep things sterile, and this can sometimes be frustrating.

This is exactly the value proposition of my startup, TouchFree Labs. We're developing software that uses the Leap Motion Controller to allow surgeons to manipulate medical images inside of the operating room. You can see a demonstration of an early prototype here: http://www.youtube.com/watch?v=WaO-cimDOEQ. Demo starts at about 35s. (Apologies in advance for the low production value.)

Right now our bottleneck is medical expertise, and we're looking for surgeons who would be interested in collaborating with us. We're developing workflows that are tailored for different types of procedures, which requires very specialized knowledge. The application also learns the nuances of individual users' movements to improve gesture recognition, which requires lots of data.

I don't know how far you are from Toronto, but if you could pass the message along to your dad, I'd be very grateful, if only to get some basic feedback. And if he's interested, he could be among the first surgeons in the world to use the Leap Motion inside an operating room.


Is there anything you can say about how you arranged a license for this use? The default Leap EULA forbids use with medical equipment, as well as use on a shared work-station.


We're still working on this. But since our product won't be used for diagnostics or direct surgery, the regulatory requirements should be a bit more relaxed.


Nice video, and this is a great target market. My dad's in India so collaboration could be a challenge :) PM me if you think he can still help though - I'm here on vacation for a few more weeks.

BTW, not sure if you saw the following video - it was embedded in the last newsletter from Leap, and shows a vet controlling OsiriX (an open source viewer for medical/DICOM images) using Leap: https://www.facebook.com/photo.php?v=10152121411384392

It looks a little hokey compared to Cyber Science or your app, but the approach of simply plugging into and enhancing an existing viewer seemed quite pragmatic.


Neat stuff. I work near U of T, and I've got an Oculus VR. Want to get together for lunch?

Re: video production values, one simple improvement would be to start with action. Start with a quick clip of you manipulating the image, without any explanation. After you've elicited a "Wow!", then explain.


The Oculus looks awesome. What do you think of it after using it?

Lunch would be great! Send me an email and we'll figure it out.

And thanks for the tip regarding the video; I'll keep it in mind for the next one.


My parents are both physicians (though not surgeons); I'm sure they'd be interested in what you're doing and should be able to connect you with some interested surgeons. Please send me an e-mail at eli[at]storied[dot]us if you'd like an intro.


Done.


Why don't you just bring your product to the hospital where you work?


It's in the works. But the more surgeons we get on board, the better.


Kinect for Windows already has completed medical applications that let surgeons rotate 3D imaging, as you suggest: http://research.microsoft.com/en-us/news/features/touchlesss...

It should improve drastically when the new Kinect from the Xbox One gets released for Windows.


Too much technology for a simple problem. Add a foot-pedal controller (think of a DDR pad) and the problem is solved.


At this very moment I am using Apple's external trackpad. If this thing had Leap tech built in, it would, to me, be the next logical evolution of computer interfacing. I imagine the trackpad being a bit wider, and the user could just raise his hands when needed and touch in just about any other situation. *I haven't used the Leap yet.


A big problem with the LEAP is that there isn't an effective way to click / select something. Pushing forward with your index finger isn't very accurate when the finger tip is also controlling the position. Hence, you always seem to miss where you intend to click. Good selection is a pretty important piece of almost any useful application.

Disclaimer: I work at 3Gear Systems (http://threegear.com), developing technology that possibly competes with the LEAP. We solve clicking by tracking the entire hand -- not just the straight finger.


This is very interesting, doubly so because at my last gig (retailnext.net) we were looking into shopper tracking via ToF technology.

Fine gestures like clicking do indeed work terribly on the Leap. Their own app (Touchless) uses a difficult "poke" gesture for clicking instead of some other coarse, easily detectable gesture.

The one thing the Leap has going for it, though, is that it's small and positioned under the wrist, which makes installation super-easy, and at some point it will be built into some laptops and keyboards. This means it will likely do better in the consumer space (assuming they can fix all the bugs, that is).

However, if your technology can detect more gestures robustly, it will do WAY better in professional environments where ease of installation is not such a big deal (e.g. the operating room, animation studios, etc.). I'm sure you already know this, but just typing out my thoughts :)


A couple medical device companies and a hospital are evaluating our system right now. :-)

We're actively working on supporting smaller / shorter-range sensors as well. You probably already know that in addition to the Kinect, a lot more depth cameras are on the market now: PrimeSense's Capri, SoftKinetic, PMD, Inuitive Tec, etc. All of these companies have introduced gum-stick sized sensors that can be embedded in a laptop or monitor.


I actually had no problem clicking accurately. Instead of pushing "forward", I twitched my finger downwards, as you normally would when clicking a mouse. My problem was that fingers would randomly appear and disappear, even though my hands hadn't moved (or at least not enough for me to notice). I attempted to play Cut the Rope and was completely unable to get it to reliably track a single finger.
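
That flicker is the kind of thing a small debounce layer could paper over. A minimal sketch (hypothetical, no particular SDK): a finger's "present" state only flips after the tracker has contradicted it for several consecutive frames.

    # Debounce flickering finger detections: the state flips only after
    # `hold` consecutive contradictory frames.
    class FingerDebouncer:
        def __init__(self, hold=5):
            self.hold = hold
            self.present = False
            self.streak = 0

        def update(self, detected):
            """detected: whether the tracker saw the finger this frame.
            Returns the debounced presence state."""
            if detected == self.present:
                self.streak = 0          # evidence agrees; reset counter
            else:
                self.streak += 1         # contradictory evidence accumulates
                if self.streak >= self.hold:
                    self.present = detected
                    self.streak = 0
            return self.present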


Couldn't you use a foot pedal as a mouse button? You could easily add multiple switches or triggers that way.


Or microphone.

"this" for clicking, "draaaaaaaag" for dragging.


Would it be too annoying to add another channel for a click, like a sound or voice signal?

The other alternative I'd see would be to have a virtual representation and to be able to "click" on it, but that isn't really for the near future...


The Leap won't be a "real input device" until you can get your hands out of the flat orientation. At this point, it is a toy.

I think the article is good coverage of the state of the device.


Imagine using your hand as a mouse. Just move it around on the desk next to your keyboard, tap the surface of the desk with a finger to click, and you've automatically got five buttons, one for each finger. I guess that would be kind of like a "virtual" trackpad; plus, if you need to go 3D, lift your hand off the desk.


I think, from the example cited, what this author needs is a touchscreen laptop.


he has one


What's the lag like?


Not noticeable at all (I'm testing on a 2 yr old Macbook Pro).


I remember the latency being around 4-15 ms, depending on the number of hands in the frame. That's very low, but compared with 1-2 ms latency it would still be noticeable. Applications can add additional latency on top of that.


60 Hz+ seems good for this type of tech, especially in its infancy; at 60 Hz the frame interval alone is ~16.7 ms. Obviously it would be great to have 1000 Hz input, or even a touch screen at 1-4 ms, on readily available devices.


I think I have a use for this if their SDK is good. I'll check it out. Thanks!


There's a general assumption that the Minority Report interface is the interface of the future.[0] There are a few reasons why I say no.

1. First and foremost, gorilla arm.[1] My presumption about the "interface of the future" is that it's needed for prolonged use. So, first things first: the interface can't be one where our arms require our hands to be higher than our elbows, unless of course our species gets a whole lot stronger in the forearm to support such a feature. I don't see our species doing that anytime soon.

2. Feedback - right now the feedback loop is eye->brain->hand->brain->eye (repeat), where the hand's pressure against a solid surface is the most important feedback response. With the Minority Report style interface we currently have a massive delay (comparatively speaking) in the brain->hand->brain loop. We also have to iterate the whole loop much more, because we need to constantly reassess with our eyes where our hand is in physical (not digital) 3D space. Now let's say the technology gets much better and reduces this delay to 5 ms. We are then bound by the difference in how quickly our synapses register touch versus sight. I could be wrong, but my assumption is that "touch" will always beat "sight" in performance.

For prolonged-use applications my bet is on adaptive surfaces. For short-term interactions (turning a stove on, flicking a light switch, etc.) I can potentially see this Minority Report style interface happening. But does the benefit justify the cost of the innovation? Personally I think we are fooling ourselves.

[0] - http://www.ted.com/talks/john_underkoffler_drive_3d_data_wit...

[1] - http://en.wikipedia.org/wiki/Touchscreen#.22Gorilla_arm.22


Very true. I work at a company building an alternative gestural input device (http://threegear.com). Here's how we have tried to address your points.

1. Gorilla arm -- keep your hands low. We support tracking and interactions literally 1cm above the keyboard / desk. We're mounting the camera above the monitor to achieve this.

2. We use gestures with built-in physical feedback. For instance, our click mechanism is a "pinch" which brings the thumb and index finger tips together. You can "feel" the physical touch event between your fingers when you trigger a command.
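
The nice property of a pinch is that it reduces to a single scalar. A rough sketch of how one might detect it (hypothetical inputs; this is not our actual implementation): threshold the thumb-index distance, with hysteresis so the click doesn't chatter at the boundary.

    import math

    # Sketch: detect pinch press/release from thumb and index fingertip
    # positions (mm). Two thresholds (hysteresis) prevent chattering.
    class PinchClicker:
        def __init__(self, press_mm=25.0, release_mm=40.0):
            self.press_mm = press_mm      # closer than this: press
            self.release_mm = release_mm  # farther than this: release
            self.pressed = False

        def update(self, thumb_xyz, index_xyz):
            """Returns 'press', 'release', or None for this frame."""
            d = math.dist(thumb_xyz, index_xyz)
            if not self.pressed and d < self.press_mm:
                self.pressed = True
                return 'press'
            if self.pressed and d > self.release_mm:
                self.pressed = False
                return 'release'
            return None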

Shameless plug: an old video showing interaction with Reddit, Google Maps, browser. http://www.youtube.com/watch?v=U0WLh7WNxCI


Spot on.

We got one of these devices and did the demos, waving our hands around to move through Google Earth. But then our hands started getting tired, and we couldn't really see how this was better than a mouse or joystick.

Maybe it would be cool for interactive kiosks or little showroom gimmicks, but for prolonged use? Forget about it.


I agree with you that a Minority Report style interface doesn't make sense as the sole interface to a PC. However, something like the Leap IMO makes a great secondary interface to a keyboard and mouse/trackpad.

I'm using a Leap controller right now, with BTT for Mac and Touchless for gesture-based control. As I read through a page I can simply stick my hand out and wave it upward to scroll down the page - it's a phenomenal experience for passive reading, as I don't have to break focus to reach for my mouse/trackpad. I've also configured some additional coarse gestures to launch Mission Control, etc.

Using the Leap for such brief, coarse gestures avoids both of the problems you've mentioned: my arm is resting on my desk, with fingers just a few inches above my trackpad/keyboard, so no "gorilla arm" problems; the gestures are coarse, requiring very little hand-eye coordination; and finally, the gestures are brief, so no fatigue problems.

All of this breaks down once you start trying to do any finer-control gestures, like trying to point at links and click on them like the OP tried to do. IMO the Leap should be used to augment the keyboard/mouse as a secondary interaction interface that you use occasionally.


I don't really understand how reaching out and waving my hand to scroll a page is a step forward from simply using the scroll wheel on the mouse that's already under my hand. Even if it's not under my hand, my peripheral vision is good enough that I don't need to break focus on what I'm reading to position my hand on the mouse.

I totally accept that commenting on a Leap without first using one could / will make me look stupid.


>I don't really understand how reaching out and waving my hand to scroll a page is a step forward from simply using the scroll wheel on the mouse that's already under my hand

The real value proposition is not necessarily in applications designed under existing UI paradigms like scrolling text. It does, however, let you accomplish "science fiction" effects like changing the camera position on a 3D model far, far more easily than the mouse.
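
To make that concrete, here is a sketch (hypothetical input, no particular SDK) of how relative palm motion maps almost directly onto orbiting a camera around a model - coarse, relative movement is all that's needed, with no pointing precision at all.

    import math

    SENSITIVITY = 0.005   # radians of camera rotation per mm of palm travel

    # Sketch: map relative palm motion to orbit-camera angles.
    class OrbitCamera:
        def __init__(self):
            self.yaw = 0.0
            self.pitch = 0.0
            self.last = None

        def update(self, palm_xy):
            """palm_xy: (x, y) palm position in mm for this frame."""
            if self.last is not None:
                self.yaw += (palm_xy[0] - self.last[0]) * SENSITIVITY
                # Clamp pitch so the camera can't flip over the poles.
                new_pitch = self.pitch + (palm_xy[1] - self.last[1]) * SENSITIVITY
                self.pitch = max(-math.pi / 2, min(math.pi / 2, new_pitch))
            self.last = palm_xy
            return self.yaw, self.pitch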


Is it better than a touchpad? My iPad with touch does rotating and zooming really well.

I can see this being an awesome technology in an operating room where they don't want to touch things for sanitation reasons. I have a hard time imagining how it is useful in my day to day use of a computer.


I guess this boils down to personal preferences. When reading a full page of text I usually take my hands off the keyboard/mouse and lean back. Not having to find the mouse every so often is nice.

Again, I'm not saying that using the Leap for OS interaction is ground-breaking and akin to the first time I ever used a mouse - it's simply a nice addition to my current setup.


A coworker and I have written some gesture recognizers for the Leap in C# and Objective-C, hoping to make the thing more usable and to make it easier for other developers to write software for it. I hate to do a shameless plug, but both can be found here:

https://github.com/uacaps/MotionGestureRecognizers-ObjC

https://github.com/uacaps/MotionGestureRecognizers-CSharp

---

We're hoping to lay the community foundation for tools that help make the Leap extremely usable from both a development and a user-experience standpoint. The Leap is awesome and beautiful, and we think it can be used in a myriad of applications.


If all you do is browse the web, a touchpad is often all you need.

For text editing / word processing, a good keyboard is often all that's needed, and the use of a mouse is often discouraged by gurus.

I still can easily imagine using the Leap Motion device while editing images and especially 3D models. Even more I can imagine using it in games, especially games written with this device in mind.

I don't own the device, but I've tried it. What's great is that you don't need to wave your hands in the air, Minority Report-style; moving your fingers is enough. I wish it were built into a keyboard; it would easily replace a touchpad / trackpoint while adding many more capabilities.

BTW, does anyone here remember how clumsy mice were on PCs in, say, 1992?


Mice worked pretty well in 1992. They worked well in 1984 on the Mac, and five years before that on Xerox hardware and Lisp Machines (though they suffered from "really small ball bearing" disease, and easily got dirty or cranky and refused to roll well).

With a mouse, you have at least one button you can signal an event with.

Imagine doing a UI where you didn't have a mouse button. All you can do is move and point. That's a Kinect, for the most part.

I haven't used a LeapMotion, but I suspect it's the same problem; there's no way to generate a discrete event. It's all fuzzy. Did your fingers touch? Did you wave in a particular way? Some fuzzy matcher is pumping out "90% probability of event X, 75% probability of event Y" every few milliseconds, and it's up to higher layers to turn this goo into decisions that people are happy with. It's hard at all layers.

I really think you need a button, a clicker. Something "hard" in the UI that slams a voice of reason into that fuzzy tower that's continually only able to /guess/ what you're trying to do.

[We wanted a clicker on Kinect. Politically impossible. I think it would have helped a lot.]
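
For what it's worth, the usual trick for turning that goo into something click-like is hysteresis plus a hold time: fire only after the matcher has been confident for several consecutive frames, and re-arm only once confidence has clearly collapsed. A rough sketch (invented thresholds, no particular SDK):

    # Turn a stream of per-frame gesture probabilities into discrete
    # events: fire once confidence stays high for `arm_frames` frames,
    # then stay quiet until it drops below a lower threshold.
    class FuzzyEventTrigger:
        def __init__(self, fire_at=0.9, rearm_at=0.4, arm_frames=4):
            self.fire_at = fire_at
            self.rearm_at = rearm_at
            self.arm_frames = arm_frames
            self.run = 0
            self.armed = True

        def update(self, p):
            """p: the matcher's probability for the gesture this frame.
            Returns True exactly once per sustained high-confidence run."""
            if self.armed:
                self.run = self.run + 1 if p >= self.fire_at else 0
                if self.run >= self.arm_frames:
                    self.armed = False   # fired; wait for confidence to drop
                    self.run = 0
                    return True
            elif p <= self.rearm_at:
                self.armed = True        # gesture clearly ended; re-arm
            return False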


Actually, it's quite possible to build a "clicker" on the Kinect. It involves mounting the sensor from above, and building completely different software that tracks the hands and fingers well.

We've done it: http://threegear.com Here's a video of tracking arbitrary hand motion using a Kinect-equivalent sensor: http://youtu.be/exZ6wukQCpk


I think he was calling out Microsoft politics as limiting the functionality of the Kinect, not technical expertise.


Video of hands (fingers) doesn't seem like the hard part here, though (relatively speaking; I'm sure it's plenty hard). Tracking the finger is easy. The hard part is recognizing the input the user wants: figuring out when that finger motion is a click.

The video on your website is significantly more impressive than the linked youtube video. That being said, I still didn't see any "clicking". Perhaps I missed it.


Note that I mentioned PCs of 1992: already the majority platform, and one that still sucked in this regard. (On Douglas Engelbart's computer, mice worked impressively well back in 1968.)

Take a look at a typical laptop; its trackpad also can't seem to generate click events, yet people tap on it and happily ignore the hardware buttons (if they're present at all). The same applies to the wildly popular touch screens. Despite the fact that the finger's projection on a touch surface is large and fuzzy, it allows for rather fine motions.

The same goes for the Leap Motion (and probably the Kinect): you can define a 'click zone' on a hard surface, like your desk or screen, or tap your fingers one against another. You instantly have 8-10 'mouse buttons'. A finger touch is pretty well-defined: not only the positions but the velocities of the fingers change in a discernible pattern.
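
As a sketch of that 'click zone' idea (hypothetical data, no particular SDK): declare a virtual plane a few millimetres above the desk and fire when a fingertip crosses it moving downward fast, which captures exactly that position-plus-velocity signature.

    DESK_Y_MM = 0.0          # height of the desk surface
    ZONE_Y_MM = 8.0          # virtual plane 8 mm above it
    MIN_DOWN_SPEED = 150.0   # mm/s downward for a deliberate tap

    # Sketch: a tap fires when the fingertip punches down through the
    # virtual plane; lifting back out of the zone re-arms the detector.
    def detect_taps(samples):
        """samples: iterable of (tip_y_mm, y_velocity_mm_s) per frame.
        Yields the frame indices where a tap begins."""
        in_zone = False
        for i, (y, vy) in enumerate(samples):
            if not in_zone and y < DESK_Y_MM + ZONE_Y_MM and vy < -MIN_DOWN_SPEED:
                in_zone = True
                yield i
            elif in_zone and y > DESK_Y_MM + ZONE_Y_MM:
                in_zone = False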


> BTW does anyone here remember how clumsy were mice on PCs in, say, 1992?

I came here to state this. I'm studying interaction design, and while I understand the author's frustrations, the problems do largely seem to be poor interface design around how to use the data the Leap collects about your hand (and the visualiser demonstrates how accurate that data is).

Basically, the current basic demos try to mimic a mouse by way of extremely clunky gestures. That won't work: for this to take off, the interface needs to be designed from the start with gestures in mind. I have some ideas on how to do that, but it will require some further tinkering and testing.

The sensor itself is amazing and, in my experience, very reliable - although I might be biased after having tried to design gesture-based interfaces with the Kinect and not succeeding due to its technical limitations and unreliability.


> I wish it was built into a keyboard

This may very well happen if the Leap takes off. They already have distribution deals with HP and Asus; the next logical step would be building it into the laptop. Should be quite possible since the device is small and relatively cheap.


I find the Leap part of BTT (BetterTouchTool) is actually... err.. use-worthy..? Neither useful nor useless.

I've got some really cool and useful stuff working (still a big part of the appeal for me), augmenting my mouse/keyboard use. For example, a finger to the left minimises, and two fingers to the right opens a list of recently used apps.

Yet I'm very conscious that all of it would be better suited to a keyboard shortcut.

I never bothered with Touchless and mouse-emulation things; years of 2D GUI design aren't suited to this kind of interface. "Midnight" is my favourite Leap app, but I think that's just an iPad app that happens to lend itself very well to Leap input too.


I did some investigative work to see whether we could use Leap Motions to replace the touch-sensitive overlays that we strap to 60" TVs for our on-air traffic folks to use during their segments. The overlays run about $2,500, and if we could replace them with a Leap Motion and get the same functionality at lower cost, we could roll out the traffic application to more stations than the six or so currently using our in-house traffic application.

The first thing I noticed was that it couldn't cover the range of the 60" television we had hooked up to the traffic software, so I scaled down to a Thunderbolt display and tried again. In my tests, recognition of the thumb was sporadic, if not completely missing - in both my test (fat guy) and the local personality's test (non-fat guy).

I then made some changes to our software to try to minimize the effects of the hand's natural movements - I turned down the sensitivity to compensate for the normal shakes and jitters your hands have. This gave it a better feel, but the traffic reporters still missed the feeling of touching the display and watching it react to their touch.

They're still neat devices (I really wanted to say neat toys, but I don't want to cheapen the work the Leap Motion folks put into this thing), but I'm having a hard time implementing them in a way that would work for us... so they're sitting on my shelf, waiting for a project that could use them (or I'll take them to my local hackerspace should I not find a good project for them shortly)...


Amazing, Revolutionary, Useless... and let's not forget buggy! And with horrible support... My device was not able to recalibrate, a problem shared by many others, if the forum is to be believed. A week or two has passed, and no reply from the makers of the device, either on the forum or on my bug report.

I guess I just got another hunk of junk to put in the failed-devices-closet... :-(


Same here. My device reports "bright light detected" even when there's not enough ambient light to see the keys on my keyboard. Turning off the monitor seems to help, but...


It works by detecting infrared light - the light source is likely an infrared one that you can't see, possibly your monitor.


Absolutely. Yet if it won't work properly with the monitor turned on, there's little use for it.


This is why I decided to back the Mycestro [1] and skip the Leap Motion. While it won't support multi-hand (unless you have two devices) and multi-finger stuff, at least it should be able to easily recognize any motion I make with it on.

[1] http://www.mycestro.com/


Expensive. 30 USD shipping for a tiny device weighing under 50 g. Will they have John Travolta deliver these things in person using his jet, or what is up with that?


You can arrange for John Travolta in a jet for 30 USD? That sounds like a bargain. :)


Fedex/UPS/etc international shipping rates tend to be in the $50 range.


For the Kickstarter I paid $10 for international shipping :)


How is that expensive? You pay more for a good mouse.


30 dollars shipping for a mouse?


oops, I thought that was the item price.


This Minority Report concept has been debated on HN for years, and the consensus is that nobody believes it feels natural. I was watching a Steve Jobs interview about the iPad, and he spoke about how natural it is for people to point at stuff, even as little kids. It's so intuitive that we now see two-year-old kids controlling iPhones to view YouTube. The Minority Report interface, for now, is not natural; maybe as the design of the world and the tech morph, it may become useful, but as of now it is not. The tech is really cool, and I'd rather see these guys take a crack at this and see if people like it than see another Snapchat valued at 800 million. The advancement I am excited about right now is a mix between Chromecast and Microsoft's IllumiRoom: http://www.ign.com/articles/2013/02/01/is-microsofts-illumir....


Ooh, interesting tech. Thanks for the link!


If you like gesture devices, you may find the MYO interesting as well: https://www.thalmic.com/myo/


I looked at that as well, but I went with what I felt was the safer option. The Mycestro is not that revolutionary; it's essentially still a mouse, only not tethered to your desk. For me it's closer to one of those presentation pointing devices than to the Myo or Leap Motion.


My first impressions with the Leap is similar. Got one, played around with it, felt kinda useless, haven't "touched it" since.


As another early adopter, I have to say that it's disappointing how much the company is relying on "the community" to generate their business model for them, rather than properly develop the software themselves.


It's obviously not a replacement for a mouse and keyboard and never will be. I could see some useful gesture based macros, like "throw your hands up in total frustration" to rage-quit an app or open a distraction-free full screen editor.


I find Leap Motion quite accurate. You will quickly get used to the convenience of gestures with BetterTouchTool, and wish you had it on computers which don't have Leap Motion.

Otherwise, the gestures used by apps are something that needs to be carefully crafted. For instance Touchless, the mouse replacement, simply doesn't cut it; you'll find yourself reaching for the mouse/trackball/trackpad within the first 10 seconds.

The Leap gets affected by strong light sources on the ceiling. You might want to use it facing downwards if that is an issue. Also, if you are wearing a watch or a ring, it might get confused by the reflection.



Disclaimer: I haven't tried Leap Motion, yet.

Did I get this right? Leap Motion vs. Kinect:

  - LM is smaller (significantly);
  - LM is cheaper (significantly);
  - LM is more accurate (significantly);
  - LM has almost no real apps (mostly concept demos).
If these are all correct, I find Scott's post to be nothing more than a normal, "the competition sucks, too," Microsoft type of post.


Very different markets.

The LM is very short range, so you couldn't use it like a Kinect. And you wouldn't plug a Kinect into your PC to watch your fingers move either.


What surprised me after a little digging was that the Leap Motion does not support point clouds. That means you can't get the 3D world as points in space from the Leap Motion. Its founders say the Leap Motion isn't designed for this purpose. This means you can't use it for applications such as 3D scanning. Personally I think that would be much more exciting than the ability to move windows by waving.


Is all writing seriously boiling down to animated GIFs of "reactions"?


I had the same reaction after seeing the first two gifs but the rest are mostly demonstrative and to the point.

On a side note, it's quite amazing that GIFs are still the simplest way to show short video clips on the web.


They're really not. Embedded video is a fraction of the size, and faster to render to boot.


Is that true? What kind of encoding are we talking about?


https://mediacru.sh/ was on the front page a day or so ago; it grabs GIFs and re-encodes them into video (MP4 and OGV). The results are much smaller and, in my local experimentation, much, much faster to render than the original files.

There's some technical details at https://mediacru.sh/demo .

Draw your own conclusions, but that's how I have experienced it.


The GIF loaded and played seamlessly. The video was a placeholder that loaded a full-screen QuickTime window, delayed for 5-10 seconds, then played.

Claim "gifs are still the simplest way to show short video clips on the web." agreed.


I'm assuming you're using Chrome. Chrome delays the loading of <video> tags until after <img> ones. It's mentioned on the page I linked to.


No, mobile Safari.


The biggest problem with the Leap that I have seen (there is one in my lab) is that it sees hands held flat, fingers spread evenly, really well - and that's about it. If you do a thumbs-up gesture (or a rude gesture when it doesn't work), the fingers get occluded by the rest of your hand.

Think about your hand as five friends trying to play connect at the same time, and you can imagine the kinds of occlusion problems you might face.

Still, I, and probably others, like the Leap. It's not useless. You just have to exploit it the right way, looking for natural interface design beyond a Tom Cruise movie.

The biggest free-air interaction problems are (1) making visible what the available gestures are, and (2) providing tangible or visible feedback. You don't get to see and feel the interaction like you can with a keyboard or less digitally inclined tools.


Maybe a camera is just the way to go, then. There's the Kinect, of course, but Intel is getting into the act as well: http://click.intel.com/intelsdk/Intel_Developer_Kit-0-C92.as...

Video demos here: http://software.intel.com/en-us/vcsource/tools/perceptual-co...


I've been experimenting with the LeapMotion at work today, and some initial observations:

- There are no apps yet that have made me go wow.
- The range is quite small.
- The motion of hovering an arm in front of you is extremely tiring after more than 10-15 minutes. Try holding your arm out in front of you for that long without moving and you'll see why.

The reason Kinect was a success is that you can take real-world activities such as dancing, jumping over obstacles, jogging (on the spot), and translate them into an interactive digital version.

With the Leap, I've yet to think of a real-world scenario where I'd be waving my hands in front of my chest that would translate well into a digital experience. Conducting an orchestra would be one good application, perhaps for training conductors, but I couldn't think of anything else.


I think the "useless" comment misses that its sweet spot utility is unknown at this point. And I'm not sure "general purpose computer input device" is going to be such a sweet spot.

The precision "problem" can obviously be addressed with software.

That said, I do believe the absence of killer utility is a problem for Leap right now, since it came out with a decent bang and now the less-than-favorable reviews are trickling in. I think they would have done themselves a significant favor by having a killer app ready from the outset. I also think they need to encourage people to look beyond simple human-computer interaction. Apparently these things could map a whole football game or count the number of people at a concert. Things like that. I also think the commercial angles will be better for the business.


The onus was on the company to figure out what its "sweet spot utility" was before launch, don't you think?


Yep: "I think they would have done themselves a significant favor by having a killer app ready from the outset."


Sorry -- yes, chiming in with agreement.


I've got my Leap doing what I bought it for: controlling iTunes when my hands are covered in hair dye. [1]

If I can get it to do more, awesome. But that's enough for now.

1: http://www.youtube.com/watch?v=_O6sR0PKofc


Then you got ripped off!

https://flutterapp.com/


When it's at my desk, my laptop is connected to an external monitor and keyboard; it runs closed. Flutter is of no use to me.


Why does it run closed? It doesn't have to.


Short answer: I like it that way.

Long answer: When it's plugged into the external monitor/keyboard/wacom tablet/leap/etc, the laptop is on a shelf beneath the main desk surface. It's completely out of view when I'm standing there working. I've got a 24" monitor on an adjustable arm; I don't need a second screen. Or want one; if I have two screens next to each other I inevitably go crazy if they're not exactly the same color profile.

(Photo of my setup: http://egypt.urnash.com/media/blogs.dir/1/files/2012/08/desk...)


I played with a LeapMotion last week and was impressed by it but within 5 minutes my arm was tired.


The local Best Buy had one set up and I played with it for a few minutes. Really interesting, but not all that applicable to how I use a computer on a daily basis.

I think its greatest potential will be in gaming. Imagine casting a spell by using the appropriate arcane hand gestures. Or swinging a sword - it can tell the difference between an overhand cut, a side cut, and a blocking move.


> I think its greatest potential will be in gaming. Imagine casting a spell by using the appropriate arcane hand gestures. Or swinging a sword - it can tell the difference between an overhand cut, a side cut, and a blocking move.

I've had a Wii, and a Kinect, and gesturing as input got annoying very quickly with both. It's something that just sounds fun, but isn't.

Even with the Nintendo DS, when you'd use a stylus to trace a certain shape to trigger an action, it didn't translate well to sustained gameplay. Though it remained workable longer than with Wii/Kinect gaming. Simple swipes with a stylus or finger (on a screen) continue to work well, whereas even simple arm or hand gestures quickly become annoying and fatiguing.

It turns out that pressing buttons works just fine, and is actually ideal, even if other input methods sound sexier.


Your comment reminds me of something pointed out during a boxing match many moons ago: how hard it actually is to simply hold up one's arms and fists for even short periods of time, let alone constantly move them around and then start punching. It's actually quite draining.

Current input schemes involve a lot of support for the hands and arms. Waving your hands around in the air offers no physical support.

So I think its use will be limited to a subset of tasks, where it will be revolutionary, but it will be much harder to integrate into general use, simply because of fatigue.


Yes. Millions of years of evolution led us to manual dexterity with our fingers, and with it we have (in an intellectual sense at least) taken over the planet.

I have grave doubts about devices that ask us to give all of that up. The medical apps talked about above seem like a pretty good use, if you need infrequent medium-to-gross motor control of something in a sterile environment.


This was my experience - it's cool because it's novel, but playing a game for longer than 5 minutes really hurts my hands from holding them in awkward positions. At the end of the day I'll always want to stick with my mouse and keyboard.


I think that the newer Kinect 2.0 will be able to take care of all these more sophisticated gestures. Compared to the Leap Motion it has much better range, and its accuracy will improve now too.


Four students at RIT are using this to develop a sign-to-text app.

http://motionsavvy.com/

So it does have potential/use.


Fascinating. What's annoying about computer/video-based sign instruction is the lack of feedback: "Did I get this sign right, or do I just think I have it right?"

If MotionSavvy can notify me when I get it wrong, very nice!


I also have one, and after seeing what it sees during the orientation, I wondered how many of its problems could be cured by a second sensor positioned about a foot away from the first to give the Leap stereo vision. I'm speaking totally out of school, though, because I don't know whether that would create more problems than it solves.

I'm on my phone, so apologies if this is a duplicate of another thread.


Nice example of someone adding a leap motion controller to their site, GoSquared: https://www.gosquared.com/blog/playing-around-with-the-new-l... I'm not going to argue this demonstrates utility, but it does show some good recognition of smaller gestures.


I think a ring to click might be the best answer. Clicking (for me) works better with a tactile response.


Looks like a much better alternative just revealed itself: http://www.kickstarter.com/projects/haptix/haptix-multitouch...


I think combining a Kinect and several Leaps with an Oculus Rift might be great. That way you could use your own body motions within the Rift. The Kinect would be for gross motions and the Leaps would be for detecting fine motions.

What do you guys think?


Absolutely! The first thing I really missed when trying the Oculus Rift demos was my hands, you really want to reach out and grab/push things. Leap Motion seems useless for navigating Windows, perfect for VR.


Well, as the Kinect already is capable of tracking your fingers, I am not sure the Leap would be very useful; it's rather just a slightly better Kinect.


"kinect already is capable of tracking your fingers"

I will nitpick the word choice: "capable" is nearly useless in a user interface. It needs to be nearly 100% reliable or it's useless. 99% of my interaction with the Kinect is my daughter crying that she can't navigate menus in her dance games and getting all frustrated, followed by me getting all frustrated and swearing about how, if only I could bypass this POS and use the buttons on the controller, I would have been done twenty seconds ago, and I hate the Kinect with a passion. It works fine for gross motor control, like my daughter's dance games, but it's useless for fine motor control. Perhaps in the future I will write SQL statements by performing an interpretive dance at work, but I hope not.

The failure rate is vital... If I'm typing this at 100 WPM, which is probably about right, then a 99% motion-detection success rate means I'd swear and hate motion detection and have to stop and fix an error, what, every dozen seconds or so? All day long? Forget that, I'm sticking to the keyboard and mouse; I don't have the patience for 99% success.
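
Do the arithmetic (rough numbers, assuming ~5 keystrokes per word):

    # Rough arithmetic on recognition failure rates while typing.
    wpm = 100                       # typing speed, words per minute
    events_per_sec = wpm * 5 / 60   # ~8.3 recognition events per second
    success = 0.99                  # per-event success rate

    failures_per_sec = events_per_sec * (1 - success)
    print(1 / failures_per_sec)     # ~12 seconds between errors, all day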


The newer Kinect 2.0 will be able to do almost perfect finger tracking. In fact, sign-language recognition apps have been written even for the older Kinect (the Xbox 360 version). Check out the SigmaNIL framework and the FORTH libraries offered with OpenNI.


You wouldn't see this in an Xbox game, though - the Xbox SDK doesn't support it.


I think the SDK is way ahead of OpenNI. Soon the more advanced features of the new Kinect will make their way into games too!


Sorry, when writing about the Kinect I assumed the new one. Even though it's not available yet, it's more comparable to the Leap than the old Kinect, because of its age :D


Actually, perhaps the Leap could be placed on the front of the Oculus Rift. The only time you'd be doing detailed things with your fingers is when you're looking at them, right? So at those times the Leap would have a view of your hands.

Then the Kinect (I guess one on each wall) would monitor larger movements.


The Leap doesn't have the range to do this. My brother has one and it's striking how narrow the field of view is.


Can someone build something like this but have it recognize only a small set of simple gestures? Using it simply to browse the web would be cool. If you could throw in a way to click Build in Xcode and IntelliJ, then I'm sold.


This is exactly my experience with the Leap unit; I played with it for a couple of days, then uninstalled all the drivers and put it back in the box. I'll try again in a few months to see if things have improved.


Leap Motion – Just a toy or the future? - http://tech.particulate.me/


Question: does the LEAP need to be flat on the table, or could you hang it around your neck to get mobile gesture input in a vertical plane?


Touchless on Airspace sucks. Try BetterTouchTool. The gesture tracking and usability of that app are incredible.


Well jeez, let's not even bother then. Wendell, shut off the generator, Scott thinks we're useless.


Can it be that I saw a prototype at the ICT Delta conference in May 2007 in Utrecht, the Netherlands?


Thank you for the most hilarity I'm likely to encounter today! That's a great article.


"Hey so LeapMotion is still being developed, I better hurry to write a blog post and complain about it before they improve it".


> "Hey so LeapMotion is still being developed, I better hurry to write a blog post and complain about it before they improve it"

If you take someone's money and give them a product they have every right to review it and point out both the good and bad points about your product.

I thought the review was very fair, and I'll bet the folks at Leap would too. They just got a bit of press and some tips on what one of their customers thinks is broken.


If it's still being developed why is it on retail for $80?

Most people make sure their product is useful and working before selling it to people.

If you bought a car and the brakes mostly worked, would you be happy?


If the year was 1896, I probably would be happy. Brakes were far less functional and reliable in the first 20 years of the automobile's history.


The hardware works. It's mostly lacking on the software side, which I presume will get better over time with third-party apps in the Leap App Store, and also as existing applications start implementing the functionality.



