So damn clever... releasing an SDK for the Kinect[1] was one of the smartest things Microsoft could have done, and I think it has gone a long way toward making it the fastest-selling piece of hardware in history[2] (did anyone know that? I sure didn't, and barely believed it when I read it).
This reminds me a lot of what Johnny Lee did with the Wii soon after it was released, with head-tracking 3D[3].
Part of me wants to stop using a computer and take up beet farming, because the cool factor of my own work seems so much lower than what these guys are doing.
Anyone on HN actively toying with a Kinect and want to share some video?
I have no video, but I got a WebGL-powered game world running, controlled through a Kinect using DepthJS, in just over a day of playing around. It was pretty cool when it worked.
Edit: My co-workers gave me concerned looks that day. At random times I would stand up, wave my arms for 30 seconds, and then sit back down. :)
Very impressive, Derek -- how easy was it to work with the Kinect data via the API? Is the information coming out of the sensors pretty straightforward, or does it just come in as a video feed with additional (positional?) metadata?
DepthJS works through three technologies: libfreenect, OpenCV, and a simple Tornado (push) server. libfreenect grabs all of the data from the Kinect device, and OpenCV is the 'interpreter'. In the browser, this translates to just receiving events such as 'move'.
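To make that concrete, here's a minimal sketch of the push-server half -- this is not the actual DepthJS code, just an illustration of the idea, and the handler names, port, and event fields are all made up. Hand positions would come out of the OpenCV step and get broadcast to the browser as JSON 'move' events over a WebSocket:

    import json

    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    clients = set()  # connected browser pages/extensions


    class EventSocket(tornado.websocket.WebSocketHandler):
        def open(self):
            clients.add(self)

        def on_close(self):
            clients.discard(self)


    def push_move(x, y, z):
        # Broadcast a 'move' event to every connected client.
        msg = json.dumps({"type": "move", "x": x, "y": y, "z": z})
        for client in clients:
            client.write_message(msg)


    if __name__ == "__main__":
        app = tornado.web.Application([(r"/ws", EventSocket)])
        app.listen(8888)
        # In the real pipeline, a periodic callback would poll the
        # libfreenect/OpenCV stage for hand positions and call push_move().
        tornado.ioloop.IOLoop.instance().start()

The browser side then only has to open a WebSocket and react to 'move' events; all the heavy lifting stays on the native side.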
What I find cool is that this sort of application (3D video recording) is actually possible with libfreenect. I bet the most difficult part was defining a storage format and getting it integrated into an app built with the augmented reality SDK.
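For what it's worth, the capture half really is within reach. Here's a rough sketch using the libfreenect Python bindings -- the per-frame numpy archive 'format' is just a stand-in I made up, not whatever the app actually stores:

    import time

    import freenect
    import numpy as np


    def record(num_frames=100, prefix="capture"):
        # Grab paired depth + RGB frames and dump each to disk with a
        # timestamp, so playback can rebuild the point cloud later.
        for i in range(num_frames):
            depth, _ = freenect.sync_get_depth()  # 11-bit depth map
            rgb, _ = freenect.sync_get_video()    # 8-bit RGB image
            np.savez("%s_%04d.npz" % (prefix, i),
                     depth=depth, rgb=rgb, t=time.time())


    if __name__ == "__main__":
        record()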
Looks like you and I have toyed with the same idea (Kinect-controlled lighting). I was shocked by how easy it was to put together and how effective it was. Probably not an easy thing to commercialize, though ("yeah, just embed these sensors in your walls and replace all of your light switches...").
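To give a sense of how little code it took, my version was roughly this shape (simplified, and set_brightness is a hypothetical stand-in for whatever actually drives your lights): find the nearest point in each depth frame, assume it's a hand, and map its height in the frame to a dimmer level.

    import freenect
    import numpy as np


    def set_brightness(level):
        # Hypothetical: send level (0.0-1.0) to a lighting controller.
        print("brightness -> %.2f" % level)


    def run():
        while True:
            depth, _ = freenect.sync_get_depth()
            # Nearest pixel; invalid readings come back as 2047, so
            # argmin naturally ignores them.
            row, col = np.unravel_index(np.argmin(depth), depth.shape)
            # Raise your hand (smaller row index) to brighten the lights.
            set_brightness(1.0 - row / float(depth.shape[0]))


    if __name__ == "__main__":
        run()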
The piece of paper tells the app where to position the animation, and the app removes it from the video on the iPad. If you look closely, you'll see the app paints some kind of wooden surface where the piece of paper was; this could be pattern recognition (sampling the surrounding texture and filling the paper region with it) or just a custom job.
I believe that in augmented reality programs you need a target, and the piece of paper is that target. It lets the AR program know where to place the video and how to rotate it as he walks around to the side.
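As a toy illustration of the target idea (my guess at the technique, not the app's actual code): once the paper's four corners are found in the camera image, a single homography pins the video to the paper from any viewing angle. Corner detection itself is elided here.

    import cv2
    import numpy as np


    def overlay_on_target(camera_frame, video_frame, paper_corners):
        # paper_corners: 4x2 array of the paper's corners in camera
        # pixels, ordered TL, TR, BR, BL (assumed found by an earlier
        # detection step).
        h, w = video_frame.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H, _ = cv2.findHomography(src, np.float32(paper_corners))
        warped = cv2.warpPerspective(
            video_frame, H,
            (camera_frame.shape[1], camera_frame.shape[0]))
        # Paint the warped video over the paper region only.
        mask = np.zeros(camera_frame.shape[:2], np.uint8)
        cv2.fillConvexPoly(mask, np.int32(paper_corners), 255)
        out = camera_frame.copy()
        out[mask > 0] = warped[mask > 0]
        return out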
Yeah, it looks like the "augmented reality" part was faked. The angle of the video never quite matched where he was holding it, either. In any case, it's still a nice proof of concept.
[1] http://research.microsoft.com/en-us/um/redmond/projects/kine...
[2] http://www.vgchartz.com/article/83375/kinect-is-the-fastest-...
[3] http://www.youtube.com/watch?v=Jd3-eiid-Uw