Show HN: DIY Position Tracking Using HTC Vive's Lighthouse (github.com/ashtuchkin)
153 points by ashtuchkin on Nov 28, 2016 | 38 comments



Hey, author here. I used the system described in the link for indoor stabilization of a drone, plus precise landing, to support an automatic battery swap station project (there's a video of it there). It worked pretty well, so I decided to open-source it in the hope it helps fellow hackers.

Let me know if you have any questions!


This is awesome!

Curious what your thoughts are on this vs one of the official SteamVR tracking modules[1], which use the TS3633[2]? For $10 this is an awesome hack with off-the-shelf components. However, for $70 + shipping you can pick up 10 of the tracking modules, which filter out noise and do accurate envelope calculation.

[1] https://www.triadsemi.com/product/ts3633-cm1/

[2] https://www.triadsemi.com/product/ts3633/


I'm not familiar with the SteamVR tracking modules you linked; however, from the link:

> The CM1 contains all the circuitry necessary to convert SteamVR Tracking base station infrared light into a digital output that encodes the angle of the sensor from the base station.

That implies that it's not a full location tracking solution by itself (angle only). The data sheet[0] doesn't seem to shed any further (ahem) light. Maybe I'm wrong. And I'm certainly interested in learning more. Do you have more details on the SteamVR tracking system?

[0] https://www.triadsemi.com/download/16501/


Correct, it's only going to give you a signal "upon incident light from an IR source once the threshold level for light intensity has been exceeded"[0].

What you are getting in this chip is all of the work Valve and Triad have done in improving the accuracy of that envelope signal. Essentially, the accuracy in the timing of that signal == the accuracy of the position. There's a really good talk from Steam Dev Days[1] (that I was fortunate enough to attend) that gives a good overview of all the work that has gone into SteamVR tracking.

[0] https://www.triadsemi.com/download/16617/

[1] https://www.youtube.com/watch?v=m3wKLZHH_dM
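
To make that relationship concrete, here's a rough sketch (not the project's actual code; the center offset is an assumption for illustration) of how the time between the sync flash and the laser sweep hitting a sensor turns into an angle, and why timing jitter maps directly to position error:

    # Rough illustration of why timing accuracy == position accuracy.
    # The base station rotor spins at 60 Hz, so one revolution takes ~8333 us;
    # the delay between the sync flash and the sweep hitting the sensor encodes
    # the sweep angle. The "center" offset below is an assumption for this
    # sketch, not a measured calibration value.
    import math

    ROTOR_PERIOD_US = 1e6 / 60.0              # ~8333.3 us per revolution
    CENTER_OFFSET_US = ROTOR_PERIOD_US / 2.0  # assume the sweep crosses the optical center mid-cycle

    def sweep_angle_rad(sweep_hit_us, sync_pulse_us):
        """Angle of the sensor from the base station's optical axis for one sweep."""
        dt = sweep_hit_us - sync_pulse_us
        return (dt - CENTER_OFFSET_US) * 2.0 * math.pi / ROTOR_PERIOD_US

    # 1 us of timing error is ~0.75 mrad of angular error,
    # i.e. roughly 1.5 mm of position error at 2 m from the base station.
    print(sweep_angle_rad(4200.0, 0.0))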


The cool thing is you could probably just use a Teensy and my code to convert the envelope signal to a 3D position. I hadn't thought about that :)


Apparently the math is complex (and proprietary). They claim they saturate a single core of an i5 processor with all of the math converting the timing signals for all of the devices to position. With the official kit, you don't do anything on the hardware side on your own with regards to figuring out position.
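
For intuition only, the non-proprietary core of that conversion can be sketched as intersecting rays from two base stations. This is a toy model under assumed station poses and a simplified ray model - the real solver also handles calibration, plane offsets, many sensors and filtering:

    # Toy sketch of sweep angles -> 3D position, assuming two base stations
    # with known poses (position p, rotation matrix R). Each station's pair of
    # sweep angles defines a ray toward the sensor; the position is taken as
    # the midpoint of the shortest segment between the two rays.
    import numpy as np

    def ray_direction(horiz_angle, vert_angle):
        """Unit ray in the base station's own frame for one (horizontal, vertical) angle pair."""
        d = np.array([np.tan(horiz_angle), np.tan(vert_angle), 1.0])
        return d / np.linalg.norm(d)

    def triangulate(p1, R1, angles1, p2, R2, angles2):
        """Midpoint of the closest segment between the two sensor rays, in world frame."""
        d1, d2 = R1 @ ray_direction(*angles1), R2 @ ray_direction(*angles2)
        w = p1 - p2
        b = d1 @ d2                  # cos of the angle between the rays
        denom = 1.0 - b * b          # ~0 if the rays are (nearly) parallel
        t1 = (b * (d2 @ w) - (d1 @ w)) / denom
        t2 = ((d2 @ w) - b * (d1 @ w)) / denom
        return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0

    # Example: two stations 4 m apart facing each other, sensor roughly in between
    p1, R1 = np.array([0.0, 0.0, 0.0]), np.eye(3)
    p2, R2 = np.array([0.0, 0.0, 4.0]), np.diag([-1.0, 1.0, -1.0])  # turned 180 deg about y
    print(triangulate(p1, R1, (0.245, 0.149), p2, R2, (-0.245, 0.149)))  # ~[0.5, 0.3, 2.0]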


Wow, definitely post more info on this if you can. I had no idea about this. I want to take the class, but can't shell out $3000 for a side project.


I was previously under the impression that the photodiodes figured out their own angular position and you received a library to convert those values into a single rectilinear position and orientation on the device. This is not the case. The photodiode ASICs basically just convert photons to voltage and clean up the signal. The official kit includes an FPGA to pack up all the sensor data and an MCU to give you an SPI interface to that data. The system is pretty much end-to-end, you mostly just get a chance to inject your own data about button clicks or what have you into the data stream from the device. SteamVR on the PC does the work of making sense of the data.

There is a huge amount of design that has to go into the sensor layout. Bad sensor layout will cause significant problems in tracking. It's extremely constraining in what you can do.

The system is designed to work with one base station. The second is only for redundancy, covering areas of the play area that are occluded for the master base station by your own body.

In contrast, Oculus uses cameras to detect IR LEDs on the device. It's kind of inside out from SteamVR. They then have to do image processing on the PC to get each LED's angular position with respect to the camera. SteamVR just has to convert timing values into laser emitter motor positions. Once you get to that point, they are pretty much the same, having to compute position and orientation from that data. It's just a lot cheaper for SteamVR to acquire that data.

It also means that adding more cameras to Oculus adds to the workload, whereas adding base stations to SteamVR does nothing to the PC. Both will increase workload to add more devices, but it's negligible compared to the image processing Oculus has to do. Oculus' system is fundamentally unscalable. SteamVR is fundamentally scalable.


We've officially been told that the SteamVR information is not confidential. There is a lot, so if you have any questions, find my email in my profile and just send me a message.


Thanks, that video is really good; watching now.

@ashtuchkin maybe link it from your page?


Thanks! I actually hadn't seen those; they look pretty good and basically replace the custom schematics I needed to build.

The timing calculation still needs to be done somewhere, though. I wish there were a module that did it internally and just published the timing numbers - that would be more scalable, and then even an Arduino could handle several sensors.


From what I understand, the official kit includes software for calibrating a specific configuration of sensors in fixed relation to each other and figuring out the timing parameters. Guess I'll find out for sure tomorrow!


Yeah, you've basically done all the hard work. I just ordered a batch of these to see if I can get it working. I've been wanting to hack on this since they came out. Thanks for the inspiration!


The hot-swappable drone battery video is cool enough to be its own Show HN submission.

Would it be hard for the drone to be able to seamlessly switch between indoor/lighthouse tracking and outside/GPS tracking? How about the ability for the drone to orchestrate the battery swap autonomously with the charging unit? If you could add those two features you could produce a proof of concept video that would probably blow a lot of minds.


Thanks, that's nice to hear!

Seamless switching is definitely doable and shouldn't be hard; it just needs some time to implement. I'm using the PX4 autopilot, and it's very friendly for adding new functionality.

Orchestration is also close - in the current demo both the station and the drone use WiFi to coordinate, so it's just a matter of adding new logic.

Currently the most limiting resource for that project is my time :) I'd be happy if other people wanted to join!


Dude that video is an amazing demo. I left a comment on YT but you should take this further!

P.S. I think the I-term in your PID control loop is too high on your quadcopter. That oscillation isn't really affecting your flight control too badly, but it heats the motors more than a stable tune would, which reduces the service life of the permanent magnets in them.

It could be the P-term, but the oscillation looks too slow for that. One way to check for sure (assuming you don't have a blackbox on there) would be to increase your D-term and see what the response is.
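
For readers following along, here's a generic discrete PID step (not the actual flight controller code; the gains are placeholders) just to make the terms being discussed concrete:

    # Generic discrete PID update, not PX4's rate controller. An oversized
    # I-gain lets the integral wind up and produces slow oscillation; an
    # oversized P-gain oscillates much faster; the D term damps the response.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative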


Thanks, yeah, I need to tune it more; I just wanted to share sooner :) I do have a blackbox on there, which should help.


Awesome project, and especially awesome README file: good explanations and images!

If I read it right, a way to feed in position data is already integrated in the drone firmware, so you only have to send it the computed coordinates and it uses them, correct?


Thanks!

Yep, I'm using the MAVLink protocol (specifically the ATT_POS_MOCAP message) to feed the 3D position to the drone autopilot.
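
For anyone wanting to try the same thing, here's a minimal sketch of sending that message with pymavlink. The connection string and rates are assumptions, not the project's actual code:

    # Minimal sketch: feeding an externally computed 3D position to the
    # autopilot via MAVLink's ATT_POS_MOCAP message using pymavlink.
    # The connection string is a placeholder; PX4 expects mocap input in its
    # local (NED-style) frame.
    import time
    from pymavlink import mavutil

    master = mavutil.mavlink_connection('udpout:192.168.1.10:14550')  # hypothetical link to the drone

    def send_mocap_position(x, y, z):
        """Send one position fix in meters; identity quaternion since this sensor gives position only."""
        q = [1.0, 0.0, 0.0, 0.0]  # w, x, y, z
        master.mav.att_pos_mocap_send(int(time.time() * 1e6), q, x, y, z)

    # Stream fixes as fast as the lighthouse solver produces them (e.g. ~30-60 Hz).
    send_mocap_position(0.10, -0.25, 1.50)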


Very nice work!

Do I understand correctly that a single sensor board as described here will get position, but can't get orientation?

If so, how does the drone handle orienting itself correctly to land on the battery swapping station?


Yes, you're right - only position. I'm using the accelerometer/gyro/compass on the drone for orientation.


Onboard accelerometer/gyroscope which is part of the flight controller, most likely.


Hi, I have some questions:

> 3d position accuracy: currently ~10mm; less than 2mm possible with additional work.

Could you outline what is required to take your solution from 10mm to 2mm accuracy? What factors limit the maximum accuracy?

Presumably the lighthouse-derived position could be fused with IMU data for better temporal accuracy. Does your drone software do that?

Thanks!


The geometry calculation is currently approximate - it doesn't take into account that the laser planes have a ~3cm offset from the center; that's one factor. Timing precision is probably the next one - microsecond resolution might not be enough, and we can measure better than that.

Currently the standard deviation of each position coordinate at rest is 1.9mm.

Fusion with the IMU is the next level and is usually done further down the line because it needs to be tuned for a particular model. Yes, the drone does that.
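
For the curious, the simplest form that kind of fusion can take is a constant-gain predict/correct loop. This is a toy 1D sketch, not the PX4 estimator, and the gains are made-up placeholders:

    # Toy 1D predict/correct fusion sketch (not PX4's estimator): dead-reckon
    # with the IMU acceleration between lighthouse fixes, then pull the
    # estimate toward each new lighthouse position measurement. The gains
    # would need tuning per vehicle, as noted above.
    class SimpleFuser:
        def __init__(self, k_pos=0.2, k_vel=0.5):
            self.pos, self.vel = 0.0, 0.0
            self.k_pos, self.k_vel = k_pos, k_vel

        def predict(self, accel, dt):
            """IMU step: integrate position and velocity forward."""
            self.pos += self.vel * dt + 0.5 * accel * dt * dt
            self.vel += accel * dt

        def correct(self, measured_pos, dt):
            """Lighthouse step: blend the measurement into the estimate."""
            residual = measured_pos - self.pos
            self.pos += self.k_pos * residual
            self.vel += self.k_vel * residual / dt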


Did you have any issues with PX4 interpreting and fusing MOCAP data?


I needed to make some adjustments and fix some bugs there to make it work. It has the code, but it didn't work great indoors. I'll try to send these changes upstream when I have time. Let me know if you're interested in them - I can explain further.


Not really interested in the details; I just had a similar experience with the optical flow implementation and wanted to know if mocap was better ;)


Great project! While listening to an interview with Alan Yates[0] (the main designer of the Lighthouse), I was thinking about an application like this.

I recently shopped around for a motion capture system (cameras tracking markers), and one of the cheapest systems with performance comparable to OP's came out to $5-8.5K.

[0] http://embedded.fm/episodes/162


That's crazy :)


A bit off-topic, but I was wondering:

How was the base station visualization[1] done?

[1] https://camo.githubusercontent.com/d9241f231a03d177d215f98bd...



Fantastic!

Having been developing with the Vive for most of the last year for Left-Hand Path (http://store.steampowered.com/app/488760), and having had a lot of mocap experience before that, I can confirm that Lighthouse's tracking is ridiculously good.

It's not just as good as something like an Optitrack system: it's significantly better.

If this provides comparable tracking to what the Vive offers, it's an absolutely unbeatable price / performance combo.


I was really inspired by that too! The tracking is comparable, but not on par yet - you need multiple sensors and IMU fusion to achieve the smoothness of Vive. This is only a first step :)


Ah, they're doing sensor fusion with inbuilt IMUs? That makes sense...

I have a colleague with some deep technical knowledge in this area - he's the guy who did the heavy lifting when I built an inertial mocap suit a while ago. I've pointed him at your project. If it turns out that it is of interest, perhaps you can get some useful collaboration / suggestions out of that!


I actually just arrived in Seattle to take the official HTC course on using the Steam VR positional tracking system, and this was one of the first things I saw as I got off the plane.


That's so cool :) Let me know if I got anything wrong with the device or the geometry calculation! Also, I would be happy to get any feedback from the SteamVR people!


I've been mulling an installation work based on exploring video solids[1] using cheap phone-VR mounts. This project is exactly what I need to find the viewer's position in the video/space. Is there any cheaper way to do this without the Lighthouse devices?

[1] https://theblackbox.ca/blog/vector-video/


Nice project!

There's a whole bunch of indoor positioning methods with different precision/simplicity/cost profiles, so you might need to shop around. Probably the cheapest and closest to my sensor are ultrasound systems like http://www.marvelmind.com



