Oculus is perfect for this.

I was really hoping the cameras would be mounted on a 3-axis gimbal, with each axis controlled by the Oculus's motion sensing.

The Oculus is a great option for first-person flying. Even used as a regular 2D display, $300 is pretty cheap compared to most head-mounted displays.




The latency of physically moving cameras with head tracking would be horrendous. Perhaps if you also used wide-angle/fisheye lenses and somehow synchronized the virtual panning with physical panning it would be workable, as long as you don't move your head too quickly.
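To make that concrete, here's a rough yaw-only Python sketch of the idea (all names are illustrative, not from any real SDK): the displayed crop tracks the head instantly within the fisheye frame, while the gimbal is merely commanded to re-center it over time.

    # Hypothetical sketch: virtual pan hides gimbal latency (yaw only).
    def update(head_yaw_deg: float, gimbal_yaw_deg: float,
               fisheye_fov_deg: float = 180.0) -> tuple[float, float]:
        """Return (crop_offset_deg, gimbal_target_deg) for one frame."""
        # Virtual pan: render from the part of the fisheye image that
        # corresponds to where the head points right now, with no waiting.
        crop_offset_deg = head_yaw_deg - gimbal_yaw_deg

        # If the head outruns the lens coverage, clamp at the image edge;
        # the viewer briefly sees the frame boundary until the gimbal
        # catches up.
        half_fov = fisheye_fov_deg / 2.0
        crop_offset_deg = max(-half_fov, min(half_fov, crop_offset_deg))

        # Physical pan: steer the gimbal toward the head direction so the
        # crop drifts back to the sharper, less distorted image center.
        gimbal_target_deg = head_yaw_deg
        return crop_offset_deg, gimbal_target_deg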


FPV on RC planes with head-tracking and servo-controlled cameras is already pretty common in the hobby.


The control doesn't have to be 100% absolute. You can have the video 'projected' within a 3D environment and adjust its position within that environment based on the camera's physical movements, while letting the 1:1 tracking from the headset overshoot the video's boundaries so the user doesn't get too jarring an experience. It would feel as if they were in a cockpit.


Trying that approach will probably result in people throwing up. In an immersive environment even short latencies are extremely disorienting. When I first started recording audio for film, the industry-standard recorders used Digital Audio Tape and there was a switch to monitor either the live feed from the preamps or the recorded feed from the tape (so you could be sure you were recording - you'd be surprised how easy it is to mess this up over the course of a 12-hour workday).

The latency was small, on the order of 10-12 milliseconds, but it took me months to overcome the weirdness of opening your mouth to speak but not hearing anything until ~1/100th of a second later. I shudder to think what it would be like to have this in your visual cortex but with a longer delay and also while your eyes are telling you that you're floating above the ground.


I think you're right. Latency is very disorienting.

Here's how I would do it: The craft has two wide-angle cameras. The Oculus viewer is at the center of a sphere in 3D space. Project the video feed onto the inside of the sphere, and keep the feed anchored to wherever the camera is currently pointing relative to the craft.

Moving your head moves the frustum immediately, while the camera position lags behind; if you moved too quickly (say, turned 180 degrees), the frustum moves right away, but the video only comes back into view after the cameras are repositioned. A numbered checkerboard or other pattern could be shown wherever the video feed is not projected, so spatial disorientation is minimized.

That way you get the cockpit feel and immediate movement in the Oculus, but avoid the uneasiness that latency would give with a more simplistic approach.
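If it helps, here's a minimal Python sketch of that decoupling (names are illustrative, not from the actual Oculus SDK): the frustum uses this frame's head pose, while the video quad stays anchored to the camera pose last reported by the craft's telemetry.

    def render_frame(head_yaw, head_pitch, cam_yaw, cam_pitch, cam_fov=90.0):
        # Frustum: driven purely by local head tracking, so no added latency.
        view_dir = (head_yaw, head_pitch)

        # Video anchor: the point on the sphere where the feed is projected,
        # taken from the aircraft's (possibly stale) camera orientation.
        quad_center = (cam_yaw, cam_pitch)

        # If the head has turned past the edge of the feed, the renderer
        # shows the checkerboard/background pattern there instead of video.
        yaw_gap = abs((head_yaw - cam_yaw + 180.0) % 360.0 - 180.0)
        video_in_view = yaw_gap < cam_fov / 2.0

        # Meanwhile, keep commanding the physical gimbal toward the head
        # pose; the quad slides back into the frustum as the craft complies.
        gimbal_command = (head_yaw, head_pitch)
        return view_dir, quad_center, video_in_view, gimbal_command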

Another option would be to use 360-degree cameras to fill the entire sphere, and then use the binocular cameras to provide high detail.


This is basically what I was hinting at, but you described it a lot more clearly than I did: using a virtual 3D environment in which the player can 'be' and view the video feed through a virtual 'window'.


Interesting approaches - don't have an OR or a high-powered drone, but that's intriguing enough to do a duct tape test at home with existing gear. Thanks for the ideas.


So, kind of like ghetto dual-paraboloid mapping?


Yes, exactly.
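For anyone unfamiliar, the paraboloid projection being alluded to maps a unit direction onto a 2D image per hemisphere; with two fisheye feeds, each hemisphere gets its own map. A tiny sketch of the standard formula:

    def paraboloid_uv(x: float, y: float, z: float) -> tuple[float, float]:
        # Front hemisphere (z >= 0) of a dual-paraboloid map: a unit
        # direction lands in [-1, 1]^2; mirror z for the rear hemisphere.
        return x / (1.0 + z), y / (1.0 + z)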


It's really interesting to introduce a longer delay (100 ms?) and try to speak while monitoring your delayed voice. At least for me it comes out completely garbled, like I'm having a stroke or something.
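If you want to try it yourself, here's a minimal sketch using the third-party sounddevice library (assumes a duplex audio device and a default blocksize well under the delay length; wear headphones so the output doesn't feed back into the mic):

    import numpy as np
    import sounddevice as sd

    RATE = 48000
    DELAY_FRAMES = int(0.1 * RATE)  # ~100 ms of delayed auditory feedback
    buf = np.zeros((DELAY_FRAMES, 1), dtype="float32")  # FIFO delay line

    def callback(indata, outdata, frames, time, status):
        global buf
        # Play the oldest samples, then append the fresh mic input.
        outdata[:] = buf[:frames]
        buf = np.concatenate((buf[frames:], indata))

    with sd.Stream(samplerate=RATE, channels=1, callback=callback):
        input("Talk, then press Enter to stop.")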



