Prototyping a 3D light field display video projector array (usc.edu)
54 points by zackmorris on May 16, 2015 | 9 comments



This is a surprisingly simple technique for multiscopic display, a back-to-basics approach that relies on the existence of extremely small or concentrated projectors. This research is from 2011, but like most multiscopy research, it's still pretty far out into the future.

It's interesting to consider how such displays would require a different approach to video creation and viewing. Notably, how many viewers can share one display? A single viewer with a television-sized display will experience immersive stereoscopy. The video can be presented as a window, inviting the viewer to lean left and right to get different perspectives. As you add more viewers, the display's multiscopy becomes less and less interesting, though I imagine it can remain so, in the same way that theater has more dimensionality than cinema.

Also a semantic nitpick: in the overview they call it "autostereoscopic" but I think "automultiscopic" is more explicit, and used in existing literature [1]. The reasoning is that, for example, a creature with more than two eyes would still get a different perspective for each eye, so there's nothing inherently "stereo" about the display.

[1]: http://web.media.mit.edu/~mhirsch/hr3d/taxonomy.png (from http://web.media.mit.edu/~mhirsch/hr3d/)


I'm not really familiar with this field at all, but does this basically give the effect of an in-air 3d display?


You mean Star Wars style? In which case the answer is no, not really. Multiscopic displays have never been properly represented in SF movies, as far as I'm aware. A perfect automultiscopic display would be very much like a window, with the video being able to show any kind of dynamic perspective on the other side of the window. Actually it's more than a window, since elements can give the impression of being in front of the window, but such elements would clip at the edge of the window if the viewer leaned too far out. So the 3D could only be "in-air" as long as the screen is behind it.


Yeah basically. It may not actually be hanging in the air, but would it more or less look like it?

Or would it look more like a 3d TV, a window on the wall that has the appearance of volume, except with this research you can also move around a bit and the object changes as expected?


The difference with current 3D TVs is that you only get two angles, one for each eye. Everyone who looks at the TV from any angle gets the same two views. With this tech, the view each eye gets depends on where it is physically, so you can move your head around to get a different angle, and multiple people can get different angles all at once.
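To make that concrete, here's a rough Python sketch (my own illustration; the view count and viewing angle are made-up numbers, not from the paper) of how an automultiscopic display conceptually maps each eye's position to one of a set of discrete pre-rendered views:

    import math

    # Illustrative only: map an eye position to one of N discrete views.
    NUM_VIEWS = 72            # assumed: one view per projector / angular slot
    FIELD_OF_VIEW_DEG = 90.0  # assumed: total horizontal viewing zone

    def view_index_for_eye(eye_x, eye_z):
        """Which pre-rendered view an eye at horizontal offset eye_x (m),
        at distance eye_z (m) from the screen plane, should receive."""
        angle = math.degrees(math.atan2(eye_x, eye_z))
        half = FIELD_OF_VIEW_DEG / 2.0
        angle = max(-half, min(half, angle))          # clamp to the viewing zone
        t = (angle + half) / FIELD_OF_VIEW_DEG        # 0..1 across the zone
        return min(NUM_VIEWS - 1, int(round(t * (NUM_VIEWS - 1))))

    # Two eyes ~6.5 cm apart fall into different view slots -> stereo for free.
    print(view_index_for_eye(-0.0325, 1.0), view_index_for_eye(0.0325, 1.0))

Since each eye lands in its own view slot, you get stereo without glasses or tracking, and a second viewer standing elsewhere simply lands in yet other slots.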

Edit2 (ignore previous edit): Another advantage is that each eye can focus at the correct depth for the object you're looking at. With 3D TVs, both eyes have to focus at the depth of the display all the time; the only depth cue is the vergence angle between your eyes.


If you're interested in that you may also like Matt Hirsch and Gordon Wetzstein's more recent work from the MIT Media Lab (they cite this paper):

http://web.media.mit.edu/~gordonw/CompressiveLightFieldProje...

Similar concept optically (as far as I can tell, it's not really my field) but solves some of the practical implementation issues (particularly the one-projector-per-pixel problem).


Wow, this seems expensive and bulky compared to a headset with a high-resolution screen a few inches away from the eyes.


It does, except that this technique can, in theory, create a holodeck-like experience for a group not wearing headsets. That would be better for training and simulation.

For a while I was looking at building a basement with a "picture window" that was really just a view from the floor above ground. "Normal" displays don't simulate a good window experience; a curved display can give a decent simulation if you don't move (which is what flight sims do, since you're constrained to your seat); but a light field simulation would really give you the feeling of looking out of the window directly, even when moving your point of view around.


Even for a single user you need to add head tracking to get a perspective that shifts as you move your head (though that's pretty doable with consumer stuff these days). For a multi-user system I think you need something more like this.
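For the single-user head-tracked case, the usual trick is an off-axis (asymmetric) perspective frustum computed from the tracked eye position relative to the physical screen. Rough sketch below (a generic approach, my own code, nothing from the linked work):

    # Generic off-axis projection from a tracked eye position: screen-centred
    # coordinates, screen in the z = 0 plane, eye at z > 0, lengths in metres.
    def off_axis_frustum(eye, screen_w, screen_h, near, far):
        """Return OpenGL-style frustum bounds (left, right, bottom, top, near, far)."""
        ex, ey, ez = eye
        scale = near / ez                 # project screen edges onto the near plane
        left = (-screen_w / 2.0 - ex) * scale
        right = (screen_w / 2.0 - ex) * scale
        bottom = (-screen_h / 2.0 - ey) * scale
        top = (screen_h / 2.0 - ey) * scale
        return left, right, bottom, top, near, far

    # Viewer leaning 20 cm right of a 1.0 x 0.6 m screen, 60 cm away:
    print(off_axis_frustum((0.2, 0.0, 0.6), 1.0, 0.6, 0.1, 100.0))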

It could be interesting to do multi-user individual-perspective shutter-based 3D, but you'd need to run at a really high framerate and you'd probably have brightness issues as each eye would only be receiving light for some smaller fraction of time.
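Back-of-the-envelope (my own numbers, just to illustrate the trade-off):

    # Time-multiplexed shutter 3D: 2 eye-views per viewer, so refresh rate
    # scales up and per-eye brightness (duty cycle) scales down with viewers.
    def shutter_requirements(viewers, per_eye_hz=60):
        slots = 2 * viewers                  # one time slot per eye per viewer
        return slots * per_eye_hz, 1.0 / slots

    for n in (1, 2, 4):
        hz, duty = shutter_requirements(n)
        print(f"{n} viewer(s): {hz} Hz panel, {duty:.0%} of the light per eye")

So even at two viewers you're already asking for a 240 Hz panel and throwing away three quarters of the light.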



