The thing that strikes me the most is how absolutely terrible they are at showing off their product. Their whole promo video is extreme closeups on 3D renderings that look no different than what my cellphone can do because it's just a boring video of boring renderings and I'm watching it on my boring cellphone. It's like they're trying to show the grand canyon by filling the frame with a small bit of rock.
Looking Glass people, if you're reading this, you need to zoom way the fuck out, turn the lights on, and quit with the artsy bullshit fading to black every few seconds because it looks like you're hiding something. You cannot show pictures. You need to show the experience.
What really kills me is that they clearly shot the screen on a turntable (0:30) which would be the perfect time to show the on-screen image rotating, but no. Black screen.
Too hard to connect the cables for that shot? Didn’t bring a computer to the photo studio?
Another option: short animated loop to create depth from parallax. But the key thing is it needs to show the monitor as a whole along with the image on screen, all moving together. https://imgur.com/eh5u6Gu
Specifically, they need to pan the camera around, since you can't show off the stereoscopic effect on a regular screen. Here's my favorite demo: https://youtu.be/E8pZlI2WM_Q
They did not allow press to take images of the device when the display was on.
They knew images couldn't capture the 3D so they did what this company is doing and used a bunch of vague renderings to try to express how the thing worked.
Since it's probably the same underlying parallax tech powering both, I'd guess the reasonings are the same.
That's a bad reason, and they executed poorly. The TechCrunch gif I linked _kills_ the demo (in the good way) in a second. A random schmoe's cellphone took their marketing department to school without an ounce of preparation.
Yeah. The 3DS only supports one viewing angle: holding it straight on, parallel to your face. At that angle you see 3D; at any other angle you see distortions. So indeed, the 3DS is very hard to demonstrate in a photo or video. But Looking Glass supports multiple angles, which is what makes the gif you mentioned possible and is also what makes it unique as a product!
> Since it's probably the same underlying parallax tech powering both, I'd guess the reasonings are the same.
It's not. Looking Glass is an advanced version of lenticular optics, which is 100+ year old technology, except used to create a multiscopic display (many views).
Nintendo 3DS is newer technology, known as parallax barrier. However, 3DS is only two views, which is pretty much only useful for a single viewer.
It's not really the same tech. It's more like a super sandwich of 43 perspective layers. The 3DS uses a parallax barrier, which requires your head to be in a set location for it to work (I believe?), versus this monitor, which actually works by you looking at it from different perspectives (43 is the number I kept hearing).
I don't think the video quality or display quality is the problem though. The 8k screen itself looks plenty high quality in the Linus video bouncing around this thread. The problem is that their promo material entirely eliminates any sense of the _one_ thing that they bring to the table that makes them special.
They need to throw up a _static_ 3D image that the viewer can easily understand and then move the camera. That's it. That's all they had to do, because literally the one thing that makes this screen special is showing different viewing angles, and they failed wildly.
I have designed lenticular prints in the past and also I'm the person that wrote the vray plugin for rendering CGI to lenticular.
The main thing that everyone wants is a lot of depth, a wow effect. Lenticular technology can only deliver that while your head is static. As soon as you move, you either see stripes move across the image or you need to introduce an unnatural amount of bokeh blur.
The reason for the stripes is that from your eye / camera, different parts of the image have slightly different angles, so slightly different subpixels are visible.
So if they follow your suggestion, they need to ensure that the final video is low quality enough so that you don't see the striping artifacts.
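The striping math is easy to sketch. The numbers below are just typical lenticular-print figures I'm assuming (a 1200 dpi print under a 70 lpi sheet), not this display's specs, and `views_per_lenticule` is a made-up helper name:

```python
def views_per_lenticule(print_dpi: float, sheet_lpi: float) -> int:
    """Each lenticule (lens stripe) covers print_dpi / sheet_lpi pixel
    columns, and each column is visible only from a narrow angle band,
    so this is the number of distinct views the sheet can carry."""
    return int(print_dpi / sheet_lpi)

# e.g. a 1200 dpi print under a 70 lpi sheet:
print(views_per_lenticule(1200, 70))  # 17
```

More views means smoother motion parallax, but each view gets a thinner slice of the resolution, which is exactly the tradeoff that forces the blur.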
Here's a lenticular print that I had made at 4800dpi, so at a much higher resolution than what a display can hope to achieve
Note the strong blur to hide artifacts, yet you can still see some striping on the background and in the top right corner.
Here's a simulation of the best possible result that one could hope to achieve with 70lpi sheets and 1200dpi effective resolution, which should be close to what this display uses:
(Edited) The frog was on the low-res developer device.
The much larger “8k” version introduced halfway through https://www.youtube.com/watch?v=-EA2FQXs4dw has some good tech detail, with the raw pixel mapping shown at 6:30. The calcs seem about right, since the raw pixel count was stated as 43 million, although it was implied elsewhere that they split the RGB subpixels too (which sounds wrong, I admit), so maybe 43 million divided by 15? Our eyes are more sensitive to luminance than colour, so maybe they did something there (though not that I could see from the raw pixel mapping at 6:30).
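As a back-of-envelope check, using the 43-million raw pixel figure and assuming a 45-view count (an assumption; the thread also mentions 43, and the official spec may differ), the per-view budget works out like this:

```python
import math

raw_pixels = 43_000_000  # raw pixel count as stated in the video
views = 45               # assumed view count, not an official figure

per_view = raw_pixels / views         # pixel budget for each view
width = math.sqrt(per_view * 16 / 9)  # expressed as a 16:9 rectangle
print(f"{per_view:,.0f} px per view, roughly "
      f"{width:.0f} x {width * 9 / 16:.0f}")
```

That lands in the neighborhood of a sub-HD image per view, which matches the "effective resolution is much lower than 8k" intuition elsewhere in the thread.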
If they're counting subpixels as separate whole pixels and omitting the fact that the "effective" resolution is much lower because of the 3d layering just so they can say "8k" in their marketing material that's suuuper dishonest.
This is very reminiscent of the marketing around Magic Leap, except with this (judging from the actual videos of it in action from other people, which Magic Leap didn't have) there's a real product that pretty much matches the marketing behind the smoke and mirrors. Why are they so afraid to show it off?
I know! It's crazy! Reviewers are like "OMG so amazing! Look at how cool it is when you move around!", and they're in there going "Ok, but wouldn't it be fun if we hid it from everyone and made it look like we're full of shit and have something to hide?"
Given that this is the perfect display to watch porn, maybe they tried to get ahead of the game with their "artsy" approach by showcasing other good use cases like design, science and art.
I saw an earlier prototype at a Demosplash at CMU. It was really incredibly cool in certain instances. 3D objects really looked like they were "in" the volume of the display and moving my body or head from side-to-side was pretty flawless.
That being said, it also had limitations. 3D effects that extended beyond the display (like a tunnel, or some larger effect) lost that depth for me, and there's no up-down 3D, only side-to-side. The one I saw was also not as high-resolution as you might think: the effective resolution was the native resolution divided across the viewing elements, where each element was a particular angle the display was to be viewed from, and the computer had to render the scene from each angle simultaneously. The prototype I saw had the effect of looking a bit like 3D objects underwater.
Still, it was the closest thing I've seen to a volumetric display outside of a lab, and in the cases where it really worked (things inside the volume of the display) it was kind of jaw dropping.
I had my hands on a dev kit for a while. It's simultaneously amazing and meh. Sorta like any current AR headsets. The depth of field that the display achieves is miraculous. But the image is still grainy and has a really narrow viewing angle. This was last year, but we evaluated it as not quite ready for general use, though incredibly promising. The step up to 8K is probably very valuable, but the viewing angle is the big area for improvement.
It's more like panning. If you step left or right, the parallax effect persists, like walking around an object in a box on a table. But if you try to go up/down, it appears to rotate with you, as the display doesn't have any fields vertically.
It reminded me a lot of those "3d" pictures you can get that have a bunch of vertical sections at slightly different angles. They go back to maybe the 70s. There's a funny name for them that I can't quite remember.
My understanding is that the "box" that sits on top of the display is purely cosmetic and helps sell the effect, and that the backboard of the display provides everything.
Looking Glass is just a lenticular sheet glued to an LCD, the same technology as the cheap 3D postcards and stickers from the ‘90s made of vertically grated plastic that you tilt left and right to change the picture. Probably not even 3D. The acrylic block part is just a gimmick as well.
But reportedly they execute that principle super well, gimmick part included, to the point it looks almost VR.
The interesting thing is that if the lenticular lens is done with mini spheres instead of mini-cylinders, it should be possible to create a more fully holographic effect, given a sufficiently high-res base display. Basically a light-field camera in reverse.
Fovi3D (http://www.fovi3d.com/) makes this kind of light-field display based on microlenses.
One challenge here is the tradeoff between spatial and angular resolution. If you're generating a 4x4 lightfield starting with a 2160x1440 display, you only end up with 540x360 superpixels.
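The tradeoff is simple to sketch: each n x n block of base pixels collapses into a single superpixel that emits n x n view directions (a generic sketch, not any particular product's layout; `superpixel_grid` is a made-up name):

```python
def superpixel_grid(base_w: int, base_h: int, n: int) -> tuple:
    """A lightfield display that devotes an n x n block of base pixels
    to each microlens trades spatial for angular resolution: the
    spatial grid shrinks by a factor of n in each dimension."""
    return base_w // n, base_h // n

print(superpixel_grid(2160, 1440, 4))  # (540, 360)
```

Doubling the angular resolution to 8x8 would cut that to 270x180, which is why these displays are so hungry for base pixel density.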
The company I work for (https://www.leiainc.com/) makes a display that can switch between light field and 2D, so you can have the best of both worlds.
That's not what that article says. It's speculation from before the display launched. Here is a later article in which the founder of the company says it is lenticular:
"since he happens to be head honcho at a holographic display company he can show us the result. Looking Glass Factory’s display panel uses a lenticular lens to combine the multiple images into a hologram, and is probably one of the most inexpensive ways to practically display this type of image."
"There’s a lot more to the evolution, enough for a book on the subject, but if you had to create a list of what kinds of technology and methodology went into the Looking Glass (since they do not list it), it might look like this:"
No actually. The Nintendo 3DS uses parallax barrier technology, which is significantly newer technology than 100+ year old lenticular tech which powers Looking Glass and most 3D devices. That's why the Nintendo 3DS has such good image separation and reduced ghosting compared to cheap Chinese 3D phones and Rokit IO Pro 3D.
Wow, second reference to Demosplash I have seen in a week. Guess it's more well known than I thought. Looking at the pouet page, it looks like 2018 was a banner year for Demosplash; tons of prods from that year are on there. I went for the first time in 2019. The CMU alumni crew that organize this are a bunch of cool cats. Wish we had demo parties at my alma mater. :)
Yeah, I missed the other post on the FM Towns emulator or I would have commented. One of the coolest parts of Demosplash is the live old PCs and consoles you can play around on. Partly I think this is because Demosplash is hosted by the vintage computer club.
One of the professors at CMU (who coincidentally is also writing the FM Towns emulator and has written a number of demos for Demosplash, some of the first ever for those platforms) is a really cool guy who brings a library of almost-impossible-to-play Japanese-only PCs to the event. He's done some truly amazing stuff with some of them and they're a blast to play games on.
The folks that run it are great, including having a mainframe emulator running on a rPi as the sign-in system, and it should easily have 4-10 times the attendance. I really recommend it if you're just a few hours from CMU and can take the weekend. Even better, enter a competition!
IIRC, the competitor who brought the Looking Glass display was a VR developer for Facebook, so there's some serious muscle that shows up.
>One of the coolest parts of Demosplash is the live old PCs and consoles you can play around on. Partly I think this is because Demosplash is hosted by the vintage computer club.
Yea it was a lot of fun playing with the old machines at last year's Demosplash.
>One of the professors at CMU (who coincidentally is also writing the FM Towns emulator and has written a number of demos for Demosplash, some of the first ever for those platforms) is a really cool guy who brings a library of almost-impossible-to-play Japanese-only PCs to the event. He's done some truly amazing stuff with some of them and they're a blast to play games on.
Soji Yamaka is a wizard at reverse-engineering these old systems. I am in awe at his drive and ambition in figuring out solutions to preserving these old systems. His submission for the 2019 Demosplash was one of my favorites. We had some great late night chats.
>The folks that run it are great, including having a mainframe emulator running on a rPi as the sign-in system
HA! I remember that. Geeked out with the founders for like an hour just talking about how they did it. Seeing that at the check in desk was my first introduction to how CMU students leave no stone unturned in nerdy awesomeness.
When I was in school, me and my fellow retro geeks would talk about writing stuff like this but we never actually spent the time to learn how to write software for old machines. We would just be trying to survive our own classes and then waste time on other things.
The CMU computer club takes the craft to a whole new level. I guess it is to be expected given that they are a top tier school. While the mainframe system they emulated was before my time, I loved the glow of the DEC monitor they used. Took me back to when I first saw one as a child.
>IIRC, the competitor who brought the Looking Glass display was a VR developer for Facebook, so there's some serious muscle that shows up.
Ah man missed it! I don't recall any outside companies attending in 2019.
Do you attend every year or was it just a one off thing?
It is difficult to predict how successful a technology will be and how widely it will be adopted.
Does anyone use their 3D glasses that used to ship with "3D TVs" circa 2013? My Samsung TV came with a pair of active 3D glasses that just collected dust. On the other hand, we've had adoption of touch screens on handheld devices; they swept almost the entire mobile market between 2007-2012 after the iPhone's introduction. But the same thing didn't happen with keyboard + mouse input on desktop computers. In fact, the market just exploded with new mechanical keyboard aficionados sometime around 2010. I still remember hanging out on geekhack a decade ago, and now mechanical keyboards are everywhere.
We've seen it in the past and we will see it in the future: a whole lotta focus on aesthetics, UI, and presentation - i.e., cool graphics in video games, touch screens, holographic displays like the one in the article, AR/VR tech (Magic Leap?) - without proper attention to content leads nowhere. It'll probably just make headlines.
Another thing is that people don't take ergonomics into account. Pretty much every sci-fi movie has people waving their arms about to interact with content. That would never take off in real life. Unfortunately, people like Elon Musk are hell-bent on the horrible UI/UX depicted in their favorite sci-fi movies and on shoehorning it into spacecraft (the Dragon capsule has an all-touch-screen interface with literally no buttons; check out the Everyday Astronaut channel's tour). Elon's vision of UI/UX is misguided by movies; it is embarrassing. The giant touch screen panel in a Tesla is the main reason I would never get one. I think he might put this 8K holographic display in as an option, please don't tweet him about it.
Sci-fi movies are the cancer of design. They are the Victorian-design equivalent of modern times, pure decoration. You can find traces of it in professional equipment; this thing looks like it was pulled from a spaceship: https://images-na.ssl-images-amazon.com/images/I/61cGhQ0begL...
> The giant touch screen panel in a Tesla is the main reason I would never get one.
For me, the interesting part about owning a Tesla is the realization of just how badly designed traditional cars are. Dedicated tactile controls are better in an automotive context for fiddling while driving, and the Model 3 implements almost everything necessary while driving as physical controls.
The two things that I fiddle with while driving that don't have full physical controls are the stereo and the windshield wipers. The stereo has physical volume and track controls; I don't think a physical interface for any other part of the stereo would be much better than the touch screen. Windshield wipers should have a physical interface: the Model 3's auto mode is great 99% of the time, but when it isn't, it's too hard to adjust manually, and when you need to adjust the wipers is precisely the wrong time to be fiddling.
I don't know, I think the controls [0] on my car (VW Golf 2017, <20k new) are pretty close to perfection.
Physical buttons or dials for everything except for infotainment settings, but minimal with no clutter. Steering wheel input for cruise control/audio/wipers and the heads up display, while the climate controls are dead simple and easy to work without looking down. Infotainment has both touch and a context sensitive dial. Everything is grouped contextually and I never have to look away from the road.
I was especially impressed that a single dial (left, next to the lights) dims _every_ light in the entire cabin to the same brightness, across many different subsystems. Great design shields the driver from the complexity of the system.
That's a beautiful UI for a car, I also love BMWs (E90-F30) models before the latest 2019 redesign. 2012-2019 BMW 3 series is a perfection in UI design IMO.
The car I drive has the same problem as that image: undifferentiated buttons to control the AC. I want to turn on the windscreen defogger (or whatever it's called). Now which button is it, oops, now I have to look down cause I can't remember nor can I locate it by touch if I did.
On the golf it's not a button but a dial (furthest right).
I leave it on Defog most of the time (straight up) and change from there without needing to look. If I'm unsure a quick glance tells me the angle and I can make the adjustment without having my eyes on it.
I actually want every single thing as a button, knob, slider or a toggle. All 80 of them. I understand that I am in the minority, just stating my personal opinion.
I don't think people realize how amazing it would be to flip off a clunking toggle switch to turn on Bluetooth.
It is fricking glorious. IMO, hardware controls > software controls. Big time and $$$.
Monitors today have no dedicated buttons for switching input sources. Instead we get a d-pad nipple that takes 20 seconds to pop up a menu, then digging into the menu shortcuts (thank god!), and then having to select the input source.
Humans have regressed hard in last 30 years in terms of UX/UI.
The BENQ monitor I bought a few years ago had capacitive buttons. It started flickering and the warranty replacement was a newer model. I was happy to see that they had switched back to physical buttons.
That said, the OSD interface has nothing on the decade-old Dell that sits next to it, which even has the option to flip the interface for ease of use in portrait mode which is how I have it set up.
Over the last couple of years, I found the combination of physical controls for driving-essential functions and voice control (e.g. CarPlay; I don't like the manufacturer solutions as much) for convenience functions quite effective.
My Samsung 3D television was awesome for playing PC video games on. The big downside was that playing PC games on a large television wasn't the most ergonomic experience. (And for movies, none of the streaming services offer 3D content.)
I wish smaller 3D computer monitors had caught on. At least VR headsets are still going strong and are even more immersive.
Xfinity used to have 3D on demand as well. I have a 3D TV from 2012 and love it. It also has an interesting feature where it can interpolate 2D => 3D, which, depending on the content, works very well or is a complete mess.
They are both cool for short term use, but I've yet to see a shutter glass/etc implementation that is synced well enough to the screen and blocks 100% of the light to avoid ghosting.
VR goggles have their own issues, because the lenses never seem to be perfectly in focus, and the resolution isn't high enough that close to your eyes.
In both cases you can choose to ignore the problems for a while, but at least in my case the eye strain builds up enough I doubt I could deal with it for even 4 hours a day on a regular basis.
> They are both cool for short term use, but I've yet to see a shutter glass/etc implementation that is synced well enough to the screen and blocks 100% of the light to avoid ghosting.
I've never seen this on a monitor/display either. But I have an Epson projector from 2013/2014 that uses shutter glasses and does block 100% of the opposite eyes image. Because it's not a screen, it doesn't have to blank the image between frames, it just completely stops sending light from that frame.
> Does anyone use their 3D glasses that used to ship with "3D TVs" circa 2013? My Samsung TV came with a pair of active 3D glasses that just collected dust. On the other hand, we've had adoption of touch screens on handheld devices; they swept almost the entire mobile market between 2007-2012 after the iPhone's introduction. But the same thing didn't happen with keyboard + mouse input on desktop computers. In fact, the market just exploded with new mechanical keyboard aficionados sometime around 2010. I still remember hanging out on geekhack a decade ago, and now mechanical keyboards are everywhere.
Glasses-based 3D sucks. People who don't wear glasses don't want to wear glasses and find them uncomfortable. People who do wear glasses don't want to wear two pairs of glasses and find it uncomfortable. Active shutter glasses give some people headaches from the flickering. If I could get prescription lenses with polarization for my TV's passive 3D, I might play with it... But I watched like one 3D blu-ray with the glasses and that's good enough for me. This display looks interesting because the viewer doesn't have to wear anything, but we'll see.
Touchscreens on mobile work because they're cheaper to build than a number pad, and way cheaper than a keyboard; it's cheaper still to extend the touch screen, which is why most Androids don't use any real buttons on the front. The flexibility is helpful for text input.
A basic keyboard for a computer is $10 at retail because there is no size constraint making things expensive. Even a $10 keyboard has better user feedback than a touchscreen, but a computer sized touchscreen is going to cost more than $10. Plus, ergonomics. Touchpads could overtake mice, maybe, but desktop is being vastly overtaken by mobile, so it barely matters.
> Touchscreens on mobile work because they're cheaper to build than a number pad, and way cheaper than a keyboard; it's cheaper still to extend the touch screen, which is why most Androids don't use any real buttons on the front. The flexibility is helpful for text input.
Consumers didn't flock to a more expensive phone (the iPhone in 2007) because it was cheaper to make (it wasn't). They did so because the touchscreen enabled new forms of interaction not yet possible, enabling full-screen games, photo viewing/shooting, and web browsing to name a few.
It was cheaper to make the iPhone with a touch screen than it would have been to make it with a slide-out keyboard. While the iPhone has a big market share in the US and a few other high-income countries, touchscreen phones have taken over in almost all markets, even for inexpensive phones, because they're cheaper to make if you've already got a large enough screen and a fast enough processor in the phone for other reasons. You can still make a cost-constrained phone where buttons are a better choice, but a cost-constrained Android isn't that much more expensive in absolute dollars and provides so much more functionality, so buttons only make sense for the most cost-constrained buyers, or those who eschew a smartphone for other reasons.
Biggest problem with TV 3D is the lack of content. If vast majority of content is not 3D, and the stuff that gets produced is usually done digitally and not through proper 3d cameras, it fizzles out as a gimmick.
This type of holographic display will have a similar issue, but it may be saved by the fact that it can be very impressive as a display in commercial uses.
Touchscreens just enabled another method of interacting with the web and applications, a method that was already present via the mouse on PCs. They did not depend on massive adoption by the TV and movie industries with little benefit for them.
They have content - CT scans, Kinect, and modern smartphones equipped with depth sensors.
Start with medical applications, expand to luxury video chats. The screen is only part of the technology: it requires a beefy GeForce RTX 2080 Ti. Or, looking at it from another perspective, some potential customers already own $1500 worth of the required equipment. Regular 8K displays aren't cheap either.
Notebooks and smartphones had a long run before universal success. Let's see in ten years.
>Another thing is that people don't take ergonomics into account. Pretty much any sci-fi movie has people moving their arms about to interact with content. That would never take off in real life.
Why do you believe this to be the case? I've been having a blast with the (fully wireless) Oculus Quest, especially games like Racket NX which involve constant swinging of arms (you're playing tennis, basically).
I think people haven’t really updated their opinion of VR since the Quest came on the scene. VR will almost certainly never be as big as Mobile is now. But the Quest is a fantastic experience and lives well past the point where VR novelty has worn off. It’s the social experience and portability that does it. Sports, as you say, in particular are good here. I’ve been playing Echo Arena for a few weeks, now, and far from the novelty wearing off, I’m actually enjoying it more and more. Maybe it has something to do with the sense of presence and—dare I say—intimacy you get playing a multiplayer contact team sport when we’re all supposed to be self-isolating.
The main, enduring dimension VR adds to gaming IMHO is physical exertion (and this aspect is only really engaging with a standing experience... and then only enjoyable if wireless). I feel much better physically after the light exercise of half an hour of Quest gaming than I do after half an hour playing some PC first person shooter.
> Does anyone use their 3D glasses that used to ship with "3D TVs" circa 2013?
No, but I still do use them on my projector. It seems that 3D became pretty popular with the home theater market, who buys more expensive technology than average consumers. This is why most good projectors these days still support 3D, while almost all regular TV's do not.
Yes, but it can only go so far. Only certain glasses are compatible with certain devices. The cheap Chinese glasses I bought for my projector are absolutely phenomenal though.
I work in a high-volume manufacturing factory. AR has literally become the buzzword cliché that keeps popping up over and over.
I have serious doubts about AR use in things like assembly instructions. It's probably OK for training, but if you do a task 30 times, your brain develops a memory for how to do it and you don't need to wear AR glasses. For maintenance techs, which is how Google Glass is marketed, it is too much of a hassle to put on the glasses, have a software team write and maintain the application, and then, after spending $200k on this boondoggle AR project, what is the ROI? I really don't see any value in it... maybe there is a positive ROI for airplane maintenance.
I am really not convinced. It looks like a solution in search of a problem. Are there any AR goggles used in manufacturing industry on a mass scale?
I can see why this would be more useful than reading a 400-page instruction manual. This is simulation, and it adds value during training.
As I said, there are edge cases where AR/VR makes a lot of sense. HUDs in fighter jets, for example, are tremendously useful. But my complaint is mostly about people wanting to jam some new tech in, take on a lot of tech debt, and spend $$$ in a Fortune 500 company only to find that there is no real problem to be solved.
Should we spend $120k consulting with a software company to develop an iPad based checklist? Not to mention yearly maintenance of that codebase. Printed paper + pen works just fine in 90% of the cases for 99.99% less cost! You have to ask - how many people are going through this "old school" paper trail? 100? 1000? 10 million? I've seen AR in manufacturing roadmap slides where there are like 13 workers and they are all with 25+ years of experience and the management wants to do their AR boondoggle. Frustrating.
That thing is the natural continuation of 1960’s-era oscilloscopes. Those steeped in the industry find it familiar.
A bit puzzling that you criticize everything from ancient knob proliferation to a single large flat screen. The only alternative is small screens with deep menus, its own hell.
If you have just as many controls, you're faced with the same amount of mental space, and when that mental space no longer corresponds to physical space, seems like it just gets more difficult.
Those cockpits look intimidating to people who haven't been trained to fly those planes. But here's the thing, non-pilots can't fly glass cockpit planes either. The complexity is unavoidable.
To a certain level this is true and important when dealing with situations where someone making the wrong decision can have dire consequences. The comparison breaks down though when the added complexity only serves to increase the amount of control over processes which are meant to be accessible to casual users and experts alike, e.g. an audio-producing device offering a simple volume control to the casual user with an option to access a multi-band parametric equalizer to those who want or need to have more control over the device.
3D glasses were really cool for experiencing CGI, actually. It was possible to enjoy that content together in the living room, unlike VR content, where you are expected to sit in the same space with boxes on your heads.
> Unfortunately, people like Elon Musk are hell-bent on the horrible UI/UX depicted in their favorite sci-fi movies and on shoehorning it into spacecraft (the Dragon capsule has an all-touch-screen interface with literally no buttons). Elon's vision of UI/UX is misguided by movies; it is embarrassing.
My, just today there was a piece about SpaceX I watched on the noon news, and I noticed that as well. I was amazed; I refused to believe it was true until I looked it up. How could anyone possibly think about touch-only interfaces and sci-fi aesthetics in such a critical piece of equipment? It's already preposterous (and dangerous) that they do it in cars, but in a spaceship! That's another level.
I am a big fan of 3D and still use my 3D TV. You can also view 3D blu rays in VR. That generation of technology wasn't great for a lot of people, or extended use, unfortunately.
I wish they wouldn't call it holographic because it seems like it has nothing to do with what are traditionally called holograms which involve using interference patterns, diffraction, and coherent light to record and reproduce light field information into and from a 2d medium.
What this appears to be is a pseudo-light-field recording made with thin strips of vertical prisms or lenses, with many vertical strips per micro-lens, so that you get depth from the horizontal but not the vertical perspective (tilting the display up and down won't change the image, but panning left and right will).
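A minimal sketch of that idea: interleave N pre-rendered horizontal views into one frame, one view per pixel column. Real lenticular displays map views at the subpixel level under a slanted lens array, so this column-striped version is only illustrative, and `interleave_views` is a made-up helper name:

```python
import numpy as np

def interleave_views(views: np.ndarray) -> np.ndarray:
    """Interleave N pre-rendered views, shaped (N, H, W, C), into a
    single (H, W, C) frame by giving pixel column x to view x % N.
    Each lens stripe then steers each column's light toward a
    different horizontal angle, so each eye sees a different view."""
    n, h, w, c = views.shape
    out = np.empty((h, w, c), dtype=views.dtype)
    for x in range(w):
        out[:, x, :] = views[x % n, :, x, :]
    return out
```

Because the stripes run vertically, moving your head up and down changes nothing; only horizontal movement selects a different view, which matches the behavior described above.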
I was reading the article trying to guess the price of this thing. I was thinking $20k before I got to the last paragraph which mentions the price is quote only. I assume that means I was very low on my guess, but then I checked the website and they have 15 inch dev kits for as low as $3k, so I have no idea. Anyone have a rough ballpark for what this would cost?
If they could do $20k for the display, I would expect to see some usage (as their marketing materials suggest) in medical devices. Displays targeted at medical devices can be surprisingly expensive; glasses-based 3D monitors, such as those used on upcoming surgical robots, are being purchased at around $12k from what I hear.
I don't know about "prototype", but if it helps, the Kickstarter was $2k (I remembered it as a grand, but I guess my memory was wrong) for the big (now medium) 15" one, and $400 for the small 8". After it arrived, I kinda regretted not gambling on the bigger one.
These remind me of the holography exhibit at the MIT Museum. If you haven't seen it before, I highly recommend it. For those of us whose main experience with holographic images are the little foil sticker on the backs of credit cards (and, increasingly, whose main experience with the world more than 50 miles from our homes is through a screen), it is mind-blowing. The detail and visceral dimensionality struck me profoundly; they look as if someone has cut out a piece of reality and copied it into another space, frozen in that instant forevermore.
(I recognize the irony of trying to illustrate this with a Youtube video, but nevertheless: https://youtu.be/LkpBYne7SlU?t=54 ; I wish that the one with a man at his desk was viewable.)
This looks to have the same effect, in full color, and animate-able. Light field technology is truly amazing.
I remember seeing glasses-free 3D displays from Sharp many years ago and thinking this would be everywhere within two years. The technology was used in some Asian novelty pre-smartphone handsets and in one Game Boy model, but then disappeared. That was not light-field technology like the Looking Glass, so only one angle and one viewer could really enjoy it, but I assume the market will behave pretty similarly this time: after the novelty wore off, nearly every single user I talked to preferred 2D displays if they had even slightly better clarity, brightness, and resolution.

Of course that insight sounds nearly trivial, but what really surprised me was the extreme degree of this preference: people were not even interested enough to make 3D monitors a second mainstream option next to super-retina 2D displays or whatever; the market did not even come into existence outside very small niches. In a completely different context, I was extremely surprised how many early "VR" experiences did not even mention they were just 2D with head-tracking, and no one except me seemed to find that odd or super annoying. It is as if no one really cares about 3D.
Yea, it’s really not designed for 3D movies or replacement of a normal monitor. It’s great for viewing or editing individual models that seem to fit inside the display which while a niche is a fairly wide one.
Also, “only 864x853 ... it might be difficult to read normal-sized text.” That makes me feel really old, I spent a long time coding on 640×480 in 16 colors
I remember VESA256, when you could go from 320x240x4 to 640x480x8, and I thought "wow, this is insane". Nowadays anything under 1920x1080 makes me avoid it.
I am also shocked when people call HD (1920x1080) panels "shitty". Even though I am using a pretty high-resolution ultra-ultra-wide, HD still feels like the bleeding edge to me.
I can't speak for the bigger models, but the SDK lets you have a range of grid sizes, effectively giving you a range of, err "3D" resolutions. (Pretty sure these options were for rendering performance rather than quality though)
It's unclear to me how the "K" rating is affected by the holographic features, but for normal monitors 8K is a waste.
Every monitor size and resolution has an "optimal" viewing distance where the human eye can resolve the maximal amount of detail. Unfortunately sometimes the "optimal" viewing distance deserves its scare quotes, as it results in the viewer sitting so close that they hurt their neck trying to get away from said overwhelming screen.
For a 55" 4K display, the size of my TV, the optimal viewing distance is 3.7ft, far closer than I sit to my TV. Between 3.7ft and 7.2ft (ideal distance for 1080p), you get some benefit from 4K, but not all of it. In order to even detect the difference for an 8K display I would need to sit somewhere between 3.7ft and 1.7ft (ideal 8K distance) in order to reap the benefits of my purchase. Needless to say, I am not sitting that close.
For monitors the story is a bit different, because they're small and we sit close to them. A 32" 4K display has an optimal distance of 2ft, which is actually pretty reasonable, while a 32" 8K display has an optimal distance of 1.1ft, which is again too close. I personally suspect this is part of why Apple pushed to 5K and 6K (although the latter might be cinema-related): ~5-6K is probably the maximum useful resolution for a desktop monitor.
This is a long-winded way of saying that 8K is kind of a gimmick, at least for desktop and home use.
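For reference, the distances quoted above follow from simple geometry: at the "optimal" distance, one pixel subtends about one arcminute, roughly the limit of 20/20 acuity. A quick sketch assuming a 16:9 panel (exact figures vary slightly with the acuity constant used, which is why these come out a touch under the numbers above):

```python
import math

def optimal_distance_ft(diagonal_in, horizontal_px):
    """Viewing distance (feet) at which one pixel of a 16:9 panel
    subtends one arcminute, approximately the 20/20 acuity limit."""
    width_in = diagonal_in * 16 / math.sqrt(16**2 + 9**2)  # 16:9 geometry
    pixel_pitch_in = width_in / horizontal_px
    return pixel_pitch_in / math.tan(math.radians(1 / 60)) / 12

print(round(optimal_distance_ft(55, 3840), 1))  # ~3.6 ft for a 55" 4K TV
print(round(optimal_distance_ft(55, 1920), 1))  # ~7.2 ft at 1080p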
I think it's different in this case: essentially, that 8K is split up among a bunch of smaller views that are then displayed as layers (a very simplified description), so the actual effective resolution is much lower than 8K.
Holographic displays are one approach being studied for use in VR/AR headsets. My understanding is that they may help solve the vergence-accommodation conflict, which causes a lot of the motion sickness, headaches, eye strain and other issues people have with VR:
Current headsets use two flat screen displays positioned a fixed distance from your head showing two slightly different 2D images. This tricks your brain into thinking you're seeing a 3D environment with objects closer or further away than the displays actually are, but some parts of your visual system are not fooled - leading to a conflict where your eyes try to focus and adjust to what you're seeing in two different ways at the same time.
Holographs may be able to provide more depth cues to each eye, helping to convince the visual system that the images are real.
"""vergence-accommodation conflict, which causes a lot of the motion sickness, headaches, eye strain and other issues people have with VR"""
This is untrue. It's actually a pretty minor share of the issues. There are individuals who suffer from this disproportionately, but the estimates I've seen were in the low single digits.
The major issues are in the source to the link you posted -
"""Accommodation-vergence conflict is the one remaining aspect of vision that is not simulated by current VR headsets. While it is not as big a deal as simulator sickness induced by poor tracking, high latency, or artificial locomotion, """
These are the major sources of VR discomfort, and they are increasingly handled as the baseline VR specification comes within reach of more and more hardware systems.
I'd also note that as the poor tracking and high latency issues have disappeared people have found they are comfortable with radically wider ranges of artificial locomotion styles.
At this point, in my opinion, the utter uselessness of any text based applications (ie what people actually DO all day long) is what is holding VR back. The resolution needs to scale up fairly radically.
This monitor may have better text results, but it's not clear what its boundaries as a 3D display are, as I haven't seen a review from knowledgeable sources (Oliver Kreylos/docok is one of the people I'd like to hear from).
That's the biggest culprit. Some people even get nausea without VR headsets. There was a presentation at the California Academy of Sciences where the camera was panning as if it was travelling. A few people nearly barfed.
I wonder how much of it is training. Lots of people are ok with flight simulators, or simulated cars, train rides, etc. It's mostly when they think they are moving that the problem presents itself.
> At this point, in my opinion, the utter uselessness of any text based applications (ie what people actually DO all day long) is what is holding VR back.
In non-gaming scenarios, yes. Resolution could be better. However, it is not so bad at all. You need bigger "displays" in VR than what you would have in real life, but you can code alright.
I would personally prefer lighter, less intrusive devices, even if the resolution was the same.
The user moving themselves around the world is artificial locomotion.
In vr, taking the camera control away from the user (as if in a film or cutscene) is an absolute no go and whoever did that in the example you experienced should be professionally embarrassed and apologize to the people they put through that.
Actual no-compromise panel-sized laser-bending holography is far beyond current tech. You need micrometer resolutions - think 5k dpi as a blurry bare minimum, and more like 50k dpi for high quality output - combined with 3D rendering at that resolution, passed through a holographic transform which maps a 3D scene to a holographic plane.
For animated holograms you have to do this in real time. And colour is still a problem - ideally you want at least three different planes for RGB, all correlated with sub-micron accuracy.
Shortcuts are possible - actually with stereoscopic 3D TVs and monitors they've been and gone - but the real thing won't be happening any time soon.
3D-like emulations - like this product - are much more plausible in the short term.
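A back-of-envelope calculation shows why those dpi figures put true holography out of reach. The 24" panel size, 60 fps, and one byte per sample are my assumptions for illustration; the dpi figures are the parent comment's estimates:

```python
import math

diag_in = 24.0  # hypothetical 16:9 panel
width_in = diag_in * 16 / math.sqrt(16**2 + 9**2)   # ~20.9"
height_in = diag_in * 9 / math.sqrt(16**2 + 9**2)   # ~11.8"

for dpi in (5_000, 50_000):
    px_per_plane = (width_in * dpi) * (height_in * dpi)
    # three colour planes at 60 fps, one byte per sample
    tb_per_s = px_per_plane * 3 * 60 / 1e12
    print(f"{dpi:>6} dpi: {px_per_plane:.1e} px per plane, ~{tb_per_s:.0f} TB/s raw")
```

Even the "blurry bare minimum" works out to billions of samples per colour plane and terabytes per second of raw framebuffer traffic, before any holographic transform is computed.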
They really need an editor. It's even worse than the syntax errors; saying something `felt more like a proof of concept [...] was immediately an impressive concept` is amateur wordsmithing. Better wording would be "Immediately, viewers were left with a strong impression despite the display being a proof of concept."
> When Looking Glass Factory showed /of/off/s its first holographic display way back /on/in/s August 2018, it felt more like a proof of concept than anything — though it was immediately an impressive concept
Ever since I first saw one of these I have been wondering if it would be possible to display as-shot light field camera images. If it is, they should seriously think about collaborating with Lytro.
We got one of these in the office (much smaller) that we only bring out for conventions. It's a nice conversation piece but serves no practical purpose.
That's why I will regret not attending SIGGRAPH in person this year. You could see amazing demos of frontier technology not yet commercialized.
Ditto watching the NVIDIA keynote recently. They were bragging about how good their AI-driven 8K raytracing was, but I could not see much before-and-after difference on my puny tablet screen.
The only advantage this has over a parallax-barrier 3D display is you can move your head to look around an object. I don't see how that makes up for having 24 times the processing footprint, greatly reduced resolution, and enormous pricetag, especially if you equip a parallax display with similar hand-tracking sensors.
Is this the technology being used on Disneyland's smugglers run ride? I remember moving my head around and the 3d perspective following my viewpoint, but my Google Fu at the time could only find articles about the real-time rendering used on the screens.
That may be quite cool for demos and entertainment, but no way in the office: eye strain after an 8-hour day must be severe. I would spend the same money on a 27" e-ink display once one is available.
When some of those 45 perspectives aren't going to reach any watching eyes, do they still need to be rendered? If not, one might use head/eye tracking to save compute.
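One way such culling could work, sketched under assumptions of mine (45 evenly spaced views across a roughly 50 degree cone, keeping a couple of neighbouring views for the second eye and for tracking latency); this is not based on the actual SDK:

```python
def views_to_render(eye_azimuth_deg, n_views=45, cone_deg=50.0, margin=2):
    """Map a tracked eye azimuth (degrees off-axis) to the nearest
    view index and keep `margin` neighbouring views on each side."""
    half = cone_deg / 2
    # clamp into the viewing cone, then normalize to [0, 1]
    t = (max(-half, min(half, eye_azimuth_deg)) + half) / cone_deg
    center = round(t * (n_views - 1))
    return list(range(max(0, center - margin),
                      min(n_views - 1, center + margin) + 1))

print(views_to_render(0.0))    # straight on: a handful of central views
print(views_to_render(-25.0))  # far left edge of the cone
```

Rendering 5 views instead of 45 would cut the render cost by roughly 9x, at the price of the display going wrong for any untracked bystander who wanders into the cone.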
Could also be neat for museums to virtually show jewelry and other small objects. Hard to justify the price for consumer use, but for an exhibit with thousands of people passing through you’d get more mileage.
The benefits being that you can share the items across many museums at once without exposing them to UV or theft risk, and without taking the originals away from wherever they belong.
I have seen real holographic photographs [1] being used in museums. They are getting cheaper, and there are hologram kits you can buy to make your own at home. Obviously not animated, but in principle it should be possible to do a holographic movie on chemical film as well.
This shitty gif from techcrunch is infinitely more impactful in every conceivable way despite being a shit quality gif. https://techcrunch.com/wp-content/uploads/2018/08/Aug-22-201...