Cascaded Displays: Spatiotemporal Superresolution using Offset Pixel Layers (nvidia.com)
105 points by chdir on July 29, 2014 | 37 comments



Video showing its capabilities: http://www.youtube.com/watch?v=0XwaARRMbSA


Very useful, thanks.

In 10 years, the 'before' example in a before/after vid will still look like it does in this video, just as it did 10 years ago.

The 'before' example shots never seem to evolve, which makes me cynical about display technology demos I don't see in person.


Volume is really low (in my case at least) but the automatic captioning is a home run in this one.


Very interesting approach.

I think one interesting aspect of this is that it couples spatial as well as temporal interpolation. This means that you get a higher resolution as well as a higher framerate, but on the downside it seems to introduce additional artifacts depending on how these two interpolations interact.

I have not yet read the technical paper and only watched the video without sound, but from the video it seems that moving sharp edges introduce additional artifacts (this can be seen when looking at the features of the houses in peripheral vision at 5:11 in the video). This is roughly what you would expect to happen if both pixel grids try to display a sharp edge: due to their staggered update, one of the two edges is always in the wrong position.

This problem could probably be somewhat alleviated by an algorithm that has some knowledge of the next frames, but that would introduce additional lag (bad for interactive content, horrible for virtual reality, not so bad for video).

I intend to read the paper later, but can anyone who already read it comment on whether they already need knowledge about the next frame or half-frame for the shown examples?


> on the downside it seems to introduce additional artifacts

It definitely introduces a form of ghosting visible near the rear end of the motorcycle.

As for lag, I can already see John Carmack cringing! There may be an interesting effect though, in that the increase in apparent resolution is quadratic while the increase in computation is linear. Hardware-wise, this could possibly be done straight in the double-buffering phase without additional lag, if it can be made to race the beam.


If I understand correctly, the idea is that you get a high-resolution display by putting two low-resolution displays in front of each other?


This is not the first time that someone has stacked two displays to get better output. About five years ago there was a paper about using a DLP projector to backlight an LCD display, yielding high dynamic range. I can't find the paper right now, only this poster: http://www.cis.rit.edu/jaf/publications/2009/ferwerda09_vss_...

Like the LCoS hack at the end of this video, the DLP backlight suffered from registration artifacts and other crazy limitations. It's still a nifty idea, though.


Yeah, this is what I'm wondering about as well. What does the actual implementation of this look like? Is it just one display being fed 2 low-resolution image streams? And is there any effort required to synthesize the cascaded image?


I guess the main point is lower production cost. For a 4K screen you need 4x as many pixels as a 1080p one, which is hard to manufacture without a high defect rate; by cascading two 1080p LCDs you get comparable results at a much lower cost.

Besides, you don't need 4x the display bandwidth of 1080p, just double it.

Just my guess.
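
Rough numbers to back that up (my own back-of-the-envelope, raw pixel data at 24-bit color and 60 Hz, ignoring blanking):

    res_1080p = 1920 * 1080            # 2,073,600 pixels per panel
    res_4k = 3840 * 2160               # 8,294,400 pixels

    print(res_4k / res_1080p)          # 4.0 -> a 4K panel needs 4x the pixels

    bits = 24 * 60                     # bits per pixel per second at 60 Hz
    print(2 * res_1080p * bits)        # two 1080p streams: ~6.0 Gbit/s
    print(res_4k * bits)               # one 4K stream:     ~11.9 Gbit/s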


I guess this would also be a nice improvement for smartphone screens, since you would only need to power half the number of pixels for the same equivalent density, which would save on battery.


Nya, the light will have to travel through two LCD layers, if I've understood it correctly. You'd need a very powerful backlight.


In the video, some of the captures for the cascaded displays were actually brighter (e.g. the 4:20 mark). I'm not sure why, and it looked like they were just using a single display's backlight. Anyone know?


My guess is the opposite; it seems like cascading two 1080p LCDs ought to be more expensive than a single 1440p LCD which may have equivalent spatial resolution. The demo video doesn't address this; it says the cascaded display is better than a "conventional" one, but there are a variety of conventional LCDs available on the market.


> What does the actual implementation of this look like?

If you look at the video, the prototype "implementation" is just two display panels laid on top of each other. There's also another one involving projectors, which involves a more complicated optical setup.

> Is it just one display being fed 2 low-resolution image streams?

Yes, this is the impression I got.

> And is there any effort required to synthesize the cascaded image?

Yes, there is, but it was achieved in real time with GPU acceleration.


I have difficulty understanding the mechanism of this supersampling (two successive images to make one?). Can anyone explain this in a simple way?


They have two layers, slightly offset (by half a pixel in both directions), on which they show different images that together combine into one of higher resolution. They can also show different frames one after another quickly enough that they appear to belong to the same image, each contributing different parts to the temporal or spatial resolution of the final result.

Since they're using off-the-shelf LCD displays for their prototype, I guess the final result is not yet flicker-free (they probably cannot show more than 60 fps, and thus not more than 15–30 high-resolution frames per second). That's also evident in their demonstrating the capabilities with 5- and 10-fps video. But that's just a matter of a higher refresh rate for the displays, I guess, unless computing the individual frames is too taxing for now (it doesn't seem to be; they do plenty of work in shaders, being NVidia and all).

Major benefits seem to be cost, simplicity and size; their prototypes were built as a head-mounted display and a small projector.
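
If it helps to make the spatial part concrete, here's a toy model in Python (mine, not anything from the paper): each layer is a low-resolution transmittance image, the front one shifted by one high-res pixel (half a low-res pixel) diagonally, and the perceived image is their pointwise product.

    import numpy as np

    def up2(layer):
        # Nearest-neighbour 2x upsample: each low-res pixel covers a 2x2 high-res block.
        return np.kron(layer, np.ones((2, 2)))

    def cascaded(rear, front):
        # rear/front: low-res transmittances in [0, 1]; the front layer is offset
        # by one high-res pixel in x and y, and light passing both layers multiplies.
        return up2(rear) * np.roll(up2(front), (1, 1), axis=(0, 1))

    rear, front = np.random.rand(4, 4), np.random.rand(4, 4)
    print(cascaded(rear, front).shape)   # (8, 8): 4x the addressable pixels

Every high-res pixel lands on a distinct (rear pixel, front pixel) overlap, which is where the extra addressability comes from; the temporal half of the trick is doing the same kind of staggering in time.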


I found this aspect of the build interesting:

"The bare liquid crystal panel was affixed to the base plate, held in direct contact with the first LCD at a fixed lateral offset. As assembled, the front polarizer on the bottom LCD is crossed with the rear polarizer on the top LCD. Rather than remove the polarizers from the thin glass panels, we placed quarter-wave retarder film between the two (American Polarizers APQW92-004-PC-140NMHE): rotating the polarization state to restore the operation of the top LCD."

It's probably extremely basic knowledge for people familiar with polarization, but I didn't know it could be so simple.


> their prototypes were built as a head-mounted display and a small projector.

Sounds like there are issues with parallax. Both head-mounted displays and projectors have the benefit of a fixed viewer (or light source, same thing) relative to the display. LCDs have depth, and at any other angle your offset is ruined. The tech is not likely coming to a desktop or laptop monitor any time soon.

Of course you can probably get around this. The two most obvious solutions being face tracking or paper-thin LCDs.


Take a piece of graph paper, and then put another on top, offset in X and Y by half a square. You can still see the lines underneath, making it look like the grid has twice as many squares in each direction.

The rest is math.


I don't know if this is technically accurate or not, but "<broad layman explanation>. The rest is math" is begging to be a snowclone.


The main part of an LCD is transparent. It looks like they've stacked one in front of the other, with a half-pixel offset, and arranged the polarizers in such a way that they perform a multiply operation.

So, two panels can produce 4X the resolution, using only static images. But I'm guessing they'd have to sacrifice some bits in the luminosity domain to make it work.
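
That luminosity trade-off is easy to check if you model each panel as an 8-bit transmittance and multiply (my own quick check, not from the paper): you gain lots of finely spaced levels near black, but near white the smallest achievable step is still a full 8-bit step.

    import numpy as np

    levels = np.arange(256) / 255.0                              # one 8-bit panel
    stacked = np.unique(np.round(np.outer(levels, levels), 9))   # products of two panels

    print(len(stacked))                          # many distinct levels, bunched near black
    print(1.0 - stacked[stacked < 1.0].max())    # ~1/255: no finer than one panel near white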


I would really like to see some data on the memory savings using this technique. How significant are they?


I would guess zero. My understanding is that you render at the full higher resolution and then simply compute the proper subpixels on the offset displays to align them right. You still need all the data there, using the full amount of memory; otherwise you can't really perform the calculations necessary for the subpixel/temporal interpolation.
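
For the curious, that "computing the proper subpixels" step can be pictured as a small optimization: given the full-resolution frame, find the two low-res layers whose offset product best matches it. A minimal sketch of that idea follows (plain projected gradient descent; the paper's actual solver is more sophisticated and also handles the temporal interleaving, so treat this purely as an illustration):

    import numpy as np

    def up2(x):                      # nearest-neighbour 2x upsample
        return np.kron(x, np.ones((2, 2)))

    def down2(x):                    # adjoint: sum each 2x2 block back to low-res
        h, w = x.shape
        return x.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    def fit_layers(target, steps=500, lr=0.05):
        # Fit rear/front so that up2(rear) * shifted(up2(front)) ~= target.
        h, w = target.shape
        rear = np.full((h // 2, w // 2), 0.7)
        front = np.full((h // 2, w // 2), 0.7)
        for _ in range(steps):
            r_up = up2(rear)
            f_up = np.roll(up2(front), (1, 1), axis=(0, 1))   # half-pixel offset
            resid = r_up * f_up - target
            rear = np.clip(rear - lr * down2(resid * f_up), 0, 1)
            front = np.clip(front - lr * down2(np.roll(resid * r_up, (-1, -1), axis=(0, 1))), 0, 1)
        return rear, front

    target = np.random.rand(8, 8)    # stand-in for one high-res frame
    rear, front = fit_layers(target)
    recon = up2(rear) * np.roll(up2(front), (1, 1), axis=(0, 1))
    print(np.abs(recon - target).mean())   # residual: 32 knobs can't hit 64 targets exactly

Either way you still hold the full-resolution frame (plus the two low-res layer buffers) in memory, which matches the "zero savings" guess.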


Unfortunately this will be yet another proprietary technology from Nvidia that nobody else will use - which means it won't have mass adoption - which means it's ultimately pointless (unless someone else creates an open source version of it).


> Unfortunately this will be yet another proprietary technology from Nvidia that nobody else will use...

This is a scientific/technical research paper for a computer graphics conference. It's not even near being a technology that ships.

There's a reason there are so many "Nvidia only" technologies. Take G-sync displays, for example. The problem it solves dates back to cathode ray tube displays, but overcoming it takes integration between the display controller hardware (in the graphics card) and the panel control electronics. Display manufacturers don't make the GPU hardware, so the only option is for a GPU company to take the first step.

In the long run some of these technologies will become standard and widespread, but someone has to take the first step, and that step must be economically viable.


Except what G-sync does is not Nvidia-only. G-sync sends frames at maximum speed (DVI, HDMI, and DP normally only send frames as fast as strictly needed to finish in time, i.e. at 60 Hz it takes approximately 1/60th of a second to send a frame, even if the connection has the clock rate available to send it in 1/144th of a second), and it sends them on demand instead of at the next frame interval (to reduce latency and jitter).
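
(Putting rough numbers on that timing point, assuming a link that could also be clocked for 144 Hz:)

    refresh_hz = 60
    fast_link_hz = 144                     # assumed faster clock the same link could run at

    scanout_ms = 1000 / refresh_hz         # ~16.7 ms: frame paced over the whole refresh interval
    burst_ms = 1000 / fast_link_hz         # ~6.9 ms: same frame pushed at full link speed

    print(scanout_ms, burst_ms)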

However, before Nvidia announced G-sync (which requires special and expensive hardware), a group of companies led by AMD submitted an addition to the DisplayPort standard called Freesync, which does the same thing using existing hardware (and some monitors already in the wild could theoretically be upgraded to support Freesync with only a firmware update).

Nvidia (as a company that makes gamer-grade GPUs) will be required to support Freesync on their GPUs (according to Nvidia, no existing GeForce can do it due to hardware issues; AMD says GCN Radeons should be able to with driver updates), because VESA accepted the Freesync proposal and finalized it into the next DisplayPort spec (VESA/AMD announced it at CES 2014, shortly after Nvidia announced their much more expensive and proprietary G-sync).


Yes, you are correct. But G-sync (kind of) ships already; you can buy a DIY module to mod a display with it. I'm not aware of any other solution shipping yet.

I sincerely hope that these proprietary technologies will get put in a standard that will ship with several vendors. All the variable refresh rate demos I've seen look amazing.

But the point above still stands, it took the initial effort of the GPU companies to make progress happen on this. It would have been impossible for a display manufacturer, let alone a panel manufacturer to make this happen.


NVidia doesn't make displays, so unless you're expecting them to move into that market, your comment doesn't make sense. (Doubly so because this is hardware, not software.)

I expect NVidia to license this to monitor manufacturers to drive up demand for 4k-capable video cards.


NVidia has various contractual arrangements with companies that do make displays to include NVidia-proprietary features though.


What is the real use case for this? Gaming and VR?

We have no problem making 4K screens, and the hardware isn't held back by it either.


Well, display hardware isn't the problem. 4K, 8K, there's no end to it. Cascaded displays using multiplied layers seem to help achieve benefits like sharpness at super-high resolutions and effectively smoother results from low-frame-rate (staggered) video playback. This helps remove a major obstacle that high-res display technologies will face in the short term, which is processing power. Presently, high-end graphics cards can barely crank out 30 FPS at 4K resolutions for games. Also, any compression artifacts etc. in textures are much more pronounced on high-res/big displays. While requiring the game-development workflow to change a little, cascaded displays can potentially help render higher-resolution, better-quality/sharper images at lower frame rates (i.e. much more cheaply) while still providing that 60 fps feel.

Personally, if this takes off, I can see it saving the Xbox One's ass, as a lot of the complaints from gamers have been about its inferior capabilities for rendering high-end games (it renders many games at 720p and 30 frames/second, while the PlayStation 4 is able to crank out 1080p for the same titles). I can also see it being another factor in prolonging the shelf life of the present generation of consoles, by enabling them to deliver much better graphics with the same hardware. Kind of like what normal maps (among other things) did for the Xbox 360 and PS3: you can see the difference in graphics between a game released in 2005 and a game released in 2013 on the same hardware. Among a lot of other factors, that was why it took 7 years before we saw the next generation of consoles. Comparatively, the Xbox 360 came out within 4 years of the release of the original Xbox.

TL;DR - It's not about the display hardware itself, it's about the ease of rendering graphics to meet the demands of high-end displays.


I'm not sure processing power is going to be any different here. To get acceptable results from the proposed method you need twice the resolution (two displays) and twice the frame rate, which translates, surprise, to four times as much processing power, just as quadrupling the resolution would.


I think they actually propose something like rendering two 1080p streams at 60 Hz to get the effect of both higher resolution than 1080p and a higher frame rate than 60 Hz. That's their intent, apparently; who knows whether the staggered frames create problems for viewers if the frame rate isn't actually doubled.


Oh yeah, sorry about that, I forgot that it's hard to see the advantage in video playback. However, in games, when rendering a Full HD frame twice (with different settings/specifications), you would see a huge advantage in terms of memory and some benefit in the amount of processing power required.


It's very well suited for VR. VR really needs small displays with resolution that is simply not economical to manufacture. An 8K tablet LCD would be crazy expensive, but two 4K LCDs only cost twice as much as one.

As a bonus, VR really wants crazy fast refresh rates as well.


4K screens are expensive to make. If the 4K burden were taken away from the OEMs and put on the GPU makers (who would have to deal with the performance drawbacks of 4K displays anyway), then TV manufacturers could start selling 4K TVs instead of 1080p ones by next year, and at the same prices (well, they would probably make them a bit more expensive to take some extra profit, but the point stands). Same with tablet makers, monitor makers, and so on.


Well, they are not. MiTV and LeTV in China are selling them at less than $500 USD for 50".

I can see this being useful in gaming but not anything else. And my valid question got downvoted. :(



