
"I can't tell much difference between putatively 4K content and clearly-1080P"

I presume the TV does upscaling? Not an expert, but theoretically a kick-ass algorithm should be able to glean more resolution out of a temporally changing signal than its native resolution - thus actually providing more than 1080 pixels' worth of resolution out of the 1080p stream.

When I bought an HD TV, either the Blu-ray player or my TV (can't recall anymore) upscaled my old DVDs with surprising quality - I can only presume the same applies to the current generation (haven't had the chance to do a detailed analysis).
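
To make the "temporally changing signal" point concrete, here's a toy sketch (my own illustration in NumPy, not any TV's actual algorithm): several low-res frames of the same scene, each sampled on a slightly different sub-pixel grid, jointly contain the full-resolution picture. Real systems have to estimate the motion and cope with noise and blur, so they recover only part of this, but the extra information is genuinely there:

    import numpy as np

    rng = np.random.default_rng(0)
    factor = 2
    scene = rng.random((8, 8))        # the "true" high-res scene

    # Each frame samples the scene on a different sub-pixel offset grid,
    # which is what camera/scene motion effectively gives you over time.
    offsets = [(dy, dx) for dy in range(factor) for dx in range(factor)]
    frames = [scene[dy::factor, dx::factor] for dy, dx in offsets]  # four 4x4 frames

    # "Shift-and-add" fusion: put every low-res sample back where it came from.
    fused = np.zeros_like(scene)
    for (dy, dx), frame in zip(offsets, frames):
        fused[dy::factor, dx::factor] = frame

    print(np.allclose(fused, scene))  # True: the frames jointly held full resolution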

Experts please revise.



How can an algorithm create more resolution (i.e. information) from less information?

Do you mean a wavering 4K stream output as 1080p?


Video Upscaling via Spatio-Temporal Self-Similarity: http://vision.ucla.edu/papers/ayvaciJLCS12.pdf

The Freedman and Fattal paper they mention can be found here: https://pdfs.semanticscholar.org/7df0/39049948d54fd1f4d75526...


We all laughed at the ridiculous "Zoom and Enhance" bits on TV crime shows, but it's become much more plausible in the past couple of years.

It's called super-resolution, or upsampling. Here is a good overview of techniques: http://www.robots.ox.ac.uk/~vgg/publications/papers/pickup08...

More recently, Google's RAISR: https://research.googleblog.com/2016/11/enhance-raisr-sharp-...

This repo pulls together techniques from several papers with impressive results: https://github.com/alexjc/neural-enhance

Anyway, it's an area of active research; there are already four dozen relevant papers in 2017 alone: https://scholar.google.com/scholar?q="machine+learning"+"sup...
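
If you want the flavor of the learned single-image approach without reading the papers, here's about the smallest sketch I can write down - roughly the classic SRCNN layout (Dong et al. 2014), not RAISR and not what neural-enhance actually ships, just "learn a mapping from a blurry bicubic upscale to the sharp original" (PyTorch):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SRCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.feat = nn.Conv2d(1, 64, kernel_size=9, padding=4)  # patch extraction
            self.map = nn.Conv2d(64, 32, kernel_size=5, padding=2)  # non-linear mapping
            self.rec = nn.Conv2d(32, 1, kernel_size=5, padding=2)   # reconstruction

        def forward(self, lowres, scale=2):
            # SRCNN works on a bicubically upscaled input and learns to sharpen it.
            x = F.interpolate(lowres, scale_factor=scale, mode="bicubic",
                              align_corners=False)
            x = F.relu(self.feat(x))
            x = F.relu(self.map(x))
            return self.rec(x)

    # Training minimises a pixel loss against ground-truth high-res patches:
    model = SRCNN()
    low = torch.rand(1, 1, 32, 32)    # fake low-res luminance patch
    high = torch.rand(1, 1, 64, 64)   # fake ground truth
    loss = F.l1_loss(model(low), high)
    loss.backward()                   # one illustrative gradient step

As I understand it, the GAN/perceptual-loss methods pulled together in that repo swap the pixel loss for something that rewards plausible detail rather than strictly faithful detail, which is where the "hallucinated but convincing" results come from.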


It's using information from adjacent frames to add extra detail.
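
A crude sketch of what that looks like in practice (my own toy example using OpenCV's Farnebäck flow, not how any particular product does it): estimate motion to a neighbouring frame, warp it into alignment, then fuse the two observations of the same scene. Proper video SR does the fusion at sub-pixel precision on the fine output grid; plain averaging like this mostly buys denoising, but the aligned neighbour really does carry extra samples:

    import cv2
    import numpy as np

    def fuse_with_neighbour(curr_gray, next_gray):
        h, w = curr_gray.shape
        # Dense optical flow from the current frame to its neighbour.
        flow = cv2.calcOpticalFlowFarneback(curr_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Warp the neighbour back onto the current frame's pixel grid.
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (xs + flow[..., 0]).astype(np.float32)
        map_y = (ys + flow[..., 1]).astype(np.float32)
        aligned = cv2.remap(next_gray, map_x, map_y, cv2.INTER_LINEAR)
        # Trivial fusion: average the two aligned views of the same content.
        fused = (curr_gray.astype(np.float32) + aligned.astype(np.float32)) / 2
        return fused.astype(np.uint8)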


The general term is "video super-resolution". There's software available off the shelf to do it, IIRC.
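
One off-the-shelf route, if per-frame single-image SR counts, is OpenCV's contrib dnn_superres module; the pretrained model file is a separate download, so the filename below is a placeholder:

    import cv2  # needs opencv-contrib-python for the dnn_superres module

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("ESPCN_x4.pb")      # pretrained model, downloaded separately
    sr.setModel("espcn", 4)          # algorithm name and upscale factor
    frame = cv2.imread("frame_1080p.png")
    upscaled = sr.upsample(frame)    # 4x larger output
    cv2.imwrite("frame_up.png", upscaled)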




