TV Backlight Compensation (lofibucket.com)
388 points by zdw on Feb 6, 2020 | 56 comments



Not nearly as cool, but about 12 years ago I did a poor man's idea of room correction by recording my own tone sweeps and then creating an EQ profile by hand for my ALSA soundcard via a LADSPA plugin on my HTPC. I couldn't believe the difference, especially how much quieter I could listen to the system and still hear everything. Nowadays this is of course built into most receivers.
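If anyone wants to try, the measurement step is only a few lines. This is not my original script, just a sketch with made-up file names and band centers, assuming the sweep was played back and recorded as raw 16-bit PCM; the printed gains would then be typed by hand into a LADSPA EQ (e.g. mbeq):

    # Sketch: compare a played tone sweep against what the mic recorded,
    # and print relative per-band gains for a multiband EQ.
    import numpy as np

    fs = 48000
    sweep = np.fromfile("sweep_playback.raw", dtype=np.int16)   # what was sent out
    mic   = np.fromfile("sweep_recorded.raw", dtype=np.int16)   # what the mic heard

    bands = [31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]  # EQ centers, Hz

    def band_level_db(signal, f_lo, f_hi):
        """Average spectral magnitude (dB) between f_lo and f_hi."""
        spec = np.abs(np.fft.rfft(signal.astype(np.float64)))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        sel = (freqs >= f_lo) & (freqs < f_hi)
        return 20 * np.log10(spec[sel].mean() + 1e-12)

    for f in bands:
        # Compare the room response to the source in a ~1/3-octave window around f.
        lo, hi = f / 2 ** (1 / 6), f * 2 ** (1 / 6)
        gain_db = band_level_db(sweep, lo, hi) - band_level_db(mic, lo, hi)
        # Relative numbers only: overall mic gain just shifts everything equally.
        print(f"{f:>5} Hz: {gain_db:+.1f} dB")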


Any chance you have a write up on this?


Sorry for the late reply, but here is a write up: https://news.ycombinator.com/item?id=22290466


Thanks! Great little read, thinking about trying out something similar.


Fascinating. You're probably aware but room correction has come a long way - I'm particularly impressed with Dirac Live[0], which in my opinion provides significantly better results than Audyssey, YPAO & Co. Personally, I couldn't imagine buying a receiver without it these days.

[0] https://live.dirac.com/overview/


First off, really cool article. I love fun hacks like this.

There are a few things I don't get. The first is why he's using the same global scalar values for gain and offset for the entire image, instead of pixel-wise values. The pattern isn't uniform, so why is the correction?

The second is why the optimization is just randomizing new values instead of using a better algorithm like gradient descent. This kind of stochastic search seems really wasteful.

The third is why it needs to be an optimization problem at all: why not just look at the blob image? Each pixel is supposed to be white, but we see (R_max, G_max, B_max), where both G_max and B_max are < R_max, making the pixel red. So just remap the green and blue channels for each pixel to [0, R_max] instead. Then each pixel will have the same gray value of (R_max, R_max, R_max) when displaying white. This should be really straightforward in a shader.
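In numpy terms the remap would be something like this (not the article's shader, just a sketch with synthetic stand-ins for the blob photo and the frame):

    import numpy as np

    # blob: camera photo of the screen showing pure white (HxWx3, values 0..255)
    # frame: the frame we want to pre-correct before display
    h, w = 480, 640
    rng = np.random.default_rng(0)
    blob = np.stack([np.full((h, w), 230.0),
                     230.0 - 40.0 * rng.random((h, w)),   # green reads low
                     230.0 - 40.0 * rng.random((h, w))],  # blue reads low
                    axis=-1)
    frame = np.full((h, w, 3), 255.0)

    r_max = blob[..., 0]
    gain = np.ones_like(blob)
    gain[..., 1] = r_max / np.maximum(blob[..., 1], 1e-6)   # per-pixel green gain
    gain[..., 2] = r_max / np.maximum(blob[..., 2], 1e-6)   # per-pixel blue gain

    # Gains above 1.0 will clip where the frame is already bright.
    corrected = np.clip(frame * gain, 0, 255).astype(np.uint8)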


The correction is not uniform, he passes the blob image to the shader as a texture.


Yes, ok, it's not fully uniform, but the gain and offset are the same for the whole image.


Nice. Colorspace profile management across the distribution pipeline is still a disaster. It would be nice if:

0. The full stack of end-user display down through distribution, editing, and "film" camera applies faithful color management.

1. Every scene contained a hidden color calibration frame, recorded on-set with a proper calibration sheet, which the end-user's display could use to apply dynamic color correction and reproduce color calibrated to the average lighting conditions that existed on-set.

2. The end-user can choose whether to apply just 0. or also 1.

There is another problem: in filming or editing, when color filter "looks" are applied, there is currently little in the way of assurance that everything downstream will faithfully reproduce what was intended. Colorspace profiles were and are a great advance, but they need to be measured, calibrated and validated all the way down the chain to be useful.

PS: This article also reminded me of an attack that could reproduce almost an entire image from the diffuse reflections of old-style scanned CRTs/TVs (e.g., that ghostly blue glow of TV watching visible from outdoors), and also that imperfect reflections of CRT computer monitors could reveal their contents.


Did the diffuse reflection reconstruction rely on high speed photography? Or a special geometric setup of the reflector? Trying to think of how this could work.


It sounds like "Dual Photography".

https://dl.acm.org/doi/10.1145/1186822.1073257

"We present a novel photographic technique called dual photography, which exploits Helmholtz reciprocity to interchange the lights and cameras in a scene. With a video projector providing structured illumination, reciprocity permits us to generate pictures from the viewpoint of the projector, even though no camera was present at that location."

I recommend watching the video from the paper; it explains and demonstrates the technique well. It's very impressive work.

https://www.youtube.com/watch?v=p5_tpq5ejFQ


OK, that trick with the playing card was cool. I'm impressed by the math behind this concept, and all I could think about while watching it was every cheesy crime show calling out "enhance".


So the catch is that you have to be in control of the projector.


The CRT attack works because the image is progressively scanned, so each pixel is produced at a distinct time. Record the overall light intensity of the screen through a telescope a mile away, write the samples out as pixels, jiggle the offset until the rows line up, and you have an image.
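In toy form, with everything about the capture made up (real scans also have blanking intervals, and you'd have to hunt for the exact line rate):

    import numpy as np

    # A 1-D brightness recording sampled at roughly the pixel clock,
    # reshaped into rows once the line length and start offset are guessed.
    samples = np.fromfile("photodiode.raw", dtype=np.float32)
    pixels_per_line, lines = 800, 600     # guessed from the video mode
    offset = 123                          # "jiggle" this until the rows line up

    frame = samples[offset : offset + pixels_per_line * lines]
    image = frame.reshape(lines, pixels_per_line)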

I'm sure it was harder than that in practice.


BTW, I doubt very much that what you are seeing here is backlight uniformity problems. This is actually an issue with the LCD element, the liquid crystal itself, and, possibly, to some extent, the optics. We used to manufacture advanced displays where uniformity and color accuracy were super important (think being able to identify healthy vs. cancerous tissue during endoscopic surgery).

As incredible as LCD manufacturing is, the eye is amazing at being able to pick out differences under the right conditions. All kinds of corrective work had to be applied to the displays in order to achieve uniform, repeatable and reliable color and image rendering performance.


Love that idea of just photoshopping for the ideal ground truth.

Instead of a white target, you could even map a desired digital video file onto the ideal target. For more general distortions you could do away with the 'blob' image and instead just optimize a gain and offset for every pixel independently. That seems ambitiously high dimensional, but I was able to get this kind of thing to work effectively using SPSA ( https://www.jhuapl.edu/SPSA/ ). It's also basically the algorithm behind evolution strategies in AI: https://openai.com/blog/evolution-strategies/
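A bare-bones SPSA loop for the per-pixel case looks something like this (a toy stand-in for the real "display, photograph, compare" loss; the gain sequences need tuning and real use takes far more iterations):

    import numpy as np

    h, w = 16, 16                                # tiny for the example
    target = np.full((h, w), 0.8)                # what the camera should see
    response = 0.6 + 0.3 * np.random.rand(h, w)  # unknown per-pixel screen response

    def capture(gain, offset):
        # Stand-in for actually photographing the corrected screen.
        return np.clip(gain * response + offset, 0.0, 1.0)

    def loss(theta):
        return np.mean((capture(theta[0], theta[1]) - target) ** 2)

    theta = np.stack([np.ones((h, w)), np.zeros((h, w))])   # per-pixel gain, offset
    for k in range(1, 5001):
        a_k = 0.5 / k ** 0.602                   # standard SPSA step-size decay
        c_k = 0.1 / k ** 0.101                   # perturbation-size decay
        delta = np.random.choice([-1.0, 1.0], size=theta.shape)
        # Two loss evaluations estimate the gradient for *all* parameters at once.
        diff = loss(theta + c_k * delta) - loss(theta - c_k * delta)
        theta -= a_k * diff / (2 * c_k) * delta  # 1/delta == delta for +-1 entries

    print("final loss:", loss(theta))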


I've always assumed that LED backlight TVs have this kind of correction built in and must be calibrated at the factory with a similar jig (calibrated camera looks at screen, screen shows sweeps through black->(red|green|blue), inverse mapping is recorded to correct colour and intensity).
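For one channel the inverse mapping could be as simple as this (synthetic stand-in for what the jig camera would measure):

    import numpy as np

    drive_levels = np.linspace(0, 255, 9)        # levels shown during the sweep
    # measured[k, y, x]: camera reading of pixel (y, x) at drive_levels[k].
    h, w = 4, 4
    gain = 0.8 + 0.4 * np.random.rand(h, w)      # each pixel responds a bit differently
    measured = gain * (drive_levels[:, None, None] / 255.0) ** 2.2 * 255.0

    def drive_for_target(target, y, x):
        """Invert one pixel's measured curve: what drive level gives this output?"""
        return np.interp(target, measured[:, y, x], drive_levels)

    # Ask every pixel to actually produce 128:
    correction = np.array([[drive_for_target(128.0, y, x) for x in range(w)]
                           for y in range(h)])
    print(np.round(correction))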

Nice hack to do it with a webcam. :)


OLED phones, TVs, and other OLED products do this kind of uniformity correction with factory calibration. On some, the calibration data is stored on Flash or EEPROM on the panel itself and applied in the Display Driver IC. In others, it's stored and applied at system level by a Display Controller or SoC. I'm aware of some LCD-based products that do this as well, but I'm not sure if TVs are among them. It's a great way to improve uniformity though, and almost always cheaper than tightening manufacturing tolerances (i.e. decreasing yield).


Exactly right. Apple does this for every LCD. In the LED display industry calibration can be per LED, per PCB, per module, or per display. Some of them store the calibration data on each individual LED PCB, and some store it in the send box (Brompton).


This reminds me that you should frequently calibrate your display’s color profile if you ever work with photos or video. There are commercial products that help you do this essentially by the process that the OP uses. They don’t do a whole screen, just a section of it. And it seems that the new MacBooks have some kind of calibration thing that responds to ambient light. But it would be nice to be able to do this as a standard thing on all displays.


Eizo sells monitors for color sensitive work which contain built in calibration probes that pop out of a hidden compartment. They can be set to recalibrate on a schedule. They are pretty neat.

https://www.eizo.com/products/coloredge/cg279x/


This never made sense to me in the LCD era, and I suspect it's a rule-of-thumb that has been inherited from older CRT technology.

CRTs use distinct phosphors for each color, which slowly fade over time, and at different rates.

LCDs typically use color filters, which in most cases tend not to fade. In fact, most LCDs are so consistent that you can take the calibration done by someone else with the same panel model and use it yourself just fine. (The rare exception to this would be LCDs exposed to direct sunlight. Strong UV light can make just about anything fade.)

OLEDs fade, much like CRTs, but are very rarely used as PC monitors.

This is why it annoys me that LCD panels don't simply report their ICC profile to the operating system. It would be 99% accurate 99% of the time. This is a vast improvement over the current status quo, where color reproduction outside of premium televisions is basically random.


My experience here is that the OSes are the problem. They provide an API that looks like "please display the following (r, g, b) tuple". Unfortunately, that isn't enough information to accurately display a color. To turn an (r, g, b) tuple into a color, you have to assign it a colorspace, and that's where everything is broken.

For example, when I last punished myself by using a better-than-sRGB monitor, I learned that browsers will properly color correct images that have a colorspace tag, but they do not do it to CSS. So if you have an image with color #abcdef and then you set a CSS color to #abcdef, they will be different colors!

Applications that want to properly display color have to hack around things. They need to ask the OS for the display's colorspace, then they have to figure out the image's color space, then compute a transformation that will yield an (r, g, b) tuple that when transformed by the OS (using the monitor's profile) will display the right color. This is horrifying but does work; so at least things like Photoshop can typically show you the right color.
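For the curious, that dance is only a few lines with Pillow's ImageCms bindings to littlecms (file names here are placeholders, and I'm assuming the image is plain sRGB):

    from PIL import Image, ImageCms

    img = Image.open("photo.jpg")                     # assume plain sRGB content
    src = ImageCms.createProfile("sRGB")              # the image's colorspace
    dst = ImageCms.getOpenProfile("my_display.icc")   # profile measured for this monitor

    # Build the sRGB -> display transform once, then push the pixels through it.
    xform = ImageCms.buildTransform(src, dst, "RGB", "RGB")
    corrected = ImageCms.applyTransform(img, xform)
    corrected.save("photo_display_referred.png")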

It would be nice if we could use more than 24 bits for color and just store everything as a CIE XYZ color. People have known that that is the right solution for decades, but nothing has happened, so I'm not holding my breath. Realistically, I think we have to agree on a new set of primaries and gamma, and start assuming that an (r, g, b) tuple is in that colorspace. I guess this is what DCI-P3 is. It is the same problem all over again, but might at least get people better colors soon.


Throw "rendering intent" into the mix then it all gets pretty messy.


>LCDs typically use color filters, which in most cases tend not to fade.

The older LCDs were backlit with (cold cathode) fluorescent lamps, and those lamps do fade and shift in color over time. With those you do want to adjust the calibration once in a while. No idea how modern LED-backlit ones fare in the longer run.

At one point I had two decent-quality CCFL LCDs set side by side; one had about 10'000 hours on it, the other 40'000 hours. The difference in color (and in brightness) was noticeable, and not quite possible to get right with the basic RGB calibration provided.


Color filters may not fade, but backlighting uses white LEDs that do use phosphors that fade over time, except perhaps the quantum dot variety.


The panel may have consistent color, but the backlight does not, whether it is led or the older cold cathode.


Lenovo laptops used to have one that calibrated the screen when it was closed, using a sensor by the keyboard.


My fairly modern (2017) Thinkpad P71 has a place for one (unfortunately no module in it though, a con of buying used). The latest versions of the same line have it as well.


Nice. It was a bit simpler in the CRT days. I remember my first after-school job when I was a wee teen, fixing TVs and degaussing them.

https://en.m.wikipedia.org/wiki/Degaussing


I miss degaussing screens. Ones with the button for that were great, and it was very satisfying to do. I don’t miss the low resolution, the heat and the feeling of sunburn after looking into one for too long.


I don't know, I'm beginning to think modern LCDs are worse for the eyes because in the quest for more contrast they have become too bright to be healthy.

Maybe OLED will fix that. Whenever affordable OLED monitors show up...


I agree that screens are too bright. A part of my job is calibrating screens and the brightness they can achieve is alarming. Regulation in radiology requires a minimum luminance and the spec is vastly in excess of what I would use and far above what anyone sets when given the chance.

Wickedly high contrast ratios seem to be encouraged too, and keeping the ratio down gets frowned upon. Some of the screens come out of the box at close to 1000:1, which is too high, but that happens with a high-luminance screen. The first thing I usually do with a screen I use is dim it, and I long for iPad and iPhone screens that are dimmer. The lowest settings are too high.


High contrast is good; the problem is that on LCD displays you can only achieve it by having high brightness overall, because black isn't really black.

On CRTs you could set a black background, turn your brightness way down and happily code with enough contrast. Even with black on white text you could turn the brightness down somewhat because the black was... black.

Hopefully when OLED becomes usable for something between wall-sized TVs and watches, this problem will be gone: there is no backlight, so black is again black.


I miss all those things


Bonus points if you put this on an FPGA that decodes, modifies, and re-encodes HDMI.


Do you want MCAS?! Because that's how you get MCAS!

The EE in me died a little reading this, while the programmer loved the hack :). Personally I would just replace the backlight; discoloration is most likely a sign of impending failure anyway. BTW, we already do something similar to a lot of electronics by design. For example, camera sensors have defect lists; pristine ones are extremely expensive (think high-end microscopes and satellites).


Oof, this is a very interesting and underexplored topic. I haven’t heard of calibrating display uniformity before. I hope something like this becomes integrated into OS-level calibration at some point.


Curious about the three-picture average followed by a Gaussian blur to reduce the moiré effect, as the author did for the third picture. It clearly worked, but I'm not sure how?


I suspect he moved the camera slightly in between taking pictures; this would create different moiré patterns, which reduces the amplitude of the distortion when you average them.
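Something like this, if the shots are (nearly) aligned (file names made up):

    import numpy as np
    from PIL import Image, ImageFilter

    shots = [np.asarray(Image.open(f"capture_{i}.png"), dtype=np.float32)
             for i in range(3)]
    avg = np.mean(shots, axis=0)          # the differing moire patterns average out
    smoothed = Image.fromarray(avg.astype(np.uint8)).filter(
        ImageFilter.GaussianBlur(radius=2))
    smoothed.save("ground_truth_base.png")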


Makes sense, thanks! I just tried doing it in Photoshop and observed that to remove the moiré grid from a single picture I'd need a much higher blur radius -- not ideal for a ground truth image. (Note that this is a far-reaching guess.)

I'm gonna add it to my fictional image analysis toolkit for the next time I need it.


Looks like you made this correction in sRGB space rather than in a physical (non-gamma-corrected) colorspace, which is unlikely to be correct...


Per-channel gain settings remain correct as long as the full workflow was in sRGB and you approximate sRGB with c^2.2 (where c is the channel value). Most other arithmetic is horribly incorrect in sRGB, I agree.
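The reason the gain survives, I think, is just that the exponent distributes over a product: (g*c)^2.2 = g^2.2 * c^2.2, so a per-channel multiply in gamma-encoded space is still a per-channel multiply in linear light, just with the gain raised to 2.2. An additive offset has no such luck, since (c + o)^2.2 is not c^2.2 plus a constant.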


On LED jumbotrons you sometimes see a bad pixel or group. I always thought they should come with feedback cameras that could detect and correct for the bad pixels the same way the author is doing.

Free idea, go do it!


This is awesome (both the idea and the writeup), but did I miss the bit about object recognition? It must have been necessary to detect and extract the rectangle representing the TV screen from the captured webcam image (even if the framing/positioning of the webcam were perfect, most webcams are 4:3 aspect ratio), but was a skew transform performed to correct for any perspective issues? Assuming such a transform were written, wouldn't it have been trivial to put up a white sheet of paper next to the TV and have that serve as a neutral white point reference?


He showed how he did this. He said he took an image with the camera, edited the TV to be perfectly white, and used that as the target: "After some fruitless attempts at simple image statistics, I realized it's possible to edit a camera picture by hand and use that as a ground truth."


This is true. So the answer is that there was no object recognition since it wasn't necessary.


This is really cool, nifty hack - kudos!


What make and model TV?


Make: Panasonic

Model: Clunker :P


Superb


This is quite the technical accomplishment but I can't stop feeling like buying a new TV would have cost less than the value of the time spent on this impressive solution.


This makes sense if this was an office, and the TV was linked to productivity and lost revenue. For a curiosity project, the value of the time spent experimenting, learning and documenting for and teaching us is priceless.


How much is the experience learned worth?

Besides, life stinks if you break it down like this.


One less TV bought is one less TV that has to be manufactured and then recycled.


Of course, but it sounds like you're forgetting to have fun there.



