The author also uses a neural network to detect the foreground, so the filter can be applied only to the background without any manual masking.
So it’s a “find bright spots, bloom them into circles or hexagons or some such, then make 2-4 less-opaque clones of each and overlay them at slight offsets”?
There’s a bit more to it than that. Look near the edges of the frame vs the middle. Also note that the effect is not binary. It’s dependent on depth as well as angle. One of the reasons the example from the original article looks so unrealistic is that the flowers are not at a great depth so they really shouldn’t be blurry unless the subject is also out of focus.
Additionally: finding true bright spots (vs regular objects that happen to be white) is difficult with a typical low dynamic range photograph. You’d have a much easier time with an HDR photo. It still wouldn’t give you any depth or angle information though.
It’s a nice proof of concept though. I think that further applying the second part of the article (masking) to lighter parts of the image at a different intensity could provide a more accurate simulation.
Edit: I really have no idea what I’m talking about. I’m just guessing that it can be improved upon. I probably should’ve stopped at “it’s a nice proof of concept” (like most free content) :)
Bokeh is a type of blur (that's what the original word stands for too). The most common image processing blur is gaussian blur. According to the article the author is using a type of blur that simulates bokeh.
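To make that concrete, here's a rough sketch of the difference (using OpenCV; the kernel sizes and the "photo.jpg" file name are just placeholders). A Gaussian kernel gives a soft, featureless falloff, while a disc-shaped kernel, which roughly mimics a lens aperture, makes bright highlights bloom into circles:

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg").astype(np.float32) / 255.0

    # Ordinary Gaussian blur: soft, featureless falloff.
    gaussian = cv2.GaussianBlur(img, (0, 0), sigmaX=8)

    # Disc ("aperture-shaped") kernel: highlights bloom into circles.
    radius = 12
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (2 * radius + 1, 2 * radius + 1)).astype(np.float32)
    disc /= disc.sum()
    bokeh_like = cv2.filter2D(img, -1, disc)

    cv2.imwrite("gaussian.jpg", (gaussian * 255).astype(np.uint8))
    cv2.imwrite("bokeh_like.jpg", (bokeh_like * 255).astype(np.uint8))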
Yes. That is correct. Just wanted to see if the idea in my head actually worked. I was surprised by how decent it turned out, especially given that I did not do anything “clever”.
The image you used as an example already has some nice shallow depth of field to begin with, which I think really helps blend the effect into the end result. Did you try applying the effect to an actual selfie, which usually has more even focus?
To make the Gaussian blur look right, it needs to be applied to linear light, not to the gamma-encoded colors as is done here. In a real camera the blur is produced by the lens, on linear light, and the result is only gamma-encoded afterwards.
The effect is that blurred colors mix incorrectly, as you can see where the red from the flower mixes with the green: the transition looks wrong, not how a lens would render it.
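For anyone who wants to try it, a minimal sketch of what's being suggested, assuming standard sRGB encoding (the blur radius and file names are placeholders):

    import cv2
    import numpy as np

    def srgb_to_linear(x):
        return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(x):
        x = np.clip(x, 0, 1)
        return np.where(x <= 0.0031308, x * 12.92, 1.055 * x ** (1 / 2.4) - 0.055)

    img = cv2.imread("photo.jpg").astype(np.float32) / 255.0

    # What the article does: blur the gamma-encoded values directly.
    blurred_gamma = cv2.GaussianBlur(img, (0, 0), sigmaX=8)

    # What a lens does: mix light linearly; the camera gamma-encodes afterwards.
    blurred_linear = linear_to_srgb(cv2.GaussianBlur(srgb_to_linear(img), (0, 0), sigmaX=8))

    cv2.imwrite("blur_gamma.jpg", (blurred_gamma * 255).astype(np.uint8))
    cv2.imwrite("blur_linear.jpg", (blurred_linear * 255).astype(np.uint8))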
This is a quick proof of concept of an idea. This won't work for every use-case but it worked surprisingly well for a use-case I had in mind. Happy to answer any questions.
I just discovered that Zoom teleconferencing software has the ability to detect the background and replace it with an arbitrary video in real time. I have no idea how they do it, but it's as impressive as heck.
Yes, it did not work for me sitting with a laptop on a swing chair. It actually works better with clear static features in the background, but it seems to be something a bit more sophisticated (a CNN?), because it also isolates paintings on the wall for me. I wish there were more open-source, easy-to-use tools for manipulating camera streams. Skype also has background blurring.
It is a good early version of such a feature, but I've found it to fail fairly quickly in uneven light or with busy backgrounds. Still, it is a great feature, so I'm hopeful that they'll keep improving it and other services will match it.
It’s likely some type of segmentation network like Mask R-CNN or SegNet. Look up Mask R-CNN for some state-of-the-art segmentation results. This stuff has been able to run on mobile phones in real time for years now.
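For anyone curious, a rough sketch of pulling person masks out of torchvision's pretrained Mask R-CNN (not necessarily the model the article uses; the score/mask thresholds and "selfie.jpg" are placeholder choices, and newer torchvision versions take a weights= argument instead of pretrained=True):

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

    img = Image.open("selfie.jpg").convert("RGB")
    with torch.no_grad():
        out = model([to_tensor(img)])[0]

    # Keep confident "person" detections (label 1 in torchvision's COCO mapping)
    # and merge their soft masks into a single foreground mask.
    keep = (out["labels"] == 1) & (out["scores"] > 0.7)
    masks = out["masks"][keep, 0]        # (N, H, W) soft masks in [0, 1]
    if len(masks):
        foreground = masks.max(dim=0).values > 0.5        # union of all detected people
    else:
        foreground = torch.zeros(img.size[1], img.size[0], dtype=torch.bool)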
Jitsi has this feature in beta too. It works pretty well, although it currently uses way too much CPU, and doesn't work quite as well as it does in MS Teams.
It would be nice if there was some more information about where the mask comes from. We want a segmentation map for people, so this technique basically takes the layer activation map for the "person" class - which is id 0 in COCO - and thresholds it to get foreground/background. If you changed the mask index it would respond to other object types (so as written, this code will only work for humans).
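A rough illustration of that masking step (pure NumPy; the class index and threshold are placeholders that depend on the model you use):

    import numpy as np

    def class_mask(activations, class_index, threshold=0.5):
        """activations: (num_classes, H, W) per-class scores in [0, 1]."""
        return activations[class_index] > threshold

    # Toy example: 3 classes, 4x4 image, fake scores standing in for network output.
    scores = np.random.rand(3, 4, 4)
    person_index = 0                      # whichever index the model assigns to "person"
    foreground = class_mask(scores, person_index)
    background = ~foreground              # the region that gets the bokeh-style blur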
--
By the way, Google does some absolutely nuts stuff with this on the Pixel 3 and 4 - they actually calculate a stereo depth map using the two autofocus sites in individual pixels. Essentially, some modern CMOS sensors use a technology called dual pixel autofocus (DPAF): by measuring the response from two photodiodes in the same pixel, you adjust focus until both photodiodes see (more or less) the same intensity. If the camera is out of focus, the two photodiodes will have different intensities.
However, what this also gives you is two separate images with an extremely small (but detectable) parallax, which can be used for a coarse 3D reconstruction from which you can segment foreground and background. It's nice because you get a strong physical prior, rather than having to rely on a convnet to identify fore/background regions. (They of course apply a convnet anyway to refine the result.)
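If you could get at the two sub-images, a crude version of the disparity step might look like this (plain OpenCV block matching; the real dual-pixel baseline is so tiny that Google's actual pipeline is far more elaborate, and the file names here are placeholders):

    import cv2

    left = cv2.imread("dp_left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("dp_right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    disparity = stereo.compute(left, right)    # larger disparity ~ closer to the camera

    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imwrite("coarse_depth.png", vis)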
Awesome! Computational photography is fascinating. A while back, I wrote some OpenGL shaders to do the same thing, and got stuck when I wanted variable bokeh based on scene depth. This is easy when you have the depth map (say, in a renderer). I recall reading about how Apple achieved this using stereo disparity mapping, which is great if you have the hardware. I hadn't considered a neural net approach like yours for auto-generating a depth map, although now that I look into it, Facebook recently published research on just that!
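For anyone who hits the same wall: one cheap approximation is to blend between a sharp and a blurred copy using the normalized depth map as the per-pixel weight (a true variable-radius blur needs per-pixel kernels, which is where shaders get painful). Rough sketch, with placeholder file names and blur strength:

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg").astype(np.float32) / 255.0
    depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=8)
    w = depth[..., None]                 # 0 = at the focal plane, 1 = far away
    out = (1 - w) * img + w * blurred    # more depth -> more of the blurred copy

    cv2.imwrite("variable_bokeh.jpg", (np.clip(out, 0, 1) * 255).astype(np.uint8))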
One problem with bokeh is that it disappears when you scale the image down. Perhaps this technique can help solve that (but I suppose there would be better ways to do this).
Anyway, the entire technique is also very easy to do manually with Photoshop.
One step further would be to incorporate the blurring, masking, and combining operations into the network itself, so you can leverage the GPU to do the whole computation in one fell swoop.
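Something like this, as a rough PyTorch sketch (the Gaussian kernel size, sigma, and the random stand-ins for the photo and the network's mask are all placeholders):

    import torch
    import torch.nn.functional as F

    def gaussian_kernel(size=21, sigma=5.0):
        coords = torch.arange(size, dtype=torch.float32) - size // 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        k = torch.outer(g, g)
        k = k / k.sum()
        return k.view(1, 1, size, size).repeat(3, 1, 1, 1)   # depthwise kernel for RGB

    def blur_and_composite(image, mask):
        """image: (1, 3, H, W) in [0, 1]; mask: (1, 1, H, W), 1 = foreground."""
        k = gaussian_kernel().to(image.device)
        blurred = F.conv2d(image, k, padding=k.shape[-1] // 2, groups=3)
        return mask * image + (1 - mask) * blurred

    # Everything stays on the GPU, right next to the segmentation network.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    photo = torch.rand(1, 3, 256, 256, device=device)                   # stand-in for the image
    mask = (torch.rand(1, 1, 256, 256, device=device) > 0.5).float()    # stand-in for the mask
    result = blur_and_composite(photo, mask)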
Ain’t no lens that would blur the front brim of her hat while leaving her face sharp like that. Disregarding the nature of the optics you are emulating ruins the value of the effect.
What’s odd is that I can’t interact with it at all on my iPad. I scroll down, hit a media object of some kind, and that’s the end of it. Blank space, no scroll indication, can’t seem to scroll back up to the top.
I think I'm missing the part of my brain that lets me understand why bokeh selfies are even a thing. If you're going to remove your surroundings from the picture, why even bother taking a new picture?
If we're using deep learning anyway, we might as well use a U-Net (i.e. pix2pix) to add the bokeh directly - this would require paired training data but would probably look a lot better visually.
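Very roughly, the generator side of that could look like the tiny U-Net below (channel counts and depth are arbitrary; it assumes paired all-in-focus / shallow-depth-of-field training images and an L1 or adversarial loss, neither of which is shown here):

    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                             nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = block(3, 32), block(32, 64)
            self.bottleneck = block(64, 128)
            self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), block(128, 64)
            self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), block(64, 32)
            self.out = nn.Conv2d(32, 3, 1)
            self.pool = nn.MaxPool2d(2)

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
            return torch.sigmoid(self.out(d1))

    # One forward pass on a dummy 256x256 image; training data would be paired photos.
    fake_bokeh = TinyUNet()(torch.rand(1, 3, 256, 256))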
Wow, this looks ugly :D. But still, nice work. It's just that the girl is totally over-processed, and the "bokeh" - well, I hate to break it to you, but this isn't bokeh, it's called "blur".
I'd suggest brushing up a bit on photography skills, although I agree that Instagram and iPhone culture can shift your perception of "bokehlicious" quite a bit.
I don't want to make you feel bad, so I'll also tell you that Apple's "bokeh" in portrait mode looks disgusting, doesn't make any sense, and breaks the image in a lot of trivial cases... and they spent a lot more money & effort on it than you :).
FYI, the web page seems entirely unusable on my iPad Pro. It goes blank at the first example, and without a scroll indicator I have no idea what’s going on.
I was able to scroll to the bottom footer, but I can’t scroll back to the top, so there’s literally nothing but a blank page now except for the footer.