The author also uses a neural network to detect the foreground, so the filter can be applied only to the background and the whole process runs automatically.
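The masking step itself is just a composite. A rough sketch of the idea (everything here is made up for illustration; a real pipeline would get `mask` from a segmentation network, not by hand, and would use the article's bokeh filter rather than a box blur):

```python
import numpy as np

def box_blur(img, k=3):
    # Separable box blur as a cheap stand-in for the real bokeh filter.
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

img = np.zeros((32, 32))
img[16, 16] = 1.0            # the "subject"
img[5, 5] = 1.0              # a background highlight
mask = np.zeros_like(img)
mask[14:19, 14:19] = 1.0     # 1 = foreground (keep sharp), 0 = background

# Composite: sharp pixels where the mask says foreground, blurred elsewhere.
out = mask * img + (1 - mask) * box_blur(img)
```

The subject pixel stays untouched while the background highlight gets smeared out.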
So it’s a “find bright spots, bloom them into circles or hexagons or some such, then make 2-4 less opaque clones of each shape and overlay them at slight offsets”?
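i.e., something like this naive version (all names, thresholds, and offsets here are made up for illustration, not taken from the article):

```python
import numpy as np

def bloom(img, thresh=0.9, radius=2, offsets=((0, 0), (1, 1), (-1, 2)), alpha=0.4):
    # Threshold bright pixels, then stamp a translucent disc at each one,
    # plus a couple of slightly offset copies.
    out = img.copy()
    ys, xs = np.nonzero(img > thresh)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = yy**2 + xx**2 <= radius**2
    h, w = img.shape
    for y, x in zip(ys, xs):
        for dy, dx in offsets:
            for oy, ox in zip(*np.nonzero(disc)):
                py, px = y + dy + oy - radius, x + dx + ox - radius
                if 0 <= py < h and 0 <= px < w:
                    out[py, px] = min(1.0, out[py, px] + alpha * img[y, x])
    return out

img = np.zeros((16, 16))
img[8, 8] = 1.0              # a single bright highlight
out = bloom(img)
```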
There’s a bit more to it than that. Look near the edges of the frame vs the middle. Also note that the effect is not binary. It’s dependent on depth as well as angle. One of the reasons the example from the original article looks so unrealistic is that the flowers are not at a great depth so they really shouldn’t be blurry unless the subject is also out of focus.
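The depth dependence could at least be approximated by blending a sharp and a blurred copy per pixel, weighted by distance from the focal plane. A toy sketch (depth map, focal distance, and falloff are all made-up values; a real lens also changes the blur *shape* with field angle, e.g. cat's-eye bokeh near the edges, which a flat blend like this ignores):

```python
import numpy as np

sharp = np.zeros((8, 8))
sharp[:, 4:] = 1.0
blurred = np.full((8, 8), 0.5)            # stand-in for a heavily blurred copy

depth = np.tile(np.linspace(0.0, 10.0, 8), (8, 1))   # per-pixel depth map
focus, spread = 2.0, 5.0                  # focal distance and falloff (made up)
w = np.clip(np.abs(depth - focus) / spread, 0.0, 1.0)

# Pixels far from the focal plane get more of the blurred copy.
out = (1 - w) * sharp + w * blurred
```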
Additionally: finding true bright spots (vs regular objects that happen to be white) is difficult with a typical low dynamic range photograph. You’d have a much easier time with an HDR photo. It still wouldn’t give you any depth or angle information though.
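To illustrate the clipping problem: in linear HDR data a light source is orders of magnitude brighter than a white wall, but after tone mapping to 8 bits both saturate and a threshold can't tell them apart (values here are made up):

```python
import numpy as np

# Hypothetical linear scene radiance: dark wall, white wall, light source.
hdr = np.array([0.1, 1.0, 50.0])

# Clipped to 8-bit LDR: the white wall and the lamp both hit 255.
ldr = np.clip(hdr * 255, 0, 255).astype(np.uint8)

hdr_highlights = hdr > 2.0        # only the lamp clears the threshold
ldr_highlights = ldr >= 250       # the white wall saturates too
```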
It’s a nice proof of concept though. I think that further applying the second part of the article (masking) to lighter parts of the image at a different intensity could provide a more accurate simulation.
Edit: I really have no idea what I’m talking about. I’m just guessing; it can definitely be improved upon. I probably should’ve stopped at “it’s a nice proof of concept” (like most free content) :)
Bokeh is a type of blur (that’s what the original Japanese word means, too). The most common image processing blur is Gaussian blur. According to the article, the author is using a type of blur that simulates bokeh.
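The kernel shape is the key difference: a Gaussian falls off smoothly, so highlights fade into soft glows, while a hard-edged disc kernel (simulating a circular aperture) turns point highlights into the uniform discs people recognize as bokeh. A minimal sketch of building both kernels:

```python
import numpy as np

def disc_kernel(radius):
    # Flat circular kernel: point lights become uniform discs ("bokeh balls").
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def gaussian_kernel(sigma, radius):
    # Smooth falloff: point lights become soft glows instead of crisp discs.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return k / k.sum()

d = disc_kernel(3)
g = gaussian_kernel(1.5, 3)
```

Convolving the image with `d` instead of `g` is the basic swap that makes a blur look photographic rather than synthetic.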
Yes. That is correct. Just wanted to see if the idea in my head actually worked. I was surprised by how decent it turned out especially given that I did not do anything “clever”.
The image you used as an example already has some nice shallow depth of field to begin with, which I think really helps blend the effect into the end result. Did you try applying the effect to an actual selfie, which usually has more even focus?