Bokehlicious Selfies (rahulrav.com)
134 points by rahulrav on March 28, 2020 | 52 comments



This is really cool, but it isn't bokeh, it's just blur.


Yeah. I don’t see anything here that couldn’t be done in Photoshop with a bit of masking work.

Here’s an example [1] of how bokeh really looks, using a high quality fast lens on a decent camera. We’ve got a long way to go.

[1] https://upload.wikimedia.org/wikipedia/commons/8/8a/Josefina...


The author also uses a neural network to detect the foreground and apply the filter only to the background, so the whole thing can be done automatically.


So it’s a “find bright spots, bloom them into circles or hexagons or some such, then make 2-4 less opaque clones and overlay them at offset positions”?


There’s a bit more to it than that. Look near the edges of the frame vs the middle. Also note that the effect is not binary. It’s dependent on depth as well as angle. One of the reasons the example from the original article looks so unrealistic is that the flowers are not at a great depth so they really shouldn’t be blurry unless the subject is also out of focus.

Additionally: finding true bright spots (vs regular objects that happen to be white) is difficult with a typical low dynamic range photograph. You’d have a much easier time with an HDR photo. It still wouldn’t give you any depth or angle information though.
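
Here’s a rough sketch (Python/OpenCV, not the article’s method) of the “bloom the highlights into disks” idea from the parent comment: threshold the brightest pixels, convolve them with a disk-shaped kernel, and add the result on top of a blurred base. The file name and threshold are placeholders, and as noted above, an 8-bit source badly underestimates true light sources.

    import cv2
    import numpy as np

    # Crude highlight mask: pixels whose luminance is near the top of the range.
    # With a low-dynamic-range source this misses many real light sources.
    img = cv2.imread("photo.jpg").astype(np.float32) / 255.0
    luma = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    highlights = np.where(luma > 0.92, luma, 0.0)

    # Disk-shaped kernel approximates a circular aperture (a hexagon would
    # mimic aperture blades).
    radius = 15
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (2 * radius + 1, 2 * radius + 1)).astype(np.float32)
    disk /= disk.sum()
    bloom = cv2.filter2D(highlights, -1, disk)

    # Blur the base image and overlay the bloomed highlights.
    base = cv2.GaussianBlur(img, (0, 0), sigmaX=8)
    out = np.clip(base + bloom[..., None] * 2.0, 0.0, 1.0)
    cv2.imwrite("fake_bokeh.jpg", (out * 255).astype(np.uint8))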


It also isn’t a selfie!


Yes. This hints at a more interesting application of DL: modify a selfie such that it doesn't look like a selfie anymore.


It’s a nice proof of concept though. I think that further applying the second part of the article (masking) to lighter parts of the image at a different intensity could provide a more accurate simulation.

Edit: I really have no idea what I’m talking about. I’m just guessing it can definitely be improved upon. I probably should’ve stopped at “it’s a nice proof of concept” (like most free content) :)


Bokeh is a type of blur (that's what the original Japanese word means, too). The most common blur in image processing is Gaussian blur. According to the article, the author is using a type of blur that simulates bokeh.
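
For anyone curious about the difference, a minimal comparison sketch (Python/OpenCV, arbitrary input file and kernel sizes, not the article's exact code): a Gaussian kernel falls off smoothly, while defocus blur is closer to a uniform disk the shape of the aperture, which is why highlights become bright circles instead of soft smudges.

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg")

    # Gaussian blur: smooth falloff, highlights become soft smudges.
    gaussian = cv2.GaussianBlur(img, (0, 0), sigmaX=10)

    # Disk kernel: uniform weight inside a circle, closer to real defocus,
    # so highlights spread into bright discs.
    radius = 15
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (2 * radius + 1, 2 * radius + 1)).astype(np.float32)
    disk /= disk.sum()
    disk_blur = cv2.filter2D(img, -1, disk)

    cv2.imwrite("gaussian.jpg", gaussian)
    cv2.imwrite("disk_blur.jpg", disk_blur)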


Yes, that is correct. I just wanted to see if the idea in my head actually worked. I was surprised by how decent it turned out, especially given that I did not do anything “clever”.


The image you used as an example already has some nice shallow depth of field to begin with, which I think really helps the effect blend into the end result. Did you try applying the effect to an actual selfie, which usually has more even focus?


One "clever" idea someone had was the use of the depth camera on the new iPhones to isolate the subject.

Rather than relying on hardware, this is a good software solution that works on more devices.


To make the Gaussian blur look better, it needs to be applied in linear light, not to gamma-encoded colors as is done here. In a real camera the blur is induced by the lens, in linear light, and the result is only gamma-encoded afterwards.

The consequence is that blurred colors mix incorrectly, as you can see where the red from the flower mixes with the green. The transition looks wrong, not how a lens would render it.
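
A minimal sketch of what that would look like (Python/OpenCV, using a simple 2.2 power curve as an approximation of the sRGB transfer function; the file name is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg").astype(np.float32) / 255.0

    # Decode gamma, blur in linear light, re-encode.
    linear = np.power(img, 2.2)
    blurred = cv2.GaussianBlur(linear, (0, 0), sigmaX=10)
    correct = np.power(blurred, 1.0 / 2.2)

    # Blurring the gamma-encoded values directly (as in the article) darkens
    # the transitions where bright and dark colors mix.
    naive = cv2.GaussianBlur(img, (0, 0), sigmaX=10)

    cv2.imwrite("blur_linear.jpg", (correct * 255).astype(np.uint8))
    cv2.imwrite("blur_gamma.jpg", (naive * 255).astype(np.uint8))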


Shallow depth of field blur is not a Gaussian blur, though. Attempts at faking shallow depth of field with that filter will not result in true bokeh.


True, but Gaussian or not, convolutional filtering should be performed in linear light space!


This is a quick proof of concept of an idea. This won't work for every use-case but it worked surprisingly well for a use-case I had in mind. Happy to answer any questions.


So the ML part is just the segmentation, correct?


Yes. That is correct.


I just discovered that Zoom teleconferencing software has the ability to detect the background and replace it with an arbitrary video in real time. I have no idea how they do it, but it's as impressive as heck.


Having just used it, it feels like a few factors:

- a background scene is usually static. If a pixel matches its historical average (i.e. not changing), then it's probably background (rough sketch at the end of this comment).

- if a pixel color matches its neighbors, it's probably the same as them. I noticed my black shirt tripped it up on occasion, but it was all-or-nothing.

- people are blobs, not diffuse, so try to segment large regions.
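
Rough sketch of that first heuristic (a running-average background model, Python/OpenCV). This is just a guess at the general idea, not how Zoom actually implements it:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)      # default webcam
    background = None
    alpha = 0.02                   # how quickly the historical average adapts

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_f = frame.astype(np.float32)

        if background is None:
            background = frame_f.copy()
        # Update the per-pixel historical average.
        cv2.accumulateWeighted(frame_f, background, alpha)

        # Pixels far from their historical average are treated as foreground.
        diff = cv2.absdiff(frame_f, background)
        fg_mask = (diff.max(axis=2) > 25).astype(np.uint8) * 255

        # Close small holes so a person stays one large blob.
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE,
                                   np.ones((9, 9), np.uint8))

        cv2.imshow("foreground", cv2.bitwise_and(frame, frame, mask=fg_mask))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()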


Yes, it did not work for me sitting with a laptop on a swing chair. It actually works better with clear, static features in the background, but it seems to be doing a bit more (a CNN?), because it seems to isolate paintings on the wall for me as well. I wish there were more easy-to-use open source tools for manipulating camera streams. Skype also has background blurring.


My work is using Meets, and I second this desire to edit my camera stream. I didn't find anything to do it, not even a cheap tool.


A green screen should make it pretty straightforward?


Yes, that would make it trivial. But Zoom gives you the option of using a green screen or not.


It is a good early version of such a feature, but I've found it to fail fairly quickly in uneven light or with busy backgrounds. Still, it is a great feature, so I'm hopeful that they'll keep improving it and other services will match it.


It’s likely some type of segmentation network like Mask RCNN or SegNet. Look up Mask RCNN for some state of the art segmentation results. This stuff has been able to run on mobile phones in real time for years now.


They detect the face and then extrapolate the body. You can tell because hiding your face (or turning away) makes you part of the background.


MS Teams can blur the background on video.


Jitsi has this feature in beta too. It works pretty well, although it currently uses way too much CPU, and doesn't work quite as well as it does in MS Teams.


Skype, Jitsi, and MS Teams have also had this feature for some time now.


It would be nice if there were some more information about where the mask comes from. We want a segmentation map for people, so this technique basically takes the predicted mask for the "person" class (class 1 in the COCO label set; 0 is background) and thresholds it to get foreground/background. If you changed the mask index, this would respond to other object types (so, as written, this code will only work for humans).
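
For reference, here is a sketch of where a mask like that typically comes from, assuming a setup like torchvision's pretrained Mask R-CNN (the file name and variable names are illustrative, not the article's code):

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    img = to_tensor(Image.open("selfie.jpg").convert("RGB"))

    with torch.no_grad():
        output = model([img])[0]   # dict with 'boxes', 'labels', 'scores', 'masks'

    # 'masks' has shape [num_detections, 1, H, W], sorted by score, so
    # masks[0][0] is the soft mask of the highest-scoring detection.
    # To be safe, pick the best detection whose label is "person" (1 in COCO).
    person = [i for i, l in enumerate(output["labels"]) if l.item() == 1]
    mask = output["masks"][person[0]][0]

    foreground = mask > 0.5        # threshold the soft mask into a binary one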

--

By the way, Google does some absolutely nuts stuff with this on the Pixel 3 and 4: they actually calculate a stereo depth map using the two autofocus sites within individual pixels. Some modern CMOS sensors use a technology called dual pixel autofocus (DPAF): each pixel contains two photodiodes, and the camera adjusts focus until the two report (more or less) the same intensity. If the camera is out of focus, the two photodiodes will have different intensities.

What this also gives you is two separate images with an extremely small (but detectable) parallax, which can be used for a coarse 3D reconstruction, and from that you can segment foreground and background. It's nice because you get a strong physical prior, rather than having to rely on a convnet to identify foreground/background regions. (They of course apply a convnet anyway to refine the result.)

https://ai.googleblog.com/2018/11/learning-to-predict-depth-...

https://ai.googleblog.com/2019/12/improvements-to-portrait-m...
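
To get a feel for the idea (not Google's pipeline, which is learned and far more sophisticated): the two dual-pixel views behave like a stereo pair with a tiny baseline, so even classical block matching gives a coarse disparity (inverse-depth) map. The file names stand in for the two sub-images:

    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Dual-pixel parallax is tiny, so only a few disparity levels are useful.
    stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    disparity = stereo.compute(left, right)   # int16, scaled by 16

    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("coarse_depth.png", vis.astype("uint8"))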


It’s a simple Gaussian kernel multiplied by a triangular mask.

You can do better by playing with the intensities of the pixel values, as suggested in the article I linked.


I meant where this line comes from:

    mask = masks[0][0]  
Presumably 0 is the class ID? For someone new to ML or object detection, it might not be obvious why you take the first channel here.

Also recent related reading: https://bartwronski.com/2020/03/15/using-jax-numpy-and-optim...

HN Discussion: https://news.ycombinator.com/item?id=22590360&ref=hvper.com&...


Awesome! Computational photography is fascinating. A while back, I wrote some OpenGL shaders to do the same thing, and got stuck when I wanted variable bokeh based on scene depth. This is easy when you have the depth map (say, in a renderer). I recall reading about how Apple achieved this using stereo disparity mapping, which is great if you have the hardware. I hadn't considered a neural net approach like yours for auto-generating a depth map, although now that I look into it, Facebook recently published research on just that!

https://ai.facebook.com/blog/-powered-by-ai-turning-any-2d-p...
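
Once you do have a depth map (from stereo, a depth sensor, or a monocular network like the one above), a naive way to get depth-varying blur is to quantize depth into layers and blur each layer with a strength proportional to its distance from the focal plane. A rough sketch that ignores occlusion at layer boundaries; the file names and constants are placeholders:

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg").astype(np.float32) / 255.0
    depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    depth = np.clip(depth, 0.0, 1.0 - 1e-3)

    focus_depth = 0.3    # depth of the in-focus subject (assumed known)
    num_layers = 6
    max_sigma = 12.0

    out = np.zeros_like(img)
    weight = np.zeros(depth.shape, dtype=np.float32)

    edges = np.linspace(0.0, 1.0, num_layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        layer = ((depth >= lo) & (depth < hi)).astype(np.float32)
        if layer.sum() == 0:
            continue
        # Blur strength grows with distance from the focal plane.
        sigma = max_sigma * abs((lo + hi) / 2.0 - focus_depth)
        if sigma < 0.5:
            blurred_img, blurred_mask = img, layer
        else:
            blurred_img = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
            blurred_mask = cv2.GaussianBlur(layer, (0, 0), sigmaX=sigma)
        out += blurred_img * blurred_mask[..., None]
        weight += blurred_mask

    out /= np.maximum(weight[..., None], 1e-6)
    cv2.imwrite("variable_bokeh.jpg", (np.clip(out, 0, 1) * 255).astype(np.uint8))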


One problem with bokeh is that it disappears when you scale the image down. Perhaps this technique can help solve that (but I suppose there would be better ways to do this).

Anyway, the entire technique is also very easy to do manually with Photoshop.


I don't have much experience with Photoshop. Would it be possible to record some sort of macro in it to automatically transform any image?

If not, this approach seems to have merit.


Yes, Photoshop supports recording pretty customizable actions, and it can also process multiple files in bulk/batch.


One step further would be to incorporate the blurring, masking, and combining operations into the network itself, so you can leverage the GPU to do the whole computation in one fell swoop.
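
A sketch of what that could look like in PyTorch (assumed here, since the segmentation already runs in it): the Gaussian blur becomes a depthwise convolution and the composite is plain tensor arithmetic, so everything stays on the GPU. The tensor shapes and names are illustrative:

    import torch
    import torch.nn.functional as F

    def gaussian_kernel(sigma, device):
        radius = int(3 * sigma)
        x = torch.arange(-radius, radius + 1, dtype=torch.float32, device=device)
        k1d = torch.exp(-x ** 2 / (2 * sigma ** 2))
        k1d = k1d / k1d.sum()
        k2d = k1d[:, None] * k1d[None, :]
        return k2d.repeat(3, 1, 1, 1)      # one [1, K, K] kernel per color channel

    def fake_bokeh(image, mask, sigma=8.0):
        # image: [1, 3, H, W] float tensor, mask: [1, 1, H, W] soft foreground mask.
        kernel = gaussian_kernel(sigma, image.device)
        pad = kernel.shape[-1] // 2
        padded = F.pad(image, (pad, pad, pad, pad), mode="reflect")
        blurred = F.conv2d(padded, kernel, groups=3)   # per-channel Gaussian blur
        # Keep the subject sharp, blur everything else.
        return mask * image + (1.0 - mask) * blurred

    device = "cuda" if torch.cuda.is_available() else "cpu"
    image = torch.rand(1, 3, 512, 512, device=device)   # placeholder input
    mask = torch.rand(1, 1, 512, 512, device=device)    # placeholder soft mask
    result = fake_bokeh(image, mask)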


Ain’t no lens that would blur the front brim of her hat while her face stays sharp like that. Disregarding the nature of the optics you are emulating ruins the value of the effect.


OT: I can't understand why this website _wants_ to look like a mobile site __and__ hides the scroll bar. I'm on desktop and it irritates me a bit.


Might be that it's a mobile-first theme that's really mobile only – styled for the phone with no changes made for desktop.


What’s odd is that I can’t interact with it at all on my iPad. I scroll down, hit a media object of some kind, and that’s the end of it. Blank space, no scroll indication, can’t seem to scroll back up to the top.


I think I'm missing the part of my brain that lets me understand why bokeh selfies are even a thing. If you're going to remove your surroundings from the picture, why even bother taking a new picture?


His original photo looks way nicer compared to the processed one.


Page is broken with no scrollbar.


If we're using deep learning anyway, we might as well use a U-Net (i.e. pix2pix) to add the bokeh directly - this would require paired training data but would probably look a lot better visually.
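
For concreteness, a minimal U-Net-style generator sketch in PyTorch (pix2pix flavored). Training is omitted (it would need paired sharp/bokeh images plus an L1 and/or adversarial loss), and the layer sizes are arbitrary:

    import torch
    import torch.nn as nn

    def down(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                             nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

    def up(cin, cout):
        return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                             nn.BatchNorm2d(cout), nn.ReLU())

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.d1, self.d2, self.d3 = down(3, 64), down(64, 128), down(128, 256)
            self.u1 = up(256, 128)
            self.u2 = up(256, 64)      # 256 = 128 from u1 + 128-channel skip from d2
            self.u3 = nn.Sequential(
                nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1), nn.Tanh())

        def forward(self, x):
            s1 = self.d1(x)            # [B,  64, H/2, W/2]
            s2 = self.d2(s1)           # [B, 128, H/4, W/4]
            b = self.d3(s2)            # [B, 256, H/8, W/8]
            x = self.u1(b)
            x = self.u2(torch.cat([x, s2], dim=1))   # skip connection
            return self.u3(torch.cat([x, s1], dim=1))

    net = TinyUNet()
    fake = net(torch.rand(1, 3, 256, 256))   # placeholder input batch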


website has no scrollbar??


Only works if you scroll while hovering over the "content" part of the website in the center.


Wow, this looks ugly :D. But still nice work. It's just that the girl is totally over-processed, and the "bokeh" - well, I hate to break it to you, but this isn't bokeh, it's called "blur".

This is bokeh: https://cdn.mos.cms.futurecdn.net/JgrDuxQPCAvgd5VzqiKN5a-650...

And bokeh is 3 dimensional and surrounds the focus area, increasing with distance: https://www.flickr.com/photos/scottcartwrightphotography/145...

I suggest brushing up a bit on photo skills, although I agree that the Instagram and iPhone culture can shift your perception of "bokehlicious" quite a bit.

I don't want to make you feel bad, so I'll also tell you that Apple's "bokeh" in portrait mode looks disgusting, doesn't make any sense, and breaks the image in a lot of trivial cases... and they spent a lot more money & effort on it than you :).


I agree. My goal here was to get something that looked like bokeh if you squinted at it.

I just wanted to see how far I could take this idea.


FYI, the web page seems entirely unusable on my iPad Pro. It goes blank at the first example, and without a scroll indicator I have no idea what’s going on.

I was able to scroll to the bottom footer, but I can’t scroll back to the top, so there’s literally nothing but a blank page now except for the footer.


Thanks for the feedback. Let me fix that.



