> I'm currently working on skin detection & exclusion during the color detection phase and am looking at using basic machine learning techniques. The key challenge I'm facing is differences in skin tones.
Try looking at the chromatic colour rather than the RGB values. You can get extremely far with just this: most skin colours fall into one of two peaks [0], so no machine learning is needed.
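If it helps, the conversion is roughly this (a minimal NumPy sketch; the threshold box is made up for illustration, whereas [0] fits the peaks properly from labelled skin pixels):

```python
import numpy as np

def chroma_skin_mask(rgb):
    # rgb: HxWx3 uint8 image. Chromatic colour throws away brightness:
    #   r = R / (R + G + B),  g = G / (R + G + B)
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-6            # avoid division by zero
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    # Illustrative box around where most skin tones cluster in (r, g);
    # fit the real bounds (or a Gaussian, as in [0]) to your own data.
    return (0.36 < r) & (r < 0.47) & (0.28 < g) & (g < 0.36)
```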
Once you've got this, edge detection & a few other bits should give you pretty reliable skin blocks. I've used it a few times before. Here's a presentation I did some years ago that I apparently still have on my desktop: http://files.figshare.com/1409002/1.pdf [1]
EDIT - I'm sure there are many good approaches for this, and many fancy ones. This is very simple and was researched/written purely for fun in a couple of weeks.
EDIT 2 - The final slide shows the more interesting part, where you use edge detectors to guide your estimation of what is inside or outside a shape. That plus an adaptive threshold (designed to stop if the number of pixels included jumped rapidly) got some good results, but I've not got the code any more.
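From memory, the adaptive threshold was along these lines (a rough reconstruction rather than the original code; the centre, step and jump factor are placeholders):

```python
import numpy as np

def adaptive_skin_threshold(r, g, centre=(0.41, 0.32),
                            step=0.005, max_width=0.12, jump_factor=1.5):
    # Widen a box around a chroma centre until the accepted pixel count
    # suddenly jumps, which usually means the mask has leaked into the
    # background. r, g are the chromatic-colour planes from the sketch above.
    best_mask = np.zeros_like(r, dtype=bool)
    prev_count = None
    width = step
    while width <= max_width:
        mask = (np.abs(r - centre[0]) < width) & (np.abs(g - centre[1]) < width)
        count = int(mask.sum())
        if prev_count and count > jump_factor * prev_count:
            break                              # inclusion jumped rapidly: stop
        best_mask, prev_count = mask, count
        width += step
    return best_mask
```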
awesome stuff :D Thanks for this. Will definitely look into some of that in more detail.
Another tricky part of skin detection is false positives, i.e. what if the actual product is that color?
Some things I've noticed and will be taking into account are:
Skin areas tend to clump around the same locations in photos. The product is usually the focus and skin is near the edges.
Product types also tend to share similar photo layouts.
So with that, skin color in those zones scores higher.
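As a first pass I'm thinking of something like this to fold the position prior into the score (just a sketch; the radial weighting is a guess):

```python
import numpy as np

def positional_skin_score(skin_mask):
    # skin_mask: boolean HxW mask of skin-colored pixels.
    # Weight pixels by distance from the image center, since the product
    # usually sits in the middle and skin tends to be near the edges.
    h, w = skin_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt(((ys - h / 2) / (h / 2)) ** 2 +
                   ((xs - w / 2) / (w / 2)) ** 2) / np.sqrt(2)  # 0 at center, ~1 at corners
    return skin_mask * dist
```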
> awesome stuff :D Thanks for this. Will definitely look into some of that in more detail.
No worries, hope it helps. It was just a quick project back in the day at uni that ended up working a lot better than I expected.
Give me a shout if you want any work done on it (my email address is in my profile).
> Some things I've noticed and will be taking into account are: Skin areas tend to clump around the same locations in photos. The product is usually the focus and skin is near the edges. Product types also tend to share similar photo layouts. So with that, skin color in those zones scores higher.
This kind of thing will really help you: small bits of knowledge about the specifics drastically simplify the problem. For example, you can estimate the skin tone by roughly segmenting the image into possible skin/not skin with the approach above; then, by looking at the segments that are more likely to be skin because of their positioning, you can narrow your accepted parameters and hopefully distinguish between the two.
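Something like this is what I mean, just as a sketch (the quantile and percentile choices are arbitrary): take the pixels your rough mask and position prior both agree on, and re-fit a tighter chroma range from those.

```python
import numpy as np

def refine_skin_range(r, g, rough_mask, position_score, top_fraction=0.25):
    # Keep the most confidently-skin pixels (high positional score within
    # the rough chroma mask) and re-estimate the accepted (r, g) range
    # from them, narrowing the parameters towards this image's skin tone.
    scores = position_score[rough_mask]
    if scores.size == 0:
        return None
    cutoff = np.quantile(scores, 1.0 - top_fraction)
    confident = rough_mask & (position_score >= cutoff)
    r_sel, g_sel = r[confident], g[confident]
    return ((np.percentile(r_sel, 2), np.percentile(r_sel, 98)),
            (np.percentile(g_sel, 2), np.percentile(g_sel, 98)))
```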
Identifying unusual edges/shapes can help too when classifying regions as skin/not skin.
Beyond that, you could start looking at pose estimation to help guess the underlying shape (since you know it's on humans, you can make a lot of assumptions).
Also, since you're detecting colours, mistaking very similarly coloured skin for the product wouldn't change your results much :)
[0] http://www-cs-students.stanford.edu/~robles/ee368/skincolor....
[1] Calvert, Ian (2014): Finger pointing detection. figshare. http://dx.doi.org/10.6084/m9.figshare.953171