They assume that a computer will detect faces where humans do not and use such a selection as a captcha. It forces the computer to make a random guess against its matches so there is still a chance of it getting the right answer, but it's an interesting concept.
Interesting, but my iPhone won't have any luck getting into computer-only clubs: It successfully detects the human face in this image[1] but none of the dot patterns.
A lot of artistic value, but not so many practical applications.
To bypass this, a bot can simply be tweaked to draw a frame shifted by x and y pixels in some direction after correctly detecting a face. The system will see no match with the face (the one humans can't see) and will therefore conclude the bot is a human.
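The bypass is simple enough to sketch. This is a hypothetical illustration, not any real captcha's API: `detect_face` stands in for an actual detector (e.g. an OpenCV Haar cascade), and the bot deliberately shifts its answer box so it no longer overlaps the detection.

```python
# Hypothetical sketch of the bypass: the bot's detector finds the face,
# then the submitted answer is shifted so it misses it, and the FADTCHA
# scores the bot as "human". All names here are made up for illustration.

def detect_face(image):
    """Placeholder detector: pretend a face was found at this (x, y, w, h) box."""
    return (40, 60, 32, 32)

def shift_box(box, dx, dy):
    """Shift a bounding box by (dx, dy) pixels."""
    x, y, w, h = box
    return (x + dx, y + dy, w, h)

def overlaps(a, b):
    """True if two (x, y, w, h) boxes intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def bot_answer(image):
    face = detect_face(image)
    # Shift by more than the box width so the answer no longer touches the face.
    return shift_box(face, dx=face[2] + 1, dy=0)

print(overlaps(detect_face(None), bot_answer(None)))  # False: the bot "fails" on purpose
```

Since the submitted box never matches a computer-detectable face, the test has no way to distinguish this bot from a human who genuinely sees nothing.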
I agree. It seems this could be used to determine only whether the testee is not-human. It's trivial, like you say, to trick the test into believing the testee is human by simply not finding a face.
At first glance I thought the test was only to determine whether the testee is definitely not-human but the description actually implies categorising into computer and not-computer:
"FADTCHA(FAce Detection Turing test to tell Computers and Humans Apart) is a type of challenge-response test to determine whether the response is generated by a computer. The test involves asking a testee to find* a face in a presented image and draw a square region of the face. If one correctly finds a face, it can be presumed that it is a computer, otherwise it will be regarded as a non-computer."
It's a joke on using a Captcha to detect a human user - this is the opposite, a way of only allowing robots to access information, and to keep away humans.
I was explaining why, in my opinion, the facial captcha project isn't very functional, at least as it is described on the artist's website.
It was an explanation of the simplest method a bot could use to game that system.
This is the basis of "artificial creativity". Train these NNs to recognize faces (or anything else, maybe especially everything else), and then run random noise through them. See what they come up with.
You could have an algorithm coming up with some pretty decent (and original) cartoon faces. Or with art, abstract or not.
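The "run random noise through the recognizer and keep what it accepts" idea can be sketched in a few lines. The real version would use a trained face detector; `looks_like_face` below is a made-up stand-in (it accepts 1-D "images" whose top half is darker than the bottom, a crude eyes-over-mouth heuristic), just to show the generate-and-filter loop.

```python
import random

# Toy sketch of filtering random noise through a recognizer. The detector
# here is a deliberately crude stand-in, not a real face-detection model.

def looks_like_face(pixels):
    """Stand-in detector: top half darker than bottom half."""
    n = len(pixels)
    top, bottom = pixels[:n // 2], pixels[n // 2:]
    return sum(top) / len(top) < sum(bottom) / len(bottom)

def generate_candidates(size=64, attempts=1000, seed=0):
    """Generate random noise images and keep the ones the detector accepts."""
    rng = random.Random(seed)
    kept = []
    for _ in range(attempts):
        noise = [rng.random() for _ in range(size)]
        if looks_like_face(noise):
            kept.append(noise)
    return kept

faces = generate_candidates()
print(len(faces))  # some fraction of the random samples pass the filter
```

Swap in a real detector's confidence score and the same loop yields noise images that the model, at least, insists are faces.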
That doesn't sound right. You suggest if you give the computer something that looks like a face, then add a filter, it has somehow created an idea on its own. Yet you are still telling the computer what to do, it has done nothing creative on its own. I'd venture that real "creativity" is the reverse ... the ability to create something that is recognizable as a face.
If he wants to make a career out of something so simple, I'm not going to stop him. Why anyone would listen to him, that's another thing entirely. Perhaps you like the sound of bullshit.
Random input filtered by face detection does create new faces. It's just a different method of creating faces than one that derives a face from the parameters of the face filter (which is more like how a human would do it), but either way, you get new output that is identifiable and identified as a face.
Yes, the computer is biased by your input of the face detection filter, but how is that any different than a human's cultural biases? The only real differences are purely internal details.
For me, creativity often happens by getting a random idea and recognizing it as part of a whole which is yet to be created. In the linked case it could be something like identifying a face in the clouds and then extrapolating that to create an image of a whole cloud person.
I was a bit flabbergasted the first time I pointed a digital camera at an Obey Giant sticker and a popup appeared: "The subject of this photo may have closed their eyes."
Very nice to see more projects where algorithms and art come together. This post also reminded me a bit of this project where a face detection algorithm and a pseudo-genetic algorithm are combined to create faces out of noise: http://lbrandy.com/blog/2009/04/genetic-algorithms-evolving-...
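The linked idea is essentially hill climbing on detector confidence: start from noise and keep mutations that score as "more face-like". This sketch uses a toy fitness (distance to a fixed target pattern) in place of a real detector's score, so it stays self-contained; everything here is an illustrative stand-in.

```python
import random

# Minimal hill-climbing sketch of evolving noise toward higher "face-ness".
# TARGET and fitness() are toy stand-ins for a real detector's confidence.

TARGET = [0.0] * 8 + [1.0] * 8   # stand-in for "what the detector likes"

def fitness(img):
    """Higher is better: negative squared distance to the toy target."""
    return -sum((p - t) ** 2 for p, t in zip(img, TARGET))

def mutate(img, rng):
    """Nudge one pixel by a small random amount, clamped to [0, 1]."""
    child = list(img)
    i = rng.randrange(len(child))
    child[i] = min(1.0, max(0.0, child[i] + rng.uniform(-0.2, 0.2)))
    return child

def evolve(generations=500, seed=1):
    """Start from random noise; keep mutations that don't lower fitness."""
    rng = random.Random(seed)
    img = [rng.random() for _ in range(len(TARGET))]
    for _ in range(generations):
        child = mutate(img, rng)
        if fitness(child) >= fitness(img):
            img = child
    return img

best = evolve()
# After many generations the image drifts toward the detector's preference.
```

The linked project does the same thing with an actual face detector as the fitness function, which is why its outputs end up as ghostly face-like blobs.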
I tested those "cloud face" images in the popular face recognition engine rekognition.com. None of them actually got recognized as a face. Looks like the face recognition algorithm they use is pretty smart.
If you upload the image of the overall piece (the one with an out-of-focus head looking towards it), 2 of the cloud faces are recognized as faces. With the other image of the piece, taken at a more skewed angle with no head blocking it, 3 of the panels are recognized as faces.
The funny thing about seeing false faces (or patterns in general, really) is that scale (and to some degree composition) matters, even for us humans. When I look at the large picture of the cloud that is the first one presented in the article at 100% scaling, I struggle to see any sort of meaningful face. But when I locate that same image as a panel on the overall work in a smaller scaled down presentation I see the 'face' immediately. With a browser that can appropriately scale down to 25% or less I can see the same thing happening with the actual large source images... once I hit about 50% scaling I can see the "faces" that the computer recognized.
Obviously YMMV depending upon monitor size vs dpi as to where they stop being clouds and start being faces.
Different algorithms will usually get different false positives. This is why large collections of algorithms generally perform much better than the best individual algorithm. I imagine these are all weird edge cases as it is.
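The point about combining detectors can be shown with a toy majority vote. These three "detectors" are hypothetical stand-ins, each a simple threshold with its own quirks; the ensemble only accepts a detection when most of them agree.

```python
# Toy majority-vote ensemble. Each "detector" is a hypothetical stand-in:
# a threshold on some face-likeness score x, each with different quirks
# (and therefore different false positives).

def detector_a(x): return x > 0.3
def detector_b(x): return x > 0.5
def detector_c(x): return x > 0.4

def ensemble(x, detectors=(detector_a, detector_b, detector_c)):
    """Accept only when a strict majority of detectors fire."""
    votes = sum(d(x) for d in detectors)
    return votes > len(detectors) // 2

print(ensemble(0.45))  # True: a and c fire, b does not
print(ensemble(0.35))  # False: only a fires
```

A false positive that only trips one detector gets outvoted, which is why ensembles tend to beat any single member on cases like these cloud images.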
We do tend to "see" patterns that associate to built-in or learned abstractions, such as faces (or elephants, etc.) in clouds. This is especially the case when social reinforcement is a factor, like when someone points skyward and asks, "don't you see a face in that cloud?" and soon everyone standing nearby agrees it sure looks like a face.
That computers are "fooled" may simply reflect their human programming, though not being fooled could well be a very hard problem to solve.
It's amusing the way face-detection in my spiffy digital camera (an Olympus EM1) will find faces in all kinds of inanimate objects. The feature can be useful for photographing real people, but in other situations face-detection is just a distraction and I keep it turned off.
Though now I may have to aim the camera at a few clouds...
> ‘Cloud Face’ is a collection of cloud images that are recognized as human face by a face-detection algorithm. It is a result of computer’s vision error, but they often look like faces to human eyes, too. This work attempts to examine the relation between computer vision and human vision.
Is it really computer error? Most of us probably saw the face of Einstein in the one cloud photo, because in fact there was a strong resemblance. If an artist paints a caricature or abstract image we can often identify the original subject. That's not error. There is something else going on here.
It is an error in the sense that these are not in fact faces, but clouds. While humans can also see shapes of faces in clouds, we are capable of understanding that they are not real faces. The computer algorithm is not capable of recognizing the difference, which results in such misclassification; thus it is failing its intended purpose.