Read the linked paper; it's actually very good. It looks like they used good science in testing this:
"From our study, we conclude that both our method and the light probe method are highly realistic, but that users can tell a real image apart from a synthetic image with probability higher than chance. However, even though users had no time restrictions, they still could not differentiate real images from both our method and the light probe method reliably."
This is a great example of using humans for what humans are good at (interpreting photographs) and computers for what computers are good at (lots of light modelling).
Very realistic. My first thought was that if this can be done without leaving detectable artifacts, then it will inevitably impact the admissibility of photographs as evidence in trials.
And eye-witness testimony should be made inadmissible (and I think it eventually may be). DNA evidence has a roughly 1 in 200 chance of being wrong[1]. A hard time to be a lawyer.
1. Source: The Drunkard's Walk. This is the estimated likelihood of a lab error, which far outweighs the 1-in-a-billion chance that the DNA match itself is wrong, but lab error rates are not reported and not admissible as evidence.
On average, you need about 100 people to find two among them with matching fingerprints. Now calculate how many you need to find one among them whose fingerprint matches a given person, i.e. you.
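A rough sketch of that arithmetic, assuming the 100-person figure above and a uniform per-pair match probability (both illustrative numbers, not real fingerprint statistics):

    import math

    # Premise: ~100 random people suffice for two of THEM to share a
    # matching fingerprint (a birthday-paradox collision over all pairs).
    n = 100
    pairs = math.comb(n, 2)    # 4950 pairs among 100 people
    p_pair = 1 / pairs         # implied per-pair match probability

    # Matching a GIVEN person is one trial per person, not a new batch
    # of pairs, so the expected search size is 1/p (the mean of a
    # geometric distribution).
    print(f"implied per-pair odds: 1 in {pairs}")
    print(f"expected people to match you: about {1 / p_pair:.0f}")

So roughly 5,000 people, about fifty times the pairwise case: the birthday paradox run in reverse.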
But if you know the lab has a one-in-ten chance of screwing up the processing of my print, how can I trust the one-in-a-large-number odds of a match? The screw-up may cause, say, a one-in-five chance of matching anybody.
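Concretely, by total probability (the 1-in-5000 nominal odds below is an assumed illustrative figure; the other numbers are the hypotheticals from the comment above):

    # How a lab's processing error rate swamps the nominal match odds.
    p_lab_error = 1 / 10       # lab botches the processing of the print
    p_match_if_error = 1 / 5   # a botched print "matches" anybody
    p_match_if_ok = 1 / 5000   # nominal odds of a spurious match

    p_false_match = (p_lab_error * p_match_if_error
                     + (1 - p_lab_error) * p_match_if_ok)

    print(f"P(false match) = {p_false_match:.5f}")  # 0.02018, about 1 in 50
    # The error term (0.02) dwarfs the nominal term (0.00018) roughly 100x.

The headline one-in-a-large-number figure is almost irrelevant once the lab's own error rate enters the calculation.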
Error rates are hugely important yet completely ignored in court.
"Error rates are hugely important yet completely ignored in court."
Not actually true. This is not to say that courts get it right all the time, but it's not particularly newsworthy when they do. Attacking the chain of custody and alleging contamination or false positives are standard techniques in challenging forensic testimony, both biological and digital.
The cases where the forensic evidence is legitimately challenged and those challenges are ignored or overruled are often dramatic and newsworthy, but the rate of miscarriages of justice is falling because courts today are much more aware of these issues than the courts of (say) 20 years ago. Unfortunately news reporting of trials and appeals is so utterly awful that people tend to assume the exceptional is the norm.
I'm not saying that it isn't a problem, just that the courts are more cognizant of the issue than most people appreciate.
"A technician in the NYPD's forensics lab has been suspended for allegedly falsifying drug-test results, throwing into question "maybe thousands" of criminal cases -- and prompting a panicked meeting yesterday between cops and the city district attorneys."
What's your point? If courts were not aware of such things, there wouldn't be the potential for a large number of convictions to be overturned, would there?
While humans will be fooled, the researchers have not attempted to fool computers. I would think there is some simple filtering you could perform on the image to detect the inserted objects.
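As a sketch of the kind of simple filtering that might work (an assumed heuristic, not the paper's method or a validated forensic tool): rendered pixels tend to lack the camera sensor's high-frequency noise, so blocks whose noise statistics disagree with the rest of the frame are candidates for inserted content.

    import numpy as np
    from scipy.ndimage import median_filter

    def suspicious_blocks(gray, block=32, z_thresh=2.5):
        # Subtract a median-smoothed copy to isolate high-frequency
        # sensor noise, then flag blocks whose noise variance deviates
        # strongly from the image-wide median.
        g = gray.astype(np.float64)
        residual = g - median_filter(g, size=3)
        hb, wb = g.shape[0] // block, g.shape[1] // block
        var_map = np.empty((hb, wb))
        for i in range(hb):
            for j in range(wb):
                tile = residual[i * block:(i + 1) * block,
                                j * block:(j + 1) * block]
                var_map[i, j] = tile.var()
        z = np.abs(var_map - np.median(var_map)) / (var_map.std() + 1e-9)
        return z > z_thresh  # boolean map of candidate tampered blocks

A determined forger can of course re-synthesize matching sensor noise, so this raises the bar rather than settling the question.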
They took a still image and added moving objects to it, but I don't know if they can apply the technique to videos yet (although I'm sure that's coming).
Trust me, that's a trivial extension. It's more time-consuming to do these things on video, but on the other hand one can also extract a ton more information from the scene if the camera or objects within its purview are in motion.
This could be a huge step for augmented reality if the process can be applied in real-time without any user input. It could improve on other research that's already out there like this one:
http://www.youtube.com/watch?v=XCEp7udJ2n4
Happens all the time. The new and front pages are too busy, so worthwhile articles often get overlooked. But it is good to repost as you did; I will usually look at the original and add an upvote.
The linked article uses that title. Better would have been to link to the paper itself (which also includes the video), but the OP was following the title submission rules.
That is very impressive. I used to do a lot of photorealistic modelling using mental ray or V-Ray in 3ds Max, and the level of precision here is quite frankly extraordinary. This could very well be a game changer in the 3D industry.
Camera matching usually requires a lot of forethought and preparation of the physical scene, involving thoroughly surveying the location and obtaining light-probe data so that an accurate 3D model of the scene can be produced and the inserted objects lit correctly (see the pose-recovery sketch below).
This could make compositing 3D and practical images a trivial exercise, very nice.
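For anyone curious what that camera-matching step looks like computationally, here is a minimal pose-recovery sketch using OpenCV's solvePnP. Every coordinate and the focal length are invented placeholders, and this is the classical surveyed-points workflow described above, not this paper's lighter-weight approach:

    import numpy as np
    import cv2  # OpenCV

    # Recover the photo's camera pose from a few surveyed 3D points and
    # their pixel positions (all values are made-up placeholders).
    object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                           [0, 0, 1], [1, 0, 1]], dtype=np.float64)  # metres
    image_pts = np.array([[320, 410], [480, 405], [485, 300], [322, 305],
                          [318, 255], [478, 250]], dtype=np.float64)  # pixels

    f = 800.0  # assumed focal length in pixels
    K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)  # rotation of the original camera
        # R and tvec place virtual objects in the photo's coordinate
        # frame so a renderer can line them up with the real scene.

With pose and intrinsics in hand, the remaining (harder) problem is lighting, which is where the light-probe data or this paper's estimation comes in.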
As technology progresses, it seems like only a matter of time before all photos and videos can be perfectly modified to suit whatever purpose. At some point, perhaps movie stars will do no more than lend their likeness (airbrushed of course) to productions.
It will be a case of '3D killed the video star,' in many ways. We are already approaching a point where the acting and the appearance are two different things; actors like Andy Serkis and Doug Jones are not very well known to the general public, but have played starring roles as larger-than-life monsters of the screen. Although it is not yet economical, it is already quite possible to take one person with excellent acting ability and map on the appearance of someone else who is more visually appropriate or attractive.
I see they use LuxRender. Anyone dare to take a guess at what algorithm they used? The animations seem noise-free, so I'd guess particle tracing/photon mapping? IGI perhaps?
"From our study, we conclude that both our method and the light probe method are highly realistic, but that users can tell a real image apart from a synthetic image with probability higher than chance. However, even though users had no time restrictions, they still could not differentiate real images from both our method and the light probe method reliably."