Inserting Artificial Objects into Photographs (cgchannel.com)
339 points by ThomPete on Oct 17, 2011 | hide | past | favorite | 41 comments



Read the linked paper, it's actually very good. It looks like they used good science in testing this:

"From our study, we conclude that both our method and the light probe method are highly realistic, but that users can tell a real image apart from a synthetic image with probability higher than chance. However, even though users had no time restrictions, they still could not differentiate real images from both our method and the light probe method reliably."


This is a great example of using humans for what humans are good at (interpreting photographs) and computers for what computers are good at (lots of light modelling).


Very realistic. My first thought was that if this can be done without leaving detectable artifacts, then it will inevitably impact the admissibility of photographs as evidence in trials.


And eye-witness testimony should be made inadmissible (and I think it eventually may be). DNA evidence has roughly a 1 in 200 chance of being wrong[1]. A hard time to be a lawyer.

1. Source: The Drunkard's Walk. This is the estimated likelihood of a lab error, which by far drowns out the 1-in-a-billion chance that the DNA match itself is wrong, but these numbers are not reported and not admissible as evidence.
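To see why the lab error rate dominates, here's a quick back-of-the-envelope calculation using the figures from the comment above (the exact rates are illustrative assumptions, not measured data):

```python
# Illustrative figures from the comment (assumptions, not measured data):
p_lab_error = 1 / 200      # estimated chance of a lab processing error
p_coincidence = 1 / 1e9    # chance an innocent person's DNA matches by luck

# Probability the reported "match" is wrong for either reason
# (inclusion-exclusion; both events are assumed independent):
p_wrong = p_lab_error + p_coincidence - p_lab_error * p_coincidence

# The lab error term swamps the coincidental-match term by a factor
# of millions, so p_wrong is essentially just the lab error rate.
print(f"{p_wrong:.9f}")
```

With these numbers, quoting the 1-in-a-billion match probability to a jury overstates the reliability of the evidence by about six orders of magnitude.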


On average, you need 100 people to find two among them with one common fingerprint. Now calculate how many you need to find one among them that has a common fingerprint with a given person, i.e. you.


But if you know the lab has a one-in-ten chance of screwing up the processing of my print, how can I trust the one-in-a-large-number odds of a match? The screw-up may cause a one-in-five chance of matching anybody.

Error rates are hugely important yet completely ignored in court.


Error rates are hugely important yet completely ignored in court.

Not actually true. This is not to say that courts get it right all the time, but it's not particularly newsworthy when they do. Attacking the chain of custody and alleging contamination or false positives are standard techniques in challenging forensic testimony, both biological and digital.

The cases where the forensic evidence is legitimately challenged and those challenges are ignored or overruled are often dramatic and newsworthy, but the rate of miscarriages of justice is falling because courts today are much more aware of these issues than the courts of (say) 20 years ago. Unfortunately news reporting of trials and appeals is so utterly awful that people tend to assume the exceptional is the norm.

I'm not saying that it isn't a problem, just that the courts are more cognizant of the issue than most people appreciate.


"A technician in the NYPD's forensics lab has been suspended for allegedly falsifying drug-test results, throwing into question "maybe thousands" of criminal cases -- and prompting a panicked meeting yesterday between cops and the city district attorneys."

http://www.nypost.com/p/news/local/queens/lab_tech_wQIOPAcKY...


What's your point? If courts were not aware of such things, there wouldn't be the potential for a large number of convictions to be overturned, would there?


I was giving a good example.


Ah, I see. It was hard to tell without any comment of your own on the grandparent.


As long as the original evidence is preserved, you can always collect material again and re-run the test.


While humans will be fooled, the researchers have not attempted to fool computers. I would think there is some simple filtering you could perform on the image to detect the inserted objects.


Container-induced information analysis (like jpeg artefacts) can indeed bring a lot of information about manipulated areas.
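One standard trick in this family is double-quantization analysis: a region spliced in from a differently compressed source ends up quantized twice, which leaves a periodically uneven coefficient histogram. A toy sketch of the principle (this is a generic forensics technique, not the method used by any specific tool; real detectors operate on the 8x8 DCT blocks of the JPEG itself):

```python
import random
import statistics

random.seed(0)
# Stand-in for DCT coefficients of an image region:
coeffs = [random.uniform(-100, 100) for _ in range(10_000)]

def quantize(xs, step):
    # JPEG-style quantization: snap each coefficient to a multiple of `step`
    return [round(x / step) * step for x in xs]

once = quantize(coeffs, 5)                 # untouched region: compressed one time
twice = quantize(quantize(coeffs, 3), 5)   # spliced region: compressed twice

def bin_counts(xs):
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return list(counts.values())

def unevenness(counts):
    # Coefficient of variation of the histogram bins
    return statistics.pstdev(counts) / statistics.mean(counts)

# The doubly-quantized histogram shows periodically uneven bins --
# the telltale signature such forensic filters look for.
print(unevenness(bin_counts(once)), unevenness(bin_counts(twice)))
```

In the toy example the singly compressed region fills its histogram bins roughly evenly, while the twice-compressed region alternates between over- and under-populated bins, so a simple statistic separates the two.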


And, apparently, videography.


They took a still image and added moving objects to it, but I don't know if they can apply the technique to videos yet (although I'm sure that's coming).


Trust me, that's a trivial extension. It's more time-consuming to do these things on video, but on the other hand one can also extract a ton more information from the scene if the camera or objects within its purview are in motion.


There is a company which puts ads on objects in videos. Seems like a step in the right direction.


This could be a huge step for augmented reality if the process can be applied in real-time without any user input. It could improve on other research that's already out there like this one: http://www.youtube.com/watch?v=XCEp7udJ2n4


I've seen videos of this already. I think Maya was the software?


Funny thing is that this was posted 3 days ago: http://news.ycombinator.com/item?id=3111258

and it only got 5 points

And the link was to the page of one of the guys who did the research.

???


Happens all the time. The new and front pages are too busy, so worthwhile articles often get overlooked. But it is good to repost as you did; I will usually look at the original and add an upvote.


Yes, e.g. the recent "Quantum Levitation" was submitted 5 times before the successful one: http://www.hnsearch.com/search#request/all&q=Quantum+Lev...


Yeah, but it wasn't "mind-blowing" then. Good titles are important.


Can we please not use OTT descriptions like "Mind-Blowing"? It sounds a bit... tabloid-y.


The title of the video is "Rendering Synthetic Objects into Legacy Photographs" which I think sounds much better.


The linked article uses that title. It would have been better to link to the paper itself (which also includes the video), but the OP was following title submission rules.


The article was submitted days ago and received no attention at all. I guess Mind-Blowing Titles work even on HN.


Okay, but to be completely honest I was blown away by the video demonstration, so it did deliver on the promise ;-)


Looks like it's been corrected to something a little less dramatic. Cool.


That is very impressive. I used to do a lot of photorealistic modelling using mental ray or vray in 3ds max, and the level of precision of this is quite frankly extraordinary. This could very well be a game changer in the 3D industry.


This is true.

Camera matching usually requires a lot of forethought and preparation of the physical scene, involving thoroughly surveying the location & obtaining light-probe data so that an accurate 3d model of the scene can be produced and the inserted objects lit correctly.

This could make composing 3d/practical images a trivial exercise, very nice.


Pretty cool... I can see this being used as a pretty slick "try before you buy" feature for an online furniture/home goods store.


As technology progresses, it seems like only a matter of time before all photos and videos can be perfectly modified to suit whatever purpose. At some point, perhaps movie stars will do no more than lend their likeness (airbrushed of course) to productions.


It will be a case of '3d killed the video star,' in many ways. We are already approaching a point where the acting and the appearance are two different things; actors like Andy Serkis and Doug Jones are not very well known to the general public, but have played starring roles as larger-than-life monsters of the screen. Although it is not yet economical, it is already quite possible to take one person with excellent acting ability and map on the appearance of someone else who is more visually appropriate or attractive.


I love the extensive use of teapots. In a previous life, I had one (a physical one) on my desk.


It's the standard Utah Teapot: https://secure.wikimedia.org/wikipedia/en/wiki/Utah_teapot The Stanford Dragon also makes several appearances: https://secure.wikimedia.org/wikipedia/en/wiki/Stanford_Drag...


Never had a dragon, but I was very fond of my "teapotahedron"


I see they use Luxrender. Anyone dare to take a guess at which algorithm they used? The animations seem noise-free, so I'd guess particle/photon mapping? IGI perhaps?


Paper: http://kevinkarsch.com/publications/sa11.pdf

Looks like film makers and others will save a lot of money on set decoration.


Same thoughts here; it might make low-budget special effects possible.



