Friday, April 06, 2007


Science News Online Article: Computing Photographic Forgeries

Dartmouth professor develops a software program to detect digital image forgery. "The eyes are a partial mirror into the world in which you're photographed," Farid says. "If there are two white dots in each eye, there had to have been two separate light sources. So, if a photo shows two dots in one person's eyes and only one dot in another person's eyes, it must have been spliced together from two different originals."


Here's the "yes but" ---

This presumes (1) that all eyeballs are aimed in the same direction; and (2) that there is no "outlier" light source (such as a strobe, flash, or spotlight) providing a focused second source of light.
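The "count the dots" heuristic can be sketched in code. The following is a toy illustration only (the pixel values, threshold, and eye crop are made up, and real forensic tools work on actual image data): threshold a small grayscale eye region and count the connected bright blobs, each of which stands in for one catchlight.

```python
# Toy sketch of the catchlight-counting heuristic: count connected
# regions of "bright" pixels in a small grayscale eye crop.
# All data and the threshold are hypothetical.

def count_highlights(pixels, threshold=200):
    """Count 4-connected regions of pixels brighter than threshold."""
    rows, cols = len(pixels), len(pixels[0])
    seen = set()
    regions = 0
    for r in range(rows):
        for c in range(cols):
            if pixels[r][c] > threshold and (r, c) not in seen:
                regions += 1
                stack = [(r, c)]  # flood-fill this bright blob
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and pixels[ny][nx] > threshold):
                            stack.append((ny, nx))
    return regions

# Two separate bright dots -> two light sources, per Farid's reasoning
# (and per the two assumptions above).
eye = [
    [10,  10, 10,  10, 10],
    [10, 255, 10, 240, 10],
    [10, 255, 10,  10, 10],
    [10,  10, 10,  10, 10],
]
print(count_highlights(eye))  # 2
```

Note that the flash/spotlight objection shows up directly here: an extra focused source simply adds a blob, and the count no longer tells you how many "ambient" sources there were.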
Really, the eyes are a partial mirror "of" the world in which one is photographed. This researcher has been a big promoter of near pixel-by-pixel forensic photographic analysis, and also promotes his own technology, which he claims accomplishes the same. Nice to know he calls this a "bag of tricks" ---

Hypo time: two dueling digital photographs. One is digitally signed. It's the altered one, with a matching time and date, saved into a different image format (perhaps removing, or rendering uninterpretable, those nasty "layers") before the digital signature was applied. The argument: "It's digitally signed, and you can see that it hasn't been altered, kind sir (or madam)." The other is legitimate, but alas, not digitally signed. The argument: "Your honor, believe me, this photograph is the real McCoy. The other is fake." The arguments on both sides become almost Kafkaesque.
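The hypo turns on a simple point: a signature proves only that the bytes haven't changed *since signing*, not that they were honest *before* signing. A minimal sketch (hypothetical key and data; HMAC-SHA256 stands in for whatever signature scheme a real system would use):

```python
import hashlib
import hmac

KEY = b"camera-or-notary-key"  # hypothetical signing key

def sign(image_bytes):
    """Sign the image bytes with HMAC-SHA256."""
    return hmac.new(KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes, signature):
    """True iff the bytes are unchanged since signing."""
    return hmac.compare_digest(sign(image_bytes), signature)

original = b"...genuine pixel data..."   # never signed
doctored = b"...spliced pixel data..."   # signed after alteration

sig = sign(doctored)          # the forger signs the altered file
print(verify(doctored, sig))  # True  -- "it's digitally signed!"
print(verify(original, sig))  # False -- the genuine photo fails
```

The forged photo verifies and the real one doesn't, which is exactly the Kafkaesque posture described above.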

The following excerpt is from David Levy's chapter in "Authenticity in a Digital Environment," published by the Council on Library and Information Resources. He states the issue quite well: "Without the security of stable digital objects, what might we do? One possibility would be to maintain audit trails, indicating the series of transformations that has brought a particular document to the desktop. Such a trail (akin to an object's provenance) could conceivably lead back to the creation of the initial document or, at least, back to a version that we had independent reasons to trust as authentic. Having such an audit trail (and trusting it) would allow us to decide whether any of the transformations performed had violated the document's claimed authenticity. A second possibility would ignore the history of transformations and would instead specify what properties the document in question would have to have to be authentic. This would be akin to using a script or a score to ascertain the authenticity of a performance."
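Levy's first possibility, the audit trail, can be sketched as a hash-chained log in which each entry commits to the previous one, so no transformation can later be altered or reordered without breaking the chain. This is an illustrative toy, not any deployed provenance format:

```python
import hashlib

def entry_hash(prev_hash, step, data):
    """Each entry commits to the previous entry's hash, the name of
    the transformation, and the resulting document bytes."""
    return hashlib.sha256(prev_hash + step.encode() + data).digest()

def build_trail(initial, steps):
    """Build an audit trail: steps is a list of
    (description, resulting_bytes) transformations."""
    trail = [("created", initial, entry_hash(b"", "created", initial))]
    for desc, data in steps:
        prev = trail[-1][2]
        trail.append((desc, data, entry_hash(prev, desc, data)))
    return trail

def verify_trail(trail):
    """Recompute the chain; any tampered entry breaks it."""
    prev = b""
    for desc, data, h in trail:
        if entry_hash(prev, desc, data) != h:
            return False
        prev = h
    return True

trail = build_trail(b"raw capture",
                    [("crop", b"cropped"), ("resize", b"resized")])
print(verify_trail(trail))  # True

# Silently swap the bytes at one step: the provenance no longer checks.
trail[1] = ("crop", b"tampered", trail[1][2])
print(verify_trail(trail))  # False
```

Of course, as Levy notes, the trail only helps if you trust it; the chain proves internal consistency, not that the "raw capture" at its root was honest.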

Rather than seeking "provenance," Professor Farid prefers "scripting." Of course, the scripting is Farid's "mathematical" bag o' tricks. It may work to ferret out forgeries or alterations, or it may not (e.g., is "sampling" used?). My apprehension is not that it might or might not work, but that, by imparting blind trust to a script, we might never know under what circumstances it would not work.

Another good quote from David Levy:

"Understanding what we want to accomplish, and what we can accomplish, with regard to authenticity in the digital realm will take considerable effort."

Content authentication information, be it for images or otherwise, should be verifiably embedded in, or associated with, data at the time the data is first instantiated. It certainly would save a great deal of time and resources.