Imagine looking in a mirror and seeing not your own reflection but that of someone else: a head of state, say, whose face moves exactly as yours does. That is the sinister potential of Face2Face, a technology developed by researchers at Stanford University, the Max Planck Institute for Informatics and the University of Erlangen-Nuremberg, which can transfer one person's facial expressions, in real time, on to video footage of another.
Now imagine marrying that "facial re-enactment" technology to artfully snipped audio clips of the president's previous public pronouncements. You post your creation on YouTube: a convincing snippet of the president apparently saying something he never said.
It is the ultimate fake news scenario but not an inconceivable one: scientists have already demonstrated the concept by altering YouTube videos of George HW Bush and other public figures, making them appear to mouth words they never spoke.
Now Darpa, the US Defense Advanced Research Projects Agency, is funding an effort to automate the detection of doctored imagery. The five-year programme is intended to turn out a system capable of analysing hundreds of thousands of images a day and immediately assessing whether they have been tampered with. Professor Hany Farid, a digital forensics specialist at Dartmouth College, is among those raising the alarm.
"I've now seen the technology get good enough that I'm very concerned," he says.
At the moment, spotting fakery takes time and expert knowledge, meaning that the bulk of bogus pictures slip into existence unchallenged. The first step with a questionable picture is to feed it into a reverse image search, such as Google Image Search, which will retrieve the picture if it has appeared elsewhere (this has proven surprisingly useful in uncovering scientific fraud, in instances when authors have plagiarised graphs).
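Reverse image search engines match near-duplicates by comparing compact "perceptual hashes" rather than raw pixels. A minimal sketch of one such scheme, a difference hash, in Python; the function names and parameters here are illustrative, not any search engine's actual machinery:

```python
import numpy as np

def dhash(image, hash_size=8):
    """Difference hash: downscale, then record whether each cell is
    brighter than its right-hand neighbour. Robust to resizing and
    uniform brightness changes."""
    rows = np.array_split(image, hash_size, axis=0)
    small = np.array([
        [chunk.mean() for chunk in np.array_split(row.mean(axis=0), hash_size + 1)]
        for row in rows
    ])  # (hash_size, hash_size + 1) grid of block averages
    return (small[:, 1:] > small[:, :-1]).flatten()  # 64 bits for hash_size=8

def hamming(a, b):
    """Number of differing bits: small means 'probably the same picture'."""
    return int(np.count_nonzero(a != b))

# A left-to-right gradient, a brightened copy, and a mirror image.
img = np.tile(np.arange(72, dtype=float), (72, 1))
print(hamming(dhash(img), dhash(img + 25.0)))    # 0: brightness shift, same hash
print(hamming(dhash(img), dhash(img[:, ::-1])))  # 64: every bit flipped
```

Because the hash survives resizing, recompression and brightness tweaks, a copy of a picture that has "appeared elsewhere" can be found even when its bytes differ.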
Photographs can be scrutinised for unusual edges or disturbances in colour. A colour image is built from millions of single-colour pixels, combined in particular ways to create its many hues and shadings. Inserting another image, or airbrushing something out, disrupts that characteristic pixellation. Shadows are another giveaway.
Researchers have shown that the shadows in a genuine photograph must be consistent with a single set of light sources; an object pasted in from elsewhere often casts shadows that do not line up with the rest of the scene.
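Those pixel-level disruptions can be hunted for automatically. A minimal sketch, assuming the photograph arrives as a greyscale NumPy array: a spliced-in region usually carries noise statistics that differ from the rest of the frame, so mapping the variance of a simple high-pass residual, block by block, makes the insertion stand out.

```python
import numpy as np

def noise_map(image, block=16):
    """Variance of a high-pass residual, computed per block. A pasted-in
    region often shows a noise level inconsistent with its surroundings."""
    pad = np.pad(image, 1, mode="edge")
    # Residual: each pixel minus the mean of its 4-neighbourhood.
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    resid = image - neigh
    h, w = image.shape
    hb, wb = h // block, w // block
    tiles = resid[:hb * block, :wb * block].reshape(hb, block, wb, block)
    return tiles.var(axis=(1, 3))

# Synthetic example: a quiet background with a noisier "pasted" patch top-left.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[:32, :32] += rng.normal(0.0, 8.0, (32, 32))
nm = noise_map(img)
print(nm[0, 0] > 10 * nm[3, 3])  # the pasted blocks stand out starkly
```

Real forensic tools use far subtler statistics, but the principle is the same: a composite rarely keeps its noise fingerprint uniform.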
Machine learning is aiding the fraudsters: determined fakers can build "generative adversarial networks". A GAN is a sort of Jekyll-and-Hyde network that, on the one hand, generates images and, on the other, rejects those that fail to pass muster against a library of genuine images. The result is a machine with its own inbuilt devil's advocate, able to teach itself how to generate hard-to-spot fakes.
Not all artifice, however, is malevolent: two students built a program capable of producing art that looks like . . . art. Their source was the WikiArt database of 100,000 paintings: the program, GANGogh, has since generated creations that would not look out of place on a millionaire's wall.
Such is the epic reach of digital duplicity: it threatens not only to disrupt politics and destabilise the world order, but also to reframe our ideas about art.
The writer is a science commentator
Copyright The Financial Times Limited 2017