Financial Times
Anjana Ahuja

The ultimate fake news scenario

Imagine looking in a mirror and seeing not your own reflection but that of Donald Trump. Each time you contort your face, you simultaneously contort his. You smile, he smiles. You scowl, he scowls. You control, in real time, the face of the president of the US.

That is the sinister potential of Face2Face, a technology developed by researchers at Stanford University in California that allows someone to transpose their facial gestures on to the video of someone else.

Now imagine marrying that "facial re-enactment" technology to artfully snipped audio clips of the president's previous public pronouncements. You post your creation on YouTube: a convincing snippet of Mr Trump declaring nuclear war against North Korea. In the current febrile climate, the incendiary video might well go viral before the White House can scramble a denial.

It is the ultimate fake news scenario but not an inconceivable one: scientists have already demonstrated the concept by altering YouTube videos of George HW Bush, Barack Obama and Vladimir Putin.

Now Darpa, the Defense Advanced Research Projects Agency in the US, has embarked on a research programme called MediFor (short for media forensics). Darpa says its programme is about levelling a field that "currently favours the manipulator", a nefarious advantage that becomes a national security concern if the goal of forgery is propaganda or misinformation.

The five-year programme is intended to produce a system capable of analysing hundreds of thousands of images a day and immediately assessing whether they have been tampered with. Professor Hany Farid, a computer scientist at Dartmouth College, New Hampshire, is among the academics involved. He specialises in detecting the manipulation of images, and his work includes assignments for law enforcement agencies and media organisations.

"I've now seen the technology get good enough that I'm very concerned," Prof Farid told Nature last week. "At some point, we will reach a stage when we can generate realistic video, with audio, of a world leader, and that's going to be very disconcerting." He describes the attempt to keep up with the manipulators as a technological arms race.

At the moment, spotting fakery takes time and expert knowledge, which means the bulk of bogus pictures slip into circulation unchallenged. The first step with a questionable picture is to feed it into a reverse image search, such as Google Image Search, which will retrieve the picture if it has appeared elsewhere (this has proved surprisingly useful in uncovering scientific fraud, in cases where authors have plagiarised graphs).

Photographs can be scrutinised for unusual edges or disturbances in colour. A colour image is composed of individual single-colour pixels; these dots are combined in particular ways to create the hues and shading of a photograph. Inserting another image, or airbrushing something out, disrupts that characteristic pixellation. Shadows are another giveaway. Prof Farid cites a 2012 viral video of an eagle snatching a child: his speedy analysis revealed inconsistent shadows, exposing the film as a computer-generated concoction.
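One such pixel-level check can be sketched in a few lines of Python. This is not Prof Farid's own toolchain; it is a crude illustration of error level analysis using the Pillow library, in which a photograph is re-saved as a JPEG and compared with the original, because regions spliced in from another source often recompress differently. The file name and JPEG quality below are illustrative assumptions.

from PIL import Image, ImageChops
import io

def error_levels(path, quality=90):
    # Re-save the image as a JPEG at a known quality, then diff it
    # against the original. Regions pasted in from another source often
    # recompress differently, so they stand out in the difference image.
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    difference = ImageChops.difference(original, resaved)
    # The differences are usually faint, so stretch them to the full range.
    max_diff = max(hi for _, hi in difference.getextrema()) or 1
    return difference.point(lambda value: min(255, value * 255 // max_diff))

# Hypothetical input file; save the result and inspect it by eye.
error_levels("questionable_photo.jpg").save("error_levels.png")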

Researchers at Massachusetts Institute of Technology have also developed an ingenious method of determining whether the people in video clips are real or animated. By magnifying video clips and checking colour differences in a person's face, they can deduce whether the person has a pulse. Interestingly, some legal experts have argued that computer-generated child pornography should be covered by the First Amendment, which protects free speech. Cases have turned on experts being able to detect whether offending material contains live victims.
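The MIT group's published work relies on sophisticated video magnification; the sketch below, in Python with OpenCV and NumPy, is only a simplified illustration of the underlying idea. It averages the green channel over a face region, frame by frame, and looks for a dominant frequency in the normal heart-rate band. The clip name and fixed face-box coordinates are assumptions made for the example, standing in for a proper face detector.

import numpy as np
import cv2

def estimate_pulse(video_path, box):
    # Average the green channel inside the face region, frame by frame,
    # then look for a dominant frequency in the normal heart-rate band.
    # A live face tends to show a clear peak; a rendered one usually does not.
    x, y, w, h = box  # face region, assumed known in advance
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    brightness = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        brightness.append(frame[y:y + h, x:x + w, 1].mean())  # green channel
    capture.release()

    signal = np.array(brightness) - np.mean(brightness)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs > 0.7) & (freqs < 4.0)  # roughly 42 to 240 beats per minute
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60  # estimated beats per minute

# Hypothetical clip and face box (x, y, width, height) in pixels.
print(estimate_pulse("clip.mp4", box=(100, 80, 120, 120)))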

Machine learning is aiding the fraudsters: determined fakers can build "generative adversarial networks". A GAN is a sort of Jekyll-and-Hyde network that, on the one hand, generates images and, on the other, rejects those that cannot pass as authentic against a library of real images. The result is a machine with its own inbuilt devil's advocate, able to teach itself how to generate hard-to-spot fakes.
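In outline, that Jekyll-and-Hyde arrangement looks like the PyTorch sketch below. It is a minimal illustration rather than any particular research system: a generator invents images from random noise while a discriminator learns to tell them from real ones, and each is trained against the other. The network sizes, learning rate and 28-by-28 image shape are illustrative assumptions.

import torch
import torch.nn as nn

# Generator: turns a random noise vector into a flattened 28-by-28 image.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
criterion = nn.BCEWithLogitsLoss()

def adversarial_step(real_images):
    # One round of the contest: the discriminator learns to reject fakes,
    # then the generator learns to fool the updated discriminator.
    batch = real_images.size(0)
    fakes = generator(torch.randn(batch, 64))

    # Discriminator step: real images labelled 1, generated images 0.
    opt_d.zero_grad()
    d_loss = (criterion(discriminator(real_images), torch.ones(batch, 1)) +
              criterion(discriminator(fakes.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    opt_g.zero_grad()
    g_loss = criterion(discriminator(fakes), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

# Each real_images batch is assumed to be a (batch, 784) tensor scaled to [-1, 1].

Because the two losses pull in opposite directions, the generator improves only by producing images the discriminator can no longer reject, which is what lets the system teach itself to fake convincingly.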

Not all artifice, however, is malevolent: two students built a program capable of producing art that looks like . . . art. Their source was the WikiArt database of 100,000 paintings: the program, GANGogh, has since generated creations that would not look out of place on a millionaire's wall.

Such is the epic reach of digital duplicity: it threatens not only to disrupt politics and destabilise the world order, but also to reframe our ideas about art.

The writer is a science commentator

Copyright The Financial Times Limited 2017
