While science fiction is often preoccupied with the threat of artificial intelligence successfully imitating human intelligence, researchers say a bigger danger right now is people using the technology to imitate one another.
A recent survey from University College London ranked deepfakes as the most worrying application of machine learning in terms of potential for crime and terrorism. According to 31 AI experts, the video fabrication technique could fuel a variety of crimes—from discrediting a public figure with fake footage to extorting money through video call scams impersonating a victim’s loved one—with the cumulative effect leading to a dangerous societal mistrust of audio and visual evidence.
The experts were asked to rank a list of 20 identified threats associated with AI, ranging from driverless car attacks to AI-authored phishing messages and fake news. The criteria for the ranking included overall risk, ease of use, profit potential and how difficult each threat would be to detect and stop.
Deepfakes are worrying on all of these counts. They are easily made and are increasingly hard to distinguish from real video. Listings advertising deepfake services are easily found in corners of the dark web, and the prominence of the targets and the variety of possible crimes mean there could be a lot of money at stake.
While the threat of deepfakes was once confined to celebrities, politicians and other prominent figures with enough visual data to train an AI, more recent systems have proven effective when trained on as little as a couple of photos.
“People now conduct large parts of their lives online and their online activity can make and break reputations,” said the report’s lead author, UCL researcher Matthew Caldwell, in a statement. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”
Despite the abundance of possible criminal applications of deepfakes, a report last fall found that they are so far primarily used by bad actors to create fake pornography without the subject’s consent.
Not all uses for deepfakes are nefarious, however. Agencies like Goodby, Silverstein & Partners and R/GA have used them in experimental ad campaigns, and the underlying generative technology is helping fuel different types of AI creativity and art.