Deepfakes a growing problem: Experts
The Straits Times, 26 Jun 2019
They call for greater awareness as tech used to create fake videos becomes more easily available
Deepfake videos created using face-swopping technology are challenging the notion that seeing is believing, with experts saying that the problem is bigger than ever and calling for greater awareness about digital trickery.
Celebrities, politicians and everyday people have all been targeted in the videos, while a gay sex video purportedly featuring Malaysia's Economic Affairs Minister Azmin Ali is currently being scrutinised to see if it has been faked.
Algorithms used to create deepfake videos look for instances where the two individuals whose faces are being synthesised show similar expressions and facial positioning.
After the best match is found through a machine learning technique known as a "generative adversarial network" (GAN), the faces are swopped.
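In a GAN, two neural networks are trained in opposition: a generator that produces fakes, and a discriminator that tries to tell them apart from real data. The toy sketch below, written in PyTorch, illustrates that adversarial loop only; every name, size and data source in it is illustrative, and a real face-swop model would train on face images rather than random vectors.

```python
# Toy sketch of the generative adversarial network (GAN) idea the article
# refers to, using PyTorch. All sizes and data here are illustrative: a
# real face-swop model trains on face images, not random vectors.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # size of the random input and of the "fake" output

generator = nn.Sequential(           # learns to produce convincing fakes
    nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA), nn.Tanh())
discriminator = nn.Sequential(       # learns to tell real from fake
    nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, DATA)             # stand-in for real training data
    fake = generator(torch.randn(32, LATENT))

    # Discriminator step: label real samples 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```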
Information about the technology is publicly available and the ability to generate such videos is spreading, with a Google search showing that anyone can pay someone else to create a deepfake video.
For example, deepfakesweb.com charges just $10 to create a video, which can be ready in four hours.
Mr Bryan Tan, a lawyer from Pinsent Masons MPillay specialising in technology law and data protection, said the problem has been around for about six years and is getting worse. "With such easy availability and usability, mass use of this becomes a danger especially to an unsuspecting public," he said.
A spokesman for cyber security company Malwarebytes Labs told The Straits Times that the term "deepfake" was coined in 2017, when a Reddit user with the handle DeepFakes posted explicit videos in which the faces of celebrities had been swopped onto the bodies of pornographic actors.
Since then, the technology has improved vastly.
"With the recently developed software and hardware, creating a deepfake video is so easy that the hardest part is to find enough still images to feed to the algorithm in order to create a realistic imitation," said the spokesman.
"If we have the original video of a person giving a speech, we only need to type out the text we want them to say to change the existing video into the desired one."
Tell-tale signs of a deepfake video, the firm says, are a distorted background or unnatural and asymmetrical facial features. But it warned that current forensic tools for spotting deepfake videos focus only on certain characteristics, which malicious actors could easily avoid.
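Research on deepfake forensics has also pointed to unusual frequency-domain artefacts in generated frames as one such characteristic. The sketch below is only an illustrative toy of that idea, assuming NumPy; the threshold and function names are invented for the example, and no real detector relies on a fixed cut-off like this.

```python
# Illustrative sketch of one simple forensic cue: generated frames often
# carry unusual high-frequency artefacts. This toy heuristic compares
# high-frequency spectral energy against a threshold; the threshold and
# function names are assumptions, not a real detection tool.
import numpy as np

def high_freq_ratio(gray_frame: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency centre."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h//2 - ch:h//2 + ch, w//2 - cw:w//2 + cw].sum()
    return 1.0 - low / spectrum.sum()

def looks_suspicious(gray_frame: np.ndarray, threshold: float = 0.35) -> bool:
    # A real detector would be trained on labelled data, not a fixed cut-off.
    return high_freq_ratio(gray_frame) > threshold

frame = np.random.rand(256, 256)   # stand-in for a decoded greyscale frame
print(high_freq_ratio(frame), looks_suspicious(frame))
```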
Yet even fake videos that are not well produced can still spread widely. Last month, a crudely slowed-down video depicting US House Speaker Nancy Pelosi as drunk and slurring her words showed how even basic manipulation can trick its way into being shared by many.
The head of the humanities, arts and social sciences at Singapore University of Technology and Design, Professor Lim Sun Sun, explained that the power of deepfake videos lies in their emotional appeal.
"When something has that shock value it makes you sit up, it makes you even want to share it," said the Nominated MP and Media Literacy Council (MLC) member. "It is a kind of a psychological bias that we are all predisposed to."
That is why when educating the public about fake news, she said, the MLC reminds people to be alert to any kind of content that seems extreme in nature, as reactions like anger, disgust or even sympathy could encourage people to share the content without first assessing its validity.
Mr Tan said that while many videos are still legitimate and netizens "still have reason to trust photos and video", the rise in deepfakes and their ease of creation means that netizens must treat the content they consume critically rather than blindly accepting it.
Prof Lim advises that before sharing unusual content, individuals should first check with the relevant parties or individuals, and compare the information against other sources.
"We need to be more circumspect and less gullible, to recognise and internalise the fact that there are increasingly fake news purveyors out there, operating at extremely high levels of sophistication," she said. "So any kind of news or content that seems too extreme or too sensationalist should be taken with a pinch of salt, and we should all do the due diligence to find out more before we seek to comment or to share."
TECHNIQUES USED TO CREATE FAKE VIDEOS
FACE SWOP
Many people might be familiar with this from Instagram and Snapchat filters, but such technology - which uses an algorithm to seamlessly insert the entire face of one person onto another - can be dangerous. This technique could put people in situations they were never really in by placing their faces onto other people's bodies.
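As a rough illustration of the blending step such a swop involves, the sketch below pastes a cropped face into another image using OpenCV's Poisson blending. The filenames and the hard-coded face rectangle are assumptions for the example; a real pipeline would locate the face with a landmark detector and warp it to match the target before blending.

```python
# Minimal sketch of the blending step of a naive face swop, using OpenCV's
# Poisson blending. A real pipeline would first detect and align facial
# landmarks; here the face region is an assumed, hard-coded rectangle.
import cv2
import numpy as np

source = cv2.imread("face_a.jpg")   # face to paste (illustrative filenames)
target = cv2.imread("face_b.jpg")   # image receiving the face

# Assumed region of the source face; a landmark detector would supply this.
x, y, w, h = 100, 80, 160, 200
face = source[y:y + h, x:x + w]

mask = 255 * np.ones(face.shape, dtype=np.uint8)       # blend the whole crop
centre = (target.shape[1] // 2, target.shape[0] // 2)  # paste at image centre

# Poisson blending hides the seam between the pasted face and the target.
swopped = cv2.seamlessClone(face, target, mask, centre, cv2.NORMAL_CLONE)
cv2.imwrite("swopped.jpg", swopped)
```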
FACIAL REENACTMENT
This technique can transfer facial expressions onto a person in another video - regardless of his appearance - making him appear angry, surprised or disgusted.
LIP SYNC
Grafting a synthesised, lip-syncing mouth onto the face of someone else and combining the footage with audio can make it seem as if a person said something he never did.
MOTION TRANSFER
This advanced form of deepfake video captures the movements of a person and transfers them to another target. According to Nieman Lab at Harvard, Wall Street Journal correspondent Jason Bellini, working with researchers at the University of California, Berkeley, tried this technique out for himself and ended up dancing like Bruno Mars.
YANG MI, FEBRUARY 2019
The problem of deepfake videos was thrust into the national spotlight in China when the face of well-known actress Yang Mi, 32, was spliced into a television drama that had aired 25 years earlier.
The face of actress Athena Chu was replaced with Yang Mi’s, causing hilarity – and concern – on social media.
It became one of the top trending stories on Weibo, with the related hashtag being read over 120 million times.
The incident prompted China’s top legislative body to raise concerns about deepfake videos and artificial intelligence technology.
KIM KARDASHIAN WEST, MAY 2019
On May 29, anti-advertising activists Brandalism uploaded a fake but realistic video of reality star Kim Kardashian West to YouTube.
She was depicted discussing a shadowy organisation called “Spectre” and mocking her fans for violating copyright. The video used to make this deepfake was uploaded in April by Vogue magazine.
The magazine’s publisher, Conde Nast, filed a copyright claim against the video, and YouTube has since removed it.
KIT HARINGTON, JUNE 2019
A deepfake video of Game Of Thrones main character Jon Snow apologising for the show’s polarising final season went viral earlier this month.
According to the New York Post, the video titled “Breaking: Jon Snow finally apologised for Season 8” was posted by YouTube channel Eating Things on June 13 and amassed more than 1.7 million views.
In the video, Kit Harington, who plays Jon, refers to the coffee cup that was accidentally left on a table in the fourth episode of the final season. He also “apologised” for how the season concluded, saying: “I know nothing made sense at the end.”