In an age of Photoshop, filters and social media, many people are used to seeing manipulated pictures — subjects become slimmer and smoother or, in the case of Snapchat, transformed into puppies.
However, there is a new breed of video and audio manipulation tools, made possible by advances in artificial intelligence and computer graphics, that would allow for the creation of realistic-looking footage of public figures appearing to say, well, anything.
US President Donald Trump declaring his proclivity for water sports. Former US secretary of state Hillary Rodham Clinton describing the stolen children she keeps locked in her wine cellar. Actor Tom Cruise finally admitting what we suspected all along — that he is a brony (a My Little Pony fan).
This is the future of fake news. People have long been told not to believe everything they read, but soon they will have to question everything they see and hear as well.
For now, several research teams are working on capturing and synthesizing different visual and audio elements of human behavior.
Software developed at Stanford University is able to manipulate video footage of public figures to allow a second person to put words in their mouth — in real time.
Face2Face captures the second person’s facial expressions as they talk into a webcam and then morphs those movements directly onto the face of the person in the original video.
The research team demonstrated their technology by puppeteering videos of former US president George W. Bush, Russian President Vladimir Putin and Trump.
On its own, Face2Face is a fun plaything for creating memes and entertaining late-night talk show hosts.
However, with the addition of a synthesized voice, it becomes more convincing — not only does the digital puppet look like the politician, but it can also sound like the politician.
A research team at the University of Alabama at Birmingham has been working on voice impersonation.
With three to five minutes of audio of a victim’s voice — taken live or from YouTube videos or radio shows — an attacker can create a synthesized voice that can fool both humans and voice biometric security systems used by some banks and smartphones.
The attacker can then talk into a microphone and the software will convert it so that the words sound like they are being spoken by the victim — whether that is over the telephone or on a radio show.
Canadian start-up Lyrebird has developed similar capabilities, which it says can be used to turn text into on-the-spot audiobooks “read” by famous voices or for characters in video games.
Although its intentions might be benign, voice-morphing technology of this kind could be combined with face-morphing technology to create convincing fake statements by public figures.
To get a sense of how insidious these adulterations can be, you only have to look at the University of Washington’s Synthesizing Obama project. Researchers took the audio from one of former US president Barack Obama’s speeches and used it to animate his face in an entirely different video with incredible accuracy, thanks to training a recurrent neural network on hours of footage.
Beyond fake news, there are many other implications, said Nitesh Saxena, associate professor and research director of the University of Alabama at Birmingham’s department of computer science.
“You could leave fake voice messages posing as someone’s mom. Or defame someone and post the audio samples online,” Saxena said.
These morphing technologies are not yet perfect. The facial expressions in the videos can seem a little distorted or unnatural and the voices can sound a little robotic.
However, given time, they will be able to faithfully recreate the sound or appearance of a person — to the point where it might be very difficult for humans to detect the fraud.
Given the erosion of trust in the media and the rampant spread of hoaxes via social media, it will become even more important for news organizations to scrutinize content that looks and sounds like the real deal.
Telltale signs include where the video or audio was created, who else was at the event and whether the weather conditions match the records of that day.
People should also be looking at the lighting and shadows in the video, whether all of the elements featured in the frame are the right size and whether the audio is synced perfectly, said Mandy Jenkins, from social news company Storyful, which specializes in verifying news content.
Doctored content might not pass the scrutiny of a rigorous newsroom, but if posted as a grainy video to social media, it could spread virally and trigger a public relations, political or diplomatic disaster. Imagine Trump declaring war on North Korea, for example.
“If someone looks like Trump and speaks like Trump, they will think it’s Trump,” Saxena said.
“We already see it doesn’t even take doctored audio or video to make people believe something that isn’t true,” Jenkins added. “This has the potential to make it worse.”