Hey, did my representative really say that? Is that really US President Donald Trump on that video, or am I being duped?
New technology on the Internet lets anyone make videos of real people appearing to say things they have never said.
Republicans and Democrats predict that this high-tech way of putting words in someone’s mouth will become the latest weapon in disinformation wars against the US and other Western democracies.
Illustration: Mountain People
We are not talking about lip-syncing videos. This technology uses facial mapping and artificial intelligence (AI) to produce videos that appear so genuine it is hard to spot the phonies.
Lawmakers and intelligence officials worry that the bogus videos — called deepfakes — could be used to threaten national security or interfere in elections.
So far, that has not happened, but experts say it is not a question of if, but when.
“I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire.
“The technology, of course, knows no borders, so I expect the impact to ripple around the globe,” he said.
When an average person can create a realistic fake video of the president saying anything they want, “we have entered a new world where it is going to be difficult to know how to believe what we see,” Farid said.
The reverse is also a concern. People might dismiss genuine footage, say of a real atrocity, as fake in order to score political points.
Realizing the implications of the technology, the US Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos.
Right now, it takes extensive analysis to identify phony videos. It is unclear if new ways to authenticate images or detect fakes will keep pace with deepfake technology.
Deepfakes are so named because they utilize deep learning, a form of AI.
They are made by feeding a computer algorithm, or set of instructions, lots of images and audio of a certain person. The program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections.
With enough video and audio of someone, a fake video of the person can be combined with fake audio to make them appear to say anything at all.
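To make the "feed the algorithm lots of images" idea concrete, below is a minimal, hypothetical sketch of the shared-encoder, two-decoder setup that early face-swap tools of this kind were reported to use. The layer sizes, the 64x64 face crops and the random placeholder data are illustrative assumptions, not the code of any actual deepfake tool.

```python
# Illustrative sketch only: a shared encoder learns a common "face" representation,
# while each decoder learns to reconstruct one specific person from it.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared face representation
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()   # learns to rebuild person A's face
decoder_b = Decoder()   # learns to rebuild person B's face
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Random tensors standing in for aligned 64x64 face crops of each person.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared encoding.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: encode person A's expression, decode with B's decoder, yielding
# person B's face "performing" person A's expressions.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The point of the sketch is the division of labor: because both people pass through the same encoder, the encoding captures pose and expression, and switching decoders at the end transplants one person's performance onto the other's face.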
So far, deepfakes have mostly been used to smear celebrities or as gags, but it is easy to foresee a nation-state using them for nefarious activities against the US, said US Senator Marco Rubio, one of several members of the US Senate Select Committee on Intelligence who are expressing concern about deepfakes.
A foreign intelligence agency could use the technology to produce a fake video of a US politician using a racial epithet or taking a bribe, Rubio said.
They could use a fake video of a US soldier massacring civilians overseas or one of a US official supposedly admitting a secret plan to carry out a conspiracy.
Imagine a fake video of a US leader — or an official from North Korea or Iran — warning the US of an impending disaster.
“It’s a weapon that could be used — timed appropriately and placed appropriately — in the same way fake news is used, except in a video form, which could create real chaos and instability on the eve of an election or a major decision of any sort,” Rubio told reporters.
Deepfake technology still has a few hitches — for instance, people’s blinking in fake videos might appear unnatural — but the technology is improving.
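The unnatural blinking mentioned above is one of the cues forensic researchers have described. As a rough illustration, the sketch below estimates a blink rate from per-frame eye landmarks; it assumes the landmarks are already extracted by some off-the-shelf facial-landmark detector, and the 0.2 "closed eye" threshold and 30 fps are illustrative values, not a validated detector.

```python
# Rough heuristic: synthesized faces in early deepfakes tended to blink far less
# often than real people, so an implausibly low blink rate is one weak warning sign.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, in the common 6-point layout."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(per_frame_eyes, fps=30, closed_threshold=0.2):
    """Count contiguous runs of 'closed' frames as blinks and scale to a per-minute rate."""
    ratios = np.array([eye_aspect_ratio(eye) for eye in per_frame_eyes])
    closed = ratios < closed_threshold
    # A blink starts wherever a closed frame follows an open one.
    blink_count = int(closed[0]) + int(np.sum(closed[1:] & ~closed[:-1]))
    minutes = len(ratios) / fps / 60.0
    return blink_count / minutes if minutes > 0 else 0.0
```

A real forensic pipeline would combine many such signals; as the article notes, any single artifact like blinking is likely to be engineered away as the technology improves.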
“Within a year or two, it’s going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.
“This technology, I think, will be irresistible for nation-states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions,” Grotto said, calling for government leaders and politicians to clearly say it has no place in civilized political debate.
Crude fake videos have been used for malicious political purposes for years, so there is no reason to believe that the more realistic, higher-tech versions will not become tools in disinformation campaigns.
Rubio said that in 2009, the US embassy in Moscow complained to the Russian Ministry of Foreign Affairs about a fake sex video it said was made to damage the reputation of a US diplomat.
The video showed the married diplomat, who was a liaison to Russian religious and human rights groups, making telephone calls on a dark street. It then showed the diplomat in his hotel room, scenes that apparently were shot with a hidden camera.
Later, the video appeared to show a man and a woman having sex in the same room with the lights off, although it was not at all clear that the man was the diplomat.
John Beyrle, who was the US ambassador in Moscow at the time, blamed the Russian government for the video, which he said was clearly fabricated.
Michael McFaul, who was the US ambassador to Russia between 2012 and 2014, said Russia has used disinformation videos against various political actors for years and that he too had been a target.
Russian state propaganda had inserted his face into photographs and “spliced my speeches to make me say things I never uttered and even accused me of pedophilia,” he said.