There exist, on the Internet, any number of videos that show people doing things they never did. Real people, real faces, close to photo-realistic footage; entirely unreal events. These videos are called deepfakes and they are made using a particular kind of artificial intelligence (AI).
Inevitably enough, they began in porn — there is a thriving online market for celebrity faces superimposed on porn actors’ bodies — but the reason they are being discussed now is that people are worried about their effect on an already fervid political debate. Those worries are real enough to prompt the British government and the US Congress to look at ways of regulating them.
The video that sparked the sudden concern last month was not a deepfake at all: It was a good old-fashioned doctored video of US House Speaker Nancy Pelosi. There were no fancy AIs involved; the video had simply been slowed down to about 75 percent of its usual speed and the pitch of her voice was raised to keep it sounding natural.
It could have been done 50 years ago, but it made her look convincingly drunk or incapable, and was shared millions of times across every platform, including by former New York City mayor Rudy Giuliani — US President Donald Trump’s lawyer.
It got people worrying about fake videos in general and deepfakes in particular. Since the Pelosi video came out, a deepfake of Mark Zuckerberg, the product of a team of satirical artists, in which he apparently talks about how he has “total control of billions of people’s stolen data” and how he “owe[s] it all to Spectre,” has also gone viral.
Last year, Oscar-winning director Jordan Peele and his brother-in-law, BuzzFeed CEO Jonah Peretti, created a deepfake of former US president Barack Obama apparently calling Trump a “complete and utter dipshit” to warn of the risks to public discourse.
A lot of fears about technology are overstated. For instance, despite widespread worries about screen time and social media, high-quality research has found little evidence that either has a major effect on mental health.
Every generation has its techno-panic: video nasties, violent computer games, pulp novels.
However, deepfakes might be a different matter, said Sandra Wachter, a professor in the law and ethics of AI at the Oxford Internet Institute.
“I can understand the public concern,” she said. “Any tech developing so quickly could have unforeseen and unintended consequences.”
It is not that fake videos or misinformation are new, but things are changing so fast that it is challenging people’s ability to keep up, she said.
“The sophisticated way in which fake information can be created, how fast it can be created and how endlessly it can be disseminated is on a different level. In the past, I could have spread lies, but my range was limited,” Wachter said.
Here is how deepfakes work. They are the product of not one, but two AI algorithms, which work together in something called a generative adversarial network (GAN). The two algorithms are called the generator and the discriminator.
Imagine a GAN that has been designed to create believable spam e-mails. The discriminator would be exactly the same as a real spam filter algorithm: It would simply sort all e-mails into either “spam” or “not spam.”
It would do that by being given a huge folder of e-mails and determining which elements were most often associated with the ones it was told were spam: perhaps words like “enlarger” or “pills” or “an accident that wasn’t your fault.” That folder is its “training set.”
Then, as new e-mails come in, it would give each one a rating based on how many of these features it detected: 60 percent likely to be spam, 90 percent likely and so on. All e-mails above a certain threshold would go into the spam folder.
The bigger its training set, the better it gets at telling real from fake.
However, the generator algorithm works the other way. It takes that same data set and uses it to build new spam e-mails that are designed not to look like spam, so that they slip past the filter. It knows to avoid words like “penis” or “won an iPad.”
When it makes them, it puts them into the stream of data going through the discriminator. The two are in competition: If the discriminator is fooled, the generator “wins”; if it is not, the discriminator “wins.”
Either way, it is a new piece of data for the GAN. The discriminator gets better at telling fake from real content, so the generator has to get better at creating the fakes. It is an arms race, a self-reinforcing cycle. This same system can be used for creating almost any digital product: spam e-mails, art, music — or, of course, videos.
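To make that arms race concrete, here is a minimal sketch of a GAN training loop, written in Python and assuming the PyTorch library is available. The tiny networks, the one-dimensional “real” data and all the numbers are illustrative choices for the example, not how deepfake systems are actually built: the discriminator is pushed to score real samples as genuine and generated ones as fake, while the generator is pushed to make the discriminator misjudge its output.

```python
# Minimal GAN sketch (illustrative only): the generator learns to produce
# samples resembling a simple 1-D Gaussian "real" distribution, while the
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: roughly N(3.0, 0.5)
    fake = generator(torch.randn(64, 8))   # generated samples from random noise

    # Discriminator step: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label the fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# If training worked, generated samples should cluster near the real mean of 3.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

Each pass through the loop is one round of the competition described above: the discriminator gets slightly better at telling real from fake, which forces the generator to produce slightly more convincing fakes on the next round.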
GANs are hugely powerful and have many interesting uses — they are not just for creating deepfakes, said Christina Hitrova, a digital ethics researcher at the Alan Turing Institute for AI.
The photo-realistic imaginary people at ThisPersonDoesNotExist.com are all created with GANs.
Discriminator algorithms (such as spam filters) can be improved by GANs creating ever better fakes to test them against. GANs can also do amazing things with pictures, including sharpening up fuzzy ones or colorizing black-and-white ones.
“Scientists are also exploring using GANs to create virtual chemical molecules ... to speed up materials science and medical discoveries: you can generate new molecules and simulate them to see what they can do,” Hitrova said.
GANs were only invented in 2014, but have already become one of the most exciting tools in AI. They are also widely available, easy to use and increasingly sophisticated, able to create ever more believable videos.
“There’s some way to go before the fakes are undetectable,” Hitrova said. “For instance, with CGI [computer-generated imagery] faces, they haven’t quite perfected the generation of teeth or eyes that look natural, but this is changing, and I think it’s important that we explore solutions — technological solutions and digital literacy solutions, as well as policy solutions.”
With GANs, one technological solution presents itself immediately: simply use the discriminator to tell whether a given video is fake.
However, “obviously that’s going to feed into the fake generator to produce even better fakes,” Hitrova said.
For instance, one tool was able to identify deepfakes by looking at the pattern of blinking, she said, but the next generation of fakes would take that into account, and future discriminators would have to look for something else.
The arms race that goes on inside GANs will go on outside, as well.
Other technological solutions include hashing — essentially a form of digital watermarking, giving a video file a short string of numbers that is lost if the video is tampered with — or, controversially, “authenticated alibis,” wherein public figures constantly record where they are and what they are doing, so that if a deepfake circulates apparently showing them doing something they want to disprove, they can show what they were really doing.
That idea has been tentatively floated by AI law specialist Danielle Citron, but Hitrova said that has “dystopian” implications.
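As a small illustration of the hashing idea mentioned above, the Python sketch below computes a SHA-256 fingerprint of a file and shows that the fingerprint no longer matches once a single byte has been altered. The file and its contents are made up for the example, and a real video-authentication scheme would need far more than this, in particular a trusted way to publish and look up the original fingerprint, but the tamper-evidence principle is the same.

```python
# Minimal sketch of hashing as tamper evidence (illustrative only): compute a
# cryptographic fingerprint of a file, change one byte, and show the mismatch.
import hashlib
import os
import tempfile

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a published video file; the name and contents are hypothetical.
path = os.path.join(tempfile.mkdtemp(), "clip.bin")
with open(path, "wb") as f:
    f.write(b"original footage bytes")

published = fingerprint(path)  # the fingerprint released alongside the original

with open(path, "r+b") as f:   # simulate tampering by overwriting one byte
    f.seek(0)
    f.write(b"X")

print(published == fingerprint(path))  # False: the copy no longer matches
```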
None of these solutions can entirely remove the risk of deepfakes. Some form of authentication might work to tell you that certain things are real, but what if someone wants to deny the reality of something real?
If there had been deepfakes in 2016, “Trump could have said: ‘I never said “grab them by the pussy,”’” Hitrova said.
Most would not have believed him — it came from Access Hollywood footage and was confirmed by the show’s presenter — but it would have given people an excuse to doubt it.
Education — critical thinking and digital literacy — will be important too.
Finnish children score highly on their ability to spot fake news, a trait that is credited to the country’s policy of teaching critical thinking skills in school.
However, that can only be part of the solution. For one thing, most people are not in school.
Even if the current generation of schoolchildren becomes more wary — as they naturally are anyway, having grown up with digital technology — their elders would remain less so, as can be seen in the case of British lawmakers being fooled by obvious fake tweets.
“Older people are much less tech-savvy,” Hitrova said. “They’re much more likely to share something without fact-checking it.”
Wachter and Hitrova agreed that some sort of regulatory framework would be necessary. The US and the UK are grappling with the idea.
In the US, social media platforms are not held responsible for their content. Congress is considering changing that and making such immunity dependent on “reasonable moderation practices.” Some sort of requirement to identify fake content has also been floated.
Something like copyright, by which people have the right for their face not to be used falsely, might be useful, Wachter said, but added that by the time you have taken down a deepfake, the reputational damage might already be done, so pre-emptive regulation is needed, too.
A European Commission report earlier this month found that digital disinformation was rife in recent European elections, and that platforms are failing to take steps to reduce it.
For instance, Facebook has entirely washed its hands of responsibility for fact-checking, saying that it would only take down fake videos after a third-party fact-checker has declared them to be false.
However, Britain is taking a more active role, Hitrova said.
“The EU is using the threat of regulation to force platforms to self-regulate, which so far they have not,” Hitrova said.
“But the UK’s recent online harms white paper and the [British] Department for Digital, Culture, Media and Sport subcommittee [on disinformation, which has not yet reported, but is expected to recommend regulation] show that the UK is really planning to regulate,” she said.
“It’s an important moment; they’ll be the first country in the world to do so, they’ll have a lot of work — it’s no simple task to balance fake news against the rights to parody and art and political commentary — but it’s truly important work,” she added.
Wachter agreed, saying: “The sophistication of the technology calls for new types of law.”
In the past, as new forms of information and disinformation have arisen, society has developed antibodies to deal with them: few people would be fooled by World War I propaganda now.
However, the world is changing so fast that people might not be able to develop those antibodies this time around, Wachter said, adding that even if they do, it could take years, and there is a real problem to sort out right now.
“Maybe in 10 years’ time we’ll look back at this stuff and wonder how anyone took it seriously, but we’re not there now,” Wachter said.