We haven’t yet seen a clear frontrunner emerge as the Democratic candidate for next year’s US presidential election, but I have been interested in another race — the race to see which buzzword is going to be a pivotal issue in political reporting, hot takes and the general political introspection that elections bring. In 2016 it was “fake news.”
“Deepfake” is shaping up as one of the leading candidates for next year.
This week the US House Permanent Select Committee on Intelligence asked Facebook, Twitter and Google what they were planning to do to combat deepfakes in next year’s election. It is a fair question. With a bit of work, deepfakes could be convincing and misleading enough to make fake news look like child’s play.
Deepfake, a portmanteau of “deep learning” and “fake,” refers to artificial intelligence (AI) software that can superimpose a digital composite face on to an existing video (and sometimes audio) of a person.
The term first rose to prominence when Motherboard reported on a Reddit user who was using AI to superimpose the faces of film stars on to existing porn videos, creating (with varying degrees of realism) porn starring Emma Watson, Gal Gadot, Scarlett Johansson and an array of other female celebrities.
However, there is also a range of political possibilities. Filmmaker Jordan Peele highlighted some of the harmful potential in an eerie video produced with BuzzFeed, in which he literally put his words in former US President Barack Obama’s mouth. Satisfying or not, hearing Obama call US President Donald Trump a “total and complete dipshit” is concerning, given that he never said it.
Just as concerning as the potential for deepfakes to be abused is that tech platforms are struggling to deal with them. For one thing, their content moderation issues are well-documented. Most recently, a doctored video of US House of Representatives Speaker Nancy Pelosi, slowed and pitch-edited to make her appear drunk, was tweeted by Trump. Twitter did not remove the video, YouTube did and Facebook de-ranked it in the news feed.
For another, they have already tried, and failed, to moderate deepfakes. In a laudably fast response to the non-consensual pornographic deepfakes, Twitter, Gfycat, Pornhub and other platforms acted to remove them and to develop technology to help them do so.
However, once a technology is released, containing it is like herding cats. Deepfakes are a moving target: as soon as moderators find a way of detecting them, people will find a workaround.
However, while there are important questions about how to deal with deepfakes, we are making the mistake of distancing them from broader questions and looking for exclusively technological solutions. We made the same mistake with fake news, where the prime offender was seen to be tech platforms rather than the politicians and journalists who had created an environment in which lies could flourish.
The furor over deepfakes is a microcosm of the larger social discussion about the ethics of technology. It is pretty clear the software should not have been developed, and that it has led — and will continue to lead — to disproportionately more harm than good. And yet the lesson was not learned.
Recently, the creator of an app called “DeepNude,” designed to give a realistic approximation of how a woman would look naked based on a clothed image, canceled its launch, fearing that “the probability that people will misuse it is too high.”
What the legitimate use for this app is, I do not know, but the response is revealing in how predictable it is.
Reporting triggers some level of public outcry, at which point tech developers suddenly realize the error of their ways. Theirs is the conscience of hindsight: feeling bad after the fact rather than proactively looking for ways to advance the common good, treat people fairly and minimize potential harm. By now we should know better and expect more.
Why then do we continue to let the tech sector manage its own mess?
Partly it is because doing otherwise is difficult, but it is also because we are still addicted to the promise of technology even as we come to criticize it. Technology is a way of seeing the world. It is a kind of promise — that we can bring the world under our control and bend it to our will.
Deepfakes afford us the ability to manipulate a person’s image. We can make them speak and move as we please, with a ready-made, if weak, moral defense: “No people were harmed in the making of this deepfake.”
However, in asking for a technological fix to deepfakes, we are fueling the same logic that brought us here. Want to solve Silicon Valley? There is an app for that! Eventually, maybe, that app will work. However, we are still treating the symptoms, not the cause.
The discussion around ethics and regulation in technology needs to expand to include more existential questions. How should we respond to the promises of technology? Do we really want the world to be completely under our control? What are the moral costs of doing this? What does it mean to see every unfulfilled desire as something that can be solved with an app?
Yes, we need to think about the bad actors who are going to use technology to manipulate, harm and abuse. We need to consider the now obvious fact that if a technology exists, someone is going to use it to optimize their orgasms. However, we also need to consider what it means when the only place we can turn to solve the problems of technology is itself technological.
Big tech firms have an enormous set of moral and political responsibilities — it is good they are being asked to live up to them. An industry-wide commitment to basic legal standards, significant regulation and technological ethics would go a long way to solving the immediate harms of bad tech design.
However, it would not get us out of the technological paradigm we seem to be stuck in. For that, we do not just need tech developers to read some moral philosophy, we need our politicians and citizens to do the same.
At the moment, we are dancing around the edges of the issue, playing whack-a-mole as new technologies arise. We treat tech design and development like it is inevitable. As a result, we aim to minimize risks rather than look more deeply at the values, goals and moral commitments built into the technology. As well as asking how we stop deepfakes, we need to ask why someone thought they would be a good idea to begin with. There is no app for that.