Last year, the shape of politics to come appeared in a video. In it, former US secretary of state and Democratic presidential candidate Hillary Clinton says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”
It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Investigations found that the video had been produced using generative artificial intelligence (AI).
The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future.
Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost, and of highly personalized advertising being produced to manipulate voters. The results could include so-called “October surprises” (misinformation that breaks just before the US elections in November, leaving insufficient time for it to be refuted) and misleading information about electoral administration, such as where polling stations are.
Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet are to go to the polls. Elections are expected this year in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the EU, the US and the UK. Many of these elections would not just determine the future of nation-states; they would also shape how we tackle global challenges such as geopolitical tensions and the climate crisis.
It is likely that each of these elections would be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.
While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because, during the past decade, we have witnessed the role that so-called “bullshit” can play in politics.
In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit.”
In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations.” This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality: explanations or images that seem superficially plausible, but are not actually correct answers to the question that was asked.
Humans might use untrue material created by generative AI in an uncritical and thoughtless way, and that could make it harder for people to know what is true and false in the world. In some cases, these risks might be relatively low, for example if generative AI were used for a task that was not very important (such as coming up with ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as the date of the Battle of Waterloo).
The real problems arise when the outputs of generative AI have important consequences and the outputs cannot easily be verified.
If AI-produced hallucinations are used to answer important but difficult-to-verify questions, such as the state of the economy or the war in Ukraine, there is a real danger of creating an environment in which some people start to make important voting decisions based on an entirely illusory universe of information. Voters could end up living in generated online realities built on a toxic mixture of AI hallucinations and political expediency.
Although AI technologies pose dangers, there are measures that could be taken to limit them. Technology companies could continue to use watermarking, which allows users to easily identify AI-generated content. They could also ensure AIs are trained on authoritative information sources. Journalists could take extra precautions to avoid covering AI-generated stories during an election cycle. Political parties could develop policies to prevent the use of deceptive AI-generated information. Most importantly, voters could exercise their critical judgment by reality-checking important pieces of information they are unsure about.
The rise of generative AI has already started to fundamentally change many professions and industries. Politics is likely to be at the forefront of this change.
The Brookings Institution points out that there are many positive ways generative AI could be used in politics. At the moment, however, its negative uses are the most obvious, and the most likely to affect us imminently.
It is vital we strive to ensure that generative AI is used for beneficial purposes and does not simply lead to more botshit.
Andre Spicer is professor of organizational behavior at the Bayes Business School at City, University of London. He is the author of the book Business Bullshit.