What does it take to change a person’s mind? As generative artificial intelligence (AI) becomes more embedded in customer-facing systems — think of human-like phone calls or online chatbots — this ethical question demands broad attention.
The capacity to change minds through reasoned discourse is at the heart of democracy. Clear and effective communication forms the foundation of deliberation and persuasion, which are essential to resolve competing interests. However, there is a dark side to persuasion: false motives, lies and cognitive manipulation — malicious behavior that AI could facilitate.
In the not-so-distant future, generative AI could enable new user interfaces built to persuade on behalf of any person or entity with the means to establish such a system. Leveraging private knowledge bases, these specialized models would offer different truths that compete based on their ability to generate convincing responses for a target group — an AI for each ideology. A wave of AI-assisted social engineering would surely follow, with escalating competition making it easier and cheaper for bad actors to spread disinformation and perpetrate scams.
Illustration: Yusha
The emergence of generative AI has thus fueled a crisis of epistemic insecurity. The initial policy response has been to ensure that humans know that they are engaging with an AI. In June, the European Commission urged large tech companies to start labeling text, video and audio created or manipulated by AI tools, while the European Parliament is pushing for a similar rule in the forthcoming AI Act. This awareness, the argument goes, would prevent us from being misled by an artificial agent, no matter how convincing.
However, alerting people to the presence of AI would not necessarily safeguard them against manipulation. As far back as the 1960s, the ELIZA chatbot experiment at MIT demonstrated that people could form emotional connections with, have empathy for, and attribute human thought processes to a computer program with anthropomorphic characteristics — in this case, natural speech patterns — despite being told that it was a non-human entity.
We tend to develop a strong emotional attachment to our beliefs, which then hinders our ability to assess contradictory evidence objectively. Moreover, we often seek information that supports, rather than challenges, our views. Our goal should be to engage in reflective persuasion, whereby we present arguments and carefully consider our beliefs and values to reach well-founded agreements or disagreements.
However, crucially, forming emotional connections with others could increase our susceptibility to manipulation, and we know that humans can form these types of connections even with chatbots that are not designed to elicit them. When chatbots are built to connect emotionally with humans, a new dynamic would emerge, rooted in two longstanding problems of human discourse: asymmetrical risk and reciprocity.
Imagine that a tech company creates a persuasive chatbot. Such an agent would be taking essentially zero risk — either emotional or physical — in attempting to convince others. As for reciprocity, there is very little chance that the chatbot doing the persuading would have any capacity to be persuaded. It is more likely that an individual could get the chatbot to concede a point in the context of their limited interaction — a concession that would then be internalized as training data. This would make active persuasion of such a system — which is about inducing a change in belief, not reaching momentary agreement — largely infeasible.
In short, we are woefully unprepared for the dissemination of persuasive AI systems. Many industry leaders, including OpenAI, the company behind ChatGPT, have raised awareness about the threat such systems could pose. However, awareness does not translate into a comprehensive risk-management framework.
A society cannot be effectively inoculated against persuasive AI, as that would require making each person immune to such agents — an impossible task. Moreover, any attempt to control and label AI interfaces would result in individuals transferring inputs to new domains, not unlike copying text produced by ChatGPT and pasting it into an email. System owners would therefore be responsible for tracking user activity and evaluating conversions.
However, persuasive AI need not be generative in nature. A wide range of organizations, individuals and entities have already bolstered their persuasive capabilities to achieve their objectives. Consider state actors’ use of computational propaganda, which involves manipulating information and public opinion to further national interests and agendas.
Meanwhile, the evolution of computational persuasion has provided the advertising-technology industry with a lucrative business model. This burgeoning field not only demonstrates the power of persuasive technologies to shape consumer behavior, but also underscores the significant role they could play in driving sales and achieving commercial objectives.
What unites these diverse actors is a desire to enhance their persuasive capacities. This mirrors the ever-expanding landscape of technology-driven influence, with all its known and unknown social, political, and economic implications. As persuasion is automated, a comprehensive ethical and regulatory framework becomes imperative.
Mark Esposito is a professor at Hult International Business School and a co-author of The Great Remobilization: Strategies and Designs for a Smarter Global Future. Josh Entsminger is a PhD student in innovation and public policy at the UCL Institute for Innovation and Public Purpose. Terence Tse is a professor at Hult International Business School and a co-author of The Great Remobilization: Strategies and Designs for a Smarter Global Future.
Copyright: Project Syndicate