Do you think engaging with an emerging tech tool can change your firmly held beliefs? Or sway you toward a decision you would not have otherwise made? Most of us humans think we are too smart for that, but mounting evidence suggests otherwise.
When it comes to a new crop of generative artificial intelligence (AI) technology, the power of “persuasion” has been identified as a potentially catastrophic risk, right alongside fears that models could gain autonomy or help build a nuclear weapon. Separately, lower-stakes designs meant to influence behavior are already ubiquitous in the products many of us use every day, nudging us to scroll endlessly on social platforms, or to open Snapchat or Duolingo to continue a “streak.”
However, recent advances in that nascent technology from China are raising fresh national security concerns. New research funded by the US Department of State and released by an Australian think tank found that Chinese tech companies are on the cusp of creating and deploying technologies with “unprecedented persuasive capabilities.”
From a security perspective, such tools could be abused by Beijing or other actors to sway political opinions or sow social unrest and division. In other words, they are a weapon for subduing the enemy without fighting, the tactic extolled by the ancient Chinese military strategist Sun Zi (孫子).
The Australian Strategic Policy Institute report published last week identified China’s commercial sector as “already a global leader” in the development and adoption of products designed to change attitudes or behaviors by exploiting physiological or cognitive vulnerabilities. To accomplish that, the tools rely heavily on analyzing the personal data they collect and then tailoring interactions with users. The paper identified a handful of Chinese firms that it says are already using such technology — spanning generative AI, virtual reality and the still-emerging neurotechnology sector — to support Beijing’s propaganda and military goals.
However, this is also very much a global issue. China’s private sector might be racing ahead to develop persuasive methods, but it is following playbooks developed by US big tech firms to better understand their users and keep them engaged. Addressing the Beijing risk would require us to properly unpack how we let tech products influence our lives. Yet fresh national security risks, combined with how AI and other new innovations can quickly scale up these tools’ effectiveness, should be a wake-up call at a time when persuasion is already so entrenched in Silicon Valley product design.
Part of what makes addressing this issue so difficult is that the technology can be a double-edged sword. A study published in the journal Science earlier this year found that chatting with AI models could persuade conspiracy theorists to weaken their beliefs, even among those who said the beliefs were important to their identity. That highlighted the positive “persuasive powers” of large language models and their ability to engage in personalized dialogue, the researchers said.
Preventing those powers from being employed by Beijing or other bad actors in nefarious campaigns would be a growing challenge for policymakers, one that goes beyond cutting off access to advanced semiconductors.
Demanding far more transparency would be one way to start, by requiring tech companies to provide clear disclosures when content is tailored in a way that could influence behavior. Expanding data protection laws or giving users clearer ways to opt out of having their information collected would also limit the ability of these tools to individually target users.
Prioritizing digital literacy and education is also imperative to raise awareness of persuasive technologies: how algorithms and personalized content work, how to recognize manipulative tactics and how to avoid being swayed by these systems.
Ultimately, a lot more research is needed on how to protect people from the risks of persuasive technology and it would be wise for the companies behind these tools to lead the charge, as firms such as OpenAI and Anthropic have begun doing with AI. Policymakers should also demand firms share findings with regulators and relevant stakeholders to build a global understanding of how those techniques could be exploited by adversaries. That information could then be used to set clear standards or targeted regulation.
The risk of technology so sophisticated that it allows Beijing to pull the strings and change what you believe or who you are might still seem like a far-off, sci-fi concern. However, the stakes are too high for global policymakers to respond only after such a tool has been unleashed. Now is the time for a global reckoning on how much personal information and influence we give tech companies over our lives.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.