Do you think engaging with an emerging tech tool can change your firmly held beliefs? Or sway you toward a decision you would not have otherwise made? Most of us humans think we are too smart for that, but mounting evidence suggests otherwise.
When it comes to a new crop of generative artificial intelligence (AI) technology, the power of “persuasion” has been identified as a potentially catastrophic risk, right alongside fears that models could gain autonomy or help build a nuclear weapon. Separately, lower-stakes designs meant to influence behavior are already ubiquitous in the products many of us use every day, nudging us to endlessly scroll on social platforms, or to open Snapchat or Duolingo to continue a “streak.”
However, recent advances in this nascent technology coming out of China are raising fresh national security concerns. New research funded by the US Department of State and released by an Australian think tank found that Chinese tech companies are on the cusp of creating and deploying technologies with “unprecedented persuasive capabilities.”
From a security perspective, such capabilities could be abused by Beijing or other actors to sway political opinions or sow social unrest and division. In other words, it is a weapon for subduing the enemy without fighting, the tactic espoused by the ancient Chinese military strategist Sun Zi (孫子).
The Australian Strategic Policy Institute report published last week identified China’s commercial sector as “already a global leader” in the development and adoption of products designed to change attitudes or behaviors by exploiting physiological or cognitive vulnerabilities. To accomplish that, the tools rely heavily on analyzing personal data they collect and then tailor interactions with users. The paper identified a handful of Chinese firms that it says are already using such technology — spanning generative AI, virtual reality and the more emerging neurotechnology sector — to support Beijing’s propaganda and military goals.
However, this is also very much a global issue. China’s private sector might be racing ahead to develop persuasive methods, but it is following playbooks developed by US big tech firms to better understand their users and keep them engaged. Addressing the Beijing risk would require us to properly unpack how we let tech products influence our lives. Fresh national security risks, combined with how AI and other new innovations can quickly scale up these tools’ effectiveness, should be a wake-up call at a time when persuasion is already so entrenched in Silicon Valley product design.
Part of what makes addressing this issue so difficult is that it can be a double-edged sword. A study published in the journal Science earlier this year found that conversations with AI models could weaken conspiracy theorists’ beliefs, even among those who said those beliefs were central to their identity. That highlighted the positive “persuasive powers” of large language models and their ability to engage in personalized dialogue, the researchers said.
Preventing those powers from being harnessed by Beijing or other bad actors for nefarious campaigns will be a growing challenge for policymakers, one that goes beyond cutting off access to advanced semiconductors.
Demanding far more transparency would be one way to start, by requiring tech companies to provide clear disclosures when content is tailored in a way that could influence behavior. Expanding data protection laws, or giving users clearer ways to opt out of having their information collected, would also limit the ability of those tools to individually target users.
Prioritizing digital literacy and education is also imperative: People need to understand how persuasive technologies, algorithms and personalized content work, how to recognize the tactics involved and how to avoid being manipulated by these systems.
Ultimately, far more research is needed on how to protect people from the risks of persuasive technology, and it would be wise for the companies behind these tools to lead the charge, as firms such as OpenAI and Anthropic have begun doing with AI. Policymakers should also demand that firms share their findings with regulators and relevant stakeholders to build a global understanding of how these techniques could be exploited by adversaries. That information could then be used to set clear standards or targeted regulation.
The risk of technology sophisticated enough to allow Beijing to pull the strings and change what you believe, or who you are, might still seem like a far-off, sci-fi concern. However, the stakes are too high for global policymakers to respond only after it has been unleashed. Now is the time for a global reckoning on how much personal information and influence we give tech companies over our lives.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.