Do you think engaging with an emerging tech tool can change your firmly held beliefs? Or sway you toward a decision you would not have otherwise made? Most of us humans think we are too smart for that, but mounting evidence suggests otherwise.
When it comes to a new crop of generative artificial intelligence (AI) technology, the power of “persuasion” has been identified as a potentially catastrophic risk right alongside fears that models could gain autonomy or help build a nuclear weapon. Separately, lower-stakes designs meant to influence behavior are already ubiquitous in the products many of us use every day, nudging us to endlessly scroll on social platforms, or to open Snapchat or Duolingo to continue a “streak.”
However, recent advances in that nascent technology from China are raising fresh national security concerns. New research funded by the US Department of State and released by an Australian think tank found that Chinese tech companies are on the cusp of creating and deploying technologies with “unprecedented persuasive capabilities.”
From a security perspective, such technology could be abused by Beijing or other actors to sway political opinions or sow social unrest and division. In other words, it is a weapon to subdue enemies without any fighting, the war tactic heralded by the ancient Chinese military strategist Sun Zi (孫子).
The Australian Strategic Policy Institute report published last week identified China’s commercial sector as “already a global leader” in the development and adoption of products designed to change attitudes or behaviors by exploiting physiological or cognitive vulnerabilities. To accomplish that, the tools rely heavily on analyzing personal data they collect and then tailor interactions with users. The paper identified a handful of Chinese firms that it says are already using such technology — spanning generative AI, virtual reality and the more emerging neurotechnology sector — to support Beijing’s propaganda and military goals.
However, this is also very much a global issue. China’s private sector might be racing ahead to develop persuasive methods, but it is following playbooks developed by US big tech firms to better understand their users and keep them engaged. Addressing the Beijing risk would require us to properly unpack how we let tech products influence our lives. Fresh national security risks, combined with how AI and other new innovations can quickly scale up these tools’ effectiveness, should be a wake-up call at a time when persuasion is already so entrenched in Silicon Valley product design.
Part of what makes addressing this issue so difficult is that it can be a double-edged sword. A study published in Science earlier this year found that chatting with AI models could persuade conspiracy theorists to weaken their beliefs, even among those who said those beliefs were important to their identity. That highlighted the positive “persuasive powers” of large language models and their ability to engage in personalized dialogue, the researchers said.
Preventing those powers from being employed by Beijing or other bad actors in nefarious campaigns will be a growing challenge for policymakers, one that goes beyond cutting off access to advanced semiconductors.
Demanding far more transparency would be one way to start, by requiring tech companies to provide clear disclosures when content is tailored in a way that could influence behavior. Expanding data protection laws or giving users clearer ways to opt out of having their information collected would also limit the ability of those tools to individually target users.
Prioritizing digital literacy and education is also imperative to raise awareness about persuasive technologies, how algorithms and personalized content work, how to recognize tactics and how to avoid being potentially manipulated by these systems.
Ultimately, a lot more research is needed on how to protect people from the risks of persuasive technology and it would be wise for the companies behind these tools to lead the charge, as firms such as OpenAI and Anthropic have begun doing with AI. Policymakers should also demand firms share findings with regulators and relevant stakeholders to build a global understanding of how those techniques could be exploited by adversaries. That information could then be used to set clear standards or targeted regulation.
The risk of technology so sophisticated that it allows Beijing to pull the strings to change what you believe or who you are might still seem like a far-off, sci-fi concern. However, the stakes are too high for global policymakers to respond only after it has been unleashed. Now is the time for a global reckoning on how much personal information and influence we give tech companies over our lives.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.