Beijing’s rigorous push for chatbots with core socialist values is the latest roadblock in its effort to catch up to the US in a race for artificial intelligence (AI) supremacy. It is also a timely reminder for the world that a chatbot cannot have its own political beliefs, the same way it cannot be expected to make human decisions.
It is easy for finger-wagging Western observers to seize on recent reporting that China is forcing companies to put their chatbots through intensive political testing as more evidence that AI development there will be kneecapped by the government’s censorship regime. The arduous process adds a painstaking layer of work for tech firms, and restricting the freedom to experiment can impede innovation. The difficulty of creating AI models infused with specific values would likely hurt China’s efforts to build chatbots as sophisticated as those in the US in the short term. However, it also exposes a broader misunderstanding about the realities of AI, despite a global arms race and a mountain of industry hype propelling its growth.
Since OpenAI’s ChatGPT launched in late 2022 and set off a global generative AI frenzy, there has been a tendency, from the US to China, to anthropomorphize this emerging technology. However, treating AI models like humans, and expecting them to act that way, is a dangerous path for a technology still in its infancy. China’s misguided approach should serve as a wake-up call.
Illustration: Tania Chou
Beijing’s AI ambitions are already under severe threat from all-out US efforts to bar access to advanced semiconductors and chipmaking equipment. On top of that, Chinese Internet regulators are trying to impose political restrictions on the outputs of homegrown AI models, ensuring their responses do not go against Chinese Communist Party ideals or speak ill of leaders such as Chinese President Xi Jinping (習近平). Companies are filtering certain phrases out of the training data, which can limit overall performance and the models’ ability to produce accurate responses.
Moreover, Chinese AI developers are already at a disadvantage. There is far more English-language text online than Chinese-language text that can be used as training data, not even counting what is already cut off by the Great Firewall. The black-box nature of large language models (LLMs) also makes censoring outputs inherently challenging. Some Chinese AI companies are now building a separate layer onto their chatbots to replace problematic responses in real time.
However, it would be unwise to dismiss all this as something that will simply hobble China’s tech prowess in the long run.
Beijing wants to be the global AI leader by 2030, and is throwing the entire might of the state and private sector behind this effort. The government reiterated its commitment to developing the high-tech industry during last week’s Third Plenum, and in racing to create AI their own way, Chinese developers are also forced to approach LLMs in novel ways. Their research could potentially sharpen AI tools for harder tasks that the technology has traditionally struggled with.
Tech companies in the US have spent years trying to control the outputs from AI models and ensure they do not hallucinate or spew offensive responses — or, in the case of Elon Musk, ensure responses are not too “woke.” Many tech giants are still figuring out how to implement and control these types of guardrails.
Earlier this year, Alphabet Inc’s Google paused its AI image generator after it created historically inaccurate depictions of people of color in place of white people. An early Microsoft AI chatbot dubbed “Tay” was infamously shut down in 2016 after it was exploited on Twitter and started spitting out racist and hateful comments. As AI models are trained on gargantuan amounts of text scraped from the Internet, their responses risk perpetuating the racism, sexism and myriad other dark features baked into discourse there.
Companies like OpenAI have since made great strides in reducing inaccuracies, limiting biases and improving the overall outputs from chatbots — but these tools are still just machines trained on the work of humans. They can be re-engineered and tinkered with, or programmed not to use racial slurs or talk politics, but it is impossible for them to grasp morals or hold political ideologies of their own.
China’s push to ensure chatbots toe the party line may be more extreme than the restrictions US companies are self-imposing on their AI tools. However, these efforts from different sides of the globe reveal a profound misunderstanding of how we should collectively approach AI.
The world is pouring vast sums of money and immense amounts of energy into creating conversational chatbots.
Instead of trying to assign human values to bots and spending ever more resources to make them sound more human, we should start asking how they can be used to help humans.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.