Beijing’s rigorous push for chatbots with core socialist values is the latest roadblock in its effort to catch up to the US in a race for artificial intelligence (AI) supremacy. It is also a timely reminder for the world that a chatbot cannot have its own political beliefs, the same way it cannot be expected to make human decisions.
It is easy for finger-wagging Western observers to jump on recent reporting that China is forcing companies to undergo intensive political tests as more evidence that AI development would be kneecapped by the government’s censorship regime. The arduous process adds a painstaking layer of work for tech firms, and restricting the freedom to experiment can impede innovation. The difficulty of creating AI models infused with specific values would likely hurt China’s efforts to create chatbots as sophisticated as those in the US in the short term. However, it also exposes a broader misunderstanding around the realities of AI, despite a global arms race and a mountain of industry hype propelling its growth.
Since the launch of OpenAI’s ChatGPT in late 2022 initiated a global generative AI frenzy, there has been a tendency from the US to China to anthropomorphize this emerging technology, but treating AI models like humans, and expecting them to act that way, is a dangerous path to forge for a technology still in its infancy. China’s misguided approach should serve as a wake-up call.
Illustration: Tania Chou
Beijing’s AI ambitions are already under severe threat from all-out US efforts to bar access to advanced semiconductors and chipmaking equipment. However, Chinese Internet regulators are also trying to impose political restrictions on the outputs from homegrown AI models, ensuring their responses do not go against Chinese Communist Party ideals or speak ill of leaders like Chinese President Xi Jinping (習近平). Companies are restricting certain phrases in the training data, which can limit overall performance and the ability to spit out accurate responses.
Moreover, Chinese AI developers are already at a disadvantage. There is far more English-language text online than Chinese that can be used for training data, not even counting what is already cut off by the Great Firewall. The black-box nature of large language models (LLMs) also makes censoring outputs inherently challenging. Some Chinese AI companies are now building a separate layer onto their chatbots to replace problematic responses in real time.
However, it would be unwise to dismiss all this as something that will simply restrict China's tech prowess in the long run.
Beijing wants to be the global AI leader by 2030, and is throwing the entire might of the state and private sector behind this effort. The government reiterated its commitment to develop the high-tech industry during last week’s Third Plenum, and in racing to create AI their own way, Chinese developers are also forced to approach LLMs in novel ways. Their research could potentially sharpen AI tools for harder tasks that they have traditionally struggled with.
Tech companies in the US have spent years trying to control the outputs from AI models and ensure they do not hallucinate or spew offensive responses — or, in the case of Elon Musk, ensure responses are not too “woke.” Many tech giants are still figuring out how to implement and control these types of guardrails.
Earlier this year, Alphabet Inc’s Google paused its AI image generator after it created historically inaccurate depictions of people of color in place of white people. An early Microsoft AI chatbot dubbed “Tay” was infamously shut down in 2016 after it was exploited on Twitter and started spitting out racist and hateful comments. As AI models are trained on gargantuan amounts of text scraped from the Internet, their responses risk perpetuating the racism, sexism and myriad other dark features baked into discourse there.
Companies like OpenAI have since made great strides in reducing inaccuracies, limiting biases and improving the overall outputs from chatbots — but these tools are still just machines trained on the work of humans. They can be re-engineered and tinkered with, or programmed not to use racial slurs or talk politics, but it is impossible for them to grasp morals or their own political ideologies.
China’s push to ensure chatbots toe the party line may be more extreme than the restrictions US companies are self-imposing on their AI tools. However, these efforts from different sides of the globe reveal a profound misunderstanding of how we should collectively approach AI.
The world is pouring vast sums of money and immense amounts of energy into creating conversational chatbots.
Instead of trying to assign human values to bots and use more resources to make them sound more human, we should start asking how they can be used to help humans.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.