Beijing’s rigorous push for chatbots with core socialist values is the latest roadblock in its effort to catch up to the US in a race for artificial intelligence (AI) supremacy. It is also a timely reminder for the world that a chatbot cannot have its own political beliefs, the same way it cannot be expected to make human decisions.
It is easy for finger-wagging Western observers to jump on recent reporting that China is forcing AI companies to put their models through intensive political tests as further evidence that the government’s censorship regime will kneecap AI development. The arduous process adds a painstaking layer of work for tech firms, and restricting the freedom to experiment can impede innovation. The difficulty of creating AI models infused with specific values is likely to hurt China’s efforts to build chatbots as sophisticated as those in the US in the short term. However, it also exposes a broader misunderstanding of the realities of AI, despite a global arms race and a mountain of industry hype propelling its growth.
Since the launch of OpenAI’s ChatGPT in late 2022 set off a global generative AI frenzy, there has been a tendency, from the US to China, to anthropomorphize this emerging technology. Treating AI models like humans, and expecting them to act that way, is a dangerous path for a technology still in its infancy. China’s misguided approach should serve as a wake-up call.
Illustration: Tania Chou
Beijing’s AI ambitions are already under severe threat from all-out US efforts to bar access to advanced semiconductors and chipmaking equipment. On top of that, Chinese Internet regulators are imposing political restrictions on the outputs of homegrown AI models, to ensure their responses do not go against Chinese Communist Party ideals or speak ill of leaders such as Chinese President Xi Jinping (習近平). Companies are filtering certain phrases out of their training data, which can limit overall performance and the models’ ability to spit out accurate responses.
Moreover, Chinese AI developers are already at a disadvantage: There is far more English-language text online than Chinese that can be used as training data, not even counting what is already cut off by the Great Firewall. The black-box nature of large language models (LLMs) also makes censoring outputs inherently challenging. Some Chinese AI companies are now building a separate layer onto their chatbots to replace problematic responses in real time.
However, it would be unwise to dismiss all this as something that will simply curtail China’s tech prowess in the long run.
Beijing wants to be the global AI leader by 2030, and is throwing the entire might of the state and the private sector behind that effort. The government reiterated its commitment to developing the high-tech industry during last week’s Third Plenum, and in racing to create AI their own way, Chinese developers are being forced to approach LLMs in novel ways. Their research could potentially sharpen AI tools for tasks the technology has traditionally struggled with.
Tech companies in the US have spent years trying to control the outputs from AI models and ensure they do not hallucinate or spew offensive responses — or, in the case of Elon Musk, ensure responses are not too “woke.” Many tech giants are still figuring out how to implement and control these types of guardrails.
Earlier this year, Alphabet Inc’s Google paused its AI image generator after it created historically inaccurate depictions of people of color in place of white people. An early Microsoft AI chatbot dubbed “Tay” was infamously shut down in 2016 after it was exploited on Twitter and started spitting out racist and hateful comments. As AI models are trained on gargantuan amounts of text scraped from the Internet, their responses risk perpetuating the racism, sexism and myriad other dark features baked into discourse there.
Companies like OpenAI have since made great strides in reducing inaccuracies, limiting biases and improving the overall outputs of chatbots — but these tools are still just machines trained on the work of humans. They can be re-engineered and tinkered with, or programmed not to use racial slurs or talk politics, but it is impossible for them to grasp morals or form political ideologies of their own.
China’s push to ensure chatbots toe the party line may be more extreme than the restrictions US companies are imposing on their own AI tools. However, these efforts from opposite sides of the globe reveal a profound misunderstanding of how we should collectively approach AI.
The world is pouring vast sums of money and immense amounts of energy into creating conversational chatbots.
Instead of trying to assign human values to these bots and spending ever more resources to make them sound more human, we should start asking how they can be used to help humans.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.