ChatGPT, a new artificial intelligence (AI) chatbot developed by San Francisco-based research laboratory OpenAI, has taken the world by storm. Hailed as a milestone in the evolution of so-called large language models (LLMs), the world’s most famous generative AI raises important questions about who controls this nascent market and whether these powerful technologies serve the public interest.
ChatGPT, released by OpenAI in November last year, quickly became a global sensation, attracting millions of users and allegedly killing the student essay. It can answer questions in conversational English (along with some other languages) and perform other tasks, such as writing computer code.
The answers that ChatGPT provides are fluent and compelling, but despite its facility for language, it can sometimes make mistakes or generate factual falsehoods, a phenomenon known among AI researchers as “hallucination.”
The fear of fabricated references has led several scientific journals to ban or restrict the use of ChatGPT and similar tools in academic papers. However, while the chatbot might struggle with fact-checking, it is seemingly less prone to error when it comes to programming and can easily write efficient and elegant code.
For all its flaws, ChatGPT obviously represents a major technological breakthrough, which is why Microsoft last month announced a “multiyear, multibillion-dollar investment” in OpenAI, reportedly amounting to US$10 billion, on top of the US$1 billion it had previously committed to the company.
Originally a nonprofit, OpenAI is now a for-profit corporation valued at US$29 billion. While it has pledged to cap its profits, the cap is a loose-fitting one, limiting investors’ returns only at 10,000 percent.
ChatGPT is powered by GPT-3, a powerful LLM trained on vast amounts of text to generate natural-sounding, human-like answers. Although it is the world’s most celebrated generative AI, other big tech companies, such as Google and Meta, have been developing their own versions. It is still unclear how these chatbots will be monetized, but a paid version of ChatGPT is reportedly forthcoming, with OpenAI projecting US$1 billion in revenue by next year.
To be sure, bad actors could abuse these tools for various illicit schemes, such as running sophisticated online scams or writing malware, but the technology’s prospective applications, from coding to protein discovery, offer cause for optimism.
McKinsey estimates that 50 to 60 percent of companies have incorporated AI-powered tools, such as chatbots, into their operations.
By expanding the use of LLMs, companies could improve efficiency and productivity.
However, the massive, immensely costly and rapidly increasing computing power needed to train and maintain generative AI tools represents a substantial barrier to entry that could lead to market concentration. The potential for monopolization, together with the risk of abuse, underscores the urgent need for policymakers to consider the implications of this technological breakthrough.
Fortunately, competition authorities in the US and elsewhere seem to be aware of these risks. The British Office of Communications (Ofcom) late last year launched an investigation into the cloud-computing market, on which all large AI models rely, while the US Federal Trade Commission is investigating Amazon Web Services (AWS), which, along with Google Cloud and Microsoft Azure, dominates the market. These investigations could have far-reaching implications for AI-powered services, which rely on enormous economies of scale.
However, it is not clear what, if anything, policymakers should do. On the one hand, if regulators do nothing, the generative AI market could end up dominated by one or two companies, like every digital market before it. On the other hand, the emergence of open-source generative AI models, such as the text-to-image tool Stable Diffusion, could ensure that the market remains competitive without further intervention.
Even if for-profit models become dominant, open-source competitors could chip away at their market share, just as Mozilla’s Firefox did to Microsoft’s Internet Explorer and Android did to Apple’s mobile operating system, iOS. Then again, cloud-computing giants such as AWS and Microsoft Azure could also leverage generative AI products to increase their market power.
As was debated at the World Economic Forum meeting in Davos, Switzerland, last month, generative AI is too powerful and potentially transformative to leave its fate in the hands of a few dominant companies. However, while there is a clear demand for regulatory intervention, the accelerating pace of technological advance leaves governments at a huge disadvantage.
To ensure that the public interest is represented at the technological frontier, the world needs a public alternative to for-profit LLMs. Democratic governments could form a multilateral body that would develop means to prevent fakery, trolling and other online harms — a CERN (European Organization for Nuclear Research) for generative AI. Alternatively, they could establish a publicly funded competitor with a different business model and incentives, fostering competition between the two approaches.
Whichever path global policymakers choose, standing still is not an option. It is abundantly clear that leaving it to the market to decide how these powerful technologies are used, and by whom, is a very risky proposition.
Diane Coyle is a professor of public policy at the University of Cambridge.
Copyright: Project Syndicate