About a year ago, San Francisco-based OpenAI released its chatbot, ChatGPT, triggering an artificial-intelligence (AI) gold rush and reigniting the age-old debate about the effects of automation on human welfare.
The fear of displacement by machines can be traced back to the 19th-century Industrial Revolution, when groups of English handloom weavers, known as Luddites, began destroying the power looms that threatened their livelihoods. The movement, which peaked between 1811 and 1817, was ultimately suppressed by government forces and its leaders were executed or exiled to Australia.
However, the Luddites’ arguments found an unexpected and somewhat ironic champion in renowned economist David Ricardo, who argued in his 1817 book On the Principles of Political Economy and Taxation that “the opinion entertained by the laboring class, that the employment of machinery is frequently detrimental to their interests, is not founded on prejudice and error, but is conformable to the correct principles of political economy.”
British economist Nassau Senior, for his part, advised the weavers to “get out of that branch of production.”
They ended up doing just that. About 250,000 handloom jobs disappeared between 1820 and 1860.
However, while mechanization ended up benefiting human workers — the UK’s population and per capita real income multiplied over the same period — it adversely affected horses, whose numbers fell sharply as trains and other motorized vehicles replaced horse-drawn transport.
Since the Industrial Revolution, the prevailing pro-machine argument has been that by increasing labor productivity, automation boosts real incomes, allowing more individuals to enjoy higher living standards without corresponding job losses.
Moreover, liberation from tedious menial tasks has enabled us to redirect our energy to more valuable pursuits.
The Luddites’ modern-day counterparts, on the other hand, emphasize the downsides of automation, especially the potential to destroy livelihoods and communities. An equitable distribution of income and power, they argue, is crucial to reaping the long-term benefits of technological progress. Techno-pessimists such as Martin Ford and Daniel Susskind have argued that emerging technologies like AI will create too few new jobs, resulting in increased poverty and “technological unemployment.”
The rise of generative AI and the anticipated arrival of artificial general intelligence — an AI capable of any cognitive task that humans can perform — have supercharged the debate between techno-optimists and techno-skeptics.
For example, in the healthcare sector, a seemingly endless wellspring of tech hype, AI promises improved diagnostics, advanced telemedicine, more effective drugs, and reduced administrative burdens on doctors and nurses, leaving more time for patient care.
This seems to reflect the prevailing view among mainstream experts that generative AI will augment, rather than replace, human jobs. By automating routine tasks, it promises to free humans to pursue more creative work.
To be sure, this transformation will require lifelong learning, making continuous education a condition not just for participating in the job market, but also for accessing an expanding array of online services.
With the advent of generative AI, concerns have shifted from automation-induced job losses to the prospect of a superintelligence going rogue — a fear that dates back to Mary Shelley’s 1818 novel Frankenstein; or, The Modern Prometheus.
Echoing these sentiments, former Google chief executive officer Eric Schmidt recently remarked that while current AI models remain “under human control,” there is a real risk that one could develop the capability for “recursive self-improvement,” gain autonomy, and begin “setting its own goals.”
Eventually, he warned, a “computer cluster” could evolve into a “truly superhuman expert” capable of acting independently.
As experts and academics become increasingly alarmed about AI’s capacity to destroy the world, a growing number of voices have called for AI development to be aligned with human goals and values. There are two ways to achieve this. The first is to restrict the availability and sales of potentially harmful AI-based products, as policymakers in Europe and elsewhere have tried to do by imposing strict regulations on emerging technologies like autonomous vehicles and facial recognition.
One obvious problem with this approach is that reaching a consensus on what constitutes harm is difficult in a world in which moral relativism is the norm. As it is increasingly unclear who “owns” content that is deemed harmful, it is virtually impossible to hold vendors or providers accountable.
Moreover, attempts to regulate the use of technology tend to come too late.
The second way to rein in AI is to curb demand for potentially dangerous products altogether.
However, curbing demand is more complicated than restricting supply, especially in modern societies where competitive forces — commercial and geopolitical — make slowing down technological innovation exceedingly difficult.
The recent turmoil at OpenAI is a case in point. Last month, the company’s board of directors briefly fired CEO Sam Altman, reportedly due to concerns that AI could one day lead to humanity’s extinction. Although Altman was reinstated just days later, the scandal underscored the speed with which ostensibly beneficial technologies could become existential risks.
With rapid commercialization apparently taking precedence over caution, and competition hastening the development of increasingly powerful tools, an AI-induced apocalypse seems increasingly plausible.
The inescapable conclusion is that merely regulating AI is not enough. But by introducing concepts such as neo-Luddism and redistribution into the public debate, we could develop the political and intellectual vocabulary needed to mitigate the threats posed by these emerging technologies.
For example, a neo-Luddite might ask: Why are affluent societies, which already produce more than enough for their citizens to live comfortably, still focused on maximizing GDP growth? One answer might be the lack of a fair distribution of wealth and income that would ensure that the benefits of productivity and efficiency gains are widely shared.
Another explanation is that technology itself is not intrinsically good or bad; it is a means to an end. And in today’s political economy, “technological innovation” is often a euphemism for enabling the rich and powerful to redirect capital from industry to finance, thereby monopolizing the benefits of automation and immiserating everyone else.
Robert Skidelsky, a member of the British House of Lords, is professor emeritus of political economy at Warwick University.
Copyright: Project Syndicate