While the rise of artificial intelligence (AI) could revolutionize numerous sectors and unlock unprecedented economic opportunities, its energy intensity has raised serious environmental concerns.
In response, tech companies promote frugal AI practices and support research focused on reducing energy consumption, but this approach falls short of addressing the root causes of the industry’s growing demand for energy.
Developing, training and deploying large language models (LLMs) is an energy-intensive process that requires vast amounts of computational power. With the widespread adoption of AI driving a surge in data centers’ electricity consumption, the International Energy Agency projects that AI-related energy demand will double by next year.
Data centers already account for 1 to 2 percent of global energy consumption — about the same as the entire airline industry. In Ireland, data centers accounted for 21 percent of total electricity consumption in 2023. As industries and citizens shift toward electrification to reduce greenhouse gas emissions, rising AI demand places enormous strain on power grids and the energy market.
Unsurprisingly, Ireland’s grid operator, EirGrid, has imposed a moratorium on new data center developments in Dublin until 2028. Countries such as Germany, Singapore and China have also imposed restrictions on new data center projects.
To mitigate the environmental impact of emerging technologies, the tech industry has begun to promote the concept of frugal AI, which involves raising awareness of AI’s carbon footprint and encouraging end users — academics and businesses — to select the most energy-efficient model for any given task.
However, while efforts to promote more conscious AI use are valuable, focusing solely on users’ behavior overlooks a critical fact: suppliers are the primary drivers of AI’s energy consumption.
Currently, factors like model architecture, data center efficiency and electricity-related emissions have the greatest impact on AI’s carbon footprint.
In addition, as the technology evolves, individual users will have even less influence over its sustainability, especially as AI models become increasingly embedded within larger applications, making it harder for end users to discern which actions trigger resource-intensive processes.
These challenges are compounded by the rise of agentic AI — independent systems that collaborate to solve complex problems. While experts see this as the next big thing in AI development, such interactions require even more computational power than today’s most advanced LLMs, potentially exacerbating the technology’s environmental impact.
Moreover, shifting the responsibility for reducing AI’s carbon footprint to users is counterproductive, given the industry’s lack of transparency. Most cloud providers do not yet disclose emissions data specifically related to generative AI, making it difficult for customers to assess the environmental impact of their AI use.
A more effective approach would be for AI suppliers to give consumers detailed emissions data. Increased transparency would empower users to make informed decisions, while encouraging suppliers to develop more energy-efficient technologies.
With access to emissions data, consumers could compare AI applications and select the most energy-efficient model for a specific task. Businesses could also more easily choose a traditional information technology solution over an energy-intensive generative AI system if the overall impact is clear from the beginning. By working together, AI companies and consumers could balance AI’s potential benefits with its environmental costs.
To be sure, frugal AI might lead to some efficiency gains, but it does not address the core problem of AI’s insatiable energy demand. By providing greater transparency about energy consumption, sharing comprehensive emissions data and developing standardized metrics for AI models, companies could help clients optimize their carbon budgets and adopt more sustainable practices.
The automotive industry offers a useful model for increasing energy transparency in AI development. By labeling the energy efficiency of their vehicles, auto manufacturers allow buyers to make more sustainable choices. Generative AI providers could adopt a similar approach and establish standardized metrics to capture the environmental impact of their models.
One such metric could be electricity consumption per token, which quantifies the amount of energy required for an AI model to process a single unit of text.
Just as fuel-efficiency standards allow car buyers to compare different models and hold manufacturers accountable, businesses and individual users need reliable tools to evaluate the environmental impact of AI models before deploying them.
By introducing transparent metrics, technology companies could not only steer the industry toward more sustainable innovation, but also ensure that AI helps combat climate change instead of contributing to it.
Boris Ruf is Research Scientist Lead at AXA.
Copyright: Project Syndicate