For the past two years, the dominant narrative about artificial intelligence (AI) has been one of boundless possibility. Larger models, trillion-token training runs and record-breaking capex (capital expenditure) cycles have reinforced a sense of uninterrupted acceleration. But technological change is rarely so straightforward, and this time is no exception. As AI moves from experimentation to real-world applications, the limits imposed by the physical world, capital markets and political systems clearly matter more than its theoretical potential.
The most immediate constraint is electricity. Nowhere is this more evident than in the United States, where data-center power demand is expected to rise from roughly 35 gigawatts to 78GW by 2035. Northern Virginia, the world’s largest cloud-infrastructure cluster, has already effectively exhausted its available grid capacity. Utilities in Arizona, Georgia and Ohio warn that new substations might take almost a decade to build. A single campus can require 300-500MW, enough to power an entire city. Silicon can be manufactured quickly; high-voltage transmission cannot.
Markets are responding with the speed and ambition one would expect. Hyperscalers (the major tech firms building advanced AI models on the back of ever-greater computing capacity) have become among the world’s largest buyers of long-dated renewable energy. Private solar and wind farms are being built expressly to serve cloud facilities, and some firms are exploring next-generation small modular reactors as a way to bypass slower municipal infrastructure.
These efforts will eventually expand the frontier of what is possible, but they redirect the constraint rather than eliminate it. The next wave of AI capacity will likely be built not in Northern Virginia or Dublin, but in regions where land, power and water remain abundant: the American Midwest, Scandinavia, parts of the Middle East and western China. The geography of AI is being written by physics, not preference.
Silicon is the next constraint, and here the story is becoming more complicated. While Nvidia once appeared to be the universal substrate beneath all AI development globally, that era is ending. In a significant milestone, Google trained its latest large language model, Gemini 3, entirely on its own Tensor Processing Units — and Amazon’s Trainium2, Microsoft’s Maia and Meta’s MTIA chips are all being developed for similar purposes. Similarly, in China, Huawei’s Ascend platform has become the backbone for domestic model training in the face of US export controls.
Some of this shift reflects natural technological maturation. As workloads increase, specialized accelerators become more efficient than the general-purpose GPUs originally adapted for AI. The timing is not accidental. Scarcity, geopolitical friction and cost pressures have pushed hyperscalers to assume a role once reserved for semiconductor firms. Given that departing from Nvidia’s software ecosystem carries enormous organizational costs, the growing willingness to incur them signals how severe the constraint has become. What could follow is a more fragmented hardware landscape, and with it, a more fragmented AI ecosystem. Once architectures diverge at the silicon level, they rarely reconverge.
The third constraint, capital, operates in a more subtle way. Hyperscaler investment plans for 2026 exceed US$518 billion, a figure that has risen by about two-thirds just in the past year. Society is already witnessing the largest private-sector infrastructure buildout in modern history. Meta, Microsoft, and Google revise their capex guidance so frequently that analysts struggle to keep pace.
Yet it is still early days for economic returns. Baidu reported 2.6 billion yuan (US$369.2 million) in AI-application-related revenue, driven largely by enterprise contracts and infrastructure subscriptions, and Tencent says it has lifted profitability through AI-enhanced efficiencies across its mature businesses. However, in the US, most companies still bury their AI earnings within broader cloud categories.
The gap between AI adoption and monetization is wide but familiar. In past technological waves, infrastructure spending routinely preceded productivity gains by years. The constraint comes not from weak investor sentiment, but from the strategic pressure enthusiasm creates: different firms pursue different conceptions of value because their business models and cost structures demand it.
Many sectors simply cannot adopt AI at the pace that new models are being released. Large banks, for example, remain bound by security and compliance frameworks that require air-gapped, on-site, fully auditable software deployments. Such rules instantly cut them off from the most advanced frontier models, which rely on cloud-side orchestration and rapid iterations through new versions. Health-care systems face similar limits, and governments even more so. The problem is not AI’s theoretical capabilities, but the difficulty of incorporating such tools into legacy systems built for a different era.
Taken together, these forces sketch a future very different from the one implied by the standard media narrative. AI is not converging toward a single universal frontier. Diverse regional and institutional architectures are being shaped by different limits — from power shortages in the US to land and cooling constraints in Singapore and Japan, “geopolitical” scarcity in China (where Western export controls limit access to advanced chips and cloud hardware), regulatory friction in Europe, and organizational rigidities across the corporate world. Technology might be global, but implementation is local.
Fortunately, real-world constraints are not the enemy of progress. Often, they form the scaffolding around which new systems take shape. The fiber-optic glut of the late 1990s, initially derided as wasteful overshoot, later underpinned the rise of streaming, social media and cloud computing.
Today’s constraints could play a similar role. Power scarcity is already shifting the geography of AI. Silicon fragmentation is creating new national and corporate ecosystems. Capital asymmetries are pushing firms into different strategic equilibria. Institutional limits are shaping the first real use cases.
The next decade of AI is going to belong not to the systems with the greatest theoretical capability, but to the ecosystems most adept at turning real-world limits into design advantages. Possibility defines the horizon, but constraint should determine the route the world ultimately takes.
Jeffrey Wu is director at MindWorks Capital.
Copyright: Project Syndicate