Trillions in AI Spending Shift the Race From Models to Infrastructure

AI may look like software on the surface, but this paper shows the real race is now for power, chips, cooling, and capital. As trillion-dollar infrastructure plans take shape, the future of AI could depend as much on financing structures as on the models themselves.

Financing the AI Buildout. Image Credit: FOTOGRIN / Shutterstock

A recent financial paper signals a major shift toward a capital-intensive phase of artificial intelligence (AI), where physical infrastructure, such as data centers, power systems, advanced hardware, and cooling systems, matters as much as algorithms.

In a draft paper posted to the Social Science Research Network (SSRN), economist Stijn Van Nieuwerburgh shows that as AI models become more compute-intensive, demand for energy, processing, and storage is rising sharply. He estimates that the United States alone may require up to $8.2 trillion in data center investment between 2026 and 2032, and argues that AI infrastructure investment accounted for essentially all observed U.S. GDP growth in the fourth quarter of 2025.

Evolving Infrastructure Demands for AI Workloads

Modern AI development has surpassed the capabilities of traditional digital infrastructure. Earlier cloud systems based on central processing units (CPUs) were built for enterprise workloads, typically requiring power densities of 5-10 kilowatts per rack.

Training advanced AI models, including transformer-based systems and large language models (LLMs), has sharply increased these requirements. AI clusters built on advanced graphics processing units (GPUs), such as those based on NVIDIA's Blackwell architecture, now require 80 to 120 kilowatts per rack, rendering conventional cooling methods insufficient.

The industry is shifting toward direct-to-chip liquid cooling and closed-loop refrigerant systems. Some AI data centers require hundreds of megawatts of power, necessitating dedicated substations and high-capacity transmission infrastructure. These demands are reshaping data center design and transforming AI infrastructure into a capital-intensive system. Access to reliable power is rapidly becoming the primary constraint on AI growth.

Financial Projections for AI Infrastructure Investment

To quantify the scale of AI infrastructure, the paper analyzed a standard 200-megawatt training campus. It found that a single facility of this size requires approximately $8.2 billion in total investment. Of this, around $2.6 billion goes to the physical site, including the building and grid connection, while the remaining $5.6 billion is dedicated to information technology (IT) equipment, mainly high-end GPUs and networking systems.
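As a back-of-envelope check, the article's figures for a single 200-megawatt campus can be combined as follows. The per-megawatt cost and the IT share of the total are derived here for illustration; they are our own arithmetic, not quotes from the paper.

```python
# Cost breakdown of a 200 MW AI training campus, using the article's figures.
CAMPUS_MW = 200
SITE_COST_B = 2.6   # building + grid connection, $ billions
IT_COST_B = 5.6     # GPUs and networking gear, $ billions

total_b = SITE_COST_B + IT_COST_B
print(f"Total investment: ${total_b:.1f}B")                        # $8.2B
print(f"IT share of total: {IT_COST_B / total_b:.0%}")             # 68%
print(f"Capital cost per MW: ${total_b * 1000 / CAMPUS_MW:.0f}M")  # $41M
```

The split illustrates the paper's point that value in AI facilities is concentrated in compute rather than in the building itself.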

Using "nowcasting" methods and infrastructure datasets, the study projects that U.S. data center capacity could expand by 200 gigawatts between 2026 and 2032. This expansion would require an annual investment equivalent to 2.8% of GDP, exceeding historical infrastructure booms such as the American railroad system (2.4%) and the interstate highway system (1.6%). The findings indicate a shift in cost structure; in earlier data centers, buildings accounted for a larger share of investment, whereas in AI facilities, value is concentrated in compute, with 70% of capacity dedicated to GPU-based training workloads.

Transforming Financial Structures in AI Infrastructure

The results showed a clear shift in how large AI infrastructure projects are financed. Hyperscalers such as Microsoft, Google, Amazon, and Meta still drive demand, but the scale now exceeds what they can fund directly. This has led to greater reliance on external capital, including private credit funds, data center developers, and infrastructure REITs (real estate investment trusts).

A key example is the Hyperion data center project, financed through a joint venture named Beignet, in which Meta sold an 80% stake in a $30 billion project to Blue Owl. The deal used a bankruptcy-remote structure to raise $27 billion in debt, reaching a 90% leverage ratio, far beyond the 4% to 25% typical on hyperscaler balance sheets. These structures allow firms to remain asset-light and protect equity valuations, but they also increase financial complexity and opacity. Hyperscalers hold about $970 billion in lease commitments, with roughly $660 billion off-balance-sheet.

External financing also comes at a higher cost. Debt for the Beignet project was priced about 100 basis points above Meta’s standard corporate borrowing, adding roughly $5 billion in interest over time. This model separates compute use from infrastructure ownership, redistributing financial risk across a broader set of institutional investors rather than eliminating it.
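The reported Beignet terms can be worked through numerically. The amortization horizon is not stated in the article, so we solve for the horizon implied by the reported ~$5 billion of extra interest; that implied horizon is our own inference.

```python
# Back-of-envelope on the Beignet financing terms reported in the article.
PROJECT_B = 30.0           # total project size, $ billions
DEBT_B = 27.0              # debt raised, $ billions
SPREAD_BPS = 100           # premium over Meta's standard corporate borrowing
EXTRA_INTEREST_B = 5.0     # reported cumulative extra interest, $ billions

leverage = DEBT_B / PROJECT_B
extra_per_year_b = DEBT_B * SPREAD_BPS / 10_000   # 100 bps = 1% of the debt
implied_years = EXTRA_INTEREST_B / extra_per_year_b
print(f"Leverage: {leverage:.0%}")                     # 90%
print(f"Extra interest per year: ${extra_per_year_b:.2f}B")  # $0.27B
print(f"Implied horizon: ~{implied_years:.0f} years")  # ~19 years
```

The implied horizon of roughly two decades is consistent with long-dated infrastructure debt, though the actual tenor is not disclosed in the article.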

Strategic Location Choices for AI Facilities

The deployment of AI infrastructure follows a geographic strategy. The paper distinguishes between training facilities, which handle batch computations, and inference centers, which support real-time applications such as AI-driven search and video recommendations.

Training campuses, costing around $8.2 billion, are often established in regions with low-cost power and reliable grid access. Securing an energy supply is the primary factor in site selection. In contrast, inference workloads necessitate low latency and are typically located near population centers. Location decisions are influenced by policy incentives, such as sales tax exemptions on IT equipment, which play a key role in attracting development.

Navigating Risks in AI Infrastructure Investment

While the AI infrastructure boom is driving economic growth, it also introduces new risks, including technological obsolescence. As the requirements for AI hardware evolve rapidly, facilities built today may not be compatible with future semiconductor technologies.

The study also highlights several financial and operational vulnerabilities. Many large data center projects rely heavily on a small group of hyperscalers, creating counterparty concentration and single-tenant exposure across the ecosystem. If AI demand weakens, hyperscalers could shift workloads from leased facilities to owned capacity, exposing outside investors to revenue pressure. Recent market behavior reflects this shift, with the beta (a measure of market sensitivity) of data center REITs rising from 0.5 to around 1.0, indicating closer alignment with the tech market.
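The beta figure cited above is conventionally estimated as the covariance of an asset's returns with the market's, divided by the market's variance. The sketch below illustrates that calculation on synthetic return series; the data are invented for illustration and do not come from the paper.

```python
# Minimal beta estimate: beta = cov(asset, market) / var(market).
# Both return series are synthetic, generated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.02, 250)                # ~1 year of daily market returns
reit = 1.0 * market + rng.normal(0.0, 0.01, 250)   # REIT moving roughly 1:1 with market

beta = np.cov(reit, market)[0, 1] / np.var(market, ddof=1)
print(round(beta, 2))  # close to 1.0, matching the shift the article describes
```

A beta near 0.5 would indicate the bond-like, defensive profile data center REITs historically had; a beta near 1.0 means they now move in step with the broader tech market.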

The paper further notes that risks extend beyond tenant concentration. Delays in grid interconnection, higher electricity costs, permitting barriers, and supply-chain bottlenecks in advanced semiconductors and networking equipment could all slow deployment. It also warns that increasingly complex, securitization-like financing structures may weaken transparency and make underlying exposures harder to track, underscoring the need for clearer disclosure of lease obligations and residual value guarantees. Effectively managing these risks will be key to sustaining long-term growth.


Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.

