A recent financial paper signals a major shift toward a capital-intensive phase of artificial intelligence (AI), where physical infrastructure, such as data centers, power systems, advanced hardware, and cooling systems, matters as much as algorithms.
Evolving Infrastructure Demands for AI Workloads
Modern AI development has surpassed the capabilities of traditional digital infrastructure. Earlier cloud systems based on central processing units (CPUs) were built for enterprise workloads, typically requiring power densities of 5 to 10 kilowatts per rack.
Training advanced AI models, including transformer-based systems and large language models (LLMs), has increased these requirements. AI clusters utilizing advanced graphics processing units (GPUs), such as those built on NVIDIA's Blackwell architecture, now require 80 to 120 kilowatts per rack, rendering conventional air-cooling methods insufficient.
The industry is shifting toward direct-to-chip liquid cooling and closed-loop refrigerant systems. Some AI data centers require hundreds of megawatts of power, necessitating dedicated substations and high-capacity transmission infrastructure. These demands are reshaping data center design and transforming AI infrastructure into a capital-intensive system. Access to reliable power is rapidly becoming the primary constraint on AI growth.
Financial Projections for AI Infrastructure Investment
To quantify the scale of AI infrastructure, the paper analyzed a standard 200-megawatt training campus. It found that a single facility of this size requires approximately $8.2 billion in total investment. Of this, around $2.6 billion is allocated to the physical site, including the building and grid connection, while the remaining $5.6 billion is dedicated to information technology (IT) equipment, mainly high-end GPUs and networking systems.
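The capex split is internally consistent, and a short sketch makes the cost structure explicit (the dollar figures are the paper's estimates; the IT share and per-megawatt figure are derived here for illustration):

```python
# Illustrative breakdown of the paper's ~$8.2B estimate for a
# 200 MW AI training campus (all dollar figures in billions).
total_capex_bn = 8.2   # total investment for the campus
site_capex_bn = 2.6    # building, land, and grid connection
it_capex_bn = 5.6      # GPUs, networking, and other IT equipment
capacity_mw = 200      # campus power capacity

# IT equipment dominates the cost structure.
it_share = it_capex_bn / total_capex_bn                # roughly two-thirds
capex_per_mw_mn = total_capex_bn * 1000 / capacity_mw  # $ millions per MW

print(f"IT share of capex: {it_share:.0%}")
print(f"Capex per megawatt: ${capex_per_mw_mn:.0f}M")
```

The per-megawatt figure (about $41 million per MW) is a convenient unit for comparing facilities of different sizes.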
Using "nowcasting" methods and infrastructure datasets, the study projects that U.S. data center capacity could expand by 200 gigawatts between 2026 and 2032. This expansion would require an annual investment equivalent to 2.8% of GDP, exceeding historical infrastructure booms such as the American railroad system (2.4%) and the interstate highway system (1.6%). The findings indicate a shift in cost structure; in earlier data centers, buildings accounted for a larger share of investment, whereas in AI facilities, value is concentrated in compute, with 70% of capacity dedicated to GPU-based training workloads.
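To put the 2.8%-of-GDP figure in dollar terms, a rough calculation helps (the paper reports GDP shares only; the nominal GDP level below is my illustrative assumption, not a figure from the source):

```python
# Rough dollar magnitude implied by the paper's "2.8% of GDP" figure.
us_gdp_tn = 29.0      # assumed nominal U.S. GDP in $ trillions (illustrative)
ai_share = 0.028      # AI data center buildout, share of GDP (paper's estimate)
rail_share = 0.024    # American railroad boom, for comparison
highway_share = 0.016 # interstate highway system, for comparison

annual_capex_bn = us_gdp_tn * 1000 * ai_share  # $ billions per year

print(f"Implied annual investment: ${annual_capex_bn:.0f}B")
print(f"vs railroads: {ai_share / rail_share:.2f}x, "
      f"vs highways: {ai_share / highway_share:.2f}x")
```

Under this assumption, the projected buildout would run on the order of $800 billion per year, modestly above the railroad era's share of output and well above the highway era's.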
Transforming Financial Structures in AI Infrastructure
The results showed a clear shift in how large AI infrastructure projects are financed. Hyperscalers such as Microsoft, Google, Amazon, and Meta still drive demand, but the scale now exceeds what they can fund directly. This has led to greater reliance on external capital, including private credit funds, data center developers, and infrastructure REITs (real estate investment trusts).
A key example is the Hyperion data center project, financed through a joint venture named Beignet, in which Meta sold an 80% stake in a $30 billion project to Blue Owl. The deal used a bankruptcy-remote structure to raise $27 billion in debt, reaching a 90% leverage ratio, far beyond the 4% to 25% typical on hyperscaler balance sheets. These structures allow firms to remain asset-light and protect equity valuations, but they also increase financial complexity and opacity. Hyperscalers hold about $970 billion in lease commitments, with roughly $660 billion off-balance-sheet.
External financing also comes at a higher cost. Debt for the Beignet project was priced about 100 basis points above Meta’s standard corporate borrowing, adding roughly $5 billion in interest over time. This model separates compute use from infrastructure ownership, redistributing financial risk across a broader set of institutional investors rather than eliminating it.
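The Beignet figures hold up to back-of-the-envelope arithmetic (project size, debt, and spread are from the paper; the loan term implied by the ~$5 billion cumulative interest figure is derived here and not stated in the source):

```python
# Back-of-the-envelope check on the Beignet joint-venture figures.
project_cost_bn = 30.0  # total Hyperion project cost, $ billions
debt_bn = 27.0          # bankruptcy-remote debt raised
spread_bps = 100        # premium over Meta's standard corporate borrowing

leverage = debt_bn / project_cost_bn  # 0.90, i.e. 90%

# 100 basis points on $27B of debt is about $0.27B of extra interest
# per year; the paper's ~$5B cumulative figure then implies a term
# on the order of 18-19 years (an inference, not stated in the source).
extra_interest_per_year_bn = debt_bn * spread_bps / 10_000
implied_term_years = 5.0 / extra_interest_per_year_bn

print(f"Leverage ratio: {leverage:.0%}")
print(f"Extra interest per year: ${extra_interest_per_year_bn:.2f}B")
print(f"Implied term: {implied_term_years:.1f} years")
```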
Strategic Location Choices for AI Facilities
The deployment of AI infrastructure follows a geographic strategy. The paper distinguishes between training facilities, which handle batch computations, and inference centers, which support real-time applications such as AI-driven search and video recommendations.
Training campuses, costing around $8.2 billion, are often established in regions with low-cost power and reliable grid access; securing an energy supply is the primary factor in site selection. In contrast, inference workloads require low latency and are typically located near population centers. Policy incentives, such as sales tax exemptions on IT equipment, also play a key role in attracting development.
Navigating Risks in AI Infrastructure Investment
While the AI infrastructure boom is driving economic growth, it also introduces new risks, including technological obsolescence. As the requirements for AI hardware evolve rapidly, facilities built today may not be compatible with future semiconductor technologies.
The study also highlights several financial and operational vulnerabilities. Many large data center projects rely heavily on a small group of hyperscalers, creating counterparty concentration and single-tenant exposure across the ecosystem. If AI demand weakens, hyperscalers could shift workloads from leased facilities to owned capacity, exposing outside investors to revenue pressure. Recent market behavior reflects this shift, with the beta (a measure of market sensitivity) of data center REITs rising from 0.5 to around 1.0, indicating closer alignment with the tech market.
The paper further notes that risks extend beyond tenant concentration. Delays in grid interconnection, higher electricity costs, permitting barriers, and supply-chain bottlenecks in advanced semiconductors and networking equipment could all slow deployment. It also warns that increasingly complex, securitization-like financing structures may obscure underlying exposures, underscoring the need for greater transparency around lease obligations and residual value guarantees. Effectively managing these risks will be key to sustaining long-term growth.