Paligo has published an interactive data-driven investigation that traces the infrastructure behind every AI answer:
https://paligo.net/ai-supply-game/
Here is what it walks through:
The Supply Chain
The entire AI industry rests on a handful of single points of failure that most people have never heard of. Paligo traced the full dependency chain and built a Disruption Simulator: Taiwan conflict? ASML export ban? Energy grid failure? Click one, watch it cascade.
Key Findings:
- Every advanced AI chip on earth depends on a lithography machine made by one company in the Netherlands. There is no alternative supplier.
- ASML ships around 50 EUV lithography machines a year. They are the only company that makes them. That is the entire global supply.
- TSMC manufactures over 90% of the world's most advanced semiconductors. The next-largest producer is not close.
- Paligo's disruption simulator maps the full AI chip dependency chain. A single disruption at any node cascades through the entire stack.
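The cascade behavior the simulator demonstrates can be sketched as a breadth-first walk over a dependency graph. This is a minimal illustration; the node names and edges below are a simplified assumption for the sketch, not Paligo's actual model:

```python
from collections import deque

# Hypothetical dependency graph, loosely modeled on the chain described
# above: each node lists the nodes that depend on it directly.
DEPENDENTS = {
    "ASML EUV machines": ["TSMC advanced fabs"],
    "TSMC advanced fabs": ["AI accelerators"],
    "AI accelerators": ["Data centers"],
    "Data centers": ["Model training", "Model inference"],
    "Model training": [],
    "Model inference": ["AI answers"],
    "AI answers": [],
}

def cascade(disrupted_node):
    """Breadth-first walk: everything downstream of one failed node fails too."""
    failed, queue = set(), deque([disrupted_node])
    while queue:
        node = queue.popleft()
        if node in failed:
            continue
        failed.add(node)
        queue.extend(DEPENDENTS.get(node, []))
    return failed

# A single disruption at the top of the chain takes out the whole stack.
print(sorted(cascade("ASML EUV machines")))
```

Knocking out the top node fails every layer downstream, while disrupting a mid-chain node (say, the data centers) fails only the layers above it, which is exactly the asymmetry that makes the upstream chokepoints so consequential.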
The Footprint
What keeps the model running? AI data centers are projected to consume 945 terawatt-hours (TWh) annually by 2030. Cooling alone evaporates millions of liters of water daily across major providers. Paligo tracked the energy, water, and carbon cost of keeping the infrastructure running, with live counters and real-world comparisons that put the numbers in perspective.
Key Findings:
- There are more than 11,800 data centers worldwide, and 45% of them are located in the USA.
- 945 terawatt-hours by 2030: This is the projected annual electricity consumption of AI data centers, roughly equivalent to the entire energy output of a mid-sized industrialized nation.
- Running a large language model requires keeping thousands of GPUs at stable temperatures. Cooling alone evaporates millions of liters of water per day across major providers.
- The cost of an AI answer is not measured in computational power alone. It is measured in water, electricity, and carbon, with all three scaling faster than the industry acknowledges.
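The 945 TWh figure is easier to grasp as continuous power draw. A back-of-the-envelope conversion (my own arithmetic for scale, not a number taken from the interactive):

```python
# Illustrative conversion of the projected 945 TWh/year to average power draw.
twh_per_year = 945
hours_per_year = 365 * 24                        # 8,760 hours

# Annual energy in GWh divided by hours gives the implied continuous load.
avg_gw = twh_per_year * 1000 / hours_per_year    # TWh -> GWh, then / hours
print(f"about {avg_gw:.0f} GW continuous")       # roughly 108 GW

# A large nuclear reactor delivers on the order of 1 GW, so this is
# comparable to about a hundred reactors running around the clock.
```

The point of the conversion is that an annual figure in terawatt-hours hides how relentless the load is; expressed as a steady draw, it is on the order of a hundred gigawatts, every hour of the year.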
The Intelligence
The cost of running AI at scale compounds every quarter. Training GPT-4 cost over $100 million, and serving GPT-5 requires over 200,000 GPUs. Because the model is frozen after training, it relies entirely on static knowledge: a fixed snapshot of information from its past. When asked for specific, current, or proprietary data, the model must either retrieve the information from an external source or resort to a guess based on outdated patterns.
Key Findings:
- It cost over $100 million to train GPT-4. Serving it is more expensive, and the cost compounds every quarter.
- GPT-5 requires over 200,000 GPUs to serve. The infrastructure cost of inference now exceeds the cost of training.
- Once training is complete, the model is frozen. Everything it knows is a snapshot. When asked about something current or specific, it either retrieves the answer from an external source or makes one up (a hallucination).
- The public conversation focuses on how models are trained. The more consequential cost is what it takes to keep them answering.
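The retrieve-or-guess fork described above can be sketched in a few lines. This is a toy illustration, not Paligo's or any vendor's implementation: the knowledge base, cutoff date, and word-overlap "retrieval" are all stand-ins for real embeddings and models:

```python
KNOWLEDGE_CUTOFF = "2024-06"   # the model's frozen snapshot (assumed date)

DOCS = {  # hypothetical external, up-to-date content available for retrieval
    "release notes": "Version 4.2 shipped in March 2025.",
}

def answer(question, docs):
    """Retrieve a grounded answer if any document overlaps the question,
    otherwise fall back on the frozen training snapshot."""
    q_words = set(question.lower().replace("?", "").split())
    for title, text in docs.items():
        if q_words & set(text.lower().rstrip(".").split()):
            # Retrieval path: the answer is grounded in current content.
            return f"From {title!r}: {text}"
    # No match: only the frozen snapshot remains, which is where
    # hallucination risk lives.
    return f"Best guess from training data (cutoff {KNOWLEDGE_CUTOFF})."

print(answer("When did version 4.2 ship?", DOCS))
print(answer("What is the current roadmap?", DOCS))
```

The first question overlaps the external document and gets a grounded answer; the second finds nothing to retrieve and the system can only fall back on its stale snapshot, the same fork the findings above describe.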
The Content Layer
The content layer is the only part of the AI stack that most organizations truly control. Every other layer of this infrastructure is engineered to extraordinary precision and costs billions. The content layer, the part the model actually retrieves from, is where most organizations carry years of accumulated "content debt": conflicting PDFs, outdated wikis, fragmented documentation. AI has turned that debt into a live liability.
Key Findings:
- Models will hallucinate. But fragmented, outdated, or contradictory content makes this significantly more likely. Structured content reduces this risk.
- AI has made "content debt" visible. Paligo has demonstrated this with a side-by-side comparison of identical questions answered from structured versus fragmented documentation.
- Most organizations spend nothing on the one layer of the AI stack they actually control.
The full interactive is here: https://paligo.net/ai-supply-game/
What it shows is the gap between how simple AI feels to use and how complex and interdependent the system behind it actually is.
If you're covering AI infrastructure, energy usage, or the real-world impact behind these systems, this could serve as a useful reference or visual.