Explore AI energy consumption patterns, GPU efficiency, and renewable energy trends across global data centers.
How do energy consumption patterns, electricity source, and GPU hardware characteristics interact to determine the most energy-efficient strategies for scaling AI data centers?
AI infrastructure is growing faster than our ability to measure its impact. This project connects the dots between hardware efficiency, energy mix, and carbon output to surface where the real leverage is.
Ultimately, this project aims to identify which combination of hardware, location, and energy source produces the most energy-efficient path to scaling AI.
Power Usage Effectiveness (PUE) values are drawn from the Lawrence Berkeley National Laboratory 2024 U.S. Data Center Energy Usage Report. We use the reported range of 1.1–2.4 as representative of hyperscale through legacy enterprise data centers.
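To show how that PUE range propagates into total demand: PUE is the ratio of total facility energy to IT equipment energy, so facility energy is simply IT energy scaled by PUE. A minimal sketch (the function name and the 1 MWh figure are illustrative, not the project's code):

```python
def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Scale IT equipment energy by PUE to get total facility energy."""
    if pue < 1.0:
        raise ValueError("PUE is total/IT energy, so it cannot be below 1.0")
    return it_energy_kwh * pue

# The same 1 MWh of server/GPU load at both ends of the LBNL range:
hyperscale = facility_energy_kwh(1_000, 1.1)  # ~1,100 kWh total
legacy = facility_energy_kwh(1_000, 2.4)      # ~2,400 kWh total
print(f"Overhead gap: {legacy - hyperscale:.0f} kWh per MWh of IT load")
```

The same workload thus draws more than twice the facility energy at the legacy end of the range, which is why PUE sits alongside hardware choice in the analysis.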
GPU comparisons are limited to NVIDIA data center GPUs across four generations: Volta (V100), Ampere (A100), Ada Lovelace, and Hopper (H100). AMD GPUs and Google TPUs are excluded from the hardware efficiency analysis.
Carbon intensity by grid region uses IEA country-level averages (gCO₂/kWh). Sub-regional variation (e.g. ERCOT vs PJM within the U.S.) is not captured. This may understate variance in large, grid-diverse countries.
When data centers report renewable energy percentages, we treat these as Power Purchase Agreement-based figures, not real-time 24/7 matching. Actual hourly carbon intensity may differ from annual averages.
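The gap between annual-average and 24/7 hourly accounting can be made concrete: weight each hour's load by that hour's grid intensity instead of applying one annual figure. A toy sketch with made-up numbers:

```python
# Toy four-hour profile; all values are illustrative, not measured data.
load_kwh = [50, 150, 200, 100]      # load that peaks in high-carbon hours
intensity_g = [300, 500, 700, 500]  # hourly grid intensity, gCO2/kWh

# Hourly (24/7) matching: each hour's load x that hour's intensity.
hourly_matched_g = sum(l * i for l, i in zip(load_kwh, intensity_g))
# Annual-average accounting: total load x mean intensity.
annual_average_g = sum(load_kwh) * sum(intensity_g) / len(intensity_g)

print(hourly_matched_g / 1000)  # 280.0 kg CO2
print(annual_average_g / 1000)  # 250.0 kg CO2
```

Because the load here peaks in dirtier hours, annual averaging understates emissions by 12%; a load shifted into low-carbon hours would show the opposite bias.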
Primary sources, related work, and datasets powering this project.
Official IEA data product covering AI energy demand, data centre growth, and electricity projections by region and scenario.
Live interactive portal tracking real-time AI energy metrics, data centre capacity, and country-level consumption breakdowns.
Lawrence Berkeley National Laboratory report covering PUE ranges, total U.S. data center electricity use, and hardware efficiency trends. Source for our 1.1–2.4 PUE assumption.
Five static charts arguing that AI energy demand is growing fast but from a small base, and is currently a minor share of global emissions. Highlights geographic concentration (Ireland, Virginia) and uncertainty in projections depending on AI adoption assumptions.
Stacked bar chart (Highcharts) showing how power splits across servers, storage, networking, cooling, and infrastructure for enterprise, colocation, and hyperscale facilities. Hover tooltips enabled.
Introduces a performance-vs-energy tradeoff metric across six architectures (AlexNet → Swin Transformer) on two NVIDIA GPUs. 14 figures, 11 data tables. Measures energy via OpenZmeter, CodeCarbon, and Carbontracker. Code & data open source.
Benchmark scores across GPU generations, used to quantify gen-over-gen efficiency improvements in our hardware analysis.
Comprehensive GPU spec sheet including TDP, memory bandwidth, compute units, and release dates across NVIDIA and AMD product lines.
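One way to combine the benchmark scores with the spec-sheet TDPs above is a simple performance-per-watt ratio per generation. A sketch of that calculation; every score and wattage below is a placeholder, not a measured result:

```python
# Hypothetical (benchmark score, TDP in watts) pairs; not real benchmarks.
gpus = {
    "V100": (100, 300),
    "A100": (250, 400),
    "H100": (600, 700),
}

def perf_per_watt(score: float, tdp_w: float) -> float:
    """Efficiency proxy: benchmark score per watt of rated TDP."""
    return score / tdp_w

eff = {name: perf_per_watt(s, w) for name, (s, w) in gpus.items()}
# Gen-over-gen improvement, e.g. Ampere vs. Volta:
print(f"A100/V100 efficiency ratio: {eff['A100'] / eff['V100']:.2f}")
```

TDP is a coarse proxy for actual draw under load, so this ratio bounds rather than measures efficiency; the measured-energy studies above (OpenZmeter, CodeCarbon) close that gap.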
IEA report chapter on projected AI electricity demand through 2030, broken out by training vs. inference workloads and scenario assumptions.