NVIDIA Advocates for 800 VDC Ecosystem in AI Factories for Enhanced Efficiency and Scalability
Published on October 13, 2025

In a technical blog post dated October 13, 2025, NVIDIA outlines the need for a fundamental shift in power infrastructure to support the growing scale and power demands of modern AI workloads. The company proposes a dual-pronged approach: 800 VDC power distribution combined with integrated, multi-timescale energy storage, to address the high power density and workload volatility of AI factories. According to NVIDIA, traditional data centers are evolving into AI factories, making power infrastructure a primary design concern.
Rising power consumption in AI is driven by the pursuit of performance, enabled by high-bandwidth interconnects such as NVIDIA NVLink. As per-GPU power draw climbs, the resulting rack power density makes delivering power at traditional low voltages impractical. NVIDIA cites joint research with Microsoft and OpenAI documenting how synchronized GPU workloads can cause grid-scale power oscillations, further motivating a new approach.
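To see why synchronized workloads matter at facility scale, consider a rough back-of-the-envelope sketch. The numbers below are illustrative assumptions, not figures from NVIDIA's post: if many racks step between an idle and a peak power level in lockstep (as synchronized training phases can cause), the aggregate load swing the grid must absorb grows linearly with the fleet size.

```python
# Illustrative aggregate load swing from synchronized GPU workloads.
# All numbers are assumed for illustration (not from the source):
# a fleet of 1,000 racks stepping together between idle and peak draw.
racks = 1000
peak_kw = 120.0   # assumed per-rack peak draw
idle_kw = 30.0    # assumed per-rack idle draw

# If every rack transitions at the same moment, the grid sees one large step.
swing_mw = racks * (peak_kw - idle_kw) / 1000.0
print(f"Synchronized load step: {swing_mw:.0f} MW")
```

Even with these modest assumed per-rack figures, a synchronized transition produces a 90 MW swing, which is the kind of abrupt, repeating load step that can excite grid-scale oscillations and which buffered, multi-timescale energy storage is meant to smooth.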
The proposed solution involves transitioning to 800 VDC power distribution coupled with deep integration of energy storage. The advantages of 800 VDC include:
- Native 800 VDC end-to-end integration: Eliminates redundant conversions, improving overall power efficiency and supporting high-density GPU clusters.
- Reduced copper and cost: Allows the same wire gauge to carry 157% more power than at 415 VAC, cutting copper usage along with material and installation costs.
- Improved efficiency: A native DC architecture eliminates multiple AC-to-DC conversion steps, boosting efficiency and reducing waste heat.
- Simplified and more reliable architecture: A DC distribution system inherently simplifies the infrastructure, with fewer components and therefore fewer potential failure points.
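The intuition behind carrying more power over the same wire gauge can be sketched with a simplified per-conductor comparison. The assumptions below (equal current per conductor, unity power factor, three conductors for three-phase AC versus a two-conductor DC pair, no ampacity derating) are illustrative and are not stated in the source; under them the DC advantage comes out to roughly 67%, while NVIDIA's 157% figure presumably reflects additional factors (conductor counts, derating, power factor, topology) not detailed in this summary.

```python
# Simplified per-conductor power comparison: 415 VAC three-phase vs 800 VDC.
# Assumptions (illustrative, not from the source): equal RMS current per
# conductor, unity power factor, 3 conductors for AC, 2 for DC, no derating.
import math

current = 100.0  # amps per conductor (illustrative)

# Three-phase AC: P = sqrt(3) * V_line_to_line * I, carried on 3 conductors
p_ac_per_conductor = math.sqrt(3) * 415 * current / 3

# DC: P = V * I, carried on a 2-conductor pair
p_dc_per_conductor = 800 * current / 2

ratio = p_dc_per_conductor / p_ac_per_conductor
print(f"AC power per conductor: {p_ac_per_conductor:.0f} W")
print(f"DC power per conductor: {p_dc_per_conductor:.0f} W")
print(f"DC carries {100 * (ratio - 1):.0f}% more power per conductor")
```

The qualitative takeaway holds regardless of the exact assumptions: raising distribution voltage moves more power through the same copper cross-section, which is the basis for the copper and cost savings claimed above.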