NVIDIA and Partners Gear Up for Gigawatt AI Factories with Vera Rubin Architecture

NVIDIA is preparing for the next generation of AI infrastructure with the Vera Rubin NVL144 architecture. This new open rack architecture, designed for gigawatt-scale AI factories, promises greater efficiency and scalability, and more than 50 NVIDIA MGX partners are gearing up to support it.
At the OCP Global Summit, NVIDIA unveiled specifications for the Vera Rubin NVL144 MGX-generation rack servers. These servers are designed to support the increasing demands of AI inference.
Ecosystem Support for NVIDIA Kyber
The Vera Rubin NVL144 is designed to support NVIDIA Kyber, which connects 576 Rubin Ultra GPUs. Over 20 industry partners are contributing new silicon, components, and power systems to support next-generation 800 VDC data centers built around the NVIDIA Kyber rack architecture.
Foxconn is constructing a 40-megawatt data center in Taiwan, known as Kaohsiung-1, specifically designed for 800 VDC power delivery. CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure, and Together AI are also designing for 800 VDC data centers.
Vertiv's Innovative Architecture
Vertiv has introduced a space-, cost-, and energy-efficient 800 VDC MGX reference architecture that provides a complete power and cooling infrastructure solution. HPE, meanwhile, announced product support for NVIDIA Kyber and NVIDIA Spectrum-XGS Ethernet scale-across technology.
Benefits of 800 VDC Infrastructure
Transitioning from traditional 415 or 480 VAC distribution to 800 VDC infrastructure offers significant advantages: increased scalability, improved energy efficiency, reduced material usage (less copper is needed to carry the same power), and more power capacity available for compute in the data center.
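The intuition behind these advantages can be shown with back-of-the-envelope arithmetic: at a fixed power load, a higher distribution voltage draws proportionally less current, and resistive losses in the conductors scale with the square of that current. The sketch below uses an assumed 1 MW load and a deliberately simplified single-conductor model (real three-phase AC distribution divides current across phases), so the numbers are illustrative rather than NVIDIA specifications.

```python
# Illustrative comparison of distribution current and resistive loss at a
# fixed power load, for a simplified single-conductor model.
# Assumptions (not NVIDIA figures): 1 MW load, ideal DC/AC treated as P = V * I.

def current_amps(power_w: float, volts: float) -> float:
    """Current drawn at a given voltage for a fixed power load (I = P / V)."""
    return power_w / volts

POWER_W = 1_000_000  # 1 MW of IT load (assumed example)

i_415 = current_amps(POWER_W, 415.0)  # legacy 415 VAC distribution (simplified)
i_800 = current_amps(POWER_W, 800.0)  # 800 VDC distribution

# Resistive loss in a conductor scales as I^2 * R, so for the same conductor
# the loss ratio is simply the square of the current ratio.
loss_ratio = (i_415 / i_800) ** 2

print(f"415 V current: {i_415:.0f} A")   # ~2410 A
print(f"800 V current: {i_800:.0f} A")   # 1250 A
print(f"Relative I^2R loss, 415 V vs 800 V: {loss_ratio:.1f}x")  # ~3.7x
```

Roughly 3.7x lower conductor loss at the same copper cross-section is the kind of headroom that lets operators either shrink busbars and cabling or push more megawatts through the same infrastructure.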
Vera Rubin NVL144: Scaling AI Factories
The Vera Rubin NVL144 MGX compute tray features an energy-efficient, 100% liquid-cooled, modular design. Its central printed circuit board midplane replaces traditional cable connections, improving assembly and serviceability.
Key features include:
- Modular expansion bays for NVIDIA ConnectX-9 800GB/s networking.
- NVIDIA Rubin CPX for massive-context inference.
NVIDIA plans to contribute the upgraded rack and compute tray innovations to the OCP consortium as an open standard. Standardized compute trays and racks let partners mix and match modular components, accelerating scaling with the architecture.
NVIDIA Kyber Rack Server Generation
The OCP ecosystem is also preparing for NVIDIA Kyber, which brings innovations in 800 VDC power delivery, liquid cooling, and mechanical design. These innovations will support the transition to the NVIDIA Kyber rack server generation, the successor to NVIDIA Oberon, which will house 576 NVIDIA Rubin Ultra GPUs in 2027.
NVIDIA Kyber is engineered to boost rack GPU density, scale up network size, and maximize performance for large-scale AI infrastructure. It is positioned to become a foundational element of hyperscale AI data centers, delivering superior performance, efficiency, and reliability for generative AI workloads.
NVIDIA NVLink Fusion Ecosystem
NVIDIA NVLink Fusion is gaining traction, enabling companies to integrate their semi-custom silicon into optimized data center architectures. Intel and Samsung Foundry are joining the NVLink Fusion ecosystem, allowing AI factories to scale up quickly for demanding workloads.
More than 20 NVIDIA partners are helping deliver rack servers with open standards, paving the way for future gigawatt AI factories.