As AI rack densities surge beyond 100 kW and next-generation 1 MW platforms approach commercialization, Enteligent has released a new technical white paper revealing that the next major constraint on AI-scale data center growth lies inside the rack itself. The study finds that pairing 800VDC facility distribution with 50VDC rack-level conversion eliminates the final AC bottleneck that limits electrical efficiency, cooling performance, and revenue density in high-performance AI environments.
Titled “800VDC-to-50VDC Power Delivery Architecture: Completing the DC-Native Power Stack for AI-Scale Data Centers,” the paper expands on Enteligent’s earlier research demonstrating the efficiency and CapEx benefits of 800VDC facility‑level power delivery. As AI workloads push infrastructure toward multi‑hundred‑megawatt and gigawatt‑scale deployments, the company argues that legacy AC architectures have reached their practical boundaries.
“AI-scale power demand has outgrown legacy AC infrastructure; DC-native architectures finally remove the bottleneck.”
— Sean Burke, CEO, Enteligent
“AI is fundamentally reshaping data center power requirements,” said Sean Burke, CEO of Enteligent. “While 800VDC distribution solves upstream limitations, converting it directly to a 50VDC server bus within the rack removes the final conversion bottleneck at the server level. Together, they deliver a fully DC-native architecture built for modern compute.”
Key findings from a modeled 100 kW AI rack include:
• Traditional AC architectures waste 18–28 kW in conversion losses across transformers, UPS systems, PDUs and server PSUs.
• End-to-end electrical efficiency increases to 94–95%, compared to just 78–85% for conventional AC systems.
• Eliminating 15–20 kW of heat per rack reduces cooling requirements and unlocks higher compute density.
• Revenue density improves 10x–15x within the same physical footprint.
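The loss figures above can be sanity-checked with a short calculation. This sketch assumes "conversion loss" means input power minus delivered IT power at the stated end-to-end efficiencies; the release does not spell out the accounting, so that interpretation is an assumption.

```python
# Worked check of the modeled 100 kW AI rack figures.
# Assumption: conversion loss = (input power) - (IT load), where
# input power = IT load / end-to-end efficiency.

IT_LOAD_KW = 100.0

def conversion_loss_kw(it_load_kw: float, efficiency: float) -> float:
    """Power lost upstream of the servers for a given end-to-end efficiency."""
    return it_load_kw / efficiency - it_load_kw

# Legacy AC path: 78-85% end-to-end efficiency
ac_loss_hi = conversion_loss_kw(IT_LOAD_KW, 0.78)  # ~28.2 kW
ac_loss_lo = conversion_loss_kw(IT_LOAD_KW, 0.85)  # ~17.6 kW

# DC-native path: 94-95% end-to-end efficiency
dc_loss_hi = conversion_loss_kw(IT_LOAD_KW, 0.94)  # ~6.4 kW
dc_loss_lo = conversion_loss_kw(IT_LOAD_KW, 0.95)  # ~5.3 kW

print(f"AC conversion losses: {ac_loss_lo:.1f}-{ac_loss_hi:.1f} kW")
print(f"DC conversion losses: {dc_loss_lo:.1f}-{dc_loss_hi:.1f} kW")
print(f"Heat avoided per rack: {ac_loss_lo - dc_loss_lo:.1f}-{ac_loss_hi - dc_loss_hi:.1f} kW")
```

Under this reading, the AC range works out to roughly 17.6-28.2 kW of loss (the paper's 18-28 kW) and the DC-native range to about 5.3-6.4 kW, a per-rack heat reduction broadly consistent with the 15-20 kW claim.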
Industry partners echo the significance of the findings. “Power density is revenue density, and this architecture unlocks both,” said Frank Smith, VP of Growth at Claros. “By eliminating 15 to 20 kW of wasted heat per rack, you remove one of the most persistent barriers to AI data center scaling.”
Enteligent says the 800VDC-to-50VDC approach provides a unified electrical platform suitable for traditional enterprise servers, next-generation GPU compute, and emerging AI-centric architectures, supporting the scale, efficiency and economics required for the AI era.
