At Dell Technologies World in Las Vegas, Dell announced its latest line of AI acceleration servers featuring Nvidia’s cutting-edge Blackwell Ultra GPUs. Designed for enterprise-scale AI deployment, these servers promise up to four times faster training than the previous generation.
The release comes as more organisations shift from testing AI tools to building full-scale production systems. With increasing demand for reliable and powerful infrastructure, Dell is expanding its AI Factory partnership with Nvidia to meet these evolving needs.
The new PowerEdge lineup includes both air-cooled and liquid-cooled models. The XE9780 and XE9785 are tailored for standard data centers, while the XE9780L and XE9785L use direct-to-chip liquid cooling for rack-scale deployments. These systems can scale to 192 GPUs per rack, with support for up to 256 in Dell’s IR7000 racks.
Michael Dell positioned the launch as part of a broader mission: “We’re making AI more accessible. With the Dell AI Factory and Nvidia, companies can now manage the full AI lifecycle at any scale—from training to deployment.”
Performance Meets Infrastructure Demands
On paper, the technical capabilities of Dell’s AI acceleration servers are impressive. But enterprise buyers will look beyond the specs. Pricing details remain under wraps, and these high-performance systems will require significant infrastructure investments—especially for the liquid-cooled versions. Many data centers may need upgrades just to accommodate them.
Competition in the AI server space is heating up. Super Micro has carved out a significant share of the market with similar hardware offerings, though recent production cost and margin challenges may give Dell a window to compete more effectively—if it can keep pricing in check.
Jensen Huang, Nvidia’s CEO, framed the partnership as critical to the next phase of AI evolution: “AI factories are the modern infrastructure powering breakthroughs in finance, healthcare, and manufacturing. With Dell, we’re delivering the broadest lineup of Blackwell AI systems—from the cloud to the edge.”
Beyond Hardware: Dell’s AI Acceleration Ecosystem
Dell’s AI strategy doesn’t stop at servers. The company has built a full ecosystem to support enterprise AI workloads—networking, storage, software, and services.
New PowerSwitch SN5600 and SN2201 switches—built on Nvidia’s Spectrum-X platform—join Nvidia Quantum-X800 InfiniBand for up to 800 Gbps throughput. The ObjectScale data platform now supports Nvidia BlueField-3 and Spectrum-4 integration for enhanced AI data management.
Dell is also bundling Nvidia’s AI Enterprise software suite, including NIM and NeMo microservices and Blueprints, to streamline AI development. For organisations lacking in-house AI expertise, Dell offers Managed Services to help with monitoring, reporting, and operations.
Availability rolls out in phases:
- Air-cooled PowerEdge XE9780 and XE9785 with HGX B300 GPUs: H2 2025
- Liquid-cooled XE9780L and XE9785L: Later in 2025
- PowerEdge XE7745 with RTX Pro 6000: July 2025
- PowerEdge XE9712 with GB300 NVL72: H2 2025
Dell also plans to support Nvidia’s Vera CPU and Vera Rubin platform, indicating a long-term roadmap focused on advanced AI infrastructure.
With its enterprise relationships, global footprint, and expanded hardware-software stack, Dell is positioning itself not just as a hardware supplier, but as a full-stack AI infrastructure partner. Still, long-term success depends on proving that its systems can deliver real, measurable business value—and securing consistent GPU supply from Nvidia during a period of unprecedented demand.