
AMD fires back at Nvidia with new Helios AI racks


AMD targets rack-scale AI

Advanced Micro Devices (AMD) has unveiled its “Helios” rack-scale AI reference system as a direct challenge to Nvidia’s dominance in AI infrastructure. The platform is built around open standards and aimed at next-generation data centres.

AMD says Helios integrates Instinct GPUs, EPYC CPUs, and Pensando networking to support frontier AI workloads, marking a strategic move toward offering full rack-scale infrastructure.


Built on open rack standards

Helios uses the Open Compute Project Open Rack Wide form factor that Meta contributed to OCP, and AMD presented Helios as a reference implementation of that standard.

By aligning with open standards, AMD aims to offer flexibility and interoperability against proprietary rival architectures.

This openness may appeal to hyperscalers seeking vendor-neutral deployment models. The strategy flips the script: hardware stack openness becomes a competitive advantage.


Performance leaps and memory advantage

AMD says a 72-GPU Helios rack can reach about 1.4 exaFLOPS of FP8 compute and 2.9 exaFLOPS of FP4, with roughly 31 terabytes of HBM4 memory and about 1.4 petabytes per second of aggregate memory bandwidth. AMD also states that this equates to about 50% more memory capacity than Nvidia’s next-generation racks.

These specs aim to meet next-gen AI training and inference workloads, especially large language models and multi-agent systems. For AI data centres, memory bandwidth and capacity are critical bottlenecks; Helios seeks to move past them.
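Taking AMD’s rack-level claims at face value, a quick back-of-the-envelope calculation shows what they imply per GPU. This is a sketch based only on the figures quoted above (72 GPUs, 1.4 exaFLOPS FP8, 31 TB HBM4, 1.4 PB/s bandwidth), not on any published per-GPU specification:

```python
# Back-of-the-envelope check of AMD's stated Helios rack figures.
# All inputs are AMD's claims as reported, not independent measurements.
GPUS_PER_RACK = 72
FP8_EXAFLOPS = 1.4       # claimed FP8 throughput per rack
HBM4_TB = 31             # claimed total HBM4 capacity per rack
BANDWIDTH_PBPS = 1.4     # claimed aggregate memory bandwidth (PB/s)

# Per-GPU shares implied by the rack-level numbers.
fp8_per_gpu_pflops = FP8_EXAFLOPS * 1000 / GPUS_PER_RACK  # exa -> peta
hbm_per_gpu_gb = HBM4_TB * 1000 / GPUS_PER_RACK           # TB -> GB
bw_per_gpu_tbps = BANDWIDTH_PBPS * 1000 / GPUS_PER_RACK   # PB/s -> TB/s

print(f"FP8 per GPU:       ~{fp8_per_gpu_pflops:.1f} PFLOPS")
print(f"HBM4 per GPU:      ~{hbm_per_gpu_gb:.0f} GB")
print(f"Bandwidth per GPU: ~{bw_per_gpu_tbps:.1f} TB/s")
```

The implied roughly 430 GB of HBM4 and ~19 TB/s per GPU illustrate why AMD frames memory capacity and bandwidth, rather than raw FLOPS, as the headline advantage.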


Key silicon components

Helios pairs AMD Instinct MI400 series GPUs, including the MI450, with sixth-generation EPYC “Venice” CPUs and Pensando “Vulcano” AI NICs that AMD says will support UALink and Ultra Ethernet style fabrics.

By supplying the full hardware stack, AMD competes with Nvidia’s vertically integrated systems. For customers, the benefit is turn-key rack-scale compute built around an open ecosystem.


Cooling and power innovations

The Helios rack supports both air-cooled and liquid-cooled configurations, leveraging quick-disconnect liquid cooling for high-density deployments. The double-wide layout allows higher power envelopes while maintaining data-centre efficiency.

As AI compute power grows, thermal and power delivery challenges become central; the Helios design addresses them explicitly.

AMD says the design addresses the power and thermal constraints that hyperscalers increasingly face as rack power densities approach the megawatt-per-rack scale.


Competitive angle versus Nvidia

With Helios, AMD directly challenges Nvidia’s dominance in AI infrastructure. While Nvidia’s rack systems rely heavily on NVLink and proprietary interconnects, AMD emphasises open fabrics (UALink) and open standards.

The announcement positions AMD not only as an alternative but as a credible rival in the AI-server market. As organisations diversify away from Nvidia, Helios may capture new design wins and partnerships.


Early customer commitments

AMD and Oracle said Oracle will be the first hyperscaler to offer a publicly available cluster using MI450 GPUs, with a planned initial availability of 50,000 GPUs beginning in Q3 2026. The early adoption by a hyperscale cloud provider signals confidence in AMD’s infrastructure vision.

For AMD, securing marquee customers strengthens credibility and accelerates ecosystem momentum. For the market, it highlights that multi-vendor cloud infrastructure is becoming standard.


Ecosystem and partner strategy

AMD is offering Helios as a reference design to OEMs and ODMs, enabling quicker time-to-market for system integrators. The open-rack approach allows partners to customise while preserving core architecture.

The ecosystem includes system makers, cloud providers, and research institutions. By fostering a broad partner network, AMD hopes to accelerate adoption and scale. It represents a shift from AMD as a chip vendor to an infrastructure platform leader.


Timing and availability roadmap

Helios is slated for volume deployment in 2026, with reference units shown at the OCP Global Summit 2025. Earlier steps include the MI350/MI355X launches in 2025, with full MI400/MI450-based Helios racks arriving later.

AMD previewed a roadmap that points to even larger systems beyond 2026 and said future MI500 family GPUs could enable higher GPU counts per rack. For buyers, 2026 becomes the window for Helios-era infrastructure planning. The timeline contrasts with Nvidia’s current generation rollout.


Impacts for AI model developers

For AI model builders and data-centre operators, Helios promises higher memory, bandwidth, and open interoperability, factors that reduce training time and expand model size. Developers may gain access to more flexible hardware stack options and avoid vendor lock-in.

This may accelerate research, improve cost efficiency, and open workflows to new architectures. Larger models and multi-agent systems stand to benefit.


Industry implications and supply-chain

Helios underscores the growing importance of rack-scale architecture over individual chips. It signals that infrastructure providers must consider power, cooling, network, and memory as integral to AI performance.

The open-rack trend may reshape supply chains, favouring vendors that deliver full-system solutions and modular scalability. AMD’s move may force competitors to adopt more open approaches or risk losing ecosystem ground.


Risks and challenges ahead

While Helios is promising, AMD faces risks: supply-chain execution, manufacturing yield, partner ramp-up, power/thermal logistics, and software ecosystem maturation (ROCm stack, networking drivers).

Customers must validate performance, reliability, and integration in production settings. Opening the rack ecosystem also increases coordination complexity. Success depends on market uptake, seamless delivery, and operational scalability.



Strategic next steps

Helios represents AMD’s bold bet on infrastructure, stepping beyond chips to the rack-scale systems that power the AI era. For data-centre operators and cloud services, the message is clear: infrastructure leadership is shifting.

For AMD, this is a concerted push to reclaim ground from Nvidia. For stakeholders, the practical questions are clear: review your refresh cycle, map vendor strategies, and evaluate whether an open-rack future aligns with your roadmap. The battle for AI compute is escalating, and Helios may prove a game-changer.


This slideshow was made with AI assistance and human editing.
