
AMD Takes Aim at Nvidia’s Blackwell With Bold New AI Chip Reveal


AMD Fires Back With the Instinct MI350 Series

AMD has officially launched the Instinct MI350 Series GPUs, promising a 4x boost in AI performance and a staggering 35x leap in inference capability over its previous generation.

This bold move targets Nvidia’s Blackwell chips head-on, aiming to shake up the company’s dominance of the AI accelerator space.

These chips, set to hit the market in Q3 2025, are part of AMD’s broader campaign to dethrone Nvidia and redefine the AI infrastructure landscape.


MI350 vs. Blackwell, The Showdown Begins

AMD claims the MI350 series delivers up to 40% more tokens per dollar compared to Nvidia’s Blackwell-based B200 GPU, a stat that will matter deeply in cost-conscious hyperscaler environments.

With performance per watt and price efficiency core to adoption decisions, AMD isn’t just chasing top-line specs but optimizing for ROI. Blackwell might have brand dominance, but AMD is aggressively framing itself as the smarter spend.
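The tokens-per-dollar framing is easy to sanity-check with back-of-envelope arithmetic. In this sketch, the 40% uplift is AMD's claim; the throughput and hourly price are hypothetical placeholders, not real benchmark numbers:

```python
# Back-of-envelope tokens-per-dollar comparison.
# Only the 40% uplift comes from AMD's claim; the absolute throughput
# and hourly price below are illustrative placeholders.

def tokens_per_dollar(tokens_per_second: float, dollars_per_hour: float) -> float:
    """Tokens generated per dollar of accelerator time."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / dollars_per_hour

# Hypothetical baseline: an accelerator serving 10,000 tok/s at $10/hour.
baseline = tokens_per_dollar(10_000, 10.0)

# AMD's claim: up to 40% more tokens per dollar than the B200 baseline.
claimed = baseline * 1.40

print(f"baseline:        {baseline:,.0f} tokens/$")
print(f"MI350 (claimed): {claimed:,.0f} tokens/$")
```

At hyperscaler volume, that ratio, not peak FLOPS, is what decides the purchase order.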


Inferencing Just Got a 35x Upgrade

The most eye-popping stat? A 35x generation-over-generation improvement in inference. That’s a massive leap for real-time AI performance, which AMD says will grow even more critical than training over the next few years.

With workloads shifting toward deployment, AMD is betting that the real action (and profit) will be in inference, and its chips are now built to lead there.


Helios Racks Are AMD’s Answer to Nvidia SuperPods

Alongside its new chips, AMD also introduced the Helios AI Rack, an integrated infrastructure platform powered by next-gen MI400 GPUs, EPYC Venice CPUs, and Pensando Vulcano NICs.

Think of it as AMD’s answer to Nvidia’s DGX SuperPods. These Helios racks aim to be plug-and-play for AI hyperscalers, offering dense, open-standard hardware with rack-scale performance optimized for massive AI workloads.


MI400 Series, Already in the Works

Even as MI350 enters the scene, AMD is already teasing the Instinct MI400 Series, designed to support the next generation of Helios AI infrastructure.

These upcoming chips are expected to deliver up to a 10x boost in inference performance, especially for complex Mixture of Experts (MoE) models. AMD is launching, scaling, and iterating in rapid succession to keep the pressure on Nvidia.


AMD’s Ecosystem Just Got Serious

CEO Lisa Su made it clear that AMD isn’t just selling chips; it’s building an ecosystem. With updates to the ROCm 7 software stack, new developer cloud offerings, and hardware integration across partners, AMD is evolving from component supplier to platform provider.

The goal? Win developer hearts, boost compatibility, and build a support structure as compelling as Nvidia’s CUDA stack.


Meta, OpenAI, and Microsoft Are Already Onboard

AMD’s partner showcase was star-studded. Meta is using the MI300X for Llama 3 and 4 inference. OpenAI’s Sam Altman praised AMD’s infrastructure contributions. Microsoft is running proprietary and open-source models on AMD hardware in Azure.

These endorsements suggest AMD is already winning the trust of the industry’s most demanding AI customers, and that’s a big win.


Crusoe Commits to 13,000 AMD Chips

Crusoe, a vertically integrated AI cloud company, announced it will purchase 13,000 MI355X chips, worth an estimated $400 million. They’re even rolling out liquid-cooled infrastructure to handle them.

That’s a massive bet on AMD’s tech, and it positions Crusoe as one of the first major players building a public AI infrastructure service powered entirely by AMD hardware.


AMD Outpaces Its Energy Goals

Beyond raw performance, AMD says it’s surpassed its own sustainability goals. The MI350 series achieved a 38x improvement in energy efficiency for AI training and HPC nodes, exceeding its 5-year target of 30x.

With electricity consumption for training becoming a real cost and climate issue, this efficiency advantage could prove just as important as speed in enterprise adoption.


Future Goal, 95% Less Energy Per Training Run

AMD has set its sights on even more dramatic sustainability improvements. The company aims to reduce electricity consumption by 95% for typical model training by 2030.

That’s not just greenwashing, it’s a strategic positioning statement for cloud providers under pressure to meet net-zero and ESG benchmarks while still scaling AI.
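The efficiency figures compose simply: a 38x node-level gain against a 30x five-year target, and a 2030 goal of 95% less energy per training run, which works out to a 20x reduction. A quick sanity check (the multipliers come from the article; the baseline energy figure is a placeholder for illustration):

```python
# Sanity-check the efficiency claims quoted above.
achieved_gain = 38     # node energy-efficiency improvement, per AMD
five_year_target = 30  # AMD's original 5-year goal
assert achieved_gain > five_year_target  # target exceeded

# "95% less energy per training run" means each run uses 5% of the
# baseline energy, i.e. a ~20x reduction.
reduction = 1 / (1 - 0.95)
print(f"95% less energy = {reduction:.0f}x reduction")

# Illustrative only: a run that once used 1,000 MWh would need ~50 MWh.
baseline_mwh = 1_000
print(f"{baseline_mwh} MWh -> {baseline_mwh * 0.05:.0f} MWh")
```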


ROCm 7 Is Built for Today’s AI Needs

The new ROCm 7 software stack is tailor-made for generative AI and high-performance computing. It delivers deeper PyTorch integration, optimized support for Hugging Face and ONNX, and a growing base of developer tools.

AMD’s biggest software challenge has always been ecosystem friction. ROCm 7 is a significant step toward parity with Nvidia’s CUDA empire.
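One concrete way that friction has shrunk: PyTorch's ROCm builds keep the familiar `torch.cuda` namespace (backed by HIP), so code written against the `"cuda"` device string generally runs on Instinct GPUs too. A minimal sketch of that detection logic, with the HIP version and availability flag stubbed out for illustration (in real code they would come from `torch.version.hip` and `torch.cuda.is_available()`):

```python
# Sketch: how a ROCm PyTorch build is detected at runtime.
# torch.version.hip is set on ROCm builds, and torch.cuda.* is backed
# by HIP, so the "cuda" device string also targets AMD Instinct GPUs.
# hip_version and accelerator_available are stand-ins here.

def select_device(hip_version, accelerator_available: bool) -> str:
    """Return a PyTorch device string; "cuda" covers ROCm GPUs too."""
    if accelerator_available:
        backend = "ROCm/HIP" if hip_version else "CUDA"
        print(f"Accelerator backend: {backend}")
        return "cuda"
    return "cpu"

print(select_device("6.0", True))   # a ROCm build with a GPU present
print(select_device(None, False))   # CPU-only fallback
```

The practical upshot is that most existing PyTorch training and inference scripts need no source changes to move onto AMD hardware.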


Developer Cloud Levels the Playing Field

AMD also announced its Developer Cloud platform, a fully managed cloud environment for AI teams to test, train, and fine-tune models on Instinct GPUs.

Think of it as AMD’s sandbox for experimentation and developer onboarding. It lowers the barrier to entry and gives researchers a place to try AMD hardware without deploying it locally first.


AMD’s Pricing Power Could Shift the Market

One of AMD’s strongest cards is price-performance. The MI350 series reportedly delivers 40% more tokens per dollar than Nvidia B200 systems.

For enterprises and hyperscalers watching their cloud bills balloon, that’s not a feature, it’s a lifeline. AMD’s ability to win on economics could be the biggest threat to Nvidia’s high-margin dominance.


A Broader Strategy Against Blackwell

Nvidia’s Blackwell platform, anchored by the B200 and GB200, is formidable. But AMD’s response isn’t built on performance alone; it leans on accessibility, openness, and affordability too.

While Nvidia builds vertical stacks and proprietary ecosystems, AMD is going horizontal: open standards, broad interoperability, and hyperscaler-friendly infrastructure. It’s a strategic divergence with long-term consequences.


Wall Street Is Watching And Reacting

Despite the big reveal, AMD’s stock dipped slightly after the announcement, a knee-jerk reaction rather than a verdict. Retail investor sentiment turned bullish, with many seeing the drop as a short-term overreaction.

With Crusoe orders locked in and more partnerships expected, AMD is building a runway that extends well beyond this quarter.

But not all the news is rosy: AMD’s Zen CPUs were just hit by a serious security flaw.


AMD Isn’t Catching Up, It’s Racing Ahead

From silicon to software, AMD is thinking long-term. The MI350 Series isn’t just a rival to Nvidia’s Blackwell; it’s part of a broader shift toward inference-optimized, cost-efficient, open AI infrastructure.

With the MI400 already in view and major players like Meta and OpenAI on board, AMD isn’t just trying to catch up anymore. It’s trying to lead.

And AMD’s momentum isn’t just in data centers: SteamOS now lands on AMD handhelds as Windows 11 slips.

What do you think about AMD’s bold move to take on Nvidia? Please share your thoughts and drop a comment.


This slideshow was made with AI assistance and human editing.
