7 min read

AMD CEO Lisa Su predicts the demand for AI accelerator chips could top $500 billion in the next few years. This explosive growth is fueled by rising infrastructure needs from leading AI developers like OpenAI and xAI.
Su’s forecast marks a significant expectation shift, as previous market estimates were much lower. The projection also highlights AMD’s growing confidence in its Instinct series chips as genuine contenders in the AI infrastructure race.

Su emphasized that inferencing, not just training, is fueling AI chip demand. As more businesses deploy models in real-time services, inference needs far outpace initial training runs.
She believes this will create long-term hardware demand across cloud, edge, and enterprise environments. AMD is positioning its MI350 series to serve this rapidly expanding use case while continuing to develop higher-performance AI accelerators.

At its “Advancing AI” conference, AMD launched the MI350 series, showcasing significant gains in compute efficiency. These chips reportedly deliver up to 40% more tokens per dollar and run inference tasks at significantly lower cost than competitors.
The MI355X variant, also introduced, delivers up to 35 times the inference performance of its predecessor. With gains of this magnitude, AMD is aggressively targeting Nvidia’s stronghold in the AI GPU market.
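As a back-of-envelope illustration of the tokens-per-dollar metric cited above, the sketch below shows how such a comparison is typically computed. All throughput and pricing numbers are hypothetical placeholders, not AMD or Nvidia figures.

```python
# Illustrative sketch of the "tokens per dollar" inference-efficiency metric.
# All numbers below are hypothetical, not vendor-published figures.

def tokens_per_dollar(tokens_per_second: float, cost_per_hour: float) -> float:
    """Tokens generated per dollar of accelerator rental time."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / cost_per_hour

# Hypothetical: chip A serves 12,000 tokens/s at $2.00/hour;
# chip B serves 10,000 tokens/s at $2.33/hour.
chip_a = tokens_per_dollar(12_000, 2.00)
chip_b = tokens_per_dollar(10_000, 2.33)

# Fractional cost-efficiency advantage of chip A over chip B.
advantage = chip_a / chip_b - 1
```

With these made-up inputs, chip A comes out roughly 40% ahead on tokens per dollar, which is the shape of the comparison AMD is making; the real figure depends on the model served, batch size, and actual cloud pricing.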

AMD announced that seven of the top 10 AI companies are already deploying Instinct accelerators. Clients include OpenAI, Meta, Microsoft, Oracle, and India’s Reliance Jio.
This early adoption signals growing trust in AMD’s performance and value, despite Nvidia’s continued dominance. Su believes broader ecosystem traction will come from open standards and developer-first infrastructure.

Su warned that the U.S. must dramatically scale manufacturing to meet the coming wave of AI chip demand. She cited the importance of public-private partnerships and praised the U.S. AI Action Plan as a strong blueprint.
With reshoring efforts underway, she urged faster permitting, subsidies, and policy support to secure supply chains and prevent foreign dependency.

Su said the market is moving toward a Cambrian explosion of chip types. She expects purpose-built accelerators beyond just GPUs to serve new workloads ranging from scientific discovery to mobile apps.
AMD’s roadmap includes modular architectures and chiplets, aiming to meet the specific needs of different AI models, rather than a one-size-fits-all approach.

Su pointed to Trump’s AI Action Plan as a welcome policy framework. The plan includes measures to ban ideologically biased AI in federal systems and promote AI exports.
It also simplifies approvals for new data centers, which is crucial for AI model deployment. For AMD, this represents a favorable climate for expanding U.S. chip operations.
Su acknowledged that making chips at TSMC’s Arizona plant will cost 5% to 20% more than in Taiwan. However, she noted the tradeoff is acceptable for greater resilience and control over domestic supply chains.
The first batch of Arizona-made chips is expected by the end of 2025, aligning with growing U.S. reshoring efforts.

AMD introduced Helios, a full AI rack-scale platform aimed at hyperscalers. Launching in 2026, Helios integrates MI350 accelerators with networking, software, and management layers.
It’s designed to provide plug-and-play performance for cloud and enterprise customers. AMD says Helios will lower the total cost of ownership compared to existing AI infrastructure.

AMD believes that democratizing access to its AI tools is the way forward. Through its developer cloud access program, engineers, startups, and researchers can test Instinct accelerators firsthand.
This move empowers innovation by reducing entry barriers for smaller players. AMD also emphasized its commitment to open-source platforms and tools like ROCm.
By embracing openness and transparency, AMD hopes to cultivate a thriving community that continuously improves and adapts its technology for broader AI applications.

While $500 billion might sound staggering, Lisa Su believes it could be the beginning. The demand for specialized hardware will keep climbing as AI becomes embedded in more industries, such as healthcare, finance, logistics, and creative sectors.
Su noted that even consumer-grade AI will require massive back-end inferencing power. With AI tools transitioning from novelty to necessity, the addressable market could eventually exceed previous forecasts.
AMD aims to capture this momentum with continuous hardware and software innovation.

Although Nvidia remains the leader in AI accelerators, AMD has made strides to level the playing field. The MI355 chips not only outperform rivals on multiple fronts, but they also promise better energy efficiency and cost-effectiveness.
Su clarified that AMD isn’t just playing catch-up; it’s actively innovating to disrupt. With major hyperscalers now testing AMD hardware and with TCO advantages on its side, AMD is positioning itself as the cost-performance champion in AI compute.

AI workloads are shifting from training massive models to running them in production, often millions of times daily. This is where inferencing becomes essential, and AMD is targeting this exact need with chips optimized for inference efficiency.
Lisa Su noted that the company’s roadmap is focused on adapting to this trend. As generative AI, chatbots, and assistants become part of everyday workflows, AMD believes inferencing accelerators will power the next growth phase in computing.

Crusoe Energy, a fast-growing cloud infrastructure startup, made headlines with a $400 million purchase of 13,000 MI355X GPUs.
This is AMD’s most significant AI chip order and validates the performance promises made at its Advancing AI event. Crusoe plans to deploy these chips across data centers in North America for inference-intensive workloads.
The deal underscores growing customer trust in AMD’s roadmap and signals that even non-hyperscalers see AMD as a formidable Nvidia alternative.

Following AMD’s announcements, several analysts raised their price targets on AMD stock. Evercore lifted its target to $144 and Roth Capital to $150, citing strong MI350 series performance and a rapidly expanding AI customer base.
Analysts noted AMD’s improved ROCm software stack and growing influence in inferencing. With hyperscalers scaling up deployments and early benchmarks looking promising, Wall Street is becoming more optimistic about AMD’s potential to narrow the AI gap with Nvidia.
The momentum might not stop there: leaks suggest AMD could have an Arm-powered surprise on the way.

Lisa Su believes that AI isn’t just another tech wave; it’s a generational opportunity that will shape the next decade of computing. Every stack layer will need smarter, more efficient chips, from data centers to edge devices.
AMD’s strategy is to win across layers: by offering scalable performance, open software, and modular design. As global demand surges and new applications emerge, AMD sees itself playing a central role in the ongoing AI revolution.
And while AMD bets big on AI, gamers aren’t sold on every move, with the RX 9060 XT sparking debate.
What do you think about the surging demand for AI chips in the coming years? Please share your thoughts and drop a comment.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.