6 min read

Amazon Web Services is no longer just renting out Nvidia chips; it is designing its own. At re:Invent 2025, the company unveiled Trainium 3, a custom accelerator built for training and running large AI models in the cloud.
The message is subtle but powerful: Amazon wants more control over its AI stack and less dependence on outside silicon suppliers.

AWS states that Trainium 3 can reduce the cost of training and serving AI models by up to 50 percent compared to equivalent GPU-based systems.
AWS reports that Trainium 3 delivers up to about 4.4 times the compute performance of Trainium 2 while improving energy efficiency by roughly 40 percent. For customers wrestling with eye-watering AI bills, that combination of speed and efficiency is hard to ignore.

Inside AWS, the mantra is clear: you do not need to beat Nvidia on every benchmark if you can win on economics. Executives openly say they must prove that their chips deliver strong performance at the right price.
That framing tells us how Trainium will be sold: as a practical, cost-efficient workhorse for everyday workloads rather than a flashy spec-sheet champion.

These chips are not just lab toys. Amazon is using Trainium to train its own Nova family of AI models and to power customer services. Anthropic is scaling Project Rainier on AWS and plans to use more than one million Trainium 2 chips to build and deploy Claude in production.
When big-name AI labs bet on your silicon, it signals to the rest of the industry that your hardware is truly production-ready.

Custom hardware is only one layer of Amazon’s strategy. On top of the hardware sits Amazon Nova, the in-house model family, and Amazon Bedrock, which provides a single interface to call models from Anthropic, Mistral, and other third-party providers, as well as Amazon’s own models.
The long-term goal seems clear: AWS aims to be the platform where companies can access chips, models, and tools within a single, integrated cloud environment.

The timing could not be better for Amazon. Nvidia’s top GPUs are incredibly powerful but also expensive and often in short supply. As AI demand explodes, many companies are wary of being locked into a single supplier.
Custom chips from hyperscalers like AWS and TPUs from Google give them negotiating leverage and technical options, which is precisely what procurement teams crave right now.

Despite all this, Nvidia is far from doomed. It still serves as the reference platform for many cutting-edge models and is expected to generate hundreds of billions of dollars in chip bookings through 2026. However, when Amazon claims it can halve AI costs with Trainium 3, traders take notice.
Nvidia shares often wobble on these announcements, reflecting genuine concern that its margins could face pressure.

The relationship is not purely adversarial. Amazon has already announced that its next chip, Trainium 4, will support Nvidia NVLink Fusion, a high-speed interconnect.
That means future AWS clusters will mix custom silicon with Nvidia hardware in the same fabric. This is a classic coopetition move; Amazon reduces its reliance, while Nvidia stays deeply embedded in the most advanced cloud architectures.

One quiet advantage Nvidia still holds is neutrality. Cloud providers such as Amazon, Google, and Microsoft compete fiercely with one another. Some customers will hesitate to build critical systems around a rival’s proprietary chips.
Nvidia, positioned in the middle as an independent supplier, can continue to sell to all sides. That makes its ecosystem resilient even as hyperscalers roll out more homegrown silicon.

By pairing Trainium with its massive cloud footprint, AWS is pitching a new industrial era for AI, not just a toolkit. Executives talk about creating the compute fabric for the AI industrial revolution.
In practice, this means tightly integrating chips, networking, storage, and models, so customers can scale quickly without having to stitch together parts from multiple vendors themselves.

Bedrock is a crucial piece in this puzzle. It enables customers to call Nova, Claude, Mistral, Llama, and other models through a single AWS interface, all running on Amazon’s infrastructure, making it much easier for developers to experiment and standardize.
For Nvidia, it is a mixed picture; it still powers many of those models, but the brand relationship increasingly flows through AWS first.

The rise of Trainium 3 and other proprietary accelerators marks a more profound shift. The old model was simple: buy GPUs from Nvidia and rent them out at a markup.
Now, the most significant cloud providers want to design, own, and optimize their own silicon. That change moves more of the value chain in-house and inevitably chips away at Nvidia’s long-term pricing power.

In the short term, Nvidia and Amazon will continue to succeed together, as many flagship models still rely on Nvidia hardware. But over the next decade, the balance could tilt.
If AWS proves its chips are good enough for most workloads at far lower cost, more customers will quietly migrate. That is how disruption often starts, not with a crash, but with a slow, relentless shift.
What do you think about Amazon’s custom chips and their challenge to Nvidia’s dominance? Share your thoughts in the comments.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.
