OpenAI dreams big: city-sized AI supercomputers

OpenAI dreams of city-sized AI

Sam Altman isn’t thinking small. His latest vision: AI supercomputers so massive, they could power entire cities. To pull it off, OpenAI is working on custom chips with Broadcom.

The goal isn’t just scale. Some executives envision increasing per-user compute dramatically, though this remains speculative. That could mean billions of AI chips worldwide, rivaling the production volume of conventional microchips. It’s an ambitious plan that stretches the imagination.

Custom chips for faster AI

OpenAI’s Broadcom deal isn’t just about building more machines. These chips are designed to make AI inference faster and cheaper, allowing OpenAI to serve users more efficiently without skyrocketing costs.

This approach mirrors what Apple did with iPhones, tightly pairing hardware and software. For AI, that could mean smaller power bills, quicker responses, and a more sustainable AI rollout.

Nvidia still leads AI training

Training giant AI models still relies on Nvidia. Its chips are versatile, handling tasks ranging from language modeling to image generation. OpenAI uses Nvidia chips to train its next-generation systems.

However, Nvidia chips alone aren’t enough for delivering AI at scale. That’s where Broadcom’s custom chips come in, boosting efficiency during the inference phase when AI meets users.

The power of inference

Inference is how AI models answer real-world questions. OpenAI’s newest models require chips optimized for high-bandwidth memory, enabling efficient runtime, lower latency, and scalable throughput across massive workloads.

OpenAI has signed letters of intent with Samsung and SK hynix to ramp up advanced memory supply, reportedly targeting up to roughly 900,000 wafer starts per month for its Stargate project. Optimized chips make AI faster, cheaper, and more energy-efficient for everyday users.

Sparsity cuts computing load

New AI models use “sparsity,” activating only parts of their neural networks to respond to queries. Older models activated large portions, wasting compute power.

With custom chips, sparsity becomes even more efficient. OpenAI can process complex tasks while consuming less electricity, a critical factor at the scale they’re aiming for.
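To see why sparsity saves so much compute, here is a minimal sketch of "top-k" routing, similar in spirit to mixture-of-experts models. All names and sizes below are invented for illustration; this is not OpenAI's actual architecture.

```python
# Illustrative sketch of sparse "top-k" routing. A dense model runs every
# expert for every query; a sparse model activates only the k best-scoring
# experts, so compute grows with k rather than with total model size.

def route_top_k(scores, k):
    """Return the indices of the k experts with the highest router scores."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def compute_cost(active_experts, cost_per_expert=1.0):
    """Compute units spent on one query: only active experts do work."""
    return len(active_experts) * cost_per_expert

# A dense model activates all 64 experts; a sparse one activates just 2.
scores = [0.1 * (i % 7) for i in range(64)]           # toy router scores
dense_cost = compute_cost(range(64))                  # 64.0 units per query
sparse_cost = compute_cost(route_top_k(scores, k=2))  # 2.0 units per query

print(dense_cost, sparse_cost)  # 64.0 2.0
```

In this toy setup the sparse model does 1/32 of the work per query, which is the basic reason sparse architectures pair so well with chips tuned for inference.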

Broadcom’s networking magic

AI chips alone aren’t enough. Broadcom supplies networking chips, switches, cables, and optical interconnects that tie all processors together into unified, high-bandwidth supercomputers, enabling scale, reliability, and fast data movement.

This connectivity is vital. It lets OpenAI scale operations, allowing multiple data centers to act as a single, cohesive AI powerhouse.

Gigawatt-scale ambition

Altman estimates OpenAI’s total AI compute already reaches about 2 gigawatts today. Expansion plans could add roughly 10 more gigawatts by 2030, dramatically scaling the company’s infrastructure.

To put it in perspective, that’s on the order of a small country’s peak demand (e.g., comparable to Portugal or Switzerland), per recent reporting. This is industrial-scale AI, and it’s only growing.
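A quick back-of-the-envelope calculation shows what these figures mean. Only the 2 GW and 10 GW numbers come from the reporting above; the household figure (roughly 1.2 kW average continuous draw per US home) is an outside assumption added here for illustration.

```python
# Rough sanity check on the gigawatt figures. Only current_gw and planned_gw
# come from the article; avg_home_kw is an assumed ballpark for the average
# continuous electricity draw of a US household.

current_gw = 2.0     # estimated OpenAI compute capacity today
planned_gw = 10.0    # additional gigawatts targeted by 2030
total_gw = current_gw + planned_gw

avg_home_kw = 1.2                                    # assumed per-home draw
homes_powered = total_gw * 1_000_000 / avg_home_kw   # 1 GW = 1,000,000 kW

print(total_gw)            # 12.0 gigawatts in total
print(int(homes_powered))  # on the order of 10,000,000 homes
```

Ten million homes is roughly the household count of a small European country, which squares with the Portugal/Switzerland comparison in the reporting.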

Trillion-dollar AI investments

Recent coverage estimates cumulative 2025 infrastructure deals approaching ~$1 trillion, depending on final build-outs and timelines. These are huge numbers, reflecting the scope of city-sized AI ambitions.

It’s an unprecedented industrial project, one Altman calls the largest joint effort in human history. The scale is difficult to imagine, but it signals AI’s growing global footprint.

AI chips mirror cloud giants

Amazon and Google have used custom chips to optimize cloud computing. OpenAI follows a similar strategy, developing hardware for AI training and inference to enhance efficiency and performance at scale.

Custom chips allow tighter hardware-software integration, making operations faster and energy usage lower. The approach could shape the future of AI infrastructure.

Inference vs training needs

Training models and running inference rely on different chips: training favors versatile, high-throughput hardware for diverse workloads, whereas inference benefits from optimized, task-specific accelerators tuned for latency, efficiency, and cost.

OpenAI’s split approach (Nvidia for training, Broadcom for inference) maximizes performance and cost efficiency, letting users access AI without bottlenecks.

Energy challenges loom

AI supercomputers demand enormous electricity. A single city-scale installation can consume gigawatts of power, straining local grids, necessitating dedicated generation, and prompting efficiency innovations to manage heat, sustainability, and costs.

By designing chips specifically for sparse models and inference, OpenAI reduces wasted energy. Efficiency is key to keeping operations sustainable and costs manageable.

Supplier diversity matters

OpenAI is diversifying chip suppliers. The Stargate site in Texas relies mainly on Nvidia for training, but AMD and Broadcom chips help with inference.

This ensures OpenAI doesn’t hit bottlenecks and maintains steady progress. Reliable access to abundant chips is crucial for scaling services globally and supporting billions of users with fast, resilient performance.

Competition heats up

xAI and Meta are also building supercomputers. xAI’s Memphis ‘Colossus’ is being expanded to roughly 1.2 gigawatts of capacity, and Meta’s Hyperion could hit 5 gigawatts eventually.

OpenAI isn’t alone; this is an accelerating AI arms race, where custom silicon, data-center scale, and relentless energy efficiency advances will ultimately define the industry’s leaders and long-term competitive advantage.

Efficiency drives innovation

Custom chips and sparsity allow faster, cheaper, and greener AI. OpenAI’s focus on inference hardware is reshaping how AI services are delivered.

Efficiency becomes a decisive competitive advantage, enabling AI companies to deliver advanced features while avoiding runaway costs, reducing energy use significantly, and preventing extra strain on already stressed power grids.

A massive industrial project

Altman calls this the biggest joint industrial project ever. AI at this scale is not just software; it’s hardware, electricity, and global logistics.

From chips to gigawatt-scale data centers, every piece must fit together perfectly. The challenge is massive, but the potential is revolutionary.

Could Malaysia’s new rules shake up the AI chip trade? Explore why the new Malaysia rules target US AI chip imports.

City-sized AI is coming soon

OpenAI’s vision may sound like science fiction, but custom chips, Broadcom deals, and gigawatt-scale infrastructure are bringing it closer to reality. The company is building systems that could serve billions of users efficiently.

This is not just software anymore. It’s a massive industrial project combining chips, electricity, and logistics. How AI supercomputers will impact everyday life is only starting to unfold, and the next decade could redefine what’s possible.

Wondering how other companies are making waves in AI innovation? Explore how Marvell thrives with custom AI silicon chips.

Imagine AI supercomputers so massive they could power entire cities. Is this innovation or insanity? How do you see the future unfolding? Share your thoughts in the comments.

This slideshow was made with AI assistance and human editing.
