8 min read

You might have heard about the Stargate project making headlines lately. It’s this enormous AI data center campus in Abilene, Texas, that was supposed to change how artificial intelligence gets built in America.
Oracle and OpenAI recently decided to cancel their plans to expand this flagship location. They were talking about growing it from about 1.2 gigawatts of power capacity to roughly 2 gigawatts, but those negotiations fell apart after months of back-and-forth. The main campus with eight buildings is still moving forward, though, and two of them are already running AI workloads as we speak.

Imagine trying to agree on building something the size of a small city, but you can’t decide who pays for what. That’s pretty much what happened here with the Stargate project’s Texas expansion. The financing terms for expanding the Abilene campus became really difficult to work out.
OpenAI also kept changing its mind about how much data center space it would actually need down the road. When you have partners shifting their numbers every few months, it becomes nearly impossible to lock in contracts for construction that costs billions.

Reports about the Abilene site said winter weather earlier this year disrupted parts of the cooling infrastructure and temporarily affected operations. Those reports described the outage as one of several factors that complicated the push for a larger expansion.
Oracle and Crusoe have publicly rejected claims that the overall Abilene buildout is in trouble. Both companies say construction remains on track, with two buildings already operational and the rest of the campus still moving forward.

It was reported that Nvidia paid a $150 million deposit to Crusoe tied to the unused Abilene expansion capacity. The reported goal was to help keep Nvidia chips, rather than AMD hardware, in the data center footprint that was originally expected to serve OpenAI.
That detail has been widely cited in follow-up coverage, but Nvidia has not publicly confirmed the arrangement itself. What is clear is that Nvidia remains deeply involved in the AI infrastructure race as customers compete for power, space, and GPU supply.

Meta has been reported as a potential tenant for the unused Abilene expansion capacity that OpenAI decided not to pursue. The discussions were described as preliminary, and no final lease agreement has been publicly announced.
If Meta ultimately takes that space, it would show just how quickly valuable AI-ready land, power, and data center capacity can change hands. The episode also shows how closely chip vendors, developers, and model companies are now tied together in the race to secure infrastructure.
Little-known fact: The Abilene campus covers more than 1,000 acres and will eventually have eight massive data center buildings when fully complete.

The Abilene campus is only one part of Stargate, the AI infrastructure effort OpenAI and its partners announced at the White House in January 2025. The project was introduced as a plan to invest up to $500 billion and build 10 gigawatts of AI infrastructure in the United States over four years.
The cancellation of one Abilene-area expansion does not mean the broader effort has collapsed. OpenAI and Oracle still have an agreement to develop 4.5 gigawatts of additional U.S. Stargate capacity, and OpenAI has publicly announced multiple other sites, including locations in Texas, New Mexico, Ohio, and Wisconsin, along with an additional site in the Midwest.

OpenAI’s infrastructure planning has been shifting as its compute needs evolve and new sites come online. Reporting on the canceled expansion said OpenAI’s requirements changed during negotiations, which helped push the company to place additional capacity elsewhere rather than enlarge Abilene.
OpenAI has also said it considered expanding Abilene further but chose other locations for that added capacity. The company is already running early workloads at Abilene, where Oracle has begun delivering Nvidia GB200 racks, while newer chip generations remain part of the industry’s forward-looking roadmap rather than a confirmed deployment plan for this site.
Little-known fact: OpenAI’s infrastructure lead, Sachin Katti, confirmed they considered expanding in Abilene but chose to put that extra capacity in other locations instead.

AI data centers are extraordinarily expensive to build, and the financing pressure is becoming more visible across the industry. Reuters reported that Oracle is planning thousands of job cuts as it manages a cash crunch tied to its data center expansion, while Oracle and OpenAI’s broader infrastructure partnership has been described as exceeding $300 billion over five years.
OpenAI has also told investors it expects compute spending of around $600 billion through 2030. That figure is lower than some earlier long-range ambitions discussed around larger infrastructure plans, which shows how quickly projected AI spending can shift as demand, hardware, and financing conditions change.

Power demand is one reason data center growth is becoming a bigger public issue in Texas. Nationally and across the state, residents and local officials have been raising concerns about how large data centers could affect electricity demand, water use, and local infrastructure.
A single gigawatt is an enormous amount of power. Using average U.S. household electricity consumption as a rough benchmark, that level of capacity is often compared with the needs of hundreds of thousands of homes, which helps explain why AI campuses are drawing so much public attention.
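To make that comparison concrete, here is a rough back-of-the-envelope sketch. It assumes an average U.S. household uses about 10,500 kWh of electricity per year (an approximate benchmark; the exact figure varies by source and year), and divides one gigawatt of continuous capacity by the resulting average household draw:

```python
# Back-of-the-envelope: how many average U.S. homes could 1 GW serve?
# Assumes ~10,500 kWh/year per household (rough benchmark, varies by source).
GIGAWATT_W = 1_000_000_000   # 1 gigawatt expressed in watts
HOURS_PER_YEAR = 8760

kwh_per_home_per_year = 10_500  # assumed average household consumption

# Convert annual kWh into an average continuous draw in watts.
avg_home_draw_w = kwh_per_home_per_year * 1000 / HOURS_PER_YEAR

homes_served = GIGAWATT_W / avg_home_draw_w

print(f"Average household draw: {avg_home_draw_w:.0f} W")
print(f"Homes served by 1 GW of continuous capacity: {homes_served:,.0f}")
```

Under those assumptions, one gigawatt works out to roughly 800,000 homes, which is why "hundreds of thousands of homes" is the comparison you keep seeing in coverage of these campuses.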

OpenAI’s infrastructure strategy has continued to evolve as the company adds more partners and more sites. Publicly, the company has emphasized a mix of approaches that includes Stargate development with Oracle and SoftBank, continued use of Microsoft Azure, and other infrastructure partnerships.
Industry coverage has also reported changes in OpenAI’s infrastructure leadership during this buildout period. Even without focusing on a single executive, the broader takeaway is clear: OpenAI is still refining how much of its future compute it wants to own directly and how much it wants partners to provide.

Large AI models rely on enormous amounts of computing capacity, and facilities like the Abilene campus are part of the infrastructure used to train and run those systems. That means data center buildouts are not just real-estate stories; they affect how quickly AI companies can add new computing power.
When projects are delayed, resized, or moved, companies may have to rebalance where they train models and serve AI products. That does not automatically translate into changes for any one consumer app, but it does shape the pace and scale of future AI services.

Oracle responded publicly after the expansion reports and said recent claims about the Abilene site were “false and incorrect.” The company also said it and Crusoe were operating in lockstep, that two buildings were already operational, and that the rest of the campus remained on schedule.
Oracle further said that its separate 4.5-gigawatt agreement with OpenAI was still moving forward. That response does not erase the reported decision not to pursue one Abilene-area expansion, but it does show Oracle wants investors and customers to distinguish that decision from the wider Stargate buildout.
If you’re wondering where all this massive AI infrastructure could lead next, take a look at “Can AI agents from OpenAI help streamline business operations?”

The AI infrastructure boom is real, but the Abilene story shows how hard it is to turn enormous AI ambitions into physical projects. Financing, hardware roadmaps, power availability, and partner coordination all shape whether expansion plans move forward on schedule.
Stargate is still active, and the broader Oracle-OpenAI buildout continues even after one Abilene-area expansion was dropped. The bigger lesson is that AI growth now depends as much on land, power, construction, and financing as it does on software and model breakthroughs.
If you want to see the kind of technology these massive projects are racing to support, take a look at “OpenAI launches model built on Cerebras chip technology.”
What do you think about all this drama between OpenAI, Oracle, and Meta? Drop your thoughts in the comments, and don’t forget to hit that like button if you enjoyed this wild ride through the world of AI data centers.
This slideshow was made with AI assistance and human editing.
Father, tech enthusiast, pilot and traveler. Trying to stay up to date with all of the latest and greatest tech trends that are shaping our daily lives.
