
OpenAI chairman Bret Taylor didn’t mince words: training your own large language model (LLM) is a great way to “burn through millions of dollars.” In a podcast appearance, Taylor likened the effort to setting piles of money on fire.
His advice to startups? Don’t do it. Unless you’re a tech giant with unlimited capital, using existing tools is smarter than building foundational AI from the ground up.

Taylor said only OpenAI, Google, Meta, and Anthropic are realistically able to train frontier models due to the astronomical costs involved.
Between GPUs, infrastructure, R&D, safety validation, and elite talent, training a competitive frontier LLM can cost tens to hundreds of millions of dollars today, and some next-generation training runs are expected to exceed $1 billion.
For most companies, even trying is financial suicide. The message is clear: unless you’re one of the elite, don’t try to compete at the base model level.
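The gap Taylor describes can be sanity-checked with a back-of-envelope sketch. Every figure below (GPU-hour price, token price, usage volume) is an illustrative assumption for the sake of the comparison, not a quoted rate:

```python
# Back-of-envelope comparison of "train your own" vs. "rent via API".
# All figures are illustrative assumptions, not real prices.

def training_cost(gpu_hours: float, price_per_gpu_hour: float) -> float:
    """Compute-only cost of one training run (ignores staff, data, failed runs)."""
    return gpu_hours * price_per_gpu_hour

def api_cost(tokens: int, price_per_million_tokens: float) -> float:
    """Cost of serving a product on a leased model, billed per token."""
    return tokens / 1_000_000 * price_per_million_tokens

# Hypothetical frontier run: 10M GPU-hours at $2/GPU-hour = $20M, compute alone.
train = training_cost(10_000_000, 2.0)

# Hypothetical startup workload: 5B tokens a year at $10 per million tokens.
rent = api_cost(5_000_000_000, 10.0)

print(f"Training run (compute only): ${train:,.0f}")  # $20,000,000
print(f"One year of leased tokens:   ${rent:,.0f}")   # $50,000
```

Even with assumptions that flatter the do-it-yourself path, renting capacity for a single product comes out orders of magnitude cheaper than one training run, before counting talent, data, and retries.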

Rather than training an LLM, companies can license access to powerful models like GPT-4 via APIs. OpenAI, for instance, charges per token for developers who integrate its models into their products. Open-source models offer another route.
Taylor emphasized that this approach lowers entry barriers, speeds up development, and lets startups focus on building differentiated products without spending a fortune on infrastructure.

Because of the enormous costs and complex logistics, there’s no real “indie data center” movement for training foundational models. AI development is consolidating rapidly, with most new LLMs coming from a few major players.
This consolidation creates risk but also opportunity. It opens the door for others to build creative applications, rather than reinventing the AI wheel at a massive cost.

Some global players are proving that leaner alternatives can still make waves. China’s DeepSeek used a less resource-intensive approach to build its R1 model and corresponding chatbot.
It topped the App Store charts, briefly outpacing ChatGPT. The model’s success sparked debate over whether tech giants are overengineering and overspending for marginal gains. So while Taylor’s warning holds weight, it’s not an absolute rule.

Taylor warned that custom-built models depreciate quickly. The AI landscape moves so fast that what’s cutting-edge today may be obsolete next quarter.
If you sink millions into training your own LLM, you may find it overtaken by open-source or leased models before you even hit the market. That’s a significant risk for startups with finite capital and pressure to ship fast.

Instead of building base models, Taylor recommended targeting the tools market: apps, agents, and services powered by AI. He likened these to “pickaxes in a gold rush.”
The idea is simple: sell the picks and shovels rather than dig the mine. These tools address real user needs today and can scale rapidly by riding the infrastructure the AI giants have already built.

Taylor sees the future of software in “agent companies,” startups that use AI to create autonomous agents that perform tasks. He compared this to the SaaS boom of the early 2010s.
These companies won’t need to build their own models; they can create highly valuable businesses by packaging AI into user-friendly solutions. The opportunity lies not in the core model, but in its application.

Training your own LLM brings more than financial risk; it opens you to scrutiny around bias, misinformation, data handling, and safety. Major AI labs have the teams and experience to navigate these challenges.
For smaller companies, leasing a model means offloading those headaches to someone else, reducing the chance of regulatory blowback while shipping an AI-powered product.

Let’s be honest: Taylor’s advice is also self-serving. OpenAI, where he serves as chairman, profits from developers using its API rather than competing directly. Encouraging startups to build on its tech also helps OpenAI dominate the application layer.
Still, the advice isn’t wrong; it’s just layered. Giants want to own the rails and build them faster than anyone else.

The cost of training AI models continues to climb. State-of-the-art chips like Nvidia’s H100 are scarce and expensive to operate, and custom data centers cost hundreds of millions of dollars.
And once trained, models need to be continually fine-tuned and served at scale. Even mid-sized companies struggle to manage these costs, making DIY AI increasingly unrealistic.

It’s not just about machines; it’s about minds. Training top-tier AI models requires elite research and engineering talent. Companies like Meta have reportedly offered $100 million signing bonuses to poach OpenAI employees.
Competing for that kind of talent is impossible for startups without mega-capital. You can’t train the model if you can’t attract or keep the team.

OpenAI has scrapped or delayed some of its most ambitious model rollouts, citing safety and performance concerns. A recently planned open-weight model was postponed indefinitely.
And its attempted acquisition of coding startup Windsurf collapsed amid strategic disagreements. Even for the best-funded teams in the world, launching next-gen models isn’t smooth sailing. That says a lot about the uphill battle others face.

OpenAI’s open-weight model, once slated for summer 2025, illustrates a middle path: the weights are published for anyone to run, but the provider still controls training, updates, and licensing.
This hybrid approach balances transparency and control, and may offer a way for smaller players to stand out without reinventing the wheel.
Still, it requires careful management and, usually, a partner with serious AI chops to pull off.

Launching your own model also means dealing with thorny issues around intellectual property, data licensing, and cross-border compliance. A lawsuit or IP complaint (like the one OpenAI faced over its “io” branding) can derail even well-funded efforts.
By contrast, licensing a model off the shelf avoids these minefields and allows startups to focus on product-market fit.

The AI hype train is real and seductive. Everyone wants to claim they’re training the next GPT-level model. But Bret Taylor’s advice rings true: the winners will be those who build durable, helpful tools, not just powerful engines.
In AI’s next chapter, delivering real utility will beat bragging rights. So if you’re building, make it count.
This slideshow was made with AI assistance and human editing.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.
