6 min read

OpenAI once promised to build artificial intelligence that “safely benefits humanity.” That word, “safely,” carried weight, especially as ChatGPT became the world’s most popular AI chatbot. Now, critics say that promise looks different.
A recent IRS filing shows the company quietly removed the word from its official mission. The timing has raised eyebrows, especially as OpenAI reshapes itself into a more traditional profit-driven business.

In its 2022 IRS filing, OpenAI said its mission was to build a general-purpose AI that safely benefits humanity, unconstrained by a need to generate financial return. Safety was baked into the core sentence.
By late 2025, the updated filing described the mission as ensuring that artificial general intelligence benefits all of humanity. The reference to safety and to being unconstrained by profit was gone.

OpenAI began in 2015 as a nonprofit research lab. Its founders said discoveries would be shared openly and royalty-free, aiming to benefit society rather than generate profits.
But building advanced AI systems proved expensive. In 2019, CEO Sam Altman led the creation of a for-profit subsidiary to attract outside funding and scale development.

Microsoft invested 1 billion dollars when OpenAI formed its for-profit arm. By 2024, that total had grown to about 13 billion dollars, giving the software giant a significant financial stake.
Even so, Microsoft did not initially hold a seat on the nonprofit board. It funded the venture but lacked direct power to steer the broader mission.
Little-known fact: Delaware law requires public benefit corporations to issue a report to stockholders at least every two years, detailing objectives, measurable results, and progress toward their stated public benefits.

In late 2024, OpenAI raised 6.6 billion dollars from multiple investors. The deal included a major condition tied to its future corporate structure.
The funding would convert to debt unless OpenAI shifted into a more traditional for-profit model where investors could own shares without profit caps and potentially gain board seats.

In October 2025, OpenAI reached an agreement with the attorneys general of California and Delaware to reorganize. The company split into a nonprofit foundation and a for-profit public benefit corporation.
Public benefit corporations must consider society and the environment, not just shareholders. But their boards decide how to balance those interests and what to disclose.

The restructured nonprofit, known as the OpenAI Foundation, holds approximately 26% of the equity in the newly formed OpenAI Group PBC. Microsoft, after years of multibillion-dollar investments, owns about 27%.
The remaining 47% is held by employees and other private investors. While the Foundation no longer holds a majority economic stake, it still appoints the OpenAI Group board, so formal governance remains with the nonprofit even as outside shareholders gain more financial power and influence.

The restructuring of OpenAI quickly attracted a wave of new investment from major global players. Just two months after the new corporate structure was endorsed by the California and Delaware attorneys general, SoftBank finalized a 41 billion dollar investment, signaling strong confidence in the company’s growth and future potential.
As of early February 2026, OpenAI was also in talks for tens of billions more from major tech players. Its estimated valuation climbed above 500 billion dollars.

OpenAI is currently facing several lawsuits related to the safety of its products. According to court filings, some plaintiffs allege psychological manipulation and negligence, while others have raised claims connected to wrongful death and assisted suicide. These cases are still unfolding, but they add legal weight to an already sensitive debate about AI oversight.
Because of that backdrop, the removal of explicit safety language from the company’s mission statement carries more significance. It shifts attention to how responsibility is defined inside a fast-growing AI firm. As OpenAI expands and attracts more investors, critics argue that accountability mechanisms must evolve just as quickly.

The restructuring documents include provisions aimed at maintaining oversight and promoting safety, including new governance mechanisms focused on risk review.
The OpenAI Foundation’s board appoints the directors of the for-profit OpenAI Group PBC, and there is overlapping membership between the two boards, a structure that critics say may limit how independent that oversight feels in practice.
Little-known fact: When OpenAI first created its for-profit subsidiary in 2019, it capped investor returns at 100 times their initial investment, a structure designed to attract funding while preserving nonprofit oversight.

California Attorney General Rob Bonta said the agreement secured concessions to ensure charitable assets are used properly. He also predicted that safety would remain a top priority.
Yet the updated mission statements for both the OpenAI Foundation and the OpenAI Group no longer explicitly mention safety. With that word gone from the official language, outsiders have a much harder time determining whether protecting users and society remains a core priority.

Some observers point to other models. When the health insurer Health Net converted to for-profit status in 1992, regulators required most of its equity to move to another nonprofit foundation with majority control.
The Philadelphia Inquirer adopted a public benefit structure while remaining owned by a nonprofit institute. These examples show other ways to balance investment and mission.

OpenAI says it still views advancing AI capability, safety, and positive impact as central to its mission. On its website, the company states that building powerful systems requires progress in safety research at the same time. It frames this as one of the most important challenges of our time and says responsible deployment remains part of its long-term vision.
Still, the context around that promise has changed. With billions of dollars in new funding, a valuation above 500 billion dollars, and a revised mission statement that no longer includes the word “safely,” critics question whether the internal balance has shifted.
This slideshow was made with AI assistance and human editing.