
Microsoft says OpenAI’s AGI will need to pass an expert verification test


Microsoft draws the line for AGI

Microsoft and OpenAI said any AGI declaration will be subject to verification by an independent expert panel. However, the companies have not yet released detailed rules about how those experts will be chosen or how the verification will be conducted.

The move formalizes a system of checks that could reshape how breakthroughs in intelligent computing are announced and trusted. It sets a precedent for transparency in an era where AI progress is moving faster than regulation or public understanding.


A new chapter in the partnership

The new agreement extends Microsoft’s rights to OpenAI’s models and products through 2032 while explicitly granting OpenAI greater flexibility to pursue external partnerships and new funding arrangements.

Both sides have committed to clearer terms on data, intellectual property, and profit sharing. This structure strengthens long-term cooperation while recognizing the growing influence of artificial intelligence across software, hardware, and cloud ecosystems.


Microsoft prepares for true general AI

Artificial general intelligence is commonly defined as a system that can reason, learn, and adapt across domains as flexibly as a human. No company has achieved it yet, but the announcement signals that Microsoft expects major progress soon.

By establishing verification rules early, Microsoft aims to set a precedent for responsible innovation and ensure that future claims come with verifiable evidence, not just marketing ambition or industry hype.


How verification will work

The companies said an independent expert panel with relevant expertise would assess any AGI claim, but they have not yet disclosed who will appoint the panel or what specific criteria will define AGI.

The panel’s assessment will determine when and how Microsoft’s special rights over future models are activated. By using independent experts, both companies are building a layer of credibility that has rarely existed in the world of commercial AI research.


Why Microsoft made the change

Analysts told reporters the change protects Microsoft’s massive investment and reduces the risk that premature claims of breakthrough capabilities would unsettle markets or damage public trust.

It also places Microsoft at the forefront of responsible AI development. The company wants to ensure that AGI, if achieved, benefits society broadly rather than fueling confusion or unrealistic expectations.


The growing focus on trust

If the verification process is implemented transparently, it could give users more confidence in major claims about human-level performance, but much will depend on how public and rigorous the verification steps are.

If that happens, people could have greater confidence that the systems they use at home or work have been assessed against real evidence.

This step could influence everything from digital assistants to AI-powered learning platforms, where trust and accuracy matter as much as innovation itself.


What OpenAI stands to gain

While it accepts stricter verification rules, OpenAI gains more flexibility under the new agreement. It can release certain models more independently and collaborate with partners outside Microsoft.

That creative freedom allows OpenAI to expand its research direction while maintaining a close link with one of the world’s largest computing networks. The balance reflects how both companies see shared growth in advancing general intelligence.


The money behind the mission

Microsoft now holds roughly a 27 percent stake in the restructured OpenAI for-profit entity, a position valued at about 135 billion dollars and implying a total company valuation of nearly 500 billion dollars.

For investors, it signals stability in an industry known for rapid change. Clear oversight could make AI development less speculative and more sustainable for the long term.


Future assistants may need official validation

The verification rule could change how AI evolves in consumer life. If a future assistant or home device claims true intelligence, that claim would have to be backed by independent validation.

This adds reliability to AI tools that manage schedules, write code, or control home systems. For users, that means smarter technology built on trust, not on exaggerated claims or unverified breakthroughs.


Transparency becomes the next AI standard

This agreement sends a message to every AI developer and cloud provider. Transparency and expert validation will soon become expected, not optional.

Other firms like Google and Anthropic are watching closely to see if this model improves credibility. If it succeeds, similar verification systems could become standard practice for the most advanced AI models worldwide.


Questions still waiting for answers

Despite the announcement, many details remain unresolved. Who will appoint the experts, and what criteria will define AGI? How much of the process will be public?

These questions will determine whether the verification is seen as truly independent or simply symbolic. The clarity of these answers could shape how governments and users judge the next wave of intelligent systems.

With uncertainty growing, many are now asking how long Microsoft and OpenAI can last as partners in this high-stakes AI race.


Looking ahead toward responsible AI

The Microsoft-OpenAI framework reflects a turning point for artificial intelligence. It aligns profit goals with accountability and public safety. Whether AGI arrives next year or years later, the groundwork for transparent evaluation is now in place.

This approach could guide how humanity manages its most powerful computing systems and ensure that smarter living remains built on trust, evidence, and shared progress.

As the AI landscape matures, OpenAI has also acknowledged mental health concerns among ChatGPT users, a reminder that ethical responsibility extends beyond the technology itself.
