
Big names unite against AI takeover: has AI gone too far?


Hundreds of powerful voices around the world are joining forces to demand a pause on the race toward creating machines that could outthink humans. The call warns that superintelligent AI could become too advanced to control, posing risks that go far beyond what most people realize.

What makes this move stand out is who’s behind it. Nobel laureates, military leaders, and celebrities all agree that AI is advancing faster than society can keep up with. Their message is simple: humanity should not rush into building something it might never be able to stop.


A surprising alliance forms

It’s rare to see names like Prince Harry and Steve Bannon on the same page, but this campaign made it happen. They’re joined by Apple’s Steve Wozniak, Virgin’s Richard Branson, and rapper Will.i.am, among hundreds of others calling for a global AI rethink.

The list crosses political lines and industries, showing how concern over AI’s direction now unites people who usually disagree on everything else. For once, the warning is coming not just from tech insiders, but from all corners of society.


What they want banned

The statement urges a global prohibition on developing superintelligent AI until scientists confirm it’s controllable and society supports it.

They say this isn’t about slowing progress, but about keeping control. AI is already reshaping business, art, and communication, but systems that think beyond human intelligence could trigger irreversible consequences if left unchecked.


A growing sense of urgency

The call comes at a time when AI is everywhere, from smart assistants to self-driving cars. With companies like OpenAI, Google, and Meta investing billions, even experts say progress is happening faster than expected.

Some industry leaders say AGI may be near, though estimates vary widely. OpenAI’s Sam Altman has suggested superintelligence could arrive by around 2030; Meta’s Mark Zuckerberg wrote in July 2025 that superintelligence is “now in sight.”


Who’s behind the statement

The appeal was organized by the Future of Life Institute, a nonprofit that studies global risks like nuclear weapons and AI. The group has long warned that technology is advancing without enough oversight or public debate.

Its early supporters included Elon Musk, though he’s now deep in the AI race with his own company, xAI. The group says it doesn’t take money from Big Tech and is currently funded by Ethereum co-founder Vitalik Buterin.


The man leading the charge

Anthony Aguirre, a physicist and the group’s executive director, says AI companies are moving ahead without asking if people even want human-replacing systems. He believes society should decide together how far to take these technologies.

He argues that the world has quietly accepted an AI-driven future without real public consent. In his view, the decision to build smarter-than-human machines should be a choice made by everyone, not just tech founders and investors.


Calls for public discussion

The statement’s biggest goal is to open conversation, not shut down science. Aguirre hopes it sparks debate among policymakers, scientists, and citizens worldwide.

He believes AI isn’t just a tech issue; it’s a human one. Asking “what do we actually want from AI?” could shape how governments set the rules before it’s too late.


Comparing AI to other risks

Supporters compare advanced AI to other dangerous technologies that required global cooperation, such as nuclear power. They say superintelligence could be even harder to control once it reaches a tipping point.

Like nuclear treaties, they suggest a worldwide agreement might be the only way to manage the risks. Without that, every country could end up in a race to build the most powerful AI first.


The White House reaction

Initial reports noted no official U.S. response. With no comprehensive federal AI law in place, the rapid pace of industry releases is intensifying concerns about whether the government is acting with enough urgency.

In Washington, AI regulation has been slow-moving, even as companies continue releasing new tools that reshape work, creativity, and even politics. The call for an AI ban could pressure leaders to move faster.


Public opinion divided

According to an NBC News/SurveyMonkey poll reported in June 2025, 44% of respondents said AI would make life better, 42% said it would make life worse, and 13% expected no effect.

This divide shows how uncertain people feel about where AI is headed. For every person excited about smarter tools, another fears what happens when machines start making decisions humans can’t undo.


Tech giants stay quiet

Notably, the CEOs of major AI companies such as OpenAI, Meta, and Google do not appear among the publicly named signatories so far. OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, and Elon Musk all continue working toward more advanced systems, even while acknowledging the risks.

Altman has said he expects superintelligence to appear before 2030. Musk once warned about “robots killing people,” yet he is building humanoid robots himself. And Zuckerberg recently declared that superintelligence is “now in sight.”


Recent tensions with OpenAI

The debate also comes amid tension between OpenAI and watchdog groups. The Future of Life Institute said the company even subpoenaed them earlier this month, questioning their funding sources after the group pushed for stronger AI oversight.

OpenAI called it due diligence, but critics saw it as retaliation. The clash highlights how messy the conversation around AI regulation has become, especially when ethics groups challenge powerful developers.


Experts join from all fields

Beyond celebrities, the statement includes leading scientists such as Geoffrey Hinton, Yoshua Bengio, and Nobel physicist John Mather. Religious voices like Vatican AI adviser Paolo Benanti also signed on.

Signatories span multiple countries and sectors, showing that fear over runaway AI isn’t just a Western concern. For many signers, it’s about protecting humanity, not politics or profit.


Why it matters now

AI is already writing code, diagnosing patients, and generating realistic images. If those systems evolve into something smarter than us, today’s choices will determine whether they stay tools or become threats.

This call for a pause is about slowing down to think clearly. Once AI crosses the line into superintelligence, there might be no way back.


The push for global unity

Aguirre and his team hope their statement inspires cooperation among nations, not rivalry. He says the world has a chance to act before superintelligence becomes an uncontrollable force.

They want policymakers to treat advanced AI as a shared responsibility. Like climate change or nuclear power, it’s an issue too big for one country to handle alone.



A moment to reflect

This wave of support shows just how serious the AI conversation has become. From royals to scientists, the message is clear: humanity needs to hit pause before it builds something it can’t control. Maybe this isn’t about stopping AI forever, but about steering it wisely.


Do you think this global stand will spark real change, or is it already too late for control?

This slideshow was made with AI assistance and human editing.
