Fears of superhuman AI trigger hunger strikes at Anthropic and DeepMind offices

Hunger strikes demand AI pause

Two activists, Guido Reichstadter and Michaël Trazzi, staged hunger strikes outside Anthropic’s San Francisco offices and DeepMind’s London headquarters in early September 2025, urging a pause on frontier AI development by the companies involved.

Their hunger strikes highlight growing public concern about the potential risks of superhuman AI. Michaël Trazzi, who has described himself as a former AI-safety researcher, joined the DeepMind protest in London.

Both men are urging AI leaders to consider the societal consequences of unchecked development and are willing to endure personal hardship to send a strong message about the urgent need for global coordination.

Activists fear superhuman AI risks

Reichstadter argues that rapidly advancing AI could harm society if left unregulated. He is asking Anthropic to stop all frontier development and focus on safer, more limited systems.

His protest emphasizes the urgency of slowing down AI races to prevent potentially catastrophic outcomes. Trazzi believes that collective pressure from the public and industry peers could force AI leaders to act responsibly.

By urging CEOs to commit to pauses, he hopes to create a global agreement that ensures AI progress benefits humanity without crossing dangerous thresholds.

Reichstadter survives on minimal intake

Reichstadter has said he is subsisting on water, electrolytes, and multivitamins while fasting; his posts and interviews say the fast began in early September 2025.

His resolve demonstrates the seriousness of activists’ demands and their willingness to risk personal health to highlight AI threats. The prolonged fast is also symbolic, showing that the urgency of AI oversight is not a theoretical debate.

Activists like Reichstadter hope the sacrifice will spark media attention, public discourse, and, ultimately, a reevaluation of AI companies’ development priorities.

Trazzi joins London protest

Michaël Trazzi, 29, is a former AI safety researcher concerned about the rapid pace of AI innovation. He has spent several days without food outside DeepMind’s headquarters in London, adding to the momentum generated by Reichstadter.

His participation demonstrates that younger tech experts are also alarmed by the potential risks of unchecked AI. Trazzi’s background in AI safety and software engineering gives his protest added credibility.

By calling for CEOs to coordinate on development pauses, he is emphasizing that practical steps can prevent the uncontrolled release of powerful AI systems globally.

Anthropic CEO warns of job losses

Anthropic CEO Dario Amodei has warned in interviews that AI could eliminate roughly half of entry-level white-collar jobs within a few years, a projection he says should spur policy and industry action.

This prediction underscores the social and economic challenges AI could introduce if development continues at the current pace. Activists highlight this as part of a broader threat.

By focusing on employment risks, they hope to communicate that the societal consequences of AI extend beyond technical questions and could fundamentally reshape work, income, and livelihoods across multiple industries in the coming years.

Hinton and Musk voice AI concerns

Leading AI figures like Geoffrey Hinton and Elon Musk have publicly warned about the risks of rapid AI development. Their concerns include potential misuse and loss of human control over highly capable systems in the near future.

Despite these warnings, AI companies continue aggressive development. Activists argue that public attention, combined with ethical leadership from tech giants, is necessary to slow down progress and ensure that AI tools are implemented safely, responsibly, and with long-term human interests in mind.

AI could rival human intelligence

DeepMind CEO Demis Hassabis has said he sees artificial general intelligence as plausibly emerging within about five to ten years, a timeline he discussed in recent interviews.

If achieved, such advancement could significantly change workplaces, innovation, and decision-making processes worldwide. Activists stress that without proper coordination and oversight, this timeline could create unforeseen societal and technological disruptions.

By drawing attention to the potential emergence of superhuman AI, protesters aim to influence policies that ensure these systems enhance human life rather than creating uncontrollable risks.

Global AI coordination urged

Trazzi wants global coordination among AI labs to pause the release of frontier models. His request is for CEOs to publicly commit to halting development if peers do the same, fostering collaboration instead of competition.

This strategy emphasizes that collective action, transparency, and accountability could reduce the likelihood of harmful outcomes.

Activists believe that if enough industry leaders adopt this approach, the entire AI ecosystem could shift toward responsible and ethical development practices without slowing beneficial innovations.

China’s AI strategy differs

China’s policy guidance emphasizes integrating AI across the economy and sets milestones toward an “intelligent economy” by 2035; the government prioritizes practical deployment alongside research on advanced systems.

Chinese firms like DeepSeek deploy AI in infrastructure, transportation, and city planning. In contrast to the United States, which invests heavily in research and speculative projects, China emphasizes immediate, real-world utility, showing that national strategies can vary widely in balancing innovation with societal impact.

US AI spending may exceed China’s

Some analysts argue that U.S. government and private sector AI investment substantially exceeds China’s in certain areas in recent years, though exact ratios are debated and depend on which projects are counted.

US investments focus on ambitious projects such as human-level AI, often concentrated in a handful of large private tech companies. These high-profile efforts aim for groundbreaking capabilities but may overlook scalable real-world applications.

The strategy contrasts with China’s practical deployments, reflecting different priorities between the two global AI leaders.

AI threatens traditional jobs

Multiple reports suggest AI has already displaced thousands of workers, especially in entry-level white-collar roles. The trend is expected to accelerate as systems become more capable, raising concerns about long-term employment stability.

Activists use this argument to show that AI’s societal impact extends beyond technology itself. By highlighting job losses and economic disruptions, they aim to convince leaders and the public that careful regulation is critical to prevent inequality and maintain human livelihoods while integrating AI safely.

AI misuse worries experts

DeepMind’s Hassabis warns that AI in the wrong hands could cause harm. Potential threats include repurposing powerful systems for malicious ends, which could affect individuals, companies, and nations alike.

This concern emphasizes the need for strict access controls and governance. Activists stress that by slowing development, encouraging transparency, and coordinating global policies, the AI community can prevent dangerous misuse while still promoting beneficial applications for science, medicine, and industry.

AI threatens content creators

Google’s AI Overviews and experimental AI Mode produce synthesized answers that can reduce referral clicks to publisher sites; multiple studies and publisher reports show drops in clickthrough rates, though the size of the effect varies by topic and data source.

This shift demonstrates that AI can disrupt existing economic models beyond jobs. Activists highlight such indirect consequences as further proof that AI’s societal impact is broad, and thoughtful intervention is necessary to preserve ecosystems of creators, publishers, and users in an increasingly AI-driven world.

Urgency drives personal risk

Both Reichstadter and Trazzi are willing to risk personal health to raise awareness. Their hunger strikes symbolize the urgency of global attention on AI safety before more powerful systems are released.

The personal sacrifices serve as a call to action. By attracting media coverage and public scrutiny, activists hope to pressure CEOs, policymakers, and the global community to adopt coordinated measures that ensure AI remains a tool for human advancement rather than a source of potential harm.

Superhuman AI could impact society

Hassabis highlights that advanced AI could rival humans in reasoning, creativity, and decision-making. The potential societal consequences are vast, including ethical dilemmas and challenges to governance structures.

Activists argue that preparation and regulation must begin now. By advocating for pauses and public dialogue, they aim to shape the trajectory of AI in ways that maximize benefits while minimizing risks, ensuring humanity retains control over technologies that could otherwise surpass our capabilities.

Global attention could spark change

Activists believe public pressure can influence tech leaders. A collective call for cautious AI development could foster safer research practices, delaying potentially hazardous innovations.

Their message resonates beyond offices. By engaging citizens, governments, and companies, these hunger strikes aim to promote transparency, ethical development, and global cooperation.

If enough people participate in the conversation, the balance between AI progress and societal safety could shift, ensuring the future of artificial intelligence benefits everyone while avoiding dangerous consequences.

This slideshow was made with AI assistance and human editing.
