
Google DeepMind CEO urges AI firms not to repeat the mistakes of early social media

Google DeepMind logo displayed on a cell phone

A warning from the future

Artificial intelligence is racing ahead faster than most people expected, and one of the world’s top AI leaders is waving a red flag.

Demis Hassabis, the CEO of Google DeepMind, believes the technology could change life as much as electricity did, but he also says it must be handled with extreme care.

At the Athens Innovation Summit in September 2025, Hassabis warned that the biggest danger is repeating social media's "move fast and break things" mistakes, and he urged developers and policymakers to avoid the same rush to growth without safeguards.

Man and woman using cellphones to share files

Lessons from social media

Hassabis pointed to the early years of social platforms when the motto “move fast and break things” guided decisions. That mindset brought rapid growth but also created toxic side effects that no one fully understood until it was too late.

Platforms pushed for clicks and shares instead of user well-being, which fueled polarization, misinformation, and mental health struggles. His message is that AI developers cannot afford to take the same reckless approach because the scale and impact of this technology will be much larger.

Businessman holding a foldable smartphone social media concept

The trap of engagement

Social media algorithms were designed to maximize attention by keeping people scrolling for longer periods. While that made companies rich, it did not always benefit users themselves.

Hassabis warned that if AI systems follow this same design, they could hijack people’s focus in ways that cause serious harm. He explained that the goal must be to serve individuals rather than exploit them, or the technology could amplify addiction, division, and emotional stress on a massive scale.

Two scientists working with computer powered VFX hologram of human brain with the help of AI technology

A call for scientific discipline

Rather than racing ahead blindly, Hassabis said AI companies should adopt the scientific method in how they build. That means carefully testing and understanding how these systems behave before releasing them to millions of people.

This approach would help uncover hidden risks and limit potential damage. He believes every deployment of AI should involve thorough evaluation, peer review, and safeguards, rather than rushing products into the world without preparation.

A man and artificial intelligence concept with related icon

The Athens speech

Hassabis spoke on stage with Greek Prime Minister Kyriakos Mitsotakis and outlined a vision that balances bold innovation with strict safety checks.

His words were shaped by the belief that AI carries far greater stakes than any earlier digital revolution. He emphasized that developers and policymakers must work together to ensure progress comes with careful oversight, rather than repeating the careless experiments that marked the rise of social platforms.

UVA Universiteit Van Amsterdam (University of Amsterdam) building in Netherlands

Early cracks already visible

A University of Amsterdam preprint populated a bare-bones social network with 500 LLM-powered agents (GPT-4o-mini in the reported runs) and let them interact freely.

Within days, the bots formed cliques, amplified extreme voices, and allowed small, influencer-like elites to dominate conversations. The surprising part was that this happened without ads or recommendation algorithms, suggesting that the problems might be built into the way networks reward certain kinds of interaction.
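The concentration dynamic the Amsterdam team observed can be illustrated with a toy agent-based simulation. This is a hypothetical sketch, not the paper's actual code: instead of LLM agents, it uses a simple rich-get-richer rule in which each agent boosts an existing post with probability proportional to that post's current engagement. Even with no ads and no recommender, attention piles up on a small elite.

```python
import random

def simulate(n_agents=500, rounds=50, seed=0):
    """Toy model: each round, every agent boosts one existing post,
    chosen with probability proportional to its current engagement
    (a rich-get-richer rule). No ads, no recommender -- just the
    feedback loop itself."""
    rng = random.Random(seed)
    engagement = [1] * n_agents  # each agent starts with one post
    for _ in range(rounds):
        total = sum(engagement)
        for _ in range(n_agents):
            # Preferential attachment: popular posts attract more boosts.
            r = rng.uniform(0, total)
            acc = 0
            for i, e in enumerate(engagement):
                acc += e
                if acc >= r:
                    engagement[i] += 1
                    total += 1
                    break
    return engagement

def top_share(engagement, frac=0.1):
    """Fraction of all engagement held by the top `frac` of posts."""
    ranked = sorted(engagement, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

eng = simulate()
print(f"top 10% of posts hold {top_share(eng):.0%} of engagement")
```

Running this, the top 10% of posts end up holding far more than 10% of total engagement, echoing the study's finding that elite formation can emerge from the interaction rules alone.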

Programmer or IT person in glasses reading script, programming and cybersecurity research on computer

Failed fixes in experiments

The same Amsterdam team tried six different ways to reduce the dysfunction inside the bot network. They tested chronological feeds, removing follower counts, and other design tweaks meant to calm things down.

Yet nothing fully solved the issue. The researchers concluded that the challenges went deeper than coding tricks. This raised fresh concerns that if AI begins to shape online spaces, it could carry forward the same dysfunctional habits that broke trust in social media.

A man sitting at pc using artificial intelligence converting text commands

Beyond text and chat

Artificial intelligence is not only about words on a screen. It is already shaping images, voices, and even digital characters that influence how people interact online. Virtual influencers are gaining popularity, while brands are experimenting with AI-generated faces and personalities.

Some creators worry that licensing their likeness to machines forever could weaken their careers. This shows how quickly AI is becoming embedded in cultural life, raising questions about how much control people will keep over their own identities.

Sam Altman OpenAI CEO during a speech with John Elkann Exor company CEO at technology fair seminary

Industry leaders split

Some industry leaders emphasize different risks: OpenAI's Sam Altman has argued AI can be a net positive and has remarked on how children will grow up with it, while others warn that social platforms' endless feeds already cause serious harm.

On the other hand, Reddit cofounder Alexis Ohanian believes AI could hand users more control over their online experience. These mixed opinions reveal just how divided the industry remains about the best path forward and highlight the urgency of Hassabis’s warning.

Risk word on keyboard

An echo of addiction

Hassabis cautioned that AI could create new forms of dependency if companies chase addictive designs. He compared it to the way social platforms engineered features that kept people returning again and again, often at the expense of their mental health.

With AI able to generate conversation, media, and even personal advice, the risk of over-reliance is very real. He said the challenge is ensuring AI supports people in healthy ways rather than creating traps that consume their time and attention.

AI risks and warnings hologram.

The idea of jagged intelligence

Hassabis described today’s AI systems as showing “jagged intelligence.” This means they can perform brilliantly in narrow tasks like strategy games or protein folding but fail unpredictably in other areas.

That inconsistency could be dangerous if left unchecked. A model might offer valuable insights one moment and then push harmful content the next. He stressed that developers must respect these sharp edges, testing carefully to understand both the strengths and weaknesses before scaling them globally.

Young person using a mobile phone

The risk of inequality

Another issue Hassabis raised is the possibility of AI deepening inequality. Social platforms already widened divides by amplifying certain voices and leaving others behind.

If AI systems are rolled out without guardrails, they could concentrate benefits among powerful companies and wealthy nations. He urged leaders to create frameworks that ensure fairness, so the advantages of AI are spread across society instead of magnifying existing gaps.

AI Bubble at the center of the screen and in background a manager working on a computer

International cooperation matters

Hassabis suggested that AI may need global safety frameworks similar to those used in nuclear power or aviation. These industries developed international standards because the stakes were simply too high for mistakes.

AI, with its potential to affect economies, politics, and personal lives worldwide, deserves the same level of care. He argued that without coordinated efforts, competition could push companies to cut corners, repeating the reckless cycle that fueled social media’s unchecked expansion.

DeepMind logo displayed

New ways to handle data

Behind the scenes, researchers at Google DeepMind are also exploring how to solve another growing problem: the shortage of safe training data.

In a recent paper titled "Generative Data Refinement," researchers proposed using models to rewrite or sanitize the toxic or private parts of datasets, so that more of the web can be safely reused for training. DeepMind and other labs are exploring related techniques for enlarging safe training corpora.
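The refine-rather-than-filter idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's pipeline: in the real approach a language model rewrites unsafe spans, while here a simple regex redactor stands in as the rewriter so the example is self-contained.

```python
import re

# Sketch of a data-refinement pass in the spirit of "Generative Data
# Refinement": instead of discarding documents that contain unsafe
# spans, rewrite just those spans and keep the rest of the document.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def rewrite_span(text: str) -> str:
    """Placeholder rewriter: mask email addresses. In the generative
    setting, a model would produce a safe paraphrase of the span."""
    return EMAIL.sub("[email]", text)

def refine_corpus(docs: list[str]) -> list[str]:
    """Refine rather than filter: every document is kept, but unsafe
    content inside it is rewritten before training."""
    return [rewrite_span(d) for d in docs]

docs = [
    "Contact alice@example.com for the dataset.",
    "A perfectly safe sentence.",
]
print(refine_corpus(docs))
```

The design point is the contrast with filtering: a filter would drop the first document entirely, while refinement preserves its useful content and removes only the private span.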

Artificial General Intelligence AGI

The timeline to general AI

Hassabis has warned that AGI could plausibly arrive within the next decade (sometimes summarized in recent reporting as "within five to ten years"), which is why he stresses acting now on safety and governance.

He said this possibility makes the current moment especially important. Every choice made now will influence the systems that shape industries, governments, and daily life in the near future. The balance between bold exploration and responsible caution is what will determine the outcome.

Want to know why AGI won’t stop Microsoft from keeping OpenAI close? Check out Microsoft’s push to keep OpenAI tech even after AGI arrives.

Human interact with AI artificial intelligence brain processor in concept

Building a safer future

Hassabis summed up his vision by saying AI must be built to benefit people rather than manipulate them. That means prioritizing ethics, rigorous testing, and international cooperation over speed and profits.

He believes true progress will come from foresight, not recklessness. As he put it, the challenge is to stay bold about opportunities while never losing sight of the risks.

If you’ve ever wondered how AI can solve the toughest equations but stumble on basics, check out Google DeepMind CEO says AI aces math olympiad but struggles with high school problems.

What do you think is the best way forward for AI? Let us know your thoughts in the comments.


This slideshow was made with AI assistance and human editing.

