
Senior AI workers are resigning and speaking out about risks


They’re quitting in the news

Most people quit their jobs with a quick email and maybe some goodbye cupcakes. But in the AI world, things are different this week. Researchers are quitting in the most public way possible, by writing essays in the New York Times and posting letters on social media for millions to see.

Zoë Hitzig, a researcher at OpenAI, announced her resignation the same day her company started testing ads on ChatGPT. She used one of the biggest platforms in the country to explain why she had to leave. It’s a dramatic way to quit, but she felt the topic was too important for a quiet exit.


The big fear about ads

So what got this researcher so worked up? It’s about ads coming to ChatGPT. Hitzig explained that people tell these chatbots their deepest thoughts, medical fears, relationship problems, and beliefs about God.

She worries that if a company builds ads based on that super personal information, it could be used to manipulate us in ways we don’t even understand yet. “Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent,” she wrote. We might not realize we’re being sold something based on our secret worries.


Following Facebook’s path

Hitzig pointed to Facebook as a warning. In its early years, Facebook promised users would control their data and could vote on policy changes. Those commitments slowly eroded over time.

Privacy changes that were supposed to give users more control actually ended up making private information public, according to the Federal Trade Commission. All of this happened gradually under pressure from an advertising model that rewarded engagement above all else.


Not the first to sound the alarm

This idea of a big public goodbye isn’t totally new. The article compares Hitzig’s move to Greg Smith, who quit Goldman Sachs back in 2012, with his own New York Times op-ed.

His letter captured the bad feelings people had after the big financial crisis. It was such a big deal that he ended up on 60 Minutes and wrote a best-selling book. It shows that a well-timed resignation letter can sometimes make a huge splash and get people talking about serious issues.


Another researcher warns the world is in peril

Just a day earlier, a top safety researcher from Anthropic also called it quits. Mrinank Sharma, who led a team focused on keeping AI safe, posted his goodbye note on social media for everyone to see.

In his letter, he didn’t hold back, saying that “the world is in peril.” He talked about how hard it is to actually get a big company to follow its own rules when there’s pressure to move fast and make money. “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he wrote.

Little-known fact: Before leaving, Sharma co-authored a study analyzing 1.5 million real conversations with Claude.


Quitting to become invisible

Sharma’s exit was extra interesting because of what he plans to do next. After working on some of the most advanced technology in the world, he said he’s moving back to the UK to let himself “become invisible” for a period of time.

It sounds like he needs a total break from the spotlight and the stress. He wants to step away from all the structure of a tech job and just see what happens. Sometimes, you need to get quiet to figure out what really matters.


A poetry degree?

And here’s the twist: after warning about bioterrorism and AI safety, Sharma said he might go study poetry. He wants to focus on courageous speech and thinks writing poems might be a better way to contribute right now.

It’s a pretty big jump from building safeguards for super-smart computers to reading Rumi and Rilke. But maybe he thinks that understanding human feelings through poetry is just as important as understanding the tech. He included a poem by William Stafford in his resignation letter.


Trouble at Elon Musk’s xAI

It’s not just OpenAI and Anthropic seeing people leave. At Elon Musk’s AI company, xAI, two of the original co-founders quit within 24 hours of each other this week. That means half of the company’s original 12 founders are now gone.

Tony Wu, one of the co-founders, posted a heartfelt goodbye, talking about the war rooms and battles they fought together. But he didn’t really explain why he left, leaving everyone to guess what’s going on behind the scenes.


A recalibration on the big picture

The other xAI co-founder who quit, Jimmy Ba, also kept his reasons a little mysterious. He posted that it’s time to “recalibrate my gradient on the big picture.” That’s a very techy way of saying he needs to step back and rethink everything.

Ba also gave a pretty stark warning around the same time. He said that AI systems that can improve themselves without humans could show up within just one year. That’s the kind of scenario that used to be just science fiction.


Safety vs making money

These big exits show a real tug-of-war happening inside these companies. On one side, you have researchers who are really worried about safety and ethics. On the other side, you have company leaders under pressure to make money and grow fast.

Sometimes those two sides just can’t find a middle ground. When that happens, the people worried about safety often feel like the only choice they have is to walk away, and to speak up about why, on their way out the door. 


Getting fired for speaking up?

According to MarketWatch’s summary of internal disputes at OpenAI, safety lead Ryan Beiermeister says she was fired after raising alarms over plans for ChatGPT to allow more explicit or erotic content.

OpenAI reportedly gave a different explanation for her dismissal, which she disputes. The limited public reporting makes it hard to independently verify all details of the dispute.


The godfather started this trend

This recent wave of exits follows a path blazed by a true legend in the field. Geoffrey Hinton, often called one of the godfathers of AI, left his job at Google a few years ago specifically so he could speak freely about the dangers.

He warns that AI could cause huge problems, like making it impossible for people to know what’s true anymore. Recently, he told LBC that he believes AI has achieved consciousness and displays self-preservation instincts.



Wall Street isn’t worried

Analysts who track AI investment say funding hasn’t meaningfully slowed in response to these safety-driven resignations. They argue that unless safety disputes start to hurt product quality or returns, investors are unlikely to change course.

For now, Wall Street seems more excited about the potential profits than the potential perils.


What do you think about all these AI departures? Are the researchers right to be worried?

This slideshow was made with AI assistance and human editing.
