
OpenAI robotics head departs in wake of Pentagon AI controversy


When a job just doesn’t feel right anymore

Have you ever stayed at a job even though something about it bothered you? For most of us, it’s just a feeling we push aside. But for Caitlin Kalinowski, who led important hardware projects at OpenAI, that feeling was too strong to ignore.

Kalinowski didn’t leave because she was mad at her boss or wanted more money. She left because she believed the company was moving too fast into an area that needed a lot of careful thought: working with the military.


Who is Caitlin Kalinowski, really?

Before joining OpenAI in November 2024, Caitlin Kalinowski led Meta’s augmented reality glasses effort after years of hardware work across Meta and Apple. OpenAI brought her in to lead robotics and consumer hardware, making her one of the company’s most prominent hires in hardware.

Her job was to help figure out how to put AI into physical robots, machines that could one day help around the house or in factories. So when someone with her background and reputation quits, the whole tech industry stops to pay attention.


The Pentagon deal that changed everything

So, what made Caitlin Kalinowski quit? It all started when OpenAI signed a new agreement with the US Department of Defense, often called the Pentagon. This deal would let the military use OpenAI’s advanced technology inside secure Defense Department computing systems for national security work.

On the surface, that might sound okay. But for Kalinowski and a lot of other people, it opened the door to some scary possibilities. She wasn’t against helping the military entirely, but she was worried about where this path might lead.


Two big fears about AI and the military

Caitlin Kalinowski shared exactly why she was so concerned. She pointed to two main issues that she felt needed way more discussion. The first is using AI to spy on American citizens without a judge’s approval. The second is creating weapons that could make life-or-death decisions all on their own, without a human being in control.

These aren’t small worries. They get to the heart of how we want technology to fit into our lives and our country. Kalinowski believes AI can help keep us safe, but not if it means giving up our privacy or handing over the power to kill to a machine. She wanted clear rules, and she didn’t think they were in place.


What does rushed really mean?

In her posts online, Kalinowski kept using one word to describe the Pentagon deal: “rushed.” She said the company announced the deal before it had figured out the guardrails. Think of guardrails like the safety barriers on a winding mountain road. They keep you from going over the edge.

She felt OpenAI drove onto that mountain road at top speed without building the barriers first. It wasn’t that she thought the destination was bad. She just thought the way they got there was reckless. For her, it was a problem with how the company was being run: a “governance concern,” as she called it.


OpenAI says they have clear rules

OpenAI said its Pentagon agreement includes explicit red lines for how its technology can be used. The company said its systems may not be intentionally used for “domestic surveillance of U.S. persons and nationals” or to direct autonomous weapons systems.

They also promised it wouldn’t be used to power weapons that can kill without a human’s okay. They believe they have found a way to help with national security while still sticking to their values.


It’s not just OpenAI: Anthropic said no too

This whole debate gets even more interesting when you look at what another AI company did. A company called Anthropic was actually talking to the Pentagon first about a similar deal. But in the end, Anthropic turned it down. They were worried about the same things: their technology being used for mass surveillance or in killer robots.

Because Anthropic said no, the Pentagon moved on and signed a deal with OpenAI instead. The government even called Anthropic a supply chain risk, which is a pretty serious label. This shows you how divided the tech world is right now. Some companies are willing to work with the military, and others are drawing a hard line.


Everyday people vote with their phones

This isn’t just a debate happening in fancy boardrooms. Regular people are getting involved, too. When news of the OpenAI-Pentagon deal broke, a lot of ChatGPT users got upset. And they showed it in a very modern way: they deleted the app from their phones.

Sensor Tower said U.S. ChatGPT app uninstalls surged 295% day over day on February 28, 2026, after OpenAI’s Pentagon deal drew criticism. The firm also reported that Claude’s U.S. downloads rose during the same period and that the app climbed to the top of the U.S. Apple App Store charts.

One telling detail: ChatGPT’s daily uninstall rate, which normally averages around 9%, spiked sharply right after the Pentagon news came out. That’s a lot of people making a statement with their smartphones.


The boss admits it looked sloppy

Even Sam Altman, the big boss at OpenAI, had to admit the deal announcement didn’t go well. After seeing all the backlash from employees and users, he called the way it was handled “opportunistic and sloppy” on social media. That’s a pretty honest thing for a CEO to say about his own company.

Because of the uproar, OpenAI went back and changed the agreement. They added clearer language stating that their systems wouldn’t be used to spy on Americans, specifically saying they “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”


What happens to OpenAI’s robot dreams now?

Kalinowski’s resignation removes a high-profile leader from OpenAI’s robotics and hardware efforts. Because she was hired to help lead robotics and consumer hardware, her departure is a notable setback for a part of the company focused on bringing AI into physical devices.

While the Pentagon deal might help OpenAI in one way, it might have hurt them in another by messing up their own internal projects and goals.


A bigger fight in the tech world

This whole story is really about a much bigger fight happening in Silicon Valley. For years, tech companies have been known for their “move fast and break things” attitude. But when it comes to working with the military, people are asking: what if the thing you break is trust, or privacy, or even world peace?

Caitlin Kalinowski’s resignation is a symbol of this new tension. On one side, you have the drive to grow, make money, and work with the government. On the other side, you have a group of employees and experts who are pleading for caution and clear ethics. It’s a battle that’s not going away anytime soon.


She left, but she’s not done with AI

So, what’s next for Caitlin Kalinowski? She says she’s going to take a little time for herself, which makes sense after a big decision like this. But she also made it clear she’s not done with technology. She wants to keep working on what she calls responsible physical AI.

That means she still wants to build robots and cool gadgets, but she wants to do it in a way that feels right to her. She’s looking for a path where she can innovate without compromising her values. It will be interesting to see where she ends up and what she builds next.

And if you’re curious about the kind of cutting-edge tech shaping the future of AI, take a look at OpenAI’s launch of a model built on Cerebras chip technology.


Why this story matters to you

You might be thinking, “I don’t work in tech, why should I care?” Well, the AI being built today will be in your world tomorrow. It could be in your car, your doctor’s office, or even helping to decide what news you see. The rules these companies set now will affect all of us.

When someone like Caitlin Kalinowski stands up and says “slow down,” it’s a chance for all of us to think about the kind of future we want. Do we want technology that watches us? Do we want machines that can fight wars alone? These are questions for everyone, not just the people in California. It’s our future too.

If you want to see another big twist in the AI race, check out OpenAI ends ambitious Stargate project after failed Oracle talks.

What do you think? Should AI companies work with the military? Drop a comment below and hit that like button if you’re following along.

This slideshow was made with AI assistance and human editing.


