OpenAI makes moves to address Pentagon AI controversy

Your chatbot just got caught up in a big controversy

You know how your phone seems to know everything about you? Well, imagine that same technology being used by the military. That’s essentially what happened when OpenAI, the company behind ChatGPT, signed a deal with the U.S. Department of Defense.

And let’s just say, people had feelings about it. Within days, thousands of ChatGPT users did something they never thought they’d do: they deleted the app. The backlash was so intense that OpenAI’s CEO had to step in.

OpenAI’s CEO says they rushed this

Sam Altman, the face of OpenAI, didn’t hide from the criticism. He posted on X and essentially acknowledged that OpenAI had mishandled the rollout. He called the whole thing “opportunistic and sloppy” and admitted they shouldn’t have rushed the announcement.

The timing couldn’t have been worse. OpenAI signed this deal just hours after the Pentagon stopped working with another AI company called Anthropic. To many people, it looked like OpenAI was swooping in to take their place without thinking things through.

What OpenAI actually agreed to do for the Pentagon

So what’s actually in this deal? OpenAI will let the Defense Department use its AI technology for classified military operations. Think about analyzing massive amounts of data, helping with cybersecurity, and handling administrative tasks, not necessarily robots with guns.

The keyword here is classified, which means we won’t know exactly how they’re using it. And that’s exactly what made people nervous. When you combine secret military operations with powerful AI, it’s easy to see why eyebrows went up.

The real fear? Your government is watching you

Here’s the part that really got people worked up: domestic surveillance. Critics worried that the same AI helping the military could eventually be used to spy on American citizens right here at home.

We’re not talking about some sci-fi movie. The technology exists. AI can scan through massive amounts of data, phone records, online activity, you name it, and find patterns. And once that capability is in government hands, some folks worry it’s only a matter of time before someone uses it in ways we didn’t sign up for.

People started dumping ChatGPT like crazy

The numbers tell a pretty wild story. According to data from Sensor Tower, the number of people uninstalling ChatGPT jumped by nearly 300% the day after the news broke. That’s not a typo; it was almost triple the normal rate.

Meanwhile, Anthropic’s chatbot Claude shot straight to the top of Apple’s App Store charts. It became the number one free app practically overnight. People were literally replacing one AI with another and sending a clear message: we’re watching what you do.

Remember the rival company that said no?

Here’s where the story gets interesting. The Pentagon originally worked with Anthropic, another AI company founded by former OpenAI people. But Anthropic drew a line in the sand. They refused to let their technology be used for mass surveillance or fully autonomous weapons.

The Trump administration wasn’t happy about that. Defense Secretary Pete Hegseth called Anthropic a “supply chain risk,” and soon federal agencies were told to phase out its tools. Anthropic stuck to its principles and paid the price.

Sam Altman had to go back and fix things

After watching the backlash explode online, Altman announced changes to the contract. The new language explicitly says OpenAI’s systems cannot be intentionally used for “domestic surveillance of US persons and nationals.” That’s a fancy way of saying they won’t help spy on Americans.

They also added that intelligence agencies like the NSA would need a whole new agreement before they could use OpenAI’s tools. It was damage control, plain and simple. Altman knew they had to do something to calm people down.

Fun fact: This isn’t the first time OpenAI has backtracked. In 2023, the company faced similar pressure when users worried about privacy, leading to new data controls in Europe.

Even OpenAI’s own employees were freaking out

It wasn’t just random internet users who were upset. Nearly 100 OpenAI employees joined employees from Google in signing an open letter objecting to the deal.

Think about that for a second. The very people building this technology were worried about how it might be used. They warned that the government was trying to play companies against each other, hoping one would eventually give in to demands that crossed ethical lines.

The whole red lines thing got complicated

OpenAI kept saying they had guardrails in place, more than any previous deal, actually. They claimed their technology wouldn’t be used to direct autonomous weapons. Sounds good, right? But critics pointed out that contract language allowing “all lawful purposes” could leave loopholes.

The problem is that U.S. surveillance laws still allow very broad data collection in national security contexts. So if a contract says the government can use AI for anything lawful, that might include things many Americans would find creepy. It’s the difference between what’s legal and what’s right.

This isn’t the first time tech and the military have mixed

Remember when Google got involved with Project Maven back in 2018? That was a Pentagon program using AI to analyze drone footage. Thousands of Google employees protested, and eventually the company backed away from renewing the contract.

Now we’re seeing history repeat itself with OpenAI. The difference this time? AI has gotten way more powerful in just a few years. These models can write, reason, and analyze information in ways that seemed impossible not long ago. The stakes are higher, and so are the emotions.

Fun fact: Over 4,000 Google employees signed a petition protesting Project Maven in 2018, leading to the company releasing new AI ethics principles.

The military already uses AI in surprising ways

Here’s something that might surprise you: AI is already all over modern military operations. Companies like Palantir provide data analytics tools to NATO, Ukraine, and the U.S., pulling together satellite images and intelligence reports to help make faster decisions.

The UK Ministry of Defence recently signed a £240 million contract with Palantir. NATO even uses AI to process massive amounts of data. Military leaders insist there’s always a human in the loop making final decisions, but as AI gets smarter, that line gets blurrier.

Why regular people should care about all this

You might be thinking, “I’m not in the military, so why does this matter?” Fair question. Here’s the thing: AI tools like ChatGPT are becoming part of everyday life. We use them for homework, work emails, planning vacations, you name it.

When the companies behind these tools start working with the military, it changes the relationship. It raises questions about trust. If your favorite app is helping with classified operations, what else might it be doing behind the scenes? And once surveillance capabilities exist, who decides how they get used?

If you want a closer look at the growing tensions shaping the AI world, check out how the OpenAI and Elon Musk clash just got bigger.

Where things stand now

So what’s the bottom line? OpenAI changed its contract to explicitly ban domestic surveillance and keep intelligence agencies from using its tools without new agreements. Sam Altman admitted they messed up by rushing the announcement.

But the bigger conversation isn’t going away. How much power should AI companies have? Where do we draw the line between national security and privacy? Those questions don’t have easy answers, but they’re worth asking as this technology becomes more powerful by the day.

If you’re curious about where the technology itself is heading next, take a quick look at how OpenAI launched a model built on Cerebras chip technology.

What do you think about OpenAI’s Pentagon deal? Drop your thoughts in the comments and hit that like button if you found this helpful.

This slideshow was made with AI assistance and human editing.
