
Microsoft Employees Barred From DeepSeek

Ever Wonder Why Some Apps Get Banned?

Sometimes, an app comes out of nowhere, gets super popular, and then vanishes from certain platforms. That’s what happened with DeepSeek, a Chinese AI chatbot that shot to the top of U.S. app charts earlier this year.

But now, Microsoft says its employees are completely banned from using it. The reason? It’s not just competition. It’s about privacy, propaganda, and national security.

What Makes DeepSeek So Popular, And So Problematic?

DeepSeek caught attention fast by offering an AI chatbot that could “show its work.” People liked how it explained reasoning step by step, something competitors didn’t always do. It was also open source and cheaper to run than big names like ChatGPT or Gemini.

But with that popularity came scrutiny. Tech experts and lawmakers began to worry that DeepSeek was storing too much personal information, and storing it in the wrong place. What seemed like a cool new tool suddenly became a topic in government hearings, and not in a good way.

Microsoft Steps In, “No DeepSeek For Our Employees”

Brad Smith, Microsoft’s President, made it clear during a Senate hearing: DeepSeek is off-limits for its staff. This wasn’t a quiet internal rule; he said it publicly, right in front of lawmakers.

It’s a bold move, especially coming from the world’s most valuable company. When a tech giant like Microsoft speaks out so directly, it signals that this isn’t just about tech preferences. It’s about protecting sensitive company information from potential foreign access.

Why China’s Involvement Raises Red Flags

DeepSeek’s data is stored on servers located in China. That’s not just a technical detail; it’s a legal one. Chinese law requires companies to share information with the government if asked. That’s what has people worried.

If an American user inputs personal or business info, there’s a chance it could end up in the hands of Chinese authorities. That’s not a hypothetical risk; it’s written into China’s laws. For companies working with sensitive data, even the possibility of that happening is a deal-breaker.

Banned From The Microsoft Store Too

Not only are Microsoft employees banned from using DeepSeek, but the app is also missing from the Microsoft Store. That’s a big deal. Microsoft allows many other AI chat apps on its platform, including ones from competitors like Perplexity. But DeepSeek didn’t make the cut.

Brad Smith said the company didn’t want to expose users to potential risks like propaganda or data leaks. So while you can still download DeepSeek elsewhere, don’t expect to find it on any official Microsoft device or app marketplace anytime soon. They’re drawing a clear line.

Propaganda Concerns, Not Just Paranoia

One of the biggest fears surrounding DeepSeek is the potential for subtle propaganda. Because the AI was developed in China and trained under government oversight, experts worry it could be designed to quietly promote certain views or omit sensitive topics.

Imagine asking a question and getting an answer that leaves out key facts or repeats misleading ideas. That might not seem obvious right away, but over time, it could shape how people think.

Open Source Doesn’t Mean Open Access

Even though DeepSeek is open source, meaning anyone can look at the code or run it, that doesn’t automatically make it safe. Microsoft did offer DeepSeek’s R1 model on its Azure cloud platform, but only after making changes.

The company ran intense tests, called “red teaming,” to find flaws and remove anything dangerous. They even altered the code to reduce harmful outputs. Hosting the raw model is very different from letting employees use the app version, where users’ inputs could still get sent back to China.

Red Teaming, AI’s Safety Checkpoint

Before launching DeepSeek’s model on Azure, Microsoft said it put it through “rigorous red teaming.” That means they hired experts to try and break it, pushing the AI to see what kinds of dangerous, biased, or misleading responses it might give.

Red teaming is a critical part of making AI safer. Microsoft used the results to tweak the model and strip out risky behavior. But even with those changes, they still weren’t comfortable with the full DeepSeek app.

DeepSeek Isn’t Alone In Facing Bans

Microsoft isn’t the only one saying no to DeepSeek. Government agencies like NASA and the U.S. Navy have also blocked the app from their networks. South Korea even removed it from local app stores.

The U.S. Congress is now considering a law that would ban DeepSeek from all government devices. Officials say the risks are just too high when it comes to foreign access and potential manipulation. This isn’t just about one app; it’s part of a broader effort to make sure AI tools used by Americans are safe.

Your Data Tells More Than You Think

DeepSeek’s privacy policy states it collects user inputs and device information, including IP addresses and device identifiers. That’s a lot more than most people realize. Details like how fast you type, what you hesitate on, or what you search for can reveal habits, preferences, and even stress levels.

All that data, stored on foreign servers, raises serious concerns. In today’s digital world, data is power. That’s why companies and governments are getting more careful about who gets access to it, and how that access is managed and monitored.

One Breach Can Change Everything

In January 2025, DeepSeek suffered a data breach that exposed over a million user records. While the company moved quickly to respond, the damage was done. Security experts say this incident proves that even promising tech platforms can have major weaknesses.

When people use chatbots, they often share personal or sensitive information. If that gets leaked or stolen, it can lead to identity theft, phishing attacks, or worse. For organizations like Microsoft, a breach like this reinforces why they need to stay cautious.

Not All Apps Are Created Equal

Microsoft’s ban isn’t about blocking all AI chatbots, just the ones that raise serious red flags. You can still find apps like Perplexity in the Microsoft Store. What sets DeepSeek apart is the combination of privacy concerns, propaganda risks, and its links to Chinese law.

Some apps are built with stronger safety nets. Others leave too many questions unanswered. It’s not about where the app comes from; it’s about how it handles your data, what it says, and who might be listening on the other end.

Following The Cloud Money Trail

Microsoft’s cloud business, Azure, now makes up nearly one-third of its total revenue. That’s huge. With so many companies using Azure to store sensitive data, Microsoft is under pressure to keep its infrastructure secure and compliant. Hosting risky apps or tools would threaten that trust.

So their handling of DeepSeek isn’t just about tech; it’s also about reputation and financial responsibility. When cloud services become this important, every decision counts. And that includes which AI tools they support and how they’re rolled out to users around the world.

Changing The Way Microsoft Builds Data Centers

In a sign of shifting priorities, Microsoft recently canceled hundreds of megawatts of planned data center leases in the U.S. Experts say it’s part of a strategy to build more flexible and regionally secure systems.

As global rules about data privacy get tougher, big tech companies need to be more nimble. They can’t just rely on giant hubs anymore; they need to place data closer to where it’s used and under stricter control.

The National Security Side Of AI

U.S. lawmakers are increasingly framing AI safety as a national security issue. That’s why DeepSeek keeps showing up in official reports and hearings. Congress recently called it a “profound threat” to privacy and security.

They pointed out how it collects detailed information and follows Chinese censorship laws. With the rise of AI, digital tools are becoming more than just apps; they’re becoming political and strategic assets.

Competition Still Matters, But Trust Matters More

It’s easy to think Microsoft banned DeepSeek just to protect its own AI product, Copilot. After all, the two tools compete for similar users. But Microsoft isn’t banning every rival app. Others, like ChatGPT and Perplexity, are still welcome.

That suggests the decision wasn’t only about money; it was about risk. When user trust is on the line, especially with sensitive data, Microsoft chose caution over convenience.


This slideshow was made with AI assistance and human editing.
