
OpenAI’s latest decision sparks a growing “Cancel ChatGPT” wave


OpenAI’s latest decisions have ignited a backlash online, with users claiming they are unsubscribing from ChatGPT in protest. The controversy follows Sam Altman’s pledge of OpenAI’s models to the U.S. Department of War, a move that stands in sharp contrast to Anthropic’s strict ethical stance.

Anthropic refused contracts that would allow use in autonomous weapons or mass surveillance. OpenAI, in contrast, left interpretation to government officials, prompting widespread online debate and fueling the #CancelChatGPT movement.


Anthropic takes a strong ethical stance

Anthropic, known for Claude AI, recently drew attention for refusing government contracts that violated its two “red lines”: no autonomous weapons and no mass surveillance. This bold position set it apart in a field often criticized for prioritizing profit over ethics.

The company emphasized that older models may still be preserved for research, while new deployments must follow strict ethical guidelines. Their refusal highlights a rare prioritization of safety and moral responsibility in AI development.


OpenAI sidesteps Anthropic’s red lines

Sam Altman publicly committed ChatGPT and other OpenAI models to the U.S. Department of War. Although he stated they would not be used for mass surveillance of citizens, officials clarified that use under “all lawful means” could still occur.

By leaving the final interpretation to the government, OpenAI sparked user concern. Many fear indirect surveillance could happen under laws like the Patriot Act, which allows extensive metadata collection in certain scenarios.


Online backlash intensifies

Thousands of ChatGPT users took to Reddit, Twitter, and other forums to express outrage. Many pledged to cancel subscriptions, citing concerns over OpenAI’s willingness to let government interpretation dictate AI deployment.

The incident highlights growing public awareness of AI ethics. Users are increasingly scrutinizing how AI models are used in high-stakes areas, from national security to mass surveillance.


Funding and corporate influence

OpenAI’s latest funding round valued the company at $730 billion, with backers including Amazon, SoftBank, and Nvidia. Microsoft continues collaborating while also developing its own AI systems, reinforcing its influence in the market.

The massive valuation gives OpenAI considerable power in shaping AI deployment. Critics argue that such influence comes with a responsibility to uphold ethical standards and prevent misuse in government programs.

Little-known fact: ChatGPT processes over 2.5 billion user prompts every single day, showing how intensively people around the world interact with the AI.


Risks of AI hallucinations

Even advanced AI models like ChatGPT can generate false or nonsensical outputs, known as hallucinations. When these models are used in high-stakes contexts, errors could have serious real-world consequences.

Critics warn that allowing AI systems to make decisions without strict oversight, especially in security or military contexts, could lead to unintended risks, from privacy breaches to misinterpretation of the law.


Other AI companies relax restrictions

Several major competitors have eased or narrowed earlier limits on military or surveillance applications. Google removed explicit references to bans on autonomous weapons from its AI principles while reopening the door to certain defense projects; Microsoft and Amazon have expanded defense-related AI work under “responsible use” and human-in-the-loop framing; Meta now permits national-security users to access its Llama models for some defense scenarios.

Anthropic’s contractual “red lines” on fully autonomous weapons and mass domestic surveillance still stand out in contrast, highlighting how few major AI companies are willing to set firm, written limits even under pressure from powerful government clients.


Damage control by OpenAI

OpenAI and Sam Altman responded with reassurances, claiming the Department of War would respect AI safeguards. However, explanations remain vague, leaving many users skeptical about the actual implementation of ethical boundaries.

The controversy reflects the tension between corporate ambitions, government power, and public trust. Many users continue to question whether OpenAI can prevent misuse of its technology under these conditions.

Little-known fact: When ChatGPT first launched in late 2022, it reached 1 million users in just five days and was one of the fastest-growing consumer applications in history.


Claude AI overtakes ChatGPT

Following the backlash, Anthropic’s Claude AI app surpassed ChatGPT in the app-store rankings on both Android and iOS. Claude is also available for Windows 11, gaining attention from users who prioritize ethical AI deployment.

This shift suggests that users consider corporate ethics when choosing AI tools, showing a market response to how companies handle AI safety and government partnerships.


Public trust in AI is fragile

Users are increasingly cautious about trusting AI systems. Incidents like OpenAI’s government pledges show how quickly public confidence can erode when ethical safeguards are unclear or inconsistent.

Surveys indicate many users would abandon AI platforms if they felt models were being used in ways that violate personal privacy or ethical standards, highlighting the importance of transparency.


Corporate responsibility in AI

Anthropic’s firm stance underscores how companies can shape AI ethics. Their insistence on control over deployment demonstrates a model for responsible AI, contrasting with OpenAI’s hands-off approach.

As AI becomes more powerful, corporate decisions will have far-reaching consequences, making ethical leadership a central concern for developers, investors, and regulators alike.


User engagement shapes the AI market

The #CancelChatGPT movement illustrates how user sentiment can influence corporate behavior. Popular backlash can affect subscriptions, rankings, and the adoption of competing AI platforms like Claude.

This trend shows that public opinion is increasingly a powerful force in shaping how AI tools are developed and deployed, incentivizing ethical practices in competitive markets.



Ethics matter in AI

OpenAI’s recent decisions have ignited widespread debate over ethics, transparency, and corporate responsibility in AI. The choice to allow government interpretation of AI use has led to online backlash, including the growing #CancelChatGPT movement.

Meanwhile, Anthropic’s Claude AI, which prioritizes ethical safeguards, has gained popularity, showing that users are paying attention to more than just features and performance. This episode demonstrates that ethical considerations are now central to public trust in AI.


What do you think about OpenAI sparking the Cancel ChatGPT wave? Share your thoughts.

This slideshow was made with AI assistance and human editing.
