OpenAI tightens Sora 2 rules after Hollywood deepfake backlash

OpenAI and Deepfake Rules

OpenAI tightened the rules for its video tool, Sora 2, in October 2025 after many actors complained. The change was a direct response to famous figures like actor Bryan Cranston raising concerns about deepfake videos.

A deepfake is a video made with artificial intelligence that looks like a real person doing or saying something they never did.

OpenAI strengthened its controls to make sure people cannot use someone’s image or voice without their permission. This policy update aimed to calm fears from Hollywood studios and major talent agencies.

Sora 2 made videos very realistic

Sora 2, which launched in late September 2025, made creating videos from simple text prompts much easier and more realistic than before. The new app allowed users to create clips up to 15 seconds long, or 25 seconds for paid Pro users.

Since the videos looked so real and even included synchronized sound, the risks of making fake content increased immediately.

The realism caused quick worry in Hollywood about the unauthorized use of actors’ faces and voices in clips they did not agree to.

New rules require consent to use likeness

The concern over realism led directly to new rules that require consent to use someone’s likeness, meaning their face and voice.

OpenAI stated that “all artists, performers, and individuals will have the right to determine how and whether they can be simulated.” Before this change, the system had controls that were easily bypassed, letting users create unauthorized celebrity videos.

The new “opt-in” policy means you must agree before an AI can generate your likeness, offering more control to the talent.

Deepfakes are growing very fast in 2025

The need for these new consent rules is urgent because the sheer number of deepfake videos and images is exploding in 2025.

Experts projected that the number of deepfake files shared online would skyrocket to 8 million in 2025. This is a massive increase, as only about 500,000 files were shared just two years earlier in 2023.

This rapid growth, a sixteen-fold jump in just two years, shows how quickly this powerful AI technology is spreading across the internet.
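
Using only the figures quoted above (about 500,000 files in 2023 and a projected 8 million in 2025), a quick back-of-the-envelope check of the implied growth looks like this:

```python
# Figures cited above: ~500,000 deepfake files shared in 2023,
# ~8,000,000 projected for 2025.
files_2023 = 500_000
files_2025 = 8_000_000
years = 2

fold_increase = files_2025 / files_2023            # overall multiple
annual_growth = fold_increase ** (1 / years) - 1   # compound rate per year

print(f"{fold_increase:.0f}x overall, {annual_growth:.0%} per year")
# → 16x overall, 300% per year
```

In other words, even under the conservative reading of these estimates, the volume of deepfakes is quadrupling every year.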

Deepfakes have caused big financial losses

Unfortunately, the spread of deepfakes has caused big financial losses for many people.

Deepfake technology is often used by criminals to trick people out of money, which is why it is not just a Hollywood issue. In the first quarter of 2025 alone, financial losses from deepfake-enabled fraud in North America surpassed $200 million.

These scams use highly realistic cloned voices or faces to trick people into sending money. This shows that the technology poses a serious financial risk beyond just actors worrying about their careers.

Actor Bryan Cranston helped push for change

To fight these risks, actor Bryan Cranston, known for shows like Breaking Bad, played a key role in pushing OpenAI to change its rules.

Unauthorized videos featuring his likeness appeared on the Sora 2 platform soon after its launch. He took his concerns to the Screen Actors Guild, or SAG-AFTRA, which is a big union for performers.

His public complaint helped force OpenAI to announce stronger protections against the unauthorized use of any performer’s image or voice.

Studios sued other AI video companies

The concern in Hollywood about Sora 2’s deepfakes is not new, as major companies had already sued other AI video makers. These lawsuits argue that AI companies used copyrighted movies and shows to train their programs without permission.

This legal battle shows that studios want to protect their content and the jobs of writers and actors. The conflict over Sora 2 is now the biggest example of this problem in 2025.

Sora 2 introduced a cameo feature

Even as the lawsuits were happening, OpenAI’s late-September 2025 launch of the Sora 2 mobile app included a new feature called “Cameos.”

This feature was designed to let users easily put themselves or their friends into AI-generated videos. To do this, a user must upload a short verification video with a liveness check to prove they are a real person and give consent.

While this was a security step, its launch brought more attention to how easily the system could be used to impersonate others, especially celebrities.

Deepfake detection tools are dropping

A major problem that makes deepfakes so dangerous, even with new rules in place, is that detection tools are losing effectiveness. New AI video generators like Sora 2 are improving faster than the software built to spot them.

Studies in 2025 show that defensive AI tools often lose 45-50% of their accuracy when deployed against real-world deepfakes rather than lab-generated ones, underscoring the detection gap.

Deepfakes threaten biometric security too

Falling detection rates matter beyond Hollywood because deepfakes threaten biometric security, too. They are not just an issue for actors; they are also being used to trick security systems that check your identity.

Financial companies use systems that scan a person’s face or voice to check if they are real before giving them access. Deepfakes like “face swaps” or cloned voices are now used to bypass these checks.

According to security reports, attacks using deepfakes to fool identity-verification systems jumped 704% in 2023, and they are still rising in 2025.

Actors’ union demanded better protection

Because of the threats to security and careers, the actors’ union, SAG-AFTRA, worked with top talent agencies to demand better protections after Sora 2 launched.

The union’s president condemned the initial system, which required performers to ask for their likeness to be removed, calling it a threat to the industry.

Following this pressure and the Bryan Cranston incident, OpenAI agreed to work with the union and agencies. This cooperation led to a promise from the company to strengthen its safety checks against unauthorized voice and likeness use.

New laws are being made to fight fakes

Union pressure is also fueling new legislation, as governments try to keep up with the fast pace of AI deepfakes. In the U.S., the NO FAKES Act was introduced in April 2025.

This proposal aims to make it illegal to create or share an AI-generated copy of someone’s voice or likeness without their clear permission. Other countries are also working on similar rules.

Technology makes detection easier for users

While new laws are important, technology is also stepping up: even though detecting deepfakes is hard, new tools are making it possible to prove a video is real.

Companies are adding special hidden information, called a watermark or metadata, to videos right when they are captured by a camera or phone.

Devices like the Google Pixel 10 are being built to support this Content Credentials technology. This helps users know that a video is real and was not changed by AI, making it easier to trust what they see online.
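
The idea behind capture-time metadata can be illustrated with a toy sketch. This is not the real Content Credentials (C2PA) system, which uses public-key signatures from the device maker; this simplified version substitutes a shared-secret HMAC, and the key and function names are hypothetical. The point it shows is that any edit to the video breaks verification.

```python
import hashlib
import hmac

# Hypothetical capture-time key. Real Content Credentials rely on
# public-key signatures, not a shared secret like this.
CAPTURE_KEY = b"device-secret"

def sign_at_capture(video_bytes: bytes) -> bytes:
    """Attach a tamper-evident tag the moment the video is recorded."""
    return hmac.new(CAPTURE_KEY, video_bytes, hashlib.sha256).digest()

def verify(video_bytes: bytes, tag: bytes) -> bool:
    """Check that the video still matches the tag made at capture time."""
    expected = hmac.new(CAPTURE_KEY, video_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

original = b"...raw video frames..."
tag = sign_at_capture(original)

print(verify(original, tag))            # True: unchanged video verifies
print(verify(original + b"edit", tag))  # False: any AI edit breaks the check
```

A viewer’s app only needs the verification step: if the tag no longer matches, the video was altered after it left the camera.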

Do you think stronger laws or better technology is the best way to fight deepfakes? Share your opinion in the comments.


This slideshow was made with AI assistance and human editing.
