9 min read

You might have noticed some pretty intense videos popping up on your feed claiming to show the conflict in Iran. Rockets chasing jets, giant fireballs, and explosions that look straight out of a movie have been spreading like crazy across social media. Here’s the thing: a lot of it isn’t real at all.
X, the platform formerly known as Twitter, has seen this flood of fake clips and is now stepping in to stop it. They want to prevent people from making money off videos that are actually made by artificial intelligence instead of filmed by real journalists on the ground.

X just announced a brand new rule to clean up all the misinformation floating around. If you post an AI-generated video of an armed conflict without clearly labeling it, you could lose your ability to make money on the platform for 90 days. Get caught doing it again, and you’re banned for good from their ad-sharing program.
This is a pretty big deal for a site that has loosened its rules on what people can post since Elon Musk took over. The company says it’s all about making sure folks get real information when things get tense in the world, not computer-generated fantasy footage that tricks everyone.

You might wonder why anyone bothers making and sharing fake battle scenes online. The answer often comes down to cold, hard cash. On X, creators who pay for X Premium and rack up millions of views and strong engagement can earn money each month from ad revenue sharing.
Videos that shock people or make them angry tend to go viral really fast. And going viral means more eyes on the content, which means more money from ads. X’s new rule is designed to take away that money incentive for unlabeled AI war clips, hoping it will slow down the flood of misleading videos created just for profit.

So how will X actually find all these sneaky AI videos hiding in plain sight? They’re using a couple of different tools to play detective online. One of their main methods is Community Notes, which is the feature you see on some posts, adding fact-checks written by other users who spot something off.
They’re also looking at the digital breadcrumbs hidden inside the videos themselves. Things called metadata and other technical signals can sometimes give away that a video was cooked up by a computer instead of filmed by a person on the ground with a phone.
Worth noting: if a video clearly labels itself as AI-produced, creators won't be sanctioned. Violations will be flagged through Community Notes, along with metadata and other technical signals embedded in AI-generated content.
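X hasn't published exactly which signals it inspects, but the general idea can be sketched: scan a file's raw bytes for provenance markers, such as C2PA manifests, that some AI generators embed in their output. The marker strings below are purely illustrative assumptions, not an official or exhaustive list, and a real detector would parse the container format properly rather than grep the bytes.

```python
# Minimal sketch: look for provenance markers that some AI video
# generators embed in their output files. The marker list here is an
# illustrative assumption, not an official or complete set of signals.

AI_MARKERS = [b"c2pa", b"AIGC", b"GeneratedByAI"]

def find_ai_markers(path):
    """Return any known AI-provenance markers found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in AI_MARKERS if m in data]

if __name__ == "__main__":
    import os
    import tempfile

    # Demo: a dummy "video" file that carries one marker in its bytes.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as tmp:
        tmp.write(b"\x00\x00ftypisom" + b"c2pa" + b"\x00" * 16)
        name = tmp.name
    print(find_ai_markers(name))  # ['c2pa']
    os.remove(name)
```

A byte scan like this only catches markers that a generator chose to leave in place; anyone re-encoding or screen-recording a clip strips them, which is why platforms pair such signals with human reports like Community Notes.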

You might have seen a clip that looked like an Iranian rocket chasing and blowing up a US jet. The claim sounded absolutely huge, and the video racked up about 70 million views around the world. But according to BBC Verify fact-checkers, it was completely fake from start to finish.
It’s a perfect example of how convincing these AI creations can be to the average person scrolling quickly. Another video swapped out real smoke from a missile strike for a giant, fake fireball that was many times bigger than the real thing. It just goes to show we can’t always believe our own eyes online.
Little-known fact: The fake video of missiles striking Tel Aviv with explosion sounds was featured in more than 300 posts, shared tens of thousands of times across multiple social media platforms.

It’s not just battle scenes getting the AI treatment during this conflict. Fake images started circulating showing famous landmarks in flames and chaos. Pictures supposedly showing the Burj Khalifa, the tallest building in the world, on fire were shared as if they were real news from the region.
Full Fact, a UK organization that checks facts online, says they’ve spotted a ton of this stuff lately across all platforms. They’ve even seen a fake picture of an aircraft carrier sinking and an image pretending to show the body of a top Iranian leader. Nothing seems off-limits anymore.

You might think, I would never fall for those blurry, fake-looking videos people share. But here’s the real problem: even low-quality fakes get shared thousands and thousands of times before anyone realizes. Sometimes they still have a watermark from an AI video generator on them, and people share them anyway without thinking.
The sheer amount of this content flooding social media is what makes it so scary for everyone. The content is so easy to make and spreads so fast that it becomes really hard to tell what’s actually happening in the world. The fake stuff drowns out the real reporting from journalists on the ground.

X isn’t the only place dealing with this massive headache of fake content. Meta, the company that runs both Instagram and Facebook, has also seen tons of these fake war videos on its platforms every single day. People are sharing the same clips across all their social media accounts without checking anything.
One fake clip on Instagram claimed to show a huge fire after a US airbase in Saudi Arabia was destroyed during the conflict. It turned out to be old footage of a strike in Yemen from a year and a half earlier that had nothing to do with Iran.

Here’s a strange twist that researchers have noticed happening during this conflict. Some people are now asking AI chatbots if a certain video is fake or not. They then take the chatbot’s answer and post it as proof that the video is real and trustworthy.
The problem? Chatbots are not great at knowing what’s happening right this second in a war zone full of fast changes. They often get it completely wrong. So people are accidentally, or maybe on purpose, using bad information from one AI to prove that another AI video is the real deal.

You might think, Who cares if a video is fake? I know it’s probably not real anyway. But during a conflict, knowing what’s actually happening can be a matter of life and death for real people. Families need to know if their loved ones are safe or if their city is under attack right now.
Fake videos can also be used to trick entire countries and change how people feel about the war in dangerous ways. Bad information can make a tense situation much more dangerous for everyone involved. That’s why X says it’s so important for people to have access to authentic information from the ground.

Groups that spend their time checking facts online are getting buried under all this AI content flooding social media. They say they are increasingly seeing AI turbocharge the spread of misinformation across all platforms. It’s becoming a huge part of their daily job to figure out what’s real and what’s fake.
They are the ones pointing out that a huge fire is actually an old clip from years ago, or that a missile strike video has tell-tale signs of being made by a computer. Their work is more important than ever, even if it feels like they are playing a never-ending game of whack-a-mole online.

This new rule is actually a pretty big deal for X and shows they’re paying attention. Since Elon Musk bought the company for billions of dollars, the site has generally gotten rid of a lot of its old rules about misinformation. Musk has called those old policies censorship in the past.
So bringing in this new rule about AI videos shows the company is willing to make some changes when the problem gets bad enough. They say they will continue to refine how they handle this to make sure the platform can be trusted during big world events that affect everyone.

So what can you actually do when you’re just scrolling through your feed at home? The best thing is to be a little skeptical, especially if a video looks totally wild or makes you really angry right away. Check who posted it and if they usually share reliable stuff or random content.
Look for clues like weird-looking hands or faces, or text that doesn’t quite make sense with what you’re seeing. And if a video seems too crazy to be true, it probably is exactly that. Taking a second before you hit that share button is a small thing we can all do to help stop the spread of fake news.
This slideshow was made with AI assistance and human editing.