6 min read

Tech creator Enderman reportedly discovered that his YouTube channel, which had around 350,000 subscribers, had been terminated. What stunned him most was that the termination stemmed not from copyright or community strikes but from an automated AI decision.
His account was allegedly linked to a foreign-language channel he had never heard of, which had received multiple copyright violations.
Within hours, years of content became inaccessible on the platform, prompting fans and fellow creators to question how such a drastic action was taken.

Enderman’s case shows the growing risk of overreliance on AI moderation. According to public reports, the termination appears to have stemmed from an automated association algorithm, though it is not confirmed whether human review was involved.
For creators who depend on their channels for income, this kind of instant termination feels terrifying.
The lack of warning or explanation has fueled debate about whether AI can responsibly manage the livelihoods of millions of people who upload daily to the platform.

According to YouTube’s notice, Enderman’s channel was terminated because it was linked to another account that had received three copyright strikes.
The strange part is that this “linked” account appeared to be a Japanese-language channel the YouTuber had never interacted with.
The AI system assumed a connection, and both channels were deleted simultaneously. The creator described the experience as “unreal” and “unjust,” pointing to automated systems as the only possible explanation.

Multiple creators have reported similar terminations allegedly linked to the same foreign-language channel. Some of these creators also had substantial subscriber counts, though not all claims have been independently verified.
For them, it’s not just about one glitch but about a possible systemic issue in YouTube’s new AI-powered enforcement. The fear of a random connection leading to a permanent ban has now become a recurring nightmare for many in the creator community.

In a heartfelt message, Enderman told fans that losing his main channel felt like watching years of effort vanish overnight. From tutorials to tech reviews, everything he built suddenly became inaccessible.
“It feels like being bullied by an algorithm,” he said. The emotional toll is visible; his videos were not just content, but a record of his career, creativity, and community. Now, digital history sits in limbo with no human support in sight.

Enderman tried to appeal the termination but ran into another wall of automation. According to him, the appeals process itself appears powered by AI, with no real person reviewing his case.
The frustration reflects a bigger industry problem: creators often have no path to explain or defend themselves when algorithms make catastrophic errors in judgment.

One of the most unusual aspects of the story is the alleged connection between Enderman’s account and a Japanese-language channel discussing a role-playing game. He said he had never interacted with it, but YouTube’s AI somehow decided they were linked.
It’s unclear whether this was caused by shared metadata, account overlap, or mistaken identity. Whatever the reason, the false link cost him both of his active channels within days.

Soon after Enderman shared his experience, other YouTubers came forward saying the same thing had happened to them. Channels like Scratchit Gaming and 4096, each with hundreds of thousands of subscribers, claimed YouTube terminated them for being “linked” to the same foreign account.
The consistency of these stories has sparked discussions about whether YouTube’s AI systems are connecting accounts based on flawed criteria, such as IP overlap or metadata confusion.
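To see why signals like IP overlap make for fragile linking criteria, consider a deliberately naive heuristic that flags two accounts as "linked" whenever they have ever shared an IP address. Everything here is hypothetical, a sketch for illustration only, and not a claim about how YouTube's actual systems work:

```python
# A deliberately naive "linked accounts" heuristic, shown only to
# illustrate how shared-IP signals can produce false positives.
# All names, data, and the rule itself are hypothetical.

def accounts_linked(ips_a: set[str], ips_b: set[str]) -> bool:
    """Flag two accounts as linked if they ever shared an IP address."""
    return bool(ips_a & ips_b)

# Two strangers behind the same public Wi-Fi or carrier-grade NAT
# will share a public IP without ever interacting:
creator = {"203.0.113.7", "198.51.100.2"}
stranger = {"203.0.113.7"}  # logged in once from the same cafe network
print(accounts_linked(creator, stranger))  # True — a false positive
```

The point of the sketch is that a single coarse signal, applied without corroborating evidence or human review, can tie together accounts that have nothing to do with each other, which is exactly the failure mode creators suspect.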

So far, YouTube hasn’t publicly commented on these terminations. The lack of transparency has only deepened frustration within the creator community. Without an official explanation or fix, many worry that similar automated bans could happen again.
YouTube’s failure to address the situation has raised broader questions about accountability in AI moderation and whether creators can truly trust the platform to protect their digital livelihoods.

Some fans and members of the creator community began efforts to archive publicly available copies of the deleted videos.
This spontaneous effort highlights how deeply fans value creators’ work and how fragile online content can be in the age of automated enforcement. For some, the incident feels like a warning about the risks of centralized platforms.

AI moderation systems are designed to catch bad actors quickly, but when they fail, the consequences can be massive. In Enderman’s case, automation appeared to bypass human review entirely.
Critics argue that this shows a growing gap between efficiency and fairness. With AI handling billions of uploads daily, even a 0.1% failure rate would translate into millions of erroneous decisions, meaning thousands of creators could be wrongly punished. It’s a chilling reminder that automation without oversight can easily cross ethical lines.
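The scale argument is simple arithmetic. Using illustrative figures (the volume below is an assumption, not YouTube's actual numbers; only the 0.1% rate comes from the text above), a tiny error rate still yields an enormous absolute count:

```python
# Back-of-the-envelope estimate of wrongful automated decisions.
# The daily volume is a hypothetical round number for illustration;
# the 0.1% failure rate is the figure discussed in the article.

def expected_errors(daily_decisions: int, failure_rate: float) -> int:
    """Expected number of incorrect moderation decisions per day."""
    return round(daily_decisions * failure_rate)

daily_decisions = 1_000_000_000   # assumed: ~1B automated decisions/day
failure_rate = 0.001              # 0.1%
print(expected_errors(daily_decisions, failure_rate))  # 1000000
```

Even if the true volume were a tenth of that assumption, the math still leaves six figures of bad calls every day, which is why critics insist on a human appeals path.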

This incident stands as a cautionary tale about automation’s limits in the creative economy. AI can streamline moderation, but when it goes unchecked, it risks erasing livelihoods in seconds.
For every restored channel, there may be others that never return. Enderman’s story reminds us that behind every username is a person, and behind every algorithmic action, there must be accountability.

The takeaway is clear: creators want more intelligent AI, not total automation. They appreciate the need for safety and policy enforcement but insist that human oversight is crucial for fairness.
As AI becomes increasingly powerful, platforms like YouTube face a pivotal test: can they strike a balance between efficiency and empathy? For now, creators like Enderman hope their ordeal sparks real reform in how AI and humans share responsibility online.
What do you think about YouTube relying on AI moderation to remove a creator’s account? Share your thoughts in the comments.