OpenAI CEO Sam Altman is ringing the alarm about a looming AI-powered fraud crisis. At a recent Federal Reserve conference, he warned that artificial intelligence tools can now mimic human voices so convincingly that they can bypass bank security.
Altman’s blunt assessment? “AI has fully defeated voiceprint authentication.” The message for institutions still using voice ID is clear: upgrade your systems now, or risk severe financial damage soon.

It used to take hours of voice data to clone someone’s voice. Now, a few seconds of audio can be enough. AI tools can generate stunningly accurate voice replicas, fooling even sophisticated systems.
That means a scammer potentially needs nothing more than a short voicemail, podcast snippet, or TikTok video to access your bank account. The line between real and fake is rapidly disappearing, and scammers are taking full advantage.

Voice-based authentication grew popular over a decade ago, especially for wealthy clients. Customers would recite a challenge phrase to access their accounts.
But today, Altman calls that practice “crazy.” With modern voice cloning tools, anyone can generate that phrase in your voice.
Yet many banks haven’t updated their systems. According to Altman, this makes them dangerously vulnerable to modern scams hiding in plain sight.

One of the scariest parts of Altman’s warning? You don’t need elite tools to pull off a scam. Basic AI voice cloning software is widely available online, even free.
Criminals no longer need deep expertise or expensive resources. That makes voice-based scams cheap, scalable, and incredibly hard to trace. The threat isn’t coming. It’s already here, and easier to execute than most people think.

Altman didn’t stop at voice. He warned that we’re rapidly approaching a future where deepfake videos and FaceTime calls are “indistinguishable from reality.” Imagine receiving a call from your boss or spouse, only to find out later it wasn’t them.
This is the next frontier of AI-driven fraud, and it will force banks, governments, and everyday people to rethink how we verify identity in a digital world.

Michelle Bowman, Vice Chair for Supervision at the Federal Reserve, echoed Altman’s concern. She even floated the idea of partnering with tech companies to develop new fraud prevention tools.
It’s a notable shift. Regulators aren’t just reacting to AI risks but exploring collaborations to build smarter defenses.
The Fed’s involvement could signal that national policy around AI and financial fraud is on the way.

Fraud analysts predict deepfake-related financial losses could hit $40 billion within the next two years, up from $12 billion in 2023. That’s a massive jump and a clear signal that scams are scaling with AI’s power.
As these tools become more accessible, it’s not just billionaires or CEOs who are at risk. Everyday people could become victims of synthetic fraud that looks and sounds like someone they trust.

In recent tests, journalists used AI voice clones to access major bank accounts. With just a few seconds of a public recording, they mimicked real customers and fooled security systems.
These weren’t shady experiments; they were demonstrations meant to prove a point: the status quo is broken. And if trained reporters can pull it off, you can bet cybercriminals already have.

Altman isn’t the only one concerned. The FBI and other federal agencies have also issued alerts about voice and video deepfake scams. These include impersonations of children in distress, fake ransom calls, and even AI-generated messages from political figures.
In one shocking case, someone used AI to mimic a U.S. senator’s voice in messages to foreign diplomats. The stakes are no longer theoretical; they’re geopolitical.

Altman told the Federal Reserve audience that society is “unprepared” for how fast this is happening. He emphasized that institutions must act now, not later, to get ahead of evolving threats.
Waiting until voice or video scams become rampant could cost banks and users dearly. His advice? Build stronger authentication systems before the trust in them collapses altogether.

Interestingly, Altman noted that AI has compromised most traditional authentication methods except one: passwords. While they’re far from perfect, passwords remain one of the few tools that AI hasn’t yet completely overrun.
But even that may not last forever. In the meantime, banks and users should think twice about abandoning passwords for “smart” biometrics that are now easily duped.

Altman clarified that OpenAI itself isn’t creating tools for impersonation or fraud. However, the company is deeply aware of the risks and wants to be part of the solution.
OpenAI is reportedly exploring ways to help detect voice deepfakes and promote digital integrity. However, the reality is that any powerful AI model can be misused, and OpenAI wants regulators and users to take that risk seriously.

Altman also backs a project called The Orb, a biometric tool that uses eye-scanning to confirm if someone is human. The idea is to create a “proof of personhood” system for a world where AI-generated voices, images, and video are everywhere.
While still in early stages, The Orb reflects a growing trend in AI development: building tools to verify reality in a world where digital fakery is frictionless.

AI impersonators have already targeted political systems. In one case, officials were fooled by a cloned voice impersonating Secretary of State Marco Rubio. These impersonation attacks aren’t just about money but power, influence, and trust.
If someone can make a politician say something they didn’t, what’s stopping them from manipulating voters, creating fake scandals, or triggering real-world consequences?

Multiple reports have surfaced of parents receiving fake calls from AI-generated voices that sound like their children in distress.
These emotional manipulations are some of the most terrifying examples of voice cloning’s dark side.
A few seconds of your kid’s voice on social media could be enough for scammers to fabricate a kidnapping, a medical emergency, or worse. The emotional cost here is just as severe as the financial one.
It’s a chilling reminder of why digital security matters more than ever.

Though voice scams were his focus at the Fed event, Altman also addressed AI’s effect on work. He doesn’t share the doomsday views of others, saying no one can predict precisely what will happen.
Yes, some jobs will vanish, but others will emerge. As for the far future? He imagines a world where humans don’t “work” in the traditional sense, and that’s a debate for another day.
Not everyone sees eye to eye on that future, especially not Mark Zuckerberg, who has publicly taken shots at Altman as Meta chases the lead in AI superintelligence.
What do you think about scammers using AI to impersonate banks on the phone? How can you protect yourself? Share your thoughts in the comments.
This slideshow was made with AI assistance and human editing.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.
