iOS video calls at risk as hackers inject AI deepfakes with a new tool

iOS video calls face deepfake threat

Security researchers have disclosed a new video-injection tool that can feed AI-generated deepfake video into live iOS calls. The technique was demonstrated against jailbroken iPhones (iOS 15 and later): an attacker replaces the camera feed with a synthetic stream to impersonate someone during FaceTime calls or identity-verification flows.

Experts warn the tool could be used to bypass identity checks, mislead individuals, or conduct fraud. While Apple has not reported any breaches, analysts say the risk highlights evolving AI-driven threats.

Real-time impersonation risk

Unlike traditional scams that rely on pre-recorded footage, this tool enables dynamic face swapping during a live video call. An attacker can map an AI-generated likeness onto their own face and respond naturally in conversation.

Real-time swapping is harder to spot because expressions and timing can adapt naturally. Researchers are developing real-time detection approaches (challenge–response probes, corneal reflection analysis and biometric watermarks), but practical live-call detection still faces latency and false-positive challenges.
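One of the detection approaches mentioned above, challenge–response probing, can be sketched in a few lines: the verifier issues random, unpredictable prompts and rejects the session if a response is wrong or arrives too slowly. The challenge list, latency threshold, and responder interface below are illustrative assumptions, not part of any shipping detection product; a real system would verify the action with a vision model rather than trusting a reported answer.

```python
import random
import time

# Illustrative challenge set; a deployed system would confirm the action
# visually instead of accepting the participant's self-report.
CHALLENGES = ["turn head left", "blink twice", "raise right hand", "smile"]

def run_liveness_probe(responder, max_latency_s=2.0, rounds=3):
    """Issue random challenges; fail if any response is wrong or too slow.

    `responder(challenge)` stands in for the remote participant and
    returns the action performed. Synthetic streams tend to lag or
    mismatch on unpredictable prompts, which this probe exploits.
    """
    for _ in range(rounds):
        challenge = random.choice(CHALLENGES)
        start = time.monotonic()
        answer = responder(challenge)
        elapsed = time.monotonic() - start
        if answer != challenge or elapsed > max_latency_s:
            return False  # suspected injection: wrong action or too slow
    return True

# Honest participant: performs the requested action promptly.
print(run_liveness_probe(lambda c: c))            # True
# Injected stream replaying a canned action regardless of the prompt.
print(run_liveness_probe(lambda c: "no action"))  # False
```

The unpredictability of the prompt is what matters: a pre-rendered or high-latency synthetic feed cannot anticipate which action will be requested, which is also why researchers pair such probes with timing analysis.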

Apple yet to issue response

As of now, Apple has not released a public statement specifically addressing this reported tool. While FaceTime uses end-to-end encryption to protect call data, that security does not prevent manipulation at the video source before transmission.

Experts note that this kind of deepfake injection happens on the attacker’s device, meaning platform providers cannot easily block it. Apple’s existing security measures focus on network integrity, not AI-generated visual deception.

Deepfake tech grows more accessible

Generative face-swap models and open-source toolkits have made real-time deepfakes far more accessible on consumer hardware.

Separately, the new injection method shows how an attacker who controls a compromised (jailbroken) iPhone can route synthetic video into live calls, a threat distinct from the mere availability of face-swap models.

As plug-and-play deepfake generators become more widely available, the likelihood of misuse in scams, fraud schemes, and misinformation campaigns continues to climb.

Targets include businesses

The risk goes beyond personal video calls. Enterprises that use video conferencing for onboarding, client meetings, or negotiations could be vulnerable to impersonation attacks. A convincing deepfake could trick employees into sharing confidential data or authorizing payments.

With remote work now widespread, organizations depend heavily on digital communication, making real-time verification challenges more critical. Experts urge companies to be proactive in adopting stronger identity confirmation practices to defend against this type of deception.

Threat to financial verification

Financial institutions increasingly rely on video calls to authenticate users or complete high-value transactions. This deepfake injection tool could undermine those efforts, allowing fraudsters to impersonate account holders.

Analysts warn that banks and fintech platforms must prepare countermeasures, such as biometric verification beyond facial recognition.

Voice consistency checks, document validation, and secondary device authentication are being recommended to close gaps left exposed by the growing sophistication of visual manipulation tools.
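Secondary-device authentication can be as simple as a time-based one-time password (TOTP, RFC 6238) delivered to an enrolled device: a deepfaked caller cannot produce the code without the shared secret, no matter how convincing the video. The sketch below uses only the Python standard library and is an illustrative fragment, not the verification flow of any particular bank or platform.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: derive a short-lived code from a shared secret.

    A bank could prompt the customer's enrolled second device for this
    code during a video call, adding a factor the video cannot fake.
    """
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 seconds
# yields the 8-digit SHA-1 code 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # 94287082
```

Pairing a code like this with voice and document checks gives the layered verification analysts recommend, since each factor fails independently of the manipulated video stream.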

Law enforcement challenges

For law enforcement agencies, real-time deepfake use adds a new layer of complexity to digital crimes. Tracking perpetrators becomes harder when attackers hide behind AI-generated faces that leave no direct trace.

The challenge is compounded by cross-border operations, where cybercriminals exploit legal loopholes. Agencies already struggling with traditional online fraud now face a tool that blends social engineering with advanced machine learning, further complicating investigations and prosecutions.

Detection tools still lagging

Although deepfake detection technology is improving, most tools are designed for analyzing static media rather than live video. Spotting subtle facial glitches or inconsistencies in real time remains difficult.

Researchers say detection models often require a processing delay, which undermines their usefulness in a live conversation. Until detection catches up, the balance favors attackers, who can exploit the lag to carry out fraud before their deception is flagged.

Privacy advocates weigh in

Privacy experts argue the rise of real-time deepfakes reflects the broader risks of AI misuse. They stress that technology firms and regulators must work together to set safeguards before attacks scale further.

Without intervention, individuals may lose confidence in video communication altogether. Advocates also highlight the psychological harm caused by impersonation, warning that victims can face long-term trust issues if they are deceived in personal or professional relationships.

Growing black market demand

Security reporting and threat-intelligence firms say underground markets are already offering face-changing services and plug-and-play toolkits that lower the bar for misuse.

The commercialization of such software makes it easier for even low-skill attackers to launch convincing impersonation scams. Security analysts expect demand to grow as fraudsters look for ways to outpace traditional verification systems, fueling an arms race in cybercrime.

Impact on remote hiring

Hiring processes that depend on video interviews are at particular risk. With this tool, attackers could impersonate candidates to gain employment fraudulently or infiltrate companies with malicious intent.

Industries that rely on contractor verification, such as IT and finance, are seen as especially vulnerable. Experts recommend multi-step verification, including follow-up checks and credential validation, to reduce the chance of being deceived by an applicant using real-time deepfake technology.

Potential regulatory push

The rise of live deepfake threats may drive calls for stricter regulation. Lawmakers in several countries have already proposed rules to limit the malicious use of AI-generated content.

Analysts believe video call platforms and employers may soon face compliance requirements to adopt stronger identity verification.

However, balancing innovation with regulation remains a challenge: controls strict enough to deter bad actors could also hinder legitimate uses of AI-enhanced media.

Security firms urge vigilance

Cybersecurity companies are advising both individuals and businesses to remain cautious in video interactions.

Recommended best practices include confirming identities through secondary channels, such as follow-up phone calls, and being wary of requests for money or sensitive information made during calls.

Experts emphasize that while the technology is new, the underlying risk is familiar social engineering, now supercharged by AI. Awareness and layered verification remain the best defenses.

Psychological manipulation angle

Beyond financial risks, deepfakes in live calls can be weaponized for emotional or psychological manipulation. An attacker posing as a trusted contact could pressure someone into harmful decisions or extract sensitive personal details.

This raises concerns for vulnerable populations, such as seniors, who are often targeted in scams. Analysts warn that the realistic nature of these tools could amplify the success rate of manipulative tactics in everyday video interactions.

Industry collaboration needed

Experts say addressing real-time deepfake threats requires a combined effort from tech companies, regulators, and cybersecurity researchers. Stronger AI-based detection, public awareness campaigns, and cross-industry standards are all part of the solution.

As video calling becomes more central to work and social life, platforms like FaceTime, Zoom, and Teams will need to coordinate closely with security experts to stay ahead of attackers who exploit these fast-evolving tools.

With video calls now essential, the push for stronger defenses echoes the new Microsoft Teams update that brings better safeguards against scams, highlighting a shift toward safer digital workplaces.

Future risks remain high

The appearance of this new deepfake injection tool highlights how quickly cyber threats are evolving. Analysts caution that even if detection improves, criminals are likely to refine their methods in parallel.

As AI continues to advance, the challenge will be preventing misuse without stifling innovation. For now, both individuals and organizations must assume that video calls are no longer immune to deception and take steps to adapt.

Staying secure in an AI-driven world means combining vigilance with everyday precautions, starting with basics such as knowing how to check whether your phone has been hacked.


This slideshow was made with AI assistance and human editing.
