7 min read

Security researchers have disclosed a new video-injection tool that can feed AI-generated deepfake video into live iOS calls. Demonstrated against jailbroken iPhones running iOS 15 or later, the technique lets an attacker replace the camera feed with a synthetic stream to impersonate someone during FaceTime or identity-verification flows.
Experts warn the tool could be used to bypass identity checks, mislead individuals, or conduct fraud. While Apple has not reported any breaches, analysts say the risk highlights evolving AI-driven threats.

Unlike traditional scams using pre-recorded footage, this tool enables dynamic face swapping during ongoing video calls. Hackers can map an AI-generated likeness onto themselves, responding naturally in conversation.
Real-time swapping is harder to spot because expressions and timing can adapt naturally. Researchers are developing real-time detection approaches (challenge–response probes, corneal reflection analysis and biometric watermarks), but practical live-call detection still faces latency and false-positive challenges.
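The challenge–response idea mentioned above can be sketched in a few lines. This is an illustrative assumption of how such a probe might work, not any vendor's actual implementation: the verifier issues an unpredictable action request and accepts only a matching response performed within a human-plausible time window, since a real-time face-swap pipeline adds rendering latency and cannot anticipate the challenge.

```python
import secrets

# Hypothetical challenge-response liveness probe (names and thresholds
# are illustrative assumptions, not a real product's API).
CHALLENGES = ["turn head left", "blink twice", "cover one eye", "look up"]

def issue_challenge() -> str:
    """Pick an unpredictable action so pre-rendered fakes cannot comply."""
    return secrets.choice(CHALLENGES)

def verify_response(challenge: str, observed_action: str,
                    latency_s: float) -> bool:
    """Accept only the requested action, performed neither suspiciously
    fast (scripted) nor too slowly (extra rendering lag)."""
    if observed_action != challenge:
        return False
    return 0.3 <= latency_s <= 3.0
```

In practice the "observed action" would come from a pose- or blink-detection model on the video stream; the latency window shown here is a placeholder that a real system would calibrate empirically.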

As of now, Apple has not released a public statement specifically addressing this reported tool. While FaceTime uses end-to-end encryption to protect call data, that security does not prevent manipulation at the video source before transmission.
Experts note that this kind of deepfake injection happens on the attacker’s device, meaning platform providers cannot easily block it. Apple’s existing security measures focus on network integrity, not AI-generated visual deception.

Generative face-swap models and open-source toolkits have made real-time deepfakes far more accessible on consumer hardware.
Separately, the new injection method demonstrates how an attacker with control of a compromised (jailbroken) iPhone can route synthetic video into live calls, a different threat than the mere availability of face-swap models.
As plug-and-play deepfake generators reach more people, the likelihood of misuse in scams, fraud schemes, and misinformation campaigns keeps rising.

The risk goes beyond personal video calls. Enterprises that use video conferencing for onboarding, client meetings, or negotiations could be vulnerable to impersonation attacks. A convincing deepfake could trick employees into sharing confidential data or authorizing payments.
With remote work now widespread, organizations depend heavily on digital communication, making real-time verification challenges more critical. Experts urge companies to be proactive in adopting stronger identity confirmation practices to defend against this type of deception.

Financial institutions increasingly rely on video calls to authenticate users or complete high-value transactions. This deepfake injection tool could undermine those efforts, allowing fraudsters to impersonate account holders.
Analysts warn that banks and fintech platforms must prepare countermeasures, such as biometric verification beyond facial recognition.
Voice consistency checks, document validation, and secondary device authentication are being recommended to close gaps left exposed by the growing sophistication of visual manipulation tools.
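A layered policy like the one analysts describe could be sketched as follows. The signal names, threshold, and policy function are illustrative assumptions; the point is simply that a face match alone never clears a high-value transaction, and that at least one signal must come from a channel a video deepfake cannot touch.

```python
from dataclasses import dataclass

# Hypothetical verification signals for a high-value transaction.
# Field names are illustrative, not a real banking API.
@dataclass
class VerificationSignals:
    face_match: bool        # facial recognition on the video stream
    voice_consistent: bool  # voice matches the account holder's profile
    device_confirmed: bool  # push confirmation on a registered second device
    document_valid: bool    # ID document check

def approve_transaction(s: VerificationSignals, required: int = 3) -> bool:
    """Require several independent signals, and insist on at least one
    channel (voice or second device) outside the video feed itself."""
    passed = sum([s.face_match, s.voice_consistent,
                  s.device_confirmed, s.document_valid])
    return passed >= required and (s.voice_consistent or s.device_confirmed)
```

The design choice here is defense in depth: even a flawless real-time face swap fails the policy unless the attacker also controls the victim's voice profile or registered device.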

For law enforcement agencies, real-time deepfake use adds a new layer of complexity to digital crimes. Tracking perpetrators becomes harder when attackers hide behind AI-generated faces that leave no direct trace.
The challenge is compounded by cross-border operations, where cybercriminals exploit legal loopholes. Agencies already struggling with traditional online fraud now face a tool that blends social engineering with advanced machine learning, further complicating investigations and prosecutions.

Although deepfake detection technology is improving, most tools are designed for analyzing static media rather than live video. Spotting subtle facial glitches or inconsistencies in real time remains difficult.
Researchers say detection models often require a processing delay, which undermines their usefulness in a live conversation. Until detection catches up, the balance favors attackers, who can exploit the lag to carry out fraud before their deception is flagged.

Privacy experts argue the rise of real-time deepfakes reflects the broader risks of AI misuse. They stress that technology firms and regulators must work together to set safeguards before attacks scale further.
Without intervention, individuals may lose confidence in video communication altogether. Advocates also highlight the psychological harm caused by impersonation, warning that victims can face long-term trust issues if they are deceived in personal or professional relationships.

Security reporting and threat-intelligence firms say underground markets are already offering face-changing services and plug-and-play toolkits that lower the bar for misuse.
The commercialization of such software makes it easier for even low-skill attackers to launch convincing impersonation scams. Security analysts expect demand to grow as fraudsters look for ways to outpace traditional verification systems, fueling an arms race in cybercrime.

Hiring processes that depend on video interviews are at particular risk. With this tool, attackers could impersonate candidates to gain employment fraudulently or infiltrate companies with malicious intent.
Industries that rely on contractor verification, such as IT and finance, are seen as especially vulnerable. Experts recommend multi-step verification, including follow-up checks and credential validation, to reduce the chance of being deceived by an applicant using real-time deepfake technology.

The rise of live deepfake threats may drive calls for stricter regulation. Lawmakers in several countries have already proposed rules to limit the malicious use of AI-generated content.
Analysts believe video call platforms and employers may soon face compliance requirements to adopt stronger identity verification.
However, balancing innovation with regulation remains a challenge: controls strict enough to deter bad actors could also hinder legitimate uses of AI-enhanced media.

Cybersecurity companies are advising both individuals and businesses to remain cautious in video interactions.
Recommended best practices include confirming identities through secondary channels, such as follow-up phone calls, and being wary of requests for money or sensitive information made during calls.
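The secondary-channel check described above can be made concrete with a simple out-of-band code, sketched below under stated assumptions (the code format and function names are hypothetical): the verifier sends a short code over a second channel, such as a phone call or SMS, and the on-camera person reads it back. A deepfake operator without access to that channel cannot produce the code.

```python
import hmac
import secrets

def make_code() -> str:
    """Generate a 6-digit one-time code to send over a second channel."""
    return f"{secrets.randbelow(10**6):06d}"

def codes_match(sent: str, spoken: str) -> bool:
    """Compare the code read back on camera against the one sent.
    compare_digest avoids leaking information through timing."""
    return hmac.compare_digest(sent.strip(), spoken.strip())
```

This is the same mechanism behind familiar two-factor prompts, repurposed so that the proof of identity travels outside the (potentially synthetic) video stream.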
Experts emphasize that while the technology is new, the underlying risk is familiar social engineering, now supercharged by AI. Awareness and layered verification remain the best defenses.

Beyond financial risks, deepfakes in live calls can be weaponized for emotional or psychological manipulation. An attacker posing as a trusted contact could pressure someone into harmful decisions or extract sensitive personal details.
This raises concerns for vulnerable populations, such as seniors, who are often targeted in scams. Analysts warn that the realistic nature of these tools could amplify the success rate of manipulative tactics in everyday video interactions.

Experts say addressing real-time deepfake threats requires a combined effort from tech companies, regulators, and cybersecurity researchers. Stronger AI-based detection, public awareness campaigns, and cross-industry standards are all part of the solution.
As video calling becomes more central to work and social life, platforms like FaceTime, Zoom, and Teams will need to coordinate closely with security experts to stay ahead of attackers who exploit these fast-evolving tools.
With video calls now essential, the push for stronger defenses mirrors the recent Microsoft Teams update that adds safeguards against scams, part of a broader shift toward safer digital workplaces.

The appearance of this new deepfake injection tool highlights how quickly cyber threats are evolving. Analysts caution that even if detection improves, criminals are likely to refine their methods in parallel.
As AI continues to advance, the challenge will be preventing misuse without stifling innovation. For now, both individuals and organizations must assume that video calls are no longer immune to deception and take steps to adapt.
Staying secure in an AI-driven world means combining vigilance with everyday precautions, starting with basics like knowing how to check whether your phone has been hacked.