Was this helpful?
Thumbs UP Thumbs Down

Is adversarial AI the next big cyber risk?

Cybercrime inscription text with a faceless hacker
AI risks and warnings hologram.

AI reshapes app security

AI is speeding up software development, but it’s also making apps easier to attack. Hackers can use AI to analyze and exploit apps faster than ever. 

Every new mobile app adds risk because it exists outside secure networks. Users are less careful, and apps contain clues for attackers. Companies can no longer ignore app security.

Cyberthreat by a computer hacker and laptop with glitch effect digitally altered.

App-happy world risks

People use dozens of apps daily. The Apple App Store has nearly 2 million apps, and Google Play has 2.87 million. Many users interact with dozens of apps each day, and cumulatively dozens more per month, increasing their exposure to potential risks.

Each app represents a potential entry point for hackers. The more apps released, the bigger the attack surface becomes. Security gaps can be costly.

Hands of hacker with mobile phone and laptop in the dark.

AI helps hackers too

The same AI tools that help developers also help cybercriminals. Threat actors can reverse-engineer code, create malware, and exploit apps quickly. 

Even amateur hackers can use AI to build sophisticated attacks. Attack rates are climbing. In January 2025, 83 percent of apps were attacked across industries. AI is raising the stakes for app security.

Cyber security and cybercrime system hacked with master key lock.

Adversarial AI explained

Adversarial AI tricks AI models into making mistakes. Unlike normal hacking, it targets the AI itself. Attackers manipulate input or training data to cause wrong predictions, misclassifications, or leaks of sensitive data.

Examples include corrupting images for self-driving cars or poisoning datasets in fraud detection models. These attacks bypass traditional security controls.

Cyber security shield digital protection concept a professional presents a

Key adversarial attacks

 Some common adversarial attacks include: prompt injection, evasion, data poisoning, model inversion, model stealing, and membership inference. Each works differently but has one goal: to compromise AI decisions.

Prompt injections trick chatbots. Evasion misleads vision systems. Data poisoning corrupts learning. Model inversion exposes sensitive data. Model stealing copies proprietary AI. Membership inference finds hidden training records.

Malware logo displayed on phone.

AI attack examples

Adversarial attacks aren’t hypothetical. Chatbots have been pranked into giving huge discounts or legal offers. Vision systems, like self-driving cars, can be fooled with small stickers or paint. 

Even image classifiers can be tricked with tiny changes. AI models trained on datasets like LAION-5B have leaked private information. Proprietary AI, like ChatGPT, can be cloned cheaply through repeated queries.

Person using laptop with prompt engineering on screen.

Prompt injection risks

Prompt injections occur when hidden instructions trick an AI into producing unintended outputs. A Chevrolet chatbot was instructed to lower a truck’s price by $47,000.

Air Canada’s bot misquoted fares, leading to a legal dispute. These examples show how insecure prompts can create financial or legal liabilities for companies if not properly managed.

WhatsApp app on Play Store with hacked text in the background.

Evasion attacks in action

Attackers can mislead AI vision systems with subtle physical changes. Tesla Autopilot, for instance, was fooled by small stickers on the road. 

Lane detection systems can misread speed or stop signs. These attacks highlight real-world dangers for autonomous vehicles, drones, and safety systems. Even minor alterations can have deadly consequences.

The concept of using AI systems in security systems.

Data poisoning case

Microsoft Tay is a classic data poisoning example. Trolls fed the chatbot offensive content. Tay quickly learned and repeated harmful behavior, forcing a shutdown. 

Poisoning attacks inject malicious training inputs that corrupt AI. Modern AI models are smarter, but this case shows how learning systems can be manipulated. Organizations must safeguard training data carefully.

Businesswoman working on computer with security breach.

Privacy breaches via model inversion

Researchers have raised concerns that large public image datasets like LAION‑5B may inadvertently include sensitive or personal content, which in turn can lead to privacy risks when models are trained on them.

Malicious actors could reverse-engineer models to extract personal information. Even non-malicious data leakage poses serious privacy risks. Companies need strong controls on training datasets and outputs to protect users.

Man working on a laptop, cybersecurity concept

Model theft threats

Proprietary AI can be stolen or cloned. Some researchers have shown that proprietary language models can be approximated through repeated querying and model extraction techniques, which raises concerns about intellectual property and misuse.

Model stealing threatens intellectual property and enables attackers to bypass security safeguards. Organizations must monitor API usage and protect their AI assets.

Cybercrime inscription text with a faceless hacker

AI accelerates cybercrime

AI lowers barriers for attackers. Generative AI can create malware, including polymorphic variants that evade traditional detection. Large language models (LLMs) can be exploited without coding experience.

AI enables complex, adaptive, and large-scale attacks. Cybercrime is projected to cost $10.5 trillion globally by 2025. Protecting applications has never been more urgent.

Protect attacks from a hacker concept.

Protect apps from attacks

Embedding security into apps during development is key. Techniques include RASP, whitebox cryptography, and threat intelligence. Testing apps across multiple versions ensures proper protection.

Continuous testing and automated security reduce errors. Integrating security early keeps apps safe while maintaining speed. Security should be built into DevOps, not added later.

Testing business process, man clicks on the inscription abstract design.

Red teaming AI

Simulating attacks on your own AI helps find vulnerabilities. Red-teaming and penetration testing uncover blind spots like prompt injections or data poisoning. Organizations can fix weaknesses before attackers exploit them.

Proactive testing is critical. Treat AI as a critical asset that needs the same scrutiny as other enterprise systems. Prevention beats reaction.

Digital government transformation and online public services logos over person using laptop.

Governance and secure design

Secure AI starts with architecture and governance. Use secure enclaves, hardware roots of trust, and track data and pretrained components. Adopt frameworks like the NIST AI Risk Management Framework. 

Policies, audits, and cross-disciplinary oversight ensure AI reliability. Planning security from the start protects operations and builds trust. AI systems are only as strong as their foundation.

Are cyber scammers getting smarter with AI, or are we just falling for new tricks? See how cyber scammers upgrade tactics with AI to outsmart defenses.

Looking ahead.

Stay ahead of threats

 Adversarial AI is real and growing. Companies face legal, financial, and privacy risks. Mitigation is possible through AI-aware security: red-team exercises, data hygiene, adversarial training, monitoring, and secure design.

Treat AI models as critical assets. Protect apps early and continuously. Stay proactive, think like a hacker, and safeguard the future of your software. 

Are you sure your business has all the tools to defend against cyber threats? Discover the 19 essential cybersecurity tools every business needs to stay protected and secure.

Encountered an AI scam? Share your experience in the comments & leave a like if you are also concerned about adversarial AI.

Read More From This Brand:

Don’t forget to follow us for more exclusive content right here on MSN.

If you liked this story, you’ll LOVE our FREE emails. Join today and be the first to get stories like this.

This content is exclusive for our subscribers.

Get instant FREE access to ALL of our articles.

Was this helpful?
Thumbs UP Thumbs Down
Prev Next
Share this post

Lucky you! This thread is empty,
which means you've got dibs on the first comment.
Go for it!

Send feedback to ComputerUser



    We appreciate you taking the time to share your feedback about this page with us.

    Whether it's praise for something good, or ideas to improve something that isn't quite right, we're excited to hear from you.