6 min read

Amazon recently experienced a startling security breach involving Amazon Q, its AI‑powered coding assistant. A hacker managed to plant malicious instructions in the extension’s code base, and the tainted code was bundled into version 1.84.0 and released to users worldwide.
The incident shows how even trusted AI tools can become attack surfaces. Developers whose Q extension updated to the tainted release may have unknowingly run compromised software.

The hacker obtained access via a GitHub pull request under the alias lkmanka58. Inside version 1.84.0 of Amazon Q, they embedded a prompt instructing the AI to delete both local and cloud data.
The injected prompt directed the agent to wipe the user’s system, from the local home directory out to EC2 instances and S3 buckets. The attacker claimed the act was a protest against what they described as Amazon’s “AI security theater.”

Amazon unknowingly published the compromised version on July 17, 2025. Security researchers flagged suspicious behavior around July 23.
Following the AWS security bulletin on July 23 (PDT), Amazon revoked credentials, removed the compromised version, and released version 1.85.0 on July 24.
AWS confirmed no customer assets were affected, but the incident still raises concerns about how the code was initially reviewed and approved.

While Amazon claimed the malicious prompt was malformed and non‑executable, some security researchers reported that it may still have executed under certain conditions.
Even though no data loss occurred, the potential was frightening. The incident reveals how prompt injection attacks can bypass safety controls if not caught early, especially when AI agents are granted system-level access.
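To illustrate the class of defense involved, here is a minimal, hypothetical sketch of a deny-list guard that an agent harness could run over model-generated shell commands before executing them. The patterns, function names, and messages below are illustrative assumptions, not Amazon’s actual mitigation; a production guard would need real command parsing, not regex matching.

```python
import re

# Hypothetical deny-list of destructive command patterns (illustrative only).
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf / rm -fr
    r"\baws\s+s3\s+(rb|rm)\b",                 # delete S3 buckets or objects
    r"\baws\s+ec2\s+terminate-instances\b",    # destroy EC2 instances
]

def is_destructive(command: str) -> bool:
    """Return True if an agent-generated command matches a known destructive pattern."""
    return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

def guarded_execute(command: str) -> str:
    """Refuse flagged commands; otherwise hand off to a real executor (stubbed here)."""
    if is_destructive(command):
        return f"BLOCKED: {command}"
    return f"ALLOWED: {command}"
```

A guard like this is only a last line of defense; it cannot replace reviewing what an agent is allowed to touch in the first place.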

The Amazon Q extension had been installed by over 950,000 developers from the Visual Studio Code Marketplace. Many granted AWS profile access and permissions via the IDE.
That widespread presence means the vulnerability extended to cloud environments as well as local machines. The incident underlines how trusted tools can amplify risk when they have extensive reach.

This breach is not just about one plugin; it’s a wake‑up call. Generative AI coding tools like Q elevate risk because they operate with system privileges. If a malicious prompt slips in, the agent can execute commands with the developer’s own credentials, sidestepping traditional perimeter defenses.
As more developers use these agents, the opportunity for misuse grows. It raises questions about how AI tools should be regulated and monitored in development environments.

The compromise reportedly resulted from insufficient controls in Amazon’s open‑source contribution pipeline. The hacker gained unintended access and was able to merge dangerous code. That has sparked criticism of Amazon’s workflow governance.
Experts say this demonstrates the importance of stricter vetting, pull request reviews, and access control, especially in open‑source tools used at scale.

Developers and security professionals took to Reddit and forums to voice frustration. Many pointed out that giving AI tools full access to AWS environments is inherently risky.
One user warned that allowing the Q extension unchecked privileges is like allowing untested code to run in production. These reactions suggest that trust in Amazon’s platform took a hit.

This incident lays bare several threats: prompt injections into AI tools, lack of code review for AI agents, privilege abuse in developer tools, and delayed issue detection.
It shows how AI can be manipulated to carry out harmful tasks without user visibility. For corporations relying on AI coding, it’s a reminder to audit all tools carefully.

Following the incident, Amazon updated its contribution guidelines across the involved open‑source repositories, removed the malicious code and revoked compromised credentials.
AWS stated that no customer data was compromised and recommended that all users update immediately to version 1.85.0. The steps were fast but came only after community pressure.

Security experts warn that AI tools that generate or execute code carry new risks. This breach isn’t unique to Amazon.
Similar vulnerabilities could exist in other AI coding assistants if they lack prompt sanitization. It calls for new security frameworks tailored to AI agents, such as anomaly detection and runtime validation inside IDEs.

Developers must check their installed Amazon Q extension and immediately upgrade to version 1.85.0.
Even if you haven’t experienced data loss, the presence of a malicious prompt highlights systemic vulnerabilities. Anyone granting AI tools elevated access to AWS or filesystem tools should reassess permissions and rely only on trusted sources.
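As a quick check, VS Code’s CLI can list installed extensions with their versions via `code --list-extensions --show-versions`; the helper below is a small sketch for comparing the reported version against the patched release. The exact extension identifier string in the Marketplace output is an assumption here, so match on whatever identifier your listing shows.

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Convert a version string like '1.84.0' into (1, 84, 0) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))

# First release published after the compromised 1.84.0 build.
PATCHED = parse_version("1.85.0")

def needs_upgrade(installed: str) -> bool:
    """True if the installed Amazon Q extension version predates the patched release."""
    return parse_version(installed) < PATCHED
```

On a real machine you would feed this the version that `code --list-extensions --show-versions` reports for the Amazon Q extension, then update through the Marketplace if it flags the install.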

This episode highlights how AI agents blur the lines between software and command-line access. Prompt injections pose a unique threat not seen in traditional plugins.
As AI agents become more embedded in workflows, supply chain attacks may become more frequent. Security teams must evolve to monitor AI behavior as stringently as they monitor code execution.

If you still use Amazon Q, update to version 1.85.0 immediately to remove the compromised code. It’s also wise to verify any forks or derivatives you may be running.
Developers should audit tools before granting them access to AWS profiles or system permissions. Treat AI coding tools with zero trust: assume an extension is unsafe until its payload has been verified.

Organizations using AI-powered dev tools need stronger DevSecOps policies. That includes immutable releases, hash-based pipelines, pull request reviews, and least-privilege access enforcement.
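A “hash-based pipeline” can be as simple as pinning the SHA-256 digest of each release artifact and refusing to ship anything that does not match. The sketch below illustrates the idea; the pinned digest in the usage example is computed locally, not a real Amazon Q checksum.

```python
import hashlib
import hmac

def sha256_of(data: bytes) -> str:
    """Compute the hex SHA-256 digest of a release artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """Reject any artifact whose digest differs from the pinned value.

    hmac.compare_digest avoids timing side channels in the comparison.
    """
    return hmac.compare_digest(sha256_of(data), pinned_digest)
```

In practice the pinned digest would be recorded at build time in an immutable release manifest, and the publish step would call `verify_artifact` before anything reaches the Marketplace.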
Firms also need rapid incident response, including both technical and public transparency, when breaches occur. This Amazon case shows that reactive measures are too slow.
This incident arrives just as the industry shifts into a new phase of software creation, with AI tools taking on more of the coding itself.

This incident marks a warning: AI tool convenience should never override security discipline. What happened with Amazon Q could repeat with any coding assistant.
It’s a pivotal moment for developers, security teams, and platform providers to redefine standards for AI agent usage. Updating your tool is urgent, and systemic improvements are now essential.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.