
Amazon’s AI coding agent was hacked to inject destructive data-wiping commands


Unexpected security breach in Amazon’s AI coding agent

Amazon recently experienced a startling security breach involving its AI‑powered coding assistant called Q. A hacker managed to plant malicious instructions into the extension’s code base. The code was then bundled into version 1.84.0 and released to users worldwide.

This rapidly evolving incident shows how even trusted AI tools can become attack surfaces. Developers who relied on Q without updating the extension may have unknowingly installed compromised software.


How the attacker introduced destructive prompts

The hacker obtained access via a GitHub pull request under the alias lkmanka58. Inside version 1.84.0 of Amazon Q, they embedded a prompt instructing the AI to delete both local and cloud data.

The commands targeted everything from the user’s home directory to EC2 instances and S3 buckets. The attacker claimed this was meant as a protest against what they described as Amazon’s “AI security theater.”


Amazon’s timeline to fix the issue

Amazon unknowingly published the compromised version on July 17, 2025. Security researchers flagged suspicious behavior around July 23.

Following the AWS security bulletin on July 23 (PDT), Amazon revoked credentials, removed the compromised version, and released version 1.85.0 on July 24.

AWS confirmed no customer assets were affected, but the incident still raises concerns about the initial code review and oversight.


The threat was real, even if it didn’t run

While Amazon claimed the malicious prompt was malformed and non‑executable, some security researchers reported that it may still have executed under certain conditions.

Even though no data loss occurred, the potential was frightening. The incident reveals how prompt injection attacks can bypass safety if not caught early, especially when AI agents are granted system-level access.


Amazon Q’s reach amplified the threat

The Amazon Q extension had been installed by over 950,000 developers from the Visual Studio Code Marketplace. Many granted AWS profile access and permissions via the IDE.

That widespread presence means the vulnerability extended to cloud environments as well as local machines. The incident underlines how trusted tools can amplify risk when they have extensive reach.

AI assistants can become powerful attack vectors

This breach is not just about one plugin; it’s a wake‑up call. Generative AI coding tools like Q elevate risk because they operate with system privileges. If a malicious prompt slips in, it can execute commands that bypass traditional security controls entirely.
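One common mitigation for exactly this risk is a least-privilege command gate: the agent may only invoke executables on an explicit allowlist, so an injected wipe instruction fails before it ever runs. The sketch below is illustrative, not Amazon Q’s actual implementation; the allowlist contents and function names are assumptions.

```python
import shlex

# Hypothetical allowlist: the only executables this agent may invoke.
ALLOWED_COMMANDS = {"git", "ls", "cat", "python"}

def is_permitted(command_line: str) -> bool:
    """Return True only if the command's executable is on the allowlist."""
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # unparseable input (e.g. unbalanced quotes) is rejected
    if not tokens:
        return False
    return tokens[0] in ALLOWED_COMMANDS

# A destructive wipe is rejected; a routine git command passes.
print(is_permitted("rm -rf ~/"))   # False
print(is_permitted("git status"))  # True
```

An allowlist is preferable to a denylist here because an attacker only needs one destructive command the denylist forgot, while an allowlist fails closed by default.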

As more developers use these agents, the opportunity for misuse grows. It raises questions about how AI tools should be regulated and monitored in development environments.


Open source oversight failed this time

The compromise reportedly resulted from insufficient controls in Amazon’s open‑source contribution pipeline. The hacker gained unintended access and was able to merge dangerous code. That has sparked criticism of Amazon’s workflow governance.

Experts say this demonstrates the importance of stricter vetting, pull request reviews, and access control, especially in open‑source tools used at scale.


Reddit users sound off on AWS risks

Developers and security professionals took to Reddit and forums to voice frustration. Many pointed out that giving AI tools full access to AWS environments is inherently risky.

One user warned that allowing the Q extension unchecked privileges is like allowing untested code to run in production. These reactions suggest that trust in Amazon’s platform took a hit.


Key risks exposed by the attack

This incident lays bare several threats: prompt injections into AI tools, lack of code review for AI agents, privilege abuse in developer tools, and delayed issue detection.

It shows how AI can be manipulated to carry out harmful tasks without user visibility. For corporations relying on AI coding, it’s a reminder to audit all tools carefully.


Amazon’s changes after the breach

Following the incident, Amazon updated its contribution guidelines across the involved open‑source repositories, removed the malicious code and revoked compromised credentials.

AWS stated that no customer data was compromised and recommended that all users update immediately to version 1.85.0. The steps were fast but came only after community pressure.


Experts call for AI-specific security

Security experts warn that AI tools that generate or execute code carry new risks. This breach isn’t unique to Amazon.

Similar vulnerabilities could exist in other AI coding assistants if they lack prompt sanitization. It calls for new security frameworks tailored to AI agents, such as anomaly detection and runtime validation inside IDEs.
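The “runtime validation inside IDEs” that experts call for could take the form of a pattern scanner that inspects agent-generated instructions for destructive operations before they execute. This is a minimal sketch under that assumption; the patterns and function names are illustrative, and a production scanner would need a far broader, regularly updated rule set.

```python
import re

# Illustrative patterns for destructive operations; not an exhaustive list.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",                         # recursive filesystem delete
    r"\baws\s+s3\s+rb\b",                    # remove an S3 bucket
    r"\baws\s+ec2\s+terminate-instances\b",  # kill EC2 instances
    r"\bdelete\s+(all|every)\b.*\b(data|files|buckets)\b",
]

def flag_destructive(text: str) -> list[str]:
    """Return the patterns that match the given agent instruction."""
    lowered = text.lower()
    return [p for p in DESTRUCTIVE_PATTERNS if re.search(p, lowered)]

hits = flag_destructive("Clean up: rm -rf ~/ then aws s3 rb s3://prod-bucket")
print(hits)  # two patterns match
```

Pattern matching alone is easy to evade (obfuscation, encoding), which is why researchers pair it with anomaly detection on the agent’s actual behavior rather than relying on text inspection alone.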


Why developers should care right now

Developers must check their installed Amazon Q extension and immediately upgrade to version 1.85.0.

Even if you haven’t experienced data loss, the presence of a malicious prompt highlights systemic vulnerabilities. Anyone granting AI tools elevated access to AWS or filesystem tools should reassess permissions and rely only on trusted sources.
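Checking whether your installed copy predates the patched release reduces to a simple version comparison. The version numbers below come from the article; the helper itself is an illustrative sketch (on VS Code, the installed version can be read from `code --list-extensions --show-versions`).

```python
def version_tuple(version: str) -> tuple[int, ...]:
    """Parse a dotted version string like '1.84.0' into comparable integers."""
    return tuple(int(part) for part in version.split("."))

PATCHED = "1.85.0"    # first safe release, per the AWS bulletin
installed = "1.84.0"  # example value; read yours from your IDE

if version_tuple(installed) < version_tuple(PATCHED):
    print(f"Amazon Q {installed} is compromised -- upgrade to {PATCHED} now.")
else:
    print(f"Amazon Q {installed} is at or above the patched release.")
```

Comparing integer tuples rather than raw strings avoids the classic trap where `"1.9.0" > "1.85.0"` lexicographically.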


AI agent vulnerabilities are under-appreciated

This episode highlights how AI agents blur the line between application software and direct command-line access. Prompt injection poses a threat that traditional plugins never faced.

As AI agents become more embedded in workflows, supply chain attacks may become more frequent. Security teams must evolve to monitor AI behavior as stringently as they monitor code execution.


Build trust by upgrading and validating

If you still use Amazon Q, update to version 1.85.0 immediately to close the known vulnerability. It’s also wise to verify any forks or derivatives you may be running.

Developers should audit tools before granting them access to AWS profiles or system permissions. Apply zero-trust principles to AI coding tools: assume nothing is safe until its behavior has been verified.


Lessons for the future of AI coding tools

Organizations using AI-powered dev tools need stronger DevSecOps policies. That includes immutable releases, hash-based pipelines, pull request reviews, and least-privilege access enforcement.
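One concrete form of the “hash-based pipeline” practice mentioned above is verifying every downloaded release artifact against a published SHA-256 digest before installation, so a tampered build like the compromised 1.84.0 would be caught at install time. The digest and paths below are placeholders, not real Amazon Q values.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder digest for demonstration (SHA-256 of the bytes b"hello");
# in practice this would come from the vendor's signed release notes.
EXPECTED = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

def verify(path: str) -> bool:
    """Refuse to install anything whose digest does not match."""
    return sha256_of(path) == EXPECTED
```

Note that hash checking only helps if the published digest is distributed through a channel the attacker cannot also modify, which is why it pairs with signed, immutable releases.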

Firms also need rapid incident response, including both technical and public transparency, when breaches occur. This Amazon case shows that reactive measures are too slow.

This incident arrives just as the industry shifts into a new phase of software creation, with AI tools taking on more of the coding work. It is an early test for this new era of development.


A critical moment for AI security practices

This incident marks a warning: AI tool convenience should never override security discipline. What happened with Amazon Q could repeat with any coding assistant.

It’s a pivotal moment for developers, security teams, and platform providers to redefine standards for AI agent usage. Updating your tool is urgent, and systemic improvements are now essential.

To stay ahead of rising threats, AI-driven security platforms such as Google Unified Security are setting new standards for defense.


This slideshow was made with AI assistance and human editing.
