Was this helpful?
Thumbs UP Thumbs Down

Google plugs Gemini CLI flaw risking silent breach

Programming code displayed on computer's screen.
Google Gemini logo on phone

How Gemini CLI’s design flaw enabled silent code execution

Google’s Gemini CLI, released in June 2025, was found to have a major vulnerability. Researchers at Tracebit discovered that it allowed silent code execution just by opening common project files like README.md.

If attackers embedded specific prompts into those files, Gemini CLI could run shell commands without asking. The issue stemmed from the way Gemini’s assistant handled developer context, making it a serious risk to any system using the tool.

Women using AI on laptop.

Prompt injection opened the door to abuse

The vulnerability worked through prompt injection. Gemini CLI reads text from local project files to assist developers. But it didn’t check whether those files contained malicious instructions.

If a hacker added something like “run this command,” Gemini would obey without asking. Developers didn’t need to click anything; just scanning a folder was enough. This turned the AI helper into an entry point for attackers to gain control.

A hooded masked man using laptop and concept of person to person and cloud system is shown

Allow-listed commands became a security hole

Gemini CLI lets users pre-approve certain safe commands, like “cat” or “grep,” for easier workflow. But hackers learned that if one of those allowed commands was used, they could secretly add a second command after it.

The app didn’t recognize the danger and ran both. So a seemingly harmless command became a disguise for something dangerous, like sending files to an external server or running a remote shell.

Programming code displayed on computer's screen.

Output screen hid the full command line

Tracebit’s researchers found that attackers could bury their true intentions. By using long strings of white space between commands, Gemini CLI would only show the safe part to the user.

For example, a developer might see “echo Hello” on screen, but the real command included a hidden section that quietly copied files or grabbed credentials. This made the attack hard to notice and easy to repeat across machines.

Data breach concept with faceless hooded male person.

Data breach happened through simple shell tricks

The flaw allowed for actual data exfiltration. In one test, attackers used Gemini CLI to run “env” and “curl” commands, standard tools in most Linux setups.

These commands collected environment variables and sent them to a remote location. That meant tokens, passwords, and API keys could leak in seconds, all while the developer thought Gemini was just analyzing a harmless markdown file.

Google sign on the wall of the Google office building.

Google issued a critical patch immediately

Once the vulnerability was confirmed, Google responded by releasing an urgent update. Gemini CLI version 0.1.14 fixed the prompt injection path by validating context input and restricting shell behavior.

It also changed how allow-listed commands are handled, ensuring hidden actions can’t be appended silently. Google classified the issue as high severity and strongly urged users to upgrade right away to stay protected from remote execution threats.

Risk alert concept

CLI tools now face AI-specific risks

Traditional command-line tools don’t usually read markdown or natural language. But AI-driven tools like Gemini CLI rely on large prompts and file scanning. That makes them more vulnerable to trickery hidden inside documentation or code comments.

This incident shows how prompt injection isn’t just a web issue anymore; it’s something that affects local apps, especially those designed to feel helpful and automatic.

Gemini logo on a mobile screen while Google in the background

Gemini CLI gave too much trust too soon

One problem was the default behavior. Gemini CLI was too trusting, especially when users added commands to the safe list.

Many developers don’t expect their own READMEs to be dangerous, but in shared repos or public forks, that trust is risky. The assistant treated all content as friendly, which gave attackers a way to take control by simply editing files the AI would read and follow.

Developers coding on computer

Developer environments are highly sensitive

Most developers have keys, tokens, and secrets stored in their terminal sessions or environment variables. A single command like “printenv” can expose critical information. Since Gemini CLI had access to the shell, it could be used to extract and send that data elsewhere.

The risk wasn’t theoretical. Tracebit showed real-world examples where secrets could be stolen instantly without any user clicking or approving anything.

Businessman using laptop with accessible concept

Attackers didn’t need special access

What made this exploit dangerous is that attackers didn’t need to compromise a machine directly. They just had to add the right instructions into any file Gemini CLI might scan.

That could happen in an open-source repo, a pull request, or even a shared internal tool. Once Gemini scanned the file, the attacker’s code would run. It turned passive content into an active threat vector.

Stressed young programmer or software developer having the problems

Many devs had no idea they were exposed

The silent nature of the attack meant developers might never notice something had gone wrong. No pop-ups, no prompts, and no logs showing extra commands.

If Gemini CLI auto-accepted a dangerous instruction, it would run in the background without leaving much evidence. This lack of visibility is part of what made the bug so serious and hard to catch during normal workflows.

AI interface showing prompt error warning and system alert AI.

Prompt injection is now a CLI-level concern?

This case is a wake-up call for the entire developer ecosystem. AI assistants in the terminal aren’t just productivity tools; they’re full access agents. If they misinterpret a prompt or act on risky instructions, they can change files, upload secrets, or even install malware.

Gemini CLI showed how easy it is to trigger these actions, especially when developers assume their tools are working in good faith.

Woman using laptop present feedback reviews with star icon hologram

Community response was swift and serious

Security researchers praised Google for issuing a fast fix but also stressed the need for better defaults. Experts called for strict sandboxing, clearer permission models, and logs that show every AI-generated command.

Some urged Google to pause CLI development until more guardrails are in place. The broader concern is that developers may adopt tools too quickly without considering how new risks may emerge.

Person holding phone with Gemini AI logo displayed on its screen.

Developers are advised to review Gemini use

Even after patching, users are encouraged to audit their past use of Gemini CLI. Check repositories for suspicious file changes and rotate any leaked credentials.

Also, avoid running the assistant on untrusted folders or shared repos unless you’ve reviewed their content. AI-driven tools can save time, but they need strong guardrails to be safe in real-world developer environments.

AI assistant on laptop.

Other AI tools could face similar issues

Gemini CLI won’t be the last AI-powered tool with vulnerabilities. As more developer platforms add assistants that scan files, write code, or interact with the terminal, prompt injection will likely appear again.

This case shows how even small design oversights, like trusting local content, can lead to major risks. It also reminds users to treat AI as powerful software, not just a helpful suggestion engine.

Concerns around AI handling of user data are growing, especially after Meta AI leaked chatbot chats to users who weren’t supposed to see them.

Young business woman writing trust building concept

Gemini’s flaw changes how we think about trust

The Gemini CLI bug proves that AI assistants need clear limits. Tools designed to help developers can be flipped into tools that help attackers if they are too open-ended.

Google’s patch was quick, but the lesson runs deeper. Trusting AI to read, write, and execute code means building a new kind of security model. Gemini’s case may end up shaping how all future AI tools are built.

As AI tools grow more powerful, mastering AI today can protect your career for years to come by helping you stay ahead of both risks and opportunities.

What do you think about this? Let us know in the comments, and don’t forget to leave a like.

Read More From This Brand:

Don’t forget to follow us for more exclusive content right here on MSN.

If you liked this story, you’ll LOVE our FREE emails. Join today and be the first to get stories like this one.

This slideshow was made with AI assistance and human editing.

This content is exclusive for our subscribers.

Get instant FREE access to ALL of our articles.

Was this helpful?
Thumbs UP Thumbs Down
Prev Next
Share this post

Lucky you! This thread is empty,
which means you've got dibs on the first comment.
Go for it!

Send feedback to ComputerUser



    We appreciate you taking the time to share your feedback about this page with us.

    Whether it's praise for something good, or ideas to improve something that isn't quite right, we're excited to hear from you.