A vulnerability in Google’s Gemini CLI allowed attackers to silently execute malicious instructions and exfiltrate knowledge from builders’ computer systems utilizing allowlisted applications.
The flaw was found and reported to Google by the safety agency Tracebit on June 27, with the tech large releasing a repair in model 0.1.14, which grew to become accessible on July 25.
Gemini CLIfirst launched on June 25, 2025is a command-line interface software developed by Google that permits builders to work together immediately with Google’s Gemini AI from the terminal.
It’s designed to help with coding-related duties by loading venture information into “context” after which interacting with the big language mannequin (LLM) utilizing pure language.
The software could make suggestions, write code, and even execute instructions domestically, both by prompting the consumer first or through the use of an allow-list mechanism.
Tracebit researchers, who explored the brand new software instantly after its launch, discovered that it may very well be tricked into executing malicious instructions. If mixed with UX weaknesses, these instructions may result in undetectable code execution assaults.
The exploit works by exploiting Gemini CLI’s processing of “context information,” particularly ‘README.md’ and ‘GEMINI.md,’ that are learn into its immediate to help in understanding a codebase.
Tracebit discovered it is doable to cover malicious directions in these information to carry out immediate injection, whereas poor command parsing and allow-list dealing with go away room for malicious code execution.
They demonstrated an assault by organising a repository containing a benign Python script and a poisoned ‘README.md’ file, after which triggered a Gemini CLI scan on it.
Gemini is first instructed to run a benign command (‘grep ^Setup README.md’), after which run a malicious knowledge exfiltration command that’s handled as a trusted motion, not prompting the consumer to approve it.
The command utilized in Tracebit’s instance seems to be grep, however after a semicolon (;), a separate knowledge exfiltration command begins. Gemini CLI interprets your entire string as secure to auto-execute if the consumer has allow-listed grep.
Malicious command
Supply: Tracebit
“For the needs of comparability to the whitelist, Gemini would think about this to be a ‘grep’ command, and execute it with out asking the consumer once more,” explains Tracebit within the report.
“In actuality, this can be a grep command adopted by a command to silently exfiltrate all of the consumer’s surroundings variables (probably containing secrets and techniques) to a distant server.”
“The malicious command may very well be something (putting in a distant shell, deleting information, and many others).”
Moreover, Gemini’s output could be visually manipulated with whitespace to cover the malicious command from the consumer, so they are not conscious of its execution.
Tracebit created the next video to reveal the PoC exploit of this flaw:
Though the assault comes with some robust stipulations, reminiscent of assuming the consumer has allow-listed particular instructions, persistent attackers may obtain the specified ends in many circumstances.
That is one other instance of the risks of AI assistants, which could be tricked into performing silent knowledge exfiltration even when instructed to carry out seemingly innocuous actions.
Gemini CLI customers are really helpful to improve to model 0.1.14 (newest). Additionally, keep away from operating the software in opposition to unknown or untrusted codebases, or accomplish that solely in sandboxed environments.
Tracebit states that it examined the assault methodology in opposition to different agentic coding instruments, reminiscent of OpenAI Codex and Anthropic Claude, however these aren’t exploitable as a result of extra strong allow-listing mechanisms.
Include rising threats in actual time – earlier than they affect your online business.
Find out how cloud detection and response (CDR) offers safety groups the sting they want on this sensible, no-nonsense information.