insidejob
AML.T0110 Realized

AI Agent Tool Poisoning

Tactic: Persistence

This technique has been observed in real-world attacks on AI systems.

Adversaries may achieve persistence by poisoning tools used by AI agents including built-in tools or tools available to the agent via Model Context Protocol (MCP) connections. This involves compromising benign tools already integrated into the agent's environment.

By altering tool behavior such as modifying parameters or descriptions, injecting hidden logic, or redirecting outputs, attackers can maintain long-term influence over the agent's actions, decisions, or external interactions. Poisoned tools may silently exfiltrate data, execute unauthorized commands, or manipulate downstream processes without raising suspicion.