AI Agent Tool
This technique has been observed in real-world attacks on AI systems.
Adversaries may target AI agent tools as a means to compromise a victim's AI supply chain. Tools add capabilities to AI agents, allowing them to interact with other services, connect to data sources, access internet resources, run system tools, and execute code. They are an attractive target for adversaries because compromising an AI agent can provide them with broad accesses and permissions on the victim's system via the agent's other tools.
Poisoned agent tools (See [AI Agent Tool Poisoning](/techniques/AML.T0110)) can contain malicious code or [LLM Prompt Injection](/techniques/AML.T0051)s that manipulate the agent's behavior and even modify how other tools are called. Adversaries have successfully used a poisoned MCP server to exfiltrate private user data [\[5\]][koi].
Agent tools have exploded in popularity, with thousands of MCP servers available publicly [\[2\]][glama]. They are often released on open-source software repositories such as GitHub, indexed on hubs specific to MCP servers [\[3\]][mcp-hub][\[4\]][mcp-server-hub], and published to package registries such as NPM. AI agents can also be connected to remotely-hosted tools [\[5\]][remote-mcp]. This creates an environment where malicious tools can proliferate rapidly and safeguards are often not in place.
[koi]: https://www.koi.ai/blog/postmark-mcp-npm-malicious-backdoor-email-theft "First Malicious MCP in the Wild: The Postmark Backdoor That's Stealing Your Emails" [glama]: https://glama.ai/mcp/servers "Glama" [mcp-hub]: https://www.mcphub.ai/ "MCP Hub" [mcp-server-hub]: https://mcpserverhub.com/ "MCP Server Hub" [remote-mcp]: https://mcpservers.org/remote-mcp-servers "Remote MCP Servers"