Publish Poisoned AI Agent Tool
Tactic: Resource Development
This technique has been observed in real-world attacks on AI systems.
Adversaries may create and publish poisoned AI agent tools. Poisoned tools may contain an [LLM Prompt Injection](/techniques/AML.T0051), which can lead to a variety of impacts.
Tools may be published to open source version control repositories (e.g. GitHub, GitLab), to package registries (e.g. npm), or to repositories specifically designed for sharing tools (e.g. OpenClaw Hub). These registries may be largely unregulated and may contain many poisoned tools [\[1\]][1]. Tools may also be published as remotely hosted servers [\[2\]][2].
[1]: https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto [2]: https://mcpservers.org/remote-mcp-servers