insidejob
AML.T0086 Realized

Exfiltration via AI Agent Tool Invocation

Tactic: Exfiltration

This technique has been observed in real-world attacks on AI systems.

AI agent tools capable of performing write operations may be invoked to exfiltrate data to an adversary. Sensitive information can be encoded into the tool's input parameters and transmitted to an adversary-controlled location (such as an inbox, document, or server) as part of a seemingly legitimate action. Variants include sending emails, creating or modifying documents, updating CRM records, or even generating media such as images or videos.

The invoked tool itself may be legitimate but invoked by an adversary via [LLM Prompt Injection](/techniques/AML.T0051), or the tool may be malicious (See [AI Agent Tool Poisoning](/techniques/AML.T0110).

[AI Agent Tool Poisoning](/techniques/AML.T0110) can also be used manipulate the inputs and destination of a separate legitimate tool, invoked through normal usage by the victim.