
MITRE ATLAS v5.5.0

Adversarial Threat Landscape for Artificial-Intelligence Systems

16 tactics, 101 techniques, 66 sub-techniques, 0 mitigations, 57 case studies
Realized — seen in the wild
Demonstrated — lab/research
Feasible — theoretical
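The three maturity levels in the legend can be modeled as a small enum when tagging observations in tooling. A minimal sketch; the class name, field names, and the example record are illustrative, not part of ATLAS:

```python
from enum import Enum

class Maturity(Enum):
    """Maturity levels from the ATLAS matrix legend (illustrative names)."""
    REALIZED = "seen in the wild"
    DEMONSTRATED = "lab/research"
    FEASIBLE = "theoretical"

# Example: tag an observation against a tactic with its maturity level.
observation = {"tactic": "AML.TA0011", "maturity": Maturity.REALIZED}
```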

AML.TA0002 Reconnaissance 8

ATT&CK TA0043

The adversary is trying to gather information about the AI system they can use to plan future operations.

AML.TA0003 Resource Development 13

ATT&CK TA0042

The adversary is trying to establish resources they can use to support operations.

AML.TA0004 Initial Access 7

ATT&CK TA0001

The adversary is trying to gain access to the AI system.

AML.TA0000 AI Model Access 4

The adversary is attempting to gain some level of access to an AI model.

AML.TA0005 Execution 6

ATT&CK TA0002

The adversary is trying to run malicious code embedded in AI artifacts or software.

AML.TA0006 Persistence 9

ATT&CK TA0003

The adversary is trying to maintain their foothold via AI artifacts or software.

AML.TA0012 Privilege Escalation 4

ATT&CK TA0004

The adversary is trying to gain higher-level permissions.

AML.TA0007 Defense Evasion 15

ATT&CK TA0005

The adversary is trying to avoid being detected by AI-enabled security software.

AML.TA0013 Credential Access 6

ATT&CK TA0006

The adversary is trying to steal account names and passwords.

AML.TA0008 Discovery 9

ATT&CK TA0007

The adversary is trying to figure out your AI environment.

AML.TA0015 Lateral Movement 2

ATT&CK TA0008

The adversary is trying to move through your AI environment.

AML.TA0009 Collection 4

ATT&CK TA0009

The adversary is trying to gather AI artifacts and other related information relevant to their goal.

AML.TA0001 AI Attack Staging 6

The adversary is leveraging their knowledge of and access to the target system to tailor the attack.

AML.TA0014 Command and Control 3

ATT&CK TA0011

The adversary is trying to communicate with compromised AI systems to control them.

AML.TA0010 Exfiltration 6

ATT&CK TA0010

The adversary is trying to steal AI artifacts or other information about the AI system.

AML.TA0011 Impact 9

ATT&CK TA0040

The adversary is trying to manipulate, interrupt, erode confidence in, or destroy your AI systems and data.

Case Studies 57

AML.CS0000 Evasion of Deep Learning Detector for Malware C&C Traffic
AML.CS0001 Botnet Domain Generation Algorithm (DGA) Detection Evasion
AML.CS0002 VirusTotal Poisoning
AML.CS0003 Bypassing Cylance's AI Malware Detection
AML.CS0004 Camera Hijack Attack on Facial Recognition System
AML.CS0005 Attack on Machine Translation Services
AML.CS0006 ClearviewAI Misconfiguration
AML.CS0007 GPT-2 Model Replication
AML.CS0008 ProofPoint Evasion
AML.CS0009 Tay Poisoning
AML.CS0010 Microsoft Azure Service Disruption
AML.CS0011 Microsoft Edge AI Evasion
AML.CS0012 Face Identification System Evasion via Physical Countermeasures
AML.CS0013 Backdoor Attack on Deep Learning Models in Mobile Apps
AML.CS0014 Confusing Antimalware Neural Networks
AML.CS0015 Compromised PyTorch Dependency Chain
AML.CS0016 Achieving Code Execution in MathGPT via Prompt Injection
AML.CS0017 Bypassing ID.me Identity Verification
AML.CS0018 Arbitrary Code Execution with Google Colab
AML.CS0019 PoisonGPT
AML.CS0020 Indirect Prompt Injection Threats: Bing Chat Data Pirate
AML.CS0021 ChatGPT Conversation Exfiltration
AML.CS0022 ChatGPT Package Hallucination
AML.CS0023 ShadowRay
AML.CS0024 Morris II Worm: RAG-Based Attack
AML.CS0025 Web-Scale Data Poisoning: Split-View Attack
AML.CS0026 Financial Transaction Hijacking with M365 Copilot as an Insider
AML.CS0027 Organization Confusion on Hugging Face
AML.CS0028 AI Model Tampering via Supply Chain Attack
AML.CS0029 Google Bard Conversation Exfiltration
AML.CS0030 LLM Jacking
AML.CS0031 Malicious Models on Hugging Face
AML.CS0032 Attempted Evasion of ML Phishing Webpage Detection System
AML.CS0033 Live Deepfake Image Injection to Evade Mobile KYC Verification
AML.CS0034 ProKYC: Deepfake Tool for Account Fraud Attacks
AML.CS0035 Data Exfiltration from Slack AI via Indirect Prompt Injection
AML.CS0036 AIKatz: Attacking LLM Desktop Applications
AML.CS0037 Data Exfiltration via Agent Tools in Copilot Studio
AML.CS0038 Planting Instructions for Delayed Automatic AI Agent Tool Invocation
AML.CS0039 Living Off AI: Prompt Injection via Jira Service Management
AML.CS0040 Hacking ChatGPT’s Memories with Prompt Injection
AML.CS0041 Rules File Backdoor: Supply Chain Attack on AI Coding Assistants
AML.CS0042 SesameOp: Novel Backdoor Uses OpenAI Assistants API for Command and Control
AML.CS0043 Malware Prototype with Embedded Prompt Injection
AML.CS0044 LAMEHUG: Malware Leveraging Dynamic AI-Generated Commands
AML.CS0045 Data Exfiltration via an MCP Server used by Cursor
AML.CS0046 Data Destruction via Indirect Prompt Injection Targeting Claude Computer-Use
AML.CS0047 Code to Deploy Destructive AI Agent Discovered in Amazon Q VS Code Extension
AML.CS0048 Exposed ClawdBot Control Interfaces Lead to Credential Access and Execution
AML.CS0049 Supply Chain Compromise via Poisoned ClawdBot Skill
AML.CS0050 OpenClaw 1-Click Remote Code Execution
AML.CS0051 OpenClaw Command & Control via Prompt Injection
AML.CS0052 LLMSmith: RCE Vulnerabilities in LLM-Integrated Applications
AML.CS0053 Poisoned Postmark MCP Server Email Exfiltration
AML.CS0054 Data Exfiltration via Remote Poisoned MCP Tool
AML.CS0055 AI ClickFix: Hijacking Computer-Use Agents Using ClickFix
AML.CS0056 Model Distillation Campaigns Targeting Anthropic Claude

Resources