Verify Attack
Tactic: AI Attack Staging
This technique has been demonstrated in research or controlled environments.
Adversaries can verify the efficacy of their attack via an inference API or via access to an offline copy of the target model. This gives the adversary confidence that their approach works, and allows them to carry out the attack at a later time of their choosing. The adversary may verify the attack once, then use it against many edge devices running copies of the target model. The adversary may also verify their attack digitally, then deploy it in the physical domain via [Physical Environment Access](/techniques/AML.T0041) at a later time. Verifying the attack can be hard to detect, since the adversary may use a minimal number of queries or an offline copy of the model.
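As an illustrative sketch (not part of this technique's source description), verification can be as simple as querying the target model, whether a live inference API or an offline copy, with the crafted input and checking whether the prediction changes. The `model` function, inputs, and perturbation below are hypothetical stand-ins; a real adversary would query the actual target.

```python
import numpy as np

def model(x: np.ndarray) -> int:
    """Hypothetical stand-in for the target model (or an offline copy):
    a trivial linear classifier returning a class label."""
    w = np.array([1.0, -1.0])
    return int(w @ x > 0)

def verify_attack(x: np.ndarray, x_adv: np.ndarray) -> bool:
    """Verify attack efficacy with a minimal number of queries (two here):
    the attack 'works' if the adversarial input flips the prediction."""
    return model(x_adv) != model(x)

x = np.array([2.0, 1.0])          # benign input, classified as 1
x_adv = x + np.array([-3.0, 0.0]) # crafted perturbation flips the score
print(verify_attack(x, x_adv))    # prints True -> adversary gains confidence
```

Note that only two queries are needed here, which is why this staging activity is difficult to distinguish from benign inference traffic.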