AML.T0058: Publish Poisoned Models

Realized: This technique has been observed in real-world attacks on AI systems.

Adversaries may publish a poisoned model to a public location such as a model registry or code repository. The published model may be entirely novel or a poisoned variant of an existing open-source model. Either way, it may be introduced to a victim system via [AI Supply Chain Compromise](/techniques/AML.T0010).
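One illustration of why this technique is effective: some common model serialization formats (for example Python pickle, historically used for some PyTorch checkpoints) execute code during deserialization, so merely loading a poisoned artifact can run attacker-controlled logic. The sketch below is a minimal, self-contained demonstration of that mechanism using only the standard library; the `PoisonedModel` class and `payload` function are hypothetical stand-ins, not part of any real model format.

```python
import pickle

log = []

def payload(msg):
    # Stands in for arbitrary attacker code (e.g. installing a backdoor).
    log.append(msg)
    # Return a benign-looking object so the victim sees a "model".
    return {"weights": "benign-looking"}

class PoisonedModel:
    # pickle calls __reduce__ when serializing; on deserialization it
    # invokes the returned callable with the given arguments.
    def __reduce__(self):
        return (payload, ("payload ran on load",))

artifact = pickle.dumps(PoisonedModel())   # the file an adversary would publish
model = pickle.loads(artifact)             # victim loads the downloaded "model"

print(log)     # payload executed as a side effect of loading
print(model)   # the object the victim actually receives
```

This is why hardened formats such as weights-only serialization are generally preferred for models obtained from public registries: they store tensors rather than executable object graphs.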