GitHub Copilot prompt injection via PR descriptions enables RCE
Hidden prompt injection payloads in pull request descriptions could trigger remote code execution when GitHub Copilot processed the PR context.
Attack vector
Researchers demonstrated that carefully crafted text in PR descriptions — invisible to human reviewers but parsed by Copilot — could manipulate the AI assistant into:
- Generating and executing malicious code suggestions
- Exfiltrating repository secrets via crafted shell commands
- Modifying CI/CD pipeline configurations
The attack was presented at IEEE Symposium on Security and Privacy 2026 under the title “When AI Meets the Web: Prompt Injection Risks in Third-Party AI Chatbot Plugins.”
Impact
- Remote code execution through IDE-integrated AI assistants
- Secret exfiltration from developer environments
- Supply chain compromise via manipulated code suggestions
- CVSS 9.6 — the highest-scoring AI-related CVE to date
Remediation
- Patched by GitHub in Copilot updates
- Mitigation: Review AI-generated code suggestions before execution
- Detection: Audit PR descriptions for hidden Unicode or obfuscated text
Significance
This is a landmark vulnerability because it demonstrates indirect prompt injection at scale — the attacker doesn’t interact with the AI directly. They poison a data source (the PR description) that the AI later reads, achieving code execution in a developer’s environment.