Trust No AI: Prompt Injection Along the CIA Security Triad Paper
Happy to share that I authored the paper “Trust No AI: Prompt Injection Along The CIA Security Triad”.
You can download it from arXiv.
The paper examines how prompt injection attacks can compromise the Confidentiality, Integrity, and Availability (CIA) of AI systems, with real-world examples targeting vendors such as OpenAI, Google, Anthropic, and Microsoft.
It summarizes many of the prompt injection examples I have covered on this blog, and I hope it helps bridge the gap between traditional cybersecurity and academic AI/ML research, fostering a stronger understanding of, and better defenses against, these emerging threats.
Cheers, Johann.