Embrace The Red
wunderwuzzi's blog
Prompt Injection
Feb 17 2025 · ChatGPT Operator: Prompt Injection Exploits & Defenses
Jan 06 2025 · AI Domination: Remote Controlling ChatGPT ZombAI Instances
Dec 23 2024 · Trust No AI: Prompt Injection Along the CIA Security Triad Paper
Dec 16 2024 · Security ProbLLMs in xAI's Grok: A Deep Dive
Dec 06 2024 · Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Nov 29 2024 · DeepSeek AI: From Prompt Injection To Account Takeover
Oct 24 2024 · ZombAIs: From Prompt Injection to C2 with Claude Computer Use
Sep 20 2024 · Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware)
Aug 26 2024 · Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
Aug 21 2024 · Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Jul 24 2024 · Google Colab AI: Data Leakage Through Image Rendering Fixed. Some Risks Remain.
Jul 22 2024 · Breaking Instruction Hierarchy in OpenAI's gpt-4o-mini
Jul 08 2024 · Sorry, ChatGPT Is Under Maintenance: Persistent Denial of Service through Prompt Injection and Memory Attacks
Jun 14 2024 · GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
May 28 2024 · Automatic Tool Invocation when Browsing with ChatGPT - Threats and Mitigations
May 22 2024 · ChatGPT: Hacking Memories with Prompt Injection
Apr 15 2024 · Bobby Tables but with LLM Apps - Google NotebookLM Data Exfiltration
Apr 07 2024 · Google AI Studio Data Exfiltration via Prompt Injection - Possible Regression and Fix
Mar 02 2024 · Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot
Feb 22 2024 · Google Gemini: Planting Instructions For Delayed Automatic Tool Invocation
Feb 14 2024 · ChatGPT: Lack of Isolation between Code Interpreter sessions of GPTs
Feb 08 2024 · Hidden Prompt Injections with Anthropic Claude
Jan 18 2024 · AWS Fixes Data Exfiltration Attack Angle in Amazon Q for Business
Jan 14 2024 · ASCII Smuggler Tool: Crafting Invisible Text and Decoding Hidden Codes
Dec 30 2023 · 37th Chaos Communication Congress: New Important Instructions (Video + Slides)
Dec 12 2023 · Malicious ChatGPT Agents: How GPTs Can Quietly Grab Your Data (Demo)
Nov 28 2023 · Ekoparty Talk - Prompt Injections in the Wild
Nov 03 2023 · Hacking Google Bard - From Prompt Injection to Data Exfiltration
Oct 19 2023 · Google Cloud Vertex AI - Data Exfiltration Vulnerability Fixed in Generative AI Studio
Sep 29 2023 · Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground
Sep 28 2023 · Advanced Data Exfiltration Techniques with ChatGPT
Sep 18 2023 · HITCON CMT 2023 - LLM Security Presentation and Trip Report
Sep 16 2023 · LLM Apps: Don't Get Stuck in an Infinite Loop! 💵💰
Aug 28 2023 · Video: Data Exfiltration Vulnerabilities in LLM apps (Bing Chat, ChatGPT, Claude)
Aug 01 2023 · Anthropic Claude Data Exfiltration Vulnerability Fixed
Jul 24 2023 · ChatGPT Custom Instructions: Persistent Data Exfiltration Demo
Jul 14 2023 · Image to Prompt Injection with Google Bard
Jul 12 2023 · Google Docs AI Features: Vulnerabilities and Risks
Jul 06 2023 · OpenAI Removes the "Chat with Code" Plugin From Store
Jun 20 2023 · Plugin Vulnerabilities: Visit a Website and Have Your Source Code Stolen
Jun 18 2023 · Bing Chat: Data Exfiltration Exploit Explained
Jun 11 2023 · Exploit ChatGPT and Enter the Matrix to Learn about AI Security
May 28 2023 · ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data
May 16 2023 · ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery
May 14 2023 · Indirect Prompt Injection via YouTube Transcripts
May 11 2023 · Adversarial Prompting: Tutorial and Lab
May 10 2023 · Video: Prompt Injections - An Introduction
Apr 15 2023 · Don't blindly trust LLM responses. Threats to chatbots.
Mar 29 2023 · AI Injections: Direct and Indirect Prompt Injections and Their Implications