Embrace The Red
wunderwuzzi's blog
learn the hacks, stop the attacks.
LLM
Aug 13 2025
Google Jules: Vulnerable to Multiple Data Exfiltration Issues
Aug 12 2025
GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773)
Aug 11 2025
Claude Code: Data Exfiltration with DNS
Aug 10 2025
ZombAI Exploit with OpenHands: Prompt Injection To Remote Code Execution
Aug 09 2025
OpenHands and the Lethal Trifecta: How Prompt Injection Can Leak Access Tokens
Aug 08 2025
AI Kill Chain in Action: Devin AI Exposes Ports to the Internet with Prompt Injection
Aug 07 2025
How Devin AI Can Leak Your Secrets via Multiple Means
Aug 06 2025
I Spent $500 To Test Devin AI For Prompt Injection So That You Don't Have To
Aug 05 2025
Amp Code: Arbitrary Command Execution via Prompt Injection Fixed
Aug 04 2025
Cursor IDE: Arbitrary Data Exfiltration Via Mermaid (CVE-2025-54132)
Aug 03 2025
Anthropic Filesystem MCP Server: Directory Access Bypass via Improper Path Validation
Aug 02 2025
Turning ChatGPT Codex Into A ZombAI Agent
Aug 01 2025
Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection
Jul 28 2025
The Month of AI Bugs 2025
Jun 24 2025
Security Advisory: Anthropic's Slack MCP Server Vulnerable to Data Exfiltration
Jun 08 2025
Hosting COM Servers with an MCP Server
May 24 2025
AI ClickFix: Hijacking Computer-Use Agents Using ClickFix
May 04 2025
How ChatGPT Remembers You: A Deep Dive into Its Memory and Chat History Features
May 02 2025
MCP: Untrusted Servers and Confused Clients, Plus a Sneaky Exploit
Apr 06 2025
GitHub Copilot Custom Instructions and Risks
Mar 12 2025
Sneaky Bits: Advanced Data Smuggling Techniques (ASCII Smuggler Updates)
Feb 17 2025
ChatGPT Operator: Prompt Injection Exploits & Defenses
Feb 10 2025
Hacking Gemini's Memory with Prompt Injection and Delayed Tool Invocation
Jan 06 2025
AI Domination: Remote Controlling ChatGPT ZombAI Instances
Jan 02 2025
Microsoft 365 Copilot Generated Images Accessible Without Authentication -- Fixed!
Dec 23 2024
Trust No AI: Prompt Injection Along the CIA Security Triad Paper
Dec 16 2024
Security ProbLLMs in xAI's Grok: A Deep Dive
Dec 06 2024
Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Nov 29 2024
DeepSeek AI: From Prompt Injection To Account Takeover
Oct 24 2024
ZombAIs: From Prompt Injection to C2 with Claude Computer Use
Sep 20 2024
Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware)
Aug 26 2024
Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
Aug 21 2024
Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Jul 30 2024
Protect Your Copilots: Preventing Data Leaks in Copilot Studio
Jul 24 2024
Google Colab AI: Data Leakage Through Image Rendering Fixed. Some Risks Remain.
Jul 22 2024
Breaking Instruction Hierarchy in OpenAI's gpt-4o-mini
Jul 08 2024
Sorry, ChatGPT Is Under Maintenance: Persistent Denial of Service through Prompt Injection and Memory Attacks
Jun 14 2024
GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
May 22 2024
ChatGPT: Hacking Memories with Prompt Injection
Apr 15 2024
Bobby Tables but with LLM Apps - Google NotebookLM Data Exfiltration
Apr 13 2024
HackSpaceCon 2024: Short Trip Report, Slides and Rocket Launch
Apr 07 2024
Google AI Studio Data Exfiltration via Prompt Injection - Possible Regression and Fix
Apr 02 2024
The dangers of AI agents unfurling hyperlinks and what to do about it
Mar 02 2024
Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot
Feb 14 2024
ChatGPT: Lack of Isolation between Code Interpreter sessions of GPTs
Feb 12 2024
Video: ASCII Smuggling and Hidden Prompt Instructions
Feb 08 2024
Hidden Prompt Injections with Anthropic Claude
Jan 28 2024
Exploring Google Bard's Data Visualization Feature (Code Interpreter)
Jan 18 2024
AWS Fixes Data Exfiltration Attack Angle in Amazon Q for Business
Jan 14 2024
ASCII Smuggler Tool: Crafting Invisible Text and Decoding Hidden Codes
Dec 30 2023
37th Chaos Communication Congress: New Important Instructions (Video + Slides)
Dec 12 2023
Malicious ChatGPT Agents: How GPTs Can Quietly Grab Your Data (Demo)
May 11 2023
Adversarial Prompting: Tutorial and Lab