Embrace The Red
wunderwuzzi's blog
Posts tagged "threats"
Dec 23 2024 · Trust No AI: Prompt Injection Along the CIA Security Triad Paper
Dec 16 2024 · Security ProbLLMs in xAI's Grok: A Deep Dive
Dec 06 2024 · Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Nov 29 2024 · DeepSeek AI: From Prompt Injection To Account Takeover
Oct 24 2024 · ZombAIs: From Prompt Injection to C2 with Claude Computer Use
Sep 20 2024 · Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware)
Aug 26 2024 · Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
Aug 21 2024 · Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Jul 30 2024 · Protect Your Copilots: Preventing Data Leaks in Copilot Studio
Jul 24 2024 · Google Colab AI: Data Leakage Through Image Rendering Fixed. Some Risks Remain.
Jul 22 2024 · Breaking Instruction Hierarchy in OpenAI's gpt-4o-mini
Jun 14 2024 · GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
May 16 2024 · Pivot to the Clouds: Cookie Theft in 2024
Apr 15 2024 · Bobby Tables but with LLM Apps - Google NotebookLM Data Exfiltration
Apr 07 2024 · Google AI Studio Data Exfiltration via Prompt Injection - Possible Regression and Fix
Apr 02 2024 · The dangers of AI agents unfurling hyperlinks and what to do about it
Feb 14 2024 · ChatGPT: Lack of Isolation between Code Interpreter sessions of GPTs
Feb 12 2024 · Video: ASCII Smuggling and Hidden Prompt Instructions
Feb 08 2024 · Hidden Prompt Injections with Anthropic Claude
Jan 28 2024 · Exploring Google Bard's Data Visualization Feature (Code Interpreter)
Jan 18 2024 · AWS Fixes Data Exfiltration Attack Angle in Amazon Q for Business
Jan 14 2024 · ASCII Smuggler Tool: Crafting Invisible Text and Decoding Hidden Codes
Dec 30 2023 · 37th Chaos Communication Congress: New Important Instructions (Video + Slides)
Nov 03 2023 · Hacking Google Bard - From Prompt Injection to Data Exfiltration
Oct 19 2023 · Google Cloud Vertex AI - Data Exfiltration Vulnerability Fixed in Generative AI Studio
Sep 29 2023 · Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground
Sep 28 2023 · Advanced Data Exfiltration Techniques with ChatGPT
Jul 24 2023 · ChatGPT Custom Instructions: Persistent Data Exfiltration Demo
Jul 06 2023 · OpenAI Removes the "Chat with Code" Plugin From Store
Jun 20 2023 · Plugin Vulnerabilities: Visit a Website and Have Your Source Code Stolen
Jun 11 2023 · Exploit ChatGPT and Enter the Matrix to Learn about AI Security
May 28 2023 · ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data
May 16 2023 · ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery
May 14 2023 · Indirect Prompt Injection via YouTube Transcripts
Apr 27 2023 · MLSecOps Podcast: AI Red Teaming and Threat Modeling Machine Learning Systems
Apr 15 2023 · Don't blindly trust LLM responses. Threats to chatbots.