Embrace The Red
wunderwuzzi's blog
Dec 16 2024 - Security ProbLLMs in xAI's Grok: A Deep Dive
Dec 06 2024 - Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Nov 29 2024 - DeepSeek AI: From Prompt Injection To Account Takeover
Oct 24 2024 - ZombAIs: From Prompt Injection to C2 with Claude Computer Use
Aug 26 2024 - Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
Aug 21 2024 - Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Jul 24 2024 - Google Colab AI: Data Leakage Through Image Rendering Fixed. Some Risks Remain.
Jul 22 2024 - Breaking Instruction Hierarchy in OpenAI's gpt-4o-mini
Jun 14 2024 - GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
Apr 15 2024 - Bobby Tables but with LLM Apps - Google NotebookLM Data Exfiltration
Apr 13 2024 - HackSpaceCon 2024: Short Trip Report, Slides and Rocket Launch
Apr 07 2024 - Google AI Studio Data Exfiltration via Prompt Injection - Possible Regression and Fix
Apr 02 2024 - The dangers of AI agents unfurling hyperlinks and what to do about it
Mar 04 2024 - ASCII Smuggler - Improvements
Mar 02 2024 - Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot
Feb 22 2024 - Google Gemini: Planting Instructions For Delayed Automatic Tool Invocation
Feb 14 2024 - ChatGPT: Lack of Isolation between Code Interpreter sessions of GPTs
Feb 12 2024 - Video: ASCII Smuggling and Hidden Prompt Instructions
Feb 08 2024 - Hidden Prompt Injections with Anthropic Claude
Jan 28 2024 - Exploring Google Bard's Data Visualization Feature (Code Interpreter)
Jan 18 2024 - AWS Fixes Data Exfiltration Attack Angle in Amazon Q for Business
Jan 14 2024 - ASCII Smuggler Tool: Crafting Invisible Text and Decoding Hidden Codes
Dec 30 2023 - 37th Chaos Communication Congress: New Important Instructions (Video + Slides)
Dec 20 2023 - OpenAI Begins Tackling ChatGPT Data Leak Vulnerability
Dec 12 2023 - Malicious ChatGPT Agents: How GPTs Can Quietly Grab Your Data (Demo)
Nov 28 2023 - Ekoparty Talk - Prompt Injections in the Wild
Nov 03 2023 - Hacking Google Bard - From Prompt Injection to Data Exfiltration
Oct 19 2023 - Google Cloud Vertex AI - Data Exfiltration Vulnerability Fixed in Generative AI Studio
Sep 29 2023 - Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground
Sep 28 2023 - Advanced Data Exfiltration Techniques with ChatGPT
Sep 18 2023 - HITCON CMT 2023 - LLM Security Presentation and Trip Report
Sep 16 2023 - LLM Apps: Don't Get Stuck in an Infinite Loop! 💵💰
Aug 28 2023 - Video: Data Exfiltration Vulnerabilities in LLM apps (Bing Chat, ChatGPT, Claude)
Aug 01 2023 - Anthropic Claude Data Exfiltration Vulnerability Fixed
Jul 24 2023 - ChatGPT Custom Instructions: Persistent Data Exfiltration Demo
Jul 14 2023 - Image to Prompt Injection with Google Bard
Jul 12 2023 - Google Docs AI Features: Vulnerabilities and Risks
Jul 06 2023 - OpenAI Removes the "Chat with Code" Plugin From Store
Jun 20 2023 - Plugin Vulnerabilities: Visit a Website and Have Your Source Code Stolen
Jun 18 2023 - Bing Chat: Data Exfiltration Exploit Explained
Jun 11 2023 - Exploit ChatGPT and Enter the Matrix to Learn about AI Security
May 28 2023 - ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data
May 16 2023 - ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery
May 14 2023 - Indirect Prompt Injection via YouTube Transcripts
May 11 2023 - Adversarial Prompting: Tutorial and Lab
May 10 2023 - Video: Prompt Injections - An Introduction
Apr 27 2023 - MLSecOps Podcast: AI Red Teaming and Threat Modeling Machine Learning Systems
Apr 15 2023 - Don't blindly trust LLM responses. Threats to chatbots.
Mar 29 2023 - AI Injections: Direct and Indirect Prompt Injections and Their Implications
Mar 26 2023 - Bing Chat claims to have robbed a bank and it left no trace
Mar 05 2023 - Yolo: Natural Language to Shell Commands with ChatGPT API
Dec 02 2022 - ChatGPT: Imagine you are a database server