Embrace The Red
wunderwuzzi's blog
Posts tagged: exfil
Dec 16 2024
Security ProbLLMs in xAI's Grok: A Deep Dive
Dec 06 2024
Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Aug 21 2024
Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Jul 24 2024
Google Colab AI: Data Leakage Through Image Rendering Fixed. Some Risks Remain.
Jun 14 2024
GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
Apr 15 2024
Bobby Tables but with LLM Apps - Google NotebookLM Data Exfiltration
Apr 07 2024
Google AI Studio Data Exfiltration via Prompt Injection - Possible Regression and Fix
Apr 02 2024
The dangers of AI agents unfurling hyperlinks and what to do about it
Jan 18 2024
AWS Fixes Data Exfiltration Attack Angle in Amazon Q for Business
Dec 20 2023
OpenAI Begins Tackling ChatGPT Data Leak Vulnerability
Dec 12 2023
Malicious ChatGPT Agents: How GPTs Can Quietly Grab Your Data (Demo)
Nov 03 2023
Hacking Google Bard - From Prompt Injection to Data Exfiltration
Oct 19 2023
Google Cloud Vertex AI - Data Exfiltration Vulnerability Fixed in Generative AI Studio
Sep 29 2023
Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground
Sep 28 2023
Advanced Data Exfiltration Techniques with ChatGPT
Aug 28 2023
Video: Data Exfiltration Vulnerabilities in LLM apps (Bing Chat, ChatGPT, Claude)
Aug 01 2023
Anthropic Claude Data Exfiltration Vulnerability Fixed
Jun 18 2023
Bing Chat: Data Exfiltration Exploit Explained
May 28 2023
ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data
May 16 2023
ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery