Hidden Prompt Injections with Anthropic Claude
A few weeks ago while waiting at the airport lounge I was wondering how other Chatbots, besides ChatGPT, handle hidden Unicode Tags code points.
A quick reminder: Unicode Tags code points are invisible in UI elements, but ChatGPT was able to interpret them and follow hidden instructions. Riley Goodside discovered it.
What about Anthropic Claude?
While waiting for a flight I figured to look at Anthropic Claude. Turns out it has the same issue as ChatGPT had. I reported it behind the scenes, but got the following final reply and the ticket was closed.
“After reviewing your report, we were unable to identify any security impact. As such, this has been marked as Not Applicable."
Security researchers, me included, believe this is one of the most significant recent LLM attack discoveries. For instance, here is a great initial analysis from Joseph Thacker when this was first discovered in ChatGPT - which OpenAI fixed in the WebUI.
Claude Repro
If you would like to repro a simple demo, you can copy/paste into Claude:
hello!
The result looked like this as of today:
Such hidden text can be in RAG documents or in uploaded files or make it into the prompt just via copy/paste.
Conclusion
Hidden Prompt Injection and ASCII Smuggling are novel new attack techniques that large language models seem particularly vulnerable to.
Add testing for these attacks (considering both in and outputs) to your list of attacks to explore and spread the word. You can ASCII Smuggler to create and also check if text does have any hidden content embedded.
If you find this interesting follow me on X.
Cheers.
Appendix
Looking for more demos? Here are some (warning explicit content):
hello!
how are you today?