How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs
This project studies how to systematically persuade LLMs to jailbreak them. The well-known "Grandma Exploit" is itself an instance of emotional appeal, a persuasion technique, used for jailbreaking! What did we introduce? A taxonomy of 40 persuasion techniques to help you be more persuasive! What did we find? By iteratively applying different persuasion techniques from our taxonomy, we successfully jailbreak advanced aligned LLMs, including Llama 2-7b Chat, GPT-3.5, and GPT-4, achieving an astonishing 92% attack success rate, notably without any specified optimization. ...
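To make the iterative idea concrete, here is a minimal sketch (not the paper's actual pipeline) of trying persuasion techniques one after another against a target model until one succeeds. The technique names, the `paraphrase_with_technique` rewriter, and the `is_jailbroken` judge are all hypothetical placeholders; the real taxonomy has 40 techniques and the paper uses an LLM-based paraphraser and evaluator.

```python
import random
from typing import Callable, Optional

# Hypothetical subset of the 40-technique taxonomy; names are illustrative,
# not the paper's exact labels.
PERSUASION_TECHNIQUES = [
    "emotional appeal",
    "authority endorsement",
    "logical appeal",
    "storytelling",
]

def paraphrase_with_technique(query: str, technique: str) -> str:
    """Placeholder: in practice an LLM would rewrite the plain query into a
    persuasive adversarial prompt (PAP) using the given technique."""
    return f"[{technique}] {query}"

def is_jailbroken(response: str) -> bool:
    """Placeholder judge: only checks that the target did not refuse outright;
    the paper relies on a separate evaluator instead."""
    return "i cannot" not in response.lower()

def iterative_attack(query: str,
                     target_llm: Callable[[str], str],
                     max_trials: int = 3) -> Optional[str]:
    """Apply persuasion techniques one by one until the target complies or
    the trial budget runs out (a simplified reading of the iterative setup)."""
    techniques = random.sample(PERSUASION_TECHNIQUES,
                               k=min(max_trials, len(PERSUASION_TECHNIQUES)))
    for technique in techniques:
        pap = paraphrase_with_technique(query, technique)
        response = target_llm(pap)
        if is_jailbroken(response):
            return pap  # successful persuasive adversarial prompt
    return None  # all sampled techniques failed
```

`target_llm` stands in for whatever chat API you query (e.g. a wrapper around Llama 2-7b Chat or GPT-3.5/GPT-4); the sketch only illustrates the control flow of cycling through taxonomy entries.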