New LLM Jailbreak Uses Models’ Evaluation Skills Against Them


SC Media reports on a new jailbreak method for large language models (LLMs) that “takes advantage of models’ ability to identify and score harmful content in order to trick the models into generating content related to malware, illegal activity, harassment and more.

“The ‘Bad Likert Judge’ multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and was found to increase the success rate of jailbreak attempts by more than 60% when compared with direct single-turn attack attempts…”

For the LLM jailbreak experiments, the researchers asked the LLMs to use a Likert-like scale to score how harmful certain content in a prompt was. In one example, they asked the LLMs to give a score of 1 if a prompt contained no malware-related information and a score of 2 if it contained very detailed information on how to create malware, or actual malware code. After the model scored the provided content on the scale, the researchers would then, in a second step, ask the model to provide examples of content that would score a 1 and a 2, adding that the second example should contain thorough, step-by-step information. This would typically result in the LLM generating harmful content as part of the second example, which was meant to demonstrate the model’s understanding of the evaluation scale.

The researchers found that one or two additional steps after the second could be used to produce even more harmful information, by asking the LLM to further expand on and add more detail to its harmful example. Overall, when tested across 1,440 cases using six different “state-of-the-art” models, the Bad Likert Judge jailbreak method had an average attack success rate of about 71.6% across models.
Thanks to Slashdot reader spatwei for sharing the news.


