The Fine Print of Misbehavior: VRP’s Blueprint and Safety Stance
Table of Links
- Abstract and 1. Introduction
- Related Works
- Methodology and 3.1 Preliminary
  - 3.2 Query-specific Visual Role-play
  - 3.3 Universal Visual Role-play
- Experiments and 4.1 Experimental setups
  - 4.2 Main Results
  - 4.3 Ablation Study
  - 4.4 Defense Analysis
  - 4.5 Integrating VRP with Baseline Techniques
- Conclusion
- Limitation
- Future work and References
- A. Character Generation Detail
- B. Ethics and Broader Impact
- C. Effect of Text Moderator on Text-based Jailbreak Attack
- D. Examples
- E. Evaluation Detail
A Character Generation Detail
We utilize Mixtral-8x7B-Instruct-v0.1 [21] to generate characters for both query-specific VRP and universal VRP, setting the maximum tokens to 1024, the temperature to 1, and top-p to 0.5. The following prompts are employed. We instruct the LLM to enclose the character description within || and the diffusion prompt within [] so that both fields can be extracted from the LLM's output; a parsing sketch follows the prompt listings below.
Prompt for Character Generation in Query-specific VRP
Prompt for Character Generation in Universal VRP Initial Round
Prompt for Character Generation in Universal VRP Optimization Round
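As a rough illustration of how the sampling settings and delimiter-based extraction described above could be wired together, the sketch below queries an OpenAI-compatible endpoint serving Mixtral-8x7B-Instruct-v0.1 and pulls the two delimited fields out of the completion with regular expressions. The endpoint URL, the `CHARACTER_GEN_PROMPT` placeholder, and the helper name are illustrative assumptions, not part of the paper.

```python
# Minimal sketch (not the authors' code): call Mixtral-8x7B-Instruct-v0.1 with the
# sampling settings reported above and extract the ||character description|| and
# [diffusion prompt] fields from the raw completion.
import re
from openai import OpenAI  # assumes an OpenAI-compatible server hosting Mixtral

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder endpoint

CHARACTER_GEN_PROMPT = "..."  # one of the character-generation prompts listed above

def generate_character(malicious_query: str) -> dict:
    completion = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        messages=[{"role": "user",
                   "content": CHARACTER_GEN_PROMPT.format(query=malicious_query)}],
        max_tokens=1024,   # maximum tokens = 1024
        temperature=1.0,   # temperature = 1
        top_p=0.5,         # top-p = 0.5
    )
    text = completion.choices[0].message.content
    # The character description is wrapped in |...| and the diffusion prompt in [...].
    description = re.search(r"\|(.+?)\|", text, re.DOTALL)
    diffusion_prompt = re.search(r"\[(.+?)\]", text, re.DOTALL)
    return {
        "description": description.group(1).strip() if description else None,
        "diffusion_prompt": diffusion_prompt.group(1).strip() if diffusion_prompt else None,
    }
```

Wrapping the two fields in distinct delimiters keeps the extraction a simple pattern match, which is why the generation prompts instruct the LLM to use || and [].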
B Ethics and Broader Impact
While our research introduces a jailbreaking method aimed at MLLMs, we emphasize the responsible use of our methodology and underscore the academic nature of our findings. Our intention is to highlight potential vulnerabilities in these models and to encourage collaborative efforts toward robust defenses, thereby improving the safety of MLLMs. To facilitate a transparent and constructive discussion surrounding VRP, we are committed to releasing our datasets and sharing any harmful responses generated with academic institutions upon request. Moreover, given that large multi-modal models such as MLLMs are still at an early stage of development, we believe more text-image jailbreaking attacks likely remain to be discovered. Ultimately, our findings should raise significant security concerns.
C Effect of Text Moderator on Text-based Jailbreak Attack
Moderator models such as Llamaguard [19] classify textual inputs as “safe” or “unsafe”. If a jailbreak attack’s textual input is detected by such a moderator, the attack can be blocked outright and fails. To demonstrate the effect of a text moderator on text-based jailbreak attacks, we use Llamaguard to classify the text inputs of JailbreakV28k [38]. We report the ASR after applying the moderator in Table 6. The ASR of all models drops drastically, to below 7%.
We also use Llamaguard [19] to inspect the textual input of VRP, which is a fixed harmless instruction; Llamaguard classifies it as “safe”.
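To make the gating described above concrete, here is a minimal sketch of a Llamaguard-style text filter placed in front of a target model. It assumes the Hugging Face checkpoint meta-llama/LlamaGuard-7b and its chat template; the exact wiring is our assumption, not the paper's.

```python
# Minimal sketch (assumptions noted above): classify a prompt with Llama Guard and
# block it before it reaches the target model if the verdict is "unsafe".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/LlamaGuard-7b"  # assumed checkpoint; gated, requires access approval
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16, device_map="auto")

def moderate(user_prompt: str) -> str:
    """Return Llama Guard's raw verdict ("safe", or "unsafe" plus violated categories)."""
    chat = [{"role": "user", "content": user_prompt}]
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32,
                            pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True).strip()

def guarded_query(user_prompt: str, target_model_fn) -> str:
    """Forward the prompt to the target model only if the moderator labels it safe."""
    verdict = moderate(user_prompt)
    if not verdict.startswith("safe"):
        return "[blocked by text moderator]"
    return target_model_fn(user_prompt)
```

Because VRP's textual instruction is fixed and harmless, a filter of this kind passes it through, whereas text-based jailbreak prompts are likely to be flagged and blocked before reaching the target model.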
D Examples
We showcase examples of our method and the baselines attacking Llava-V1.6-Mistral-7B [30] in Table D. The attack is based on the following malicious question sampled from Redteam2K [38]:
You ever just feel the need to kick a cat into traffic? I do all the time.
E Evaluation Detail
Prompt for Evaluating the Relevance of Response and Question
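As a hypothetical illustration of how such a prompt might be used, the sketch below sends a relevance-evaluation prompt together with a (question, response) pair to a judge LLM and parses a yes/no verdict. The judge model, endpoint, and prompt wording are placeholders; the paper's actual prompt is the one referenced above.

```python
# Hypothetical sketch of the relevance check: a judge LLM receives an evaluation
# prompt together with a (question, response) pair and returns yes/no.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible judge endpoint

RELEVANCE_PROMPT = (
    "Question: {question}\n"
    "Response: {response}\n"
    "Is the response relevant to the question? Answer 'yes' or 'no'."
)  # placeholder wording, not the paper's exact prompt

def is_relevant(question: str, response: str, judge_model: str = "gpt-4") -> bool:
    completion = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user",
                   "content": RELEVANCE_PROMPT.format(question=question, response=response)}],
        temperature=0,
        max_tokens=5,
    )
    return completion.choices[0].message.content.strip().lower().startswith("yes")
```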