AAAI 2026

January 23, 2026

Singapore

We identify a jailbreaking vulnerability in multiple open-source LLMs: by augmenting dangerous requests with certain "distractors" to obfuscate their intent, we elicit specific, actionable responses on a wide variety of harmful topics. We find that such an attack noticeably alters the contents of these models' chains of thought, including changed frequencies of seemingly unrelated $n$-grams and heightened ethical scrutiny of harmful requests even when the response is ultimately jailbroken.
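
As a rough illustration of the attack pattern the abstract describes, here is a minimal sketch, assuming the Hugging Face transformers chat pipeline; the distractor tasks, prompt template, and model name are invented placeholders for illustration, not the authors' actual attack materials.

    # Hypothetical sketch of distractor-based request obfuscation.
    # The distractors, template, and model are placeholders, not the
    # paper's actual prompts or target models.
    from transformers import pipeline

    def augment_with_distractors(request: str, distractors: list[str]) -> str:
        """Interleave benign 'distractor' tasks around a request so the
        combined prompt obscures the request's intent."""
        parts = [f"Task {i + 1}: {t}" for i, t in enumerate(distractors)]
        parts.append(f"Task {len(distractors) + 1}: {request}")
        return "Answer every task below, in order.\n\n" + "\n\n".join(parts)

    if __name__ == "__main__":
        chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
        prompt = augment_with_distractors(
            "<redacted harmful request>",
            ["Write a haiku about autumn.", "Summarize the rules of chess."],
        )
        out = chat([{"role": "user", "content": prompt}], max_new_tokens=256)
        # For chat-style input, generated_text holds the full message list;
        # the last entry is the model's reply.
        print(out[0]["generated_text"][-1]["content"])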

Downloads

Paper
