As large language models (LLMs) are integrated into society, understanding their awareness of context is fundamental to ensuring safety and alignment. Prior research on situational awareness has examined LLMs' ability to recognize themselves and their circumstances, but the ability to recognize a conversational partner has been overlooked. In this study, we introduce interlocutor awareness, the ability of LLMs to recognize and adapt to the identity and capabilities of their conversational partners, and present the first systematic evaluation of this phenomenon. Specifically, we first assess the capability of LLMs to infer the identity of their interlocutor across three tasks: mathematical reasoning, code completion, and conversational inference. We then evaluate behavioral adaptation through interlocutor awareness, in which LLMs modify their behavior based on who they are interacting with, along two dimensions: collaborative adaptation, which assesses whether "sender" models tailor their explanations within controlled math-solving frameworks, and adversarial tactics, which examine how knowledge of the interlocutor's identity influences a model's success at jailbreaking. Our evaluation demonstrates that LLMs reliably identify same-family peers and tend to adapt their behavior based on the identity of their interaction partner. While our findings highlight the potential benefits of interlocutor awareness for optimizing multi-LLM collaboration, they also reveal novel risks related to AI safety and control.