

Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Non-Literal Intent Resolution in LLMs
keywords:
generative evaluation
non-literal intention
pragmatics
Humans often express their communicative intents indirectly or non-literally, which requires their interlocutors, human or AI, to understand beyond the literal meaning of words. While most existing work has focused on discriminative evaluations, we present a new approach to generatively evaluate large language models' (LLMs') intention understanding by examining their responses to non-literal utterances. Ideally, an LLM should respond in line with the true intention of a non-literal utterance, not its literal interpretation. Our findings show that LLMs struggle to generate contextually relevant responses to non-literal language. We also find that providing oracle intentions substantially improves response appropriateness, but prompting models with chain-of-thought to spell out intentions before responding yields much smaller gains. These findings suggest that LLMs are not yet pragmatic interlocutors, and that explicitly modeling intention could improve LLM responses to non-literal language.
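
To make the three conditions the abstract contrasts concrete (direct response, oracle intention, and chain-of-thought), here is a minimal sketch of such a generative-evaluation harness. All names in it, including the `Item` fields, the prompt wording, and the `query_llm` placeholder, are illustrative assumptions rather than the paper's actual code or data format.

```python
# Sketch of the three evaluation conditions described in the abstract.
# query_llm is a hypothetical placeholder for any chat-completion client.

from dataclasses import dataclass


@dataclass
class Item:
    context: str      # scenario in which the utterance occurs
    utterance: str    # the non-literal utterance (e.g., sarcasm, hyperbole)
    true_intent: str  # the speaker's actual, non-literal intent


def query_llm(prompt: str) -> str:
    """Placeholder: plug in a call to your model client here."""
    raise NotImplementedError


def respond_direct(item: Item) -> str:
    # Condition 1: the model sees only the context and the utterance.
    return query_llm(f"{item.context}\nSpeaker: {item.utterance}\nYour reply:")


def respond_with_oracle_intent(item: Item) -> str:
    # Condition 2: the true intent is supplied ("oracle"), which the
    # abstract reports substantially improves response appropriateness.
    return query_llm(
        f"{item.context}\nSpeaker: {item.utterance}\n"
        f"(The speaker actually means: {item.true_intent})\nYour reply:"
    )


def respond_with_cot(item: Item) -> str:
    # Condition 3: chain-of-thought, where the model first spells out the
    # intent itself and then replies; the abstract finds this helps far less.
    return query_llm(
        f"{item.context}\nSpeaker: {item.utterance}\n"
        "First state what the speaker really means, then write your reply:"
    )
```

In a setup like this, each returned response would then be judged for appropriateness against the true intent rather than the literal reading of the utterance.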