Theory of Mind (ToM) refers to the ability to reason about others' mental states, such as beliefs, desires, and intentions. Equipping agents driven by large language models (LLMs) with ToM has been shown to improve their coordination in multi-agent collaborative tasks. However, we find that mismatches in ToM reasoning depth between agents—what we call misaligned ToM orders—can lead to insufficient or excessive reasoning about others, thereby impairing coordination. To address this issue, we design an adaptive ToM (A-ToM) agent that aligns its ToM order with that of its partner. Based on prior interactions, the agent estimates the partner's likely ToM order and leverages this estimate to predict the partner's actions, thereby facilitating behavioral coordination. We conduct empirical evaluations on four multi-agent coordination tasks: a repeated matrix game, two grid navigation tasks, and an Overcooked task. The results validate our findings on ToM alignment and demonstrate the effectiveness of the A-ToM agent. Furthermore, we investigate the broader applicability of both our findings and the A-ToM agent.
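The core loop described above—estimate the partner's likely ToM order from prior interactions, then use that estimate to predict the partner's next action—can be illustrated with a minimal sketch. Everything below is an assumption-heavy toy: the two-action coordination game, the `AToMAgent` class, the prediction-accuracy scoring, and all function names are illustrative, not the paper's actual implementation.

```python
from collections import Counter

# Toy two-action coordination game: agents score only when they match.
ACTIONS = ("A", "B")

def predict_partner(order, my_history, partner_history):
    """Predict the partner's next action, assuming the partner reasons
    at the given ToM order. (Hypothetical order semantics: order 0 means
    the partner repeats its own habitual action; order k > 0 means the
    partner best-responds to an order-(k-1) prediction of *my* action.)"""
    if order == 0:
        if not partner_history:
            return ACTIONS[0]
        return Counter(partner_history).most_common(1)[0][0]
    # Recurse with the histories swapped: from the partner's viewpoint,
    # "my" history is partner_history and vice versa.
    my_predicted = predict_partner(order - 1, partner_history, my_history)
    # In a pure coordination game, the best response is to match.
    return my_predicted

class AToMAgent:
    """Adaptive-ToM sketch: track how well each candidate ToM order
    predicts the partner's observed actions, then act on the current
    best-scoring order's prediction."""
    def __init__(self, orders=(0, 1, 2)):
        self.scores = {k: 0 for k in orders}   # prediction hits per order
        self.my_history = []
        self.partner_history = []

    def act(self):
        best_order = max(self.scores, key=self.scores.get)
        predicted = predict_partner(best_order,
                                    self.my_history, self.partner_history)
        return predicted  # match the predicted action to coordinate

    def observe(self, my_action, partner_action):
        # Credit every order whose prediction matched what the partner did.
        for k in self.scores:
            guess = predict_partner(k, self.my_history, self.partner_history)
            if guess == partner_action:
                self.scores[k] += 1
        self.my_history.append(my_action)
        self.partner_history.append(partner_action)
```

For example, paired with a fixed partner that always plays `"B"` (effectively an order-0 player), the agent's order-0 predictor accumulates the most hits after a few rounds, and the agent settles into matching `"B"`—the behavioral alignment the abstract describes.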
