Effective agent coordination is crucial in cooperative Multiagent Reinforcement Learning (MARL). While recent advances have significantly improved cooperation by modeling agent interactions through various graph structures, most existing approaches focus primarily on homogeneous agents. Despite the ubiquity of heterogeneous agents, constructing from scratch a comprehensive graph that captures their diverse attributes and relationships is notoriously labor-intensive for both humans and agents, which makes policy learning extremely challenging. To tackle this difficulty, we propose a novel method that utilizes a fuzzy human-attention-guided graph to model inter-agent relationships. Instead of learning the graph entirely from scratch, we incorporate abstract human attention, with its uncertainty captured through fuzzy logic, to guide the graph development process. To further accommodate the varying attributes and objectives of heterogeneous agents while maintaining their learning capabilities, the attention-guided graph is fine-tuned through a hypernetwork. Our proposed approach is end-to-end trainable and agnostic to specific MARL methods. Empirical evaluations conducted on challenging heterogeneous scenarios from the StarCraft Multi-Agent Challenge (SMAC) and SMACv2 validate the effectiveness of the proposed method.
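To make the two key ingredients concrete, the sketch below illustrates one plausible reading of the pipeline: a coarse human attention prior over agent pairs is softened with Gaussian fuzzy membership functions, and a small hypernetwork conditioned on per-agent attributes rescales each edge to accommodate heterogeneity. This is a minimal NumPy illustration, not the paper's implementation; the membership centers, defuzzification weights, attribute vectors, and network sizes are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, attr_dim, hidden = 4, 6, 16

# Hypothetical coarse human attention prior over agent pairs, values in [0, 1].
human_attention = rng.uniform(size=(n_agents, n_agents))

def fuzzy_membership(a, centers=(0.0, 0.5, 1.0), sigma=0.25):
    """Gaussian membership of each attention value in 'low'/'mid'/'high' fuzzy sets
    (assumed set shapes; the paper's fuzzy-logic details may differ)."""
    c = np.asarray(centers)                                   # (3,)
    return np.exp(-((a[..., None] - c) ** 2) / (2 * sigma**2))  # (n, n, 3)

# Defuzzify via a weighted average of the sets -> soft prior graph in [0.1, 0.9].
memberships = fuzzy_membership(human_attention)
set_weights = np.array([0.1, 0.5, 0.9])
prior_graph = (memberships * set_weights).sum(-1) / memberships.sum(-1)

# Heterogeneous agent attributes (e.g. unit type, health, range); random placeholders.
agent_attrs = rng.normal(size=(n_agents, attr_dim))

# Tiny hypernetwork: concatenated pair attributes -> multiplicative edge correction.
W1 = rng.normal(scale=0.1, size=(2 * attr_dim, hidden))
W2 = rng.normal(scale=0.1, size=(hidden,))

def edge_scale(x_i, x_j):
    h = np.tanh(np.concatenate([x_i, x_j]) @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid keeps the scale in (0, 1)

# Fine-tune the fuzzy prior graph per heterogeneous agent pair.
graph = np.array([[prior_graph[i, j] * edge_scale(agent_attrs[i], agent_attrs[j])
                   for j in range(n_agents)] for i in range(n_agents)])
```

In an end-to-end MARL pipeline the hypernetwork weights (`W1`, `W2` here) would be trained jointly with the policy, so the fuzzy human prior constrains early learning while gradients adapt the edges to each agent type.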
