Multi-agent reinforcement learning enables sophisticated collaborative behaviors in autonomous systems, yet fundamental scalability barriers persist: existing methods struggle both to coordinate large agent populations and to plan over extended decision-making horizons. This research develops hierarchical approaches to scaling multi-agent learning systems along two complementary directions: structural scaling, which coordinates increasing numbers of agents, and temporal scaling, which extends decision-making horizons. It presents four integrated contributions: a taxonomical survey establishing hierarchical architectures as a theoretical foundation for scalable multi-agent learning systems; a benchmark for long-horizon, multi-objective multi-agent reinforcement learning; a framework integrating self-organizing neural networks with multiple reinforcement learning agents for hierarchical tri-level control; and a framework leveraging large language models for zero-shot multi-agent planning. Through comprehensive validation, this work demonstrates that hierarchical, heterogeneous, modular architectures provide unified and interpretable solutions to multi-agent scalability, bridging theoretical multi-agent reinforcement learning research with real-world deployment requirements.
