Learning from human feedback enables AI systems and robots to learn policies that align with human intent. While existing work has primarily examined learning from demonstrations, corrections, and preferences in single-agent settings, these ideas have yet to be fully extended to multi-agent domains, where cooperation, decentralization, and non-stationary dynamics demand new methods. In this thesis summary, I highlight my current work and outline future directions for multi-robot learning from human feedback, proposing deployment strategies that align robot teams with supervisor intent in the real world.
