AAAI 2026

January 22, 2026

Singapore, Singapore


Generating behaviors that align with human expectations is a key requirement for human-robot collaboration. Behavior misalignment can lead the robot to perform actions with unanticipated, potentially dangerous side effects even while pursuing human-specified goals. In this paper, we introduce a novel metric called Goal State Divergence ($\mathcal{GSD}$), which quantifies the difference between the state a robot achieves in response to a human-specified goal and the state the human expected. In cases where $\mathcal{GSD}$ cannot be computed directly, we show how it can be approximated using maximal and minimal bounds. We then leverage $\mathcal{GSD}$ in our novel human-robot goal alignment design (HRGAD) problem, which identifies a minimal set of environment modifications that reduce such mismatches. We demonstrate the effectiveness of our method at reducing goal state divergence by empirically evaluating it on several planning benchmarks.
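To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): it assumes $\mathcal{GSD}$ can be read as the number of ground state fluents that differ between the achieved state and the expected state, and that the maximal/minimal bounds arise when the expected state is only known up to a set of candidates. The function names and state encoding are illustrative assumptions.

```python
# Hypothetical sketch of Goal State Divergence (GSD), assuming states are sets
# of ground fluents and GSD is the size of their symmetric difference.

def gsd(achieved: frozenset, expected: frozenset) -> int:
    """Number of fluents that differ between the achieved and expected states."""
    return len(achieved ^ expected)


def gsd_bounds(achieved: frozenset, candidate_expected: list) -> tuple:
    """Minimal and maximal GSD when the human's expected state is uncertain,
    i.e. only known to lie within a set of candidate states."""
    divergences = [gsd(achieved, state) for state in candidate_expected]
    return min(divergences), max(divergences)


if __name__ == "__main__":
    achieved = frozenset({"at(robot, kitchen)", "holding(cup)", "door_open"})
    candidates = [
        frozenset({"at(robot, kitchen)", "holding(cup)"}),               # door expected closed
        frozenset({"at(robot, kitchen)", "holding(cup)", "door_open"}),  # exact match
    ]
    lo, hi = gsd_bounds(achieved, candidates)
    print(f"GSD bounds: [{lo}, {hi}]")  # e.g. [0, 1]
```

Under this reading, the HRGAD problem would search for the smallest set of environment modifications that drives the (bounded) divergence down, though the actual formulation in the paper may differ.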

