Autonomous systems are increasingly deployed in complex, uncertain environments, where they must make their own decisions and adapt to unexpected conditions without human intervention. These decisions have critical implications for safety, reliability, and task success, yet current approaches often address only one isolated aspect of this challenge. For instance, there have been numerous but separate advances in planning under uncertainty with optimal control methods, anticipating failures with conformal prediction thresholding, and integrating large language models with AI-based planners. This gap raises the question: how can these capabilities be unified in a framework that enables autonomy to operate reliably across uncertain domains without human oversight? My dissertation addresses this challenge by developing methods that link these threads, contributing toward trustworthy autonomy that can operate robustly and transparently in real environments.
