AAAI 2026

January 22, 2026

Singapore


Neural networks are increasingly used to learn action policies. Policy predicate abstraction (PPA) verifies the safety of such a neural policy π by over-approximating the state-space subgraph induced by π, using counterexample-guided abstraction refinement (CEGAR) to iteratively refine the abstraction. So far, PPA has verified safety only in non-deterministic systems. This work extends PPA to probabilistic verification. Extending the abstract state-space computation is relatively straightforward. Abstraction refinement, however, becomes substantially more complex, owing to the more intricate form of counterexamples and the various sources of spuriousness they entail. We tackle this challenge by drawing inspiration from prior work on probabilistic CEGAR, empowering it to deal with neural π. The resulting algorithm decides whether π is safe with respect to a desired upper bound on the probability of unsafety. By invoking the algorithm incrementally, we can also derive upper and lower bounds on that probability automatically. Our experiments show that these algorithms derive non-trivial bounds, whereas encodings into state-of-the-art probabilistic model checkers turn out to be ineffective.
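The incremental invocation mentioned in the abstract can be pictured as a search over the safety threshold θ: if the verifier certifies π safe for some θ, the true unsafety probability is at most θ; if it refutes safety at θ, the probability exceeds θ. A minimal sketch of this idea, assuming a hypothetical decisive and monotone oracle `is_safe(theta)` (the paper's actual bound-derivation procedure and verifier interface are not detailed here):

```python
def bound_unsafety(is_safe, tol=1e-3):
    """Bisect over the threshold theta in [0, 1].

    is_safe(theta) is assumed to return True iff the policy is proved
    safe w.r.t. unsafety-probability bound theta (and False iff safety
    is refuted). Returns (lo, hi) bracketing the true unsafety
    probability to within tol.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_safe(mid):
            hi = mid  # certified safe at mid: mid is a valid upper bound
        else:
            lo = mid  # refuted at mid: true probability exceeds mid
    return lo, hi
```

In practice a CEGAR-based verifier may also time out or be inconclusive at a given θ, so a real driver would need a three-valued oracle; the sketch shows only the idealized monotone case.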
