AAAI 2026

January 23, 2026

Singapore, Singapore


This paper examines a variety of classical optimization problems, including well-known minimization tasks and more general variational inequalities. We consider a stochastic formulation of these problems and, unlike most prior work, account for the complex Markovian nature of the noise. We also treat the geometry of the problem in an arbitrary non-Euclidean setting and propose four methods based on the Mirror Descent iteration. Theoretical analysis is provided for smooth, convex minimization problems and for variational inequalities with Lipschitz, monotone operators. The convergence guarantees obtained are optimal for first-order stochastic methods, as evidenced by the lower bounds established in this paper. To validate the theoretical results, we present numerical experiments on various reinforcement learning tasks.
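The abstract builds on the Mirror Descent iteration. As a point of reference, here is a minimal sketch of one standard instance of that iteration: entropic Mirror Descent (the exponentiated-gradient update) on the probability simplex. This is only an illustration of the general scheme, not the four methods or the Markovian-noise analysis proposed in the paper; the function names and step sizes are our own choices.

```python
import numpy as np

def mirror_descent_simplex(grad_fn, x0, steps=100, lr=0.1):
    """Entropic Mirror Descent on the probability simplex.

    With the negative-entropy mirror map, the Mirror Descent step
    becomes a multiplicative (exponentiated-gradient) update followed
    by a Bregman projection, which here is a simple renormalization.

    grad_fn: returns a (possibly stochastic) gradient estimate at x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        x = x * np.exp(-lr * g)  # multiplicative update from the entropy mirror map
        x /= x.sum()             # projection back onto the simplex
    return x

# Illustration: minimize the linear objective f(x) = <c, x> over the simplex;
# the iterates concentrate on the coordinate with the smallest cost.
c = np.array([0.3, 0.1, 0.5])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, steps=500, lr=0.5)
```

The same template covers the non-Euclidean geometries mentioned in the abstract: swapping the mirror map changes the update rule while the overall scheme stays the same (e.g., the squared Euclidean norm recovers projected gradient descent).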

