Expressive generative models have recently shown promise in offline reinforcement learning (RL) by capturing the complex, multimodal nature of dataset behaviors. Yet, directly integrating these models into policy optimization introduces substantial computational and stability challenges due to the intricacies of their sampling processes. We introduce Flow Latent Policy (FLP), a novel offline RL framework that decouples expressivity from optimization by operating entirely in the latent space of a pre-trained, frozen flow-based behavior model. FLP learns a simple latent Gaussian policy whose samples are transformed through the flow to produce complex, behavior-aligned actions. This design enables closed-form behavior regularization via latent-space KL divergence and allows policy optimization without expensive backpropagation through the generative model. Experiments on the OGBench benchmark demonstrate that FLP achieves competitive or superior performance across diverse tasks, combining the benefits of expressive modeling and tractable optimization in a unified approach.
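To make the core idea concrete, here is a minimal sketch of what a latent Gaussian policy with closed-form latent-space KL regularization could look like in PyTorch. All names and design choices below (`LatentGaussianPolicy`, `frozen_flow`, `latent_critic`, the specific losses) are illustrative assumptions based on the abstract, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LatentGaussianPolicy(nn.Module):
    """Diagonal Gaussian policy over the latent space of a frozen behavior flow."""
    def __init__(self, obs_dim: int, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # outputs mean and log-std
        )

    def forward(self, obs: torch.Tensor):
        mean, log_std = self.net(obs).chunk(2, dim=-1)
        return mean, log_std.clamp(-5.0, 2.0)

def kl_to_standard_normal(mean: torch.Tensor, log_std: torch.Tensor):
    """Closed-form KL( N(mean, std^2) || N(0, I) ), summed over latent dims."""
    var = (2.0 * log_std).exp()
    return 0.5 * (mean.pow(2) + var - 1.0 - 2.0 * log_std).sum(dim=-1)

def actor_loss(policy, latent_critic, obs, alpha: float = 0.1):
    """KL-regularized actor loss computed entirely in latent space.

    Assumes (hypothetically) that the critic is learned over latents,
    Q(s, z), so optimizing the policy never requires backpropagating
    through the generative flow model.
    """
    mean, log_std = policy(obs)
    z = mean + log_std.exp() * torch.randn_like(mean)  # reparameterized sample
    return (-latent_critic(obs, z)
            + alpha * kl_to_standard_normal(mean, log_std)).mean()

@torch.no_grad()
def act(policy, frozen_flow, obs):
    """At deployment, decode a latent sample through the frozen behavior
    flow to produce a complex, behavior-aligned action; the flow's weights
    stay fixed throughout."""
    mean, log_std = policy(obs)
    z = mean + log_std.exp() * torch.randn_like(mean)
    return frozen_flow(obs, z)
```

Because the latent policy is Gaussian and the flow's prior is a standard normal, the behavior-regularization term reduces to the closed-form expression above rather than requiring sampling or density evaluation through the generative model.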