Generative modeling has emerged as a powerful approach for visuomotor policy learning, with diffusion models achieving strong results in robotic manipulation. However, diffusion policies suffer from two major limitations: poor data efficiency and slow sampling due to iterative inference. While recent advances introduce equivariant architectures to address the former, slow sampling remains a challenge. We propose Efficient Equivariant Flow Policy (EEFlow), a generative policy-learning framework based on flow matching, which models a continuous path from noise to action via an ordinary differential equation (ODE). We show theoretically that, under an isotropic Gaussian prior and an equivariant velocity field, EEFlow preserves equivariance in the learned action distribution, promoting generalization across symmetric states and reducing data requirements. To improve sampling efficiency, we introduce a second-order regularizer that penalizes acceleration. Since computing acceleration requires intractable marginal trajectories, we propose a novel surrogate loss that enables stable training using only readily available conditional trajectories. Evaluated on an extensive suite of manipulation tasks, EEFlow matches or exceeds the performance of baselines while offering fast inference, highlighting its potential for high-performance, efficient robotic control.
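To make the ideas concrete, the following is a minimal, illustrative sketch of the two ingredients the abstract describes: a conditional flow-matching loss along the linear noise-to-action path, augmented with a finite-difference acceleration penalty evaluated on the conditional trajectory, and an Euler ODE sampler that integrates the learned velocity field from noise to action. This is not EEFlow's actual implementation — the toy linear `velocity` field, the function names, and the finite-difference form of the surrogate are assumptions for illustration only.

```python
import numpy as np

def velocity(x, t, W, b):
    # Toy linear velocity field v(x, t) = W @ [x; t] + b,
    # standing in for a (possibly equivariant) policy network.
    inp = np.concatenate([x, [t]])
    return W @ inp + b

def cfm_loss_with_accel(x0, x1, t, W, b, lam=0.1, eps=1e-3):
    """Conditional flow-matching loss plus an illustrative
    acceleration surrogate along the conditional path.
    x0: noise sample from the prior; x1: demonstrated action."""
    x_t = (1.0 - t) * x0 + t * x1      # linear interpolant between noise and action
    target = x1 - x0                   # conditional target velocity on this path
    v = velocity(x_t, t, W, b)
    fm = np.mean((v - target) ** 2)
    # Surrogate second-order term (assumption): finite-difference rate of
    # change of v along the conditional trajectory, avoiding the
    # intractable marginal trajectories mentioned in the abstract.
    x_te = (1.0 - (t + eps)) * x0 + (t + eps) * x1
    v_eps = velocity(x_te, t + eps, W, b)
    accel = np.mean(((v_eps - v) / eps) ** 2)
    return fm + lam * accel

def sample_action(x0, W, b, steps=10):
    # Euler integration of dx/dt = v(x, t) from t=0 (noise) to t=1 (action).
    x = x0.copy()
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity(x, i * dt, W, b)
    return x
```

A straight (zero-acceleration) velocity field is the degenerate case the regularizer pushes toward: if the field is the constant `x1 - x0`, both loss terms vanish and a single Euler step already lands on the action, which is the intuition behind penalizing acceleration to enable fast few-step sampling.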
