Few-shot Video Object Detection addresses the challenge of detecting novel objects in videos from only a handful of labeled examples, overcoming the constraints of traditional detection methods that require extensive training data. The task presents two key challenges: maintaining temporal consistency across frames affected by occlusion and appearance variations, and generalizing to novel objects without relying on complex region proposals. Our object-aware temporal modeling approach addresses these challenges with a filtering mechanism that selectively propagates high-confidence object features across frames. This enables efficient feature propagation, reduces noise accumulation, and improves detection accuracy in few-shot scenarios. By combining few-shot trained detection and classification heads with focused feature propagation, we achieve robust temporal consistency without depending on explicit object tube proposals. Experimental results demonstrate state-of-the-art performance across multiple benchmarks, with improvements of 4.3%, 5.9%, 4.0%, and 5.9% in AP on the FSVOD-500, FSYTV-40, VidOR, and VidVRD datasets, respectively, in the 5-shot setup. Our approach maintains consistent performance gains across 1-shot, 3-shot, and 10-shot configurations, validating its effectiveness across diverse evaluation scenarios. We will make our codebase public upon acceptance of the work.
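The core idea of confidence-gated feature propagation can be illustrated with a minimal sketch. All names, the threshold value, and the running-memory update rule below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def propagate_features(frame_feats, frame_scores, memory, tau=0.7, momentum=0.9):
    """Keep only high-confidence object features and blend them into a
    running temporal memory; low-confidence features are dropped so that
    noise does not accumulate across frames.

    frame_feats:  (N, D) per-object feature vectors for the current frame
    frame_scores: (N,)   per-object detection confidences
    memory:       (D,)   temporal memory from previous frames, or None
    tau:          confidence threshold for the filtering step (assumed value)
    momentum:     exponential-moving-average weight (assumed value)
    """
    keep = frame_scores >= tau            # filtering mechanism: confident objects only
    selected = frame_feats[keep]
    if selected.size == 0:                # no confident detections: carry memory forward
        return memory
    frame_summary = selected.mean(axis=0) # aggregate the surviving features
    if memory is None:
        return frame_summary
    # Blend into the running memory so features propagate smoothly over time.
    return momentum * memory + (1 - momentum) * frame_summary

# Toy example: 5 candidate objects with 8-dim features in one frame.
feats = np.arange(40, dtype=np.float32).reshape(5, 8)
scores = np.array([0.9, 0.3, 0.8, 0.2, 0.95], dtype=np.float32)
memory = propagate_features(feats, scores, None)   # rows 0, 2, 4 survive the filter

# A frame with only low-confidence detections leaves the memory untouched.
low_scores = np.full(5, 0.1, dtype=np.float32)
memory_after = propagate_features(feats, low_scores, memory)
```

The key property this sketch captures is that low-confidence detections never contaminate the propagated state, which is what keeps noise from accumulating over long video sequences.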