Recent works have shown that sequential fine-tuning (SeqFT) of pre-trained vision transformers (ViTs), followed by classifier refinement using approximate distributions of class features, offers an effective solution to class-incremental learning (CIL). However, this approach suffers from distribution drift: sequential optimization of the shared backbone parameters creates a mismatch between the stored approximate distributions of previous classes and the feature space of the updated model, which progressively degrades classifier refinement. To tackle this issue, we introduce a latent space transition operator, on which we build the Sequential Learning with Drift Compensation (SLDC) method. First, the linear SLDC variant estimates a linear operator by solving a regularized least-squares problem between pre- and post-optimization features. Second, the weak-nonlinear SLDC variant, which assumes that suitable transition operators lie at the boundary between linear and nonlinear regimes, constructs learnable weak-nonlinear transformations. Finally, in both variants, knowledge distillation (KD) is applied to further mitigate representation drift. Extensive experiments on CIL benchmarks demonstrate that SLDC significantly enhances the performance of SeqFT. Notably, by combining KD (to reduce representation drift) with SLDC (to counteract distribution drift), SeqFT achieves performance comparable to joint training across all evaluated datasets.
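The linear variant described in the abstract can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal example, under assumed shapes and names, of the general idea of estimating a linear transition operator via regularized (ridge) least squares between features extracted before and after a fine-tuning step, and then using that operator to transport cached class statistics into the updated feature space:

```python
import numpy as np

def estimate_linear_operator(feats_before, feats_after, lam=1e-3):
    """Ridge-regression estimate of a linear transition operator W that
    maps pre-optimization features to post-optimization features:
        W = argmin_W ||F_before @ W - F_after||^2 + lam * ||W||^2
    Closed form: W = (F_b^T F_b + lam * I)^{-1} F_b^T F_a.
    (Illustrative sketch only; names and shapes are assumptions.)
    """
    d = feats_before.shape[1]
    gram = feats_before.T @ feats_before + lam * np.eye(d)
    return np.linalg.solve(gram, feats_before.T @ feats_after)

# Demo: recover a known mild linear drift from noisy features.
rng = np.random.default_rng(0)
F_before = rng.normal(size=(500, 16))            # features from old backbone
true_W = np.eye(16) + 0.1 * rng.normal(size=(16, 16))  # simulated drift
F_after = F_before @ true_W + 0.01 * rng.normal(size=(500, 16))

W = estimate_linear_operator(F_before, F_after)

# Stored class means from earlier tasks can then be transported through W
# so the cached class distributions stay matched to the updated backbone.
old_class_mean = rng.normal(size=16)
compensated_mean = old_class_mean @ W
```

In a CIL setting, such an operator would be estimated from current-task data at each incremental step, so no samples from previous classes are needed to keep the cached class-feature distributions aligned with the drifting backbone.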