Generalized Category Discovery (GCD) aims to identify both known and previously unseen categories in unlabeled data by leveraging insights from partially labeled training samples. However, the model's dual focus on discovering novel categories while recognizing known ones can cause interference, obscuring true patterns in the dataset. This paper presents a divide-and-conquer framework, Foundation-Adaptive Integrated Refinement (FAIR), which fine-tunes pretrained foundation weights for distinct purposes, divided into $\texttt{Foundation}$ (pretrained weights), $\texttt{Adaptive}$ (weights fine-tuned with an adaptive contrastive loss), and $\texttt{Integrated}$ (weights adjusted for both labeled and unlabeled data). The $\texttt{Adaptive}$ branch employs a newly proposed adaptive contrastive loss that introduces variance within classes to preserve the individuality of representations. The $\texttt{Integrated}$ branch addresses inherent estimation errors while dynamically estimating the number of categories, incorporating a cosine-based perturbation as a margin so that the ground-truth category count falls within the estimated range, rather than relying on a single biased estimate. Extensive experiments on six benchmark datasets demonstrate our method's effectiveness, outperforming state-of-the-art algorithms, especially on fine-grained datasets.
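The abstract does not give the loss in closed form, so the following is only a minimal sketch of what a contrastive loss that "introduces variance within classes" might look like. The function name, the relaxation parameter `alpha`, and the similarity-based positive weighting are all assumptions for illustration, not the paper's actual FAIR formulation: instead of pulling every same-class pair fully together (which collapses within-class variance), each positive pair is weighted by its current similarity, so far-apart same-class samples are pulled less strongly and some individuality of representations is preserved.

```python
import numpy as np

def adaptive_contrastive_loss(feats, labels, tau=0.1, alpha=0.5):
    """Hypothetical sketch of an 'adaptive' supervised contrastive loss.

    feats  : (n, d) array of embeddings (will be L2-normalized here)
    labels : (n,)  integer class labels
    tau    : softmax temperature
    alpha  : assumed relaxation knob; larger alpha concentrates the pull
             on already-close positives, leaving outliers more freedom
    """
    labels = np.asarray(labels)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau  # pairwise cosine similarity over temperature
    n = len(labels)
    loss = 0.0
    for i in range(n):
        pos_mask = (labels == labels[i]) & (np.arange(n) != i)
        if not pos_mask.any():
            continue  # no positive pair for this anchor
        # denominator over all candidates except the anchor itself
        logits = np.delete(sim[i], i)
        log_denom = np.log(np.exp(logits).sum())
        pos = sim[i][pos_mask]
        # adaptive weights: closer positives receive a stronger pull,
        # so distant same-class samples are not forced to collapse inward
        w = np.exp(alpha * pos) / np.exp(alpha * pos).sum()
        loss += -(w * (pos - log_denom)).sum()
    return loss / n
```

With `alpha = 0`, the weights become uniform and the expression reduces to a standard supervised contrastive (InfoNCE-style) loss; raising `alpha` is one plausible way to trade pair alignment for preserved within-class variance.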
