Transformer-based models are highly vulnerable to adversarial attacks, where even small perturbations can cause significant misclassifications. This paper introduces I-Guard, a defense framework that increases the robustness of transformer-based models against adversarial perturbations. I-Guard leverages model interpretability to identify the influential parameters responsible for adversarial misclassifications. By selectively fine-tuning a small fraction of model parameters, our approach effectively balances performance on both the original and adversarial test sets. We conduct extensive experiments on English and code-mixed Hinglish datasets and demonstrate that I-Guard significantly improves model robustness. Furthermore, we show that I-Guard transfers to handling other character-based perturbations.
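The abstract outlines two steps: scoring parameter influence on adversarial misclassifications via interpretability, and fine-tuning only the most influential fraction. The sketch below illustrates one plausible realization under assumptions not stated in the paper: influence is approximated by accumulated gradient magnitude on adversarial examples, and a fixed 1% of parameters is selected for updates. Names such as `model`, `adv_loader`, and `loss_fn` are placeholders; this is not the authors' implementation.

```python
# Hedged sketch: gradient-magnitude influence scoring + selective fine-tuning masks.
# Assumption: influence = accumulated |gradient| on adversarial examples (the paper's
# actual interpretability method may differ).
import torch


def score_parameter_influence(model, adv_loader, loss_fn, device="cpu"):
    """Accumulate per-parameter |gradient| over a loader of adversarial examples."""
    model.to(device).train()
    scores = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    for inputs, labels in adv_loader:
        model.zero_grad()
        loss = loss_fn(model(inputs.to(device)), labels.to(device))
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                scores[name] += p.grad.abs()
    return scores


def build_finetune_masks(model, scores, fraction=0.01):
    """Build binary masks selecting the top `fraction` most influential parameters."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(fraction * flat.numel()))
    threshold = flat.topk(k).values.min()
    # During fine-tuning, multiply each parameter's gradient by its mask before
    # optimizer.step(), so only the selected (most influential) entries are updated.
    return {name: (scores[name] >= threshold).float()
            for name, _ in model.named_parameters()}
```

In this sketch the masks are applied to gradients rather than freezing whole tensors, since influential entries may be scattered within a weight matrix; the selection ratio and the gradient-based scoring are illustrative choices, not values reported in the paper.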