Stance detection aims to identify the viewpoint (favor, against, or neutral) expressed in text towards a specific target. Recent studies on zero-shot and few-shot stance detection focus primarily on learning generalized representations from explicit targets. However, these methods often neglect implicit yet semantically important targets and fail to sufficiently exploit fine-grained contextual cues, limiting model performance in nuanced scenarios. To overcome these limitations, we propose a two-stage approach. First, a data augmentation framework named Hierarchical Collaborative Target Augmentation (HCTA) employs Large Language Models (LLMs) to identify and annotate implicit targets via Chain-of-Thought (CoT) prompting and multi-model voting, significantly enriching the training data with latent semantic relations. Second, we introduce FiCAN, a Fine-grained Context-aware Attention Network, which integrates joint text-target encoding with a sparse cross-attention mechanism to selectively capture critical fine-grained contextual clues. Experiments on the VAST benchmark dataset demonstrate that our approach achieves state-of-the-art results, confirming the effectiveness of implicit target augmentation and fine-grained contextual modeling.
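To illustrate the sparse cross-attention idea mentioned above, the sketch below implements one common sparsification strategy: keeping only the top-k attention scores per query before the softmax, so each text token attends to only a few target tokens. This is a minimal, hypothetical sketch in NumPy; the abstract does not specify FiCAN's actual sparsification rule, dimensions, or training details, and all function and variable names here are illustrative.

```python
import numpy as np

def sparse_cross_attention(queries, keys, values, k):
    """Cross-attention keeping only the top-k scores per query row.

    Hypothetical sketch: the paper's actual FiCAN sparsification
    strategy is not detailed in the abstract.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)             # (n_q, n_k)
    # Threshold at the k-th largest score in each row; mask the rest.
    kth = np.partition(scores, -k, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving scores (masked entries become 0).
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                            # (n_q, d_v)

# Toy usage: 2 text-token queries attending over 5 target tokens.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 8))
kmat = rng.normal(size=(5, 8))
v = rng.normal(size=(5, 4))
out = sparse_cross_attention(q, kmat, v, k=2)
print(out.shape)  # (2, 4)
```

In this variant, sparsity both cuts noise from irrelevant target tokens and makes the attended positions easy to inspect, which aligns with the goal of capturing only critical fine-grained cues.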