EMNLP 2025

November 05, 2025

Suzhou, China


Visual Question Answering (VQA) systems, while advancing through vision transformers, remain largely black boxes in critical applications. Current prototype-based interpretability methods struggle with multimodal reasoning and are limited by rigid feature representations and a lack of fine-grained explanations. We present ProtoVQA, which introduces adaptable prototypes for cross-modal tasks, spatially-constrained matching that handles geometric variations, and a systematic evaluation of visual-linguistic alignment. Our model achieves competitive accuracy on Visual7W while providing comprehensive explainability through explicit visual evidence. Our code is available at https://anonymous.4open.science/r/ARR-Submission-107.
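
The abstract names spatially-constrained prototype matching without spelling it out. As a rough, hypothetical sketch of how such matching could work, the NumPy snippet below scores each learned prototype against a grid of vision-transformer patch embeddings and pools similarity only within a local window around the prototype's best-matching patch, keeping the visual evidence geometrically coherent. The function name, the windowed-pooling scheme, and all shapes are illustrative assumptions, not the authors' implementation (see the linked repository for that).

    import numpy as np

    def spatially_constrained_prototype_match(features, prototypes, window=2):
        """
        features:   (H, W, D) grid of patch embeddings from a vision backbone
        prototypes: (P, D) learned prototype vectors
        window:     half-width of the spatial neighborhood around each
                    prototype's best-matching patch

        Returns one similarity score per prototype, pooled only over a
        local window so each prototype is grounded in a contiguous region.
        """
        H, W, D = features.shape
        # Cosine similarity between every patch and every prototype: (H, W, P)
        f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
        p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
        sim = np.einsum("hwd,pd->hwp", f, p)

        scores = np.zeros(len(prototypes))
        for k in range(len(prototypes)):
            # Anchor each prototype at its single best-matching patch ...
            i, j = np.unravel_index(np.argmax(sim[..., k]), (H, W))
            # ... then pool similarity only inside a local spatial window,
            # which constrains the explanation to one image neighborhood.
            i0, i1 = max(0, i - window), min(H, i + window + 1)
            j0, j1 = max(0, j - window), min(W, j + window + 1)
            scores[k] = sim[i0:i1, j0:j1, k].mean()
        return scores

    # Toy usage with random features standing in for a ViT patch grid.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(14, 14, 256))   # 14x14 patch grid, 256-d embeddings
    protos = rng.normal(size=(10, 256))      # 10 prototypes
    print(spatially_constrained_prototype_match(feats, protos).shape)  # (10,)

In this sketch, mean-pooling over the window (rather than taking the global maximum, as classic prototype networks such as ProtoPNet do) is one simple way to make the matched evidence spatially contiguous; the paper's actual constraint may differ.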
