EMNLP 2025

November 06, 2025

Suzhou, China


Open-ended survey responses provide valuable insights in marketing research, but low-quality responses not only burden researchers with manual filtering but also risk leading to misleading conclusions, underscoring the need for effective evaluation. Existing automatic evaluation methods target LLM-generated text and inadequately assess human-written responses, which have distinct characteristics. We propose a two-stage evaluation framework designed specifically for human survey responses. First, a gibberish filter removes nonsensical responses; then three dimensions (effort, relevance, and completeness) are evaluated using LLM capabilities, grounded in empirical analysis of real-world survey data. Validation on English and Korean datasets shows that our framework outperforms existing metrics, correlates strongly with expert assessments, and demonstrates high practical applicability in real-world multilingual settings.
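
To make the two-stage design concrete, here is a minimal sketch of how such a pipeline could be wired together. It is not the authors' implementation: the gibberish heuristics, their thresholds, the scoring prompt, and the `call_llm` callable are all illustrative assumptions standing in for whatever filter and LLM the paper actually uses.

```python
import re
from dataclasses import dataclass

# Hypothetical two-stage scorer illustrating the design described in the
# abstract: cheap gibberish filtering first, then LLM-based scoring of
# effort, relevance, and completeness. All heuristics and prompts below
# are assumptions, not taken from the paper.

@dataclass
class ResponseScores:
    effort: int
    relevance: int
    completeness: int

def is_gibberish(text: str) -> bool:
    """Stage 1: lexical filter for nonsensical responses.
    Thresholds here are illustrative placeholders."""
    stripped = text.strip()
    if len(stripped) < 3:
        return True
    # Mostly non-letter characters (e.g., "asdf!!!###") suggests keyboard noise.
    letters = sum(ch.isalpha() for ch in stripped)
    if letters / len(stripped) < 0.5:
        return True
    # A single character repeated many times ("aaaaaaa") is also gibberish.
    if re.fullmatch(r"(.)\1{4,}", stripped):
        return True
    return False

PROMPT = (
    "Rate the following survey response to the question below on three "
    "dimensions, each from 1 (poor) to 5 (excellent): "
    "effort, relevance, completeness.\n"
    "Question: {question}\nResponse: {response}\n"
    "Answer as three integers separated by commas."
)

def score_response(question: str, response: str, call_llm) -> ResponseScores | None:
    """Stage 2: LLM-based scoring. `call_llm` is an assumed callable that
    takes a prompt string and returns the model's text completion."""
    if is_gibberish(response):
        return None  # filtered out before any LLM call is made
    raw = call_llm(PROMPT.format(question=question, response=response))
    effort, relevance, completeness = (int(x) for x in raw.split(","))
    return ResponseScores(effort, relevance, completeness)
```

In practice the stage-1 filter would likely be a trained classifier rather than regex heuristics; the point of the sketch is only the ordering of the stages, so that the LLM is spent only on responses that pass the cheap filter.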
