EMNLP 2025

November 05, 2025

Suzhou, China


Text embeddings play an important role in NLP but are costly to store and use. Compressing embeddings addresses these challenges, but selecting the best compression method remains difficult: existing evaluation approaches for compressed embeddings are either expensive or too simplistic. We introduce a new intrinsic evaluation framework with multiple task-agnostic metrics, including a novel spectral fidelity measure, EOS, that is resilient to embedding anisotropy. We evaluated the framework on a set of embeddings across four tasks. Our framework shows that intrinsic metrics reliably predict downstream performance and reveal how different models rely on local versus global structure. This provides a practical, efficient, and interpretable alternative to standard evaluations for compressed embeddings. We will release the framework publicly.
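The abstract does not define EOS, but the idea of an anisotropy-resilient spectral fidelity measure can be sketched. The toy function below (a hypothetical illustration, not the authors' actual metric) compares the normalized top-k singular-value spectra of the original and compressed embedding matrices; mean-centering removes the dominant shared direction that anisotropy induces, and normalizing the spectra makes the score insensitive to overall scale.

```python
import numpy as np

def spectral_fidelity(original, compressed, k=10):
    """Hypothetical spectral-fidelity score in [0, 1].

    Compares the shapes of the top-k singular-value spectra of two
    embedding matrices. This is an illustrative sketch, not the EOS
    measure from the paper.
    """
    def top_spectrum(X, k):
        Xc = X - X.mean(axis=0, keepdims=True)   # center: damps anisotropy
        s = np.linalg.svd(Xc, compute_uv=False)[:k]
        return s / s.sum()                        # scale-invariant spectrum
    a = top_spectrum(np.asarray(original, dtype=float), k)
    b = top_spectrum(np.asarray(compressed, dtype=float), k)
    # cosine similarity between the two normalized spectra
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))          # toy "original" embeddings
X_small = X[:, :32]                      # toy "compression": truncate dims
score = spectral_fidelity(X, X_small)
```

A metric of this shape is task-agnostic: it needs only the two embedding matrices, not labels or a downstream model, which is what makes intrinsic evaluation cheap compared to rerunning four downstream tasks per compression method.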

Downloads

  • Slides
  • Paper
  • Transcript, English (automatic)


Underline Science, Inc.
1216 Broadway, 2nd Floor, New York, NY 10001, USA

© 2025 Underline - All rights reserved