Text embeddings play an important role in NLP, but they are costly to store and use. Compressing embeddings mitigates these costs, yet selecting the best compression method remains difficult: existing evaluation approaches for compressed embeddings are either computationally expensive or overly simplistic. We introduce an intrinsic evaluation framework with multiple task-agnostic metrics, including a novel spectral fidelity measure, \textbf{EOS}, that is resilient to embedding anisotropy. We evaluate the framework on a set of embedding models across four downstream tasks. Our results show that the intrinsic metrics reliably predict downstream performance and reveal how different models rely on local versus global structure. The framework offers a practical, efficient, and interpretable alternative to standard evaluations of compressed embeddings\footnote{We will release the framework publicly, saving researchers significant evaluation time.}.
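To make "spectral fidelity" concrete, here is a minimal sketch of one way such a metric could be computed: compare the singular value spectra of the mean-centered original and compressed embedding matrices. This is an illustrative assumption only, not the paper's actual EOS definition; the function name `spectral_fidelity`, the centering step, and the cosine comparison are all hypothetical choices.

```python
# Hypothetical sketch of a spectral fidelity metric -- NOT the paper's EOS
# definition. It compares the singular value spectra of the mean-centered
# original and compressed embedding matrices; centering is a simple guard
# against anisotropy (a dominant mean direction) skewing the comparison.
import numpy as np

def spectral_fidelity(original: np.ndarray, compressed: np.ndarray) -> float:
    """Cosine similarity between normalized singular value spectra.

    Both inputs have shape (n_items, dim); the dims may differ, so the
    spectra are truncated to a common length before comparison.
    """
    def spectrum(X: np.ndarray) -> np.ndarray:
        Xc = X - X.mean(axis=0, keepdims=True)   # remove the mean direction
        return np.linalg.svd(Xc, compute_uv=False)

    a, b = spectrum(original), spectrum(compressed)
    k = min(len(a), len(b))                          # align spectrum lengths
    a, b = a[:k] / a[:k].sum(), b[:k] / b[:k].sum()  # normalize to distributions
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage: 300-d synthetic embeddings compressed to 64 dims via PCA.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_small = Xc @ Vt[:64].T                             # project onto top 64 PCs
print(f"spectral fidelity: {spectral_fidelity(X, X_small):.3f}")
```

A score near 1 indicates the compressed embeddings preserve the relative distribution of variance across directions; an actual anisotropy-resilient measure like EOS would presumably go beyond this simple centering step.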