EMNLP 2025

November 07, 2025

Suzhou, China


Sentence embeddings play an important role in tasks such as clustering, semantic search, and retrieval-augmented generation, yet they generally lack interpretability. We propose a framework for decomposing embeddings into interpretable components, which we define as semantic regions, i.e., connected subsets of the embedding hypersphere. These regions reveal both the internal semantic structure of individual embeddings and the set-theoretic relationships between them. We further show that these regions exhibit a hierarchical organization that reflects semantic inclusion, including hypernymy and hyponymy relations. Empirical results on both synthetic and real-world datasets validate the existence of these regions and demonstrate their utility for sentence embedding interpretability.
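The abstract's idea of hierarchical semantic regions can be illustrated with a minimal sketch. The assumptions here are ours, not the paper's: we model a region as a spherical cap (a center direction on the unit hypersphere plus an angular radius), and test hypernymy-style inclusion by checking whether one cap fits inside another. All function names and the toy vectors are hypothetical.

```python
import math

def normalize(v):
    # Project a vector onto the unit hypersphere.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def angle(u, v):
    # Angular distance (radians) between two unit vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

def cap_contains(center_a, radius_a, center_b, radius_b):
    # Cap A contains cap B iff the angle between their centers
    # plus B's radius fits inside A's radius.
    return angle(center_a, center_b) + radius_b <= radius_a

# Toy example: a broad "animal" region and a narrower "dog" region.
animal_center = normalize([1.0, 0.2, 0.0])
dog_center = normalize([1.0, 0.3, 0.1])

# Inclusion of the narrow cap in the broad one mirrors hyponymy
# ("dog" ⊂ "animal") as a set-theoretic relation between regions.
print(cap_contains(animal_center, 0.5, dog_center, 0.1))
```

Under this toy model, the hierarchy described in the abstract corresponds to nesting of caps: a hypernym's region contains its hyponyms' regions, and overlap without inclusion would indicate related but non-subordinate meanings.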


