Text embeddings and text embedding models are the bread and butter of many NLP tasks, especially those involving classification, clustering, or search. However, interpretability challenges persist, particularly when it comes to explaining the similarity scores these embeddings produce, which is crucial for applications that require transparency. In this piece, we give a structured overview of methods that specialize in explaining such similarity scores, an interesting and fairly unexplored research area. Specifically, we highlight each method's individual ideas and techniques, as well as their trade-offs, and assess their potential for interpreting text embeddings and explaining similarity. Finally, we outline opportunities and open challenges for future research.
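
To make concrete what there is to explain, here is a minimal sketch (not from the surveyed methods) of how such a similarity score is typically obtained: two texts are embedded and compared with cosine similarity, yielding a single opaque number. The sentence-transformers library and the model name "all-MiniLM-L6-v2" are illustrative assumptions.

```python
# Minimal sketch: obtain a text-similarity score from embeddings.
# Library and model choice are assumptions for illustration only.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["The cat sat on the mat.", "A cat is resting on a rug."]
emb = model.encode(texts)  # shape: (2, embedding_dim)

# Cosine similarity: dot product of the two vectors divided by their norms.
a, b = emb[0], emb[1]
score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"similarity = {score:.3f}")
```

The resulting score says nothing about *which* words or phrases drove the match; attributing it back to parts of the input is exactly the problem the methods discussed here address.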