In this paper, we investigate the relationship between Information Content (IC) and the squared norm of embeddings at the text level, focusing on the mechanisms and composition functions used to combine token embeddings. i) We formally derive two sufficient conditions under which this correspondence holds in embedding models. ii) We empirically examine the correspondence and the validity of these conditions at the word level, for both static and contextual embeddings and for different subword token composition mechanisms. iii) Building on Shannon's Constant Entropy Rate (CER) principle, we explore whether embedding mechanisms exhibit a linear, monotonic increase in information content as text length grows. Our formal analysis and experiments reveal that: i) at the word embedding level, models satisfy the sufficient conditions and show a strong correspondence when certain subword composition functions are applied; ii) only the scaled embedding averages proposed in this paper and certain information-theoretic composition functions preserve the correspondence; and iii) non-compositional representations, such as the CLS token in BERT or the EOS token in LLaMA, tend to converge toward a fixed point, whereas the CLS token in ModernBERT behaves more in line with the CER hypothesis.
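As a rough illustration of the kind of correspondence studied here, the minimal sketch below compares a word's information content, IC(w) = -log2 p(w), with the squared norm of a scaled average of its subword embeddings. This is not the authors' code: the sqrt(n) scaling, the synthetic embeddings, and the unigram probabilities are all assumptions made for the example.

```python
# Hypothetical sketch (not the paper's implementation): check how strongly
# IC(w) = -log2 p(w) correlates with ||e(w)||^2, where e(w) is a scaled
# average of the word's subword embeddings. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)


def scaled_average(subword_vecs: np.ndarray) -> np.ndarray:
    """Average of subword embeddings scaled by sqrt(n).

    This is one plausible 'scaled embedding average'; the scaling used in
    the paper may differ.
    """
    n = subword_vecs.shape[0]
    return np.sqrt(n) * subword_vecs.mean(axis=0)


# Synthetic vocabulary: each word gets 1-3 random subword vectors and a
# unigram probability drawn from a Dirichlet distribution.
dim = 64
words = {f"w{i}": rng.normal(size=(rng.integers(1, 4), dim)) for i in range(1000)}
probs = rng.dirichlet(np.ones(len(words)))

ic = -np.log2(probs)  # information content per word
sq_norm = np.array(
    [np.sum(scaled_average(v) ** 2) for v in words.values()]  # squared norm of composed embedding
)

# Strength of the IC <-> squared-norm correspondence (Pearson correlation).
# With random data this is near zero; real embeddings are where the paper
# reports a strong correspondence.
corr = np.corrcoef(ic, sq_norm)[0, 1]
print(f"correlation(IC, ||e||^2) = {corr:.3f}")
```

In practice, the synthetic embeddings and probabilities would be replaced by a trained model's subword embeddings and corpus-estimated word probabilities; the correlation then quantifies how well the squared norm tracks information content under a given composition function.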