Children learn to speak from a small amount of data and can be taught new words on a few-shot basis, which makes them particularly data-efficient learners. The BabyLM challenge aims to explore language model (LM) training in the low-data regime, but its metrics concentrate on the head of the word distribution. Here, we introduce LongTail-Swap (LT-Swap), a benchmark that focuses on the tail of the distribution, i.e., it measures the ability of LMs to learn new words from very little exposure, as infants do. LT-Swap is a pretraining-corpus-specific test set of acceptable versus unacceptable sentence pairs that isolate the semantic and syntactic usage of rare words. Models are evaluated in a zero-shot fashion by computing the average log probability over each member of a pair. We build two such test sets, associated with the 10M-word and 100M-word BabyLM training sets respectively, and evaluate 16 models from the BabyLM leaderboard. Our results show that: 1) model performance on LT-Swap declines sharply for rare words, 2) differences across models are more visible on rare words than on frequent words, and 3) increasing training data size while fixing the number of parameters improves performance on rare words. Finally, we demonstrate that simple RAG-like methods can enhance rare-word understanding, highlighting in-context learning capabilities even in small LMs. These findings underscore the importance of evaluating language models on the long tail of the lexical distribution and open new directions for improving their robustness to rare linguistic phenomena. We open-source code that automatically builds new instances of LT-Swap from other training datasets.
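The pairwise zero-shot evaluation described above can be illustrated with a short sketch. The code below is not the authors' released implementation; it is a minimal illustration assuming a Hugging Face causal LM (gpt2 as a stand-in for a BabyLM-scale model) and a hypothetical minimal pair. It scores each sentence by its average per-token log probability and checks whether the model prefers the acceptable member of the pair.

```python
# Minimal sketch: score an acceptable/unacceptable pair by average token
# log probability under a causal LM. Model name and sentence pair are
# illustrative assumptions, not taken from the LT-Swap benchmark itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for a BabyLM-scale model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def avg_log_prob(sentence: str) -> float:
    """Average log probability per predicted token of a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy,
        # i.e. the negative average log probability per predicted token.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Hypothetical minimal pair isolating the usage of a rare word
acceptable = "The ornithologist banded the fledgling before releasing it."
unacceptable = "The ornithologist devoured the fledgling before releasing it."

score_ok = avg_log_prob(acceptable)
score_bad = avg_log_prob(unacceptable)
print(f"acceptable: {score_ok:.3f}  unacceptable: {score_bad:.3f}")
print("model prefers acceptable sentence:", score_ok > score_bad)
```

Aggregating this preference over all pairs in a frequency bin yields an accuracy score per bin, which is how a decline on rare words would show up.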