Position bias, where Large Language Models (LLMs) overrepresent content from the beginnings and endings of documents while neglecting middle sections, has been considered a core limitation in automatic summarization. To measure position bias, prior studies have commonly relied on n-gram matching techniques, which can miss semantic relationships in abstractive summaries where content is extensively rephrased. To address this limitation, we apply a cross-encoder-based alignment method that jointly processes summary–source sentence pairs, enabling more accurate identification of semantic correspondences, even when summaries substantially rewrite the source. Experiments with five LLMs across six summarization datasets reveal markedly different position bias patterns from those reported by traditional metrics. Our findings suggest that these biases primarily reflect rational adaptations to document structure and content rather than true model limitations. Through controlled experiments and analyses across varying document lengths and multi-document settings, we show that LLMs utilize content from all positions more effectively than previously assumed, challenging common claims about "lost-in-the-middle" behavior.
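To illustrate the measurement pipeline the abstract describes, the sketch below aligns each summary sentence to its best-matching source sentence via pairwise scoring, then buckets the aligned source positions to estimate position bias. This is a minimal, hypothetical illustration: the `pair_score` function here uses simple token overlap as a stand-in, whereas the actual method feeds each summary–source sentence pair jointly through a cross-encoder; all function and variable names are assumptions, not the authors' code.

```python
def pair_score(summary_sent: str, source_sent: str) -> float:
    # Stand-in scorer for illustration only (token-overlap Jaccard).
    # The paper's method instead scores each (summary, source) pair
    # jointly with a cross-encoder transformer.
    a = set(summary_sent.lower().split())
    b = set(source_sent.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def align(summary_sents: list[str], source_sents: list[str]) -> list[int]:
    # For each summary sentence, pick the index of the source sentence
    # with the highest pairwise score.
    alignment = []
    for s in summary_sents:
        scores = [pair_score(s, src) for src in source_sents]
        alignment.append(max(range(len(scores)), key=scores.__getitem__))
    return alignment

def position_histogram(alignment: list[int], n_source: int,
                       n_bins: int = 3) -> list[int]:
    # Bucket aligned source positions into n_bins equal regions
    # (e.g., beginning / middle / end) to expose position bias.
    hist = [0] * n_bins
    for idx in alignment:
        hist[min(idx * n_bins // n_source, n_bins - 1)] += 1
    return hist

source = ["the cat sat on the mat",
          "dogs bark loudly at night",
          "birds fly south in winter"]
summary = ["cats sit on mats", "birds fly south"]

alignment = align(summary, source)        # [0, 2]
hist = position_histogram(alignment, len(source))  # [1, 0, 1]
```

A summary that draws only on a document's opening would concentrate mass in the first histogram bin; a flat histogram indicates content drawn evenly from all positions.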