EMNLP 2025

November 06, 2025

Suzhou, China


We present LimRank, a reranking model that excels in reasoning-intensive retrieval tasks, fine-tuned with only 20K examples, less than 5% of the data typically used in prior work. Unlike existing approaches that rely on large-scale fine-tuning or pretraining for LLM-based reranking, we show that modern LLMs can be effectively adapted with minimal, high-quality supervision. To enable this, we design LimRank-Synthesizer, a reusable and open-source pipeline for generating diverse, challenging, and realistic reranking examples. We evaluate LimRank on two challenging information retrieval benchmarks: BRIGHT for reasoning-intensive retrieval and FollowIR for instruction-following retrieval. The experimental results demonstrate that LimRank achieves state-of-the-art performance among all 7B-level rerankers. Additional experiments on downstream tasks, including scientific literature search and retrieval-augmented generation, further establish LimRank as a practical and strong plug-and-play reranking model for real-world IR systems.
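As a rough illustration of the plug-and-play usage the abstract describes (a generic sketch, not LimRank's actual API, which is not documented here), the code below shows the retrieve-then-rerank pattern a pointwise reranker slots into. The `score_fn` callback and the commented-out `bm25` and `llm_score` names are hypothetical placeholders.

```python
# Minimal retrieve-then-rerank sketch (illustrative only; not LimRank's API).
# Any pointwise reranker, LLM-based or otherwise, can supply score_fn.
from typing import Callable, List, Tuple

def rerank(
    query: str,
    candidates: List[str],
    score_fn: Callable[[str, str], float],  # relevance scorer: (query, doc) -> score
    top_k: int = 10,
) -> List[Tuple[str, float]]:
    """Score each (query, candidate) pair and keep the top_k by score."""
    scored = [(doc, score_fn(query, doc)) for doc in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Usage, behind any first-stage retriever (BM25, dense retrieval, ...):
# first_stage_hits = bm25.search(query, k=100)   # hypothetical retriever
# top_docs = rerank(query, first_stage_hits, score_fn=llm_score, top_k=10)
```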

Downloads

  • Slides
  • Paper
  • Transcript (English, automatic)
