EMNLP 2025

November 07, 2025

Suzhou, China


As fine-tuning becomes the dominant paradigm for improving large language models (LLMs), understanding what changes during this process is increasingly important. Traditional benchmarking often fails to explain why one model outperforms another. In this work, we use model diffing, a mechanistic interpretability approach, to analyze the specific capability differences between Gemma-2-9b-it and a SimPO-enhanced variant. Using crosscoders, we identify and categorize the latent representations that differentiate the two models. We find that SimPO-acquired latent concepts predominantly enhance safety mechanisms (+32.8%), multilingual capabilities (+43.8%), and instruction-following (+151.7%), while the additional training also reduces emphasis on model self-reference (-44.1%) and hallucination management (-68.5%). Our analysis shows that model diffing can yield fine-grained insights beyond leaderboard metrics, attributing performance gaps to concrete mechanistic capabilities. This approach offers a transparent and targeted framework for comparing LLMs.

