EMNLP 2025

November 06, 2025

Suzhou, China

Multi-digit addition is a clear probe of the computational power of large language models. To dissect the internal arithmetic processes of LLaMA-3-8B-Instruct, we combine linear probing with logit-lens inspection. Inspired by the step-by-step manner in which humans perform addition, we propose and analyze a coherent four-stage trajectory in the forward pass: (1) formula-structure representations become linearly decodable first, while the answer token is still far down the candidate list; (2) core computational features then emerge prominently; (3) at deeper layers, numerical abstractions of the result become clearer, enabling near-perfect detection and decoding of the individual digits of the sum; and (4) near the output, the model organizes and generates the final content, with the correct token reliably occupying the top rank. This trajectory suggests a hierarchical process that favors internal computation over rote memorization. We release our code and data to facilitate reproducibility.
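To make the logit-lens side of the analysis concrete, here is a minimal sketch of the standard technique: project each layer's hidden state through the model's own final norm and unembedding matrix, then track where the correct answer token sits in the per-layer candidate ranking. The prompt format and the answer tokenization below are illustrative assumptions, not the authors' released setup.

```python
# Minimal logit-lens sketch (an illustration, not the authors' released code):
# project each layer's hidden state through the model's final RMSNorm and
# unembedding matrix, then check the answer token's rank at every layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # checkpoint studied in the paper
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

prompt = "387 + 456 = "  # illustrative multi-digit addition prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Assumed tokenization: the first answer token after "= " is "843".
answer_id = tok("843", add_special_tokens=False).input_ids[0]

# hidden_states is a tuple of (num_layers + 1) tensors of shape
# [batch, seq_len, hidden]; inspect the last prompt position at each layer.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.model.norm(h[:, -1, :]))[0]
    rank = int((logits > logits[answer_id]).sum().item()) + 1
    top = tok.decode(logits.argmax().item())
    print(f"layer {layer:2d}  top token {top!r}  answer rank {rank}")
```

The linear-probing side can be sketched in the same spirit: fit a simple classifier on per-layer hidden states to decode one digit of the sum. The layer index, operand range, target digit, dataset size, and the choice of a scikit-learn logistic-regression probe are all illustrative assumptions rather than the paper's exact setup.

```python
# Hedged linear-probe sketch, reusing `tok`, `model`, and `torch` from above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def last_token_state(a: int, b: int, layer: int) -> np.ndarray:
    """Hidden state of the final prompt token at the given layer."""
    enc = tok(f"{a} + {b} = ", return_tensors="pt").to(model.device)
    with torch.no_grad():
        hs = model(**enc, output_hidden_states=True).hidden_states
    return hs[layer][0, -1].float().cpu().numpy()

rng = np.random.default_rng(0)
pairs = rng.integers(100, 1000, size=(500, 2))        # 3-digit operands
X = np.stack([last_token_state(a, b, layer=24) for a, b in pairs])
y = [(a + b) // 100 % 10 for a, b in pairs]           # hundreds digit of sum

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("held-out digit-decoding accuracy:", probe.score(X_te, y_te))
```

If the four-stage account holds, the answer's rank in the first sketch should drop sharply only in later layers, while probes like the second should reach high accuracy somewhat earlier, mirroring the gap the abstract describes between decodable numerical abstractions and the final top-ranked token.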
