EMNLP 2025

November 05, 2025

Suzhou, China


We propose a new approach to the authorship attribution task that leverages the various linguistic representations learned at different layers of pre-trained transformer-based models. We evaluate our approach on two popular authorship attribution models and three evaluation datasets, in both in-domain and out-of-domain scenarios. We found that utilizing various transformer layers improves the robustness of authorship attribution models when tested on out-of-domain data, resulting in new state-of-the-art results. Our analysis gives further insight into how the model's different layers become specialized in representing certain stylistic features that benefit the model when tested out of domain.
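The abstract describes combining representations from multiple encoder layers rather than relying only on the final layer. The sketch below illustrates that general idea only; it is not the authors' released code. It assumes a HuggingFace-style encoder, and the class name, the learnable scalar layer weights, and the mean-pooling choice are all illustrative assumptions.

```python
# Minimal sketch: classify authorship from a softmax-weighted mix of ALL
# transformer layers, not just the last one. Names are illustrative; this
# is not the paper's implementation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiLayerAttributionModel(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", num_authors: int = 10):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        num_layers = self.encoder.config.num_hidden_layers + 1  # +1 for embeddings
        hidden = self.encoder.config.hidden_size
        # One learnable scalar per layer, normalized with softmax at forward time.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.classifier = nn.Linear(hidden, num_authors)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # out.hidden_states: tuple of (batch, seq, hidden), one entry per layer.
        stacked = torch.stack(out.hidden_states, dim=0)          # (L, B, S, H)
        weights = torch.softmax(self.layer_weights, dim=0)       # (L,)
        mixed = (weights[:, None, None, None] * stacked).sum(0)  # (B, S, H)
        # Mean-pool over non-padding tokens.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (mixed * mask).sum(1) / mask.sum(1).clamp(min=1)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiLayerAttributionModel()
batch = tokenizer(["An example document."], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # (1, num_authors)
```

The scalar-mix scheme shown here (softmax-weighted sum over layers, as popularized by ELMo) is just one way to pool multi-layer representations; concatenation or per-layer probing are equally plausible variants of the idea the abstract outlines.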

Downloads

  • Slides
  • Paper
  • Transcript (English, automatic)

Next from EMNLP 2025

Knowledge Editing through Chain-of-Thought (poster)
Weihang Su, Qingyao Ai and 4 other authors
05 November 2025
