We propose a new approach for the authorship attribution task that leverages the various linguistic representations learned at different layers of pre-trained transformer-based models. We evaluate our approach on two popular authorship attribution models and three evaluation datasets, in in-domain and out-of-domain scenarios. We find that utilizing various transformer layers improves the robustness of authorship attribution models when tested on out-of-domain data, resulting in new state-of-the-art results. Our analysis gives further insight into how the model's different layers become specialized in representing certain stylistic features that benefit the model when tested out of domain.
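To make the core idea concrete, below is a minimal sketch (not the authors' released code) of extracting hidden states from every layer of a pre-trained transformer and combining them into a single representation for attribution. It assumes a HuggingFace-style API; the model name, mean pooling, and the softmax-weighted layer combination are illustrative assumptions, not necessarily the paper's exact method.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative backbone; the paper's evaluated models may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def layerwise_representation(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states: tuple of (num_layers + 1) tensors, each of shape
    # (batch, seq_len, hidden_size); index 0 is the embedding layer.
    hidden_states = torch.stack(outputs.hidden_states)  # (L+1, 1, T, H)
    # Mean-pool over tokens to get one vector per layer.
    per_layer = hidden_states.mean(dim=2).squeeze(1)     # (L+1, H)
    # Combine layers with a softmax-weighted sum; in practice these
    # weights would be learned, here they start uniform for illustration.
    weights = torch.softmax(torch.zeros(per_layer.size(0)), dim=0)
    return (weights.unsqueeze(1) * per_layer).sum(dim=0)  # (H,)

doc_vector = layerwise_representation("An unattributed paragraph of text.")
print(doc_vector.shape)  # torch.Size([768])
```

The resulting document vector could then feed any attribution classifier; exposing all layers, rather than only the final one, is what lets the model draw on lower-level stylistic cues alongside higher-level semantic ones.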