EMNLP 2025

November 06, 2025

Suzhou, China


Two major domain specialization approaches for Large Language Models (LLMs), fine-tuning and In-Context Learning (ICL), have been compared across various domains. While prior research has examined the similarities and differences between these approaches in task-specific capabilities, less is known about how they affect the features of the generated text itself. To address this research gap, we conducted an experimental study using the Accounting Audit Procedure Generation (AAPG) task, a highly specialized task requiring expert accounting knowledge. This task provides a practical testbed for a multi-perspective analysis of domain specialization due to its technical complexity and the large gap between general and domain-expert knowledge. The results show consistent differences in output characteristics across models when comparing fine-tuning, ICL, and their hybrid approaches.


