EMNLP 2025

November 06, 2025

Suzhou, China


Language models can be used to provide interactive, personalized student feedback in educational settings. However, real-world deployment faces three key challenges: privacy concerns, limited computational resources, and the need for pedagogically valid responses. These constraints require small, open-source models that can run locally and reliably ground their outputs in correct information. We introduce SCRIBE, a framework for multi-hop, tool-augmented reasoning designed to generate valid responses to student questions about feedback reports. SCRIBE combines domain-specific tools with a self-reflective inference pipeline that supports iterative reasoning, tool use, and error recovery. We distill these capabilities into 3B and 8B models via two-stage LoRA fine-tuning on synthetic GPT-4o-generated data. Evaluation using a human-aligned GPT-Judge and a user study with 108 students shows that SCRIBE matches or exceeds the perceived quality of much larger models, demonstrating its viability for low-resource, privacy-sensitive educational applications.
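The abstract describes an inference pipeline that interleaves reasoning, tool calls, and error recovery across multiple hops. The paper itself does not publish the loop in code form, so the sketch below is only an illustrative reconstruction: the tool name `lookup_grade`, the `Step` record, the hop budget, and the stub `demo_policy` are all hypothetical, standing in for the fine-tuned model and SCRIBE's domain-specific tools.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Hypothetical tool registry; SCRIBE's actual domain tools are not public.
TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_grade": lambda arg: {"alice": "82/100"}.get(arg, "ERROR: unknown student"),
}

@dataclass
class Step:
    action: str        # "tool:<name>" or "answer"
    argument: str
    observation: str = ""

def scribe_loop(question: str,
                policy: Callable[[str, List[Step]], Tuple[str, str]],
                max_hops: int = 4) -> str:
    """Iterative reason-act-reflect loop: the policy proposes an action,
    tool output (including errors) is appended to the trace, and the
    policy sees that trace on the next hop, enabling self-correction."""
    trace: List[Step] = []
    for _ in range(max_hops):
        action, argument = policy(question, trace)
        if action == "answer":
            return argument
        name = action.removeprefix("tool:")
        obs = TOOLS[name](argument) if name in TOOLS else "ERROR: no such tool"
        trace.append(Step(action, argument, obs))
    return "Unable to answer within the hop budget."

# Stub policy standing in for the fine-tuned LM: it first passes a
# badly-cased argument, then recovers after seeing the tool's error.
def demo_policy(question: str, trace: List[Step]) -> Tuple[str, str]:
    if not trace:
        return "tool:lookup_grade", "Alice"              # triggers a tool error
    last = trace[-1]
    if last.observation.startswith("ERROR"):
        return "tool:lookup_grade", last.argument.lower()  # error recovery
    return "answer", f"Your grade was {last.observation}."

print(scribe_loop("What was my grade?", demo_policy))
# prints "Your grade was 82/100."
```

The key design point this sketch mirrors is that failed tool calls are not terminal: the error message becomes an observation in the trace, so the model can reflect and retry within the same multi-hop episode.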


