DOI: https://doi.org/10.48448/bz1w-xt22

Poster • NAACL 2025 • May 06, 2025 • Albuquerque, United States

Balancing Forget Quality and Model Utility: A Reverse KL-Divergence Knowledge Distillation Approach for Better Unlearning in LLMs
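The title names the paper's core ingredient: a reverse KL-divergence knowledge distillation objective for LLM unlearning. For orientation only, below is a minimal PyTorch sketch of the standard reverse KL distillation loss, KL(student || teacher), computed from logits; the function name, signature, and temperature parameter are illustrative assumptions, and this sketch is not the paper's actual objective, which is not reproduced on this page.

```python
import torch
import torch.nn.functional as F

def reverse_kl_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor,
                    temperature: float = 1.0) -> torch.Tensor:
    """Reverse KL divergence KL(p_student || p_teacher) over the vocabulary.

    Illustrative sketch only: name, signature, and temperature are
    assumptions, not taken from the paper. Reverse KL is mode-seeking,
    so the student is penalized for placing probability mass where the
    (frozen) teacher places little; this property is a common motivation
    for using it in distillation-based unlearning objectives.
    """
    s_log_p = F.log_softmax(student_logits / temperature, dim=-1)
    t_log_p = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_p = s_log_p.exp()
    # KL(s || t) = sum_v s(v) * (log s(v) - log t(v)), averaged over tokens
    return (s_p * (s_log_p - t_log_p)).sum(dim=-1).mean()
```

Under these assumptions, the loss would be applied per token position over the vocabulary dimension, with `teacher_logits` detached from the computation graph so only the student is updated.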



