VIDEO DOI: https://doi.org/10.48448/tx22-0486

poster

ACL 2024

August 12, 2024

Bangkok, Thailand

RDRec: Rationale Distillation for LLM-based Recommendation

Keywords:

rationale distillation; recommendation; large language model

Large language model (LLM)-based recommender models, which bridge users and items through textual prompts for effective semantic reasoning, have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, which limits the reasoning ability of LLMs for recommendation. This paper proposes the rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales distilled from reviews related to users and items, RDRec specifies their profiles much more precisely for recommendation. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendation. Our code is available online.
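The abstract describes a two-step workflow: a larger LM turns user reviews into rationales (preferences and attributes), and a compact student model is trained on those rationales. Below is a minimal sketch of that idea, assuming a HuggingFace Transformers seq2seq setup; the prompt wording, the t5-small model choice, and the helper functions are illustrative assumptions, not the paper's actual implementation.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Step 1 (larger LM as teacher): extract a rationale from a review, i.e.
# the user preference and item attribute the interaction implies.
def generate_rationale(review: str, teacher, tok) -> str:
    prompt = (
        "Review: " + review + "\n"
        "What user preference and item attribute does this review imply?"
    )
    inputs = tok(prompt, return_tensors="pt", truncation=True)
    out = teacher.generate(**inputs, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

# Step 2 (compact student): train on (prompt -> rationale) pairs so the
# distilled rationales sharpen user/item profiles for recommendation.
# Optimizer step omitted; this only computes the loss and gradients.
def distillation_step(student, tok, prompt: str, rationale: str) -> float:
    x = tok(prompt, return_tensors="pt", truncation=True)
    y = tok(rationale, return_tensors="pt", truncation=True)
    loss = student(**x, labels=y.input_ids).loss  # standard seq2seq LM loss
    loss.backward()
    return loss.item()

tok = AutoTokenizer.from_pretrained("t5-small")            # illustrative student
student = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

In this sketch the same distilled rationale text would then be folded into the recommendation prompt for top-N or sequential prediction; the exact prompt templates are specified in the paper, not here.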

Downloads

  • Slides
  • Transcript: English (automatic)

Next from ACL 2024

Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal

poster

ACL 2024

Jianheng Huang

August 12, 2024
