EMNLP 2025

November 05, 2025

Suzhou, China


Most state-of-the-art recommendation systems rely on ID-based embedding techniques, which, while effective, face significant challenges, including high training data requirements and poor generalization in cold-start scenarios. Inspired by the success of foundation models in visual and language tasks, this work introduces a foundation model for sequential recommendation systems aimed at enhancing generalization. Our proposed RecGPT model features a novel vector quantization-based item tokenizer that creates generalized token representations, effectively capturing diverse semantic features for context-aware recommendations. Additionally, we utilize a decoder-only transformer architecture with an autoregressive modeling paradigm that incorporates bidirectional attention, allowing the model to understand item token relationships and complex user behaviors across diverse scenarios. We evaluate our model across several public datasets, targeting zero-shot and cold-start scenarios, and find significant improvements in generalization over existing methods.
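The abstract's vector quantization-based item tokenizer can be pictured as a nearest-neighbor lookup against a learned codebook: each item's continuous semantic embedding is mapped to the discrete id of its closest codebook vector. The sketch below is illustrative only and is not taken from the paper; the codebook, sizes, and function names are assumptions, and a real system would learn the codebook (e.g., via a VQ-VAE-style objective) rather than sample it randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

CODEBOOK_SIZE = 8  # number of discrete token ids (toy scale, assumed)
EMBED_DIM = 4      # item embedding dimensionality (toy scale, assumed)

# Placeholder codebook; in practice this would be learned end-to-end.
codebook = rng.normal(size=(CODEBOOK_SIZE, EMBED_DIM))

def tokenize_item(item_embedding: np.ndarray) -> int:
    """Map a continuous item embedding to its nearest codebook entry's id."""
    dists = np.linalg.norm(codebook - item_embedding, axis=1)
    return int(np.argmin(dists))

def tokenize_sequence(item_embeddings: np.ndarray) -> list[int]:
    """Tokenize a user's interaction history into a sequence of token ids."""
    return [tokenize_item(e) for e in item_embeddings]

history = rng.normal(size=(5, EMBED_DIM))  # five interacted items
tokens = tokenize_sequence(history)
print(tokens)  # five ids, each in [0, CODEBOOK_SIZE)
```

Because the resulting ids are shared across items with similar semantics, a new (cold-start) item can be tokenized without any interaction data, which is the generalization property the abstract emphasizes.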

