Most state-of-the-art recommendation systems rely on ID-based embedding techniques, which, while effective, face significant challenges, including high training data requirements and poor generalization in cold-start scenarios. Inspired by the success of foundation models in visual and language tasks, this work introduces a foundation model for sequential recommendation systems aimed at enhancing generalization. Our proposed RecGPT model features a novel vector quantization-based item tokenizer that creates generalized token representations, effectively capturing diverse semantic features for context-aware recommendations. Additionally, we utilize a decoder-only transformer architecture with an autoregressive modeling paradigm that incorporates bidirectional attention, allowing the model to understand item token relationships and complex user behaviors across diverse scenarios. We evaluate our model across several public datasets, targeting zero-shot and cold-start scenarios, and find significant improvements in generalization over existing methods.
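To make the tokenization idea concrete, here is a minimal sketch (not the paper's implementation, and with hypothetical sizes) of vector-quantization item tokenization: each item's semantic embedding is mapped to the index of its nearest codebook vector, producing a discrete token that can be shared across items and domains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: an 8-entry codebook over 4-dim embeddings.
codebook = rng.normal(size=(8, 4))         # learned codebook (assumed given)
item_embeddings = rng.normal(size=(5, 4))  # semantic embeddings of 5 items

def vq_tokenize(embeddings: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Return the nearest-codebook index (token id) for each embedding."""
    # Pairwise squared Euclidean distances, shape (n_items, n_codes).
    dists = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

tokens = vq_tokenize(item_embeddings, codebook)
print(tokens)  # one discrete token id per item, each in [0, 8)
```

Because tokens are drawn from a fixed, shared codebook rather than a per-item ID table, unseen (cold-start) items receive meaningful tokens as long as their semantic embeddings can be computed.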