Predicting the next item a user will interact with is a core task in sequential recommendation (SR). Traditional approaches predominantly focus on modeling patterns in item purchase sequences, yet often fall short in uncovering the underlying motivations behind user behavior. To overcome this limitation, we introduce IntentRec, a novel SR framework designed to incorporate latent user intent signals extracted from user-written reviews. Unlike conventional models that treat item sequences in isolation, IntentRec bridges the semantic gap between review content and behavioral data by aligning their representations in a shared embedding space through contrastive learning. Review sequences, chronologically ordered texts reflecting users' thoughts, serve as a rich source of intent, which is fused into the item sequence representation during training. To ensure practicality in real-time recommendation scenarios, our method excludes review inputs at inference time, acknowledging that reviews naturally occur after item interactions. IntentRec employs BERT, a pre-trained language model, to extract nuanced user intent from textual reviews, and introduces a cross-attention-enhanced contrastive loss to tightly couple review-derived signals with item-based preferences. Extensive experiments conducted on four widely used SR benchmarks demonstrate that IntentRec consistently outperforms eight state-of-the-art baselines. Further ablation studies confirm the crucial role of review-based user intent in improving sequential recommendation accuracy.
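To illustrate the general idea behind a cross-attention-enhanced contrastive objective, the following is a minimal sketch, not the authors' implementation: item-sequence hidden states attend over review-token states (cross-attention fusion), and an InfoNCE-style loss pulls each user's fused item representation toward that same user's review representation. All function names, tensor shapes, and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: item states (queries) attend
    # over review-token states (keys/values). Hypothetical helper.
    scores = queries @ keys.transpose(-2, -1) / keys.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ values

def contrastive_alignment_loss(item_emb, review_emb, temperature=0.1):
    # InfoNCE-style alignment: for each user in the batch, the matching
    # (item, review) pair sits on the diagonal of the similarity matrix
    # and acts as the positive; all other rows are in-batch negatives.
    item_emb = F.normalize(item_emb, dim=-1)
    review_emb = F.normalize(review_emb, dim=-1)
    logits = item_emb @ review_emb.T / temperature
    labels = torch.arange(item_emb.shape[0])
    return F.cross_entropy(logits, labels)

# Toy usage with random states: batch of 4 users, hidden size 8.
# Review inputs would be available only at training time.
torch.manual_seed(0)
items = torch.randn(4, 5, 8)    # item-sequence hidden states (5 items)
reviews = torch.randn(4, 7, 8)  # review-token hidden states (7 tokens)
fused = cross_attention(items, reviews, reviews)  # review-aware item states
loss = contrastive_alignment_loss(fused.mean(dim=1), reviews.mean(dim=1))
```

Mean-pooling over the sequence dimension before the loss is one simple choice for obtaining a single embedding per user; any sequence-level readout would fit the same pattern.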