Abstract: To address the problem of inadequate self-supervised signal quality in contrastive learning models for sequential recommendation, a combinatorial enumeration and time-interval contrastive learning model is proposed. The model generates augmented sequences that preserve temporal information through time-interval perturbation-based data augmentation. A combinatorial enumeration strategy is introduced to integrate user behavior and time-interval information, constructing multi-view augmented sequence pairs. The model encodes user behavior sequences with a multi-head attention mechanism and optimizes the self-supervised signals through multi-task joint training, improving overall performance. The proposed model is well suited to scenarios with high data sparsity and uneven interaction behavior, effectively addressing the challenges of self-supervised signal modeling. Experimental results on three real-world datasets demonstrate that the model outperforms state-of-the-art contrastive learning models in terms of Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG); in particular, HR@5 and NDCG@5 improve by 5.61% and 8.53%, respectively.
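The abstract does not specify how the time-interval perturbation is performed, so the following is only a minimal sketch of one plausible form of such augmentation: the inter-interaction gaps are jittered by a small random factor and the timestamps are rebuilt, producing two stochastic views of the same sequence as a positive pair. The function name `time_interval_perturbation`, the multiplicative jitter, and the `noise_scale` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def time_interval_perturbation(items, timestamps, noise_scale=0.1, rng=None):
    """Hypothetical sketch: perturb the gaps between consecutive interactions
    while keeping the item order and overall temporal structure intact."""
    rng = rng or np.random.default_rng()
    ts = np.asarray(timestamps, dtype=float)
    intervals = np.diff(ts)                                   # gaps between interactions
    jitter = 1.0 + rng.uniform(-noise_scale, noise_scale, size=intervals.shape)
    perturbed = np.clip(intervals * jitter, a_min=0.0, a_max=None)
    new_ts = np.concatenate(([ts[0]], ts[0] + np.cumsum(perturbed)))
    return list(items), new_ts.tolist()

# Two independently perturbed views of one user sequence could serve as a
# positive pair for the contrastive objective described in the abstract.
items = [12, 7, 33, 5]
timestamps = [0, 40, 55, 130]
view_a = time_interval_perturbation(items, timestamps)
view_b = time_interval_perturbation(items, timestamps)
```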