Most knowledge tracing (KT) models are trained and applied within a single domain, limiting their ability to fully leverage information shared across domains. In cross-domain knowledge tracing (CDKT), existing methods struggle to distinguish useful knowledge from irrelevant noise, often resulting in negative transfer that significantly impairs model performance and generalization. To mitigate negative transfer, we propose a novel CDKT framework, DisenKT. Built on a Variational Attention Autoencoder (VAAE), which combines a variational autoencoder with a hierarchical attention mechanism, DisenKT encodes student interaction sequences and disentangles the latent knowledge state into domain-exclusive and domain-shared representations. To separate these two types of representations effectively, we employ mutual information minimization as a regularization strategy, enabling the model to focus on transferable knowledge while suppressing irrelevant information and thereby improving prediction accuracy and generalization. We evaluate DisenKT on four real-world datasets (ASSISTments 2009, Junyi, KDD Cup 2006–2007 Algebra, and PTADisc). The results show that, in course-level CDKT tasks, DisenKT achieves an average improvement of approximately 3.09% in AUC and 1.50% in ACC over the best baseline; in student-level CDKT tasks, it attains improvements of around 0.72% in AUC and 3.50% in ACC. These findings validate the effectiveness of DisenKT for cross-domain knowledge transfer.
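To make the disentanglement idea concrete, the sketch below illustrates one way a variational encoder could split a student's interaction encoding into domain-shared and domain-exclusive latents and penalize their mutual information. This is a minimal illustration, not the paper's implementation: the abstract does not specify the sequence encoder, the latent dimensions, or the MI estimator, so the GRU backbone, the simplified CLUB-style upper bound, and all hyperparameters here are assumptions for exposition only.

```python
# Illustrative sketch only: backbone, dimensions, and the simplified CLUB-style
# MI upper bound are assumptions, not the architecture described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledEncoder(nn.Module):
    """Variational encoder that splits a student's interaction encoding into
    a domain-shared latent z_s and a domain-exclusive latent z_e."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.backbone = nn.GRU(input_dim, 128, batch_first=True)
        # Separate heads emit mean and log-variance for each latent factor.
        self.shared_head = nn.Linear(128, 2 * latent_dim)
        self.exclusive_head = nn.Linear(128, 2 * latent_dim)

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        _, h = self.backbone(x)          # h: (1, batch, 128) final hidden state
        h = h.squeeze(0)
        mu_s, logvar_s = self.shared_head(h).chunk(2, dim=-1)
        mu_e, logvar_e = self.exclusive_head(h).chunk(2, dim=-1)
        z_s = self.reparameterize(mu_s, logvar_s)
        z_e = self.reparameterize(mu_e, logvar_e)
        return z_s, z_e


class MIUpperBound(nn.Module):
    """Simplified CLUB-style bound on I(z_s; z_e): a variational net predicts
    z_e from z_s; the gap between paired and shuffled (negative) log-likelihoods
    (here approximated by negative MSE) serves as the penalty to minimize."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.q_net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                   nn.Linear(64, latent_dim))

    def forward(self, z_s, z_e):
        pred = self.q_net(z_s)
        paired = -F.mse_loss(pred, z_e, reduction="none").sum(-1)
        shuffled = -F.mse_loss(pred, z_e[torch.randperm(z_e.size(0))],
                               reduction="none").sum(-1)
        return (paired - shuffled).mean()   # add to the training loss with a weight


# Toy usage: 32 students, length-20 sequences of 50-dimensional interaction features.
encoder = DisentangledEncoder(input_dim=50, latent_dim=16)
mi_bound = MIUpperBound(latent_dim=16)
x = torch.randn(32, 20, 50)
z_s, z_e = encoder(x)
mi_penalty = mi_bound(z_s, z_e)
print(mi_penalty.item())
```

In such a setup, only the domain-shared latent z_s would be transferred across domains, while the MI penalty discourages it from absorbing domain-exclusive information carried by z_e.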