Pub Date: 2025-02-04 DOI: 10.1016/j.ipm.2025.104081
Siyu Chen , Lifan Peng , Xiaoqian Zhang , Yufeng Chen , Er Wang , Zhenwen Ren
Deep multi-view subspace clustering outperforms classic multi-view clustering methods due to its powerful nonlinear feature extraction capabilities. Nevertheless, current deep multi-view clustering approaches face several challenges: (1) a lack of multi-level feature expression during consensus feature learning; (2) some nonlinear geometric structures in the data are not fully exploited, leading to incomplete graph information representation; and (3) the robust supervision available from the original feature matrix is neglected. To address these issues, we propose Deep Multi-view Subspace Clustering via Hierarchical Diversity Optimization of Consensus Learning, termed DMSC-HDOC. Our framework integrates three key modules: a hierarchical self-weighted fusion (HSF) module resamples the original features to learn more diverse features; on this basis, a dual Laplacian constraint (DLC) module mines the geometric structure of the data samples; finally, self-alignment contrast (SaC) supervises the consensus features derived from the original features. Extensive experiments on several widely used datasets show the superiority of the proposed DMSC-HDOC over existing state-of-the-art methods.
Title: Deep multi-view subspace clustering via hierarchical diversity optimization of consensus learning. Information Processing & Management, 62(3), Article 104081.
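The dual Laplacian constraint (DLC) in DMSC-HDOC mines geometric structure; the abstract does not give its exact formulation, but such constraints typically build on the graph-Laplacian smoothness penalty tr(Z^T L Z). A minimal pure-Python sketch under that assumption (function names are illustrative, not the paper's):

```python
def laplacian(adj):
    """Unnormalized graph Laplacian L = D - A for a symmetric adjacency matrix."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def smoothness(adj, feats):
    """tr(Z^T L Z) = 1/2 * sum_ij A_ij * ||z_i - z_j||^2.
    Small when connected nodes have similar feature rows, which is what a
    Laplacian constraint enforces on learned representations."""
    total = 0.0
    for i, row in enumerate(adj):
        for j, a in enumerate(row):
            total += 0.5 * a * sum((x - y) ** 2
                                   for x, y in zip(feats[i], feats[j]))
    return total
```

A constraint term like this would be added to the clustering loss with a trade-off weight, pulling representations of neighboring samples together.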
Pub Date: 2025-02-01 DOI: 10.1016/j.ipm.2025.104077
Dun Lan , Chuanhou Sun , Xiangjun Dong , Ping Qiu , Yongshun Gong , Xinwang Liu , Philippe Fournier-Viger , Chengqi Zhang
Repetitive Negative Sequential Patterns (RNSPs) can provide critical insights into the importance of sequences. However, most current RNSP mining methods require users to set an appropriate support threshold to obtain the expected number of patterns, which is very difficult for users without prior experience. To address this issue, we propose a new algorithm, TK-RNSP, to mine the Top-K RNSPs with the highest support, without the need to set a support threshold. In detail, we achieve a significant breakthrough by proposing a series of definitions that enable RNSP mining to satisfy anti-monotonicity. Then, we propose a bitmap-based Depth-First Backtracking Search (DFBS) strategy to reduce the heavy computational burden by speeding up support calculation. Finally, we formulate TK-RNSP as a one-stage process, which effectively reduces the generation of unnecessary patterns and improves computational efficiency compared to two-stage algorithms. To the best of our knowledge, TK-RNSP is the first algorithm to mine Top-K RNSPs. Extensive experiments on eight datasets show that TK-RNSP offers better flexibility and efficiency in mining Top-K RNSPs.
Title: TK-RNSP: Efficient Top-K Repetitive Negative Sequential Pattern mining. Information Processing & Management, 62(3), Article 104077.
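The bitmap idea that DFBS accelerates can be sketched generically: each pattern's occurrences are stored as a bitmask over the sequence database, so support becomes a popcount and top-K selection a heap pass. This is an illustrative reconstruction of the data structure, not the paper's actual DFBS algorithm:

```python
import heapq

def support(bitmap):
    # Each set bit marks a database sequence containing the pattern;
    # popcount yields the support without rescanning sequences, which is
    # the core speed-up of bitmap-based pattern mining.
    return bin(bitmap).count("1")

def top_k(patterns, k):
    """patterns: dict of pattern name -> occurrence bitmap.
    Returns the k patterns with highest support (ties broken by name so
    the result is deterministic)."""
    return heapq.nlargest(k, patterns,
                          key=lambda p: (support(patterns[p]), p))
```

Candidate extension in a real miner would AND parent bitmaps to get child bitmaps, so anti-monotone pruning falls out of the representation.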
Social recommendations play a crucial role in helping users filter information and discover potential requirements. However, existing works often ignore the effects of memory patterns and social inconsistency, which hinder recommenders from capturing evolving user interests. To overcome these problems, a model incorporating the Forgetting curve and Memory Replay for Evolving Socially-aware recommendation (FMRES) is proposed to track users’ fresh interests. Specifically, a cognitive-inspired Ebbinghaus curve is integrated with item attributes to model users’ personalized interest forgetting and retention. Then, a memory replay mechanism is employed to revive forgotten yet valuable items, fostering user engagement and enhancing relevance in recommendations. By aggregating neighbors’ social characteristics, consistent friends are sampled to identify meaningful and impactful relationships. Finally, temporal representations of users and items are incorporated to track the evolution of users’ interests using gated recurrent units. Extensive experiments on three datasets demonstrate that the proposed model consistently outperforms advanced baseline methods across various metrics.
Title: Incorporating Forgetting Curve and Memory Replay for Evolving Socially-aware Recommendation. Information Processing & Management, 62(3), Article 104070 (published 2025-01-28, DOI: 10.1016/j.ipm.2025.104070).
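The Ebbinghaus forgetting curve that FMRES builds on is commonly written R(t) = exp(-t/S), where S is memory stability. A sketch of how such decay could down-weight older interactions (the stability value and the weighting scheme are assumptions for illustration, not the paper's exact design):

```python
import math

def retention(elapsed, stability):
    """Ebbinghaus curve R = exp(-t/S): the fraction of an interaction's
    influence retained after `elapsed` time units, given stability S."""
    return math.exp(-elapsed / stability)

def weight_history(interactions, now, stability=30.0):
    """Attach a decayed weight to each (item, timestamp) pair; a replay
    mechanism could then resurface items whose weight has fallen but whose
    attributes still match current interests."""
    return [(item, retention(now - t, stability)) for item, t in interactions]
```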
Pub Date: 2025-01-28 DOI: 10.1016/j.ipm.2025.104069
Abbas Pirmoradi, Orland Hoeber
Interactive information retrieval (IIR) interfaces are commonly evaluated using questionnaires that collect post-task subjective measures such as satisfaction, ease of use, usefulness, and user engagement. Although the importance of measuring emotional responses during the search process has been recognized, incorporating this aspect into IIR user studies has been challenging. We have developed a novel method to capture real-time emotional responses based on advances in facial emotion classification. We utilize consumer-grade front-facing cameras to collect emotional responses, synchronized with the user’s interactions with the search interface. In a controlled laboratory study, the relevance of search results was manipulated to validate the approach’s effectiveness and to explore how result relevance affects users’ emotional responses, post-task evaluations of the search interface, and interactions with search interface features. This enabled us to examine whether we could detect emotional responses, whether recency effects were observed in post-task evaluations, and whether feature use correlated with emotional responses. The study was conducted in the context of exploratory search within an academic digital library. The results demonstrate that both positive and negative emotional responses can be reliably detected during the search process. There is evidence of recency effects in post-task measures, and the study identifies specific interactive features used during the experience of positive and negative emotional responses. This serves as a foundation for using emotional responses to supplement post-task survey data when evaluating search interfaces.
Title: Bridging in-task emotional responses with post-task evaluations in digital library search interface user studies. Information Processing & Management, 62(3), Article 104069.
Pub Date: 2025-01-28 DOI: 10.1016/j.ipm.2025.104076
Zhu Zhang, Bo Yang, Yimeng Lu
Sequential recommendation (SR) focuses on capturing users’ interests from their historical behaviors. Transformer-based SR models have demonstrated promising performance by leveraging self-attention for sequential modeling. Recently, Mamba, a novel sequential model, has shown competitive performance compared to Transformers. In SR tasks, item representation learning involves both global and local context information. While several existing SR models attempt to address this integration, they suffer from inferior performance or computational inefficiency. Moreover, the existing Mamba-based SR model appears to capture only global context information. Given Mamba’s merits in enhancing model performance and efficiency, there is substantial potential to integrate both global and local context information more effectively within a Mamba-based framework. Additionally, consistency training, which is pivotal for enhancing model performance, remains underexplored in existing SR models.
To tackle these challenges, we propose a Local Context Enhanced Consistency-aware Mamba-based Sequential Recommendation Model (LC-Mamba). LC-Mamba captures both global and local context information to improve recommendation performance. Specifically, LC-Mamba leverages a GNN-based sequence encoder to extract information from local neighbors for each item (local context information) in a graph view, while utilizing a Mamba-based sequence encoder to capture dependencies between items in the sequence (global context information) in a sequential view. Furthermore, we introduce consistency training, including model-level and representation-level consistency, to further enhance performance. Specifically, we incorporate R-Drop regularization into the Mamba-based sequence encoder to mitigate the inconsistency between training and inference caused by random dropout (model-level consistency). Additionally, we leverage contrastive learning to enhance consistency between the item representations learned from the sequential and graph views (representation-level consistency). Extensive experiments on three widely used datasets illustrate that LC-Mamba outperforms baseline models in HR and NDCG, achieving up to a 31.03% improvement in NDCG. LC-Mamba can be applied to real-world applications such as e-commerce and content platforms.
Title: A Local context enhanced Consistency-aware Mamba-based Sequential Recommendation model. Information Processing & Management, 62(3), Article 104076.
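The model-level consistency term LC-Mamba adopts is standard R-Drop: run two stochastic forward passes through the same dropout-bearing encoder and penalize their disagreement with a symmetric KL divergence. A toy pure-Python sketch of that loss (the real model applies it to the Mamba encoder's output distributions):

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def dropout_forward(logits, p, rng):
    # Toy stochastic pass: zero each logit with probability p and scale
    # survivors by 1/(1-p) (inverted dropout), then normalize.
    return softmax([x / (1 - p) if rng.random() >= p else 0.0
                    for x in logits])

def rdrop_loss(dist_a, dist_b):
    """Symmetric KL between two stochastic passes, the R-Drop consistency
    penalty. Assumes both distributions are strictly positive where the
    other has mass."""
    kl = lambda p, q: sum(pi * math.log(pi / qi)
                          for pi, qi in zip(p, q) if pi > 0)
    return 0.5 * (kl(dist_a, dist_b) + kl(dist_b, dist_a))
```

In training, this term is added to the task loss so the two dropout-perturbed predictions are pulled toward each other, reducing the train/inference mismatch the abstract mentions.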
Pub Date: 2025-01-28 DOI: 10.1016/j.ipm.2025.104078
Shaolin Zhu , Leiyu Pan , Dong Jian , Deyi Xiong
Large language models (LLMs) hold great promise for cross-lingual applications to power machine translation (MT) systems. However, directly fine-tuning LLMs on parallel data risks catastrophic forgetting and lacks explainability in cross-lingual knowledge transfer. In this paper, we introduce MoE-LLM, a novel fusion framework that enhances the multilingual translation abilities of LLMs by incorporating sparse Mixture-of-Experts (MoEs) components via hybrid transfer learning. MoE-LLM freezes the LLM parameters, mitigating forgetting, and introduces specialized translation experts within the MoEs modules. Our hybrid initialization strategy further bridges the representation gap by warm-starting MoE parameters using LLM representations. We evaluated MoE-LLM on 10 translation directions across 6 languages using the WMT benchmark. Compared with directly fine-tuning LLMs, MoE-LLM significantly improved translation quality, achieving gains of up to 2.5 BLEU points, with at least some improvement in zero-shot translation scenarios and surpassing other strong baselines like Adapter and LoRA-F. Our ablation studies highlight the effectiveness of the cascaded fusion strategy and the mixed initialization approach for optimal performance. MoE-LLM offers an effective and explainable solution for adapting pre-trained LLMs to multilingual machine translation, with particular benefits in low-resource and zero-shot scenarios.
Title: Overcoming language barriers via machine translation with sparse Mixture-of-Experts fusion of large language models. Information Processing & Management, 62(3), Article 104078.
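The core fusion idea (freeze the LLM and route its hidden states through trainable experts combined by a gate) can be sketched generically as below. The gate shape, expert form, and combination rule are assumptions for illustration; the abstract does not specify MoE-LLM's internals:

```python
def moe_layer(hidden, experts, gate_weights):
    """Dense sketch of an MoE layer: the frozen backbone supplies `hidden`,
    each expert transforms it, and the gate mixes the outputs. A sparse MoE
    would instead keep only the top-scoring experts per token."""
    assert abs(sum(gate_weights) - 1.0) < 1e-9, "gate must be a distribution"
    outputs = [expert(hidden) for expert in experts]
    dim = len(outputs[0])
    return [sum(w * out[d] for w, out in zip(gate_weights, outputs))
            for d in range(dim)]
```

Because only the experts and gate carry trainable parameters, the frozen backbone cannot drift, which is the catastrophic-forgetting mitigation the abstract describes.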
Pub Date: 2025-01-27 DOI: 10.1016/j.ipm.2025.104072
Hyunwook Yu , Yejin Cho , Geunchul Park , Mucheol Kim
The bidirectional encoder representations from transformers (BERT) model has achieved remarkable success in various natural language processing tasks for Latin-based languages. However, the Korean language presents unique challenges, with limited data resources and complex linguistic structures. In this paper, we present KRongBERT, a language model specifically designed through a morphological approach to address the unique linguistic complexities of Korean. KRongBERT mitigates the out-of-vocabulary issues that arise with byte-pair-encoding tokenizers in Korean and incorporates language-specific embedding layers to enhance understanding. Our model demonstrates up to a 1.56% improvement in performance on specific natural language understanding tasks compared to traditional BERT implementations. Notably, KRongBERT achieves superior performance compared to existing state-of-the-art Korean BERT models while utilizing only 11.42% of the data required by other models. Our experiments demonstrate that KRongBERT efficiently handles the complexities of the Korean language, outperforming current state-of-the-art approaches. The code is publicly available at https://github.com/Splo2t/KRongBERT.
Title: KRongBERT: Enhanced factorization-based morphological approach for the Korean pretrained language model. Information Processing & Management, 62(3), Article 104072.
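One concrete form of morphological factorization available to a Korean tokenizer is decomposing precomposed Hangul syllables into jamo indices via Unicode arithmetic, giving an out-of-vocabulary fallback finer than whole-syllable BPE merges. This illustrates the general technique, not KRongBERT's actual tokenizer:

```python
# Unicode Hangul syllables are laid out arithmetically from U+AC00:
# 19 initial consonants x 21 vowels x 28 (optional) final consonants.
CHO, JUNG, JONG = 19, 21, 28

def decompose(syllable):
    """Return (initial, vowel, final) jamo indices for one precomposed
    Hangul syllable; a final index of 0 means no final consonant."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code < CHO * JUNG * JONG:
        raise ValueError("not a precomposed Hangul syllable")
    return (code // (JUNG * JONG),
            (code % (JUNG * JONG)) // JONG,
            code % JONG)
```

A tokenizer with jamo-level fallback can always represent unseen syllables, which is one way to attack the OOV problem the abstract raises.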
Knowledge Tracing (KT) is an important research area in online education that focuses on predicting future academic performance from students’ historical exercise records. The key to solving the KT problem lies in assessing students’ knowledge states through their responses to concept-related exercises. However, analyzing exercise records from a single perspective does not yield a comprehensive model of student knowledge. In practice, students’ knowledge states exhibit both long- and short-term phenomena, corresponding to long-term knowledge systems and short-term real-time learning, both of which are closely related to learning quality and preferences. Existing studies have often neglected the learning preferences implied by long-term knowledge states and their impact on student performance. Therefore, we introduce a hybrid knowledge tracing model that utilizes both long- and short-term knowledge state representations (L-SKSKT). It enhances KT by fusing these two types of knowledge state representations and measuring their impact on learning quality. L-SKSKT includes a graph construction method designed to model students’ long- and short-term knowledge states. In addition, it incorporates a knowledge state graph embedding model that effectively captures long- and short-term dependencies, generating the corresponding knowledge state representations. Furthermore, we propose a fusion mechanism to integrate these representations and trace their impact on learning outcomes. Extensive empirical results on four benchmark datasets show that our approach achieves the best performance for KT and beats various strong baselines by a large margin.
Title: Exploring long- and short-term knowledge state graph representations with adaptive fusion for knowledge tracing. Information Processing & Management, 62(3), Article 104074 (published 2025-01-25, DOI: 10.1016/j.ipm.2025.104074).
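A common way to realize adaptive fusion of long- and short-term state representations is a learned sigmoid gate; the abstract does not specify L-SKSKT's mechanism, so the following is a hedged sketch with illustrative parameter names:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_fusion(long_state, short_state, w_long, w_short, bias):
    """Gated fusion h = g * long + (1 - g) * short, where the scalar gate
    g = sigmoid(w_long . long + w_short . short + bias) is learned. The gate
    lets the model lean on stable long-term knowledge or recent learning per
    student, which is the role adaptive fusion plays here."""
    g = sigmoid(sum(a * b for a, b in zip(w_long, long_state)) +
                sum(a * b for a, b in zip(w_short, short_state)) + bias)
    return [g * l + (1 - g) * s for l, s in zip(long_state, short_state)]
```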
Pub Date : 2025-01-24DOI: 10.1016/j.ipm.2025.104073
Xun Li, Hongyun Cai, Chuan Feng, Ao Zhao
Recently, link prediction methods based on graph neural networks have garnered significant attention and achieved great success on large datasets. However, existing methods usually rely on explicit graph structures, which are hard to obtain in sparse graphs. In addition, the incomplete graph data used for model training may lead to a distribution shift between the training and testing sets. To address these issues, this paper proposes a novel link prediction method for sparse graphs based on a variational graph autoencoder and pairwise learning. By incorporating noise-perturbation variational autoencoders, the proposed method enhances robustness during sparse graph training. Instead of relying on explicit graph features, we reconstruct the original adjacency matrix by perturbing the node-feature mean encoding or variance encoding. To mitigate the impact of insufficient topological information, we introduce a pairwise learning scheme, which obtains pairwise edges through negative sampling and iteratively optimizes the positive and negative complementary probability adjacency matrix. Furthermore, we integrate the probability adjacency matrix and node similarity prediction based on message-passing networks into a dual-stream framework to predict unknown links. Experimental results on multiple sparse networks demonstrate the superior link prediction performance of our proposed method over baseline approaches. Our method improves AUC by 0.3% to 1.5% and Precision by 1.4% to 5.2% across seven datasets.
{"title":"Dual stream fusion link prediction for sparse graph based on variational graph autoencoder and pairwise learning","authors":"Xun Li, Hongyun Cai, Chuan Feng, Ao Zhao","doi":"10.1016/j.ipm.2025.104073","DOIUrl":"10.1016/j.ipm.2025.104073","url":null,"abstract":"<div><div>Recently, link prediction methods based on graph neural networks have garnered significant attention and achieved great success on large datasets. However, existing methods usually rely on explicit graph structures, which are hard to obtain in sparse graphs. In addition, the incomplete graph data used for model training may lead to a distribution shift between the training and testing sets. To address these issues, this paper proposes a novel link prediction method for sparse graphs based on a variational graph autoencoder and pairwise learning. By incorporating noise-perturbation variational autoencoders, the proposed method enhances robustness during sparse graph training. Instead of relying on explicit graph features, we reconstruct the original adjacency matrix by perturbing the node-feature mean encoding or variance encoding. To mitigate the impact of insufficient topological information, we introduce a pairwise learning scheme, which obtains pairwise edges through negative sampling and iteratively optimizes the positive and negative complementary probability adjacency matrix. Furthermore, we integrate the probability adjacency matrix and node similarity prediction based on message-passing networks into a dual-stream framework to predict unknown links. Experimental results on multiple sparse networks demonstrate the superior link prediction performance of our proposed method over baseline approaches. 
Our method improves AUC by 0.3% to 1.5% and Precision by 1.4% to 5.2% across seven datasets.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 3","pages":"Article 104073"},"PeriodicalIF":7.4,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
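The encode-perturb-decode step described above (reparameterized sampling from the mean and variance encodings, an extra noise perturbation for robustness, then inner-product reconstruction of the adjacency matrix) can be sketched as follows. This is a numpy sketch under assumptions: `mu` and `log_var` are random stand-ins for the outputs of a trained GNN encoder, and `noise_scale` is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def perturb_and_decode(mu, log_var, rng, noise_scale=0.1):
    """Reparameterized sampling with an extra noise perturbation on the
    latent codes, followed by inner-product reconstruction of the adjacency.

    Returns an (n, n) matrix of edge probabilities in (0, 1).
    """
    eps = rng.normal(size=mu.shape)
    z = mu + eps * np.exp(0.5 * log_var)             # standard reparameterization
    z = z + noise_scale * rng.normal(size=z.shape)   # extra perturbation for robustness
    return sigmoid(z @ z.T)                          # reconstructed adjacency probabilities

# Toy usage: in the real model, mu and log_var come from the graph encoder.
rng = np.random.default_rng(0)
n_nodes, dim = 5, 3
mu = rng.normal(size=(n_nodes, dim))
log_var = rng.normal(size=(n_nodes, dim))
A_hat = perturb_and_decode(mu, log_var, rng)
```

The inner-product decoder makes the reconstructed matrix symmetric by construction, which matches the undirected-graph setting typical of link prediction benchmarks.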
Pub Date : 2025-01-22DOI: 10.1016/j.ipm.2025.104063
Xinyi Yang , Lerong Ding , Wei Wang , Jianlin Yang
Interdisciplinary research has emerged as an important approach to tackling complex issues that cut across disciplines. Previous research assessed the interdisciplinarity of a paper without considering differences in functional structures. This study proposes a method to identify interdisciplinary research patterns by measuring the level of interdisciplinarity in research articles across four sections: Introduction, Methods, Results, and Discussion. With 19,712 articles in Bioinformatics, we revealed that the level of interdisciplinarity typically follows the sequence of Introduction, Methods, Results, and Discussion. We also identified six patterns, each featuring specific highly interdisciplinary sections: All-round Integration, Multidisciplinary Application Exploration, Multidisciplinary Background Research, Multidisciplinary Approach, Interdisciplinary Analysis, and Non-Interdisciplinary Research. We further investigated the academic value of interdisciplinary research through citation impact and novel insights. Even with low citation counts, the volume of highly interdisciplinary research continues to grow. The topic analysis also demonstrated that different interdisciplinary research patterns prioritize certain aspects to solve the core problems of a research field. Moreover, the research focus of each pattern is consistent with the function of its highly interdisciplinary sections. For example, in protein structure research, the Multidisciplinary Approach pattern prioritizes accurate modelling and techniques, while the Multidisciplinary Application Exploration pattern emphasizes biological applications such as vaccine development. These findings provide management with guidance on how to encourage interdisciplinary research that genuinely contributes to innovation.
{"title":"Identification of interdisciplinary research patterns based on the functional structures of IMRaD","authors":"Xinyi Yang , Lerong Ding , Wei Wang , Jianlin Yang","doi":"10.1016/j.ipm.2025.104063","DOIUrl":"10.1016/j.ipm.2025.104063","url":null,"abstract":"<div><div>Interdisciplinary research has emerged as an important approach to tackling complex issues that cut across disciplines. Previous research assessed the interdisciplinarity of a paper without considering differences in functional structures. This study proposes a method to identify interdisciplinary research patterns by measuring the level of interdisciplinarity in research articles across four sections: Introduction, Methods, Results, and Discussion. With 19,712 articles in Bioinformatics, we revealed that the level of interdisciplinarity typically follows the sequence of Introduction, Methods, Results, and Discussion. We also identified six patterns, each featuring specific highly interdisciplinary sections: All-round Integration, Multidisciplinary Application Exploration, Multidisciplinary Background Research, Multidisciplinary Approach, Interdisciplinary Analysis, and Non-Interdisciplinary Research. We further investigated the academic value of interdisciplinary research through citation impact and novel insights. Even with low citation counts, the volume of highly interdisciplinary research continues to grow. The topic analysis also demonstrated that different interdisciplinary research patterns prioritize certain aspects to solve the core problems of a research field. Moreover, the research focus of each pattern is consistent with the function of its highly interdisciplinary sections. For example, in protein structure research, the Multidisciplinary Approach pattern prioritizes accurate modelling and techniques, while the Multidisciplinary Application Exploration pattern emphasizes biological applications such as vaccine development. 
These findings provide management with guidance on how to encourage interdisciplinary research that genuinely contributes to innovation.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 3","pages":"Article 104063"},"PeriodicalIF":7.4,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143139096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
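The abstract does not name the per-section interdisciplinarity measure; a standard choice in this literature is Rao-Stirling diversity over the disciplines of a section's cited references. The sketch below assumes that measure, and the discipline names, proportions, and distance values are illustrative, not data from the study.

```python
from itertools import combinations

def rao_stirling(proportions, distance):
    """Rao-Stirling diversity: sum over discipline pairs of p_i * p_j * d_ij.

    proportions: dict mapping discipline -> share of a section's references
                 falling in that discipline (shares should sum to 1)
    distance:    dict mapping (disc_a, disc_b) -> dissimilarity in [0, 1];
                 missing pairs default to maximal dissimilarity 1.0
    """
    score = 0.0
    for a, b in combinations(sorted(proportions), 2):
        d = distance.get((a, b), distance.get((b, a), 1.0))
        score += proportions[a] * proportions[b] * d
    return score

# Toy example: discipline mix of references cited in one article's Methods section.
methods_props = {"biology": 0.5, "computer science": 0.3, "statistics": 0.2}
dist = {
    ("biology", "computer science"): 0.9,
    ("biology", "statistics"): 0.6,
    ("computer science", "statistics"): 0.3,
}
methods_score = rao_stirling(methods_props, dist)  # 0.135 + 0.06 + 0.018 = 0.213
```

Computing this score separately for the references cited in each IMRaD section yields the per-section interdisciplinarity profile that the pattern identification in the study operates on.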