
Latest Articles in Information Processing & Management

SemiBCP-SAM2: Semi-supervised model via enhanced bidirectional copy-paste based on SAM2 for medical image segmentation
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.ipm.2025.104576
Guangqi Yang, Xiaoxin Guo, Haoran Zhang, Zhenyuan Zheng, Hongliang Dong, Songbai Xu
Insufficient use of unlabeled data often leads to inaccurate medical image segmentation, and noise in pseudo-labels can further destabilize training. In this paper, we propose a semi-supervised model based on SAM2 combined with a bidirectional copy-paste mean teacher framework (SemiBCP-SAM2). Specifically, a student model generates segmentation results, which are then used as input prompts for SAM2 to produce additional pseudo-labels, providing auxiliary supervision to guide student learning. We also introduce a Masked Prompt (MP) mechanism that lowers prompt confidence to better handle uncertainty and noise, improving performance in complex or incomplete-information scenarios. Another major contribution is the model's transplantability: by replacing the baseline network in the student-teacher framework, it can enhance the performance of other semi-supervised segmentation networks at low cost. We conduct comparative experiments and performance evaluations of SemiBCP-SAM2 on the ACDC (100 MRI scans) and PROMISE12 (50 MRI scans) datasets. On ACDC, with 5% and 10% labeled data, SemiBCP-SAM2 improves Dice by 0.29% and 1.16%, and Jaccard by 0.39% and 1.84%. On PROMISE12, with 5% and 20% labeled data, it improves Dice by 1.61% and 2.03%, and Jaccard by 1.99% and 2.79%. Source code is released at https://github.com/ydlam/SemiBCP-SAM2.
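The reported gains are in Dice and Jaccard, the two standard overlap metrics for segmentation masks. As a quick reference, here is a minimal NumPy sketch of both; the masks and values are toy illustrations, not data from the paper:

```python
import numpy as np

def dice_jaccard(pred, target, eps=1e-7):
    """Dice = 2|A∩B|/(|A|+|B|); Jaccard = |A∩B|/|A∪B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    jaccard = (inter + eps) / (union + eps)
    return float(dice), float(jaccard)

# Two overlapping 4x4 square masks: 16 pixels each, 9 pixels of overlap.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
d, j = dice_jaccard(a, b)   # Dice = 18/32, Jaccard = 9/23
```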
Citations: 0
From tracking to thinking: Facilitating post-exercise reflection by a large language model-mediated journaling system
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.ipm.2025.104574
Xianglin Zhao, Yucheng Jin, Annie Yan Wang, Ming Zhang
Wearable devices provide rich quantitative data for self-reflection on physical activity. However, users often struggle to derive meaningful insights from these data, highlighting the need for enhanced support. To investigate whether Large Language Models (LLMs) can facilitate this process, we propose and evaluate a human-LLM collaborative reflective journaling paradigm. We developed PaceMind, an LLM-mediated journaling system that implements this paradigm based on a three-stage reflection framework. It can generate data-driven drafts and personalized questions to guide users in integrating exercise data with personal insights. A two-week within-subjects study (N=21) compared the LLM-mediated system with a template-based journaling baseline. The LLM-mediated design significantly improved the perceived effectiveness of reflection support and increased users’ intention to use the system. However, perceived ease of use did not improve significantly. Users appreciated the LLM’s scaffolding for easing data sense-making, but also reported added cognitive work in verifying and personalizing the LLM-generated content. Although objective activity levels did not change significantly, the LLM-mediated condition showed a trend toward more adaptive exercise planning and sustained engagement. Our findings provide empirical evidence for a human-LLM collaborative reflection paradigm in a data-intensive exercise context. They highlight both the potential to deepen user reflection and underscore the critical design challenge of balancing automation with meaningful cognitive engagement and user control.
Citations: 0
A text-based emotional pattern discrepancy aware model for enhanced generalization in depression detection
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.ipm.2025.104575
Haibo Zhang, Zhenyu Liu, Yang Wu, Jiaqian Yuan, Gang Li, Zhijie Ding, Bin Hu
Text-based automated depression detection is an active research topic. However, existing work has not explored the key verbal behaviors involved in depression detection scenarios, resulting in insufficient generalization performance of the models. To address this issue, we propose a depression detection method based on emotional pattern discrepancies, since such discrepancies are among the fundamental features of depression as an affective disorder. Specifically, we propose an Emotional Pattern Discrepancy Aware Depression Detection Model (EPDAD), which employs specially designed modules and loss functions during training. This approach enables the model to dynamically and comprehensively perceive the different emotional patterns that depressed and healthy individuals exhibit in response to various emotional stimuli, enhancing its ability to learn the essential features of depression. We evaluate the generalization performance of our model from cross-dataset and cross-topic perspectives using the MODMA (52 samples) and MIDD (520 samples) datasets. In cross-topic generalization experiments, our method improves the F1 score over the state-of-the-art method by 10.39% on MODMA and 1.77% on MIDD. In cross-dataset generalization experiments, our method improves the F1 score by up to 6.37%. We also compare our model with large language models, and the results indicate it is more effective for depression detection tasks. Our research contributes to the practical application of depression detection models. Our code is available at: https://github.com/hbZhzzz/EPDAD.
Citations: 0
Outlier detector fusing latent representation and fuzzy granule
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.ipm.2025.104571
Xinyu Su, Shihao Wang, Wei Huang, Zheng Li, Hongmei Chen, Zhong Yuan
Unsupervised outlier detection is a critical task in data mining. Two prominent paradigms, fuzzy information granulation and representation learning, have shown promise but face fundamental, opposing limitations. Fuzzy information granulation-based methods excel at modeling data uncertainty but struggle with the curse of dimensionality and noise in high-dimensional spaces. Conversely, representation learning-based methods effectively handle high-dimensional data but often neglect the uncertainty information inherent in data, such as fuzziness. To address these limitations, we propose Latent Representation-based Outlier Detection with fuzzy granule (LROD). In LROD, we utilize representation learning to address the challenges encountered by fuzzy information granulation-based methods in high-dimensional data by deriving a compact and effective representation from the original feature space. The reconstruction error of each sample serves as the first component of the outlier score. This error, derived from representation learning, effectively captures global structural abnormal information in the data. Subsequently, we introduce fuzzy information granulation on this new representation to address data uncertainty. The second component is formed by aggregating abnormal information from fuzzy information granules, which are induced by various attribute subsets. Finally, these two components are fused to produce the final outlier score. Experimental results demonstrate that LROD outperforms 20 competing methods across 15 datasets, achieving improvements of 4.5%, 10.5%, and 3.1% in AUC, AP, and G-mean metrics, respectively, compared to the second-best method, validating its superior effectiveness. This study demonstrates the significant benefits of a hybrid method, providing a new framework for fusing global structural information with local uncertainty measures to achieve state-of-the-art performance in outlier detection. The code is publicly available at https://github.com/Mxeron/LROD.
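The final fusion step, combining a reconstruction-error channel with a fuzzy-granule channel into one outlier score, can be sketched as follows. This is a minimal illustration assuming min-max normalization and equal channel weights; the exact fusion rule, and the per-sample channel values below, are hypothetical rather than taken from the paper:

```python
import numpy as np

def fuse_scores(recon_err, fuzzy_score):
    """Min-max normalize each channel, then sum them into one outlier score."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return norm(recon_err) + norm(fuzzy_score)

# Hypothetical per-sample channel outputs (sample 3 is abnormal on both):
recon_err = np.array([0.20, 0.30, 0.25, 0.90])  # global structural channel
fuzzy     = np.array([0.10, 0.15, 0.10, 0.80])  # fuzzy-granule channel
score = fuse_scores(recon_err, fuzzy)           # sample 3 gets the top score
```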
Citations: 0
Next POI recommendation for random group based on Spatio-Temporal heterogeneous graph
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.ipm.2025.104584
Yan Hai, Jing Wang, Zhizhong Liu, Lingqiang Meng, Ling Shang, Quan Z. Sheng
Next Point-of-Interest (POI) recommendation for random groups is challenging due to the instability of member relationships and the dynamic evolution of member preferences. To address these issues, this work proposes a novel Next POI Recommendation for Random Groups based on a Spatio-Temporal Heterogeneous Graph (NPRRG-STHG). Specifically, NPRRG-STHG constructs a spatio-temporal heterogeneous graph and uses HNode2Vec to learn members' multidimensional preferences. Next, NPRRG-STHG balances preference differences among group members and generates a fitted representation of the random group. Meanwhile, NPRRG-STHG learns comprehensive POI representations from spatio-temporal enhanced POI interaction graphs and POI transfer graphs using an Edge-Enhanced Bipartite Graph Neural Network (EBGNN) and a Spatio-Temporal Graph Convolutional Network (STGCN), respectively. Finally, NPRRG-STHG recommends the next POI that best matches the random group's overall preferences. We validated NPRRG-STHG on three public benchmark datasets (Foursquare, Gowalla, and Yelp) with 124,933 to 860,888 check-in records. Compared to advanced baselines, NPRRG-STHG achieved average improvements of about 21.4% in Precision@K and 36.7% in NDCG@K. Ablation studies further verify the effectiveness of each component. These results demonstrate that NPRRG-STHG provides an effective solution for next POI recommendation in random groups.
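The two evaluation metrics, Precision@K and NDCG@K (with binary relevance), can be computed as in this short sketch; the ranked list and ground-truth check-in set are made-up examples, not data from the paper:

```python
import numpy as np

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(ranked[:k])
              if item in relevant)
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["poi_3", "poi_7", "poi_1", "poi_9", "poi_2"]  # model's top-5 list
relevant = {"poi_7", "poi_2"}                            # ground-truth next POIs
p5 = precision_at_k(ranked, relevant, 5)                 # 2 hits in top-5 -> 0.4
n5 = ndcg_at_k(ranked, relevant, 5)
```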
Citations: 0
Uncertainty-aware facial attribute recognition through joint learning of shared and label-specific attention
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.ipm.2025.104603
Haval I. Hussein, Masoud M. Hassan
Facial attribute recognition (FAR) has garnered significant attention due to its wide-ranging applications in biometrics and security. Traditional FAR methods typically learn shared feature representations across all attributes; however, they often fail to capture the unique characteristics necessary for each attribute, thereby limiting performance. Moreover, these methods frequently neglect uncertainty quantification, which is crucial for enhancing model reliability. To address these issues, we propose a novel FAR model that integrates global and label-specific feature learning with uncertainty quantification. The proposed model utilizes EfficientNetV2B0 as the backbone architecture and introduces two specialized heads: one for refining global features through shared attention and another for learning label-specific attention. These heads were trained jointly, and their predicted probabilities were averaged during inference to improve performance. Experiments conducted on the CelebA and LFWA datasets demonstrated that the proposed model outperformed both baseline and state-of-the-art models, achieving average accuracies of 92.11% and 87.46%, respectively. Moreover, the inclusion of uncertainty quantification provided valuable insights into model confidence, which was accompanied by measurable performance improvements, with average accuracy gains of 0.01% on CelebA and 0.05% on LFWA. Despite the improvement in accuracy, the model maintained a computational efficiency of 3.1 GFLOPs and a parameter count of 24.57 million. Additionally, visualization results using Grad-CAM showed that the attention modules accurately focused on relevant facial regions, thereby validating the interpretability of the model. These results highlight the potential of our approach for accurate, efficient, and interpretable FAR in real-world applications.
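The inference rule described above, averaging the two heads' predicted probabilities per attribute, can be sketched as below. The logits are hypothetical, and sigmoid outputs are assumed for the multi-label setting; this is an illustration of the averaging step, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ensemble_predict(logits_shared, logits_specific, threshold=0.5):
    """Average the two heads' per-attribute probabilities, then threshold."""
    probs = (sigmoid(logits_shared) + sigmoid(logits_specific)) / 2.0
    return probs, (probs >= threshold).astype(int)

# Hypothetical logits for 4 facial attributes from the two heads:
shared   = np.array([2.0, -1.5,  0.2, -3.0])  # shared-attention head
specific = np.array([1.0, -0.5, -0.4, -2.0])  # label-specific head
probs, preds = ensemble_predict(shared, specific)  # only attribute 0 fires
```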
Citations: 0
Graph embedding-based dual-channel quality prediction method for yarn manufacturing system
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-30 · DOI: 10.1016/j.ipm.2025.104592
Xiaohu Zheng, Siqi Du, Zhifeng Liu, Jian Wang, Shunli Hou, Ke Wang
Yarn quality is a critical control object in the textile industry. Accurate quality prediction not only reduces costs but also provides data support for process optimization. However, the high-dimensional and noisy data generated in yarn production degrade model predictive capability, and existing methods struggle to capture the complex interrelationships between processes. To address this, a dual-channel quality prediction method based on graph embedding is proposed. Specifically, a process-oriented heterogeneous network is constructed to represent production-process nodes and their collaborative relationships as a directed heterogeneous graph. Based on this graph structure, a dynamically adjustable embedding module is designed to generate node embeddings with good interpretability for the process flow. A dual-channel architecture is then designed for quality prediction in scenarios with noisy data. The proposed method is experimentally validated on cotton production data from an enterprise, comprising 608 data samples. The results show that the proposed method achieves the best overall performance in predicting different yarn quality indicators, reducing mean square error, mean absolute error, and root mean square error by 45.3%, 34.5%, and 25.2% on average, respectively. This provides a new modeling reference for quality prediction in manufacturing scenarios with clear process flows.
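The three reported error metrics (MSE, MAE, RMSE) are standard regression measures; a minimal sketch follows. The yarn-strength readings below are invented for illustration only:

```python
import numpy as np

def regression_errors(y_true, y_pred):
    """Return (MSE, MAE, RMSE) for a vector of quality predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    return float(mse), float(mae), float(np.sqrt(mse))

# Hypothetical yarn-quality readings vs. model predictions:
y_true = [14.2, 15.0, 13.8, 14.6]
y_pred = [14.0, 15.3, 13.9, 14.4]
mse, mae, rmse = regression_errors(y_true, y_pred)
```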
Citations: 0
Cognitive alignment network: Integrating sensory-perceptual cues for predicate similarity discrimination
IF 6.9 CAS Tier 1 (Management) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-30 DOI: 10.1016/j.ipm.2025.104577
Na Tian , Qihang Jia , Wenna Liu , Xiangfu Ding , Wencang Zhao
Scene graph generation is a crucial task in visual scene understanding and reasoning, but its performance is often limited by the long-tail distribution of relationships. The high semantic similarity among predicates means tail predicates are easily overshadowed by high-frequency head predicates, leading to semantic confusion and degraded recognition of rare relationships. To address this, we propose the Cognitive Alignment Network (CANet), which draws inspiration from the cognitive-psychology mechanism of sensation and perception to alleviate predicate semantic confusion. It explicitly models the reasoning process by separating coarse-grained sensory capture from fine-grained perceptual reasoning. The sensory-sensitive module enhances the extraction of object features from visual stimuli, while the perceptual reinforcement module refines the relationship instances between similar predicates to ensure fine-grained semantic distinctions without altering the underlying meaning. Experiments show that CANet outperforms the current state-of-the-art method by 2.6% in mR@K on Visual Genome, and its Score_wtd improves by 1.4% on Open Images V6. Visualization results also confirm that incorporating cognitive mechanisms into scene graphs can effectively mitigate the long-tail problem and enhance the model's generalization and reasoning capabilities in real-world scenes.
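The headline metric, mean recall at K (mR@K), averages per-predicate recall so that tail predicates weigh as much as head ones. A minimal illustrative computation (our sketch, not the paper's evaluation code; the data layout is an assumption) is:

```python
from collections import defaultdict

def mean_recall_at_k(ground_truth, ranked_predictions, k):
    """mR@K: recall@K computed per predicate class, then averaged.

    ground_truth:       list of (predicate, triple) pairs.
    ranked_predictions: list of (predicate, triple) pairs, sorted by score.
    """
    top_k = set(ranked_predictions[:k])
    hits = defaultdict(int)
    totals = defaultdict(int)
    for pred, triple in ground_truth:
        totals[pred] += 1
        if (pred, triple) in top_k:
            hits[pred] += 1
    # Average recall over predicate classes, not over instances.
    recalls = [hits[p] / totals[p] for p in totals]
    return sum(recalls) / len(recalls)
```

Because the average is taken over predicate classes, a single correctly retrieved rare predicate lifts the score as much as many correct head-predicate instances, which is why mR@K is preferred for long-tail evaluation.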
Citations: 0
SCTNet: Structured and causality-guided spatiotemporal diffusion network for unsupervised traffic accident detection
IF 6.9 CAS Tier 1 (Management) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-30 DOI: 10.1016/j.ipm.2025.104598
Huilin Liu , Xiaolong Hu , Yu Jiang , Tianyue Wan , Wanqi Ma
Accurate detection of traffic accidents is challenging, and existing methods struggle to remain robust under diverse conditions. To address this, we propose a structured and causality-guided spatiotemporal diffusion network (SCTNet) for unsupervised traffic accident detection. The SCTNet framework integrates dual-phase patch sampling (DPPS) to mitigate sampling bias between the training and testing phases. Spatiotemporal causal graph fusion (STCGF) captures the causal dependencies among interacting agents, and a structured spatiotemporal noise (SSTN) mechanism enhances temporal sensitivity and context consistency. The diffusion-based dual-stream design fuses visual and motion information for robust spatiotemporal representation learning. Experiments on two traffic datasets show that SCTNet achieves higher detection accuracy and stronger cross-domain generalization than existing methods. More generally, our study contributes to data-driven decision making and research on intelligent information systems in complex, dynamic transportation environments. The source code is available at https://github.com/Jasoncode0115/SCTNet.
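The SSTN mechanism replaces the i.i.d. Gaussian noise of a standard diffusion model with spatiotemporally structured noise. For orientation, the standard forward process it modifies can be sketched as follows — this is the usual DDPM formulation, and the linear beta schedule is our assumption, not detailed in the abstract:

```python
import math

def linear_alpha_bar(t, T, beta_min=1e-4, beta_max=0.02):
    """Cumulative product of (1 - beta_s) for s = 1..t on a linear beta schedule."""
    abar = 1.0
    for s in range(1, t + 1):
        beta = beta_min + (beta_max - beta_min) * (s - 1) / (T - 1)
        abar *= 1.0 - beta
    return abar

def forward_diffuse(x0, noise, alpha_bar_t):
    """Sample x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, elementwise."""
    a = math.sqrt(alpha_bar_t)
    b = math.sqrt(1.0 - alpha_bar_t)
    return [a * x + b * e for x, e in zip(x0, noise)]
```

A structured-noise variant would draw `noise` from a spatiotemporally correlated distribution rather than i.i.d. samples, while the mixing equation stays the same.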
Citations: 0
Graph attention convolutional networks for interpretable multi-hop knowledge graph reasoning
IF 6.9 CAS Tier 1 (Management) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-30 DOI: 10.1016/j.ipm.2025.104581
Hao Liu , Dong Li , Bing Zeng , Haopeng Ren
Effective multi-hop reasoning over knowledge graphs is critical for knowledge completion, yet prior methods often struggle to model relation dependencies and capture neighborhood-context interactions, which limits both path interpretability and predictive performance. To adequately model the interaction between neighborhood information and context, we introduce a graph attention convolutional (GAC) mechanism that aggregates and updates node information within the local first-order neighborhood. We then employ attention mechanisms to generate entity and relation reasoning contexts, and construct GAC-based policy networks to reinforce the interaction between these contexts and their corresponding neighborhoods. Extensive experiments on five knowledge graphs demonstrate the effectiveness of our method, which achieves notable improvements on FB15K-237, including a 7.6% relative improvement in Hits@1, a 14.6% increase in MRR, and a 6.9% enhancement in path interpretability.
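The core GAC operation — attention-weighted aggregation over a node's first-order neighborhood — can be sketched as below. This is our illustrative reconstruction, not the paper's implementation; the scoring function and the self-loop handling are assumptions:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attention_aggregate(node, neighbors, features, score):
    """Update `node` as an attention-weighted sum over its first-order
    neighborhood (self-loop included), GAT-style."""
    nbrs = [node] + list(neighbors)
    weights = softmax([score(features[node], features[j]) for j in nbrs])
    dim = len(features[node])
    out = [0.0] * dim
    for w, j in zip(weights, nbrs):
        for d in range(dim):
            out[d] += w * features[j][d]
    return out
```

With a constant score the weights are uniform and the update reduces to mean pooling over the neighborhood; a learned score (e.g. a small MLP over concatenated node features) recovers attention.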
Citations: 0