
Latest publications in Information Processing & Management

Developing Fairness, Accuracy, and Serendipity Objective Functions for Recommendation System and Establishing Trade-off through Multi-Objective Evolutionary Optimization
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-06-01. Epub Date: 2026-01-03. DOI: 10.1016/j.ipm.2025.104604
Shresth Khaitan , Rahul Shrivastava
Balancing accuracy against fairness and serendipity remains a challenging trade-off in commercial recommender systems. Recent multi-objective recommendation methods have often overlooked the need to surface pleasantly surprising items, which would mitigate popularity bias and ensure the equitable inclusion of items in the recommendation list. Hence, this study develops objective functions for Fairness, Accuracy, and Serendipity and integrates them into a unified Multi-Objective Evolutionary Algorithm-Based Recommendation Framework (FAS-MOEA). The proposed accuracy objective ensures the balanced inclusion of long-tail and popular items through weighted evaluation. The fairness objective incorporates genre-aware fairness, aligning recommendation distributions with both global and user-specific genre profiles. The serendipity objective learns implicit, context-sensitive preferences for novel yet relevant items. Finally, the framework establishes a balanced trade-off among these competing objectives to generate Pareto-optimal recommendations. Validation demonstrates substantial improvements over competing models on three benchmark datasets (MovieLens 100K, MovieLens 1M, and Amazon Electronics 5-core), attaining gains of 27.21% in F1-score, 8.44% in fairness, and 16.66% in serendipity score. The generated Pareto front exhibits the model's ability to navigate trade-offs among these competing goals and produce accurate, fair, and pleasantly surprising recommendations.
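To make the Pareto-optimality criterion the abstract refers to concrete, here is a minimal sketch of extracting a Pareto front from (accuracy, fairness, serendipity) triples. This is illustrative only: the paper's actual FAS-MOEA evolutionary operators and objective definitions are not reproduced, and the scores below are made up.

```python
# Minimal Pareto-front extraction over three recommendation objectives
# (accuracy, fairness, serendipity), all higher-is-better. Illustrative
# values; not the paper's FAS-MOEA implementation.

def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of objective tuples."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

candidates = [
    (0.80, 0.50, 0.30),  # accurate but less fair
    (0.60, 0.70, 0.40),  # balanced
    (0.55, 0.65, 0.35),  # dominated by the balanced solution
    (0.40, 0.60, 0.70),  # serendipitous
]
front = pareto_front(candidates)
```

A multi-objective evolutionary algorithm such as FAS-MOEA evolves a population toward this non-dominated set rather than collapsing the three objectives into one scalar score.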
Citations: 0
LS-BiLLMs: Label supervised bi-directional large language models for token- and sequence-level information extraction
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-06-01. Epub Date: 2026-01-07. DOI: 10.1016/j.ipm.2025.104568
Zongxi Li , Xianming Li , Jing Li , Haoran Xie , Fu Lee Wang , Qing Li
Large Language Models (LLMs) have achieved remarkable generative capabilities but often underperform in sequence- and token-level classification tasks due to the causal masking constraint in decoder-only architectures. This unidirectional attention prevents tokens from accessing bidirectional context, limiting representation learning for discriminative prediction. We propose Label-Supervised Bi-directional Large Language Models (LS-BiLLMs), a lightweight adaptation method that (1) employs direct label supervision to align latent representations with task-specific labels and (2) removes the causal mask to enable bidirectional information flow. Implemented with LoRA-based fine-tuning, LS-BiLLMs efficiently adapt compact open-weight LLMs, such as LLaMA, Qwen, and Mistral, for classification without complex prompt engineering. Experiments across text classification, named-entity recognition, and commonsense reasoning benchmarks show consistent gains over instruction-tuned and encoder-based baselines. While unmasking sacrifices autoregressive generation, it substantially enhances discriminative understanding and efficiency. These findings reveal how causal directionality in attention mechanisms affects representational learning and reasoning in modern LLMs.
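The causal-vs-bidirectional masking distinction at the heart of LS-BiLLMs can be sketched as attention masks. The helper names below are hypothetical, and the real change happens inside the transformer's attention layers, but the matrices show exactly what "removing the causal mask" means.

```python
# Sketch of the masking difference LS-BiLLMs exploits: a decoder-only LLM
# uses a causal mask (token i may attend only to positions j <= i), while
# removing it yields full bidirectional attention. Hypothetical helpers.

def causal_mask(n):
    """n x n matrix: 1 where attention is allowed, 0 where blocked."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def bidirectional_mask(n):
    """Unmasked variant: every token can attend to every other token."""
    return [[1] * n for _ in range(n)]

n = 4
c, b = causal_mask(n), bidirectional_mask(n)
# Under the causal mask, the first token sees only itself.
visible_to_first = sum(c[0])
```

This is why unmasking helps discriminative tasks (each token's representation can use the full sequence) while sacrificing autoregressive generation, as the abstract notes.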
Citations: 0
Topic propagation prediction model based on topic lifecycle and user social circle
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-06-01. Epub Date: 2026-01-05. DOI: 10.1016/j.ipm.2025.104558
Chaolong Jia, Kangle Chen, Guoding Wang, Guicai Deng, Rong Wang, Tun Li, Yunpeng Xiao
This paper presents a topic propagation prediction model that jointly considers topic lifecycle stages and dynamic social circles. A time-window-based topic representation captures lifecycle-aware evolution patterns, while SC2vec embeds dynamic social circle structures based on interaction strength and topology. These features are fused via a Temporal Graph Convolutional Network (TGCN) to model spatiotemporal propagation dynamics. Experiments on Weibo and Twitter datasets, covering over 1.5 million user interactions across four real-world trending topics, show that the proposed model consistently outperforms recent baselines in MAE and RMSE, effectively mitigating data sparsity and improving prediction accuracy.
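The "time-window-based topic representation" can be illustrated with a toy bucketing of timestamped interactions, which makes lifecycle stages (rise, peak, decay) visible as a count series. The window size and feature choice here are illustrative, not the paper's actual design.

```python
# Toy sketch of a time-window topic representation: bucket timestamped
# events into fixed windows so the topic lifecycle appears as a series.
# Window size and horizon are made-up illustrative values.

def windowed_counts(timestamps, window, horizon):
    """Count events per window over [0, horizon)."""
    n_windows = horizon // window
    counts = [0] * n_windows
    for t in timestamps:
        if 0 <= t < horizon:
            counts[t // window] += 1
    return counts

# Events clustered early and then fading: a short topic lifecycle.
events = [1, 2, 2, 3, 5, 6, 11, 14, 25]
series = windowed_counts(events, window=10, horizon=30)
```

A lifecycle-aware model would consume such per-window features (here a declining series) rather than raw timestamps.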
Citations: 0
PrQAC: Prompting LLaMA3 with question-aware image captions and answer candidates for knowledge-based VQA
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-06-01. Epub Date: 2026-01-10. DOI: 10.1016/j.ipm.2025.104606
Peichao Jiang , Mayire Ibrayim , Linying Wang , Wenjie Xu
The Knowledge-Based Visual Question Answering (VQA) task requires cross-modal reasoning that integrates external knowledge. Current studies commonly employ large language models (LLMs) as implicit knowledge sources to retrieve the information required for answering questions. However, we argue that these approaches still struggle to integrate visual information effectively, and therefore fail to fully exploit the reasoning capabilities of LLMs. To address this, we propose PrQAC (Prompting LLaMA3 with Question-Aware Image Captions and Answer Candidates), a new prompting framework for Knowledge-Based VQA. It consists of three key stages: (1) Image Caption Generation: a frozen multimodal large language model (MLLM) generates two types of captions, generic captions rich in visual details and question-aware captions containing relevant knowledge. (2) Candidate Answer Generation: a generic VQA model is trained using question-aware captions and Knowledge-Based VQA datasets to generate high-quality in-context examples and candidate answers. (3) In-Context Prompt Construction: the generated elements are combined into a structured prompt that guides the LLM toward the final answer. We replace GPT-3 with LLaMA3 to reduce computational cost. Experimental results demonstrate that PrQAC outperforms state-of-the-art methods by 1.79% on the OK-VQA dataset (14k samples), and by 4.10% (Direct Answer) and 5.57% (Multiple Choice) on the A-OKVQA dataset (25k samples).
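The in-context prompt construction stage amounts to composing the generated pieces into one structured prompt. A minimal sketch follows; the template wording, function name, and example content are hypothetical, not taken from the paper.

```python
# Sketch of PrQAC-style prompt construction: combine a generic caption,
# a question-aware caption, and candidate answers into one structured
# prompt for the LLM. Template wording is a hypothetical illustration.

def build_prompt(question, generic_caption, qa_caption, candidates):
    lines = [
        f"Image description: {generic_caption}",
        f"Relevant context: {qa_caption}",
        f"Question: {question}",
        "Candidate answers: " + ", ".join(candidates),
        "Answer:",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    question="What sport is being played?",
    generic_caption="A player swings a wooden bat on a grass field.",
    qa_caption="The diamond-shaped field and bat suggest baseball.",
    candidates=["baseball", "cricket", "softball"],
)
```

The point of the design is that the LLM never sees the image: all visual evidence reaches it as text (the two captions), with candidates constraining the answer space.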
Citations: 0
CooSBR: Rethinking neighborhood integration for session-based recommendation
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-06-01. Epub Date: 2026-01-13. DOI: 10.1016/j.ipm.2026.104625
Yuhan Yang , Jie Zou , Guojia An , Weikang Guo , Mingshi Yan , Yang Yang , Heng Tao Shen
Most existing work on session-based recommendation leverages neighborhoods to refine the target session representation and improve recommendation performance. However, the potential of neighborhoods is still not fully exploited due to two main limitations. First, most existing methods overlook the cooperative relationships between neighborhoods derived from different perspectives. Second, they often fail to preserve the self-anchoring property of the current session representation when integrating neighborhoods from multiple perspectives. To address these limitations, we propose a novel session-based recommendation framework named CooSBR. The model consists of two core components: the neighbor cooperation module and the session-centric diffusion enhancement module. In the neighbor cooperation module, mutual contrastive learning directly models the cooperative relationship between neighborhood representations from different perspectives, while pivot contrastive learning indirectly strengthens this cooperation by aligning each neighborhood view with a pivot embedding that integrates the target session and that view. In the session-centric diffusion enhancement module, a multi-conditional diffusion process progressively integrates multi-perspective neighborhood information while maintaining the session's inherent semantics and preserving its self-anchoring property. Extensive experiments conducted on three real-world datasets demonstrate the effectiveness of CooSBR, yielding average improvements of 5.10% (HR@10), 5.25% (HR@20), 8.80% (MRR@10), and 8.95% (MRR@20).
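The contrastive-learning idea the abstract leans on can be shown with a minimal InfoNCE-style loss: an anchor representation is pulled toward a positive view and pushed away from negatives. The vectors and temperature below are toy values, and this is the generic InfoNCE form, not CooSBR's specific mutual or pivot variants.

```python
import math

# Minimal InfoNCE-style contrastive loss: -log of the softmax weight the
# anchor assigns to its positive among {positive} ∪ negatives. Generic
# sketch of the objective family CooSBR's modules build on; toy vectors.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.5):
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor = [1.0, 0.0]
# Loss is small when the positive is near the anchor, large otherwise.
loss_aligned = info_nce(anchor, [0.9, 0.1], [[-1.0, 0.0]])
loss_misaligned = info_nce(anchor, [-1.0, 0.0], [[0.9, 0.1]])
```

In CooSBR's setting the "views" would be neighborhood representations of the same session derived from different perspectives.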
Citations: 0
Beyond efficient fine-tuning: Efficient hybrid fine-tuning of CLIP models guided by explainable ViT attention
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-06-01. Epub Date: 2026-01-20. DOI: 10.1016/j.ipm.2026.104628
Hui Ye , Xuri Ge , Junqi Wang , Junchen Fu , Xin Xin , Jiao Xue , Yao Chen , Pengjie Ren , Zhumin Chen
To address the inefficiency of fully fine-tuning Contrastive Language-Image Pre-training (CLIP) models and the performance loss of adapter-based methods, we propose a novel efficient hybrid fine-tuning strategy (HFLIP) that balances efficiency and performance. HFLIP fully fine-tunes a small set of key ViT blocks, selected via machine-learning methods, under interpretable semantic attention supervision on selected transformer heads, while keeping the remaining blocks adapter-based for efficiency. Specifically, HFLIP introduces two key components: (1) a Dynamic Block-selection Genetic Algorithm (DBGA) that automatically selects a small subset of critical blocks in the ViT for full tuning while keeping the rest adapter-tuned, ensuring a proper trade-off between fine-tuning effectiveness and efficiency; and (2) Clustering-based Head-selection with Explainable-attention Guidance (CHEG), in which hierarchical clustering identifies representative attention heads that are then fine-tuned under guidance from explainable attention maps, encouraging semantically consistent and globally diverse attention patterns. Extensive experiments on multiple downstream tasks show that HFLIP achieves comparable or even better performance than full fine-tuning while updating only 30% of the training parameters and reducing GPU memory consumption by about 16%. In addition, HFLIP makes the CLIP-based ViT attention mechanism more interpretable than both the pretrained CLIP and other fine-tuned variants. We release our code at https://github.com/huiye8870/HFLIP.
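The block-selection idea behind DBGA can be sketched as a genetic algorithm over a bitmask: each bit marks a ViT block as fully fine-tuned (1) or adapter-tuned (0), and fitness trades a per-block accuracy proxy against parameter cost. Everything below (the fitness form, gains, and operators) is a made-up illustration; the paper's actual DBGA is not reproduced.

```python
import random

# Toy genetic algorithm over block-selection bitmasks, illustrating the
# DBGA idea: reward blocks that help the task, penalize fully tuning too
# many blocks. Fitness, gains, and hyperparameters are hypothetical.

N_BLOCKS = 12
random.seed(0)

def fitness(mask, block_gain, cost_weight=0.5):
    gain = sum(g for g, bit in zip(block_gain, mask) if bit)
    cost = sum(mask) / len(mask)          # fraction of blocks fully tuned
    return gain - cost_weight * cost

def evolve(block_gain, pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(N_BLOCKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, block_gain), reverse=True)
        parents = pop[: pop_size // 2]    # elitism: keep the top half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_BLOCKS)
            child = a[:cut] + b[cut:]     # one-point crossover
            child[random.randrange(N_BLOCKS)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, block_gain))

# Pretend the last few blocks matter most for the downstream task.
gains = [0.01] * 8 + [0.2, 0.2, 0.3, 0.3]
best = evolve(gains)
```

With this toy fitness, selecting only the high-gain blocks beats both extremes (tuning none, or tuning all twelve), which is exactly the trade-off DBGA is designed to navigate.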
Citations: 0
SEHLP: A summary-enhanced large language model for financial report sentiment analysis via hybrid LoRA and dynamic prefix tuning
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-06-01. Epub Date: 2026-01-20. DOI: 10.1016/j.ipm.2026.104639
Haozhou Li, Qinke Peng, Xu Mou, Zeyuan Zeng, Ruimeng Li, Jinzhi Wang, Wentong Sun
Financial sentiment analysis (FSA) has garnered considerable attention for its potential to detect bullish and bearish sentiments that drive stock market fluctuations. Nonetheless, extracting salient sentiments from analyst reports encounters two main challenges. First, the highly specialized terms and expressions prevalent in these reports make it difficult for general Large Language Models (LLMs) to interpret financial expertise. Second, sentiment cues are implicit and dispersed across long-range dependencies, whereas existing LLM-based FSA methods relying on a single fine-tuning strategy lack fine-grained control during adaptation, thus leading to key information loss. To tackle these issues, we propose SEHLP, the first LLM that integrates summary information with a hybrid adaptation strategy that combines Low-rank Adaptation (LoRA) and dynamic Prefix Tuning to enhance FSA. Specifically, we employ prompt engineering on Qwen-2.5-14B to generate concise summaries that distill salient insights of each report, and construct FinLLaMA as SEHLP’s backbone through Supervised Fine-tuning (SFT) on extensive domain-specific instructions, enhancing financial knowledge comprehension. To inject summary information and enable fine-grained control during fine-tuning, we propose a hybrid adaptation strategy that concatenates LoRA-updated attention projections with dynamic summary-enhanced key-value prefixes, thereby fully utilizing sentiment cues in analyst reports and their summaries. Moreover, we construct a large-scale LCFR-Instruct corpus with 16,912 samples to address the lack of high-quality FSA instruction data. Comprehensive experiments on the LCFR-Instruct and FinTHUC-Instruct benchmark datasets indicate that SEHLP, with only 1.3B parameters, consistently surpasses competing LLMs, exhibiting ACC gains of 1.89% and 1.59% over the larger FinGPT-7B model on both datasets while maintaining superior efficiency. 
Our code is publicly accessible at https://github.com/lhz9999/SEHLP.
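The LoRA half of SEHLP's hybrid strategy rests on a standard identity: instead of updating a full weight matrix W, train a low-rank pair (A, B) and use the effective weight W + (alpha/r)·BA. The sketch below shows that arithmetic with toy 2x2 matrices; the dynamic summary-enhanced prefixes the paper adds are not shown.

```python
# Sketch of the standard LoRA update SEHLP builds on:
#   W_eff = W + (alpha / r) * B @ A
# with W frozen, A of shape (r x d_in), B of shape (d_out x r).
# Dimensions and values are toy illustrations.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha=8, r=2):
    delta = matmul(B, A)                  # low-rank update, (d_out x d_in)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]              # frozen 2x2 base weight
A = [[0.1, 0.0], [0.0, 0.1]]              # trainable, r x d_in
B = [[0.1, 0.0], [0.0, 0.1]]              # trainable, d_out x r
W_eff = lora_effective_weight(W, A, B)
```

Only A and B receive gradients, which is why LoRA-style adaptation trains a small fraction of the parameters of the matrices it touches.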
Citations: 0
CausalLog: Log parsing using LLMs with causal intervention for bias mitigation CausalLog:使用带有因果干预的llm进行日志解析,以减轻偏差
IF 6.9 1区 管理学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-06-01 Epub Date: 2026-01-21 DOI: 10.1016/j.ipm.2026.104609
Yuan Tian, Shi Ying, Tiangang Li
Log parsing transforms unstructured log messages into structured formats, serving as a critical step for various log analysis tasks. Large language models (LLMs) have recently shown strong performance in this task. However, they tend to rely on their experiential knowledge as shortcuts, introducing bias and reducing parsing accuracy. To address this issue, we propose CausalLog, a lightweight and flexible debiasing framework for log parsing. CausalLog is inspired by the Structural Causal Model and the front-door adjustment principle. On this basis, counterfactual rewriting is implemented through tailored prompt engineering, aiming to mitigate biases without accessing LLM internals. In addition, n-gram statistics of log data are integrated as a bias-free reference for an adjustment score, which helps improve both parsing accuracy and interpretability. Experiments on public log datasets show that CausalLog outperforms state-of-the-art methods, providing observational evidence that it improves both log grouping and template extraction accuracy.
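The n-gram reference signal mentioned above can be illustrated on a toy corpus: n-grams that recur across messages score high (template-like), while one-off parameters drag the score down. A rough sketch assuming whitespace tokenization, not CausalLog's actual adjustment score:

```python
from collections import Counter

def ngram_counts(messages, n=2):
    """Count token n-grams across a corpus of log messages."""
    counts = Counter()
    for msg in messages:
        toks = msg.split()
        for i in range(len(toks) - n + 1):
            counts[tuple(toks[i:i + n])] += 1
    return counts

def template_score(message, counts, n=2):
    """Average corpus frequency of the message's n-grams: a simple,
    model-free reference for how 'template-like' each part of the text is."""
    toks = message.split()
    grams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    if not grams:
        return 0.0
    return sum(counts[g] for g in grams) / len(grams)

logs = [
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.2 closed",
    "Connection from 10.0.0.3 closed",
    "Disk failure on node-7",
]
c = ngram_counts(logs)
# recurring template words score high; the unseen IP contributes zero-count n-grams
print(template_score("Connection from 10.0.0.1 closed", c))  # 1.666...
print(template_score("Connection from 10.0.0.9 closed", c))  # 1.0
```

Because the statistic comes straight from the log corpus rather than from the LLM, it carries none of the model's experiential shortcuts, which is the sense in which the paper treats it as a bias-free reference.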
{"title":"CausalLog: Log parsing using LLMs with causal intervention for bias mitigation","authors":"Yuan Tian,&nbsp;Shi Ying,&nbsp;Tiangang Li","doi":"10.1016/j.ipm.2026.104609","DOIUrl":"10.1016/j.ipm.2026.104609","url":null,"abstract":"<div><div>Log parsing transforms unstructured log messages into structured formats, serving as a critical step for various log analysis tasks. Large language models (LLMs) have recently shown strong performance in this task. However, they tend to rely on their experiential knowledge as shortcuts, introducing bias and reducing parsing accuracy. To address this issue, we propose CausalLog, a lightweight and flexible debiasing framework for log parsing. CausalLog is inspired by the Structural Causal Model and the front-door adjustment principle. On this basis, counterfactual rewriting is implemented through tailored prompt engineering, aiming to mitigate biases without accessing LLM internals. In addition, n-gram statistics of log data are integrated as a bias-free reference for an adjustment score, which helps improve both parsing accuracy and interpretability. Experiments on public log datasets show that CausalLog outperforms state-of-the-art methods, providing observational evidence that it improves both log grouping and template extraction accuracy.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 4","pages":"Article 104609"},"PeriodicalIF":6.9,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146023258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Applying artificial neural networks, symmetrical and asymmetrical approaches to measure the nexus of digital competences among European educators 应用人工神经网络,对称和不对称的方法来衡量欧洲教育工作者之间的数字能力关系
IF 6.9 1区 管理学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-06-01 Epub Date: 2026-01-07 DOI: 10.1016/j.ipm.2026.104610
Muhammad Zaheer Asghar , Elena Barbera , Javed Iqbal , Ercan Akpınar , Amir Narimani
This study explores the interrelationships among digital tool usage, digital content creation, technology-supported collaboration, and digital assessment practices among 250 in-service teachers from Türkiye, Portugal, Romania, and Spain. Training participants completed structured and open-ended questions as part of a European-funded program delivered through a gamified Learning Management System designed to enhance collaboration. A comprehensive mixed-methods approach integrated Partial Least Squares Structural Equation Modeling (PLS-SEM), Multi-Group Analysis (MGA), Artificial Neural Networks (ANN), and fuzzy-set Qualitative Comparative Analysis (fsQCA), providing complementary perspectives on the data-driven associations among digital competence dimensions. PLS-SEM results revealed significant positive correlations among digital tool usage, content creation, collaboration, and assessment. Digital collaboration (β = 0.513) and content creation (β = 0.202) were positively associated with digital assessment, with collaboration (β = 0.370) showing a stronger associative pathway between tool usage and assessment than content creation (β = 0.139). fsQCA identified that the concurrent presence of tool usage, content creation, and collaboration was linked to higher digital assessment outcomes (consistency = 0.847). ANN sensitivity analysis highlighted the relative importance of collaboration (1.02) compared with tool usage (0.53) and content creation (0.24) in revealing associations within digital assessment practices. These multi-method correlational findings underscore the central role of technology-supported collaboration in integrating digital competences and provide data-driven insights for educational policy and teacher professional development.
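The fsQCA consistency figure quoted above (0.847) follows the standard fuzzy-set sufficiency formula, sum(min(x, y)) / sum(x), which is easy to reproduce. A sketch with invented membership scores — the study's actual data is not reproduced here:

```python
def fs_consistency(x, y):
    """Fuzzy-set consistency of 'X is sufficient for Y':
    sum(min(x_i, y_i)) / sum(x_i)."""
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(x)
    return num / den if den else 0.0

# Toy memberships: the combined condition (e.g. the minimum of tool usage,
# content creation, and collaboration per case) vs. the assessment outcome.
condition = [0.8, 0.6, 0.9, 0.3, 0.7]
outcome   = [0.9, 0.5, 0.8, 0.4, 0.7]
print(round(fs_consistency(condition, outcome), 3))  # 0.939
```

Consistency near 1 means cases' condition memberships rarely exceed their outcome memberships, i.e. the configuration behaves as a near-sufficient condition.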
{"title":"Applying artificial neural networks, symmetrical and asymmetrical approaches to measure the nexus of digital competences among European educators","authors":"Muhammad Zaheer Asghar ,&nbsp;Elena Barbera ,&nbsp;Javed Iqbal ,&nbsp;Ercan Akpınar ,&nbsp;Amir Narimani","doi":"10.1016/j.ipm.2026.104610","DOIUrl":"10.1016/j.ipm.2026.104610","url":null,"abstract":"<div><div>This study explores the interrelationships among digital tool usage, digital content creation, technology-supported collaboration, and digital assessment practices among 250 in-service teachers from Türkiye, Portugal, Romania, and Spain. Training participants completed structured and open-ended questions as part of a European-funded program delivered through a gamified Learning Management System designed to enhance collaboration. A comprehensive mixed-methods approach integrated Partial Least Squares Structural Equation Modeling (PLS-SEM), Multi-Group Analysis (MGA), Artificial Neural Networks (ANN), and fuzzy-set Qualitative Comparative Analysis (fsQCA), providing complementary perspectives on the data-driven associations among digital competence dimensions. PLS-SEM results revealed significant positive correlations among digital tool usage, content creation, collaboration, and assessment. Digital collaboration (β = 0.513) and content creation (β = 0.202) were positively associated with digital assessment, with collaboration (β = 0.370) showing a stronger associative pathway between tool usage and assessment than content creation (β = 0.139). fsQCA identified that the concurrent presence of tool usage, content creation, and collaboration was linked to higher digital assessment outcomes (consistency = 0.847). ANN sensitivity analysis highlighted the relative importance of collaboration (1.02) compared with tool usage (0.53) and content creation (0.24) in revealing associations within digital assessment practices. 
These multi-method correlational findings underscore the central role of technology-supported collaboration in integrating digital competences and provide data-driven insights for educational policy and teacher professional development.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 4","pages":"Article 104610"},"PeriodicalIF":6.9,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Formal modeling and discovery of cross-organizational business processes: A privacy-preserving two-stage approach 跨组织业务流程的正式建模和发现:一种保护隐私的两阶段方法
IF 6.9 1区 管理学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-06-01 Epub Date: 2026-01-06 DOI: 10.1016/j.ipm.2025.104585
Wei Liu , Ge Xin , Xiaoliang Chen , Xu Gu , Duoqian Miao , Peng Lu , Lujia Li
To address the limitations of traditional process mining techniques in meeting the practical requirements of cross-organizational business processes, this paper proposes a dedicated modeling and mining method for such settings. First, we introduce HTC_WF_Net (Hierarchical Temporal Collaborative Workflow Net), an extension of workflow nets that incorporates nested transitions, temporal attributes, and collaboration-related places across organizations. Next, a hierarchical construction method for cross-organizational business event logs is proposed, together with the definition of corresponding collaboration patterns. Finally, a privacy-preserving cross-organizational process discovery method (COPM, Cross-Organizational Process Mining) is developed based on HTC_WF_Net and the hierarchical logs. Experimental results demonstrate the effectiveness of the proposed approach. Compared with several baseline methods on four real-world and two simulated event log datasets, the approach achieves higher model precision and F-score, along with improved readability and mining efficiency.
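The workflow-net semantics underlying HTC_WF_Net can be illustrated with the basic Petri-net firing rule, where a shared place models a cross-organizational handoff. The two-transition net and all names below are invented for illustration, not the paper's formalism:

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds a token."""
    pre, _ = transition
    return all(marking.get(p, 0) >= 1 for p in pre)

def fire(marking, transition):
    """Fire an enabled transition: consume one token from each input
    place and produce one token in each output place."""
    pre, post = transition
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return m

# Toy two-organization handoff: org A's task t1 deposits a token in the
# shared collaboration place 'msg', which org B's task t2 must consume.
t1 = ({"a_start"}, {"a_done", "msg"})
t2 = ({"b_start", "msg"}, {"b_done"})

m = {"a_start": 1, "b_start": 1}
assert enabled(m, t1) and not enabled(m, t2)  # B is blocked until A delivers
m = fire(m, t1)
assert enabled(m, t2)
m = fire(m, t2)
print(m)
```

The collaboration place enforces the ordering constraint without either organization exposing its internal process, which is the intuition behind mining each organization's log hierarchically and composing them through shared places.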
{"title":"Formal modeling and discovery of cross-organizational business processes: A privacy-preserving two-stage approach","authors":"Wei Liu ,&nbsp;Ge Xin ,&nbsp;Xiaoliang Chen ,&nbsp;Xu Gu ,&nbsp;Duoqian Miao ,&nbsp;Peng Lu ,&nbsp;Lujia Li","doi":"10.1016/j.ipm.2025.104585","DOIUrl":"10.1016/j.ipm.2025.104585","url":null,"abstract":"<div><div>To address the limitations of traditional process mining techniques in meeting the practical requirements of cross-organizational business processes, this paper proposes a dedicated modeling and mining method for such settings. First, we introduce HTC_WF_Net (Hierarchical Temporal Collaborative Workflow Net), an extension of workflow nets that incorporates nested transitions, temporal attributes, and collaboration-related places across organizations. Next, a hierarchical construction method for cross-organizational business event logs is proposed, together with the definition of corresponding collaboration patterns. Finally, a privacy-preserving cross-organizational process discovery method (COPM, Cross-Organizational Process Mining) is developed based on HTC_WF_Net and the hierarchical logs. Experimental results demonstrate the effectiveness of the proposed approach. 
Compared with several baseline methods on four real-world and two simulated event log datasets, the approach achieves higher model precision and F-score, along with improved readability and mining efficiency.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 4","pages":"Article 104585"},"PeriodicalIF":6.9,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0