Pub Date: 2026-01-31 | DOI: 10.1016/j.ins.2026.123179
Title: The criterion-oriented three-way decision models with generalized risk function
Authors: Xia Lin, Kai Zhang, Ligang Zhou
Information Sciences, Volume 739, Article 123179
The criterion-oriented three-way decision model is an effective tool for simultaneously obtaining preference rankings and three-way classifications in a multi-criteria environment. In view of this, this paper analyzes the research framework of criterion-oriented three-way decision models from the perspective of four generalized risk functions, providing a principled way to obtain both preference ranking and three-way classification results in multi-criteria decision-making. First, based on a criterion-oriented benefit fuzzy concept, the paper employs a generalized risk function (comprising both the relative loss and relative utility functions) to analyze decision-threshold construction and develops two types of three-way decision models based on this concept. Then, by comparison with the benefit fuzzy concept and in light of practical semantics, the paper introduces the criterion-oriented cost fuzzy concept; building on it, and again using the relative loss and relative utility functions, two further three-way decision models are proposed. The relationships and distinctions among the four models are then analyzed in detail. The results reveal that, across the different models, when the monotonically increasing function f and the monotonically decreasing function g take the same values, respectively, the four types of risk functions satisfy the following regularities: α₁ = α₂, α₃ = α₄; β₁ = β₂, β₃ = β₄; γ₁ = γ₂, γ₃ = γ₄; and α₁ + β₃ = 1, γ₁ + γ₃ = 1. Finally, using a specific relative risk function, the paper demonstrates the effectiveness of the proposed models through case studies and related discussions.
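The acceptance/rejection thresholds discussed above drive the standard three-way partition into positive, boundary, and negative regions. A minimal sketch (not the paper's model; the threshold values and membership degrees here are hypothetical):

```python
# Three-way classification sketch: an object with membership mu is
# accepted if mu >= alpha, rejected if mu <= beta, deferred otherwise.
def three_way_classify(memberships, alpha=0.7, beta=0.3):
    """Partition items into (positive, boundary, negative) regions."""
    assert 0 <= beta < alpha <= 1, "thresholds must satisfy 0 <= beta < alpha <= 1"
    pos, bnd, neg = [], [], []
    for item, mu in memberships.items():
        if mu >= alpha:
            pos.append(item)      # accept
        elif mu <= beta:
            neg.append(item)      # reject
        else:
            bnd.append(item)      # defer decision
    return pos, bnd, neg

print(three_way_classify({"a": 0.9, "b": 0.5, "c": 0.1}))  # (['a'], ['b'], ['c'])
```

The regularities above (e.g. α₁ + β₃ = 1) relate the thresholds of the benefit- and cost-oriented models; this sketch only illustrates how a single (α, β) pair induces the three regions.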
Pub Date: 2026-01-31 | DOI: 10.1016/j.ins.2026.123180
Title: Efficient privacy-preserving sparse matrix-vector multiplication using homomorphic encryption
Authors: Yang Gao, Gang Quan, Wujie Wen, Scott Piersall, Qian Lou, Liqiang Wang
Information Sciences, Volume 739, Article 123180
Sparse matrix–vector multiplication (SpMV) is a fundamental operation in scientific computing, data analysis, and machine learning. When the data being processed are sensitive, preserving privacy becomes critical, and homomorphic encryption (HE) has emerged as a leading approach for addressing this challenge. Although HE enables privacy-preserving computation, its application to SpMV has remained largely unaddressed. To the best of our knowledge, this paper presents the first framework that efficiently integrates HE with SpMV, addressing the dual challenges of computational efficiency and data privacy. In particular, we introduce a novel compressed matrix format, named Compressed Sparse Sorted Column (CSSC), which is specifically designed to optimize encrypted sparse matrix computations. By preserving sparsity and enabling efficient ciphertext packing, CSSC significantly reduces storage and computational overhead. Our experimental results on real-world datasets demonstrate that the proposed method achieves significant gains in both processing time and memory usage. This study advances privacy-preserving SpMV and lays the groundwork for secure applications in federated learning, encrypted databases, scientific computing, and beyond.
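For context on the underlying operation, here is plaintext SpMV in the familiar CSR (Compressed Sparse Row) layout — an illustrative baseline only, not the paper's CSSC format, which reorders columns for ciphertext packing:

```python
# CSR SpMV sketch: y = A @ x, where A is stored as three flat arrays.
# values[k] is the k-th nonzero, col_idx[k] its column, and
# row_ptr[i]..row_ptr[i+1] delimits row i's nonzeros.
def csr_spmv(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 2x3 matrix [[1, 0, 2], [0, 3, 0]] times x = [1, 1, 1]
print(csr_spmv([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

Under HE, each multiply-accumulate above becomes a ciphertext operation, which is why a packing-friendly layout like CSSC matters.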
Pub Date: 2026-01-30 | DOI: 10.1016/j.ins.2026.123165
Title: Beyond static cues: Detecting fine-grained forgeries via temporal inconsistencies in facial dynamics
Authors: Peixu Zhang, Mohan Zhang, Tongyu Wang, Xinyu Yang
Information Sciences, Volume 739, Article 123165
Fine-grained facial attribute editing represents a new frontier in deepfake technology, creating hyper-realistic forgeries that evade traditional detection methods by preserving identity and motion consistency. This subtlety poses a dual challenge: existing detectors, tuned for coarse artifacts, are rendered ineffective, and research is hampered by the absence of dedicated benchmark datasets. This paper argues that the key to unmasking these fine-grained forgeries lies not in static appearance, but in the temporal inconsistencies of the underlying facial dynamics. To catalyze research in this area, we introduce EditForge, the first large-scale video dataset focused specifically on fine-grained facial attribute editing. Our analysis confirms that dynamic signals provide a powerful forensic trace that is consistently disrupted during the fine-grained forgery process. Building on this insight, we propose Fine-grained Forgery Mamba (F²-Mamba), a novel multimodal detection framework. F²-Mamba synergistically models features from facial dynamics, static appearance, and audio, employing robust alignment mechanisms and a Bi-Mamba architecture to efficiently capture long-range, cross-modal temporal dependencies. Extensive experiments validate that F²-Mamba establishes a new state of the art, achieving an area under the ROC curve (AUC) of 99.0% on fine-grained forgeries. This performance signals a paradigm shift towards behavior-based, dynamic analysis, significantly raising the bar for future forgery generation.
Pub Date: 2026-01-30 | DOI: 10.1016/j.ins.2026.123176
Title: Application and performance analysis of Epsilon-Greedy optimization strategy in quantum link selection
Authors: Liang Huang, Jihao Fan
Information Sciences, Volume 739, Article 123176
We investigate the optimal selection of high-fidelity quantum links that can preserve fragile quantum states during information transmission. However, uniformly estimating the fidelities of all links becomes prohibitively costly in large-scale networks with numerous noisy connections. To overcome this limitation, we recast link selection and fidelity inference as an optimal-action discovery task within a reinforcement learning framework. We then propose an algorithm termed Epsilon-Greedy Quantum Link Selection (EGreedyQLiS). This algorithm effectively identifies the optimal link among numerous quantum links and provides accurate fidelity estimates with low consumption of quantum resources. EGreedyQLiS infers link fidelities from observations obtained via a standard network benchmarking procedure and greedily optimizes link selection during fidelity estimation. This strategy concentrates quantum resources on estimating high-fidelity links, thereby providing accurate fidelity estimates for them. The results of extensive simulations demonstrate that EGreedyQLiS outperforms existing approaches in optimal link identification with reduced quantum resource overhead.
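The epsilon-greedy selection loop named in the abstract has a compact standard form. A hedged sketch (function names and the reward model are illustrative, not the paper's implementation):

```python
import random

# Epsilon-greedy sketch: with probability epsilon, explore a random link;
# otherwise exploit the link with the highest estimated fidelity.
def select_link(estimates, epsilon=0.1, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))                          # explore
    return max(range(len(estimates)), key=estimates.__getitem__)      # exploit

# Incremental-mean update of a link's fidelity estimate from one
# benchmarking outcome (e.g. a success indicator in [0, 1]).
def update_estimate(estimates, counts, link, outcome):
    counts[link] += 1
    estimates[link] += (outcome - estimates[link]) / counts[link]
```

With epsilon = 0 the policy is purely greedy; a small positive epsilon keeps sampling low-estimate links occasionally, which is the exploration/exploitation trade-off the paper tunes against quantum resource cost.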
Pub Date: 2026-01-30 | DOI: 10.1016/j.ins.2026.123178
Title: ADAPT-DPoS: Data-driven producer selection in delegated proof of stake
Authors: Yong Liu, Wenhao Luo, Guangxia Xu
Information Sciences, Volume 739, Article 123178
Delegated Proof of Stake (DPoS) is widely used in public blockchains, but static, vote-centric election rules struggle to cope with heterogeneous wide-area networks and evolving attack strategies. Producers are often chosen mainly by stake, with little regard for real-time operational quality, leading to performance bottlenecks, stake plutocracy, and unstable committees. We present ADAPT-DPoS, a dynamic, data-driven framework that casts producer selection as a multi-attribute decision-making (MADM) problem. It combines entropy- and SHAP-based dynamic weighting, a two-phase TOPSIS–PROMETHEE II ranking pipeline, and an adaptive producer-count controller driven by transaction load, candidate-pool quality, and latency signals. Experiments on a heterogeneous WAN testbed with geographically distributed nodes show that, under a strict P99 latency bound of 1 s, ADAPT-DPoS drives latency-bounded throughput (LBT) close to the physical limit of the deployment and achieves about 64% higher LBT than vanilla DPoS (798 vs. 486 TPS in S0). Under adversarial stress, it reduces block misses by ~88% (MR: 9.69% → 1.15%) and substantially improves decentralization and reward–contribution alignment (Nakamoto: 5.41 → 14.89; RCA: 0.24 → 0.86), demonstrating that MADM-based, feedback-driven design can significantly enhance DPoS-style consensus.
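The TOPSIS stage of the ranking pipeline has a well-known closed form. A minimal sketch with benefit criteria and equal weights assumed (the paper uses entropy- and SHAP-based dynamic weights and chains TOPSIS with PROMETHEE II):

```python
import math

# TOPSIS sketch: vector-normalize the decision matrix, then score each
# alternative by its relative closeness to the ideal solution.
def topsis(matrix):
    """matrix[i][j]: score of alternative i on benefit criterion j."""
    n_crit = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    norm = [[row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) for col in zip(*norm)]   # best value per criterion
    anti = [min(col) for col in zip(*norm)]    # worst value per criterion
    scores = []
    for row in norm:
        d_pos = math.dist(row, ideal)          # distance to ideal
        d_neg = math.dist(row, anti)           # distance to anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores                              # higher = closer to ideal

print(topsis([[1.0, 1.0], [2.0, 2.0]]))  # [0.0, 1.0]
```

Candidate producers with higher closeness scores would be ranked ahead; in ADAPT-DPoS this ranking is then refined by PROMETHEE II and gated by the adaptive producer-count controller.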
Pub Date: 2026-01-30 | DOI: 10.1016/j.ins.2026.123153
Title: VCRec: Visibility graph and convolutional neural networks for sequential recommendation
Authors: Hailin Li, Jie Wang
Information Sciences, Volume 739, Article 123153
The primary purpose of sequential recommendation is to analyze user behavior sequences, extract user preferences, and identify dependencies between items to generate relevant recommendations. Although sequential recommendation models have advantages, they still face challenges in discovering complex relationships in users' purchase histories and in capturing users' dynamic behavior patterns with shallow network structures. In this article, we propose the Visibility Graph and Convolutional Neural Networks for Sequential Recommendation (VCRec) model, the first application of visibility graphs to sequential recommendation. VCRec transforms users and their behavior sequences into user-embedding vectors and item-embedding matrices. An improved adaptive visibility graph algorithm is then proposed to encode the item-embedding matrices in both paired and non-paired ways, yielding a three-dimensional tensor. High-order item features are extracted using residual and convolutional blocks, and the resulting features are combined with the user-embedding vectors to predict the next item with which the user will engage. Extensive experiments on realistic datasets demonstrate the performance of the VCRec model. These results suggest that VCRec produces high-quality recommendations efficiently, which is of significant practical value.
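For reference, the natural visibility graph construction that VCRec builds on connects two points of a series when no intermediate point blocks the line of sight between them. A brute-force sketch (VCRec's improved adaptive variant differs):

```python
# Natural visibility graph sketch: points (i, y_i) and (j, y_j) are
# connected iff every intermediate point lies strictly below the straight
# line segment joining them.
def visibility_edges(series):
    edges = []
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            # Height of the i-j sight line at each intermediate index k.
            if all(series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

print(visibility_edges([3.0, 1.0, 2.0]))  # [(0, 1), (0, 2), (1, 2)]
```

Applied to embedding sequences, such a graph exposes long-range item dependencies that a shallow convolution over the raw sequence would miss.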
Pub Date: 2026-01-30 | DOI: 10.1016/j.ins.2026.123167
Title: Risks analysis and countermeasures research of merchant fishing vessels collision accidents based on LLM and GRAA
Authors: Xueman Wang, Xinping Xiao, Mingyun Gao, Congjun Rao
Information Sciences, Volume 739, Article 123167
Merchant-fishing vessel collision accidents threaten crew safety, property, and marine ecology. Currently, accident reports in unstructured text form hinder efficient risk-based decision-making during navigation. To improve the accuracy and efficiency of risk assessment and decision-making, this paper proposes a semi-automated, multi-stage risk assessment framework based on large language models (LLMs), machine learning algorithms, and Grey Relational Analysis. First, a workflow based on the Dify platform is constructed to extract and identify risk-influencing factors (RIFs) from ship collision accident reports. Second, Bayesian network structure learning and association rule mining are used to explore the causal relationships among these factors. Third, a Grey Relational Attraction Analysis (GRAA) model is established to reduce the dependence of complex, multi-dimensional risk assessment on limited sample sizes, enhancing the reliability of risk quantification. The results indicate that core-layer risk factors such as inadequate crew competence, fatigue, and improper emergency response play key roles in the ship collision accident propagation network. Additionally, this study constructs an integrated risk assessment pipeline based on the Dify-LLM workflow, achieving rapid extraction and quantification of risk information from unstructured text.
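The classical Grey Relational Analysis that GRAA extends scores how closely a comparison sequence tracks a reference sequence. A hedged sketch of the standard formulation (the paper's "attraction" variant differs; rho is the usual distinguishing coefficient):

```python
# Grey relational grade sketch: mean of per-point grey relational
# coefficients between a reference sequence and a comparison sequence.
def grey_relational_grade(reference, comparison, rho=0.5):
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:
        return 1.0  # identical sequences: perfect relation
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

# Identical sequences score 1.0; divergence lowers the grade.
print(grey_relational_grade([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
```

Its appeal for this setting is exactly what the abstract notes: the grade is well-defined even for small samples, where regression-style risk models become unreliable.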
Pub Date: 2026-01-30 | DOI: 10.1016/j.ins.2026.123166
Title: Campus anomaly detection systems from the perspective of unmanned aerial vehicles
Authors: Shujuan Feng, Jinming Wang, Yangkai Wu, Fei Liu, Ezzeddine Touti
Information Sciences, Volume 739, Article 123166
Campus anomalies due to large crowds are monitored and thwarted using unmanned aerial vehicle (UAV) images/videos. A pattern anomaly is detected by correlating the physical dimensions of the input image with the usual and unusual activities used for training. The proposed Boundary Position-induced Object Anomaly Detection (BPOAD) method uses deep ensemble learning with parallel, modular functions to identify unusual crowd behaviour patterns. BPOAD uniquely coordinates bagging and boosting processes within its ensemble: the bagging component creates diverse training subsets to enhance boundary detection precision and orientation, while the boosting component adaptively weights misclassified instances to improve feature correlation across training sets. This dual approach allows the system to maintain high accuracy even after precision saturation, as the model can selectively apply either technique based on the extracted features. The method establishes robust decision boundaries that maximise anomaly detection by correlating the physical dimensions of input images with normal and abnormal activity patterns. In real-world campus security applications, this significantly reduces false alarm rates and enables faster responses to potential threats. Experimental results demonstrate BPOAD's effectiveness with 12.79% improved anomaly detection precision, 11.81% higher sensitivity, and 11.54% increased recall compared to existing methods. These improvements enable campus security personnel to identify and respond to unusual situations more accurately, ultimately enhancing overall campus safety management.
Pub Date : 2026-01-30 DOI: 10.1016/j.ins.2026.123182
Fernando M. Rodríguez-Bejarano , Sergio Santander-Jiménez , Miguel A. Vega-Rodríguez
Gene networks have gained considerable relevance in cancer research, enabling the representation of complex biological relationships that provide insights into the mechanisms driving tumor development and progression. The increasing availability of biological data facilitates the construction of clinically relevant gene networks by integrating multiple information sources. Specifically, we consider mutation data, patient survival data, and protein-protein interaction data to identify networks whose genes are recurrently mutated, significantly involved in patient survival, and functionally associated. To this end, we apply multi-objective optimization to simultaneously maximize survival impact, functional association, and mutation coverage. Herein, we introduce MOTEA-GENSU (Multi-Objective Two-archive Evolutionary Algorithm to discover GEne Networks involved in SUrvival), a novel method that employs two collaborative archives and intelligent evolutionary operators to guide the generation of high-quality gene networks. Evaluation across 27 real biological scenarios covering diverse cancer types shows that MOTEA-GENSU outperforms existing methods, achieving superior results in 92.6% of comparisons, with improvements of up to 315.8% over the best-performing competing approach, and consistently surpassing all state-of-the-art methods on average within each evaluated dataset. Biological analysis of the identified networks validates their functional coherence and significant impact on cancer patient survival, revealing clinically relevant networks composed of genes with demonstrated prognostic value.
{"title":"Multi-objective two-archive evolutionary algorithm to optimize the discovery of gene networks involved in cancer survival","authors":"Fernando M. Rodríguez-Bejarano , Sergio Santander-Jiménez , Miguel A. Vega-Rodríguez","doi":"10.1016/j.ins.2026.123182","DOIUrl":"10.1016/j.ins.2026.123182","url":null,"abstract":"<div><div>Gene networks have gained considerable relevance in cancer research, enabling the representation of complex biological relationships that provide insights into the mechanisms driving tumor development and progression. The increasing availability of biological data facilitates the construction of clinically relevant gene networks by integrating multiple information sources. Specifically, we consider mutation data, patient survival data, and protein-protein interaction data to identify networks whose genes are recurrently mutated, significantly involved in patient survival, and functionally associated. To this end, we apply multi-objective optimization to simultaneously maximize survival impact, functional association, and mutation coverage. Herein, we introduce MOTEA-GENSU (Multi-Objective Two-archive Evolutionary Algorithm to discover GEne Networks involved in SUrvival), a novel method that employs two collaborative archives and intelligent evolutionary operators to guide the generation of high-quality gene networks. Evaluation across 27 real biological scenarios covering diverse cancer types shows that MOTEA-GENSU outperforms existing methods, achieving superior results in 92.6% of comparisons, with improvements of up to 315.8% over the best-performing competing approach, and consistently surpassing all state-of-the-art methods on average within each evaluated dataset. 
Biological analysis of the identified networks validates their functional coherence and significant impact on cancer patient survival, revealing clinically relevant networks composed of genes with demonstrated prognostic value.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"739 ","pages":"Article 123182"},"PeriodicalIF":6.8,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146191067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
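The two-archive bookkeeping at the heart of a method like MOTEA-GENSU can be sketched generically for the three maximised objectives (survival impact, functional association, mutation coverage). The abstract does not specify the authors' actual archive-update rules, solution encoding, or evolutionary operators, so the sketch below uses standard Pareto dominance for a convergence archive and a nearest-neighbour crowding rule for a bounded diversity archive; all names and the `cap` parameter are assumptions.

```python
def dominates(a, b):
    """Pareto dominance for maximisation: a is no worse everywhere, better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_convergence_archive(archive, cand):
    """Convergence archive: keep only mutually non-dominated objective vectors."""
    if any(dominates(m, cand) for m in archive):
        return archive  # candidate is dominated; archive unchanged
    return [m for m in archive if not dominates(cand, m)] + [cand]

def update_diversity_archive(archive, cand, cap=4):
    """Diversity archive: bounded size; when full, drop the most crowded member
    (smallest L1 distance to its nearest neighbour)."""
    archive = archive + [cand]
    while len(archive) > cap:
        def crowding(i):
            return min(sum(abs(a - b) for a, b in zip(archive[i], other))
                       for j, other in enumerate(archive) if j != i)
        archive.pop(min(range(len(archive)), key=crowding))
    return archive
```

Feeding the objective vectors (1, 1, 1), (2, 2, 2), (3, 1, 1) into the convergence archive leaves (2, 2, 2) and (3, 1, 1): the first vector is dominated, while the other two trade one objective off against another.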
Pub Date : 2026-01-29 DOI: 10.1016/j.ins.2026.123172
Zehan Tan, Henghua Shen
In this paper, we present a novel application of classical Lyapunov stability analysis to hybrid human-Artificial Intelligence (AI) customer service systems. The core idea is to use the Lyapunov ellipsoid of a linear autonomous dynamical system (LADS) to assess the customers’ emotional states and automatically determine whether a switch from the AI agent to a human agent is necessary. This involves two innovations: 1) User emotions are modeled as discrete-time LADSs in the Pleasure–Arousal–Dominance (PAD) space, parameterized by MBTI-specific dynamics matrices; 2) A Lyapunov function defines a safe emotional ellipsoid whose boundary, together with a Lyapunov Decay Rate (LDR), forms a dual-trigger switching mechanism to transfer service from the AI agent to a human agent when the user’s real-time emotional state approaches the ellipsoid boundary too quickly or crosses it, thus proactively preventing emotional destabilization.
To evaluate the proposed framework, we construct a domain-specific, multi-turn customer service dialogue dataset with PAD annotations. We compare our method with three existing customer service approaches: Fixed Lyapunov Ellipsoid for All (FLEA), Rule-Based Thresholding (RBT), and a No-Switching Baseline (NSB). Comparative experiments demonstrate that the proposed switching mechanism significantly reduces negative emotional outcomes, enhances system usability, and minimizes unnecessary human intervention.
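The dual-trigger rule described above — switch when the PAD state crosses the safe ellipsoid's boundary, or approaches it too quickly — can be sketched with a quadratic Lyapunov function V(x) = xᵀPx. The growth-rate test and the `max_growth` threshold below are assumptions standing in for the paper's LDR criterion, which the abstract does not specify; `P`, `c`, and the function names are likewise illustrative.

```python
def quad_form(x, P):
    """V(x) = x^T P x for a state vector x and a positive-definite matrix P."""
    n = len(x)
    return sum(x[i] * P[i][j] * x[j] for i in range(n) for j in range(n))

def lyapunov_switch(x_prev, x_curr, P, c=1.0, max_growth=0.5):
    """Dual-trigger check on two consecutive PAD states.

    Hand over to a human agent if the state leaves the safe ellipsoid
    V(x) <= c (boundary trigger), or if V grows faster than the allowed
    relative rate between steps (rate trigger, standing in for the LDR).
    """
    v_prev = quad_form(x_prev, P)
    v_curr = quad_form(x_curr, P)
    if v_curr > c:
        return True, "boundary"
    if v_prev > 0 and (v_curr - v_prev) / v_prev > max_growth:
        return True, "rate"
    return False, "stay"
```

With P the 3x3 identity and c = 1, a jump from PAD state (0.1, 0.1, 0.1) to (0.9, 0.5, 0.5) trips the boundary trigger (V = 1.31 > 1), while a smaller step that still inflates V by more than 50% trips the rate trigger even though the state remains inside the ellipsoid.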
{"title":"Lyapunov-based emotion-aware switching in hybrid human-artificial intelligence customer service systems","authors":"Zehan Tan, Henghua Shen","doi":"10.1016/j.ins.2026.123172","DOIUrl":"10.1016/j.ins.2026.123172","url":null,"abstract":"<div><div>In this paper, we novelly apply the classical Lyapunov stability analysis to hybrid human-Artificial Intelligence (AI) customer service systems. The core idea is to use the Lyapunov ellipsoid of a linear autonomous dynamical system (LADS) to assess the customers’ emotional states and automatically determine whether a switch from the AI agent to a human agent is necessary. This involves two innovations: 1) User emotions are modeled as discrete-time LADSs in the Pleasure–Arousal–Dominance (PAD) space, parameterized by MBTI-specific dynamics matrices; 2) A Lyapunov function defines a safe emotional ellipsoid whose boundary, together with a Lyapunov Decay Rate (LDR), forms a dual-trigger switching mechanism to transfer service from the AI agent to a human agent when the user’s real-time emotional state approaches too fast or crosses the ellipsoid boundary, thus proactively preventing emotional destabilization.</div><div>To evaluate the proposed framework, we construct a domain-specific, multi-turn customer service dialogue dataset with PAD annotations. We compare our method with three other existing customer service systems, including methods with Fixed Lyapunov Ellipsoid for All (FLEA), Rule-Based Thresholding (RBT) and No-Switching Baseline (NSB). 
Comparative experiments demonstrate that the proposed switching mechanism significantly reduces negative emotional outcomes, enhances system usability and minimizes unnecessary human intervention.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"739 ","pages":"Article 123172"},"PeriodicalIF":6.8,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146190943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}