Blockchains today come in various shapes and forms and continue to grow considerably in size. This poses significant challenges for data processing, not only because of structural differences in layer 1 data representations but also due to the unique attributes and layer 2 functionalities of different systems. This paper proposes a generic, blockchain-agnostic mechanism for representing both layer 1 and layer 2 data while preserving their structural attributes. The proposed approach is designed for heterogeneous architectures and aims to enable efficient, large-scale analysis of blockchain data using massive parallelism.
BRIDGE: A memory-efficient blockchain-agnostic layer for chain topology representation in heterogeneous architectures
Ciprian Pungilă, Otilia Muntean, Andreea-Rebeca Tonu
Information Sciences, vol. 733, Article 122911. DOI: 10.1016/j.ins.2025.122911
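As a rough illustration of what a blockchain-agnostic representation might look like, a record type can carry the common layer-1 structure (blocks, transactions) plus an open attribute map for chain-specific layer-2 data. All field names here are hypothetical, not the paper's BRIDGE layout:

```python
from dataclasses import dataclass, field

# Hypothetical chain-agnostic records: shared layer-1 structure plus an
# open-ended attribute map for chain-specific layer-2 data.
@dataclass
class GenericTx:
    tx_id: str
    inputs: list
    outputs: list
    l2_attrs: dict = field(default_factory=dict)  # e.g. contract calls, rollup data

@dataclass
class GenericBlock:
    height: int
    parent: str
    txs: list
    l2_attrs: dict = field(default_factory=dict)

# Two structurally different chains mapped into the same shape:
btc_block = GenericBlock(height=800000, parent="00ab...", txs=[
    GenericTx("t1", inputs=["prev:0"], outputs=["addr1:0.5"])])
eth_block = GenericBlock(height=19000000, parent="0xcd...",
    txs=[GenericTx("t2", [], [], l2_attrs={"calldata": "0x.."})],
    l2_attrs={"baseFee": 12})
```

A uniform shape like this is what would let one parallel analysis kernel iterate over blocks from heterogeneous chains.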
Pub Date: 2026-04-25. Epub Date: 2025-12-07. DOI: 10.1016/j.ins.2025.122985
Bo XU, Yijun HE, Yonghui XU
To address the challenges of joint policy optimization in large-scale systems, researchers in multi-agent reinforcement learning (MARL) have introduced algorithms like QMIX to handle collaborative decision-making in intricate environments. However, QMIX has limitations in complex tasks, especially in extremely complex environments with long-term dependencies, such as difficulty in effectively capturing long-term value information and solving the credit assignment problem in highly nonlinear team interactions. To this end, we propose the QACN algorithm, which redesigns individual agent networks as a fused actor-critic structure. The actor module facilitates rapid policy updates for agents, whereas the critic module adopts a Double Deep Q-Network (DQN) approach to curb overestimation. Furthermore, the critic applies temporal-difference learning to enhance the optimization of the state-action value function. Compared to the original method, our work addresses the shortcomings of traditional methods in evaluating long-term benefits and handling complex interactions by introducing a complete actor-critic architecture, thereby significantly improving the learning efficiency and resource utilization of intelligent agents. Evaluations were carried out within the intricate StarCraft II scenario, serving as a representative testbed. The outcomes highlight a marked improvement in performance relative to current advanced benchmark methods. Specifically, the QACN algorithm achieved an average win-rate improvement of 24% over QMIX and 10–17% over WQMIX/QPLEX, particularly excelling in super-hard scenarios such as 3s5z_vs_3s6z, where it reached a 17.7% win rate while most baselines failed completely. The empirical data validate the superior adaptability and training efficiency of our approach under extreme operational conditions.
QACN: Actor-critic augmented value decomposition for long-term cooperative multi-agent reinforcement learning in superhard scenarios
Information Sciences, vol. 733, Article 122985
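The overestimation-curbing idea the QACN critic borrows from Double Q-learning can be sketched in tabular form. This is a simplified stand-in for the paper's networks; state/action sizes and rewards are illustrative:

```python
import random
random.seed(0)

# Tabular sketch of Double Q-learning: one estimator selects the greedy
# action, the other evaluates it, which curbs the overestimation bias
# of taking a max over a single noisy estimator.
n_states, n_actions, gamma, alpha = 3, 2, 0.9, 0.1
qa = [[0.0] * n_actions for _ in range(n_states)]
qb = [[0.0] * n_actions for _ in range(n_states)]

def double_q_update(s, a, r, s2):
    if random.random() < 0.5:
        sel, ev = qa, qb
    else:
        sel, ev = qb, qa
    a_star = max(range(n_actions), key=lambda x: sel[s2][x])  # select with one table
    td_target = r + gamma * ev[s2][a_star]                    # evaluate with the other
    sel[s][a] += alpha * (td_target - sel[s][a])              # temporal-difference step

# Repeatedly observe transition (s=0, a=1, r=1.0, s'=1):
for _ in range(200):
    double_q_update(0, 1, 1.0, 1)
```

Both tables converge toward the same target, but neither table's max is used to both select and evaluate in the same update.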
Pub Date: 2026-04-25. Epub Date: 2025-12-01. DOI: 10.1016/j.ins.2025.122943
Kun Liang, Wei Wang, Lifang Xiao, Wenshu Liang, Qilong Liu
Non-negative Matrix Factorization (NMF) is recognized for its capacity to extract local features from data. Nonetheless, real-world datasets often contain heterogeneous information arising from diverse collection methods. NMF mitigates this issue by approximating raw data as the product of a basis matrix and an encoding matrix to capture heterogeneity; however, a single encoding matrix may simultaneously encompass multiple information types, potentially limiting the model’s applicability for specific tasks. To address this constraint, we propose a novel low-rank splitting NMF method to disentangle heterogeneous information. Specifically, the data matrix is factorized into a basis matrix multiplied by multiple distinct coefficient matrices, each yielding a representation corresponding to a specific information type. Furthermore, to better capture the underlying geometric structure of the data manifold, we integrate an adaptive neighbor graph into the model. The corresponding optimization problem is formulated and solved using multiplicative update rules. Convergence of the proposed algorithm is established, and its computational complexity is analyzed. Experimental results across ten real-world datasets demonstrate the effectiveness of our method, which outperforms eleven state-of-the-art approaches.
Heterogeneous information disentangling via low-rank splitting non-negative matrix factorization with adaptive graph learning
Information Sciences, vol. 733, Article 122943
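The multiplicative update rules mentioned above can be illustrated with the classic Lee-Seung updates for a single encoding matrix. The paper splits the encoding into several coefficient matrices; this sketch shows only the update mechanics:

```python
import numpy as np
rng = np.random.default_rng(0)

# Minimal NMF via multiplicative updates: X ~= W @ H with W, H >= 0.
# The ratio-based updates preserve non-negativity automatically.
X = rng.random((20, 15))
W = rng.random((20, 4)) + 0.1
H = rng.random((4, 15)) + 0.1
eps = 1e-9  # avoids division by zero

for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```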
Data has become a foundational element driving the digital economy and artificial intelligence. However, the current state of data sharing remains suboptimal. A key barrier is the lack of mutual trust between data providers and data processors. Data providers are concerned about the potential disclosure of sensitive information contained in raw data, while data processors worry about unfair transactions, such as the theft or misuse of their models without payment. Additionally, data providers need to verify the quality of the models without the models themselves being revealed. To address these challenges, we propose a privacy-preserving model outsourcing and fair trading scheme in which all participants, including data processors and data providers, may be untrusted. This framework separates data ownership, algorithm execution, and verification, allowing data buyers (who purchase processed data instead of raw data) to validate model performance without accessing raw data, thus preventing leakage and resale. We introduce three core protocols: a data validation protocol, an algorithm validation protocol, and a fairness arbitration protocol, which together ensure transaction integrity, protect privacy, and secure fair compensation. Through extensive security analysis and dynamic game-theoretic analysis, we demonstrate security and fairness properties that maximize the benefits for all parties involved.
Privacy-aware data processing and fair model trading protocols among un-trusted participants
Yining Tan, Ruoting Xiong, Haoran Qin, Yuxian Chen, Lianchong Zhang, Wei Ren, Tianqing Zhu
Information Sciences, vol. 733, Article 122946. DOI: 10.1016/j.ins.2025.122946
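A commit-reveal primitive of the kind such fairness protocols typically build on can be sketched as follows. This is illustrative only; the paper's three protocols are not reproduced here:

```python
import hashlib
import secrets

# Commit-reveal sketch: the processor commits to model outputs before
# payment, so the results cannot be swapped after the fact. An arbiter
# can later check the revealed data against the commitment.
def commit(data: bytes):
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + data).hexdigest(), nonce

def verify(commitment: str, nonce: bytes, data: bytes) -> bool:
    return hashlib.sha256(nonce + data).hexdigest() == commitment

outputs = b"predictions:0.91,0.87"
c, n = commit(outputs)  # publish c; keep (n, outputs) secret until paid
```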
Pub Date: 2026-04-25. Epub Date: 2025-12-08. DOI: 10.1016/j.ins.2025.122987
Shuangsi Xue, Junkai Tan, Zihang Guo, Tiansen Niu, Hui Cao, Badong Chen
Hybrid shared control is a promising paradigm for integrating human expertise and autonomous intelligence in complex cyber-physical systems and human-robot interaction scenarios. However, achieving guaranteed performance and rapid adaptation under uncertainties and input constraints remains a major challenge. This paper introduces a unified hybrid shared control framework that synthesizes fixed-time game-theoretic learning and prescribed performance guarantees. The core innovation is a smooth shared control mechanism that dynamically allocates authority between human operators and autonomous controllers based on confidence and alignment, formulated within a nonzero-sum game structure. Fixed-time composite learning is employed to ensure rapid convergence of the optimal shared policy, leveraging experience replay from both agents. Prescribed performance control is integrated to guarantee bounded tracking errors under saturation. Rigorous Lyapunov-based analysis proves fixed-time stability and convergence. Simulations on mobile robot and UAV platforms demonstrate that the proposed method achieves 63.08% faster convergence and 81.18% lower tracking error compared to standard ADP baselines, while maintaining robustness to noise and model uncertainties.
Human-robotics hybrid shared control with guaranteed performance: A fixed-time game-theoretic learning approach
Information Sciences, vol. 733, Article 122987
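The alignment-based authority allocation can be caricatured with a simple blending rule. This is an illustrative stand-in, not the paper's fixed-time game-theoretic law:

```python
import numpy as np

# Blend human and autonomous commands with an authority weight derived
# from their alignment: when the two commands agree, the human keeps
# authority; when they conflict, control shifts to the autonomous side.
def shared_control(u_human, u_auto):
    u_h, u_a = np.asarray(u_human, float), np.asarray(u_auto, float)
    denom = np.linalg.norm(u_h) * np.linalg.norm(u_a)
    align = 0.5 * (1.0 + (u_h @ u_a) / denom) if denom > 0 else 0.0  # in [0, 1]
    lam = align  # human authority weight
    return lam * u_h + (1.0 - lam) * u_a

u = shared_control([1.0, 0.0], [1.0, 0.0])   # aligned: human command passes through
v = shared_control([1.0, 0.0], [-1.0, 0.0])  # opposed: autonomous command wins
```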
Pub Date: 2026-04-25. Epub Date: 2025-12-06. DOI: 10.1016/j.ins.2025.122930
Raffaele Giancotti, Francesco Gullo, Pietro H. Guzzi, Edoardo Serra, Pierangelo Veltri
We study the problem of segmentation of temporal graphs (TGraphSeg): given a temporal graph – i.e., a sequence of snapshots depicting the relationships (temporal edges) occurring among entities (vertices) of interest at specific timestamps – aggregate similar consecutive snapshots and replace every aggregated snapshot set with a single, well-representative (newly computed) snapshot, to optimize a tradeoff between data reduction ratio and closeness between segmented and original graphs. This novel fundamental problem produces more compact versions of temporal graphs, thus making them more amenable to resource-efficient and responsible downstream processing, without any need for changing the technology already in place to handle them in their raw form.
The proposed TGraphSeg is formulated as an instance of Sequence Segmentation (SeqSeg), a well-established combinatorial optimization problem for time-series data reduction. The contextualization of SeqSeg to the temporal-graph setting implies major technical challenges, including defining a proper distance function among snapshots and a methodology to compute representative snapshots from the aggregated ones. Additionally, non-trivial implementation challenges are faced to efficiently adapt popular SeqSeg algorithms to the context at hand. Effective solutions are provided to address all these challenges. Based on these, we devise a principled formulation of TGraphSeg, along with two algorithms: an exact, more accurate one and a heuristic, faster one. Our contributions are complemented by an extensive experimental evaluation, which attests to the high performance of the proposed algorithms and their superiority over baselines on a variety of real data and the downstream tasks of vertex similarity search and temporal community detection.
Reproducibility: Source code is available at https://github.com/rafgia/temporal-graph-segmentation.
Segmentation of temporal graphs
Information Sciences, vol. 733, Article 122930
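The segmentation objective can be sketched with a textbook sequence-segmentation dynamic program over snapshot adjacency matrices, using each segment's mean as its representative snapshot. This is a simplified stand-in for the paper's exact and heuristic algorithms:

```python
import numpy as np

# Bellman-style (k)-segmentation: choose k contiguous segments minimizing
# the squared distance of each snapshot to its segment's mean snapshot.
def segment(snapshots, k):
    X = np.stack([s.ravel().astype(float) for s in snapshots])
    n = len(X)
    # cost[i][j]: error of one segment covering snapshots i..j
    cost = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            seg = X[i:j + 1]
            cost[i][j] = ((seg - seg.mean(axis=0)) ** 2).sum()
    dp = np.full((k + 1, n + 1), np.inf)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(m - 1, j):
                c = dp[m - 1][i] + cost[i][j - 1]
                if c < dp[m][j]:
                    dp[m][j], cut[m][j] = c, i
    bounds, j = [], n  # recover segment boundaries by backtracking
    for m in range(k, 0, -1):
        i = cut[m][j]
        bounds.append((int(i), int(j) - 1))
        j = i
    return list(reversed(bounds)), dp[k][n]

# Five snapshots: three identical sparse graphs, then two identical dense ones.
A = np.zeros((3, 3)); B = np.ones((3, 3))
bounds, err = segment([A, A, A, B, B], 2)
```

With k = 2, the program recovers the change point exactly, with zero representation error.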
Pub Date: 2026-04-25. Epub Date: 2026-01-09. DOI: 10.1016/j.ins.2026.123103
Hoang-Thang Ta, Duy-Quy Thai, Abu Bakar Siddiqur Rahman, Grigori Sidorov, Alexander Gelbukh
In this paper, we introduce FC-KAN, a Kolmogorov-Arnold Network (KAN) that leverages combinations of popular mathematical functions such as B-splines, wavelets, and radial basis functions on low-dimensional data through element-wise operations. We explore several methods for combining the outputs of these functions, including sum, element-wise product, the addition of sum and element-wise product, representations of quadratic and cubic functions, concatenation, linear transformation of the concatenated output, and others. In our experiments, we compare FC-KAN with a multi-layer perceptron network (MLP) and other existing KANs, such as BSRBF-KAN, EfficientKAN, FastKAN, and FasterKAN, on the MNIST and Fashion-MNIST datasets. Two variants of FC-KAN, which combine the outputs of B-splines with a Difference of Gaussians (DoG) and with linear transformations in the form of a quadratic function, outperformed all other models when averaged over 5 independent training runs. However, FC-KAN still has limitations, including challenges with parameter scalability and efficiency, as well as limited capability compared to CNNs when handling multi-channel datasets such as CIFAR-10 and CIFAR-100. We expect that FC-KAN can leverage function combinations to design future KANs. Our repository is publicly available at: https://github.com/hoangthangta/FC_KAN.
FC-KAN: Function combinations in Kolmogorov-Arnold networks
Information Sciences, vol. 736, Article 123103
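The combination step itself can be illustrated on two basis-function branches evaluated on the same input and merged element-wise. A Gaussian RBF stands in for the spline branch here; function shapes and parameters are illustrative, not the paper's trained layers:

```python
import numpy as np

# Two basis-function branches on the same input, combined element-wise.
def rbf(x, c=0.0, s=1.0):
    return np.exp(-((x - c) / s) ** 2)

def dog(x, s1=1.0, s2=2.0):  # Difference of Gaussians branch
    return np.exp(-(x / s1) ** 2) - np.exp(-(x / s2) ** 2)

x = np.linspace(-2, 2, 5)
f, g = rbf(x), dog(x)
combined_sum = f + g              # "sum" combination
combined_prod = f * g             # element-wise product
combined_quad = f + g + f * g     # quadratic-style mix of the two branches
```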
Pub Date: 2026-04-25. Epub Date: 2026-01-10. DOI: 10.1016/j.ins.2026.123073
Guo Wei-Feng, Sun Zening, Zhao Mengtong, Yue Cai-Tong, Cheng Han
It is challenging to identify personalized critical biomarkers (PCBs) from high-throughput omics data of individual cancer patients. While evolutionary computation has shown promise in discovering PCBs via multi-objective (i.e., minimizing PCB count while maximizing early-warning scores) and multimodal (i.e., multiple effective PCB sets) optimization, current methods fail to exploit the sparsity of PCB problems (i.e., only a few efficient PCBs exist), limiting their search ability in high-dimensional data. To tackle this challenge, we introduce TSSKEA, a two-stage evolutionary algorithm guided by sparse knowledge, which integrates sparse knowledge from molecular interaction networks and from historical/current non-dominated solutions into multi-objective multimodal optimization. It uses a variable striped sparse population sampling (VSSPS) strategy and two-stage knowledge guidance to handle large-scale sparsity. Validated across three TCGA cancer datasets (BRCA, LUSC, and LUAD), TSSKEA demonstrates superior performance compared to alternative approaches, delivering the highest early-warning signal scores in detecting personalized node and edge biomarkers. Compared with the existing representative method MMPDNB-RBM, on the three cancer datasets the early-warning scores of PDNB increased by 2.7, 1.4, and 11.1 times, while those of PDENB increased by 1.5, 0.5, and 1.8 times, respectively. Additionally, TSSKEA exhibits considerable advantages over other state-of-the-art approaches with regard to algorithmic convergence, diversity, and multimodal characteristics.
Sparse knowledge guided multiobjective multimodal optimization for identification of personalized critical biomarkers in cancer
Information Sciences, vol. 736, Article 123073
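Sparsity-aware initialization of the kind VSSPS targets can be caricatured as seeding population "stripes" at different small densities, so candidate biomarker sets stay small in a high-dimensional gene space. This is a hedged sketch, not the paper's actual VSSPS strategy:

```python
import numpy as np
rng = np.random.default_rng(1)

# Candidate biomarker sets as sparse binary masks over genes; each stripe
# of the population is seeded at a different, uniformly small density.
n_genes, pop_size, n_stripes = 1000, 40, 4
densities = np.linspace(0.005, 0.05, n_stripes)
population = np.zeros((pop_size, n_genes), dtype=int)
for s, d in enumerate(densities):
    rows = range(s * pop_size // n_stripes, (s + 1) * pop_size // n_stripes)
    for r in rows:
        k = max(1, round(d * n_genes))  # genes selected in this stripe
        population[r, rng.choice(n_genes, size=k, replace=False)] = 1

mean_selected = population.sum(axis=1).mean()  # stays far below n_genes
```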
Pub Date: 2026-04-25. Epub Date: 2025-12-02. DOI: 10.1016/j.ins.2025.122950
Andrzej Piegat, Marcin Pluciński
The article presents a method for determining a tolerant solution of a singular system of fuzzy linear equations ÃX =_T Ỹ_T. The vector Ỹ_T provides information about the control objectives defined by experts, and X is a crisp vector of control (decision) variables for which the optimal values should be determined. The above task may seem unrealizable or very difficult due to the simultaneous occurrence of singularity and uncertainty in the system. The authors of the article demonstrate how this task can be solved using the Shifted Membership Function method, which extends the possibilities of solving difficult uncertainty problems. The article contains numerous examples and illustrations of the problem.
{"title":"Determinant-free, tolerance solution of a singular fuzzy linear equation system using the shifted membership function method for control and decision-making tasks","authors":"Andrzej Piegat, Marcin Pluciński","doi":"10.1016/j.ins.2025.122950","DOIUrl":"10.1016/j.ins.2025.122950","url":null,"abstract":"<div><div>The article presents a method for determining a tolerant solution of a singular system of fuzzy linear equations <span><math><mrow><mover><mi>A</mi><mo>~</mo></mover></mrow><mstyle><mspace></mspace></mstyle><mi>X</mi><msub><mo>=</mo><mi>T</mi></msub><msub><mrow><mover><mi>Y</mi><mo>~</mo></mover></mrow><mi>T</mi></msub></math></span>. The vector <span><math><msub><mrow><mover><mi>Y</mi><mo>~</mo></mover></mrow><mi>T</mi></msub></math></span> provides information about the control objectives defined by experts, and <span><math><mi>X</mi></math></span> is a crisp vector of control (decision) variables, for which the optimal values should be determined. The above task may seem unrealizable or very difficult due to the simultaneous occurrence of singularity and uncertainty in the system. The authors of the article demonstrate how this task can be solved using the Shifted Membership Function method, which extends the possibilities of solving difficult uncertainty problems. The article contains numerous examples and illustrations of the problem.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"733 ","pages":"Article 122950"},"PeriodicalIF":6.8,"publicationDate":"2026-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
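The notion of a tolerance solution can be made concrete with a small sketch. Under one common definition (a crisp vector $X$ is tolerant when every component of $\tilde{A}X$ lies within the support of the corresponding target in $\tilde{Y}_T$), checking a candidate $X$ reduces to interval arithmetic on triangular fuzzy numbers. The sketch below is a generic illustration under that assumed definition, not the Shifted Membership Function method; the `(left, mode, right)` encoding and the support-containment test are assumptions.

```python
def scale(tfn, x):
    # Multiply a triangular fuzzy number (left, mode, right) by a crisp
    # scalar x; a negative x swaps the endpoints, so sort them.
    l, m, r = tfn
    lo, hi = sorted((l * x, r * x))
    return (lo, m * x, hi)

def add(t1, t2):
    # Component-wise addition of two triangular fuzzy numbers.
    return tuple(a + b for a, b in zip(t1, t2))

def fuzzy_matvec(A_tilde, x):
    # Row-wise sum_j A~[i][j] * x[j], each A~ entry a triangular fuzzy number.
    out = []
    for row in A_tilde:
        acc = (0.0, 0.0, 0.0)
        for tfn, xj in zip(row, x):
            acc = add(acc, scale(tfn, xj))
        out.append(acc)
    return out

def is_tolerant(result, target):
    # Containment: each result TFN's support lies inside the target's support.
    return all(tl >= yl and tr <= yr
               for (tl, _, tr), (yl, _, yr) in zip(result, target))
```

The paper's contribution is finding such an $X$ even when $\tilde{A}$ is singular; this sketch only verifies a given candidate.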
Pub Date: 2026-04-25; Epub Date: 2025-12-09; DOI: 10.1016/j.ins.2025.122953
Zhijie Wang , Liangtian He , Shaobing Gao , Jifei Miao , Liang-Jian Deng , Jun Liu
Quaternions offer an elegant and powerful representation for color images, as they align seamlessly with the intrinsic structure of color channels and effectively preserve their inherent correlations. Recent advancements in quaternion-based low-rank methods have shown promising results in a wide range of color image processing applications. However, these methods often incur high computational costs because they require performing a full quaternion singular value decomposition (QSVD) at each iteration. In this paper, we propose a novel reweighted low-rank quaternion matrix factorization (RLQMF) method tailored for color image inpainting. By leveraging the quaternion bilinear factorization (QBF) technique, our approach significantly reduces the computational burden associated with full QSVD in large-scale quaternion matrix computations. To further enhance recovery accuracy, we integrate a deep denoising prior into the RLQMF framework, resulting in an enhanced variant called DeepRLQMF, which enables the incorporation of any advanced, pre-trained deep denoising method during the iterative process. Theoretically, we provide rigorous proofs to establish the mathematical soundness of both RLQMF and DeepRLQMF. Comprehensive experiments validate the computational efficiency of our quaternion low-rank factorization method and demonstrate its superior performance against leading quaternion-based low-rank alternatives. The code and supplementary material for this work are publicly available at: https://github.com/1989helt/DeepRLQMF.
{"title":"Reweighted low-rank quaternion matrix factorization with deep denoising prior for color image inpainting","authors":"Zhijie Wang , Liangtian He , Shaobing Gao , Jifei Miao , Liang-Jian Deng , Jun Liu","doi":"10.1016/j.ins.2025.122953","DOIUrl":"10.1016/j.ins.2025.122953","url":null,"abstract":"<div><div>Quaternions offer an elegant and powerful representation for color images, as they align seamlessly with the intrinsic structure of color channels and effectively preserve their inherent correlations. Recent advancements in quaternion-based low-rank methods have shown promising results in a wide range of color image processing applications. However, these methods often incur high computational costs because they require performing a full quaternion singular value decomposition (QSVD) at each iteration. In this paper, we propose a novel reweighted low-rank quaternion matrix factorization (RLQMF) method tailored for color image inpainting. By leveraging the quaternion bilinear factorization (QBF) technique, our approach significantly reduces the computational burden associated with full QSVD in large-scale quaternion matrix computations. To further enhance recovery accuracy, we integrate a deep denoising prior into the RLQMF framework, resulting in an enhanced variant called DeepRLQMF, which enables the incorporation of any advanced, pre-trained deep denoising method during the iterative process. Theoretically, we provide rigorous proofs to establish the mathematical soundness of both RLQMF and DeepRLQMF. Comprehensive experiments validate the computational efficiency of our quaternion low-rank factorization method and demonstrate its superior performance against leading quaternion-based low-rank alternatives. 
The code and supplementary material for this work are publicly available at: <span><span>https://github.com/1989helt/DeepRLQMF</span></span>.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"733 ","pages":"Article 122953"},"PeriodicalIF":6.8,"publicationDate":"2026-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
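The computational argument rests on quaternion matrix products being cheap on thin factors. Below is a minimal sketch of the Hamilton product for quaternion matrices, using the four-real-component representation in which a color image becomes a pure quaternion matrix (0, R, G, B). This is standard quaternion algebra, not the authors' RLQMF implementation, and `qmatmul` is a hypothetical helper name.

```python
import numpy as np

def qmatmul(A, B):
    # Hamilton product of two quaternion matrices. A and B are tuples
    # (w, x, y, z) of real NumPy arrays with compatible shapes; the four
    # cross terms per component encode the non-commutative quaternion rules
    # (e.g. ij = k, ji = -k), which is how the color channels stay coupled.
    aw, ax, ay, az = A
    bw, bx, by, bz = B
    return (aw @ bw - ax @ bx - ay @ by - az @ bz,
            aw @ bx + ax @ bw + ay @ bz - az @ by,
            aw @ by - ax @ bz + ay @ bw + az @ bx,
            aw @ bz + ax @ by - ay @ bx + az @ bw)
```

With a rank-r bilinear factorization of an m-by-n quaternion matrix into m-by-r and r-by-n factors, each iteration only needs `qmatmul` on the thin factors; that is the kind of saving QBF provides over recomputing a full QSVD of the m-by-n matrix at every step.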