Pub Date: 2026-05-25 | Epub Date: 2026-01-28 | DOI: 10.1016/j.ins.2026.123163
Chao Wang, Weiwei Fu, Haoyang Li, Linqi Ye, Yang Zhou
Visual Question Answering (VQA) models frequently rely on language priors while overlooking visual content. Current mainstream debiasing methods face limitations: data augmentation techniques demand high manual annotation costs and struggle to achieve balanced mitigation of biases, while ensemble-based approaches only capture language priors through a QA branch without fully identifying comprehensive bias. We propose FAIR, a bias reshaping method that utilizes pseudo-label functions to balance distribution bias and emphasizes learning weights for challenging samples. Moreover, we find that using model logit distributions as a substitute can achieve comparable effects to traditional data distribution annotations required by previous ensemble methods. Experimental results demonstrate that FAIR achieves the best balance among comparable methods, reaching 64.03% accuracy on VQA v2 and 60.96% on VQA-CP v2.
Title: "Focal equilibrium: Bias reshaping for generalizable and robust visual understanding" (Information Sciences, vol. 739, Article 123163)
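FAIR's emphasis on learning weights for challenging samples resembles focal-loss-style reweighting. A minimal sketch of that general idea (an illustration only, not the authors' implementation; the weighting function and the γ value are assumptions):

```python
import math

def focal_weights(probs, gamma=2.0):
    """Down-weight easy samples: weight = (1 - p)^gamma, where p is the
    model's probability for the correct answer (focal-loss style)."""
    return [(1.0 - p) ** gamma for p in probs]

def weighted_nll(probs, gamma=2.0):
    """Focal-weighted negative log-likelihood over a batch."""
    ws = focal_weights(probs, gamma)
    return sum(w * -math.log(p) for w, p in zip(ws, probs)) / len(probs)
```

A confidently answered sample (p close to 1) contributes almost nothing, while a hard sample keeps a large weight, shifting learning toward bias-conflicting examples.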
Merchant-fishing vessel collision accidents threaten crew safety, property, and marine ecology. Currently, accident reports in unstructured text form hinder efficient risk-based decision-making during navigation. To improve the accuracy and efficiency of risk assessment and decision-making, this paper proposes a semi-automated multi-stage risk assessment framework based on large language models (LLMs), machine learning algorithms, and Grey Relational Analysis. First, a workflow based on the Dify platform is constructed to extract and identify risk-influencing factors (RIFs) from ship collision accident reports. Second, Bayesian network structure learning and association rule mining are utilized to explore the causal relationships among these factors. Third, a Grey Relational Attraction Analysis (GRAA) model is established to reduce the dependence of complex, multi-dimensional risk assessment on limited sample sizes, enhancing the reliability of risk quantification. The results indicate that core-layer risk factors such as inadequate crew competence, fatigue, and improper emergency response play key roles in the ship collision accident propagation network. Additionally, this study constructs an integrated risk assessment pipeline based on the Dify-LLM workflow, achieving rapid extraction and quantification of risk information from unstructured text.
Title: "Risks analysis and countermeasures research of merchant fishing vessels collision accidents based on LLM and GRAA" (Information Sciences, vol. 739, Article 123167)
Authors: Xueman Wang, Xinping Xiao, Mingyun Gao, Congjun Rao
Pub Date: 2026-05-25 | DOI: 10.1016/j.ins.2026.123167
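The paper's Grey Relational Attraction Analysis builds on classical Grey Relational Analysis. The textbook grey relational grade can be sketched as follows (a sketch of the classical method only; the GRAA extension itself is not reproduced here):

```python
def grey_relational_grade(ref, seq, rho=0.5):
    """Classical grey relational grade between a reference sequence and a
    comparison sequence (distinguishing coefficient rho, commonly 0.5)."""
    deltas = [abs(r - s) for r, s in zip(ref, seq)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:
        return 1.0  # sequences are identical
    # per-point grey relational coefficients, then their mean
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Grades closer to 1 indicate stronger relational similarity between a risk factor's series and the reference series, which is what makes small-sample ranking of RIFs feasible.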
Pub Date: 2026-05-25 | Epub Date: 2026-01-29 | DOI: 10.1016/j.ins.2026.123168
Wei Wang, Zixin Huang, Ning Chen
This paper investigates the model-free finite-horizon optimal control problem of discrete-time linear time-invariant systems with a prescribed degree of stability. Initially, a novel finite-horizon cost function is formulated, and the corresponding time-varying Riccati equation (TVRE) is derived. It is proven that the solution to the TVRE ensures the exponential stability of the closed-loop system with a prescribed degree. Subsequently, a time-varying Q-function is designed, and a Q-learning-based backward-in-time algorithm is developed to estimate solutions for the TVRE and the optimal time-varying state feedback gains, all without requiring knowledge of the system dynamics. Finally, a simulation study is conducted to validate the efficacy of the proposed algorithm. It shows that as the setting parameter decreases, the degree of stability increases, and the convergence to the equilibrium point becomes faster.
Title: "Model-free finite-horizon optimal control of linear systems with prescribed degree of stability" (Information Sciences, vol. 739, Article 123168)
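The backward-in-time structure of a time-varying Riccati equation can be illustrated with the standard scalar finite-horizon LQR recursion (a model-based sketch under assumed scalar dynamics; the paper's contribution is estimating these quantities model-free via Q-learning):

```python
def riccati_backward(a, b, q, r, qf, horizon):
    """Backward recursion for the scalar finite-horizon LQR Riccati
    difference equation; returns the time-varying gains k_0 .. k_{N-1}."""
    p = qf          # terminal condition P_N = Q_f
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # stage feedback gain
        p = q + a * p * a - a * p * b * k   # Riccati update, one step back
        gains.append(k)
    gains.reverse()  # computed from stage N-1 down to stage 0
    return gains
```

For a = b = q = r = qf = 1 the recursion converges to the steady-state gain 1/φ ≈ 0.618 (φ the golden ratio), and the closed loop a − b·k ≈ 0.382 is stable.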
Pub Date: 2026-05-25 | Epub Date: 2026-01-29 | DOI: 10.1016/j.ins.2026.123145
Xiaopeng Yi, Chongyang Liu, Huey Tyng Cheong, Kok Lay Teo
In this paper, we investigate dynamic optimization problems governed by nonlinear fractional switched systems with multiple time-delays and subject to terminal state inequality constraints, where both the switching times and system parameters are treated as decision variables. For this problem, we first transform it into an equivalent form on a normalized time horizon with fixed switching points using a novel time-scaling transformation, with time-delays expressed in terms of subsystem durations in the original time horizon. A third-order numerical integration scheme is then applied to discretize the transformed problem, resulting in a discrete-time dynamic optimization problem. Furthermore, gradients of the cost and constraint functions with respect to the decision variables are derived, and a gradient-based optimization algorithm is developed to solve the resulting problem. Lastly, three representative numerical examples are provided to showcase the effectiveness and broad applicability of the proposed method.
Title: "Dynamic optimization of nonlinear fractional switched systems with multiple time-delays" (Information Sciences, vol. 739, Article 123145)
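A time-scaling transformation maps variable switching instants onto fixed integer switching points of a normalized horizon. A minimal sketch of the inverse map from normalized time back to original time (illustrative only; the paper applies the transformation to fractional delayed dynamics, which this sketch does not model):

```python
def time_scaling(switch_times, s):
    """Map normalized time s in [0, n] (fixed switching points at the
    integers) back to original time, where switch_times = [t0, t1, ..., tn]
    are the variable switching instants with t0 < t1 < ... < tn."""
    k = min(int(s), len(switch_times) - 2)  # index of the active subsystem
    frac = s - k                             # position within subinterval k
    return switch_times[k] + frac * (switch_times[k + 1] - switch_times[k])
```

Each normalized unit interval [k, k+1] stretches linearly onto the k-th subsystem's duration, so switching times become fixed and only the durations remain as decision variables.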
Pub Date: 2026-05-25 | Epub Date: 2026-01-31 | DOI: 10.1016/j.ins.2026.123179
Xia Lin, Kai Zhang, Ligang Zhou
The criterion-oriented three-way decision model is an effective tool for simultaneously obtaining preference ranking results and three-way classification results in a multi-criteria environment. In view of this, this paper attempts to analyze the research framework of criterion-oriented three-way decision models from the perspective of four different generalized risk functions, which provides a valuable way to obtain both preference ranking results and three-way classification results in multi-criteria decision-making. Firstly, based on a criterion-oriented benefit fuzzy concept, this paper employs a generalized risk function (comprising both the relative loss and relative utility functions) to analyze decision threshold construction and develop two types of three-way decision models based on criterion-oriented benefit fuzzy concepts. Then, by comparing the criterion-oriented benefit fuzzy concept and considering practical semantics, this paper introduces the criterion-oriented cost fuzzy concept. Building on this, utilizing the relative loss and relative utility functions, two types of three-way decision models are proposed based on the criterion-oriented cost fuzzy concept. The relationships and distinctions among the four proposed three-way decision models are then analyzed in detail. The results reveal that, in different three-way decision models, when the monotonic increasing function f and the monotonic decreasing function g respectively retain the same value, the four types of risk functions exhibit the following regularities: α1 = α2, α3 = α4; β1 = β2, β3 = β4; γ1 = γ2, γ3 = γ4; α1 + β3 = 1, γ1 + γ3 = 1. Finally, using a specific relative risk function, this paper demonstrates the effectiveness of the proposed models through case studies and related discussions.
Title: "The criterion-oriented three-way decision models with generalized risk function" (Information Sciences, vol. 739, Article 123179)
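For orientation, the classical loss-based three-way thresholds (α, β), from which models of this kind derive their accept/reject/defer regions, can be computed as follows (a textbook sketch, not the paper's generalized risk functions; the example loss values are assumptions):

```python
def three_way_thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
    """Classical three-way decision thresholds from a loss matrix, where
    l_xy is the loss of taking action x (P=accept, B=defer, N=reject)
    when the true state is y (p = positive, n = negative)."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def classify(p, alpha, beta):
    """Three-way rule: compare the conditional probability p to (alpha, beta)."""
    if p >= alpha:
        return "accept"
    if p <= beta:
        return "reject"
    return "boundary"
```

With losses (l_pp, l_bp, l_np, l_nn, l_bn, l_pn) = (0, 1, 4, 0, 1, 4) this gives α = 0.75 and β = 0.25, so probabilities between the two thresholds fall into the boundary (deferment) region.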
Pub Date: 2026-05-25 | Epub Date: 2026-01-29 | DOI: 10.1016/j.ins.2026.123171
Yu Su, Junyi Zhang
While edge streaming is becoming increasingly prevalent in missions that demand real-time stream processing, conventional data pipelines often suffer from inefficiencies caused by redundant memory copies, thread contention, and high latency in edge computing applications. To address these challenges, this paper proposes a new zero-copy lock-free data pipeline framework (LZDP) grounded in an adaptable edge streaming architecture. The contributions of this work are: (a) a pipelined structural zero-copy mechanism; (b) layered folding executors with memory atomic frames and synchronized atomic operators that eliminate redundant data copying; (c) an extensible template-driven in-band control loop that reconstructs the datapath with a signal-synchronous routing strategy to ensure multi-thread safety; and (d) lock-free synchronization under a unified multi-producer/multi-consumer model, implemented through a pipelined zero-coupling design with hot-pluggable components tailored to stream-batch integrated processing. Microbenchmarks and experiments varying thread count and message size demonstrate that the proposed pipeline achieves 3-4x improvements over comparative frameworks in latency and throughput. By replacing internal execution-queue backends with alternative approaches within the unified LZDP logic and operator chains, experiments and ablation studies validate the synergistic effects and throughput scalability of the proposed designs. Further tests reveal that deploying generic streaming middleware for architectural consistency inevitably entails additional performance trade-offs, whereas an architecture designed for specific edge scenarios offers distinct advantages in deployment cost, footprint, and adaptability.
Title: "A zero-copy lock-free data pipeline for edge streaming" (Information Sciences, vol. 739, Article 123171)
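The single-writer-per-index discipline behind lock-free pipelines can be sketched with a bounded single-producer/single-consumer ring buffer (a simplified Python illustration of the scheme, not LZDP's zero-copy executors or its multi-producer model; real implementations rely on atomic memory operations that Python does not expose):

```python
class SpscRing:
    """Bounded SPSC ring buffer: the producer only writes `head`, the
    consumer only writes `tail`, so with single-writer indices no lock is
    required. Items are handed over by reference, not copied."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0  # advanced only by the producer
        self.tail = 0  # advanced only by the consumer

    def push(self, item):
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:
            return False            # full: caller drops or retries
        self.buf[self.head] = item  # publish the slot, then the index
        self.head = nxt
        return True

    def pop(self):
        if self.tail == self.head:
            return None             # empty
        item = self.buf[self.tail]
        self.buf[self.tail] = None  # release the reference
        self.tail = (self.tail + 1) % len(self.buf)
        return item
```

One slot is sacrificed to distinguish "full" from "empty", so a capacity-3 ring holds at most 2 in-flight items.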
Pub Date: 2026-05-25 | Epub Date: 2026-01-24 | DOI: 10.1016/j.ins.2026.123146
Junyi Gou, Liangliang Sun, Jing Liu, Zhenghao Song, Ge Guo, She-Gan Gao, Ke Liu, Natalja Matsveichuk, Yuri Sotskov
Differential Evolution (DE), a population-driven stochastic optimization technique, has garnered significant interest among researchers across diverse disciplines because of its simple approach, high resilience, and few control parameters. However, numerous existing DE variants frequently encounter limitations when tackling intricate optimization problems, especially due to premature convergence weakness. To mitigate these deficiencies, the paper proposes an adaptive differential evolution with a deeply informed mutation strategy and historical information for numerical optimization (ADEDH), the main contributions of which can be outlined as follows: Firstly, a bi-stage parameter control strategy is proposed to achieve a better balance between exploration and exploitation. Secondly, a deeply informed mutation strategy is implemented, which uses the historical population to mirror the objective landscape and help guide the evolution. Thirdly, a diversity enhancement strategy based on historical information is proposed to tackle the premature convergence weakness. ADEDH is evaluated against nine outstanding competitors under a vast testing framework, containing CEC2013, CEC2014, and CEC2017 test suites. Additionally, the feasibility of ADEDH is further validated through its application to the parameter identification problem of a photovoltaic model. Experimental results demonstrate that ADEDH diversifies the population, attains superior solution precision, and achieves better stability.
Title: "An adaptive differential evolution with deeply informed mutation strategy and historical information for numerical optimization" (Information Sciences, vol. 739, Article 123146)
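For reference, the classic DE/rand/1/bin baseline that adaptive variants such as ADEDH extend can be sketched as follows (standard DE only, not the bi-stage or deeply informed strategies; the parameter values are conventional defaults):

```python
import random

def de_optimize(f, bounds, pop_size=20, f_scale=0.5, cr=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: scaled difference-vector mutation,
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)  # dimension that always crosses over
            trial = [
                pop[a][d] + f_scale * (pop[b][d] - pop[c][d])
                if (d == jr or rng.random() < cr) else pop[i][d]
                for d in range(dim)
            ]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:         # greedy selection keeps the better one
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a 3-dimensional sphere function this baseline converges quickly; ADEDH's additions target exactly the cases (multimodal, deceptive landscapes) where such a plain loop stagnates.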
Pub Date: 2026-05-25 | Epub Date: 2026-02-09 | DOI: 10.1016/j.ins.2026.123188
Tao Lv, Jian Dai, Chao Yang, Yuan Sun, Fei Zhou, Zhenwen Ren
Multiple kernel clustering (MKC) has gained significant attention due to its capacity to efficiently handle nonlinear information. However, existing MKC methods often overlook heterogeneous neighbor relations and fail to fully capture high-order complementary information across kernels. Consequently, these limitations impede its widespread adoption in practical applications. To address these limitations, we propose a novel clustering method termed Tensorized Topological Manifold for Multiple Kernel Clustering (i.e., TTMMKC). Specifically, TTMMKC constructs a kernel topological graph by adopting a manifold topological structure as a means to measure similarities between instances instead of the Euclidean structure to explore the local neighbor propagation structure in Hilbert space. Moreover, TTMMKC integrates multiple kernel topological graphs into a third-order tensor and utilizes the tensor nuclear norm to explore complementary consistency information between heterogeneous kernel topological graphs. Experimental results on benchmark datasets across various scenarios demonstrate that TTMMKC effectively leverages higher-order topological structures to accurately capture the intrinsic propagation information of data in Hilbert space.
Title: "Tensorized topological manifold for multiple kernel clustering" (Information Sciences, vol. 739, Article 123188)
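The idea of keeping only local neighbor structure from a kernel matrix can be sketched with a simple k-nearest-neighbor graph (an illustration of the neighbor-graph step only; TTMMKC's manifold topological measure and tensor nuclear norm are not reproduced here):

```python
def knn_graph(kernel, k):
    """Keep each instance's k strongest kernel similarities (its local
    neighborhood) and symmetrize, as a simple stand-in for a kernel
    topological graph."""
    n = len(kernel)
    g = [[0.0] * n for _ in range(n)]
    for i in range(n):
        nbrs = sorted((j for j in range(n) if j != i),
                      key=lambda j: kernel[i][j], reverse=True)[:k]
        for j in nbrs:
            g[i][j] = kernel[i][j]
    # symmetrize so the neighbor graph is undirected
    return [[max(g[i][j], g[j][i]) for j in range(n)] for i in range(n)]
```

Stacking one such graph per kernel into a third-order tensor is the structure on which a tensor nuclear norm can then enforce cross-kernel consistency.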
Pub Date: 2026-05-25 | Epub Date: 2026-02-03 | DOI: 10.1016/j.ins.2026.123185
Abdelilah Serji, El Bekkaye Mermri, Mohammed Blej
Automatic Emergency Braking (AEB) systems help prevent or reduce collisions by automatically applying brakes when drivers fail to react in time. This paper proposes a novel approach to model AEB systems using Fuzzy Timed Petri Nets (FTPN), integrating the 3-, 4-, and 5-second rules that define safety distance based on speed and distance. Our approach offers a cost-effective solution for real-time AEB control by modelling deceleration timing relative to vehicle speed and environment. FTPNs combine fuzzy logic with timed Petri nets to capture the uncertainty and timing in AEB scenarios. The proposed system models different rules to determine braking decisions using fuzzy variables such as vehicle speed, front distance, and safety margin. An expert system generates Fuzzy Timed Production Rules (FTPR), which are then converted to FTPNs for accurate modelling. This rule-to-model transformation is essential for system accuracy. Over 1000 models were tested in various scenarios, with top performers achieving critical delays under 0.003 s and over 90% verification success. This research contributes to road safety by enabling vehicles to make timely and informed braking decisions in emergencies.
{"title":"Modelling safety distance rule-based automatic emergency braking systems using fuzzy timed petri nets","authors":"Abdelilah Serji, El Bekkaye Mermri, Mohammed Blej","doi":"10.1016/j.ins.2026.123185","DOIUrl":"10.1016/j.ins.2026.123185","url":null,"abstract":"<div><div>Automatic Emergency Braking (AEB) systems help prevent or reduce collisions by automatically applying brakes when drivers fail to react in time. This paper proposes a novel approach to model AEB systems using Fuzzy Timed Petri Nets (FTPN), integrating the 3-, 4-, and 5-second rules that define safety distance based on speed and distance. Our approach offers a cost-effective solution for real-time AEB control by modelling deceleration timing relative to vehicle speed and environment. FTPNs combine fuzzy logic with timed Petri nets to capture the uncertainty and timing in AEB scenarios. The proposed system models different rules to determine braking decisions using fuzzy variables such as vehicle speed, front distance, and safety margin. An expert system generates Fuzzy Timed Production Rules (FTPR), which are then converted to FTPNs for accurate modelling. This rule-to-model transformation is essential for system accuracy. Over 1000 models were tested in various scenarios, with top performers achieving critical delays under 0.003 s and over 90% verification success. This research contributes to road safety by enabling vehicles to make timely and informed braking decisions in emergencies.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"739 ","pages":"Article 123185"},"PeriodicalIF":6.8,"publicationDate":"2026-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146191146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
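The N-second rules mentioned in this abstract can be made concrete with a short sketch. This is not the paper's FTPN model: the fuzzy-logic and timed-Petri-net machinery is replaced here by a crisp time-headway check, and the function names, the graded `partial_brake`/`full_brake` outputs, and the 0.5 danger threshold are illustrative assumptions, not values from the paper.

```python
def safety_distance(speed_mps: float, headway_s: float = 3.0) -> float:
    """Required gap (m) under an N-second time-headway rule: gap = speed * N.

    The 3-, 4-, and 5-second rules differ only in the headway_s value,
    typically chosen from road and weather conditions.
    """
    return speed_mps * headway_s


def braking_decision(speed_mps: float, front_distance_m: float,
                     headway_s: float = 3.0) -> str:
    """Crisp stand-in for a fuzzy braking rule: compare the measured front
    distance against the headway-based safety distance and grade the
    response by how deep the vehicle is inside the safety envelope."""
    required = safety_distance(speed_mps, headway_s)
    if front_distance_m >= required:
        return "no_brake"
    danger = 1.0 - front_distance_m / required  # degree of violation in (0, 1]
    return "full_brake" if danger > 0.5 else "partial_brake"
```

For example, at 20 m/s under the 3-second rule the required gap is 60 m, so a 40 m gap triggers partial braking and a 20 m gap triggers full braking. In the paper this crisp threshold is replaced by fuzzy membership functions over speed, front distance, and safety margin, evaluated through the FTPN.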
Pub Date : 2026-05-25Epub Date: 2026-01-21DOI: 10.1016/j.ins.2026.123138
Ziqi Meng , Wentao Fan , Bo Wang , Chunlin Chen , Huaxiong Li
In data mining, feature selection is an essential process. Popular embedded approaches generally incorporate feature selection into a classification or regression task with sparse learning. Due to the common scarcity of label information, unsupervised feature selection has attracted increasing attention. Most current methods face two challenges. Firstly, a vast majority of them rely on discovering the similarity relationships among samples to guide feature selection, which limits their efficiency and scalability due to the high time consumption of similarity graph learning. Secondly, they generally explore the data in the original or a fixed low-dimensional space, i.e., from a single-view perspective, which may not sufficiently exploit the underlying information. To address these issues, this paper proposes EPFS, a novel unsupervised Feature Selection method based on diverse Embeddings and consensus Pseudo-supervision, which tackles the problem efficiently from a multi-view perspective. The EPFS framework integrates latent embedding learning, consensus pseudo-label learning, and sparse feature selection, enabling their mutual reinforcement and synergistic enhancement. To enhance pseudo-label quality, EPFS generates multiple distinct latent embeddings by mapping the original data into heterogeneous informative subspaces while minimizing an encoder–decoder reconstruction loss. An auto-weighted collaboration strategy is adopted to learn a consensus pseudo-label matrix from the diverse embeddings. The sparse feature selection process is seamlessly incorporated into the framework. With an efficient linear-time algorithm, our model surpasses existing state-of-the-art approaches in experimental evaluations.
{"title":"Diverse embeddings and consensus pseudo-supervision learning for unsupervised feature selection","authors":"Ziqi Meng , Wentao Fan , Bo Wang , Chunlin Chen , Huaxiong Li","doi":"10.1016/j.ins.2026.123138","DOIUrl":"10.1016/j.ins.2026.123138","url":null,"abstract":"<div><div>The popular embedded feature selection approaches generally incorporate feature selection into a classification or regression task with sparse learning. In data mining, feature selection serves as an essential process. Due to the common scarcity of label information, unsupervised feature selection has attracted increasing attention. Most current methods face two challenges. Firstly, a vast majority of them rely on discovering the similarity relationships among samples to guide feature selection, which limits their efficiency and scalability due to the high time consumption of similarity graph learning. Secondly, they generally explore the data in the original or a fixed low-dimensional space, i.e., from a single-view perspective, which may not sufficiently exploit the underlying information. To address these issues, a novel diverse Embeddings and consensus Pseudo-supervision based unsupervised Feature Selection method, i.e., EPFS, is proposed in this paper, which solves the problem from a multi-view perspective in an efficient way. The EPFS framework integrates latent embedding learning, consensus pseudo-label learning, and sparse feature selection, enabling their mutual reinforcement and synergistic enhancement. For enhancing the pseudo-label quality, EPFS generates multiple distinct latent embeddings by mapping the original data into heterogeneous informative subspaces with simultaneous encoder–decoder reconstruction loss minimization. An auto-weighted collaboration strategy is adopted to learn a consensus pseudo-label matrix by using diverse embeddings. The sparse feature selection process is seamlessly incorporated into the framework. With an efficient linear-time algorithm, our model surpasses existing state-of-the-art approaches in experimental evaluations.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"739 ","pages":"Article 123138"},"PeriodicalIF":6.8,"publicationDate":"2026-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146081917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}