ICAFS: Inter-Client-Aware Feature Selection for Vertical Federated Learning
Ruochen Jin, Boning Tong, Shu Yang, Bojian Hou, Li Shen
Pub Date: 2025-12-23 | DOI: 10.1109/tai.2025.3647596
IEEE Transactions on Artificial Intelligence. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12893625/pdf/

Vertical federated learning (VFL) is a paradigm in which clients holding vertically partitioned data collaboratively train machine learning models. Feature selection (FS) plays a crucial role in VFL because data are distributed across multiple clients: different clients possess distinct subsets of features for overlapping data samples, which makes identifying and selecting the most relevant features a complex yet essential task. Previous FS efforts have primarily revolved around intra-client feature selection, overlooking vital feature interactions across clients and leading to subpar model outcomes. We introduce ICAFS, a novel multi-stage ensemble approach for effective FS in VFL that accounts for inter-client interactions. By employing conditional feature synthesis alongside multiple learnable feature selectors, ICAFS performs ensemble FS over these selectors using synthetic embeddings. This method bypasses the limitations of private gradient sharing and allows model training on real data with refined embeddings. Experiments on multiple real-world datasets demonstrate that ICAFS surpasses current state-of-the-art methods in prediction accuracy.
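To make the "learnable feature selector" idea concrete, here is a deliberately minimal, hypothetical sketch: one sigmoid gate per feature, trained jointly with a linear classifier under an L1-style penalty on the gates, so informative features keep high gate values while uninformative ones are pushed down. This is a generic illustration of gated feature selection only; ICAFS's actual selectors, conditional feature synthesis, and cross-client ensembling are substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)   # only features 0 and 1 matter

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

w = np.zeros(d)          # classifier weights
alpha = np.zeros(d)      # gate logits, one per feature
lr, lam = 0.1, 0.02      # learning rate and gate-sparsity penalty (chosen by hand)
for _ in range(1000):
    g = sigmoid(alpha)                   # soft gates in (0, 1)
    p = sigmoid((X * g) @ w)             # logistic prediction on gated features
    err = p - y
    grad_w = (X * g).T @ err / n
    # d(loss)/d(alpha_j) via the chain rule, plus sparsity pressure lam * d(g_j)/d(alpha_j)
    grad_a = ((X * w).T @ err / n + lam) * g * (1 - g)
    w -= lr * grad_w
    alpha -= lr * grad_a

gates = sigmoid(alpha)   # the two informative features should end with the largest gates
```

After training, ranking features by their gate values recovers the informative ones; in a VFL setting each client would hold such gates only for its own feature subset.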
2025 Index IEEE Transactions on Artificial Intelligence
Pub Date: 2025-12-08 | DOI: 10.1109/TAI.2025.3641262
IEEE Transactions on Artificial Intelligence, vol. 6, no. 12, pp. 1-61.
FCA-HLP: Multilayer Feature Cross-Activation Network With High- and Low-Level Prototypes for Few-Shot Segmentation
Jiaguang Li; Ying Wei; Zihan Gao; Yubo Wang
Pub Date: 2025-09-09 | DOI: 10.1109/TAI.2025.3607850
IEEE Transactions on Artificial Intelligence, vol. 7, no. 3, pp. 1803-1815.

Few-shot segmentation strives to segment novel categories using only a limited number of labeled images. Current prototype learning and correlation learning approaches struggle to effectively harness both high- and low-level information from support and query images, leading to suboptimal segmentation results. In this work, we propose a multilayer feature cross-activation (FCA) network with high- and low-level prototypes (HLP), which fully exploits support and query information from both the feature and prototype perspectives. For the FCA module, we design a simple activation method that uses all pixel-level support foreground features to activate query features, thereby obtaining activation maps without losing the pixel-level detail of the support features. For the HLP module, we combine image-level prototypes with pixel-level prototypes to fully utilize the high-level category information and low-level attribute information of the support features. In addition, the prototype generation method integrates query information, which enables the prototypes to better match target query features, especially when query and support images differ greatly. Extensive experiments on PASCAL-$5^{i}$ and COCO-$20^{i}$ under 1-shot and 5-shot settings validate the effectiveness of our FCA-HLP, which establishes new state-of-the-art performance. Additionally, we analyze the performance of the multilayer FCA network in the absence of the HLP module; the results indicate that even without prototypes, the FCA module can still deliver strong performance.
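The building blocks mentioned here — an image-level prototype from support foreground features, and an activation map over query features — are commonly implemented as masked average pooling plus cosine similarity. The sketch below shows that standard construction under those assumptions; it is not the authors' FCA or HLP implementation.

```python
import numpy as np

def masked_avg_prototype(feat, mask):
    """Image-level prototype: mean of foreground feature vectors.
    feat: (C, H, W) support feature map; mask: (H, W) binary foreground mask."""
    fg = feat[:, mask.astype(bool)]      # (C, N_fg) foreground pixel features
    return fg.mean(axis=1)               # (C,)

def activation_map(query_feat, prototype):
    """Cosine similarity between each query pixel feature and the prototype."""
    C, H, W = query_feat.shape
    q = query_feat.reshape(C, -1)        # (C, H*W)
    sim = (prototype @ q) / (np.linalg.norm(prototype) * np.linalg.norm(q, axis=0) + 1e-8)
    return sim.reshape(H, W)

# Tiny synthetic example: foreground pixels carry channel-0 features,
# background pixels carry channel-1 features.
feat = np.zeros((3, 4, 4))
feat[1] = 1.0                            # background pattern everywhere
mask = np.zeros((4, 4))
mask[0, :2] = 1
feat[:, 0, 0] = [1.0, 0.0, 0.0]
feat[:, 0, 1] = [1.0, 0.0, 0.0]

proto = masked_avg_prototype(feat, mask)
act = activation_map(feat, proto)        # high on foreground, low on background
```

Approaches like FCA additionally keep all pixel-level support features instead of collapsing them into a single vector, which is what preserves the low-level detail the abstract emphasizes.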
CIBLS-PLS: A Class-Incremental Broad Learning System With Pseudolabel-Guided Stacked Structure
Xin Liu; Zhaoyin Shi; Shuanghao Zhang; Long Chen; Weiping Ding; Xiaopin Zhong; Zongze Wu; C. L. Philip Chen
Pub Date: 2025-09-09 | DOI: 10.1109/TAI.2025.3606902
IEEE Transactions on Artificial Intelligence, vol. 7, no. 3, pp. 1745-1760.

Class-incremental learning (CIL) with the broad learning system (BLS) has emerged as a computationally efficient alternative to deep incremental models. However, existing BLS-based CIL methods struggle with complex data distributions and are highly sensitive to hyperparameter tuning, leading to suboptimal knowledge retention. To address these challenges, we propose CIBLS-PLS (class-incremental broad learning system with pseudolabel-guided stacked structure), which enhances knowledge retention and adaptability. Unlike a traditional stacked BLS, where blocks are strictly chained through input-output dependencies, CIBLS-PLS adopts a more flexible stacking structure, allowing each block to contribute independently to knowledge preservation. Each BLS layer integrates dual-storage modules that retain key features from previous data, while pseudolabels are generated to facilitate seamless knowledge integration within the stacked residual learning framework. Model parameters are updated efficiently via closed-form ridge regression, significantly reducing computational overhead while maintaining high accuracy. Additionally, to further enhance generalization, an adaptive scaling mechanism dynamically regulates the contribution of residual blocks, effectively preventing overfitting as the number of blocks increases. This property is rigorously validated through both theoretical analysis and extensive experiments. Results on seven large-scale image datasets demonstrate that CIBLS-PLS achieves state-of-the-art performance in accuracy and knowledge retention while maintaining competitive computational efficiency, paving the way for robust and scalable broad learning-based incremental models.
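The "closed-form ridge regression" update at the heart of BLS-style training is the standard regularized least-squares solution W = (AᵀA + λI)⁻¹AᵀY, where A is the expanded feature matrix and Y the (pseudo)label matrix. A minimal sketch of that generic update (not the authors' full dual-storage/stacked scheme):

```python
import numpy as np

def ridge_weights(A, Y, lam=1e-3):
    """Closed-form ridge regression: W = (A^T A + lam * I)^(-1) A^T Y.
    A: (n, d) feature/enhancement-node matrix, Y: (n, k) targets."""
    d = A.shape[1]
    # Solve the regularized normal equations instead of forming an explicit inverse.
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

# Sanity check on synthetic linear data: with a tiny lam, the
# closed-form solution recovers the generating weights.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
W_true = np.array([[1.0], [2.0], [-0.5]])
Y = A @ W_true
W = ridge_weights(A, Y, lam=1e-8)
```

Because the solve is a single linear-algebra step, adding a new class or block only requires refreshing A and Y and re-solving, which is why BLS-based incremental learners avoid iterative gradient training.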
Multirate Distributed Receding Horizon Reinforcement Learning for Optimal UAV–UGV Formation Control
Xinglong Zhang; Cong Li; Ronghua Zhang; Quan Xiong; Wei Jiang; Xin Xu
Pub Date: 2025-09-09 | DOI: 10.1109/TAI.2025.3607722
IEEE Transactions on Artificial Intelligence, vol. 7, no. 3, pp. 1776-1791.

The coordination of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) is valuable in many applications, such as emergency search and rescue, and has received increasing attention in recent years. Given their distinct tasks and dynamic characteristics, the UAV team is typically controlled with higher maneuverability for rapid searching, while the UGV team is operated on roads at lower speeds to ensure stability and performance. This discrepancy naturally results in a multirate control problem, which has not been adequately addressed in previous works. Therefore, we present a multirate distributed receding horizon reinforcement learning (RHRL) framework to solve the optimal UAV–UGV formation control problem on fast and slow time scales. The proposed approach includes a distributed RHRL algorithm operating at a slower time scale for the formation control of UGV teams, and another distributed RHRL algorithm functioning at a faster time scale for the formation control of UAV teams. The state information among homogeneous UAV/UGV agents and heterogeneous agents across different teams is exchanged at different frequencies to balance control performance and communication load. Notably, our approach integrates the receding horizon strategy to enhance learning efficiency and provides theoretical guarantees in multirate distributed RL. Theoretically, learning convergence at different time scales and closed-loop stability are guaranteed. Comparative numerical validations demonstrate the effectiveness of our approach in heterogeneous UAV–UGV formation control under different time scales and tasks.
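The multirate structure can be illustrated with a toy two-time-scale loop: a slow agent updates (and communicates its state) only every few fast steps, while a fast agent continuously tracks the last communicated state. Everything below — 1-D kinematics, proportional gains, the 10:1 rate ratio — is an assumption of this sketch, not the paper's RHRL controllers.

```python
import numpy as np

FAST_DT, RATIO = 0.01, 10      # slow update happens every RATIO fast steps
x_uav, x_ugv = 0.0, 0.0        # toy 1-D positions
target = 5.0                   # UGV goal; the UAV holds a +1.0 offset above it
k_fast, k_slow = 2.0, 0.5      # proportional gains (chosen by hand)

shared_ugv = x_ugv             # UGV state as last communicated to the UAV
for step in range(2000):
    if step % RATIO == 0:
        # Slow time scale: UGV control update and cross-team state exchange.
        x_ugv += RATIO * FAST_DT * k_slow * (target - x_ugv)
        shared_ugv = x_ugv
    # Fast time scale: UAV tracks the last known UGV state plus the offset,
    # without requiring fresh UGV data every step.
    x_uav += FAST_DT * k_fast * ((shared_ugv + 1.0) - x_uav)
```

The point of the exercise is that the fast loop stays stable between slow communications, which is the intuition behind trading communication load against control performance.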
Distributed Reinforcement Learning Optimal Cluster Consensus Control for Takagi–Sugeno Fuzzy Multiagent Systems
Hui Li; Jun Ning; Shaocheng Tong
Pub Date: 2025-09-09 | DOI: 10.1109/TAI.2025.3607790
IEEE Transactions on Artificial Intelligence, vol. 7, no. 3, pp. 1792-1802.

This article studies the distributed optimal cluster consensus control problem with a data-driven value iteration (VI) algorithm for Takagi–Sugeno (T–S) fuzzy multiagent systems (MASs) with unknown system dynamics. In the distributed optimal cluster consensus control design, we view each agent's control policy and its neighboring followers' control policies as rival players, and then propose a fuzzy distributed optimal cluster consensus control policy by applying differential graphical game theory and acyclic partition. Since the optimal cluster consensus controllers reduce to solving the distributed game algebraic Riccati equations (GAREs), whose analytical solutions are difficult to obtain, a data-driven VI algorithm is presented. It is proven that the developed algorithm converges to approximate solutions of the optimal controllers, and that the proposed fuzzy distributed optimal cluster consensus control scheme not only guarantees that the followers in each cluster asymptotically track their corresponding leaders but also achieves the Nash equilibrium of the differential graphical game. Finally, we apply the developed fuzzy distributed optimal cluster consensus control method with the data-driven VI algorithm to multiple nonlinear unmanned surface vehicle (USV) systems; the simulation results verify the effectiveness of the developed optimal control approach.
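As background for the VI idea, the classical single-agent, model-based analogue iterates the discrete-time Riccati recursion from P = 0 until it satisfies the algebraic Riccati equation; the distributed GAREs in this article generalize that fixed-point structure to the game setting. The sketch below is only the textbook LQR case (double-integrator dynamics chosen for illustration), not the data-driven fuzzy algorithm.

```python
import numpy as np

# Discrete-time double integrator with cost x^T Q x + u^T R u.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

# Value iteration on the Riccati recursion:
# P_{k+1} = Q + A^T P_k A - A^T P_k B (R + B^T P_k B)^{-1} B^T P_k A
P = np.zeros((2, 2))
for _ in range(2000):
    BtPA = B.T @ P @ A
    P = Q + A.T @ P @ A - BtPA.T @ np.linalg.solve(R + B.T @ P @ B, BtPA)

# At convergence P is a fixed point, so the ARE residual vanishes.
BtPA = B.T @ P @ A
res = Q + A.T @ P @ A - BtPA.T @ np.linalg.solve(R + B.T @ P @ B, BtPA) - P
```

A data-driven variant replaces the A, B terms with quantities estimated from measured state/input trajectories, which is what allows the article's algorithm to dispense with known system dynamics.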
Wasserstein Distance-Based Multisource Heterogeneous Graph Adaptation for Cross-Network Node Classification
Hongwei Yang; Jiaoxuan Lin; Hui He; Weizhe Zhang; Letu Suya
Pub Date: 2025-09-09 | DOI: 10.1109/TAI.2025.3606456
IEEE Transactions on Artificial Intelligence, vol. 7, no. 3, pp. 1730-1744.

Cross-network node classification seeks to leverage labeled source networks to assist node classification in an unlabeled target network. However, existing heterogeneous graph adaptation methods often rely on restrictive assumptions, such as the presence of a single source network or strong correlations between source and target nodes, which rarely hold in practice. To address this, we propose a novel Wasserstein distance-based multisource heterogeneous graph adaptation framework (WMHGA), which aims to learn transferable node representations across networks in order to improve the accuracy of node classification tasks. Specifically, we propose a Wasserstein distance-based heterogeneous graph adaptation approach to learn node representations that are invariant to domain variations. Then, we propose two Wasserstein distance-based knowledge distillation approaches to identify more valuable samples from the source graph and learn label-discriminative node representations of these samples for knowledge transfer. In addition, we devise a Wasserstein distance-based aggregated prediction to prioritize highly relevant source nodes while suppressing irrelevant ones, thereby ensuring more accurate node classification in the target network. Extensive experiments on three real-world datasets demonstrate that the proposed WMHGA model outperforms the state-of-the-art baselines.
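The recurring ingredient here is the Wasserstein distance between two sets of embeddings. In one dimension with equal sample counts it has a simple closed form — the mean absolute difference of sorted samples — which is a useful mental model even though the paper works with higher-dimensional representations. A minimal sketch of that 1-D case:

```python
import numpy as np

def wasserstein_1d(u, v):
    """W1 distance between two 1-D empirical distributions with equal
    sample counts: pair off sorted samples (the optimal 1-D coupling)
    and average the transport costs."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    assert u.shape == v.shape, "this closed form assumes equal sample counts"
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

# A pure shift of the distribution moves every unit of mass by the same
# amount, so W1 equals the shift exactly.
rng = np.random.default_rng(1)
v = rng.normal(size=100)
u = v + 2.0
d = wasserstein_1d(u, v)
```

For higher-dimensional embeddings one typically uses entropic-regularized optimal transport or sliced (random 1-D projection) approximations; `scipy.stats.wasserstein_distance` provides the general 1-D case with unequal weights.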
Dual Thinking and Logical Processing in Human Vision and Multimodal Large Language Models
Kailas Dayanandan; Nikhil Kumar; Anand Sinha; Brejesh Lall
Pub Date: 2025-09-08 | DOI: 10.1109/TAI.2025.3606452
IEEE Transactions on Artificial Intelligence, vol. 7, no. 3, pp. 1715-1729.

The dual thinking framework considers fast, intuitive processing and slower logical processing. Studying dual thinking in vision requires images where the inferences from intuitive and logical processing differ, and the latter is under-explored in current studies. We introduce a novel adversarial dataset that provides evidence for the dual thinking framework in human vision and also facilitates the study of the qualitative behavior of deep learning models. Our psychophysical studies show the presence of multiple inferences in rapid succession, and analysis of errors shows that the early stopping of visual processing can result in missing relevant information. Multimodal large language models (MLLMs) and vision language models (VLMs) have made significant progress in correcting errors of intuitive processing in human vision and show enhanced performance on images requiring logical processing. However, their improvements in logical processing have not kept pace with their advancements in intuitive processing. In contrast, segmentation models exhibit errors similar to those seen in intuitive human processing and lack understanding of substructures, as indicated by errors related to subcomponents in identified instances. As artificial intelligence (AI)-based systems find increasing applications in safety-critical domains such as autonomous driving, the integration of logical processing capabilities becomes essential. This not only enhances performance but also addresses the limitations of scaling-based approaches while ensuring robustness and reliability in real-world environments.