
IEEE Transactions on Artificial Intelligence: Latest Publications

Heterogeneous Hypergraph Embedding for Node Classification in Dynamic Networks
Pub Date : 2024-08-26 DOI: 10.1109/TAI.2024.3450658
Malik Khizar Hayat;Shan Xue;Jia Wu;Jian Yang
Graphs are a foundational way to represent scenarios where objects interact in pairs. Recently, graph neural networks (GNNs) have become widely used for modeling simple graph structures, either in homogeneous or heterogeneous graphs, where edges represent pairwise relationships between nodes. However, many real-world situations involve more complex interactions where multiple nodes interact simultaneously, as observed in contexts such as social groups and gene-gene interactions. Traditional graph embeddings often fail to capture these multifaceted nonpairwise dynamics. A hypergraph, which generalizes a simple graph by connecting two or more nodes via a single hyperedge, offers a more efficient way to represent these interactions. While most existing research focuses on homogeneous and static hypergraph embeddings, many real-world networks are inherently heterogeneous and dynamic. To address this gap, we propose a GNN-based embedding for dynamic heterogeneous hypergraphs, specifically designed to capture nonpairwise interactions and their evolution over time. Unlike traditional embedding methods that rely on distance or meta-path-based strategies for node neighborhood aggregation, a $k$-hop neighborhood strategy is introduced to effectively encapsulate higher-order interactions in dynamic networks. Furthermore, the information aggregation process is enhanced by incorporating semantic hyperedges, further enriching hypergraph embeddings. Finally, embeddings learned from each timestamp are aggregated using a mean operation to derive the final node embeddings. Extensive experiments on five real-world datasets, along with comparisons against homogeneous, heterogeneous, and hypergraph-based baselines (both static and dynamic), demonstrate the robustness and superiority of our model.
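Two of the mechanical steps mentioned in the abstract lend themselves to a compact illustration. The Python sketch below shows a k-hop neighborhood expansion over a toy hypergraph incidence structure and the mean aggregation of per-timestamp embeddings into final node embeddings. The function names, the incidence representation, and the toy data are illustrative assumptions, not the authors' implementation, which additionally uses GNN layers and semantic hyperedges.

```python
import numpy as np

def k_hop_neighbors(incidence, node, k):
    """Nodes reachable from `node` within k hops, where one hop means
    node -> shared hyperedge -> node (hypergraph adjacency)."""
    frontier, seen = {node}, {node}
    for _ in range(k):
        nxt = set()
        for u in frontier:
            for edge in incidence.get(u, []):   # hyperedges containing u
                nxt.update(edge)                # all co-members of that hyperedge
        frontier = nxt - seen
        seen |= nxt
    return seen - {node}

def temporal_mean_embedding(per_timestamp_embeddings):
    """Aggregate one embedding per timestamp into the final node embedding
    by an element-wise mean, as the abstract describes."""
    return np.mean(np.stack(per_timestamp_embeddings, axis=0), axis=0)

# toy hypergraph: each node maps to the hyperedges (frozensets) it belongs to
e1, e2 = frozenset({"a", "b", "c"}), frozenset({"c", "d"})
incidence = {"a": [e1], "b": [e1], "c": [e1, e2], "d": [e2]}
print(k_hop_neighbors(incidence, "a", 2))                    # {'b', 'c', 'd'}
print(temporal_mean_embedding([np.ones(4), np.zeros(4)]))    # [0.5 0.5 0.5 0.5]
```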
Cited by: 0
Differentially Private and Heterogeneity-Robust Federated Learning With Theoretical Guarantee
Pub Date : 2024-08-21 DOI: 10.1109/TAI.2024.3446759
Xiuhua Wang;Shuai Wang;Yiwei Li;Fengrui Fan;Shikang Li;Xiaodong Lin
Federated learning (FL) is a popular distributed paradigm in which a large number of clients collaboratively train a machine learning (ML) model under the orchestration of a central server without revealing the clients' private raw data. The development of effective FL algorithms faces multiple practical challenges, including data heterogeneity and the protection of clients' privacy. Although numerous attempts have been made to deal with data heterogeneity or to provide rigorous privacy protection, none have effectively tackled both issues simultaneously. In this article, we propose a differentially private and heterogeneity-robust FL algorithm, named DP-FedCVR, that mitigates data heterogeneity by following the client-variance-reduction strategy. In addition, it adopts a sophisticated differential privacy (DP) mechanism in which a privacy-amplification strategy is applied to achieve a rigorous privacy protection guarantee. We show that the proposed DP-FedCVR algorithm maintains its heterogeneity-robustness even though DP noise is incorporated, while achieving a sublinear convergence rate for a nonconvex FL problem. Numerical experiments based on image classification tasks demonstrate that DP-FedCVR outperforms the benchmark algorithms in the presence of data heterogeneity and under various DP privacy budgets.
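As a rough illustration of the privacy side only, the following Python sketch applies the standard Gaussian mechanism (clip each client update, then add noise) before a FedAvg-style average. It is a generic DP federated-learning step under assumed parameter names (clip_norm, noise_multiplier), not the DP-FedCVR algorithm itself, which additionally performs client variance reduction and privacy amplification.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update to `clip_norm` and add Gaussian noise:
    the standard Gaussian-mechanism step used in DP federated learning."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def server_aggregate(client_updates):
    """FedAvg-style server step: average the (privatized) client updates."""
    return np.mean(np.stack(client_updates, axis=0), axis=0)

# toy round with three clients whose updates differ in scale (data heterogeneity)
rng = np.random.default_rng(0)
updates = [rng.normal(size=8) * s for s in (0.5, 1.0, 2.0)]
noisy = [privatize_update(u, clip_norm=1.0, noise_multiplier=0.5, rng=rng) for u in updates]
print(server_aggregate(noisy))
```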
{"title":"Differentially Private and Heterogeneity-Robust Federated Learning With Theoretical Guarantee","authors":"Xiuhua Wang;Shuai Wang;Yiwei Li;Fengrui Fan;Shikang Li;Xiaodong Lin","doi":"10.1109/TAI.2024.3446759","DOIUrl":"https://doi.org/10.1109/TAI.2024.3446759","url":null,"abstract":"Federated learning (FL) is a popular distributed paradigm where enormous clients collaboratively train a machine learning (ML) model under the orchestration of a central server without knowing the clients’ private raw data. The development of effective FL algorithms faces multiple practical challenges including data heterogeneity and clients’ privacy protection. Despite that numerous attempts have been made to deal with data heterogeneity or rigorous privacy protection, none have effectively tackled both issues simultaneously. In this article, we propose a differentially private and heterogeneity-robust FL algorithm, named \u0000<monospace>DP-FedCVR</monospace>\u0000 to mitigate the data heterogeneity by following the client-variance-reduction strategy. Besides, it adopts a sophisticated differential privacy (DP) mechanism where the privacy-amplified strategy is applied, to achieve a rigorous privacy protection guarantee. We show that the proposed \u0000<monospace>DP-FedCVR</monospace>\u0000 algorithm maintains its heterogeneity-robustness though DP noises are incorporated, while achieving a sublinear convergence rate for a nonconvex FL problem. Numerical experiments based on image classification tasks are presented to demonstrate that \u0000<monospace>DP-FedCVR</monospace>\u0000 provides superior performance over the benchmark algorithms in the presence of data heterogeneity and various DP privacy budgets.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 12","pages":"6369-6384"},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
RD-Net: Residual-Dense Network for Glaucoma Prediction Using Structural Features of Optic Nerve Head
Pub Date : 2024-08-21 DOI: 10.1109/TAI.2024.3447578
Preity;Ashish Kumar Bhandari;Akanksha Jha;Syed Shahnawazuddin
Glaucoma is known as the silent thief of eyesight. It is related to internal damage of the optic nerve head (ONH). For early screening, the simplest way is to analyze subtle variations in structural features such as the cup-to-disc ratio (CDR), the disc damage likelihood scale (DDLS), and the rim width of the inferior, superior, nasal, and temporal (ISNT) regions of the ONH. This can be done by accurate segmentation of the optic disc (OD) and optic cup (OC). In this work, we introduce a deep learning framework, called residual-dense network (RD-Net), for disc and cup segmentation. Based on the segmentation results, the structural features are calculated. The proposed design differs from the traditional U-Net in that it utilizes filters with variable sizes and an alternative optimization method throughout the up- and down-sampling processes. The introduced method is a hybrid deep learning model that incorporates dense residual blocks and a squeeze-and-excitation block within the conventional U-Net architecture. Unlike classical approaches that are primarily based on CDR calculation, in this work we first segment the OD and OC using RD-Net and then analyze ISNT and DDLS. Once a suspicious case is detected, we proceed to CDR calculation. In addition to developing an efficient segmentation model, six distinct data augmentation techniques are used in this study to increase the amount of training data, which in turn leads to a better estimation of model parameters. The model is rigorously trained and tested on four benchmark datasets, namely DRISHTI, RIMONE, ORIGA, and REFUGE. Subsequently, the structural parameters are calculated for glaucoma prediction. The average accuracies are observed to be 0.9940 and 0.9894 for OD and OC segmentation, respectively. The extensive experiments presented in this article show that our method outperforms other existing state-of-the-art algorithms.
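Since the screening pipeline hinges on structural measurements taken from the segmentation masks, a minimal sketch of one such measurement may help. The Python snippet below computes a vertical cup-to-disc ratio from binary OD and OC masks; the function names and toy masks are illustrative assumptions, and the article's actual feature extraction (including DDLS and ISNT analysis) is more involved.

```python
import numpy as np

def vertical_extent(mask):
    """Vertical extent (in pixels) of a binary segmentation mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else rows.max() - rows.min() + 1

def cup_to_disc_ratio(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from OD and OC segmentation masks."""
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else float("nan")

# toy masks: disc spans rows 10..59, cup spans rows 25..44, so CDR = 20 / 50 = 0.4
disc = np.zeros((100, 100), dtype=bool); disc[10:60, 20:80] = True
cup = np.zeros((100, 100), dtype=bool);  cup[25:45, 35:65] = True
print(round(cup_to_disc_ratio(disc, cup), 2))  # 0.4
```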
Cited by: 0
Generative Representation Learning in Recurrent Neural Networks for Causal Timeseries Forecasting
Pub Date : 2024-08-20 DOI: 10.1109/TAI.2024.3446465
Georgios Chatziparaskevas;Ioannis Mademlis;Ioannis Pitas
Feed-forward deep neural networks (DNNs) are the state of the art in timeseries forecasting. A particularly significant scenario is the causal one: when an arbitrary subset of variables of a given multivariate timeseries is specified as the forecasting target, with the remaining ones (exogenous variables) causing the target at each time instance. Then, the goal is to predict a temporal window of future target values, given a window of historical exogenous values. To this end, this article proposes a novel deep recurrent neural architecture, called generative-regressing recurrent neural network (GRRNN), which surpasses competing ones in causal forecasting evaluation metrics, by smartly combining generative learning and regression. During training, the generative module learns to synthesize historical target timeseries from historical exogenous inputs via conditional adversarial learning, thus internally encoding the input timeseries into semantically meaningful features. During a forward pass, these features are passed on as input to the regression module, which outputs the actual future target forecasts in a sequence-to-sequence fashion. Thus, the task of timeseries generation is synergistically combined with the task of timeseries forecasting, under an end-to-end multitask training setting. Methodologically, GRRNN contributes a novel augmentation of pure supervised learning, tailored to causal timeseries forecasting, which essentially forces the generative module to transform the historical exogenous timeseries to a more appropriate representation, before feeding it as input to the actual forecasting regressor. Extensive experimental evaluation on relevant public datasets obtained from disparate fields, ranging from air pollution data to sentiment analysis of social media posts, confirms that GRRNN achieves top performance in multistep long-term forecasting.
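To make the encoder-to-regressor data flow concrete, here is a minimal PyTorch sketch: an LSTM stands in for the generative module that encodes the historical exogenous window into features, and a linear head stands in for the regression module that emits the future target window in one shot. The class names, layer sizes, and the use of plain supervised modules are assumptions; the actual GRRNN trains its generative module adversarially and end-to-end with the regressor.

```python
import torch
import torch.nn as nn

class GenerativeEncoder(nn.Module):
    """Stands in for the generative module: encodes the historical exogenous
    window into features (here via an LSTM) instead of reconstructing targets."""
    def __init__(self, n_exog, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_exog, hidden, batch_first=True)

    def forward(self, x):                      # x: (batch, hist_len, n_exog)
        _, (h, _) = self.lstm(x)
        return h[-1]                           # (batch, hidden)

class ForecastRegressor(nn.Module):
    """Regression module: maps encoded features to a window of future targets."""
    def __init__(self, hidden, horizon, n_targets):
        super().__init__()
        self.head = nn.Linear(hidden, horizon * n_targets)
        self.horizon, self.n_targets = horizon, n_targets

    def forward(self, z):
        return self.head(z).view(-1, self.horizon, self.n_targets)

encoder = GenerativeEncoder(n_exog=4)
regressor = ForecastRegressor(hidden=32, horizon=6, n_targets=1)
exog_history = torch.randn(8, 24, 4)           # batch of 8, 24 past steps, 4 exogenous vars
forecast = regressor(encoder(exog_history))    # (8, 6, 1) future target window
print(forecast.shape)
```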
{"title":"Generative Representation Learning in Recurrent Neural Networks for Causal Timeseries Forecasting","authors":"Georgios Chatziparaskevas;Ioannis Mademlis;Ioannis Pitas","doi":"10.1109/TAI.2024.3446465","DOIUrl":"https://doi.org/10.1109/TAI.2024.3446465","url":null,"abstract":"Feed-forward deep neural networks (DNNs) are the state of the art in timeseries forecasting. A particularly significant scenario is the causal one: when an arbitrary subset of variables of a given multivariate timeseries is specified as forecasting target, with the remaining ones (exogenous variables) \u0000<italic>causing</i>\u0000 the target at each time instance. Then, the goal is to predict a temporal window of future target values, given a window of historical exogenous values. To this end, this article proposes a novel deep recurrent neural architecture, called generative-regressing recurrent neural network (GRRNN), which surpasses competing ones in causal forecasting evaluation metrics, by smartly combining generative learning and regression. During training, the generative module learns to synthesize historical target timeseries from historical exogenous inputs via conditional adversarial learning, thus internally encoding the input timeseries into semantically meaningful features. During a forward pass, these features are passed over as input to the regression module, which outputs the actual future target forecasts in a sequence-to-sequence fashion. Thus, the task of timeseries generation is synergistically combined with the task of timeseries forecasting, under an end-to-end multitask training setting. Methodologically, GRRNN contributes a novel augmentation of pure supervised learning, tailored to causal timeseries forecasting, which essentially forces the generative module to transform the historical exogenous timeseries to a more appropriate representation, before feeding it as input to the actual forecasting regressor. Extensive experimental evaluation on relevant public datasets obtained from disparate fields, ranging from air pollution data to sentiment analysis of social media posts, confirms that GRRNN achieves top performance in multistep long-term forecasting.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 12","pages":"6412-6425"},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Neuro-Symbolic AI for Military Applications
Pub Date : 2024-08-19 DOI: 10.1109/TAI.2024.3444746
Desta Haileselassie Hagos;Danda B. Rawat
Artificial intelligence (AI) plays a significant role in enhancing the capabilities of defense systems, revolutionizing strategic decision-making, and shaping the future landscape of military operations. Neuro-Symbolic AI is an emerging approach that leverages and augments the strengths of neural networks and symbolic reasoning. These systems have the potential to be more impactful and flexible than traditional AI systems, making them well-suited for military applications. This article comprehensively explores the diverse dimensions and capabilities of Neuro-Symbolic AI, aiming to shed light on its potential applications in military contexts. We investigate its capacity to improve decision-making, automate complex intelligence analysis, and strengthen autonomous systems. We further explore its potential to solve complex tasks in various domains, in addition to its applications in military contexts. Through this exploration, we address ethical, strategic, and technical considerations crucial to the development and deployment of Neuro-Symbolic AI in military and civilian applications. Contributing to the growing body of research, this study represents a comprehensive exploration of the extensive possibilities offered by Neuro-Symbolic AI.
{"title":"Neuro-Symbolic AI for Military Applications","authors":"Desta Haileselassie Hagos;Danda B. Rawat","doi":"10.1109/TAI.2024.3444746","DOIUrl":"https://doi.org/10.1109/TAI.2024.3444746","url":null,"abstract":"Artificial intelligence (AI) plays a significant role in enhancing the capabilities of defense systems, revolutionizing strategic decision-making, and shaping the future landscape of military operations. Neuro-Symbolic AI is an emerging approach that leverages and augments the strengths of neural networks and symbolic reasoning. These systems have the potential to be more impactful and flexible than traditional AI systems, making them well-suited for military applications. This article comprehensively explores the diverse dimensions and capabilities of Neuro-Symbolic AI, aiming to shed light on its potential applications in military contexts. We investigate its capacity to improve decision-making, automate complex intelligence analysis, and strengthen autonomous systems. We further explore its potential to solve complex tasks in various domains, in addition to its applications in military contexts. Through this exploration, we address ethical, strategic, and technical considerations crucial to the development and deployment of Neuro-Symbolic AI in military and civilian applications. Contributing to the growing body of research, this study represents a comprehensive exploration of the extensive possibilities offered by Neuro-Symbolic AI.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 12","pages":"6012-6026"},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Preference Prediction-Based Evolutionary Multiobjective Optimization for Gasoline Blending Scheduling
Pub Date : 2024-08-19 DOI: 10.1109/TAI.2024.3444736
Wenxuan Fang;Wei Du;Guo Yu;Renchu He;Yang Tang;Yaochu Jin
Gasoline blending scheduling is challenging, involving multiple conflicting objectives and a large decision space with many mixed-integer variables. Given these difficulties, one promising solution is to use preference-based multiobjective evolutionary algorithms (PBMOEAs). However, in practical applications, suitable preferences of decision makers are often difficult to generalize and summarize from their operational experience. This article proposes a novel framework called preference prediction-based evolutionary multiobjective optimization (PP-EMO). In PP-EMO, suitable preferences for a new environment are obtained automatically from historical operational experience by a machine learning-based preference prediction model that takes a description of the optimization environment as input. We find that the predicted preference guides the optimization to efficiently obtain a set of promising scheduling scenarios. Finally, we conducted comparative tests across various environments, and the experimental results demonstrate that the proposed PP-EMO framework outperforms existing methods. Compared with no preference, PP-EMO reduces operating costs by about 25% and decreases blending errors by 50% under demanding operational conditions.
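A toy version of the preference-prediction idea is sketched below: given historical (environment descriptor, tuned preference) pairs, the preference for a new environment is predicted by averaging the preferences of the most similar past environments and normalizing the result into an objective weight vector. The nearest-neighbor predictor and all names here are illustrative assumptions, not the article's learned model.

```python
import numpy as np

def predict_preference(history_envs, history_prefs, new_env, k=3):
    """Predict a preference (here, an objective weight vector) for a new blending
    environment by averaging the preferences of the k most similar historical
    environments, a simple stand-in for the learned preference predictor."""
    dists = np.linalg.norm(history_envs - new_env, axis=1)
    nearest = np.argsort(dists)[:k]
    pref = history_prefs[nearest].mean(axis=0)
    return pref / pref.sum()                    # normalize into a valid weight vector

# toy data: 5 historical environments (feature vectors) and their tuned preferences
rng = np.random.default_rng(1)
envs = rng.random((5, 4))
prefs = rng.random((5, 3))                      # weights over 3 objectives, e.g., cost, error, time
print(predict_preference(envs, prefs, new_env=rng.random(4)))
```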
Cited by: 0
A Review on Transferability Estimation in Deep Transfer Learning
Pub Date : 2024-08-19 DOI: 10.1109/TAI.2024.3445892
Yihao Xue;Rui Yang;Xiaohan Chen;Weibo Liu;Zidong Wang;Xiaohui Liu
Deep transfer learning has become increasingly prevalent in various fields such as industry and medical science in recent years. To ensure the successful implementation of target tasks and improve the transfer performance, it is meaningful to prevent negative transfer. However, the dissimilarity between the data from source domain and target domain can pose challenges to transfer learning. Additionally, different transfer models exhibit significant variations in the performance of target tasks, potentially leading to a negative transfer phenomenon. To mitigate the adverse effects of the above factors, transferability estimation methods are employed in this field to evaluate the transferability of the data and the models of various deep transfer learning methods. These methods ascertain transferability by incorporating mutual information between the data or models of the source domain and the target domain. This article furnishes a comprehensive overview of four categories of transferability estimation methods in recent years. It employs qualitative analysis to evaluate various transferability estimation approaches, assisting researchers in selecting appropriate methods. Furthermore, this article evaluates the open problems associated with transferability estimation methods, proposing potential emerging areas for further research. Last, the open-source datasets commonly used in transferability estimation studies are summarized in this study.
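One simple member of the family of methods surveyed can be sketched as follows: extract features of the target data with the pretrained source model and score transferability by how well a lightweight probe separates the target labels on those features. The snippet (Python with scikit-learn) is a generic proxy under assumed names, not any specific estimator covered in the review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def transferability_score(source_model_features, target_labels):
    """A simple transferability proxy: how well a lightweight probe, trained on
    features extracted by the source model, separates the target labels.
    Higher cross-validated accuracy suggests easier transfer."""
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, source_model_features, target_labels, cv=3).mean()

# toy example: pretend these are penultimate-layer features of a pretrained source model
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (30, 16)), rng.normal(2, 1, (30, 16))])
labels = np.array([0] * 30 + [1] * 30)
print(round(transferability_score(feats, labels), 3))
```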
{"title":"A Review on Transferability Estimation in Deep Transfer Learning","authors":"Yihao Xue;Rui Yang;Xiaohan Chen;Weibo Liu;Zidong Wang;Xiaohui Liu","doi":"10.1109/TAI.2024.3445892","DOIUrl":"https://doi.org/10.1109/TAI.2024.3445892","url":null,"abstract":"Deep transfer learning has become increasingly prevalent in various fields such as industry and medical science in recent years. To ensure the successful implementation of target tasks and improve the transfer performance, it is meaningful to prevent negative transfer. However, the dissimilarity between the data from source domain and target domain can pose challenges to transfer learning. Additionally, different transfer models exhibit significant variations in the performance of target tasks, potentially leading to a negative transfer phenomenon. To mitigate the adverse effects of the above factors, transferability estimation methods are employed in this field to evaluate the transferability of the data and the models of various deep transfer learning methods. These methods ascertain transferability by incorporating mutual information between the data or models of the source domain and the target domain. This article furnishes a comprehensive overview of four categories of transferability estimation methods in recent years. It employs qualitative analysis to evaluate various transferability estimation approaches, assisting researchers in selecting appropriate methods. Furthermore, this article evaluates the open problems associated with transferability estimation methods, proposing potential emerging areas for further research. Last, the open-source datasets commonly used in transferability estimation studies are summarized in this study.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 12","pages":"5894-5914"},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Adaptive Composite Fixed-Time RL-Optimized Control for Nonlinear Systems and Its Application to Intelligent Ship Autopilot
Pub Date : 2024-08-19 DOI: 10.1109/TAI.2024.3444731
Siwen Liu;Yi Zuo;Tieshan Li;Huanqing Wang;Xiaoyang Gao;Yang Xiao
In this article, an adaptive fixed-time reinforcement learning (RL) optimized control policy is presented for nonlinear systems. Radial basis function neural networks (RBFNNs) are exploited to fit the uncertain nonlinearities that appear in the considered systems, and RL is applied under the critic-actor architecture using RBFNNs. Specifically, a novel fixed-time smooth estimation system is proposed to improve the estimation performance of the RBFNNs. The introduction of the hyperbolic tangent function effectively avoids the singularity problem in the derivative of the virtual controller. The stability analysis shows that the tracking error converges to an adjustable region near the origin within a fixed time, and the boundedness of all signals is established. Finally, an intelligent ship autopilot is simulated to demonstrate the applicability of the proposed control scheme.
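Because the control design rests on RBFNN approximation of unknown nonlinearities, a minimal sketch of that building block is given below: Gaussian basis functions phi(x) and a weighted sum W^T phi(x), fitted here by least squares to a toy one-dimensional nonlinearity. The fixed centers, width, and offline least-squares fit are illustrative assumptions; the article instead adapts the weights online within the fixed-time control scheme.

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian radial basis functions phi(x) evaluated at state x."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * width ** 2))

def rbfnn_output(x, centers, weights, width=1.0):
    """RBFNN approximation W^T * phi(x) of an unknown nonlinearity f(x)."""
    return weights @ rbf_features(x, centers, width)

# toy 1-D example: approximate f(x) = sin(x) with fixed centers and least-squares weights
centers = np.linspace(-3, 3, 9).reshape(-1, 1)
xs = np.linspace(-3, 3, 50).reshape(-1, 1)
Phi = np.array([rbf_features(x, centers) for x in xs])            # (50, 9) design matrix
weights, *_ = np.linalg.lstsq(Phi, np.sin(xs).ravel(), rcond=None)
print(rbfnn_output(np.array([1.0]), centers, weights))            # close to sin(1.0) ≈ 0.84
```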
Cited by: 0
Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives
Pub Date : 2024-08-19 DOI: 10.1109/TAI.2024.3444742
Desta Haileselassie Hagos;Rick Battle;Danda B. Rawat
The emergence of generative artificial intelligence (AI) and large language models (LLMs) has marked a new era of natural language processing (NLP), introducing unprecedented capabilities that are revolutionizing various domains. This article explores the current state of these cutting-edge technologies, demonstrating their remarkable advancements and wide-ranging applications. Our article contributes to providing a holistic perspective on the technical foundations, practical applications, and emerging challenges within the evolving landscape of generative AI and LLMs. We believe that understanding the generative capabilities of AI systems and the specific context of LLMs is crucial for researchers, practitioners, and policymakers to collaboratively shape the responsible and ethical integration of these technologies into various domains. Furthermore, we identify and address main research gaps, providing valuable insights to guide future research endeavors within the AI research community.
{"title":"Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives","authors":"Desta Haileselassie Hagos;Rick Battle;Danda B. Rawat","doi":"10.1109/TAI.2024.3444742","DOIUrl":"https://doi.org/10.1109/TAI.2024.3444742","url":null,"abstract":"The emergence of generative artificial intelligence (AI) and large language models (LLMs) has marked a new era of natural language processing (NLP), introducing unprecedented capabilities that are revolutionizing various domains. This article explores the current state of these cutting-edge technologies, demonstrating their remarkable advancements and wide-ranging applications. Our article contributes to providing a holistic perspective on the technical foundations, practical applications, and emerging challenges within the evolving landscape of generative AI and LLMs. We believe that understanding the generative capabilities of AI systems and the specific context of LLMs is crucial for researchers, practitioners, and policymakers to collaboratively shape the responsible and ethical integration of these technologies into various domains. Furthermore, we identify and address main research gaps, providing valuable insights to guide future research endeavors within the AI research community.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 12","pages":"5873-5893"},"PeriodicalIF":0.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
meMIA: Multilevel Ensemble Membership Inference Attack
Pub Date : 2024-08-19 DOI: 10.1109/TAI.2024.3445326
Najeeb Ullah;Muhammad Naveed Aman;Biplab Sikdar
Leakage of private information in machine learning models can lead to breaches of confidentiality, identity theft, and unauthorized access to personal data. Ensuring the safe and trustworthy deployment of AI systems necessitates addressing privacy concerns to prevent unintentional disclosure and discrimination. One significant threat, the membership inference (MI) attack, exploits vulnerabilities in a target learning model to determine whether a given sample was part of its training set. However, the effectiveness of existing MI attacks is often limited by the number of classes in the dataset or the need for diverse multilevel adversarial features to exploit overfitted models. To enhance MI attack performance, we propose meMIA, a novel framework based on stacked ensemble learning. meMIA integrates embeddings from a neural network (NN) and a long short-term memory (LSTM) model, training a subsequent NN, termed the meta-model, on the concatenated embeddings. This method leverages the complementary strengths of NN and LSTM models; the LSTM captures order differences in confidence scores, while the NN discerns probability distribution differences between member and nonmember samples. We extensively evaluate meMIA on seven benchmark datasets, demonstrating that it surpasses current state-of-the-art MI attacks, achieving accuracy up to 94.6% and near-perfect recall. meMIA's superior performance, especially on datasets with fewer classes, underscores the urgent need for robust defenses against privacy attacks in machine learning, contributing to the safer and more ethical use of AI technologies.
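The stacking step itself is simple to illustrate. In the sketch below, per-sample embeddings assumed to come from the NN and LSTM attack models are concatenated and used to train a small meta-model that predicts membership. The random arrays, dimensions, and scikit-learn MLP are placeholders for shape only, not the attack's real features, which are derived from the target model's confidence scores.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def stack_embeddings(nn_embed, lstm_embed):
    """Concatenate per-sample NN and LSTM embeddings to form the meta-model input."""
    return np.concatenate([nn_embed, lstm_embed], axis=1)

# placeholder embeddings standing in for the outputs of trained NN and LSTM attack models;
# `member` marks whether each sample belonged to the target model's training set
rng = np.random.default_rng(0)
nn_embed = rng.normal(size=(200, 8))
lstm_embed = rng.normal(size=(200, 8))
member = rng.integers(0, 2, size=200)

meta_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)  # the stacked meta-model
meta_model.fit(stack_embeddings(nn_embed, lstm_embed), member)
print(meta_model.predict(stack_embeddings(nn_embed[:5], lstm_embed[:5])))
```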
Cited by: 0