
Artificial Intelligence Review: Latest Publications

Deep learning for surgical instrument recognition and segmentation in robotic-assisted surgeries: a systematic review
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-04 · DOI: 10.1007/s10462-024-10979-w
Fatimaelzahraa Ali Ahmed, Mahmoud Yousef, Mariam Ali Ahmed, Hasan Omar Ali, Anns Mahboob, Hazrat Ali, Zubair Shah, Omar Aboumarzouk, Abdulla Al Ansari, Shidin Balakrishnan

Applying deep learning (DL) for annotating surgical instruments in robot-assisted minimally invasive surgeries (MIS) represents a significant advancement in surgical technology. This systematic review examines 48 studies that utilize advanced DL methods and architectures. These sophisticated DL models have shown notable improvements in the precision and efficiency of detecting and segmenting surgical tools. The enhanced capabilities of these models support various clinical applications, including real-time intraoperative guidance, comprehensive postoperative evaluations, and objective assessments of surgical skills. By accurately identifying and segmenting surgical instruments in video data, DL models provide detailed feedback to surgeons, thereby improving surgical outcomes and reducing complication risks. Furthermore, the application of DL in surgical education is transformative. The review underscores the significant impact of DL on improving the accuracy of skill assessments and the overall quality of surgical training programs. However, implementing DL in surgical tool detection and segmentation faces challenges, such as the need for large, accurately annotated datasets to train these models effectively. The manual annotation process is labor-intensive and time-consuming, posing a significant bottleneck. Future research should focus on automating the detection and segmentation process and enhancing the robustness of DL models against environmental variations. Expanding the application of DL models across various surgical specialties will be essential to fully realize this technology’s potential. Integrating DL with other emerging technologies, such as augmented reality (AR), also offers promising opportunities to further enhance the precision and efficacy of surgical procedures.
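
The reviewed studies typically quantify detection and segmentation precision with overlap metrics such as the Dice coefficient and intersection-over-union (IoU). As a minimal, self-contained illustration (ours, not drawn from any reviewed paper), the sketch below computes both metrics for a pair of binary instrument masks:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Compute the Dice coefficient and IoU for two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)
    iou = intersection / (union + 1e-8)
    return float(dice), float(iou)

# Toy example: a predicted instrument mask vs. a ground-truth annotation.
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[15:45, 15:45] = 1
print(dice_and_iou(pred, truth))
```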

{"title":"Deep learning for surgical instrument recognition and segmentation in robotic-assisted surgeries: a systematic review","authors":"Fatimaelzahraa Ali Ahmed,&nbsp;Mahmoud Yousef,&nbsp;Mariam Ali Ahmed,&nbsp;Hasan Omar Ali,&nbsp;Anns Mahboob,&nbsp;Hazrat Ali,&nbsp;Zubair Shah,&nbsp;Omar Aboumarzouk,&nbsp;Abdulla Al Ansari,&nbsp;Shidin Balakrishnan","doi":"10.1007/s10462-024-10979-w","DOIUrl":"10.1007/s10462-024-10979-w","url":null,"abstract":"<div><p>Applying deep learning (DL) for annotating surgical instruments in robot-assisted minimally invasive surgeries (MIS) represents a significant advancement in surgical technology. This systematic review examines 48 studies that utilize advanced DL methods and architectures. These sophisticated DL models have shown notable improvements in the precision and efficiency of detecting and segmenting surgical tools. The enhanced capabilities of these models support various clinical applications, including real-time intraoperative guidance, comprehensive postoperative evaluations, and objective assessments of surgical skills. By accurately identifying and segmenting surgical instruments in video data, DL models provide detailed feedback to surgeons, thereby improving surgical outcomes and reducing complication risks. Furthermore, the application of DL in surgical education is transformative. The review underscores the significant impact of DL on improving the accuracy of skill assessments and the overall quality of surgical training programs. However, implementing DL in surgical tool detection and segmentation faces challenges, such as the need for large, accurately annotated datasets to train these models effectively. The manual annotation process is labor-intensive and time-consuming, posing a significant bottleneck. Future research should focus on automating the detection and segmentation process and enhancing the robustness of DL models against environmental variations. Expanding the application of DL models across various surgical specialties will be essential to fully realize this technology’s potential. Integrating DL with other emerging technologies, such as augmented reality (AR), also offers promising opportunities to further enhance the precision and efficacy of surgical procedures.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 1","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10979-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142573737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A survey on feature extraction and learning techniques for link prediction in homogeneous and heterogeneous complex networks
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-28 · DOI: 10.1007/s10462-024-10998-7
Puneet Kapoor, Sakshi Kaushal, Harish Kumar, Kushal Kanwar

Complex networks are commonly observed in several real-world areas, such as social, biological, and technical systems, where they exhibit complicated patterns of connectedness and organised clusters. These networks have intricate topological characteristics that frequently elude conventional characterization. Link prediction in complex networks, like data flow in telecommunications networks, protein interactions in biological systems, and social media interactions on platforms like Facebook, is an essential element of network analytics and presents fresh research challenges. Consequently, there is a growing research emphasis on creating new link prediction methods for different network applications. This survey investigates several strategies for link prediction, ranging from feature-extraction-based to feature-learning-based techniques, with a specific focus on their use in dynamic and evolving network topologies. Furthermore, this paper emphasises a wide variety of feature learning techniques that go beyond basic feature extraction and matrix factorization, including advanced learning-based algorithms and neural network techniques specifically designed for link prediction. The study also presents evaluation results of different link prediction techniques on homogeneous and heterogeneous network datasets, and provides a thorough examination of existing methods and potential areas for further investigation.
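
At the feature-extraction end of the spectrum the survey covers, classic similarity indices are computed directly from graph topology. The following NetworkX sketch (our illustration, not code from the survey) scores a few candidate node pairs with common-neighbors, Jaccard, and Adamic-Adar indices:

```python
import networkx as nx

# Small homogeneous network; candidate node pairs to score for a future link.
G = nx.karate_club_graph()
candidates = [(0, 9), (1, 33), (5, 24)]

for u, v in candidates:
    cn = len(list(nx.common_neighbors(G, u, v)))        # shared neighbors count
    jacc = next(nx.jaccard_coefficient(G, [(u, v)]))[2]  # normalized overlap
    aa = next(nx.adamic_adar_index(G, [(u, v)]))[2]      # downweights hub neighbors
    print(f"({u},{v}) common={cn} jaccard={jacc:.3f} adamic_adar={aa:.3f}")
```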

{"title":"A survey on feature extraction and learning techniques for link prediction in homogeneous and heterogeneous complex networks","authors":"Puneet Kapoor,&nbsp;Sakshi Kaushal,&nbsp;Harish Kumar,&nbsp;Kushal Kanwar","doi":"10.1007/s10462-024-10998-7","DOIUrl":"10.1007/s10462-024-10998-7","url":null,"abstract":"<div><p>Complex networks are commonly observed in several real-world areas, such as social, biological, and technical systems, where they exhibit complicated patterns of connectedness and organised clusters. These networks have intricate topological characteristics that frequently elude conventional characterization. Link prediction in complex networks, like data flow in telecommunications networks, protein interactions in biological systems, and social media interactions on platforms like Facebook, etc., is an essential element of network analytics and presents fresh research challenges. Consequently, there is a growing emphasis in research on creating new link prediction methods for different network applications. This survey investigates several strategies related to link prediction, ranging from feature extraction based to feature learning based techniques, with a specific focus on their utilisation in dynamic and developing network topologies. Furthermore, this paper emphasises on a wide variety of feature learning techniques that go beyond basic feature extraction and matrix factorization. It includes advanced learning-based algorithms and neural network techniques specifically designed for link prediction. The study also presents evaluation results of different link prediction techniques on homogeneous and heterogeneous network datasets, and provides a thorough examination of existing methods and potential areas for further investigation.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10998-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
GOG-MBSHO: multi-strategy fusion binary sea-horse optimizer with Gaussian transfer function for feature selection of cancer gene expression data
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-28 · DOI: 10.1007/s10462-024-10954-5
Yu-Cai Wang, Hao-Ming Song, Jie-Sheng Wang, Yu-Wei Song, Yu-Liang Qi, Xin-Ru Ma

Cancer gene expression data is high-dimensional, multi-text, and multi-class. The problem of cancer subtype diagnosis can be addressed by selecting the most representative and predictive genes from a large volume of gene expression data. Feature selection technology can effectively reduce the dimensionality of the data, which helps analyze the information in cancer gene expression data. A multi-strategy fusion binary sea-horse optimizer based on a Gaussian transfer function (GOG-MBSHO) is proposed to solve the feature selection problem for cancer gene expression data. First, the multi-strategy fusion includes a golden sine strategy, a hippo escape strategy, and multiple inertia weight strategies. The golden sine strategy does not disrupt the structure of the original algorithm: embedding it within the spiral motion of the sea-horse optimizer enhances the movement of the algorithm and improves its global exploration and local exploitation capabilities. The hippo escape strategy is introduced for random selection, which prevents the algorithm from falling into local optima, increases search diversity, and improves optimization accuracy. The advantage of the multiple inertia weight strategies is that exploitation and exploration can be carried out dynamically, accelerating convergence and improving the performance of the algorithm. The effectiveness of multi-strategy fusion is then demonstrated on 15 UCI datasets. The simulation results show that the proposed Gaussian transfer function outperforms the commonly used S-shaped and V-shaped transfer functions: it improves classification accuracy, effectively reduces the number of selected features, and obtains better fitness values. Finally, comparisons with other binary swarm intelligence optimization algorithms on 15 cancer gene expression datasets show that the proposed GOG1-MBSHO has clear advantages for feature selection on cancer gene expression data.
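
The abstract does not reproduce the exact form of the proposed transfer function, so the sketch below assumes a common Gaussian shape, T(x) = exp(-x^2/2), purely to show the role a transfer function plays in a binary wrapper optimizer: mapping a continuous search-agent position to per-feature selection probabilities that are then sampled into a binary gene mask.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_transfer(x: np.ndarray) -> np.ndarray:
    """Assumed Gaussian transfer T(x) = exp(-x^2 / 2); positions near 0 map near 1."""
    return np.exp(-0.5 * x**2)

# A continuous "sea-horse" position over 10 candidate genes.
position = rng.normal(0.0, 1.5, size=10)
probs = gaussian_transfer(position)
mask = (rng.random(10) < probs).astype(int)   # stochastic binarization
print("selection probabilities:", np.round(probs, 2))
print("binary feature mask:    ", mask)
```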

{"title":"GOG-MBSHO: multi-strategy fusion binary sea-horse optimizer with Gaussian transfer function for feature selection of cancer gene expression data","authors":"Yu-Cai Wang,&nbsp;Hao-Ming Song,&nbsp;Jie-Sheng Wang,&nbsp;Yu-Wei Song,&nbsp;Yu-Liang Qi,&nbsp;Xin-Ru Ma","doi":"10.1007/s10462-024-10954-5","DOIUrl":"10.1007/s10462-024-10954-5","url":null,"abstract":"<div><p>Cancer gene expression data has the characteristics of high-dimensional, multi-text and multi-classification. The problem of cancer subtype diagnosis can be solved by selecting the most representative and predictive genes from a large number of gene expression data. Feature selection technology can effectively reduce the dimension of data, which helps analyze the information on cancer gene expression data. A multi-strategy fusion binary sea-horse optimizer based on Gaussian transfer function (GOG-MBSHO) is proposed to solve the feature selection problem of cancer gene expression data. Firstly, the multi-strategy includes golden sine strategy, hippo escape strategy and multiple inertia weight strategies. The sea-horse optimizer with the golden sine strategy does not disrupt the structure of the original algorithm. Embedding the golden sine strategy within the spiral motion of the sea-horse optimizer enhances the movement of the algorithm and improves its global exploration and local exploitation capabilities. The hippo escape strategy is introduced for random selection, which avoids the algorithm from falling into local optima, increases the search diversity, and improves the optimization accuracy of the algorithm. The advantage of multiple inertial weight strategies is that dynamic exploitation and exploration can be carried out to accelerate the convergence speed and improve the performance of the algorithm. Then, the effectiveness of multi-strategy fusion was demonstrated by 15 UCI datasets. The simulation results show that the proposed Gaussian transfer function is better than the commonly used S-type and V-type transfer functions, which can improve the classification accuracy, effectively reduce the number of features, and obtain better fitness value. Finally, comparing with other binary swarm intelligent optimization algorithms on 15 cancer gene expression datasets, it is proved that the proposed GOG1-MBSHO has great advantages in the feature selection of cancer gene expression data.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10954-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A framework for measuring the training efficiency of a neural architecture
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-28 · DOI: 10.1007/s10462-024-10943-8
Eduardo Cueto-Mendoza, John Kelleher

Measuring efficiency in neural network system development is an open research problem. This paper presents an experimental framework to measure the training efficiency of a neural architecture. To demonstrate our approach, we analyze the training efficiency of Convolutional Neural Networks (CNNs) and Bayesian equivalents (BCNNs) on the MNIST and CIFAR-10 tasks. Our results show that training efficiency decays as training progresses and varies across different stopping criteria for a given neural model and learning task. We also find a non-linear relationship between training stopping criteria, model size, and training efficiency. Furthermore, we illustrate the potential confounding effects of overtraining on measuring the training efficiency of a neural architecture. Regarding relative training efficiency across different architectures, our results indicate that CNNs are more efficient than BCNNs on both datasets. More generally, as a learning task becomes more complex, the relative difference in training efficiency between different architectures becomes more pronounced.
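
The paper's precise efficiency measure is not given in the abstract; as one plausible operationalization (an assumption on our part), the sketch below tracks validation accuracy gained per unit of training cost and shows the decay over training that the authors report:

```python
import math

# Hypothetical accuracy curve: fast early gains, diminishing returns later.
epochs = range(1, 11)
acc = [1.0 - math.exp(-0.5 * e) for e in epochs]   # toy validation accuracy
cost_per_epoch = 1.0                               # e.g. GPU-hours per epoch

prev = 0.0
for e, a in zip(epochs, acc):
    efficiency = (a - prev) / cost_per_epoch       # accuracy gained per unit cost
    print(f"epoch {e:2d}  acc={a:.3f}  efficiency={efficiency:.3f}")
    prev = a
```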

{"title":"A framework for measuring the training efficiency of a neural architecture","authors":"Eduardo Cueto-Mendoza,&nbsp;John Kelleher","doi":"10.1007/s10462-024-10943-8","DOIUrl":"10.1007/s10462-024-10943-8","url":null,"abstract":"<div><p>Measuring Efficiency in neural network system development is an open research problem. This paper presents an experimental framework to measure the training efficiency of a neural architecture. To demonstrate our approach, we analyze the training efficiency of Convolutional Neural Networks and Bayesian equivalents on the MNIST and CIFAR-10 tasks. Our results show that training efficiency decays as training progresses and varies across different stopping criteria for a given neural model and learning task. We also find a non-linear relationship between training stopping criteria, training Efficiency, model size, and training Efficiency. Furthermore, we illustrate the potential confounding effects of overtraining on measuring the training efficiency of a neural architecture. Regarding relative training efficiency across different architectures, our results indicate that CNNs are more efficient than BCNNs on both datasets. More generally, as a learning task becomes more complex, the relative difference in training efficiency between different architectures becomes more pronounced.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10943-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
(p,q,r)-Fractional fuzzy sets and their aggregation operators and applications
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-25 · DOI: 10.1007/s10462-024-10911-2
Muhammad Gulistan, Ying Hongbin, Witold Pedrycz, Muhammad Rahim, Fazli Amin, Hamiden Abd El-Wahed Khalifa

Using $(p,q,r)$-fractional fuzzy sets ($(p,q,r)$-FFS) to demonstrate the stability of cryptocurrencies is motivated by the complex and volatile nature of cryptocurrency markets, where traditional models may fall short in capturing nuances and uncertainties. $(p,q,r)$-FFS provides a flexible framework for modeling cryptocurrency stability by accommodating imprecise data, supporting multidimensional analysis of various market factors, and adapting to the unique characteristics of the cryptocurrency space, potentially offering a more comprehensive understanding of the factors influencing stability. Existing studies have explored Picture Fuzzy Sets and Spherical Fuzzy Sets, built on membership, neutrality, and non-membership grades. However, these sets cannot reach the maximum value (equal to 1) due to grade constraints. For example, when considering $\wp = \{(h, \langle 0.9, 0.8, 1.0 \rangle) \mid h \in H\}$, these sets fall short. This matters when a decision-maker possesses complete confidence in an alternative: they should have the option to assign a value of 1 as the assessment score for that alternative, signifying that they harbor no doubts or uncertainties regarding the chosen option. To address this, $(p,q,r)$-Fractional Fuzzy Sets ($(p,q,r)$-FFSs) are introduced, using new parameters $p$, $q$, and $r$, where $p, q \ge 1$ and $r$ is the least common multiple of $p$ and $q$. We establish operational laws for $(p,q,r)$-FFSs and, based on these laws, propose a series of aggregation operators (AOs) to aggregate information expressed as $(p,q,r)$-fractional fuzzy numbers. Furthermore, we construct a novel multi-criteria group decision-making (MCGDM) method to deal with real-world decision-making problems. A numerical example is provided to demonstrate the proposed approach.
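
To make the claim that these sets "fall short" concrete: Picture Fuzzy Sets require the three grades to sum to at most 1, and Spherical Fuzzy Sets require the sum of their squares to be at most 1 (standard constraints, not this paper's notation). The triple $\langle 0.9, 0.8, 1.0 \rangle$ violates both:

```latex
% Picture Fuzzy Set constraint:   \mu + \eta + \nu \le 1
% Spherical Fuzzy Set constraint: \mu^2 + \eta^2 + \nu^2 \le 1
\mu + \eta + \nu = 0.9 + 0.8 + 1.0 = 2.7 > 1
\qquad
\mu^2 + \eta^2 + \nu^2 = 0.81 + 0.64 + 1.00 = 2.45 > 1
```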

{"title":"(p,q,r-)Fractional fuzzy sets and their aggregation operators and applications","authors":"Muhammad Gulistan,&nbsp;Ying Hongbin,&nbsp;Witold Pedrycz,&nbsp;Muhammad Rahim,&nbsp;Fazli Amin,&nbsp;Hamiden Abd El-Wahed Khalifa","doi":"10.1007/s10462-024-10911-2","DOIUrl":"10.1007/s10462-024-10911-2","url":null,"abstract":"<div><p>Using <span>(p,q,r-)</span> fractional fuzzy sets (<span>(p,q,r-)</span> FFS) to demonstrate the stability of cryptocurrencies is considered due to the complex and volatile nature of cryptocurrency markets, where traditional models may fall short in capturing nuances and uncertainties. <span>(p,q,r-)</span> FFS provides a flexible framework for modeling cryptocurrency stability by accommodating imprecise data, multidimensional analysis of various market factors, and adaptability to the unique characteristics of the cryptocurrency space, potentially offering a more comprehensive understanding of the factors influencing stability. Existing studies have explored Picture Fuzzy Sets and Spherical Fuzzy Sets, built on membership, neutrality, and non-membership grades. However, these sets can’t reach the maximum value (equal to <span>(1)</span>) due to grade constraints. For example, when considering <span>(wp =(h,langle text{0.9,0.8,1.0}rangle left|hin Hright.))</span>, these sets fall short. This is obvious when a decision-maker possesses complete confidence in an alternative, they have the option to assign a value of 1 as the assessment score for that alternative. This signifies that they harbor no doubts or uncertainties regarding the chosen option. To address this, <span>(p,q,r-)</span> Fractional Fuzzy Sets (<span>(p,q,r-)</span> FFSs) are introduced, using new parameters <span>(p)</span>, <span>(q)</span>, and <span>(r)</span>. These parameters abide by <span>(p)</span>,<span>(qge 1)</span> and <span>(r)</span> as the least common multiple of <span>(p)</span> and <span>(q)</span>. We establish operational laws for <span>(p,q,r-)</span> FFSs. Based on these operational laws, we proposed a series of aggregation operators (AOs) to aggregate the information in context of <span>(p,q,r-)</span> fractional fuzzy numbers. Furthermore, we constructed a novel multi-criteria group decision-making (MCGDM) method to deal with real-world decision-making problems. A numerical example is provided to demonstrate the proposed approach.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10911-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142519064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Bio-inspired disease prediction: harnessing the power of electric eel foraging optimization algorithm with machine learning for heart disease prediction
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-24 · DOI: 10.1007/s10462-024-10975-0
Geetha Narasimhan, Akila Victor

Heart disease is among the most significant health problems worldwide, which underscores the need for accurate and efficient predictive models for early diagnosis. This study proposes an innovative approach that integrates the Electric Eel Foraging Optimization Algorithm (EEFOA) with the Random Forest (RF) algorithm for heart disease prediction. EEFOA draws inspiration from the foraging behaviour of electric eels, yielding a bio-inspired optimization framework capable of effectively exploring complex solution spaces. The objective is to improve the predictive performance of heart disease diagnosis by integrating optimization and machine learning methodologies. The experiment uses a heart disease dataset comprising clinical and demographic features of at-risk individuals. EEFOA was applied to select the features of the dataset, with classification performed by the RF algorithm, thereby enhancing predictive performance. The results demonstrate that the Electric Eel Foraging Optimization Algorithm Random Forest (EEFOARF) model outperforms traditional RF and other state-of-the-art classifiers in terms of predictive accuracy, sensitivity, specificity, precision, and Log_Loss, achieving scores of 96.59%, 95.15%, 98.04%, 98%, and 0.1179, respectively. The proposed methodology has the potential to make a significant contribution to early diagnosis, thereby reducing morbidity and mortality rates.
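
The EEFOA implementation itself is not given in the abstract; the scikit-learn sketch below (all names and data are ours) shows only the wrapper-style fitness evaluation that such a binary optimizer would repeatedly call: cross-validated Random Forest accuracy on a candidate feature subset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a heart disease dataset (13 clinical features).
X, y = make_classification(n_samples=300, n_features=13, n_informative=6,
                           random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy of an RF restricted to features where mask == 1."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

rng = np.random.default_rng(0)
candidate = (rng.random(13) < 0.5).astype(int)   # one candidate subset from the optimizer
print("mask:", candidate, "fitness:", round(fitness(candidate), 3))
```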

{"title":"Bio-inspired disease prediction: harnessing the power of electric eel foraging optimization algorithm with machine learning for heart disease prediction","authors":"Geetha Narasimhan,&nbsp;Akila Victor","doi":"10.1007/s10462-024-10975-0","DOIUrl":"10.1007/s10462-024-10975-0","url":null,"abstract":"<div><p>Heart disease is the most significant health problem around the world. Thus, it emphasizes the need for accurate and efficient predictive models for early diagnosis. This study proposes an innovative approach integrating the Electric Eel Foraging Optimization Algorithm (EEFOA) with the Random Forest (RF) algorithm for classifying heart disease prediction. EEFOA draws inspiration from the foraging behaviour of electric eels, a bio-inspired optimization framework capable of effectively exploring complex solutions. The objective is to improve the predictive performance of heart disease diagnosis by integrating optimization and Machine learning methodologies. The experiment uses a heart disease dataset comprising clinical and demographic features of at-risk individuals. Subsequently, EEFOA was applied to optimize the features of the dataset and classification using the RF algorithm, thereby enhancing its predictive performance. The results demonstrate that the Electric Eel Foraging Optimization Algorithm Random Forest (EEFOARF) model outperforms traditional RF and other state-of-the-art classifiers in terms of predictive accuracy, sensitivity, specificity, precision, and Log_Loss, achieving remarkable scores of 96.59%, 95.15%, 98.04%, 98%, and 0.1179, respectively. The proposed methodology has the potential to make a significant contribution, thereby reducing morbidity and mortality rates.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10975-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142519000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Chronobridge: a novel framework for enhanced temporal and relational reasoning in temporal knowledge graphs
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-22 · DOI: 10.1007/s10462-024-10983-0
Qian Liu, Siling Feng, Mengxing Huang, Uzair Aslam Bhatti

The task of predicting entities and relations in Temporal Knowledge Graph (TKG) extrapolation is crucial and has been studied extensively. Mainstream algorithms, such as Gated Recurrent Unit (GRU) models, primarily focus on encoding historical factual features within TKGs, often neglecting the importance of incorporating entity and relation features during decoding. This bias ultimately leads to loss of detail and inadequate prediction accuracy during inference. To address this issue, a novel ChronoBridge framework is proposed, featuring the dual mechanism of a chronological node encoder and a bridged feature fusion decoder. Specifically, the chronological node encoder employs an advanced recursive neural network with an enhanced GRU in an autoregressive manner to model historical KG sequences, thereby accurately capturing entity changes over time and significantly enhancing the model's ability to identify and encode temporal patterns of facts across the timeline. Meanwhile, the bridged feature fusion decoder utilizes a new variant of GRU and a multilayer perceptron mechanism during the prediction phase to extract entity and relation features and fuse them for inference, thereby strengthening the model's reasoning capabilities for future events. Testing on three standard datasets showed significant improvements, with a 25.21% increase in MRR accuracy and a 39.38% enhancement in relation inference. This advancement not only improves the understanding of temporal evolution in knowledge graphs but also sets a foundation for future research and applications of TKG reasoning.
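
ChronoBridge's code is not given in the abstract, so the PyTorch sketch below is a generic stand-in for the autoregressive idea: a GRU consumes a sequence of per-snapshot graph embeddings and emits a history-aware state from which entity scores can be decoded. All dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

num_entities, emb_dim, seq_len = 100, 32, 8

# One embedding vector per KG snapshot (in practice produced by a relational GNN).
snapshot_embeddings = torch.randn(1, seq_len, emb_dim)   # (batch, time, features)

encoder = nn.GRU(input_size=emb_dim, hidden_size=emb_dim, batch_first=True)
decoder = nn.Linear(emb_dim, num_entities)               # scores candidate entities

_, h_last = encoder(snapshot_embeddings)                 # h_last: (1, batch, emb_dim)
scores = decoder(h_last.squeeze(0))                      # (batch, num_entities)
print(scores.shape)  # torch.Size([1, 100])
```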

{"title":"Chronobridge: a novel framework for enhanced temporal and relational reasoning in temporal knowledge graphs","authors":"Qian Liu,&nbsp;Siling Feng,&nbsp;Mengxing Huang,&nbsp;Uzair Aslam Bhatti","doi":"10.1007/s10462-024-10983-0","DOIUrl":"10.1007/s10462-024-10983-0","url":null,"abstract":"<div><p>The task of predicting entities and relations in Temporal Knowledge Graph (TKG) extrapolation is crucial and has been studied extensively. Mainstream algorithms, such as Gated Recurrent Unit (GRU) models, primarily focus on encoding historical factual features within TKGs, often neglecting the importance of incorporating entities and relational features during decoding. This bias ultimately leads to loss of detail and inadequate prediction accuracy during the inference process. To address this issue, a novel ChronoBridge framework is proposed that features a dual mechanism of a chronological node encoder and a bridged feature fusion decoder. Specifically, the chronological node encoder employs an advanced recursive neural network with an enhanced GRU in an autoregressive manner to model historical KG sequences, thereby accurately capturing entity changes over time and significantly enhancing the model’s ability to identify and encode temporal patterns of facts across the timeline. Meanwhile, the bridged feature fusion decoder utilizes a new variant of GRU and a multilayer perception mechanism during the prediction phase to extract entity and relation features and fuse them for inference, thereby strengthening the reasoning capabilities of the model for future events. Testing on three standard datasets showed significant improvements, with a 25.21% increase in MRR accuracy and a 39.38% enhancement in relation inference. This advancement not only improves the understanding of temporal evolution in knowledge graphs but also sets a foundation for future research and applications of TKG reasoning.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10983-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142453024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Counterfactuals in fuzzy relational models
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-22 · DOI: 10.1007/s10462-024-10996-9
Rami Al-Hmouz, Witold Pedrycz, Ahmed Ammari

Given the pressing need for explainability in machine learning systems, studies on counterfactual explanations have gained significant interest. This research delves into this timely problem in the unique context of relational systems described by fuzzy relational equations. We develop a comprehensive solution to the counterfactual problems encountered in this setting, which is a novel contribution to the field. An underlying optimization problem is formulated, and its gradient-based solution is constructed. We demonstrate that the non-uniqueness of the derived solution is conveniently formalized and quantified by admitting a result in the form of information granules of a higher type, namely a type-2 or interval-valued fuzzy set. The construction of the solution in this format is realized by invoking the principle of justifiable granularity, another innovative aspect of our research. We also discuss ways of designing fuzzy relations and elaborate on methods of carrying out counterfactual explanations in rule-based models. Illustrative examples are included to present the performance of the method and interpret the obtained results.
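
As a toy version of the gradient-based search described above, the sketch below (our construction, with an assumed relation, target, and penalty weight) treats a fixed fuzzy relation under max-min composition as the model and nudges an input toward a counterfactual whose output matches a target, while penalizing distance from the original input:

```python
import torch

torch.manual_seed(0)
R = torch.rand(4, 3)                 # fixed fuzzy relation (4 inputs -> 3 outputs)
x0 = torch.rand(4)                   # original input
target = torch.tensor([0.9, 0.2, 0.6])

def maxmin(x: torch.Tensor) -> torch.Tensor:
    """Max-min composition: y_j = max_i min(x_i, R_ij)."""
    return torch.minimum(x.unsqueeze(1), R).max(dim=0).values

x = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    # Match the target output while staying close to the original input.
    loss = ((maxmin(x) - target) ** 2).sum() + 0.1 * ((x - x0) ** 2).sum()
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)           # keep membership grades in [0, 1]

print("counterfactual x:", x.detach().round(decimals=2))
```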

{"title":"Counterfactuals in fuzzy relational models","authors":"Rami Al-Hmouz,&nbsp;Witold Pedrycz,&nbsp;Ahmed Ammari","doi":"10.1007/s10462-024-10996-9","DOIUrl":"10.1007/s10462-024-10996-9","url":null,"abstract":"<div><p>Given the pressing need for explainability in Machine Learning systems, the studies on counterfactual explanations have gained significant interest. This research delves into this timely problem cast in a unique context of relational systems described by fuzzy relational equations. We develop a comprehensive solution to the counterfactual problems encountered in this setting, which is a novel contribution to the field. An underlying optimization problem is formulated, and its gradient-based solution is constructed. We demonstrate that the non-uniqueness of the derived solution is conveniently formalized and quantified by admitting a result coming in the form of information granules of a higher type, namely type-2 or interval-valued fuzzy set. The construction of the solution in this format is realized by invoking the principle of justifiable granularity, another innovative aspect of our research. We also discuss ways of designing fuzzy relations and elaborate on methods of carrying out counterfactual explanations in rule-based models. Illustrative examples are included to present the performance of the method and interpret the obtained results.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10996-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142452893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Graph neural networks for multi-view learning: a taxonomic review
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-21 · DOI: 10.1007/s10462-024-10990-1
Shunxin Xiao, Jiacheng Li, Jielong Lu, Sujia Huang, Bao Zeng, Shiping Wang

With the explosive growth of user-generated content, multi-view learning has become a rapidly growing direction in the pattern recognition and data analysis areas. Given the significant application value of multi-view learning, research based on machine learning methods and traditional deep learning paradigms has emerged continuously. The core challenge in multi-view learning lies in harnessing both consistent and complementary information to forge a unified, comprehensive representation. However, many multi-view learning tasks are based on graph-structured data, which existing methods cannot effectively mine from the multiple input data sources. Recently, graph neural network (GNN) techniques have been widely utilized to deal with non-Euclidean data, such as graphs or manifolds. Thus, it is essential to combine the powerful learning capability of GNN models with the advantages of multi-view data. In this paper, we aim to provide a comprehensive survey of recent research on GNN-based multi-view learning. Specifically, we first provide a taxonomy of GNN-based multi-view learning methods according to the input form of the models: multi-relation, multi-attribute, and mixed. Then, we introduce applications of multi-view learning, including recommendation systems, computer vision, and more. Moreover, several public datasets and open-source code repositories are introduced for implementation. Finally, we analyze the challenges of applying GNN models to various multi-view learning tasks and outline future directions in this field.
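
As a minimal sketch of per-view propagation followed by fusion (ours, in plain PyTorch rather than a dedicated GNN library), the code below runs one normalized-adjacency GCN layer per view and averages the view-specific representations into a unified one:

```python
import torch

def gcn_propagate(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """One GCN layer: relu(D^-1/2 (A + I) D^-1/2 · H · W)."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(A_norm @ H @ W)

torch.manual_seed(0)
n, f, out = 6, 8, 4
features = torch.randn(n, f)
# Two views = two symmetric adjacency matrices over the same nodes.
views = [torch.bernoulli(torch.full((n, n), 0.3)) for _ in range(2)]
views = [torch.triu(v, 1) + torch.triu(v, 1).T for v in views]
weights = [torch.randn(f, out) for _ in views]

# Per-view propagation, then simple mean fusion into a unified representation.
fused = torch.stack([gcn_propagate(A, features, W)
                     for A, W in zip(views, weights)]).mean(dim=0)
print(fused.shape)  # torch.Size([6, 4])
```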

{"title":"Graph neural networks for multi-view learning: a taxonomic review","authors":"Shunxin Xiao,&nbsp;Jiacheng Li,&nbsp;Jielong Lu,&nbsp;Sujia Huang,&nbsp;Bao Zeng,&nbsp;Shiping Wang","doi":"10.1007/s10462-024-10990-1","DOIUrl":"10.1007/s10462-024-10990-1","url":null,"abstract":"<div><p>With the explosive growth of user-generated content, multi-view learning has become a rapidly growing direction in pattern recognition and data analysis areas. Due to the significant application value of multi-view learning, there has been a continuous emergence of research based on machine learning methods and traditional deep learning paradigms. The core challenge in multi-view learning lies in harnessing both consistent and complementary information to forge a unified, comprehensive representation. However, many multi-view learning tasks are based on graph-structured data, making existing methods unable to effectively mine the information contained in the input multiple data sources. Recently, graph neural networks (GNN) techniques have been widely utilized to deal with non-Euclidean data, such as graphs or manifolds. Thus, it is essential to combine the advantages of the powerful learning capability of GNN models and multi-view data. In this paper, we aim to provide a comprehensive survey of recent research works on GNN-based multi-view learning. In detail, we first provide a taxonomy of GNN-based multi-view learning methods according to the input form of models: multi-relation, multi-attribute and mixed. Then, we introduce the applications of multi-view learning, including recommendation systems, computer vision and so on. Moreover, several public datasets and open-source codes are introduced for implementation. Finally, we analyze the challenges of applying GNN models on various multi-view learning tasks and state new future directions in this field.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10990-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142452997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Artificial intelligence techniques for dynamic security assessments - a survey
IF 10.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-21 · DOI: 10.1007/s10462-024-10993-y
Miguel Cuevas, Ricardo Álvarez-Malebrán, Claudia Rahmann, Diego Ortiz, José Peña, Rodrigo Rozas-Valderrama

The increasing uptake of converter-interfaced generation (CIG) is changing power system dynamics, rendering them extremely dependent on fast and complex control systems. Regularly assessing the stability of these systems across a wide range of operating conditions is thus a critical task for ensuring secure operation. However, the simultaneous simulation of both fast and slow (electromechanical) phenomena, along with an increased number of critical operating conditions, pushes traditional dynamic security assessments (DSA) to their limits. While DSA has served its purpose well, it will not be tenable in future electricity systems with thousands of power electronic devices at different voltage levels on the grid. Therefore, reducing both human and computational efforts required for stability studies is more critical than ever. In response to these challenges, several advanced simulation techniques leveraging artificial intelligence (AI) have been proposed in recent years. AI techniques can handle the increased uncertainty and complexity of power systems by capturing the non-linear relationships between the system’s operational conditions and their stability without solving the set of algebraic-differential equations that model the system. Once these relationships are established, system stability can be promptly and accurately evaluated for a wide range of scenarios. While hundreds of research articles confirm that AI techniques are paving the way for fast stability assessments, many questions and issues must still be addressed, especially regarding the pertinence of studying specific types of stability with the existing AI-based methods and their application in real-world scenarios. In this context, this article presents a comprehensive review of AI-based techniques for stability assessments in power systems. Different AI technical implementations, such as learning algorithms and the generation and treatment of input data, are widely discussed and contextualized. Their practical applications, considering the type of stability, system under study, and type of applications, are also addressed. We review the ongoing research efforts and the AI-based techniques put forward thus far for DSA, contextualizing and interrelating them. We also discuss the advantages, limitations, challenges, and future trends of AI techniques for stability studies.
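
The core shortcut described above, learning the mapping from operating conditions to a stability label instead of solving the algebraic-differential model, can be illustrated with a small scikit-learn sketch on synthetic data (the feature names and labeling rule are placeholders, not from any reviewed study):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic operating conditions: [load level, CIG share, tie-line flow, inertia proxy]
X = rng.uniform(0.0, 1.0, size=(2000, 4))
# Placeholder stability rule: low inertia plus high CIG share tends to be unstable.
y = ((X[:, 3] - 0.6 * X[:, 1] + 0.1 * rng.normal(size=2000)) > 0.1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)   # surrogate classifier replaces repeated dynamic simulation
print("hold-out accuracy:", round(clf.score(X_te, y_te), 3))
```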

{"title":"Artificial intelligence techniques for dynamic security assessments - a survey","authors":"Miguel Cuevas,&nbsp;Ricardo Álvarez-Malebrán,&nbsp;Claudia Rahmann,&nbsp;Diego Ortiz,&nbsp;José Peña,&nbsp;Rodigo Rozas-Valderrama","doi":"10.1007/s10462-024-10993-y","DOIUrl":"10.1007/s10462-024-10993-y","url":null,"abstract":"<div><p>The increasing uptake of converter-interfaced generation (CIG) is changing power system dynamics, rendering them extremely dependent on fast and complex control systems. Regularly assessing the stability of these systems across a wide range of operating conditions is thus a critical task for ensuring secure operation. However, the simultaneous simulation of both fast and slow (electromechanical) phenomena, along with an increased number of critical operating conditions, pushes traditional dynamic security assessments (DSA) to their limits. While DSA has served its purpose well, it will not be tenable in future electricity systems with thousands of power electronic devices at different voltage levels on the grid. Therefore, reducing both human and computational efforts required for stability studies is more critical than ever. In response to these challenges, several advanced simulation techniques leveraging artificial intelligence (AI) have been proposed in recent years. AI techniques can handle the increased uncertainty and complexity of power systems by capturing the non-linear relationships between the system’s operational conditions and their stability without solving the set of algebraic-differential equations that model the system. Once these relationships are established, system stability can be promptly and accurately evaluated for a wide range of scenarios. While hundreds of research articles confirm that AI techniques are paving the way for fast stability assessments, many questions and issues must still be addressed, especially regarding the pertinence of studying specific types of stability with the existing AI-based methods and their application in real-world scenarios. In this context, this article presents a comprehensive review of AI-based techniques for stability assessments in power systems. Different AI technical implementations, such as learning algorithms and the generation and treatment of input data, are widely discussed and contextualized. Their practical applications, considering the type of stability, system under study, and type of applications, are also addressed. We review the ongoing research efforts and the AI-based techniques put forward thus far for DSA, contextualizing and interrelating them. We also discuss the advantages, limitations, challenges, and future trends of AI techniques for stability studies.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"57 12","pages":""},"PeriodicalIF":10.7,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-024-10993-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142452995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0