
Artificial Intelligence Review: Latest Publications

Artificial intelligence techniques for dynamic security assessments - a survey
IF 10.7 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-21 · DOI: 10.1007/s10462-024-10993-y
Miguel Cuevas, Ricardo Álvarez-Malebrán, Claudia Rahmann, Diego Ortiz, José Peña, Rodrigo Rozas-Valderrama

The increasing uptake of converter-interfaced generation (CIG) is changing power system dynamics, rendering them extremely dependent on fast and complex control systems. Regularly assessing the stability of these systems across a wide range of operating conditions is thus a critical task for ensuring secure operation. However, the simultaneous simulation of both fast and slow (electromechanical) phenomena, along with an increased number of critical operating conditions, pushes traditional dynamic security assessments (DSA) to their limits. While DSA has served its purpose well, it will not be tenable in future electricity systems with thousands of power electronic devices at different voltage levels on the grid. Therefore, reducing both human and computational efforts required for stability studies is more critical than ever. In response to these challenges, several advanced simulation techniques leveraging artificial intelligence (AI) have been proposed in recent years. AI techniques can handle the increased uncertainty and complexity of power systems by capturing the non-linear relationships between the system’s operational conditions and their stability without solving the set of algebraic-differential equations that model the system. Once these relationships are established, system stability can be promptly and accurately evaluated for a wide range of scenarios. While hundreds of research articles confirm that AI techniques are paving the way for fast stability assessments, many questions and issues must still be addressed, especially regarding the pertinence of studying specific types of stability with the existing AI-based methods and their application in real-world scenarios. In this context, this article presents a comprehensive review of AI-based techniques for stability assessments in power systems. Different AI technical implementations, such as learning algorithms and the generation and treatment of input data, are widely discussed and contextualized. 
Their practical applications, considering the type of stability, the system under study, and the type of application, are also addressed. We review the ongoing research efforts and the AI-based techniques put forward thus far for DSA, contextualizing and interrelating them. We also discuss the advantages, limitations, challenges, and future trends of AI techniques for stability studies.
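The central idea in this abstract (learning the mapping from operating conditions to stability labels, so that new scenarios can be screened without re-solving the differential-algebraic model) can be sketched with a generic classifier. Everything below is illustrative: the synthetic features, the toy stability rule, and the random-forest choice are assumptions, not the surveyed methods.

```python
# Hedged sketch: a data-driven surrogate for dynamic security assessment.
# Each row stands for an operating condition (load level, CIG share, line
# flows); the label marks whether the case is stable. In practice, labels
# would come from offline time-domain simulations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 4))  # [load_level, cig_share, flow_a, flow_b]
# Toy stability rule: high CIG share combined with heavy loading is unstable.
y = ((X[:, 0] + X[:, 1]) < 1.2).astype(int)  # 1 = stable, 0 = unstable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
# Once trained, each new scenario is screened in milliseconds instead of
# running a full differential-algebraic simulation.
print(f"held-out accuracy: {acc:.2f}")
```

The surrogate captures the non-linear condition-to-stability relationship the abstract describes; the expensive simulations are only needed offline, to label the training set.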

Citations: 0
A review of Artificial Intelligence methods in bladder cancer: segmentation, classification, and detection
IF 10.7 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-21 · DOI: 10.1007/s10462-024-10953-6
Ayah Bashkami, Ahmad Nasayreh, Sharif Naser Makhadmeh, Hasan Gharaibeh, Ahmed Ibrahim Alzahrani, Ayed Alwadain, Jia Heming, Absalom E. Ezugwu, Laith Abualigah

Artificial intelligence (AI) and other disruptive technologies can potentially improve healthcare across various disciplines. Its subfields, including artificial neural networks, deep learning, and machine learning, excel at extracting insights from large datasets and at improving predictive models to boost their utility and accuracy. Though research in this area is still in its early phases, it holds enormous potential for the diagnosis, prognosis, and treatment of urological diseases such as bladder cancer. Long-used nomograms and other classic forecasting approaches are being reconsidered in light of AI's capabilities. This review emphasizes the coming integration of artificial intelligence into healthcare settings while critically examining the most recent and significant literature on the subject. This study seeks to define the status of AI and its potential for the future, with a special emphasis on how AI can transform bladder cancer diagnosis and treatment.

Citations: 0
A survey of recent approaches to form understanding in scanned documents
IF 10.7 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-21 · DOI: 10.1007/s10462-024-11000-0
Abdelrahman Abdallah, Daniel Eberharter, Zoe Pfister, Adam Jatowt

This paper presents a comprehensive survey of over 100 research works on form understanding in scanned documents. We delve into recent advancements and breakthroughs in the field, with a particular focus on transformer-based models, which have been shown to improve accuracy on form understanding tasks by up to 25% compared to traditional methods. Our research methodology involves an in-depth analysis of popular documents and trends over the last decade, covering 15 state-of-the-art models and 10 benchmark datasets. By examining these works, we offer novel insights into the evolution of this domain. Specifically, we highlight how transformers have revolutionized form-understanding techniques by enhancing the ability to process noisy scanned documents, with significant improvements in OCR accuracy. Furthermore, we present an overview of the most relevant datasets, such as FUNSD, CORD, and SROIE, which serve as benchmarks for evaluating model performance. By comparing the capabilities of these models and reporting an average improvement of 10–15% in key form extraction tasks, we aim to provide researchers and practitioners with useful guidance in selecting the most suitable solutions for their form understanding applications.
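As a concrete illustration of how models are scored on benchmarks such as FUNSD, CORD, and SROIE, entity-level precision, recall, and F1 over extracted (field, value) pairs is the usual metric. This is a minimal sketch with invented gold and predicted fields, not data from any surveyed system.

```python
# Hedged sketch: entity-level precision/recall/F1 for form extraction.
# An extraction counts as correct only when both the field name and its
# value match the gold annotation exactly.
def prf1(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Illustrative receipt fields (SROIE-style): one value is extracted wrong.
gold = {("company", "ACME"), ("date", "2024-10-21"), ("total", "10.00")}
pred = {("company", "ACME"), ("date", "2024-10-21"), ("total", "12.00")}
p, r, f1 = prf1(gold, pred)
print(round(p, 2), round(r, 2), round(f1, 2))  # 2 of 3 fields correct
```

Reported benchmark gains (e.g. the 10–15% improvements mentioned above) are typically deltas in exactly this kind of entity-level F1.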

Citations: 0
Tire wear monitoring using feature fusion and CatBoost classifier
IF 10.7 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-19 · DOI: 10.1007/s10462-024-10999-6
C. V. Prasshanth, V. Sugumaran

Addressing the critical issue of tire wear is essential for enhancing vehicle safety, performance, and maintenance. Worn-out tires often lead to accidents, underscoring the need for effective monitoring systems. This study is vital for several reasons: safety, as worn tires increase the risk of accidents due to reduced traction and longer braking distances; performance, as uneven tire wear affects vehicle handling and fuel efficiency; maintenance costs, as early detection can prevent more severe damage to suspension and alignment systems; and regulatory compliance, as ensuring tire integrity helps meet safety regulations imposed by transportation authorities. In response, this study systematically evaluates tire conditions at 25%, 50%, 75%, and 100% wear, with an intact tire as a reference, using vibration signals as the primary data source. The analysis employs statistical, histogram, and autoregressive–moving-average (ARMA) feature extraction techniques, followed by feature selection to identify key parameters influencing tire wear. CatBoost is used for feature classification, leveraging its adaptability and efficiency in distinguishing varying wear patterns. Additionally, the study incorporates feature fusion to combine different types of features for a more comprehensive analysis. The proposed methodology not only offers a robust framework for accurately classifying tire wear levels but also holds significant potential for real-time implementation, contributing to proactive maintenance practices, prolonged tire lifespan, and overall vehicular safety.
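A minimal sketch of the pipeline this abstract describes: extract statistical features from vibration windows, then classify the wear level. The synthetic signals, the reduced feature set (histogram and ARMA features are omitted), and the use of scikit-learn's gradient boosting as a stand-in for CatBoost are all assumptions made for illustration.

```python
# Hedged sketch: statistical features from vibration signals -> wear class.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

def vibration_features(signal):
    # A few statistical features of the kind the study extracts:
    # mean, standard deviation, RMS, and a fourth-moment (kurtosis-like) term.
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([signal.mean(), signal.std(), rms, np.mean(signal ** 4)])

def synth_signal(wear_level):
    # Toy signal model: wear raises vibration amplitude and noise floor.
    t = np.linspace(0.0, 1.0, 256)
    amp = 1.0 + wear_level
    return amp * np.sin(2 * np.pi * 40 * t) + rng.normal(0.0, 0.2 + 0.3 * wear_level, t.size)

levels = [0.0, 0.25, 0.5, 0.75, 1.0]  # intact tire .. 100% wear
X = np.array([vibration_features(synth_signal(w)) for w in levels for _ in range(60)])
y = np.repeat(np.arange(len(levels)), 60)  # class index per wear level

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Feature fusion, in the paper's sense, would concatenate these statistical features with histogram and ARMA features before classification; the hypothetical `vibration_features` here shows only the statistical branch.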

Citations: 0
Clarity in complexity: how aggregating explanations resolves the disagreement problem
IF 10.7 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-19 · DOI: 10.1007/s10462-024-10952-7
Oana Mitruț, Gabriela Moise, Alin Moldoveanu, Florica Moldoveanu, Marius Leordeanu, Livia Petrescu

The Rashômon Effect, applied in Explainable Machine Learning, refers to the disagreement between the explanations provided by various attribution explainers and to the dissimilarity across multiple explanations generated by a particular explainer for a single instance from the dataset (differences between feature importances and their associated signs and ranks), an undesirable outcome especially in sensitive domains such as healthcare or finance. We propose a method inspired by textual case-based reasoning for aligning explanations from various explainers in order to resolve the disagreement and dissimilarity problems. We iteratively generated 100 explanations for each instance from six popular datasets, using three prevalent feature attribution explainers: LIME, Anchors, and SHAP (with the variations Tree SHAP and Kernel SHAP), and then applied a global cluster-based aggregation strategy that quantifies alignment and reveals similarities and associations between explanations. We evaluated our method by weighting the k-NN algorithm with agreed feature overlap explanation weights and compared it to a non-weighted k-NN predictor on a binary classification task. We also compared the results of the weighted k-NN algorithm using aggregated feature overlap explanation weights to those of the weighted k-NN algorithm using weights produced by a single explanation method (LIME, SHAP, or Anchors). Our global alignment method benefited the most from hybridization with feature importance scores (information gain), which was essential for acquiring a more accurate estimate of disagreement, for enabling explainers to reach a consensus across multiple explanations, and for supporting effective model learning through improved classification performance.
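The evaluation idea, a k-NN predictor whose distance metric is re-weighted by explanation-derived feature weights, can be sketched as follows. The data points and the weight vector are invented for illustration; they are not taken from the paper.

```python
# Hedged sketch: k-NN with explanation-derived feature weighting.
import numpy as np

def weighted_knn_predict(X_train, y_train, x, weights, k=3):
    # Scale each feature dimension by its (explanation-derived) weight
    # before computing Euclidean distance, then take a majority vote.
    d = np.sqrt((((X_train - x) * weights) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    votes = np.bincount(y_train[nearest])
    return votes.argmax()

# Four training points; feature 2 has a large scale but is pure noise.
X_train = np.array([[0., 0.,  5.],
                    [0., 1., -5.],
                    [1., 0.,  5.],
                    [1., 1., -4.]])
y_train = np.array([0, 0, 1, 1])

# Suppose the aggregated explanations agree that feature 0 matters most
# and feature 2 is noise, so it is weighted down accordingly.
weights = np.array([1.0, 0.3, 0.01])
print(weighted_knn_predict(X_train, y_train, np.array([0.9, 0.1, -5.0]), weights))
```

The unweighted baseline in the paper corresponds to `weights` being all ones; the comparison is between that and weights derived from a single explainer versus the aggregated explanations.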

Citations: 0
Controllable image synthesis methods, applications and challenges: a comprehensive survey
IF 10.7 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-18 · DOI: 10.1007/s10462-024-10987-w
Shanshan Huang, Qingsong Li, Jun Liao, Shu Wang, Li Liu, Lian Li

Controllable Image Synthesis (CIS) is a methodology that allows users to generate desired images or manipulate specific attributes of images by providing precise input conditions or modifying latent representations. In recent years, CIS has attracted considerable attention in the field of image processing, with significant advances in consistency, controllability, and harmony. However, several challenges remain, particularly regarding the fine-grained controllability and interpretability of synthesized images. In this paper, we comprehensively and systematically review CIS, from problem definition, taxonomy, and evaluation systems to existing challenges and future research directions. First, the definition of CIS is given, and several representative deep generative models are introduced in detail. Second, the existing CIS methods are divided into three categories according to the control manner used, and the typical work in each category is discussed critically. Furthermore, we introduce the public datasets and evaluation metrics commonly used in image synthesis and analyze representative CIS methods. Finally, we present several open issues and discuss future research directions for CIS.

Citations: 0
The atmospheric boundary layer: a review of current challenges and a new generation of machine learning techniques
IF 10.7 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-17 · DOI: 10.1007/s10462-024-10962-5
Linda Canché-Cab, Liliana San-Pedro, Bassam Ali, Michel Rivero, Mauricio Escalante

The structure and dynamics of the atmospheric boundary layer (ABL) are important aspects to consider for human health. The ABL is characterized by a high degree of spatial and temporal variability that hinders its understanding. This paper aims to provide a comprehensive overview of machine learning (ML) methodologies, encompassing deep learning and ensemble approaches, within the scope of ABL research. The goal is to highlight the challenges and opportunities of using ML for turbulence modeling and parameterization in areas such as atmospheric pollution, meteorology, and renewable energy. The review emphasizes the validation of results to ensure their reliability and applicability. ML has proven to be a valuable tool for understanding and predicting how ABL spatial and seasonal variability affects pollutant dispersion and public health. In addition, it has been demonstrated that ML can be used to estimate several variables and parameters, such as ABL height, making it a promising approach for enhancing air quality management and urban planning.

引用次数: 0
Surface defect inspection of industrial products with object detection deep networks: a systematic review
IF 10.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-10-17. DOI: 10.1007/s10462-024-10956-3
Yuxin Ma, Jiaxing Yin, Feng Huang, Qipeng Li

One of the focal points in industrial product defect detection lies in the utilization of deep learning-based object detection algorithms. With the continuous introduction of these algorithms and their refined models, notable achievements have been attained. However, challenges persist in industrial settings, such as substantial variations in defect scales, the delicate balance between accuracy and speed, and the detection of small objects. Various methods have been proposed to address these challenges and propel the advancement of defect detection. To comprehensively review the latest developments in deep learning-based industrial product defect detection algorithms and foster further progress, this paper encompasses typical datasets and evaluation metrics used in industrial product defect detection, traces the development history of supervised one-stage and two-stage object detection algorithm-based and unsupervised algorithm-based industrial defect detection methods, discusses major challenges, and outlines future directions. It highlights the potential for further improving the accuracy, speed, and reliability of defect detection systems in industrial applications.
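The evaluation metrics the review surveys (notably mAP) reduce at the box level to Intersection-over-Union (IoU). A minimal sketch — the coordinate convention and the 0.5 true-positive threshold mentioned in the comment are the conventional choices in object detection, not specifics of this paper:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted defect box is commonly counted as a true positive when its
# IoU with a ground-truth box exceeds a threshold such as 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap of one third
```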

Artificial Intelligence Review, vol. 57, no. 12 (2024). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-024-10956-3.pdf
Citations: 0
Recent applications and advances of African Vultures Optimization Algorithm
IF 10.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-10-17. DOI: 10.1007/s10462-024-10981-2
Abdelazim G. Hussien, Farhad Soleimanian Gharehchopogh, Anas Bouaouda, Sumit Kumar, Gang Hu

The African Vultures Optimization Algorithm (AVOA) is a recently developed meta-heuristic algorithm inspired by the foraging behavior of African vultures in nature. This algorithm has gained attention due to its simplicity, flexibility, and effectiveness in tackling many optimization problems. The significance of this review lies in its comprehensive examination of the AVOA’s development, core principles, and applications. By analyzing 112 studies, this review highlights the algorithm’s versatility and the growing interest in enhancing its performance for real-world optimization challenges. This review methodically explores the evolution of AVOA, investigating proposed improvements that enhance the algorithm’s ability to adapt to various search geometries in optimization problems. Additionally, it introduces the AVOA solver, detailing its functionality and application in different optimization scenarios. The review demonstrates the AVOA’s effectiveness, particularly its unique weighting mechanism, which mimics vulture behavior during the search process. The findings underscore the algorithm’s robustness, ease of use, and lack of dependence on derivative information. The review also critically evaluates the AVOA’s convergence behavior, identifying its strengths and limitations. In conclusion, the study not only consolidates the existing knowledge on AVOA but also proposes directions for future research, including potential adaptations and enhancements to address its limitations. The insights gained from this review offer valuable guidance for researchers and practitioners seeking to apply or improve the AVOA in various optimization tasks.
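The exploration-to-exploitation shift that vulture-inspired search relies on can be illustrated with a deliberately simplified population loop. This is a generic sketch under the stated assumptions, not the published AVOA update equations or its satiation-based weighting; the function names and parameters are invented for illustration:

```python
import random

def optimize(fitness, dim, bounds, pop_size=20, iters=100, seed=0):
    """Simplified vulture-style search: each candidate either jumps to a
    random point (exploration) or moves toward the best solution found so
    far (exploitation), with exploitation becoming more likely over time."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for t in range(iters):
        explore_prob = 1.0 - t / iters  # decays: explore early, exploit late
        for i, x in enumerate(pop):
            if rng.random() < explore_prob:
                cand = [rng.uniform(lo, hi) for _ in range(dim)]
            else:
                cand = [xi + rng.uniform(-1, 1) * (bi - xi)
                        for xi, bi in zip(x, best)]
            if fitness(cand) < fitness(x):  # greedy acceptance
                pop[i] = cand
        best = min(pop + [best], key=fitness)  # elitism: best never worsens
    return best

sphere = lambda v: sum(xi * xi for xi in v)
best = optimize(sphere, dim=3, bounds=(-5.0, 5.0))
print(sphere(best))  # typically close to 0 on this convex test function
```

The decaying `explore_prob` stands in for the satiation-dependent phase selection the abstract alludes to; the real algorithm uses several distinct movement strategies rather than a single convex step toward the best vulture.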

Artificial Intelligence Review, vol. 57, no. 12 (2024). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-024-10981-2.pdf
Citations: 0
An efficient propositional system for Abductive Logic Programming
IF 10.7, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-10-17. DOI: 10.1007/s10462-024-10928-7
Marco Gavanelli, Pascual Julián-Iranzo, Fernando Sáenz-Pérez

Abductive logic programming (ALP) extends logic programming with hypothetical reasoning by means of abducibles, an extension able to handle interesting problems such as diagnosis, planning, and verification with formal methods. Implementations of this extension have used Prolog meta-interpreters and Prolog programs with Constraint Handling Rules (CHR). While the latter adds a clean and efficient interface to the host system, its performance still suffers on large programs. Here, the concern is to obtain a more performant implementation of the SCIFF system following a compiled approach. As a first step toward this long-term goal, this paper sets out a propositional ALP system following SCIFF, eliminating the need for CHR and achieving better performance.
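What a propositional abductive system computes can be shown concretely: given Horn rules and a set of abducible atoms, find the subset-minimal sets of assumptions that entail an observation. The rules and atom names below are invented for illustration, and the brute-force subset enumeration stands in for the far more efficient proof procedures that SCIFF-style systems use:

```python
from itertools import combinations

# Hypothetical propositional theory. Each rule maps a head atom to a list
# of alternative bodies (head <- body); only abducible atoms may be assumed.
RULES = {
    "wet_grass": [{"rain"}, {"sprinkler_on"}],
    "slippery": [{"wet_grass"}],
}
ABDUCIBLES = {"rain", "sprinkler_on"}

def entails(assumed, goal):
    """Forward-chain the Horn rules from the assumed abducibles."""
    derived = set(assumed)
    changed = True
    while changed:
        changed = False
        for head, bodies in RULES.items():
            if head not in derived and any(b <= derived for b in bodies):
                derived.add(head)
                changed = True
    return goal in derived

def abduce(goal):
    """Return the subset-minimal sets of abducibles that explain the goal."""
    candidates = [set(s)
                  for r in range(len(ABDUCIBLES) + 1)
                  for s in combinations(sorted(ABDUCIBLES), r)
                  if entails(s, goal)]
    return [s for s in candidates if not any(o < s for o in candidates)]

print(abduce("slippery"))  # each single assumption already explains the goal
```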

Artificial Intelligence Review, vol. 57, no. 12 (2024). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-024-10928-7.pdf
Citations: 0