
Latest Articles in Artificial Intelligence Review

Trinocular vision with deep learning for object twist estimation: a benchmark approach and dataset
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-08 · DOI: 10.1007/s10462-025-11461-x
Jinghao Wang, KaiKai Cui, Teng Teng, Zi Wang, Boming Ren, Qifeng Yu

Accurate estimation of a rigid object’s 6D pose and twist is a fundamental challenge in enabling autonomous systems to operate in, and interact with, their environment. Monocular vision can mitigate the drift inherent in inertial methods and enables twist estimation for non-cooperative targets. However, monocular estimation accuracy is compromised by inherent depth ambiguity, a limitation that multi-camera systems can overcome. The lack of multi-view datasets with high-precision annotations hinders the development of robust perception algorithms. To address this, we present the first trinocular pose and twist estimation dataset for non-cooperative targets, comprising images of aircraft (7,824 for training, 4,710 for testing) and satellites (7,683 for training, 4,380 for testing), all manually annotated with sub-pixel-level keypoints and pose labels derived from optimization. We develop a neural network that predicts semantic keypoints for robust pose estimation. Combined with a multi-view optimization framework and twist estimation, our system achieves a mean angular velocity error of 0.1°/s and a mean linear velocity error of 0.3 mm/s. Our open-source dataset and method provide a critical benchmark for future research in aerospace missions. The dataset is available at https://www.kaggle.com/datasets/mingshiwuwjh/trinocular-pose-and-twist-estimation-dataset.
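The reported angular-velocity error is measured against estimated orientations over time. One standard way to recover a mean angular velocity from two orientation estimates (a generic sketch, not the paper's pipeline) is the axis-angle form of the relative rotation between consecutive frames:

```python
import numpy as np

def angular_velocity(R1, R2, dt):
    """Mean angular velocity (rad/s) between two orientations R1, R2
    (3x3 rotation matrices) separated by dt seconds, via the axis-angle
    form of the relative rotation R = R1^T @ R2.
    Note: this closed form degenerates as the angle approaches pi."""
    R = R1.T @ R2
    # Rotation angle from the trace; clip for numerical safety.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    # Rotation axis from the skew-symmetric part of R.
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return axis * theta / dt
```

For a pure 0.1 rad rotation about the z-axis over one second, this returns approximately [0, 0, 0.1].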

Citations: 0
A comprehensive review of theoretical concepts and advancements in physics-informed neural networks with applications in structural engineering
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-02 · DOI: 10.1007/s10462-025-11444-y
Surendra Baniya, Damodar Maity

Structural engineering (SE) is a diverse field with numerous applications, including computational mechanics, structural simulation, and topology optimization, all governed by fundamental physical principles and typically addressed through classical numerical methods. These methods have delivered reliable and accurate solutions for forward problems within well-defined domains. However, they can become less effective when faced with high-dimensional spaces, complex geometries, irregular domains, or inverse problems with limited data. In parallel, data-driven models have gained popularity, but their dependence on large datasets and lack of physical interpretability restrict their generalization to unseen conditions. Physics-Informed Neural Networks (PINNs) have recently emerged as complementary tools that combine the strengths of numerical and data-driven approaches. By embedding governing physical laws directly into the learning process, PINNs reduce reliance on extensive datasets while improving interpretability and robustness. Although they may not yet rival classical solvers in terms of computational efficiency or accuracy for standard forward problems, PINNs offer unique advantages in scenarios where meshing is challenging, data and physics need to be integrated, or inverse problems require parameter identification and damage detection. This review provides a comprehensive overview of PINNs in SE, focusing on their theoretical framework, training strategies, computational implementations, and applications to both forward and inverse problems. The discussion highlights their advantages in accuracy, flexibility, and hybrid data-physics integration, while also outlining current limitations and future research directions to enhance their robustness and applicability for solving complex real-world SE problems.
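The core PINN idea the abstract describes, embedding the governing equation into the training loss, can be sketched on a toy ODE u′(x) = −u(x), u(0) = 1 (exact solution e^(−x)) with a tiny hand-differentiated tanh network. This is an illustrative minimum only: real PINNs use autodiff frameworks, PDEs, and much larger networks; the network shape, optimizer, and collocation grid here are my assumptions.

```python
import numpy as np
from scipy.optimize import minimize

N = 8  # hidden width of the toy network

def u(params, x):
    # One-hidden-layer tanh network u(x; params); params packs w1, b1, w2, b2.
    w1, b1, w2, b2 = np.split(params, 4)
    h = np.tanh(np.outer(x, w1) + b1)      # (n_pts, N)
    return h @ w2 + b2[0]

def du_dx(params, x):
    # d/dx of the network, differentiated by hand via the chain rule.
    w1, b1, w2, b2 = np.split(params, 4)
    h = np.tanh(np.outer(x, w1) + b1)
    return (1.0 - h ** 2) @ (w1 * w2)

def pinn_loss(params, x):
    # Physics-informed loss: mean squared ODE residual + boundary penalty.
    residual = du_dx(params, x) + u(params, x)       # u' + u should be 0
    boundary = u(params, np.array([0.0]))[0] - 1.0   # u(0) should be 1
    return np.mean(residual ** 2) + boundary ** 2

rng = np.random.default_rng(0)
params0 = rng.normal(scale=0.5, size=4 * N)
x_col = np.linspace(0.0, 1.0, 32)          # collocation points

res = minimize(pinn_loss, params0, args=(x_col,), method="L-BFGS-B")
err = abs(u(res.x, np.array([0.5]))[0] - np.exp(-0.5))
print(f"final loss {res.fun:.2e}, |u(0.5) - exp(-0.5)| = {err:.2e}")
```

Note that no solution data appears anywhere in the loss; the physics residual alone, plus the boundary condition, pins the network down, which is exactly the reduced data dependence the abstract highlights.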

Citations: 0
Androbank: the impact of API levels on mobile malware detection
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-02 · DOI: 10.1007/s10462-025-11452-y
Milan Oulehla, Ladislav Dorotík, Zuzana Komínková Oplatková

Android is the most widely used operating system, making it a prime target for mobile malware that causes data breaches and financial losses (e.g., Dark Herring). To address these issues, AI-based forensic tools are crucial for investigating security incidents, but their accuracy depends on high-quality mobile malware datasets. Because dynamic analysis has limitations, recent research has shifted towards static analysis and AI-based methods for malware detection. However, three key challenges remain: lack of reproducibility, low dataset quality, and bias in AI datasets. This paper focuses on an overlooked bias: the incorrect API Level distribution in malware datasets. Such bias skews AI detection results, making them appear effective in tests but less applicable in real-world scenarios. To highlight the importance of dataset quality, three case studies on API Level analysis were conducted, showing how biased datasets can distort detection results. To address this, the paper introduces methods and terms such as Delayed Interception, Dataset of Guaranteed Quality, API Milestones, AndroBank, and Sample Unification, which aim to enhance dataset reliability and improve AI-based mobile malware detection.
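As a toy illustration of the kind of bias the paper targets (not AndroBank's actual methodology), one can quantify how far apart the API-level distributions of the malware and benign halves of a dataset are. A large distance means a classifier can appear to "detect malware" simply by reading the API level:

```python
from collections import Counter

def api_level_bias(malware_levels, benign_levels):
    """Total-variation distance between the API-level distributions of a
    malware set and a benign set. Near 0: the two sets are balanced in
    API level; near 1: API level alone almost separates the classes,
    i.e. a shortcut feature a detector could exploit."""
    m, b = Counter(malware_levels), Counter(benign_levels)
    n_m, n_b = len(malware_levels), len(benign_levels)
    levels = set(m) | set(b)
    return 0.5 * sum(abs(m[l] / n_m - b[l] / n_b) for l in levels)

# A heavily skewed split: malware mostly API 19, benign mostly API 30.
print(api_level_bias([19] * 90 + [30] * 10, [19] * 10 + [30] * 90))  # ≈ 0.8
```

A dataset curated so this distance is close to zero forces the model to learn from actual malicious behavior rather than from the API-level artifact.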

Citations: 0
Optimizing biofortified crop selection: a novel feed-backward double hierarchy linguistic neural network approach with Yager-Dombi t-norms
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-29 · DOI: 10.1007/s10462-025-11154-5
Shougi S. Abosuliman, Saleem Abdullah, Nawab Ali

Biofortified crops have gained significant attention as a sustainable solution to malnutrition and undernutrition, particularly in developing countries. These crops are genetically enhanced to carry higher levels of essential nutrients such as vitamins and minerals. Biofortification aims to improve the nutritional quality of staple foods and promote better health outcomes in populations that rely heavily on these crops. Biofortified crops play an important role in addressing global health challenges, providing a cost-effective and scalable approach to improving nutrition. However, selecting among candidate biofortified crops is a complex task for decision-makers. Therefore, this paper introduces a novel approach, the feed-backward double-hierarchy linguistic neural network, which uses double-hierarchy linguistic term fuzzy information to handle this issue. To this end, we develop a series of weighted averaging Yager-Dombi aggregation operators and discuss their desirable properties. The decision-making process is complicated by unknown weight vectors; entropy-based distance measures are used to determine them. The study addresses a real-world MADM problem by demonstrating that biofortified rice could potentially address vitamin A deficiency, a significant health concern in developing regions. The WASPAS approach is used to verify the proposed method, and its feasibility and efficacy are evaluated against other MADM techniques.
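To give a concrete sense of Dombi-style weighted averaging of fuzzy membership degrees, here is one standard form from the Dombi-operator literature. This is a plain Dombi weighted average for the membership component only; the paper's combined Yager-Dombi operators and the double-hierarchy linguistic setting differ, so treat this as background, not the proposed method:

```python
def dombi_weighted_average(mu, w, k=1.0):
    """Dombi weighted averaging of membership degrees 0 < mu_i < 1 with
    weights w summing to 1 (a standard form from the Dombi aggregation
    literature; k > 0 is the Dombi operational parameter)."""
    s = sum(wi * (mi / (1.0 - mi)) ** k for wi, mi in zip(w, mu))
    return 1.0 - 1.0 / (1.0 + s ** (1.0 / k))

# Idempotency: aggregating identical memberships returns that membership.
print(dombi_weighted_average([0.6, 0.6, 0.6], [0.2, 0.3, 0.5]))  # ≈ 0.6
```

Idempotency and boundedness (the result stays between the smallest and largest input) are exactly the "desirable properties" an aggregation operator is expected to satisfy.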

Citations: 0
Reinforcement learning for single-agent to multi-agent systems: from basic theory to industrial application progress, a survey
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-27 · DOI: 10.1007/s10462-025-11439-9
Dehua Zhang, Qingsong Yuan, Lei Meng, Ruixue Xia, Wei Liu, Chunbin Qin

Reinforcement learning (RL), an emerging interdisciplinary field formed by the integration of artificial intelligence and control science, is showing a cross-disciplinary development trend led by artificial intelligence and has become a research hotspot in optimal control. This paper systematically reviews the development of RL, focusing on the intrinsic connection between single-agent reinforcement learning (SARL) and multi-agent reinforcement learning (MARL). First, starting from the formation and development of RL, it elaborates the similarities and differences between RL and other learning paradigms in machine learning, and briefly introduces the main branches of current RL. Then, taking the basic concepts and core ideas of SARL as the framework and extending them to multi-agent system (MAS) collaborative control, it explores the coherence of the two in theoretical frameworks and algorithm design. On this basis, the paper organizes SARL algorithms into dynamic programming, value-function decomposition, and policy gradient (PG) types, and abstracts MARL algorithms into four paradigms (behavior analysis, centralized learning, communication learning, and collaborative learning), thereby establishing an algorithmic mapping from single-agent to multi-agent scenarios. This framework provides a new perspective for understanding the evolutionary correlation between the two families of methods, and the paper also discusses the challenges MARL faces in large-scale MAS problems and candidate solutions. The paper aims to serve as a reference for researchers in this field, and to promote the development of cooperative control and optimization methods for MAS as well as the advancement of related application research.
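Of the SARL families the survey maps (dynamic programming, value-function, and policy-gradient types), the value-function branch is the easiest to show concretely. A minimal tabular Q-learning sketch on a toy 5-state chain MDP (my own illustrative environment, not anything from the survey):

```python
import numpy as np

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain MDP: actions 0 = left, 1 = right,
    reward 1 only on reaching the terminal right end."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy; break ties randomly so the untrained agent explores.
            if rng.random() < eps or Q[s, 0] == Q[s, 1]:
                a = int(rng.integers(2))
            else:
                a = int(np.argmax(Q[s]))
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Temporal-difference update toward the bootstrapped target.
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q

Q = q_learning()
print(np.argmax(Q, axis=1)[:-1])  # greedy policy per non-terminal state: all "right"
```

The learned greedy policy moves right in every non-terminal state, and the action values decay by the discount factor with distance from the goal, which is the value-function picture that MARL methods such as value decomposition then generalize to multiple agents.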

Citations: 0
Advances in machine learning for wetland classification: a comprehensive survey of methods and applications
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-25 · DOI: 10.1007/s10462-025-11413-5
Derrick Effah, Ali Zia, Mohammad Awrangjeb, Yongsheng Gao, Kwabena Sarpong

Wetlands are critical ecosystems supporting biodiversity and providing essential environmental services. With increasing threats to wetlands, efficient classification techniques are essential for effective conservation. Several researchers have contributed to wetland classification across reputable journals. However, some challenges (data scarcity, noisy labels, and model generalisability) still exist. Therefore, this paper presents a comprehensive survey of recent advancements in machine learning (ML) and deep learning (DL) for wetland classification, focusing on developments from 2018 to 2025. Key methodologies, including convolutional neural networks, transformers, and generative adversarial networks, are critically reviewed, highlighting their strengths, limitations, and applications in remote sensing. Unlike previous reviews, this work emphasises underexplored techniques such as few-shot learning and Mamba networks, offering practical recommendations for handling limited training data and improving model generalisability. The study also identifies promising research directions, such as test-time training and hybrid loss functions, to address challenges in wetland classification. This survey aims to guide researchers and practitioners in advancing state-of-the-art wetland classification through ML and DL technologies.

Citations: 0
A computer graphics-based model to generate dynamic 3D animations for corresponding Bangla sign language gestures using HamNoSys to SiGML conversion
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-25 · DOI: 10.1007/s10462-025-11370-z
Ahsanul Karim, Muhammad Aminur Rahaman, Md. Ariful Islam, Anichur Rahman, Tanoy Debnath, Utpol Kanti Das

Effective communication is essential for human connection, and it is especially crucial for those who rely on sign language because of speech or hearing difficulties. Communication between deaf and hearing people is restricted by the inability of current technology to automatically create flexible Bangla Sign Language (BdSL) animations from Bangla text or voice. This work introduces a computer graphics-based system that takes voice or text input in Bangla and uses it to create BdSL animations. The system uses a HamNoSys to Signing Gesture Markup Language (SiGML) conversion to dynamically translate input into gestures. With 94 classes in the dataset covering Bangla numerals, letters, and word gestures, the model can generate any word or phrase in Bangla. Additionally, the system creates gestures for inputs that are not in the dataset by spelling out the letters. The system converts Bangla text into three-dimensional animated BdSL movements by parsing it according to linguistic principles. In performance evaluations, each input has a processing cost of 79.57 ms, and the average accuracy for text and voice input is 97.50% and 94.75%, respectively. The method ensures fluency and naturalness by taking into account crucial elements of sign language such as hand shape, palm orientation, and non-manual markers. By reducing communication barriers between signing and non-signing populations, this study significantly advances accessibility. The implementation of the proposed system and the SiGML dataset are available at: https://gitlab.com/devarifkhan/bdsl-3d-animation
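SiGML is an XML encoding of HamNoSys symbols that avatar players animate. A minimal sketch of the kind of document the conversion step produces is below; the element names (`sigml`, `hns_sign`, `hamnosys_manual`) and the sample symbol names follow commonly published SiGML examples, not this paper's code, so verify them against the schema of the animation player in use:

```python
import xml.etree.ElementTree as ET

def hamnosys_to_sigml(gloss, hamnosys_symbols):
    """Build a minimal SiGML document for one sign: a <sigml> root, one
    <hns_sign> with its gloss, and one empty child element inside
    <hamnosys_manual> per HamNoSys symbol name."""
    root = ET.Element("sigml")
    sign = ET.SubElement(root, "hns_sign", {"gloss": gloss})
    manual = ET.SubElement(sign, "hamnosys_manual")
    for sym in hamnosys_symbols:
        ET.SubElement(manual, sym)  # one empty element per HamNoSys symbol
    return ET.tostring(root, encoding="unicode")

# Hypothetical sign: flat hand, fingers extended up, palm up.
print(hamnosys_to_sigml("HELLO", ["hamflathand", "hamextfingeru", "hampalmu"]))
```

A player such as an avatar renderer would then consume this XML and drive the 3D animation from the symbol sequence.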

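The out-of-vocabulary handling this entry describes — known words map to stored HamNoSys/SiGML gestures, unknown words fall back to letter-by-letter fingerspelling — amounts to a dictionary lookup with a fallback. A minimal sketch under assumptions: the function name, gesture identifiers, and dictionary entries are invented for illustration and are not the paper's actual dataset:

```python
# Hypothetical sketch of gesture lookup with a fingerspelling fallback.
# The string identifiers below stand in for pre-built SiGML animations.

GESTURE_DB = {
    "আমি": "sign_ami",   # whole-word sign ("I")
    "আ": "letter_a",     # single-letter signs used for fingerspelling
    "ম": "letter_m",
}

def words_to_gestures(words):
    """Return gesture identifiers for a list of Bangla words.

    Known words use their stored sign; unknown words fall back to
    fingerspelling, skipping characters with no letter entry."""
    gestures = []
    for word in words:
        if word in GESTURE_DB:
            gestures.append(GESTURE_DB[word])
        else:
            gestures.extend(GESTURE_DB[ch] for ch in word if ch in GESTURE_DB)
    return gestures
```

In the real system each identifier would be rendered as a 3D animation through an avatar engine; here they are plain strings.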
Citations: 0
A comprehensive review of current robot-based pollinators for crop pollination
IF 13.9 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-25 DOI: 10.1007/s10462-025-11409-1
Rajmeet Singh, Lakmal Seneviratne, Irfan Hussain

The decline of bee- and wind-based pollination in greenhouses, caused by controlled environments and limited access, has raised the importance of finding alternative pollination methods. Robot-based pollination systems have emerged as a promising solution, ensuring adequate crop yield even in challenging pollination scenarios. This paper presents a comprehensive review of the current robot-based pollinators employed in agriculture. The review categorizes pollinator technologies into major categories such as air-jet, water-jet, linear actuator, ultrasonic wave, and air-liquid spray, each suited to specific crop pollination requirements. However, these technologies are often tailored to particular crops, limiting their versatility. Advances in science and technology have led to integrated automated pollination technology, encompassing information technology, automatic perception, detection, control, and operation. This integration not only eases the labor shortage but also fosters the ongoing progress of modern agriculture by refining technology, enhancing automation, and promoting intelligence in agricultural practices. Finally, the challenges encountered in the design of robot-based pollinators are addressed, and a forward-looking perspective is taken towards future developments, aiming to contribute to the sustainable advancement of this technology.

Citations: 0
On the use of transfer learning in nature-inspired algorithms: a systematic review
IF 13.9 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-25 DOI: 10.1007/s10462-025-11404-6
Rita Xavier, Leandro Nunes de Castro

Transfer Learning (TL) has gained significant traction in machine learning, especially in deep learning contexts. However, its integration with Nature-Inspired Algorithms (NIAs) remains fragmented, with limited understanding of strategies, challenges, and outcomes. This paper presents the first systematic review focused exclusively on the use of TL in NIAs, excluding deep learning approaches. Major challenges include dealing with domain/task similarity, avoiding negative transfer, selecting what and when to transfer, and adapting TL mechanisms to population-based search paradigms. To address these issues, we conducted a structured analysis of 47 primary studies, categorizing them by TL strategies, learning paradigms, and algorithmic goals. Our findings reveal recurring patterns, highlight open research gaps, and propose future directions for developing robust TL-based NIAs. This review provides a foundation for researchers interested in designing adaptive, efficient, and knowledge-guided metaheuristics for complex optimization tasks.
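One recurring transfer strategy in this literature — reusing solutions evolved on a source task to seed the initial population of a related target task — can be illustrated with a toy evolutionary loop. This is a sketch under assumptions: the elitist algorithm, both fitness functions, and all parameters are invented for illustration and are not drawn from any surveyed study.

```python
import random

def optimize(fitness, seed_pop=None, pop_size=20, dim=2, iters=100):
    """Minimize `fitness` with a toy elitist evolutionary loop.

    If seed_pop is given, transferred source-task solutions replace part
    of the random initial population (a simple instance transfer)."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    if seed_pop:
        k = min(pop_size // 2, len(seed_pop))
        pop[:k] = [list(x) for x in seed_pop[:k]]
    for _ in range(iters):
        pop.sort(key=fitness)                 # ascending: best first
        elite = pop[: pop_size // 2]
        # Refill the population by mutating randomly chosen elites.
        pop = elite + [
            [g + random.gauss(0, 0.1) for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=fitness)

# Source task: sphere function; target task: a shifted variant of it.
source_fitness = lambda x: sum(g * g for g in x)
target_fitness = lambda x: sum((g - 0.5) ** 2 for g in x)

source_best = optimize(source_fitness)
target_best = optimize(target_fitness, seed_pop=[source_best])
```

Negative transfer — one of the review's central risks — shows up in this sketch when the source and target optima lie far apart, so the transferred seeds pull the search in the wrong direction.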

Citations: 0
Exploring unanswerability in machine reading comprehension: approaches, benchmarks, and open challenges
IF 13.9 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-25 DOI: 10.1007/s10462-025-11421-5
Hadiseh Moradisani, Fattane Zarrinkalam, Zeinab Noorian, Faezeh Ensan

The challenge of unanswerable questions in Machine Reading Comprehension (MRC) has drawn considerable attention, as current MRC systems are typically designed under the assumption that every question has a valid answer within the provided context. However, these systems often encounter real-world situations where no valid answer is available. This paper provides a comprehensive review of existing methods for addressing unanswerable questions in MRC systems, categorizing them into model-agnostic and model-specific approaches. It explores key strategies, examines relevant datasets, and evaluates commonly used metrics. This work aims to provide a comprehensive understanding of current techniques and identify critical gaps in the field, offering insights and key challenges to direct future research toward developing more robust MRC systems capable of handling unanswerable questions.
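Among model-agnostic approaches, a widely used decision rule — popularized by SQuAD 2.0-style evaluation — compares the best candidate span's score against a no-answer score and abstains when the margin falls below a threshold. A minimal sketch; the function name and the toy scores are invented:

```python
def predict_answer(span_scores, null_score, tau=0.0):
    """Return the best span if it beats the no-answer score by more
    than the threshold tau; otherwise abstain (None = unanswerable).

    span_scores maps candidate answer spans to model scores."""
    best_span, best_score = max(span_scores.items(), key=lambda kv: kv[1])
    if best_score - null_score > tau:
        return best_span
    return None

# Toy scores for two questions (values invented):
predict_answer({"Paris": 3.2, "Lyon": 1.1}, null_score=0.5)   # -> "Paris"
predict_answer({"Paris": 0.2}, null_score=1.5)                # -> None
```

Tuning tau on a development set trades answer recall against false answers on genuinely unanswerable questions.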

Citations: 0
Journal: Artificial Intelligence Review