
Engineering Applications of Artificial Intelligence — Latest Articles

Data-driven estimation of the amount of under frequency load shedding in small power systems
IF 7.5 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-13 DOI: 10.1016/j.engappai.2024.109617
Mohammad Rajabdorri , Matthias C.M. Troffaes , Behzad Kazemtabrizi , Miad Sarvarizadeh , Lukas Sigrist , Enrique Lobato
This paper presents a data-driven methodology for estimating under frequency load shedding (UFLS) in small power systems. UFLS plays a vital role in maintaining system stability by shedding load when the frequency drops below a specified threshold following a loss of generation. Using a dynamic system frequency response (SFR) model, we generate different values of UFLS (i.e., labels) predicated on a set of carefully selected operating conditions (i.e., features). Machine learning (ML) algorithms are then applied to learn the relationship between the chosen features and the UFLS labels. A novel regression tree and the Tobit model are suggested for this purpose, and we show how the resulting non-linear model can be directly incorporated into a mixed integer linear programming (MILP) problem. The trained model can be used to estimate UFLS in security-constrained operational planning problems, improving frequency response, optimizing reserve allocation, and reducing costs. The methodology is applied to the La Palma island power system, demonstrating its accuracy and effectiveness. The results confirm that the amount of UFLS can be estimated with a mean absolute error (MAE) as small as 0.179 MW for the whole process, with a model that is representable as a MILP for use in scheduling problems such as unit commitment.
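What makes the approach MILP-compatible is that a trained regression tree is piecewise constant: each leaf can become a binary selection variable in the scheduling problem, with the split conditions encoded as big-M constraints. The sketch below shows only the tree-evaluation side, with a toy tree and invented feature names (`largest_unit_mw` and `reserve_mw` are hypothetical, not taken from the paper):

```python
# Hypothetical sketch: a tiny regression tree mapping operating conditions
# (features) to an UFLS estimate (label). Its piecewise-constant output is
# what makes it MILP-representable: each leaf k would get a binary variable
# z_k with sum(z_k) == 1 in the unit-commitment problem; here we only
# evaluate the tree for a single operating point.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None   # split feature (None marks a leaf)
    threshold: float = 0.0          # split threshold
    left: Optional["Node"] = None   # taken when x[feature] <= threshold
    right: Optional["Node"] = None
    value: float = 0.0              # UFLS estimate in MW at a leaf

def estimate_ufls(tree: Node, x: dict) -> float:
    """Traverse the tree for one operating condition x."""
    node = tree
    while node.feature is not None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.value

# Toy tree: largest online unit and spinning reserve as plausible features.
tree = Node(
    feature="largest_unit_mw", threshold=10.0,
    left=Node(value=0.1),
    right=Node(
        feature="reserve_mw", threshold=5.0,
        left=Node(value=1.8),
        right=Node(value=0.6),
    ),
)

print(estimate_ufls(tree, {"largest_unit_mw": 12.0, "reserve_mw": 3.0}))  # 1.8
```

Losing a large unit while carrying little reserve yields the largest shed estimate, which matches the physical intuition behind the chosen features.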
Citations: 0
Classification of similar electronic components by transfer learning methods
IF 7.5 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109658
Göksu Taş
Proper selection of electronic components and automated component identification are critical for fast production processes in industry. In addition, for Internet of Things (IoT) systems, accurate and fast selection of similar electronic components is an important problem. In this study, a transfer learning-based method is proposed to classify electronic components that are difficult to distinguish due to their similarity. Eight different convolutional neural network (CNN) models and a novel model developed in this study were used to classify electronic components. In addition to the transfer learning methods, a parallel CNN method was developed and used to solve the classification problem, with its hyperparameters determined by trial and error. The effect of batch size and learning rate hyperparameter variations on the prediction success of the parallel CNN models is analyzed, as is the effect of two different batch sizes and learning rate values on the transfer learning models. Confusion matrices, accuracy, and loss were used as evaluation metrics, and the number of parameters and runtime of the models were also assessed. All experiments were averaged to obtain a general picture of performance. According to the accuracy metric, the Densely Connected Convolutional Networks (DenseNet-121) model was the most successful, at 98.2925%.
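The trial-and-error hyperparameter selection described above amounts to a small grid search over batch size and learning rate. A minimal sketch, with a deterministic dummy standing in for the actual fine-tune-and-validate step (no real CNN training happens here; a real run would load a pretrained backbone such as DenseNet-121 and replace its classifier head):

```python
# Hypothetical sketch of the trial-and-error hyperparameter sweep: evaluate
# every (batch size, learning rate) pair and keep the best-scoring one.
from itertools import product

def train_and_eval(batch_size: int, lr: float) -> float:
    # Placeholder for fine-tuning + validation accuracy. This dummy simply
    # prefers small batches and a learning rate near 1e-3 so the sweep has
    # a deterministic winner.
    return 0.98 - 0.0001 * batch_size - abs(lr - 1e-3) * 10

def grid_search(batch_sizes, lrs):
    # max over the Cartesian product of the two hyperparameter lists
    return max(product(batch_sizes, lrs),
               key=lambda cfg: train_and_eval(*cfg))

print(grid_search([16, 32], [1e-3, 1e-2]))  # -> (16, 0.001)
```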
Citations: 0
Weather-aware energy management for unmanned aerial vehicles: a machine learning application with global data integration
IF 7.5 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109596
Abhishek G. Somanagoudar, Walter Mérida
This study introduces a machine learning (ML) framework to predict unmanned aerial vehicle (UAV) energy requirements under diverse environmental conditions. The framework correlates UAV flight patterns with publicly accessible weather data to yield an energy management tool applicable to a wide range of UAV configurations. The model employs the Cross-Industry Standard Process for Data Mining and advanced feature engineering, offering an in-depth analysis of meteorological factors and UAV energy demands. The study assesses several multi-regression linear and ML models, whereby the ensemble models gradient boosting (GB) and eXtreme gradient boosting demonstrate superior performance and accuracy. Specifically, the GB model achieved a test mean absolute error (MAE) of 0.0395 volts (V) for voltage, 0.808 amperes (A) for current, and 9.758 milliampere-hours (mAh) for discharge, with prediction accuracy of over 99.9% for voltage and discharge and 97% for current, derived from the coefficient of determination (R²). A novel integration of real-world UAV logs and weather data underpins the development of a weather-aware ML prediction model for UAV energy consumption. Our framework is capable of concurrently predicting three components of energy and power with almost uniform accuracy, a feature not found in contemporary models. Empirical test flights show a discrepancy of only 0.005 watt-hours (Wh) between total predicted and actual energy consumption. This work enhances both efficiency and safety in UAV operations. The resulting energy-predictive flight planning tool sets a new benchmark for artificial intelligence (AI) applications in intelligent automation for UAVs.
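Before any model can be fit, the data-integration step must align each UAV log record with a weather observation. A hedged sketch of one plausible way to do this, nearest-timestamp matching (the paper's exact join logic is not specified here, and the field layout is invented for illustration):

```python
# Hypothetical sketch of the data-integration step: attach to each UAV log
# record the weather observation closest in time, before feeding features to
# an ML regressor (the study uses gradient-boosting models downstream).
from bisect import bisect_left

def attach_weather(flight_rows, weather_rows):
    """flight_rows: [(t, features...)]; weather_rows: [(t, wind, temp)], sorted by t."""
    times = [w[0] for w in weather_rows]
    merged = []
    for row in flight_rows:
        i = bisect_left(times, row[0])
        # pick whichever neighbouring observation is closer in time
        if i == 0:
            j = 0
        elif i == len(times):
            j = len(times) - 1
        else:
            j = i if times[i] - row[0] < row[0] - times[i - 1] else i - 1
        merged.append(row + weather_rows[j][1:])
    return merged

flights = [(5, 0.9), (22, 0.7)]              # (timestamp, throttle) — illustrative
weather = [(0, 3.2, 18.0), (20, 5.1, 17.5)]  # (timestamp, wind m/s, temp °C)
print(attach_weather(flights, weather))
# [(5, 0.9, 3.2, 18.0), (22, 0.7, 5.1, 17.5)]
```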
Citations: 0
Three-branch neural network for No-Reference Quality assessment of Pan-Sharpened Images
IF 7.5 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109594
Igor Stępień, Mariusz Oszust
Pan-Sharpening (PS) techniques aim to enhance the spatial resolution of low-resolution multispectral images by leveraging data from high-resolution panchromatic images. Their comparison typically relies on the quality assessment of the resulting Full-Resolution (FR) pan-sharpened images. However, in the absence of a reference image, a dedicated No-Reference (NR) method must be employed. Therefore, this paper introduces a novel approach called the Three-Branch Neural Network for No-Reference Quality Assessment of Pan-Sharpened Images (TBN-PSI). The network consists of three subnetworks designed for perceptual processing of image channels, featuring shared extraction of low-level features and high-level semantics. Extensive experimental evaluation on six datasets containing diverse satellite images spanning urban areas, green vegetation, and water scenes demonstrates the superiority of the approach over state-of-the-art NR PS image quality assessment methods. Specifically, TBN-PSI outperforms the compared methods by 4% to 9% in Spearman's Rank-Order Correlation Coefficient (SRCC), Pearson's Linear Correlation Coefficient (PLCC), and Kendall's Rank Correlation Coefficient (KRCC) between the obtained scores and those of three representative full-reference methods.
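For reference, the SRCC used in the comparison can be computed from rank differences alone. A minimal stdlib sketch of the classic formula (no tie correction, which a production implementation would need):

```python
# Spearman's Rank-Order Correlation Coefficient via the rank-difference
# formula: SRCC = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), assuming no ties.
def rankdata(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def srcc(a, b):
    n = len(a)
    d2 = sum((ra - rb) ** 2 for ra, rb in zip(rankdata(a), rankdata(b)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone agreement between NR scores and full-reference scores:
print(srcc([1.0, 2.0, 3.0, 4.0], [10, 20, 30, 40]))  # 1.0
```

Only rank order matters, which is why SRCC is a standard choice when an NR metric's scale differs from the full-reference scores it is validated against.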
Citations: 0
Practical framework for generative on-branch soybean pod detection in occlusion and class imbalance scenes
IF 7.5 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109613
Kanglei Wu , Tan Wang , Yuan Rao , Xiu Jin , Xiaobo Wang , Jiajia Li , Zhe Zhang , Zhaohui Jiang , Xing Shao , Wu Zhang
The number of pods per plant can serve as an effective indicator of soybean yield, and accurately determining it is essential for evaluating high-quality soybean varieties. However, traditional manual pod counting is time-consuming and laborious. Although deep learning-based pod detection methods have attracted much attention, considerable challenges remain for the effective detection of pods in occlusion and class imbalance scenes. As a remedy, this study proposes a framework that leverages synthetic pod image generation and multi-stage transfer learning to produce a detection model for on-branch soybean pods in complex scenes. The framework employs a novel pipeline: it first separates individual pods from non-occluded pod images in an off-branch pod training set, then uses these to generate synthetic datasets with diverse pod features. Next, a multi-stage transfer learning method trains an on-branch pod detection model, leveraging both real and synthetic datasets to enhance pod feature extraction in complex scenes. The detection model of the proposed framework, YOLOv7-tiny (tiny version of You Only Look Once v7), integrates an angle prediction module based on Circular Smooth Label for rotated object detection, Coordinate Attention modules for enhanced feature extraction, and a Minimum Point Distance Intersection over Union loss for precise bounding box perception. Experimental results show that the proposed framework achieves an 81.1% mAP (mean Average Precision) for detecting on-branch pods in complex scenes, surpassing the best-performing baseline by 23.7%. The proposed method presents an effective solution for complex on-branch pod detection and has great potential to serve as a robust pipeline for similar agricultural tasks.
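The Circular Smooth Label idea mentioned above turns angle regression into classification over discretized bins, smoothed with a circular window so that near-miss angles are not penalized as hard errors. A hedged sketch (the bin count, Gaussian window, and radius are illustrative choices, not the paper's exact settings):

```python
# Hypothetical sketch of Circular Smooth Label (CSL) encoding for rotated
# object detection: the rotation angle becomes a soft classification target,
# and the smoothing window wraps around circularly (179° is adjacent to 0°).
import math

def csl_encode(angle_deg: float, bins: int = 180, radius: int = 4):
    centre = int(round(angle_deg)) % bins
    label = [0.0] * bins
    for offset in range(-radius, radius + 1):
        # Gaussian window, wrapped circularly around the centre bin
        label[(centre + offset) % bins] = math.exp(
            -(offset ** 2) / (2 * (radius / 2) ** 2))
    return label

lab = csl_encode(179.0)
print(lab.index(max(lab)))  # 179: the peak sits on the true angle bin
print(lab[0] > 0)           # True: the window wraps past 179° back to 0°
```

The wrap-around is the point of the construction: a plain one-hot or regression target would treat 179° and 0° as maximally different, even though the boxes are nearly identical.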
Citations: 0
Cross-modal Prompt-Driven Network for low-resource vision-to-language generation
IF 7.5 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109591
Yuena Jiang, Yanxun Chang
Image captioning is a classic vision-to-language generation task that aims to generate a descriptive sentence for an input image, involving both image understanding and natural language generation. Conventional methods require a large-scale labeled training dataset containing a large volume of image-caption pairs. However, in several application scenarios, e.g., medicine and non-English languages, such quantities of image-caption pairs are usually not available. In this work, we propose the Cross-modal Prompt-Driven Network (XProDNet) to perform low-resource image captioning, generating accurate and comprehensive captions with extremely limited training data. We conduct experiments on (1) six benchmark datasets; (2) three application scenarios, i.e., conventional image captioning, medical image captioning, and non-English image captioning; (3) four target languages, i.e., English, Chinese, German, and French; and (4) two experimental settings, i.e., fully-supervised learning and few-shot learning. The extensive experiments prove the effectiveness of our approach, which not only generates high-quality and comprehensive image captions but also significantly surpasses previous state-of-the-art methods under both the few-shot and fully-supervised learning settings. The improved results suggest that our method has great potential for improving image captioning in real-world applications.
Citations: 0
One test to predict them all: Rheological characterization of complex fluids via artificial neural network
IF 7.5 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109598
Ases Akas Mishra , Viney Ghai , Valentina Matovic , Dragana Arlov , Roland Kádár
The rheological behavior of complex fluids, including thixotropy, viscoelasticity, and viscoplasticity, poses significant challenges in both measurement and prediction due to the transient nature of their stress responses. This study introduces an artificial neural network (ANN) designed to digitally characterize the rheology of complex fluids with unprecedented accuracy. By employing a data-driven approach, the ANN is trained using transient rheological tests with step inputs of shear rate. Once trained, the network adeptly captures the intricate dependencies of rheological properties on time and shear, enabling rapid and accurate predictions of various rheological tests. In contrast, traditional phenomenological structural kinetic constitutive models often fail to accurately describe the evolution of nonlinear rheological properties, particularly as material complexity increases. The ANN demonstrates high flexibility, reliability and robustness by accurately predicting transient rheology of varied materials with different shear histories. Our findings illustrate that ANNs can not only complement and validate traditional rheological characterization methods but also potentially replace them, thereby paving the way for more efficient material development and testing.
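The structural-kinetic models the ANN is benchmarked against describe a structure parameter that rebuilds at rest and breaks down under shear. A minimal sketch of generating one step-shear-rate training response from such a model via explicit Euler integration (all coefficients are illustrative, not fitted values from the study):

```python
# Hypothetical structural-kinetics generator for step-shear-rate tests:
# d(lambda)/dt = k_build * (1 - lambda) - k_break * gamma_dot * lambda,
# stress = (eta_inf + d_eta * lambda) * gamma_dot.
# The transient stress decay after the step is the kind of signal the ANN
# is trained on.
def step_response(gamma_dot: float, steps: int = 1000, dt: float = 0.01,
                  k_build: float = 0.5, k_break: float = 0.3,
                  eta_inf: float = 1.0, d_eta: float = 4.0):
    lam = 1.0            # fully structured at rest before the step
    stresses = []
    for _ in range(steps):
        dlam = k_build * (1 - lam) - k_break * gamma_dot * lam
        lam += dt * dlam                      # explicit Euler step
        stresses.append((eta_inf + d_eta * lam) * gamma_dot)
    return stresses

resp = step_response(2.0)
print(resp[0] > resp[-1])  # True: stress decays as structure breaks down
```

Sweeping `gamma_dot` over many step levels yields the (input, transient-stress) pairs a data-driven model can be trained on, which mirrors the study's use of step-input tests as training data.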
引用次数: 0
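The surrogate idea in the abstract above — learn a material's transient response from step-input tests, then predict other conditions — can be sketched with a tiny gradient-descent fit on synthetic step-shear data. The target function, basis features, and hyperparameters below are all illustrative assumptions for the sketch, not the paper's ANN or experimental protocol.

```python
import math
import random

def target_viscosity(t, gamma_dot):
    """Synthetic transient viscosity after a step in shear rate gamma_dot:
    exponential structural breakdown toward a shear-thinning plateau.
    An illustrative stand-in for measured step-shear data."""
    steady = 1.0 / (1.0 + 0.5 * gamma_dot)
    return steady + (2.0 - steady) * math.exp(-t / 3.0)

def features(t, gamma_dot):
    """Hand-picked basis functions; the synthetic target above is exactly
    linear in them, so a linear surrogate can fit it."""
    x1 = math.exp(-t / 3.0)
    x2 = 1.0 / (1.0 + 0.5 * gamma_dot)
    return [x1, x2, x1 * x2]

def train_surrogate(steps=50000, lr=0.05, seed=0):
    """Fit the surrogate by plain stochastic gradient descent -- a
    minimal stand-in for training an ANN on step-input tests."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(steps):
        t, g = rng.uniform(0.0, 10.0), rng.uniform(0.1, 5.0)
        x = features(t, g)
        err = sum(wi * xi for wi, xi in zip(w, x)) + b - target_viscosity(t, g)
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err
    return w, b

def predict(w, b, t, gamma_dot):
    """Evaluate the trained surrogate at a new (time, shear rate) point."""
    return sum(wi * xi for wi, xi in zip(w, features(t, gamma_dot))) + b

w, b = train_surrogate()
```

Once fitted on step responses alone, the surrogate can be queried at arbitrary times and shear rates, which is the "one test to predict them all" premise in miniature.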
Designing deep neural networks for driver intention recognition
IF 7.5 Tier 2 Computer Science Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109574
Koen Vellenga , H. Joe Steinhauer , Alexander Karlsson , Göran Falkman , Asli Rhodin , Ashok Koppisetty
Driver intention recognition (DIR) studies increasingly rely on deep neural networks. Deep neural networks have achieved top performance for many different tasks. However, apart from image classification and semantic segmentation for mobile phones, it is not common practice for components of advanced driver assistance systems to explicitly analyze the complexity and performance of the network’s architecture. Therefore, this paper applies neural architecture search to investigate the effects of the deep neural network architecture on a real-world safety-critical application with limited computational capabilities. We explore a pre-defined search space for three deep neural network layer types capable of handling sequential data (a long short-term memory, a temporal convolution, and a time-series transformer layer), and the influence of different data fusion strategies on driver intention recognition performance. A set of eight search strategies is evaluated on two driver intention recognition datasets. For the two datasets, we observed that no search strategy clearly samples better deep neural network architectures. However, performing an architecture search improves model performance compared to the original manually designed networks. Furthermore, we observe no relation between increased model complexity and better driver intention recognition performance. The results indicate that multiple architectures can yield similar performance, regardless of the deep neural network layer type or fusion strategy. However, the optimal complexity, layer type, and fusion strategy remain unknown upfront.
{"title":"Designing deep neural networks for driver intention recognition","authors":"Koen Vellenga ,&nbsp;H. Joe Steinhauer ,&nbsp;Alexander Karlsson ,&nbsp;Göran Falkman ,&nbsp;Asli Rhodin ,&nbsp;Ashok Koppisetty","doi":"10.1016/j.engappai.2024.109574","DOIUrl":"10.1016/j.engappai.2024.109574","url":null,"abstract":"<div><div>Driver intention recognition (DIR) studies increasingly rely on deep neural networks. Deep neural networks have achieved top performance for many different tasks. However, apart from image classifications and semantic segmentation for mobile phones, it is not a common practice for components of advanced driver assistance systems to explicitly analyze the complexity and performance of the network’s architecture. Therefore, this paper applies neural architecture search to investigate the effects of the deep neural network architecture on a real-world safety critical application with limited computational capabilities. We explore a pre-defined search space for three deep neural network layer types that are capable to handle sequential data (a long-short term memory, temporal convolution, and a time-series transformer layer), and the influence of different data fusion strategies on the driver intention recognition performance. A set of eight search strategies are evaluated for two driver intention recognition datasets. For the two datasets, we observed that there is no search strategy clearly sampling better deep neural network architectures. However, performing an architecture search improves the model performance compared to the original manually designed networks. Furthermore, we observe no relation between increased model complexity and better driver intention recognition performance. The result indicate that multiple architectures can yield similar performance, regardless of the deep neural network layer type or fusion strategy. 
However, the optimal complexity, layer type and fusion remain unknown upfront.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109574"},"PeriodicalIF":7.5,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142659142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
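The architecture search summarized in the abstract above can be illustrated with a plain random-search loop over a small space of layer types and fusion strategies. The search-space contents and the stub evaluator below are assumptions made for the sketch, not the paper's actual configuration or one of its eight search strategies.

```python
import random

# Hypothetical search space mirroring the abstract's three sequential
# layer types and data fusion strategies; contents are illustrative.
SEARCH_SPACE = {
    "layer_type": ["lstm", "temporal_conv", "ts_transformer"],
    "hidden_size": [32, 64, 128],
    "fusion": ["early", "late"],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stub evaluator standing in for training a candidate on a driver
    intention recognition dataset. A real search would train the model
    and return validation accuracy; a deterministic pseudo-score keeps
    the loop runnable here."""
    return random.Random(str(sorted(arch.items()))).random()

def random_search(n_trials=20, seed=0):
    """Plain random search: sample candidates, score them, keep the best."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search()
```

Swapping the stub evaluator for actual training is what makes such a search expensive — which is why the paper's finding that many sampled architectures perform similarly is practically useful.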
Korean football in-game conversation state tracking dataset for dialogue and turn level evaluation
IF 7.5 Tier 2 Computer Science Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-12 DOI: 10.1016/j.engappai.2024.109572
Sangmin Song, Juhyoung Park, Juhwan Choi, Junho Lee, Kyohoon Jin, YoungBin Kim
Recent research in dialogue state tracking has made significant progress in tracking user goals through dialogue-level and turn-level approaches, but existing research has primarily focused on predicting dialogue-level belief states. In this study, we present the KICK: Korean football In-game Conversation state tracKing dataset, which introduces a conversation-based approach. This approach leverages the roles of casters and commentators within the self-contained context of sports broadcasting to examine how utterances impact the belief state at both the dialogue level and the turn level. To this end, we propose a task that aims to track the state at a specific turn and to understand conversations across the entire game. The proposed dataset comprises 228 games and 2463 events over one season, with a larger number of tokens per dialogue and per turn, making it more challenging than existing datasets. Experiments revealed that the roles and interactions of casters and commentators are important for improving zero-shot state tracking performance. By better understanding role-based utterances, we identify distinct approaches to the overall game process and to events at specific turns.
{"title":"Korean football in-game conversation state tracking dataset for dialogue and turn level evaluation","authors":"Sangmin Song,&nbsp;Juhyoung Park,&nbsp;Juhwan Choi,&nbsp;Junho Lee,&nbsp;Kyohoon Jin,&nbsp;YoungBin Kim","doi":"10.1016/j.engappai.2024.109572","DOIUrl":"10.1016/j.engappai.2024.109572","url":null,"abstract":"<div><div>Recent research in dialogue state tracking has made significant progress in tracking user goals through dialogue-level and turn-level approaches, but existing research primarily focused on predicting dialogue-level belief states. In this study, we present the <strong>KICK</strong>: <strong>K</strong>orean football <strong>I</strong>n-game <strong>C</strong>onversation state trac<strong>K</strong>ing dataset, which introduces a conversation-based approach. This approach leverages the roles of casters and commentators within the self-contained context of sports broadcasting to examine how utterances impact the belief state at both the dialogue-level and turn-level. Towards this end, we propose a task that aims to track the states of a specific time turn and understand conversations during the entire game. The proposed dataset comprises 228 games and 2463 events over one season, with a larger number of tokens per dialogue and turn, making it more challenging than existing datasets. Experiments revealed that the roles and interactions of casters and commentators are important for improving the zero-shot state tracking performance. 
By better understanding role-based utterances, we identify distinct approaches to the overall game process and events at specific turns.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109572"},"PeriodicalIF":7.5,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142659355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
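The dialogue-level versus turn-level distinction in the abstract above can be made concrete with a minimal belief-state tracker: the dialogue-level state accumulates slot values across all turns, while a turn-level snapshot records the state after each utterance. The slot names and turns below are hypothetical, not the KICK dataset's actual schema.

```python
def update_belief_state(state, turn_slots):
    """Turn-level update: merge the slot-value pairs surfaced in one
    utterance into the running dialogue-level belief state."""
    new_state = dict(state)        # keep slots from earlier turns
    new_state.update(turn_slots)   # later mentions overwrite older values
    return new_state

# Illustrative caster/commentator turns for one in-game segment.
turns = [
    {"speaker": "caster", "event": "goal", "team": "home"},
    {"speaker": "commentator", "minute": 17},
    {"speaker": "caster", "event": "substitution", "team": "away"},
]

dialogue_state = {}
turn_level_states = []             # snapshot after every turn
for t in turns:
    dialogue_state = update_belief_state(dialogue_state, t)
    turn_level_states.append(dict(dialogue_state))
```

Evaluating only the final `dialogue_state` corresponds to dialogue-level evaluation; scoring every entry of `turn_level_states` corresponds to the turn-level evaluation the dataset enables.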
A flow rate estimation method for gas–liquid two-phase flow based on filter-enhanced convolutional neural network
IF 7.5 Tier 2 Computer Science Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2024-11-11 DOI: 10.1016/j.engappai.2024.109593
Yuxiao Jiang , Yinyan Liu , Lihui Peng , Yi Li
Accurate estimation of flow rate in gas–liquid two-phase flow is crucial for various industrial processes, yet it remains a challenging problem. Previous deep learning-based methods focused on a few manually set operating points with single-task learning, and the data were not denoised. In this study, a flow rate estimation method based on a filter-enhanced convolutional neural network (FECNN) is proposed for gas–liquid two-phase flow. The method takes multimodal data from a Venturi tube and an electrical capacitance tomography (ECT) sensor as input, using a multilayer perceptron (MLP) to fuse the data. Subsequently, a learnable filter module attenuates noise adaptively, followed by a multiscale convolutional neural network (MSCNN) that extracts flow rate features at different scales. Finally, the method enables estimating each single-phase flow rate simultaneously through multi-task learning (MTL). Multiple comparative experiments demonstrate the adaptive noise-attenuation capability of the learnable filter module and the ability of the proposed MSCNN to capture multiscale flow rate features. Additionally, a qualitative comparison with recent flow rate estimation methods is provided. Overall, this study demonstrates the effectiveness and superiority of the proposed FECNN in flow rate estimation.
{"title":"A flow rate estimation method for gas–liquid two-phase flow based on filter-enhanced convolutional neural network","authors":"Yuxiao Jiang ,&nbsp;Yinyan Liu ,&nbsp;Lihui Peng ,&nbsp;Yi Li","doi":"10.1016/j.engappai.2024.109593","DOIUrl":"10.1016/j.engappai.2024.109593","url":null,"abstract":"<div><div>Accurate estimation of flow rate in gas–liquid two-phase flow is crucial for various industrial processes. How to accurately estimate flow rate remains a challenging problem. Previously, deep learning-based methods focused on a few human-set points with single task learning. In addition, the data were not denoised. In this study, a flow rate estimation method based on a filter-enhanced convolutional neural network (FECNN) is proposed for gas–liquid two-phase flow. The method leverages multimodal data from a Venturi tube and an electrical capacitance tomography (ECT) sensor as input, utilizing multilayer perceptron (MLP) to fuse data. Subsequently, a learnable filter module is employed to attenuate noise adaptively, followed by multiscale convolutional neural network (MSCNN) extraction of flow rate features at different scales. Finally, the method enables estimate each single-phase flow rate simultaneously through multi-task learning (MTL). The adaptive noise attenuation capabilities of the learnable filter module are demonstrated, and the ability of the proposed MSCNN to capture multiscale flow rate features through multiple comparative experiments is shown. Additionally, a qualitative comparison with recent flow rate estimation methods is provided. 
Overall, this study demonstrates the effectiveness and superiority of the proposed FECNN in flow rate estimation.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109593"},"PeriodicalIF":7.5,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142659280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
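The denoise-then-extract pipeline in the abstract above can be sketched in miniature: a one-parameter low-pass kernel stands in for the learnable filter module, and per-scale mean responses stand in for the MSCNN branch. All kernels, scales, and the sample signal are illustrative assumptions, not the paper's architecture.

```python
import math

def conv1d(signal, kernel):
    """'Same'-padded 1-D convolution (cross-correlation) in pure Python."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

def smoothing_filter(alpha, size=5):
    """One-parameter low-pass kernel standing in for the learnable filter
    module: alpha controls how aggressively noise is attenuated, and in a
    trained network it would be fitted by backpropagation."""
    weights = [math.exp(-alpha * abs(i - size // 2)) for i in range(size)]
    total = sum(weights)
    return [w / total for w in weights]

def multiscale_features(signal, scales=(3, 5, 9)):
    """Loose analogue of the multiscale CNN branch: one mean-pooled
    response per temporal scale, using fixed averaging kernels."""
    feats = []
    for k in scales:
        kernel = [1.0 / k] * k  # illustrative fixed kernel per scale
        resp = conv1d(signal, kernel)
        feats.append(sum(resp) / len(resp))
    return feats

noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 0.7, 1.0]  # toy sensor trace
denoised = conv1d(noisy, smoothing_filter(alpha=0.8))
features = multiscale_features(denoised)
```

In the paper's multi-task setting, separate regression heads would then map such features to the gas-phase and liquid-phase flow rates simultaneously.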
Journal
Engineering Applications of Artificial Intelligence
Copyright © 2023 Book学术 All rights reserved.