
Latest publications in IEEE Transactions on AgriFood Electronics

Food Physical Contamination Detection Using AI-Enhanced Electrical Impedance Tomography
Pub Date : 2024-07-25 DOI: 10.1109/TAFE.2024.3415124
Basma Alsaid;Tracy Saroufil;Romaissa Berim;Sohaib Majzoub;Abir J. Hussain
Physical contamination of food is a prevalent issue within the food production industry. Contamination can occur at any stage of the food processing line. Many techniques are used in the literature for the detection of physical contamination in food. However, these techniques have some limitations when applied to fresh food products, particularly when samples are characterized by diverse shapes and sizes. In addition, some of these techniques fail to detect hidden contaminants. In this work, we propose a novel approach to detect hidden physical contamination in fresh food products, including plastic fragments, stone fragments, and other foreign food objects, such as different food types that might inadvertently contaminate the sample. Electrical impedance tomography (EIT) is utilized to capture the impedance image of the sample to be used for contamination detection. Four deep learning models are trained using the EIT images to perform binary classification to identify contaminated samples. Three of the models each detect one contaminant type on its own, while the fourth model detects any of the contaminants together. The trained models achieved promising results, with accuracies of 85%, 92.9%, and 85.7% when detecting plastic, stones, and other food types, respectively. The accuracy obtained when all contaminants were considered together was 78%. This performance shows the efficacy of the proposed approach over existing techniques in the field.
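The evaluation protocol described above — three per-contaminant binary models plus one combined "any contaminant" decision — can be sketched as follows. This is a hypothetical illustration with made-up labels and predictions, not the authors' data or code:

```python
import numpy as np

# Columns = [plastic, stone, other food]; 1 = contaminant present (synthetic).
y_true = np.array([
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
])
# Predictions from three independent binary models (made-up values).
y_pred = np.array([
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],   # missed "other food" contaminant
    [1, 1, 0],
    [0, 1, 0],   # false alarm on stones
])

per_class_acc = (y_true == y_pred).mean(axis=0)   # one accuracy per contaminant
any_true = y_true.any(axis=1)                     # is the sample contaminated at all?
any_pred = y_pred.any(axis=1)
combined_acc = (any_true == any_pred).mean()      # "all contaminants put together"

print(per_class_acc, combined_acc)
```

The combined accuracy can differ noticeably from the per-contaminant ones, which matches the gap between the 85–92.9% per-class figures and the 78% combined figure reported above.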
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 518–526, 2024. Citations: 0
A Model for a Dense LoRaWAN Farm-Area Network in the Agribusiness
Pub Date : 2024-07-16 DOI: 10.1109/TAFE.2024.3422843
Alfredo Arnaud;Matías Miguez;María Eugenia Araújo;Ariel Dagnino;Joel Gak;Aarón Jimenz;José Job Flores;Nicolas Calarco;Luis Arturo Soriano
In this work, modeling, simulation, and experimental measurements of a LoRaWAN network aimed at implementing a dense farm-area network (FAN) in the agrifood industry are presented. First, the network is modeled for a farm of the future, with as many sensors as would be useful, for the four main productive chains in Uruguay as a study case: livestock, timber, agriculture, and dairy industries. To this end, a survey of commercial sensors was conducted, a few farms were visited, and managers and partners in agrocompanies were interviewed. A LoRaWAN network with a single gateway was simulated to estimate the efficiency (related to lost data packets), in the case of a 1000 ha cattle field with more than 1500 sensors and some cameras sharing the network. Finally, the network efficiency was measured, using 30–40 LoRa modules @ 915 MHz, transmitting at pseudorandom times to emulate up to thousands of LoRa sensor nodes. The simulated and measured results are very similar, reaching > 92% efficiency in all cases. Sites bigger than 1000 ha on the four main productive chains were also simulated. Additionally, energy consumption and transmission distance measurements of LoRaWAN modules are presented, as well as an overview of the economic aspects related to the deployment of the network, to corroborate that they fit the requirements of a FAN in the agribusiness.
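A rough analytic sanity check of the > 92% efficiency figure can be done with a pure-ALOHA collision model, which is a common first-order approximation for unacknowledged LoRaWAN uplinks. The sketch below is not the authors' simulator; the packet airtime, reporting rate, and channel count are illustrative assumptions, and spreading-factor orthogonality and capture effects are ignored:

```python
import math

def aloha_efficiency(n_nodes, packets_per_hour, airtime_s, n_channels=8):
    """Expected fraction of packets surviving collisions under pure ALOHA.

    g is the offered load per channel: the mean number of packet airtimes
    in flight at any instant. Pure-ALOHA success probability is exp(-2g).
    """
    g = n_nodes * packets_per_hour * airtime_s / 3600.0 / n_channels
    return math.exp(-2.0 * g)

# ~1500 sensors, one 100 ms packet every 10 min, 8 uplink channels (assumed).
eff = aloha_efficiency(1500, 6, 0.1, 8)
print(f"{eff:.3f}")
```

With these assumed parameters the model lands above 0.92, consistent with the measured efficiency; heavier traffic (cameras, more frequent reports) pushes the estimate down quickly.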
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 284–292, 2024. Citations: 0
Enhanced Machine-Learning Flow for Microwave-Sensing Systems for Contaminant Detection in Food
Pub Date : 2024-07-12 DOI: 10.1109/TAFE.2024.3421238
Bernardita Štitić;Luca Urbinati;Giuseppe Di Guglielmo;Luca P. Carloni;Mario R. Casu
Combining data-driven machine learning (ML) with microwave sensing (MWS) makes it possible to analyze packaged food in real time without any contact and spot low-density contaminants (e.g., plastics or glass splinters) that current industrial food safety methods cannot detect. This is achieved by training ML classifiers on the scattered signal reflected by the target food product exposed to MWs. In this article, we present an enhanced ML flow to boost foreign body detection accuracy of ML classifiers in MWS systems. Innovations include assessing the performance of a multiclass classifier, training it with MW frequency pairs as features, data augmentation, a new preprocessing scaler suitable for the feature distributions in the datasets, quantization, and pruning. The final test results, obtained using our previously designed MWS system and collected dataset of contaminated hazelnut-cocoa spread jars, show a multiclass accuracy for the floating-point model of 96.5%, which corresponds to a binary-equivalent accuracy of 97.3%. This is an improvement of +3.3% against the binary classifier of the previous work. The quantized and pruned model, instead, reached a multiclass accuracy of 94.2%, or a binary accuracy of 95.4%, thus still improving the previous work by +1.4%. Also, we achieved a latency of 26 μs on an AMD/Xilinx Kria K26 field programmable gate array (FPGA), a result which is ideal for high-throughput food production lines. Furthermore, we expand our latest work with supplementary details and experiments to further validate the proposed ML flow, including a comparative analysis against our prior results. Lastly, we share our datasets publicly on OpenML.
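The "MW frequency pairs as features" idea can be sketched as follows. The abstract does not specify how pairs are combined, so the pairwise difference and the standard scaler used here are assumptions for illustration only:

```python
import numpy as np
from itertools import combinations

# Synthetic stand-in for scattered-signal magnitudes at F frequency points.
rng = np.random.default_rng(0)
F = 16
spectra = rng.normal(size=(100, F))        # 100 samples x 16 frequency points

# One feature per unordered frequency pair: C(16, 2) = 120 features.
pairs = list(combinations(range(F), 2))
features = np.stack([spectra[:, i] - spectra[:, j] for i, j in pairs], axis=1)

# Preprocessing scaler: zero mean, unit variance per feature (fit on training data).
mu, sigma = features.mean(axis=0), features.std(axis=0)
scaled = (features - mu) / sigma
print(features.shape, len(pairs))
```

Pairing frequencies grows the feature count quadratically, which is one reason the article's later quantization and pruning steps matter for FPGA deployment.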
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 181–189, 2024. Citations: 0
UAV Sensing-Based Litchi Segmentation Using Modified Mask-RCNN for Precision Agriculture
Pub Date : 2024-07-12 DOI: 10.1109/TAFE.2024.3420028
Bhabesh Deka;Debarun Chakraborty
Traditional methods of manual litchi fruit counting are labor-intensive, time-consuming, and prone to errors. Moreover, due to the fruit's complex growth structures, such as occlusion by leaves and branches, overlapping, and uneven color, it becomes more challenging for current baseline detection and instance segmentation models to accurately identify litchi fruits. The advancement of deep learning architectures and modern sensing technologies, such as unmanned aerial vehicles (UAVs), has shown great potential for improving fruit counting accuracy and efficiency. In this article, we propose a modified Mask region-based convolutional neural network (Mask-RCNN) instance segmentation model using channel attention to detect and count litchis in complex natural environments using UAVs. In addition, we build a UAV-Litchi dataset consisting of 1000 images with 31 000 litchi annotations, collected by a DJI Phantom 4 with an RGB sensor and labeled with the LabelImg annotation tool. Experimental results show that the proposed model with the squeeze-and-excitation block improves the detection accuracy of litchi fruits, achieving a mean average precision, recall, and F1 score of 81.47%, 92.81%, and 88.40%, respectively, with an average inference time of 7.72 s. The high accuracy and efficiency of the proposed model demonstrate its potential for precise and accurate litchi detection in precision agriculture.
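The squeeze-and-excitation block mentioned above is a standard channel-attention mechanism: globally pool each channel, pass through a small bottleneck, and rescale channels by the resulting gates. A minimal NumPy sketch with random weights (shapes and reduction ratio are illustrative, not the article's configuration):

```python
import numpy as np

def squeeze_excitation(x, w1, b1, w2, b2):
    """Squeeze-and-excitation over a feature map x of shape (H, W, C):
    global average pool -> bottleneck FC + ReLU -> FC + sigmoid -> rescale."""
    z = x.mean(axis=(0, 1))                      # squeeze: (C,)
    h = np.maximum(z @ w1 + b1, 0.0)             # excitation, reduced dimension
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))     # per-channel gates in (0, 1)
    return x * s                                 # reweight channels

rng = np.random.default_rng(1)
C, r = 8, 2                                      # channels, reduction ratio (assumed)
x = rng.normal(size=(4, 4, C))
w1, b1 = rng.normal(size=(C, C // r)), np.zeros(C // r)
w2, b2 = rng.normal(size=(C // r, C)), np.zeros(C)
y = squeeze_excitation(x, w1, b1, w2, b2)
print(y.shape)
```

Because the gates lie in (0, 1), the block can only attenuate uninformative channels, which is why it helps suppress background clutter such as leaves and branches.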
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 509–517, 2024. Citations: 0
Robust Deep Convolutional Solutions for Identifying Biotic Crop Stress in Wild Environments
Pub Date : 2024-07-11 DOI: 10.1109/TAFE.2024.3422187
Chiranjit Pal;Imon Mukherjee;Sanjay Chatterji;Sanjoy Pratihar;Pabitra Mitra;Partha Pratim Chakrabarti
In the realm of agricultural automation, the precise identification of crop stress holds immense significance for enhancing crop productivity. Existing methods primarily focus on controlled environments, which may not accurately reflect field conditions. Field-based leaf image analysis poses challenges due to varying image quality and sunlight intensity. Moreover, the complexity of crop stress images, with their random lesion distribution, diverse symptoms, and complex backgrounds, further complicates the analysis. To overcome these limitations, a lightweight hybrid convolutional neural network has been developed. This system integrates the powerful three-deep blocks model with an autoencoder running in parallel to highlight regions of crop stress effectively. To support this approach, we have introduced the Indian Rice Disease Dataset (IRDD) with labeled images. The proposed system reports an average true positive rate (TPR) of 0.8766 and an average positive predictive value of 0.8720 on IRDD, which are higher than other state-of-the-art crop disease detection models. The system is validated on benchmark datasets, yielding significant results: TPR of 0.9870 (rice), 0.9985 (tomato), and 0.8559 (corn). Furthermore, the proposed model outperforms recent state-of-the-art works on the benchmark PlantDoc dataset, showing its effectiveness in generalizing plant disease identification tasks. Finally, an ablation study has been carried out to explore the importance of the two parallel branches. Overall, this study acts as a bridge between advanced science and practical application, showcasing how interdisciplinary automation could revolutionize crop disease identification, improve agricultural efficiency, and reshape broader industrial practices.
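For readers less familiar with the two metrics reported above, true positive rate (recall) and positive predictive value (precision) are computed from confusion counts as follows. The counts below are illustrative only, not taken from the article:

```python
def tpr_ppv(tp, fp, fn):
    """True positive rate (recall) and positive predictive value (precision)."""
    tpr = tp / (tp + fn)   # of all truly stressed samples, fraction found
    ppv = tp / (tp + fp)   # of all flagged samples, fraction truly stressed
    return tpr, ppv

# Illustrative counts (the article reports averages of 0.8766 TPR and
# 0.8720 PPV on IRDD, not these numbers).
tpr, ppv = tpr_ppv(tp=87, fp=13, fn=12)
print(round(tpr, 4), round(ppv, 4))
```

Reporting both matters in this setting: a detector tuned only for high TPR would flood farmers with false alarms, which PPV penalizes.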
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 497–508, 2024. Citations: 0
Network Connectivity and Sensor Response of an OpenThread WSN Platform for Crop Monitoring
Pub Date : 2024-07-10 DOI: 10.1109/TAFE.2024.3420648
Alessandro Checco;Vasco Fabiani;Maurizio Palmisano;Davide Polese
Wireless sensor networks can be a low-cost and efficient solution for monitoring environmental parameters in agriculture. In this work, we analyze the potential of using a network based on a general-purpose Internet of Things protocol such as OpenThread and on low-cost general-purpose physical and chemical sensors. This article aims to test and verify the platform functionalities when monitoring environmental parameters such as temperature, relative humidity, and visible and infrared irradiance in a real environment. We designed wireless sensor nodes and a data collection and visualization dashboard. We tested the sensors' response under controlled settings, conducted connectivity and network topology tests, validated the system functionality via in-field measurements, and discussed the main issues and potential capabilities of these sensors and network architecture.
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 218–225, 2024 (open access). Citations: 0
Harnessing Technology for Livestock Research: An Online Sheep Behavior Monitoring System
Pub Date : 2024-07-09 DOI: 10.1109/TAFE.2024.3416414
V. Cabrera;A. Delbuggio;H. Cardoso;D. Fraga;A. Gómez;M. Pedemonte;R. Ungerfeld;J. Oreggioni
Sheep production in extensive conditions faces several challenges. These challenges could be addressed with behavior monitoring systems, contributing to animal well-being, enhancing animal research, and improving productivity. This article presents the design, manufacture, and test of an online sheep behavior monitoring system for extensive conditions. It comprises a wearable electronic collar device and a cloud server (deployed with Amazon Web Services) for storing data and providing a web user interface. The collar has an Icarus Internet of Things (IoT) Board, allowing motion data collection with a three-axis accelerometer, global navigation satellite system (GNSS) location data acquisition, and narrowband IoT communication. The device has solar panels and a battery. Our application acquires accelerometer data at 25 Hz, location data every 10–30 s, and battery level and cellular signal strength every 50 s. We encoded accelerometer samples to reduce the transmitted data. We manufactured 30 collars that collect and transmit data to the cloud server. Our system facilitates data processing, both collar and server side. We introduce a preliminary Random Forest algorithm for behavior classification on the device that identifies “still,” “walking,” and “running” with a 78% general accuracy. The device's autonomy exceeds ten days in continuous operation (streaming raw and processed data) while if the device transmits only processed data and GNSS data every 4 h, autonomy rises to 100 days. This allows us to glimpse the application of this system in long-term research experiments and farming production.
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 306–313, 2024. Citations: 0
AgTech: Building Smart Aquaculture Assistant System Integrated IoT and Big Data Analysis
Pub Date : 2024-07-08 DOI: 10.1109/TAFE.2024.3416415
Ngoc-Bao-Van Le;Jun-Ho Huh
The development of Internet of Things (IoT) technology in agriculture, notably aquaculture, has accelerated over the years, enabling real-time monitoring and improved environmental sustainability. To ensure the development and survival of aquatic life, farm employees must constantly check ponds and act promptly to maintain a sustainable habitat. Providing technical assistance to farmers during the growing season is equally crucial. To address these issues, we present a smart aquaculture assistant system integrating IoT and Big Data. The system comprises two main components: an IoT layer and an assistant layer. The IoT layer tracks pond conditions through sensors installed in the pond, such as turbidity, pH, and temperature sensors. The assistant layer makes suggestions to farmers by visualizing farm environment conditions and analyzing outdoor weather, and uses a GPT-3.5 NLP model fine-tuned on 500 frequently asked aquaculture-farming questions and a crawled knowledge dataset. The chatbot provides logically coherent, natural replies and knowledge related to farming activities. The system is implemented as mobile and desktop applications using React Native and Python to monitor farms and handle administrative tasks. Thus, thanks to our smart assistant, prompt preventive action can be taken to reduce losses, increase productivity, and enhance farmers' knowledge of farming.
{"title":"AgTech: Building Smart Aquaculture Assistant System Integrated IoT and Big Data Analysis","authors":"Ngoc-Bao-Van Le;Jun-Ho Huh","doi":"10.1109/TAFE.2024.3416415","DOIUrl":"https://doi.org/10.1109/TAFE.2024.3416415","url":null,"abstract":"The development of Internet of Things (IoTs) technology in agriculture, notably aquaculture, has increased over the years due to empowering real-time monitoring and improved environmental sustainability. To ensure the development and survival of aquatic life, farm employees must constantly check and take prompt action to protect the sustainable habitat in ponds. It is also crucial for providing technical assistance to farmers during the growing season. To address these issues, we present the building of a smart aquaculture assistant system integrated with IoT and Big Data. The main components of this system include the IoT layer assistant layer. First, tracking the pond environment functions through sensor systems set up in the pond, such as turbidity, pH, and temperature sensors, which are built for the IoT layer. Our assistant can make suggestions for farmers by visualizing farm environment conditions and outdoor weather analysis. The assistant layer uses a fine-tuning NLP model GPT 3.5 for our aquaculture farming 500 frequently asked questions and crawled knowledge dataset. The chatbot can provide naturally logical replies and knowledge related to farming activities. The system is implemented as mobile and desktop applications using React Native and Python to monitor or manipulate administrative tasks. 
Thus, thanks to our smart assistant, prompt preventive action can be taken to reduce losses, increase productivity, and enhance farmers’ knowledge related to farming.","PeriodicalId":100637,"journal":{"name":"IEEE Transactions on AgriFood Electronics","volume":"2 2","pages":"471-482"},"PeriodicalIF":0.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142408403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
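The abstract describes a rule layer that turns raw sensor readings into suggestions for farmers. A minimal sketch of such an advisory rule engine is below; the safe ranges are hypothetical placeholders (real limits depend on the cultured species and local conditions), and this is not the paper's implementation:

```python
# Hypothetical safe ranges; real limits depend on the cultured species.
SAFE_RANGES = {
    "ph": (6.5, 8.5),            # typical freshwater pond window
    "temperature_c": (25.0, 32.0),
    "turbidity_ntu": (0.0, 50.0),
}

def advise(reading):
    """Return human-readable alerts for any sensor outside its safe range.
    A sketch of the rule layer that could sit in front of the chatbot."""
    alerts = []
    for sensor, (lo, hi) in SAFE_RANGES.items():
        value = reading.get(sensor)
        if value is None:
            alerts.append(f"{sensor}: no data (check the sensor)")
        elif value < lo:
            alerts.append(f"{sensor}={value} below safe range [{lo}, {hi}]")
        elif value > hi:
            alerts.append(f"{sensor}={value} above safe range [{lo}, {hi}]")
    return alerts or ["all monitored parameters within safe ranges"]

print(advise({"ph": 5.9, "temperature_c": 28.0, "turbidity_ntu": 80.0}))
```

Alerts like these could then be passed as context to the fine-tuned chatbot, which would phrase corrective actions in natural language for the farmer.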
A Novel Computer Vision System for Efficient Flea Beetle Monitoring in Canola Crop
Pub Date : 2024-07-08 DOI: 10.1109/TAFE.2024.3406329
Muhib Ullah;Muhammad Shabbir Hasan;Abdul Bais;Tyler Wist;Shaun Sharpe
Effective crop health monitoring is essential for farmers to make informed decisions about managing their crops. In canola crop management, the rapid proliferation of flea beetle (FB) populations is a major concern, as these pests can cause significant crop damage. Traditional manual field monitoring for FBs is time-consuming and error-prone due to its reliance on visual assessments of FB damage to small seedlings, making frequent and accurate surveys difficult to conduct. One of the key pieces of information in assessing whether control of FBs is required is the presence of live FBs in the canola crop. This article proposes a novel insect-monitoring framework that uses a solar-powered, intelligent trap called the smart insect trap (SIT), equipped with a high-resolution camera and a deep-learning-based object detection network. Using this SIT, coupled with a kairomonal lure, the FB population can be monitored hourly, and population increases can be identified quickly. The SIT processes images at the edge and sends results to the cloud every 40 min for FB monitoring and analysis. It uses a modified you only look once version 8 small (YOLOv8s) object detection network, FB-YOLO, to improve its ability to detect small FBs. The modification is implemented in the network's neck, which aggregates features from the deep and early pyramids of the backbone. Improved attention to small objects is achieved by incorporating spatially aware features from early pyramids. In addition, the network is integrated with an advanced box selection algorithm called confluence nonmax suppression (NMS-C) to prevent duplicate detections in highly overlapped clusters of FBs.
{"title":"A Novel Computer Vision System for Efficient Flea Beetle Monitoring in Canola Crop","authors":"Muhib Ullah;Muhammad Shabbir Hasan;Abdul Bais;Tyler Wist;Shaun Sharpe","doi":"10.1109/TAFE.2024.3406329","DOIUrl":"https://doi.org/10.1109/TAFE.2024.3406329","url":null,"abstract":"Effective crop health monitoring is essential for farmers to make informed decisions about managing their crops. In canola crop management, the rapid proliferation of flea beetle (FB) populations is a major concern, as these pests can cause significant crop damage. Traditional manual field monitoring for FBs is time consuming and error-prone due to its reliance on visual assessments of FB damage to small seedlings, making conducting frequent and accurate surveys difficult. One of the key pieces of information in assessing if control of FBs is required is the presence of live FBs in the canola crop. This article proposes a novel insect-monitoring framework that uses a solar-powered, intelligent trap called the smart insect trap (SIT), equipped with a high-resolution camera and a deep-learning-based object detection network. Using this SIT, coupled with a kairomonal lure, the FB population can be monitored hourly, and population increases can be identified quickly. The SIT processes images at the edge and sends results to the cloud every 40 min for FB monitoring and analysis. It uses a modified you look only once version 8 small (YOLOv8s) object detection network, FB-YOLO, to improve its ability to detect small FBs. The modification is implemented in the network's neck, which aggregates features from the deep and early pyramids of the backbone in the neck. Improved attention to small objects is achieved by incorporating spatially aware features from early pyramids. In addition, the network is integrated with an advanced box selection algorithm called confluence nonmax suppression (NMS-C) to prevent duplicate detections in highly overlapped clusters of FBs. 
The FB-YOLO achieved an average precision ($\text{mAP}@0.5$) of 89.97%, a 1.215% improvement over the YOLOv8s network with only 0.324 million additional parameters. Integrating NMS-C further improved the $\text{mAP}@0.5$ by 0.19%, leading to an overall $\text{mAP}@0.5$ of 90.16%.","PeriodicalId":"100637","journal":{"name":"IEEE Transactions on AgriFood Electronics","volume":"2 2","pages":"483-496"},"PeriodicalIF":0.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142408683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
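The paper's NMS-C replaces the standard greedy, IoU-based nonmax suppression used in most detectors with a confluence (proximity-based) criterion that the abstract does not fully specify. For context, the sketch below implements the conventional greedy IoU NMS baseline that NMS-C improves upon; it is not the NMS-C algorithm itself, and the detections are synthetic:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it.
    detections: list of (score, (x1, y1, x2, y2))."""
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, kb) < iou_thr for _, kb in kept):
            kept.append((score, box))
    return kept

dets = [
    (0.9, (10, 10, 50, 50)),      # strong detection
    (0.8, (12, 12, 52, 52)),      # near-duplicate of the first
    (0.7, (100, 100, 140, 140)),  # a separate beetle
]
print(nms(dets))  # near-duplicate suppressed, two boxes kept
```

The weakness this exposes is exactly the paper's motivation: in a dense cluster of small beetles, genuinely distinct insects can overlap above `iou_thr` and be wrongly suppressed, which is what the confluence criterion in NMS-C is designed to avoid.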
PA-RDFKNet: Unifying Plant Age Estimation through RGB-Depth Fusion and Knowledge Distillation
Pub Date : 2024-07-03 DOI: 10.1109/TAFE.2024.3418818
Shreya Bansal;Malya Singh;Seema Barda;Neeraj Goel;Mukesh Saini
Agriculture faces growing challenges in the 21st century due to resource scarcity. Artificial intelligence is being integrated with agriculture to cater to people's needs, unlocking fresh avenues for sustainability and innovation. One crucial agricultural practice is plant growth monitoring, which detects plant stress at an early stage. In the past, there have been preliminary attempts at plant growth monitoring using red–green–blue (RGB) and depth images. The major challenge of this approach is the unavailability of depth cameras at the farmers' end. In this work, we have developed a transformer-based plant age RGB-depth fusion knowledge distillation network (PA-RDFKNet), a multi-to-single-modal teacher–student network that exploits the combined knowledge of RGB-depth pairs at training time to infer growth using RGB images alone at test time. The model uses a distillation loss that combines response-based, feature-based, and relation-based knowledge distillation techniques in an offline scheme. The proposed knowledge distillation reduces the mean squared error for RGB images from 2 weeks to 0.14 weeks.
{"title":"PA-RDFKNet: Unifying Plant Age Estimation through RGB-Depth Fusion and Knowledge Distillation","authors":"Shreya Bansal;Malya Singh;Seema Barda;Neeraj Goel;Mukesh Saini","doi":"10.1109/TAFE.2024.3418818","DOIUrl":"https://doi.org/10.1109/TAFE.2024.3418818","url":null,"abstract":"Agriculture is facing bigger challenges in the 21st century due to the scarcity of resources. Artificial intelligence is being integrated with agriculture to cater to people's needs, unlocking fresh avenues for sustainability and innovation. One of the crucial agricultural practices is plant growth monitoring to detect plant stress at an early stage. In the past, there have been preliminary attempts at plant growth monitoring using red–green–blue (RGB) and depth images. The major challenge of this approach is the unavailability of the depth camera at the farmers' end. In this work, we have developed a transformer-based plant age RGB-depth fusion knowledge distillation network (PA-RDFKNet), a multi-to-single modal teacher–student network, that exploits the combined knowledge of RGB-depth pairs at the training time to infer the growth using RGB images alone during test time. The model uses a distillation loss that combines response-based, feature-based, and relation-based knowledge distillation techniques in the offline scheme. The proposed knowledge distillation improves the mean squared error for RGB images from 2 to 0.14 weeks. 
The results are validated on three different datasets.","PeriodicalId":100637,"journal":{"name":"IEEE Transactions on AgriFood Electronics","volume":"2 2","pages":"226-235"},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
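The abstract names the three distillation terms (response-, feature-, and relation-based) but not their exact formulation. A minimal numeric sketch of such a combined loss is below: the response term matches teacher and student predictions, the feature term matches intermediate features, and the relation term matches pairwise distances between samples. The weights, MSE formulation, and toy values are assumptions for illustration, not the paper's published loss:

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pairwise_dists(vecs):
    """Relation-based distillation compares distances between samples,
    not the raw values themselves."""
    return [abs(vecs[i] - vecs[j])
            for i in range(len(vecs)) for j in range(i + 1, len(vecs))]

def distillation_loss(t_out, s_out, t_feat, s_feat, w=(1.0, 0.5, 0.5)):
    """Weighted sum of response-, feature-, and relation-based terms.
    Weights w are illustrative; the paper does not state its values here."""
    response = mse(t_out, s_out)    # match teacher predictions
    feature = mse(t_feat, s_feat)   # match teacher hidden features
    relation = mse(pairwise_dists(t_out), pairwise_dists(s_out))
    return w[0] * response + w[1] * feature + w[2] * relation

# Toy batch: teacher (RGB+depth) vs. student (RGB-only) age estimates in weeks.
teacher_ages, student_ages = [3.0, 5.0, 8.0], [3.2, 4.8, 8.5]
teacher_feat, student_feat = [0.1, 0.4, 0.9], [0.1, 0.5, 0.8]
loss = distillation_loss(teacher_ages, student_ages, teacher_feat, student_feat)
print(round(loss, 4))  # -> 0.2367
```

Minimizing a loss of this shape during training is what lets the RGB-only student approach the RGB-depth teacher, so that no depth camera is needed at inference time.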