
Latest Publications in IEEE Transactions on AgriFood Electronics

An Approach Based on Knowledge Distillation for Lightweight Defect Classification of Green Plums
Pub Date : 2025-01-06 DOI: 10.1109/TAFE.2024.3488196
Jinhai Wang;Wei Wang;Lan Liao;Lufeng Luo;Xuemin Lin;Xinan Zeng
During the cultivation and growth of green plums, various defects frequently occur, potentially affecting their overall quality and economic value. Accurate classification and identification of these defects have become essential components of the harvesting process, particularly when employing smart agricultural equipment. These defects pose significant challenges to the yield and quality of green plums, making their precise detection crucial for ensuring optimal output and economic efficiency. However, most contemporary research on fruit defect classification and grading using artificial intelligence techniques primarily focuses on accuracy, often neglecting the constraints imposed by limited resources. This study addresses the aforementioned challenges by employing knowledge distillation techniques to optimize the performance of a lightweight model. Specifically, during the knowledge distillation process, the vision transformer model, known for its robust recognition capabilities, was selected as the teacher model. The lightweight MobileNetv3 model, chosen for its ease of deployment, served as the student model and was trained using the Lion optimizer. In addition, the dual guidance learning module was designed to enhance knowledge transfer between the teacher and student models, thereby improving the overall capability of the student model. Experimental validation demonstrated that the proposed method excels in the green plum defect recognition task, with the student model, MobileNetv3, achieving an accuracy of 99.17% and exhibiting high performance in key metrics such as precision, recall, and F1-score. Notably, MobileNetv3 not only delivers exceptional performance but also features a low parameter count and computational complexity, facilitating its efficient deployment in practical applications. 
This study provides an effective and practical solution for the automatic identification and sorting of green plum defects, significantly advancing the development and application of smart agricultural technologies.
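The distillation objective behind this kind of teacher-student training can be sketched as a weighted sum of a temperature-softened KL term and the usual hard-label cross-entropy. This is a minimal generic sketch of knowledge distillation, not the paper's dual guidance learning module; the temperature `T=4.0` and weight `alpha=0.7` are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """alpha-weighted soft-target KL term (scaled by T^2, as in classic
    knowledge distillation) plus (1 - alpha)-weighted hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    ce = -math.log(softmax(student_logits)[label])
    return alpha * (T * T) * kl + (1 - alpha) * ce
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label cross-entropy remains.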
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 213–223.
Citations: 0
Extended Kalman Filter Based Tracking Method for Accurate Fruit Yield Estimation Preserving SE(3) Equivariance
Pub Date : 2024-12-23 DOI: 10.1109/TAFE.2024.3513637
Hari Chandana Pichhika;Priyambada Subudhi;Raja Vara Prasad Yerra
Automatic yield estimation is crucial for fruit cultivation, impacting everything from harvesting to marketing. This article introduces an efficient tracking mechanism for accurate yield estimation in mango farming, addressing challenges such as fruit detection inconsistency and over-counting. We applied this tracking-based solution to a video dataset collected in a 360° viewpoint of each mango tree in a one-acre Banginapalle orchard during daylight. The videos underwent preprocessing, including gamma correction, Gaussian smoothing, and stabilization to minimize the quivering of video frames. We also implemented a cosine similarity technique to remove redundant frames with 90% similarity and segmented the canopy to identify the regions of interest. The mango detection system employs YOLOv8s and an extended Kalman filter that preserves special Euclidean group [SE(3)] equivariance, ensuring accurate mango tracking across frames that is robust to camera movements through angular estimation. Our method surpasses existing tracking-based algorithms such as Sort, DeepSort, and Bot-sort in tests with ten video sequences. In addition, the results are comparable to the harvest count obtained from the farmer and the manual labeling count in the video frames, achieving mean absolute errors close to 0.341 and 0.089, respectively.
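The per-frame predict/update cycle at the heart of Kalman-filter tracking can be illustrated with a plain linear constant-velocity filter over an image-plane centroid. This is a deliberately simplified sketch, not the paper's SE(3)-equivariant extended Kalman filter; the noise covariances `Q` and `R` and the drift scenario are illustrative assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter for one fruit centroid in the image plane.
# State: [u, v, du, dv]; measurement: [u, v].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # measurement model
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = np.eye(2)                               # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle; z is the detected centroid in this frame."""
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y                           # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 50.0, 0.0, 0.0])         # initialized at the first detection
P = np.eye(4)
for k in range(20):                         # fruit drifting right at 2 px/frame
    x, P = kf_step(x, P, np.array([2.0 * k, 50.0]))
```

After a few frames the filter recovers the horizontal drift velocity, which is what makes the track robust to per-frame detection noise.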
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 200–212.
Citations: 0
Precision Fertilization via Spatio-temporal Tensor Multi-task Learning and One-Shot Learning
Pub Date : 2024-12-11 DOI: 10.1109/TAFE.2024.3485949
Yu Zhang;Kang Liu;Xulong Wang;Rujing Wang;Po Yang
Precision fertilization is essential in agricultural systems for balancing soil nutrients, conserving fertilizer, decreasing emissions, and increasing crop yields. Access to comprehensive and diverse agricultural data is problematic due to the lack of sophisticated sensor and network technologies on the majority of farms, and available agricultural data are generally unstructured and difficult to mine. The scarcity of agricultural data is, consequently, a significant impediment to the use of machine learning approaches for precision fertilization. In this research, we investigate a newly gathered agricultural dataset from nine real winter wheat farms in the United Kingdom, which encompasses an extensive variety of agricultural variables, including climate, soil nutrients, and farming data. To handle the spatio-temporal characteristics of the dataset and to address the scarcity of agricultural data, we propose a novel machine learning approach integrating multi-task learning and one-shot learning, which combines a multi-dimensional tensor constructed from the original data with fertilization temporal patterns, extracted by comparison with environmental information from existing real farms, to accurately predict the quantity and timing of base and top dressing fertilization. Specifically, the agricultural data are converted into a 3-D tensor, and a tensor decomposition technique is used to derive a set of comprehensible spatio-temporal latent factors from the original data. These latent factors are subsequently used to construct the spatio-temporal tensor prediction model as a set of multi-task relationships. The proposed one-shot learning approach uses the Mahalanobis distance to evaluate the similarity of environmental information between the target farm and existing real-world farms as a determinant of whether to transfer the fertilization temporal pattern of an existing farm to the target farm. Comprehensive experiments are conducted to compare the proposed approach with standard regression models on the real-world agricultural dataset. The experimental results demonstrate that our proposed approach delivers superior accuracy and stability for fertilization prediction.
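The one-shot transfer test described above reduces to computing a Mahalanobis distance between the target farm's environmental feature vector and those of the existing farms, then transferring the fertilization pattern of the nearest farm. A small sketch under assumed, illustrative feature values (temperature, rainfall, soil nitrogen; not data from the paper's nine UK farms):

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between feature vectors x and y."""
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

# Environmental features per existing farm (rows): mean temperature (deg C),
# annual rainfall (mm), soil nitrogen (kg/ha) -- illustrative assumptions.
farms = np.array([[14.2, 620.0, 48.0],
                  [13.8, 700.0, 52.0],
                  [15.1, 580.0, 45.0],
                  [12.9, 810.0, 60.0]])
target = np.array([14.0, 640.0, 49.0])

cov_inv = np.linalg.inv(np.cov(farms, rowvar=False))
dists = [mahalanobis(f, target, cov_inv) for f in farms]
best = int(np.argmin(dists))   # farm whose fertilization pattern to transfer
```

The covariance scaling is what makes the comparison sensible across features with very different units, which a plain Euclidean distance would not be.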
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 190–199.
Citations: 0
TranSEF: Transformer Enhanced Self-Ensemble Framework for Damage Assessment in Canola Crops
Pub Date : 2024-12-05 DOI: 10.1109/TAFE.2024.3504956
Muhib Ullah;Abdul Bais;Tyler Wist
Crop health monitoring is crucial for implementing timely and effective interventions that ensure sustainability and maximize crop yield. Flea beetles (FB), the crucifer flea beetle (Phyllotreta cruciferae) and the striped flea beetle (Phyllotreta striolata), pose a significant threat to canola crop health and cause substantial damage if not addressed promptly. When insecticidal seed treatments are overcome by FB feeding and the action threshold is exceeded, accurate and timely damage quantification is crucial for implementing targeted pest management strategies that minimize yield losses. Traditional manual field monitoring for FB damage is time-consuming and error-prone due to its reliance on human visual estimates of FB damage. This article proposes TranSEF, a novel self-ensemble semantic segmentation algorithm that utilizes a hybrid convolutional neural network-vision transformer (ViT) encoder-decoder framework. The encoder employs a modified cross-stage partial DenseNet (CSPDenseNet), MCSPDNet, which enhances attention to tiny regions by aggregating spatially aware features from shallow layers with deeper, more abstract features. ViTs effectively capture the global context in the decoder by modeling long-range dependencies and relationships across the image. Each decoder independently processes inputs from different stages of the MCSPDNet, acting as a weak learner within an ensemble-like approach. Unlike traditional ensemble learning approaches that train weak learners separately, TranSEF is trained end-to-end, making it a self-ensembling framework. TranSEF uses hybrid supervision with a composite loss function, where decoders generate independent predictions and simultaneously supervise each other. TranSEF achieves IoU scores of 0.831 for canola leaves and 0.807 for FB damage, and the overall mIoU improved by 2.29% and 1.56% over DeepLabv3+ and SegFormer, respectively, while utilizing only 35.42 M trainable parameters, significantly fewer than DeepLabv3+ (63 M) and SegFormer (61 M).
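The reported IoU and mIoU metrics are computed per class as intersection over union between predicted and ground-truth masks. A minimal sketch over toy flattened label maps (the class ids and masks are illustrative assumptions, not from the paper's dataset):

```python
def iou(pred, gt, cls):
    """Intersection-over-union for one class over flat label lists."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 0.0

def mean_iou(pred, gt, classes):
    """Mean of per-class IoU over the listed classes."""
    return sum(iou(pred, gt, c) for c in classes) / len(classes)

# 0 = background, 1 = canola leaf, 2 = flea-beetle damage (toy masks)
gt   = [0, 1, 1, 1, 2, 2, 0, 0]
pred = [0, 1, 1, 2, 2, 2, 0, 1]
```

On real segmentation output the same computation runs over every pixel of the mask rather than an eight-element list.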
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 179–189.
Citations: 0
Ingredient-Guided RGB-D Fusion Network for Nutritional Assessment
Pub Date : 2024-12-03 DOI: 10.1109/TAFE.2024.3493332
Zhihui Feng;Hao Xiong;Weiqing Min;Sujuan Hou;Huichuan Duan;Zhonghua Liu;Shuqiang Jiang
The nutritional value of agricultural products is an important indicator for evaluating their quality, which directly affects people's dietary choices and overall well-being. Nutritional assessment studies provide a scientific basis for the production, processing, and marketing of food by analyzing the nutrients they contain. Traditional methods often struggle with suboptimal accuracy, can be time-consuming, and are constrained by a shortage of trained professionals. Progress in artificial intelligence has revolutionized dietary health by offering more accessible, vision-based methods for food nutritional assessment. However, existing vision-based methods using RGB images often face challenges under varying lighting conditions, which impacts the accuracy of nutritional assessment. An alternative is the RGB-D fusion method, which combines RGB images and depth maps. Yet these methods typically rely on simple fusion techniques that do not ensure precise assessment. Additionally, current vision-based methods struggle to detect small components such as oils and sugars on food surfaces, which are crucial for determining ingredient information and ensuring accurate nutritional assessment. To this end, we propose a novel ingredient-guided RGB-D fusion network that integrates RGB images with depth maps and enables more reliable nutritional assessment guided by ingredient information. Specifically, the multifrequency bimodality fusion module is designed to leverage the correlation between the RGB image and the depth map within the frequency domain. Furthermore, the progressive-fusion module and ingredient-guided module leverage ingredient information to explore the potential correlation between ingredients and nutrients, thereby enhancing the guidance for nutritional assessment learning. We evaluate our approach under a variety of ablation settings on Nutrition5k, where it consistently outperforms state-of-the-art methods.
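One simple way to picture frequency-domain fusion of the two modalities is to keep the low-frequency content of the depth map (coarse geometry) and the high-frequency content of the RGB image (fine texture). This toy FFT sketch is an assumption-laden illustration, not the paper's multifrequency bimodality fusion module; the 0.25 cycles/pixel cutoff is arbitrary.

```python
import numpy as np

def frequency_fusion(rgb_gray, depth, cutoff=0.25):
    """Toy fusion: low frequencies from the depth map, high frequencies
    from the grayscale RGB image, combined in the Fourier domain."""
    Fr = np.fft.fft2(rgb_gray)
    Fd = np.fft.fft2(depth)
    h, w = rgb_gray.shape
    fy = np.fft.fftfreq(h)[:, None]          # per-row frequencies
    fx = np.fft.fftfreq(w)[None, :]          # per-column frequencies
    low = (np.abs(fy) < cutoff) & (np.abs(fx) < cutoff)
    fused = np.where(low, Fd, Fr)            # pick bands per modality
    return np.real(np.fft.ifft2(fused))

rgb = np.random.default_rng(0).random((32, 32))
dep = np.random.default_rng(1).random((32, 32))
out = frequency_fusion(rgb, dep)
```

A learned module would replace the hard band mask with trainable per-frequency weights, but the band-splitting intuition is the same.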
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 156–166.
Citations: 0
LiRAN: A Lightweight Residual Attention Network for In-Field Plant Pest Recognition
Pub Date : 2024-12-03 DOI: 10.1109/TAFE.2024.3496798
Sivasubramaniam Janarthan;Selvarajah Thuseethan;Sutharshan Rajasegarar;Qiang Lyu;Yongqiang Zheng;John Yearwood
Plant pests are a major threat to a sustainable food supply, causing damage to food production and agriculture industries around the world. Despite these negative impacts, plant pests have on several occasions also been used to improve the quality of agricultural products. Although deep learning-based automated plant pest identification techniques have shown tremendous success in the recent past, they are often limited by increased computational cost, large training data requirements, and impaired performance when pests appear against complex backgrounds. To alleviate these challenges, a lightweight attention-based convolutional neural network architecture, called LiRAN, based on a novel simplified attention mask module and an extended MobileNetV2 architecture, is proposed in this study. The experimental results reveal that the proposed architecture attains 96.25%, 98.9%, and 91% accuracy on three variants of publicly available datasets with 5869, 545, and 500 sample images, respectively, performing consistently well in both large- and small-data conditions. More importantly, the model can be deployed on smartphones or other resource-constrained embedded devices for in-field use, requiring only approximately 9.3 MB of storage, around 2.37 M parameters, and 0.34 giga multiply-and-accumulate operations for an input image size of 224 × 224.
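The low parameter count follows largely from MobileNetV2-style depthwise separable convolutions, which factor a standard k × k convolution into a depthwise step and a pointwise step. A quick back-of-the-envelope comparison (the channel sizes are illustrative, not layer shapes from LiRAN):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv plus pointwise 1 x 1 conv, MobileNet-style."""
    return k * k * c_in + c_in * c_out

std = conv_params(128, 256, 3)                 # standard 3x3 layer
sep = depthwise_separable_params(128, 256, 3)  # separable equivalent
```

For these illustrative channel counts the separable form uses roughly an order of magnitude fewer weights, which is where the megabyte-scale model sizes come from.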
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 167–178.
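The simplified attention mask module at the core of LiRAN is only named in this abstract. As a rough illustration of the channel-attention family it extends, here is a minimal squeeze-and-excitation-style gate in NumPy; the layer sizes, reduction ratio, and random weights are hypothetical and not taken from the paper:

```python
import numpy as np

def se_channel_attention(feature_map: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Rescale each channel of a (C, H, W) feature map by a learned gate in (0, 1).

    w1: (C//r, C) squeeze weights; w2: (C, C//r) excitation weights.
    """
    squeezed = feature_map.mean(axis=(1, 2))          # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)           # ReLU bottleneck -> (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid gate -> (C,)
    return feature_map * gate[:, None, None]          # broadcast gate over H and W

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                    # toy feature map: 8 channels, 4x4
w1 = rng.standard_normal((2, 8)) * 0.1                # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_channel_attention(x, w1, w2)
print(y.shape)                                        # (8, 4, 4)
```

Because the gate is a sigmoid, each output channel is the input channel scaled by a factor in (0, 1) — informative channels are preserved, weak ones suppressed — which is what makes this style of attention cheap enough for mobile backbones such as MobileNetV2.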
Pest and Disease Management in Ginger Plants: Artificial Intelligence of Things (AIoT)
Pub Date : 2024-11-21 DOI: 10.1109/TAFE.2024.3492323
Olakunle Elijah;Abiodun Emmanuel Abioye;Tawanda E. Maguvu
Ginger (Zingiber officinale), a globally cultivated spice crop, is vital to numerous economies. However, its production faces significant challenges due to pests and diseases, which can lead to substantial yield losses. Traditional methods for detecting these threats often rely on visual inspection by human experts, a process that is time-consuming, labor-intensive, and prone to errors. This article examines the potential of artificial intelligence (AI) to address these limitations and transform ginger cultivation. It provides a comprehensive analysis of conventional pest and disease management strategies, identifying their shortcomings and exploring the potential of emerging AI technologies, including Artificial Intelligence of Things (AIoT) applications, for accurate, efficient, and timely detection and control. By pinpointing the challenges and outlining promising avenues for future research, this study aims to equip agriculturists and researchers with the knowledge necessary to optimize ginger production, enhance food security, and foster sustainable farming practices.
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 86-97.
A Novel Nectarine Fruit Maturity Detection and Classification Counting Model Based on YOLOv8n
Pub Date : 2024-11-14 DOI: 10.1109/TAFE.2024.3488747
Baofeng Ji;Jingming Zhao;Fazhan Tao;Ji Zhang;Gaoyuan Zhang;Nan Wang;Ping Zhang;Huitao Fan
Fruit yield assessment is an important aspect of orchard management. In this context, target detection of fruit is of paramount importance. However, due to complex factors in real orchard environments, such as fruit occlusion, insufficient lighting, and overlapping fruits, traditional detection and counting methods often suffer from low detection accuracy and inadequate classification precision, failing to meet the requirements of practical applications. To address this issue, we focus on nectarine fruit and propose an improved YOLOv8n-based object detection model, YOLOv8n-global feature extraction enhancement (GFE). We integrate the effective squeeze-and-excitation attention mechanism into the YOLOv8n model, which allows our approach to adaptively adjust the weight of each channel, enhancing both detection efficiency and target recognition accuracy. We then introduce a focal distance-intersection over union loss to address the misjudgment of hard samples, further improving detection accuracy. In addition, we incorporate the gather-and-distribute mechanism from GOLD-YOLO, replacing the traditional feature pyramid network structure; this improves the information fusion capability in the neck of the model, leading to a higher mean average precision (mAP@0.5). The output of the improved model can also be used as an input to DEEPSORT to classify and count nectarine fruit, a functionality useful for estimating fruit maturity and yield in orchards. Experimental results demonstrate that the YOLOv8n-GFE model achieves an mAP@0.5 of 92.5%, an improvement of 3.2% over the original YOLOv8n model, meeting the accuracy required for recognizing nectarine fruit maturity in practical applications.
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 144-155.
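The focal distance-intersection over union loss is only named in the abstract above. As a hedged sketch of its distance-IoU component (the focal weighting and the exact formulation used in the paper are not reproduced here), the standard DIoU loss for two axis-aligned boxes in (x1, y1, x2, y2) form penalizes both poor overlap and center-point distance:

```python
def diou_loss(box_a, box_b):
    """Distance-IoU loss for two (x1, y1, x2, y2) boxes: 1 - IoU + d^2 / c^2,
    where d is the distance between box centers and c is the diagonal of the
    smallest box enclosing both."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection area
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared distance between box centers
    d2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    return 1.0 - iou + d2 / c2

print(diou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 (perfect overlap, coincident centers)
```

Unlike plain IoU loss, the d²/c² term still provides a gradient when two boxes do not overlap at all, which is what helps with the hard (occluded, overlapping-fruit) samples the abstract mentions.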
A Power Management and Control System for Environmental Monitoring Devices
Pub Date : 2024-11-11 DOI: 10.1109/TAFE.2024.3472493
Marcel Balle;Wenxiu Xu;Kevin FA Darras;Thomas Cherico Wanger
Recent advances in Internet of Things and artificial intelligence technologies have shifted automated monitoring in smart agriculture toward low-power sensors and embedded vision on powerful processing units. Vision-based monitoring devices need an effective power management and control system, with system-adapted power input and output capabilities, to achieve power-efficient and self-sustainable operation. Here, we present a universal power management solution for automated monitoring devices in agricultural systems, compatible with commonly used off-the-shelf edge processing units (EPUs). The proposed design is specifically adapted to battery-powered EPU systems by incorporating power-matched energy harvesting, a power switch with a low-power sleep mode, and simple system integration in a microcontroller-unit-less architecture with automated operation. We use a four-month case study monitoring the effects of plastic pollution in agricultural soils on plant growth under 4-mg microplastic exposure, demonstrating that the setup achieved continuous and sustainable operation. In this agricultural application, our power management module is deployed in an embedded vision camera equipped with a 5-W solar panel and five environmental sensors, effectively monitoring environmental stress and plant growth state. This work highlights the application of the power management board in embedded agricultural monitoring devices for precision farming.
IEEE Transactions on AgriFood Electronics, vol. 3, no. 1, pp. 134-143.
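A back-of-the-envelope budget shows why the low-power sleep mode is central to self-sustainable operation: a duty-cycled device's mean draw is dominated by its sleep current, and that mean must stay under the harvested solar budget. All figures below are hypothetical illustrations, not measurements from the paper:

```python
def average_power_mw(active_mw: float, sleep_mw: float, duty_cycle: float) -> float:
    """Mean power draw of a device that is active for `duty_cycle` of the time
    and asleep for the rest."""
    return duty_cycle * active_mw + (1.0 - duty_cycle) * sleep_mw

def is_self_sustaining(avg_draw_mw: float, panel_w: float,
                       sun_hours_per_day: float, efficiency: float = 0.7) -> bool:
    """True if daily harvested energy covers the 24 h consumption.

    `efficiency` lumps together panel derating and charging losses (assumed value).
    """
    harvested_mwh = panel_w * 1000.0 * sun_hours_per_day * efficiency
    consumed_mwh = avg_draw_mw * 24.0
    return harvested_mwh >= consumed_mwh

# Hypothetical figures: 2.5 W active camera + EPU, 5 mW sleep, active 2% of the time.
avg = average_power_mw(2500.0, 5.0, 0.02)
print(round(avg, 1))                                                # 54.9 (mW)
print(is_self_sustaining(avg, panel_w=5.0, sun_hours_per_day=4.0))  # True
```

With these toy numbers, a 5-W panel and four effective sun hours harvest roughly 14 Wh/day against about 1.3 Wh/day consumed, so the margin is comfortable; without the sleep mode (duty cycle 1.0) the same device would draw 60 Wh/day and drain its battery.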
2024 Index IEEE Transactions on AgriFood Electronics Vol. 2
Pub Date : 2024-10-24 DOI: 10.1109/TAFE.2024.3483630
IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 638-652.