
Latest publications in Computers and Electronics in Agriculture

RGB camera-based monocular stereo vision applied in plant phenotype: A survey
IF 7.7 | CAS Tier 1, Agricultural and Forestry Sciences | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-10-09 | DOI: 10.1016/j.compag.2024.109523

Background

The breeding of plants with superior traits and the improvement of cultivation practices are two essential ways to achieve yield growth and quality improvement. Phenotype, the result of the interaction between genes and the environment, plays a key role in understanding plant geometry, growth, and development. However, inefficient manual phenotypic measurement has become the main bottleneck restricting the advancement of related technologies. The monocular stereo vision system based on an RGB camera is considered a promising approach to high-throughput three-dimensional phenotypic data acquisition: it is cost-effective, highly efficient, and accurate.

Scope and approach

This work presents a comprehensive summary of eight commonly used three-dimensional reconstruction methods in monocular stereo vision, along with three common image acquisition methods (circular, fixed, and straight) applied in plant phenotyping. Through a systematic review of the literature published in the past decade, this paper highlights the application of these systems and matching methods in three-dimensional plant phenotypic research. Additionally, this paper discusses the advantages and disadvantages of the different approaches.

Key findings and conclusions

At present, monocular stereo vision systems based on a single RGB camera are widely used to acquire diverse plant traits because of their affordability and convenience. Different application scenarios call for corresponding mechanical structures and data processing methods. Deep learning-based three-dimensional reconstruction methods have shown promising results and significant potential across all three common image acquisition methods. However, the effectiveness of deep learning for reconstruction still requires further validation given the current lack of datasets. Moreover, limitations exist in exploiting the results of 3D reconstruction and in the selection of experimental subjects, such as vertical farming. To advance modern breeding and intelligent cultivation, it is imperative to promote dataset collection, diversify the range of research subjects (such as edible fungi and diseased plants), and develop a novel, automated, high-throughput, four-dimensional phenotyping platform. As such, monocular stereo vision systems based on an RGB camera, coupled with expanded applications and more efficient reconstruction algorithms, will undoubtedly become a focal point for future research.
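In a monocular stereo setup, depth comes from triangulating the same feature as seen from two poses of the single moving camera. As an illustrative sketch only (not a specific method from the surveyed papers), the standard linear (DLT) triangulation step can be written in a few lines of NumPy; the camera matrices and the 3D point below are toy values:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two pixel observations via linear DLT.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]                    # dehomogenize

# Toy setup: identity intrinsics; second view translated 0.5 m along x,
# mimicking a "straight" acquisition path with a single camera.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 2.0])        # a point 2 m in front of camera 1
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate_dlt(P1, P2, x1, x2)
print(np.round(X_hat, 6))                 # recovers X_true up to float error
```

With noise-free correspondences the linear solution is exact; real pipelines refine it with bundle adjustment and dense matching.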
LVF: A language and vision fusion framework for tomato diseases segmentation
IF 7.7 | CAS Tier 1, Agricultural and Forestry Sciences | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-10-09 | DOI: 10.1016/j.compag.2024.109484
With the development of deep learning technology, the control of tomato diseases has emerged as a crucial aspect of intelligent agricultural management. While current research on tomato disease segmentation has made considerable strides, challenges persist because tomato leaf diseases are susceptible to strong light reflections and shadow gradients in sunlight. Additionally, the complex backgrounds found in agricultural fields often lead to model confusion, resulting in inaccurate segmentation. Traditional methods for tomato disease segmentation rely on single-modal image-based models, which struggle with the nuanced features and limited extent of tomato leaf diseases. To address these issues, our study introduces the LVF framework, a dual-modal approach combining image and text information for pre-segmentation of tomato diseases. We began by creating a new dataset labeled with both images and text, focusing on diseased tomato leaves with guidance from agricultural experts. For image processing, we developed a probabilistic differential fusion network that leverages color and grayscale images to mitigate interference caused by high-frequency noise. Furthermore, our reinforcement feature network and threshold filtering network enhance useful information while filtering out negative information from the fused images. For text processing, we proposed a multi-scale cross-nesting network to integrate semantic information about diseases across different scales and types. By nesting BERT-processed word vectors with fused image vectors, our model gains a deeper understanding of semantic information, improving its ability to segment crop diseases accurately. Our experiments, conducted on a self-constructed tomato dataset as well as public datasets for tomatoes and maize, demonstrate the efficacy and robustness of our approach in leaf disease segmentation. The LVF framework offers a valuable tool for enhancing the accuracy of crop disease segmentation, especially in complex agricultural environments.
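The abstract does not give LVF's fusion equations, so the following is only a loose, hypothetical illustration of feature-level language-vision fusion: a sentence embedding is broadcast over a CNN feature map, concatenated channel-wise, and linearly projected to per-pixel segmentation logits. All shapes and weights are invented toy values, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(img_feat, txt_feat, W, b):
    """Late fusion: broadcast one text embedding over every spatial position
    of an image feature map, concatenate, and apply a linear projection."""
    h, w, _ = img_feat.shape
    txt = np.broadcast_to(txt_feat, (h, w, txt_feat.shape[-1]))
    fused = np.concatenate([img_feat, txt], axis=-1)   # (h, w, c_img + c_txt)
    return fused @ W + b                                # (h, w, n_classes)

img_feat = rng.normal(size=(8, 8, 16))   # toy CNN feature map
txt_feat = rng.normal(size=(32,))        # toy sentence embedding (e.g. BERT-like)
W = rng.normal(size=(48, 4)) * 0.1       # 16 + 32 channels -> 4 class logits
b = np.zeros(4)

logits = fuse_features(img_feat, txt_feat, W, b)
print(logits.shape)  # (8, 8, 4)
```

Real fusion modules (including LVF's cross-nesting network) learn these projections and typically use attention rather than plain concatenation; the sketch only shows the data-flow shape.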
Multi-feature language-image model for fruit quality image classification
IF 7.7 | CAS Tier 1, Agricultural and Forestry Sciences | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-10-08 | DOI: 10.1016/j.compag.2024.109462
Fruit quality classification has a great impact on the modern fruit industry. However, deep learning methods for fruit quality classification often demand a substantial number of labeled samples, which are hard and expensive to collect in many real-world applications, resulting in overfitting and poor generalization. The Contrastive Language-Image Pre-Training (CLIP) model, which fuses image and text features, has demonstrated excellent performance in zero-shot classification. Inspired by CLIP, in this paper we propose a multi-feature language-image (MFLI) model for fruit quality classification, in which the fruit image and feature text are fused to enhance feature extraction. Furthermore, we construct a pomelo quality dataset containing first- and second-grade pomelos. Based on the zero-shot learning results of CLIP on this dataset, we provide recommendations for pre-prompts and multi-feature text. Experimental results show that in zero-shot, few-shot, and conventional learning scenarios, our MFLI model outperforms state-of-the-art models on seven types of fruit, demonstrating excellent generalization capabilities.
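CLIP's zero-shot rule, which the MFLI model builds on, is simply cosine similarity between an image embedding and one text-prompt embedding per class, with the most similar prompt winning. A minimal sketch with made-up three-dimensional embeddings (real CLIP embeddings are 512-dimensional or larger, produced by the trained encoders):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """CLIP-style zero-shot classification: L2-normalize the image embedding
    and each class-prompt embedding, compute cosine similarities, argmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img
    return int(np.argmax(sims)), sims

# Hypothetical embeddings standing in for CLIP encoder outputs.
text_embs = np.array([
    [1.0, 0.0, 0.0],   # prompt: "a photo of a first-grade pomelo"
    [0.0, 1.0, 0.0],   # prompt: "a photo of a second-grade pomelo"
])
image_emb = np.array([0.9, 0.2, 0.1])

label, sims = zero_shot_classify(image_emb, text_embs)
print(label)  # 0 -> first-grade
```

Prompt wording ("pre-prompt") matters because the text encoder maps different phrasings to different embeddings, which is why the paper's prompt recommendations affect accuracy.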
Cost-efficient algorithm for autonomous cultivators: Implementing template matching with field digital twins for precision agriculture
IF 7.7 | CAS Tier 1, Agricultural and Forestry Sciences | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-10-08 | DOI: 10.1016/j.compag.2024.109509
The paper focuses on the development of a vision system to automate the position control of a cultivator used for crop weeding. The vision algorithm monitors the cultivator’s misalignment with respect to the crop rows, with real-time processing. The key contributions include a self-generated digital twin of the field model for numerical validation of different computer vision solutions and a comparison of three vision algorithms for measuring deviation. The objectives of the study are to improve the precision of misalignment measurements and ensure safe and accurate movement of the cultivator. The rationale behind the study is to address constraints such as camera installation and crop color, and to emphasize the importance of a confidence estimation feature for accurate measurement. The paper also provides an overview of related works in the literature, highlighting the two phases of plant identification and deviation measurement. Tests carried out on soybean and maize crops demonstrate that the proposed algorithm improves measurement precision, even in the presence of heavy weed infestation or a significant number of missing plants. Additionally, the paper suggests analysis simplifications to increase the algorithm’s speed while maintaining satisfactory measurement accuracy.
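The paper's template-matching details are not reproduced here, but the core idea of measuring row deviation by correlation can be sketched in one dimension: slide a row-intensity template across an observed profile, take the best normalized cross-correlation shift as the misalignment, and use the peak score as a confidence estimate. All signals below are synthetic:

```python
import numpy as np

def lateral_offset(template, observed):
    """Slide the row template across the observed intensity profile and
    return (best shift, peak Pearson correlation). The peak score doubles
    as a confidence estimate for the measurement."""
    t = (template - template.mean()) / template.std()
    m = len(template)
    best_shift, best_score = 0, -np.inf
    for s in range(len(observed) - m + 1):
        w = observed[s:s + m]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(t @ w) / m          # correlation in [-1, 1]
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift, best_score

# Synthetic data: a Gaussian "crop row" template nominally starting at
# pixel 40, observed shifted 7 px to the right (row peak at pixel 57).
template = np.exp(-((np.arange(21) - 10) ** 2) / 18.0)
x = np.arange(100)
observed = np.exp(-((x - 57) ** 2) / 18.0) + 0.02
shift, conf = lateral_offset(template, observed)
print(shift - 40, round(conf, 3))   # misalignment in pixels, confidence
```

A low peak score would flag an unreliable measurement (e.g. heavy weeds or missing plants), which is the role of the confidence-estimation feature the paper emphasizes.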
Accelerated Data Engine: A faster dataset construction workflow for computer vision applications in commercial livestock farms
IF 7.7 | CAS Tier 1, Agricultural and Forestry Sciences | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-10-08 | DOI: 10.1016/j.compag.2024.109452
Large-scale, high-quality datasets are the foundation for developing advanced artificial intelligence applications. However, creating such a benchmark dataset in a professional field, such as the precision management of animals, has always been a challenge because annotation and review are costly and labor-intensive. This study introduced a novel workflow named Accelerated Data Engine (ADE), designed to efficiently produce representative, high-quality computer vision datasets from raw animal surveillance footage. By incorporating referring and grounding models (R&G models) as auto-annotators, along with a distillation mechanism for dataset auditors, ADE significantly sped up the dataset construction process. The workflow receives natural language inputs as referrals to identify animal instances, delineates their body shapes, and then refines the auto-annotated data through a selection process. To demonstrate the efficacy of ADE, three 30-minute surveillance video samples featuring pigs, sheep, and cattle were examined. The results indicated that the R&G models effectively annotated animals across various farms, while the distillation mechanisms could identify various detection errors, balance the data representations, refine annotations, and verify data quality. Two high-quality cattle datasets (6.5 k and 486 frames), containing 26 k and 2.5 k cattle instances, were generated through the ADE workflow from 24-hour surveillance videos on a commercial cattle farm and made publicly available. The proposed datasets offer achievable performance between 74.6 % and 84.1 %. The ADE workflow saved 78.4 % of the manual work required by the traditional dataset construction workflow (approximately 141 h). This pioneering approach enables the fast creation of benchmark animal datasets and will enhance computer vision applications in the livestock production industry.
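The abstract does not specify the distillation mechanism, but a typical audit step for auto-annotations combines a confidence threshold with greedy IoU-based duplicate suppression. The minimal version below is hypothetical, not ADE's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def audit_annotations(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Drop low-confidence auto-annotations, then greedily suppress
    duplicates that overlap an already-kept box above iou_thr
    (highest-score boxes are considered first)."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if scores[i] < score_thr:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in kept):
            kept.append(i)
    return sorted(kept)

# Toy auto-annotations: boxes 0 and 1 are near-duplicates of one animal,
# box 2 is a second animal, box 3 is a low-confidence false positive.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30), (5, 5, 6, 6)]
scores = [0.9, 0.8, 0.7, 0.3]
print(audit_annotations(boxes, scores))  # [0, 2]
```

Steps like this cut the manual review load: humans only audit the surviving annotations instead of every raw model output.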
Modeling and control of dissolved oxygen in recirculating aquaculture systems: A circadian rhythm analysis approach and GSMPC controller
IF 7.7 | CAS Tier 1, Agricultural and Forestry Sciences | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-10-07 | DOI: 10.1016/j.compag.2024.109515
Precise control of the dissolved oxygen (DO) concentration is significant for the growth and development of aquatic products. This study focused on modeling and controlling the DO concentration in a recirculating aquaculture system (RAS). First, a DO dynamic model was established based on the oxygen mass transfer equation and the circadian rhythm of the fish oxygen consumption rate. The R2 values of the fits between simulated and measured DO responses were both above 0.96, confirming the significance of the circadian rhythm in the DO dynamic model. Subsequently, gain-scheduling model predictive control (GSMPC) suited to the circadian rhythm was proposed and applied to regulate the DO concentration in the fish tank under various operating points, and its performance was compared with that of traditional model predictive control (MPC). In terms of setpoint tracking, the integral of absolute error of the GSMPC controller dropped by 23.46% compared to the MPC controller, and the integral squared error dropped by 11.27%; for energy consumption, the integral of absolute control dropped by 11.28%. These results demonstrated that the GSMPC controller not only reduced tracking error but also cut energy consumption. The findings highlight the notable advantages of GSMPC over traditional MPC, emphasizing its effectiveness in precisely regulating DO concentration in a RAS based on the circadian rhythm.
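The abstract's mass-transfer equation is not reproduced here, so the sketch below uses a generic DO mass balance, dC/dt = kLa (C_sat − C) − R(t), with a sinusoidal consumption term standing in for the circadian rhythm of fish oxygen consumption. All coefficients are illustrative, not the paper's fitted values:

```python
import math

def simulate_do(c0=6.0, c_sat=9.0, kla=0.8, r0=1.2, amp=0.4,
                hours=48, dt=0.01):
    """Forward-Euler integration of a simple DO mass balance:
        dC/dt = kLa * (C_sat - C) - R(t),
    where R(t) = r0 * (1 + amp * sin(2*pi*t / 24)) models a 24-hour
    circadian oxygen-consumption rhythm. Units: mg/L and hours."""
    c, t, trace = c0, 0.0, []
    for _ in range(int(hours / dt)):
        r = r0 * (1.0 + amp * math.sin(2.0 * math.pi * t / 24.0))
        c += dt * (kla * (c_sat - c) - r)
        t += dt
        trace.append(c)
    return trace

trace = simulate_do()
# DO settles around c_sat - r0/kla = 7.5 mg/L, oscillating with the rhythm
print(round(min(trace), 2), round(max(trace), 2))
```

A gain-scheduled controller exploits exactly this structure: because R(t) varies predictably over the day, the controller gains can be scheduled against the operating point instead of using one fixed linearization.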
Development of an online prediction system for soil organic matter and soil moisture content based on multi-modal fusion
IF 7.7 | CAS Tier 1, Agricultural and Forestry Sciences | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-10-07 | DOI: 10.1016/j.compag.2024.109514
Accurate in situ determination of soil organic matter (SOM) and soil moisture (SM) content in the field is of great importance for improving agricultural production efficiency. However, the feature information of a single sensor is limited, and there is still a gap in modeling accuracy compared to traditional laboratory methods. Variations of SM in the field also interfere with data collection, limiting the application of single sensors. To enable efficient detection of in-situ SOM and SM content, this study developed an online detection system based on the fusion of characteristic wavelengths, visible images, and thermal imaging features. First, based on a vehicle-mounted platform, a visible-thermal imaging camera and a characteristic wavelength integration device were integrated on a subsoiler plow to enable simultaneous acquisition of in-situ soil multi-sensor data. Then, a lightweight multi-modal network was constructed that extracts visible and thermal image features and characteristic wavelength features through branch networks, achieving deep fusion of the different modalities to predict SOM and SM content. Finally, the system outputs the SOM and SM predictions and transmits them to a cloud platform for storage. The proposed multi-modal model performed best in the laboratory environment, with an R2 of 0.91 and RMSE of 2.9 g/kg for SOM, and an R2 of 0.92 and RMSE of 0.77 % for SM. Compared with single-image or spectral data alone, fusing visible images with characteristic wavelengths effectively improved SOM prediction accuracy. Real-time SM prediction was realized by fusing thermal imaging data, and the deep fusion of the multi-modal network also eliminated the moisture effect on visible images and characteristic wavelengths.
After field validation, the R2 of the Multi-modal system was 0.84 and the RMSE was 5.0 g/kg for SOM, and the R2 of the SM was 0.88 and the RMSE was 1.03 %. Despite the differences in soil types between the validation field and the sampling field, the system still demonstrated strong generalization and achieved high accuracy prediction of in-situ SOM and SM in the field, which effectively improves the efficiency and applicability of field. The efficiency and applicability of soil testing effectively improve the efficiency and provide technical guidance for precision management in the field.
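The reported accuracy figures follow the standard definitions of R2 and RMSE; as a minimal sketch, both can be computed from paired reference and predicted values. The SOM numbers below are hypothetical, not data from the paper:

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R2) and root-mean-square error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot, np.sqrt(ss_res / y_true.size)

# Hypothetical SOM values in g/kg: laboratory reference vs. in-situ prediction
som_lab  = [18.0, 22.5, 30.1, 25.4, 19.8]
som_pred = [17.2, 23.0, 29.0, 26.1, 20.5]
r2, rmse = r2_rmse(som_lab, som_pred)
```

The same two-number summary is how the laboratory (R2 0.91, RMSE 2.9 g/kg) and field (R2 0.84, RMSE 5.0 g/kg) results above would be reported.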
Generation of NIR Spectral Band from RGB Image with Wavelet Domain Spectral Extrapolation Generative Adversarial Network
IF 7.7 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-10-07 DOI : 10.1016/j.compag.2024.109461
Near-infrared (NIR) imaging exhibits outstanding penetration capability and robust resistance to interference. However, acquiring high-resolution, high-fidelity NIR images is difficult due to the limited mobility, high cost, and low resolution of NIR imaging hardware. In this paper, we propose a new end-to-end Wavelet Domain Spectral Extrapolation Generative Adversarial Network (WSEGAN) to generate highly realistic NIR images from RGB images. Because RGB and NIR images carry different types of noise and artifacts that degrade the quality of generated NIR images, we design a generator with a discrete wavelet transform and an attention mechanism to capture multi-resolution contextual information, and a multi-scale discriminator to capture both detailed and global features. The proposed approach is evaluated on three different datasets and achieves optimal results in both visual effect and quantitative evaluation. The normalized difference vegetation index (NDVI) is used to validate the effectiveness of the generated NIR images, and the results demonstrate a strong correlation between the generated images and the actual vegetation distribution. More importantly, the proposed network is shown to generate NIR images that can be fused with the RGB source for agricultural target-detection tasks. In dark conditions, using this multi-modal data instead of RGB images alone improves mAP0.5 detection accuracy by 4% on the CAPSICUM dataset and by 8% on the KIWI dataset across five object-detection methods. This is consistent with the physical significance of NIR imaging and demonstrates the potential of the NIR images generated by the proposed method.
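The NDVI used for validation is a standard index, NDVI = (NIR − Red)/(NIR + Red). A minimal sketch with hypothetical reflectance patches (not data from the paper) shows how a generated NIR band would be checked against the red channel of the RGB source:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against zero denominators

# Hypothetical 2x2 reflectance patches: a generated NIR band and the red
# channel of the RGB source; three vegetated pixels and one bare-soil pixel.
nir_gen = np.array([[0.60, 0.55],
                    [0.20, 0.58]])
red_src = np.array([[0.10, 0.12],
                    [0.18, 0.11]])
v = ndvi(nir_gen, red_src)  # vegetated pixels near 0.7, bare soil near 0.05
```

High NDVI over vegetated pixels and low NDVI over bare soil is the correlation with vegetation distribution that the abstract describes.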
Portable multiplexed ion-selective sensor for long-term and continuous irrigation water quality monitoring
IF 7.7 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-10-06 DOI : 10.1016/j.compag.2024.109455
In agricultural contexts, continuous and precise measurement of multiple ions is crucial. While arrays of solid-contact ion-selective electrodes (SCEs) have been developed previously, there has been limited emphasis on their continuous, long-term monitoring of ions. Addressing this gap, our work introduces an innovative sensor array utilizing Ni-HAB MOF as an ion-to-electron transducer, enabling real-time detection of nitrate, potassium, and pH levels. The sensors exhibit exceptional stability, eliminating the need for frequent recalibration. For instance, the K+-selective sensor displays an unprecedentedly low potential drift of 0.05 µV/h, surpassing existing solid-contact sensors by two orders of magnitude. Similarly, the pH sensor demonstrates a drift of 0.3 µV/h, outperforming competitors by a factor of 100. The NO3−-selective sensor shows minimal drift at 0.5 µV/h, surpassing comparable sensors by a factor of ten. Additionally, the K+-selective sensor features a sensitivity of 57.8 mV/dec and an LOD of 1.9 µM, while the NO3−-selective sensor offers a sensitivity of 56.8 mV/dec and an LOD of 6.23 µM. Integrated into a portable array with wireless data transmission, the system enables real-time water-quality monitoring in remote areas. Rigorous testing of the developed sensor array in a tailored complex agricultural solution confirms its selective response to target ions even in the presence of interfering ions. Importantly, pH fluctuations do not compromise the precision of the K+- and NO3−-selective sensors, highlighting the system’s robustness in real-world agricultural settings.
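An ion-selective electrode of this kind is read out through a Nernstian calibration, E = E0 + S·log10(a), so inverting it converts a measured potential into an ion activity. A minimal sketch using the reported K+ slope (57.8 mV/dec) with a hypothetical calibration intercept E0:

```python
def activity_from_potential(e_mv, e0_mv, slope_mv_per_dec):
    """Invert the Nernstian calibration E = E0 + S * log10(a) for ion activity a."""
    return 10.0 ** ((e_mv - e0_mv) / slope_mv_per_dec)

SLOPE_K = 57.8   # mV/dec, K+ sensitivity reported in the paper
E0_K = 120.0     # mV, hypothetical one-point calibration intercept

# A reading one full slope step above E0 corresponds to one decade of activity
a = activity_from_potential(E0_K + SLOPE_K, E0_K, SLOPE_K)

# The reported drift of 0.05 uV/h amounts to only ~0.036 mV over 30 days,
# far below one slope step, which is why recalibration can be infrequent
drift_mv_30d = 0.05e-3 * 24 * 30
```

The drift arithmetic makes the stability claim concrete: at 0.05 µV/h, a month of continuous operation shifts the potential by well under a tenth of a millivolt.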
Technologies in cattle traceability: A bibliometric analysis
IF 7.7 CAS Tier 1 (Agricultural & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-10-05 DOI : 10.1016/j.compag.2024.109459
It has been widely documented that cattle farming plays a non-negligible role in natural landscapes through its contributions to climate change, deforestation, and enteric methane emissions. In response, sustainable protocols and market digitalization are highlighted as promising tools to mitigate the environmental impact of cattle via authenticated data in digital traceability systems. Digital inclusion, particularly for cattle breeders, can be a useful starting point for employing sustainable protocols in food-chain production and management. This study analyzes the evolution of knowledge in the area of animal traceability, comparing applied technologies identified through a bibliometric analysis of articles published in Web of Science. The study evidences a clear shift in research themes over the decades, currently culminating in technologies such as blockchain, the Internet of Things (IoT), machine learning, and deep learning. These technologies emerge as the main research directions for promoting transparency and reliability in the production chain, especially with regard to individual digital identification. However, challenges such as high investment requirements and difficulties in data accessibility, interoperability, privacy, and security reflect the low maturity of the available technologies and knowledge, preventing broader adoption and development of reliable worldwide animal-traceability systems.
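The thematic shift described above can be tallied, in toy form, by counting author keywords per publication period. The records below are hypothetical and only illustrate the bookkeeping, not the authors' actual Web of Science pipeline:

```python
from collections import Counter

# Hypothetical (publication year, author keywords) records
records = [
    (2008, ["rfid", "barcode"]),
    (2012, ["rfid", "dna traceability"]),
    (2019, ["blockchain", "iot"]),
    (2022, ["blockchain", "deep learning"]),
    (2023, ["iot", "machine learning"]),
]

# Aggregate keyword frequencies by decade to expose the thematic shift
by_decade = {}
for year, keywords in records:
    decade = (year // 10) * 10
    by_decade.setdefault(decade, Counter()).update(keywords)

recent = by_decade[2020]  # frequencies for the most recent decade
```

In this toy corpus the 2000s are dominated by RFID/barcode identification while the 2020s counter holds only blockchain, IoT, and learning-based keywords, mirroring the trend the study reports.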
有大量文件表明,由于气候变化、森林砍伐和肠道甲烷排放,畜牧业对自然景观的影响不容忽视。另外,可持续规程和市场数字化也是通过数字可追溯系统中的认证数据来减轻牲畜对环境影响的有前途的工具。数字包容性,尤其是对养牛者而言,可以成为在食物链生产和管理中采用可持续规程的有益起点。本研究分析了动物可追溯性领域的知识演变,通过对发表在《科学网》上的文章进行文献计量分析,对应用技术进行比较。研究表明,几十年来,专题研究发生了明显的变化,目前区块链、物联网(IoT)、机器学习和深度学习等技术达到了顶峰。这些技术成为提高生产链透明度和可靠性的主要研究领域,特别是考虑到个人数字身份识别。然而,由于现有技术和知识的成熟度较低,在数据访问、互操作性、隐私和安全性方面存在高投资要求和困难等挑战,因此阻碍了可靠的全球动物溯源系统的进一步采用和发展。
{"title":"Technologies in cattle traceability: A bibliometric analysis","authors":"","doi":"10.1016/j.compag.2024.109459","DOIUrl":"10.1016/j.compag.2024.109459","url":null,"abstract":"<div><div>It has been widely documented that livestock cattle can play a non-negligible role in natural landscapes due to climate change, deforestation, and enteric methane emissions. Alternatively, sustainable protocols and market digitalization are highlighted as promising tools to mitigate environmental cattle impacts by authenticated data in digital traceability systems. Digital inclusion, particularly for cattle breeders, can be a useful starting point for employing sustainable protocol in food chain production and management. This study analyzes the evolution of knowledge in the area of animal traceability to compare applied technologies found by a bibliometric analysis of articles published in Web of Science. The study evidences a clear change in thematic research over decades, currently culminating in technologies such as blockchain, IoT (Internet of Things), machine learning, and deep learning. These technologies emerge as the main research scopes in promoting transparency and reliability in the production chain, especially considering individual digital identification. 
However, challenges such as high investment requirements and difficulties in data accessibility, interoperability, privacy, and security implicate the low maturity level of available technologies and knowledge, therefore preventing further adoption and development of reliable worldwide animal traceability systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":null,"pages":null},"PeriodicalIF":7.7,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Computers and Electronics in Agriculture