
Smart agricultural technology: Latest Publications

Environmental assessment of soluble solids contents and pH of orange using hyperspectral method and machine learning
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date: 2024-08-27 DOI: 10.1016/j.atech.2024.100544

Progress in non-destructive methods for detecting fruit characteristics is a new and attractive area for researchers and specialists in this field. In parallel, these researchers are moving toward identifying the impacts of such methods on their surroundings alongside diagnostic efficiency. One essential impact is the environmental impact of the non-destructive detection process itself. Navel oranges are among the most popular and widely consumed fruits, and maturity indices such as soluble solids content (SSC) and acidity are considered parameters for determining the quality of this product. This study used the hyperspectral method in the vis-NIR range to evaluate and measure the SSC and acidity values of navel oranges. Subsequently, the life cycle assessment method was applied to investigate the environmental impacts of measuring and evaluating these two characteristic parameters of navel oranges. The Impact2002+ method was used to evaluate the life cycle inventory. Based on the findings, the environmental impacts of SSC measurement are about 40, 42, 20, and 18 % higher than those of pH measurement in terms of endpoint impacts on human health, ecosystem quality, climate change, and resources, respectively. Random forest modeling showed a suitable and acceptable correlation (over 90 %) between the wavelengths selected in the feature-selection stage and the environmental impacts.
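The feature-selection stage the abstract mentions can be illustrated with a minimal correlation-based wavelength ranking on synthetic data. This is a generic sketch only: the paper does not state which selector it used, and all names, band indices, and values below are illustrative.

```python
import numpy as np

def select_wavelengths(spectra, target, k=5):
    """Rank spectral bands by |Pearson r| against the target and keep the top k."""
    X = spectra - spectra.mean(axis=0)          # center each band
    y = target - target.mean()                  # center the response (e.g. SSC)
    r = (X * y[:, None]).sum(axis=0) / (
        np.sqrt((X ** 2).sum(axis=0)) * np.sqrt((y ** 2).sum()) + 1e-12
    )
    return np.argsort(-np.abs(r))[:k]

rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 100))            # 60 samples x 100 vis-NIR bands
# Synthetic SSC driven by bands 10 and 42 plus a little noise
ssc = 3.0 * spectra[:, 10] - 2.0 * spectra[:, 42] + rng.normal(scale=0.2, size=60)
bands = select_wavelengths(spectra, ssc, k=2)   # recovers the informative bands
```

On this synthetic example the ranking recovers the two informative bands; in the study, the selected wavelengths would then feed a random forest model.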

Citations: 0
Decision fusion-based system to detect two invasive stink bugs in orchards
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date: 2024-08-24 DOI: 10.1016/j.atech.2024.100548

Accurate and early detection of insect pests plays an important role in crop protection and pest management in agriculture, especially in orchards. This paper focuses on evaluating and improving the performance of insect detection algorithms by adopting an ensemble of artificial neural networks. A set of advanced object detection models, including YOLOv8, Faster R-CNN, RetinaNet, SSD, and FCOS, was selected, and the models were trained and evaluated on a common dataset of digital images of different insect pest species. Two classes were considered, represented by the quite similar invasive stink bugs Halyomorpha halys and Nezara viridula. These architectures were optimized to identify significant peculiarities and variations between the reference insects, including size, shape, and color. Each model was implemented and optimized to achieve the best possible performance before being integrated into an ensemble system. By combining the predictions of these models through a weighted ensemble mechanism that leverages the F1 score of each model, a more performant global system was developed, capable of detecting insect pests with improved performance over the individual models. This significant improvement highlights the potential of the proposed ensemble system for efficient and rapid insect pest identification, ultimately providing valuable opportunities for implementing crop monitoring technologies. The research highlights the importance of implementing and developing deep-learning technologies for solving specific challenges in agriculture and brings innovative ways of strategic pest management for sustainable agricultural practices.
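An F1-weighted decision fusion of this kind can be sketched in a few lines. The paper does not give its exact fusion rule, so the sum-to-one normalization, the model names, and the toy confidences below are assumptions for illustration only.

```python
def fuse_scores(model_scores, f1_scores):
    """Fuse per-class confidences from several detectors, weighting each
    model by its (normalized) F1 score on a validation set."""
    total = sum(f1_scores.values())
    fused = {}
    for model, scores in model_scores.items():
        weight = f1_scores[model] / total
        for cls, conf in scores.items():
            fused[cls] = fused.get(cls, 0.0) + weight * conf
    return fused

# Toy confidences for one detection from two of the five detectors
scores = {
    "yolov8":      {"H. halys": 0.80, "N. viridula": 0.10},
    "faster_rcnn": {"H. halys": 0.60, "N. viridula": 0.30},
}
f1 = {"yolov8": 0.90, "faster_rcnn": 0.60}      # assumed validation F1 scores
fused = fuse_scores(scores, f1)
best = max(fused, key=fused.get)                # fused class decision
```

The better-performing detector pulls more weight, so the fused decision leans toward its prediction while still letting the weaker model contribute.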

Citations: 0
Combining OBIA, CNN, and UAV imagery for automated detection and mapping of individual olive trees
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date: 2024-08-24 DOI: 10.1016/j.atech.2024.100546

The identification of individual trees is an important research topic in forestry, remote sensing, and computer vision, and a tool for effectively and efficiently managing and maintaining forests and orchards. However, this task is not as simple as it seems; tree detection and counting can be time-consuming, cost-prohibitive, and accuracy-limited, especially if performed manually on a large scale. The availability of very high-resolution UAV imagery with remote sensing can make the counting process easier, faster, and more precise. With the development of technology, this process can be made more automated by using intelligent algorithms such as CNNs.

This work presents an OBIA-CNN (Object-Based Image Analysis-Convolutional Neural Network) approach that combines CNNs with OBIA to automatically detect and count olive trees from Phantom4 Advanced drone imagery. Initially, the CNN-based classifier was created, trained, validated, and applied to generate olive-tree probability maps on the ortho-photo. Post-classification refinement based on OBIA was then conducted: super-pixel segmentation and the Excess Green index were applied, and a detailed accuracy analysis was carried out to establish the suitability of the proposed method.

The application to an RGB ortho-mosaic of an olive grove in the eastern region of Morocco was successful, using a manually curated training dataset of 4500 images of 24×24 pixels. The CNN detected and counted 2934 olive trees on the ortho-photo, achieving an overall accuracy of 97 %, rising to 99 % after the OBIA refinement. The results of the proposed OBIA-CNN method were also compared with the classification results of the template-matching technique, the CNN method alone, and OBIA analysis alone to evaluate the performance of the approach. Our findings suggest that using very high-resolution images with object-based deep learning is promising for the automatic detection and counting of olive trees, supporting accurate and sustainable agricultural monitoring.
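The Excess Green index used in the OBIA refinement has the standard form ExG = 2G - R - B; a minimal sketch on a toy image follows. The 0.2 vegetation threshold is illustrative, not a value from the paper.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index ExG = 2G - R - B for a float RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

# Toy 2x2 image: one green-dominant 'canopy' pixel, three soil-like pixels
img = np.array([[[0.2, 0.6, 0.1], [0.4, 0.4, 0.3]],
                [[0.5, 0.4, 0.3], [0.3, 0.3, 0.3]]])
exg = excess_green(img)
mask = exg > 0.2        # vegetation mask; threshold chosen for illustration
```

In a pipeline like the one described, such a mask would be aggregated over super-pixels rather than applied per pixel.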

Citations: 0
Accelerometers-based position and time interval comparisons for predicting the behaviors of young bulls housed in a feedlot system
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date: 2024-08-23 DOI: 10.1016/j.atech.2024.100542

Animal behavior monitoring is an important tool for animal production. This behavior monitoring strategy can indicate the well-being and health of animals, which can lead to better productive performance. This study aimed to assess the most effective accelerometer attachment position (on either the halter or a neck collar) and data transmission time intervals (ranging from 6 to 600 s) for predicting behavioral patterns, including water and food intake frequencies, as well as other activities in young beef cattle bulls within a feedlot system. A range of machine learning algorithms were applied to satisfy the aims of the study, including the random forest, support vector machine, multilayer perceptron, and naive Bayes classifier algorithms. All studied models produced high performance metrics (above 0.90) when using both attachment positions, except for the models built using the naive Bayes classifier. Therefore, coupling accelerometers with collars is a more viable alternative for use on animals, as doing so is easier than applying accelerometers to halters. Utilizing a dataset with more observations (i.e., shorter time intervals) did not result in considerable improvements in the performance metrics of the trained models. Therefore, using datasets with fewer observations is more advantageous, as it can lead to decreased computational and temporal demands for model training, in addition to saving the battery of the device considered in this study.
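The windowing idea behind comparing 6 s to 600 s data intervals can be sketched as cutting an accelerometer trace into fixed-length windows and summarizing each one. The feature set (mean/std/min/max) and the sampling rate below are assumptions for illustration; the paper's exact preprocessing is not reproduced here.

```python
import numpy as np

def window_features(signal, fs, window_s):
    """Cut a 1-D accelerometer trace into non-overlapping windows of
    window_s seconds (sampled at fs Hz) and summarize each window."""
    n = int(fs * window_s)
    usable = (len(signal) // n) * n
    w = signal[:usable].reshape(-1, n)
    # mean / std / min / max per window, one feature row per window
    return np.column_stack([w.mean(1), w.std(1), w.min(1), w.max(1)])

fs = 10                                         # assumed sampling rate (Hz)
trace = np.sin(np.linspace(0.0, 20.0, 1200))    # 120 s synthetic trace
feats = window_features(trace, fs, window_s=6)  # 6 s windows -> 20 feature rows
```

Each feature row would then be one training observation for a classifier such as a random forest; longer intervals simply yield fewer rows per animal.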

Citations: 0
Algorithmic advancements in agrivoltaics: Modeling shading effects of semi-transparent photovoltaics
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date: 2024-08-22 DOI: 10.1016/j.atech.2024.100541

Radiation is a crucial factor in greenhouse agrivoltaics. Depending on the type of photovoltaics integrated into a greenhouse, the effect on radiation varies through the phenomenon of shading. Shading in greenhouses can be either beneficial or detrimental, making its analysis imperative. This study presents the improvements and modifications made to an algorithm capable of calculating the shading from photovoltaic units installed in greenhouses. A key modification is the calculation of shading from semi-transparent photovoltaic modules, in contrast to the algorithm's original form, in which photovoltaic modules were considered opaque. The algorithm was validated using radiation data from pyranometers within a greenhouse, using Pearson's r correlation coefficient and the coefficient of variation. The correlation coefficients for times without a shading effect approached the values for the case without photovoltaics installed on the roof. Simultaneously, based on the coefficients of variation, the uniformity of radiation within the shade was validated, confirming its existence. Finally, the effect of semi-transparent photovoltaic units on Global Horizontal Irradiance and Photosynthetically Active Radiation was studied, with reductions approaching 52 % for the former and about 60 % for the latter. These changes in radiation can be beneficial or not, depending on the type of crop and the needs of the greenhouse, such as cooling.
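As a zero-dimensional illustration of how semi-transparent modules reduce floor irradiance: the paper's algorithm resolves shading geometrically per time step, which is not reproduced here, and the coverage and transmittance values below are made up.

```python
def irradiance_under_pv(ghi, coverage, transmittance):
    """Average floor irradiance when a fraction `coverage` of the roof is
    occupied by PV modules that pass a fraction `transmittance` of light."""
    return ghi * ((1.0 - coverage) + coverage * transmittance)

# Example: 800 W/m2 GHI, 70 % roof coverage, 30 % transmitting modules
shaded = irradiance_under_pv(800.0, 0.7, 0.3)
reduction = 1.0 - shaded / 800.0                # fractional reduction in GHI
```

Even this crude average shows why module transmittance matters: opaque modules (transmittance 0) would remove the full covered fraction of the light.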

Citations: 0
Using milk flow profiles for subclinical mastitis detection
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date: 2024-08-21 DOI: 10.1016/j.atech.2024.100537

Mastitis is a significant disease on dairy farms and can have serious negative animal-performance and economic consequences if not controlled. While clinical mastitis is often easily identified through visibly abnormal milk, subclinical mastitis presents a more insidious challenge. Somatic cell count (SCC) is commonly used to monitor and detect subclinical mastitis; however, SCC is not available at a high sampling frequency at the cow level on most farms because of the manual effort involved in collecting it. With the rise of precision dairy farming technologies such as milk meters, there is increasing interest in using data-driven approaches (especially machine learning) to detect subclinical mastitis from indicators that modern sensors collect more easily. In this article we introduce milk flow profiles, a new, easy-to-collect data type that can replace harder-to-collect data sources (e.g., those that require laboratory tests or manual measurements) in precision dairy farming. Our experiments demonstrate that milk flow profiles, combined with other easily accessible milking machine data, can be used to train machine learning models that accurately detect subclinical mastitis (as evidenced by high SCC measurements), with an AUC of 0.793. Moreover, these models perform better than models trained on features from milk characteristic data, which are expensive to collect and are collected only at low frequency on commercial farms. Our experiments used data from 16 weeks of milking events from 285 cows on Irish farms, and the results demonstrate the value of milk flow profiles as an easily accessible and valuable data source for precision dairy farming applications.
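The AUC metric reported above has a simple rank-based (Mann-Whitney) definition that can be computed directly; the toy labels and scores below are illustrative, not data from the study.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive (high-SCC) case outranks a randomly chosen negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy classifier scores for six milkings (1 = high SCC, 0 = normal)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
a = auc(labels, scores)   # 8 of 9 positive-negative pairs ranked correctly
```

An AUC of 0.793 as reported thus means roughly a 79 % chance that a high-SCC milking receives a higher model score than a normal one.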

Citations: 0
An IMU-based machine learning approach for daily behavior pattern recognition in dairy cows
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date: 2024-08-18 DOI: 10.1016/j.atech.2024.100539

Technological advancements have revolutionized livestock farming, notably in health monitoring. Traditional methods, which have been criticized for subjectivity and treatment delays, can be replaced with efficient health monitoring systems, thereby reducing costs and workload. Implementing cow behavior recognition allows for effective dairy cow health monitoring. In this research, we propose an integrated system using inertial measurement unit (IMU) devices and machine learning techniques for dairy cow behavior recognition. Six main dairy cow behaviors were studied: lying, standing, walking, drinking, feeding, and ruminating. All behavior types were manually labeled in the IMU data by reviewing the recorded footage. The labeled IMU data underwent four processing steps: selecting different window sizes, feature extraction, feature selection, and normalization. These processed data were then used to build the behavior recognition model. Various model structures, including SVM, Random Forest, and XGBoost, were tested. The top-performing model, XGBoost, with the proposed 58 features, achieved an F1-score of 0.87, with specific scores of 0.93 for lying, 0.85 for walking, 0.94 for ruminating, 0.89 for feeding, 0.86 for standing, 0.93 for drinking, and 0.59 for other activities. During online testing, we observed similar patterns for each healthy cow, and the cumulative time spent on each behavior matched statistics from previous surveys. Additionally, our backend optimization approach resulted in a final overall percentage error of 1.55 % per day during online testing. In conclusion, our study presents an IMU-based system capable of accurately recognizing dairy cow behavior, together with the feature design and appropriate models. A functional optimization method was introduced, indicating that our system has potential applications for estrus detection and other reproductive management practices in the dairy industry.
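The normalization stage of the four-step processing pipeline (windowing, feature extraction, feature selection, normalization) can be sketched as column-wise z-scoring. This is a common choice, not necessarily the paper's exact scheme, and the toy matrix below is illustrative.

```python
import numpy as np

def zscore(features):
    """Column-wise z-score normalization of a feature matrix."""
    mu = features.mean(axis=0)
    sd = features.std(axis=0)
    return (features - mu) / np.where(sd == 0.0, 1.0, sd)  # guard constant columns

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])    # toy windowed-feature matrix
Xn = zscore(X)                 # each column now has mean 0 and std 1
```

Normalizing puts features on a common scale before they reach classifiers such as SVMs, where raw magnitudes would otherwise dominate.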

An IMU-based machine learning approach for daily behavior pattern recognition in dairy cows
Citations: 0
Spatiotemporal analysis using deep learning and fuzzy inference for evaluating broiler activities
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-17 DOI: 10.1016/j.atech.2024.100534

Observing poultry activity is crucial for assessing their health status; however, the inspection process is often time-consuming and labor-intensive, particularly in cases involving large numbers of chickens. Inexperienced breeders may also misjudge their activity levels, potentially missing opportunities for prevention and treatment. This study integrates traditional video surveillance with an advanced monitoring system to identify various broiler behaviors in a breeding environment. A two-stage deep learning approach is employed: in the first stage, the broilers are detected, and in the second stage, five key body points (head, abdomen, two legs, and tail) are identified. A skeleton-based model is then developed centered around the abdomen, with six angles calculated using trigonometric methods. These angles are analyzed by a long short-term memory network to estimate behaviors such as “Standing”, “Walking”, “Resting”, “Eating”, “Preening”, and “Flapping”, selecting the behavior with the highest probability. Dual-layer fuzzy logic inference systems were used to evaluate the proportion of time broilers spent in static versus dynamic states, providing a robust determination of their activity levels. Validated in a mixed-sex breeding environment, the proposed system achieved accuracies of at least 85.2% for identifying broiler type, 79.2% for identifying body parts, and 50.8% for identifying behaviors. The activity level evaluation results were consistent with those conducted by experienced poultry experts.
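The abstract does not spell out how the six angles are computed, but one construction consistent with five keypoints and an abdomen-centered skeleton is the six pairwise angles between the four vectors running from the abdomen to the other keypoints. A minimal sketch under that assumption (the keypoint names are hypothetical):

```python
import math

def abdomen_angles(points):
    """Given 2D keypoints for 'abdomen', 'head', 'tail', 'leg_l',
    and 'leg_r', return the 6 pairwise angles (radians) between the
    vectors from the abdomen to the other four points."""
    cx, cy = points["abdomen"]
    others = ["head", "leg_l", "leg_r", "tail"]
    vecs = {k: (points[k][0] - cx, points[k][1] - cy) for k in others}
    angles = []
    for i in range(len(others)):
        for j in range(i + 1, len(others)):
            (ax, ay), (bx, by) = vecs[others[i]], vecs[others[j]]
            dot = ax * bx + ay * by
            norm = math.hypot(ax, ay) * math.hypot(bx, by)
            # clamp to [-1, 1] to guard against floating-point drift
            angles.append(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angles
```

A sequence of such six-angle vectors is what an LSTM, as described above, could consume to classify the behavior at each time step.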

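The dual-layer fuzzy inference is not detailed in the abstract; a first layer might fuzzify the fraction of time a bird spends in dynamic behaviors (walking, eating, flapping) into activity-level memberships. The triangular membership functions and breakpoints below are illustrative assumptions, not values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def activity_level(dynamic_frac):
    """Map the fraction of observed time spent in dynamic behaviors
    (0..1) to fuzzy memberships in three activity levels."""
    return {
        "low": tri(dynamic_frac, -0.4, 0.0, 0.4),
        "medium": tri(dynamic_frac, 0.2, 0.5, 0.8),
        "high": tri(dynamic_frac, 0.6, 1.0, 1.4),
    }
```

A second layer could then combine these memberships across birds or time spans before defuzzifying into a single activity score.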
Citations: 0
Field-based multispecies weed and crop detection using ground robots and advanced YOLO models: A data and model-centric approach
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-16 DOI: 10.1016/j.atech.2024.100538

The implementation of a machine-vision system for real-time precision weed management is a crucial step towards the development of smart spraying robotic vehicles. The intelligent machine-vision system, constructed using deep learning object detection models, has the potential to accurately detect weeds and crops from images. Both data-centric and model-centric approaches to deep learning model development offer advantages depending on environmental and non-environmental factors. To assess the performance of weed detection in real-field conditions for eight crop species, the Yolov8, Yolov9, and customized Yolov9 deep learning models were trained and evaluated using RGB images from four locations (Casselton, Fargo, and Carrington) over a two-year period in the Great Plains region of the U.S. The experiment encompassed eight crop species—dry bean, canola, corn, field pea, flax, lentil, soybean, and sugar beet—and five weed species—horseweed, kochia, redroot pigweed, common ragweed, and water hemp. Six Yolov8 and eight Yolov9 model variants were trained using annotated weed and crop images gathered from four different sites, including a combined dataset from the four sites. The Yolov8 and Yolov9 models' performance in weed detection was assessed based on mean average precision (mAP50) metrics for five datasets, eight crop species, and five weed species. The results of the weed and crop detection evaluation showed a high overall mAP50 of 86.2 %. The mAP50 values for individual weed and crop species detection ranged from 80.8 % to 98 %. The results demonstrated that the performance of the model varied by model type (model-centric), location due to environment (data-centric), data size (data-centric), data quality (data-centric), and object size in the image (data-centric). Nevertheless, the customized lightweight Yolov9 model has the potential to play a significant role in building a real-time machine-vision-based precision weed management system.
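The mAP50 metric reported above scores each predicted box against ground truth at an intersection-over-union (IoU) threshold of 0.5. A minimal sketch of that matching criterion, ignoring the confidence ordering and precision-recall integration that a full mAP computation performs:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_at_50(preds, gts):
    """Greedily match predicted boxes to ground-truth boxes at
    IoU >= 0.5 (the criterion behind mAP50); each ground truth can
    be claimed once. Returns (true pos., false pos., false neg.)."""
    used, tp = set(), 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in used:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= 0.5:
            tp += 1
            used.add(best_j)
    return tp, len(preds) - tp, len(gts) - tp
```

Per-class averages of the precision derived from these counts, swept over confidence thresholds, would yield the mAP50 values the abstract quotes.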

Citations: 0
Deep learning guided variable rate robotic sprayer prototype
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-16 DOI: 10.1016/j.atech.2024.100540

This paper presents the development of a robotic sprayer that combines artificial intelligence with robotics for optimal spray application on citrus nursery plants grown in an indoor environment. The robotic platform is integrated with embedded firmware running a MobileNetV2 model that identifies and classifies the plant samples with a classification accuracy of 100 %, and the classification result is used to dispense variable-rate pesticide spraying based on the health status of the plant foliage. The disease detection model was developed through the Edge Impulse platform and deployed on a Raspberry Pi 4. The robot navigates through an array of plants, stops beside each plant, and captures an image of the citrus plants. It feeds the image into the deployed embedded model to generate a disease inference that informs the variable-rate spray application during real-time actuation. To test the spraying performance of the prototype within the growing environment, water-sensitive cards were placed in each plant's canopy. After spraying, the water-sensitive card samples were collected and quantified using a smart spray app to determine the classification accuracy as well as the extent of spray coverage on the citrus samples. The robot spray coverage results show an average spray coverage of 87 % on lemon foliage, compared with 67 % for navel orange, during the spray performance test of the robot.
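The abstract does not give the mapping from the disease inference to a spray rate. A minimal sketch of one such variable-rate rule, taking the classifier's per-class probabilities and returning a nozzle duty cycle; the class names and rate values are hypothetical, not from the paper:

```python
def spray_rate(class_probs, rates=None):
    """Pick the most probable health class and map it to a nozzle
    duty cycle in [0, 1]. `class_probs` is a dict of class -> prob
    as produced by a softmax classifier; `rates` maps each class to
    an illustrative duty cycle."""
    rates = rates or {"healthy": 0.0, "mild": 0.4, "severe": 0.9}
    label = max(class_probs, key=class_probs.get)
    return label, rates.get(label, 0.0)

label, duty = spray_rate({"healthy": 0.1, "mild": 0.2, "severe": 0.7})
print(label, duty)  # severe foliage triggers the highest spray rate
```

On an embedded platform, the duty cycle would typically drive a PWM output controlling the spray valve.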

Citations: 0