
Smart Agricultural Technology: latest publications

Using milk flow profiles for subclinical mastitis detection
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-21 DOI: 10.1016/j.atech.2024.100537

Mastitis is a significant disease on dairy farms and can have serious negative animal performance and economic consequences if not controlled. While clinical mastitis is often easily identified due to visibly abnormal milk, subclinical mastitis presents a more insidious challenge. Somatic cell count (SCC) is commonly used to monitor and detect subclinical mastitis; however, SCC is not available at a high sampling frequency at the cow level on most farms due to the manual effort involved in collecting it. With the rise of precision dairy farming technologies such as milk meters, however, there is increasing interest in using data-driven approaches (especially approaches using machine learning) to detect subclinical mastitis based on indicators more easily collected by modern sensors. In this article we introduce milk flow profiles, a new, easy-to-collect data type that can replace more difficult-to-collect data sources (e.g., those that require laboratory tests or manual measurements) in precision dairy farming. The results of our experiments demonstrate that milk flow profiles, combined with other easily accessible milking machine data, can be employed to train machine learning models that accurately detect subclinical mastitis (as evidenced by high SCC measurements), with an AUC of 0.793. Moreover, these models perform better than models trained using features from milk characteristic data that are expensive to collect and are only collected at low frequency on commercial farms. Our experiments used data from 16 weeks of milking events from 285 cows on Irish farms, and their results demonstrate the value of milk flow profiles as an easily accessible and valuable data source for precision dairy farming applications.
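As an illustrative sketch (not the authors' code or data), the evaluation workflow the abstract describes can be mimicked with scikit-learn: train a classifier on easy-to-collect milking-event features and score it with ROC AUC, the metric reported above. The feature names and the synthetic label are hypothetical stand-ins.

```python
# Sketch only: synthetic data in place of real milk flow profiles.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical easy-to-collect features: yield, milking duration,
# peak flow rate, time to peak flow.
X = rng.normal(size=(n, 4))
# Synthetic label standing in for "high SCC" (subclinical mastitis).
y = (X[:, 2] - 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

With real milk-meter features in place of the synthetic matrix, the same scoring call reproduces the AUC comparison the paper reports.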

Cited by: 0
An IMU-based machine learning approach for daily behavior pattern recognition in dairy cows
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-18 DOI: 10.1016/j.atech.2024.100539

Technological advancements have revolutionized livestock farming, notably in health monitoring. Traditional methods, which have been criticized for subjectivity and treatment delays, can be replaced with efficient health monitoring systems, thereby reducing costs and workload. Implementing cow behavior recognition allows for effective dairy cow health monitoring. In this research, we propose an integrated system using inertial measurement unit (IMU) devices and machine learning techniques for dairy cow behavior recognition. Six main dairy cow behaviors were studied: lying, standing, walking, drinking, feeding, and ruminating. All behavior types were manually labeled into the IMU data by reviewing the recorded footage. The labeled IMU data underwent four processing steps: selecting different window sizes, feature extraction, feature selection, and normalization. These processed data were then used to build the behavior recognition model. Various model structures, including SVM, Random Forest, and XGBoost, were tested. The top-performing model, XGBoost, with its 58 proposed features, achieved an F1-score of 0.87, with specific scores of 0.93 for lying, 0.85 for walking, 0.94 for ruminating, 0.89 for feeding, 0.86 for standing, 0.93 for drinking, and 0.59 for other activities. During our online testing, we observed similar patterns for each healthy cow. The cumulative time spent on each behavior also matched the statistics from previous surveys. Additionally, our backend optimization approach resulted in a final overall percentage error of 1.55 % per day during online testing. In conclusion, our study presents an IMU-based system capable of accurately recognizing dairy cow behavior. Feature design and appropriate models are proposed herein. A functional optimization method was introduced, indicating that our system has potential applications in estrus detection and other reproductive management practices in the dairy industry.
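The windowing and feature-extraction steps named in the abstract can be sketched as follows (an illustrative example, not the authors' pipeline; window and step sizes are hypothetical): slide a fixed-size window over a raw IMU stream and compute simple per-axis statistics, the kind of feature rows later fed to a classifier such as XGBoost.

```python
# Sketch only: sliding-window feature extraction over a 3-axis IMU stream.
import numpy as np

def window_features(signal, win, step):
    """signal: (n_samples, n_axes) accelerometer/gyro stream.
    Returns one feature row per window: mean, std, min, max per axis."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.asarray(feats)

rng = np.random.default_rng(1)
stream = rng.normal(size=(600, 3))        # e.g. 3-axis accelerometer samples
F = window_features(stream, win=100, step=50)   # 11 windows x 12 features
```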

Cited by: 0
Spatiotemporal analysis using deep learning and fuzzy inference for evaluating broiler activities
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-17 DOI: 10.1016/j.atech.2024.100534

Observing poultry activity is crucial for assessing their health status; however, the inspection process is often time-consuming and labor-intensive, particularly in cases involving large numbers of chickens. Inexperienced breeders may also misjudge their activity levels, potentially missing opportunities for prevention and treatment. This study integrates traditional video surveillance with an advanced monitoring system to identify various broiler behaviors in a breeding environment. A two-stage deep learning approach is employed: in the first stage, the broilers are detected, and in the second stage, five key body points (head, abdomen, two legs, and tail) are identified. A skeleton-based model is then developed centered around the abdomen, with six angles calculated using trigonometric methods. These angles are analyzed by a long short-term memory network to estimate behaviors such as “Standing”, “Walking”, “Resting”, “Eating”, “Preening”, and “Flapping”, selecting the behavior with the highest probability. Dual-layer fuzzy logic inference systems were used to evaluate the proportion of time broilers spent in static versus dynamic states, providing a robust determination of their activity levels. Validated in a mixed-sex breeding environment, the proposed system achieved accuracies of at least 85.2% for identifying broiler type, 79.2% for identifying body parts, and 50.8% for identifying behaviors. The activity level evaluation results were consistent with those conducted by experienced poultry experts.
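The "six angles calculated using trigonometric methods" step can be illustrated with a small sketch (not the authors' code; keypoint coordinates are hypothetical): compute the angle at a central keypoint, here the abdomen, between rays to two other body points using `atan2`.

```python
# Sketch only: angle at a central keypoint between two other keypoints.
import math

def angle_at(center, p1, p2):
    """Angle in degrees at `center` formed by rays center->p1 and center->p2."""
    a1 = math.atan2(p1[1] - center[1], p1[0] - center[0])
    a2 = math.atan2(p2[1] - center[1], p2[0] - center[0])
    deg = abs(math.degrees(a1 - a2)) % 360.0
    return 360.0 - deg if deg > 180.0 else deg  # fold into [0, 180]

# Hypothetical keypoints: perpendicular head/tail rays give 90 degrees.
abdomen, head, tail = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
theta = angle_at(abdomen, head, tail)
```

A sequence of such angles per frame is exactly the kind of input a long short-term memory network can consume for behavior estimation.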

Cited by: 0
Field-based multispecies weed and crop detection using ground robots and advanced YOLO models: A data and model-centric approach
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-16 DOI: 10.1016/j.atech.2024.100538

The implementation of a machine-vision system for real-time precision weed management is a crucial step towards the development of smart spraying robotic vehicles. The intelligent machine-vision system, constructed using deep learning object detection models, has the potential to accurately detect weeds and crops from images. Both data-centric and model-centric approaches of deep learning model development offer advantages depending on environment and non-environment factors. To assess the performance of weed detection in real-field conditions for eight crop species, the Yolov8, Yolov9, and customized Yolov9 deep learning models were trained and evaluated using RGB images from four locations (Casselton, Fargo, and Carrington) over a two-year period in the Great Plains region of the U.S. The experiment encompassed eight crop species—dry bean, canola, corn, field pea, flax, lentil, soybean, and sugar beet—and five weed species—horseweed, kochia, redroot pigweed, common ragweed, and water hemp. Six Yolov8 and eight Yolov9 model variants were trained using annotated weed and crop images gathered from four different sites, including a combined dataset from all four sites. The Yolov8 and Yolov9 models' performance in weed detection was assessed based on mean average precision (mAP50) metrics for five datasets, eight crop species, and five weed species. The results of the weed and crop detection evaluation showed high mAP50 values of 86.2 %. The mAP50 values for individual weed and crop species detection ranged from 80.8 % to 98 %. The results demonstrated that the performance of the model varied by model type (model-centric), location due to environment (data-centric), data size (data-centric), data quality (data-centric), and object size in the image (data-centric). Nevertheless, the Yolov9 customized lightweight model has the potential to play a significant role in building a real-time machine-vision-based precision weed management system.
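The mAP50 metric used above counts a detection as correct when its box overlaps a ground-truth box with intersection over union (IoU) of at least 0.5. As an illustrative sketch (not the authors' evaluation code), the underlying IoU computation for axis-aligned boxes is:

```python
# Sketch only: IoU between two axis-aligned boxes, the overlap criterion
# behind mAP50 (a detection is a true positive at IoU >= 0.5).
def iou(a, b):
    """Boxes as (x1, y1, x2, y2) with x1 < x2, y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Unit squares offset by one: intersection 1, union 7.
v = iou((0, 0, 2, 2), (1, 1, 3, 3))
```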

Cited by: 0
Deep learning guided variable rate robotic sprayer prototype
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-16 DOI: 10.1016/j.atech.2024.100540

This paper presents the development of a robotic sprayer that combines artificial intelligence with robotics for optimal spray application on citrus nursery plants grown in an indoor environment. The robotic platform is integrated with an embedded firmware of the MobileNetV2 model to identify and classify the plant samples with a classification accuracy of 100 %, which is used to dispense variable-rate spraying of pesticide based on the health status of the plant foliage. The disease detection model was developed through the Edge Impulse platform and deployed on a Raspberry Pi 4. The robot navigates through an array of plants, stops beside each plant, and captures an image of the citrus plants. It feeds the image into the deployed embedded model to generate a disease inference that informs the variable-rate application of spray during real-time actuation. To test the spraying performance of the prototype within the growing environment, water-sensitive cards were placed in each plant's canopy. After spraying, the samples of water-sensitive cards were collected and quantified using a smart spray app to determine the classification accuracy as well as the extent of spray coverage on the citrus samples. The robot spray coverage results show an average spray coverage of 87 % on lemon foliage, compared with 67 % for navel orange, during the spray performance test of the robot.
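The inference-to-actuation step can be sketched as a small lookup (an illustrative example only; the class names and dose values are hypothetical, not the authors' firmware): take the classifier's highest-probability health class and map it to a spray dose.

```python
# Sketch only: map a health inference to a variable spray rate.
# Class names and mL doses below are hypothetical placeholders.
RATES_ML = {"healthy": 0.0, "mild_disease": 5.0, "severe_disease": 12.0}

def spray_dose(probs):
    """probs: dict of class name -> probability from the embedded model."""
    label = max(probs, key=probs.get)      # highest-probability class wins
    return label, RATES_ML[label]

label, dose = spray_dose({"healthy": 0.1,
                          "mild_disease": 0.2,
                          "severe_disease": 0.7})
```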

Cited by: 0
Hyperspectral reflectance and machine learning for multi-site monitoring of cotton growth
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-13 DOI: 10.1016/j.atech.2024.100536

Hyperspectral measurements can help with rapid decision-making and collecting data across multiple locations. However, there are multiple data processing methods (Savitzky-Golay [SG], first derivative [FD], and normalization) and analyses (partial least squares regression [PLS], weighted k-nearest neighbor [KKNN], support vector machine [SVM], and random forest [RF]) that can be used to determine the best relationship between physical measurements and hyperspectral data. In the current study, FD was the best method for data processing and SVM was the best model for predicting average cotton (Gossypium spp. Malvaceae) height and nodes. However, the combination of FD and RF was best at predicting cotton leaf area index, canopy cover, and chlorophyll content across the growing season. Additionally, results from models developed by both SVM and RF were closely related to pseudo-CHIME satellite wavebands, where in-situ hyperspectral data were matched to the spectral resolutions of a future hyperspectral satellite. The information and results presented will aid producers and other members of the cotton industry to make rapid and meaningful decisions that could result in greater yield and sustainable intensification.
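Two of the preprocessing options named above, Savitzky-Golay smoothing and a first derivative, are available in SciPy's `savgol_filter`. The sketch below (illustrative only; the wavelength grid and sigmoid "red-edge" curve are synthetic stand-ins for real reflectance spectra) applies both:

```python
# Sketch only: SG smoothing and FD preprocessing of a synthetic spectrum.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(400, 900, 251)   # nm, hypothetical 2 nm grid
# Synthetic red-edge-like reflectance curve (sigmoid centered at 700 nm).
reflectance = 1.0 / (1.0 + np.exp(-(wavelengths - 700.0) / 15.0))

smoothed = savgol_filter(reflectance, window_length=11, polyorder=2)
# deriv=1 returns the first derivative in the same smoothing pass.
first_deriv = savgol_filter(reflectance, window_length=11, polyorder=2,
                            deriv=1)
```

For this synthetic curve the derivative peaks at the red edge near 700 nm, which is why FD often sharpens vegetation features before regression.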

Cited by: 0
Hybrid Adaptive Multiple Intelligence System (HybridAMIS) for classifying cannabis leaf diseases using deep learning ensembles
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-11 DOI: 10.1016/j.atech.2024.100535

Optimizing cannabis crop yield and quality necessitates accurate, automated leaf disease classification systems for timely detection and intervention. Existing automated solutions, however, are insufficiently tailored to the specific challenges of cannabis disease identification, struggling with robustness across varied environmental conditions and plant growth stages. This paper introduces a novel Hybrid Adaptive Multi-Intelligence System for Deep Learning Ensembles (HyAMIS-DLE), utilizing a comprehensive dataset reflective of the diversity in cannabis leaf diseases and their progression. Our approach combines non-population-based decision fusion in image preprocessing with population-based decision fusion in classification, employing multiple CNN architectures. This integration facilitates a significant improvement in performance metrics: HyAMIS-DLE achieves an accuracy of 99.58 %, outperforming conventional models by up to 4.16 %, and exhibits superior robustness and an enhanced Area Under the Curve (AUC) score, effectively distinguishing between healthy and diseased leaves. The successful deployment of HyAMIS-DLE within our Automated Cannabis Leaf Disease Classification System (A-CLDC-S) demonstrates its practical value, contributing to increased crop yields, reduced losses, and the promotion of sustainable agricultural practices.
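As an illustrative sketch (a generic instance of ensemble decision fusion, not HyAMIS-DLE itself), weighted soft voting over per-model class probabilities is one simple way several CNNs can be fused into a single prediction; the weights and probability vectors below are hypothetical.

```python
# Sketch only: weighted soft-voting fusion of per-model probabilities.
import numpy as np

def fuse(prob_list, weights):
    """prob_list: one (n_classes,) probability vector per model.
    Returns the weighted-average distribution and the winning class index."""
    fused = np.average(np.stack(prob_list), axis=0, weights=weights)
    return fused, int(np.argmax(fused))

# Three hypothetical CNNs voting over {healthy, diseased}.
fused, pred = fuse([np.array([0.6, 0.4]),
                    np.array([0.3, 0.7]),
                    np.array([0.2, 0.8])],
                   weights=[0.2, 0.4, 0.4])
```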

Citations: 0
"Semantic segmentation for plant leaf disease classification and damage detection: A deep learning approach"
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-10 DOI: 10.1016/j.atech.2024.100526

Agriculture sustains the livelihoods of a significant portion of India's rural population, yet challenges persist in manual practices and disease management. To address these issues, this paper presents an automated plant leaf damage detection and disease identification system leveraging advanced deep learning techniques. The proposed method consists of six stages: first, utilizing YOLOv8 for region of interest identification from drone images; second, employing DeepLabV3+ for background removal and facilitating disease classification; third, implementing a CNN model for accurate disease classification achieving high training and validation accuracies (96.97 % and 92.89 %, respectively); fourth, utilizing UNet semantic segmentation for precise damage detection at a pixel level with an evaluation accuracy of 99 %; fifth, evaluating disease severity; and sixth, suggesting tailored remedies based on disease type and damage state. Experimental analysis using the Plant Village dataset demonstrates the effectiveness of the proposed method in detecting various defects in plants such as apple, tomato, and corn. This automated approach holds promise for enhancing agricultural productivity and disease management in India and beyond.
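Stage five above (disease severity evaluation) can be derived from the stage-four UNet damage mask. A minimal sketch under the assumption that severity is the damaged fraction of leaf pixels — the grade names and thresholds are illustrative, not the paper's published scale:

```python
# Sketch: disease severity as the damaged-pixel fraction of the leaf region.
# Grades and thresholds below are illustrative assumptions.

def severity(leaf_mask, damage_mask):
    """leaf_mask / damage_mask: 2-D lists of 0/1 with the same shape.
    Returns the damaged fraction of the leaf area (0.0 if no leaf pixels)."""
    leaf_px = sum(v for row in leaf_mask for v in row)
    if leaf_px == 0:
        return 0.0
    damaged_px = sum(
        d for lrow, drow in zip(leaf_mask, damage_mask)
        for l, d in zip(lrow, drow) if l
    )
    return damaged_px / leaf_px

def grade(frac):
    """Map a damaged fraction to a coarse severity grade."""
    if frac < 0.1:
        return "mild"
    if frac < 0.3:
        return "moderate"
    return "severe"

if __name__ == "__main__":
    leaf = [[1, 1, 1, 0],
            [1, 1, 1, 0]]      # 6 leaf pixels
    damage = [[1, 0, 0, 0],
              [0, 1, 0, 0]]    # 2 damaged pixels on the leaf
    frac = severity(leaf, damage)
    print(round(frac, 3), grade(frac))  # 0.333 severe
```

Restricting the count to pixels inside the leaf mask keeps background noise in the damage prediction from inflating the severity estimate.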

Citations: 0
Four-parameter beta mixed models with survey and sentinel 2A satellite data for predicting paddy productivity
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-09 DOI: 10.1016/j.atech.2024.100525

Accurate prediction of paddy productivity is essential for ensuring food security, fostering agricultural sustainability, and driving economic development. However, existing prediction models often overlook the unique characteristics of the paddy productivity distribution, which varies between areas, is skewed, and is bounded within a certain minimum and maximum range, following a four-parameter beta distribution. Consequently, these models yield less accurate and potentially misleading predictions. Additionally, most approaches fail to capture the complex interrelationships among variables that arise when satellite data are incorporated alongside survey data, an approach recognized as key for improving prediction accuracy and optimizing farming practices. To address these shortcomings, this study introduces a four-parameter beta Generalized Linear Mixed Model (GLMM) augmented within a four-parameter beta Generalized Mixed Effect Tree (GMET). The four-parameter beta GMET, an extension of the four-parameter beta GLMM integrated with a regression tree, offers enhanced flexibility in modeling complex relationships. Application of this methodology to an empirical study in Central Kalimantan and Karawang reveals notable improvements over previous methods, as evidenced by substantially lower AIC and RRMSE values. Notably, the analysis identifies lagged values of band 4, band 8, and NDVI from Sentinel-2A satellite data as significant predictors of paddy productivity, overriding the importance of farmer survey variables. This underscores the potential of satellite data for paddy productivity prediction, offering a more efficient and cost-effective alternative to farmer survey-based methods. By enhancing satellite technology, future efforts in paddy productivity prediction can achieve higher efficiency and accuracy, contributing to informed decision-making in agricultural management.
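The four-parameter beta distribution referred to above is a standard beta with shape parameters rescaled from (0, 1) to a bounded interval (a, b). A minimal sketch of its density, with illustrative parameter values rather than fitted ones from the study:

```python
import math

# Sketch of the four-parameter beta density: a standard beta on (0, 1)
# rescaled to a bounded interval (a, b). Parameter values are illustrative.

def beta4_pdf(y, alpha, beta, a, b):
    """Density of the four-parameter beta distribution on (a, b).

    f(y) = z^(alpha-1) * (1-z)^(beta-1) / (B(alpha, beta) * (b - a)),
    where z = (y - a) / (b - a) maps y onto the unit interval.
    """
    if not (a < y < b):
        return 0.0
    beta_fn = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
    z = (y - a) / (b - a)
    return z ** (alpha - 1) * (1 - z) ** (beta - 1) / (beta_fn * (b - a))

if __name__ == "__main__":
    # Hypothetical paddy yield bounded between 2 and 9 t/ha, right-skewed
    # (alpha < beta pushes mass toward the lower bound).
    print(round(beta4_pdf(4.0, alpha=2.0, beta=5.0, a=2.0, b=9.0), 4))
```

The extra division by (b - a) is the Jacobian of the rescaling, so the density still integrates to one over the bounded support — which is exactly why an unbounded response model misrepresents yields that cannot fall below a or above b.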

Citations: 0
Proximally sensed RGB images and colour indices for distinguishing rice blast and brown spot diseases by k-means clustering: Towards a mobile application solution
IF 6.3 Q1 AGRICULTURAL ENGINEERING Pub Date : 2024-08-09 DOI: 10.1016/j.atech.2024.100532

Rice blast (RB) and Brown spot (BS) are economically important diseases in rice that cause substantial yield losses annually. Both share the same host and produce quite similar lesions, which leads to confusion in identification by farmers. Proper identification is essential for better management of the diseases. Visual identification requires trained experts, and laboratory-based molecular techniques, though accurate, are costly and time-consuming. This study investigated the differentiation of the lesions from these two diseases based on proximally sensed digital RGB images and derived colour indices. Digital images of lesions were acquired using a smartphone camera. Thirty-six colour indices were evaluated by k-means clustering to distinguish the diseases using three colour channel components: RGB, HSV, and La*b*. Briefly, the background of the images was masked to target the leaf spot lesion, and colour indices were derived as features from the centre region across the lesion, coinciding with the common identification practice of plant pathologists. The results revealed that the 36 indices delineated both diseases with 84.3 % accuracy. However, it was also found that the accuracy was mostly governed by indices associated with the R, G and B profiles, excluding the others. The G/R, NGRDI, (R + G + B)/R, VARI, (G + B)/R, R/G, Nor_r, G-R, Mean_A, and Logsig indices were identified as contributing more to distinguishing the diseases. Therefore, these RGB-based colour indices can be used to distinguish blast and brown spot diseases using the k-means algorithm. The results from this study present an alternative, non-destructive, objective method for identifying RB and BS disease symptoms. Based on the findings, a mobile application, Blast O spot, was developed to differentiate the diseases in the field.
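As an illustration of the index-plus-clustering idea described above, the sketch below computes two of the listed indices (G/R and NGRDI) from mean lesion RGB values and separates the lesions with a plain 2-means loop. The RGB triples are synthetic, not from the study's dataset, and the simple deterministic initialization is a convenience, not the study's procedure:

```python
# Sketch: two of the listed colour indices (G/R and NGRDI) plus a tiny
# k-means (k=2) to separate lesion types. Data below are synthetic.

def indices(r, g, b):
    """Return (G/R, NGRDI) for one lesion's mean RGB values."""
    return (g / r, (g - r) / (g + r))

def kmeans2(points, iters=20):
    """Plain 2-means on 2-D points; returns one cluster label per point."""
    centroids = [points[0], points[-1]]  # simple deterministic init
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [
            min((0, 1), key=lambda k: sum((p - c) ** 2
                                          for p, c in zip(pt, centroids[k])))
            for pt in points
        ]
        # Recompute each centroid as the mean of its members.
        for k in (0, 1):
            members = [pt for pt, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = tuple(sum(d) / len(members)
                                     for d in zip(*members))
    return labels

if __name__ == "__main__":
    # Synthetic mean lesion colours: greyish (blast-like) vs brown (spot-like).
    lesions = [(120, 118, 100), (125, 120, 105), (150, 100, 60), (155, 95, 55)]
    pts = [indices(r, g, b) for r, g, b in lesions]
    print(kmeans2(pts))  # [0, 0, 1, 1]
```

Brownish lesions have a much lower G/R and a more negative NGRDI than greyish ones, which is why even this two-index feature space cleanly splits the synthetic examples into two clusters.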

Citations: 0