
Latest publications in Smart agricultural technology

UAV remote sensing imagery-based semantic segmentation approach for lodged rice region
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2025-12-01 DOI: 10.1016/j.atech.2025.101689
Qiang Chen , Chuang Xia , Yinyan Shi , Xiaochan Wang , Xuekai Huang , Lei Wang , Xiaolei Zhang , Enlai Zheng , Xiaojun Gao , Fei Liu
Rice is one of the major staple crops in China, and its yield is closely tied to national food security and farmers’ economic returns. Lodging in rice not only reduces the efficiency of mechanical harvesting but also severely impacts yield and grain quality. Therefore, accurately identifying lodged areas is of great importance. This study proposes a rice lodging detection method based on UAV-acquired multispectral remote sensing imagery. High-resolution, multi-temporal images were collected over paddy fields in Yuhang District, Zhejiang Province, using DJI Mavic 3 M and M300 UAVs. A dataset was constructed via image cropping and data augmentation. Two deep learning models—U-Net with a VGG-16 backbone and DeepLabv3+ with a MobileNetv2 backbone—were compared for semantic segmentation performance. Experimental results show that the U-Net model achieved superior performance on the validation set, with a mean Intersection over Union (MIoU) of 91.57 %, mean Pixel Accuracy (MPA) of 95.83 %, Precision of 95.27 %, Recall of 95.83 %, and training/validation losses of 0.106 and 0.151, respectively, outperforming the DeepLabv3+ model. Additionally, the impact of different training-validation data split ratios was examined. The U-Net model showed better generalization and stability when trained with a 9:1 split compared to an 8:2 split. Furthermore, based on the semantic segmentation results, the area of lodged rice was estimated and compared against ground-truth measurements. The U-Net model produced minimal relative error, with a maximum deviation of <3 %, demonstrating strong practical applicability. These findings suggest that the U-Net model not only offers high accuracy and stability but also provides a reliable technical foundation for agricultural disaster monitoring and precision management using high-resolution UAV imagery.
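The segmentation metrics quoted above follow their standard definitions. As a minimal illustration (not the authors' code), the following sketch computes mean IoU and mean pixel accuracy from a confusion matrix over flattened label maps, assuming a two-class (background vs lodged) labelling:

```python
def confusion_counts(pred, truth, num_classes=2):
    """Accumulate a confusion matrix (rows = ground truth, cols = prediction)."""
    cm = [[0] * num_classes for _ in range(num_classes)]
    for p, t in zip(pred, truth):
        cm[t][p] += 1
    return cm

def mean_iou(cm):
    """Mean Intersection over Union (MIoU) over all classes present."""
    ious = []
    for c in range(len(cm)):
        tp = cm[c][c]
        fn = sum(cm[c]) - tp                 # truth is c, predicted elsewhere
        fp = sum(row[c] for row in cm) - tp  # predicted c, truth is elsewhere
        if tp + fp + fn:
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious)

def mean_pixel_accuracy(cm):
    """Mean per-class pixel accuracy (MPA): recall averaged over classes."""
    accs = [cm[c][c] / sum(cm[c]) for c in range(len(cm)) if sum(cm[c])]
    return sum(accs) / len(accs)

# Toy flattened masks: 0 = not lodged, 1 = lodged
pred  = [0, 0, 1, 1, 1, 0, 1, 1]
truth = [0, 0, 1, 1, 0, 0, 1, 1]
cm = confusion_counts(pred, truth)
```

In practice the same counts would be accumulated over every pixel of the validation tiles before averaging.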
bioWatch: a computer vision system for community-based identification and reporting of spotted lanternfly life stage
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2025-12-01 DOI: 10.1016/j.atech.2025.101688
Yanqiu Yang , Kittiphum Pawikhum
Invasive species such as the Spotted Lanternfly (SLF, Lycorma delicatula) pose significant ecological and economic threats worldwide. This study introduces bioWatch, an AI-powered mobile application designed to address the challenges of invasive species detection, monitoring, and management. The application utilizes a YOLO11n deep learning model, trained on 1161 annotated images, to classify SLF across three life stages: adult, early nymph, and late nymph. In testing, the model achieved an overall precision of 0.854 and an mAP@50 of 0.589. Early and late nymph stages demonstrated strong classification performance, with mAP@50 values of 0.834 and 0.811, respectively. Although the recall for the early nymph stage was lower (0.600), the model exhibited robust performance for adults and late nymphs (recalls of 0.730 and 0.896, respectively). The egg stage, while included in training, showed limited detection performance (precision of 1.000, recall of 0.000) due to severe class imbalance in the dataset (only seven annotated instances), rather than limitations of the model architecture. This model serves as a baseline to support the deployment of the bioWatch application, which is designed for modular integration and future updates. bioWatch integrates geospatial mapping, augmented reality, explainable AI visualization, and gamification features to enhance user engagement, with early user feedback highlighting its potential for biodiversity monitoring and K–12 education.
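Detection metrics such as the reported precision and recall are typically derived by matching predicted boxes to ground truth at an IoU threshold of 0.5. A minimal, illustrative sketch of that matching (not the bioWatch implementation, which uses YOLO's own evaluator):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(preds, truths, iou_thr=0.5):
    """Greedily match each detection to at most one unmatched ground truth."""
    matched, tp = set(), 0
    for p in preds:
        for i, t in enumerate(truths):
            if i not in matched and box_iou(p, t) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(preds) - tp, len(truths) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if truths else 0.0
    return precision, recall
```

The egg-stage result above (precision 1.000, recall 0.000) is exactly what this accounting produces when a class has no true positives and no false positives: every ground-truth instance becomes a false negative.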
A full end-to-end analytical framework for livestock behavior modeling and health assessment using wearable electronic recording system and machine learning
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2025-12-01 DOI: 10.1016/j.atech.2025.101686
Guohao Ni , Yuanzheng Jia , Zhonghao Shi , Fangyuan Chang , Jinfeng Miao , Jian Wang , Gengping Ye , Jie Wu , Huifang Yin , Wei Jiang , Xiangan Han , Wei Tang
Precision Livestock Farming (PLF) aims to enhance animal management through technology, yet its progression is limited by a disconnect between discrete data collection tools and the practical requirement for unified, interpretable decision-support systems. While wearable sensors and machine learning offer potential for behavior monitoring, current solutions are often fragmented, focusing on isolated classification tasks rather than providing a complete, actionable pipeline from raw data to farm management insights. This lack of integration, alongside the technical challenges of model optimization, significantly hinders widespread practical adoption. This work presents a full end-to-end analytical framework that integrates wearable electronic recording system (WERS) hardware with intelligent analytical toolkit (IAT) software to form a fully automated workflow. The IAT incorporates automated model selection and hyperparameter tuning across twelve machine learning algorithms, three feature extraction methods, and six feature selection strategies, enabling flexible and customizable modeling pipelines for sequential data processing, behavior recognition and health evaluation, and visual feedback. The implemented system demonstrates high classification accuracy, strong adaptability, and robust support for cattle behavior sequence analysis and health assessment. The system has been empirically validated on eight dairy cattle over a six-day period, demonstrating its practical applicability in real-world conditions based on the real-time deployment platform built for the system. By providing a systematic and scalable solution for intelligent livestock monitoring, this work bridges the gap between fragmented sensing technologies and operational decision-support systems, ultimately contributing to improved decision-making and operational efficiency in PLF management.
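The automated model selection the IAT performs can be illustrated in miniature: score each candidate model on a held-out validation set and keep the best. A toy sketch with hypothetical candidates and data (the actual toolkit spans twelve algorithms with hyperparameter tuning):

```python
def knn_predict(train, k, x):
    """1-D k-nearest-neighbour majority vote over (value, label) pairs."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)

def majority_predict(train, x):
    """Baseline: always predict the most common training label."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

def select_model(train, val, candidates):
    """Return the (name, accuracy) of the best candidate on the validation set."""
    def accuracy(fn):
        return sum(fn(x) == y for x, y in val) / len(val)
    scored = [(accuracy(fn), name) for name, fn in candidates]
    best_acc, best_name = max(scored, key=lambda s: s[0])
    return best_name, best_acc

# Hypothetical accelerometer feature -> behaviour label data
train = [(0.1, "rest"), (0.2, "rest"), (0.9, "walk"), (1.0, "walk"), (1.1, "walk")]
val = [(0.15, "rest"), (0.95, "walk"), (1.05, "walk")]
candidates = [
    ("knn_k1", lambda x: knn_predict(train, 1, x)),
    ("knn_k3", lambda x: knn_predict(train, 3, x)),
    ("majority", lambda x: majority_predict(train, x)),
]
best_name, best_acc = select_model(train, val, candidates)
```

Feature extraction and feature selection would slot in before this loop as additional pipeline stages, which is what makes the IAT's grid of combinations large enough to warrant automation.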
Vibration regimen of finger-clamped seed-metering device based on DEM-MBD
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2025-11-29 DOI: 10.1016/j.atech.2025.101683
Zhuanghong Ma , Junchang Zhang , Hu Shi , Yu Chen , Xinyu Zhao , Zhengdao Liu , Yuxiang Huang
This study quantifies how vibration affects the singulation performance of a finger-clamped seed-metering device using coupled discrete element method–multibody dynamics (DEM–MBD) simulation and bench validation. Maize kernels were modelled in EDEM, while the metering mechanism and prescribed excitations were represented in RecurDyn. A factorial and Box–Behnken design varied vibration direction, operating speed, frequency, and amplitude; response-surface models were fitted for the qualified index, multiple index, and leakage index. Vertical vibration exerted the dominant influence on discharge behaviour. Under weak excitation (frequency < 15.38 Hz; amplitude < 3.42 mm), increasing vibration intensity reduced multiple seeding and improved the qualified index, whereas stronger excitation (frequency > 15.38 Hz; amplitude > 3.42 mm) increased leakage and reduced the qualified index, delineating a usable–detrimental vibration regime boundary. Multi-objective optimisation predicted optimal parameters of 8.74 km·h⁻¹, 15.38 Hz, and 3.42 mm, yielding qualified, multiple, and leakage indices of 90.68%, 7.19%, and 7.30%, respectively. Bench tests on a shaker table with high-speed imaging produced 87.61%, 8.03%, and 8.98% under the same settings, in agreement with simulation trends (absolute errors: 2.15%, 0.94%, and 1.34%). The results provide quantitative guidance for vibration management and structural optimisation of finger-clamped metering, showing that appropriately tuned excitation can aid seed clearing and filling, while excessive vibration degrades singulation.
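Response-surface modelling of this kind fits low-order polynomials to designed-experiment data. A minimal single-factor sketch (illustrative only; the study fits multi-factor Box–Behnken surfaces) using ordinary least squares via the normal equations:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal equations."""
    X = [[1.0, x, x * x] for x in xs]
    XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
    Xty = [sum(X[i][r] * ys[i] for i in range(len(X))) for r in range(3)]
    return solve(XtX, Xty)
```

Optimising the fitted surfaces for the qualified, multiple, and leakage indices simultaneously is then a multi-objective problem over the factor space, as in the study.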
Estimation of aboveground biomass in non-uniformly planted rice using multi-temporal UAV imagery data
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2025-11-28 DOI: 10.1016/j.atech.2025.101680
Yong Su , Jinyan Tian , Bo Yang , Jing Yang
Estimating rice aboveground biomass (AGB) in unevenly planted areas using sowing parameters is challenging, mainly due to row spacing errors caused by the subjectivity of operators, as well as problems such as missing seeding and multiple seedlings per hill. Estimating regional rice AGB in actual agricultural production under the assumption of uniform planting density significantly increases errors. This study innovatively proposes a Rice Pixel Ratio (RPR) based on high-resolution Unmanned Aerial Vehicle (UAV) imagery to characterize the spatial distribution pattern of rice at the pixel scale. During 2023–2024, this study set up a total of 6 experimental treatments: 4 uniform planting density treatments (15, 18, 21, and 24 plants/m², sequentially denoted as K1-K4), 1 uneven density treatment (D1), and 1 composite uniform density treatment (D2) integrating K1-K4. The study revealed: (i) eight VIs extracted from high-resolution remote sensing imagery exhibited significant correlations with real-time rice growth parameters; (ii) during the vegetative phase, after RPR calibration, the non-uniform density plot D1 showed significantly improved AGB estimation accuracy, with R² increasing from 0.76 to 0.82; (iii) when RPR > 0.985 in test plots, no model accuracy degradation occurred, confirming RPR’s broad applicability and strong robustness; (iv) for the composite density model D2 incorporating all four densities, RPR calibration elevated the reproductive-phase stem-leaf model’s R² from 0.28 to 0.71, representing a 158.18 % improvement. Results demonstrate RPR’s capacity to significantly enhance the generalizability and robustness of AGB models under high vegetation coverage, mixed-density, and non-uniform planting conditions.
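The abstract does not give the exact RPR calibration formula, but RPR itself is a cover fraction computable from a binary rice/background mask. A sketch under the assumption that a plot-mean vegetation index is rescaled by that fraction (the paper's formula may differ):

```python
def rice_pixel_ratio(mask):
    """RPR: fraction of plot pixels classified as rice in a binary 2-D mask."""
    total = sum(len(row) for row in mask)
    rice = sum(sum(row) for row in mask)
    return rice / total if total else 0.0

def calibrate_vi(plot_mean_vi, rpr):
    """Hypothetical calibration: divide the plot-mean vegetation index by the
    rice cover fraction, so sparsely planted plots are compared on a
    per-rice-pixel basis rather than diluted by background pixels."""
    return plot_mean_vi / rpr if rpr else 0.0
```

A calibrated index of this kind would then feed the AGB regression in place of the raw plot mean.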
Optimizing artificial lighting for convolutional neural network-based crop monitoring with low-cost RGB imaging in indoor cultivation
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2025-11-28 DOI: 10.1016/j.atech.2025.101677
Matteo Landolfo, Fabio Perotti, Alessandro Pistillo, Giuseppina Pennisi, Giorgio Gianquinto, Francesco Orsini
This study investigated the effect of different red:blue (R:B) spectral light ratios on the performance of a multi-task convolutional neural network (CNN) model developed for the automatic classification of four horticultural species and their corresponding phenological stages under controlled artificial lighting conditions. The model was trained and tested using RGB images acquired under five distinct spectral treatments (R:B 1, 3, 5, 7, and 9), and its performance was evaluated using accuracy, precision, recall, F1-score, and Matthews correlation coefficient (MCC). For species classification, the best results were obtained with an R:B ratio of 1, achieving an accuracy of 86 %, precision of 87 %, recall of 85 %, F1-score of 85 %, and MCC of 0.81. In terms of phenological stage classification, the highest performance was observed at R:B 3 and R:B 5, both yielding 93 % accuracy and F1-score, precision and recall above 92 %, and an MCC of 0.86. These findings demonstrate that the multi-task CNN model is capable of learning robust and generalizable representations, maintaining high classification performance even under non-optimal spectral conditions. The integration of optimized artificial lighting with intelligent classifiers proves to be a strategic approach for automated monitoring systems in indoor and precision agriculture. Future research should explore the impact of additional spectral components (e.g., green or far-red wavelengths) and the adoption of more advanced neural architectures to further enhance the system’s robustness and scalability.
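The Matthews correlation coefficient reported above is, for a binary confusion matrix, (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)); the multi-class case used for four species generalizes this. A minimal binary-case sketch:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix.

    Returns 0.0 when any marginal total is empty (a common convention)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike accuracy, MCC stays near zero for a classifier that ignores a minority class, which is why it is a useful companion metric here.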
Grass grub (Costelytra giveni) in improved grasslands detected by remote sensing data and machine learning approaches
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2025-11-28 DOI: 10.1016/j.atech.2025.101679
Mark R. McNeill , Federico Tomasetto , Alasdair Noble, Sarah Mansfield, Ester Meenken, Chikako van Koten
Smarter ways to monitor and predict invertebrate pest outbreaks over wide areas, enabling early implementation of appropriate controls, have the potential to provide economic, environmental and social benefits to growers. Grass grub (Costelytra giveni) is a major scarab pest in New Zealand pastoral farming systems, where the root-feeding larvae cause significant damage to improved grasslands, leading to declines in persistence and productivity losses. Early detection of a possible pest infestation at the paddock and/or farm scale is a crucial first step to reducing economic losses. Costelytra giveni population outbreaks can often be unexpected and widespread. Furthermore, current early detection methods are too labour intensive for most farmers, so populations are not monitored each year and across paddocks. Automated methods for early detection of larval populations above the damage threshold (150 larvae m⁻²) at the paddock scale would help farmers and farm advisers decide when and where control measures are needed. In this context, new advances in remote sensing technologies and machine learning algorithms offer great potential for managing this challenging pest. Using field data on larval densities collected each year over five years and corresponding high-resolution satellite images of the pasture (5 m spatial resolution), we tested a machine learning model to detect the risk that larval densities were above the damage threshold. The best-performing model achieved 77% accuracy on unseen test data and was able to reliably distinguish damage caused by larvae below and above the threshold. This demonstrates proof of concept for a new approach to identify the presence of C. giveni populations that are above threshold before significant pasture damage occurs, allowing for implementation of effective control measures.
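The risk labels such a model predicts come from binarising larval density at the damage threshold. A minimal sketch of that labelling and the accuracy computation (illustrative; whether a density exactly at the threshold counts as "above" is an assumption here):

```python
DAMAGE_THRESHOLD = 150  # larvae per square metre, from the study

def to_risk_labels(densities, threshold=DAMAGE_THRESHOLD):
    """Binarise larval densities into above/below-threshold risk classes.

    Treating exactly-at-threshold as above-threshold is an assumption."""
    return [d >= threshold for d in densities]

def accuracy(pred, truth):
    """Fraction of paddocks whose risk class is predicted correctly."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)
```

The classifier itself would map satellite-derived pasture features to these binary labels; accuracy on held-out paddocks is then computed exactly as above.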
Smart agricultural technology, Vol. 13, Article 101679.
Citations: 0
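The abstract above frames detection as a binary task relative to the 150 larvae m⁻² damage threshold: label each paddock above or below the threshold, then score the model's accuracy. A minimal sketch of that framing in Python; the densities and predicted labels below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Damage threshold from the abstract: 150 larvae per square metre.
DAMAGE_THRESHOLD = 150

def to_labels(larval_densities, threshold=DAMAGE_THRESHOLD):
    """Convert larval densities (larvae per m^2) to binary above-threshold labels."""
    return (np.asarray(larval_densities) >= threshold).astype(int)

def accuracy(y_true, y_pred):
    """Fraction of paddocks whose above/below-threshold status is predicted correctly."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())

# Hypothetical field measurements and hypothetical model predictions for six paddocks.
measured = [40, 180, 150, 90, 210, 120]
predicted_labels = [0, 1, 1, 1, 1, 0]

y_true = to_labels(measured)  # -> [0, 1, 1, 0, 1, 0]
print(accuracy(y_true, predicted_labels))  # 5 of 6 correct
```

In the study itself the predictions come from a machine learning model over satellite imagery; here they are hard-coded only to show how the reported accuracy figure is computed.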
Unraveling nitrous oxide in free-stall dairy barns: A linear mixed modeling approach to cyclic, spatial, and environmental drivers 在自由栏奶牛栏中解开一氧化二氮:循环,空间和环境驱动的线性混合建模方法
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-11-28 DOI: 10.1016/j.atech.2025.101678
Vineet Srivastava , Edit Mikó , László Horváth , Csilla Gombi , Anna Szabó , Zoltán Bozóki
This study presents a transferable linear mixed modeling framework to dissect the cyclic, spatial, and environmental drivers of nitrous oxide (N2O) in a free-stall dairy barn. N2O concentrations in the barn were monitored over 10 days using photoacoustic spectroscopy across three spatial locations, vertical heights, and operational cycles. Linear mixed models were developed from simple to complex structures: starting with main effects (M-series: environmental drivers only), adding cycle-augmented terms (MC-series: environmental + cycle terms), and finally incorporating cycle-interaction dynamics (MIC-series). The models evaluated cyclic, spatial, and environmental drivers of N2O concentration. Four environmental variables were analyzed for their impact on N2O concentrations: temperature, relative humidity (RH), wind speed, and the temperature–humidity index (THI). Twelve model variants were compared to identify the best fit. Our multi-criteria selection strategy identified MIC1 as optimal for prediction, with the lowest Akaike and Bayesian Information Criteria (AIC = 165,662; BIC = 165,785.6), the lowest root mean square error (RMSE = 33.9), and significant improvement in likelihood ratio tests (p < 0.001) over previous models. MC1 was selected for causal inference owing to its robust coefficients (max VIF = 1.56) and comparable accuracy (RMSE = 34.2). While models using the THI underperformed, those with separate temperature and RH terms revealed dynamic cycle-phase interactions. Excluding RH severely degraded model performance. Temperature and RH synergistically amplified concentrations, with MIC1 showing a cycle-3 baseline elevation of 62.4 ppb and MC1 showing 68.8 ppb, while reversing temperature and RH effects. Spatial heterogeneity dominated, while vertical variation was minimal.
Smart agricultural technology, Vol. 13, Article 101678.
Citations: 0
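The model selection described above ranks candidates by AIC and BIC, which penalise the maximised log-likelihood by the number of parameters (AIC = 2k − 2 ln L; BIC = k ln n − 2 ln L). A small sketch of lowest-AIC selection; the log-likelihoods, parameter counts, and sample size below are invented to roughly reproduce the reported MIC1 values, not taken from the paper:

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

def best_model(candidates):
    """Pick the candidate with the lowest AIC; candidates maps name -> (logL, k)."""
    return min(candidates, key=lambda name: aic(*candidates[name]))

# Hypothetical fits: name -> (maximised log-likelihood, number of parameters).
fits = {
    "M1": (-82840.0, 6),     # main effects only
    "MC1": (-82835.0, 9),    # + cycle terms
    "MIC1": (-82815.0, 16),  # + cycle-interaction terms
}
print(best_model(fits))  # the richer model wins despite its larger penalty
```

A lower criterion value means a better penalised fit, which is why the abstract reports the minima for MIC1.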
MRLCM-YOLO: A lightweight and multi scale enhanced model for detecting cowpea pests in complex field environments MRLCM-YOLO:一种用于复杂田间环境下豇豆害虫检测的轻量级、多尺度增强模型
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-11-27 DOI: 10.1016/j.atech.2025.101676
Chunshan Wang , Yifei Dai , PeiPei Sun , Lijie Zhang , Bianyin Wang , Lijuan Duan , Jianchun Wang
Insect pests significantly threaten both the yield and quality of crops, particularly high-value varieties such as cowpeas. To address key challenges, including dense small-object detection, complex and cluttered field environments, class imbalance, and limited deployment resources, this study introduces MRLCM-YOLO, a lightweight yet high-accuracy object detection model based on the YOLOv11 framework. The model adopts RepViT, a reparameterized vision transformer, as its backbone to enhance both feature expressiveness and inference speed. To improve multi-scale contextual representation, a novel feature fusion mechanism termed CGRFPN is proposed. Additionally, the LSKM module, based on large separable kernel attention, is employed to strengthen attention on target regions. A decoupled detection head, MultiSEAMHead, is further integrated to enhance model robustness by disentangling the classification and localization tasks. For training and validation, a high-resolution cowpea pest dataset was curated, comprising 3855 annotated images spanning 19 pest categories. Experimental results show that MRLCM-YOLO achieves 87.9 % mAP50 and 57.2 % mAP50–95, gains of 0.5 % and 0.9 %, respectively, over YOLOv11. With just 9.1 million parameters, MRLCM-YOLO strikes an effective trade-off between detection performance and computational cost, making it well suited for deployment in practical agricultural scenarios.
Smart agricultural technology, Vol. 13, Article 101676.
Citations: 0
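The mAP50 metric quoted above counts a detection as correct when its Intersection over Union (IoU) with a ground-truth box reaches 0.5, and mAP50–95 averages over stricter thresholds. A minimal IoU sketch for axis-aligned boxes; the example boxes are invented, and this is the generic metric, not the paper's evaluation code:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Overlap is zero when the boxes are disjoint along either axis.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A hypothetical predicted box partially overlapping a ground-truth box.
pred = (10, 10, 50, 50)   # 40x40 box
truth = (30, 30, 70, 70)  # 40x40 box, intersection 20x20 = 400, union 2800
print(iou(pred, truth))   # 400/2800, about 0.143: below the mAP50 cutoff
```

Under the mAP50 criterion this prediction would count as a miss, since its IoU falls short of 0.5.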
Smart farming: Real-time rice yield forecasting on mobile devices using lightweight CNN-LSTM 智能农业:使用轻量级CNN-LSTM在移动设备上进行实时水稻产量预测
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-11-26 DOI: 10.1016/j.atech.2025.101664
Sakshi Gandotra, Rita Chhikara, Anuradha Dhull
This work presents a framework for accurate and timely in-season crop yield estimation at high spatial resolution for Indian farmers on low-resource edge devices, achieved by reducing the memory requirements of the CNN-LSTM model's neural activations. We propose a new memory optimisation approach, Clustering and Compression (C²), tailored to the large memory needs of CNN-LSTM neural activations. By combining spatial feature extraction and temporal learning, the model acquires efficient spatiotemporal representations. It is trained on high-resolution block-level yield data, satellite-derived Normalized Difference Vegetation Index (NDVI) and Normalized Difference Moisture Index (NDMI), and weather data for the Jammu region. The optimized CNN-LSTM comprehensively surpasses the baseline CNN and LSTM models while reducing memory usage by orders of magnitude, especially in neural activations. This optimisation allows cost-effective, cloud-independent on-device inference and routine model training, which are essential for handling day-to-day environmental fluctuations in dynamic climates. In summary, the proposed method provides a novel neural activation memory optimisation technique that enables device-local, high-resolution crop yield estimation, paving the way for sustainable and resilient agriculture for smallholder farmers.
Smart agricultural technology, Vol. 13, Article 101664.
Citations: 0
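The abstract does not spell out the C² (Clustering and Compression) algorithm itself. As a generic illustration of the underlying idea, clustering activation values into a small codebook so that compact uint8 indices replace float32 values, here is a plain 1-D k-means sketch; the shapes, cluster count, and all routines are invented and should not be read as the paper's method:

```python
import numpy as np

def compress_activations(acts, n_clusters=16, n_iter=10, seed=0):
    """Cluster float32 activation values into a small codebook, storing uint8 indices."""
    flat = acts.astype(np.float32).ravel()
    rng = np.random.default_rng(seed)
    centroids = rng.choice(flat, size=n_clusters, replace=False)
    for _ in range(n_iter):  # plain 1-D k-means refinement
        idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            members = flat[idx == c]
            if members.size:
                centroids[c] = members.mean()
    idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids.astype(np.float32), idx.astype(np.uint8), acts.shape

def decompress(centroids, idx, shape):
    """Reconstruct an approximate activation tensor from codebook and indices."""
    return centroids[idx].reshape(shape)

# Hypothetical activation tensor from one layer.
acts = np.random.default_rng(1).normal(size=(64, 32)).astype(np.float32)
codebook, idx, shape = compress_activations(acts)
ratio = acts.nbytes / (codebook.nbytes + idx.nbytes)
restored = decompress(codebook, idx, shape)
print(round(ratio, 2))  # 4-byte floats replaced by 1-byte indices plus a tiny codebook
```

The compression is lossy: the reconstruction only approximates the original activations, which is the usual trade-off such schemes accept in exchange for the memory savings.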