
Latest articles in Smart agricultural technology

Intelligent recognition of basic camel behaviors based on DAF-Net for free-grazing camels
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-29 DOI: 10.1016/j.atech.2025.101763
Fang Zhou , Yuting Xian , Leifeng Guo , Cheng Peng , Yanhong Liu , Xiang Liu , Jing Xiao , Min Tian
With the rapid development of the camel milk industry and tourism, camel farming is gradually emerging. The basic behaviors of camels (such as standing, walking, grazing, and resting) are important indicators of their health status and welfare level. Timely monitoring and analysis of these behaviors are crucial for assessing the physiological state of camels and optimizing their management. To achieve automated recognition of camel behaviors, this paper constructed a dataset of basic camel behaviors under free-grazing conditions and designed DAF-Net, a network built upon the Yolov12 framework. In the feature extraction stage, DAF-Net employs Dfe-Net (Dynamic feature extraction Net) for efficient representation learning. Three dynamically adaptive modules (C3BA, A2Dy, and AGU) were integrated to further enhance overall performance. In addition, during the data processing stage, the G-Edge SSIM algorithm is proposed to address the issue of excessive similarity between consecutive frames; it operates without the need for GPU computational resources. Experimental results demonstrate that the proposed method achieved excellent recognition accuracy for the four basic behaviors (resting, grazing, standing, and walking) of 99.0%, 96.8%, 89.5%, and 89.2%, respectively, with an overall accuracy of 93.6%. Moreover, the method enables multi-camel behavior recognition in video sequences, providing a feasible approach for camel health assessment and welfare monitoring. This study offers new insights into intelligent camel farming management and biodiversity development in free-grazing pastures of arid regions.
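The G-Edge SSIM algorithm is only named in the abstract, not specified, so the following is a minimal CPU-only sketch of the idea it describes: drop consecutive frames whose edge-map structural similarity is too high. The Sobel edge step, the function name, and the 0.92 threshold are illustrative assumptions, not details taken from the paper.

```python
import cv2
from skimage.metrics import structural_similarity as ssim

def filter_similar_frames(video_path, sim_threshold=0.92):
    """Keep a frame only if its gradient-magnitude edge map differs enough
    (SSIM below sim_threshold) from the last kept frame. Runs on CPU only."""
    cap = cv2.VideoCapture(video_path)
    kept, last_edges = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        edges = cv2.magnitude(gx, gy)          # stand-in for the "G-Edge" step
        rng = float(edges.max() - edges.min()) or 1.0
        if last_edges is None or ssim(last_edges, edges, data_range=rng) < sim_threshold:
            kept.append(frame)                 # sufficiently different: keep it
            last_edges = edges
    cap.release()
    return kept
```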
Citations: 0
Precise profiling of behavioral time budgets of finishing Wagyu steers using linear mixed model
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-29 DOI: 10.1016/j.atech.2025.101768
Nanako Mochizuki , Takahiro Aoki , Yasuhiro Morita , Mitsunori Kayano
The aim of this study was to identify behavioral characteristics of Wagyu beef cattle recorded by neck collar sensors. Although understanding Wagyu-specific traits is essential for precision livestock farming, data from behavioral monitoring sensors are difficult to interpret due to their latent variability. Moreover, since the Wagyu industry makes use of unique management strategies to promote highly marbled beef, Wagyu traits cannot be inferred from dairy cows or other beef breeds, requiring validation at the breed level. This study aimed to overcome these problems through a robust statistical methodology, a linear mixed model, applied to a neck collar dataset from a Japanese commercial fattening farm with 229 healthy Wagyu steers during their finishing period (21–30 months of age). Daily time of six behaviors (eating, activity, standing, lying, standing-rumination, and lying-rumination) was recorded for each animal. A linear mixed model was applied to the monthly individual average time of each behavior for precise profiling of Wagyu-specific behavioral time budgets. The model included age, season (colder and warmer) and their interaction as the fixed effects, and animal IDs as a random effect to minimize the individual variability. The results showed behavioral alterations associated with age in non-feeding behaviors, including activity, standing and lying. Significant seasonal effects were also identified, depending on the age and behavior. This study can serve as a foundation for accurate detection of individual disorders and improving herd management practices in the Wagyu industry.
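A minimal sketch of the model structure described above (age, season, and their interaction as fixed effects; animal ID as a random intercept), assuming statsmodels and a tidy one-row-per-animal-month table. The file and column names are hypothetical, not the authors' dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy layout: one row per animal-month with columns
# animal_id, age_month, season ("colder"/"warmer"), lying_min (daily lying time, min/day).
df = pd.read_csv("wagyu_collar_monthly.csv")

# Age, season and their interaction as fixed effects;
# animal ID as a random intercept to absorb individual variability.
model = smf.mixedlm("lying_min ~ age_month * season", data=df, groups=df["animal_id"])
result = model.fit()
print(result.summary())
```

The same formula would be refit for each of the six behaviors in turn.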
Citations: 0
Corn stem diameter and ear orientation angle measurement method based on D3-YOLOv11 and RGB-D camera
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-27 DOI: 10.1016/j.atech.2025.101760
Fan Zhang , Qiankun Fu , Yang Li , Hengyi Wang , Jun Fu
The operational effect of the reverse ear picking device for fresh corn is affected by stem diameter and ear orientation angle. The existing devices lack the ability to sense these parameters in real-time, making it difficult to dynamically adjust operating parameters, which leads to a high damage rate and harvest loss. To this end, this study focuses on the visual perception aspect and proposes a recognition method based on a depth camera and an improved D3-YOLOv11 segmentation model, which provides reliable visual input for subsequent adaptive regulation. Specifically, this study proposes Dual-Domain Dynamic Gate Conv (D3GConv) to enhance the multi-scale feature extraction ability of the model. In the neck network, a bidirectional weighted pyramid structure with semantic detail injection is designed to improve the segmentation accuracy of small objects. Generalized Focal Loss V2 was used to optimize the detection head to enhance the accuracy of boundary localization in dense stem scenes. Finally, the depth information is fused to realize the real-time measurement of stem diameter and ear orientation angle. Experimental results show that the Mask-mAP50 of the D3-YOLOv11 model reaches 99.3% and 94.6% in stem and ear instance segmentation tasks, respectively. The Mean Absolute Error of stem diameter measurement based on depth information is only 0.16 cm, and the Coefficient of Determination of ear orientation angle reaches 0.95, which verifies the reliability and practicability of this method in the adaptive control of the ear harvesting device. It provides an effective visual perception basis for improving the intelligence level of equipment.
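The abstract does not detail how depth is fused into the diameter measurement; one standard approach, assumed here, is the pinhole relation width_m = width_px · Z / fx applied per mask row. A sketch under that assumption (function and variable names are illustrative):

```python
import numpy as np

def stem_diameter_m(mask: np.ndarray, depth_m: np.ndarray, fx: float) -> float:
    """Estimate stem diameter from a binary instance mask and an aligned depth
    map (metres) with the pinhole relation width_m = width_px * Z / fx.
    Per-row widths are converted individually and the median is returned for
    robustness against ragged mask edges."""
    widths = []
    for row in range(mask.shape[0]):
        cols = np.flatnonzero(mask[row])
        if cols.size < 2:
            continue
        width_px = cols[-1] - cols[0] + 1
        z = float(np.median(depth_m[row, cols]))   # stem depth at this row
        if z > 0:
            widths.append(width_px * z / fx)
    return float(np.median(widths)) if widths else float("nan")

# Usage: mask comes from the D3-YOLOv11 stem segmentation, depth_m from the
# RGB-D camera (aligned to the colour frame), fx from the camera intrinsics.
```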
Citations: 0
Data mining for evaluating animal performance from weighing platform big data
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-26 DOI: 10.1016/j.atech.2025.101758
Alex S.C. Maia , Gustavo A.B. Moura , Vinicius F.C. Fonsêca , Bruno R. Simão , Jessica O. Gusmão , Hugo F.M. Milan , Kifle G. Gebremedhin , Robert J. Collier , Rodrigo D.L. Pacheco , Izabelle A.M.A. Teixeira
Precision livestock farming technologies, such as IoT-enabled weighing platforms, generate big data that require rigorous processing to ensure correctness and integrity. However, the literature lacks a reliable procedure for automatically filtering misreadings from weighing platform big data, hindering accurate evaluation of performance indicators. This study addresses this gap by developing and validating a data mining procedure consisting of four steps: 1) removing outliers based on shrunk body weight (SBW), 2) removing outliers based on discrepancies between weighing platform measurements, 3) estimating initial and final live body weight (BW), and 4) removing outliers based on deviations between measured live BW and estimates from a Generalized Additive Model (GAM). Data were collected from 12 experiments (1152 steers, 583,321 samples) conducted in the Campanelli Innovation Center. The data mining procedure effectively removed outliers caused by loss of calibration and by partial or multiple animals standing on the platform. When compared with previous methodologies, our procedure was 20x more accurate. Mined data were used to fit a GAM to predict BW. The average daily gain calculated from mined data showed an error (–0.02 ± 0.74%) 40x smaller than with raw data (7.99 ± 29.42%) or previous methodologies, and showed no statistical difference from measured data. Considering that the proposed methodology needs SBW, and that its measurement is stressful and leads to performance loss, an extreme gradient boosting regression model was trained to predict initial and final SBW and was highly accurate (RMSE: 0.0216 kg; R²: 0.9954). The proposed data mining procedure improved data correctness and integrity, generating high-quality datasets that can be used for early detection of animal disease, evaluating impacts on performance of new feeding additives, and supporting data-driven management decisions. In addition, the proposed procedure can be easily implemented in similar setups.
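A compressed sketch of the fourth filtering step described above, assuming the pygam package: fit a smooth weight-versus-time curve per animal and drop readings that deviate too far from it. The column names and the 5% tolerance are placeholders, not the authors' thresholds.

```python
import numpy as np
import pandas as pd
from pygam import LinearGAM, s

def filter_by_gam(df: pd.DataFrame, tol: float = 0.05) -> pd.DataFrame:
    """Sketch of the GAM-deviation filter for one animal: fit a smooth GAM of
    platform body weight on day-on-feed and keep readings within `tol`
    (relative deviation) of the fitted curve."""
    X = df[["day_on_feed"]].to_numpy()
    y = df["platform_bw_kg"].to_numpy()
    gam = LinearGAM(s(0)).fit(X, y)          # one smooth term on day-on-feed
    fitted = gam.predict(X)
    keep = np.abs(y - fitted) / fitted < tol # relative deviation filter
    return df[keep]
```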
Citations: 0
EdgeSoybeanNet : A framework for real-time, high-accuracy field soybean pod counting
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-26 DOI: 10.1016/j.atech.2025.101750
Johnbosco Nnamso , Francia Ravelombola , Feng Lin , Chao Lu
Accurate estimation of field soybean pods plays a critical role in precision agriculture. However, conventional methods face significant limitations, including high field variability, visually complex backgrounds, and the computational constraints of deploying deep learning models in rural edge environments. To address these challenges, we present EdgeSoybeanNet, a high-accuracy, edge-deployable AI framework for near real-time soybean pod counting. The proposed framework integrates a customized UNet-Lite segmentation network with an adaptive thresholding strategy. The computation process begins with region-of-interest extraction from UAV imagery, followed by segmentation and pod detection using adaptive thresholding. The trained AI models are then quantized and exported to ONNX and deployed with ONNX Runtime, TensorFlow Lite (TFLite), or TensorRT on edge devices, eliminating the need for cloud connectivity and enabling near real-time inference in the soybean field. To the best of our knowledge, this is the first study to incorporate adaptive threshold learning into a UNet-Lite segmentation for agricultural applications. The experimental results show a counting accuracy of 89.57% with an inference time of 0.66 s on a Raspberry Pi 5 at 300 × 300 input UAV images, and up to 90.43% counting accuracy at 560 × 560 input. These results demonstrate the feasibility and effectiveness of this approach for resource-constrained precision farming. Compared with the state-of-the-art SoybeanNet-S model, our approach improves counting accuracy by 5.07% and reduces the number of parameters by approximately 14 times, from 49.6 million down to 3.57 million.
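The deployment path described above (export to ONNX, then run with ONNX Runtime on the edge device) can be sketched as follows. The model file name, the pre-processing, and the fixed 0.5 cut-off are assumptions; the paper's learned adaptive threshold is not specified in the abstract.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Load the exported segmentation model (file name and input layout are placeholders).
sess = ort.InferenceSession("edgesoybeannet_unet_lite.onnx",
                            providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

img = cv2.imread("soybean_patch.jpg")
img = cv2.resize(img, (300, 300)).astype(np.float32) / 255.0
x = img.transpose(2, 0, 1)[None]                  # NCHW batch of one

pred = sess.run(None, {input_name: x})[0]
# A fixed 0.5 cut-off stands in for the paper's learned adaptive threshold.
pod_mask = (pred[0, 0] > 0.5).astype(np.uint8)
num_labels, _ = cv2.connectedComponents(pod_mask)
print("estimated pods:", num_labels - 1)          # subtract the background label
```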
Citations: 0
Design and experiment of reciprocating herbaceous mulberry harvesting tester
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-25 DOI: 10.1016/j.atech.2025.101757
Haitao Sui , Qinglu Yang , Yongcai Zhao , Jin Zhang , Yinfa Yan , Fade Li , Zhanhua Song
Efficient harvesting of herbaceous mulberry is essential for reducing labor costs and ensuring high-quality stubble for rapid regrowth in sericulture production. However, existing mechanized harvesters rarely enable in situ measurement of cutting and conveying power under field conditions, and the influence of operational parameters on both energy consumption and stubble quality remains insufficiently quantified. In this study, a crawler-type prototype harvester equipped with three independently driven AC servo motors and real-time torque sensors was developed to monitor cutting, conveying, and baling processes. A Central Composite Design (CCD) combined with response surface methodology was employed to investigate the effects of forward speed, conveying speed, and average cutting speed on average cutting power per branch, average conveying power per branch, and stubble quality score. Field trials were conducted in Rizhao, Shandong Province, China, using the mulberry cultivar 'Guishangyou 12'. The regression models exhibited high goodness of fit (R² = 0.9546 to 0.9946) and non-significant lack of fit (p > 0.05). Results indicated that cutting power consumption was on average 3.7 times higher than conveying power, with cutting speed exerting the most significant influence on energy use (p < 0.01) and stubble quality (p < 0.01). The optimal parameter combination (forward speed of 0.55 m/s, conveying speed of 0.96 m/s, and cutting speed of 0.95 m/s) reduced cutting power to 26.91 J·branch⁻¹, minimized conveying power to 6.64 J·branch⁻¹, and achieved a stubble quality score of 9.43. Validation experiments confirmed that deviations from predicted values were below 5%. These findings provide a quantitative basis for operational optimization and energy efficiency improvement in herbaceous mulberry harvesting machinery.
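A minimal sketch of the response-surface step: fitting a full second-order model of one response (here, cutting power) on the three CCD factors with statsmodels. The file and column names are placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per CCD run; the column names are placeholders for the three factors
# and one of the three responses.
df = pd.read_csv("ccd_runs.csv")

# Full second-order response surface: linear, two-way interaction and quadratic terms.
formula = ("cutting_power_J ~ forward_speed + conveying_speed + cutting_speed"
           " + forward_speed:conveying_speed + forward_speed:cutting_speed"
           " + conveying_speed:cutting_speed"
           " + I(forward_speed**2) + I(conveying_speed**2) + I(cutting_speed**2)")
rsm = smf.ols(formula, data=df).fit()
print(rsm.summary())   # goodness of fit (R²) and term significance are read from here
```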
Citations: 0
A capability maturity model for assessing digital integration in smart farming
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-24 DOI: 10.1016/j.atech.2025.101743
Emmanuel Ahoa , Ayalew Kassahun , Cor Verdouw , Bedir Tekinerdogan , Joep Tummers
The advancement of smart farming is noteworthy, largely driven by rapid developments in digital technologies such as the Internet of Things, Big Data, and AI. However, the mere availability of these technologies does not guarantee their effective integration into agricultural systems. Aligning the different digital components, such as sensors, platforms, data analytics, and decision-support tools, remains a complex task. This often prevents smart farming systems from reaching their full potential. Limited integration results in isolated data flows, interoperability problems, and inefficiencies across farm operations. This study presents a comprehensive Capability Maturity Model (CMM) for assessing the level of digital integration in smart farming from both technical and organisational perspectives. The model defines five maturity levels ranging from fragmented manual operations to a fully integrated and optimized level (Ad hoc, Managed, Integrated, Predictable, Innovative). It assesses the maturity of capabilities across six key dimensions: business processes, people and culture, strategy, technology, digital governance, and data and analytics. A multi-case study of three smart farms in the Netherlands was conducted to validate the model. The findings indicate that the proposed model provides a holistic and practical framework for assessing digital integration maturity across different contexts. It not only supports strategic planning for interoperability but also identifies critical integration challenges and promotes a whole-farm approach in the smart agriculture literature. As a decision-support tool, it provides agri-food practitioners with concrete and tailored guidance on which specific capabilities need to be improved to advance the maturity of smart farming.
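As a rough illustration of how such an assessment could be represented in practice, the sketch below encodes the five levels and six dimensions named in the abstract and rolls per-dimension scores up to a single label. The averaging rule is an assumption for illustration, not the model's own aggregation logic.

```python
from statistics import mean

# The five maturity levels and six capability dimensions named in the abstract.
LEVELS = ["Ad hoc", "Managed", "Integrated", "Predictable", "Innovative"]
DIMENSIONS = ["business processes", "people and culture", "strategy",
              "technology", "digital governance", "data and analytics"]

def overall_maturity(scores: dict) -> str:
    """Roll six per-dimension scores (1-5) up into a single label.
    Averaging is an illustrative aggregation choice, not the paper's rule."""
    assert set(scores) == set(DIMENSIONS), "score every dimension exactly once"
    return LEVELS[round(mean(scores.values())) - 1]

example_farm = {d: 3 for d in DIMENSIONS}
example_farm["technology"] = 4
print(overall_maturity(example_farm))   # -> "Integrated"
```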
Citations: 0
Research on a blockage detection method for the suction pipeline of the fan system in a pneumatic jujube picker
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-24 DOI: 10.1016/j.atech.2025.101745
Keyi Jiang , Huizhe Ding , Hang Yin , Longpeng Ding , Zhentao Wang , Yanbin Zhang , Wu Guo , Hongfei Yang , Jingbin Li
To address the problems of insufficient detection accuracy, poor real-time performance, and lack of long-term adaptability in blockage detection for the suction pipeline of the fan system in pneumatic jujube pickers, this study proposes an intelligent detection and operation–maintenance method that integrates multi-task learning and digital twin technology. A JujubePipe-BlockMTL-LightGBM multi-task learning model is constructed to simultaneously identify the blockage location (front, middle, rear) and determine the blockage area ratio (six levels) within a unified framework, thereby overcoming the limitation of traditional single-task models that ignore the physical correlation between tasks. Furthermore, the proposed model is embedded into a digital twin-based real-time monitoring system. Through cyber–physical mapping and closed-loop control, online fault diagnosis is achieved, and a model self-evolution mechanism is introduced to cope with data distribution drift, ensuring long-term stability and accuracy of the system. Experimental results show that, on the test set, the proposed model achieves 100% recognition accuracy for front and middle blockages and 98.18% for rear blockage, significantly outperforming traditional machine learning and deep learning baseline models. In a 72 h continuous test, the overall diagnostic accuracy of the digital twin system reaches 98.67%, with no false alarms for severe blockages, thereby verifying the comprehensive advantages of the proposed method in terms of high accuracy, strong robustness and continuous self-adaptation. This work provides an effective technical pathway for intelligent operation and maintenance of pneumatic conveying agricultural equipment.
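The JujubePipe-BlockMTL-LightGBM architecture is not spelled out in the abstract; the sketch below only approximates its two coupled tasks (three-class blockage location, six-level area ratio) with plain LightGBM classifiers, feeding the location probabilities into the second model as a stand-in for the multi-task coupling. Features and labels are random placeholders, not bench data.

```python
import numpy as np
import lightgbm as lgb

# Illustrative tabular features per sample (e.g. fan pressure/current statistics);
# real features and labels would come from the test bench.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))
y_location = rng.integers(0, 3, 600)   # 0 = front, 1 = middle, 2 = rear blockage
y_level = rng.integers(0, 6, 600)      # six blockage-area-ratio levels

# Task 1: blockage location.
loc_clf = lgb.LGBMClassifier().fit(X, y_location)

# Task 2: blockage level, with the location probabilities appended as extra
# features, a simple proxy for the multi-task coupling described above.
X_aug = np.hstack([X, loc_clf.predict_proba(X)])
lvl_clf = lgb.LGBMClassifier().fit(X_aug, y_level)

x_new = X[:1]
print(loc_clf.predict(x_new),
      lvl_clf.predict(np.hstack([x_new, loc_clf.predict_proba(x_new)])))
```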
Citations: 0
Integration DeepLabv3+ applied to RGB images and vegetation indices for nitrogen status in cereal–legume intercropping system
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-24 DOI: 10.1016/j.atech.2025.101752
Z. Yao , E. Denimal , L. Dujourdy , C. Gée
This study presents a low-cost, non-invasive approach to monitor the nitrogen status of Triticale in a Triticale–Faba bean intercropping system, an agroecological strategy to avoid chemical nitrogen inputs, using consumer-grade smartphone RGB images and deep learning. Three smartphones (Samsung Galaxy A12, Xiaomi Redmi Note 4, and Redmi Note 11) were used to capture canopy images. A DeepLabV3+ model with a ResNet-50 backbone was trained to semantically segment Triticale pixels from mixed canopies. Training the model on patch-based image subsets, rather than full images, substantially enhanced segmentation accuracy (mIoU = 90.64%). The normalized Dark Green Color Index (nDGCI), derived from segmented images at the canopy scale, was evaluated as a proxy for nitrogen status against normalized SPAD (nSPAD) measurements, a tedious leaf-scale method. Strong linear relationships were observed between nDGCI and nSPAD (R² ≈ 0.60 when pooled across devices, and R² = 0.69 to 0.87 for individual devices). Statistical analyses highlighted significant effects of cropping modality, phenological stage, and device on both indices, but the method reliably distinguished between nitrogen treatments. Device-specific calibration effectively corrected offsets, validating the feasibility of smartphone-based AI for detailed monitoring in intercropped systems. This approach offers a practical, cost-effective alternative to conventional tools, enabling precision agriculture in agroecological contexts.
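The nDGCI is built on the Dark Green Color Index, which has a widely used HSB formulation; the sketch below computes that common form over the segmented Triticale pixels. The formula is the commonly cited one rather than a detail taken from this paper, and the study-specific normalization to nDGCI is omitted.

```python
import cv2
import numpy as np

def mean_dgci(bgr: np.ndarray, mask: np.ndarray) -> float:
    """Mean Dark Green Color Index over the masked (Triticale) pixels, using
    the common HSB form DGCI = [(H - 60)/60 + (1 - S) + (1 - B)] / 3,
    with hue in degrees and S, B scaled to 0-1."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h = hsv[..., 0] * 2.0            # OpenCV stores 8-bit hue as degrees / 2
    s = hsv[..., 1] / 255.0
    v = hsv[..., 2] / 255.0
    dgci = ((h - 60.0) / 60.0 + (1.0 - s) + (1.0 - v)) / 3.0
    return float(dgci[mask > 0].mean())

# Usage: the mask is the DeepLabV3+ Triticale segmentation of the smartphone image.
# print(mean_dgci(cv2.imread("canopy.jpg"), triticale_mask))
```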
Citations: 0
GAPose-GS: Globally adaptive pose-optimized gaussian splatting for plant 3D reconstruction towards more precise phenotyping
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2025-12-24 DOI: 10.1016/j.atech.2025.101746
Minke Hong , Jinghua Xu , Zhangtong Sun , Xudong Hu , Yifei Zhao , Pan Gao , Yuntao Ma , Jin Hu , Shijie Tian
High-precision plant phenotyping requires efficient 3D reconstruction with high fidelity, yet existing methods such as MVS and NeRF suffer from feature dependence and error accumulation during 3D reconstruction, which leads to geometric distortion and restricts reconstruction efficiency. To address this bottleneck, this study first determined the multi-view image acquisition strategy. Further, based on a self-built multi-view dataset of chili peppers, it proposed an algorithm for efficient and high-fidelity 3D reconstruction of complex plant structures through globally adaptive pose optimization and Gaussian splatting rendering, referred to as the GAPose-GS algorithm. Experimental results indicate that the Peak Signal-to-Noise Ratio (PSNR) improves by 52.0 %, 26.4 %, and 4.2 % compared to NeRF, Instant-NGP, and 3D Gaussian Splatting, respectively. Additionally, the Structural Similarity Index Measure (SSIM) increases by 22.9 %, 12.8 %, and 4.3 %, respectively, over the above methods. The point cloud data reconstructed with this algorithm also have advantages in the measurement of phenotypic parameters. Compared with the actual measured values, the R² values of phenotypic parameters such as pepper plant height, canopy width, and leafstalk angle obtained in this study are 0.997, 0.954 and 0.978, the RMSE values are 0.236 cm, 1.082 cm and 2.344°, and the MAE values are 0.209 cm, 0.880 cm and 1.965°, respectively. The accuracy was significantly better than that of existing phenotypic calculation methods. Verification across different growth stages of wheat and maize kept all errors below 1.1 %, providing new ideas and technologies for high-precision, low-cost, and high-throughput crop phenotypic research.
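Once GAPose-GS yields a metrically scaled point cloud, the reported phenotypic parameters reduce to simple extents. A sketch assuming a cloud already oriented with the z axis up and expressed in centimetres (both assumptions), with a deliberately simplified canopy-width definition:

```python
import numpy as np

def plant_height_and_canopy_width(points_cm: np.ndarray):
    """points_cm: (N, 3) reconstructed plant point cloud, z axis up, units cm.
    Height is the vertical extent; canopy width is taken here as the larger of
    the x and y extents, a simplification of the paper's measurement."""
    z = points_cm[:, 2]
    height = float(z.max() - z.min())
    extents = points_cm[:, :2].max(axis=0) - points_cm[:, :2].min(axis=0)
    return height, float(extents.max())

# pts = np.load("pepper_plant_points.npy")   # hypothetical export of the GAPose-GS cloud
# height_cm, width_cm = plant_height_and_canopy_width(pts)
```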
Citations: 0