
Computers and Electronics in Agriculture: Latest Publications

The normalized difference yellow vegetation index (NDYVI): A new index for crop identification by using GaoFen-6 WFV data
IF 7.7 | CAS Tier 1 (Agricultural Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-09-07 | DOI: 10.1016/j.compag.2024.109417

The yellowing morphologies of crops provide typical spectral characteristics for crop identification. However, this feature is generally neglected by most existing vegetation indices (VIs), which focus on greening features. The Chinese GaoFen-6 (GF-6) satellite, equipped with a wide field-of-view (WFV) camera, has a refined spectral system within 0.40–0.89 μm, including spectral bands that are sensitive to yellowing features. This study proposes a new normalized difference yellow vegetation index (NDYVI) based on GF-6 imagery, capitalizing on the spectral reflectance of crops with yellowing morphologies, such as flowers and tassels. We used the yellow and red-edge1 bands to discriminate between crops with similar growing periods and incorporated the NIR band to distinguish non-crop types. The performance of NDYVI was evaluated in two distinct classification scenarios involving different cropping systems: rapeseed with winter wheat in southern China, and maize with soybean in northeastern China. By calculating NDYVI and using the Classification and Regression Tree (CART) algorithm, we generated classification maps for both scenarios. Additionally, the effectiveness of NDYVI was tested against six other VIs, including the Normalized Difference Vegetation Index, the Red-Edge Normalized Difference Vegetation Index, and the Normalized Difference Yellowness Index. The results demonstrated that NDYVI outperformed the other vegetation indices in both scenarios, achieving overall accuracies above 85% (Kappa coefficient greater than 0.80) and per-crop accuracies exceeding 80%. Due to the higher reflectance in the yellow and red-edge1 bands, NDYVI is more sensitive to canopy yellowness, which offers significant advantages in distinguishing crops with similar growing periods. Moreover, NDYVI is constructed from the original spectral bands of GF-6 images, offering significant flexibility across diverse classification scenarios. Consequently, NDYVI holds significant potential as a new vegetation index for a range of remote sensing applications, including crop identification, growth monitoring, and land cover classification.
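The abstract does not give the exact NDYVI formula, so the sketch below only illustrates the generic normalized-difference construction over GF-6 WFV band arrays; the band pairing and the reflectance values are assumptions, not the authors' definition:

```python
import numpy as np

def normalized_difference(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Generic normalized-difference index: (A - B) / (A + B), safe at zero."""
    denom = band_a + band_b
    out = np.zeros_like(denom, dtype=float)
    np.divide(band_a - band_b, denom, out=out, where=denom != 0)
    return out

# Illustrative surface-reflectance values for two GF-6 WFV bands
# (yellow ~0.59-0.63 um, red-edge1 ~0.69-0.73 um); not real data.
yellow = np.array([0.30, 0.25, 0.08])
red_edge1 = np.array([0.35, 0.28, 0.12])

index = normalized_difference(red_edge1, yellow)  # values fall in [-1, 1]
```

A pixel-wise index image computed this way (together with the NIR band) would then feed the CART classifier described in the study.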

Citations: 0
3D terrestrial LiDAR for obtaining phenotypic information of cigar tobacco plants
IF 7.7 | CAS Tier 1 (Agricultural Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-09-07 | DOI: 10.1016/j.compag.2024.109424

The study of individual phenotypic information of cigar tobacco plants is significant for enhancing the mechanized production of cigar tobacco. It provides a foundational basis for mechanized field management, plant protection, and the design of harvesting machinery. To address the time-consuming, labour-intensive, inefficient, and highly subjective nature of traditional methods for extracting phenotypic information of cigar tobacco, this paper proposes a novel approach using terrestrial LiDAR scanning to extract phenotypic information from field-grown cigar plants. Terrestrial LiDAR was used to acquire millimetre-precision three-dimensional point cloud data of individual cigar plants; after pre-processing, a skeleton extraction algorithm based on Laplacian mesh contraction and topological refinement was employed to construct a triangular mesh model of the leaves and a point cloud skeleton of the plant. Leaf area was extracted from the triangular mesh of the leaves, while leaf length, leaf inclination angle, and petiole angle were derived from the plant's skeletal point cloud. Additionally, plant height was ascertained from the point cloud of the whole plant. Compared with manual field measurements, the Root Mean Square Error values for leaf length, leaf area, leaf inclination angle, petiole angle, and growth height were 1.659 cm, 8.374 cm², 2.371°, 2.73°, and 2.229 cm, respectively; the corresponding average absolute percentage errors were 3.102%, 0.782%, 3.323%, 4.148%, and 1.194%. This method provides an effective means of phenotypic measurement to assist in growth monitoring of mature cigar plants, mechanized plant protection, mechanized harvesting, and other projects that integrate agro-mechanics and agronomy.
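The two accuracy metrics reported above (RMSE and mean absolute percentage error) can be reproduced with short helper functions; the sample leaf-length values below are hypothetical, not the paper's data:

```python
import numpy as np

def rmse(predicted, measured):
    """Root Mean Square Error between extracted and field-measured traits."""
    p, m = np.asarray(predicted, float), np.asarray(measured, float)
    return float(np.sqrt(np.mean((p - m) ** 2)))

def mape(predicted, measured):
    """Mean absolute percentage error, in percent."""
    p, m = np.asarray(predicted, float), np.asarray(measured, float)
    return float(np.mean(np.abs((p - m) / m)) * 100.0)

# Hypothetical leaf lengths (cm): LiDAR-derived vs. manual measurement.
lidar = [40.2, 35.1, 50.6]
manual = [41.0, 34.5, 49.8]
```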

Citations: 0
Oxytetracycline injection using automated trunk injection compared to manual injection systems for HLB-affected citrus trees
IF 7.7 | CAS Tier 1 (Agricultural Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-09-07 | DOI: 10.1016/j.compag.2024.109430

The manual injection of therapeutic materials into trees is laborious and time-consuming, posing implementation challenges for commercial citrus growers. Compared to manual injection, a previously developed automated trunk injection mechanism reduced both the labor required and the size of the injection port. The injection system has now been field-tested on HLB-infected Glen Navel orange trees grafted onto 'Swingle' citrumelo rootstock while applying oxytetracycline hydrochloride (OTC) and water. The multi-puncture, drill-free injection mechanism creates injection ports using a needle attached to an end effector and delivers the required volume of liquid using a metering pump. Injection parameters, including injection duration, pressure, and flow rate, were examined during injections and compared with a manual trunk injection device commonly used for OTC injection of HLB-affected citrus trees. Maximum pressure did not differ significantly between the automated and manual injection systems or among the liquids injected (100 ml of 5500 ppm OTC, 11000 ppm OTC, and deionized water), at 0.1735 ± 0.054 MPa. However, the automated system had a significantly higher flow rate, reducing injection duration 81-fold compared to manual injection. Modifications and improvements to the previously developed automated injection system are also described, including the development of a clog-free needle, automation of the positioning arm, and a control system capable of delivering a precise volume to each injection port on both sides of a tree trunk.
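For a fixed dose delivered at constant flow, duration is simply volume divided by flow rate, so an 81-fold flow-rate increase implies an 81-fold shorter injection; the flow-rate figures below are illustrative assumptions, not the paper's measurements:

```python
def injection_duration_s(volume_ml: float, flow_rate_ml_per_s: float) -> float:
    """Time to deliver a fixed dose at a constant flow rate."""
    return volume_ml / flow_rate_ml_per_s

VOLUME_ML = 100.0                  # dose used in the trial
manual_rate = 0.01                 # ml/s, assumed for illustration
automated_rate = manual_rate * 81  # reported 81x higher flow rate

speedup = (injection_duration_s(VOLUME_ML, manual_rate)
           / injection_duration_s(VOLUME_ML, automated_rate))
```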

Citations: 0
Deep learning modelling for non-invasive grape bunch detection under diverse occlusion conditions
IF 7.7 | CAS Tier 1 (Agricultural Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-09-06 | DOI: 10.1016/j.compag.2024.109421

Accurately and automatically estimating vineyard yield is a significant challenge. This study focuses on grape bunch counting in commercial vineyards using advanced deep learning techniques and object detection algorithms. The aim is to overcome the limitations of conventional yield estimation techniques, which are labour-intensive, costly, and often inaccurate due to the spatial and temporal variability of the vineyard. This research proposes a non-invasive methodology for identifying grape bunches under different occlusion conditions using RGB cameras and deep learning models. The methodology is based on RGB images captured under field conditions, coupled with the YOLOv4 architecture for data processing and analysis. Statistical indicators were used to evaluate model performance. The comprehensive model produced a favourable outcome during validation, with an error of 1.12 bunches (R² = 0.83). On the test dataset, the model achieved an error of 1.12 bunches (R² = 0.81). The results highlight the potential of emerging technologies to significantly improve vineyard yield estimation. This approach has the potential to assist vineyard management, enabling more informed and efficient decisions that could increase both the quantity and quality of grape production intended for winemaking.
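The statistical indicators cited above (R² and a per-image count error) can be computed from paired detector and ground-truth counts; the bunch counts below are invented for illustration:

```python
import numpy as np

def r_squared(predicted, observed):
    """Coefficient of determination between predicted and observed counts."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    ss_res = np.sum((o - p) ** 2)
    ss_tot = np.sum((o - o.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def mean_abs_error(predicted, observed):
    """Average absolute count error per image (e.g. 'bunches')."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    return float(np.mean(np.abs(p - o)))

# Hypothetical bunch counts per vine: detector output vs. ground truth.
detected = [12, 18, 9, 15]
counted = [13, 17, 10, 16]
```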

Citations: 0
VEG-MMKG: Multimodal knowledge graph construction for vegetables based on pre-trained model extraction
IF 7.7 | CAS Tier 1 (Agricultural Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-09-06 | DOI: 10.1016/j.compag.2024.109398

Knowledge graph technology is of great significance to modern agricultural information management and data-driven decision support. However, agricultural knowledge spans many types, and agricultural knowledge graph databases built from text alone do not support users' intuitive perception and comprehensive understanding of knowledge. In view of this, this paper proposes a solution for extracting knowledge and constructing an agricultural multimodal knowledge graph using a pre-trained language model, taking two plants, cabbage and corn, as research objects. First, a text-image collaborative representation learning method with a two-stream structure is adopted to combine the image-modality information of vegetables with the text-modality information, exploiting the correlation and complementarity between the two types of information to achieve entity alignment. In addition, to address the high similarity of vegetable entities within small categories, a cross-modal fine-grained contrastive learning method is introduced, and the problem of insufficient semantic association between modalities is solved by contrastive learning between vocabulary and small regions of images. Finally, a visual multimodal knowledge graph user interface is constructed from the image-text matching results. Experimental results show that the image-text matching efficiency of the fine-tuned pre-trained model on the vegetable dataset is 76.7%, and appropriate images can be matched to text entities. The constructed visual multimodal knowledge graph database allows users to query and filter knowledge according to their needs, supporting subsequent research on applications in specific fields such as multimodal agricultural intelligent question answering, crop pest and disease identification, and agricultural product recommendation.
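The cross-modal contrastive step can be sketched as an InfoNCE-style loss over batched text and image embeddings, where matching pairs sit on the diagonal of the similarity matrix; this is a generic formulation, not the authors' implementation, and the embeddings below are random placeholders:

```python
import numpy as np

def contrastive_loss(text_emb, image_emb, temperature=0.07):
    """InfoNCE over a batch: pull matching text/image pairs together."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = (t @ v.T) / temperature             # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    n = logits.shape[0]
    # Cross-entropy against the diagonal (the matching pair for each row).
    return float(-np.log(probs[np.arange(n), np.arange(n)]).mean())

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 16))
image = text + 0.01 * rng.normal(size=(4, 16))  # nearly aligned pairs
loss = contrastive_loss(text, image)            # small when pairs align
```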

Citations: 0
Making Australian Drought Monitor dataset findable, accessible, interoperable and reusable
IF 7.7 | CAS Tier 1 (Agricultural Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-09-06 | DOI: 10.1016/j.compag.2024.109381

Making agricultural research datasets Findable, Accessible, Interoperable, and Reusable (FAIR) is an evolving priority for research organisations in Australia. Indigenous data governance standards, described in the CARE (Collective benefit, Authority to control, Responsibility and Ethics) principles, complement the FAIR principles when managing research datasets. Agricultural research data have traditionally been difficult to access and share publicly, due in part to conflicting interests in ownership, commerce, multiparty contracts, and diverse research practices.

As part of an agriculture digital research platform development project (the AgReFed Platform project), we develop a workflow that applies the FAIR data and CARE principles to the Australian Drought Monitor dataset, a product developed as part of the Northern Australia Climate Program (NACP), a joint project funded by Meat and Livestock Australia, the Queensland Drought and Climate Adaptation Program, and the University of Southern Queensland (UniSQ). We present a complete process for applying the FAIR principles to the Australian Drought Monitor dataset, including the development of digital infrastructure to enable its re-use in the AgReFed Platform project.
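One common way to make a dataset Findable and Interoperable is to publish a machine-readable metadata record, for example a schema.org "Dataset" description serialized as JSON-LD; the identifier, licence, and field values below are placeholders for illustration, not the actual Drought Monitor record:

```python
import json

# Minimal schema.org "Dataset" record; all values are illustrative placeholders.
record = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Australian Drought Monitor",
    "description": "Gridded drought index product developed under the NACP.",
    "license": "https://creativecommons.org/licenses/by/4.0/",  # assumed licence
    "identifier": "https://doi.org/10.xxxx/placeholder",        # placeholder DOI
    "keywords": ["drought", "climate", "agriculture", "Australia"],
    "creator": {"@type": "Organization",
                "name": "University of Southern Queensland"},
}

metadata_jsonld = json.dumps(record, indent=2)  # ready to embed in a landing page
```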

Citations: 0
Precision farming for sustainability: An agricultural intelligence model
IF 7.7 | CAS Tier 1 (Agricultural Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-09-06 | DOI: 10.1016/j.compag.2024.109386

Digital cultivation is emerging as one of the most promising fields for creating a smart-farming ecosystem. Precision farming, modernized techniques, and smart agriculture supply chains are the need of the hour for high-quality yield. Artificial Intelligence (AI) helps create a framework and plays an important role in decision-making by analysing various data points. In countries where more than 70% of the population depends on agriculture for their living, technological advancements help improve crop yields and achieve better farming results through sustainable means. Each stage of agriculture, from land preparation, crop selection, and choice of fertilizer to the watering the crops need, can be monitored and regulated through technological advancement. Farmers can also make decisions and implement best practices in the field using AI and allied technologies. Disruptive technologies such as blockchain, the Internet of Things, remote sensing, imaging technologies, and drones can transform traditional agriculture. Market trends and user demand can also be forecast, helping farmers obtain better yields. Disease control and pest management are other important areas where technology can play a big role. AI-based farming creates higher productivity and better yield, increasing individual farmers' profit. In this study, the authors shed light on AI and allied technologies that can significantly increase agricultural productivity. In a post-pandemic situation, high-yield, more productive farming will have a major impact. An agricultural intelligence framework model for self-sustained farming is proposed in this work. The proposed framework will help achieve self-sustained growth with increased economic stability. An end-to-end supply chain ensures customers are provided with quality products and farmers are not financially exploited. Technology-driven farming will also encourage the next generation to take up agricultural jobs. The various advancements and strategies we propose in this study aim to build a better ecosystem for transforming Artificial Intelligence into agricultural intelligence.

Citations: 0
New method for modeling digital twin behavior perception of cows: Cow daily behavior recognition based on multimodal data
IF 7.7 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-09-05 DOI: 10.1016/j.compag.2024.109426

The cow digital shadow reflects the behavior, health condition, and productivity of cows, playing a crucial role in ensuring animal welfare, increasing individual productivity, and improving breeding efficiency. To fully utilize the multimodal data already available on farms and build a cow digital shadow with rich behavioral information, this study proposes a multimodal data fusion algorithm for recognizing cow behaviors such as drinking, feeding, lying, standing, and walking. The algorithm leverages the complementary strengths of the different data modalities and enhances the performance of the cow behavior classification model. It integrates motion-sensor and video data, collected by custom-made collars with inertial measurement unit (IMU) sensors placed at the top of the cow’s neck and by cameras in the barn, using EfficientNet V2 S, BiLSTM, and Transformer networks. Experimental results demonstrate a recognition accuracy of 98.80 %, precision of 97.15 %, and recall of 96.93 %, a significant improvement over single-modal behavior recognition algorithms. This method maximizes the utility of existing multimodal data to generate a cow digital shadow with detailed behavioral information, enhancing the modeling and simulation element of the cow digital twin architecture and laying the foundation for a comprehensive cow behavior data model.
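The late-fusion pattern the abstract describes (per-modality feature extraction, then joint classification) can be sketched in miniature. This is only an illustration: the paper's actual networks (EfficientNet V2 S for video, BiLSTM/Transformer for the IMU stream) are replaced here by synthetic per-modality embeddings and a logistic-regression head, and all shapes, class offsets, and dimensions are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

BEHAVIORS = ["drinking", "feeding", "lying", "standing", "walking"]

rng = np.random.default_rng(0)
n = 500
labels = rng.integers(0, len(BEHAVIORS), size=n)
# Stand-ins for learned embeddings: 32-d "IMU" and 64-d "video" features,
# shifted per class so the synthetic problem is separable
imu = rng.normal(size=(n, 32)) + labels[:, None] * 0.5
video = rng.normal(size=(n, 64)) + labels[:, None] * 0.3

# Late fusion: concatenate the per-modality embeddings before classification
fused = np.concatenate([imu, video], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"fusion accuracy on held-out data: {acc:.2f}")
```

The same split-and-score loop run on `imu` or `video` alone shows the point the abstract makes: the concatenated representation gives the classifier strictly more signal than either modality by itself.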

Citations: 0
Animal-based CO2, CH4, and N2O emissions analysis: Machine learning predictions by agricultural regions and climate dynamics in varied scenarios
IF 7.7 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-09-05 DOI: 10.1016/j.compag.2024.109423

Livestock is an essential source of livelihood and food. In the context of climate change, animal-based greenhouse gas (GHG) emissions are of great importance. This study predicted direct N2O emissions, indirect N2O emissions, CH4 emissions from manure management, CH4 emissions from enteric fermentation, and CO2 emissions from animal sources for all provinces of Turkey using various machine learning algorithms. Animal populations, climate parameters, and agricultural area information were used to model GHG emissions. The study comprises two analyses that differ in the number of features used. The CatBoost algorithm performed best, using eight features in Scenario-1 and twelve features in Scenario-2. In Scenario-1, the R² values for the 2021 GHG emission predictions are 0.996, 0.996, 0.992, 0.999, and 0.996, respectively, while in Scenario-2 they are 0.995, 0.996, 0.984, 0.996, and 0.996. For the 2004–2009 predictions, the Scenario-1 R² values are 0.976, 0.962, 0.982, 0.994, and 0.994, while the Scenario-2 values are 0.975, 0.957, 0.917, 0.993, and 0.993. According to the results, increasing the number of features did not improve the model’s performance; using fewer features gave better results.
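The workflow the abstract reports (fit a gradient-boosted model on tabular province-level features, score with R²) can be sketched as follows. The study itself used CatBoost; scikit-learn's `GradientBoostingRegressor` stands in here so the sketch is self-contained, and the data, feature semantics (animal counts, climate parameters, area), and target are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
# Hypothetical tabular inputs, Scenario-1 style: eight features per region
X = rng.uniform(size=(n, 8))
# Synthetic "emissions" target: an arbitrary nonlinear mix of the features
y = 3 * X[:, 0] + np.sin(4 * X[:, 1]) + X[:, 2] * X[:, 3] \
    + rng.normal(scale=0.05, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
model = GradientBoostingRegressor(random_state=42).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print(f"R2 on held-out data: {r2:.3f}")
```

Swapping in `catboost.CatBoostRegressor` requires only changing the model line; the fit/predict/score interface is the same.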

Citations: 0
Rapid and accurate detection of total nitrogen in the different types for soil using laser-induced breakdown spectroscopy combined with transfer learning
IF 7.7 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-09-05 DOI: 10.1016/j.compag.2024.109396

Precision fertilizing is crucial not only for enhancing fertilizer efficiency but also for protecting the environment. Rapid sensing of total soil nitrogen (TN) is a key aspect of precision fertilization. Common current methods, such as the Kjeldahl method, are not suitable for on-site applications. Laser-induced breakdown spectroscopy (LIBS), noted for its rapid data acquisition and high precision, has been widely deployed for rapid soil sensing. However, the time-consuming sample preprocessing stage restricts the on-site application of LIBS. In this study, we employed a powder adhesion (PA) method to shorten the preprocessing cycle to 3 min. A transfer learning approach named TransLIBS is introduced to ensure the estimation performance of PA. Compared to a calibration model developed directly on the target domain, the model transferred by TransLIBS raises the validation R² by 0.134 and lowers the validation RMSE by 0.312 g kg−1. The F-test method is used to identify active variables, and feature-map visualization is employed to interpret the transfer mechanism of the TransLIBS approach. The visualization results highlight the most influential variables, located in the 212–310 nm and 391–395 nm ranges. Transfer learning has advanced the application of LIBS in soil, providing more opportunities for on-site LIBS detection.
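The F-test screening of active spectral variables mentioned above can be illustrated with scikit-learn's `f_regression`, which scores each wavelength channel's linear association with the target. The spectra, TN values, channel count, and "active" channel indices below are all invented for the example and do not correspond to the paper's 212–310 nm or 391–395 nm bands.

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 120, 50
# Hypothetical LIBS spectra: 50 wavelength channels per soil sample
spectra = rng.normal(size=(n_samples, n_wavelengths))
# Synthetic TN values driven by two "active" channels (indices 10 and 30)
tn = 2.0 * spectra[:, 10] - 1.5 * spectra[:, 30] \
     + rng.normal(scale=0.1, size=n_samples)

# Univariate F-test: one F-statistic and p-value per channel
f_scores, p_values = f_regression(spectra, tn)
top2 = set(np.argsort(f_scores)[-2:])
print("most informative channels:", sorted(top2))
```

Channels that carry no signal receive small F-statistics, so ranking by `f_scores` recovers the planted active channels; on real spectra the same ranking identifies candidate emission lines for the calibration model.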

Citations: 0