Hyperspectral reflectance and machine learning for multi-site monitoring of cotton growth
Pub Date: 2024-08-13 | DOI: 10.1016/j.atech.2024.100536
Hyperspectral measurements can help with rapid decision-making and collecting data across multiple locations. However, there are multiple data processing methods (Savitzky-Golay [SG], first derivative [FD], and normalization) and analyses (partial least squares regression [PLS], weighted k-nearest neighbor [KKNN], support vector machine [SVM], and random forest [RF]) that can be used to determine the best relationship between physical measurements and hyperspectral data. In the current study, FD was the best data processing method and SVM was the best model for predicting average cotton (Gossypium spp., Malvaceae) height and nodes. However, the combination of FD and RF was best at predicting cotton leaf area index, canopy cover, and chlorophyll content across the growing season. Additionally, results from models developed by both SVM and RF were closely related to pseudo-CHIME satellite wavebands, where in-situ hyperspectral data were matched to the spectral resolutions of a future hyperspectral satellite. The information and results presented will aid producers and other members of the cotton industry in making rapid and meaningful decisions that could result in greater yield and sustainable intensification.
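The first-derivative preprocessing named above is commonly computed with local polynomial (Savitzky-Golay-style) fits. The sketch below illustrates the idea; it is not the study's implementation, and the window and polynomial order are assumed values.

```python
import numpy as np

def savgol_first_derivative(y, window=7, polyorder=2):
    """First derivative of a reflectance spectrum via local polynomial
    (Savitzky-Golay-style) fits centred on each band."""
    y = np.asarray(y, dtype=float)
    half = window // 2
    deriv = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        xs = np.arange(lo, hi) - i               # window positions relative to the centre band
        coeffs = np.polyfit(xs, y[lo:hi], polyorder)
        deriv[i] = np.polyval(np.polyder(coeffs), 0.0)  # slope of the local fit at the centre
    return deriv
```

Because the local fit reproduces low-order polynomials exactly, a linearly increasing spectrum yields a constant derivative equal to its slope.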
Hybrid Adaptive Multiple Intelligence System (HybridAMIS) for classifying cannabis leaf diseases using deep learning ensembles
Pub Date: 2024-08-11 | DOI: 10.1016/j.atech.2024.100535
Optimizing cannabis crop yield and quality necessitates accurate, automated leaf disease classification systems for timely detection and intervention. Existing automated solutions, however, are insufficiently tailored to the specific challenges of cannabis disease identification, struggling with robustness across varied environmental conditions and plant growth stages. This paper introduces a novel Hybrid Adaptive Multi-Intelligence System for Deep Learning Ensembles (HyAMIS-DLE), utilizing a comprehensive dataset reflective of the diversity in cannabis leaf diseases and their progression. Our approach combines non-population-based decision fusion in image preprocessing with population-based decision fusion in classification, employing multiple CNN architectures. This integration yields a significant improvement in performance metrics: HyAMIS-DLE achieves an accuracy of 99.58 %, outperforming conventional models by up to 4.16 %, and exhibits superior robustness and an enhanced Area Under the Curve (AUC) score, effectively distinguishing between healthy and diseased leaves. The successful deployment of HyAMIS-DLE within our Automated Cannabis Leaf Disease Classification System (A-CLDC-S) demonstrates its practical value, contributing to increased crop yields, reduced losses, and the promotion of sustainable agricultural practices.
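Decision fusion across CNN ensemble members is often implemented as weighted soft voting over per-model class probabilities. The sketch below is a generic illustration of that idea, not the HyAMIS-DLE fusion rule; the weights stand in for what a population-based optimizer would tune.

```python
import numpy as np

def fuse_predictions(prob_stacks, weights=None):
    """Weighted soft-voting fusion of per-model class probabilities.

    prob_stacks: array-like of shape (n_models, n_samples, n_classes).
    Returns the fused class label per sample."""
    p = np.asarray(prob_stacks, dtype=float)
    w = np.ones(p.shape[0]) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize model weights
    fused = np.tensordot(w, p, axes=1)    # weighted average -> (n_samples, n_classes)
    return fused.argmax(axis=1)
```

With unequal weights, a strong minority model can be overruled, which is exactly the degree of freedom an evolutionary search over weights exploits.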
Semantic segmentation for plant leaf disease classification and damage detection: A deep learning approach
Pub Date: 2024-08-10 | DOI: 10.1016/j.atech.2024.100526
Agriculture sustains the livelihoods of a significant portion of India's rural population, yet challenges persist in manual practices and disease management. To address these issues, this paper presents an automated plant leaf damage detection and disease identification system leveraging advanced deep learning techniques. The proposed method consists of six stages: first, YOLOv8 for region-of-interest identification from drone images; second, DeepLabV3+ for background removal to facilitate disease classification; third, a CNN model for accurate disease classification, achieving high training and validation accuracies (96.97 % and 92.89 %, respectively); fourth, UNet semantic segmentation for precise pixel-level damage detection with an evaluation accuracy of 99 %; fifth, evaluation of disease severity; and sixth, suggestion of tailored remedies based on disease type and damage state. Experimental analysis using the Plant Village dataset demonstrates the effectiveness of the proposed method in detecting various defects in plants such as apple, tomato, and corn. This automated approach holds promise for enhancing agricultural productivity and disease management in India and beyond.
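Stage five, disease-severity evaluation, typically reduces the pixel-level damage mask to a damaged-area ratio over the leaf. A minimal sketch, assuming binary masks and hypothetical grade thresholds (the paper does not specify its severity scale):

```python
import numpy as np

def damage_severity(damage_mask, leaf_mask):
    """Fraction of leaf pixels flagged as damaged, with a coarse severity grade.
    Both inputs are binary arrays of identical shape."""
    leaf_pixels = np.count_nonzero(leaf_mask)
    if leaf_pixels == 0:
        return 0.0, "no leaf"
    ratio = np.count_nonzero(np.logical_and(damage_mask, leaf_mask)) / leaf_pixels
    grade = "low" if ratio < 0.1 else "moderate" if ratio < 0.3 else "severe"
    return ratio, grade
```

The grade could then index into a remedy lookup table for stage six.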
Four-parameter beta mixed models with survey and Sentinel-2A satellite data for predicting paddy productivity
Pub Date: 2024-08-09 | DOI: 10.1016/j.atech.2024.100525
Accurate paddy productivity prediction is essential for ensuring food security, fostering agricultural sustainability, and driving economic development. However, existing prediction models often overlook the unique characteristics of the paddy productivity distribution, which varies between areas, is skewed, and is bounded within a certain minimum and maximum range, following a four-parameter beta distribution. Consequently, these models yield less accurate and potentially misleading predictions. Additionally, most approaches fail to capture the complex interrelationships among variables that arise when satellite data are incorporated alongside survey data, an approach recognized as key to improving prediction accuracy and optimizing farming practices. To address these shortcomings, this study introduces a four-parameter beta Generalized Linear Mixed Model (GLMM) augmented within a four-parameter beta Generalized Mixed Effect Tree (GMET). The four-parameter beta GMET, an extension of the four-parameter beta GLMM integrated with a regression tree, offers enhanced flexibility in modeling complex relationships. Application of this methodology to an empirical study in Central Kalimantan and Karawang reveals notable improvements over previous methods, as evidenced by substantially lower AIC and RRMSE values. Notably, the analysis identifies lagged values of band 4, band 8, and NDVI from Sentinel-2A satellite data as significant predictors of paddy productivity, outweighing the farmer survey variables in importance. This underscores the potential of satellite data for paddy productivity prediction, offering a more efficient and cost-effective alternative to survey-based methods. By leveraging satellite technology, future efforts in paddy productivity prediction can achieve higher efficiency and accuracy, contributing to informed decision-making in agricultural management.
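A four-parameter beta distribution is a standard beta rescaled to an interval [y_min, y_max], so fitting reduces to mapping the bounded response onto (0, 1) and back. A minimal sketch of that transform (the bounds below are illustrative, not the study's estimates):

```python
import numpy as np

def to_standard_beta(y, y_min, y_max):
    """Map a bounded response onto (0, 1) for standard beta-likelihood fitting."""
    return (np.asarray(y, dtype=float) - y_min) / (y_max - y_min)

def from_standard_beta(z, y_min, y_max):
    """Map a (0, 1) prediction back to the original productivity scale."""
    return y_min + np.asarray(z, dtype=float) * (y_max - y_min)
```

The two shape parameters of the beta are then estimated on the transformed scale, while y_min and y_max form the distribution's other two parameters.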
Proximally sensed RGB images and colour indices for distinguishing rice blast and brown spot diseases by k-means clustering: Towards a mobile application solution
Pub Date: 2024-08-09 | DOI: 10.1016/j.atech.2024.100532
Rice blast (RB) and brown spot (BS) are economically important rice diseases that cause severe yield losses annually. Both share the same host and produce quite similar lesions, which leads to confusion in identification by farmers. Proper identification is essential for better disease management. Visual identification requires trained experts, and laboratory-based molecular techniques, though accurate, are costly and time-consuming. This study investigated the differentiation of lesions of these two diseases based on proximally sensed digital RGB images and derived colour indices. Digital images of lesions were acquired using a smartphone camera. Thirty-six colour indices, drawn from three colour spaces (RGB, HSV, and La*b*), were evaluated by k-means clustering to distinguish the diseases. Briefly, the background of the images was masked to isolate the leaf-spot lesion, and colour indices were derived as features from the centre region across the lesion, coinciding with the common identification practice of plant pathologists. The results revealed that the 36 indices delineated the two diseases with 84.3 % accuracy. However, the accuracy was governed mostly by indices associated with the R, G, and B profiles. The G/R, NGRDI, (R + G + B)/R, VARI, (G + B)/R, R/G, Nor_r, G-R, Mean_A, and Logsig indices contributed most to distinguishing the diseases. Therefore, these RGB-based colour indices can be used to distinguish blast and brown spot diseases using the k-means algorithm. The results present an alternative, non-destructive, objective method for identifying RB and BS disease symptoms. Based on the findings, a mobile application, Blast O spot, was developed to differentiate the diseases in the field.
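Several of the high-ranking indices are simple ratios of mean channel values over the lesion region. A sketch of how such features might be computed before clustering (illustrative only; the study derives 36 indices, not just these four):

```python
import numpy as np

def colour_indices(rgb):
    """Selected RGB colour indices from mean (R, G, B) values of the lesion region.
    NGRDI and VARI follow their standard definitions."""
    r, g, b = (float(c) for c in rgb)
    return {
        "G/R": g / r,
        "NGRDI": (g - r) / (g + r),          # normalized green-red difference index
        "VARI": (g - r) / (g + r - b),       # visible atmospherically resistant index
        "(R+G+B)/R": (r + g + b) / r,
    }
```

Each lesion image yields one such feature vector, and k-means then partitions the vectors into two clusters corresponding to RB and BS.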
Development of a cloud-based IoT system for livestock health monitoring using AWS and Python
Pub Date: 2024-08-08 | DOI: 10.1016/j.atech.2024.100524
The agriculture industry is currently facing significant challenges in effectively monitoring livestock health. Traditional health monitoring methods are often labor-intensive, inefficient, and insufficiently responsive to the needs of modern farming. As the number of IoT devices in agriculture proliferates, issues of scalability and computational load have become prominent, necessitating efficient and scalable solutions. This research introduces a cloud-based architecture aimed at enhancing livestock health monitoring. The system is designed to track critical health indicators such as movement patterns, body temperature, and heart rate, utilizing AWS for robust data handling and Python for data processing and real-time analytics. The proposed system incorporates Narrowband IoT (NB-IoT) technology, which is optimized for low-bandwidth, long-range communication, making it suitable for rural and remote farming locations. The architecture's scalability allows for the effective management of varying numbers of IoT devices, which is essential for adapting to changing herd sizes and farm scales. Preliminary experiments conducted to assess the system's performance have demonstrated its durability and effectiveness, indicating a successful integration of AWS IoT Cloud services with the deployed IoT devices. Furthermore, the study explores the implementation of predictive analytics to facilitate proactive health management in livestock. By predicting potential health issues before they become apparent, the system can offer significant improvements in animal welfare and farm efficiency. The integration of cloud computing and IoT not only meets the growing technological needs of modern agriculture but also sets a new benchmark in the development of sustainable farming practices. The findings could have broad implications for the future of livestock management, potentially leading to widespread adoption of technology-driven health monitoring systems in agriculture. This would help optimize livestock health management globally, thereby enhancing productivity and sustainability in the agricultural sector.
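Proactive alerting of the kind described often starts from a simple statistical baseline: flag a vital-sign reading that deviates sharply from an animal's recent history. A minimal sketch (the threshold k and the temperature values are illustrative, not from the paper):

```python
from statistics import mean, stdev

def health_alert(readings, latest, k=3.0):
    """Flag a new vital-sign reading deviating more than k sample standard
    deviations from the animal's recent history (a simple z-score rule)."""
    mu, sigma = mean(readings), stdev(readings)
    return abs(latest - mu) > k * sigma
```

In a cloud deployment this check would run on the analytics side against the stream of per-animal telemetry, with alerts pushed back to the farmer.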
Hyperspectral image reconstruction for predicting chick embryo mortality towards advancing egg and hatchery industry
Pub Date: 2024-08-08 | DOI: 10.1016/j.atech.2024.100533
As the demand for food surges and the agricultural sector undergoes a transformative shift towards sustainability and efficiency, the need for precise and proactive measures to ensure the health and welfare of livestock becomes paramount. In the egg and hatchery industry, hyperspectral imaging (HSI) has emerged as a cutting-edge, non-destructive technique for fast and accurate egg quality analysis, including detecting chick embryo mortality. However, the high cost and operational complexity compared to conventional RGB imaging are significant bottlenecks to the widespread adoption of HSI technology. To overcome these hurdles and unlock the full potential of HSI, a promising solution is hyperspectral image reconstruction from standard RGB images. This study aims to reconstruct hyperspectral images from RGB images for non-destructive early prediction of chick embryo mortality. Initially, the performance of different image reconstruction algorithms (HRNET, MST++, Restormer, and EDSR) was compared for reconstructing hyperspectral images of eggs in the early incubation period. The reconstructed spectra were then used to differentiate eggs with live embryos from those with dead embryos using the XGBoost and Random Forest classification methods. Among the reconstruction methods, HRNET showed the best performance, with an MRAE of 0.0955, an RMSE of 0.0159, and a PSNR of 36.79 dB. This study supports the idea that harnessing imaging technology integrated with smart sensors and data analytics has the potential to improve automation, enhance biosecurity, and optimize resource management towards sustainable Agriculture 4.0.
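The three quality metrics reported (MRAE, RMSE, PSNR) follow standard definitions and can be computed as below. This is a generic sketch, assuming reflectance scaled to a [0, 1] data range.

```python
import numpy as np

def reconstruction_metrics(pred, truth, data_range=1.0):
    """MRAE, RMSE, and PSNR between a reconstructed and a reference
    hyperspectral cube (any matching shapes)."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    mse = np.mean((pred - truth) ** 2)
    mrae = np.mean(np.abs(pred - truth) / np.abs(truth))   # mean relative absolute error
    rmse = np.sqrt(mse)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)          # in dB
    return mrae, rmse, psnr
```

Note that MRAE is undefined where the reference reflectance is zero; practical implementations clip or mask such pixels.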
Pub Date : 2024-08-07DOI: 10.1016/j.atech.2024.100530
The tractor serves as a crucial power source in agricultural operations. However, the tractor's power often remains underutilized due to a mismatch between the tractor and implement, considering specific field conditions. To enhance system output, it becomes vital to acquire data on performance-related parameters for the tractor-implement combination. In this study we develop a real-time Instrumented Tractor Performance Monitoring System (ITPMS) using the Internet-of-Things (IoT). This system consists of a Raspberry Pi, a GPS sensor, a proximity sensor, a rotary potentiometer, and a three-point hitch dynamometer. The rotary potentiometer measures tillage depth, while the three-point hitch dynamometer used to measure data on draft force. Proximity sensors are installed on a two-wheel drive (2WD) tractor to measure forward speed and drive-wheel slip. We establish a dedicated web server using a Google® Firebase® project to store data from all sensors through Raspberry Pi. Additionally, we design a web interface and a mobile application to provide real-time data generated from the sensors. Field experiments were done to evaluate and monitor the performance parameters of the tractor-implement combination utilising the developed ITPMS. The results demonstrate that the system effectively monitors the performance parameters necessary for tractor-implement combination. Furthermore, the system's capability to update data to the IoT server in real-time is validated. Overall, the development and implementation of this Raspberry Pi based IoT system, provides a reliable and efficient solution for real-time performance monitoring of instrumented tractors.
{"title":"Development and implementation of a raspberry Pi-based IoT system for real-time performance monitoring of an instrumented tractor","doi":"10.1016/j.atech.2024.100530","journal":"Smart agricultural technology","publicationDate":"2024-08-07","openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772375524001357/pdfft?md5=6081ec8bb670ddf38bff895433e2db2a&pid=1-s2.0-S2772375524001357-main.pdf"}
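The ITPMS abstract above lists the parameters the system derives from its sensors (forward speed, drive-wheel slip, draft force). As an illustration only, here is a minimal sketch of two conventional tractor-performance calculations; the function names and units are my assumptions, not the authors' code, and the formulas are the standard textbook definitions rather than anything taken from the paper:

```python
# Hedged sketch: standard performance metrics an ITPMS-style monitor
# could compute from its sensor readings (assumed formulas/units).

def wheel_slip_percent(theoretical_speed: float, actual_speed: float) -> float:
    """Drive-wheel slip (%): relative shortfall of actual forward speed
    against the theoretical (no-slip) wheel speed."""
    if theoretical_speed <= 0:
        raise ValueError("theoretical speed must be positive")
    return (1.0 - actual_speed / theoretical_speed) * 100.0

def drawbar_power_kw(draft_kn: float, speed_m_s: float) -> float:
    """Drawbar power (kW) = draft force (kN) x forward speed (m/s)."""
    return draft_kn * speed_m_s

# Example: 2.0 m/s theoretical vs 1.7 m/s actual -> 15 % slip;
# 12.5 kN draft at 1.7 m/s -> 21.25 kW drawbar power.
slip = wheel_slip_percent(2.0, 1.7)
power = drawbar_power_kw(12.5, 1.7)
```

In a deployed system these values would be computed on the Raspberry Pi before being pushed to the cloud store, so the dashboard only receives finished metrics rather than raw pulse counts.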
Pub Date: 2024-08-06 | DOI: 10.1016/j.atech.2024.100528
Measuring fruit mass and volume is a time-consuming and tedious task that can affect production planning. This study developed a computer vision system to estimate the volume and mass of baobab fruits from single-view images captured with inexpensive and readily available cameras, such as those in smartphones. The baobab fruits were collected from two study fields within the savanna ecological zone. Their images were captured, and the fruits were then detected and segmented with over 97 % accuracy. The segmented images were binarized, and two-dimensional (2D) features such as the segmented area, centroid, bounding box, equivalent diameter, and major diameter were extracted from them. Fruit volumes were estimated from the 2D features using random forest, linear, polynomial, and radial support vector machine models. All the models achieved a high goodness of fit; however, the random forest model delivered the best performance, with an R² value of 99.8 %. The relationship between mass and volume was quadratic (mass = 38.23 + 0.25 × volume + 4.49e−05 × volume²), with an R² value of 92 %. Correlations were validated via plots and statistical tests, and credible intervals for point estimates were determined from the posterior distributions of their samples. These results highlight the potential of artificial intelligence methods in less constrained environmental settings for ecological research and agricultural management. Commercial companies producing baobab powder and seed oil could apply these models for effective production planning. To enhance the model, it would be beneficial to gain a better understanding of how climate gradients affect the morphological characteristics of baobab fruits.
{"title":"Application of computer vision and machine learning in morphological characterization of Adansonia digitata fruits","doi":"10.1016/j.atech.2024.100528","journal":"Smart agricultural technology","publicationDate":"2024-08-06","openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772375524001333/pdfft?md5=28c0bd462ab75be057a6e0439591686c&pid=1-s2.0-S2772375524001333-main.pdf"}
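The baobab abstract above reports a fitted quadratic mass-volume relation and uses equivalent diameter as a 2D shape feature. A minimal sketch of both follows; the quadratic coefficients are the ones reported in the abstract (units as used in the study, not restated in the abstract), the equivalent-diameter formula is the standard definition (diameter of a circle with the same area), and the function names are my own:

```python
import math

# Equivalent diameter of a segmented region: the diameter of the
# circle whose area equals the region's area (standard 2D feature).
def equivalent_diameter(area: float) -> float:
    return math.sqrt(4.0 * area / math.pi)

# Fitted mass-volume relation reported in the abstract (R^2 = 92 %):
# mass = 38.23 + 0.25 * V + 4.49e-5 * V^2
def baobab_mass_from_volume(volume: float) -> float:
    return 38.23 + 0.25 * volume + 4.49e-5 * volume ** 2

# Example: a fruit of volume 500 -> mass 38.23 + 125 + 11.225 = 174.455
mass = baobab_mass_from_volume(500.0)
```

In practice the area would come from the binarized segmentation mask (e.g. a pixel count scaled to physical units), and the predicted volume would be fed into the fitted relation to get mass without weighing the fruit.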
Pub Date: 2024-08-06 | DOI: 10.1016/j.atech.2024.100529
Precision Farming (PF) offers a range of solutions to assist the decision-making process on farms. Current PF technologies, such as variable-rate site-specific applications, can bring farmers financial benefits as well as environmental advantages. Growing scientific research and an expanding number of PF products are supporting increasing interest in PF applications. However, actual on-farm adoption of these technologies often remains low. Therefore, there is a need to disseminate and transfer knowledge about the positive aspects of PF. One way to facilitate the adoption of PF technologies is education and training for farmers and other interested stakeholders. This paper presents a case study using the computer game Farming Simulator as an educational tool for raising awareness of PF in an engaging and enjoyable way. Two distinct downloadable content (DLC) packages were developed and implemented in the 2019 and 2022 versions of the game, respectively, each with a range of PF functionalities (automatic steering, variable-rate applications, and yield mapping, among others). The PF DLCs have received positive feedback from students and scientists as well as the general public. The growing number of downloads (3,661,069 in total for both DLC versions as of 15 November 2023) demonstrates the effectiveness of computer games as an educational tool to inform stakeholders (farmers, scientists, students, and the general public) about agricultural challenges and the potential of PF as a solution.
{"title":"Stimulating awareness of Precision Farming through gamification: The Farming Simulator case","doi":"10.1016/j.atech.2024.100529","journal":"Smart agricultural technology","publicationDate":"2024-08-06","openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772375524001345/pdfft?md5=ad458eeb92a594eed7a96ba8983898b8&pid=1-s2.0-S2772375524001345-main.pdf"}