Pub Date: 2024-08-05 | DOI: 10.1016/j.atech.2024.100521
Various deep learning methods are employed to detect stress and diseases in market garden crops, as well as to assess their severity. This study aims to comprehensively analyze these techniques and identify potential research avenues. The diversity of deep learning techniques was explored through a literature review based on the PRISMA guidelines. Search equations were defined, resulting in a sample of 1,422 publications, of which 72 were deemed usable and considered in the final analysis. For classification tasks, hybrid CNN models were the most widely used (19.2%). Commonly utilized models included VGG16 (10%), InceptionV3 (6.1%), DCNN (5%), and YoloV5 (5%). In object detection tasks, Fast R-CNN was used six times, followed by YoloV5 (three occurrences) and YoloV3 (two occurrences). In segmentation tasks, Mask R-CNN accounted for 28.67% of the models, while DeepLabV3+ accounted for 24.98%. Assessing disease severity in market garden crops is complex due to the unique criteria for each plant disease and the presence of multiple diseases across different crop types. To address this complexity, establishing a standardized method is crucial. Further research is essential to enhance the application of deep learning techniques to the study of market garden crops, including gathering extensive datasets that cover various crop-disease scenarios and considering the impact of climate variations on stress manifestation.
Title: Deep learning methods for enhanced stress and pest management in market garden crops: A comprehensive analysis
Pub Date: 2024-08-05 | DOI: 10.1016/j.atech.2024.100527
Citrus fruit (Citrus nobilis Lour.) from the Indonesian region is reported to have high economic value due to attractive nutritional, nutraceutical, and sensory attributes. However, authenticating its geographic origin is challenging because of adulteration and similarity in visual appearance. Therefore, this study aimed to develop an effective method based on laser-light backscattering imaging (LLBI) for authenticating the geographic origin of citrus from this region. A total of 200 citrus samples were collected from the Medan, Malang, Jember, and Banyuwangi regions, the four main citrus-producing areas in Indonesia. Three laser wavelengths, namely 450, 532, and 648 nm, were beamed to produce the backscattering images. A combination of the gray-level co-occurrence matrix (GLCM) method and the support vector machine (SVM) algorithm was then applied to extract texture features and build a classification model, respectively. In this context, three kernel functions, namely linear, radial basis function, and polynomial, were compared for authenticating the geographic origin of citrus. The results showed that the proposed technique achieved 96.667 % accuracy and 3.333 % apparent error. The best combination, a 450 nm laser wavelength with a polynomial kernel function, produced reliable predictive power. This study holds valuable implications for advancing sensing technology devices that authenticate geographic origin, specifically for citrus fruit.
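As a rough illustration of the texture-feature step described above, the sketch below computes a gray-level co-occurrence matrix and three standard GLCM statistics in plain NumPy. It is a minimal stand-in, not the authors' pipeline: the random image, the quantization to 8 gray levels, and the single pixel offset are all assumptions made for the example.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """GLCM texture features for one pixel offset.

    img: 2-D array of ints in [0, levels).
    Returns contrast, energy, and homogeneity of the normalized GLCM.
    """
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    # Count co-occurring gray-level pairs at the given offset
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    glcm /= glcm.sum()                        # joint probability p(i, j)
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)    # local intensity variation
    energy = np.sum(glcm ** 2)                # texture uniformity
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Toy stand-in for a backscattering image, quantized to 8 gray levels
rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(32, 32))
print(glcm_features(img))
```

In a full pipeline these feature vectors (typically over several offsets and angles) would then be fed to an SVM classifier.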
Title: Implementation of laser-light backscattering imaging for authentication of the geographic origin of Indonesia region citrus
Pub Date: 2024-08-01 | DOI: 10.1016/j.atech.2024.100497
This paper introduces a precision agriculture application aimed at mitigating the excessive use of agricultural chemicals, including pesticides and fungicides, during crop spraying. The prevailing spraying techniques face two principal challenges: first, the indiscriminate dispensation of chemicals irrespective of plant size and requirements, and second, the farmer's exposure to health hazards. To tackle these issues, a detection and segmentation model employing both YOLOv5 and YOLOv6 architectures is proposed, and a comparative assessment of their accuracies within the same model category is conducted. The training dataset originates from a subset of the TobSet dataset, while the trained models are evaluated on publicly accessible aerial videos and images from an available repository. The best detection accuracy for tobacco plants is observed with the YOLOv6s and YOLOv5-segmentation models, yielding accuracies of 95% and 94.8%, respectively. Additional performance metrics such as precision, recall, area under the PR-curve, inference time, and NMS time per image are also compared between the two models. The YOLOv5-segmentation model outperforms the YOLOv6s model in precision, recall, and area under the PR-curve, at the cost of slightly longer inference and NMS times per image; overall speed performance is comparable for the two models. Subsequently, the two models are evaluated on drone videos recorded during traversal at a speed of 2 km/hr. The results demonstrate the superiority of the YOLOv5-segmentation model over the YOLOv6s model, with detection accuracies of 98.1% and 97.3%, respectively. These findings indicate the potential of integrating YOLOv5 segmentation models into precision spraying applications, contributing to improved agricultural practices overall.
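The YOLO-family detectors compared above all post-process their raw detections with non-maximum suppression, the per-image NMS timing the paper reports. A minimal sketch of IoU-based greedy NMS, with made-up boxes and scores (not the paper's data):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    dropping any box that overlaps an already-kept box by > thresh IoU."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    for idx in order:
        if all(iou(boxes[idx], boxes[k]) <= thresh for k in keep):
            keep.append(idx)
    return keep

# Two overlapping detections of the same plant plus one distinct plant
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the two overlapping boxes collapse to one
```

Production detectors use batched, vectorized implementations, but the IoU-threshold semantics are the same.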
Title: Real-time precision spraying application for tobacco plants
Pub Date: 2024-08-01 | DOI: 10.1016/j.atech.2024.100457
Soluble solids content (SSC) and pH of red globe grapes are crucial measures of quality. In this paper, we used hyperspectral imaging technology to achieve nondestructive detection and distribution visualization of the SSC and pH of red globe grapes. First, hyperspectral images of the samples were collected. Then, CARS, SPA, GA, and IRIV were used to extract feature variables from the raw spectral (RAW) information, and PLSR prediction models were developed. By comparing the different prediction models, RAW-IRIV-PLSR was selected as the optimal model. Finally, the SSC and pH of the samples were calculated to obtain grayscale images, and a pseudo-color transformation was performed to visualize the distribution of SSC and pH. A study of maturity classification showed that the best discriminant classification model was RAW-IRIV-ELM. Hyperspectral imaging thus also provides a new method for maturity stage classification of red globe grapes.
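As a hedged sketch of the PLSR modelling step, the following implements a single-component PLS regression (NIPALS-style) in NumPy on synthetic "spectra". The real study fits multi-component models to measured hyperspectral data, so treat this purely as an illustration of the technique; all names and data here are invented.

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS regression: project X onto the direction of
    maximum covariance with y, then regress y on that score."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    w = Xc.T @ yc
    w /= np.linalg.norm(w)            # weight vector (covariance direction)
    t = Xc @ w                        # scores: the latent variable
    b = (t @ yc) / (t @ t)            # regression coefficient on the score
    return Xm, ym, w, b

def pls1_predict(model, X):
    Xm, ym, w, b = model
    return (X - Xm) @ w * b + ym

# Toy "spectra": 50 samples x 20 wavelengths driven by one latent factor
rng = np.random.default_rng(1)
latent = rng.normal(size=50)
loadings = rng.normal(size=20)
X = np.outer(latent, loadings)
y = 3.0 * latent + 10.0               # a stand-in for SSC values
model = pls1_fit(X, y)
print(np.max(np.abs(pls1_predict(model, X) - y)))  # ~0 on noise-free data
```

Multi-component PLSR repeats this deflation step for several latent variables; chemometrics toolboxes handle that and cross-validated component selection.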
Title: SSC and pH prediction and maturity classification of grapes based on hyperspectral imaging
Pub Date: 2024-08-01 | DOI: 10.1016/j.atech.2024.100514
Amidst a changing climate, real-time soil moisture monitoring is vital for the development of in-season decision support tools that help farmers manage weather-related risks in agriculture. Precision Sustainable Agriculture (PSA) recently established a real-time soil moisture monitoring network across the central, Midwestern, and eastern U.S., but continuous field-scale sensor observations often come with data gaps and anomalies. To maintain the data quality and continuity needed for developing decision tools, a quality control system is necessary.
The International Soil Moisture Network (ISMN) introduced the Flagit module for anomaly detection in soil moisture time series observations. However, under certain conditions, Flagit's threshold- and spectral-based quality control approaches may underperform in identifying anomalies. Recently, deep learning methods have been successfully applied to detect anomalies in time series data in various disciplines, but their use for anomaly detection in agricultural time series datasets has not yet been investigated. This study focuses on developing a bi-directional Long Short-Term Memory (LSTM) model, referred to as DeepQC, to identify anomalies in soil moisture time series data. Manually flagged PSA observations were used for training, validating, and testing the model, following an 80:10:10 split. The study then compared the DeepQC and Flagit estimates to assess their relative performance.
Flagit correctly flagged 95.8 % of the correct observations but only 50.3 % of the anomalous observations, indicating its limitations in identifying anomalies, particularly at sites with more than 30 % anomalies. On the other hand, DeepQC correctly flagged 89.8 % of the correct observations and 99.5 % of the anomalies, with an overall accuracy of 95.6 %, in significantly less time, demonstrating its superiority over the Flagit approach. Importantly, the performance of DeepQC remained consistent regardless of the number of anomalies in site observations. Given these promising results, future studies will focus on implementing and fine-tuning this model on national and global soil moisture networks.
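The three headline figures in this paragraph (fraction of correct observations kept, fraction of anomalies caught, overall accuracy) follow directly from a confusion-matrix comparison of automatic flags against manual labels. A minimal sketch with invented toy labels, not the study's data:

```python
def flagging_report(manual, predicted):
    """Compare automatic anomaly flags against manual labels.
    Labels: 1 = anomaly, 0 = correct observation.
    Returns (fraction of correct observations kept, fraction of
    anomalies caught, overall accuracy)."""
    tn = sum(1 for m, p in zip(manual, predicted) if m == 0 and p == 0)
    tp = sum(1 for m, p in zip(manual, predicted) if m == 1 and p == 1)
    n_correct = manual.count(0)
    n_anom = manual.count(1)
    spec = tn / n_correct if n_correct else float("nan")
    sens = tp / n_anom if n_anom else float("nan")
    acc = (tn + tp) / len(manual)
    return spec, sens, acc

# Toy example: 6 correct observations, 4 anomalies
manual    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
predicted = [0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
print(flagging_report(manual, predicted))  # (0.833..., 0.75, 0.8)
```

The gap between Flagit and DeepQC above is precisely a gap in the second figure (anomaly sensitivity) at a modest cost in the first.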
Title: DeepQC: A deep learning system for automatic quality control of in-situ soil moisture sensor time series data
Pub Date: 2024-08-01 | DOI: 10.1016/j.atech.2024.100517
As per the FAO, insect pests cause 30 to 40 percent crop losses every year across the globe. The identification, classification, and management of insect pests are very important to avoid significant loss, yet carrying out these tasks manually is time-consuming and less effective. Traditional methods often fall short in addressing dynamic pest behaviours, resulting in crop losses and increased chemical usage. The adoption of Artificial Intelligence (AI) techniques in pest identification and management is therefore a good alternative, motivated by the challenges posed by evolving pest populations and the desire for sustainable agricultural practices. AI offers a transformative approach by utilizing advanced algorithms to analyse intricate data patterns from numerous sources like sensors and imagery. This enables accurate pest identification, early detection, and predictive modelling, enhancing decision-making for pest control by minimizing indiscriminate pesticide application and optimizing interventions. AI not only reduces economic losses but also promotes eco-friendly strategies for efficient and resilient pest management systems. The present review is an endeavour to explain the applications and future scope of AI in insect pest management.
Title: Unravelling the use of artificial intelligence in management of insect pests
Pub Date: 2024-08-01 | DOI: 10.1016/j.atech.2024.100523
Software architecture forms the cornerstone for achieving and ensuring various software quality attributes. It encompasses the collected requirements of the product, serving as a blueprint that delineates quality features for all project stakeholders, along with methods for their measurement and control. Despite the significant increase in IoT-based agricultural systems, there is a dearth of studies on the quality attributes of their software architecture. To address this need, this study offers an overview of components and services tailored to specific quality attributes pertinent to agricultural systems. It identifies, investigates, and presents quality attributes influencing the design of software architecture for IoT-based agriculture systems, including performance, scalability, flexibility, interoperability, productivity, extensibility, and security, and maps them to the corresponding components of the architecture. Several issues affecting software architecture quality in IoT-based agriculture systems are also identified and discussed, such as real-time processing and interoperability challenges arising from the variety of devices and protocols these systems use. The findings of this study offer valuable insights for developing, executing, and refining IoT-based agricultural systems to fulfill the changing requirements of the agriculture industry.
Title: Quality attributes of software architecture in IoT-based agricultural systems
Pub Date: 2024-08-01 | DOI: 10.1016/j.atech.2024.100512
Digital Twins have gained attention in various industries by creating virtual replicas of real-world systems through data collection and machine learning. These replicas are used to run simulations, monitor processes, and support decision-making, extracting valuable information to benefit users. Reinforcement learning is a promising machine learning technique for Digital Twins, as it relies on a virtual representation of an environment or system to learn an optimal policy for a given task, which is exactly what a Digital Twin provides. Through its self-learning nature, reinforcement learning can not only optimize given tasks but might also find ways to achieve goals that were previously unexplored, opening up new avenues to tackle tasks like pest and disease detection, crop growth, or crop rotation planning. However, while reinforcement learning can benefit many agricultural practices, the explainability of the employed models is frequently disregarded, diminishing its benefits as users fail to build trust in the suggested decisions. Consequently, there is a notable absence of focus on explainable reinforcement learning techniques, indicating a significant area for future development, since an industry as vital to many people as the agri-food sector needs to rely on resilient methods and understandable decisions. Explainable AI models contribute to achieving both of these requirements. The use of reinforcement learning in agriculture therefore has the potential to open up a variety of reinforcement learning-based Digital Twin applications in agricultural domains. To explore these domains, this review categorises existing research works that employ reinforcement learning techniques in agricultural settings. On the one hand, we examine the application domains and categorise the works accordingly; on the other hand, we group the works by the reinforcement learning method involved to gain an overview of the currently employed models.
Through this analysis, the review seeks to provide insights into the state-of-the-art reinforcement learning applications in agriculture. Additionally, we aim to identify gaps and opportunities for future research focusing on potential synergies of reinforcement learning and Digital Twins to tackle agricultural challenges and optimise farming processes, paving the way for more efficient and sustainable farming methodologies.
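As a toy illustration of the core idea the review builds on (an agent learning a policy by interacting with a simulated environment, here standing in for a Digital Twin), the sketch below runs tabular Q-learning on a five-state chain. Everything about the environment is an invented assumption for the example, not drawn from any of the reviewed works.

```python
import random

# Minimal tabular Q-learning on a toy 5-state "field" environment:
# the agent earns a reward only on reaching the rightmost state.
# A hypothetical simulation, not a real Digital Twin.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

def step(s, a):
    """Deterministic transition; reward 1.0 only at the goal."""
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(500):                        # training episodes in the "twin"
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # standard Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

policy = [q.index(max(q)) for q in Q]
print(policy)  # greedy action per state; states 0-3 should move right
```

Real agricultural tasks (irrigation, crop rotation) have far richer state and reward structures, which is where deep RL methods replace the table.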
{"title":"Current applications and potential future directions of reinforcement learning-based Digital Twins in agriculture","authors":"","doi":"10.1016/j.atech.2024.100512","DOIUrl":"10.1016/j.atech.2024.100512","url":null,"abstract":"<div><p>Digital Twins have gained attention in various industries by creating virtual replicas of real-world systems through data collection and machine learning. These replicas are used to run simulations, monitor processes, and support decision-making, extracting valuable information to benefit users. Reinforcement learning is a promising machine learning technique to use in Digital Twins, as it relies on a virtual representation of an environment or system to learn an optimal policy for a given task, which is exactly what a Digital Twin provides. Through its self-learning nature, reinforcement learning can not only optimize given tasks but might also find ways to achieve goals that were previously unexplored and, therefore, open up new avenues to tackle tasks like pest and disease detection, crop growth or crop rotation planning. However, while reinforcement learning can benefit many agricultural practices, the explainability of the employed models is frequently disregarded, diminishing its benefits as users fail to build trust in the suggested decisions. Consequently, there is a notable absence of focus on explainable reinforcement learning techniques, indicating a significant area for future development, as an industry as vital to many people as the agri-food sector needs to rely on resilient methods and understandable decisions. Explainable AI models contribute to achieving both of these requirements. Therefore, the use of reinforcement learning in agriculture has the potential to open up a variety of reinforcement learning-based Digital Twin applications in agricultural domains. To explore these domains, this review categorises existing research works that employ reinforcement learning techniques in agricultural settings. On the one hand, we examine the application domains and categorise the works accordingly. On the other hand, we group the works by the reinforcement learning method involved to gain an overview of the currently employed models. Through this analysis, the review seeks to provide insights into state-of-the-art reinforcement learning applications in agriculture. Additionally, we aim to identify gaps and opportunities for future research, focusing on potential synergies between reinforcement learning and Digital Twins to tackle agricultural challenges and optimise farming processes, paving the way for more efficient and sustainable farming methodologies.</p></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":null,"pages":null},"PeriodicalIF":6.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772375524001175/pdfft?md5=fb11f34d416b6388102d5886df92122e&pid=1-s2.0-S2772375524001175-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141950719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
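The abstract above describes the core reinforcement learning loop: an agent learns an optimal policy by interacting with a virtual representation of a system, which is exactly what a Digital Twin provides. As a minimal illustration of that loop, here is a tabular Q-learning sketch against a toy simulated environment; the irrigation scenario, state space, and reward function are entirely hypothetical and not drawn from any of the surveyed works.

```python
import random

# Toy "digital twin" of a field with discrete moisture levels 0..4.
# The agent chooses to irrigate (1) or wait (0); reward peaks at moderate
# moisture. All names and dynamics here are illustrative assumptions.
N_STATES, ACTIONS = 5, (0, 1)  # moisture levels, (wait, irrigate)

def step(state, action):
    """Twin dynamics: irrigation raises moisture, waiting lets it drain."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == 2 else -abs(nxt - 2) * 0.5  # best at level 2
    return nxt, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(20):  # finite-horizon rollout inside the twin
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: q[s][x])
            s2, r = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy: irrigate when too dry, wait when too wet.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
```

Because the twin is just a `step` function, the same training loop could be pointed at any simulator without changing the agent, which is the synergy the review highlights.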
Pub Date : 2024-08-01DOI: 10.1016/j.atech.2024.100506
Due to an increasing demand for food and pressures on our ecosystem, mechanisation and automation in agriculture have been proposed as one of the main solutions to the problems associated with overpopulation given today's living standards. To encourage the use of new technologies and bridge the gap between plant and computer science, here we validate an open-source pipeline capable of predicting real-time in situ fruit characteristics, specifically in this case for apples. Using Agroscope's phenotyping tool (ASPEN), we achieve an average precision at an intersection-over-union threshold of 50 % of 0.75 when using YOLOv8-m as the object detection algorithm, and, thanks to the use of multiple sensors, we find an average diameter error of 4.4 mm in the task of apple size determination. Our research demonstrates that although the pipeline tends to underestimate the actual fruit size, size estimation can not only be used to determine the size of apples per scan, but also to track the temporal apple size distribution in 4 different varieties. This research supports ASPEN in potentially replacing traditional field measurements, suggesting that other traits could also be digitally measured for standard orchard phenotyping, whether for scientific or agricultural output goals. Finally, we make publicly available a new dataset of more than 600 images (Agroscope_apple dataset) and a pre-trained model based on YOLOv8 and specifically trained for the in situ apple detection task. By doing so, we hope to increase the accessibility and use of new technologies in the field of agriculture.
{"title":"ASPEN study case: Real time in situ apples detection and characterization","authors":"","doi":"10.1016/j.atech.2024.100506","DOIUrl":"10.1016/j.atech.2024.100506","url":null,"abstract":"<div><p>Due to an increasing demand for food and pressures on our ecosystem, mechanisation and automation in agriculture have been proposed as one of the main solutions to the problems associated with overpopulation given today's living standards. To encourage the use of new technologies and bridge the gap between plant and computer science, here we validate an open-source pipeline capable of predicting real-time <em>in situ</em> fruit characteristics, specifically in this case for apples. Using Agroscope's phenotyping tool (ASPEN), we achieve an average precision at an intersection-over-union threshold of 50 % of 0.75 when using YOLOv8-m as the object detection algorithm, and, thanks to the use of multiple sensors, we find an average diameter error of 4.4 mm in the task of apple size determination. Our research demonstrates that although the pipeline tends to underestimate the actual fruit size, size estimation can not only be used to determine the size of apples per scan, but also to track the temporal apple size distribution in 4 different varieties. This research supports ASPEN in potentially replacing traditional field measurements, suggesting that other traits could also be digitally measured for standard orchard phenotyping, whether for scientific or agricultural output goals. Finally, we make publicly available a new dataset of more than 600 images (Agroscope_apple dataset) and a pre-trained model based on YOLOv8 and specifically trained for the in-situ apple detection task. By doing so, we hope to increase the accessibility and use of new technologies in the field of agriculture.</p></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":null,"pages":null},"PeriodicalIF":6.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772375524001114/pdfft?md5=0dc4b16a70000668d0ef2a34513958db&pid=1-s2.0-S2772375524001114-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141848884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
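The "average precision at 50 % intersection over union" figure quoted above rests on the standard IoU match criterion used to score object detectors: a predicted box counts as a true positive only if its overlap with a ground-truth box reaches the threshold. A minimal, generic sketch of that criterion (not the authors' code) follows.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred, gt, thresh=0.5):
    """A detection counts as correct when IoU with ground truth >= thresh."""
    return iou(pred, gt) >= thresh
```

Average precision at IoU 0.5 (AP50) is then the area under the precision-recall curve obtained by sweeping the detector's confidence threshold with this matching rule.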
Pub Date : 2024-08-01DOI: 10.1016/j.atech.2024.100519
Effective early pregnancy diagnosis is crucial for commercial rabbit breeding. Early pregnancy diagnosis enables the implementation of staged feeding for pregnant does, effectively preventing excessive weight gain and reducing the high mortality rates of kits during the birthing stage. This not only enhances production efficiency but also ensures the health and well-being of the breeding rabbits. The study introduces a method and device, the Rabbit Pregnancy Identification Device (RAPID), for detecting rabbit pregnancies using matrix optical sensing. RAPID comprises eight sensor modules and a central host unit. Each sensor module is equipped with three LEDs emitting light at wavelengths of 660 nm, 850 nm, and 940 nm, along with two photodiodes for data collection. A mobile application was developed to enable flexible control of the device. Signal-to-noise ratio tests were conducted to evaluate the stability of data collection by the device across varying light intensities. The experimental results reveal a direct correlation between light intensity levels and the signal-to-noise ratio of collected data. Notably, under a light intensity level of 4, RAPID achieves a signal-to-noise ratio ranging from 42 to 45 dB, satisfying the necessary criteria for data collection. Different classification models were trained using sample data from 216 does across various batches, and their generalization capabilities were evaluated. The experimental findings indicate that the optimal time for RAPID to diagnose the pregnancy status of does is on the 14th day after insemination, achieving an accuracy of 86.63 % and a recall of 80.49 %. Moreover, the model exhibits a degree of generalization, achieving an accuracy of 78.36 % when classifying another batch of sample data. RAPID achieves an accuracy of 97.25 % for pregnancy diagnosis of older does, which is 7.44 % higher than that of younger does; the accuracy for pregnancy diagnosis of does with sparse hair is 86.92 %, which is 4.78 % higher than that of does with dense hair. Comparing pregnancy diagnosis across different batches of does using data from all 8 sensor modules versus a single sensor module showed that the full array exhibits more stable generalization capability in doe pregnancy detection.
{"title":"RAPID: A rabbit pregnancy diagnosis device based on matrix optical sensing","authors":"","doi":"10.1016/j.atech.2024.100519","DOIUrl":"10.1016/j.atech.2024.100519","url":null,"abstract":"<div><p>Effective early pregnancy diagnosis is crucial for commercial rabbit breeding. Early pregnancy diagnosis enables the implementation of staged feeding for pregnant does, effectively preventing excessive weight gain and reducing the high mortality rates of kits during the birthing stage. This not only enhances production efficiency but also ensures the health and well-being of the breeding rabbits. The study introduces a method and device, the Rabbit Pregnancy Identification Device (RAPID), for detecting rabbit pregnancies using matrix optical sensing. RAPID comprises eight sensor modules and a central host unit. Each sensor module is equipped with three LEDs emitting light at wavelengths of 660 nm, 850 nm, and 940 nm, along with two photodiodes for data collection. A mobile application was developed to enable flexible control of the device. Signal-to-noise ratio tests were conducted to evaluate the stability of data collection by the device across varying light intensities. The experimental results reveal a direct correlation between light intensity levels and the signal-to-noise ratio of collected data. Notably, under a light intensity level of 4, RAPID achieves a signal-to-noise ratio ranging from 42 to 45 dB, satisfying the necessary criteria for data collection. Different classification models were trained using sample data from 216 does across various batches, and their generalization capabilities were evaluated. The experimental findings indicate that the optimal time for RAPID to diagnose the pregnancy status of does is on the 14th day after insemination, achieving an accuracy of 86.63 % and a recall of 80.49 %. Moreover, the model exhibits a degree of generalization, achieving an accuracy of 78.36 % when classifying another batch of sample data. RAPID achieves an accuracy of 97.25 % for pregnancy diagnosis of older does, which is 7.44 % higher than that of younger does; the accuracy for pregnancy diagnosis of does with sparse hair is 86.92 %, which is 4.78 % higher than that of does with dense hair. Comparing pregnancy diagnosis across different batches of does using data from all 8 sensor modules versus a single sensor module showed that the full array exhibits more stable generalization capability in doe pregnancy detection.</p></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":null,"pages":null},"PeriodicalIF":6.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772375524001242/pdfft?md5=2a57912cd871e08d03455e7ca435386f&pid=1-s2.0-S2772375524001242-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141950666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
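The 42-45 dB signal-to-noise figures and the accuracy/recall metrics quoted in the RAPID abstract follow standard definitions; the small sketch below shows those formulas (illustrative only, not the RAPID firmware or evaluation code).

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels from mean signal and noise power.
    SNR_dB = 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(signal_power / noise_power)

def accuracy_recall(tp, tn, fp, fn):
    """Classification accuracy and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    return accuracy, recall
```

Note that a 42-45 dB SNR corresponds to a signal-to-noise power ratio of roughly 16,000 to 32,000, which is why the authors deem it sufficient for data collection.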