Pub Date: 2023-12-22. DOI: 10.3103/S1060992X23040045
Changgeng Yu, Dashi Lin, Chaowen He
A fruit-picking robot requires a powerful vision system that can accurately identify fruit on the tree. Accurate segmentation of orange fruit in orchards is challenging because of complex environments with overlapping fruits and occlusions from foliage. In this work, we propose an image segmentation model called ASE-UNet, based on the U-Net architecture, which achieves accurate segmentation of oranges in complex environments. First, the backbone network structure is improved to reduce the down-sampling rate of orange fruit images, thereby retaining more spatial detail. Second, we introduce the Shape Feature Extraction Module (SFEM), which enhances the model's ability to distinguish the fruits from background elements such as branches and leaves by extracting shape and outline information from the orange fruit target. Finally, an attention mechanism is applied in the skip connections to suppress interference from background channel features and improve the fusion of high-level and low-level features. We evaluated the proposed model on a dataset of orange fruit images collected in an agricultural environment. The results show that ASE-UNet achieves IoU, Precision, Recall, and F1-score of 90.03, 96.10, 93.45, and 94.75%, respectively, outperforming other semantic segmentation methods such as U-Net, PSPNet, and DeepLabv3+. The proposed method effectively addresses the low accuracy of fruit segmentation models in agricultural environments and provides technical support for fruit-picking robots.
{"title":"ASE-UNet: An Orange Fruit Segmentation Model in an Agricultural Environment Based on Deep Learning","authors":"Changgeng Yu, Dashi Lin, Chaowen He","doi":"10.3103/S1060992X23040045","DOIUrl":"10.3103/S1060992X23040045","url":null,"abstract":"<p>Fruit picking robot requires a powerful vision system that can accurately identify the fruit on the tree. Accurate segmentation of orange fruit in orchards is challenging because of the complex environments due to the overlapping of fruits and occlusions from foliage. In this work, we proposed an image segmentation model called ASE-UNet based on the U-Net architecture, which can achieve accurate segmentation of oranges in complex environments. Firstly, the backbone network structure is improved to reduce the down-sampling rate of orange fruit images, thereby retaining more spatial detail information. Secondly, we introduced the Shape Feature Extraction Module (SFEM), which at enhancing the ability of the model to distinguish between the fruits and backgrounds, such as branches and leaves, by extracting shape and outline information from the orange fruit target. Finally, an attention mechanism was utilized to suppress background channel feature interference in the skip connection and improve the fusion of high-layer and low-layer features. We evaluate the proposed model on the orange fruit images dataset collected in the agricultural environment. The results showed that ASE-UNet achieves IoU, Precision, Recall, and <i>F</i><sub>1</sub>-scores of 90.03, 96.10, 93.45, and 94.75%, respectively, which outperform other semantic segmentation methods, such as U-Net, PSPNet, and DeepLabv3+. The proposed method effectively solves the problem of low accuracy fruit segmentation models in the agricultural environment and provides technical support for fruit picking robots.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 4","pages":"247 - 257"},"PeriodicalIF":1.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138988825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-22. DOI: 10.3103/S1060992X23040069
O. Angelsky, A. Bekshaev, C. Zenkova, D. Ivanskyi, P. Maksymyak, V. Kryvetsky, Zhebo Chen
The paper offers a short review of recent works on the use of luminescent carbon nanoparticles for studying structurally inhomogeneous optical fields that carry diagnostic information on inhomogeneous material objects. Methods for obtaining nanoparticles with specially assigned optical and electrical properties, necessary for research and diagnostic tasks, are analyzed. It is shown that the light-induced motion of nanoparticles suspended in the optical field enables detection and localization of the points of intensity minima and phase singularities. Optically driven nanoparticles can serve as highly sensitive probes of object surface inhomogeneities, realizing a contactless version of atomic-force profilometry. In many cases, the use of nanoparticles makes it possible to circumvent the spatial-resolution limitations of optical systems dictated by classical wave-optics concepts (the Rayleigh limit).
{"title":"Application of the Luminescent Carbon Nanoparticles for Optical Diagnostics of Structure-Inhomogeneous Objects at the Micro- and Nanoscales","authors":"O. Angelsky, A. Bekshaev, C. Zenkova, D. Ivanskyi, P. Maksymyak, V. Kryvetsky, Zhebo Chen","doi":"10.3103/S1060992X23040069","DOIUrl":"10.3103/S1060992X23040069","url":null,"abstract":"<p>The paper offers a short review of the recent works associated with the use of luminescent carbon nanoparticles for the studies of structurally inhomogeneous optical fields carrying a diagnostic information on inhomogeneous material objects. Methods for obtaining nanoparticles with various specially assigned optical and electrical properties, necessary for research and diagnostic tasks, are analyzed. It is shown that the light-induced motion of nanoparticles suspended in the optical field enable detection and localization of the points of intensity minima and phase singularities. Optically-driven nanoparticles can serve as highly-sensitive probes of the object surface inhomogeneities, realizing a contactless version of the atomic-force profilometry. In many cases, the use of nanoparticles makes it possible to circumvent the spatial-resolution limitations of optical systems dictated by the classical wave-optics concepts (Rayleigh limit).</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 4","pages":"258 - 274"},"PeriodicalIF":1.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139015709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-22. DOI: 10.3103/S1060992X23040124
S. Ye, R. Bohush, H. Chen, S. Ihnatsyeva, S. V. Ablameyko
A new image set, an augmentation method, and in-training fine-tuning of convolutional neural networks (CNNs) are proposed to increase the accuracy of CNN-based person re-identification. Unlike other known sets, our PolReID1077 set of person images was assembled from many video frames shot by outdoor and indoor surveillance systems in all seasons of the year. The samples forming PolReID1077 are subjected to cyclic shift, chroma subsampling, and replacement of a fragment by a reduced copy of another sample to obtain a wider range of images. The resulting learning-set generation technique is used to train a CNN in two stages. The first stage is pre-training on the augmented data. At the second stage, the original images are used to fine-tune the CNN weight coefficients, reducing training losses and increasing re-identification efficiency. The approach prevents the CNN from memorizing the learning sets and decreases the chances of overfitting. Different augmentation methods, datasets, and learning techniques are compared in the experiments.
{"title":"Data Augmentation and Fine Tuning of Convolutional Neural Network during Training for Person Re-Identification in Video Surveillance Systems","authors":"S. Ye, R. Bohush, H. Chen, S. Ihnatsyeva, S. V. Ablameyko","doi":"10.3103/S1060992X23040124","DOIUrl":"10.3103/S1060992X23040124","url":null,"abstract":"<p>A new image set, augmentation method and fine in-learning adjustment of convolutional neural networks (CNN) are proposed to increase the accuracy of CNN-based person re-identification. Unlike other known sets, we have used many video frames from external and internal surveillance systems shot at all seasons of the year to make up our PolReID1077 set of person images. The PolReID1077-forming samples are subjected to the cyclic shift, chroma subsampling, and replacement of a fragment by a reduced copy of another sample to get a wider range of images. The learning set generating technique is used to train a CNN. The training is carried out in two stages. The first stage is pre-training using the augmented data. At the second stage the original images are used to carry out fine-tuning of CNN weight coefficients to reduce in-learning losses and increase re-identification efficiency. The approach doesn’t allow the CNN to remember learning sets and decreases the chances of overfitting. Different augmentation methods, data sets and learning techniques are used in the experiments.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 4","pages":"233 - 246"},"PeriodicalIF":1.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139015710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-22. DOI: 10.3103/S1060992X23040112
P. Venkatasaichandrakanth, M. Iyapparaja
Agronomic pests cause financial losses in agriculture because they diminish production, which lowers revenue. Pest control, essential to lowering these losses, involves identifying and eliminating this risk. Identification is the fundamental component of control, since it is what enables management to take place. Visual identification relies on the pest's traits; these characteristics are intrinsic and differ between animals. Because identification is so difficult, specialists in the field handle most of the work, which concentrates the knowledge. Researchers have developed various techniques for predicting crop diseases using images of infected leaves. While progress has been made in identifying plant diseases using different models and methods, new advancements and discussions still leave room for improvement. Technology can significantly improve global crop production, and large datasets can be used to train models and approaches that uncover new and better methods for detecting plant diseases and addressing low-yield issues. The effectiveness of machine learning and deep learning for identifying and categorizing pests has been confirmed by prior research. This paper thoroughly examines and critically evaluates the many strategies and methodologies used to classify and detect pests or insects using deep learning. It discusses the benefits and drawbacks of various methodologies and considers potential problems with insect detection via image processing. The paper concludes with an analysis and outlook on the future direction of pest detection and classification using deep learning on plants such as peanuts.
{"title":"Review on Pest Detection and Classification in Agricultural Environments Using Image-Based Deep Learning Models and Its Challenges","authors":"P. Venkatasaichandrakanth, M. Iyapparaja","doi":"10.3103/S1060992X23040112","DOIUrl":"10.3103/S1060992X23040112","url":null,"abstract":"<p>Agronomic pests cause agriculture to incur financial losses because they diminish production, which lowers revenue. Pest control, essential to lowering these losses, involves identifying and eliminating this risk. Since it enables management to take place, identification is the fundamental component of control. Utilizing the pest’s traits, visual identification is done. These characteristics differ between animals and are intrinsic. Since identification is so difficult, specialists in the field handle most of the work, which concentrates the information. Researchers have developed various techniques for predicting crop diseases using images of infected leaves. While progress has been made in identifying plant diseases using different models and methods, new advancements and discussions still offer room for improvement. Technology can significantly improve global crop production, and large datasets can be used to train models and approaches that uncover new and improved methods for detecting plant diseases and addressing low-yield issues. The effectiveness of machine learning and deep learning for identifying and categorizing pests has been confirmed by prior research. This paper thoroughly examines and critically evaluates the many strategies and methodologies used to classify and detect pests or insects using deep learning. The paper examines the benefits and drawbacks of various methodologies and considers potential problems with insect detection via image processing. The paper concludes by providing an analysis and outlook on the future direction of pest detection and classification using deep learning on plants like peanuts.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 4","pages":"295 - 309"},"PeriodicalIF":1.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139013597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-22. DOI: 10.3103/S1060992X23040100
Raj Kumar, Anuradha Chug, Amit Prakash Singh
Precise and prompt identification of plant pathogens is essential to keep agricultural losses as low as possible. In recent times, deep convolutional neural networks have seen exponential growth in their use in phytopathology due to their capacity for rapid and precise disease identification. However, a deep convolutional neural network needs a lot of processing power because of its intricate structure, consisting of a large stack of layers and millions of trainable parameters, which makes it inadequate for light computing devices. In this article, the authors introduce a novel lightweight sequential CNN architecture for the diagnosis of leaf diseases. The suggested CNN contains fewer layers and around 70% fewer parameters than pre-trained CNN-based approaches. For the experiments and performance evaluation, the authors chose a benchmark public dataset consisting of 7012 images of tomato and potato leaves affected by early and late blight diseases. The performance of the proposed architecture is compared against three recent pre-trained CNN architectures: ResNet-50, VGG-16, and MobileNet-V2. The average accuracy reported by the proposed architecture is 98.02%, and its training time is also much better than that of the existing pre-trained CNN architectures. The experimental findings clearly demonstrate that the suggested approach outperforms the existing trained CNN approaches with far fewer layers and parameters, which significantly reduces the computing resources and time needed to train the model, making it a better choice for mobile-based real-time plant disease diagnosis applications.
{"title":"Plant Foliage Disease Diagnosis Using Light-Weight Efficient Sequential CNN Model","authors":"Raj Kumar, Anuradha Chug, Amit Prakash Singh","doi":"10.3103/S1060992X23040100","DOIUrl":"10.3103/S1060992X23040100","url":null,"abstract":"<p>The Precise and prompt identification of plant pathogens is essential to keep agricultural losses as low as possible. In recent time, deep convolution neural networks have seen an exponential growth in their use in phytopathology due to its capacity for rapid and precise disease identification. However, deep convolutional neural network needs a lot of processing power because of its intricate structure consisting of a large stack of layers and millions of trainable parameters which makes them inedquate for light computing devices. In this article, authors have introduced a novel light-weight sequential CNN architecture for the diagnosis of leaf diseases. The suggested CNN approach contains fewer layers and around 70% less attributes than pre-trained CNN-based approaches. For the experiments and performance evaluation, authors have chosen a benchmark public dataset consisting of 7012 images of tomato and potato leaves affected with early and late blight diseases. The performance of the proposed architecture is compared against three recent priorly trained CNN architectures such as ResNet-50, VGG-16 and MobileNet-V2. The average accuracy percentage reported by the proposed architecture is 98.02 and the time consumed in training is also much better than the existing priorly trained CNN architectures. The experimental findings clearly demonstrate that the suggested approach outperforms the recent existing trained CNN approaches and has a very less number of layers and parameters which significantly reduces the amount of computing resources and time to train the model which could be a better choice for mobile-based real-time plant disease diagnosis applications.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 4","pages":"331 - 345"},"PeriodicalIF":1.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139023049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-22. DOI: 10.3103/S1060992X23040094
I. M. Karandashev, G. S. Teplov, A. A. Karmanov, V. V. Keremet, A. V. Kuzovkov
The paper deals with the inverse problem of computational lithography. We turn to deep neural network algorithms to compute photomask topologies. The chief goal of the research is to understand how efficient neural net architectures such as U-Net, Erf-Net, and DeepLabV3, as well as the built-in Calibre Workbench algorithms, can be in tackling inverse lithography problems. Specially generated and labeled data sets are used to train the artificial neural nets. Calibre EDA software is used to generate random patterns for a 90 nm transistor gate mask. Accuracy and speed are used for the comparison, with edge placement error (EPE) and intersection over union (IOU) as the metrics. The use of the neural nets allows a two-orders-of-magnitude reduction of the mask computation time, while accuracy stays at 92% on the IOU metric.
{"title":"Investigating the Efficiency of Using U-Net, Erf-Net and DeepLabV3 Architectures in Inverse Lithography-based 90-nm Photomask Generation","authors":"I. M. Karandashev, G. S. Teplov, A. A. Karmanov, V. V. Keremet, A. V. Kuzovkov","doi":"10.3103/S1060992X23040094","DOIUrl":"10.3103/S1060992X23040094","url":null,"abstract":"<p>The paper deals with the inverse problem of computational lithography. We turn to deep neural network algorithms to compute photomask topologies. The chief goal of the research is to understand how efficient the neural net architectures such as U-net, Erf-Net and Deep Lab v.3, as well as built-in Calibre Workbench algorithms, can be in tackling inverse lithography problems. Specially generated and marked data sets are used to train the artificial neural nets. Calibre EDA software is used to generate haphazard patterns for a 90 nm transistor gate mask. The accuracy and speed parameters are used for the comparison. The edge placement error (EPE) and intersection over union (IOU) are used as metrics. The use of the neural nets allows two orders of magnitude reduction of the mask computation time, with accuracy keeping to 92% for the IOU metric.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 4","pages":"219 - 225"},"PeriodicalIF":1.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3103/S1060992X23040094.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139029348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-22. DOI: 10.3103/S1060992X23040082
Saurabh Jaglan, Sunita Kumari, Praveen Aggarwal
Road traffic accidents are a significant problem that ruins the lives of many people and causes major economic losses. The issue is therefore a hot research topic, and many researchers all over the world are focusing on developing solutions to this challenging problem. Traditionally, accident spots are identified by transportation experts, and statistical models such as linear and nonlinear regression are then used for accident severity prediction. However, these traditional approaches cannot analyze the relationship between the influential factors and accident severity. To address this issue, an Artificial Neural Network (ANN) classifier based vulnerable accident prediction model is proposed in this research. Initially, accident data over past years is collected from a specified area. The acquired data consists of variable factors related to road infrastructure, weather conditions, area of the accident, type of injury, and driving characteristics. Then, to standardize the raw input data, min-max normalization is used as a pre-processing technique. The pre-processed data is sent to the feature selection process, in which essential features are selected by correlating the variable factors with accident severity. The dimension of the features is then reduced using Latent Semantic Indexing (LSI). Finally, the reduced features are fed into the ANN classifier to predict the severity of accidents as low, medium, or high. Simulation analysis of the proposed accident prediction model is carried out by evaluating performance metrics on three datasets. The accuracy, error, specificity, recall, and precision attained by the proposed model on dataset 1 are 96.3, 0.03, 98, and 98%. Through this vulnerable accident prediction model, the severity of accidents can be analyzed effectively and road safety levels can be improved.
{"title":"Development of Prediction Models for Vulnerable Road User Accident Severity","authors":"Saurabh Jaglan, Sunita Kumari, Praveen Aggarwal","doi":"10.3103/S1060992X23040082","DOIUrl":"10.3103/S1060992X23040082","url":null,"abstract":"<p>Road traffic accidents are considered a significant problem which ruins the life of many people and also causes major economic losses. So, this issue is considered a hot research topic, and many researchers all over the world are focusing on developing a solution to this most challenging problem. Traditionally the accident spots are detected by means of transportation experts, and following that, some of the statistical models such as linear and nonlinear regression were used for accident severity prediction. However, these traditional approaches do not have the capability to analyze the relationship between the influential factor and accident severity. To address this issue, an Artificial Neural Network (ANN) classifier based vulnerable accident prediction model is proposed in this current research. Initially, the past accident data over the past period of years is collected from a specified area. The acquired data consists of a variable factor related to road infrastructure, weather condition, area of the accident, type of injury and driving characteristics. Then, to standardize the raw input data, min-max normalization is used as a pre-processing technique. The pre-processed is sent for the feature selection process in which essential features are selected by correlating the variable factor with accident severity prediction. Following that, the dimension of the features is reduced using Latent Sematic Index (LSI). Finally, the reduced features are fetched into the ANN classifier for predicting the severity of accidents such as low, medium and high. Simulation analysis of the proposed accident prediction model is carried out by evaluating some of the performance metrics for three datasets. Accuracy, error, specificity, recall and precision attained for the proposed model using dataset 1 is 96.3, 0.03, 98 and 98%. Through this proposed vulnerable accident prediction model, the severity of accidents can be analyzed effectively, and road safety levels can be improved.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 4","pages":"346 - 363"},"PeriodicalIF":1.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139013537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}