Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700085
Manjur Hossain, Kalimuddin Mondal
This manuscript designs and numerically analyzes an all-optical binary-to-Gray-code (BTGC) converter based on a silicon microring resonator. A waveguide-based silicon microring resonator is employed to achieve optical switching under low-power conditions using the two-photon absorption effect. Gray code (GC) is a binary numbering system in which two consecutive codes differ by only one bit. The GC is critical in optical communication because it prevents spurious output from optical switches and facilitates error correction. MATLAB is used to design and analyze the architecture at an operating speed of nearly 260 Gbps. The fast response times and compact design of the demonstrated circuits make them especially useful for optical communication systems. Performance-indicating factors are evaluated from the MATLAB results and analyzed, and the design parameters are optimized so that the model can be constructed practically.
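For readers unfamiliar with the conversion rule itself, each Gray-code bit is the XOR of adjacent binary bits, i.e. G = B xor (B >> 1). The short Python sketch below illustrates this truth table; it is purely illustrative and is not part of the authors' MATLAB or microring model.

```python
def binary_to_gray(b: int) -> int:
    """Convert an unsigned binary integer to its Gray-code equivalent: G = B xor (B >> 1)."""
    return b ^ (b >> 1)

# Consecutive inputs differ in exactly one output bit -- the property that
# lets an optical converter avoid spurious multi-bit transitions.
for b in range(8):
    print(f"binary {b:03b} -> gray {binary_to_gray(b):03b}")
```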
{"title":"Numerical Analysis of All-Optical Binary to Gray Code Converter Using Silicon Microring Resonator","authors":"Manjur Hossain, Kalimuddin Mondal","doi":"10.3103/S1060992X24700085","DOIUrl":"10.3103/S1060992X24700085","url":null,"abstract":"<p>Present manuscript designs and analyzes numerically all-optical binary-to-gray code (BTGC) converter utilizing silicon microring resonator. A waveguide-based silicon microring resonator has been employed to achieve optical switching under low-power conditions using the two-photon absorption effect. Gray code (GC) is a binary numerical system in which two consecutive codes distinguished by only one bit. The GC is critical in optics communication because it prevents spurious output from optical switches and facilitates error correction in optical communications. MATLAB is used to design and analyze the architecture at almost 260 Gbps operational speed. The faster response times and compact design of the demonstrated circuits make them especially useful for optical communication systems. Performance indicating factors evaluated from MATLAB results and analyzed. Design parameters that are optimized have been chosen in order to construct the model practically.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"193 - 204"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X2470005X
Youshaa Murhij, Dmitry Yudin
Accurate 3D pose estimation and shape reconstruction from monocular images is a challenging task in the field of autonomous driving. Our work introduces a novel approach to this task for vehicles, called Deformable Attention-Guided Modeling for Monocular 3D Reconstruction (DAGM-Mono). Our solution addresses the challenge of detailed shape reconstruction by leveraging deformable attention mechanisms. Specifically, given 2D primitives, DAGM-Mono reconstructs vehicle shapes using deformable attention-guided modeling, considering the relevance between detected objects and vehicle shape priors. Our method introduces two additional loss functions, the Chamfer Distance (CD) and a Hierarchical Chamfer Distance, to enhance shape reconstruction by capturing fine-grained shape details at different scales. Our bi-contextual deformable attention framework estimates 3D object pose, capturing both inter-object relations and scene context. Experiments on the ApolloCar3D dataset demonstrate that DAGM-Mono achieves state-of-the-art performance and significantly enhances the performance of mature monocular 3D object detectors. Code and data are publicly available at: https://github.com/YoushaaMurhij/DAGM-Mono.
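As a reference for the loss terms mentioned above, the following NumPy sketch computes the standard symmetric Chamfer Distance between two point clouds; the hierarchical variant and the exact weighting used in DAGM-Mono are not reproduced here, and the point clouds are synthetic.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds p of shape (N, 3) and q of shape (M, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

p = np.random.rand(128, 3)   # predicted vehicle surface points (toy data)
q = np.random.rand(256, 3)   # ground-truth mesh points (toy data)
print(chamfer_distance(p, q))
```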
{"title":"DAGM-Mono: Deformable Attention-Guided Modeling for Monocular 3D Reconstruction","authors":"Youshaa Murhij, Dmitry Yudin","doi":"10.3103/S1060992X2470005X","DOIUrl":"10.3103/S1060992X2470005X","url":null,"abstract":"<p>Accurate 3D pose estimation and shape reconstruction from monocular images is a challenging task in the field of autonomous driving. Our work introduces a novel approach to solve this task for vehicles called Deformable Attention-Guided Modeling for Monocular 3D Reconstruction (DAGM-Mono). Our proposed solution addresses the challenge of detailed shape reconstruction by leveraging deformable attention mechanisms. Specifically, given 2D primitives, DAGM-Mono reconstructs vehicles shapes using deformable attention-guided modeling, considering the relevance between detected objects and vehicle shape priors. Our method introduces two additional loss functions: Chamfer Distance (CD) and Hierarchical Chamfer Distance to enhance the process of shape reconstruction by additionally capturing fine-grained shape details at different scales. Our bi-contextual deformable attention framework estimates 3D object pose, capturing both inter-object relations and scene context. Experiments on the ApolloCar3D dataset demonstrate that DAGM-Mono achieves state-of-the-art performance and significantly enhances the performance of mature monocular 3D object detectors. Code and data are publicly available at: https://github.com/YoushaaMurhij/DAGM-Mono.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"144 - 156"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700024
Swarnalata Rath, Nilima R. Das, Binod Kumar Pattanayak
Univariate stocks and multivariate equities are becoming more common due to partnerships, and accurate future stock predictions benefit investors and stakeholders. Although the study has limitations, hybrid architectures can outperform a single deep learning (DL) approach in price prediction. This study presents a hybrid attention-based optimal DL model that leverages multiple neural networks to enhance stock price prediction accuracy. The model strategically optimizes individual components to extract crucial insights from stock price time series data. The process involves initial pre-processing, wavelet-transform denoising and min-max normalization, followed by division of the data into training and test sets. The proposed model integrates stacked Bi-directional Long Short-Term Memory (Bi-LSTM), an attention module and an Equilibrium-optimized 1D Convolutional Neural Network (CNN). The stacked Bi-LSTM networks extract enriched temporal features, while the attention mechanism reduces the loss of historical information and highlights significant information. A dropout layer with tailored dropout rates is introduced to address overfitting. The Conv1D layer within the 1D CNN detects abrupt data changes using residual features from the dropout layer. The model incorporates Equilibrium Optimization (EO) for training the CNN, allowing the algorithm to select optimal weights based on mean square error. Model efficiency is evaluated through diverse metrics, including Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE) and R-squared (R2), to confirm the model's predictive performance.
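To make the pipeline concrete, here is a minimal Keras sketch of a stacked Bi-LSTM with a self-attention layer, dropout and a Conv1D block. The layer sizes, dropout rate and use of Adam are assumptions for illustration; in the paper the 1D CNN weights are selected by Equilibrium Optimization rather than by gradient descent.

```python
import tensorflow as tf

def build_hybrid_model(timesteps: int, features: int) -> tf.keras.Model:
    """Sketch: stacked Bi-LSTM -> self-attention -> dropout -> Conv1D -> dense regressor."""
    inputs = tf.keras.Input(shape=(timesteps, features))
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(inputs)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True))(x)
    x = tf.keras.layers.Attention()([x, x])                        # attention over Bi-LSTM outputs
    x = tf.keras.layers.Dropout(0.3)(x)                            # dropout rate is illustrative
    x = tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(1)(x)                          # next-step price
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")                    # stands in for the paper's EO training
    return model

model = build_hybrid_model(timesteps=30, features=1)
model.summary()
```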
{"title":"Stacked BI-LSTM and E-Optimized CNN-A Hybrid Deep Learning Model for Stock Price Prediction","authors":"Swarnalata Rath, Nilima R. Das, Binod Kumar Pattanayak","doi":"10.3103/S1060992X24700024","DOIUrl":"10.3103/S1060992X24700024","url":null,"abstract":"<p>Univariate stocks and multivariate equities are more common due to partnerships. Accurate future stock predictions benefit investors and stakeholders. The study has limitations, but hybrid architectures can outperform single deep learning approach (DL) in price prediction. This study presents a hybrid attention-based optimal DL model that leverages multiple neural networks to enhance stock price prediction accuracy. The model uses strategic optimization of individual model components, extracting crucial insights from stock price time series data. The process involves initial pre-processing, wavelet transform denoising, and min-max normalization, followed by data division into training and test sets. The proposed model integrates stacked Bi-directional Long Short Term Memory (Bi-LSTM), an attention module, and an Equilibrium optimized 1D Convolutional Neural Network (CNN). Stacked Bi-LSTM networks shoot enriched temporal features, while the attention mechanism reduces historical data loss and highlights significant information. A dropout layer with tailored dropout rates is introduced to address overfitting. The Conv1D layer within the 1D CNN detects abrupt data changes using residual features from the dropout layer. The model incorporates Equilibrium Optimization (EO) for training the CNN, allowing the algorithm to select optimal weights based on mean square error. Model efficiency is evaluated through diverse metrics, including Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE), and R-squared (R2), to confirm the model’s predictive performance.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"102 - 120"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700103
Saurabh Jaglan, Sunita Kumari, Praveen Aggarwal
Traditional approaches cannot analyse road accident severity in terms of different road characteristics, areas and injury types. Hence, a road accident severity prediction model with variable factors is designed using the ANN algorithm. In this model, past accident records with road characteristics are obtained and pre-processed using adaptive data cleaning and min-max normalization; these techniques remove noise and organize the collected data according to their relationships. The Pearson correlation coefficient is used to select features from the pre-processed data, and the ANN algorithm is used to train and validate on these features. The proposed model achieves 99, 98, 99 and 98% for accuracy, precision, specificity and recall, respectively. Thus, the designed road accident severity prediction model with variable factors using the ANN algorithm performs better than existing techniques.
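A minimal sketch of the Pearson-correlation feature-selection step is given below; the correlation threshold and the synthetic data are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def pearson_feature_selection(X: np.ndarray, y: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return indices of features whose absolute Pearson correlation with severity exceeds a threshold."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.where(np.abs(r) >= threshold)[0]

# Toy example with 8 candidate road/accident attributes and a synthetic severity score
X = np.random.rand(500, 8)
y = 2.0 * X[:, 0] + X[:, 3] + 0.1 * np.random.rand(500)
print(pearson_feature_selection(X, y))
```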
{"title":"Latent Semantic Index Based Feature Reduction for Enhanced Severity Prediction of Road Accidents","authors":"Saurabh Jaglan, Sunita Kumari, Praveen Aggarwal","doi":"10.3103/S1060992X24700103","DOIUrl":"10.3103/S1060992X24700103","url":null,"abstract":"<p>Traditional approaches do not have the capability to analyse the road accident severity with different road characteristics, area and type of injury. Hence, the road accident severity prediction model with variable factors is designed using the ANN algorithm. In this designed model, the past accident records with road characteristics are obtained and pre-processed utilizing adaptive data cleaning as well as the min-max normalization technique. These techniques are used to remove and separate the collected data according to their relation. The Pearson correlation coefficient is utilized to separate the features from the pre-processed data. The ANN algorithm is used to train and validate these retrieved features. The proposed model’s performance values are 99, 98, 99 and 98% for accuracy, precision, specificity and recall. Thus, the resultant values of the designed road accident severity prediction model with variable factors using the ANN algorithm perform better compared to the existing techniques.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"221 - 235"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700073
Ennaji Fatima Zohra, El Kabtane Hamada
The potential applications of emotion recognition from facial expressions have generated considerable interest across multiple domains, encompassing areas such as human-computer interaction, camera systems and mental health analysis. In this article, a novel approach is proposed for face emotion recognition (FER) using several data pre-processing and feature-extraction steps, such as Face Mesh, data augmentation and oval cropping of the faces. Transfer learning based on the VGG19 architecture and a Deep Convolutional Neural Network (DCNN) is proposed. We demonstrate the effectiveness of the approach through extensive experiments on the Cohn-Kanade+ (CK+) dataset, comparing it with existing state-of-the-art methods; an accuracy of 99.79% was obtained with VGG19. Finally, a set of images produced by an AI tool that generates images from textual descriptions was collected and tested with our model. The results indicate that the solution achieves superior performance, offering a promising approach for accurate, real-time face emotion recognition.
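The classification head below is a minimal sketch of transfer learning on a frozen VGG19 backbone; the dense-layer size, dropout rate and seven-class output (typical for CK+) are assumptions, and the Face Mesh and oval-cropping pre-processing steps are not shown.

```python
import tensorflow as tf

def build_vgg19_fer(num_classes: int = 7) -> tf.keras.Model:
    """Classification head on a frozen VGG19 backbone for facial-emotion recognition."""
    base = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
    base.trainable = False                                    # keep ImageNet features fixed
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)      # head size is illustrative
    x = tf.keras.layers.Dropout(0.5)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_vgg19_fer()
```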
{"title":"Transfer Learning Based Face Emotion Recognition Using Meshed Faces and Oval Cropping: A Novel Approach","authors":"Ennaji Fatima Zohra, El Kabtane Hamada","doi":"10.3103/S1060992X24700073","DOIUrl":"10.3103/S1060992X24700073","url":null,"abstract":"<p>The potential applications of emotion recognition from facial expressions have generated considerable interest across multiple domains, encompassing areas such as human-computer interaction, camera and mental health analysis. In this article, a novel approach has been proposed for face emotion recognition (FER) using several data preprocessing and Feature extraction steps such as Face Mesh, data augmentation and oval cropping of the faces. A transfer learning using VGG19 architecture and a Deep Convolution Neural Network (DCNN) have been proposed. We demonstrate the effectiveness of the proposed approach through extensive experiments on the Cohn-Kanade+ (CK+) dataset, comparing it with existing state-of-the-art methods. An accuracy of 99.79% was found using the VGG19. Finally, a set of images collected from an AI tool that generates images based on textual description have been done and tested using our model. The results indicate that the solution achieves superior performance, offering a promising solution for accurate and real-time face emotion recognition.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"178 - 192"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700036
Nandula Anuradha, Panuganti VijayaPal Reddy
Aspect-based suggestion classification is the process of analyzing the aspect of a review and classifying it as a suggestion or non-suggestion comment. Today, online reviews are becoming a more popular way to express suggestions, and manually analyzing and extracting recommendations from such a large volume of reviews is practically impossible. However, existing algorithms yield low accuracy with many errors. A deep-learning-based DNN (Deep Neural Network) is created to address these problems. Raw data are collected and pre-processed to remove unnecessary content. After that, a count vectorizer is utilized to convert the words into vectors and to extract features from the data. The dimension of the feature vector is then reduced by applying a hybrid PCA-HBA (Principal Component Analysis-Honey Badger Algorithm); HBA optimization is utilized to select the optimal number of components and enhance the accuracy of the proposed model. The features are then classified using two trained deep neural networks: one trained model identifies the aspect of the review, and the other identifies whether the aspect is a suggestion or non-suggestion. The experimental analysis shows that the proposed approach achieves 93% accuracy and 93% specificity for aspect identification, and 87% accuracy and 66% specificity for suggestion classification. Thus, the designed model is the best choice for aspect-based suggestion classification.
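The vectorization and dimensionality-reduction steps can be sketched with scikit-learn as follows; the example reviews are invented and the number of retained components is fixed by hand here, whereas the paper selects it with the Honey Badger Algorithm.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA

reviews = [
    "battery life is great but the screen should be brighter",
    "please add a dark mode to the app",
    "the camera works fine",
]

# Count-vectorize the reviews, then reduce the dimension of the feature vectors with PCA.
X = CountVectorizer().fit_transform(reviews).toarray()
X_reduced = PCA(n_components=2).fit_transform(X)   # component count is illustrative
print(X_reduced.shape)
```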
{"title":"Aspect Based Suggestion Classification Using Deep Neural Network and Principal Component Analysis with Honey Badger Optimization","authors":"Nandula Anuradha, Panuganti VijayaPal Reddy","doi":"10.3103/S1060992X24700036","DOIUrl":"10.3103/S1060992X24700036","url":null,"abstract":"<p>Aspect based suggestion is the process of analyzing the aspect of the review and classifying them as suggestion or non-suggestion comment. Today, online reviews are becoming a more popular way to express suggestions. To manually analyze and extract recommendations from such a large volume of reviews is practically impossible. However, the existing algorithm yields low accuracy with more errors. A deep learning-based DNN (Deep Neural Network) is created to address these problems. Raw data’s are collected and pre-processed to remove the unnecessary contents. After that, a count vectorizer is utilized to convert the words into vectors as well as to extract features from the data. Then, reducing the dimension of the feature vector by applying a hybrid PCA-HBA (Principal Component Analysis-Honey Badger Algorithm). HBA optimization is utilized to select the optimal number of components to enhance the accuracy of the proposed model. Then, the features are classified using two trained deep neural network. One trained model is utilized to identify the aspect of the review, and another trained model is utilized to identify whether the aspect is a suggestion or non-suggestion. The experimental analysis shows that the proposed approach achieves 93% accuracy and 93% specificity for aspect identification as well as 87% accuracy and 66% specificity for the classification of suggestions. Thus, the designed model is the best choice for aspect-based suggestion classification.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"121 - 132"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700048
Seyed Sina Mohammadi, Mohammadreza Salehirad, Mohammad Mollaie Emamzadeh, Mojtaba Barkhordari Yazdi
One of the most demanding applications of accurate Artificial Neural Networks (ANNs) is in medical fields, mainly for making critical decisions. To achieve this goal, an efficient optimization and training method is required to tune the parameters of the ANN and to reach globally optimal values of these parameters. The Equilibrium Optimizer (EO) has recently been introduced to solve optimization problems more reliably than other optimization methods, with the ability to escape local optima and reach the global optimum. In this paper, to achieve higher performance, some modifications are applied to the EO algorithm and the Improved Equilibrium Optimizer (IEO) method is presented, which has sufficient accuracy and reliability to be used in critical medical applications. This IEO approach is then used to train the ANN, yielding the IEO-ANN algorithm. The proposed IEO-ANN is applied to real-world medical problems such as breast cancer detection and heart failure prediction. The obtained results of IEO are compared with those of four other well-known approaches: EO, Particle Swarm Optimizer (PSO), Salp Swarm Optimizer (SSO) and Back Propagation (BP). The recorded results show that the proposed IEO algorithm has much higher prediction accuracy than the others. Therefore, the presented IEO can improve the accuracy and convergence rate of neural network tuning, making the proposed IEO-ANN a suitable classification and prediction approach for crucial medical decisions where high accuracy is needed.
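The following sketch shows the general pattern of metaheuristic ANN training that EO/IEO follows: candidate weight vectors are scored by mean square error and the population is pulled toward the best solution found so far. The update rule here is a deliberately reduced stand-in, not the actual EO or IEO equations, and the network size and data are hypothetical.

```python
import numpy as np

def mse_fitness(w: np.ndarray, X: np.ndarray, y: np.ndarray, n_hid: int = 8) -> float:
    """Unpack a flat weight vector into a one-hidden-layer network and return its MSE."""
    n_in = X.shape[1]
    w1 = w[: n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:].reshape(n_hid, 1)
    pred = np.tanh(X @ w1) @ w2
    return float(np.mean((pred - y) ** 2))

def population_search(X, y, dim, pop=30, iters=200, seed=0):
    """Reduced stand-in for EO/IEO: candidates are perturbed around the best ("equilibrium") solution."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-1, 1, (pop, dim))
    best = min(P, key=lambda w: mse_fitness(w, X, y))
    for _ in range(iters):
        P = best + rng.normal(0.0, 0.5, P.shape) * (P - best)   # contract and explore around best
        cand = min(P, key=lambda w: mse_fitness(w, X, y))
        if mse_fitness(cand, X, y) < mse_fitness(best, X, y):
            best = cand
    return best

X, y = np.random.rand(200, 5), np.random.rand(200, 1)
w_best = population_search(X, y, dim=5 * 8 + 8)
print(mse_fitness(w_best, X, y))
```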
{"title":"Improved Equilibrium Optimizer for Accurate Training of Feedforward Neural Networks","authors":"Seyed Sina Mohammadi, Mohammadreza Salehirad, Mohammad Mollaie Emamzadeh, Mojtaba Barkhordari Yazdi","doi":"10.3103/S1060992X24700048","DOIUrl":"10.3103/S1060992X24700048","url":null,"abstract":"<p>One of the most demanding applications of accurate Artificial Neural Networks (ANN) can be found in medical fields, mainly to make critical decisions<b>.</b> To achieve this goal, an efficient optimization and training method is required to tune the parameters of ANN and to reach the global solutions of these parameters. Equilibrium Optimizer (EO) has recently been introduced to solve optimization problems more reliably than other optimization methods which have the ability to escape from the local optima solutions and to reach the global optimum solution. In this paper, to achieve a higher performance, some modifications are applied to the EO algorithm and the Improved Equilibrium Optimizer (IEO) method is presented which have enough accuracy and reliability to be used in crucial and accurate medical applications. Then, this IEO approach is utilized to learn ANN, and IEO-ANN algorithm will be introduced. The proposed IEO-ANN will be implemented to solve real-world medical problems such as breast cancer detection and heart failure prediction. The obtained results of IEO are compared with those of three other well-known approaches: EO, Particle Swarm Optimizer (PSO), Salp Swarm Optimizer (SSO), and Back Propagation (BP). The recorded results have shown that the proposed IEO algorithm has much higher prediction accuracy than others. Therefore, the presented IEO can improve the accuracy and convergence rate of tuning neural networks, so that the proposed IEO-ANN is a suitable classifying and predicting approach for crucial medical decisions where high accuracy is needed.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"133 - 143"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700115
Nilesh U Sambhe, Manjusha Deshmukh, L. Ashok Kumar, Sandip Chavan, Nidhi P. Ranjan
Agriculture is a vital source of nutrition and a major contributor to the nation's economic expansion. Due to numerous complex factors such as environment, humidity, soil nutrients and soil moisture, multi-crop yield forecasting is very challenging, and because crop prediction is a complicated process, improving performance is difficult. To address these problems, an advanced deep learning model was developed to predict crop types and their yields in a particular soil. A real-time dataset was created containing various parameters such as soil nutrients, weather data, season and temperature. The created dataset is pre-processed using outlier detection and normalization because it contains unwanted rows and columns. After that, the pre-processed data are given as input to the DeepNet230 model, which analyzes input parameters such as soil nutrition and temperature to predict the crop types and their yield quantities. DeepNet230 has the capacity for automatic feature learning and rapid processing of unstructured data, so it provides efficient prediction of crop yield and crop type. The crop-prediction performance of the proposed model is 93.7% accuracy, 93.4% recall, 92.8% precision and 92.9% specificity, while the yield-prediction performance for the identified crops is 95.5% accuracy, 91.6% recall, 93% precision and 94.2% specificity. In addition, the developed method was compared with several competing methods for validation. The observed results show that the suggested method performs significantly better in real time due to its improved predictive capabilities.
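As an illustration of the outlier-handling step, the snippet below drops rows that fall outside interquartile-range fences; the IQR rule, the column names and the toy records are assumptions, since the abstract does not specify the exact outlier-detection method.

```python
import pandas as pd

def drop_iqr_outliers(df: pd.DataFrame, cols, k: float = 1.5) -> pd.DataFrame:
    """Remove rows whose values fall outside the IQR fences for any of the listed columns."""
    mask = pd.Series(True, index=df.index)
    for c in cols:
        q1, q3 = df[c].quantile(0.25), df[c].quantile(0.75)
        iqr = q3 - q1
        mask &= df[c].between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

# Hypothetical field records: soil nutrient and temperature readings
records = pd.DataFrame({
    "nitrogen": [40, 42, 39, 41, 400],          # 400 is an obvious sensor error
    "temperature": [26.1, 25.8, 26.5, 25.9, 26.2],
})
clean = drop_iqr_outliers(records, ["nitrogen", "temperature"])
print(clean)
```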
{"title":"MCYP-DeepNet: Nutrition and Temperature Based Season Wise Multi Crop Yield Prediction Using DeepNet 230 Classifier","authors":"Nilesh U Sambhe, Manjusha Deshmukh, L. Ashok Kumar, Sandip Chavan, Nidhi P. Ranjan","doi":"10.3103/S1060992X24700115","DOIUrl":"10.3103/S1060992X24700115","url":null,"abstract":"<p>A vital source of nutrition and a major contributor to the nation’s economic expansion is agriculture. Due to numerous complex factors such as environment, humidity, soil nutrients, and soil moisture, multi crop yield forecasting was very challenging. Because crop prediction is a complicated process, improving performance is challenging. To address these problems, an advance deep learning model was developed to predict crop types and its yields in a particular soil. A real time data were created, which contain various parameters such as soil nutrition’s, weather, data, seasons and temperature. The created dataset is pre-processed using outlier detection as well as normalization because it contains unwanted rows and columns. After that, the pre-processed data were given as input for the DeepNet230 model to analyze the input parameters like soil nutrition and temperature to predict the multi crop type and its yield quantity. DeepNet230 have the capacity of automatic feature learning and rapid unstructured process, so it provides an efficient prediction performance of crop yield and its types. The performance analysis of crop prediction for the proposed model are 93.7% accuracy, 93.4% recall, 92.8% precision and 92.9% specificity. Then, the performance of yield prediction for the identified crops are 95.5% accuracy, 91.6% recall, 93% precision and 94.2% specificity. In addition, the developed method was compared with several opposing methods for validation. The observed results show that the suggested method performed significantly better in real time due to its improved predictive capabilities.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"236 - 253"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700097
Pranav Kumar, Md. Talib Ahmad, Ranjana Kumari
Dysarthria is a motor speech condition caused by a lack of movement in the muscles needed to produce speech, including the lips, tongue, vocal cords and diaphragm. Speech that is slurred, sluggish or inaccurate can be the initial sign of dysarthria, which varies in severity. Dysarthria can result from health problems such as Parkinson's disease, muscular dystrophy, multiple sclerosis, brain tumors, brain damage and amyotrophic lateral sclerosis. This research develops an efficient method for extracting features from speech signals and classifying persons affected by dysarthria. The suggested method uses a speech signal as its input. The supplied speech signal is pre-processed to improve the identification of dysarthric speech; pre-processing methods such as the Butterworth band-pass filter and the Savitzky-Golay digital FIR filter are used to smooth the raw data. After pre-processing, the signals are passed to feature-extraction techniques, namely Yule-Walker autoregressive modelling, Mel-frequency cepstral coefficients and Perceptual Linear Prediction, to extract the important features. Dysarthric speech is finally detected using an enhanced Elman Spike Neural Network (EESNN) classifier, with Hunter Prey Optimization (HPO) used to select the EESNN weights optimally. The proposed algorithm achieves accuracy of 94.25% and specificity of 94.26%. Thus, the proposed approach is the best choice for predicting dysarthria from speech signals.
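Of the three feature extractors listed, the MFCC branch is the easiest to sketch; below is a minimal librosa example. The file name and sampling rate are placeholders, and the Yule-Walker AR, PLP and EESNN stages are not shown.

```python
import numpy as np
import librosa

def extract_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a speech recording and return its time-averaged MFCC feature vector."""
    y, sr = librosa.load(path, sr=16000)                       # resample to 16 kHz (assumed rate)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)     # shape: (n_mfcc, frames)
    return mfcc.mean(axis=1)                                   # one feature vector per utterance

# features = extract_mfcc("speaker_001.wav")                   # file name is hypothetical
```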
{"title":"HPO Based Enhanced Elman Spike Neural Network for Detecting Speech of People with Dysarthria","authors":"Pranav Kumar, Md. Talib Ahmad, Ranjana Kumari","doi":"10.3103/S1060992X24700097","DOIUrl":"10.3103/S1060992X24700097","url":null,"abstract":"<p>Motor speech condition called dysarthria is caused by a lack of movement in the lips, tongue, vocal cords, and diaphragm are a few of the muscles needed to produce speech. Speech that is slurred, sluggish, or inaccurate might be the initial sign of dysarthria, which varies in severity. Parkinson’s disease, muscular dystrophy, multiple sclerosis, brain tumors, brain damage, and amyotrophic lateral sclerosis are among the health problems that can result from dysarthria. This research develops an efficient method for extracting features and classifying dysarthria affected persons from speech signals. This suggested method uses a speech signal as its source. The supplied speech signal is pre-processed to improve the identification of dysarthria speech. Pre-processing methods like the Butterworth band pass filter and Savitzky Golay digital FIR filter are used to smoothing the raw data. After pre-processing, the signals are input into the feature extraction techniques, such as Yule-Walker Autoregressive modelling, Mel frequency cepstral coefficients and Perceptual Linear Predictive to extract the important features. The dysarthria speech is finally detected using an improved Elman Spike Neural Network (EESNN) algorithm-based classifier. Hunter Prey Optimization (HPO) is used to select the weights of EESNN optimally. The proposed algorithm achieves 94.25% accuracy and 94.26% specificity values. Thus this proposed approach is the best choice for predicting dysarthria disease using speech signal.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"205 - 220"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04. DOI: 10.3103/S1060992X24700061
P. Sh. Geidarov
This paper proposes an algorithm for the analytical calculation of convolutional neural networks without the use of neural network training algorithms. A description of the algorithm is given, on the basis of which the weights and threshold values of a convolutional neural network are calculated analytically. To calculate the parameters of the convolutional neural network, only 10 selected samples from the MNIST digit database were used, each a randomly chosen image of one of the recognizable digit classes from 0 to 9. As a result of the operation of this algorithm, the number of channels of the convolutional layers is also determined analytically. Based on the proposed algorithm, a software module was implemented in the C++ Builder environment, and a number of recognition experiments were carried out on the MNIST database. The results of the experiments show that the computation time of the convolutional neural networks is very short, amounting to fractions of a second to a minute. The analytically computed convolutional neural networks were tested on an MNIST digit set consisting of 1000 images of handwritten digits. The experimental results show that, using only 10 selected images from the MNIST database, analytically calculated convolutional neural networks are able to recognize more than half of the MNIST images without any application of neural network training algorithms. In general, the study shows that artificial neural networks, and convolutional neural networks in particular, are capable not only of being trained by learning algorithms but also of recognizing images almost instantly, without learning algorithms, by means of a preliminary analytical calculation of the network's weight values.
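The abstract does not give the analytical formulas, so the sketch below only illustrates the broader idea of deriving classifier weights directly from one exemplar image per class (a normalized-template correlation classifier); it is not the author's algorithm, and the exemplar array is hypothetical.

```python
import numpy as np

def build_template_weights(exemplars: np.ndarray) -> np.ndarray:
    """One zero-mean, unit-norm template per class, taken directly from a single sample image."""
    w = exemplars.reshape(exemplars.shape[0], -1).astype(float)
    w -= w.mean(axis=1, keepdims=True)
    w /= np.linalg.norm(w, axis=1, keepdims=True) + 1e-12
    return w

def classify(image: np.ndarray, w: np.ndarray) -> int:
    """Assign the class whose template correlates most strongly with the image."""
    x = image.reshape(-1).astype(float)
    x -= x.mean()
    return int(np.argmax(w @ x))

# exemplars: array of shape (10, 28, 28), one digit image per class 0..9 (hypothetical data)
# weights = build_template_weights(exemplars)
# label = classify(test_image, weights)
```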
{"title":"Analytical Calculation of Weights Convolutional Neural Network","authors":"P. Sh. Geidarov","doi":"10.3103/S1060992X24700061","DOIUrl":"10.3103/S1060992X24700061","url":null,"abstract":"<p>In this paper proposes an algorithm for the analytical calculation of convolutional neural networks without using neural network training algorithms. A description of the algorithm is given, on the basis of which the weights and threshold values of a convolutional neural network are analytically calculated. In this case, to calculate the parameters of the convolutional neural network, only 10 selected samples were used from the MNIST digit database, each of which is an image of one of the recognizable classes of digits from 0 to 9, and was randomly selected from the MNIST digit database. As a result of the operation of this algorithm, the number of channels of the convolutional neural network layers is also determined analytically. Based on the proposed algorithm, a software module was implemented in the Builder environment C++, on the basis of which a number of experiments were carried out with recognition of the MNIST database. The results of the experiments described in the work showed that the computation time of convolutional neural networks is very short and amounts to fractions of a second or a minute. Analytically computed convolutional neural networks were tested on the MNIST digit database, consisting of 1000 images of handwritten digits. The experimental results showed that already using only 10 selected images from the MNIST database, analytically calculated convolutional neural networks are able to recognize more than half of the images of the MNIST database, without application of neural network training algorithms. In general, the study showed that artificial neural networks, and in particular convolutional neural networks, are capable of not only being trained by learning algorithms, but also recognizing images almost instantly, without the use of learning algorithms using preliminary analytical calculation of the values of the neural network’s weights.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"157 - 177"},"PeriodicalIF":1.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}