Poincare Skyrmion Number
Pub Date: 2025-09-17. DOI: 10.3103/S1060992X24602227
V. V. Kotlyar, A. A. Kovalev, A. M. Telegin, S. S. Stafeev
We discuss two source vector fields of Poincare-beam type that can be regarded as optical skyrmions, i.e., topological quasiparticles. We derive explicit analytical relationships that describe the projections of a three-dimensional (3D) skyrmion vector field in the source plane and the skyrmion numbers, which are shown to be proportional to the topological charges of the constituent optical vortices of the Poincare beams. We also propose a new constructive formula as an effective tool for calculating the skyrmion number via normalized Stokes vector projections rather than skyrmion vector field projections. The skyrmion numbers calculated using the familiar and newly proposed formulae coincide. The number associated with each projection of the 3D skyrmion vector field is shown to comprise one third of the full skyrmion number. The theoretical conclusions are validated by numerical simulation. The non-uniform linear polarization in the skyrmion cross-section depends on the azimuthal angle and can be used to form a spiral microrelief due to the mass transfer of molecules on the surface of the material.
{"title":"Poincare Skyrmion Number","authors":"V. V. Kotlyar, A. A. Kovalev, A. M. Telegin, S. S. Stafeev","doi":"10.3103/S1060992X24602227","DOIUrl":"10.3103/S1060992X24602227","url":null,"abstract":"<p>We discuss two source vector fields of Poincare-beam type that can be looked upon as optical skyrmions, i.e. topological quasiparticles. We derive explicit analytical relationships that describe projections of a three-dimensional (3D) skyrmion vector field in the source plane and skyrmion numbers, which are shown to be pro-portional to the topological charges of constituent optical vortices of the Poincare beams. We also propose a new constructive formula as an effective tool for calculating the skyrmion number via normalized Stokes vector projections rather than skyrmion vector field projections. The skyrmion numbers calculated using the familiar and newly proposed formulae coincide. Numbers of each projection of a 3D skyrmion vector field are shown to comprise a third of the full skyrmion number. The theoretical conclusions are validated by a numerical simulation. The non-uniform linear polarization in the skyrmion cross-section depends on the azimuthal angle and can be used to form a spiral microrelief due to the mass transfer of molecules on the surface of the material.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 3","pages":"347 - 357"},"PeriodicalIF":0.8,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hopfield Model with Quasi-Diagonal Connection Matrix
Pub Date: 2025-09-17. DOI: 10.3103/S1060992X25700146
Leonid Litinskii
We analyze a Hopfield neural network with a quasi-diagonal connection matrix. We use the term “quasi-diagonal matrix” to denote a matrix in which all elements are zero except those on the first super- and sub-diagonals of the principal diagonal. The nonzero elements are arbitrary real numbers. Such a matrix generalizes the well-known connection matrix of the one-dimensional Ising model with open boundary conditions, where all nonzero elements equal +1. We present a simple description of the fixed points of the Hopfield neural network and their dependence on the matrix elements. The obtained results also allow us to analyze the cases of (a) the nonzero elements constituting arbitrary super- and sub-diagonals and (b) periodic boundary conditions.
{"title":"Hopfield Model with Quasi-Diagonal Connection Matrix","authors":"Leonid Litinskii","doi":"10.3103/S1060992X25700146","DOIUrl":"10.3103/S1060992X25700146","url":null,"abstract":"<p>We analyze a Hopfield neural network with a quasi-diagonal connection matrix. We use the term “quasi-diagonal matrix” to denote a matrix with all elements equal zero except the elements on the first super- and sub-diagonals of the principle diagonal. The nonzero elements are arbitrary real numbers. Such matrix generalizes the well-known connection matrix of the one dimensional Ising model with open boundary conditions where all nonzero elements equal <span>( + 1)</span>. We present a simple description of the fixed points of the Hopfield neural network and their dependence on the matrix elements. The obtained results also allow us to analyze the cases of a) the nonzero elements constitute arbitrary super- and sub-diagonals and b) periodic boundary conditions.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 3","pages":"334 - 346"},"PeriodicalIF":0.8,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vegetable Yield Prediction and Fertilizer Recommendation Using Optimized PINN and Independent Shearlet Based DBN Approach
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X25700079
Sandip B. Chavan, D. R. Ingle
Accurate vegetable yield prediction and precise fertilizer recommendations are crucial for maximizing agricultural productivity and sustainability. Advanced methodologies, including machine learning algorithms and precision agriculture tools, offer significant improvements in forecasting crop yields and optimizing nutrient application. However, traditional models often depend on extensive, high-quality datasets, which may be difficult to obtain in less-developed regions. Moreover, traditional fertilizer recommendation systems may not adapt sufficiently to real-time changes in soil conditions or crop requirements, leading to less precise nutrient management. To address these problems, vegetable yield prediction and fertilizer recommendation are performed using optimized machine learning and hybrid deep learning models. In this paper, the developed model collects agricultural data from a standard source. The collected data then undergo three pre-processing steps to improve crop yield prediction: data cleaning identifies missing or incomplete values, data normalization ensures that all features contribute equally to model training, and weighted k-means with neighbourhood averaging addresses outliers. The pre-processed data are then subjected to feature selection using Relief feature ranking with recursive feature elimination, and the selected features are used for crop yield prediction and fertilizer recommendation. A physics-informed neural network (PINN) optimized with the fruit fly optimization (IFO) algorithm is employed to predict the yield of vegetables such as chickpeas, kidney beans, blackgram, and lentil, while a hybrid Independent Shearlet-based Deep Belief Network (IS-DBN) is used for fertilizer recommendation. The proposed model attains accuracies of 99.17 and 96.99% and precisions of 91.67 and 89.65% for yield prediction and fertilizer recommendation, respectively, which is better than the existing methods. Thus, the proposed optimized machine learning and hybrid deep learning approach effectively predicts crop yield and recommends fertilizer with higher accuracy.
{"title":"Vegetable Yield Prediction and Fertilizer Recommendation Using Optimized PINN and Independent Shearlet Based DBN Approach","authors":"Sandip B. Chavan, D. R. Ingle","doi":"10.3103/S1060992X25700079","DOIUrl":"10.3103/S1060992X25700079","url":null,"abstract":"<p>Accurate vegetable yield prediction and precise fertilizer recommendations are crucial for maximizing agricultural productivity and sustainability. Advanced methodologies, including machine learning algorithms and precision agriculture tools, offer significant improvements in forecasting crop yields and optimizing nutrient application. However, traditional models often depend on extensive, high-quality datasets, which may be challenging to obtain in less-developed regions. Moreover, traditional fertilizer recommendation systems may not sufficiently adapt to real-time changes in soil conditions or crop requirements, leading to less precise nutrient management. In order to address the aforementioned problems, vegetable yield prediction and fertilizer recommendations are made using optimal machine learning and hybrid deep learning models. In this paper, the developed model collects agricultural data from a standard source. Subsequently, the collected data undergoes three pre-processing techniques to improve crop yield prediction. Data cleaning involves identifying missing or incomplete values, while data normalization ensures all features contribute equally to model training using weighted <i>k</i>-means and Neighbourhood averaging addresses outliers. After that pre-processed data is used for feature selection, using Relief Feature Ranking with Recursive Feature Elimination. The selected data is used for crop yield prediction and fertilizer recommendation. Physics-informed neural networks (PINN) based fruit fly optimization (IFO) algorithm is employed for predicting the yield of various vegetables like chickpeas, kidney beans, blackgram, lentil, etc. A hybrid Independent Shearlet-based Deep Belief Network (IS-DBN) is used for fertilizer recommendation. The performance metrics for vegetable prediction and fertilizer recommendation attained for the proposed model are 99.17 and 96.99% of accuracy, 91.67 and 89.65% of precision. The proposed model’s obtained values are better than those of the existing methods. Thus, the proposed optimized machine learning and hybrid deep learning approach effectively predict crop yield and fertilizer recommendation with higher accuracy.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"128 - 145"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing the Extremum of Post-Extrapolation Residuals in Compression of Multidimensional Discrete Arrays
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X25700055
M. V. Gashnikov
A method for compressing multidimensional discrete arrays is studied. Based on reducing the extremum of post-extrapolation residuals, the method can be used to handle video and image arrays. The compression algorithm calculates post-extrapolation residuals for all samples of an initial multidimensional discrete array and then roughens these residuals to reduce the required storage capacity and raise the data transmission rate. By reducing the extremum of the roughened post-extrapolation residuals, the method provides more reliable control of the discrepancy between the original multidimensional discrete array and its unpacked version. Computer simulations confirm that reducing the extremum of post-extrapolation residuals increases the efficiency of the multidimensional data compression algorithm. The experiments also demonstrate the approach to be more efficient than other popular algorithms for the compression of multidimensional discrete-data arrays.
{"title":"Reducing the Extremum of Post-Extrapolation Residuals in Compression of Multidimensional Discrete Arrays","authors":"M. V. Gashnikov","doi":"10.3103/S1060992X25700055","DOIUrl":"10.3103/S1060992X25700055","url":null,"abstract":"<p>A method for compressing multidimensional discrete arrays is studied. Based on the reduction of the extremum of post-extrapolation residuals, the method can be used to handle video and image arrays. The compression algorithm calculates post-extrapolation residuals for all samples of an initial multidimensional discrete array and then roughens these residuals to reduce the required capacity of a storage system and raise the data transmission rate. The method allows more reliable control of discrepancy between the original multidimensional discrete array and its unpacked version by reducing the extremum of roughened post-extrapolation residuals. Computer simulations validate that the use of the reduction of the extremum of post-extrapolation residuals increases the efficiency of the multidimensional data compression algorithm. The experiments also demonstrate the approach to be more efficient than other popular algorithms used for the compression of multidimensional discrete-data arrays.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"164 - 168"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep LSTM and Chi-Square Based Feature Selection Model for Traffic Congestion Prediction in Ad-Hoc Network
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X2570002X
K. Sangeetha, E. Anbalagan, Raj Kumar, Vaibhav Eknath Pawar, N. Muthukumaran
An ad-hoc network is a type of wireless network, but it differs from other wireless networks in that it lacks infrastructure such as access points, routers, and other devices. While a node can communicate with every other node in the same cell in infrastructure networks, routing and the limitations of wireless communication are the main issues in ad-hoc networks, and existing solutions do not give accurate results. To overcome these issues, we propose traffic congestion prevention for IoT-based traffic management in ad-hoc networks using deep learning. The proposed method has five phases: data collection, preprocessing, feature selection, classification, and decision making. The input data are gathered from IoT devices in the ad-hoc network. The IoT features are then preprocessed using missing-value replacement and SMOTE resampling. The preprocessed features are selected using the chi-square test, which retains the optimal features and avoids overfitting. The selected IoT features are then classified with a deep LSTM, which determines whether the network is congested or not. If the network is congested, data transmission is routed through a less congested path; otherwise, the IoT data are transmitted directly. The proposed model was designed and its performance validated using MATLAB software. Deep learning (DL) performance parameters such as accuracy, precision, recall, and error have values of 98.32, 98.325, 97.87, and 1.9%, respectively. The proposed model is therefore effective for detecting traffic congestion and preventing it in an ad-hoc network's IoT-based traffic management system.
{"title":"Deep LSTM and Chi-Square Based Feature Selection Model for Traffic Congestion Prediction in Ad-Hoc Network","authors":"K. Sangeetha, E. Anbalagan, Raj Kumar, Vaibhav Eknath Pawar, N. Muthukumaran","doi":"10.3103/S1060992X2570002X","DOIUrl":"10.3103/S1060992X2570002X","url":null,"abstract":"<p>Ad-hoc network is a type of wireless network, but it differs from other wireless networks in that it lacks infrastructure such as access points, routers, and other devices. While a node can communicate with every other node in the same cell in infrastructure networks, routing and the limitations of wireless communication are the main issues in ad hoc networks. But those clarifications are gave not accurate results. In order to overcome these issues, proposed traffic congestion prevention for IoT based traffic management in ad-hoc network using deep learning. This proposed method has five phases like data collection, preprocessing, feature selection, classification and decision making. The input data gathered from IoT devices in the ad-hoc network. After that, IoT features were preprocessed using missing values replacement and SMOTE resampling. Then preprocessed IoT data features to be selected using chi-square, which is used to select optimal features to avoid overfitting problems. Following that, the selected IoT features were classified with the help of deep LSTM, which is used to know whether the network is traffic or not. If the network have traffic, the data transmission is done through the traffic less path. Otherwise, the IoT data should be transmitted easily. The proposed model was designed and the performance was validated by using MATLAB software. Deep learning (DL) performance parameters such as accuracy, precision, recall, and error have values of 98.32, 98.325, 97.87, and 1.9%, respectively. Moreover, this proposed model is effective for detecting traffic congestion and which is used to prevent traffic through an ad-hoc network’s IoT based traffic management system.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"239 - 255"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive Purity: Advancements in Air Pollution Forecasting through Machine Learning
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X25700031
Mankala Satish, Saroj Kumar Biswas, Biswajit Purkayastha
The world economy, human well-being, and the health of plants and animals have all suffered greatly as a result of rising air pollution. This survey investigates four different aspects of air pollution prediction using machine learning (ML). It examines the relationship between industrial processes and emissions, concentrating on contributing factors and industries; predictive models that can foretell pollution levels from industrial activity are created using machine learning techniques. ML models are used to forecast the amounts of pollution associated with vehicle traffic, as automobiles play a significant role in the degradation of urban air quality. The use of ML-based approaches to predict pollution levels from natural phenomena such as dust storms, lava flows, and wildfires supports preventive measures and disaster preparedness. Lastly, ML algorithms are used to anticipate pollutant emissions from a range of combustion sources, including power plants, residential heating systems, and industrial boilers. In addition to discussing the consequences for pollution management strategies, the study assesses how well machine learning algorithms predict emissions. The objective is to further advance the development of forecasting capabilities that are essential for lowering the detrimental effects of air pollution on the environment and public health by providing insights into the rapidly evolving field of air pollution forecasting through ML approaches.
{"title":"Predictive Purity: Advancements in Air Pollution Forecasting through Machine Learning","authors":"Mankala Satish, Saroj Kumar Biswas, Biswajit Purkayastha","doi":"10.3103/S1060992X25700031","DOIUrl":"10.3103/S1060992X25700031","url":null,"abstract":"<p>The world economy, human well-being, and the health of plants and animals have all suffered greatly as a result of rising air pollution. This survey investigates four different aspects of air pollution prediction using machine learning (ML). It examines the relationship between industrial processes and emissions, concentrating on factors and industries. Predictive models that can foretell pollution levels from industrial activity are created using machine learning techniques. ML models are used to forecast the amounts of pollution associated with vehicle traffic, as automobiles play a significant role in the degradation of urban air quality. The use of ML based approaches to predict pollution levels from natural phenomena like storms of dust, lava flows, and wildfires helps preventive measures and disaster preparedness. Lastly, ML algorithms are used to anticipate pollutant emissions from a range of combustion sources, including power plants, residential heating systems, and industrial boilers. In addition to discussing the consequences for pollution management strategies, the study assesses how well machine learning algorithms predict emissions. The objective is to further advance the creation of forecasting abilities that are essential for lowering the detrimental effects of air pollution on the environment and public health by providing insights into the quickly evolving field of air pollution forecasts through ML approaches.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"256 - 271"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial Expression Recognition in Infrared Imaging Using HAPNet Segmentation and Hybrid VGG16-SVM Classifier
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X24600599
Rupali J. Dhabarde, D. V. Kodavade, Aditya Konnur, Vijay Manwatkar
Recognition of human facial expressions is one of the most significant and challenging social interaction tasks. Humans often convey their feelings and intentions through their facial expressions in a natural and honest manner; nonverbal communication is largely characterized by facial expressions. Various approaches for classifying emotions and recognizing faces have been established to enhance the accuracy of face recognition in infrared images. Significant issues of recent deep FER systems include overfitting due to insufficient training data as well as expression-unrelated variables such as identification bias, head posture, and illumination. To address these challenges, the proposed model implements a method for detecting facial expressions using HAPNet segmentation and a hybrid VGG16 with SVM classifier. First, the images are pre-processed using an optimized Difference of Gaussians (DoG) filter to enhance the edges of the image, with the Artificial Gorilla Troops Optimization (GTO) algorithm used to select the kernel size based on the maximum PSNR. The next step segments the face using the Hybrid, Asymmetric, and Progressive Network (HAPNet) method. Landmarks are detected with Multi-Task Cascaded Convolutional Networks (MTCNN) to identify the locations of the mouth, eyes, and nose. The last step categorizes seven facial emotions (happy, sad, disgusted, surprised, angry, fearful, and neutral) using the hybrid VGG16 with Support Vector Machine (SVM) algorithm. The proposed methodology achieves an accuracy of 96.6%, a positive predictive value of 93.08%, a hit rate of 95.2%, a selectivity of 92.5%, a negative predictive value of 95.8%, and an F1-score of 94.49%. Experiments on the database illustrate that the proposed approach outperforms conventional techniques in accurately identifying facial expressions in thermal images.
{"title":"Facial Expression Recognition in Infrared Imaging Using HAPNet Segmentation and Hybrid VGG16-SVM Classifier","authors":"Rupali J. Dhabarde, D. V. Kodavade, Aditya Konnur, Vijay Manwatkar","doi":"10.3103/S1060992X24600599","DOIUrl":"10.3103/S1060992X24600599","url":null,"abstract":"<p>Recognition of Human Face expression is the most significant and challenging societal interaction tasks. Humans often convey their feelings and intentions through their facial expressions in a natural and honest manner, nonverbal communication is mostly characterized by facial expressions. Various approaches for classifying emotions and facial recognition have been established to enhance the accuracy of face recognition in the infrared images. Significant issues of recent deep FER systems include overfitting due to insufficient training data as well as expression-unrelated variables such as identification bias, head posture, and illumination. To address these challenges, the proposed model implemented a method for detecting facial expression using HAPNet segmentation and hybrid VGG16 with SVM classifier. At first, pre-processed the images using an optimized Difference of Gaussians (DOG) filter for enhancing the edges of the image and the Artificial Gorilla Troops Optimization Algorithm (GTO) is used to select the kernel size based on the maximum PSNR. Segmentation is the next step for segmenting the face using the Hybrid, Asymmetric, and Progressive Network (HAPNet) method. Landmark is detected based on Multi-Task Cascaded Convolutional Networks (MTCNN) for identifying the location of the mouth eyes, and nose. The last step is to categorize the seven emotions which are happy, sad, disgusted, surprised, angry, fearful, and neutral on faces using the hybrid VGG16 with Support Vector Machine (SVM) algorithm. The effectiveness of the proposed methodology is evaluated using the metrics of accuracy is 96.6%, positive predictive value is 93.08%, hit rate of 95.2%, selectivity of 92.5%, negative predictive value of 95.8%, and f1-score of 94.49%. Experiments on the database illustrates that the proposed approach performs better than conventional techniques for accurately identifies the expressions on the face in the thermal images.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"146 - 163"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interconnection Tensor Rank and the Neural Network Storage Capacity
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X25600272
B. V. Kryzhanovsky
Neural network properties are considered in the case of the interconnection tensor rank being higher than two (i.e., when in addition to the synaptic connection matrix, there are presynaptic synapses, pre-presynaptic synapses, etc.). This sort of interconnection tensor occurs in the realization of crossbar-based neural networks. It is intrinsic for a crossbar design to suffer from parasitic currents: when a signal travels along a connection to a certain neuron, a part of it always passes to other neurons' connections through memory cells (synapses). As a result, the signal at a neuron input carries noise in the form of weak signals going to all other neurons. This means that the conductivity of an analog crossbar cell varies in proportion to the noise signal, and the cell output signal becomes nonlinear. It is shown that an interconnection tensor of a certain form makes the neural network much more efficient: the storage capacity and the basins of attraction of the network increase considerably. A Hopfield-like network is used in the study.
{"title":"Interconnection Tensor Rank and the Neural Network Storage Capacity","authors":"B. V. Kryzhanovsky","doi":"10.3103/S1060992X25600272","DOIUrl":"10.3103/S1060992X25600272","url":null,"abstract":"<p>Neural network properties are considered in the case of the interconnection tensor rank being higher than two (i.e., when in addition to the synaptic connection matrix, there are presynaptic synapses, pre-presynaptic synapses, etc.). This sort of interconnection tensor occurs in realization of crossbar-based neural networks. It is intrinsic for a crossbar design to suffer from parasitic currents: when a signal travels along a connection to a certain neuron, a part of it always passes to other neurons’ connections through memory cells (synapses). As a result, a signal at the neuron input holds noise—other weak signals going to all other neurons. It means that the conductivity of an analog crossbar cell varies proportionally to the noise signal, and the cell output signal becomes nonlinear. It is shown that the interconnection tensor of a certain form makes the neural network much more efficient: the storage capacity and basin of attraction of the network increase considerably. A network like the Hopfield one is used in the study.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"181 - 187"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Analysis of Compact All-Optical XOR and XNOR Gates Employing Microring Resonator
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X24601362
Manjur Hossain
The manuscript presents the analysis and all-optical implementation of compact XOR and XNOR gates using a microring resonator. Research on performing a logic operation and its inverse simultaneously in a single circuit is crucial and productive in the field of optical computing. In addition, energy-efficient circuits are becoming more and more important. The XOR and XNOR logic gates are designed and analyzed at about 260 Gbps using MATLAB. The same design has also been verified with the Ansys Lumerical finite difference time domain (FDTD) software. The footprint of the FDTD design is only 47.7 μm × 18.8 μm. The proposed XOR and XNOR gates are particularly useful for digital signal processing because of their small architecture and fast response times. Several performance-indicating variables are evaluated and analyzed, including the extinction ratio, contrast ratio, amplitude modulation, on-off ratio, and relative eye opening. Optimized design parameters are chosen for implementing the design experimentally.
{"title":"Design and Analysis of Compact All-Optical XOR and XNOR Gates Employing Microring Resonator","authors":"Manjur Hossain","doi":"10.3103/S1060992X24601362","DOIUrl":"10.3103/S1060992X24601362","url":null,"abstract":"<p>The manuscript includes the analysis and implementation of compact XOR and XNOR gates all-optically using microring resonator. Research on simultaneous logic and its inverse operation in a single circuit is crucial and productive in the field of optical computing. In addition, energy-efficient circuits are becoming more and more crucial. XOR and XNOR logic gates are designed and analyzed at about 260 Gbps using MATLAB. The same design has also been verified by “Ansys Lumerical finite difference time domain (FDTD)” software. Footprint of the FDTD design is only 47.7 μm × 18.8 μm. This proposed XOR and XNOR are particularly useful for digital signal processing because of its small architecture and faster response times. The evaluation and analysis of a few performance-indicating variables includes “extinction ratio”, “contrast ratio”, “amplitude modulation”, “on-off ratio”, and “relative eye opening”. Optimized design parameters are chosen to implement the design experimentally.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"229 - 238"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145161094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heart Disease Prediction and Classification Using LSTM Optimized by Butterfly Optimization
Pub Date: 2025-07-02. DOI: 10.3103/S1060992X25700043
C. Usha Nandhini, P. R. Tamilselvi
Heart disease is a primary cause of disability and premature mortality globally. Coronary heart disease, the most prevalent kind, occurs when plaque builds up inside the arteries that feed blood to the heart, making blood circulation difficult. Heart disease prediction is a difficult task in clinical machine learning. Various existing systems are used to detect the type of heart disease, but those approaches are time-consuming and inaccurate at early stages. To address these issues, a deep learning framework has been developed to achieve accurate disease classification. Initially, data are collected and pre-processed using a Sequential K-Nearest Neighbors (SKNN) technique for missing-value replacement. The data are then subjected to decimal scaling normalization to enhance their integrity and uniformity. The dimension of the feature vector is subsequently reduced by applying Multilinear Principal Component Analysis (MPCA), and the Butterfly Optimization Algorithm (BOA) is employed to determine the ideal number of components and enhance the accuracy of the proposed model. The features are then classified using Long Short-Term Memory (LSTM) to determine the different forms of cardiac disease. To evaluate the proposed model, its performance measures are compared with those of existing models. The values attained by the proposed model for sensitivity, MCC, negative predictive value (NPV), false discovery rate (FDR), accuracy, precision, error, specificity, F1-score, false negative rate (FNR), false positive rate (FPR), FNR, and FPR are 96.5, 95, 3.5, 95.9, 95.5, 94.7, 95.7, 2.8, 3.7, 90.9, 93.2, 95.7, and 2.9%, respectively. In comparison with other existing techniques, the proposed technique performs better, making the developed model well suited to determining the type of heart disease.
{"title":"Heart Disease Prediction and Classification Using LSTM Optimized by Butterfly Optimization","authors":"C. Usha Nandhini, P. R. Tamilselvi","doi":"10.3103/S1060992X25700043","DOIUrl":"10.3103/S1060992X25700043","url":null,"abstract":"<p>Heart disease is a primary cause of disability and premature mortality globally. Coronary heart disease is the most prevalent kind of heart disease, which happens when plaque builds up inside the arteries that feed blood to the heart, making blood circulation difficult. Heart disease prediction is a difficult task in clinical machine learning. However, various existing systems are utilized to detect the type of heart disease but those approaches are time-consuming and inaccurate to detect the disease at early stages. To address various issues, a deep learning framework has been developed to achieve accurate disease classification. Initially, data’s are collected and pre-processed using a Sequential K-Nearest Neighbors (SKNN) technique for missing value replacement. The data is then subjected to decimal scaling normalization to enhance its integrity and uniformity. Then, reducing the dimension of the feature vector by applying Multilinear Principal Component Analysis (MPCA). Butterfly optimization (BOA) is employed to determine the ideal quantity of components to enhance the accuracy of the proposed model. In order to determine the different forms of cardiac disease, characteristics are classified subsequently using Long Short-Term Memory (LSTM). To evaluate the planned model’s performance, performance measures from the proposed and existing models are compared. Performance measures include Sensitivity, MCC, Negative Predictive Value (NPV), False Discovery Rate (FDR), Accuracy, Precision, Error, Specificity, F1-score, False Negative Rate (FNR), False Positive Rate (FPR), False Negative Rate (FNR), and False Positive Rate (FPR) attained for the proposed model is 96.5, 95, 3.5, 95.9, 95.5, 94.7, 95.7, 2.8, 3.7, 90.9, 93.2, 95.7 and 2.9%. In comparison to other existing techniques, the proposed technique performs better. In order to determine the type of heart disease, the created model is the best choice.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"34 2","pages":"272 - 284"},"PeriodicalIF":0.8,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}