This paper presents a theoretical study of a new device concept for high-performance frequency-division demultiplexing, based on a waveguide system composed of segments and loops in the presence of two geometric defects. The system separates two frequencies using 1D photonic waveguide loop structures. The structure under consideration has a Y-shaped demultiplexer configuration, consisting of a single input and two output channels (transmission lines). Each output channel is built from an alternating unit cell consisting of a segment and a loop. Introducing a geometrical defect at the segment level in the middle of each output line creates two defect modes inside the band gaps. The numerical results show that this demultiplexer is able to separate two signals (electromagnetic waves) of different frequencies and guide each signal to its own output channel. We perform the analytical calculation of the transmission rates T1, T2 and the reflection R for the proposed demultiplexer using the interface response theory, which is based on the Green's function method. The proposed device offers high transmission efficiency, a high quality factor, and a large frequency separation between the defect modes, making it highly desirable for frequency-division demultiplexing applications.
{"title":"Two Frequency-Division Demultiplexing Using Photonic Waveguides by the Presence of Two Geometric Defects","authors":"El-Aouni Mimoun, Ben-Ali Youssef, El Kadmiri Ilyass, Ouariach Abdelaziz, Bria Driss","doi":"10.3103/S1060992X24700218","DOIUrl":"10.3103/S1060992X24700218","url":null,"abstract":"<p>This paper presents a theoretical work of a new device concept for frequency division demultiplexing with excellent performance based on waveguides system containing segments and loops in the presence of two geometrics defects. This system permits the separation of two frequency, based on 1<i>D</i> photonic waveguides loops structures. The system under consideration possesses a Y‑shaped demultiplexer configuration, consisting of a single input and two output channels (transmission lines). Each output channel contains an alternating unit cell consisting of a segment and a loop. The creation of a geometrical defect at the segment level in the middle of each output line allows the creation of two defect modes inside the bandgaps. The numerical results show that this demultiplexer system is able to separate two signals (electromagnetic waves) of different frequencies and guide each signal through an output channel. We perform the analytical calculation of the transmission rates <i>T</i><sub>1</sub>, <i>T</i><sub>2</sub>, and reflection R using the interface response theory, which is based on Green’s function method for the proposed demultiplexer system. The proposed device offers high transmission efficiency, high quality factor and a large frequency difference between defect modes, hence, it is highly desirable for frequency division demultiplexing applications.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"326 - 338"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142413979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X24700140
M. V. Gashnikov
The paper investigates algorithms that use long intensity gradients for georeferencing Earth remote sensing data. The case considered is one in which a “reliable” referenced set of remote sensing data is already known for a particular area. New input data are referenced to this “reliable” set by detecting resemblant fragments in the “reliable” data set and in the new remote sensing data. A set of pairs of resemblant fragments makes it possible to calculate the transformation parameters of the new data. To increase the efficiency of detecting resemblant fragments, we move to the space of long intensity gradients, which makes the georeferencing method more robust to admissible differences between resemblant fragments. The paper considers and compares several algorithms for transforming the data into the long-gradient space. The computational experiment provides grounds for recommending the best way of performing this transformation.
{"title":"Georeferencing Remote Sensing Data Using Long Gradients","authors":"M. V. Gashnikov","doi":"10.3103/S1060992X24700140","DOIUrl":"10.3103/S1060992X24700140","url":null,"abstract":"<p>The paper investigates algorithms using long intensity gradients for georeferencing of Earth remote sensing data. The case is considered in which one “reliable” referenced set of remote sensing data is already known for a particular area. New input data are referenced to this “reliable” set by detecting resemblant fragments in the “relible” data set and new remote sensing data. A set of pairs of resemblant fragments makes it possible to calculate the transformation parameters of new data. To increase the efficiency of resemblant fragments detection, we go to the space of long intensity gradients, which makes the georeferencing method more stable to admissible differences between resemblant fragments. The paper considers a few algorithms of going to the long gradient space and compares them. The computaional experiment provides grounds for recommending the best way of going to the long gradient space.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"255 - 258"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X2470019X
V. I. Egorov, B. V. Kryzhanovsky
The 1/t Wang-Landau algorithm is analyzed from the viewpoint of execution time and accuracy when it is used to compute the density of states of a two-dimensional Ising model. We find that the simulation results have a systematic error whose magnitude decreases with increasing lattice size. The relative error has two maxima: the first is located near the ground-state energy, and the second corresponds to the value of the internal energy at the critical point. We demonstrate that it is impossible to estimate the execution time of the 1/t Wang-Landau algorithm in advance when simulating large lattices, because the final value of the modification factor can be reached before the criterion for switching to the 1/t mode is met. Simultaneous calculation of the density of states over energy and magnetization is shown to yield higher accuracy in estimating the statistical moments of the internal energy.
{"title":"Accuracy and Performance Analysis of the 1/t Wang-Landau Algorithm in the Joint Density of States Estimation","authors":"V. I. Egorov, B. V. Kryzhanovsky","doi":"10.3103/S1060992X2470019X","DOIUrl":"10.3103/S1060992X2470019X","url":null,"abstract":"<p>The 1/<i>t</i> Wang-Landau algorithm is analyzed from the viewpoint of execution time and accuracy when it is used in computations of the density of states of a two-dimensional Ising model. We find that the simulation results have a systematic error, the magnitude of which decreases with increasing the lattice size. The relative error has two maxima: the first one is located near the energy of the ground state, and the second maximum corresponds to the value of the internal energy at the critical point. We demonstrate that it is impossible to estimate the execution time of the 1/<i>t</i> Wang-Landau algorithm in advance when simulating large lattices. The reason is that when the final value of the modification factor was reached, the criterion for transition to mode 1/<i>t</i> was not met. The simultaneous calculations of the density of states for energy and magnetization are shown to lead to higher accuracy in estimating statistical moments of internal energy.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"302 - 307"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X24700176
V. S. Skorokhodov, D. M. Drozdova, D. A. Yudin
Recently, there has been increased interest in NeRF methods, which reconstruct a differentiable representation of three-dimensional scenes. One of the main limitations of such methods is their inability to assess the confidence of the model in its predictions. In this paper, we propose a new neural network model for the formation of extended vector representations, called uSF, which allows the model to predict not only the color and semantic label of each point but also the corresponding uncertainty values. We show that with a small number of images available for training, a model that quantifies uncertainty performs better than a model without such functionality. Code of the uSF approach is publicly available at https://github.com/sevashasla/usf/.
{"title":"uSF: Learning Neural Semantic Field with Uncertainty","authors":"V. S. Skorokhodov, D. M. Drozdova, D. A. Yudin","doi":"10.3103/S1060992X24700176","DOIUrl":"10.3103/S1060992X24700176","url":null,"abstract":"<p>Recently, there has been an increased interest in NeRF methods which reconstruct differentiable representation of three-dimensional scenes. One of the main limitations of such methods is their inability to assess the confidence of the model in its predictions. In this paper, we propose a new neural network model for the formation of extended vector representations, called uSF, which allows the model to predict not only color and semantic label of each point, but also estimate the corresponding values of uncertainty. We show that with a small number of images available for training, a model that quantifies uncertainty performs better than a model without such functionality. Code of the uSF approach is publicly available at https://github.com/sevashasla/usf/.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"276 - 285"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142413895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X24700164
Artem Mukhin, Rustam Paringer, Danil Gribanov, Igor Kilbas
Analyzing hyperspectral images is a non-trivial task that poses a variety of challenges. One widely employed way to overcome most of them is to use indices, such as the Normalized Difference Vegetation Index (NDVI). Indices provide a powerful means to distill complex spectral information into meaningful metrics, facilitating the interpretation of specific features within the hyperspectral domain. Moreover, indices are usually easy to compute. However, creating indices that discern arbitrary data classes within an image proves to be a challenging task. In this paper, we present an algorithm designed to automatically generate lightweight descriptors suited for discerning between arbitrary classes in hyperspectral images. These lightweight descriptors are characterized by indices derived from selected informative layers. Our proposed algorithm streamlines the descriptor generation process through a multi-step approach. First, it employs Principal Component Analysis (PCA) to transform the hyperspectral image into a three-channel representation. This transformed image serves as input to a Segment Anything Model (SAM). The neural network outputs a labeled map delineating different classes within the hyperspectral image. Subsequently, our Informative Index Formation algorithm (INDI) uses this labeled map to systematically generate a set of lightweight descriptors. Each descriptor within the set is adept at distinguishing a specific class from the remaining classes in the hyperspectral image. The paper demonstrates the practical application of the developed algorithm to hyperspectral image segmentation.
{"title":"Automated Lightweight Descriptor Generation for Hyperspectral Image Analysis","authors":"Artem Mukhin, Rustam Paringer, Danil Gribanov, Igor Kilbas","doi":"10.3103/S1060992X24700164","DOIUrl":"10.3103/S1060992X24700164","url":null,"abstract":"<p>Analyzing hyperspectral images poses a non-trivial challenge due to various challenges. To overcome most of these challenges one of the widely employed approach involves utilizing indices, such as the Normalized Difference Vegetation Index (NDVI). Indices provide a powerful means to distill complex spectral information into meaningful metrics, facilitating the interpretation of specific features within the hyperspectral domain. Moreover, the indices are usually easy to compute. However, creating indices for discerning arbitrary data classes within an image proves to be a challenging task. In this paper, we present an algorithm designed to automatically generate lightweight descriptors, suited for discerning between arbitrary classes in hyperspectral images. These lightweight descriptors within the algorithm are characterized by indices derived from selected informative layers. Our proposed algorithm streamlines the descriptor generation process through a multi-step approach. Firstly, it employs Principal Component Analysis (PCA) to transform the hyperspectral image into a three-channel representation. This transformed image serves as input for a Segment Anything Model (SAM). The neural network outputs a labeled map, delineating different classes within the hyperspectral image. Subsequently, our Informative Index Formation algorithm (INDI) utilizes this labeled map to systematically generate a set of lightweight descriptors. Each descriptor within the set is adept at distinguishing a specific class from the remaining classes in the hyperspectral image. The paper demonstrates the practical application of the developed algorithm for hyperspectral image segmentation.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"264 - 275"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X24700206
M. K. Arif, Kalaivani Kathirvelu
Reducing the number of car accidents and the deaths that result from them requires close monitoring of drivers' health and alertness. Identifying driver weariness has been a major practical concern and problem in recent years. A number of machine learning algorithms have been used for monitoring the driver's health, although accurate and early identification remains challenging. To overcome these issues, a system that monitors vehicle driver health using a wearable ECG and an optimized Deep Belief Network (DBN) is proposed. The collected raw ECG signal is pre-processed using a notch filter, a high-pass filter, and an adaptive sliding window to improve signal quality. After that, Wavelet Packet Decomposition (WPD) and the Short-Time Fourier Transform (STFT) are used to extract features from the pre-processed signal, which allows both time-domain and frequency-domain information to be captured. To classify whether a driver is fit to drive, is under stress, or has a heart condition, the extracted statistical features are passed to the optimized Deep Belief Network (DBN) for classification. The walrus optimization technique is utilized to set the learning rate of the DBN classifier in an optimal manner. To prevent collisions between vehicles, the driver is alerted via a buzzer system in the event of stress or heart problems. According to the experimental results, the proposed technique achieves 95.1% accuracy, 92.5% precision, 96.5% specificity, 93% recall, and a 92.7% F1-score. Thus, driver health can be accurately detected using this automated model.
{"title":"Automated Driver Health Monitoring System in Automobile Industry Using WOA-DBN Using ECG Waveform","authors":"M. K. Arif, Kalaivani Kathirvelu","doi":"10.3103/S1060992X24700206","DOIUrl":"10.3103/S1060992X24700206","url":null,"abstract":"<p>Reducing the amount of car accidents and the deaths that result from them requires close monitoring of drivers’ health and alertness. Identifying driver weariness has been a major practical concern and problem in recent years. A number of machine learning algorithms have been used for monitoring the driver’s health system, even though accurate and early identification is more challenging. In order to overcome this issues, vehicle driver health is monitored using wearable ECG based on an optimized Deep Belief Network (DBN) is proposed. The collected ECG raw signal is pre-processed using a notch filter and high pass filter and an adaptive sliding window to improve the signal quality. After that, Wavelet Packet Decomposition (WPD) and the Short Time Fourier Transform (SIFT) are used to extract features from the pre-processed signal. It enables for the extraction of both time and frequency domain data. In order to classify whether a driver is fit to drive, is under stress, or has a heart condition, the extracted statistical features are sent for further classification using an optimized Deep Belief Neural Network (DBN). The walrus optimization technique is utilized to set the learning rate of the DBN classifier in an optimal manner. To prevent collisions between vehicles, the driver will be alerted via a buzzer system in the event of stress or heart problems. According to the results of the experimental research, the proposed technique achieves 95.1% accuracy, 92.5% precision, 96.5% specificity, 93% of recall, and 92.7% of the f1-score. Thus, the driver health monitoring system can be accurately detected using this automated model.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"308 - 325"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X24700231
Raj Kumar, Amit Prakash Singh, Anuradha Chug
Plant diseases can harm crops and reduce the amount of food that can be cultivated, which is problematic for farmers. Technology is being utilized to develop computer-based programs that can recognize plant diseases and assist farmers in making better decisions after identifying plant leaf diseases. Most of these models apply machine learning algorithms, making predictions about potential plant diseases using mathematical models and neural networks. Many researchers have discussed variants of DNN and CNN algorithms to solve these problems and reported better results. In this paper, a novel approach is discussed and implemented in which plant disease is identified regardless of whether the captured leaf image has a noisy background and whether the leaf image is segmented. The authors developed an adaptive algorithm that produces results in two phases: first, classification of the plant disease based on the original input leaf image, and second, identification of the plant leaf disease after applying the segmentation process. The results of this two-phase model are analyzed and compared with existing popular models such as AlexNet, ResNet-50, and EffNet, and the results are convincing. The proposed model achieves 97.39% accuracy on noiseless images and 90.26% accuracy when a noisy-background image is used as input; when the segmentation-based AH-CNN model is applied to noisy real-time images, the accuracy is 95.27%.
{"title":"Adaptive Disease Detection Algorithm Using Hybrid CNN Model for Plant Leaves","authors":"Raj Kumar, Amit Prakash Singh, Anuradha Chug","doi":"10.3103/S1060992X24700231","DOIUrl":"10.3103/S1060992X24700231","url":null,"abstract":"<p>Plant diseases can harm crops and reduce the amount of food that can be cultivated, which is problematic for farmers. Technology is being utilized to develop computer-based programs that can recognize plant diseases and assist farmers in making better decisions after identifying plant leaf diseases. In most of these models, machine learning algorithms are applied, to make predictions about potential plant diseases using mathematical models and neural networks. Many researchers discussed the variants of DNN and CNN algorithms to solve the discussed problems and gave better results. In this paper, the novel approach is discussed and implemented where the plant disease is identified whether the plant leaf captured image has a noisy background or not; or whether the leaf image is segmented or not. The authors developed an adaptive algorithm which gives the results in two phases: the classification of the plant disease based on the original input leaf image and secondly, the identification of plant leaf disease after applying the segmentation process. The result of this two-phase proposed model is analyzed and compared with existing popular models like AlexNet, ResNet-50, and the EffNet the results are convincing. The proposed model has 97.39% accuracy when the noiseless image is taken; while the 90.26% accuracy is there, in case of noisy background image as an input; and the results are outstanding, if the authors are applying their segmentation-based AH-CNN model on the noisy real-time image, the accuracy is 95.27%.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"355 - 372"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X2470022X
C. S. Parvathy, J. P. Jayan
Lung cancer is the most common cancer and the primary cause of cancer-related fatalities globally. Lung cancer patients have a 14% overall survival rate. If the cancer is found in its early stages, the lives of patients with the disease may be preserved. A variety of conventional machine learning and deep learning algorithms have been developed for effective automatic diagnosis of lung cancer, but they still have issues with recognition accuracy and take longer to analyze. To overcome these issues, this paper presents a deep-learning-assisted Squeeze-and-Excitation Convolutional Neural Network (SENET) to predict lung cancer on computed tomography images. Lung CT images are used for prediction. The raw images are preprocessed using an Adaptive Bilateral Filter (ABF) and Reformed Histogram Equalization (RHE) to remove noise and enhance image clarity. The Tuna Swarm optimization algorithm is used to determine the tunable parameters of the RHE approach. The preprocessed image is then passed to the segmentation stage, where the Chan-Vese segmentation model is used to segment the image. The segmentation output is then fed into the classifier for final classification, with the SENET classifier used for the final lung cancer prediction. The test results demonstrate that the proposed model can identify lung cancer with 99.2% accuracy, 99.1% precision, and 0.8% error. The proposed SENET system successfully predicts lung cancer from CT scan images.
{"title":"Automatic Lung Cancer Detection Using Computed Tomography Based on Chan Vese Segmentation and SENET","authors":"C. S. Parvathy, J. P. Jayan","doi":"10.3103/S1060992X2470022X","DOIUrl":"10.3103/S1060992X2470022X","url":null,"abstract":"<p>Lung cancer is the most common cancer and the primary reason for cancer related fatalities globally. Lung cancer patients have a 14% overall survival rate. If the cancer is found in the early stages, the lives of patients with the disease may be preserved. A variety of conventional machine and deep learning algorithms have been developed for the effective automatic diagnosis of lung cancer. But they still have issues with recognition accuracy and take longer to analyze. To overcome these issues, this paper presents deep learning assisted Squeeze and Excitation Convolutional Neural Networks (SENET) to predict lung cancer on computed tomography images. This paper uses lung CT images for prediction. These raw images are preprocessed using Adaptive Bilateral Filter (ABF) and Reformed Histogram Equalization (RHE) to remove noise and enhance an image’s clarity. To determine the tunable parameters of the RHE approach Tuna Swam optimization algorithm is used in this proposed method. This preprocessed image is then given to the segmentation process to divide the image. This proposed approach uses the Chan vese segmentation model to segment the image. Segmentation output is then fed into the classifier for final classification. SENET classifier is utilized in this proposed approach to final lung cancer prediction. The outcomes of the test assessment demonstrated that the proposed model could identify lung cancer with 99.2% accuracy, 99.1% precision, and 0.8% error. The proposed SENET system predicts CT scanning images of lung cancer successfully.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"339 - 354"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X24700152
Heena Kalim, Anuradha Chug, Amit Prakash Singh
The paper introduces two novel activation functions, modExp and modExpm. These activation functions possess several desirable properties: they are continuously differentiable, bounded, smooth, and non-monotonic. Our studies have shown that modExp and modExpm consistently outperform ReLU and other activation functions across a range of challenging datasets and complex models. Initially, the experiments involve training and classification with a multi-layer perceptron (MLP) on benchmark data sets such as the Diagnostic Wisconsin Breast Cancer and Iris Flower datasets. Both modExp and modExpm demonstrate impressive performance, with modExp achieving 94.15 and 95.56% and modExpm achieving 94.15 and 95.56%, respectively, compared to ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. In addition, a series of experiments was carried out on five configurations of deeper neural networks, from five to eight layers deep, using the MNIST dataset. The modExpm activation function demonstrated superior accuracy across these configurations, achieving 95.56, 95.43, 94.72, 95.14, and 95.61% on the wider 5-layer, slimmer 5-layer, 6-layer, 7-layer, and 8-layer networks, respectively. The modExp activation function also performed well, achieving the second highest accuracies of 95.42, 94.33, 94.76, 95.06, and 95.37% on the same network configurations, outperforming ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. The statistical feature measures show that both activation functions have the highest mean accuracy, the lowest standard deviation, the lowest root mean squared error, the lowest variance, and the lowest mean squared error. According to the experiments, both functions converge more quickly than ReLU, which is a significant advantage in neural network learning.
{"title":"Enhancement of Neural Network Performance with the Use of Two Novel Activation Functions: modExp and modExpm","authors":"Heena Kalim, Anuradha Chug, Amit Prakash Singh","doi":"10.3103/S1060992X24700152","DOIUrl":"10.3103/S1060992X24700152","url":null,"abstract":"<p>The paper introduces two novel activation functions known as modExp and modExp<sub>m</sub>. The activation functions possess several desirable properties, such as being continuously differentiable, bounded, smooth, and non-monotonic. Our studies have shown that modExp and modExp<sub>m</sub> consistently outperform ReLU and other activation functions across a range of challenging datasets and complex models. Initially, the experiments involve training and classifying using a multi-layer perceptron (MLP) on benchmark data sets like the Diagnostic Wisconsin Breast Cancer and Iris Flower datasets. Both modExp and modExp<sub>m</sub> demonstrate impressive performance, with modExp achieving 94.15 and 95.56% and modExp<sub>m</sub> achieving 94.15 and 95.56% respectively, when compared to ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. In addition, a series of experiments were carried out on five different depths of deeper neural networks, ranging from five to eight layers, using MNIST datasets. The modExp<sub>m</sub> activation function demonstrated superior performance accuracy on various neural network configurations, achieving 95.56, 95.43, 94.72, 95.14, and 95.61% on wider 5 layers, slimmer 5 layers, 6 layers, 7 layers, and 8 layers respectively. The modExp activation function also performed well, achieving the second highest accuracy of 95.42, 94.33, 94.76, 95.06, and 95.37% on the same network configurations, outperforming ReLU, ELU, Tanh, Mish, Softsign, Leaky ReLU, and TanhExp. The results of the statistical feature measures show that both activation functions have the highest mean accuracy, the lowest standard deviation, the lowest Root Mean squared Error, the lowest variance, and the lowest Mean squared Error. According to the experiment, both functions converge more quickly than ReLU, which is a significant advantage in Neural network learning.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"286 - 301"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26. DOI: 10.3103/S1060992X24700188
B. V. Kryzhanovsky
The paper studies the properties of a fully connected neural network built around phase neurons. The signals traveling through the interconnections of the network are unit pulses with fixed phases. The phases encoding the components of associative memory vectors are distributed at random within the interval [0, 2π]. The simplest case in which the connection matrix is defined according to Hebbian learning rule is considered. The Chernov–Chebyshev technique, which is independent of the type of distribution of encoding phases, is used to evaluate the recognition error. The associative memory of this type of network is shown to be four times as large as that of a conventional Hopfield-type network using binary patterns. Correspondingly, the radius of the domain of attraction is also four times larger.
{"title":"On Recognition Capacity of a Phase Neural Network","authors":"B. V. Kryzhanovsky","doi":"10.3103/S1060992X24700188","DOIUrl":"10.3103/S1060992X24700188","url":null,"abstract":"<p>The paper studies the properties of a fully connected neural network built around phase neurons. The signals traveling through the interconnections of the network are unit pulses with fixed phases. The phases encoding the components of associative memory vectors are distributed at random within the interval [0, 2π]. The simplest case in which the connection matrix is defined according to Hebbian learning rule is considered. The Chernov–Chebyshev technique, which is independent of the type of distribution of encoding phases, is used to evaluate the recognition error. The associative memory of this type of network is shown to be four times as large as that of a conventional Hopfield-type network using binary patterns. Correspondingly, the radius of the domain of attraction is also four times larger.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 3","pages":"259 - 263"},"PeriodicalIF":1.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}