Identifying individuals from a facial image is a computer vision technique used in various fields such as security, digital biometrics, smartphones, and banking. However, it can prove difficult due to the complexity of facial structure and the presence of variations that can affect the results. To overcome this difficulty, in this paper we propose a combined approach that aims to improve the accuracy and robustness of facial recognition in the presence of variations. To this end, two datasets (ORL and UMIST) are used to train our model. We begin with an image pre-processing phase, which consists of applying histogram equalization to adjust the gray levels over the entire image surface, improving quality and enhancing feature detection in each image. Next, the least important features are eliminated using the Principal Component Analysis (PCA) method. Finally, the pre-processed images are fed to a convolutional neural network (CNN) architecture consisting of multiple convolution layers and fully connected layers. Our simulation results show high performance, with accuracy rates of up to 99.50% for the ORL dataset and 100% for the UMIST dataset.
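A minimal sketch of the pre-processing stages described above (histogram equalization followed by PCA), assuming grayscale face images supplied as NumPy arrays; the number of retained components is illustrative and the subsequent CNN stage is omitted, so this is not the authors' exact pipeline.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA

def preprocess_faces(images, n_components=100):
    """Equalize gray levels, then reduce dimensionality with PCA.

    images: iterable of 2-D uint8 grayscale face images (e.g. ORL/UMIST crops).
    Returns the PCA-projected feature matrix and the fitted PCA model.
    """
    # Histogram equalization spreads the gray levels over the full range,
    # improving contrast before feature extraction.
    equalized = [cv2.equalizeHist(img) for img in images]

    # Flatten each image into a vector so PCA can discard the least
    # important components.
    flat = np.stack([img.reshape(-1).astype(np.float32) / 255.0 for img in equalized])

    pca = PCA(n_components=n_components)
    features = pca.fit_transform(flat)
    return features, pca
```

The resulting feature matrix would then be reshaped or concatenated as input to whatever CNN architecture is used for classification.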
{"title":"A combined method based on CNN architecture for variation-resistant facial recognition","authors":"Hicham Benradi, Ahmed Chater, Abdelali Lasfar","doi":"10.32985/ijeces.14.9.4","DOIUrl":"https://doi.org/10.32985/ijeces.14.9.4","url":null,"abstract":"Identifying individuals from a facial image is a technique that forms part of computer vision and is used in various fields such as security, digital biometrics, smartphones, and banking. However, it can prove difficult due to the complexity of facial structure and the presence of variations that can affect the results. To overcome this difficulty, in this paper, we propose a combined approach that aims to improve the accuracy and robustness of facial recognition in the presence of variations. To this end, two datasets (ORL and UMIST) are used to train our model. We then began with the image pre-processing phase, which consists in applying a histogram equalization operation to adjust the gray levels over the entire image surface to improve quality and enhance the detection of features in each image. Next, the least important features are eliminated from the images using the Principal Component Analysis (PCA) method. Finally, the pre-processed images are subjected to a neural network architecture (CNN) consisting of multiple convolution layers and fully connected layers. Our simulation results show a high performance of our approach, with accuracy rates of up to 99.50% for the ORL dataset and 100% for the UMIST dataset.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"46 16","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134902986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to the rapid growth of data from different sources in organizations, data that traditional tools and techniques cannot handle in a scalable fashion is known as big data. Similarly, many existing frequent itemset mining algorithms perform well but have scalability problems, as they cannot exploit parallel processing power available locally or in cloud infrastructure. Since the big data and cloud ecosystem overcomes these limitations in computing resources, it is a natural choice to use distributed programming paradigms such as MapReduce. In this paper, we propose a novel algorithm known as Fast and Scalable Frequent Itemset Mining (FSFIM), a Nodeset-based method to extract frequent itemsets from big data. Here, a Pre-Order Coding (POC) tree is used to represent data and improve processing speed. The Nodeset is the underlying data structure, which is efficient in discovering frequent itemsets. FSFIM is found to be faster and more scalable in mining frequent itemsets. Compared with its predecessors, such as Node-lists and N-lists, Nodesets save half the memory because they need only either pre-order or post-order coding. Cloudera's Distribution of Hadoop (CDH), a MapReduce framework, is used for the empirical study, and a prototype application is built to evaluate the performance of FSFIM. Experimental results reveal that FSFIM outperforms existing algorithms such as Mahout PFP, MLlib PFP, and BigFIM. FSFIM is more scalable and is found to be an ideal candidate for real-time applications that mine frequent itemsets from big data.
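This is not the FSFIM/POC-tree algorithm itself; the toy sketch below only illustrates how a MapReduce-style split into map and reduce phases distributes frequent-itemset counting, with a hypothetical minimum support and in-memory data in place of a Hadoop cluster.

```python
from collections import Counter
from itertools import combinations

def map_phase(transaction, k=2):
    """Map step: emit (itemset, 1) pairs for every k-itemset in one transaction."""
    for itemset in combinations(sorted(set(transaction)), k):
        yield itemset, 1

def reduce_phase(emitted, min_support=2):
    """Reduce step: sum counts per itemset and keep those meeting minimum support."""
    counts = Counter()
    for itemset, one in emitted:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= min_support}

transactions = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
emitted = (pair for t in transactions for pair in map_phase(t))
print(reduce_phase(emitted))  # {('a', 'b'): 2, ('a', 'c'): 3, ('b', 'c'): 3}
```

In a real deployment the map and reduce functions would run on distributed workers over HDFS splits rather than a Python generator.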
{"title":"A Novel Nodesets-Based Frequent Itemset Mining Algorithm for Big Data using MapReduce","authors":"Borra Sivaiah, Ramisetty Rajeswara Rao","doi":"10.32985/ijeces.14.9.9","DOIUrl":"https://doi.org/10.32985/ijeces.14.9.9","url":null,"abstract":"Due to the rapid growth of data from different sources in organizations, the traditional tools and techniques that cannot handle such huge data are known as big data which is in a scalable fashion. Similarly, many existing frequent itemset mining algorithms have good performance but scalability problems as they cannot exploit parallel processing power available locally or in cloud infrastructure. Since big data and cloud ecosystem overcomes the barriers or limitations in computing resources, it is a natural choice to use distributed programming paradigms such as Map Reduce. In this paper, we propose a novel algorithm known as A Nodesets-based Fast and Scalable Frequent Itemset Mining (FSFIM) to extract frequent itemsets from Big Data. Here, Pre-Order Coding (POC) tree is used to represent data and improve speed in processing. Nodeset is the underlying data structure that is efficient in discovering frequent itemsets. FSFIM is found to be faster and more scalable in mining frequent itemsets. When compared with its predecessors such as Node-lists and N-lists, the Nodesets save half of the memory as they need only either pre-order or post-order coding. Cloudera's Distribution of Hadoop (CDH), a MapReduce framework, is used for empirical study. A prototype application is built to evaluate the performance of the FSFIM. Experimental results revealed that FSFIM outperforms existing algorithms such as Mahout PFP, Mlib PFP, and Big FIM. FSFIM is more scalable and found to be an ideal candidate for real-time applications that mine frequent itemsets from Big Data.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"20 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134991840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The laboratory setup and corresponding experimental procedure for determining the remanent magnetic flux in the magnetic core of a single-phase transformer are presented in this paper. Using the proposed method, the remanent flux can be determined without prior knowledge of any parameter or past state of the transformer, which is a significant advantage compared with previously known methods. Furthermore, reliable information about the remanent flux can be obtained using less equipment than other methods require. Only electrical measurements are needed, without any physical intervention in the core or other parts of the transformer. However, the major drawback is that a new, unknown value of the remanent flux is set after the measuring procedure. Various initial conditions of the remanent flux and the closing voltage angle are set before each energization of the transformer to prove the validity of the proposed method, which can be used to obtain characteristics of the remanent flux such as its stability over time or its dependence on external factors.
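One relation such a measurement-only procedure can rest on is the flux-linkage integral of the winding voltage during energization; this is a standard identity stated here as context, and its role in the paper's method is an assumption, not a formula quoted from the abstract:

$$\lambda(t) = \lambda(0) + \int_{0}^{t} \big( v(\tau) - R\, i(\tau) \big)\, d\tau$$

where $\lambda(0)$ reflects the remanent flux linkage before switching, $v$ is the measured terminal voltage, $i$ is the inrush current, and $R$ is the winding resistance.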
{"title":"Experimental Procedure for Determining the Remanent Magnetic Flux Value Using the Nominal AC Energization","authors":"Dragan Vulin, Denis Pelin, Mario Franjković","doi":"10.32985/ijeces.14.9.12","DOIUrl":"https://doi.org/10.32985/ijeces.14.9.12","url":null,"abstract":"The laboratory setup and corresponding experimental procedure for determining the remanent magnetic flux in the magnetic core of a single-phase transformer are presented in this paper. Using the proposed method, the remanent flux can be determined without prior knowledge of any parameter or past states of the transformer which is a significant advantage compared to previously known methods. Furthermore, reliable information about the remanent flux could be obtained using less equipment than other methods. Only electrical measurements are needed, without any physical intervention in the core or some other parts of the transformer. However, the major drawback is that some new unknown value of the remanent flux is set after the measuring procedure. Various initial conditions of the remanent flux and the closing voltage angle are set before each energization of the transformer to prove the validity of the proposed method, which can be used to obtain some characteristics of the remanent flux, such as stability over time or its dependence on some external factors.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"5 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134991984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Muneeswari G., Ahilan A., Rajeshwari R., Kannan K., John Clement Singh C.
A Wireless Sensor Network (WSN) is a network comprising a large number of nodes with wireless transmission capability. WSNs are frequently employed for vital applications in which security and dependability are of utmost concern. The main objective of the proposed method is to design a WSN that maximizes network longevity while minimizing power usage. In a WSN, trust management is employed to encourage node collaboration, which is crucial for achieving dependable transmission. In this research, a novel Trust and Energy Aware Routing Protocol (TEARP) for wireless sensor networks is proposed, which uses blockchain technology to maintain the identities of the Sensor Nodes (SNs) and Aggregator Nodes (ANs). The proposed TEARP technique provides a thorough trust value for nodes based on their direct trust values, while filtering mechanisms generate the indirect trust values. Further, an enhanced threshold technique is employed to identify the most appropriate cluster heads based on dynamic changes in the extensive trust values and residual energy of the network. Lastly, cluster heads are routed in a secure manner using the Sand Cat Swarm Optimization Algorithm (SCSOA). The proposed method has been evaluated using parameters such as Network Lifetime, Residual Energy, Throughput, Packet Delivery Ratio, and Detection Accuracy. The proposed TEARP method improves the network lifetime by 39.64%, 33.05%, and 27.16% compared with Energy-efficient and Secure Routing (ESR), the Multi-Objective nature-inspired algorithm based on the Shuffled Frog-Leaping Algorithm and Firefly Algorithm (MOSFA), and the Optimal Support Vector Machine (OSVM), respectively.
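A minimal sketch of how direct and indirect trust can be blended and combined with residual energy for cluster-head selection; the weights, thresholds, and node fields are hypothetical, and the blockchain identity layer and SCSOA secure routing are not modeled here.

```python
def composite_trust(direct, indirect, w_direct=0.6):
    """Blend direct and filtered indirect trust into one score in [0, 1]."""
    return w_direct * direct + (1.0 - w_direct) * indirect

def select_cluster_heads(nodes, trust_threshold=0.7, energy_threshold=0.5):
    """Pick nodes whose composite trust and residual energy both clear the
    thresholds; every other node stays a regular cluster member."""
    return [n["id"] for n in nodes
            if composite_trust(n["direct"], n["indirect"]) >= trust_threshold
            and n["residual_energy"] >= energy_threshold]

nodes = [
    {"id": "SN1", "direct": 0.9, "indirect": 0.8, "residual_energy": 0.7},
    {"id": "SN2", "direct": 0.4, "indirect": 0.6, "residual_energy": 0.9},
]
print(select_cluster_heads(nodes))  # ['SN1']
```

In TEARP the thresholds would adapt to dynamic changes in trust and energy rather than being fixed constants as above.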
{"title":"Trust And Energy-Aware Routing Protocol for Wireless Sensor Networks Based on Secure Routing","authors":"Muneeswari G., Ahilan A., Rajeshwari R, Kannan K., John Clement Singh C.","doi":"10.32985/ijeces.14.9.6","DOIUrl":"https://doi.org/10.32985/ijeces.14.9.6","url":null,"abstract":"Wireless Sensor Network (WSN) is a network area that includes a large number of nodes and the ability of wireless transmission. WSNs are frequently employed for vital applications in which security and dependability are of utmost concern. The main objective of the proposed method is to design a WSN to maximize network longevity while minimizing power usage. In a WSN, trust management is employed to encourage node collaboration, which is crucial for achieving dependable transmission. In this research, a novel Trust and Energy Aware Routing Protocol (TEARP) in wireless sensors networks is proposed, which use blockchain technology to maintain the identity of the Sensor Nodes (SNs) and Aggregator Nodes (ANs). The proposed TEARP technique provides a thorough trust value for nodes based on their direct trust values and the filtering mechanisms generate the indirect trust values. Further, an enhanced threshold technique is employed to identify the most appropriate clustering heads based on dynamic changes in the extensive trust values and residual energy of the networks. Lastly, cluster heads should be routed in a secure manner using a Sand Cat Swarm Optimization Algorithm (SCSOA). The proposed method has been evaluated using specific parameters such as Network Lifetime, Residual Energy, Throughpu,t Packet Delivery Ratio, and Detection Accuracy respectively. The proposed TEARP method improves the network lifetime by 39.64%, 33.05%, and 27.16%, compared with Energy-efficient and Secure Routing (ESR), Multi-Objective nature-inspired algorithm based on Shuffled frog-leaping algorithm and Firefly Algorithm (MOSFA) , and Optimal Support Vector Machine (OSVM).","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"46 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134902989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kamal S. Chandwani, Varsha Namdeo, Poonam T. Agarkar, Sanjay M. Malode, Prashant R. Patil, Narendra P. Giradkar, Pratik R. Hajare
Rapid, ongoing enhancements in multimedia tools and features have made entertainment more engaging, and high-quality visual effects attract viewers to today's videos. Fast-changing scenes, lighting effects, and indistinguishable blending of diverse frames have created challenges for researchers in detecting gradual transitions. The proposed work concentrates on detecting gradual transitions in videos using correlation coefficients obtained from color histograms and an adaptive thresholding mechanism. Other transitions, including fade-outs, fade-ins, and cuts, are eliminated successfully, and dissolves are then detected from the acquired video frames. The characteristics of the normalized correlation coefficient are studied carefully, and dissolves are extracted with low computational and time complexity. Confusion between fades (in/out) and dissolves is resolved using the adaptive threshold and the presence or absence of spikes in the correlation signal. Experiments over 14 videos involving lightning effects and rapid object motion from Indian film songs accurately detected 22 out of 25 gradual transitions while falsely detecting one transition. On four benchmark videos of the TRECVID 2001 dataset, the proposed scheme obtained values of 91.6, 94.33, and 92.03 for precision, recall, and F-measure, respectively.
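A minimal sketch of the correlation signal and adaptive threshold described above, assuming OpenCV for histogram computation; the histogram bin counts and the k factor in the threshold are illustrative choices, not the paper's exact parameters.

```python
import statistics
import cv2

def frame_correlations(video_path):
    """Correlation coefficient between color histograms of consecutive frames.
    Gradual dips in this signal are candidate dissolve regions, while abrupt
    changes suggest cuts or fades."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, scores = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, None).flatten()
        if prev_hist is not None:
            scores.append(cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL))
        prev_hist = hist
    cap.release()
    return scores

def adaptive_threshold(scores, k=1.5):
    """Mean minus k standard deviations of the correlation signal; frames whose
    correlation falls below it are flagged as candidate transitions."""
    mu, sigma = statistics.mean(scores), statistics.pstdev(scores)
    return mu - k * sigma
```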
{"title":"Correlation Coefficients and Adaptive Threshold-Based Dissolve Detection in High-Quality Videos","authors":"Kamal S. Chandwani, Varsha Namdeo, Poonam T. Agarkar, Sanjay M. Malode, Prashant R. Patil, Narendra P. Giradkar, Pratik R. Hajare","doi":"10.32985/ijeces.14.8.8","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.8","url":null,"abstract":"Rapid enhancements in Multimedia tools and features day per day have made entertainment amazing and the quality visual effects have attracted every individual to watch these days' videos. The fast-changing scenes, light effects, and undistinguishable blending of diverse frames have created challenges for researchers in detecting gradual transitions. The proposed work concentrates to detect gradual transitions in videos using correlation coefficients obtained using color histograms and an adaptive thresholding mechanism. Other gradual transitions including fade out, fade in, and cuts are eliminated successfully, and dissolves are then detected from the acquired video frames. The characteristics of the normalized correlation coefficient are studied carefully and dissolve are extracted simply with low computational and time complexity. The confusion between fade in/out and dissolves is discriminated against using the adaptive threshold and the absence of spikes is not part of the case of dissolves. The experimental results obtained over 14 videos involving lightning effects and rapid object motions from Indian film songs accurately detected 22 out of 25 gradual transitions while falsely detecting one transition. The performance of the proposed scheme over four benchmark videos of the TRECVID 2001 dataset obtained 91.6, 94.33, and 92.03 values for precision, recall, and F-measure respectively.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"25 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135317027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing accurate classification models for radar-based Human Activity Recognition (HAR), capable of solving real-world problems, depends heavily on the amount of available data. In this paper, we propose a simple, effective, and generalizable data augmentation strategy, along with preprocessing for micro-Doppler signatures, to enhance recognition performance. By leveraging the decomposition properties of the Discrete Wavelet Transform (DWT), new samples are generated with distinct characteristics that do not overlap with those of the original samples. The micro-Doppler signatures are projected onto the DWT space for the decomposition process using the Haar wavelet. The returned decomposition components are used in different configurations to generate new data. Three new samples are obtained from a single spectrogram, which increases the amount of training data without creating duplicates. Next, the augmented samples are processed using the Sobel filter. This step allows each sample to be expanded into three representations: the gradient in the x-direction (Dx), the y-direction (Dy), and both the x- and y-directions (Dxy). These representations are used as input for training a three-input convolutional neural network-long short-term memory support vector machine (CNN-LSTM-SVM) model. We have assessed the feasibility of our solution by evaluating it on three datasets containing micro-Doppler signatures of human activities: the Frequency Modulated Continuous Wave (FMCW) 77 GHz, FMCW 24 GHz, and Impulse Radio Ultra-Wide Band (IR-UWB) 10 GHz datasets. Several experiments have been carried out to evaluate the model's performance with the inclusion of additional samples. The model was trained from scratch only on the augmented samples and tested on the original samples. Our augmentation approach has been thoroughly evaluated using various metrics, including accuracy, precision, recall, and F1-score. The results demonstrate a substantial improvement in the recognition rate and effectively alleviate the overfitting effect. Accuracies of 96.47%, 94.27%, and 98.18% are obtained for the FMCW 77 GHz, FMCW 24 GHz, and IR-UWB 10 GHz datasets, respectively. The findings of the study demonstrate the utility of DWT in enriching micro-Doppler training samples to improve HAR performance. Furthermore, the processing step was found to be effective in enhancing the classification accuracy, achieving 96.78%, 96.32%, and 100% for the FMCW 77 GHz, FMCW 24 GHz, and IR-UWB 10 GHz datasets, respectively.
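A minimal sketch of the augmentation idea, assuming spectrograms as 2-D NumPy arrays and the PyWavelets and OpenCV packages; the specific sub-band configurations recombined here are illustrative guesses, not necessarily the authors' exact three configurations.

```python
import numpy as np
import pywt
import cv2

def augment_spectrogram(spec):
    """One level of 2-D Haar DWT yields an approximation and three detail
    sub-bands; recombining them in different configurations produces three
    new samples from a single micro-Doppler spectrogram."""
    cA, (cH, cV, cD) = pywt.dwt2(spec, 'haar')
    return [
        pywt.idwt2((cA, (cH, None, None)), 'haar'),  # approximation + horizontal detail
        pywt.idwt2((cA, (None, cV, None)), 'haar'),  # approximation + vertical detail
        pywt.idwt2((cA, (None, None, cD)), 'haar'),  # approximation + diagonal detail
    ]

def sobel_views(sample):
    """Expand one sample into Dx, Dy, and a combined x-y gradient representation
    (one interpretation of the Dxy view described above)."""
    s = sample.astype(np.float32)
    dx = cv2.Sobel(s, cv2.CV_32F, 1, 0)
    dy = cv2.Sobel(s, cv2.CV_32F, 0, 1)
    dxy = cv2.Sobel(s, cv2.CV_32F, 1, 1)
    return dx, dy, dxy
```

The three gradient views would then feed the three input branches of the CNN-LSTM-SVM model.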
{"title":"Advanced Human Activity Recognition through Data Augmentation and Feature Concatenation of Micro-Doppler Signatures","authors":"Djazila Souhila Korti, Zohra Slimane","doi":"10.32985/ijeces.14.8.7","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.7","url":null,"abstract":"Developing accurate classification models for radar-based Human Activity Recognition (HAR), capable of solving real-world problems, depends heavily on the amount of available data. In this paper, we propose a simple, effective, and generalizable data augmentation strategy along with preprocessing for micro-Doppler signatures to enhance recognition performance. By leveraging the decomposition properties of the Discrete Wavelet Transform (DWT), new samples are generated with distinct characteristics that do not overlap with those of the original samples. The micro-Doppler signatures are projected onto the DWT space for the decomposition process using the Haar wavelet. The returned decomposition components are used in different configurations to generate new data. Three new samples are obtained from a single spectrogram, which increases the amount of training data without creating duplicates. Next, the augmented samples are processed using the Sobel filter. This step allows each sample to be expanded into three representations, including the gradient in the x-direction (Dx), y-direction (Dy), and both x- and y-directions (Dxy). These representations are used as input for training a three-input convolutional neural network-long short-term memory support vector machine (CNN-LSTM-SVM) model. We have assessed the feasibility of our solution by evaluating it on three datasets containing micro-Doppler signatures of human activities, including Frequency Modulated Continuous Wave (FMCW) 77 GHz, FMCW 24 GHz, and Impulse Radio Ultra-Wide Band (IR-UWB) 10 GHz datasets. Several experiments have been carried out to evaluate the model's performance with the inclusion of additional samples. The model was trained from scratch only on the augmented samples and tested on the original samples. Our augmentation approach has been thoroughly evaluated using various metrics, including accuracy, precision, recall, and F1-score. The results demonstrate a substantial improvement in the recognition rate and effectively alleviate the overfitting effect. Accuracies of 96.47%, 94.27%, and 98.18% are obtained for the FMCW 77 GHz, FMCW 24 GHz, and IR- UWB 10 GHz datasets, respectively. The findings of the study demonstrate the utility of DWT to enrich micro-Doppler training samples to improve HAR performance. Furthermore, the processing step was found to be efficient in enhancing the classification accuracy, achieving 96.78%, 96.32%, and 100% for the FMCW 77 GHz, FMCW 24 GHz, and IR-UWB 10 GHz datasets, respectively.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135322503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Activity Recognition (HAR) is an important field with diverse applications. However, video-based HAR is challenging because of various factors, such as noise, multiple people, and obscured body parts. Moreover, it is difficult to identify similar activities within and across classes. This study presents a novel approach that utilizes body region relationships as features and a two-level hierarchical model for classification to address these challenges. The proposed system uses a Hidden Markov Model (HMM) at the first level to model human activity, and similar activities are then grouped and classified using a Support Vector Machine (SVM) at the second level. The performance of the proposed system was evaluated on four datasets, with superior results observed for the KTH and Basic Kitchen Activity (BKA) datasets. Promising results were obtained for the HMDB-51 and UCF101 datasets. Improvements of 25%, 25%, 4%, 22%, 24%, and 30% in accuracy, recall, specificity, precision, F1-score, and MCC, respectively, are achieved for the KTH dataset. On the BKA dataset, the second level of the system shows improvements of 8.6%, 8.6%, 0.85%, 8.2%, 8.4%, and 9.5% for the same metrics compared to the first level. These findings demonstrate the potential of the proposed two-level hierarchical system for human activity recognition applications.
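A minimal two-level sketch using hmmlearn and scikit-learn, assuming each activity sample is a sequence of per-frame body-region feature vectors; the feature extraction, number of hidden states, and grouping of confusable activities are assumptions, not the authors' exact configuration.

```python
import numpy as np
from hmmlearn import hmm
from sklearn.svm import SVC

def train_level1(sequences_by_class, n_states=4):
    """Level 1: fit one Gaussian HMM per activity class on its training sequences."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                 # stack all frames of this class
        lengths = [len(s) for s in seqs]    # sequence boundaries for hmmlearn
        m = hmm.GaussianHMM(n_components=n_states, n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(seq, models, level2_svm=None, confusable=frozenset()):
    """Pick the HMM with the highest log-likelihood; if the winner belongs to a
    group of similar activities, defer to the level-2 SVM (here fed a simple
    mean feature vector as a stand-in for the paper's level-2 features)."""
    label = max(models, key=lambda lbl: models[lbl].score(seq))
    if level2_svm is not None and label in confusable:
        label = level2_svm.predict(seq.mean(axis=0, keepdims=True))[0]
    return label
```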
{"title":"A Hierarchical Framework for Video-Based Human Activity Recognition Using Body Part Interactions","authors":"Milind Kamble, Rajankumar S. Bichkar","doi":"10.32985/ijeces.14.8.6","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.6","url":null,"abstract":"Human Activity Recognition (HAR) is an important field with diverse applications. However, video-based HAR is challenging because of various factors, such as noise, multiple people, and obscured body parts. Moreover, it is difficult to identify similar activities within and across classes. This study presents a novel approach that utilizes body region relationships as features and a two-level hierarchical model for classification to address these challenges. The proposed system uses a Hidden Markov Model (HMM) at the first level to model human activity, and similar activities are then grouped and classified using a Support Vector Machine (SVM) at the second level. The performance of the proposed system was evaluated on four datasets, with superior results observed for the KTH and Basic Kitchen Activity (BKA) datasets. Promising results were obtained for the HMDB-51 and UCF101 datasets. Improvements of 25%, 25%, 4%, 22%, 24%, and 30% in accuracy, recall, specificity, Precision, F1-score, and MCC, respectively, are achieved for the KTH dataset. On the BKA dataset, the second level of the system shows improvements of 8.6%, 8.6%, 0.85%, 8.2%, 8.4%, and 9.5% for the same metrics compared to the first level. These findings demonstrate the potential of the proposed two-level hierarchical system for human activity recognition applications.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135321838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reliability frameworks are highly effective for estimating the probability of software failure over time. Numerous approaches for predicting software dependability have been presented, but none of them has proven effective. Predicting the number of software faults throughout the research and testing phases is a serious problem. There are several kinds of software metrics, such as object-oriented design metrics, public and private attributes, methods, previous bug metrics, and software change metrics. Many researchers have identified these metrics and used them to predict software reliability, but none have contributed to identifying relations among the metrics and exploring the optimal subset. Therefore, this paper proposes a correlation-constrained multi-objective evolutionary optimization algorithm (CCMOEO) for software reliability prediction. CCMOEO is an effective optimization approach for estimating the variables of popular reliability growth models. To obtain the highest classification effectiveness, the suggested CCMOEO approach overcomes modeling uncertainties by integrating various metrics with multiple objective functions. The hypothesized models were formulated using evaluation results on five distinct datasets. The prediction was evaluated with seven different machine learning algorithms, i.e., linear support vector machine (LSVM), radial support vector machine (RSVM), decision tree, random forest, gradient boosting, k-nearest neighbor, and linear regression. The result analysis shows that random forest achieved better performance than the other algorithms.
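This is not the CCMOEO algorithm itself; the sketch below only illustrates, under assumed inputs (a pandas DataFrame of software metrics and fault labels), a simple correlation-constrained filtering of redundant metrics followed by the random forest evaluation the abstract reports as the best performer.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def drop_correlated_metrics(df, threshold=0.9):
    """Drop one metric from every pair whose absolute Pearson correlation
    exceeds the threshold, keeping a less redundant metric subset."""
    corr = df.corr().abs()
    cols = corr.columns
    to_drop = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold:
                to_drop.add(cols[j])
    return df.drop(columns=sorted(to_drop))

def evaluate(features_df, labels, folds=5):
    """Cross-validated accuracy of a random forest on the filtered metrics."""
    X = drop_correlated_metrics(features_df)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, labels, cv=folds).mean()
```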
{"title":"Software Reliability Prediction using Correlation Constrained Multi-Objective Evolutionary Optimization Algorithm","authors":"Neha Yadav, Vibhash Yadav","doi":"10.32985/ijeces.14.8.11","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.11","url":null,"abstract":"Software reliability frameworks are extremely effective for estimating the probability of software failure over time. Numerous approaches for predicting software dependability were presented, but neither of those has shown to be effective. Predicting the number of software faults throughout the research and testing phases is a serious problem. As there are several software metrics such as object-oriented design metrics, public and private attributes, methods, previous bug metrics, and software change metrics. Many researchers have identified and performed predictions of software reliability on these metrics. But none of them contributed to identifying relations among these metrics and exploring the most optimal metrics. Therefore, this paper proposed a correlation- constrained multi-objective evolutionary optimization algorithm (CCMOEO) for software reliability prediction. CCMOEO is an effective optimization approach for estimating the variables of popular growth models which consists of reliability. To obtain the highest classification effectiveness, the suggested CCMOEO approach overcomes modeling uncertainties by integrating various metrics with multiple objective functions. The hypothesized models were formulated using evaluation results on five distinct datasets in this research. The prediction was evaluated on seven different machine learning algorithms i.e., linear support vector machine (LSVM), radial support vector machine (RSVM), decision tree, random forest, gradient boosting, k-nearest neighbor, and linear regression. The result analysis shows that random forest achieved better performance.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"33 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135322513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filter Bank Multi-Carrier Offset-QAM (FBMC-OQAM) is one of the hottest research topics among 5G multi-carrier methods because of its high spectral efficiency, minimal side-lobe leakage, zero cyclic prefix (CP), and polyphase filter design. Large-scale subcarrier configurations in optical fiber networks require the use of FBMC-OQAM. Chromatic dispersion is critical in optical fiber transmission because it causes different spectral components (colors) to travel at different speeds. Laser phase noise, which arises when the phase of the laser output drifts over time, is a major impairment that lowers throughput in fiber-optic communication systems. This degradation may be strongly correlated among channels that share lasers in multichannel fiber-optic systems using methods such as wavelength-division multiplexing with frequency combs or space-division multiplexing. In this research, we use parallel Analysis Filter Bank (AFB) equalizers in the receiver of the FBMC-OQAM optical communication system to compensate for chromatic dispersion (CD) and phase noise (PN). After CD equalization, the carrier phase of the received signal is tracked and compensated using Modified Blind Phase Search (MBPS). The CD and PN compensation techniques are simulated and analyzed numerically and graphically to determine their efficacy. To evaluate the FBMC's efficiency across various equalizers, 16-OQAM is considered. Bit Error Rate (BER), Optical Signal-to-Noise Ratio (OSNR), Q-factor, and Mean Square Error (MSE) are the primary metrics used to evaluate performance. A single-tap equalizer, a multi-tap equalizer (N=3), an ISDF equalizer with the suggested parallel Analysis Filter Banks (AFBs) (K=3), and MBPS are all included in the comparison. Compared with other forms of nonlinear compensation (NLC), the CD and PN tolerance attained by parallel AFB equalization with MBPS is the greatest.
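The abstract does not describe the modified blind phase search in detail, so the sketch below shows a conventional, textbook blind phase search over blocks of received QAM symbols (NumPy arrays assumed); the block size, number of test phases, and phase range are illustrative, not the paper's MBPS.

```python
import numpy as np

def blind_phase_search(rx, constellation, n_test=32, block=64):
    """For each block of received symbols, try n_test candidate phase rotations
    and keep the one whose rotated symbols land closest (in squared distance)
    to the reference constellation points."""
    phases = np.linspace(-np.pi / 4, np.pi / 4, n_test, endpoint=False)
    out = []
    for start in range(0, len(rx), block):
        blk = rx[start:start + block]
        costs = []
        for ph in phases:
            rotated = blk * np.exp(-1j * ph)
            # Distance of each rotated symbol to its nearest constellation point.
            d = np.min(np.abs(rotated[:, None] - constellation[None, :]), axis=1)
            costs.append(np.sum(d ** 2))
        best = phases[int(np.argmin(costs))]
        out.append(blk * np.exp(-1j * best))
    return np.concatenate(out)
```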
{"title":"Compensating Chromatic Dispersion and Phase Noise using Parallel AFB-MBPS For FBMC-OQAM Optical Communication System","authors":"Ahmed H. Abbas, Thamer M. Jamel","doi":"10.32985/ijeces.14.8.4","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.4","url":null,"abstract":"Filter Bank Multi-Carrier Offset-QAM (FBMC-OQAM) is one of the hottest topics in research for 5G multi-carrier methods because of its high efficiency in the spectrum, minimal leakage in the side lobes, zero cyclic prefix (CP), and multiphase filter design. Large-scale subcarrier configurations in optical fiber networks need the use of FBMC-OQAM. Chromatic dispersion is critical in optical fiber transmission because it causes different spectral waves (color beams) to travel at different rates. Laser phase noise, which arises when the phase of the laser output drifts with time, is a major barrier that lowers throughput in fiber-optic communication systems. This deterioration may be closely related among channels that share lasers in multichannel fiber-optic systems using methods like wavelength-division multiplexing with frequency combs or space-division multiplexing. In this research, we use parallel Analysis Filter Bank (AFB) equalizers in the receiver part of the FBMC OQAM Optical Communication system to compensate for chromatic dispersion (CD) and phase noise (PN). Following the equalization of CD compensation, the phase of the carriers in the received signal is tracked and compensated using Modified Blind Phase Search (MBPS). The CD and PN compensation techniques are simulated and analyzed numerically and graphically to determine their efficacy. To evaluate the FBMC's efficiency across various equalizers, 16-OQAM is taken into account. Bit Error Rate (BER), Optical Signal-to-Noise Ratio (OSNR), Q-Factor, and Mean Square Error (MSE) were the primary metrics we utilized to evaluate performance. Single-tap equalizer, multi-tap equalizer (N=3), ISDF equalizer with suggested Parallel Analysis Filter Banks (AFBs) (K=3), and MBPS were all set aside for comparison. When compared to other forms of Nonlinear compensation (NLC), the CD and PN tolerance attained by Parallel AFB equalization with MBPS is the greatest.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"2013 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135316791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For at least a decade, machine learning has attracted the interest of researchers. Among the topics of discussion is the application of Machine Learning (ML) and Deep Learning (DL) to the healthcare industry. Several implementations have been performed on medical datasets to verify their precision. The four main quantities, True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN), play a crucial role in determining a classifier's performance, and various metrics are defined in terms of them. Selecting the appropriate performance metric is a crucial step. In addition to TP and TN, FN should be given greater weight when a healthcare dataset is evaluated for disease diagnosis or detection, so a suitable performance metric must be chosen. In this paper, a novel machine learning metric referred to as Healthcare-Critical-Diagnostic-Accuracy (HCDA) is proposed and compared to the well-known accuracy and ROC_AUC scores. The machine learning classifiers Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), and Naive Bayes (NB) are implemented on four distinct datasets. The obtained results indicate that the proposed HCDA metric is more sensitive to FN counts. The results show that even when %FN rises to 10.31% for dataset 1, accuracy is still 83%, while HCDA shows a correlated drop to 72.70%. Similarly, in dataset 2, if %FN rises to 14.80% for the LR classifier, accuracy is 78.2% while HCDA is 63.45%. Similar results are obtained for datasets 3 and 4. More FN counts result in a lower HCDA score, and vice versa. With common existing metrics such as accuracy and ROC_AUC score, the score increases even as the FN count increases, which is misleading. As a result, it can be concluded that the proposed HCDA is a more robust and accurate metric for critical healthcare analysis, as FN conditions for disease diagnosis and detection are taken into account more than TP and TN.
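A small numeric illustration of the FN-sensitivity argument. The fn_penalized score below is a placeholder for exposition only (the abstract does not give the HCDA formula); it simply shows how a metric that weights FN more heavily drops faster than plain accuracy.

```python
def confusion_metrics(tp, tn, fp, fn):
    """Standard accuracy alongside a placeholder FN-penalizing score.
    The penalized score is NOT the paper's HCDA definition; it only
    illustrates why an FN-sensitive metric falls while accuracy stays high."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    fn_penalized = (tp + tn - 2 * fn) / total  # hypothetical illustration only
    return accuracy, fn_penalized

# As FN grows (at the expense of TP), accuracy falls only slightly,
# while the FN-penalized score falls three times as fast.
print(confusion_metrics(tp=20, tn=60, fp=10, fn=10))  # (0.8, 0.6)
print(confusion_metrics(tp=10, tn=60, fp=10, fn=20))  # (0.7, 0.3)
```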
{"title":"Healthcare Critical Diagnosis Accuracy","authors":"Deepali Pankaj Javale, Sharmishta Desai","doi":"10.32985/ijeces.14.8.10","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.10","url":null,"abstract":"Since at least a decade, Machine Learning has attracted the interest of researchers. Among the topics of discussion is the application of Machine Learning (ML) and Deep Learning (DL) to the healthcare industry. Several implementations are performed on the medical dataset to verify its precision. The four main players, True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN), play a crucial role in determining the classifier's performance. Various metrics are provided based on the main players. Selecting the appropriate performance metric is a crucial step. In addition to TP and TN, FN should be given greater weight when a healthcare dataset is evaluated for disease diagnosis or detection. Thus, a suitable performance metric must be considered. In this paper, a novel machine learning metric referred to as Healthcare-Critical-Diagnostic-Accuracy (HCDA) is proposed and compared to the well-known metrics accuracy and ROC_AUC score. The machine learning classifiers Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), and Naive Bayes (NB) are implemented on four distinct datasets. The obtained results indicate that the proposed HCDA metric is more sensitive to FN counts. The results show, that even if there is rise in %FN for dataset 1 to 10.31 % then too accuracy is 83% ad HCDA shows correlated drop to 72.70 %. Similarly, in dataset 2 if %FN rises to 14.80 for LR classifier, accuracy is 78.2 % and HCDA is 63.45 %. Similar kind of results are obtained for dataset 3 and 4 too. More FN counts result in a lower HCDA score, and vice versa. In common exiting metrics such as Accuracy and ROC_AUC score, even as the FN count increases, the score increases, which is misleading. As a result, it can be concluded that the proposed HCDA is a more robust and accurate metric for Critical Healthcare Analysis, as FN conditions for disease diagnosis and detection are taken into account more than TP and TN.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"BME-12 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135321814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}