Pratvina Talele, Siddharth Apte, R. Phalnikar, Harsha V. Talele
Requirement engineering is a mandatory phase of the software development life cycle (SDLC) that includes defining and documenting system requirements in the Software Requirements Specification (SRS). As system complexity increases, it becomes difficult to categorise requirements into functional and non-functional requirements. Presently, the dearth of automated techniques necessitates reliance on labour-intensive and time-consuming manual methods. This research addresses that gap by investigating and contrasting two prominent feature extraction techniques and their efficacy in automating the classification of requirements. Natural language processing methods are used in the text pre-processing phase, followed by Term Frequency-Inverse Document Frequency (TF-IDF) and Word2Vec for feature extraction. These features are used as input to the machine learning algorithms. This study compares existing machine learning algorithms and discusses their correctness in categorising software requirements. We assess Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), Neural Network (NN), K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) on precision and accuracy. The results show that TF-IDF feature extraction performed better at categorising requirements than Word2Vec, with an accuracy of 91.20% for the SVM and Random Forest classifiers compared to 87.36% for SVM with Word2Vec, a 3.84 percentage-point difference on the publicly available PURE dataset. We believe these results will aid developers in building products that support requirement engineering.
"Semi-automated Software Requirements Categorisation using Machine Learning Algorithms." International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.10.3, published 2023-12-12.
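The pipeline this abstract describes, TF-IDF features feeding an SVM classifier, can be sketched with scikit-learn. The toy requirement sentences and labels below are illustrative stand-ins, not drawn from the PURE dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy examples: "F" = functional, "NF" = non-functional requirement.
requirements = [
    "The system shall allow the user to create a new account",
    "The system shall generate a monthly sales report",
    "The application shall let administrators delete records",
    "The search feature shall return matching products",
    "The system shall respond to any query within 2 seconds",
    "The application shall be available 99.9 percent of the time",
    "All stored passwords shall be encrypted with AES-256",
    "The user interface shall comply with WCAG accessibility rules",
]
labels = ["F", "F", "F", "F", "NF", "NF", "NF", "NF"]

# TF-IDF feature extraction followed by a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
model.fit(requirements, labels)

print(model.predict(["The system shall respond within 2 seconds"]))
```

Swapping `TfidfVectorizer` for an averaged-Word2Vec embedding step would reproduce the paper's second feature-extraction variant.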
Sensors incorporated in devices are a source of temporal data that can be interpreted to learn the context of a user. The smartphone accelerometer sensor generates data streams that form distinct patterns in response to user activities. The human context can be predicted using deep learning models built from raw sensor data or from features retrieved from raw data. This study analyzes data streams from the UCI-HAR public dataset for activity recognition to derive 31 handcrafted features in the temporal and frequency domains. Various stacked and combination RNN models, trained with attention mechanisms, are designed to work with the computed features. The attention mechanism improved model fit. When trained with all features, the two-stacked GRU model performed best with 99% accuracy. Selecting the most promising features helps reduce training time without compromising accuracy. The rankings supplied by the permutation feature importance measure and Shapley values are utilized to identify the best features from the highly correlated feature set. Models trained using the optimal features, as determined by the importance measures, had a 96% accuracy rate. Misclassification in attention-based classifiers occurs in the prediction of dynamic activities, such as walking upstairs and walking downstairs, and in sedentary activities, such as sitting and standing, due to the similar range of each activity’s axis values. Our research emphasizes the design of streamlined neural network architectures, with fewer layers and fewer neurons than existing models in the field, to produce lightweight models that can be deployed on resource-constrained devices.
"Significance of handcrafted features in human activity recognition with attention-based RNN models." S. Abraham, Rekha K. James. International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.10.8, published 2023-12-12.
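Of the two ranking measures the study uses, permutation feature importance is straightforward to implement directly: shuffle one feature column at a time and record the resulting drop in accuracy. A minimal NumPy sketch with synthetic data and a stand-in threshold model (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 determines the label, feature 1 is noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in for a trained classifier: thresholds feature 0.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(X, y, predict, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base = (predict(X) == y).mean()          # accuracy on intact data
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # break the column/label link
            drops.append(base - (predict(Xp) == y).mean())
        importances.append(np.mean(drops))
    return np.array(importances)

imp = permutation_importance(X, y, model_predict)
print(imp)  # feature 0 shows a large drop, feature 1 essentially none
```

The informative feature produces a large accuracy drop when shuffled; the noise feature produces none, which is exactly the signal used to prune highly correlated features.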
The development of the Internet and smart end systems, such as smartphones and portable laptops, along with the emergence of cloud computing, social networks, and the Internet of Things, has brought about new network requirements. To meet these requirements, a new architecture called software-defined network (SDN) has been introduced. However, traffic distribution in SDN has raised challenges, especially in terms of uneven load distribution impacting network performance. To address this issue, several SDN load balancing (LB) techniques have been developed to improve efficiency. This article provides an overview of SDN and its effect on load balancing, highlighting key elements and discussing various load-balancing schemes based on existing solutions and research challenges. Additionally, the article outlines performance metrics used to evaluate these algorithms and suggests possible future research directions.
"Review of SDN-based load-balancing methods, issues, challenges, and roadmap." Mohit Chandra Saxena, Munish Sabharwal, Preeti Bajaj. International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.9.8, published 2023-11-14.
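As a concrete instance of the dynamic load-balancing schemes such a review covers, least-connections steers each new flow to the server currently holding the fewest active connections. A minimal illustrative sketch, not tied to any particular SDN controller API:

```python
def least_connections(servers):
    """Pick the server with the fewest active connections."""
    return min(servers, key=servers.get)

# Active-connection counts as seen by the controller (illustrative).
servers = {"s1": 12, "s2": 4, "s3": 9}

target = least_connections(servers)
servers[target] += 1  # the new flow is assigned to the chosen server
print(target)  # → s2
```

Static schemes such as round-robin ignore these counts entirely, which is why dynamic policies generally handle the uneven load distribution discussed above better.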
Raid Rafi Omar Al-Nima, Hasan Maher Ahmed, Nagham Tharwat Saeed
Finger Outer Knuckle (FOK) is a distinctive biometric that has grown in popularity recently, owing to its inborn qualities such as stability, protection, and specific anatomical patterns. Applications of FOK pattern identification include forensic investigations, access control systems, and personal identification. In this study, we suggest a method for identifying FOK patterns using Local Directional Number (LDN) codes produced from gradient-based compass masks. For FOK pattern matching, the suggested method uses two asymmetric masks, Kirsch and Gaussian derivative, to compute the edge response and extract LDN codes. To calculate the edge response on the pattern, an asymmetric compass mask derived from the Gaussian derivative mask is created by rotating the Kirsch mask in 45-degree steps, providing edge responses in eight distinct directions. The edge response of each mask and the combination of dominant direction numbers are examined during LDN code generation. For matching, a distance metric can be used to compare the LDN code's condensed representation of the FOK pattern against the original. The efficiency of the suggested procedure is assessed on the Indian Institute of Technology Delhi Finger Knuckle (IITDFK) database. The data show that the suggested strategy is effective, with an Equal Error Rate (EER) of 10.78%, outperforming the EER values of competing approaches.
"Patterns Identification of Finger Outer Knuckles by Utilizing Local Directional Number." International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.9.10, published 2023-11-14.
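A rough sketch of the compass-mask idea: the eight Kirsch masks can be generated by rolling the border coefficients of the base mask in 45-degree steps, and one common LDN formulation packs the directions of the strongest positive and strongest negative edge responses into a single 6-bit code. The sample patch and the packing rule below are illustrative, not the paper's exact procedure:

```python
import numpy as np

# Clockwise border positions of a 3x3 mask; rolling the eight border
# coefficients by one position rotates the mask by 45 degrees.
BORDER = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def kirsch_masks():
    base = np.array([[-3.0, -3.0, 5.0],
                     [-3.0,  0.0, 5.0],
                     [-3.0, -3.0, 5.0]])  # east-direction Kirsch mask
    vals = np.array([base[r, c] for r, c in BORDER])
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (r, c), v in zip(BORDER, np.roll(vals, k)):
            m[r, c] = v
        masks.append(m)
    return masks

def ldn_code(patch):
    """6-bit LDN code for a 3x3 patch: directions of the strongest
    positive and strongest negative edge responses, packed together."""
    responses = np.array([(m * patch).sum() for m in kirsch_masks()])
    i_max = int(responses.argmax())   # dominant positive direction
    i_min = int(responses.argmin())   # dominant negative direction
    return i_max * 8 + i_min          # code in the range 0..63

# Illustrative patch with a vertical edge (dark left, bright right).
patch = np.array([[0.0, 0.0, 9.0],
                  [0.0, 0.0, 9.0],
                  [0.0, 0.0, 9.0]])
print(ldn_code(patch))
```

Histogramming these per-pixel codes over the knuckle image gives the condensed representation that the distance metric then compares.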
The immense volume of Closed-Circuit Television (CCTV) surveillance footage, with its huge storage requirements and complex backgrounds, makes crime investigation difficult. Content-based video retrieval is an excellent method to identify the best keyframes from these surveillance videos. As crime surveillance contains numerous action scenes, existing keyframe extraction is not exemplary. Here, the spatio-temporal Histogram of Oriented Gradients - Support Vector Machine (HOG-SVM) feature method, combined with background subtraction, is applied to the recovered crime video to highlight the human presence in surveillance frames. Additionally, the Visual Geometry Group (VGG) network is trained on these frames to classify human-detected frames. The detected frames are then processed to extract keyframes by computing an inter-frame difference against a threshold value, favouring the requisite human-detected keyframes. Experimentally, the baseline HOG-SVM yields a compression ratio of 98.54%, which the proposed work improves to 98.71%, supporting criminal investigation.
"An Enhanced Spatio-Temporal Human Detected Keyframe Extraction." Rajeshwari D., Victoria Priscilla C. International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.9.3, published 2023-11-14.
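The inter-frame-difference step this abstract describes can be sketched simply: a frame is kept as a keyframe when its mean absolute difference from the previous frame exceeds a threshold. The synthetic frames and threshold value are illustrative:

```python
import numpy as np

def keyframes(frames, threshold):
    """Indices of frames whose mean absolute difference from the
    previous frame exceeds the threshold (the first frame is kept)."""
    keep = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1]).mean()
        if diff > threshold:
            keep.append(i)
    return keep

# Synthetic 4x4 "video": frames 0-2 identical, frame 3 changes sharply.
static = np.zeros((4, 4))
frames = [static, static, static, np.full((4, 4), 50.0)]
print(keyframes(frames, threshold=10.0))  # → [0, 3]
```

In the paper's pipeline this step runs only on frames the detector has already flagged as containing a person, which is what makes the retained keyframes human-centric.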
Speech enhancement aims to eliminate or reduce undesirable noises and distortions while preserving the features of the speech, so as to enhance the quality and intelligibility of degraded speech signals. In this study, we investigated a combined approach using single-frequency filtering (SFF) and a modified spectral subtraction method to enhance single-channel speech. The SFF method divides the speech signal into uniform subband envelopes, and spectral over-subtraction is then performed on each envelope. A smoothing parameter, determined by the a-posteriori signal-to-noise ratio (SNR), is used to estimate and update the noise without the need to explicitly detect silence. To evaluate the performance of our algorithm, we employed objective measures such as segmental SNR (segSNR), extended short-term objective intelligibility (ESTOI), and perceptual evaluation of speech quality (PESQ). We tested our algorithm with various types of noise at different SNR levels and achieved results ranging from 4.24 to 15.41 for segSNR, 0.57 to 0.97 for ESTOI, and 2.18 to 4.45 for PESQ.
Compared to other standard and existing speech enhancement methods, our algorithm produces better results and performs well in reducing undesirable noises.
"A robust speech enhancement method in noisy environments." Nesrine Abajaddi, Youssef Elfahm, Badia Mounir, Abdelmajid Farchi. International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.9.2, published 2023-11-14.
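The spectral over-subtraction step, applied per envelope in the paper, can be sketched in NumPy as classic power-domain subtraction with an over-subtraction factor and a spectral floor. The tone-plus-noise frame and the assumption of a known noise spectrum are illustrative simplifications of the paper's SNR-driven noise estimate:

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, alpha=2.0, beta=0.02):
    """Power spectral over-subtraction on one frame.
    alpha: over-subtraction factor, beta: spectral floor."""
    Y = np.fft.rfft(noisy)
    mag2 = np.abs(Y) ** 2
    clean2 = mag2 - alpha * noise_mag ** 2
    clean2 = np.maximum(clean2, beta * mag2)       # floor limits musical noise
    cleaned = np.sqrt(clean2) * np.exp(1j * np.angle(Y))  # keep noisy phase
    return np.fft.irfft(cleaned, n=len(noisy))

# Illustrative frame: a sine tone plus white noise.
rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 8 * t / n)
noise = 0.3 * rng.normal(size=n)
noisy = clean + noise

# Assume the noise magnitude spectrum has been estimated (here: known).
noise_mag = np.abs(np.fft.rfft(noise))
enhanced = spectral_subtract(noisy, noise_mag)

err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((enhanced - clean) ** 2)
print(err_after < err_before)  # noise energy is reduced
```

In the paper this operation runs on each SFF subband envelope rather than on a single full-band frame, and the noise estimate is updated continuously via the a-posteriori SNR.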
Deep reinforcement learning (DRL) can effectively deal with resource allocation (RA) in wireless networks. However, more complex networks can have slower learning speeds, and a lack of network adaptability means new policies must be learned for newly introduced systems. To address these issues, this paper proposes a novel federated learning-based resource allocation framework (FEDRESOURCE) that efficiently performs RA in wireless networks. The proposed FEDRESOURCE technique uses federated learning (FL), a machine learning technique that shares the DRL-based RA model between distributed systems and a cloud server to describe a policy. The regularized local loss that occurs in the network is reduced using a butterfly optimization technique, which improves the convergence of the FL algorithm. The suggested FL framework speeds up policy learning and allows for adaptation by employing deep learning together with the optimization technique. Experiments were conducted using a Python-based simulator, with detailed numerical results for the wireless RA sub-problems. The theoretical results of the novel FEDRESOURCE algorithm have been validated in terms of transmission power, algorithm convergence, throughput, and cost. The proposed FEDRESOURCE technique achieves up to 27%, 55%, and 68% higher energy efficiency than the scheduling policy, asynchronous FL framework, and heterogeneous computation schemes, respectively, and increases discrimination accuracy by 1.7%, 1.2%, and 0.78% over the same baselines.
"FEDRESOURCE." P. G. Satheesh, T. Sasikala. International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.9.7, published 2023-11-14.
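The FL aggregation step such a framework relies on can be sketched as FedAvg-style weighted averaging of client parameters. The one-parameter "policies" below are illustrative, not the paper's DRL model:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style):
    each client's contribution is proportional to its data size."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Two clients with toy one-parameter "policies" (illustrative values).
clients = [{"w": np.array([1.0])}, {"w": np.array([3.0])}]
sizes = [100, 300]  # samples seen by each client

global_model = federated_average(clients, sizes)
print(global_model["w"])  # weighted mean: 1*0.25 + 3*0.75 = 2.5
```

The cloud server broadcasts this aggregate back to the distributed systems, so local DRL policies improve without raw data ever leaving each node.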
Ganesh Yallabandi, Veena Mayya, Jayakumar Jeganathan, Sowmya Kamath S.
Early detection of patient deterioration in the Intensive Care Unit (ICU) can play a crucial role in improving patient outcomes. Conventional severity scales currently used to predict patient deterioration are based on a number of factors, the majority of which consist of multiple investigations. Recent advancements in machine learning (ML) within the healthcare domain offer the potential to alleviate the burden of continuous patient monitoring. In this study, we propose an optimized ML model designed to leverage variations in vital signs observed during the final 24 hours of an ICU stay for outcome prediction, and we elucidate the relative contributions of distinct vital parameters to these outcomes. The dataset, compiled in real time, encompasses six pivotal vital parameters: systolic (0) and diastolic (1) blood pressure, pulse rate (2), respiratory rate (3), oxygen saturation (SpO2) (4), and temperature (5). Of these, systolic blood pressure emerges as the most significant predictor of mortality. Using fivefold cross-validation, several ML classifiers are used to categorize the last 24 hours of time-series data after ICU admission into three groups: recovery, death, and intubation. Notably, the optimized Gradient Boosting classifier exhibited the highest performance in detecting mortality, achieving an area under the receiver operating characteristic curve (AUC) of 0.95. Through the integration of electronic health records with this ML software, there is the promise of early notification of adverse outcomes, potentially several hours before the onset of hemodynamic instability.
"ICU Patients’ Pattern Recognition and Correlation Identification of Vital Parameters Using Optimized Machine Learning Models." International Journal of Electrical and Computer Engineering Systems, doi: 10.32985/ijeces.14.9.5, published 2023-11-14.
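The classifier setup this abstract describes, Gradient Boosting evaluated with fivefold cross-validation, can be sketched with scikit-learn. The synthetic vitals below are an illustrative simplification (binary rather than three-class), with the label deliberately tied to the first feature to mirror the dominance of systolic blood pressure:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the vitals dataset: 6 features per stay
# (systolic BP, diastolic BP, pulse, respiratory rate, SpO2, temperature).
n = 300
X = rng.normal(size=(n, 6))
# Label depends mostly on feature 0 (systolic BP), mirroring the
# paper's finding that systolic BP is the strongest predictor.
y = (X[:, 0] + 0.2 * rng.normal(size=n) > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0)
# Fivefold cross-validation, as in the study.
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

The study's three-class outcome (recovery, death, intubation) needs no code change: `GradientBoostingClassifier` handles multi-class labels directly.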
Maha A. Elmohallawy, Amir Yassin Hassan, Amal F. Abdel-Gawad, Sameh I. Selem
This paper studies the latest advances in Deep Learning (DL) methods utilized for classifying transformer inrush and fault currents. Inrush and fault currents are simulated at different operating conditions, initial flux values, and fault types. The paper presents a technique for the classification of power transformer faults based on a DL method, the convolutional neural network (CNN), and compares it with a traditional artificial neural network (ANN) and other techniques. The inrush and fault current signals of the transformer are simulated in MATLAB using Fourier analyzers that provide the 2nd harmonic signal. The 2nd harmonic peak and variance statistics of the input signals of the transformer's three phases are used at different operating conditions. The resulting values are aggregated into a dataset used as input to the CNN model, which is then trained and tested. The results show that the CNN achieves better performance than the other algorithms. This study enables easy discrimination between normal and faulty signals and determination of the fault type so that it can be cleared easily.
{"title":"Transformer Faults Classification Based on Convolution Neural Network","authors":"Maha A. Elmohallawy, Amir Yassin Hassan, Amal F. Abdel-Gawad, Sameh I. Selem","doi":"10.32985/ijeces.14.9.11","DOIUrl":"https://doi.org/10.32985/ijeces.14.9.11","journal":"International Journal of Electrical and Computer Engineering Systems","publicationDate":"2023-11-14","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"134953642"}
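A minimal CNN classifier over such aggregated features might look like the sketch below. The layer sizes, the six-feature input (2nd-harmonic peak and variance for each of the three phases), and the four output classes are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    """Tiny 1-D CNN over a 6-element per-sample feature vector.

    Hypothetical sketch: the paper does not publish its exact layers,
    so the channel counts and class count here are invented.
    """
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),   # treat the 6 features as a short 1-D signal
            nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Linear(16 * 6, n_classes)          # logits for normal/inrush/fault classes

    def forward(self, x):                                 # x: (batch, 1, 6)
        z = self.conv(x)
        return self.head(z.flatten(1))

model = FaultCNN()
logits = model(torch.randn(32, 1, 6))                     # a batch of 32 feature vectors
```

Training would then proceed with a standard cross-entropy loss over the labeled inrush/fault dataset.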
Localization in a dynamic environment, involving moving sensor nodes or anchor nodes, is one of the challenges in WSN localization. Mobile anchors can be an efficient solution in three-dimensional environments that would otherwise require a large number of static anchors. The reliability of a localization system using mobile anchors is determined by parameters such as energy efficiency, coverage, computational complexity, and cost. Researchers have proposed various methods to build reliable mobile anchor localization systems, which shows how many research opportunities this field offers. That abundance of opportunities, however, can make it difficult for new researchers to choose a topic area early on, and until now no paper has presented a systematic mapping study providing information on topic areas and trends in mobile anchor localization. A Systematic Mapping Study (SMS) was therefore conducted to determine topic areas and their trends, identify influential authors, and derive topic models and trends from those models. This SMS can help researchers interested in mobile anchor localization select topics for further research. The paper provides information on the mobile anchor research area, the authors who have influenced mobile anchor localization research, and the topic models and trends that are potentially promising for future research. The SMS includes a chronology of publications from 2017-2022, bibliometric co-occurrence and co-author analysis, topic modeling, and trend analysis. The results show that mobile anchor localization publications continued to grow through 2022. Ten topic models were identified, six of which are considered promising.
The results of this SMS can serve as preliminary research for the next stage of literature study, namely a Systematic Literature Review (SLR).
{"title":"Research Trend Topic Area on Mobile Anchor Localization","authors":"Gita Indah Hapsari, Rendy Munadi, Bayu Erfianto, Indrarini Dyah Irawati","doi":"10.32985/ijeces.14.9.1","DOIUrl":"https://doi.org/10.32985/ijeces.14.9.1","journal":"International Journal of Electrical and Computer Engineering Systems","publicationDate":"2023-11-14","publicationTypes":"Journal Article","platform":"Semanticscholar","paperid":"134957071"}