Due to recent advancements in computational biology, DNA microarray technology has emerged as a useful tool for detecting mutations in complex diseases such as cancer. The availability of thousands of microarray datasets makes this field an active area of research. Early cancer detection can reduce both the mortality rate and the treatment cost. Cancer classification is a process that provides a detailed overview of the disease microenvironment for better diagnosis. However, gene microarray datasets suffer from the curse of dimensionality, and classification models are prone to overfitting because of the small sample size and large feature space. To address these issues, the authors propose an Improved Binary Competitive Swarm Optimization Whale Optimization Algorithm (IBCSOWOA) for cancer classification, in which IBCSO refines the informative gene subset produced by the minimum redundancy maximum relevance (mRMR) filter method. The IBCSOWOA technique is tested on an artificial neural network (ANN) model, with the whale optimization algorithm (WOA) used for parameter tuning. The performance of IBCSOWOA is evaluated on six mutation-based microarray datasets and compared with existing disease prediction methods. The experimental results indicate the superiority of the proposed technique over existing nature-inspired methods in terms of optimal feature subset, classification accuracy, and convergence rate. The technique achieved above 98% accuracy on all six datasets, with the highest accuracy of 99.45% on the lung cancer dataset.
"A Hybrid Metaheuristics based technique for Mutation Based Disease Classification" by M. Phogat and D. Kumar. International Journal of Electrical and Computer Engineering Systems, 2023-07-12. https://doi.org/10.32985/ijeces.14.6.3
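As an illustration of the filter stage, the sketch below implements a minimal greedy mRMR-style selection in plain NumPy. It uses absolute Pearson correlation as a simple stand-in for the mutual information used by full mRMR; the function name and scoring proxy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR-style selection: at each step pick the feature that
    maximizes (relevance to the target) minus (mean redundancy with the
    features already chosen). |Pearson correlation| stands in for the
    mutual information used by full mRMR."""
    n_features = X.shape[1]
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    )
    selected = [int(np.argmax(relevance))]        # most relevant feature first
    while len(selected) < k:
        best_j, best_score = -1, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected]
            )
            score = relevance[j] - redundancy     # mRMR difference criterion
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

In the paper's pipeline this filter output is then refined further by the IBCSO wrapper; the sketch covers only the filter step.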
Dahbi Abdeldjalil, B. Benlahbib, M. Benmedjahed, Abderrahman Khelfaoui, A. Bouraiou, N. Aoun, Saad Mekhilefd, A. Reama
This work presents an experimental comparative investigation of maximum power point tracking (MPPT) control methods used in variable-speed wind turbines. To enhance the efficiency of the wind turbine system, MPPT control is applied to extract and exploit the maximum available wind power. Two MPPT controls were analyzed, developed, and investigated in real time using dSPACE. The first was optimal torque control without speed control, whereas the second included speed control. The performance comparison was carried out through real-time experimental validation to illustrate the advantages of each control on a real wind energy system. The results show power efficiency improvements in both the transient time and the steady state. In addition, the proposed optimal torque control for maximum power point tracking with speed control reduced the response time and oscillations, while increasing the extracted power by 12.5% to 75% compared with the strategy without speed control, in the steady state and transient state, respectively.
"A Comparative Experimental Investigation of MPPT Controls for Variable Speed Wind Turbines" by Dahbi Abdeldjalil, B. Benlahbib, M. Benmedjahed, Abderrahman Khelfaoui, A. Bouraiou, N. Aoun, Saad Mekhilefd, and A. Reama. International Journal of Electrical and Computer Engineering Systems, 2023-07-12. https://doi.org/10.32985/ijeces.14.6.10
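The optimal torque law underlying the first control is the standard relation T_ref = k_opt ω². The sketch below computes the textbook coefficient from rotor parameters; the parameter values and function names are illustrative assumptions, not those of the experimental rig.

```python
import math

# Illustrative rotor parameters (not the experimental rig's values)
RHO = 1.225        # air density [kg/m^3]
RADIUS = 1.5       # blade radius [m]
CP_MAX = 0.48      # maximum power coefficient
LAMBDA_OPT = 8.1   # optimal tip-speed ratio

def k_opt(rho=RHO, radius=RADIUS, cp_max=CP_MAX, lambda_opt=LAMBDA_OPT):
    """Optimal torque coefficient, from P = 0.5*rho*pi*R^2*Cp_max*v^3
    with v = omega*R/lambda_opt, giving T_opt = k_opt * omega^2."""
    return 0.5 * rho * math.pi * radius**5 * cp_max / lambda_opt**3

def torque_ref(omega, k):
    """Reference torque commanded by the OT-MPPT loop (no speed loop)."""
    return k * omega**2
```

Tracking this quadratic torque curve drives the rotor toward the optimal tip-speed ratio without measuring wind speed, which is what makes the method attractive for low-cost turbines.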
A compact and low-cost eight-port (2x4 configuration) tapered-edged antenna array (TEAA) with symmetrical slots and reduced mutual coupling is presented in this paper using the inset-feed technique. The 8-port TEAA is designed and simulated in CST Microwave Studio, fabricated on a flame-resistant (FR4) substrate with a dielectric constant (εr) of 4.3 and a thickness (h) of 1.66 mm, and characterized using a Keysight Technologies vector network analyzer (VNA). The designed 8-port TEAA operates in the 5.05-5.2 GHz frequency band. Key design parameters, including return loss, bandwidth, gain, 2D/3D radiation patterns, surface current distributions, and isolation loss, are studied and the results summarized. The eight-port TEAA features a bandwidth of 195 MHz, a gain of 10.25 dB, a 3 dB beamwidth of 52.8°, and excellent mutual coupling of less than -20 dB (high isolation). The 8-port TEAA is proposed and characterized for next-generation high-throughput WLANs such as IEEE 802.11ax (Wi-Fi 6E), Internet-of-Things (IoT), and upcoming 5G wireless communication systems.
"Eight-Port Tapered-Edged Antenna Array With Symmetrical Slots and Reduced Mutual-Coupling for Next-Generation Wireless and Internet of Things (IoT) Applications" by Bilal A. Khawaja. International Journal of Electrical and Computer Engineering Systems, 2023-07-12. https://doi.org/10.32985/ijeces.14.6.9
Fatima Barkani, Mohamed Hamidi, Ouissam Zealouk, H. Satori
The main purpose of this research is to investigate how an Amazigh speech recognition system can be integrated into a low-cost minicomputer, specifically the Raspberry Pi, in order to improve the system's automatic speech recognition capabilities. The study focuses on optimizing system parameters to balance performance against limited system resources. To achieve this, the system employs a combination of Hidden Markov Models (HMMs), Gaussian Mixture Models (GMMs), and Mel Frequency Cepstral Coefficients (MFCCs) with a speaker-independent approach. The system recognizes 20 Amazigh words, comprising 10 commands and the first ten Amazigh digits. The results indicate that the recognition rate achieved on the Raspberry Pi is 89.16% using 3 HMMs, 16 GMMs, and 39 MFCC coefficients. These findings demonstrate that it is feasible to create effective embedded Amazigh speech recognition systems on a low-cost minicomputer such as the Raspberry Pi. Furthermore, Amazigh linguistic analysis has been applied to ensure the accuracy of the designed embedded speech system.
"Assessing the Performance of a Speech Recognition System Embedded in Low-Cost Devices" by Fatima Barkani, Mohamed Hamidi, Ouissam Zealouk, and H. Satori. International Journal of Electrical and Computer Engineering Systems, 2023-07-12. https://doi.org/10.32985/ijeces.14.6.7
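The 39 MFCC coefficients cited above are conventionally the 13 static cepstral coefficients plus their first and second time derivatives. A minimal NumPy sketch of that expansion, using the standard delta regression formula (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def add_deltas(mfcc, width=2):
    """Expand a (frames, 13) static MFCC matrix to the usual 39-dimensional
    features by appending delta and delta-delta coefficients, computed with
    the standard regression formula
        d_t = sum_n n * (c_{t+n} - c_{t-n}) / (2 * sum_n n^2)."""
    def delta(feat):
        T = feat.shape[0]
        denom = 2 * sum(n * n for n in range(1, width + 1))
        padded = np.pad(feat, ((width, width), (0, 0)), mode='edge')
        return sum(
            n * (padded[width + n:width + n + T] - padded[width - n:width - n + T])
            for n in range(1, width + 1)
        ) / denom
    d1 = delta(mfcc)          # first derivative (velocity)
    d2 = delta(d1)            # second derivative (acceleration)
    return np.hstack([mfcc, d1, d2])
```

The deltas capture short-time spectral dynamics, which is why the 39-dimensional vector is the usual front end for HMM-GMM recognizers like the one described.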
Cervical cancer is among the main causes of death globally, even though it can be prevented and treated if the afflicted tissue is removed early. Cervical screening programs must be made accessible to everyone and run effectively, a difficult task that requires, among other things, identifying the population's most vulnerable members. We therefore present an effective deep-learning method for multi-class cervical cancer classification using Pap smear images. The transfer learning-based optimized SE-ResNet152 model is used for multi-class Pap smear image classification, and the proposed network accurately extracts reliable, significant image features. The network's hyperparameters are optimized using the Deer Hunting Optimization (DHO) algorithm. Five SIPaKMeD dataset categories and six CRIC dataset categories constitute the 11 cervical cancer disease classes. A Pap smear image dataset with 8838 images and varied class distributions is used to evaluate the proposed method, and a cost-sensitive loss function introduced during the classifier's learning rectifies the dataset's imbalance. Compared with prior approaches to multi-class Pap smear image classification, the proposed method achieves 99.68% accuracy, 98.82% precision, 97.86% recall, and 98.64% F1-score on the test set. Owing to these classification results, the proposed method is well suited to automated preliminary diagnosis of cervical cancer in hospitals and cervical cancer clinics.
"Multi-class Cervical Cancer Classification using Transfer Learning-based Optimized SE-ResNet152 model in Pap Smear Whole Slide Images" by Krishna Prasad Battula and B. Sai Chandana. International Journal of Electrical and Computer Engineering Systems, 2023-07-12. https://doi.org/10.32985/ijeces.14.6.1
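The abstract does not specify the form of the cost-sensitive loss; one common choice, sketched below under that assumption, is cross-entropy with per-class weights, here derived from inverse class frequency. All names are illustrative, not the authors' code.

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """One common cost-sensitive weighting: w_c = N / (C * count_c),
    so rare classes get proportionally larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return len(labels) / (n_classes * counts)

def weighted_cross_entropy(probs, labels, class_weights):
    """Cross-entropy where each sample's loss is scaled by the weight of
    its true class. probs: (N, C) predicted probabilities; labels: (N,)."""
    eps = 1e-12                                   # numerical safety for log
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(class_weights[labels] * -np.log(picked + eps)))
```

Scaling each sample's loss by its class weight makes errors on under-represented classes cost more, which is how such a loss counteracts the imbalanced class distribution described above.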
Brain tumor classification is an essential task in medical image processing that assists doctors in producing accurate diagnoses and treatment plans. A Deep Residual Network with transfer learning to a fully convolutional neural network (CNN) is proposed to perform brain tumor classification of Magnetic Resonance Images (MRI) from the BRATS 2020 dataset. The dataset consists of pre-operative MRI scans of brain tumors, namely gliomas, that vary in appearance, shape, and histology. The 50-layer residual network (ResNet-50) processes the multiple tumor image categories in the classification task using convolution blocks and identity blocks. Limitations of prior work, such as the limited accuracy and algorithmic complexity of the CNN-based ME-Net and classification issues in YOLOv2 inceptions, are resolved by the proposed model. The trained CNN learns boundary and region tasks and extracts useful contextual information from MRI scans at minimal computational cost. Tumor segmentation and classification are performed in one step using a U-Net architecture, which helps retain the spatial features of the image, and multimodality fusion integrates dataset information for the classification and regression tasks.
The dice scores of the proposed model for Enhanced Tumor (ET), Whole Tumor (WT), and Tumor Core (TC) are 0.88, 0.97, and 0.90 on the BRATS 2020 dataset, with 99.94% accuracy, 98.92% sensitivity, 98.63% specificity, and 99.94% precision.
"Effective Brain Tumor Classification Using Deep Residual Network-Based Transfer Learning" by D. Saida, Klsdt Keerthi Vardhan, and P. Premchand. International Journal of Electrical and Computer Engineering Systems, 2023-07-12. https://doi.org/10.32985/ijeces.14.6.2
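The dice score reported per sub-region is the standard overlap measure between predicted and ground-truth binary masks; a minimal NumPy sketch (the function name and smoothing constant are illustrative):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks, 2|P n T| / (|P| + |T|),
    as reported per tumor sub-region (ET, WT, TC). eps avoids 0/0 when
    both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1 means perfect overlap and 0 means none, so the reported 0.88/0.97/0.90 indicate close agreement between predicted and annotated tumor regions.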
This paper reports the performance of series- and shunt-connected self-excited reluctance generators (SERGs). In addition to the two stator connections, an analysis was carried out on rotor configurations (with and without a cage), a combination resulting in four different generator topologies. The loss-of-load and transient characteristics of each generator configuration were studied for a combination of purely resistive and R-L loads. It is shown that, for the same machine size, speed, and exciting capacitor value, the generator with a cage preserves a better wave shape following a transient disturbance than the cageless machine. At unity power factor, the shunt generator with a cage can deliver 0.691 pu output power at 1.97% regulation; its series counterpart delivers only 0.589 pu at 2.05%. The study demonstrates that while shunt generators have better regulation and support higher loads at different power factors, series generators perform better at damping out transients.
"Performance of Synchronous Reluctance Generators with Series and Shunt Stator Connections" by Pauline Ijeoma Obe, Lilian Livutse Amuhaya, Emeka Simon Obe, and Adamu Murtala Zungeru. International Journal of Electrical and Computer Engineering Systems, 2023-06-05. https://doi.org/10.32985/ijeces.14.5.10
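The regulation percentages quoted above follow the standard definition of voltage regulation; a one-line sketch under that assumption (the paper may compute its figures from a different base):

```python
def voltage_regulation(v_no_load, v_full_load):
    """Percent voltage regulation, 100 * (V_nl - V_fl) / V_fl.
    Smaller values mean a stiffer generator: 1.97% (shunt with cage)
    beats 2.05% (series) in the comparison above."""
    return 100.0 * (v_no_load - v_full_load) / v_full_load
```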
Radar-based hand gesture recognition is an important research area that supports applications such as human-computer interaction and healthcare monitoring. Several deep learning algorithms for gesture recognition using Impulse Radio Ultra-Wide Band (IR-UWB) radar have been proposed. Most focus on achieving high performance, which requires a huge amount of data, yet acquiring and annotating data remains a complex, costly, and time-consuming task. Moreover, processing a large volume of data usually requires a complex model with very many training parameters and high computation and memory consumption. To overcome these shortcomings, we propose a simple data processing approach along with a lightweight multi-input hybrid model structure to enhance performance. We aim to improve on the existing state-of-the-art results obtained with an available IR-UWB gesture dataset consisting of range-time images of dynamic hand gestures. First, these images are extended using the Sobel filter, which generates low-level feature representations for each sample: the gradient images in the x-direction, the y-direction, and both the x- and y-directions. Next, we apply these representations as inputs to a three-input Convolutional Neural Network, Long Short-Term Memory, Support Vector Machine (CNN-LSTM-SVM) model. Each representation is fed to a separate CNN branch, and the branch outputs are concatenated for further processing by the LSTM. This combination allows automatic extraction of richer spatiotemporal features of the target with no manual feature engineering or prior domain knowledge. To select the optimal classifier and achieve a high recognition rate, the SVM hyperparameters are tuned using the Optuna framework. Our multi-input hybrid model achieved high performance, including 98.27% accuracy, 98.30% precision, 98.29% recall, and 98.27% F1-score, while ensuring low complexity. Experimental results indicate that the proposed approach improves accuracy and prevents the model from overfitting.
"Enhancing Dynamic Hand Gesture Recognition using Feature Concatenation via Multi-Input Hybrid Model" by Djazila Souhila Korti, Zohra Slimane, and Kheira Lakhdari. International Journal of Electrical and Computer Engineering Systems, 2023-06-05. https://doi.org/10.32985/ijeces.14.5.5
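The three Sobel-based representations can be sketched with plain NumPy as follows; this uses a naive "valid" convolution with the classic 3x3 kernels, and the names are illustrative (the authors' preprocessing may pad or scale differently).

```python
import numpy as np

# Classic 3x3 Sobel kernels (horizontal and vertical gradients)
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def conv2_valid(img, kernel):
    """Naive 'valid' 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def gradient_representations(image):
    """Return the three low-level inputs built from one range-time image:
    x-gradient, y-gradient, and the combined x-and-y gradient magnitude,
    each destined for its own CNN branch."""
    gx = conv2_valid(image, KX)
    gy = conv2_valid(image, KY)
    gxy = np.hypot(gx, gy)      # combined gradient magnitude
    return gx, gy, gxy
```

Feeding each representation to its own CNN branch, as described above, lets the network see edge structure along range, along time, and jointly, without any learned preprocessing.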
Biomedical research and discoveries are communicated through scholarly publications, and this literature is voluminous, rich in scientific text, and growing exponentially by the day. Biomedical journals publish nearly three thousand research articles daily, making literature search a challenging proposition for researchers. Biomolecular events involve genes, proteins, metabolites, and enzymes; they provide invaluable insights into biological processes and explain physiological functional mechanisms. Text mining (TM), the automatic extraction of such events from big data, is the only quick and viable way to gather useful information from it. Events extracted from the biological literature have a broad range of applications, including database curation, ontology construction, semantic web search, and interactive systems. However, automatic extraction is challenging on account of the ambiguity and diversity of natural language and of linguistic phenomena such as speculation and negation, which commonly occur in biomedical texts and lead to erroneous interpretation. In the last decade, many strategies have been proposed in this field, drawing on paradigms such as biomedical natural language processing (BioNLP), machine learning, and deep learning. In addition, parallel computing architectures such as graphics processing units (GPUs) have emerged as candidates to accelerate the event extraction pipeline. This paper reviews and summarizes the key approaches to complex biomolecular big-data event extraction and recommends an architecture balanced in terms of accuracy, speed, computational cost, and memory usage for developing a robust GPU-accelerated BioNLP system.
Manish Bali, S. Anandaraj, "Biomolecular Event Extraction using Natural Language Processing," International Journal of Electrical and Computer Engineering Systems, 2023-06-05. doi: 10.32985/ijeces.14.5.12
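To make the event-extraction task concrete, the sketch below implements the simplest possible first stage of a BioNLP-style pipeline: dictionary-based event-trigger detection plus a crude regex heuristic for gene-symbol mentions. The trigger lexicon, event types, and example sentence are illustrative assumptions only; real systems use curated lexicons (or learned models) and proper named-entity recognition.

```python
import re

# Tiny illustrative trigger lexicon mapping surface words to event types
# (loosely modeled on BioNLP shared-task categories; not a real resource).
TRIGGERS = {
    "phosphorylation": "Phosphorylation",
    "phosphorylates": "Phosphorylation",
    "expression": "Gene_expression",
    "binds": "Binding",
    "regulates": "Regulation",
}

# Crude heuristic for gene symbols: capitalized token ending in a digit.
GENE_PATTERN = re.compile(r"\b[A-Z][A-Za-z0-9]*\d\b")

def extract_events(sentence):
    """Return (event_type, trigger_word, gene mentions in sentence) tuples."""
    genes = GENE_PATTERN.findall(sentence)
    events = []
    for token in re.findall(r"[A-Za-z]+", sentence.lower()):
        if token in TRIGGERS:
            events.append((TRIGGERS[token], token, genes))
    return events

print(extract_events("MAPK1 phosphorylates ELK1 in the nucleus."))
```

Even this toy version shows where the difficulties named in the abstract arise: speculation ("may phosphorylate"), negation ("does not bind"), and ambiguous trigger words all defeat a plain dictionary lookup, which is why the surveyed systems move to machine-learned and deep models.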
Fatima Chakir, A. El Magri, R. Lajouad, M. Kissaoui, Mostafa Chakir, O. Bouattane
With their many advantages, including low power dissipation in the power switches, low harmonic content, and reduced electromagnetic interference (EMI) from the inverter, multilevel inverter (MLI) topologies are in increasing demand for high- and medium-power applications. This paper introduces a novel multilevel symmetric inverter topology with an adapted control scheme. The objectives of this article are to define the placement of the various switches, to select suitable switches, and to propose an inverter control strategy that eliminates harmonics while producing the ideal output voltage/current. By using fewer switching elements and fewer voltage sources, the proposed topology achieves lower total harmonic distortion (THD), reduced losses, and lower minimum switch voltage stress (Vstrssj), making it more efficient than conventional inverters with the same number of levels. The new topology is demonstrated on a seven-level single-phase inverter. The topology is studied and validated in MATLAB/Simulink for various modulation indices.
"Design and analysis of a new multi-level inverter topology with a reduced number of switches and controlled by PDPWM technique," International Journal of Electrical and Computer Engineering Systems, 2023-06-05. doi: 10.32985/ijeces.14.5.11
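The PDPWM technique named in the title can be sketched numerically: in phase-disposition PWM for a seven-level inverter, six level-shifted triangular carriers stack to cover the reference range, and the instantaneous output level is set by how many carriers the sinusoidal reference exceeds. The sketch below (values such as a 50 Hz reference, 2 kHz carriers, and modulation index 0.9 are illustrative assumptions, not from the paper) generates the resulting seven-level staircase.

```python
import math

def triangle(t, f):
    """Unit-amplitude triangular carrier in [0, 1] at frequency f."""
    x = (t * f) % 1.0
    return 2 * x if x < 0.5 else 2 * (1 - x)

def pdpwm_level(t, m=0.9, f_ref=50.0, f_carrier=2000.0, n_levels=7):
    """Output level (-3..+3 for seven levels) under phase-disposition PWM."""
    n_carriers = n_levels - 1                 # six carriers for seven levels
    # Sinusoidal reference scaled by the modulation index over the carrier stack.
    ref = m * (n_carriers / 2) * math.sin(2 * math.pi * f_ref * t)
    level = 0
    for k in range(n_carriers):               # carrier band k spans [k-3, k-2]
        carrier = (k - n_carriers / 2) + triangle(t, f_carrier)
        if ref >= carrier:                    # reference above this carrier
            level += 1
    return level - n_carriers // 2            # map 0..6 onto -3..+3

# Sample one 20 ms fundamental period at 10 us steps.
levels = {pdpwm_level(i * 1e-5) for i in range(2000)}
print(sorted(levels))
```

Sweeping the modulation index m changes how many of the outer levels are reached, which is exactly the kind of study the abstract describes carrying out in MATLAB/Simulink.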