An accurate diagnosis is significant for treating any disease in its early stage. Content-Based Medical Image Retrieval (CBMIR) is used to find similar medical images in a large database to help radiologists in diagnosis. The main difficulty in CBMIR is the semantic gap between the low-level visual details captured by computer-aided tools and the high-level semantic details perceived by humans. Many existing methods for CBMIR, such as Manhattan Distance, Triplet Deep Hashing, and Transfer Learning techniques, have been developed but showed low efficiency and high computational cost. To address these issues, a new feature extraction approach is proposed that combines the Histogram of Gradients (HoG) with the Local Ternary Pattern (LTP) to automatically retrieve medical images from the Contrast-Enhanced Magnetic Resonance Imaging (CE-MRI) database. The Adam optimization algorithm is used to select features, and the Euclidean measure computes the similarity for query images. The experimental analysis clearly shows that the proposed HoG-LTP method achieves a higher accuracy of 98.8%, a sensitivity of 98.5%, and a specificity of 99.416%, outperforming the existing Random Forest (RF) method, which achieved an accuracy, sensitivity, and specificity of 81.1%, 81.7%, and 90.5%, respectively.
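A minimal sketch of the kind of retrieval pipeline the abstract describes, assuming scikit-image for the HoG descriptor and grayscale images of equal size; the LTP implementation and the concatenate-then-rank design are illustrative assumptions, not the authors' exact method (their feature selection step with Adam is omitted).

```python
# Hypothetical HoG + LTP retrieval sketch: extract both descriptors, concatenate
# them, and rank database images by Euclidean distance to the query descriptor.
import numpy as np
from skimage.feature import hog

def ltp_histogram(img, t=5):
    """Local Ternary Pattern split into upper/lower LBP-style codes (3x3 neighbourhood)."""
    img = img.astype(np.int16)
    c = img[1:-1, 1:-1]
    upper, lower = np.zeros_like(c), np.zeros_like(c)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper |= ((n > c + t).astype(np.int16) << bit)
        lower |= ((n < c - t).astype(np.int16) << bit)
    h_up, _ = np.histogram(upper, bins=256, range=(0, 256))
    h_lo, _ = np.histogram(lower, bins=256, range=(0, 256))
    hist = np.concatenate([h_up, h_lo]).astype(float)
    return hist / (hist.sum() + 1e-9)

def describe(img):
    """Concatenated HoG + LTP descriptor for one grayscale image."""
    h = hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([h, ltp_histogram(img)])

def retrieve(query_img, database_imgs, top_k=5):
    """Return indices of the top_k most similar database images (Euclidean distance)."""
    q = describe(query_img)
    dists = [np.linalg.norm(q - describe(d)) for d in database_imgs]
    return np.argsort(dists)[:top_k]
```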
{"title":"Feature Extraction Method using HoG with LTP for Content-Based Medical Image Retrieval","authors":"NV Shamna, B. Aziz Musthafa","doi":"10.32985/ijeces.14.3.4","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.4","url":null,"abstract":"An accurate diagnosis is significant for the treatment of any disease in its early stage. Content-Based Medical Image Retrieval (CBMIR) is used to find similar medical images in a huge database to help radiologists in diagnosis. The main difficulty in CBMIR is semantic gaps between the lower-level visual details, captured by computer-aided tools and higher-level semantic details captured by humans. Many existing methods such as Manhattan Distance, Triplet Deep Hashing, and Transfer Learning techniques for CBMIR were developed but showed lower efficiency and the computational cost was high. To solve such issues, a new feature extraction approach is proposed using Histogram of Gradient (HoG) with Local Ternary Pattern (LTP) to automatically retrieve medical images from the Contrast-Enhanced Magnetic Resonance Imaging (CE-MRI) database. Adam optimization algorithm is utilized to select features and the Euclidean measure calculates the similarity for query images. From the experimental analysis, it is clearly showing that the proposed HoG-LTP method achieves higher accuracy of 98.8%, a sensitivity of 98.5%, and a specificity of 99.416%, which is better when compared to the existing Random Forest (RF) method which displayed an accuracy, sensitivity, and specificity of 81.1%, 81.7% and 90.5% respectively.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45634469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prostate Cancer (PC) is one of the leading causes of cancer-related mortality among males; therefore, an effective system is required to identify sensitive biomarkers for early recognition. The objective of this research is to find potential biomarkers for characterizing the dissimilar types of PC. In this article, the PC-related genes are acquired from the Gene Expression Omnibus (GEO) database. Gene selection is then accomplished using an enhanced Particle Swarm Optimization (PSO) algorithm to select the active genes related to PC. In the enhanced PSO algorithm, the interval-Newton approach is included to keep the search space adaptive by varying the swarm diversity, which helps to perform the local search effectively. The selected active genes are fed to a random forest classifier for the classification of PC (high and low risk). As seen in the experimental investigation, the proposed model achieved an overall classification accuracy of 96.71%, which is better than traditional models such as naïve Bayes, support vector machine, and neural network.
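For illustration, a simplified binary-PSO wrapper around a Random Forest fitness function is sketched below (scikit-learn assumed, at least five samples per class for cross-validation). The paper's interval-Newton diversity adaptation is deliberately omitted; this is a plain PSO stand-in, not the authors' enhanced algorithm.

```python
# Binary PSO wrapper for gene selection scored by Random Forest cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Mean 5-fold accuracy of a Random Forest on the selected gene columns."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

def pso_gene_selection(X, y, n_particles=20, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]
    pos = (rng.random((n_particles, n_genes)) < 0.1).astype(float)   # sparse initial masks
    vel = rng.normal(0, 0.1, (n_particles, n_genes))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_genes))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, n_genes)) < 1 / (1 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)   # boolean mask of selected genes
```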
{"title":"Effective Prostate Cancer Detection using Enhanced Particle Swarm Optimization Algorithm with Random Forest on the Microarray Data","authors":"Sanjeev Prakashrao Kaulgud, Vishwanath R. Hulipalled, Siddanagouda Somanagouda Patil, Prabhuraj Metipatil","doi":"10.32985/ijeces.14.3.2","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.2","url":null,"abstract":"Prostate Cancer (PC) is the leading cause of mortality among males, therefore an effective system is required for identifying the sensitive bio-markers for early recognition. The objective of the research is to find the potential bio-markers for characterizing the dissimilar types of PC. In this article, the PC-related genes are acquired from the Gene Expression Omnibus (GEO) database. Then, gene selection is accomplished using enhanced Particle Swarm Optimization (PSO) to select the active genes, which are related to the PC. In the enhanced PSO algorithm, the interval-newton approach is included to keep the search space adaptive by varying the swarm diversity that helps to perform the local search significantly. The selected active genes are fed to the random forest classifier for the classification of PC (high and low-risk). As seen in the experimental investigation, the proposed model achieved an overall classification accuracy of 96.71%, which is better compared to the traditional models like naïve Bayes, support vector machine and neural network.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48767308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent technological advances in embedded systems, together with the increased requirements of the Electric Vehicle (EV) industry, have led to the evolution of design and validation methodologies applied to complex systems, so that products can be designed to meet the requirements defined for their performance, safety, and reliability. This paper presents a design and validation methodology based on a hardware-in-the-loop (HIL) approach, comprising a software platform represented by Matlab/Simulink and a real-time STM32 microcontroller used as the hardware platform. The objective of this work is to evaluate and validate an Energy Management System (EMS) based on a Fuzzy Logic Controller (FLC), developed in C code and embedded on an STM32 microcontroller. The developed EMS controls, in real time, the energy flow in a hybrid energy storage system (HESS) designed in an active topology and composed of a Li-ion battery and supercapacitors (SC). The proposed HESS model was organized using the Energetic Macroscopic Representation (EMR) and built on the Matlab/Simulink software platform. The developed algorithm was evaluated and validated by comparing the HIL and simulation results under the New European Driving Cycle (NEDC).
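A minimal Sugeno-style sketch of how a fuzzy power split between battery and supercapacitor could look; the membership functions, rule base, and variable names below are illustrative assumptions and are not taken from the paper's FLC (which was written in C for the STM32).

```python
# Rule-based fuzzy split: what fraction of the demanded power the supercapacitor supplies.
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_power_split(p_demand_kw, soc_sc):
    """Return the fraction of demanded power routed to the supercapacitor."""
    # Fuzzify the inputs (power demand in kW, supercapacitor state of charge in [0, 1]).
    demand_low, demand_high = tri(p_demand_kw, -1, 0, 20), tri(p_demand_kw, 10, 40, 60)
    sc_low, sc_high = tri(soc_sc, -0.1, 0.2, 0.6), tri(soc_sc, 0.4, 0.8, 1.1)
    # Rules: a high transient demand with a charged SC is served mostly by the SC;
    # otherwise the battery covers the load (singleton consequents, weighted average).
    rules = [
        (min(demand_high, sc_high), 0.8),
        (min(demand_high, sc_low), 0.2),
        (min(demand_low, sc_high), 0.1),
        (min(demand_low, sc_low), 0.0),
    ]
    w = sum(r for r, _ in rules)
    return sum(r * out for r, out in rules) / w if w else 0.0

p_sc_share = fuzzy_power_split(p_demand_kw=35.0, soc_sc=0.9)   # -> 0.8 for this input
```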
{"title":"Fuzzy controller hardware implementation for an EV's HESS energy management","authors":"J. Hatim, Askour Rachid, Bououlid Idrissi Badr","doi":"10.32985/ijeces.14.3.9","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.9","url":null,"abstract":"The recent technological advances related to embedded systems, and the increased requirements of the Electric Vehicle (EV) industry, lead to the evolution of design and validation methodologies applied to complex systems, in order to design a product that respects the requirements defined according to its performance, safety, and reliability. This research paper presents a design and validation methodology, based on a hardware-in-the-loop (HIL) approach, including a software platform represented by Matlab/ Simulink and a real-time STM32 microcontroller used as a hardware platform. The objective of this work is to evaluate and validate an Energy Management System (EMS) based on Fuzzy Logic Controller (FLC), developed in C code and embedded on an STM32 microcontroller. The developed EMS is designed to control, in real-time, the energy flow in a hybrid energy storage system (HESS), designed in an active topology, made of a Li-ion battery and Super-Capacitors (SC). The proposed HESS model was organized using the Energetic Macroscopic Representation (EMR) and constructed on Matlab/Simulink software platform. The evaluation and validation of the developed algorithm were performed by comparing the HIL and simulation results under the New European Driving Cycle (NEDC).","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41346967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software Defined Networking (SDN) introduced network management flexibility that eludes traditional network architectures. Nevertheless, the pervasive demand for various cloud computing services with different levels of Quality of Service requirements has made network service provisioning challenging. One of these challenges is path selection (PS) for routing heterogeneous traffic with end-to-end quality-of-service support specific to each traffic class. The challenge has drawn the research community's attention to the extent that many Path Selection Algorithms (PSAs) have been proposed; however, gaps still exist that call for further study. This paper reviews the existing PSAs and the Baseline Shortest Path Algorithms (BSPAs) upon which many relevant PSAs are built, to help identify these gaps. The paper categorizes the PSAs into four groups based on their path selection criteria: (1) PSAs that use static or dynamic link quality to guide the path selection decision (PSD), (2) PSAs that consider the criticality of a switch in terms of update operations, FlowTable limitations, or port capacity to guide the PSD, (3) PSAs that consider flow variabilities to guide the PSD, and (4) PSAs that use machine learning optimization in their PSD. We then review and compare the techniques in each category against the identified SDN PSA design objectives, solution approaches, BSPAs, and validation approaches. Finally, the paper recommends directions for further research.
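As a concrete baseline for the BSPA notion used in the review, the sketch below runs Dijkstra over a toy topology whose edge weights stand for a link-quality cost (e.g. delay or the inverse of available bandwidth); the topology and costs are made up for illustration.

```python
# Baseline shortest-path selection over link-quality weighted edges.
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]}; returns (total_cost, path)."""
    pq = [(0.0, src, [src])]
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Toy topology: the two-hop route s1 -> s2 -> s4 is cheaper than the direct s1 -> s4 link.
topo = {"s1": [("s2", 1.0), ("s4", 5.0)], "s2": [("s4", 1.0)], "s4": []}
print(dijkstra(topo, "s1", "s4"))   # (2.0, ['s1', 's2', 's4'])
```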
{"title":"Review of Path Selection Algorithms with Link Quality and Critical Switch Aware for Heterogeneous Traffic in SDN","authors":"Muhammad Nura Yusuf, Kamalrulnizam bin Abu Bakar, Babangida Isyaku, Ajibade Lukuman Saheed","doi":"10.32985/ijeces.14.3.12","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.12","url":null,"abstract":"Software Defined Networking (SDN) introduced network management flexibility that eludes traditional network architecture. Nevertheless, the pervasive demand for various cloud computing services with different levels of Quality of Service requirements in our contemporary world made network service provisioning challenging. One of these challenges is path selection (PS) for routing heterogeneous traffic with end-to-end quality of service support specific to each traffic class. The challenge had gotten the research community's attention to the extent that many PSAs were proposed. However, a gap still exists that calls for further study. This paper reviews the existing PSA and the Baseline Shortest Path Algorithms (BSPA) upon which many relevant PSA(s) are built to help identify these gaps. The paper categorizes the PSAs into four, based on their path selection criteria, (1) PSAs that use static or dynamic link quality to guide PSD, (2) PSAs that consider the criticality of switch in terms of an update operation, FlowTable limitation or port capacity to guide PSD, (3) PSAs that consider flow variabilities to guide PSD and (4) The PSAs that use ML optimization in their PSD. We then reviewed and compared the techniques' design in each category against the identified SDN PSA design objectives, solution approach, BSPA, and validation approaches. Finally, the paper recommends directions for further research.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42536328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As with the recognition of captions, graphics, or overlapped text that typically appears horizontally, multi-oriented text recognition in video frames is challenging owing to its strong contrast with the background. Multi-oriented text normally denotes scene text, which makes recognition even more demanding owing to the challenging characteristics of scene text. Hence, conventional text detection approaches might not give good results for multi-oriented scene text detection. Text detection from natural images has long been challenging, and significant progress has been made recently on this task. However, most previous research does not work well on blurred, low-resolution, and small-sized images; hence, there is a research gap in that area. Scene-based text detection is a key area due to its diverse applications. One primary reason for the failure of earlier methods is that they could not generate precise alignments between feature areas and targets for such images. This research focuses on scene-based text detection with the aid of a YOLO-based object detector and a CNN-based classification approach. The experiments were conducted in MATLAB 2019a, and the networks used were ResNet50, InceptionResNetV2, and DenseNet201. The proposed methodology achieved a maximum accuracy of 91% for Hybrid ResNet-YOLO, 81.2% for Hybrid InceptionResNetV2-YOLO, and 83.1% for Hybrid DenseNet201-YOLO, and was verified by comparison with existing works: ResNet50 at 76.9%, ResNet-101 at 79.5%, and ResNet-152 at 82%.
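A hypothetical two-stage pipeline in the spirit of the abstract, sketched with a recent PyTorch/torchvision install: a detector (assumed to be provided, e.g. a YOLO model) proposes text boxes and each crop is classified by a pretrained ResNet-50. The paper's experiments were run in MATLAB with hybrid models, so this is only an analogy, not a reproduction.

```python
# Detect-then-classify sketch: crops from a detector are scored by a pretrained CNN.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
classifier = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def classify_text_regions(image_path, detect_boxes):
    """detect_boxes: callable returning [(x1, y1, x2, y2), ...] from a text detector."""
    img = Image.open(image_path).convert("RGB")
    results = []
    for box in detect_boxes(img):
        crop = preprocess(img.crop(box)).unsqueeze(0)
        with torch.no_grad():
            logits = classifier(crop)
        results.append((box, int(logits.argmax(dim=1))))   # predicted class per region
    return results
```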
{"title":"Scene Based Text Recognition From Natural Images and Classification Based on Hybrid CNN Models with Performance Evaluation","authors":"Sunil Kumar Dasari, S. Mehta","doi":"10.32985/ijeces.14.3.7","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.7","url":null,"abstract":"Similar to the recognition of captions, pictures, or overlapped text that typically appears horizontally, multi-oriented text recognition in video frames is challenging since it has high contrast related to its background. Multi-oriented form of text normally denotes scene text which makes text recognition further stimulating and remarkable owing to the disparaging features of scene text. Hence, predictable text detection approaches might not give virtuous outcomes for multi-oriented scene text detection. Text detection from any such natural image has been challenging since earlier times, and significant enhancement has been made recently to execute this task. While coming to blurred, low-resolution, and small-sized images, most of the previous research conducted doesn’t work well; hence, there is a research gap in that area. Scene-based text detection is a key area due to its adverse applications. One such primary reason for the failure of earlier methods is that the existing methods could not generate precise alignments across feature areas and targets for those images. This research focuses on scene-based text detection with the aid of YOLO based object detector and a CNN-based classification approach. The experiments were conducted in MATLAB 2019A, and the packages used were RESNET50, INCEPTIONRESNETV2, and DENSENET201. The efficiency of the proposed methodology - Hybrid resnet -YOLO procured maximum accuracy of 91%, Hybrid inceptionresnetv2 -YOLO of 81.2%, and Hybrid densenet201 -YOLO of 83.1% and was verified by comparing it with the existing research works Resnet50 of 76.9%, ResNet-101 of 79.5%, and ResNet-152 of 82%.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44152967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory corruption errors are among the critical security attack vectors against a wide range of software. To address this problem, modern compilers provide multiple features to fortify software against such errors. However, applying compiler-based memory defenses is problematic for the legacy systems often encountered in industry or military environments, because source code is unavailable. In this study, we propose memory diversification techniques tailored for legacy binaries to which state-of-the-art compiler-based solutions cannot be applied. The basic idea of our approach is to automatically patch the machine code instructions of each legacy system differently (e.g., a drone or vehicle firmware) without altering any semantic behavior of the software logic. As a result, attackers must create a specific attack payload for each target by analyzing the particular firmware, significantly increasing exploit development time and cost. Our approach is evaluated by applying it to the stack and heap of multiple binaries, including the PX4 drone firmware and other Linux utilities.
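A toy illustration of what stack-layout diversification on a legacy x86-64 binary could look like, assuming the function prologue reserves its frame with "sub rsp, imm8" (encoded 48 83 EC xx) and that locals are frame-pointer relative. This is a deliberate oversimplification of the idea, not the authors' tool: a real rewriter must verify addressing modes or fix up rsp-relative offsets so semantics are preserved. File names are hypothetical.

```python
# Grow each "sub rsp, imm8" stack reservation by a random 16-byte-aligned pad.
import random
import re

def diversify_stack(blob: bytes, max_pad_slots: int = 4) -> bytes:
    out = bytearray(blob)
    for m in re.finditer(rb"\x48\x83\xec(.)", blob, re.DOTALL):
        imm = m.group(1)[0]
        pad = 16 * random.randint(0, max_pad_slots)
        if imm + pad <= 0x7F:                 # keep the immediate a valid positive imm8
            out[m.start() + 3] = imm + pad    # patch only the immediate byte
    return bytes(out)

with open("firmware.bin", "rb") as f:         # hypothetical legacy binary
    patched = diversify_stack(f.read())
with open("firmware_diversified.bin", "wb") as f:
    f.write(patched)
```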
{"title":"Effective Memory Diversification in Legacy Systems","authors":"Heesun Yun, Daehee Jang","doi":"10.32985/ijeces.14.3.10","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.10","url":null,"abstract":"Memory corruption error is one of the critical security attack vectors against a wide range of software. Addressing this problem, modern compilers provide multiple features to fortify the software against such errors. However, applying compiler-based memory defense is problematic in legacy systems we often encounter in industry or military environments because source codes are unavailable. In this study, we propose memory diversification techniques tailored for legacy binaries to which we cannot apply state-of- the-art compiler-based solutions. The basic idea of our approach is to automatically patch the machine code instructions of each legacy system differently (e.g., a drone, or a vehicle firmware) without altering any semantic behavior of the software logic. As a result of our system, attackers must create a specific attack payload for each target by analyzing the particular firmware, thus significantly increasing exploit development time and cost. Our approach is evaluated by applying it to a stack and heap of multiple binaries, including PX4 drone firmware and other Linux utilities.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44293680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fluid accumulation below the retinal surface is the root cause of Central Serous Retinopathy (CSR), often referred to as Central Serous Chorioretinopathy (CSC). The retina is made up of delicate tissues that absorb light and enable the brain to recognize images. This important organ is vulnerable to damage, which can result in vision loss or blindness for the affected person. With early diagnosis, complete visual loss may be prevented and, in some circumstances, vision may return to normal. Therefore, timely and precise CSR detection prevents serious damage to the macula and serves as a foundation for the detection of other retinal disorders. Although CSR has been detected using Blue Wave Fundus Autofluorescence (BWFA) images, developing an accurate and efficient computational system remains difficult. This paper focuses on the use of trained Convolutional Neural Networks (CNNs) to implement a framework for accurate and automatic CSR recognition from BWFA images. Transfer learning has been used in conjunction with a pre-trained network architecture (VGG19) for classification, and statistical parameter evaluation has been used to investigate the effectiveness of the deep CNN. For VGG19, the evaluation revealed a classification accuracy of 97.30%, a precision of 99.56%, an F1 score of 97.25%, and a recall of 95.04% on a BWFA image dataset collected from a local eye hospital in Cochin, Kerala, India. Identification of CSR from BWFA images has not been reported before. This paper illustrates how the proposed framework might be applied in clinical settings to assist physicians and clinicians in the identification of retinal diseases.
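A hedged sketch of VGG19 transfer learning for a binary CSR-vs-normal classifier, shown in PyTorch with a recent torchvision; the paper does not state its framework, and the frozen-backbone/replaced-head recipe and data loader are assumptions for illustration only.

```python
# VGG19 transfer learning: freeze the convolutional backbone, retrain a 2-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
for p in model.features.parameters():        # freeze the ImageNet-pretrained backbone
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)     # replace the 1000-class head: CSR vs. normal

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

def train_epoch(loader):
    """loader yields (batch of 224x224 RGB tensors, labels) from the BWFA dataset."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```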
{"title":"Detection of CSR from Blue Wave Fundus Autofluorescence Images using Deep Neural Network Based on Transfer Learning","authors":"Bino Nelson, Haris Pandiyapallil Abdul Khadir, Sheeba Odattil","doi":"10.32985/ijeces.14.3.5","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.5","url":null,"abstract":"Fluid clot below the retinal surface is the root cause of Central Serous Retinopathy (CSR), often referred to as Central Serous Chorioretinopathy (CSC). Delicate tissues that absorb sunlight and enable the brain to recognize images make up the retina. This important organ is vulnerable to damage, which could result in blindness and vision loss for the affected person. Therefore, complete visual loss may be reversed and, in some circumstances, may return to normal with early diagnosis discovery. Therefore, timely and precise CSR detection prevents serious damage to the macula and serves as a foundation for the detection of other retinal disorders. Although CSR has been detected using Blue Wave Fundus Autofluorescence (BWFA) images, developing an accurate and efficient computational system is still difficult. This paper focuses on the use of trained Convolutional Neural Networks (CNN) to implement a framework for accurate and automatic CSR recognition from BWFA images. Transfer Learning has been used in conjunction with pre-trained network architectures (VGG19) for classification. Statistical parameter evaluation has been used to investigate the effectiveness of DCNN. For VGG19, the statistic parameters evaluation revealed a classification accuracy of 97.30%, a precision of 99.56%, an F1 score of 97.25%, and a recall of 95.04% when using a BWFA image dataset collected from a local eye hospital in Cochin, Kerala, India. Identification of CSR from BWFA images is not done before. This paper illustrates how the proposed framework might be applied in clinical situations to assist physicians and clinicians in the identification of retinal diseases.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43379942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comprehensive assessment of the molecular characteristics of breast cancer from gene expression patterns can aid in the early identification and treatment of tumor patients. The enormous scale of gene expression data obtained through microarray sequencing increases the difficulty of training a classifier because of the large number of features. Selecting pivotal gene features can reduce the high dimensionality and the classifier complexity while improving breast cancer detection accuracy. However, traditional filter- and wrapper-based selection methods have scalability and adaptability issues in handling complex gene features. This paper presents a hybrid feature selection method, Mutual Information Maximization-Improved Moth Flame Optimization (MIM-IMFO), for gene selection, along with an advanced Hyper-heuristic Adaptive Universum Support Vector Machine (HH-AUSVM) classification model to improve cancer detection rates. The hybrid gene selection method performs filter-based selection using MIM in the first stage, followed by the wrapper method in the second stage, to obtain the pivotal features and remove the inappropriate ones. The method improves the standard MFO with a hybrid exploration/exploitation phase to achieve a better trade-off between the exploration and exploitation phases. The HH-AUSVM classifier is formulated by integrating the Adaptive Universum learning approach into a hyper-heuristics-based parameter-optimized SVM to tackle the class imbalance problem. Evaluated on breast cancer gene expression datasets from the Mendeley Data Repository, the proposed MIM-IMFO gene selection-based HH-AUSVM classification approach provided better breast cancer detection, with high accuracies of 95.67%, 96.52%, 97.97%, and 95.5% and lower processing times of 4.28, 3.17, 9.45, and 6.31 seconds, respectively.
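The two-stage filter-then-wrapper idea can be sketched as below with scikit-learn: a mutual-information filter keeps the top-k genes, and a plain greedy wrapper (used here as a stand-in for the improved moth-flame optimizer) refines the subset against a standard SVM (a stand-in for HH-AUSVM). Names and parameters are illustrative assumptions.

```python
# MIM-style filter followed by a greedy wrapper scored with SVM cross-validation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mim_filter(X, y, k=200):
    """Indices of the k genes with the highest mutual information with the label."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1][:k]

def greedy_wrapper(X, y, candidates, budget=20):
    """Add one gene at a time while 5-fold SVM accuracy keeps improving."""
    chosen, best = [], 0.0
    for _ in range(budget):
        scores = {g: cross_val_score(SVC(kernel="rbf"),
                                     X[:, chosen + [g]], y, cv=5).mean()
                  for g in candidates if g not in chosen}
        if not scores:
            break
        g, s = max(scores.items(), key=lambda kv: kv[1])
        if s <= best:
            break                                   # no remaining gene improves accuracy
        chosen.append(g)
        best = s
    return chosen, best

# Usage: cands = mim_filter(X, y); genes, acc = greedy_wrapper(X, y, list(cands))
```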
{"title":"Breast Cancer Classification by Gene Expression Analysis using Hybrid Feature Selection and Hyper-heuristic Adaptive Universum Support Vector Machine","authors":"V. Murugesan, P. Balamurugan","doi":"10.32985/ijeces.14.3.1","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.1","url":null,"abstract":"Comprehensive assessments of the molecular characteristics of breast cancer from gene expression patterns can aid in the early identification and treatment of tumor patients. The enormous scale of gene expression data obtained through microarray sequencing increases the difficulty of training the classifier due to large-scale features. Selecting pivotal gene features can minimize high dimensionality and the classifier complexity with improved breast cancer detection accuracy. However, traditional filter and wrapper-based selection methods have scalability and adaptability issues in handling complex gene features. This paper presents a hybrid feature selection method of Mutual Information Maximization - Improved Moth Flame Optimization (MIM-IMFO) for gene selection along with an advanced Hyper-heuristic Adaptive Universum Support classification model Vector Machine (HH-AUSVM) to improve cancer detection rates. The hybrid gene selection method is developed by performing filter-based selection using MIM in the first stage followed by the wrapper method in the second stage, to obtain the pivotal features and remove the inappropriate ones. This method improves standard MFO by a hybrid exploration/exploitation phase to accomplish a better trade-off between exploration and exploitation phases. The classifier HH-AUSVM is formulated by integrating the Adaptive Universum learning approach to the hyper- heuristics-based parameter optimized SVM to tackle the class samples imbalance problem. Evaluated on breast cancer gene expression datasets from Mendeley Data Repository, this proposed MIM-IMFO gene selection-based HH-AUSVM classification approach provided better breast cancer detection with high accuracies of 95.67%, 96.52%, 97.97% and 95.5% and less processing time of 4.28, 3.17, 9.45 and 6.31 seconds, respectively.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48985829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A group of small sensors can participate in a wireless network infrastructure and carry out the appropriate transmission and communication, forming a wireless sensor network (WSN). Such networks have numerous applications, including military, medical, agricultural, and atmospheric monitoring. The power sources available to nodes in WSNs are limited. Because of this, a different approach to energy availability is required, primarily for communication over large distances, for which Multi-Hop (MH) systems are used. Obtaining the optimal routing path between nodes is still a significant problem, even when multi-hop systems reduce the energy cost incurred by every node along the way. As a result, the number of transmissions must be kept to a minimum to provide effective routing and extend the system's lifetime. To solve the energy problem in WSNs, a Taylor-based Gravitational Search Algorithm (TBGSA) is proposed, which combines the Taylor series with a gravitational search algorithm to discover the best hops for multi-hop routing. First, the sensor nodes are grouped into clusters and the most capable node can act as the cluster head; data is then forwarded between multiple nodes in a multi-hop manner. The best Cluster Head (CH) is chosen using the Artificial Bee Colony (ABC) algorithm, and the data is then transmitted using multi-hop routing. The comparison results show the extended network longevity of the proposed method over the existing EBMRS, MOGA, and DMEERP methods: the network lifetime of the proposed method increased by 13.2%, 21.9%, and 29.2% compared with DMEERP, MOGA, and EBMRS, respectively.
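The sketch below illustrates the two steps with simple stand-ins: the cluster head is the highest-energy node in each cluster (in place of the ABC search), and the route to the sink greedily hops to the in-range neighbour that minimises first-order radio transmission energy (in place of the Taylor-based GSA). The radio constants and node representation are illustrative assumptions.

```python
# Cluster-head pick plus greedy energy-aware multi-hop forwarding toward the sink.
import math

E_ELEC, EPS_AMP, BITS = 50e-9, 100e-12, 4000   # illustrative first-order radio-model constants

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tx_energy(d, k=BITS):
    """Energy to transmit k bits over distance d (free-space model)."""
    return E_ELEC * k + EPS_AMP * k * d ** 2

def pick_cluster_head(cluster):
    """Stand-in for the ABC search: the node with the most residual energy leads the cluster."""
    return max(cluster, key=lambda n: n["energy"])

def greedy_multihop(src, sink_pos, nodes, radio_range=30.0):
    """Stand-in for TBGSA: hop toward the sink via the cheapest in-range neighbour.
    Each node is a dict {'id': ..., 'pos': (x, y), 'energy': joules}."""
    path, current = [src], src
    while dist(current["pos"], sink_pos) > radio_range:
        candidates = [n for n in nodes
                      if n is not current
                      and dist(current["pos"], n["pos"]) <= radio_range
                      and dist(n["pos"], sink_pos) < dist(current["pos"], sink_pos)]
        if not candidates:
            break                                  # no neighbour makes progress toward the sink
        current = min(candidates,
                      key=lambda n: tx_energy(dist(current["pos"], n["pos"])))
        path.append(current)
    return path
```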
{"title":"Energy Efficient Multi-hop routing scheme using Taylor based Gravitational Search Algorithm in Wireless Sensor Networks","authors":"S. B, Dharavath Champla, P. M, A. A","doi":"10.32985/ijeces.14.3.11","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.11","url":null,"abstract":"A group of small sensors can participate in the wireless network infrastructure and make appropriate transmission and communication sensor networks. There are numerous uses for drones, including military, medical, agricultural, and atmospheric monitoring. The power sources available to nodes in WSNs are restricted. Furthermore, because of this, a diverse method of energy availability is required, primarily for communication over a vast distance, for which Multi-Hop (MH) systems are used. Obtaining the optimum routing path between nodes is still a significant problem, even when multi-hop systems reduce the cost of energy needed by every node along the way. As a result, the number of transmissions must be kept to a minimum to provide effective routing and extend the system's lifetime. To solve the energy problem in WSN, Taylor based Gravitational Search Algorithm (TBGSA) is proposed, which combines the Taylor series with a Gravitational search algorithm to discover the best hops for multi-hop routing. Initially, the sensor nodes are categorised as groups or clusters and the maximum capable node can access the cluster head the next action is switching between multiple nodes via a multi-hop manner. Initially, the best (CH) Cluster Head is chosen using the Artificial Bee Colony (ABC) algorithm, and then the data is transmitted utilizing multi-hop routing. The comparison result shows out the extension of networks longevity of the proposed method with the existing EBMRS, MOGA, and DMEERP methods. The network lifetime of the proposed method increased by 13.2%, 21.9% and 29.2% better than DMEERP, MOGA, and EBMRS respectively.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43972198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Even with the most advanced technology, intruders may gain access to a database, and various techniques have therefore been developed for information security. Watermarks have been linked to many biometric modalities, such as fingerprints, palm prints, gait, iris, and speech. Digital watermarking is among the most successful of the available approaches. In this paper, multiband wavelet transforms and singular value decomposition are used to establish a watermarking strategy based on biometric information rather than a conventional watermark. The use of biometrics instead of conventional watermarks can enhance information protection. The biometric modality used is the iris: the iris template can be viewed as a watermark, while the iris image itself may serve as a communication medium whose security is strengthened by embedding a watermark into it. The research verifies authentication against different attacks, including no attack, JPEG compression, Gaussian noise, median filtering, and blurring. The algorithm increases durability and resilience when exposed to geometric and frequency attacks. Finally, the proposed framework can be applied not only to iris biometric authentication but also to other areas where privacy is critical.
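A minimal sketch of DWT-SVD watermark embedding, assuming PyWavelets and NumPy and using a single-level Haar DWT rather than the paper's multiband transform: the iris-template watermark perturbs the singular values of the LL sub-band of the host image. The embedding strength and the non-blind extraction are illustrative choices, not the authors' exact scheme.

```python
# Embed a watermark into the singular values of the host image's LL sub-band.
import numpy as np
import pywt

def embed_watermark(host, watermark, alpha=0.05):
    """host, watermark: 2-D float arrays; watermark resized to the LL band shape."""
    cA, (cH, cV, cD) = pywt.dwt2(host, "haar")
    U, S, Vt = np.linalg.svd(cA, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    S_marked = S + alpha * Sw                      # hide the iris data in the singular values
    cA_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

def extract_singular_values(marked, original_S, alpha=0.05):
    """Recover the watermark's singular values (non-blind: needs the host's S)."""
    cA, _ = pywt.dwt2(marked, "haar")
    S_marked = np.linalg.svd(cA, compute_uv=False)
    return (S_marked - original_S) / alpha
```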
{"title":"Iris Biometric Watermarking for Authentication Using Multiband Discrete Wavelet Transform and Singular-Value Decomposition","authors":"S. Joyce, S. Veni","doi":"10.32985/ijeces.14.3.3","DOIUrl":"https://doi.org/10.32985/ijeces.14.3.3","url":null,"abstract":"The most advanced technology, watermarking enables intruders to access the database. Various techniques have been developed for information security. Watermarks and histories are linked to many biometric techniques such as fingerprints, palm positions, gait, iris and speech are recommended. Digital watermarking is the utmost successful approaches among the methods available. In this paper the multiband wavelet transforms and singular value decomposition are discussed to establish a watermarking strategy rather than biometric information. The use of biometrics instead of conservative watermarks can enhance information protection. The biometric technology being used is iris. The iris template can be viewed as a watermark, while an iris mode of communication may be used to help information security with the addition of a watermark to the image of the iris. The research involves verifying authentication against different attacks such as no attacks, Jpeg Compression, Gaussian, Median Filtering and Blurring. The Algorithm increases durability and resilience when exposed to geometric and frequency attacks. Finally, the proposed framework can be applied not only to the assessment of iris biometrics, but also to other areas where privacy is critical.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"14 3","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41306766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}