
International Journal of Electrical and Computer Engineering Systems: Latest Publications

A combined method based on CNN architecture for variation-resistant facial recognition
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-11-14 DOI: 10.32985/ijeces.14.9.4
Hicham Benradi, Ahmed Chater, Abdelali Lasfar
Identifying individuals from a facial image is a computer vision technique used in various fields such as security, digital biometrics, smartphones, and banking. However, it can prove difficult due to the complexity of facial structure and the presence of variations that can affect the results. To overcome this difficulty, in this paper, we propose a combined approach that aims to improve the accuracy and robustness of facial recognition in the presence of variations. To this end, two datasets (ORL and UMIST) are used to train our model. We begin with an image pre-processing phase, which consists of applying a histogram equalization operation to adjust the gray levels over the entire image surface, improving quality and enhancing the detection of features in each image. Next, the least important features are eliminated from the images using the Principal Component Analysis (PCA) method. Finally, the pre-processed images are fed to a convolutional neural network (CNN) architecture consisting of multiple convolution layers and fully connected layers. Our simulation results show the high performance of our approach, with accuracy rates of up to 99.50% on the ORL dataset and 100% on the UMIST dataset.
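The histogram equalization step used in the pre-processing phase can be sketched in pure Python; this is a minimal illustration of the standard CDF-remapping formula, not the authors' exact pipeline:

```python
def equalize_histogram(image, levels=256):
    """Remap gray levels so the cumulative distribution is roughly uniform.

    `image` is a list of rows of integer gray levels in [0, levels - 1].
    """
    flat = [p for row in image for p in row]
    n = len(flat)
    # Histogram of gray levels.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)

    # Standard equalization mapping: scale the CDF to the full gray range.
    def remap(p):
        return round((cdf[p] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))

    return [[remap(p) for p in row] for row in image]

# A dark 2x4 "image" concentrated in low gray levels spreads out to the
# full [0, 255] range after equalization.
dark = [[10, 10, 12, 14], [12, 14, 200, 10]]
flat_eq = [p for row in equalize_histogram(dark) for p in row]
```

The remapping stretches crowded gray levels apart, which is why the step enhances feature detection in low-contrast face images.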
Citations: 0
A Novel Nodesets-Based Frequent Itemset Mining Algorithm for Big Data using MapReduce
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-11-14 DOI: 10.32985/ijeces.14.9.9
Borra Sivaiah, Ramisetty Rajeswara Rao
Due to the rapid growth of data from different sources in organizations, datasets too large for traditional tools and techniques to handle in a scalable fashion are known as big data. Similarly, many existing frequent itemset mining algorithms perform well but have scalability problems, as they cannot exploit the parallel processing power available locally or in cloud infrastructure. Since the big data and cloud ecosystem overcomes these barriers or limitations in computing resources, it is a natural choice to use distributed programming paradigms such as MapReduce. In this paper, we propose a novel algorithm, Nodesets-based Fast and Scalable Frequent Itemset Mining (FSFIM), to extract frequent itemsets from big data. Here, a Pre-Order Coding (POC) tree is used to represent data and improve processing speed. The Nodeset is the underlying data structure, which is efficient in discovering frequent itemsets. FSFIM is found to be faster and more scalable in mining frequent itemsets. Compared with its predecessors such as Node-lists and N-lists, Nodesets save half the memory, as they need only either pre-order or post-order coding. Cloudera's Distribution of Hadoop (CDH), a MapReduce framework, is used for the empirical study. A prototype application is built to evaluate the performance of FSFIM. Experimental results reveal that FSFIM outperforms existing algorithms such as Mahout PFP, Mlib PFP, and BigFIM. FSFIM is more scalable and is found to be an ideal candidate for real-time applications that mine frequent itemsets from big data.
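The frequent itemset mining task itself can be illustrated with a tiny map/reduce-style count in pure Python. This is a deliberately simplified sketch of the problem FSFIM solves (support counting with a minimum-support prune), not the Nodeset or POC-tree algorithm:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count candidate itemsets and prune those below min_support.

    Map step: each transaction emits its candidate 1- and 2-itemsets.
    Reduce step: a Counter sums the emissions across transactions.
    """
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in (1, 2):  # 1- and 2-itemsets only, for brevity
            for combo in combinations(items, k):
                counts[combo] += 1
    return {itemset: c for itemset, c in counts.items() if c >= min_support}

# Four market baskets; with min_support=2, rare items and pairs are pruned.
baskets = [["a", "b", "c"], ["a", "c"], ["a", "d"], ["b", "c"]]
freq = frequent_itemsets(baskets, min_support=2)
```

In a real MapReduce job the map and reduce steps would run on distributed workers; the single-process Counter above stands in for the reducer.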
Citations: 0
Experimental Procedure for Determining the Remanent Magnetic Flux Value Using the Nominal AC Energization
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-11-14 DOI: 10.32985/ijeces.14.9.12
Dragan Vulin, Denis Pelin, Mario Franjković
The laboratory setup and corresponding experimental procedure for determining the remanent magnetic flux in the magnetic core of a single-phase transformer are presented in this paper. Using the proposed method, the remanent flux can be determined without prior knowledge of any parameter or past state of the transformer, which is a significant advantage over previously known methods. Furthermore, reliable information about the remanent flux can be obtained using less equipment than other methods require. Only electrical measurements are needed, without any physical intervention in the core or other parts of the transformer. However, the major drawback is that a new, unknown value of the remanent flux is set after the measuring procedure. Various initial conditions of the remanent flux and the closing voltage angle are set before each energization of the transformer to prove the validity of the proposed method, which can be used to obtain characteristics of the remanent flux such as its stability over time or its dependence on external factors.
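The claim that only electrical measurements are needed rests on the textbook relation between winding voltage and core flux, phi(t) = phi(0) + (1/N) * integral of v dt. The sketch below numerically integrates a sampled voltage to track flux; it illustrates that relation only and is not the paper's full measurement procedure:

```python
import math

def flux_from_voltage(voltage_samples, dt, turns, flux0=0.0):
    """Track core flux by trapezoidal integration of the winding voltage.

    phi(t) = phi0 + (1/N) * integral(v dt), with uniformly sampled v.
    """
    phi = [flux0]
    for v_prev, v_next in zip(voltage_samples, voltage_samples[1:]):
        phi.append(phi[-1] + 0.5 * (v_prev + v_next) * dt / turns)
    return phi

# One full cycle of a 50 Hz sine voltage on a hypothetical 100-turn winding:
# the flux swings positive and returns to its starting (remanent) value.
f, n_samples = 50.0, 1000
dt = 1.0 / (f * n_samples)
v = [math.sin(2 * math.pi * f * k * dt) for k in range(n_samples + 1)]
phi = flux_from_voltage(v, dt, turns=100)
```

Because a symmetric AC cycle integrates to zero, any persistent offset in the integrated flux trace reflects the initial (remanent) flux.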
Citations: 0
Trust And Energy-Aware Routing Protocol for Wireless Sensor Networks Based on Secure Routing
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-11-14 DOI: 10.32985/ijeces.14.9.6
Muneeswari G., Ahilan A., Rajeshwari R, Kannan K., John Clement Singh C.
A Wireless Sensor Network (WSN) is a network comprising a large number of nodes with wireless transmission capability. WSNs are frequently employed for vital applications in which security and dependability are of utmost concern. The main objective of the proposed method is to design a WSN that maximizes network longevity while minimizing power usage. In a WSN, trust management is employed to encourage node collaboration, which is crucial for achieving dependable transmission. In this research, a novel Trust and Energy Aware Routing Protocol (TEARP) for wireless sensor networks is proposed, which uses blockchain technology to maintain the identities of the Sensor Nodes (SNs) and Aggregator Nodes (ANs). The proposed TEARP technique provides a thorough trust value for nodes based on their direct trust values, while filtering mechanisms generate the indirect trust values. Further, an enhanced threshold technique is employed to identify the most appropriate cluster heads based on dynamic changes in the extensive trust values and residual energy of the network. Lastly, cluster heads are routed in a secure manner using a Sand Cat Swarm Optimization Algorithm (SCSOA). The proposed method has been evaluated using specific parameters such as Network Lifetime, Residual Energy, Throughput, Packet Delivery Ratio, and Detection Accuracy. The proposed TEARP method improves the network lifetime by 39.64%, 33.05%, and 27.16% compared with Energy-efficient and Secure Routing (ESR), a Multi-Objective nature-inspired algorithm based on the Shuffled Frog-Leaping Algorithm and Firefly Algorithm (MOSFA), and an Optimal Support Vector Machine (OSVM), respectively.
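The idea of blending a direct trust value with filtered indirect reports can be sketched as follows. The median-based filter and the 60/40 weighting are illustrative assumptions, not the TEARP formula:

```python
def overall_trust(direct, indirect_reports, w_direct=0.6):
    """Blend a node's direct trust with filtered indirect trust.

    Reports far from the median are discarded (a simple filtering
    mechanism against dishonest recommenders); the survivors are
    averaged and blended with the direct value.
    """
    reports = sorted(indirect_reports)
    median = reports[len(reports) // 2]
    # Keep only reports within 0.2 of the median (hypothetical tolerance).
    kept = [r for r in reports if abs(r - median) <= 0.2] or [median]
    indirect = sum(kept) / len(kept)
    return w_direct * direct + (1 - w_direct) * indirect

# The outlier report 0.1 (a possible bad-mouthing attack) is filtered out.
trust = overall_trust(direct=0.9, indirect_reports=[0.8, 0.85, 0.1, 0.9])
```

Filtering before aggregation is what keeps a few malicious recommenders from dragging an honest node's trust score down.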
Citations: 0
Correlation Coefficients and Adaptive Threshold-Based Dissolve Detection in High-Quality Videos
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-10-24 DOI: 10.32985/ijeces.14.8.8
Kamal S. Chandwani, Varsha Namdeo, Poonam T. Agarkar, Sanjay M. Malode, Prashant R. Patil, Narendra P. Giradkar, Pratik R. Hajare
Rapid day-by-day enhancements in multimedia tools and features have transformed entertainment, and high-quality visual effects draw individuals to today's videos. Fast-changing scenes, light effects, and the indistinguishable blending of diverse frames have created challenges for researchers in detecting gradual transitions. The proposed work concentrates on detecting gradual transitions in videos using correlation coefficients obtained from color histograms and an adaptive thresholding mechanism. Other transitions, including fade-outs, fade-ins, and cuts, are first eliminated successfully, and dissolves are then detected from the acquired video frames. The characteristics of the normalized correlation coefficient are studied carefully, and dissolves are extracted simply, with low computational and time complexity. Confusion between fades in/out and dissolves is resolved using the adaptive threshold together with the presence or absence of spikes. The experiments, conducted over 14 videos involving lightning effects and rapid object motion from Indian film songs, accurately detected 22 out of 25 gradual transitions while falsely detecting one transition. The performance of the proposed scheme over four benchmark videos of the TRECVID 2001 dataset reached 91.6, 94.33, and 92.03 for precision, recall, and F-measure, respectively.
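The core similarity measure, a normalized (Pearson) correlation coefficient between the color histograms of two frames, can be sketched in a few lines. This shows the coefficient only; the dissolve-detection logic around it (the adaptive threshold over a frame sequence) is described in the abstract above:

```python
from math import sqrt

def histogram_correlation(h1, h2):
    """Pearson correlation between two equal-length color histograms.

    Values near 1 indicate visually similar frames; during a dissolve the
    coefficient between consecutive frames dips gradually rather than
    dropping abruptly as it does at a hard cut.
    """
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    var1 = sum((a - m1) ** 2 for a in h1)
    var2 = sum((b - m2) ** 2 for b in h2)
    return cov / sqrt(var1 * var2)

# Identical histograms correlate perfectly; dissimilar ones do not.
same = histogram_correlation([4, 8, 2, 6], [4, 8, 2, 6])
mixed = histogram_correlation([4, 8, 2, 6], [6, 2, 8, 1])
```

A detector would compute this coefficient for every consecutive frame pair and flag sustained gradual dips against the adaptive threshold.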
Citations: 0
Advanced Human Activity Recognition through Data Augmentation and Feature Concatenation of Micro-Doppler Signatures
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-10-24 DOI: 10.32985/ijeces.14.8.7
Djazila Souhila Korti, Zohra Slimane
Developing accurate classification models for radar-based Human Activity Recognition (HAR), capable of solving real-world problems, depends heavily on the amount of available data. In this paper, we propose a simple, effective, and generalizable data augmentation strategy along with preprocessing for micro-Doppler signatures to enhance recognition performance. By leveraging the decomposition properties of the Discrete Wavelet Transform (DWT), new samples are generated with distinct characteristics that do not overlap with those of the original samples. The micro-Doppler signatures are projected onto the DWT space for the decomposition process using the Haar wavelet. The returned decomposition components are used in different configurations to generate new data. Three new samples are obtained from a single spectrogram, which increases the amount of training data without creating duplicates. Next, the augmented samples are processed using the Sobel filter. This step allows each sample to be expanded into three representations, including the gradient in the x-direction (Dx), y-direction (Dy), and both x- and y-directions (Dxy). These representations are used as input for training a three-input convolutional neural network-long short-term memory support vector machine (CNN-LSTM-SVM) model. We have assessed the feasibility of our solution by evaluating it on three datasets containing micro-Doppler signatures of human activities, including Frequency Modulated Continuous Wave (FMCW) 77 GHz, FMCW 24 GHz, and Impulse Radio Ultra-Wide Band (IR-UWB) 10 GHz datasets. Several experiments have been carried out to evaluate the model's performance with the inclusion of additional samples. The model was trained from scratch only on the augmented samples and tested on the original samples. Our augmentation approach has been thoroughly evaluated using various metrics, including accuracy, precision, recall, and F1-score. 
The results demonstrate a substantial improvement in the recognition rate and effectively alleviate the overfitting effect. Accuracies of 96.47%, 94.27%, and 98.18% are obtained for the FMCW 77 GHz, FMCW 24 GHz, and IR-UWB 10 GHz datasets, respectively. The findings of the study demonstrate the utility of the DWT for enriching micro-Doppler training samples to improve HAR performance. Furthermore, the processing step was found to be efficient in enhancing the classification accuracy, achieving 96.78%, 96.32%, and 100% for the FMCW 77 GHz, FMCW 24 GHz, and IR-UWB 10 GHz datasets, respectively.
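The Haar-wavelet decomposition that drives the augmentation can be sketched on a 1-D signal. This is a minimal one-level orthonormal Haar DWT (pairwise scaled sums and differences) with its exact inverse; the paper applies the decomposition to 2-D spectrograms:

```python
from math import sqrt

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT.

    Pairwise averages give the approximation band, pairwise differences
    the detail band, each scaled by 1/sqrt(2) to preserve energy.
    """
    approx = [(a + b) / sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Invert the transform, recovering the original samples exactly."""
    out = []
    for s, d in zip(approx, detail):
        out += [(s + d) / sqrt(2), (s - d) / sqrt(2)]
    return out

x = [4.0, 2.0, 6.0, 8.0]
approx, detail = haar_dwt(x)
recon = haar_idwt(approx, detail)
```

Because the bands carry complementary information, recombining them in different configurations yields new, non-duplicate training samples, which is the augmentation idea described above.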
Citations: 0
A Hierarchical Framework for Video-Based Human Activity Recognition Using Body Part Interactions
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-10-24 DOI: 10.32985/ijeces.14.8.6
Milind Kamble, Rajankumar S. Bichkar
Human Activity Recognition (HAR) is an important field with diverse applications. However, video-based HAR is challenging because of various factors, such as noise, multiple people, and obscured body parts. Moreover, it is difficult to identify similar activities within and across classes. This study presents a novel approach that utilizes body region relationships as features and a two-level hierarchical model for classification to address these challenges. The proposed system uses a Hidden Markov Model (HMM) at the first level to model human activity, and similar activities are then grouped and classified using a Support Vector Machine (SVM) at the second level. The performance of the proposed system was evaluated on four datasets, with superior results observed for the KTH and Basic Kitchen Activity (BKA) datasets. Promising results were obtained for the HMDB-51 and UCF101 datasets. Improvements of 25%, 25%, 4%, 22%, 24%, and 30% in accuracy, recall, specificity, Precision, F1-score, and MCC, respectively, are achieved for the KTH dataset. On the BKA dataset, the second level of the system shows improvements of 8.6%, 8.6%, 0.85%, 8.2%, 8.4%, and 9.5% for the same metrics compared to the first level. These findings demonstrate the potential of the proposed two-level hierarchical system for human activity recognition applications.
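The two-level structure (a coarse model routes each sample to a group-specific fine model) can be sketched as a dispatch pattern. The lambda "models" and the speed/height features below are hypothetical stand-ins for the HMM and SVM stages described in the abstract:

```python
def hierarchical_classify(features, coarse_model, fine_models):
    """Two-stage classification: a first-level model picks an activity
    group, then a group-specific second-level model separates the
    similar activities within that group.
    """
    group = coarse_model(features)
    return fine_models[group](features)

# Hypothetical stand-in models keyed on simple scalar features.
coarse = lambda f: "locomotion" if f["speed"] > 0.5 else "stationary"
fine = {
    "locomotion": lambda f: "running" if f["speed"] > 2.0 else "walking",
    "stationary": lambda f: "sitting" if f["height"] < 1.0 else "standing",
}
label = hierarchical_classify({"speed": 2.5, "height": 1.6}, coarse, fine)
```

Splitting the decision this way lets each second-level model specialize on the fine distinctions (walking vs. running) that a single flat classifier tends to confuse.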
Citations: 0
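The two-level routing described in the abstract above (an HMM assigns an activity group at level one; an SVM then disambiguates similar activities within that group) can be sketched as follows. This is a minimal illustrative sketch: the group names, features, and stub decision rules are stand-ins, not the paper's trained models.

```python
# Sketch of a two-level hierarchical classifier: a coarse level-1 model
# assigns an activity group; a level-2 model disambiguates similar
# activities within that group. Stub scorers stand in for the trained
# HMM (level 1) and SVM (level 2) described in the abstract.

# Illustrative grouping of activities that a coarse model tends to confuse.
SIMILAR_GROUPS = {
    "locomotion": ["walking", "jogging", "running"],
    "arm": ["boxing", "waving", "clapping"],
}

def level1_group(features):
    """Stand-in for the HMM: returns the most likely activity group."""
    # In the real system this would be an argmax over per-group HMM likelihoods.
    return "locomotion" if features["speed"] > 0.5 else "arm"

def level2_activity(group, features):
    """Stand-in for the per-group SVM: picks one activity in the group."""
    candidates = SIMILAR_GROUPS[group]
    # Toy decision rule; a trained SVM would replace this.
    idx = min(int(features["energy"] * len(candidates)), len(candidates) - 1)
    return candidates[idx]

def classify(features):
    group = level1_group(features)
    if len(SIMILAR_GROUPS[group]) == 1:
        return SIMILAR_GROUPS[group][0]   # no ambiguity: level 1 suffices
    return level2_activity(group, features)

print(classify({"speed": 0.9, "energy": 0.95}))  # running
print(classify({"speed": 0.1, "energy": 0.1}))   # boxing
```

The design point is that level 2 only has to separate a few easily-confused classes, which is an easier problem than a flat classification over all activities.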
Software Reliability Prediction using Correlation Constrained Multi-Objective Evolutionary Optimization Algorithm
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-10-24 DOI: 10.32985/ijeces.14.8.11
Neha Yadav, Vibhash Yadav
Software reliability frameworks are extremely effective for estimating the probability of software failure over time. Numerous approaches for predicting software dependability have been presented, but none of them has proven fully effective. Predicting the number of software faults throughout the research and testing phases remains a serious problem. There are several software metrics, such as object-oriented design metrics, public and private attributes and methods, previous bug metrics, and software change metrics, and many researchers have performed software reliability prediction on them. However, none of these efforts identified the relations among the metrics or explored which metrics are most optimal. Therefore, this paper proposes a correlation-constrained multi-objective evolutionary optimization algorithm (CCMOEO) for software reliability prediction. CCMOEO is an effective optimization approach for estimating the parameters of popular reliability growth models. To obtain the highest classification effectiveness, the suggested CCMOEO approach overcomes modeling uncertainties by integrating various metrics with multiple objective functions. In this research, the hypothesized models were formulated using evaluation results on five distinct datasets. The prediction was evaluated with seven different machine learning algorithms: linear support vector machine (LSVM), radial support vector machine (RSVM), decision tree, random forest, gradient boosting, k-nearest neighbor, and linear regression. The result analysis shows that random forest achieved the best performance.
{"title":"Software Reliability Prediction using Correlation Constrained Multi-Objective Evolutionary Optimization Algorithm","authors":"Neha Yadav, Vibhash Yadav","doi":"10.32985/ijeces.14.8.11","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.11","url":null,"abstract":"Software reliability frameworks are extremely effective for estimating the probability of software failure over time. Numerous approaches for predicting software dependability were presented, but neither of those has shown to be effective. Predicting the number of software faults throughout the research and testing phases is a serious problem. As there are several software metrics such as object-oriented design metrics, public and private attributes, methods, previous bug metrics, and software change metrics. Many researchers have identified and performed predictions of software reliability on these metrics. But none of them contributed to identifying relations among these metrics and exploring the most optimal metrics. Therefore, this paper proposed a correlation- constrained multi-objective evolutionary optimization algorithm (CCMOEO) for software reliability prediction. CCMOEO is an effective optimization approach for estimating the variables of popular growth models which consists of reliability. To obtain the highest classification effectiveness, the suggested CCMOEO approach overcomes modeling uncertainties by integrating various metrics with multiple objective functions. The hypothesized models were formulated using evaluation results on five distinct datasets in this research. The prediction was evaluated on seven different machine learning algorithms i.e., linear support vector machine (LSVM), radial support vector machine (RSVM), decision tree, random forest, gradient boosting, k-nearest neighbor, and linear regression. 
The result analysis shows that random forest achieved better performance.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"33 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135322513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
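The abstract's motivation, identifying relations among software metrics before prediction, can be illustrated with a simple Pearson-correlation filter that keeps one metric from each highly correlated pair. This is a minimal sketch on toy data, not the paper's CCMOEO algorithm; the metric names and threshold are assumptions for illustration.

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length value lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def drop_correlated(metrics, threshold=0.9):
    """Greedily keep metrics in order; drop any metric that is highly
    correlated with one already kept. `metrics` maps name -> values."""
    kept = []
    for name in metrics:
        if all(abs(pearson(metrics[name], metrics[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Toy data: `loc_copy` is nearly identical to `loc`, so it is redundant.
data = {
    "loc":        [100, 200, 300, 400],   # lines of code
    "loc_copy":   [101, 199, 302, 401],   # almost duplicates `loc`
    "prior_bugs": [5, 1, 4, 2],           # previous bug metric
}
print(drop_correlated(data))  # ['loc', 'prior_bugs']
```

A reliability model trained only on the kept metrics avoids fitting the same information twice, which is one simple reading of the "correlation-constrained" idea.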
Compensating Chromatic Dispersion and Phase Noise using Parallel AFB-MBPS For FBMC-OQAM Optical Communication System
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-10-24 DOI: 10.32985/ijeces.14.8.4
Ahmed H. Abbas, Thamer M. Jamel
Filter Bank Multi-Carrier Offset-QAM (FBMC-OQAM) is one of the most actively researched 5G multi-carrier methods because of its high spectral efficiency, minimal side-lobe leakage, zero cyclic prefix (CP), and polyphase filter design. Large-scale subcarrier configurations in optical fiber networks require FBMC-OQAM. Chromatic dispersion is critical in optical fiber transmission because it causes different spectral components (color beams) to travel at different rates. Laser phase noise, which arises when the phase of the laser output drifts with time, is a major barrier that lowers throughput in fiber-optic communication systems. This degradation may be strongly correlated across channels that share lasers in multichannel fiber-optic systems using methods such as wavelength-division multiplexing with frequency combs or space-division multiplexing. In this research, we use parallel Analysis Filter Bank (AFB) equalizers in the receiver of the FBMC-OQAM optical communication system to compensate for chromatic dispersion (CD) and phase noise (PN). Following CD equalization, the carrier phase of the received signal is tracked and compensated using Modified Blind Phase Search (MBPS). The CD and PN compensation techniques are simulated and analyzed numerically and graphically to determine their efficacy. To evaluate the FBMC's efficiency across various equalizers, 16-OQAM is taken into account. Bit Error Rate (BER), Optical Signal-to-Noise Ratio (OSNR), Q-factor, and Mean Square Error (MSE) were the primary metrics used to evaluate performance. A single-tap equalizer, a multi-tap equalizer (N=3), an ISDF equalizer with the suggested Parallel Analysis Filter Banks (AFBs) (K=3), and MBPS were compared. Compared to other forms of nonlinear compensation (NLC), the CD and PN tolerance attained by parallel AFB equalization with MBPS is the greatest.
{"title":"Compensating Chromatic Dispersion and Phase Noise using Parallel AFB-MBPS For FBMC-OQAM Optical Communication System","authors":"Ahmed H. Abbas, Thamer M. Jamel","doi":"10.32985/ijeces.14.8.4","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.4","url":null,"abstract":"Filter Bank Multi-Carrier Offset-QAM (FBMC-OQAM) is one of the hottest topics in research for 5G multi-carrier methods because of its high efficiency in the spectrum, minimal leakage in the side lobes, zero cyclic prefix (CP), and multiphase filter design. Large-scale subcarrier configurations in optical fiber networks need the use of FBMC-OQAM. Chromatic dispersion is critical in optical fiber transmission because it causes different spectral waves (color beams) to travel at different rates. Laser phase noise, which arises when the phase of the laser output drifts with time, is a major barrier that lowers throughput in fiber-optic communication systems. This deterioration may be closely related among channels that share lasers in multichannel fiber-optic systems using methods like wavelength-division multiplexing with frequency combs or space-division multiplexing. In this research, we use parallel Analysis Filter Bank (AFB) equalizers in the receiver part of the FBMC OQAM Optical Communication system to compensate for chromatic dispersion (CD) and phase noise (PN). Following the equalization of CD compensation, the phase of the carriers in the received signal is tracked and compensated using Modified Blind Phase Search (MBPS). The CD and PN compensation techniques are simulated and analyzed numerically and graphically to determine their efficacy. To evaluate the FBMC's efficiency across various equalizers, 16-OQAM is taken into account. Bit Error Rate (BER), Optical Signal-to-Noise Ratio (OSNR), Q-Factor, and Mean Square Error (MSE) were the primary metrics we utilized to evaluate performance. 
Single-tap equalizer, multi-tap equalizer (N=3), ISDF equalizer with suggested Parallel Analysis Filter Banks (AFBs) (K=3), and MBPS were all set aside for comparison. When compared to other forms of Nonlinear compensation (NLC), the CD and PN tolerance attained by Parallel AFB equalization with MBPS is the greatest.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"2013 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135316791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
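Two of the metrics listed in the abstract above, Q-factor and BER, are linked by the standard Gaussian-noise approximation BER = 0.5 · erfc(Q/√2). A minimal sketch (this general relation is standard in optical-link analysis; it is not a formula quoted from the paper):

```python
from math import erfc, sqrt

def ber_from_q(q):
    """Gaussian-noise approximation relating linear Q-factor to BER:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2))

# Q = 6 corresponds to a BER of roughly 1e-9, a common benchmark
# threshold in optical-link budgets.
print(f"{ber_from_q(6):.2e}")
```

This is why improving the Q-factor through better CD/PN equalization translates directly into lower BER in the reported results.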
Healthcare Critical Diagnosis Accuracy
Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-10-24 DOI: 10.32985/ijeces.14.8.10
Deepali Pankaj Javale, Sharmishta Desai
For at least a decade, Machine Learning has attracted the interest of researchers. Among the topics of discussion is the application of Machine Learning (ML) and Deep Learning (DL) to the healthcare industry. Several implementations are performed on medical datasets to verify their precision. The four main quantities, True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN), play a crucial role in determining a classifier's performance, and various metrics are defined in terms of them. Selecting the appropriate performance metric is a crucial step. In addition to TP and TN, FN should be given greater weight when a healthcare dataset is evaluated for disease diagnosis or detection. Thus, a suitable performance metric must be considered. In this paper, a novel machine learning metric referred to as Healthcare-Critical-Diagnostic-Accuracy (HCDA) is proposed and compared to the well-known accuracy and ROC_AUC score metrics. The machine learning classifiers Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), and Naive Bayes (NB) are implemented on four distinct datasets. The obtained results indicate that the proposed HCDA metric is more sensitive to FN counts. The results show that even when the %FN for dataset 1 rises to 10.31%, accuracy remains at 83%, while HCDA shows a correlated drop to 72.70%. Similarly, in dataset 2, if %FN rises to 14.80 for the LR classifier, accuracy is 78.2% and HCDA is 63.45%. Similar results are obtained for datasets 3 and 4. More FN counts result in a lower HCDA score, and vice versa. In common existing metrics such as accuracy and ROC_AUC score, the score can remain high even as the FN count increases, which is misleading. As a result, it can be concluded that the proposed HCDA is a more robust and accurate metric for critical healthcare analysis, as FN conditions for disease diagnosis and detection are weighted more heavily than TP and TN.
{"title":"Healthcare Critical Diagnosis Accuracy","authors":"Deepali Pankaj Javale, Sharmishta Desai","doi":"10.32985/ijeces.14.8.10","DOIUrl":"https://doi.org/10.32985/ijeces.14.8.10","url":null,"abstract":"Since at least a decade, Machine Learning has attracted the interest of researchers. Among the topics of discussion is the application of Machine Learning (ML) and Deep Learning (DL) to the healthcare industry. Several implementations are performed on the medical dataset to verify its precision. The four main players, True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN), play a crucial role in determining the classifier's performance. Various metrics are provided based on the main players. Selecting the appropriate performance metric is a crucial step. In addition to TP and TN, FN should be given greater weight when a healthcare dataset is evaluated for disease diagnosis or detection. Thus, a suitable performance metric must be considered. In this paper, a novel machine learning metric referred to as Healthcare-Critical-Diagnostic-Accuracy (HCDA) is proposed and compared to the well-known metrics accuracy and ROC_AUC score. The machine learning classifiers Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), and Naive Bayes (NB) are implemented on four distinct datasets. The obtained results indicate that the proposed HCDA metric is more sensitive to FN counts. The results show, that even if there is rise in %FN for dataset 1 to 10.31 % then too accuracy is 83% ad HCDA shows correlated drop to 72.70 %. Similarly, in dataset 2 if %FN rises to 14.80 for LR classifier, accuracy is 78.2 % and HCDA is 63.45 %. Similar kind of results are obtained for dataset 3 and 4 too. More FN counts result in a lower HCDA score, and vice versa. In common exiting metrics such as Accuracy and ROC_AUC score, even as the FN count increases, the score increases, which is misleading. 
As a result, it can be concluded that the proposed HCDA is a more robust and accurate metric for Critical Healthcare Analysis, as FN conditions for disease diagnosis and detection are taken into account more than TP and TN.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"BME-12 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135321814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
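The FN-sensitivity argument above can be demonstrated with plain confusion-matrix arithmetic: shifting the same number of errors from FP to FN leaves accuracy unchanged, while recall (which penalizes FN) drops. The abstract does not give the HCDA formula, so this sketch uses only the standard metrics to show the blind spot HCDA is designed to expose.

```python
def metrics(tp, tn, fp, fn):
    """Accuracy and recall from raw confusion-matrix counts."""
    total = tp + tn + fp + fn
    acc = (tp + tn) / total
    recall = tp / (tp + fn) if tp + fn else 0.0
    return acc, recall

# Two classifiers with the same total error count (40), but the second
# makes its errors as false negatives: accuracy is identical, while
# recall drops -- accuracy alone cannot distinguish them.
print(metrics(tp=80, tn=80, fp=30, fn=10))  # (0.8, 0.888...)
print(metrics(tp=80, tn=80, fp=10, fn=30))  # (0.8, 0.727...)
```

In a diagnostic setting the second classifier is clearly worse (it misses more sick patients), which is the behaviour an FN-weighted metric such as HCDA is meant to capture.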