Gestational diabetes mellitus (GDM) is a form of diabetes that develops in pregnant women due to elevated blood sugar levels. Women with gestational diabetes face an increased risk of miscarriage during pregnancy and of developing type-2 diabetes later in life. Standard practice is to screen for GDM with a diabetes test such as the oral glucose tolerance test (OGTT), administered between 24 and 28 weeks of pregnancy. In addition, machine learning can be exploited to predict gestational diabetes. The main goal of this work is to identify optimal machine learning (ML) algorithms for effective prediction of GDM and thereby help avoid its side effects and future complications. Several ML algorithms are compared for their performance in predicting GDM. Before analysis, each algorithm is implemented with 10-fold cross-validation to obtain more reliable performance estimates. The algorithms implemented are Linear Discriminant Analysis, Mixture Discriminant Analysis, Quadratic Discriminant Analysis, Flexible Discriminant Analysis, Regularized Discriminant Analysis and feed-forward neural networks. These algorithms are compared on the performance measures accuracy, kappa statistic, sensitivity, specificity, precision and F-measure. Feed-forward neural networks and Flexible Discriminant Analysis emerge as the optimal methods in this work.
"Rigorous assessment of data mining algorithms in gestational diabetes mellitus prediction" by S. Reddy, Nilambar Sethi, R. Rajender. Int. J. Knowl. Based Intell. Eng. Syst., 2022-02-18. doi:10.3233/kes-210081
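The 10-fold cross-validation protocol described above can be sketched as follows. This is a minimal illustration: a toy nearest-centroid classifier stands in for the paper's discriminant and neural models, and the fold-splitting helper is an assumption for demonstration, not the authors' implementation.

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nearest_centroid_predict(train_X, train_y, x):
    """Toy stand-in classifier: predict the class whose feature mean is closest."""
    centroids = {}
    for label in set(train_y):
        rows = [train_X[i] for i in range(len(train_y)) if train_y[i] == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

def cross_validated_accuracy(X, y, k=10):
    """Average held-out accuracy over k folds: train on k-1 folds, test on the rest."""
    folds = k_fold_indices(len(X), k)
    accs = []
    for fold in folds:
        train = [i for i in range(len(X)) if i not in fold]
        tX, ty = [X[i] for i in train], [y[i] for i in train]
        correct = sum(nearest_centroid_predict(tX, ty, X[i]) == y[i] for i in fold)
        accs.append(correct / len(fold))
    return sum(accs) / len(accs)
```

In the paper's setting, each of the six algorithms would be passed through the same folds so that accuracy, kappa, sensitivity, specificity, precision and F-measure are computed on identical held-out splits.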
In this paper, we investigate the multiple attribute decision making (MADM) problem based on Hamacher aggregation operators and the Choquet integral with dual hesitant Pythagorean fuzzy information. Motivated by the idea of the Hamacher operations and the Choquet integral, we develop several Hamacher correlated operators for aggregating dual hesitant Pythagorean fuzzy information and study their prominent characteristics. We then utilize these operators to develop approaches for solving dual hesitant Pythagorean fuzzy MADM problems. Finally, a practical example of supplier selection in supply chain management is given to verify the developed approach and to demonstrate its practicality and effectiveness.
"Algorithms for multiple attribute decision making with dual hesitant Pythagorean fuzzy information and their application to supplier selection" by Xuyang Li. Int. J. Knowl. Based Intell. Eng. Syst., 2022-02-18. doi:10.3233/kes-210089
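The paper's Hamacher Choquet operators are not reproduced here, but the basic building block of such MADM methods, ranking alternatives by the scores of their fuzzy ratings, can be sketched. This sketch assumes the standard Pythagorean fuzzy score function s(mu, nu) = mu^2 - nu^2 and substitutes a plain weighted average for the Choquet integral.

```python
def pfn_score(mu, nu):
    """Score of a Pythagorean fuzzy number (mu, nu) with mu^2 + nu^2 <= 1.

    s = mu^2 - nu^2, ranging over [-1, 1]; larger means a better rating."""
    assert 0 <= mu <= 1 and 0 <= nu <= 1 and mu ** 2 + nu ** 2 <= 1
    return mu ** 2 - nu ** 2

def rank_alternatives(ratings, weights):
    """Rank alternatives by the weighted average score of their PFN ratings.

    ratings[i] is a list of (mu, nu) pairs, one per attribute; returns
    alternative indices from best to worst."""
    totals = [sum(w * pfn_score(m, n) for w, (m, n) in zip(weights, pfns))
              for pfns in ratings]
    return sorted(range(len(ratings)), key=lambda i: totals[i], reverse=True)
```

A full implementation of the paper's approach would replace the weighted average with a Choquet integral over a fuzzy measure and use Hamacher operations to aggregate the dual hesitant membership sets.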
In this paper, we investigate the multiple attribute decision making (MADM) problem based on Muirhead Mean (MM) operators with dual hesitant Pythagorean fuzzy information. Motivated by the idea of MM operators, we develop several MM operators for aggregating dual hesitant Pythagorean fuzzy information and study their prominent characteristics. We then utilize these operators to develop approaches for solving dual hesitant Pythagorean fuzzy MADM problems. Finally, a practical example of supplier selection in supply chain management is given to verify the developed approach and to demonstrate its practicality and effectiveness.
"Models for multiple attribute decision making with dual hesitant pythagorean fuzzy information" by Linggang Ran. Int. J. Knowl. Based Intell. Eng. Syst., 2022-02-18. doi:10.3233/kes-210085
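The Muirhead mean on which the paper's operators are built can be illustrated for crisp (non-fuzzy) values. The definition below is the standard MM with parameter vector P; recovering the arithmetic mean for P = (1, 0, ..., 0) and the geometric mean for P = (1/n, ..., 1/n) is what makes it attractive as an aggregation operator.

```python
from itertools import permutations

def muirhead_mean(values, P):
    """Muirhead mean of a_1..a_n with parameter vector P:

        MM^P(a) = ( (1/n!) * sum over permutations sigma of
                    prod_j a_sigma(j)^P_j ) ^ (1 / sum(P))

    It captures interrelationships among all n aggregated arguments."""
    total, count = 0.0, 0
    for perm in permutations(values):
        term = 1.0
        for a, p in zip(perm, P):
            term *= a ** p
        total += term
        count += 1
    return (total / count) ** (1.0 / sum(P))
```

The paper's operators apply this same averaging structure to dual hesitant Pythagorean fuzzy numbers, replacing ordinary product and power with their fuzzy counterparts.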
In China, fires in high-rise buildings occur from time to time, and the resulting economic losses, which can well exceed 100 billion yuan, pose great harm to the safety of people's lives and property and even to the development of the whole national economy. Fire protection work for high-rise buildings, covering safety assessment, prevention, cause analysis and rescue, is controlled by multiple factors with differing attributes, which makes fire problems and their management more complex and harder. When information involving many subjects with multi-aspect evaluation-index characteristics must be integrated and judged, fuzzy mathematical evaluation methods can help, because they fully account for the fuzziness and uncertainty in the assessment. The fire safety assessment of high-rise buildings is therefore frequently viewed as a multi-attribute group decision-making (MAGDM) problem, and a novel MAGDM method is needed to tackle it. Building on the conventional TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method and intuitionistic fuzzy sets (IFSs), this paper designs a novel intuitive-distance-based IF-TOPSIS method for high-rise building fire safety assessment. Relying on novel distance measures between intuitionistic fuzzy numbers (IFNs), the conventional TOPSIS method is extended to the intuitionistic fuzzy environment to calculate an assessment score for each alternative. Finally, an application to the fire safety assessment of a high-rise building and some comparative analysis are given to demonstrate the superiority of the designed method. The results illustrate that the designed framework is useful for high-rise building fire safety assessment.
"Research on the fire safety assessment of high building with intuitionistic fuzzy TOPSIS method" by Mingbiao Xu, Dehong Peng. Int. J. Knowl. Based Intell. Eng. Syst., 2022-02-18. doi:10.3233/kes-210084
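A minimal sketch of TOPSIS extended to intuitionistic fuzzy numbers follows. It assumes the widely used normalized Hamming distance over (mu, nu, pi) and benefit-type criteria; the paper's specific novel distance measure is not reproduced here.

```python
def ifn_distance(a, b):
    """Normalized Hamming distance between IFNs a = (mu, nu) and b = (mu, nu),
    including the hesitancy degree pi = 1 - mu - nu."""
    (m1, n1), (m2, n2) = a, b
    p1, p2 = 1 - m1 - n1, 1 - m2 - n2
    return 0.5 * (abs(m1 - m2) + abs(n1 - n2) + abs(p1 - p2))

def if_topsis(matrix, weights):
    """Closeness coefficient of each alternative to the IF positive ideal.

    matrix[i][j] is the IFN rating of alternative i on (benefit) criterion j.
    Higher closeness means a safer/better alternative."""
    ncrit = len(matrix[0])
    # Positive ideal: highest membership, lowest non-membership per criterion.
    pis = [(max(r[j][0] for r in matrix), min(r[j][1] for r in matrix))
           for j in range(ncrit)]
    # Negative ideal: the reverse.
    nis = [(min(r[j][0] for r in matrix), max(r[j][1] for r in matrix))
           for j in range(ncrit)]
    scores = []
    for row in matrix:
        d_pos = sum(w * ifn_distance(x, p) for w, x, p in zip(weights, row, pis))
        d_neg = sum(w * ifn_distance(x, n) for w, x, n in zip(weights, row, nis))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Alternatives are then ranked by descending closeness coefficient, exactly as in crisp TOPSIS.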
Multimodal biometrics are used to develop robust identification systems, with traits such as the face, fingerprint and palm vein used for security purposes. In the proposed system, a convolutional neural network (CNN) is used to recognize image features; CNNs are complex feed-forward neural networks widely used for image classification and recognition because of their high accuracy. Here, the CNN extracts features of the face, fingerprint and palm vein, and feature-level fusion is performed at the rectified linear unit (ReLU) layer using the maximum orthogonal component method, in which the prominent features of the biometrics are selected and fused together. This method helps to improve recognition rates. A database was self-generated from these biometrics, and training and testing were done using 4500 images of face, fingerprint and palm vein. The technique improves the performance parameters, and the experimental results are better than those of conventional methods.
"Multimodal biometric identification system with deep learning based feature level fusion using maximum orthogonal method" by P. Shende, Y. Dandawate. Int. J. Knowl. Based Intell. Eng. Syst., 2022-02-18. doi:10.3233/kes-210086
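The paper does not specify the maximum orthogonal component computation in enough detail to reproduce. As a hypothetical stand-in, the sketch below illustrates the general shape of feature-level fusion at the ReLU stage: each modality's feature vector is rectified, then the per-position maximum is kept as the fused "prominent" feature.

```python
def relu(vector):
    """Rectified linear unit applied elementwise: negative activations become 0."""
    return [max(0.0, x) for x in vector]

def fuse_features(feature_vectors):
    """Fuse equal-length per-modality feature vectors by elementwise maximum,
    a simplified stand-in for the paper's maximum orthogonal component fusion."""
    assert len({len(v) for v in feature_vectors}) == 1, "vectors must align"
    return [max(vals) for vals in zip(*feature_vectors)]
```

In the paper's pipeline, the three vectors would be the CNN activations for face, fingerprint and palm vein, and the fused vector would feed the final identification layer.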
A dynamic slicing algorithm, together with its implementation, is proposed in this paper for concurrent component-oriented programs (CCOPs) carrying multiple threads. To represent a CCOP effectively, an intermediate graph called the Concurrent Component Dependency Graph (CCmDG) is developed, which integrates the system dependence graphs (SDGs) of the individual components and their interfaces. The graph also contains new dependence edges introduced to connect the dependence graph of each component with the interface. Based on this graph, a dynamic slicing algorithm, Concurrent Components Dynamic Slicing (CCmDS), is proposed, which computes the slice by marking the executed nodes at run time. To assess the competence of our algorithm, five case studies are considered and compared against an existing technique. The study shows that our algorithm produces smaller, more precise slices than the existing algorithm in less time.
"An efficient and precise dynamic slicing for concurrent component-oriented programs" by N. Pujari, Abhishek Ray, Jagannath Singh. Int. J. Knowl. Based Intell. Eng. Syst., 2022-02-18. doi:10.3233/kes-210088
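The core of a dynamic slicing algorithm in the spirit of CCmDS can be sketched as reachability over dependence edges restricted to the nodes marked as executed at run time. The dictionary encoding of the dependence graph below is an assumption for illustration, not the CCmDG representation itself.

```python
from collections import deque

def dynamic_slice(dep_edges, executed, criterion):
    """Dynamic slice for a slicing criterion node: every executed node the
    criterion transitively depends on, following only executed nodes.

    dep_edges maps a node to the list of nodes it depends on (data or
    control dependence edges in the dependence graph)."""
    executed = set(executed)
    if criterion not in executed:
        return set()          # the criterion never ran, so the slice is empty
    seen = {criterion}
    queue = deque([criterion])
    while queue:
        node = queue.popleft()
        for dep in dep_edges.get(node, []):
            if dep in executed and dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

Restricting traversal to executed nodes is what makes the slice dynamic (and typically much smaller than a static slice over the whole graph).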
Lightweight cryptography is a major research area owing to the shrinking size of the devices that provide such services, and the associated security threats grow as their applications multiply. Identity-Based Encryption (IBE), with its wide range of cryptographic schemes and protocols, is found particularly suitable for low-end devices with severe resource constraints. This work describes the various schemes and protocols in IBE and analyses the attacks they are prone to. The future trends are found to be very promising and challenging.
"A systematic analysis of identity based encryption (IBE)" by Aravind Karrothu, J. Norman. Int. J. Knowl. Based Intell. Eng. Syst., 2021-11-15. doi:10.3233/kes-210078
For speaker identification in a teleconferencing scenario, it is important to determine whether a particular speaker is a participant in the conference and whether that speaker actually spoke during the meeting. Feature vectors are extracted using MFCC-SDC-LPC and modelled with the Generalized Gamma Distribution, and the K-means algorithm is used to cluster the speech data. A test speaker is then verified as a participant in the conference. A conference database of 50 speakers was generated; to test the model, 20 additional speakers who did not belong to the conference were also considered. The efficiency of the developed model is evaluated using measures such as AR, FAR and MDR, and the system is tested with varying numbers of speakers in the conference. The results show that the model performs robustly.
"Machine hearing system for teleconference authentication with effective speech analysis" by T. Madhusudhana Rao, Suribabu Korada, Yarramalle Srinivas. Int. J. Knowl. Based Intell. Eng. Syst., 2021-11-10. doi:10.3233/kes-210079
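Verifying whether a test speaker belongs to the conference can be sketched as a nearest-centroid check against the K-means clusters. The Euclidean distance and the acceptance threshold are assumed tuning choices for illustration; the paper's Generalized Gamma modelling of the feature vectors is not reproduced.

```python
def verify_speaker(feature, centroids, threshold):
    """Accept the test speaker as a conference participant when their feature
    vector lies within `threshold` of the nearest K-means cluster centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(dist(feature, c) for c in centroids)
    return best <= threshold
```

Sweeping the threshold trades FAR (impostors accepted) against MDR (participants missed), which is how the reported operating points would be obtained.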
In speech enhancement (SE), the major challenge is to suppress non-stationary noises, including white noise, in real-time application scenarios. Many techniques have been developed for enhancing vocal signals, but they have not suppressed non-stationary noises well and have high time and resource consumption. The Sliding Window Empirical Mode Decomposition and Hurst (SWEMDH) SE method addressed this by decomposing the speech signal into intrinsic mode functions (IMFs) over a sliding window, choosing the noise factor in each IMF from the Hurst exponent, and restoring the vocal signal from the least corrupted IMFs; however, that technique was not suitable for white-noise scenarios. Therefore, this paper proposes a Variant of Variational Mode Decomposition (VVMD) combined with the SWEMDH technique to reduce the complexity in real-time applications. The key objective of the proposed SWEMD-VVMDH technique is to select IMFs based on the Hurst exponent and then apply VVMD to suppress both low- and high-frequency noise factors in vocal signals. First, the noisy vocal signal is decomposed into IMFs using the SWEMDH technique. Then the Hurst exponent is computed to identify the IMFs with low-frequency noise factors, and narrow-band components (NBC) are computed to identify the IMFs with high-frequency noise factors. VVMD is then applied to the sum of all chosen IMFs to remove both low- and high-frequency noise factors. Thus, speech signal quality is improved under non-stationary noises, including additive white Gaussian noise. Finally, the experimental outcomes demonstrate significant speech signal improvement under both non-stationary and white-noise conditions.
"A variant of SWEMDH technique based on variational mode decomposition for speech enhancement" by P. Selvaraj, E. Chandra. Int. J. Knowl. Based Intell. Eng. Syst., 2021-11-10. doi:10.3233/kes-210072
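The Hurst exponent used above to flag noisy IMFs is classically estimated by rescaled-range (R/S) analysis. The two-scale slope estimate below is a deliberately crude sketch of that idea (full and half-length windows only); production estimators fit the log-log slope over many window sizes.

```python
import math

def rescaled_range(series):
    """R/S statistic: range of cumulative mean-adjusted deviations divided by
    the (population) standard deviation of the series."""
    n = len(series)
    mean = sum(series) / n
    devs = [x - mean for x in series]
    cum, z = 0.0, []
    for d in devs:
        cum += d
        z.append(cum)
    r = max(z) - min(z)
    s = (sum(d * d for d in devs) / n) ** 0.5
    return r / s

def hurst_estimate(series):
    """Crude Hurst exponent: slope of log(R/S) against log(window length),
    using the full series and the average of its two halves.
    H near 0.5 suggests noise-like behaviour; H near 1 a persistent trend."""
    n = len(series)
    half = n // 2
    rs_full = rescaled_range(series)
    rs_half = (rescaled_range(series[:half]) + rescaled_range(series[half:])) / 2
    return math.log(rs_full / rs_half) / math.log(n / half)
```

In the SWEMDH setting, IMFs whose estimated H falls in the noise-like range would be treated as corrupted and handed to the VVMD stage rather than used directly for reconstruction.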
Speaker identification matches the speech samples of known speakers and identifies the best match for the input model. The SGMFC method combines the Sub-Gaussian Mixture Model (SGMM) with Mel-frequency cepstral coefficients (MFCC) for feature extraction. It minimizes the error rate, memory footprint and computational throughput requirements of a medium-vocabulary speaker identification system intended for deployment on a portable device or otherwise. Fuzzy C-means and k-means clustering are used in the SGMM method to attain improved efficiency, and their outcomes are compared on parameters such as precision, sensitivity and specificity.
"Speaker identification analysis for SGMM with k-means and fuzzy C-means clustering using SVM statistical technique" by K. Manikandan, E. Chandra. Int. J. Knowl. Based Intell. Eng. Syst., 2021-11-10. doi:10.3233/kes-210073
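The fuzzy C-means step used alongside SGMM can be sketched with the standard inverse-distance-ratio membership update (fuzzifier m); unlike k-means, each feature vector receives a graded membership in every cluster rather than a hard assignment.

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy C-means membership matrix: u[i][j] is the degree to which point i
    belongs to cluster j, via u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))."""
    u = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5 for c in centers]
        if 0.0 in dists:
            # Point coincides with a center: crisp membership in that cluster.
            j0 = dists.index(0.0)
            u.append([1.0 if k == j0 else 0.0 for k in range(len(centers))])
            continue
        exponent = 2.0 / (m - 1.0)
        u.append([1.0 / sum((dj / dk) ** exponent for dk in dists)
                  for dj in dists])
    return u
```

A full FCM loop alternates this membership update with recomputing each center as the membership-weighted mean, until the memberships stabilise.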