Abdel Fatah Azzam, A. Maghrabi, Eman El-Naqeeb, Mohammed Aldawood, H. ElGhawalby
In today’s data-driven world, we are constantly exposed to vast amounts of information. This information is stored in various information systems and used for analysis and management purposes. One important way to handle such data is clustering, or categorization. Clustering algorithms are powerful tools in data analysis and machine learning that group similar data points together based on their inherent characteristics. These algorithms aim to identify patterns and structures within a dataset, allowing hidden relationships and insights to be discovered. By partitioning data into distinct clusters, clustering algorithms enable efficient data exploration, classification, and anomaly detection. In this study, we propose a novel centroid-based clustering algorithm, the morphological accuracy clustering (MAC) algorithm, which uses a morphological accuracy measure to define the centroid of each cluster. Empirical results demonstrate that the proposed algorithm reaches a stable clustering outcome in fewer iterations than several existing centroid-based clustering algorithms. Additionally, the clusters generated by these existing algorithms are highly susceptible to the user's initial centroid selection.
"Morphological Accuracy Data Clustering: A Novel Algorithm for Enhanced Cluster Analysis." Applied Computational Intelligence and Soft Computing, published 2024-05-22, doi: 10.1155/2024/3795126.
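The abstract does not define the morphological accuracy measure itself, but the centroid-based iteration it refines can be sketched generically: assign each point to its nearest centroid, recompute each centroid with a pluggable update rule, and stop once the centroids stabilize. This is a minimal sketch in which the k-means mean-update stands in for the MAC algorithm's measure; the data and function names are illustrative, not the paper's.

```python
import numpy as np

def centroid_clustering(X, centroids, update_centroid, n_iter=20):
    """Generic centroid-based clustering loop.

    `update_centroid` is the pluggable rule that recomputes a cluster
    centre from its members; k-means uses the mean, while the MAC
    algorithm would substitute its morphological accuracy measure here.
    """
    centroids = np.asarray(centroids, dtype=float).copy()
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid from its assigned points.
        new = []
        for k in range(len(centroids)):
            members = X[labels == k]
            new.append(update_centroid(members) if len(members) else centroids[k])
        new = np.array(new)
        if np.allclose(new, centroids):
            break  # stable clustering outcome reached
        centroids = new
    return labels, centroids

# k-means as a concrete instance: the centroid is the mean of its members.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, cents = centroid_clustering(X, X[[0, 2]], lambda pts: pts.mean(axis=0))
```

Because the loop exits when the centroids stop moving, the two well-separated groups above converge after a single update.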
Communication through speech can be hindered by environmental noise, prompting the need for alternative methods such as lip reading, which bypasses auditory challenges. However, accurate interpretation of lip movements is impeded by the uniqueness of individual lip shapes, necessitating detailed analysis. In addition, the development of an Indonesian dataset addresses the lack of diversity in existing datasets, which are predominantly in English, fostering more inclusive research. This study proposes an enhanced lip-reading system trained with a long-term recurrent convolutional network (LRCN) that accounts for eight different lip shapes. MediaPipe Face Mesh precisely detects lip landmarks, enabling the LRCN model to recognize Indonesian utterances. Experimental results demonstrate the effectiveness of the approach: the LRCN model with three convolutional layers (LRCN-3Conv) achieved 95.42% accuracy on word test data and 95.63% on phrases, outperforming the convolutional long short-term memory (Conv-LSTM) method. Furthermore, evaluation on the original MIRACL-VC1 dataset also produced a best accuracy of 90.67% with LRCN-3Conv, exceeding previous studies on the word-labeled class. This success is attributed to MediaPipe Face Mesh, which facilitates accurate detection of the lip region. Leveraging advanced deep learning techniques and precise landmark detection, these findings promise improved communication accessibility for individuals facing auditory challenges.
Aripin, Abas Setiawan. "Indonesian Lip-Reading Detection and Recognition Based on Lip Shape Using Face Mesh and Long-Term Recurrent Convolutional Network." Applied Computational Intelligence and Soft Computing, published 2024-04-18, doi: 10.1155/2024/6479124.
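As a rough illustration of the landmark-based lip-region step, the sketch below computes a padded pixel bounding box from normalized (x, y) landmarks of the kind MediaPipe Face Mesh returns. The landmark indices, padding factor, and toy coordinates are assumptions for illustration, not the paper's configuration.

```python
def lip_bounding_box(landmarks, lip_indices, img_w, img_h, pad=0.1):
    """Compute a padded pixel bounding box around the lip landmarks.

    `landmarks` are (x, y) pairs normalized to [0, 1]; `lip_indices`
    selects the lip landmarks (the exact index set is an assumption).
    """
    xs = [landmarks[i][0] for i in lip_indices]
    ys = [landmarks[i][1] for i in lip_indices]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    # Pad the box by a fraction of its size, clamped to the frame.
    x0 = max(0.0, min(xs) - pad * w)
    y0 = max(0.0, min(ys) - pad * h)
    x1 = min(1.0, max(xs) + pad * w)
    y1 = min(1.0, max(ys) + pad * h)
    # Convert to integer pixel coordinates for cropping the frame.
    return (int(x0 * img_w), int(y0 * img_h), int(x1 * img_w), int(y1 * img_h))

# Toy landmarks: three "lip" points in a 640x480 frame.
pts = [(0.4, 0.6), (0.5, 0.65), (0.6, 0.6)]
box = lip_bounding_box(pts, [0, 1, 2], 640, 480)
```

Cropping each frame to this box is what lets the per-frame CNN see only the lip region before the recurrent layers model the motion.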
K. Daqrouq, A. Balamesh, O. Alrusaini, A. Alkhateeb, A. S. Balamash
Speech emotion recognition (SER) is a challenging task due to the complex and subtle nature of emotions. This study proposes a novel approach to emotion modeling from speech signals that combines the discrete wavelet transform (DWT) with linear prediction coding (LPC). The performance of various classifiers, including support vector machine (SVM), K-nearest neighbors (KNN), efficient logistic regression, naive Bayes, ensemble methods, and neural networks, was evaluated for emotion classification on the EMO-DB dataset, using metrics such as area under the curve (AUC), average prediction accuracy, and cross-validation. The results indicate that the KNN and SVM classifiers exhibited high accuracy in distinguishing sadness from other emotions; ensemble methods and neural networks also performed strongly on sadness classification. Efficient logistic regression and naive Bayes were competitive but slightly less accurate than the other classifiers, with efficient logistic regression yielding the lowest accuracies overall. The proposed feature extraction method achieved the highest average accuracy, and combining it with formants or wavelet entropy further improved classification accuracy. The distinguishing contribution of this study is its investigation of a combined feature extraction method compared against various other feature combinations, with the aims of improving classifier performance, increasing system effectiveness, and advancing emotion classification. These findings can guide the selection of appropriate classifiers and feature extraction methods in future research and real-world applications. Further investigations can focus on refining classifiers and exploring additional feature extraction techniques to enhance emotion classification accuracy.
"Emotion Modeling in Speech Signals: Discrete Wavelet Transform and Machine Learning Tools for Emotion Recognition System." Applied Computational Intelligence and Soft Computing, published 2024-04-02, doi: 10.1155/2024/7184018.
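The DWT-plus-LPC feature idea can be illustrated with a one-level Haar DWT and LPC coefficients computed by the Levinson-Durbin recursion. The wavelet family, LPC order, and test signals below are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (DWT)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-frequency band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-frequency band
    return approx, detail

def lpc(signal, order):
    """Linear prediction coefficients via the Levinson-Durbin recursion."""
    signal = np.asarray(signal, dtype=float)
    # Autocorrelation sequence r[0..order] of the frame.
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a_prev = a[i - 1::-1].copy()        # reversed coefficients so far
        a[1:i + 1] = a[1:i + 1] + k * a_prev
        err *= (1.0 - k * k)
    return a

# A constant signal has no high-frequency detail under the Haar DWT,
# and a decaying exponential behaves like an AR(1) process with pole 0.9,
# so order-1 LPC should recover a coefficient close to -0.9.
approx, detail = haar_dwt([1.0, 1.0, 1.0, 1.0])
coeffs = lpc(0.9 ** np.arange(64), order=1)
```

In an SER pipeline of this kind, statistics of the wavelet subbands and the LPC coefficients per frame would form the feature vector handed to the classifiers.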
C. Mbey, Felix Ghislain Yem Souhe, Vinny Junior Foba Kakeu, A. Boum
With solar panels being installed around the world and climatic factors fluctuating constantly, it is important to supply the electrical network with the energy needed to satisfy demand at all times in smart grid applications. This study first presents a comprehensive, comparative review of existing deep learning methods for smart grid applications such as solar photovoltaic (PV) generation forecasting and power consumption forecasting. In this work, electrical consumption forecasting is long-term and considers smart meter data together with socioeconomic and demographic data, while photovoltaic power generation forecasting is short-term and considers climatic data such as solar irradiance, temperature, and humidity. Moreover, we propose a novel hybrid deep learning method based on the multilayer perceptron (MLP), long short-term memory (LSTM), and a genetic algorithm (GA). We then evaluated all the methods on a climate and electricity consumption dataset for the city of Douala: electrical consumption data were collected from smart meters installed at consumers' premises in Douala, and climate data were collected at the city's climate management center.
The results show that the proposed optimized deep learning method performs best in both electrical consumption and PV power generation forecasting, outperforming baseline machine learning and deep learning methods such as the support vector machine (SVM), MLP, recurrent neural network (RNN), and random forest algorithm (RFA).

"A Novel Deep Learning-Based Data Analysis Model for Solar Photovoltaic Power Generation and Electrical Consumption Forecasting in the Smart Power Grid." Applied Computational Intelligence and Soft Computing, published 2024-04-02, doi: 10.1155/2024/9257508.
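The GA component of such a hybrid can be sketched as a minimal genetic loop over hyperparameter genes. Here a toy quadratic error stands in for the validation forecasting error of the MLP/LSTM model, and the gene ranges, operators, and parameters are assumptions rather than the paper's configuration.

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=40, seed=0):
    """Minimal genetic algorithm: truncation selection, one-point
    crossover, and one mutated gene per child. In the hybrid model,
    the genes would encode MLP/LSTM hyperparameters and `fitness`
    would be the validation forecasting error (assumed here)."""
    rng = random.Random(seed)
    pop = [[rng.randint(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, len(bounds))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            g = rng.randrange(len(bounds))       # mutate one random gene
            child[g] = rng.randint(*bounds[g])
            children.append(child)
        pop = elite + children                   # elitism: best survive
    return min(pop, key=fitness)

# Stand-in fitness: pretend validation error is minimized at genes (8, 12).
err = lambda ind: (ind[0] - 8) ** 2 + (ind[1] - 12) ** 2
best = genetic_search(err, [(1, 16), (1, 16)])
```

Because the elite half always survives, the best error found never increases from one generation to the next, which is the property that makes this loop a safe wrapper around an expensive model-training fitness call.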
Alexander Sommers, Shahram Rahimi, Tonya G. McCall, Emily Wall, Althea Henslee, Larry Dalton, Paul D. Babin, Nathan Watson, Gehendra Sharma, Milan D. Parmar
The more “manufacturable” a product is, the “easier” it is to manufacture. Of two product designs targeting the same role, one may be more manufacturable than the other. Evaluating manufacturability requires experts in the processes of manufacturing, “manufacturing process engineers” (MPEs). Human experts are expensive to train and employ, while a well-designed expert system (ES) can be quicker and more reliable and deliver higher performance and superior accuracy. In this work, a group of MPEs (“Team A”) externalized a portion of their expertise into a rule-based expert system, in cooperation with a group of ES knowledge engineers and developers. We produced a large ES with 113 rules and 94 variables; it comprises a crisp ES that constructs a fuzzy ES, yielding a two-stage system. Team A then used the ES and a derivation of it (the “MAKE A”) to assess the manufacturability of several “notional” designs, providing a sanity check of the rule-base. A provisional assessment, using a first draft of the rule-base and the MAKE A, covered notional wing designs; the primary assessment, using an updated rule-base and MAKE A, covered notional rotor blade designs. We describe the process by which this ES was made and the assessments that were conducted, and conclude with insights gained from constructing the ES: build a bridge between expert and user, move from general features to specific features, do not make the user do a lot of work, and ask the user only for objective observations. We add the product of our work to the growing library of tools and methodologies at the disposal of the U.S. Army Engineer Research and Development Center (ERDC). The primary findings of the present work are (1) an ES that satisfied the experts, according to their expressed performance expectations, and (2) the insights gained on how such a system might best be constructed.
"A Hybrid Expert System for Estimation of the Manufacturability of a Notional Design." Applied Computational Intelligence and Soft Computing, published 2024-03-26, doi: 10.1155/2024/4985090.
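A two-stage crisp-then-fuzzy evaluation of the kind described can be sketched as follows. The variables (part count, tolerance), rule thresholds, membership shapes, and score anchors are all invented for illustration; the actual 113-rule base is not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def manufacturability(part_count, tolerance_mm):
    """Two-stage sketch: crisp rules gate the inputs, then fuzzy rule
    activations are defuzzified into a 0-100 manufacturability score.
    All variables and thresholds here are illustrative assumptions."""
    # Stage 1 (crisp): reject inputs the experts would rule out outright.
    if tolerance_mm <= 0:
        raise ValueError("tolerance must be positive")
    # Stage 2 (fuzzy): grade the remaining designs.
    few = tri(part_count, 0, 0, 50)          # "few parts" membership
    loose = tri(tolerance_mm, 0, 1.0, 2.0)   # "loose tolerance" membership
    # Weighted-average defuzzification of the two rule activations.
    scores = {90.0: min(few, loose),         # "easy to manufacture"
              30.0: 1.0 - min(few, loose)}   # "hard to manufacture"
    total = sum(scores.values())
    return sum(v * w for v, w in scores.items()) / total

easy = manufacturability(10, 1.0)   # few parts, loose tolerance
hard = manufacturability(45, 0.2)   # many parts, tight tolerance
```

The design choice this illustrates is the paper's separation of concerns: crisp rules handle objective, yes/no observations from the user, while fuzzy rules express the graded expert judgments.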
Natural language is the language human beings use to communicate with each other; machines, however, find it very difficult to understand. Determining the contextual meaning of words is a challenge for tasks such as machine translation, indexing, and neighbor-word prediction. Many researchers around the world have investigated word-sense disambiguation in different languages, including Afaan Oromo, to address this challenge. Nevertheless, very little of that effort has addressed finding contextual meaning and predicting neighbor words to resolve word ambiguity in Afaan Oromo. Since Afaan Oromo is one of the major languages of Ethiopia, it needs up-to-date language technology to enhance communication and overcome ambiguity. This work therefore aims to design and develop a vector space model for Afaan Oromo that supports word-sense disambiguation and thereby improves information retrieval performance. The study used an Afaan Oromo word embedding method to disambiguate the contextual meaning of words through a semisupervised technique. To conduct the study, 456,300 Afaan Oromo words were collected from different sources and preprocessed for experimentation with the Natural Language Toolkit under the Anaconda environment. The K-means machine learning algorithm was used to cluster similar word vocabulary. Experimental results show that using word embeddings for the proposed language corpus improves system performance, reaching a total accuracy of 98.89% and outperforming existing similar systems.
Tabor Wegi Geleta, Jara Muda Haro. "Semisupervised Learning-Based Word-Sense Disambiguation Using Word Embedding for Afaan Oromoo Language." Applied Computational Intelligence and Soft Computing, published 2024-03-14, doi: 10.1155/2024/4429069.
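One way clustered embeddings can serve word-sense disambiguation is to compare the average embedding of the context words against each sense centroid by cosine similarity and pick the closest. The toy English vectors below merely stand in for the Afaan Oromo embeddings and K-means centroids trained in the study.

```python
import numpy as np

def disambiguate(context_words, sense_centroids, embeddings):
    """Pick the sense whose centroid is most cosine-similar to the
    average embedding of the context words (illustrative sketch)."""
    ctx = np.mean([embeddings[w] for w in context_words], axis=0)

    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    return max(sense_centroids, key=lambda s: cos(ctx, sense_centroids[s]))

# Toy 2-D embedding space with an ambiguous word "bank" and two
# sense centroids (stand-ins for K-means cluster centres).
emb = {"water": np.array([1.0, 0.1]), "fish": np.array([0.9, 0.2]),
       "loan": np.array([0.1, 1.0]), "cash": np.array([0.2, 0.9])}
senses = {"bank/river": np.array([1.0, 0.0]),
          "bank/money": np.array([0.0, 1.0])}
sense = disambiguate(["water", "fish"], senses, emb)
```

Cosine similarity rather than Euclidean distance is the usual choice for embeddings because it ignores vector magnitude, which mostly reflects word frequency rather than meaning.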
When diverse decision makers are involved in the decision-making process, taking the average of decision values might not reflect an accurate point of view. To overcome such a scenario, the circular Fermatean fuzzy (CFF) set, an advancement of the Fermatean fuzzy (FF) set and of the interval-valued Fermatean fuzzy set (IVFFS), is introduced in this study. The proposed CFF set is a circle whose centre is given by an association value (AV) and a nonassociation value (NAV), with a radius at most equal to 2; it is built so that the circle covers all the decision makers' opinion values. Due to its geometric structure, the CFF set resolves ambiguity and risk more accurately and effectively than FF and IVFF sets. The FF t-norm and t-conorm are used to investigate the properties of CFF sets, after which the algebraic operations between them are defined. Two CFF distance measures between CFF numbers are introduced and, together with the CFF weighted averaging and geometric aggregation operators, applied to the selection of an electric autorickshaw. The overview and comparative analysis of the generated results demonstrate the viability and compatibility of the CFF set strategy for selecting the best choices.
R. A., I. V., K. S., A. H, Arifmohammed K. M. "The Characteristics of Circular Fermatean Fuzzy Sets and Multicriteria Decision-Making Based on the Fermatean Fuzzy t-Norm and t-Conorm." Applied Computational Intelligence and Soft Computing, published 2024-02-10, doi: 10.1155/2024/6974363.
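For concreteness, a Fermatean fuzzy pair (u, v) must satisfy u³ + v³ ≤ 1 (which admits pairs like (0.9, 0.6) that intuitionistic and Pythagorean fuzzy sets reject), and a CFF number can be represented as a triple (u, v, r) with radius r. The distance below is one plausible illustrative form mixing a cubed-membership gap with a radius gap; it is not one of the paper's actual measures.

```python
def is_fermatean(u, v, tol=1e-12):
    """A Fermatean fuzzy pair requires u, v in [0, 1] and u**3 + v**3 <= 1."""
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 and u**3 + v**3 <= 1.0 + tol

def cff_distance(p, q, alpha=0.5):
    """Illustrative distance between circular Fermatean fuzzy numbers
    (u, v, r): a convex mix of the cubed membership gap and the radius
    gap. The paper's exact distance measures are not reproduced here."""
    (u1, v1, r1), (u2, v2, r2) = p, q
    core = 0.5 * (abs(u1**3 - u2**3) + abs(v1**3 - v2**3))
    return alpha * core + (1 - alpha) * abs(r1 - r2)

# (0.9, 0.6) is a valid Fermatean pair; (0.9, 0.8) is not.
valid, invalid = is_fermatean(0.9, 0.6), is_fermatean(0.9, 0.8)
```

Raising memberships to the third power before comparing is what distinguishes Fermatean-style measures from their intuitionistic (first-power) and Pythagorean (second-power) counterparts.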
When diverse decision makers are involved in the decision-making process, taking average of decision values might not reflect an accurate point of view. To overcome such a scenario, the circular Fermatean fuzzy (CFF) set, an advancement of the Fermatean fuzzy (FF) set, and the interval-valued Fermatean fuzzy set (IVFFS) are introduced in this current study. The proposed CFF set is a circle with a centre as association value (AV) and nonassociation value (NAV) with a radius at most equal to 2. It is built in such a way that it covers all the decision makers’ opinion value through a circle. Due to its geometric structure, the CFF set resolves ambiguity and risk more accurately and effectively than FF and IVFF. FF t-norm and t-conorm are used to investigate the properties of CFF sets, subsequent to which the algebraic operations between them are defined. A couple of CFF distance measures between CFF numbers are introduced and used in the selection of an electric autorickshaw along with the CFF weighted averaging and geometric aggregation operators. The overview and comparison analysis of the generated reports exemplifies the viability and compatibility of the CFF set strategy for selecting the best choices.
{"title":"The Characteristics of Circular Fermatean Fuzzy Sets and Multicriteria Decision-Making Based on the Fermatean Fuzzy t-Norm and t-Conorm","authors":"R. A., I. V., K. S., A. H, Arifmohammed K. M.","doi":"10.1155/2024/6974363","DOIUrl":"https://doi.org/10.1155/2024/6974363","url":null,"abstract":"When diverse decision makers are involved in the decision-making process, taking average of decision values might not reflect an accurate point of view. To overcome such a scenario, the circular Fermatean fuzzy (CFF) set, an advancement of the Fermatean fuzzy (FF) set, and the interval-valued Fermatean fuzzy set (IVFFS) are introduced in this current study. The proposed CFF set is a circle with a centre as association value (AV) and nonassociation value (NAV) with a radius at most equal to 2. It is built in such a way that it covers all the decision makers’ opinion value through a circle. Due to its geometric structure, the CFF set resolves ambiguity and risk more accurately and effectively than FF and IVFF. FF t-norm and t-conorm are used to investigate the properties of CFF sets, subsequent to which the algebraic operations between them are defined. A couple of CFF distance measures between CFF numbers are introduced and used in the selection of an electric autorickshaw along with the CFF weighted averaging and geometric aggregation operators. 
The overview and comparison analysis of the generated reports exemplifies the viability and compatibility of the CFF set strategy for selecting the best choices.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139847267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
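The Fermatean fuzzy machinery underlying the CFF set can be illustrated with a small sketch. An FF number is a pair of grades (μ, ν) constrained by μ³ + ν³ ≤ 1, which admits pairs that intuitionistic and Pythagorean fuzzy sets reject. The distance function below is an illustrative Euclidean-style choice on the cubed grades, not necessarily one of the two measures proposed in the paper:

```python
import math

def is_fermatean(mu: float, nu: float) -> bool:
    """A (mu, nu) grade pair is a valid Fermatean fuzzy number when mu^3 + nu^3 <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu**3 + nu**3 <= 1.0 + 1e-12

def ff_distance(a: tuple, b: tuple) -> float:
    """Illustrative normalized Euclidean-style distance on the cubes of the grades
    (an assumption for demonstration, not the paper's measures)."""
    (mu1, nu1), (mu2, nu2) = a, b
    return math.sqrt(((mu1**3 - mu2**3) ** 2 + (nu1**3 - nu2**3) ** 2) / 2.0)

# (0.9, 0.6) is FF-valid (0.729 + 0.216 <= 1) even though 0.9 + 0.6 > 1
# rules it out as an intuitionistic pair and 0.81 + 0.36 > 1 as a Pythagorean one.
print(is_fermatean(0.9, 0.6))
```

The division by 2 normalizes the distance to [0, 1], so the extreme pairs (1, 0) and (0, 1) sit at distance exactly 1.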
Nur Iriawan, A. A. Pravitasari, Ulfa S. Nuraini, Nur I. Nirmalasari, Taufik Azmi, Muhammad Nasrudin, Adam F. Fandisyah, K. Fithriasari, S. W. Purnami, Irhamah, Widiana Ferriastuti
Brain tumor detection and segmentation are central problems in biomedical engineering research and remain challenging due to the heterogeneous shape and location of tumors in MRI. The quality of the MR images also plays an important role in providing a clear view of the shape and boundary of the tumor; a clearly delineated tumor boundary increases the probability of safe medical surgery. Analyzing such diverse image types requires refined computerized quantification and visualization tools. This paper employed deep learning to detect and segment brain tumors in MRI images by combining the convolutional neural network (CNN) and fully convolutional network (FCN) methodologies in series. The core idea is to detect and localize the tumor area with YOLO-CNN and segment it with the FCN-UNet architecture. This analysis provides automatic detection and segmentation as well as the location of the tumor. The segmentation using the UNet is run under four scenarios, and the best one is chosen by the minimum loss and maximum accuracy values. In this research, we used 277 images for training, 69 images for validation, and 14 images for testing. Validation is carried out by comparing the segmentation results with the medical ground truth to compute the correct classification ratio (CCR). This study succeeded in detecting brain tumors and provided a clear delineation of the tumor area with a high CCR of about 97%.
{"title":"YOLO-UNet Architecture for Detecting and Segmenting the Localized MRI Brain Tumor Image","authors":"Nur Iriawan, A. A. Pravitasari, Ulfa S. Nuraini, Nur I. Nirmalasari, Taufik Azmi, Muhammad Nasrudin, Adam F. Fandisyah, K. Fithriasari, S. W. Purnami, Irhamah, Widiana Ferriastuti","doi":"10.1155/2024/3819801","DOIUrl":"https://doi.org/10.1155/2024/3819801","url":null,"abstract":"Brain tumor detection and segmentation are the main issues in biomedical engineering research fields, and it is always challenging due to its heterogeneous shape and location in MRI. The quality of the MR images also plays an important role in providing a clear sight of the shape and boundary of the tumor. The clear shape and boundary of the tumor will increase the probability of safe medical surgery. Analysis of this different scope of image types requires refined computerized quantification and visualization tools. This paper employed deep learning to detect and segment brain tumor MRI images by combining the convolutional neural network (CNN) and fully convolutional network (FCN) methodology in serial. The fundamental finding is to detect and localize the tumor area with YOLO-CNN and segment it with the FCN-UNet architecture. This analysis provided automatic detection and segmentation as well as the location of the tumor. The segmentation using the UNet is run under four scenarios, and the best one is chosen by the minimum loss and maximum accuracy value. In this research, we used 277 images for training, 69 images for validation, and 14 images for testing. The validation is carried out by comparing the segmentation results with the medical ground truth to provide the correct classification ratio (CCR). 
This study succeeded in the detection of brain tumors and provided a clear area of the brain tumor with a high CCR of about 97%.","PeriodicalId":44894,"journal":{"name":"Applied Computational Intelligence and Soft Computing","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139853280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
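The serial detect-then-segment design and the CCR validation can be sketched as below. The `detector` and `segmenter` callables stand in for the trained YOLO-CNN and FCN-UNet models (their interfaces here are hypothetical), and CCR is read as the fraction of pixels whose predicted label matches the ground-truth mask, which is one common interpretation of "correct classification ratio":

```python
import numpy as np

def correct_classification_ratio(pred_mask, truth_mask) -> float:
    """CCR: fraction of pixels whose predicted label matches the ground truth
    (an assumed reading of the metric, for illustration)."""
    pred = np.asarray(pred_mask, dtype=bool)
    truth = np.asarray(truth_mask, dtype=bool)
    return float((pred == truth).mean())

def detect_then_segment(image, detector, segmenter):
    """Serial pipeline sketch: the detector (YOLO-style) proposes a bounding box,
    then the segmenter (UNet-style) labels pixels inside that box only."""
    x0, y0, x1, y1 = detector(image)            # hypothetical detector API: one box
    crop = image[y0:y1, x0:x1]
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = segmenter(crop)        # hypothetical segmenter API: crop mask
    return mask
```

Restricting the segmenter to the detected region is what localizes the tumor before pixel-level labeling; pixels outside the box are classified as background by construction.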