Pub Date: 2024-03-20 | DOI: 10.1142/s0218488524400063
Zeyu Liang
Extracting the relations between two entities at the sentence level has drawn increasing attention in recent years, but the task remains highly challenging at the document level because of the inherent difficulty of recognizing relations between entities that span multiple sentences. Previous work has shown that graph convolutional neural networks can help a model capture unstructured dependency information about entities. However, these models usually build the correlation weight matrix from non-adaptive edge weights, which leads to information redundancy and vanishing gradients. To address this problem, we propose a deep gated graph reasoning model for document-level relation extraction, BERT-GGNNs, which employs an improved gated graph neural network with a learnable correlation weight matrix to stack multiple deep gated graph reasoning layers. These layers make it easier for the model to reason about entity relations hidden in the document. Experiments show that the proposed model outperforms most strong baselines, exceeding the well-known LSR-BERT model by 0.3% in both F1 and Ign F1.
Title: Document-Level Relation Extraction with Deep Gated Graph Reasoning
Journal: International Journal of Uncertainty Fuzziness and Knowledge-Based Systems
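The gated propagation described in the abstract can be pictured as a GRU-style update over a node-feature matrix. The sketch below is a generic single gated graph layer in NumPy, not the paper's BERT-GGNNs: the correlation matrix `A` is simply passed in (in the paper it is learnable), and all weight names and sizes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_graph_layer(h, A, Wz, Wr, Wh):
    """One GRU-style gated propagation step over node states.

    h  : (n, d) node states (e.g., entity-mention embeddings)
    A  : (n, n) correlation weight matrix (learnable in the paper,
         fixed input here)
    Wz, Wr, Wh : (2d, d) gate parameter matrices
    """
    m = A @ h                          # aggregate neighbor information
    hm = np.concatenate([h, m], axis=1)
    z = sigmoid(hm @ Wz)               # update gate
    r = sigmoid(hm @ Wr)               # reset gate
    c = np.tanh(np.concatenate([r * h, m], axis=1) @ Wh)  # candidate state
    return (1 - z) * h + z * c         # gated combination of old and new

# tiny demo with random weights
rng = np.random.default_rng(0)
n, d = 4, 8
h = rng.standard_normal((n, d))
A = rng.random((n, n))
A /= A.sum(axis=1, keepdims=True)      # row-normalized edge weights
Wz, Wr, Wh = (rng.standard_normal((2 * d, d)) * 0.1 for _ in range(3))
h_next = gated_graph_layer(h, A, Wz, Wr, Wh)
print(h_next.shape)  # (4, 8)
```

The gating is what lets several such layers be stacked without the vanishing-gradient problem the abstract attributes to plain graph convolutions.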
Pub Date: 2024-02-20 | DOI: 10.1142/s021848852450003x
Jorge Antunes, Goodness C. Aye, Rangan Gupta, Peter Wanke, Yong Tan
Better performance at the country level benefits the whole population. This issue has been studied from various perspectives using empirical methods, but little effort has yet been made to address endogeneity in the interrelationships between productive performance and its determinants. We address this issue by proposing a Two-Dimensional Fuzzy-Monte Carlo Analysis (2DFMCA) approach. The joint use of stochastic and fuzzy approaches within 2DFMCA offers methodological tools to mitigate epistemic uncertainty while increasing research validity and reproducibility: (i) a preliminary performance assessment based on fuzzy ideal solutions; and (ii) a robust stochastic regression of the performance scores on the epistemic sources of uncertainty related to the levels of physical and human capital measured in distinct countries at different epochs. Applying the proposed method to a sample of 23 countries over 1890–2018, we find that the best- and worst-performing countries were Norway and Portugal, respectively. We further find that human-capital intensity and the age of equipment (capital stock) affect productive performance differently: capital intensity and total factor productivity are influenced by productive performance, which, in turn, has a negative impact on labor productivity and GDP per capita. Our analysis provides insights that can help government policies coordinate productive performance with other macroeconomic indicators.
Title: Endogenous Long-Term Productivity Performance in Advanced Countries: A Novel Two-Dimensional Fuzzy-Monte Carlo Approach
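As a loose illustration of mixing fuzzy ratings with Monte Carlo sampling (not the authors' 2DFMCA pipeline), triangular fuzzy performance ratings can be read as triangular distributions, sampled, and ranked by their empirical means. The country labels and ratings below are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical triangular fuzzy ratings (lower, modal, upper) per country
ratings = {"A": (0.5, 0.7, 0.9), "B": (0.2, 0.4, 0.6), "C": (0.3, 0.5, 0.8)}

# Monte Carlo layer: draw from each rating read as a triangular distribution
draws = {k: rng.triangular(lo, mode, hi, 10_000)
         for k, (lo, mode, hi) in ratings.items()}

# rank alternatives by expected performance
means = {k: d.mean() for k, d in draws.items()}
print(max(means, key=means.get))  # prints A
```

A full analysis would propagate the samples through a fuzzy-ideal-solution assessment and then regress the scores on the uncertainty sources, as the abstract outlines.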
Pub Date: 2024-02-20 | DOI: 10.1142/s0218488524500041
F. Babakordi
Algebraic operations on generalized hesitant fuzzy numbers are key tools for addressing problems involving decision uncertainty. In this paper, by studying the arithmetic operations on generalized trapezoidal hesitant fuzzy numbers, modified arithmetic operations are introduced for this class of numbers so that the multiplication and division of two generalized trapezoidal hesitant fuzzy numbers are always themselves generalized trapezoidal hesitant fuzzy numbers. Furthermore, a generalized trapezoidal hesitant fuzzy number raised to a real power is again a generalized trapezoidal hesitant fuzzy number, and in the defined division the case of a zero denominator is excluded. Numerical examples show the shortcomings of the previous arithmetic operations as well as the efficiency of the operations proposed here for generalized trapezoidal hesitant fuzzy numbers. Finally, the application of the proposed arithmetic operations to solving the generalized trapezoidal hesitant fully fuzzy equation is discussed.
Title: Arithmetic Operations on Generalized Trapezoidal Hesitant Fuzzy Numbers and Their Application to Solving Generalized Trapezoidal Hesitant Fully Fuzzy Equation
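For context, the closure problem the abstract alludes to is visible already in ordinary (non-hesitant) trapezoidal fuzzy arithmetic: extension-principle addition is exact and stays trapezoidal, while the usual endpoint multiplication is only an approximation, because the true product has curved sides. A minimal sketch (the class and notation are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class Trap:
    """Trapezoidal fuzzy number (a, b, c, d): support [a, d], core [b, c]."""
    a: float
    b: float
    c: float
    d: float

def add(x: Trap, y: Trap) -> Trap:
    # extension-principle addition: exact, and still trapezoidal
    return Trap(x.a + y.a, x.b + y.b, x.c + y.c, x.d + y.d)

def mul_approx(x: Trap, y: Trap) -> Trap:
    # classical endpoint multiplication (positive numbers): only an
    # approximation -- the exact product's membership sides are curved,
    # which is the closure issue modified operations aim to avoid
    return Trap(x.a * y.a, x.b * y.b, x.c * y.c, x.d * y.d)

x = Trap(1, 2, 3, 4)
y = Trap(2, 3, 4, 5)
print(add(x, y))         # Trap(a=3, b=5, c=7, d=9)
print(mul_approx(x, y))  # Trap(a=2, b=6, c=12, d=20)
```

The paper's contribution is a set of modified operations under which products, quotients, and real powers remain in the generalized trapezoidal hesitant class by construction.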
Pub Date: 2024-02-20 | DOI: 10.1142/s0218488524500016
K. Kiruthika Devi, G. A. Sathish Kumar
Social media networks such as Facebook and Twitter have evolved into valuable platforms for global communication. However, because of its extensive user base, Twitter is often misused by illegitimate users engaging in illicit activities. Although numerous studies address combating illegitimate users on Twitter, most fail to deal with class imbalance, which significantly impacts the effectiveness of spam detection, and the few that do address it have not applied bio-inspired algorithms to balance the dataset. We therefore introduce PSOB-U, a particle swarm optimization-based undersampling technique designed to balance the Twitter dataset. In PSOB-U, various classifiers and metrics are employed to select and rank majority-class samples. Furthermore, an ensemble learning approach combines the base classifiers in three stages. During training of the base classifiers, undersampling techniques and a cost-sensitive random forest (CS-RF) address the imbalance at both the data and algorithmic levels. In the first stage, imbalanced datasets are balanced using random undersampling, particle swarm optimization-based undersampling, and random oversampling. In the second stage, a classifier is constructed for each of the balanced datasets obtained through these sampling techniques. In the third stage, majority voting aggregates the predicted outputs of the three classifiers. Evaluation results demonstrate that the proposed method significantly enhances the detection of illegitimate users in the imbalanced Twitter dataset, and comparison with existing models highlights its superiority over state-of-the-art spam detection models that address the class imbalance problem. The combination of particle swarm optimization-based undersampling and majority-voting ensemble learning yields more accurate spam detection.
Title: Bio-Inspired Algorithm Based Undersampling Approach and Ensemble Learning for Twitter Spam Detection
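The three-stage recipe (balance the data, train one learner per balanced set, majority-vote their outputs) can be sketched with plain random undersampling and nearest-centroid base learners standing in for PSOB-U and the paper's classifiers. Everything below, including the synthetic 900-vs-100 dataset, is illustrative.

```python
import numpy as np

def undersample(X, y, rng):
    """Randomly drop majority-class samples until classes are balanced
    (the role PSOB-U plays, here with plain random selection)."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    maj, mino = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    keep = rng.choice(maj, size=len(mino), replace=False)
    idx = np.concatenate([mino, keep])
    return X[idx], y[idx]

def centroid_fit(X, y):
    # trivial base learner: one centroid per class
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def centroid_predict(model, X):
    c0, c1 = model
    return (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(int)

rng = np.random.default_rng(1)
# synthetic imbalanced data: 900 legitimate (class 0) vs 100 spam (class 1)
X = np.vstack([rng.normal(0, 1, (900, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 900 + [1] * 100)

# stage 1+2: three base learners, each on an independently balanced resample
models = [centroid_fit(*undersample(X, y, rng)) for _ in range(3)]
# stage 3: majority vote across the ensemble
votes = np.stack([centroid_predict(m, X) for m in models])
pred = (votes.sum(axis=0) >= 2).astype(int)
print(round((pred == y).mean(), 3))
```

In the paper the three balanced sets come from distinct sampling strategies (random under-, PSO-based under-, and random oversampling) rather than three draws of one strategy.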
Pub Date: 2024-02-20 | DOI: 10.1142/s0218488524500028
S. J. R. K. Padminivalli V., M. V. P. Chandra Sekhara Rao
Data mining and natural language processing researchers have worked on sentiment analysis for the past decade, and deep neural networks (DNNs) have recently shown promising results for the task. Emotional sentiment analysis studies people's attitudes from data generated by sources such as Twitter and social media reviews, classifying the emotions expressed in the text. This study therefore proposes a deep learning technique for aspect-based emotional mood classification on text data that can handle large volumes of content. Text pre-processing uses stemming, segmentation, tokenization, case folding, and removal of stop words, nulls, and special characters. After pre-processing, three word-embedding approaches, the Assimilated N-gram Approach (ANA), Boosted Term Frequency-Inverse Document Frequency (BT-IDF), and Enhanced Two-Way Encoder Representation from Transformers (E-BERT), are used to extract relevant features. The features from the three approaches are concatenated using the Feature Fusion Approach (FFA), and the optimal features are selected with the Intensified Hunger Games Search Optimization (I-HGSO) algorithm. Finally, aspect-based sentiment analysis is performed using the Senti-BILSTM (Deep Aspect-EMO SentiNet) autoencoder based on the Hybrid Emotional Aspect Capsule autoencoder. Experiments were conducted on the Yelp reviews, IMDB movie reviews, Amazon reviews, and Twitter sentiment datasets, with statistical evaluation and comparison of the results in terms of accuracy, precision, specificity, F1-score, recall, and sensitivity. The model achieves accuracies of 99.26% on the Yelp reviews dataset, 99.46% on the IMDB movie reviews dataset, 99.26% on the Amazon reviews dataset, and 99.93% on the Twitter sentiment dataset.
Title: Deep Aspect-Sentinet: Aspect Based Emotional Sentiment Analysis Using Hybrid Attention Deep Learning Assisted BILSTM
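Feature fusion by concatenation, as in the FFA step, is simple to picture; the extractor outputs below are random stand-ins with invented dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for the three extractors' per-document feature vectors
ngram_feats = rng.random((5, 16))   # ANA features (dimension invented)
tfidf_feats = rng.random((5, 32))   # BT-IDF features
bert_feats  = rng.random((5, 64))   # E-BERT embeddings

# fusion: concatenate along the feature axis, one fused vector per document
fused = np.concatenate([ngram_feats, tfidf_feats, bert_feats], axis=1)
print(fused.shape)  # (5, 112)
```

A feature-selection pass (I-HGSO in the paper) would then prune columns of `fused` before classification.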
Pub Date: 2024-02-20 | DOI: 10.1142/s0218488524500053
Gül Deniz Çaylı
Uninorms combining t-conorms and t-norms on bounded lattices have lately drawn extensive interest. In this article, we propose two methods for constructing uninorms on a bounded lattice with an identity element. They exploit the presence of a t-norm (resp. t-conorm) and a closure operator (resp. interior operator) on the bounded lattice. We also include illustrative examples to highlight that our constructions differ from others in the literature.
Title: Constructing Uninorms on Bounded Lattices Through Closure and Interior Operators
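For readers new to uninorms: on the unit interval (the classical setting, not the bounded-lattice setting of the paper), any uninorm $U$ with neutral element $e \in (0,1)$ behaves as a rescaled t-norm $T$ below $e$ and a rescaled t-conorm $S$ above it, and lies between min and max elsewhere:

```latex
U(x,y)=
\begin{cases}
e\, T\!\left(\dfrac{x}{e},\, \dfrac{y}{e}\right), & (x,y)\in[0,e]^2,\\[6pt]
e+(1-e)\, S\!\left(\dfrac{x-e}{1-e},\, \dfrac{y-e}{1-e}\right), & (x,y)\in[e,1]^2,\\[6pt]
\text{a value between } \min(x,y) \text{ and } \max(x,y), & \text{otherwise.}
\end{cases}
```

On a general bounded lattice the "otherwise" region is where constructions diverge, and it is there that the paper's closure and interior operators come into play.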
Pub Date: 2023-10-01 | DOI: 10.1142/s0218488523500393
Mohsen Saffarian, Malihe Niksirat, Mehdi Ghatee, Seyed Hadi Nasseri
This paper deals with the fuzzy multi-depot bus scheduling (FMDBS) problem, in which the objective function and constraints carry fuzzy attributes. A credibility relation is used to formulate the problem as an integer multicommodity flow problem, and a novel combination of branch-and-price and heuristic algorithms is proposed to solve the FMDBS problem efficiently. In the proposed algorithm, a heuristic generates initial columns for the column generation method, and another heuristic improves the solutions generated at each node of the branch-and-price tree. Two sets of benchmark examples demonstrate the efficiency of the proposed algorithm on large-scale instances, and the algorithm is also applied to the classical multi-depot bus scheduling problem. The results show that the proposed algorithm reduces the integrality gap and computational time compared with state-of-the-art algorithms and the plain branch-and-price algorithm. Finally, as a case study, bus schedules for the Tehran BRT network are generated.
Title: Branch-and-Price Based Heuristic Algorithm for Fuzzy Multi-Depot Bus Scheduling Problem
Pub Date: 2023-10-01 | DOI: 10.1142/s0218488523500368
Adnan Khan, Muhammad Farman, Ali Akgül
This research article illustrates the notions of strong and complete Pythagorean fuzzy soft graphs (PFSGs). Different operations on PFSGs, including the union, join, lexicographic product, strong product, Cartesian product, and composition of two PFSGs, are analysed, and some properties of these products are discussed. The idea of the complement of a PFSG is also elaborated. Moreover, we establish an application of PFSGs to a decision making (DM) problem.
Title: Decision Making Under Pythagorean Fuzzy Soft Environment
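The defining feature of the Pythagorean fuzzy setting underlying PFSGs is that a membership grade mu and non-membership grade nu need only satisfy mu^2 + nu^2 <= 1, a weaker constraint than the intuitionistic mu + nu <= 1. A one-line validity check:

```python
def is_pythagorean_pair(mu: float, nu: float) -> bool:
    """A Pythagorean fuzzy grade allows mu + nu > 1 as long as
    mu^2 + nu^2 <= 1 (the defining constraint of the model)."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**2 + nu**2 <= 1

print(is_pythagorean_pair(0.8, 0.5))  # True:  0.64 + 0.25 <= 1
print(is_pythagorean_pair(0.8, 0.7))  # False: 0.64 + 0.49 > 1
```

The first pair would already be rejected by an intuitionistic fuzzy set (0.8 + 0.5 > 1), which is exactly the extra modeling room Pythagorean fuzzy structures provide.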
Pub Date: 2023-10-01 | DOI: 10.1142/s0218488523500344
D. Pavithra, K. Padmanaban, V. Kumararaja, S. Sujanthi
Autism spectrum disorder (ASD) is one of the severe neurodevelopmental disorders currently present worldwide. It is a chronic disorder that affects a person's behaviour and communication abilities. A 2019 World Health Organization study states that an increasing number of people are being diagnosed with ASD, which is concerning because the condition entails high medical expenses. Early detection can significantly lessen the impact, but traditional techniques are costly and time-consuming. This paper offers a Novel Deep Recurrent Neural Network (NDRNN) algorithm for detecting the level of autism to address these problems. The deep recurrent neural network is built from several hidden recurrent layers with Long Short-Term Memory (LSTM) units. The Artificial Algae Algorithm (AAA) is used for feature extraction, obtaining the best features from the listed feature set, and an Intelligent Water Droplet (IWD) algorithm obtains the optimal weights and bias values for the recurrent neural network. The algorithm was evaluated on a dataset collected with the Indian scale for assessment of autism. Experimental results show that the proposed model achieves 91% classification accuracy and 92% sensitivity while reducing cost.
Title: An In-Depth Analysis of Autism Spectrum Disorder Using Optimized Deep Recurrent Neural Network
Pub Date: 2023-10-01 | DOI: 10.1142/s0218488523500332
Lesheng Jin, Boris Yatsalo, Luis Martínez Lopez, Tapan Senapati, Chaker Jebari, Ronald R. Yager
Uncertainties are pervasive in increasingly practical evaluation and decision making environments. Numerical information carrying uncertainty loses some credibility, which makes it possible to use bi-polar preference based weight allocation to attach differing importance to different information granules in evaluation. However, effective methodologies and techniques are lacking for simultaneously considering the various categories of bi-polar preferences involved, rather than merely the magnitude of the main data, which ordered weighted averaging aggregation already handles well. This work proposes several types and categories of bi-polar preference that may arise in preference-laden and uncertain evaluation environments, discusses methods and techniques for eliciting preference strengths from practical settings, and suggests techniques for generating the corresponding weight vectors for bi-polar preference based information fusion. A detailed decision making procedure and a numerical example with a management background are presented, along with practical approaches for applying these preference- and uncertainty-aware aggregation techniques in decision making.
Title: A Weight Determination Model in Uncertain and Complex Bi-Polar Preference Environment
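The "magnitude of main data which ordered weighted averaging aggregation can well handle" refers to Yager's OWA operator, in which weights attach to sorted positions rather than to particular sources. The weight vectors below are the classic extremes, not the bi-polar-preference-derived vectors the paper constructs.

```python
import numpy as np

def owa(values, weights):
    """Yager's ordered weighted averaging: sort the inputs in descending
    order, then take the dot product with the weight vector, so weights
    reward or penalize magnitudes, not sources."""
    v = np.sort(np.asarray(values))[::-1]   # descending order
    return float(np.dot(v, weights))

vals = [0.2, 0.9, 0.5]
print(owa(vals, np.array([1.0, 0.0, 0.0])))  # 0.9, the max (optimistic)
print(owa(vals, np.array([0.0, 0.0, 1.0])))  # 0.2, the min (pessimistic)
print(owa(vals, np.ones(3) / 3))             # the arithmetic mean
```

The paper's weight-determination model can be read as a principled way to pick such a weight vector when several categories of bi-polar preference, not just optimism about magnitudes, must be encoded at once.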