Pub Date : 2024-12-03 DOI: 10.1016/j.ijar.2024.109340
Ofer Arieli, Jesse Heyninck
In this paper, we consider assumption-based argumentation frameworks that are based on contrapositive logics and partially-ordered preference functions. It is shown that these structures provide a general and solid platform for representing and reasoning with conflicting and prioritized arguments. Two useful properties of the preference functions are identified (selectivity and max-lower-boundedness), and extended forms of attack relations are supported (∃-attacks and ∀-attacks), which ensure several desirable properties and a variety of formal settings for argumentation-based conclusion drawing. These two variations of attacks may be further extended to collective attacks. Such (existential or universal) collective attacks make it possible to challenge a collection of assertions rather than single assertions. We show that these extensions not only enhance the expressive power of the framework, but in certain cases also enable more rational patterns of reasoning with conflicting assertions.
{"title":"Simple contrapositive assumption-based argumentation frameworks with preferences: Partial orders and collective attacks","authors":"Ofer Arieli , Jesse Heyninck","doi":"10.1016/j.ijar.2024.109340","DOIUrl":"10.1016/j.ijar.2024.109340","url":null,"abstract":"<div><div>In this paper, we consider assumption-based argumentation frameworks that are based on contrapositive logics and partially-ordered preference functions. It is shown that these structures provide a general and solid platform for representing and reasoning with conflicting and prioritized arguments. Two useful properties of the preference functions are identified (selectivity and max-lower-boundedness), and extended forms of attack relations are supported (∃–attacks and ∀-attacks), which assure several desirable properties and a variety of formal settings for argumentation-based conclusion drawing. These two variations of attacks may be further extended to collective attacks. Such (existential or universal) collective attacks allow to challenge a collective of assertions rather than single assertions. We show that these extensions not only enhance the expressive power of the framework, but in certain cases also enable more rational patterns of reasoning with conflicting assertions.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"178 ","pages":"Article 109340"},"PeriodicalIF":3.2,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143141986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-02 DOI: 10.1016/j.ijar.2024.109333
Guillaume Escamocher, Samira Pourkhajouei, Federico Toffano, Paolo Viappiani, Nic Wilson
The development of models that can cope with noisy input preferences is a critical topic in artificial intelligence methods for interactive preference elicitation. A Bayesian representation of the uncertainty in the user preference model can successfully handle this, but the large processing-time costs limit the adoption of these techniques in real-time contexts. A Bayesian approach also requires one to assume a prior distribution over the set of user preference models. In this work, dealing with multi-criteria decision problems, we instead consider a more qualitative approach to preference uncertainty, focusing on the most plausible user preference models, and aim to generate a query strategy that enables us to find an alternative that is optimal in all of the most plausible preference models. We develop a non-Bayesian algorithmic method for recommendation and interactive elicitation that considers a large number of possible user models, evaluated with respect to their degree of consistency with the input preferences. This suggests methods for generating queries that are reasonably fast to compute. We show formal asymptotic results for our algorithm, including the probability that it returns the actual best option. Our test results demonstrate the viability of our approach, including in real-time contexts, with high accuracy in recommending the most preferred alternative for the user.
{"title":"Interactive preference elicitation under noisy preference models: An efficient non-Bayesian approach","authors":"Guillaume Escamocher , Samira Pourkhajouei , Federico Toffano , Paolo Viappiani , Nic Wilson","doi":"10.1016/j.ijar.2024.109333","DOIUrl":"10.1016/j.ijar.2024.109333","url":null,"abstract":"<div><div>The development of models that can cope with noisy input preferences is a critical topic in artificial intelligence methods for interactive preference elicitation. A Bayesian representation of the uncertainty in the user preference model can be used to successfully handle this, but there are large costs in terms of the processing time which limit the adoption of these techniques in real-time contexts. A Bayesian approach also requires one to assume a prior distribution over the set of user preference models. In this work, dealing with multi-criteria decision problems, we consider instead a more qualitative approach to preference uncertainty, focusing on the most plausible user preference models, and aim to generate a query strategy that enables us to find an alternative that is optimal in all of the most plausible preference models. We develop a non-Bayesian algorithmic method for recommendation and interactive elicitation that considers a large number of possible user models that are evaluated with respect to their degree of consistency of the input preferences. This suggests methods for generating queries that are reasonably fast to compute. We show formal asymptotic results for our algorithm, including the probability that it returns the actual best option. Our test results demonstrate the viability of our approach, including in real-time contexts, with high accuracy in recommending the most preferred alternative for the user.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"178 ","pages":"Article 109333"},"PeriodicalIF":3.2,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143141987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-28 DOI: 10.1016/j.ijar.2024.109331
Arne Decadt, Alexander Erreygers, Jasper De Bock
We study how to infer new choices from prior choices using the framework of choice functions, a unifying mathematical framework for decision-making based on sets of preference orders. In particular, we define the natural (most conservative) extension of a given choice assessment to a coherent choice function—whenever possible—and use this natural extension to make new choices. We provide a practical algorithm for computing this natural extension and various ways to improve scalability. Finally, we test these algorithms for different types of choice assessments.
{"title":"Extending choice assessments to choice functions: An algorithm for computing the natural extension","authors":"Arne Decadt, Alexander Erreygers, Jasper De Bock","doi":"10.1016/j.ijar.2024.109331","DOIUrl":"10.1016/j.ijar.2024.109331","url":null,"abstract":"<div><div>We study how to infer new choices from prior choices using the framework of choice functions, a unifying mathematical framework for decision-making based on sets of preference orders. In particular, we define the natural (most conservative) extension of a given choice assessment to a coherent choice function—whenever possible—and use this natural extension to make new choices. We provide a practical algorithm for computing this natural extension and various ways to improve scalability. Finally, we test these algorithms for different types of choice assessments.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"178 ","pages":"Article 109331"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143141392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-28 DOI: 10.1016/j.ijar.2024.109332
Rodrigo F.L. Lassance, Rafael Izbicki, Rafael B. Stern
Instead of testing solely a precise hypothesis, it is often useful to enlarge it with alternatives deemed to differ negligibly from it. For instance, in a bioequivalence study one might test if the concentration of an ingredient is exactly the same in two drugs. In such a context, it might be more relevant to test the enlarged hypothesis that the difference in concentration between them is of no practical significance. While this concept is not alien to Bayesian statistics, applications remain mostly confined to parametric settings and strategies that effectively harness experts' intuitions are often scarce or nonexistent. To resolve both issues, we introduce the Pragmatic Region Oriented Test (PROTEST), an accessible nonparametric testing framework based on distortion models that can seamlessly integrate with Markov Chain Monte Carlo (MCMC) methods and is available as an R package. We develop expanded versions of model adherence, goodness-of-fit, quantile and two-sample tests. To demonstrate how PROTEST operates, we use examples, simulated studies that critically evaluate features of the test and an application on neuron spikes. Furthermore, we address the crucial issue of selecting the threshold—which controls how much a hypothesis is to be expanded—even when intuitions are limited or challenging to quantify.
{"title":"Adding imprecision to hypotheses: A Bayesian framework for testing practical significance in nonparametric settings","authors":"Rodrigo F.L. Lassance , Rafael Izbicki , Rafael B. Stern","doi":"10.1016/j.ijar.2024.109332","DOIUrl":"10.1016/j.ijar.2024.109332","url":null,"abstract":"<div><div>Instead of testing solely a precise hypothesis, it is often useful to enlarge it with alternatives deemed to differ negligibly from it. For instance, in a bioequivalence study one might test if the concentration of an ingredient is exactly the same in two drugs. In such a context, it might be more relevant to test the enlarged hypothesis that the difference in concentration between them is of no practical significance. While this concept is not alien to Bayesian statistics, applications remain mostly confined to parametric settings and strategies that effectively harness experts' intuitions are often scarce or nonexistent. To resolve both issues, we introduce the Pragmatic Region Oriented Test (<span>PROTEST</span>), an accessible nonparametric testing framework based on distortion models that can seamlessly integrate with Markov Chain Monte Carlo (MCMC) methods and is available as an <span>R</span> package. We develop expanded versions of model adherence, goodness-of-fit, quantile and two-sample tests. To demonstrate how <span>PROTEST</span> operates, we use examples, simulated studies that critically evaluate features of the test and an application on neuron spikes. Furthermore, we address the crucial issue of selecting the threshold—which controls how much a hypothesis is to be expanded—even when intuitions are limited or challenging to quantify.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"178 ","pages":"Article 109332"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142757051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-26 DOI: 10.1016/j.ijar.2024.109328
Haifei Zhang, Benjamin Quost, Marie-Hélène Masson
Classifiers now demonstrate impressive performance in many domains. However, in some applications where the cost of an erroneous decision is high, set-valued predictions may be preferable to classical crisp decisions, being less informative but more reliable. Cautious classifiers aim at producing such imprecise predictions so as to reduce the risk of making wrong decisions. In this paper, we describe two cautious classification approaches rooted in the ensemble learning paradigm, which consist of combining probability intervals. These intervals are aggregated within the framework of belief functions, using two proposed strategies that can be regarded as generalizations of classical averaging and voting. Our strategies aim at maximizing the lower expected discounted utility to achieve a good compromise between model accuracy and determinacy. The efficiency and performance of the proposed procedure are illustrated using imprecise decision trees, yielding cautious variants of the random forest classifier. The performance and properties of these variants are evaluated on 15 datasets.
{"title":"Cautious classifier ensembles for set-valued decision-making","authors":"Haifei Zhang , Benjamin Quost , Marie-Hélène Masson","doi":"10.1016/j.ijar.2024.109328","DOIUrl":"10.1016/j.ijar.2024.109328","url":null,"abstract":"<div><div>Classifiers now demonstrate impressive performances in many domains. However, in some applications where the cost of an erroneous decision is high, set-valued predictions may be preferable to classical crisp decisions, being less informative but more reliable. Cautious classifiers aim at producing such imprecise predictions so as to reduce the risk of making wrong decisions. In this paper, we describe two cautious classification approaches rooted in the ensemble learning paradigm, which consist in combining probability intervals. These intervals are aggregated within the framework of belief functions, using two proposed strategies that can be regarded as generalizations of classical averaging and voting. Our strategies aim at maximizing the lower expected discounted utility to achieve a good compromise between model accuracy and determinacy. The efficiency and performance of the proposed procedure are illustrated using imprecise decision trees, thus giving birth to cautious variants of the random forest classifier. The performance and properties of these variants are illustrated using 15 datasets.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"177 ","pages":"Article 109328"},"PeriodicalIF":3.2,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142723044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-22 DOI: 10.1016/j.ijar.2024.109329
Sana Afreen, Ajay Kumar Bhurjee
This paper delves into interval-valued bimatrix games, where precise payoffs remain elusive but lower and upper bounds on payoffs can be determined. The study explores several key questions in this context. Firstly, it addresses the existence of a universally applicable equilibrium across all instances of the interval values. The paper establishes a fundamental equivalence by demonstrating that this property hinges on the solvability of a specific system of interval linear inequalities. Secondly, the research characterizes the comprehensive sets of weak and strong equilibria using systems of interval linear inequalities. These findings shed light on the complexities and intricacies of interval-valued bimatrix games, offering valuable insights into their equilibrium properties and computational aspects. Through illustrative examples, we underscore the practical utility of these approaches and compare them with previously developed state-of-the-art methods, demonstrating their ability to generate conservative solutions in the face of interval uncertainty. Beyond these theoretical insights, the findings also have practical implications: in particular, the paper examines real-life applications, exemplifying their significance for crude oil trading decision-making.
{"title":"Existence of optimal strategies in bimatrix game and applications","authors":"Sana Afreen, Ajay Kumar Bhurjee","doi":"10.1016/j.ijar.2024.109329","DOIUrl":"10.1016/j.ijar.2024.109329","url":null,"abstract":"<div><div>This paper delves into interval-valued bimatrix games, where precise payoffs remain elusive, but lower and upper bounds on payoffs can be determined. The study explores several key questions in this context. Firstly, it addresses the issue of the existence of a universally applicable equilibrium across all instances of interval values. The paper establishes a fundamental equivalence by demonstrating that this property hinges on the solvability of a specific system of interval linear inequalities. Secondly, the research endeavors to characterize the comprehensive set of weak and strong equilibrium using a system of interval linear inequalities. The findings in this paper shed light on the complexities and intricacies of interval-valued bimatrix games, offering valuable insights into their equilibrium properties and computational aspects. Through illustrative examples, we underscore the practical utility of these approaches and compare them with previously developed state-of-the-art methods, demonstrating their ability to generate conservative solutions in the face of interval uncertainty. The findings of this research not only offer valuable insights into the equilibrium properties and computational aspects of interval-valued bimatrix games but extend their practical implications. In particular, the paper delves into real-life applications, exemplifying the significance of these findings for crude oil trading decision-making.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"177 ","pages":"Article 109329"},"PeriodicalIF":3.2,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142723046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-22 DOI: 10.1016/j.ijar.2024.109327
Enliang Yan, Pengfei Zhang, Tianyong Hao, Tao Zhang, Jianping Yu, Yuncheng Jiang, Yuan Yang
The computation of concept distances aids in understanding the interrelations among entities within knowledge graphs and uncovering implicit information. Existing studies predominantly focus on the conceptual distance of specific hierarchical levels without offering a unified framework for comprehensive exploration. To overcome the limitations of unidimensional approaches, this paper proposes a method for calculating concept distances at multiple granularities based on a three-way partial order structure. Specifically: (1) this study introduces a methodology for calculating inter-object similarity based on the three-way attribute partial order structure (APOS); (2) it proposes applying the similarity matrix to delineate the structure of categories; (3) based on the similarity matrix describing the three-way APOS of categories, we establish a novel method for calculating inter-category distance. Experiments on eight datasets demonstrate that this approach effectively differentiates various concepts and computes their distances. When applied to classification tasks, it exhibits outstanding performance.
{"title":"An approach to calculate conceptual distance across multi-granularity based on three-way partial order structure","authors":"Enliang Yan , Pengfei Zhang , Tianyong Hao , Tao Zhang , Jianping Yu , Yuncheng Jiang , Yuan Yang","doi":"10.1016/j.ijar.2024.109327","DOIUrl":"10.1016/j.ijar.2024.109327","url":null,"abstract":"<div><div>The computation of concept distances aids in understanding the interrelations among entities within knowledge graphs and uncovering implicit information. The existing studies predominantly focus on the conceptual distance of specific hierarchical levels without offering a unified framework for comprehensive exploration. To overcome the limitations of unidimensional approaches, this paper proposes a method for calculating concept distances at multiple granularities based on a three-way partial order structure. Specifically: (1) this study introduces a methodology for calculating inter-object similarity based on the three-way attribute partial order structure (APOS); (2) It proposes the application of the similarity matrix to delineate the structure of categories; (3) Based on the similarity matrix describing the three-way APOS of categories, we establish a novel method for calculating inter-category distance. The experiments on eight datasets demonstrate that this approach effectively differentiates various concepts and computes their distances. When applied to classification tasks, it exhibits outstanding performance.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"177 ","pages":"Article 109327"},"PeriodicalIF":3.2,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142723048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-22 DOI: 10.1016/j.ijar.2024.109330
Tathagata Basu, Matthias C.M. Troffaes
Causal effect estimation is a critical task in statistical learning that aims to find the causal effect on subjects by identifying causal links between a number of predictor (or explanatory) variables and the outcome of a treatment. In a regression framework, we assign a treatment and outcome model to estimate the average causal effect. Additionally, for high-dimensional regression problems, variable selection methods are used to find a subset of predictor variables that maximises the predictive performance of the underlying model, for better estimation of the causal effect. In this paper, we propose a different approach: we focus on the variable selection aspects of the high-dimensional causal estimation problem. We suggest a cautious Bayesian group LASSO (least absolute shrinkage and selection operator) framework for variable selection using prior sensitivity analysis. We argue that in some cases abstaining from selecting (or rejecting) a predictor is beneficial, and that we should then gather more information to obtain a more decisive result. We also show that for problems with very limited information, expert-elicited variable selection can give us a more stable causal effect estimate as it avoids overfitting. Lastly, we carry out a comparative study with a synthetic dataset and show the applicability of our method in real-life situations.
{"title":"Robust Bayesian causal estimation for causal inference in medical diagnosis","authors":"Tathagata Basu , Matthias C.M. Troffaes","doi":"10.1016/j.ijar.2024.109330","DOIUrl":"10.1016/j.ijar.2024.109330","url":null,"abstract":"<div><div>Causal effect estimation is a critical task in statistical learning that aims to find the causal effect on subjects by identifying causal links between a number of predictor (or, explanatory) variables and the outcome of a treatment. In a regressional framework, we assign a treatment and outcome model to estimate the average causal effect. Additionally, for high dimensional regression problems, variable selection methods are also used to find a subset of predictor variables that maximises the predictive performance of the underlying model for better estimation of the causal effect. In this paper, we propose a different approach. We focus on the variable selection aspects of high dimensional causal estimation problem. We suggest a cautious Bayesian group LASSO (least absolute shrinkage and selection operator) framework for variable selection using prior sensitivity analysis. We argue that in some cases, abstaining from selecting (or, rejecting) a predictor is beneficial and we should gather more information to obtain a more decisive result. We also show that for problems with very limited information, expert elicited variable selection can give us a more stable causal effect estimation as it avoids overfitting. Lastly, we carry a comparative study with synthetic dataset and show the applicability of our method in real-life situations.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"177 ","pages":"Article 109330"},"PeriodicalIF":3.2,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142723045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-14 DOI: 10.1016/j.ijar.2024.109326
Pham Viet Anh, Nguyen Ngoc Thuy, Le Hoang Son, Tran Hung Cuong, Nguyen Long Giang
The intuitionistic fuzzy set theory is recognized as an effective approach for attribute reduction in decision information systems containing numerical or continuous data, particularly in cases of noisy data. However, this approach involves complex computations due to the participation of both the membership and non-membership functions, making it less feasible for data tables with a large number of objects. Additionally, in some practical scenarios, dynamic data tables may change in the number of objects, such as the addition or removal of objects. To overcome these challenges, we propose a novel and efficient incremental attribute reduction method based on α,β-level intuitionistic fuzzy sets. Specifically, we first utilize the key properties of α,β-level intuitionistic fuzzy sets to construct a distance measure between two α,β-level intuitionistic fuzzy partitions. This extension of the intuitionistic fuzzy set model helps reduce noise in the data and shrink the computational space. Subsequently, we define a new reduct and design an efficient algorithm to identify an attribute subset in fixed decision tables. For dynamic decision tables, we develop two incremental calculation formulas based on the distance measure between two α,β-level intuitionistic fuzzy partitions to improve processing time. Accordingly, some important properties of the distance measures are also clarified. Finally, we design two incremental attribute reduction algorithms that handle the addition and removal of objects. Experimental results have demonstrated that our method is more effective than incremental methods based on fuzzy rough set and intuitionistic fuzzy set approaches in terms of execution time and classification accuracy from the obtained reduct.
{"title":"Incremental attribute reduction with α,β-level intuitionistic fuzzy sets","authors":"Pham Viet Anh , Nguyen Ngoc Thuy , Le Hoang Son , Tran Hung Cuong , Nguyen Long Giang","doi":"10.1016/j.ijar.2024.109326","DOIUrl":"10.1016/j.ijar.2024.109326","url":null,"abstract":"<div><div>The intuitionistic fuzzy set theory is recognized as an effective approach for attribute reduction in decision information systems containing numerical or continuous data, particularly in cases of noisy data. However, this approach involves complex computations due to the participation of both the membership and non-membership functions, making it less feasible for data tables with a large number of objects. Additionally, in some practical scenarios, dynamic data tables may change in the number of objects, such as the addition or removal of objects. To overcome these challenges, we propose a novel and efficient incremental attribute reduction method based on <span><math><mi>α</mi><mo>,</mo><mi>β</mi></math></span>-level intuitionistic fuzzy sets. Specifically, we first utilize the key properties of <span><math><mi>α</mi><mo>,</mo><mi>β</mi></math></span>-level intuitionistic fuzzy sets to construct a distance measure between two <span><math><mi>α</mi><mo>,</mo><mi>β</mi></math></span>-level intuitionistic fuzzy partitions. This extension of the intuitionistic fuzzy set model helps reduce noise in the data and shrink the computational space. Subsequently, we define a new reduct and design an efficient algorithm to identify an attribute subset in fixed decision tables. For dynamic decision tables, we develop two incremental calculation formulas based on the distance measure between two <span><math><mi>α</mi><mo>,</mo><mi>β</mi></math></span>-level intuitionistic fuzzy partitions to improve processing time. Accordingly, some important properties of the distance measures are also clarified. Finally, we design two incremental attribute reduction algorithms that handle the addition and removal of objects. Experimental results have demonstrated that our method is more effective than incremental methods based on fuzzy rough set and intuitionistic fuzzy set approaches in terms of execution time and classification accuracy from the obtained reduct.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"176 ","pages":"Article 109326"},"PeriodicalIF":3.2,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-13 DOI: 10.1016/j.ijar.2024.109319
Annamaria Porreca, Fabrizio Maturo, Viviana Ventre
The motivation behind this research stems from the inherent complexity and vagueness in human social interactions, which traditional Social Network Analysis (SNA) approaches often fail to capture adequately. Conventional SNA methods typically represent relationships as binary or weighted ties, thereby losing the subtle nuances and inherent uncertainty in real-world social connections. The need to preserve the vagueness of social relations and provide a more accurate representation of these relationships motivates the introduction of a fuzzy-based approach to SNA. This paper proposes a novel framework for Fuzzy Social Network Analysis (FSNA), which extends traditional SNA to accommodate the vagueness of relationships. The proposed method redefines the ties between nodes as fuzzy numbers rather than crisp values and introduces a comprehensive set of fuzzy centrality indices, including fuzzy degree centrality, fuzzy betweenness centrality, and fuzzy closeness centrality, among others. These indices are designed to measure the importance and influence of nodes within a network while preserving the uncertainty in the relationships between them. The applicability of the proposed framework is demonstrated through a case study involving a university department's collaboration network, where relationships between faculty members are analyzed using data collected via a fascinating mouse-tracking technique.
{"title":"Fuzzy centrality measures in social network analysis: Theory and application in a university department collaboration network","authors":"Annamaria Porreca , Fabrizio Maturo , Viviana Ventre","doi":"10.1016/j.ijar.2024.109319","DOIUrl":"10.1016/j.ijar.2024.109319","url":null,"abstract":"<div><div>The motivation behind this research stems from the inherent complexity and vagueness in human social interactions, which traditional Social Network Analysis (SNA) approaches often fail to capture adequately. Conventional SNA methods typically represent relationships as binary or weighted ties, thereby losing the subtle nuances and inherent uncertainty in real-world social connections. The need to preserve the vagueness of social relations and provide a more accurate representation of these relationships motivates the introduction of a fuzzy-based approach to SNA. This paper proposes a novel framework for Fuzzy Social Network Analysis (FSNA), which extends traditional SNA to accommodate the vagueness of relationships. The proposed method redefines the ties between nodes as fuzzy numbers rather than crisp values and introduces a comprehensive set of fuzzy centrality indices, including fuzzy degree centrality, fuzzy betweenness centrality, and fuzzy closeness centrality, among others. These indices are designed to measure the importance and influence of nodes within a network while preserving the uncertainty in the relationships between them. The applicability of the proposed framework is demonstrated through a case study involving a university department's collaboration network, where relationships between faculty members are analyzed using data collected via a fascinating mouse-tracking technique.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"176 ","pages":"Article 109319"},"PeriodicalIF":3.2,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142654678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}