Pub Date: 2024-08-28 | DOI: 10.1016/j.ijar.2024.109283
Helene Fargier , Romain Guillaume
Dempster-Shafer theory of evidence is a framework that is expressive enough to represent both ignorance and probabilistic information. However, decision models based on belief functions proposed in the literature face limitations in a sequential context: they either abandon the principle of dynamic consistency, restrict the combination of lotteries, or relax the requirement for transitive and complete comparisons. This work formally establishes that these requirements are indeed incompatible when any form of compensation is considered. It then demonstrates that these requirements can be satisfied in non-compensatory frameworks by introducing and characterizing a dynamically consistent rule based on first-order dominance.
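For readers unfamiliar with the framework, belief and plausibility can be computed from a mass assignment as follows — a generic Dempster-Shafer sketch with made-up masses, not the paper's decision rule:

```python
# Mass function on the frame {a, b, c}: partial ignorance is expressed
# by assigning mass to non-singleton focal sets (values are hypothetical).
m = {frozenset({'a'}): 0.5,
     frozenset({'a', 'b', 'c'}): 0.5}  # half the mass is total ignorance

def bel(A, m):
    # Belief: total mass committed to subsets of A.
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    # Plausibility: total mass of focal sets compatible with A.
    return sum(v for B, v in m.items() if B & A)

A = frozenset({'a'})
print(bel(A, m), pl(A, m))  # 0.5 1.0
```

The gap between belief and plausibility is exactly the ignorance the framework is designed to represent.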
Title: Decision with belief functions and generalized independence: Two impossibility theorems. International Journal of Approximate Reasoning, Vol. 175, Article 109283.
Pub Date: 2024-08-22 | DOI: 10.1016/j.ijar.2024.109271
Mingjie Cai , Haichao Wang , Feng Xu , Qingguo Li
The neighborhood threshold in the neighborhood rough set has a significant impact on the neighborhood relation. When the neighborhood threshold of an object exceeds the critical value, the labels of objects in the neighborhood are no longer completely consistent, and the critical value of each object often differs. Most existing neighborhood rough set models cannot adaptively regulate the neighborhood threshold. In this paper, we introduce a novel neighborhood rough set model that incorporates a self-tuning mechanism for the neighborhood threshold, taking into account the distribution of objects across different areas. The neighborhood margin is a measure proposed to assess the condition of a neighborhood: it is calculated by subtracting the neighborhood threshold from the closest distance between heterogeneous elements. The neighborhood margin accurately represents the local state of the neighborhood while taking decision information into account. The margin neighborhood, which self-tunes the neighborhood threshold, is then proposed. Finally, we introduce the margin neighborhood rough set model and a margin neighborhood-based attribute reduction algorithm, and explore the relationship between the proposed model and the classical neighborhood rough set model. Experiments examine the performance of reducts under various measures and demonstrate that the margin neighborhood rough set effectively reduces the uncertainty of neighborhood granules, leading to excellent classification performance compared with other state-of-the-art neighborhood-based models.
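The margin computation described above can be sketched directly — a minimal reading of the stated definition on hypothetical data; the paper's exact distance measure and tuning mechanism may differ:

```python
import numpy as np

def neighborhood_margin(X, y, i, threshold):
    """Margin of object i: distance to its nearest heterogeneous
    neighbor (an object with a different label) minus the neighborhood
    threshold. A positive margin means the neighborhood of radius
    `threshold` around object i contains only same-label objects."""
    dists = np.linalg.norm(X - X[i], axis=1)
    hetero = dists[y != y[i]]
    return hetero.min() - threshold

# Toy data: two class-0 points near the origin, one class-1 point at x=1.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0]])
y = np.array([0, 0, 1])
print(neighborhood_margin(X, y, 0, 0.5))  # 1.0 - 0.5 = 0.5
```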
Title: Neighborhood margin rough set: Self-tuning neighborhood threshold. International Journal of Approximate Reasoning, Vol. 174, Article 109271.
Pub Date: 2024-08-22 | DOI: 10.1016/j.ijar.2024.109272
Binghan Long , Tingquan Deng , Yiyu Yao , Weihua Xu
Three-way concept lattices (TCLs) have been widely explored due to their clear hierarchical structures, concise visual description and good interpretability. In contrast to classic formal contexts, lattice-valued fuzzy contexts exhibit great capability in describing and representing concepts with uncertainty. Departing from conventional approaches to the study of TCLs, this paper investigates the algebraic structure and properties of the three-way concept lattice (TCL) stemming from the positive and negative concept lattices in a lattice-valued formal context. Several associated concept lattices, such as the Cartesian product of the positive and negative concept lattices (i.e., the pos-neg lattice) and the lattices induced from partitions of the pos-neg lattice, are explored together with their relationships. Specifically, isomorphism, embedding and order-preserving mappings between them are built. The quotient set of the pos-neg lattice under a suitably defined equivalence relation is a complete lattice, and each equivalence class is a lower semi-lattice. It is further shown that the structure of the TCL is intrinsically and wholly determined by the pos-neg lattice. A practical application of the developed TCL theory is provided to sort alternatives in multi-criteria decision making.
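The component-wise order on a product of two lattices — the construction behind the pos-neg lattice — can be illustrated with a toy sketch, using subset order as a stand-in for the concept order (hypothetical data; meets in an actual concept lattice are more involved than plain intersection):

```python
# Pairs (positive part, negative part) ordered component-wise:
# (p1, n1) <= (p2, n2) iff p1 <= p2 and n1 <= n2.
def leq(c1, c2):
    (p1, n1), (p2, n2) = c1, c2
    return p1 <= p2 and n1 <= n2

def meet(c1, c2):
    # In a product lattice, the meet is computed component-wise;
    # with subset order, each component's meet is the intersection.
    (p1, n1), (p2, n2) = c1, c2
    return (p1 & p2, n1 & n2)

a = (frozenset({1, 2}), frozenset({3}))
b = (frozenset({2}), frozenset({3, 4}))
print(meet(a, b))          # (frozenset({2}), frozenset({3}))
print(leq(meet(a, b), a))  # True: the meet is a lower bound
```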
Title: Three-way concept lattice from adjunctive positive and negative concepts. International Journal of Approximate Reasoning, Vol. 174, Article 109272.
In this article, we present a new approach to lattice generation for complex and heterogeneous data using the concept of convexity. This builds on work we have already carried out, albeit intuitively [11], in which we proposed the NextPriorityConcept algorithm for generating a meet-semilattice of concepts based on suitable descriptions and strategies. Here, we revisit the essential properties of our description spaces using a stronger formalism based on the properties of closure operators.
Title: Description lattices of generalised convex hulls. Authors: Christophe Demko, Karell Bertet, Jean-François Viaud, Cyril Faucher, Damien Mondou. Pub Date: 2024-08-21 | DOI: 10.1016/j.ijar.2024.109269. International Journal of Approximate Reasoning, Vol. 174, Article 109269.
Pub Date: 2024-08-12 | DOI: 10.1016/j.ijar.2024.109268
Langwangqing Suo , Han Yang , Qiaoyi Li , Hai-Long Yang , Yiyu Yao
A theory of three-way decision is about thinking, problem-solving, and computing in threes or through triads. In this paper, we review fifteen years of research on three-way decision by using the philosophy-theory-application triad and the who-what-when triad. First, we discuss the philosophy, theory, and application of three-way decision. At the philosophy level, we delve into the philosophical roots and fundamental nature of three-way decision to reveal the underlying philosophical thinking. At the theory level, we provide an insightful analysis of the theory and methodology of three-way decision. At the application level, we examine the integration of three-way decision with other theories and their applications and effectiveness in real-world scenarios. Second, we focus on bibliometric analysis by using the who-what-when triad, which attempts to answer a fundamental question of "who did what when". We propose a 3×3 model by applying the 3×3 method of three-way decision. The first 3 is the author-topic-time triad. The second 3 represents a three-level analysis for each of the first three: (1) categorizing authors into the three levels of prolific authors, frequent authors, and occasional authors, (2) classifying topics into the three levels of core topics, emerging topics, and to-be-explored topics, and (3) dividing articles into the three levels of initial investigations, further developments, and most recent studies. Finally, we perform a bibliometric analysis of three-way decision articles by using the 3×3 model of three-way decision. The results not only reveal the current status and trend of three-way decision research but also provide a road map for future research.
Title: A review of three-way decision: Triadic understanding, organization, and perspectives. International Journal of Approximate Reasoning, Vol. 173, Article 109268.
Pub Date: 2024-08-10 | DOI: 10.1016/j.ijar.2024.109267
Federico M. Schmidt, Sebastian Gottifredi, Alejandro J. García
The automatic identification of argument units within a text is a crucial task, as it is the first step that should be performed by an end-to-end argument mining system. In this work, we propose an approach for categorizing errors in predicted argument units, which allows the evaluation of segmentation models from an argumentative perspective. We assess the ability of several models to generalize knowledge across different text domains and, through the proposed categorization, we show differences in their behavior that may not be noticeable using standard classification metrics. Furthermore, we assess how errors in predicted argument units impact a task that relies on accurate unit identification, an aspect that has not been studied in previous research and that helps to evaluate the usability of an imperfect segmentation model beyond the segmentation task itself.
Title: Identifying arguments within a text: Categorizing errors and their impact in arguments' relation prediction. International Journal of Approximate Reasoning, Vol. 173, Article 109267.
Pub Date: 2024-08-08 | DOI: 10.1016/j.ijar.2024.109266
Tom Davot , Tuan-Anh Vu , Sébastien Destercke , David Savourey
Many works within robust combinatorial optimisation consider interval-valued costs or constraints. While most of these works focus on finding a unique solution following a robust criterion such as minimax, a few consider the problem of characterising a set of possibly optimal solutions. This paper is situated within this line of work, and considers the problem of exactly enumerating the set of possibly optimal matroids under interval-valued costs. We show in particular that each solution in this set can be obtained through a polynomial procedure, and provide an efficient algorithm to achieve the enumeration.
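The notion of possible optimality under interval-valued costs can be illustrated on the simplest case, a rank-1 uniform matroid where exactly one element is chosen — a toy check of our own for illustration; the paper handles general matroids with an exact enumeration procedure:

```python
def possibly_optimal(intervals):
    """Choose one element to minimise cost, each cost known only as an
    interval (lo, hi). Element i is possibly optimal iff some cost
    scenario makes it cheapest: lo_i <= min over j != i of hi_j."""
    flags = []
    for i, (lo_i, _) in enumerate(intervals):
        best_rival = min(hi for j, (_, hi) in enumerate(intervals) if j != i)
        flags.append(lo_i <= best_rival)
    return flags

intervals = [(1, 3), (2, 4), (5, 6)]
print(possibly_optimal(intervals))  # [True, True, False]
```

Element 2 is dominated in every scenario (its lower bound 5 exceeds the rivals' upper bounds), so it can never be optimal; the first two elements each win under some realisation of the costs.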
Title: On the enumeration of non-dominated matroids with imprecise weights. International Journal of Approximate Reasoning, Vol. 174, Article 109266.
Pub Date: 2024-08-02 | DOI: 10.1016/j.ijar.2024.109265
Susana Furtado , Charles R. Johnson
Our primary interest is in understanding reciprocal matrices all of whose efficient vectors are ordinally the same, i.e., there is only one efficient order (we call these matrices uniformly ordered, UO). These are reciprocal matrices for which no efficient vector produces strict order reversals. A reciprocal matrix is called column ordered (CO) if each column is ordinally the same. Efficient vectors for a CO matrix with the same order as the columns always exist. For example, the entry-wise geometric mean of some or all columns of a reciprocal matrix is efficient and, if the matrix is CO, has the same order as the columns. A necessary, but not sufficient, condition for UO is that the matrix be CO and that the unique efficient order be satisfied (possibly weakly) by the columns. In the case n = 3, CO is necessary and sufficient for UO, but not for n > 3. We characterize the 4-by-4 UO matrices and identify the three possible alternate orders when the matrix is CO (and give entry-wise conditions for their occurrence). We also describe the simple perturbed consistent matrices that are UO. Some of the technology developed for this purpose is of independent interest.
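The efficiency claim about the entry-wise geometric mean can be made concrete — a sketch on a small consistent matrix of our own; the paper's results cover general reciprocal matrices:

```python
import numpy as np

# A consistent 3-by-3 reciprocal (pairwise comparison) matrix:
# a_ji = 1 / a_ij (toy data).
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])

# Entry-wise geometric mean across the columns (i.e. per row): a
# classical efficient weight vector for reciprocal matrices.
w = np.prod(A, axis=1) ** (1.0 / A.shape[1])
w = w / w.sum()
print(w)  # [4/7, 2/7, 1/7]

# The ranking induced by w agrees with the (identical) order of every
# column, so this matrix is column ordered (CO).
print(np.argsort(-w))  # [0 1 2]
```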
Title: Pairwise comparison matrices with uniformly ordered efficient vectors. International Journal of Approximate Reasoning, Vol. 173, Article 109265.
Pub Date: 2024-07-29 | DOI: 10.1016/j.ijar.2024.109264
A. Hovhannisyan , A.E. Allahverdyan
The common cause principle for two random variables A and B is examined in the case of causal insufficiency, when their common cause C is known to exist, but only the joint probability of A and B is observed. As a result, C cannot be uniquely identified (the latent confounder problem). We show that the generalized maximum likelihood method can be applied to this situation and allows identification of a C that is consistent with the common cause principle. It closely relates to the maximum entropy principle. Investigation of the case of two binary symmetric variables reveals a non-analytic behavior of conditional probabilities reminiscent of a second-order phase transition. This occurs during the transition from correlation to anti-correlation in the observed probability distribution. The relation between the generalized likelihood approach and alternative methods, such as predictive likelihood and minimum common entropy, is discussed. The consideration of the common cause for three observed variables (and one hidden cause) uncovers causal structures that defy representation through directed acyclic graphs with the Markov condition.
Title: The most likely common cause. International Journal of Approximate Reasoning, Vol. 173, Article 109264.