
Latest Publications: International Journal of Approximate Reasoning

A dialectical formalisation of preferred subtheories reasoning under resource bounds
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-17 · DOI: 10.1016/j.ijar.2025.109385
Kees van Berkel , Marcello D'Agostino , Sanjay Modgil
Dialectical Classical Argumentation (Dialectical Cl-Arg) has been shown to satisfy rationality postulates under resource bounds. In particular, the consistency and non-contamination postulates are satisfied despite dropping the assumption of logical omniscience and the consistency and subset minimality checks on arguments' premises that are deployed by standard approaches to Cl-Arg. This paper studies Dialectical Cl-Arg's formalisation of Preferred Subtheories (PS) non-monotonic reasoning under resource bounds. The contribution of this paper is twofold. First, we establish soundness and completeness for Dialectical Cl-Arg's credulous consequence relation under the preferred semantics and credulous PS consequences. This result paves the way for the use of argument game proof theories and dialogues that establish membership of arguments in admissible (and so preferred) extensions, and hence the credulous PS consequences of a belief base. Second, we refine the non-standard characteristic function for Dialectical Cl-Arg, and use this refined function to show soundness for Dialectical Cl-Arg consequences under the grounded semantics and resource-bounded sceptical PS consequence. We provide a counterexample that shows that completeness does not hold. However, we also show that the grounded consequences defined by Dialectical Cl-Arg strictly subsume the grounded consequences defined by standard Cl-Arg formalisations of PS, so that we recover sceptical PS consequences that one would intuitively expect to hold.
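For readers unfamiliar with grounded semantics, here is a minimal Python sketch that computes the grounded extension of an abstract argumentation framework as the least fixpoint of Dung's standard characteristic function. It is background illustration only, not the paper's refined non-standard characteristic function for Dialectical Cl-Arg, and the argument names are hypothetical.

```python
# Grounded extension as the least fixpoint of Dung's characteristic function
# F(S) = {a | every attacker of a is attacked by some member of S}.
# Illustrative only; Dialectical Cl-Arg uses a refined, non-standard function.

def grounded_extension(arguments, attacks):
    """attacks is a set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, S):
        # a is acceptable w.r.t. S if every attacker of a is attacked by S
        return all(any((s, b) in attacks for s in S) for b in attackers_of[a])

    S = set()
    while True:
        new_S = {a for a in arguments if defended(a, S)}
        if new_S == S:
            return S
        S = new_S

if __name__ == "__main__":
    args = {"A", "B", "C"}
    atts = {("A", "B"), ("B", "C")}        # A attacks B, B attacks C
    print(grounded_extension(args, atts))  # -> {'A', 'C'} (order may vary)
```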
{"title":"A dialectical formalisation of preferred subtheories reasoning under resource bounds","authors":"Kees van Berkel ,&nbsp;Marcello D'Agostino ,&nbsp;Sanjay Modgil","doi":"10.1016/j.ijar.2025.109385","DOIUrl":"10.1016/j.ijar.2025.109385","url":null,"abstract":"<div><div><em>Dialectical Classical Argumentation</em> (Dialectical <em>Cl-Arg</em>) has been shown to satisfy rationality postulates under resource bounds. In particular, the consistency and non-contamination postulates are satisfied despite dropping the assumption of logical omniscience and the consistency and subset minimality checks on arguments' premises that are deployed by standard approaches to <em>Cl-Arg</em>. This paper studies Dialectical <em>Cl-Arg</em>'s formalisation of Preferred Subtheories (<em>PS</em>) non-monotonic reasoning under resource bounds. The contribution of this paper is twofold. First, we establish soundness and completeness for Dialectical <em>Cl-Arg</em>'s credulous consequence relation under the <em>preferred</em> semantics and credulous <em>PS</em> consequences. This result paves the way for the use of argument game proof theories and dialogues that establish membership of arguments in admissible (and so preferred) extensions, and hence the credulous <em>PS</em> consequences of a belief base. Second, we refine the non-standard characteristic function for Dialectical <em>Cl-Arg</em>, and use this refined function to show soundness for Dialectical <em>Cl-Arg</em> consequences under the grounded semantics and resource-bounded sceptical <em>PS</em> consequence. We provide a counterexample that shows that completeness does not hold. However, we also show that the grounded consequences defined by Dialectical <em>Cl-Arg</em> strictly subsume the grounded consequences defined by standard <em>Cl-Arg</em> formalisations of <em>PS</em>, so that we recover sceptical <em>PS</em> consequences that one would intuitively expect to hold.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109385"},"PeriodicalIF":3.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An unsupervised feature extraction and fusion framework for multi-source data based on copula theory
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-13 · DOI: 10.1016/j.ijar.2025.109384
Xiuwei Chen, Li Lai, Maokang Luo
With the development of big data technology, people are increasingly facing the challenge of dealing with massive amounts of multi-source or multi-sensor data. Therefore, it becomes crucial to extract valuable information from such data. Information fusion techniques provide effective solutions for handling multi-source data and can be categorized into three levels: data-level fusion, feature-level fusion, and decision-level fusion. Feature-level fusion combines features from multiple sources to create a consolidated feature, enhancing information richness. This paper proposes an unsupervised feature extraction and fusion method for multi-source data that utilizes the R-Vine copula, denoted as CF. The method starts by performing kernel density estimation to extract each data source's marginal density and distribution. Next, the maximum spanning tree is employed to select a vine structure for each attribute, and the corresponding copulas are chosen using maximum likelihood estimation and the AIC criterion. The joint probability density of each attribute across all information sources can be obtained by utilizing the relevant vine structure and copulas, serving as the final fusion feature. Finally, the proposed method is evaluated on eighteen simulated datasets and six real datasets. The results indicate that compared to several state-of-the-art fusion methods, the CF method can significantly enhance the classification accuracy of popular classifiers such as KNN, SVM, and Logistic Regression.
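A much-simplified sketch of the copula-fusion idea follows, assuming just two sources and a single bivariate Gaussian copula in place of the paper's R-Vine construction; the maximum-spanning-tree structure selection and AIC-based pair-copula choice are omitted, and all data here are synthetic.

```python
# Simplified copula fusion for two sources: KDE marginals, empirical-CDF
# transforms, then a bivariate Gaussian copula density. The fused feature is
# the joint density = copula density * product of marginal densities (Sklar).
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)                    # source 1 (toy data)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=500)   # source 2, correlated with source 1

# Marginal densities via KDE and probability-integral transforms (empirical CDF).
kde1, kde2 = gaussian_kde(x1), gaussian_kde(x2)
u1 = np.array([np.mean(x1 <= v) for v in x1]).clip(1e-3, 1 - 1e-3)
u2 = np.array([np.mean(x2 <= v) for v in x2]).clip(1e-3, 1 - 1e-3)

# Gaussian copula density c(u1, u2) with correlation estimated on normal scores.
z1, z2 = norm.ppf(u1), norm.ppf(u2)
rho = np.corrcoef(z1, z2)[0, 1]
copula_density = np.exp(-(rho**2 * (z1**2 + z2**2) - 2 * rho * z1 * z2)
                        / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

# Fused feature for each observation: joint density across the two sources.
fused = copula_density * kde1(x1) * kde2(x2)
print(fused[:5])
```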
{"title":"An unsupervised feature extraction and fusion framework for multi-source data based on copula theory","authors":"Xiuwei Chen,&nbsp;Li Lai,&nbsp;Maokang Luo","doi":"10.1016/j.ijar.2025.109384","DOIUrl":"10.1016/j.ijar.2025.109384","url":null,"abstract":"<div><div>With the development of big data technology, people are increasingly facing the challenge of dealing with massive amounts of multi-source or multi-sensor data. Therefore, it becomes crucial to extract valuable information from such data. Information fusion techniques provide effective solutions for handling multi-source data and can be categorized into three levels: data-level fusion, feature-level fusion, and decision-level fusion. Feature-level fusion combines features from multiple sources to create a consolidated feature, enhancing information richness. This paper proposes an unsupervised feature extraction and fusion method for multi-source data that utilizes the R-Vine copula, denoted as CF. The method starts by performing kernel density estimation to extract each data source's marginal density and distribution. Next, the maximum spanning tree is employed to select a vine structure for each attribute, and the corresponding copulas are chosen using maximum likelihood estimation and the AIC criterion. The joint probability density of each attribute across all information sources can be obtained by utilizing the relevant vine structure and copulas, serving as the final fusion feature. Finally, the proposed method is evaluated on eighteen simulated datasets and six real datasets. The results indicate that compared to several state-of-the-art fusion methods, the CF method can significantly enhance the classification accuracy of popular classifiers such as KNN, SVM, and Logistic Regression.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109384"},"PeriodicalIF":3.2,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient parameter-free adaptive hashing for large-scale cross-modal retrieval
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-10 · DOI: 10.1016/j.ijar.2025.109383
Bo Li , You Wu , Zhixin Li
The aim of deep cross-modal hashing retrieval (DCMHR) is to explore the connections between multimedia data, but most methods are only applicable to a few modalities and cannot be extended to other scenarios. Meanwhile, many methods also fail to emphasize the importance of unified training for the classification loss and the hash loss, which reduces the robustness and effectiveness of the model. To address these two issues, this paper designs Efficient Parameter-free Adaptive Hashing for Large-Scale Cross-Modal Retrieval (EPAH), which adaptively extracts modality variations and collects the corresponding semantics of cross-modal features into the generated hash codes. EPAH does not use hyper-parameters, weight vectors, auxiliary matrices, or other such structures to learn cross-modal data, yet its efficient parameter-free adaptive hashing can still handle multi-modal retrieval tasks. Specifically, the proposal is a two-stage strategy, divided into feature extraction and unified training; both stages use parameter-free adaptive learning. Meanwhile, this article simplifies the model training settings, selects the more stable gradient descent method, and designs a unified hash code generation function. Comprehensive experiments show that the EPAH approach outperforms state-of-the-art DCMHR methods. In addition, EPAH provides essential analyses of out-of-modality extension and parameter anti-interference, demonstrating its generalization ability. The code is available at https://github.com/libo-02/EPAH.
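As a generic illustration (not the EPAH model itself; the authors' code is at the link above), the sketch below shows how unified binary codes produced from two modalities support Hamming-distance retrieval; the random projections and dimensions are hypothetical stand-ins for learned hash functions.

```python
# Cross-modal hash retrieval, schematically: project pre-extracted image and
# text features into a shared space, binarise with sign(), rank by Hamming
# distance. Purely illustrative; not the EPAH training procedure.
import numpy as np

rng = np.random.default_rng(42)
n, d_img, d_txt, n_bits = 100, 512, 300, 32

img_feat = rng.normal(size=(n, d_img))       # toy image features
txt_feat = rng.normal(size=(n, d_txt))       # toy text features

# Random projections stand in for the learned hash functions.
W_img = rng.normal(size=(d_img, n_bits))
W_txt = rng.normal(size=(d_txt, n_bits))

proj_img = img_feat @ W_img
proj_txt = txt_feat @ W_txt
img_codes = np.sign(proj_img - proj_img.mean(axis=0))   # zero-centred binarisation
txt_codes = np.sign(proj_txt - proj_txt.mean(axis=0))

def hamming_rank(query_code, database_codes, top_k=5):
    """Return indices of the top_k database codes closest in Hamming distance."""
    dist = np.sum(query_code != database_codes, axis=1)
    return np.argsort(dist)[:top_k]

# Text-to-image retrieval for the first text query.
print(hamming_rank(txt_codes[0], img_codes))
```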
{"title":"Efficient parameter-free adaptive hashing for large-scale cross-modal retrieval","authors":"Bo Li ,&nbsp;You Wu ,&nbsp;Zhixin Li","doi":"10.1016/j.ijar.2025.109383","DOIUrl":"10.1016/j.ijar.2025.109383","url":null,"abstract":"<div><div>The intention of deep cross-modal hashing retrieval (DCMHR) is to explore the connections between multi-media data, but most methods are only applicable to a few modalities and cannot be extended to other scenarios. Meanwhile, many methods also fail to emphasize the importance of unified training for classification loss and hash loss, which can also reduce the robustness and effectiveness of the model. Regarding these two issues, this paper designs Efficient Parameter-free Adaptive Hashing for Large-Scale Cross-Modal Retrieval (EPAH) to adaptively extract the modality variations and collect corresponding semantics of cross-modal features into the generated hash codes. EPAH does not use hyper-parameters, weight vectors, auxiliary matrices, and other structures to learn cross-modal data, while efficient parameter-free adaptive hashing can achieve multi-modal retrieval tasks. Specifically, our proposal is a two-stage strategy, divided into feature extraction and unified training, both stages are parameter-free adaptive learning. Meanwhile, this article simplifies the model training settings, selects the more stable gradient descent method, and designs the unified hash code generation function. Comprehensive experiments evidence that our EPAH approach can outperform the SoTA DCMHR methods. In addition, EPAH conducts the essential analysis of out-of-modality extension and parameter anti-interference, which demonstrates generalization and innovation. The code is available at <span><span>https://github.com/libo-02/EPAH</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109383"},"PeriodicalIF":3.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143395813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A logical formalisation of a hypothesis in weighted abduction: Towards user-feedback dialogues
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-07 · DOI: 10.1016/j.ijar.2025.109382
Shota Motoura, Ayako Hoshino, Itaru Hosomi, Kunihiko Sadamasa
Weighted abduction computes hypotheses that explain input observations. A reasoner of weighted abduction first generates possible hypotheses and then selects the hypothesis that is the most plausible. Since a reasoner employs parameters, called weights, that control its plausibility evaluation function, it can output the most plausible hypothesis according to a specific application using application-specific weights. This versatility makes it applicable to domains ranging from plant operation to cybersecurity and discourse analysis. However, the predetermined application-specific weights are not applicable to all cases of the application. Hence, the hypothesis selected by the reasoner does not necessarily seem the most plausible to the user. In order to resolve this problem, this article proposes two types of user-feedback dialogue protocols, in which the user points out, either positively, negatively or neutrally, properties of the hypotheses presented by the reasoner, and the reasoner regenerates hypotheses that satisfy the user's feedback. As required for user-feedback dialogue protocols, we then prove: (i) our protocols necessarily terminate under certain reasonable conditions; (ii) they converge on hypotheses that have the same properties in common as fixed target hypotheses do, provided the user determines the positivity, negativity or neutrality of each pointed-out property based on whether the target hypotheses have that property.
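A minimal sketch of the feedback loop described above, with hypothetical property names and the weighted-abduction scoring and formal dialogue protocols abstracted away:

```python
# Hypotheses are modelled as sets of properties; the user marks a pointed-out
# property as positive (must hold) or negative (must not hold), and the
# reasoner regenerates the candidate set accordingly.
HYPOTHESES = {
    "h1": {"pump_failure", "valve_open"},
    "h2": {"sensor_fault", "valve_open"},
    "h3": {"pump_failure", "sensor_fault"},
}

def filter_hypotheses(hypotheses, positive, negative):
    """Keep hypotheses containing all positive and none of the negative properties."""
    return {name: props for name, props in hypotheses.items()
            if positive <= props and not (negative & props)}

# Simulated dialogue turn: the user confirms "valve_open" and rejects "sensor_fault".
positive_feedback = {"valve_open"}
negative_feedback = {"sensor_fault"}
remaining = filter_hypotheses(HYPOTHESES, positive_feedback, negative_feedback)
print(remaining)   # only h1 survives the feedback
```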
{"title":"A logical formalisation of a hypothesis in weighted abduction: Towards user-feedback dialogues","authors":"Shota Motoura,&nbsp;Ayako Hoshino,&nbsp;Itaru Hosomi,&nbsp;Kunihiko Sadamasa","doi":"10.1016/j.ijar.2025.109382","DOIUrl":"10.1016/j.ijar.2025.109382","url":null,"abstract":"<div><div>Weighted abduction computes hypotheses that explain input observations. A reasoner of weighted abduction first generates possible hypotheses and then selects the hypothesis that is the most plausible. Since a reasoner employs parameters, called weights, that control its plausibility evaluation function, it can output the most plausible hypothesis according to a specific application using application-specific weights. This versatility makes it applicable from plant operation to cybersecurity or discourse analysis. However, the predetermined application-specific weights are not applicable to all cases of the application. Hence, the hypothesis selected by the reasoner does not necessarily seem the most plausible to the user. In order to resolve this problem, this article proposes two types of user-feedback dialogue protocols, in which the user points out, either positively, negatively or neutrally, properties of the hypotheses presented by the reasoner, and the reasoner regenerates hypotheses that satisfy the user's feedback. As it is required for user-feedback dialogue protocols, we then prove: (i) our protocols necessarily terminate under certain reasonable conditions; (ii) they converge on hypotheses that have the same properties in common as fixed target hypotheses do in common if the user determines the positivity, negativity or neutrality of each pointed-out property based on whether the target hypotheses have that property.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109382"},"PeriodicalIF":3.2,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Trend-pattern unlimited fuzzy information granule-based LSTM model for long-term time-series forecasting
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-02-05 · DOI: 10.1016/j.ijar.2025.109381
Yanan Jiang, Fusheng Yu, Yuqing Tang, Chenxi Ouyang, Fangyi Li
Trend fuzzy information granulation has shown promising results in long-term time-series forecasting and has attracted increasing attention. In the forecasting model based on trend fuzzy information granulation, the representation of trend granules plays a crucial role. The research focuses on developing trend granules and trend granular time series to effectively represent trend information and improve forecasting performance. However, the existing trend fuzzy information granulation methods make assumptions about the trend pattern of granules (i.e., assuming that granules have linear trends or definite nonlinear trends). Fuzzy information granules with presupposed trend patterns have limited expressive ability and struggle to capture complex nonlinear trends and temporal dependencies, thus limiting their forecasting performance. To address this issue, this paper proposes a novel kind of trend fuzzy information granules, named Trend-Pattern Unlimited Fuzzy Information Granules (TPUFIGs), which are constructed by the recurrent autoencoder with automatic feature learning and nonlinear modeling capabilities. Compared with the existing trend fuzzy information granules, TPUFIGs can better characterize potential trend patterns and temporal dependencies, and exhibit stronger robustness. With the TPUFIGs and Long Short-Term Memory (LSTM) neural network, we design the TPUFIG-LSTM forecasting model, which can effectively alleviate error accumulation and improve forecasting capability. Experimental results on six heterogeneous time series datasets demonstrate the superior performance of the proposed model. By combining deep learning and granular computing, this fuzzy information granulation method characterizes intricate dynamic features in time series more effectively, thus providing a novel solution for long-term time series forecasting with improved forecasting accuracy and generalization capability.
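An illustrative PyTorch sketch of the granule-encoding idea follows, assuming fixed-length windows and hypothetical layer sizes; the fuzzy-membership construction of TPUFIGs and the downstream LSTM forecaster are omitted.

```python
# Split a series into fixed-length windows ("granules") and compress each with
# a recurrent autoencoder; the latent vector acts as a learned granule
# representation free of any presupposed trend pattern. Window length and
# dimensions are hypothetical.
import torch
import torch.nn as nn

class GranuleAutoencoder(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=latent_dim, batch_first=True)
        self.decoder = nn.LSTM(input_size=latent_dim, hidden_size=1, batch_first=True)

    def forward(self, x):                    # x: (batch, window_len, 1)
        _, (h, _) = self.encoder(x)          # h: (1, batch, latent_dim)
        latent = h[-1]                       # granule representation
        # Repeat the latent vector along time and decode back to the window.
        repeated = latent.unsqueeze(1).repeat(1, x.size(1), 1)
        recon, _ = self.decoder(repeated)
        return recon, latent

# Toy usage: granulate a sine wave into windows of length 20 and train briefly.
series = torch.sin(torch.linspace(0, 20, 400))
windows = series.unfold(0, 20, 20).unsqueeze(-1)   # (20 granules, 20 steps, 1)
model = GranuleAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    recon, _ = model(windows)
    loss = nn.functional.mse_loss(recon, windows)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("reconstruction MSE:", loss.item())
```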
{"title":"Trend-pattern unlimited fuzzy information granule-based LSTM model for long-term time-series forecasting","authors":"Yanan Jiang,&nbsp;Fusheng Yu,&nbsp;Yuqing Tang,&nbsp;Chenxi Ouyang,&nbsp;Fangyi Li","doi":"10.1016/j.ijar.2025.109381","DOIUrl":"10.1016/j.ijar.2025.109381","url":null,"abstract":"<div><div>Trend fuzzy information granulation has shown promising results in long-term time-series forecasting and has attracted increasing attention. In the forecasting model based on trend fuzzy information granulation, the representation of trend granules plays a crucial role. The research focuses on developing trend granules and trend granular time series to effectively represent trend information and improve forecasting performance. However, the existing trend fuzzy information granulation methods make assumptions about the trend pattern of granules (i.e., assuming that granules have linear trends or definite nonlinear trends). Fuzzy information granules with presupposed trend patterns have limited expressive ability and struggle to capture complex nonlinear trends and temporal dependencies, thus limiting their forecasting performance. To address this issue, this paper proposes a novel kind of trend fuzzy information granules, named Trend-Pattern Unlimited Fuzzy Information Granules (TPUFIGs), which are constructed by the recurrent autoencoder with automatic feature learning and nonlinear modeling capabilities. Compared with the existing trend fuzzy information granules, TPUFIGs can better characterize potential trend patterns and temporal dependencies, and exhibit stronger robustness. With the TPUFIGs and Long Short-Term Memory (LSTM) neural network, we design the TPUFIG-LSTM forecasting model, which can effectively alleviate error accumulation and improve forecasting capability. Experimental results on six heterogeneous time series datasets demonstrate the superior performance of the proposed model. By combining deep learning and granular computing, this fuzzy information granulation method characterizes intricate dynamic features in time series more effectively, thus providing a novel solution for long-term time series forecasting with improved forecasting accuracy and generalization capability.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109381"},"PeriodicalIF":3.2,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143395812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The multi-criteria ranking method for criterion-oriented regret three-way decision
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-31 · DOI: 10.1016/j.ijar.2025.109374
Weidong Wan , Kai Zhang , Ligang Zhou
Recently, the criterion-oriented three-way decision has garnered widespread attention as it considers the decision-makers' preferences in handling multi-criteria decision-making problems. However, because some criterion-oriented three-way decision models do not accurately account for the deviation between an object's evaluation value and the criterion preference value when calculating the loss function, some objects suffer from ranking failure. To eliminate this weakness, this paper treats this deviation as the decision-maker's regret, combines it with regret theory, proposes a new loss function, and constructs a new criterion-oriented regret three-way decision model. Firstly, an innovative approach for determining the loss function is introduced, integrating the decision-maker's basic demands with regret theory. Secondly, thresholds are derived by combining the decision-maker's basic demands with two optimization models. Thirdly, the k-means++ clustering algorithm is employed to derive the objects' fuzzy depictions. Then, this paper proposes a practical method for calculating conditional probabilities by combining the concept of closeness with the fuzzy depictions of the objects. Next, a multi-criteria ranking method founded on criterion-oriented regret three-way decision is proposed. Finally, the applicability of the innovative ranking method is verified through parametric and comparative analyses on a computer hardware selection problem. Additionally, in dataset experiments, the proposed method is further validated on datasets containing known ranking results and datasets containing ordered classifications.
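The three-way assignment step can be illustrated as follows, with k-means++ standing in for the fuzzy depictions, a closeness-based conditional probability, and hypothetical thresholds; the regret-based loss functions and the optimization models that derive the thresholds are not reproduced.

```python
# Cluster the evaluation matrix with k-means++, turn distance to the "good"
# cluster centre into a closeness-based conditional probability, and split
# objects into acceptance, boundary and rejection regions with (alpha, beta).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.random((12, 4))                      # 12 objects, 4 criteria (toy data)

km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=1).fit(X)
# Assume all criteria are benefit-type: the centre with the larger sum is "good".
good_centre = km.cluster_centers_[np.argmax(km.cluster_centers_.sum(axis=1))]

# Closeness -> conditional probability: nearer to the good centre, higher Pr.
dist = np.linalg.norm(X - good_centre, axis=1)
prob = 1 - (dist - dist.min()) / (dist.max() - dist.min())

alpha, beta = 0.7, 0.4                       # hypothetical decision thresholds
regions = np.where(prob >= alpha, "accept",
                   np.where(prob <= beta, "reject", "boundary"))
for i, (p, r) in enumerate(zip(prob, regions)):
    print(f"object {i}: Pr = {p:.2f} -> {r}")
```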
{"title":"The multi-criteria ranking method for criterion-oriented regret three-way decision","authors":"Weidong Wan ,&nbsp;Kai Zhang ,&nbsp;Ligang Zhou","doi":"10.1016/j.ijar.2025.109374","DOIUrl":"10.1016/j.ijar.2025.109374","url":null,"abstract":"<div><div>Recently, the criterion-oriented three-way decision has garnered widespread attention as it considers the decision-makers' preferences in handling multi-criteria decision-making problems. However, due to the fact that some criterion-oriented three-way decision models do not accurately consider the specific deviation between the object evaluation value and the criterion preference value when calculating the loss function, some of the objects show the weakness of ranking failure. In order to eliminate this weakness, this paper considers this deviation as the decision-maker's regret psychology, combines the regret theory, proposes a new loss function and constructs a new criterion-oriented regret three-way decision model. Firstly, an innovative approach for determining the loss function is introduced, integrating the decision-maker's basic demands with regret theory. Secondly, thresholds are derived by combining the decision-maker's basic demands with two optimization models. Thirdly, the <em>k</em>-means++ clustering algorithm is employed to derive the objects' fuzzy depictions. Then, this paper proposes a practical method for calculating conditional probabilities by combining the concept of closeness with the fuzzy depictions of the objects. Next, a multi-criteria ranking method founded on criterion-oriented regret three-way decision is proposed. Finally, the applicability of the innovative sequencing method is verified by combining parametric and comparative analyses for the computer hardware selection problem. Additionally, in dataset experiments, the proposed method is further validated on datasets containing known ranking results and datasets containing ordered classification.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109374"},"PeriodicalIF":3.2,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Global sensitivity analysis of uncertain parameters in Bayesian networks
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-31 · DOI: 10.1016/j.ijar.2025.109368
Rafael Ballester-Ripoll, Manuele Leonelli
Traditionally, the sensitivity analysis of a Bayesian network studies the impact of individually modifying the entries of its conditional probability tables in a one-at-a-time (OAT) fashion. However, this approach fails to give a comprehensive account of each input's relevance, since simultaneous perturbations in two or more parameters often entail higher-order effects that cannot be captured by an OAT analysis. We propose to conduct global variance-based sensitivity analysis instead, whereby n parameters are viewed as uncertain at once and their importance is assessed jointly. Our method works by encoding the uncertainties as n additional variables of the network. To prevent the curse of dimensionality while adding these dimensions, we use low-rank tensor decomposition to break down the new potentials into smaller factors. Last, we apply the method of Sobol to the resulting network to obtain n global sensitivity indices, one for each parameter of interest. Using a benchmark array of both expert-elicited and learned Bayesian networks, we demonstrate that the Sobol indices can significantly differ from the OAT indices, thus revealing the true influence of uncertain parameters and their interactions.
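For contrast with the paper's tensor-decomposition approach, here is a brute-force pick-freeze estimator of first-order Sobol indices; the query() function is a hypothetical stand-in for a Bayesian-network query probability evaluated at perturbed CPT entries, not the paper's method.

```python
# Variance-based (Sobol) first-order indices via the pick-freeze estimator of
# Saltelli et al.: V_i = E[ f(B) * (f(AB_i) - f(A)) ], S_i = V_i / Var(Y).
import numpy as np

def query(theta):
    """Toy 'query probability' depending on 3 uncertain parameters (columns)."""
    return 0.2 + 0.5 * theta[:, 0] + 0.3 * theta[:, 1] * theta[:, 2]

rng = np.random.default_rng(7)
N, n_params = 20000, 3
A = rng.random((N, n_params))
B = rng.random((N, n_params))

fA, fB = query(A), query(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(n_params):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]          # swap in the i-th column, freeze the rest
    S_i = np.mean(fB * (query(AB_i) - fA)) / var_y
    print(f"parameter {i}: first-order Sobol index ~ {S_i:.3f}")
```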
{"title":"Global sensitivity analysis of uncertain parameters in Bayesian networks","authors":"Rafael Ballester-Ripoll,&nbsp;Manuele Leonelli","doi":"10.1016/j.ijar.2025.109368","DOIUrl":"10.1016/j.ijar.2025.109368","url":null,"abstract":"<div><div>Traditionally, the sensitivity analysis of a Bayesian network studies the impact of individually modifying the entries of its conditional probability tables in a one-at-a-time (OAT) fashion. However, this approach fails to give a comprehensive account of each inputs' relevance, since simultaneous perturbations in two or more parameters often entail higher-order effects that cannot be captured by an OAT analysis. We propose to conduct global variance-based sensitivity analysis instead, whereby <em>n</em> parameters are viewed as uncertain at once and their importance is assessed jointly. Our method works by encoding the uncertainties as <em>n</em> additional variables of the network. To prevent the curse of dimensionality while adding these dimensions, we use low-rank tensor decomposition to break down the new potentials into smaller factors. Last, we apply the method of Sobol to the resulting network to obtain <em>n</em> global sensitivity indices, one for each parameter of interest. Using a benchmark array of both expert-elicited and learned Bayesian networks, we demonstrate that the Sobol indices can significantly differ from the OAT indices, thus revealing the true influence of uncertain parameters and their interactions.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"180 ","pages":"Article 109368"},"PeriodicalIF":3.2,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143395814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reductions of concept lattices based on Boolean formal contexts
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-29 · DOI: 10.1016/j.ijar.2025.109372
Dong-Yun Niu , Ju-Sheng Mi
In order to obtain more concise information, accelerate computation, and save storage space, the reduction of the concept lattice is particularly important. This paper mainly studies the reduction of the concept lattice based on Boolean formal contexts. Firstly, four types of reductions are proposed: reduction that keeps the structure of the concept lattice unchanged, reduction that keeps the extents of meet-irreducible elements unchanged, reduction that keeps the extents of join-irreducible elements unchanged, and reduction that keeps column-vector granular concepts unchanged. Then the relationships among the four different types of reductions are studied. Secondly, with the purpose of keeping the structure of the concept lattice unchanged, we provide three approaches to obtain the reductions from different perspectives. Thirdly, since each unit row vector plays a different role in the Boolean formal context, we give an approach to recognise the characteristics of unit row vectors.
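As background for these reductions, the sketch below enumerates the formal concepts of a small, hypothetical Boolean context via the two derivation operators; the paper's reduction procedures themselves are not reproduced.

```python
# Brute-force formal concept analysis: for every object subset, derive its
# intent (common attributes) and close it back to an extent; each resulting
# (extent, intent) pair is a formal concept.
from itertools import combinations

OBJECTS = ["o1", "o2", "o3"]
ATTRIBUTES = ["a", "b", "c"]
INCIDENCE = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "a"), ("o3", "c")}

def common_attributes(objs):
    return {m for m in ATTRIBUTES if all((g, m) in INCIDENCE for g in objs)}

def common_objects(attrs):
    return {g for g in OBJECTS if all((g, m) in INCIDENCE for m in attrs)}

def concepts():
    found = set()
    for r in range(len(OBJECTS) + 1):
        for objs in combinations(OBJECTS, r):
            intent = common_attributes(set(objs))
            extent = common_objects(intent)        # close the extent
            found.add((frozenset(extent), frozenset(intent)))
    return found

for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
    print(set(extent) or "{}", set(intent) or "{}")
```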
{"title":"Reductions of concept lattices based on Boolean formal contexts","authors":"Dong-Yun Niu ,&nbsp;Ju-Sheng Mi","doi":"10.1016/j.ijar.2025.109372","DOIUrl":"10.1016/j.ijar.2025.109372","url":null,"abstract":"<div><div>In order to obtain more concise information, accelerate operation speed, and save storage space, the reduction of the concept lattice is particularly important. This paper mainly studies the reduction of the concept lattice based on Boolean formal contexts. Firstly, four types of reductions are proposed: the reduction of maintaining the structure of the concept lattice unchanged, the reduction of maintaining the extents unchanged of meet-irreducible elements, the reduction of maintaining the extents unchanged of join-irreducible elements, and the reduction of maintaining column vector granular concepts unchanged. Then the relationships among the four different types of reductions are studied. Secondly, with the purpose of maintaining the structure of the concept lattice unchanged, we provide three approaches to obtain the reductions from different perspectives. Thirdly, since each unit row vector plays a different role in the Boolean formal context, we give an approach to recognise the characteristics of unit row vectors.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109372"},"PeriodicalIF":3.2,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Outlier detection in mixed-attribute data: A semi-supervised approach with fuzzy approximations and relative entropy
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-29 · DOI: 10.1016/j.ijar.2025.109373
Baiyang Chen , Zhong Yuan , Zheng Liu , Dezhong Peng , Yongxiang Li , Chang Liu , Guiduo Duan
Outlier detection is a critical task in data mining, aimed at identifying objects that significantly deviate from the norm. Semi-supervised methods improve detection performance by leveraging partially labeled data but typically overlook the uncertainty and heterogeneity of real-world mixed-attribute data. This paper introduces a semi-supervised outlier detection method, namely fuzzy rough sets-based outlier detection (FROD), to effectively handle these challenges. Specifically, we first utilize a small subset of labeled data to construct fuzzy decision systems, through which we introduce the attribute classification accuracy based on fuzzy approximations to evaluate the contribution of attribute sets in outlier detection. Unlabeled data is then used to compute fuzzy relative entropy, which provides a characterization of outliers from the perspective of uncertainty. Finally, we develop the detection algorithm by combining attribute classification accuracy with fuzzy relative entropy. Experimental results on 16 public datasets show that FROD is comparable with or better than leading detection algorithms. All datasets and source codes are accessible at https://github.com/ChenBaiyang/FROD.
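A generic sketch of fuzzy-relation-based scoring follows (not the FROD algorithm itself; the authors' implementation is at the link above), assuming numerical attributes, a per-attribute similarity, and a min t-norm aggregation.

```python
# Build a fuzzy similarity matrix over numerical attributes and score each
# object by how weakly it relates to the rest: low average similarity -> more
# outlying. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
X[0] += 6.0                                  # plant one obvious outlier

# Fuzzy similarity per attribute: R_ij = max(0, 1 - |x_i - x_j| / sigma),
# aggregated over attributes by the minimum (a common t-norm choice).
sigma = X.std(axis=0)
diff = np.abs(X[:, None, :] - X[None, :, :]) / sigma
R = np.clip(1.0 - diff, 0.0, 1.0).min(axis=2)    # (50, 50) fuzzy relation

outlier_score = 1.0 - R.mean(axis=1)             # weak ties to others -> high score
print("most outlying object:", int(np.argmax(outlier_score)))   # expected: 0
```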
{"title":"Outlier detection in mixed-attribute data: A semi-supervised approach with fuzzy approximations and relative entropy","authors":"Baiyang Chen ,&nbsp;Zhong Yuan ,&nbsp;Zheng Liu ,&nbsp;Dezhong Peng ,&nbsp;Yongxiang Li ,&nbsp;Chang Liu ,&nbsp;Guiduo Duan","doi":"10.1016/j.ijar.2025.109373","DOIUrl":"10.1016/j.ijar.2025.109373","url":null,"abstract":"<div><div>Outlier detection is a critical task in data mining, aimed at identifying objects that significantly deviate from the norm. Semi-supervised methods improve detection performance by leveraging partially labeled data but typically overlook the uncertainty and heterogeneity of real-world mixed-attribute data. This paper introduces a semi-supervised outlier detection method, namely fuzzy rough sets-based outlier detection (FROD), to effectively handle these challenges. Specifically, we first utilize a small subset of labeled data to construct fuzzy decision systems, through which we introduce the attribute classification accuracy based on fuzzy approximations to evaluate the contribution of attribute sets in outlier detection. Unlabeled data is then used to compute fuzzy relative entropy, which provides a characterization of outliers from the perspective of uncertainty. Finally, we develop the detection algorithm by combining attribute classification accuracy with fuzzy relative entropy. Experimental results on 16 public datasets show that FROD is comparable with or better than leading detection algorithms. All datasets and source codes are accessible at <span><span>https://github.com/ChenBaiyang/FROD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109373"},"PeriodicalIF":3.2,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lifting factor graphs with some unknown factors for new individuals
IF 3.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-01-27 · DOI: 10.1016/j.ijar.2025.109371
Malte Luttermann , Ralf Möller , Marcel Gehrke
Lifting exploits symmetries in probabilistic graphical models by using a representative for indistinguishable objects, allowing query answering to be carried out more efficiently while maintaining exact answers. In this paper, we investigate how lifting enables us to perform probabilistic inference for factor graphs containing unknown factors, i.e., factors whose underlying function of potential mappings is unknown. We present the Lifting Factor Graphs with Some Unknown Factors (LIFAGU) algorithm to identify indistinguishable subgraphs in a factor graph containing unknown factors, thereby enabling the transfer of known potentials to unknown potentials to ensure a well-defined semantics of the model and allow for (lifted) probabilistic inference. We further extend LIFAGU to incorporate additional background knowledge about groups of factors belonging to the same individual object. By incorporating such background knowledge, LIFAGU is able to further reduce the ambiguity of possible transfers of known potentials to unknown potentials.
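A schematic sketch of the potential-transfer idea, using a trivial structural signature (argument types) in place of the paper's full indistinguishability check; all variable and factor names are hypothetical, and background-knowledge handling is omitted.

```python
# Group factors by a simple signature and let a factor with an unknown
# potential table inherit the table of a known factor in the same group.
from collections import defaultdict

factors = [
    {"name": "f_alice", "args": [("Sick", "person"), ("Treat", "person")],
     "table": {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.7}},
    {"name": "f_bob", "args": [("Sick", "person"), ("Treat", "person")],
     "table": None},                                  # unknown potential
    {"name": "f_eve", "args": [("Epidemic", "global")], "table": {(0,): 0.9, (1,): 0.1}},
]

def signature(factor):
    return tuple(arg_type for _, arg_type in factor["args"])

# Group factors with identical signatures, then transfer known tables to unknowns.
groups = defaultdict(list)
for f in factors:
    groups[signature(f)].append(f)

for group in groups.values():
    known = next((f["table"] for f in group if f["table"] is not None), None)
    for f in group:
        if f["table"] is None and known is not None:
            f["table"] = dict(known)                  # transfer the potential
            print(f"transferred a known potential to {f['name']}")
```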
{"title":"Lifting factor graphs with some unknown factors for new individuals","authors":"Malte Luttermann ,&nbsp;Ralf Möller ,&nbsp;Marcel Gehrke","doi":"10.1016/j.ijar.2025.109371","DOIUrl":"10.1016/j.ijar.2025.109371","url":null,"abstract":"<div><div>Lifting exploits symmetries in probabilistic graphical models by using a representative for indistinguishable objects, allowing to carry out query answering more efficiently while maintaining exact answers. In this paper, we investigate how lifting enables us to perform probabilistic inference for factor graphs containing unknown factors, i.e., factors whose underlying function of potential mappings is unknown. We present the <em>Lifting Factor Graphs with Some Unknown Factors (LIFAGU) algorithm</em> to identify indistinguishable subgraphs in a factor graph containing unknown factors, thereby enabling the transfer of known potentials to unknown potentials to ensure a well-defined semantics of the model and allow for (lifted) probabilistic inference. We further extend LIFAGU to incorporate additional background knowledge about groups of factors belonging to the same individual object. By incorporating such background knowledge, LIFAGU is able to further reduce the ambiguity of possible transfers of known potentials to unknown potentials.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"179 ","pages":"Article 109371"},"PeriodicalIF":3.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143093627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0