Shortest-length and coarsest-granularity constructs vs. reducts: An experimental evaluation
Manuel S. Lazo-Cortés, Guillermo Sanchez-Diaz, Nelva N. Almanza Ortega
International Journal of Approximate Reasoning, vol. 170, Article 109187 (2024). DOI: 10.1016/j.ijar.2024.109187
Abstract
In rough set theory, super-reducts are subsets of attributes that have the same discriminative power as the complete set of attributes for distinguishing objects belonging to different classes in supervised classification problems. Among super-reducts, reducts are of particular significance: they are the super-reducts that are irreducible, i.e., no proper subset of a reduct is itself a super-reduct.
In contrast, constructs, while also serving to distinguish objects of different classes, additionally preserve certain shared characteristics among objects of the same class. In essence, constructs are a subtype of super-reducts that integrates both inter-class and intra-class information. Despite their potential, constructs have received considerably less attention than reducts.
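To make the distinction concrete, the following minimal sketch checks the two defining conditions on a toy decision table. The table, attribute names, and helper functions are invented for illustration and are not taken from the paper; the conditions follow the usual formulations in the rough-set literature (a super-reduct must discern every pair of objects from different classes, and a construct must additionally keep at least one common attribute value within every pair of objects from the same class).

from itertools import combinations

# Toy decision table: each row is (a1, a2, a3, class label).
# The data are invented for illustration; they are not taken from the paper.
ATTRS = ["a1", "a2", "a3"]
TABLE = [
    (0, 1, 0, "yes"),
    (1, 1, 2, "yes"),
    (2, 0, 1, "no"),
    (2, 2, 0, "no"),
]

def discerns(subset):
    """Super-reduct condition: every pair of objects from *different*
    classes differs on at least one attribute in `subset`."""
    idx = [ATTRS.index(a) for a in subset]
    return not any(
        x[-1] != y[-1] and all(x[i] == y[i] for i in idx)
        for x, y in combinations(TABLE, 2)
    )

def resembles(subset):
    """Extra construct condition: every pair of objects from the *same*
    class agrees on at least one attribute in `subset`."""
    idx = [ATTRS.index(a) for a in subset]
    return not any(
        x[-1] == y[-1] and all(x[i] != y[i] for i in idx)
        for x, y in combinations(TABLE, 2)
    )

def is_reduct(subset):
    """A reduct is a super-reduct with no proper subset that is a super-reduct."""
    return discerns(subset) and not any(
        discerns(subset[:i] + subset[i + 1:]) for i in range(len(subset))
    )

def is_construct(subset):
    """A construct discerns all inter-class pairs and resembles all intra-class pairs."""
    return discerns(subset) and resembles(subset)

if __name__ == "__main__":
    for s in (["a1"], ["a1", "a2"]):
        print(s, "reduct:", is_reduct(s), "construct:", is_construct(s))

On this toy table, {a1} is a shortest reduct but not a construct (the two "yes" objects disagree on a1), while {a1, a2} is a construct but not a reduct; it is precisely this kind of trade-off that the experiments examine.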
Both reducts and constructs are used to reduce data dimensionality. This paper presents the key concepts related to constructs and reducts and provides insight into their respective roles. In addition, it reports an experimental comparison between optimal reducts and optimal constructs under two specific criteria, shortest length and coarsest granularity, and evaluates their performance using classical classifiers.
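The abstract does not spell out how granularity is measured; a common proxy in the rough-set literature is the partition that an attribute subset induces on the objects, where fewer (and therefore larger) blocks mean a coarser description. The helper below is only an illustrative sketch of that idea and assumes the same row format as the toy table above.

from collections import defaultdict

def induced_partition(table, attr_indices):
    """Group objects into blocks that share the same values on the chosen
    attributes; this is the indiscernibility partition induced by the subset."""
    blocks = defaultdict(list)
    for obj_id, row in enumerate(table):
        blocks[tuple(row[i] for i in attr_indices)].append(obj_id)
    return list(blocks.values())

def num_blocks(table, attr_indices):
    """Fewer blocks = coarser granularity (one plausible way to compare subsets)."""
    return len(induced_partition(table, attr_indices))

Applied to the toy table from the previous sketch, num_blocks(TABLE, [0]) is 3 while num_blocks(TABLE, [0, 1]) is 4, so {a1} induces the coarser partition under this measure.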
The results obtained by applying seven classifiers to sixteen datasets lead us to propose that both coarsest-granularity reducts and coarsest-granularity constructs are effective choices for dimensionality reduction in supervised classification problems. Notably, under the shortest-length optimality criterion, constructs are clearly superior to reducts, which prove less favorable.
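The abstract does not describe the evaluation protocol in detail. As a rough illustration of the general methodology (project the data onto a selected attribute subset and score off-the-shelf classifiers by cross-validation), the following scikit-learn sketch is one plausible setup; the dataset, the column indices standing in for a reduct or construct, and the classifier list are placeholders, not the paper's actual choices.

# Keep only the attributes in a chosen subset (e.g. a reduct or a construct)
# and score several classical classifiers by 10-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
subset = [0, 3, 7, 21]          # indices of the selected attributes (illustrative)
X_reduced = X[:, subset]

for clf in (GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier(random_state=0)):
    scores = cross_val_score(clf, X_reduced, y, cv=10)
    print(type(clf).__name__, round(scores.mean(), 3))

Comparing such scores across the subsets produced by the different optimality criteria is the kind of evaluation the paper reports for seven classifiers and sixteen datasets.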
Moreover, a comparative analysis was conducted between the results obtained with coarsest-granularity constructs and a technique from outside rough set theory, namely correlation-based feature selection. The former showed statistically superior performance, providing further evidence of the efficacy of the construct-based approach.
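For reference, the baseline named here is commonly Hall's correlation-based feature selection (CFS), which scores a candidate subset with the merit function sketched below. This is a simplification: the standard formulation uses symmetric uncertainty over discretized attributes, whereas absolute Pearson correlation is used here (and numeric class labels are assumed) only to keep the example short.

import numpy as np

def cfs_merit(X, y, subset):
    """Merit = k*mean(|corr(feature, class)|) / sqrt(k + k*(k-1)*mean(|corr(feature, feature)|)),
    where k is the number of selected features."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for a, i in enumerate(subset) for j in subset[a + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)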
About the journal
The International Journal of Approximate Reasoning is intended to serve as a forum for the treatment of imprecision and uncertainty in Artificial and Computational Intelligence, covering both the foundations of uncertainty theories, and the design of intelligent systems for scientific and engineering applications. It publishes high-quality research papers describing theoretical developments or innovative applications, as well as review articles on topics of general interest.
Relevant topics include, but are not limited to, probabilistic reasoning and Bayesian networks, imprecise probabilities, random sets, belief functions (Dempster-Shafer theory), possibility theory, fuzzy sets, rough sets, decision theory, non-additive measures and integrals, qualitative reasoning about uncertainty, comparative probability orderings, game-theoretic probability, default reasoning, nonstandard logics, argumentation systems, inconsistency tolerant reasoning, elicitation techniques, philosophical foundations and psychological models of uncertain reasoning.
Domains of application for uncertain reasoning systems include risk analysis and assessment, information retrieval and database design, information fusion, machine learning, data and web mining, computer vision, image and signal processing, intelligent data analysis, statistics, multi-agent systems, etc.