{"title":"Causality-Driven Intra-class Non-equilibrium Label-Specific Features Learning","authors":"Wenxin Ge, Yibin Wang, Yuting Xu, Yusheng Cheng","doi":"10.1007/s11063-024-11439-w","DOIUrl":null,"url":null,"abstract":"<p>In multi-label learning, label-specific feature learning can effectively avoid some ineffectual features that interfere with the classification performance of the model. However, most of the existing label-specific feature learning algorithms improve the performance of the model for classification by constraining the solution space through label correlation. The non-equilibrium of the label distribution not only leads to some spurious correlations mixed in with the calculated label correlations but also diminishes the performance of the classification model. Causal learning can improve the classification performance and robustness of the model by capturing real causal relationships from limited data. Based on this, this paper proposes a causality-driven intra-class non-equilibrium label-specific features learning, named CNSF. Firstly, the causal relationship between the labels is learned by the Peter-Clark algorithm. Secondly, the label density of all instances is calculated by the intra-class non-equilibrium method, which is used to relieve the non-equilibrium distribution of original labels. Then, the correlation of the density matrix is calculated using cosine similarity and combined with causality to construct the causal density correlation matrix, to solve the problem of spurious correlation mixed in the label correlation obtained by traditional methods. Finally, the causal density correlation matrix is used to induce label-specific feature learning. Compared with eight state-of-the-art multi-label algorithms on thirteen datasets, the experimental results prove the reasonability and effectiveness of the algorithms in this paper.</p>","PeriodicalId":51144,"journal":{"name":"Neural Processing Letters","volume":"2015 1","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Processing Letters","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11063-024-11439-w","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
In multi-label learning, label-specific feature learning can effectively avoid ineffectual features that interfere with the classification performance of the model. However, most existing label-specific feature learning algorithms improve classification performance by constraining the solution space through label correlations. The non-equilibrium of the label distribution not only introduces spurious correlations into the computed label correlations but also degrades the performance of the classification model. Causal learning can improve the classification performance and robustness of a model by capturing real causal relationships from limited data. Based on this, this paper proposes a causality-driven intra-class non-equilibrium label-specific features learning method, named CNSF. First, the causal relationships between labels are learned by the Peter-Clark (PC) algorithm. Second, the label density of all instances is calculated by the intra-class non-equilibrium method, which relieves the non-equilibrium distribution of the original labels. Then, the correlation of the density matrix is computed using cosine similarity and combined with the causal relationships to construct a causal density correlation matrix, which addresses the spurious correlations mixed into the label correlations obtained by traditional methods. Finally, the causal density correlation matrix is used to induce label-specific feature learning. Experiments comparing CNSF with eight state-of-the-art multi-label algorithms on thirteen datasets demonstrate the rationality and effectiveness of the proposed algorithm.
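The abstract names the main steps of CNSF (PC-based causal discovery among labels, an intra-class non-equilibrium label density, cosine similarity over the density matrix, and their combination into a causal density correlation matrix) but does not give the exact formulas. The Python sketch below is one plausible, hypothetical reading of that pipeline: the helper names (label_density, causal_density_correlation), the inverse-frequency density re-weighting, and the element-wise combination with the causal adjacency matrix are assumptions for illustration, not the paper's definitions.

    # Minimal sketch (assumptions as noted above): build a causal density
    # correlation matrix from a binary label matrix Y (n_instances x n_labels)
    # and a causal adjacency matrix over labels, e.g. one produced by a
    # PC-algorithm implementation.
    import numpy as np

    def label_density(Y):
        # Stand-in for the intra-class non-equilibrium label density:
        # re-weight each positive label by the inverse of its frequency, so
        # rare (non-equilibrium) labels contribute more. This exact rule is
        # an assumption, not the paper's definition.
        freq = Y.mean(axis=0)
        return Y / np.clip(freq, 1e-12, None)

    def causal_density_correlation(Y, causal_adj):
        # Cosine similarity between label columns of the density matrix,
        # masked element-wise by the causal adjacency so that correlations
        # without causal support are suppressed (assumed combination rule).
        D = label_density(Y)
        norms = np.linalg.norm(D, axis=0, keepdims=True)
        cos = (D.T @ D) / np.clip(norms.T @ norms, 1e-12, None)
        return cos * causal_adj

    # Toy usage: 6 instances, 3 labels, and an assumed causal graph.
    Y = np.array([[1, 0, 1],
                  [1, 0, 0],
                  [0, 1, 1],
                  [1, 0, 1],
                  [0, 0, 1],
                  [1, 1, 0]], dtype=float)
    causal_adj = np.array([[1, 0, 1],
                           [0, 1, 0],
                           [1, 0, 1]], dtype=float)
    R = causal_density_correlation(Y, causal_adj)
    print(R)  # matrix used to induce label-specific feature learning

In the paper this matrix then induces label-specific feature learning; a common (though not necessarily CNSF's) way to use such a matrix is as a graph-style regularizer over the label-specific classifier weights.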
Journal Introduction:
Neural Processing Letters is an international journal publishing research results and innovative ideas on all aspects of artificial neural networks. Coverage includes theoretical developments, biological models, new formal models, learning, applications, software and hardware developments, and prospective research.
The journal promotes fast exchange of information within the community of neural network researchers and users. The resurgence of interest in artificial neural networks since the beginning of the 1980s has been coupled with tremendous research activity in specialized and multidisciplinary groups. Research, however, is not possible without good communication between people and the exchange of information, especially in a field covering such different areas; fast communication is also a key aspect, and this is the reason for Neural Processing Letters.