Title: Online imbalance learning with unpredictable feature evolution and label scarcity
DOI: 10.1016/j.neucom.2024.128476
Journal: Neurocomputing (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 5.5)
Publication date: 2024-09-03
Article URL: https://www.sciencedirect.com/science/article/pii/S0925231224012475
Citations: 0
Abstract
Online learning from imbalanced data streams, in which the classes are unevenly distributed, has recently attracted wide attention. Existing approaches are conventionally built on a stationary feature space and assume that complete labels for the data stream are available, as in supervised learning. In many real scenarios, however, such as environmental monitoring, new features flood in and old features are partially lost as the environment changes, because different sensors have different lifespans. Moreover, each instance must be labeled by experts, which is expensive and leads to label scarcity. To address these problems, this paper proposes a novel Online Imbalance learning with unpredictable Feature evolution and Label scarcity (OIFL) algorithm. First, we use margin-based online active learning to selectively label valuable instances. After obtaining the labels, we handle the imbalanced class distribution by optimizing the F-measure, transforming F-measure optimization into a weighted surrogate loss minimization. When data streams arrive with augmented features, we combine the online passive-aggressive algorithm with structural risk minimization to update the classifier on the divided feature space. When data streams arrive with incomplete features, we use variance to identify the most informative features, following the empirical risk minimization principle, and continue to update the existing classifier as before. Finally, we obtain a sparse yet reliable learner via a projection truncation strategy. We derive theoretical analyses of OIFL, and experiments on synthetic datasets and real-world data streams validate the effectiveness of our method.
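The abstract names several standard ingredients: margin-based online active learning (query a label only when the prediction margin is small), passive-aggressive classifier updates, and projection truncation for sparsity. The sketch below illustrates how these generic ingredients fit together in an online loop. It is not the authors' OIFL algorithm (which additionally handles F-measure optimization and feature evolution); all class and parameter names here are illustrative assumptions.

```python
import numpy as np

class MarginActivePA:
    """Minimal sketch (not the paper's OIFL): an online linear classifier that
    (1) queries a label only with probability delta / (delta + |margin|),
        so uncertain instances are labeled more often (margin-based active
        learning),
    (2) applies a PA-I passive-aggressive update on queried instances, and
    (3) sparsifies the weight vector by truncated projection, keeping only
        the k largest-magnitude coordinates."""

    def __init__(self, dim, delta=1.0, C=1.0, k=10, seed=0):
        self.w = np.zeros(dim)
        self.delta = delta   # active-learning sampling parameter
        self.C = C           # PA-I aggressiveness cap
        self.k = k           # number of nonzero weights to retain
        self.rng = np.random.default_rng(seed)

    def predict(self, x):
        return 1 if self.w @ x >= 0 else -1

    def partial_fit(self, x, get_label):
        """get_label() is invoked only when the instance is actually queried,
        mimicking an expert oracle that is consulted on demand."""
        p = self.w @ x
        queried = self.rng.random() < self.delta / (self.delta + abs(p))
        if queried:
            y = get_label()
            loss = max(0.0, 1.0 - y * p)           # hinge loss
            if loss > 0.0:
                tau = min(self.C, loss / (x @ x))  # PA-I step size
                self.w += tau * y * x
                self._truncate()
        return queried

    def _truncate(self):
        # Projection truncation: zero out all but the k largest |w_i|.
        if np.count_nonzero(self.w) > self.k:
            small = np.argsort(np.abs(self.w))[:-self.k]
            self.w[small] = 0.0
```

A typical use is to stream instances through `partial_fit`, counting how many labels were actually requested; with this query rule the expert is consulted for only a fraction of the stream, which is the point of active learning under label scarcity.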
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering theory, practice, and applications.