{"title":"MSNet: Multi-task self-supervised network for time series classification","authors":"Dongxuan Huang , Xingfeng Lv , Yang Zhang","doi":"10.1016/j.patrec.2025.03.008","DOIUrl":null,"url":null,"abstract":"<div><div>Learning rich representations from unlabeled temporal data is essential for effective time series classification. Most existing self-supervised learning methods for time series focus on a single task, often relying on contrastive learning or reconstruction techniques. However, these single tasks cannot capture comprehensive features and often overlook local structural, temporal, or discriminative features of time series data. This paper proposes a multi-task self-supervised network (MSNet) that integrates contrastive and reconstruction-based methods to learn rich representations. We adopt augmentation and disturbed methods to generate more diverse learning views. Then, the model performs disturbance contrastive, temporal contrastive, and reconstruction tasks. The contrastive tasks enhance the consistency of representations between augmented views from the same sequence. The reconstruction task captures local dependency structures, enhancing the robustness of learned representations. We conduct experiments on three real-world time series datasets. The experimental results demonstrate that our model achieves strong classification performance on these datasets. 
Additionally, when trained with limited labeled data, the proposed method shows excellent generalization and robustness.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"191 ","pages":"Pages 73-79"},"PeriodicalIF":3.9000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525000923","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Learning rich representations from unlabeled temporal data is essential for effective time series classification. Most existing self-supervised learning methods for time series focus on a single task, often relying on contrastive learning or reconstruction techniques. However, such single tasks cannot capture comprehensive features and often overlook the local structural, temporal, or discriminative characteristics of time series data. This paper proposes a multi-task self-supervised network (MSNet) that integrates contrastive and reconstruction-based methods to learn rich representations. We adopt augmentation and disturbance methods to generate more diverse learning views. The model then performs disturbance contrastive, temporal contrastive, and reconstruction tasks. The contrastive tasks enforce consistency between representations of augmented views from the same sequence. The reconstruction task captures local dependency structures, enhancing the robustness of the learned representations. We conduct experiments on three real-world time series datasets, and the results demonstrate that our model achieves strong classification performance. Additionally, when trained with limited labeled data, the proposed method shows excellent generalization and robustness.
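The abstract describes a multi-task objective that combines contrastive terms over augmented views with a reconstruction term. As a rough illustration of that task mix, the sketch below combines an NT-Xent-style contrastive loss with a mean-squared-error reconstruction loss. This is a minimal illustrative example, not the paper's formulation: the function names (`nt_xent`, `msnet_loss`), the specific contrastive loss, and the weighting scheme (`alpha`, `beta`) are all assumptions.

```python
# Illustrative sketch (assumed, not from the paper): a multi-task
# self-supervised objective = weighted contrastive + reconstruction loss.
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss; z1[i] and z2[i] are embeddings of two
    augmented views of the same sequence (positive pairs)."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    # positive for row i in [0, n) is i + n, and vice versa
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

def msnet_loss(z1, z2, x, x_hat, alpha=1.0, beta=1.0):
    """Hypothetical combined objective: contrastive term on view embeddings
    plus MSE reconstruction term on the input sequence."""
    contrastive = nt_xent(z1, z2)
    reconstruction = np.mean((x - x_hat) ** 2)
    return alpha * contrastive + beta * reconstruction
```

With well-aligned views and an accurate reconstruction, the combined loss is lower than with mismatched views and a poor reconstruction, which is the signal a multi-task setup like this trains on.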
About the journal:
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.