Time series anomaly detection is crucial in many fields, where the objective is to identify unusual patterns by learning normality from sequential observations. However, existing methods typically treat the entire training set as a single, homogeneous normal class, which disregards the diversity of normal behavior caused by distribution shifts over time. As a result, these methods are forced to learn a single, complex decision boundary that must enclose all variations of normal behavior, making it difficult to precisely distinguish subtle anomalies hidden within the normal patterns. This paper tackles this challenge by explicitly modeling heterogeneous normality, which allows for learning simpler, localized decision boundaries to separate anomalies. Specifically, we propose a novel approach that decomposes the heterogeneous class space into multiple normal classes, adopting a two-stage coarse-to-fine training paradigm: (1) a Mixture of Experts (MoE) framework assigns pseudo-labels by routing input features to specialized experts for prediction, approximating the latent sub-class structure; (2) enhanced features are generated based on the pseudo-labels, and the feature space is refined via spectral decomposition, which contracts class boundaries and better exposes anomalies. Extensive experiments on 23 univariate and 17 multivariate datasets show that our approach significantly outperforms state-of-the-art competitors by 2.55%-21.76% in VUS-PR, validating the importance of modeling heterogeneous normality in time series anomaly detection.
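The two-stage paradigm above can be illustrated with a minimal sketch. Everything here is hypothetical and for illustration only: the paper does not specify the gating architecture, the number of experts `K`, the feature dimensions, or how many spectral components are retained. The sketch routes each window's feature vector through an untrained linear gate, takes the argmax expert as the pseudo normal-class label (stage 1), and then projects each pseudo-class onto the leading eigenvectors of its own covariance matrix as a stand-in for the spectral refinement step (stage 2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy windowed features: 200 windows, 8-dim features (hypothetical shapes).
X = rng.normal(size=(200, 8))

# Stage 1 (sketch): a linear gate routes each window to one of K experts;
# the argmax gate score serves as the pseudo normal-class label.
K = 3
W_gate = rng.normal(size=(8, K))      # untrained gate weights, illustration only
gate_scores = X @ W_gate              # (200, K) routing scores
pseudo_labels = gate_scores.argmax(axis=1)

# Stage 2 (sketch): per-class spectral decomposition of the feature
# covariance; projecting onto the top eigenvectors contracts each class
# around its dominant directions of normal variation.
def refine(features, n_components=4):
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / max(len(features) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = eigvecs[:, -n_components:]         # leading spectral components
    return centered @ top

refined = {k: refine(X[pseudo_labels == k])
           for k in range(K) if (pseudo_labels == k).any()}
```

In a real system the gate and experts would be trained jointly (coarse stage) before the spectral refinement (fine stage); here both are frozen random projections purely to show the data flow.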