Cross-domain fault diagnosis is a critical task in predictive maintenance for fleet management. However, existing transfer learning and distribution-matching methods are often impractical in real-world scenarios, especially under few-shot conditions, where their diagnostic performance cannot be consistently guaranteed. To address this issue, this study proposes a novel semi-supervised framework for cross-domain fault diagnosis, based on the pre-training–fine-tuning paradigm. In our approach, self-supervised contrastive learning is employed for centralized multi-domain pre-training, followed by supervised fine-tuning and contrastive re-learning to achieve robust model alignment across different machines and operating conditions. To effectively capture temporal dependencies in structured sensor data and improve sample efficiency, we incorporate a time-series contrastive learning method, Time-Series Representation Learning via Temporal and Contextual Contrasting (TS-TCC), as the core component of the pre-training stage. Furthermore, we introduce a two-stage sample selection strategy that enables annotation-efficient model alignment. This design ensures consistently reliable diagnostic performance on the target domain while minimizing labeling effort. We validate our framework using two benchmark datasets: the Prognostics and Health Management Data Challenge 2022 dataset for Hydraulic Rock Drill (HRD) fault classification and the Paderborn University (PU) Bearing dataset. Experimental results demonstrate substantial improvements over existing methods. For the HRD dataset, our approach achieves 96.62% accuracy under Condition 1, representing a 45.79% improvement over the best baseline method. Similarly, for the PU Bearing dataset, we achieve 90.93% accuracy under Condition 1, exceeding the best baseline by 62.62%. Comparable performance is observed across the other experimental conditions in both datasets.
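To make the pre-training–fine-tuning paradigm described above concrete, the following is a minimal sketch in PyTorch. It is not the paper's implementation: the encoder architecture, the jitter augmentation, the loss hyperparameters, and the classifier head are illustrative assumptions, and the full TS-TCC method additionally uses temporal contrasting with a transformer-based context module rather than only the NT-Xent-style contextual loss shown here. The sketch only illustrates the two stages: self-supervised contrastive pre-training on pooled multi-domain unlabeled windows, followed by supervised fine-tuning on a small set of selected, labeled target-domain samples.

```python
# Minimal sketch of the pre-train -> fine-tune workflow (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """1-D CNN encoder mapping a sensor window (B, C, T) to an embedding (assumed architecture)."""
    def __init__(self, in_channels=1, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=8, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, x):
        h = self.net(x).squeeze(-1)      # (B, 64)
        return self.proj(h)              # (B, emb_dim)

def nt_xent(z1, z2, temperature=0.2):
    """NT-Xent contrastive loss between two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                        # (2B, D)
    sim = z @ z.t() / temperature                         # pairwise similarities (2B, 2B)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def jitter(x, sigma=0.05):
    """Weak augmentation: additive Gaussian noise (placeholder for TS-TCC's augmentations)."""
    return x + sigma * torch.randn_like(x)

# Stage 1: self-supervised contrastive pre-training on pooled multi-domain (unlabeled) data.
encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
unlabeled = torch.randn(64, 1, 512)                       # dummy multi-domain sensor windows
for _ in range(5):
    z1, z2 = encoder(jitter(unlabeled)), encoder(jitter(unlabeled))
    loss = nt_xent(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning on a few selected, labeled target-domain samples.
classifier = nn.Linear(128, 5)                            # e.g. 5 fault classes (assumed)
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
x_few, y_few = torch.randn(10, 1, 512), torch.randint(0, 5, (10,))
for _ in range(20):
    logits = classifier(encoder(x_few))
    ft_loss = F.cross_entropy(logits, y_few)
    ft_opt.zero_grad(); ft_loss.backward(); ft_opt.step()
```

In the framework described in the abstract, the labeled target samples used in the second stage would come from the proposed two-stage sample selection strategy rather than from random draws as in this sketch.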
