Identifying irregularities in electricity load data is essential for maintaining dependable and effective power systems. Traditional approaches require large amounts of labeled data to achieve high accuracy, which increases cost and limits scalability. This paper introduces a feature extraction model based on contrastive learning that substantially improves the accuracy of anomaly detection for electricity load data. The model constructs positive and negative pairs from the original input sequences, enabling it to learn fine-grained similarities and differences. A contrastive loss function minimizes the distance between positive pairs and maximizes the distance between negative pairs, yielding essential feature representations. The results show substantial improvements: accuracy rose from 69.85 % to 95.65 %, precision improved from 61.2 % to 96 %, recall increased from 74.5 % to 93 %, and the F1-score improved from 67.3 % to 94.6 %. The ROC-AUC score rose from 0.7286 to 0.9532, indicating better separation between normal and anomalous data. A paired t-test confirmed these gains with p-values well below 0.05, further validating the model's effectiveness, while Cohen's d indicated large effect sizes across all metrics, confirming practical significance. Furthermore, 95 % confidence intervals for the mean differences confirmed that the improvements are both statistically and practically meaningful. This approach not only improves detection accuracy but also reduces reliance on large labeled datasets, making it more scalable and cost-effective for real-world applications.
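To illustrate the pair-based objective described above, the following is a minimal sketch of a standard margin-based contrastive loss in PyTorch. The function name, margin value, and embedding dimensions are illustrative assumptions and do not reproduce the paper's exact formulation or encoder architecture.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, is_positive, margin=1.0):
    """Pull positive pairs together, push negative pairs at least
    `margin` apart (Hadsell-style contrastive loss).
    z1, z2: (batch, dim) embeddings; is_positive: (batch,) 1/0 pair labels.
    Names and margin are assumptions for illustration only."""
    d = F.pairwise_distance(z1, z2)                           # Euclidean distance per pair
    pos_term = is_positive * d.pow(2)                         # shrink distance for positive pairs
    neg_term = (1 - is_positive) * F.relu(margin - d).pow(2)  # enforce margin for negative pairs
    return (pos_term + neg_term).mean()

# Hypothetical usage: embeddings of two views of electricity load sequences
torch.manual_seed(0)
z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.randint(0, 2, (8,)).float()                    # 1 = positive pair, 0 = negative pair
print(contrastive_loss(z1, z2, labels).item())
```

In this formulation, positive pairs (e.g., two augmented views of the same load sequence) are drawn closer in the embedding space, while negative pairs are pushed apart until they exceed the margin, which matches the pull-together/push-apart behavior the abstract attributes to the contrastive loss.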