Intelligent electric shovels are being developed for intelligent mining in open-pit mines. Detecting the complex environment and recognizing targets through image recognition technology are prerequisites for intelligent electric shovel operation. However, open-pit mines contain large amounts of sand–dust, which lowers visibility and causes color shift during data collection, resulting in low-quality images. Images collected for environmental perception in sand–dust environments seriously degrade the target detection and scene segmentation capabilities of intelligent electric shovels. Developing an effective image processing algorithm to solve these problems and improve the perception ability of intelligent electric shovels has therefore become crucial. At present, deep learning methods have achieved good results in image dehazing, a closely related problem, and are therefore promising for image sand–dust removal. However, deep learning relies heavily on data sets, and existing data sets concentrate on haze environments; sand–dust image data sets, especially for open-pit mining scenes, remain scarce. Another bottleneck is the limited performance of traditional sand–dust removal methods, which often introduce image distortion and blurring. To address these issues, a method for generating sand–dust image data based on atmospheric physical models and CIELAB color space features is proposed. The mechanism by which sand–dust degrades images was analyzed through atmospheric physical models, and the formation of sand–dust images was divided into two parts: blurring and color deviation. We studied the theories for generating blurring and color deviation effects based on atmospheric physical models and the CIELAB color space, and designed a two-stage sand–dust image generation method. We also constructed an open-pit mine sand–dust data set in a real mining environment.
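The degradation process described above can be illustrated with the standard atmospheric scattering model, I = J·t + A·(1 − t), where J is the clean scene, t the transmission, and A the atmospheric light. The sketch below is only a minimal illustration, not the paper's method: it approximates the two effects by using a uniform transmission for blurring/low contrast and a yellowish atmospheric light for the sand–dust color cast (the paper refines the color deviation in CIELAB space). All parameter values are illustrative assumptions.

```python
import numpy as np

def add_sand_dust(j, t=0.6, airlight=(0.86, 0.77, 0.66)):
    """Simplified sand-dust degradation via the atmospheric scattering model.

    j        -- clean RGB image as floats in [0, 1], shape (H, W, 3)
    t        -- scalar transmission in (0, 1]; lower t means denser sand-dust
    airlight -- RGB atmospheric light; a yellowish value (illustrative here)
                produces the characteristic sand-dust color cast
    """
    a = np.asarray(airlight, dtype=np.float32)
    # Atmospheric scattering model: I = J * t + A * (1 - t)
    return np.clip(j * t + a * (1.0 - t), 0.0, 1.0)
```

With lower transmission, the output is pulled toward the airlight color, which reproduces both the reduced contrast and the color shift that make sand–dust images hard for detection and segmentation models.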
Finally, this article takes the generative adversarial network (GAN) as the research foundation and focuses on the formation mechanism of sand–dust image effects. CIELAB color features are fused into the GAN discriminator as basic priors and additional constraints to improve its discrimination ability. By combining the three feature components of the CIELAB color space and comparing algorithm performance, a feature fusion scheme is determined. The results show that the proposed method generates clear and realistic images, which helps improve the performance of target detection and scene segmentation tasks in heavily sand–dust-affected open-pit mines.
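Fusing CIELAB features into the discriminator presupposes a conversion from RGB to the three components L*, a*, b*. The sketch below uses the standard sRGB/D65 conversion formulas (not taken from the paper) to show how such feature channels could be computed; in sand–dust images the b* channel in particular shifts strongly toward yellow, which is what makes it a useful prior.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert sRGB in [0, 1], shape (..., 3), to CIELAB (L*, a*, b*), D65 white.

    Standard colorimetric formulas; a discriminator could consume the
    resulting L*/a*/b* channels as additional input features.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    # Linearize sRGB (inverse gamma)
    c = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = c @ m.T
    # Normalize by the D65 reference white
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

In a feature-fusion scheme, these three channels (individually or combined) can be concatenated with the discriminator's RGB input, which matches the abstract's comparison of combinations of the three CIELAB components.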