Underwater images often suffer from color distortion, blurred details, and low contrast caused by light scattering and variations in water type. Existing methods focus mainly on spatial information and ignore frequency-aware processing, which limits their ability to handle mixed degradations. To overcome these challenges, we propose a multi-scale wavelet pyramid recurrent fusion network (MWPRFN). The network retains the low-frequency features at every level, integrates them into a low-frequency enhancement branch, and fuses image features through a multi-scale dynamic cross-layer mechanism (DCLM) that captures the correlation between high and low frequencies. Each stage of the multi-level framework consists of a multi-frequency information interaction pyramid network (MFIPN) and an atmospheric light compensation estimation network (ALCEN). Within the MFIPN, the low-frequency branch enhances global details through an efficient context refinement module (ECRM), while the high-frequency branch extracts texture and edge features through a multi-scale difference expansion module (MSDC). After the inverse wavelet transform, ALCEN corrects color distortion through atmospheric light estimation and frequency-domain compensation. Experimental results show that MWPRFN significantly improves underwater image quality on five benchmark datasets. Compared with state-of-the-art methods, the objective image quality metrics PSNR, SSIM, and NIQE improve by an average of 3.45%, 1.32%, and 4.50%, respectively: PSNR rises from 24.03 dB to 24.86 dB, SSIM rises from 0.9002 to 0.9121, and NIQE drops from 3.261 to 3.115.
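The core operation underlying the pipeline, a multi-level wavelet decomposition that retains the low-frequency band at every level for a separate low-frequency branch, can be sketched as follows. This is an illustrative NumPy Haar decomposition under our own naming, not the paper's implementation:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: split an even-sized array into a
    low-frequency band (LL) and three high-frequency bands (LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row-pair averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2: exact reconstruction from the four bands."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def pyramid_decompose(img, levels=3):
    """Wavelet pyramid that keeps the LL band at every level, mirroring
    a low-frequency enhancement branch fed from all scales."""
    lows, highs = [], []
    cur = img
    for _ in range(levels):
        ll, bands = haar_dwt2(cur)
        lows.append(ll)       # retained for the low-frequency branch
        highs.append(bands)   # texture/edge input for the high-frequency branch
        cur = ll
    return lows, highs
```

In a full network, each level's LL band would pass through a context-refinement module and each high-frequency band through a texture/edge branch before the inverse transforms reassemble the image; here the forward and inverse transforms alone give perfect reconstruction.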