In fields such as underwater exploration, acquiring clear and precise imagery is essential for gathering diverse underwater information, which makes the development of robust underwater image enhancement (UIE) algorithms highly significant. Driven by advances in deep learning, UIE research has made substantial progress. To address the scarcity of underwater datasets and the need for higher-quality enhanced reference images, this paper introduces a novel semantic-guided network architecture, termed SGAF-GAN. The model uses semantic information as an auxiliary supervisory signal within the UIE network, steering the enhancement process toward semantically relevant regions while mitigating blurring at image edges. Moreover, when rare image degradation co-occurs with semantically relevant features, the semantic information provides the network with prior knowledge, improving performance and generalization. The study integrates a feature attention fusion mechanism to preserve contextual information and strengthen the influence of semantic guidance during cross-domain fusion. Because underwater images exhibit spatially varying degradation, the combination of spatial and channel attention enables the network to assign more accurate weights to the most severely degraded regions, improving overall enhancement quality. Experiments show that SGAF-GAN performs well on several real underwater datasets and produces results consistent with human visual perception. On the SUIM dataset, SGAF-GAN achieves a PSNR of 24.30 and an SSIM of 0.9144.
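To make the fusion mechanism described above more concrete, the following is a minimal, illustrative PyTorch sketch of how semantic features might be fused with image features and reweighted by channel and spatial attention. It assumes a CBAM-style channel-then-spatial attention and a simple concatenation of the two feature streams; the class names (e.g., `SemanticAttentionFusion`) and layer choices are placeholders for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: reweights feature channels using pooled descriptors."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling descriptor
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    """Spatial attention: emphasizes locations, e.g., heavily degraded regions."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class SemanticAttentionFusion(nn.Module):
    """Illustrative fusion block: concatenate image and semantic features,
    project back to the image feature width, then apply channel and spatial
    attention with a residual connection to preserve context."""

    def __init__(self, img_channels, sem_channels):
        super().__init__()
        self.proj = nn.Conv2d(img_channels + sem_channels, img_channels, 1)
        self.channel_att = ChannelAttention(img_channels)
        self.spatial_att = SpatialAttention()

    def forward(self, img_feat, sem_feat):
        fused = self.proj(torch.cat([img_feat, sem_feat], dim=1))
        fused = self.channel_att(fused)
        fused = self.spatial_att(fused)
        return fused + img_feat  # residual path keeps contextual information


if __name__ == "__main__":
    f_img = torch.randn(1, 64, 64, 64)   # image-branch features
    f_sem = torch.randn(1, 32, 64, 64)   # semantic-branch features (assumed shape)
    out = SemanticAttentionFusion(64, 32)(f_img, f_sem)
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```

The residual connection and the channel-then-spatial ordering are design assumptions borrowed from common attention-fusion practice; the paper's own module may differ in structure and weighting.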