Given the complexity of lesion regions in medical images, current CNN-based research typically employs large-kernel convolutions to expand the receptive field and improve segmentation quality. However, such convolutions incur substantial computational cost and have limited capacity to capture contextual and multi-scale information, making it difficult to segment complex regions efficiently. To address this issue, we propose a dual-fusion enhanced deformable convolution network (DFEDC), which dynamically adjusts the receptive field and simultaneously integrates multi-scale feature information to effectively segment complex lesion areas and their boundaries. First, we combine global channel and spatial fusion in a serial manner, integrating and reusing global channel attention and fully connected layers to extract channel and spatial information in a lightweight way. Additionally, we design a structured deformable convolution (SDC) that structures deformable convolution with Inception-style branches and large-kernel attention, enhancing offset learning through parallel fusion to efficiently extract multi-scale feature information. To compensate for the spatial information lost by the SDC, we introduce a hybrid 2D and 3D feature extraction module that transforms feature extraction from a single dimension into a fusion of 2D and 3D representations. Extensive experiments on the Synapse, ACDC, and ISIC-2018 datasets demonstrate that the proposed DFEDC achieves superior results.
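As a rough illustration of what "serial channel and spatial fusion" can look like in general (this is a generic SE-style channel gate followed by a spatial gate, not the paper's exact GCSF design; the function name, weight shapes, and reduction ratio `r` are assumptions for the sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def serial_channel_spatial_fusion(x, w1, w2):
    """Generic serial channel-then-spatial attention on a (C, H, W) feature map.

    Channel stage (SE-style): global average pooling -> two FC layers -> sigmoid gate.
    Spatial stage: channel-wise mean map -> sigmoid gate broadcast over channels.
    """
    # Channel attention: squeeze spatial dims, excite through FC layers.
    squeezed = x.mean(axis=(1, 2))                          # (C,)
    gate_c = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # (C,), values in (0, 1)
    x = x * gate_c[:, None, None]
    # Spatial attention: gate each location by its channel-mean response.
    gate_s = sigmoid(x.mean(axis=0))                        # (H, W), values in (0, 1)
    return x * gate_s[None, :, :]

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1  # reduction FC weights (assumed ratio r=2)
w2 = rng.standard_normal((C, C // r)) * 0.1  # expansion FC weights
y = serial_channel_spatial_fusion(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Both gates lie in (0, 1), so the fused output preserves the input shape while attenuating features channel-wise and then location-wise; the actual module in the paper additionally reuses the attention and FC components for lightweight extraction.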