In infrared and visible image fusion, source images often exhibit complex and variable characteristics due to scene illumination. To address the challenges posed by complex lighting conditions and to improve the quality of fused images, we develop ALFusion, a fusion method that combines dynamic convolution and the Transformer. The core idea is to pair the adaptability of dynamic convolution with the superior long-range modeling capability of the Transformer in a hybrid feature extractor, which dynamically and comprehensively captures features from source images under diverse lighting conditions. To strengthen the resulting hybrid features, we integrate a carefully designed multi-scale attention enhancement module into the skip connections of the U-Net architecture; this module employs convolutional kernels of several sizes to enlarge the receptive field and applies an attention mechanism to emphasize the features extracted jointly by dynamic convolution and the Transformer. With performance on high-level vision tasks in mind, we further incorporate an illumination detection network into the loss function; it balances the pixel fusion ratio between the source images and optimizes visual quality under varied lighting conditions. Fusion results show that our method consistently delivers superior background contrast and richer texture and structural detail across diverse lighting conditions. Ablation and supplementary experiments further confirm the effectiveness of the proposed method and highlight its potential in high-level vision tasks.
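To make the hybrid feature extractor concrete, the following is a minimal PyTorch sketch of one possible realization. The module names (`DynamicConv2d`, `HybridBlock`), the CondConv-style mixing of `num_kernels` candidate kernels, and the use of a standard `nn.TransformerEncoderLayer` are illustrative assumptions, not the exact ALFusion implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Illustrative dynamic convolution: an attention branch mixes
    num_kernels candidate kernels per input sample, so the effective
    filter adapts to the input (e.g., to scene illumination)."""
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
        super().__init__()
        self.k = k
        # Candidate kernels, aggregated per sample (assumed design).
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        self.attn = nn.Sequential(          # kernel-attention branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = F.softmax(self.attn(x), dim=1)            # (B, K)
        # Mix candidate kernels per sample: (B, out, in, k, k).
        w_mix = torch.einsum('bk,koizy->boizy', alpha, self.weight)
        out_ch = w_mix.shape[1]
        # Grouped-conv trick: fold the batch into the channel axis.
        x = x.reshape(1, b * c, h, w)
        w_mix = w_mix.reshape(b * out_ch, c, self.k, self.k)
        y = F.conv2d(x, w_mix, padding=self.k // 2, groups=b)
        return y.reshape(b, out_ch, h, w)

class HybridBlock(nn.Module):
    """Dynamic-conv branch for adaptive local detail plus a Transformer
    branch for long-range context, merged by a 1x1 convolution."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.local = DynamicConv2d(ch, ch)
        self.glob = nn.TransformerEncoderLayer(
            d_model=ch, nhead=heads, dim_feedforward=2 * ch,
            batch_first=True)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)             # (B, HW, C)
        glob = self.glob(tokens).transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1))
```

For a 64-channel feature map, `HybridBlock(64)` maps `torch.randn(2, 64, 32, 32)` to a tensor of the same shape, so such blocks can be stacked inside a U-Net encoder.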
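The multi-scale attention enhancement module on the skip connections can be sketched in the same spirit. The parallel 3x3/5x5/7x7 branches, the SE-style channel attention, and the residual formulation below are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionEnhance(nn.Module):
    """Illustrative skip-connection enhancement: parallel kernels of
    several sizes widen the receptive field; channel attention then
    re-weights the merged features before they rejoin the skip path."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        # Parallel branches with growing receptive fields (assumed sizes).
        self.branches = nn.ModuleList([
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in (3, 5, 7)])
        self.merge = nn.Conv2d(3 * ch, ch, 1)
        # SE-style channel attention over the merged features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, skip):
        multi = torch.cat([b(skip) for b in self.branches], dim=1)
        fused = self.merge(multi)
        # Residual: enhanced features are added back onto the skip path.
        return skip + fused * self.attn(fused)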
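Finally, one way to read the illumination-aware loss is as a classifier-weighted intensity loss, in the style popularized by PIAFusion: a small pretrained network predicts day/night probabilities from the visible image, and these set the pixel fusion ratio between the two sources. The two-way day/night output and the L1 intensity terms below are our assumptions, not necessarily ALFusion's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationAwareLoss(nn.Module):
    """Illustrative illumination-aware intensity loss: a frozen
    illumination detector weights how strongly the fused image should
    follow the visible versus the infrared intensities."""
    def __init__(self, illum_net: nn.Module):
        super().__init__()
        self.illum_net = illum_net.eval()   # pretrained, kept frozen
        for p in self.illum_net.parameters():
            p.requires_grad_(False)

    def forward(self, fused, ir, vi):
        # Day/night probabilities; illum_net is assumed to emit (B, 2) logits.
        probs = F.softmax(self.illum_net(vi), dim=1)
        w_vi = probs[:, 0].view(-1, 1, 1, 1)  # daytime -> trust visible
        w_ir = probs[:, 1].view(-1, 1, 1, 1)  # nighttime -> trust infrared
        return (w_vi * (fused - vi).abs() +
                w_ir * (fused - ir).abs()).mean()
```

Under this formulation, well-lit scenes pull the fused pixels toward the visible image while dark scenes pull them toward the infrared image, which matches the paper's goal of balancing the pixel fusion ratio across lighting conditions.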