Very high-resolution (VHR) optical satellite imagery offers significant potential for detailed land cover mapping. However, terrain shadows, which appear dark and lack texture and detail, are especially acute at low solar elevations. These shadows hinder the creation of spatially complete and accurate land cover maps, particularly in rugged mountainous environments. While many methods have been proposed to mitigate terrain shadows in remote sensing, they either reduce shadows insufficiently or rely on high-resolution digital elevation models (DEMs), which are often unavailable at the spatial resolutions required for VHR shadow mitigation. In this paper, we propose a bi-temporal image fusion approach to mitigate terrain shadows in VHR satellite imagery. Our approach fuses a WorldView-2 multispectral image, which contains significant terrain shadows, with a geometrically co-registered WorldView-1 panchromatic image, which has minimal shadows. This fusion is applied to improve the mapping of evergreen conifers in temperate mixed mountain forests. To evaluate the effectiveness of our approach, we first improve an existing shadow detection method by Silva et al. (2018) to more accurately detect shadows in mountainous, forested landscapes. Next, we propose a quantitative algorithm that differentiates dark and light terrain shadows in VHR satellite imagery based on object visibility within shadowed areas. Finally, we apply a state-of-the-art 3D U-Net deep learning method to detect evergreen conifers. Our study shows that the proposed approach significantly reduces terrain shadows and enhances the detection of evergreen conifers in shaded areas. To our knowledge, this is the first time a bi-temporal image fusion approach has been used to mitigate terrain shadow effects for land cover mapping at very high spatial resolution. The approach can also be applied to other VHR satellite sensors.
However, careful image co-registration will be necessary when applying this technique to multi-sensor systems beyond the WorldView constellation, such as Pléiades or SkySat.
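The core fusion idea described above, replacing information in shadowed multispectral pixels using a shadow-free panchromatic image, can be illustrated with a minimal sketch. This is not the authors' algorithm: the brightness threshold, the function names, and the gain-based fusion rule are illustrative assumptions, and real imagery would additionally require radiometric calibration and precise co-registration.

```python
import numpy as np

def detect_dark_shadows(ms, threshold=0.15):
    """Flag pixels as shadow where mean-band brightness falls below a
    threshold (illustrative; the paper refines Silva et al. 2018 instead)."""
    brightness = ms.mean(axis=-1)
    return brightness < threshold

def fuse_shadowed(ms, pan, shadow_mask, eps=1e-6):
    """In shadowed pixels, rescale the multispectral (MS) bands so their
    brightness matches the shadow-free panchromatic (pan) image while
    preserving the spectral band ratios (an assumed fusion rule)."""
    brightness = ms.mean(axis=-1, keepdims=True)        # (H, W, 1)
    gain = pan[..., None] / (brightness + eps)          # per-pixel scale
    fused = ms.copy()
    fused[shadow_mask] = (ms * gain)[shadow_mask]       # fuse only in shadow
    return fused

# Synthetic example: a uniformly dark MS image and a bright pan image.
ms = np.full((4, 4, 3), 0.05)   # reflectance-like values in [0, 1]
pan = np.full((4, 4), 0.40)
mask = detect_dark_shadows(ms)
fused = fuse_shadowed(ms, pan, mask)
```

After fusion, the shadowed pixels' brightness tracks the panchromatic image while unshadowed pixels are left untouched, which mirrors the stated goal of mitigating shadows without altering well-lit areas.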