Image classification is an important application area of soft computing. In many real-world scenarios, image classifiers are applied to domains that differ from their original training data; this so-called domain shift significantly reduces classification accuracy. Unsupervised domain adaptation (UDA) techniques address this issue by transferring knowledge from a labeled source domain to an unlabeled target domain, thereby bridging the gap between the two. We propose a novel and effective coarse-to-fine domain adaptation method, Domain Adaptation via Feature Disentanglement (DAFD), with two new key components. First, our Class-Relevant Feature Selection (CRFS) module disentangles class-relevant features from class-irrelevant ones, which prevents the network from overfitting to irrelevant information and keeps it focused on the cues crucial for accurate classification. This reduces the complexity of domain alignment and improves classification accuracy on the target domain. Second, our Dynamic Local Maximum Mean Discrepancy (DLMMD) module achieves fine-grained feature alignment by minimizing the discrepancy among class-relevant features from different domains, making the alignment process more adaptive and contextually sensitive and enhancing the model's ability to recognize domain-specific patterns and characteristics. Together, the CRFS and DLMMD modules align class-relevant features effectively, so that domain knowledge is successfully transferred from the source to the target domain. Our comprehensive experiments on four standard datasets demonstrate that DAFD is robust and highly effective in cross-domain image classification tasks.
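As background for the discrepancy minimization underlying DLMMD, the following is a minimal sketch of a plain (non-local, non-dynamic) squared maximum mean discrepancy between source and target feature sets, using a Gaussian kernel. The function names, kernel choice, and bandwidth are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased empirical estimate of squared MMD between two feature sets:
    # mean kernel within source + within target - 2 * cross-domain mean kernel.
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```

Minimizing such a term over the class-relevant features (per class, with adaptive weighting, as DAFD's dynamic local variant does) pulls the source and target feature distributions together.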