Facial expression recognition (FER) algorithms often struggle in cross-domain scenarios owing to variations in collection conditions, such as lighting and weather, and in subject attributes, such as age, gender, and skin color. Unlike existing approaches, which primarily extract globally invariant features and align domain distributions, we propose a framework that fundamentally reframes cross-domain FER. Our algorithm, termed Bi-Directional Fusion of Active and Stable Information (FER-DAS), combines three components: an Active Assessment Strategy (AAS), a Cross-Domain Dynamic Class Threshold (CD-DCT), and Weighted Cross-Domain Alignment (WCDA). The AAS component selects active samples in the target domain for precise annotation, improving model robustness: samples with the highest prediction uncertainty are deemed active, since low confidence indicates high informational value for training, and a predefined threshold then filters out all but the most informative of these samples in each training iteration. In contrast to conventional static-threshold techniques, the CD-DCT strategy adaptively filters stable samples across domains, ensuring that only the most reliable pseudo-labeled information enters training. The WCDA strategy further refines this process by dynamically weighting the contribution of each target-domain sample to its class center, mitigating domain distribution discrepancies. Extensive experiments on multiple benchmark datasets confirm that FER-DAS consistently outperforms existing state-of-the-art methods, setting a new standard for cross-domain FER.
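The abstract does not specify the underlying formulas, so the following is a minimal sketch of how the three components could fit together. Everything here is an assumption for illustration: predictive entropy as the AAS uncertainty measure, an exponential-moving-average update for the CD-DCT per-class thresholds, confidence-weighted feature means as the WCDA class centers, and all function names (`select_active_samples`, `update_class_thresholds`, `select_stable_samples`, `wcda_loss`) are hypothetical.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Predictive entropy as an uncertainty score (higher = more uncertain)."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_active_samples(target_probs, tau_active):
    """AAS sketch: mark target samples whose uncertainty exceeds a
    predefined threshold as 'active', i.e. candidates for annotation."""
    return np.where(entropy(target_probs) > tau_active)[0]

def update_class_thresholds(target_probs, thresholds, momentum=0.9):
    """CD-DCT sketch: one dynamic confidence threshold per class, updated
    as an exponential moving average of the mean confidence of samples
    currently predicted as that class (assumed update rule)."""
    conf = target_probs.max(axis=1)
    pred = target_probs.argmax(axis=1)
    new = thresholds.copy()
    for c in range(len(thresholds)):
        mask = pred == c
        if mask.any():
            new[c] = momentum * thresholds[c] + (1 - momentum) * conf[mask].mean()
    return new

def select_stable_samples(target_probs, thresholds):
    """Keep only pseudo-labeled target samples whose confidence clears the
    dynamic threshold of their predicted class."""
    conf = target_probs.max(axis=1)
    pred = target_probs.argmax(axis=1)
    keep = np.where(conf >= thresholds[pred])[0]
    return keep, pred[keep]

def wcda_loss(src_feats, src_labels, tgt_feats, tgt_probs, num_classes):
    """WCDA sketch: confidence-weighted target class centers are pulled
    toward the corresponding source class centers."""
    loss, used = 0.0, 0
    conf = tgt_probs.max(axis=1)
    pred = tgt_probs.argmax(axis=1)
    for c in range(num_classes):
        s_mask, t_mask = src_labels == c, pred == c
        if s_mask.any() and t_mask.any():
            src_center = src_feats[s_mask].mean(axis=0)
            w = conf[t_mask] / conf[t_mask].sum()  # per-sample weights
            tgt_center = (w[:, None] * tgt_feats[t_mask]).sum(axis=0)
            loss += np.sum((src_center - tgt_center) ** 2)
            used += 1
    return loss / max(used, 1)

# Toy usage with random data (7 expression classes).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(7), size=100)
thresholds = np.full(7, 0.5)
active = select_active_samples(probs, tau_active=1.5)
thresholds = update_class_thresholds(probs, thresholds)
stable, pseudo_labels = select_stable_samples(probs, thresholds)
```

In this reading, AAS and CD-DCT partition target samples into two complementary streams, annotated active samples and confidently pseudo-labeled stable samples, while the alignment term weights each stable sample's pull on its class center by its confidence; the actual thresholding and weighting rules in FER-DAS may differ.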