Deep neural networks overfit when training samples carry inaccurate annotations (noisy labels), leading to suboptimal performance. To address this challenge, current methods for learning with noisy labels employ specific criteria, such as the small-loss criterion or historical predictions, to distinguish clean instances from noisy ones, and then introduce semi-supervised learning (SSL) techniques to boost performance. Most of these are one-stage frameworks that attempt to achieve optimal sample partitioning and robust SSL training within a single iteration, which increases training difficulty and complexity. To address this limitation, we propose a novel two-stage noisy-label learning framework called UCRT, consisting of uniform consistency selection and robust training. The first stage focuses on constructing a more uniform and accurate clean set, while the second stage uniformly extends this clean set with SSL techniques to improve model performance. Comprehensive experiments on both synthetic and real-world noisy datasets demonstrate the stability of UCRT across various noise types and its superior performance compared with state-of-the-art methods. The code will be available at: https://github.com/LanXiaoPang613/UCRT.
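As background for the small-loss criterion mentioned above, the following is a minimal, generic sketch of how such a criterion is commonly used to partition a training set into likely-clean and likely-noisy subsets before SSL training (per-sample losses modeled with a two-component Gaussian mixture). It is an illustrative assumption, not the UCRT algorithm; the function name, threshold, and loader interface are hypothetical.

```python
# Generic small-loss partitioning sketch (assumed setup, not the UCRT method).
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def partition_by_small_loss(model, loader, device="cuda", clean_threshold=0.5):
    """Split the training set into likely-clean and likely-noisy subsets
    by fitting a two-component GMM to the per-sample loss distribution."""
    model.eval()
    losses, indices = [], []
    with torch.no_grad():
        for x, y, idx in loader:  # loader is assumed to yield sample indices
            logits = model(x.to(device))
            loss = F.cross_entropy(logits, y.to(device), reduction="none")
            losses.append(loss.cpu())
            indices.append(idx)
    losses = torch.cat(losses).numpy()
    indices = torch.cat(indices).numpy()

    # Normalize losses to [0, 1] so the two GMM components are comparable.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)

    gmm = GaussianMixture(n_components=2, max_iter=20, reg_covar=5e-4)
    gmm.fit(losses.reshape(-1, 1))
    # Posterior of the low-mean component ~ probability that a sample is clean.
    clean_component = gmm.means_.argmin()
    clean_prob = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_component]

    clean_idx = indices[clean_prob > clean_threshold]   # treated as labeled data in SSL
    noisy_idx = indices[clean_prob <= clean_threshold]  # treated as unlabeled data in SSL
    return clean_idx, noisy_idx, clean_prob
```

In one-stage frameworks this partitioning and the subsequent SSL training are interleaved in every epoch; UCRT instead separates clean-set construction (stage one) from its uniform extension via SSL (stage two).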