Existing control methods for humanoid robots, such as Model Predictive Control (MPC) and Reinforcement Learning (RL), generally neither model nor exploit rhythmic mechanisms. As a result, they struggle to balance stability, energy efficiency, and gait-transition capability during typical rhythmic motions such as walking and running. To address this limitation, we propose Walk2Run, a unified control framework inspired by biological rhythmicity. The method introduces control priors based on the frequency modulation observed in human walk–run transitions. Specifically, we extract rhythmic parameters from motion capture data to construct a Rhythm Generator grounded in Central Pattern Generator (CPG) principles, which guides the policy to produce speed-adaptive periodic motion. This rhythmic guidance is further integrated with a constrained reinforcement learning framework using barrier-function optimization, improving training stability and the feasibility of policy outputs. Experimental results demonstrate that our method outperforms traditional approaches across multiple metrics, achieving more natural rhythmic motion with improved energy efficiency in medium- to high-speed scenarios, while also improving gait stability and adaptability on the robotic platform.
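To make the rhythm-guidance idea concrete, the following is a minimal Python sketch of a CPG-style oscillator whose frequency is modulated by commanded speed. The oscillator form (Hopf), the speed-to-frequency coefficients, and the assumed ~2.0 m/s walk–run transition speed are illustrative placeholders, not the paper's mocap-fitted Rhythm Generator.

```python
import numpy as np

# Hypothetical sketch of a CPG-style rhythm generator: a Hopf oscillator
# produces a stable limit cycle whose angular frequency is modulated by the
# commanded speed, mimicking the frequency shift at the walk-run transition.
# All coefficients below are illustrative assumptions, not fitted parameters.

def step_hopf(x, y, omega, mu=1.0, alpha=10.0, dt=0.002):
    """One Euler step of a Hopf oscillator converging to radius sqrt(mu)."""
    r2 = x * x + y * y
    dx = alpha * (mu - r2) * x - omega * y
    dy = alpha * (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

def stride_frequency(speed):
    """Illustrative speed-to-frequency map (rad/s). A real map would be fit
    from motion-capture data, with a cadence jump at the walk-run transition
    speed (assumed here to be ~2.0 m/s)."""
    if speed < 2.0:          # walking regime: lower base cadence
        hz = 0.8 + 0.35 * speed
    else:                    # running regime: higher base cadence
        hz = 1.3 + 0.25 * speed
    return 2.0 * np.pi * hz

# Roll the oscillator forward while the speed command ramps from walk to run;
# the oscillator state (x, y) encodes a continuous gait phase that could be
# appended to the policy observation as rhythmic guidance.
x, y = 1.0, 0.0
for t in range(1000):
    speed_cmd = 1.0 + 2.0 * t / 1000.0   # ramp commanded speed, m/s
    x, y = step_hopf(x, y, stride_frequency(speed_cmd))
phase = np.arctan2(y, x)                 # periodic phase signal in (-pi, pi]
print(f"final gait phase: {phase:.3f} rad")
```

In a setup of this kind, the phase signal would serve as a periodic observation (or reward-shaping reference) for the constrained RL policy, while barrier-function terms penalize constraint violations during training; the exact coupling used by Walk2Run is described in the paper itself.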