Background: Acquiring human-like motor skills in embodied musculoskeletal models is challenging due to the high dimensionality and redundancy of muscle actuators.
Methods: Inspired by coordination patterns in human motor control, we introduce a synergy-guided reinforcement learning framework that integrates physiological priors derived from muscle synergies into the control policy of an embodied musculoskeletal model. The framework leverages coordinated muscle activation patterns to guide learning, generating muscle excitation signals via a synergy-guided control component and a residual control component. To evaluate the proposed method, four badminton stroke skills are selected as benchmark tasks (forehand/backhand, inward/outward net slices).
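The described decomposition of the control output can be illustrated with a minimal sketch. All names, dimensions, and the synergy basis below are hypothetical assumptions for illustration only, not the paper's actual implementation: the policy emits a low-dimensional synergy command plus a small per-muscle residual, and the excitation is their combination clipped to the valid range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 synergies coordinating 12 muscles.
N_SYNERGIES, N_MUSCLES = 4, 12

# W: fixed synergy basis (each column is one coordinated activation
# pattern across muscles), e.g. as could be extracted from EMG via NMF.
W = np.abs(rng.normal(size=(N_MUSCLES, N_SYNERGIES)))
W /= W.sum(axis=0)  # normalize each synergy pattern

def muscle_excitation(synergy_cmd, residual):
    """Combine low-dimensional synergy commands with a per-muscle
    residual correction; clip to the valid excitation range [0, 1]."""
    return np.clip(W @ synergy_cmd + residual, 0.0, 1.0)

# The policy outputs 4 synergy activations plus 12 small residuals
# rather than 12 independent excitations.
cmd = np.array([0.8, 0.2, 0.0, 0.5])
res = 0.05 * rng.standard_normal(N_MUSCLES)
u = muscle_excitation(cmd, res)
```

The synergy term shrinks the effective action space the policy must explore, while the residual term preserves full expressiveness for corrections the fixed basis cannot represent.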
Results: Experimental results show that our method achieves an average root mean square error below 0.015 rad across all stroke types, demonstrating its ability to accurately learn expert motion. It also outperforms a baseline proximal policy optimization (PPO) model in trajectory accuracy, energy efficiency, and training convergence speed, excelling particularly in energy efficiency with up to a 14.9% reduction in energy consumption. The forehand high serve is further tested to validate the method's effectiveness on longer, larger-range movements, showing the same advantages. Moreover, the muscle synergies learned by the model exhibit moderate resemblance to human synergies, indicating potential interpretability and biological plausibility.
Conclusion: This work highlights that integrating neurophysiological priors into reinforcement learning provides a promising pathway toward efficient, interpretable, and human-like motor control.
Significance: The approach holds promise for advancing motor skill assessment, human-machine interfaces, and rehabilitation technologies by enabling more efficient and human-like motion skill learning.