Asteroid landing is one of the most direct, effective, and challenging ways to gather data on small celestial bodies, and it is a prominent topic in deep space exploration. Since a significant number of asteroids have natural satellites, a trajectory that transfers a lander from one body of such a system to another would enable rapid exploration of the entire small-body system. This problem, however, involves high-dimensional, nonlinear dynamics: modeling the complex perturbation environment around small celestial bodies, describing the lander and the asteroid's rugged surface, and analyzing the uncertain attitude and trajectory motion of the lander. These factors pose new challenges to space dynamics. To address them, this paper applies deep reinforcement learning (DRL). The model consists of three main modules: a deep reinforcement learning module, a dual-asteroid simulation module, and a visualization module. Given the probe's initial position, the initial states of the dual-asteroid system, and the target landing-point coordinates, it outputs the probe's trajectory, control strategy, and energy consumption as performance indicators. The paper implements data loading and preprocessing, builds the deep reinforcement learning model, develops the physical simulation module, and visualizes the results. The results show that reinforcement learning-based trajectory optimization significantly reduces the probe's energy consumption along the transfer trajectory (by approximately 44.6%) while maintaining good terminal precision and reward. This demonstrates that combining closest-approach tracking control with a deep reinforcement learning model can effectively improve the probe's energy efficiency.
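The described pipeline (initial probe state and dual-asteroid states in, trajectory and accumulated energy cost out) can be illustrated with a highly simplified sketch. Everything here is an assumption for illustration: the class name `DualAsteroidEnv`, the point-mass two-body gravity model, the Euler integration step, and the reward shaping are not the paper's actual simulation module, which models a far more complex perturbation environment.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

class DualAsteroidEnv:
    """Hypothetical minimal environment: a point-mass probe in the
    gravity field of two asteroids, tracking a target landing point."""

    def __init__(self, m1, m2, p1, p2, probe_pos, probe_vel, target):
        self.masses = (m1, m2)
        self.bodies = [list(p1), list(p2)]
        self.pos = list(probe_pos)
        self.vel = list(probe_vel)
        self.target = list(target)
        self.energy_used = 0.0  # accumulated |thrust| * dt, the cost to minimize

    def gravity(self):
        # Net acceleration from both asteroids, treated as point masses.
        acc = [0.0, 0.0, 0.0]
        for m, b in zip(self.masses, self.bodies):
            d = [b[i] - self.pos[i] for i in range(3)]
            r = math.sqrt(sum(x * x for x in d)) + 1e-9  # avoid division by zero
            for i in range(3):
                acc[i] += G * m * d[i] / r**3
        return acc

    def step(self, thrust_acc, dt=1.0):
        # One control step: the RL agent's action is a thrust acceleration.
        g = self.gravity()
        for i in range(3):
            self.vel[i] += (g[i] + thrust_acc[i]) * dt
            self.pos[i] += self.vel[i] * dt
        self.energy_used += math.sqrt(sum(a * a for a in thrust_acc)) * dt
        dist = math.sqrt(sum((self.pos[i] - self.target[i]) ** 2
                             for i in range(3)))
        # Hypothetical reward shaping: penalize distance to target and energy.
        reward = -dist - 0.1 * self.energy_used
        return list(self.pos), reward

env = DualAsteroidEnv(m1=1e12, m2=5e11,
                      p1=(0, 0, 0), p2=(2000, 0, 0),
                      probe_pos=(100, 0, 0), probe_vel=(0, 1, 0),
                      target=(2000, 50, 0))
pos, reward = env.step((0.01, 0.0, 0.0))
```

In this reading, the DRL module would learn a policy mapping the probe and system states to `thrust_acc`, and `energy_used` is the performance indicator that the reported ~44.6% reduction refers to.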