The rise of personalized manufacturing presents significant challenges for robotic assembly. While learning-based methods offer promising solutions, they often suffer from low training efficiency and poor generalization. To address these limitations, this paper proposes an efficient prior-guided (PG) deep reinforcement learning (DRL) approach for generalizable robotic assembly using multi-sensor information. First, a phased multi-sensor information fusion method is introduced. Then, a visual feature extraction method that combines MobileNetV3-Lite with conventional digital image processing, together with a rule-based force feature extraction method, is designed to extract lower-dimensional features as prior-guided knowledge. Building on these methods, a Soft Actor-Critic (SAC) algorithm that integrates a Gated Recurrent Unit (GRU) network architecture with PG is proposed, enabling efficient assembly skill learning. Simulations and physical experiments on three typical assembly skills, i.e., search, alignment, and insertion, are conducted. Results indicate that the feature extraction method reduces visual feature dimensions by 93.75% and provides accurate prior-guided knowledge for DRL. Compared with the baseline SAC algorithm, the proposed assembly skill learning algorithm achieves a 30.16% reduction in average training time and a 16.82% decrease in average completion steps. Furthermore, all learned skills can be rapidly transferred across different objects, and all assembly tasks are completed efficiently and compliantly with an average success rate of 96.86%.
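The phased fusion idea above can be illustrated with a minimal sketch: each assembly phase (search, alignment, insertion) observes only the sensor features relevant to it, keeping the DRL observation low-dimensional. The phase-to-sensor mapping and function names below are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of phased multi-sensor information fusion.
# The mapping from assembly phase to active sensors is an assumption
# for illustration; the paper's actual fusion scheme may differ.
PHASE_SENSORS = {
    "search":    ["vision"],            # locate the hole visually
    "alignment": ["vision", "force"],   # refine pose with contact cues
    "insertion": ["force"],             # compliant insertion via force/torque
}

def fuse_observation(phase, features):
    """Concatenate the feature vectors of the sensors active in `phase`.

    `features` maps sensor name -> list of floats, standing in for the
    low-dimensional prior-guided features (e.g., from MobileNetV3-Lite
    visual extraction or rule-based force extraction).
    """
    obs = []
    for sensor in PHASE_SENSORS[phase]:
        obs.extend(features[sensor])
    return obs

feats = {"vision": [0.1, 0.2], "force": [0.5]}
print(fuse_observation("alignment", feats))  # [0.1, 0.2, 0.5]
print(fuse_observation("insertion", feats))  # [0.5]
```

Restricting each phase to its relevant modalities is one plausible way such a scheme shrinks the observation space that the SAC agent must learn from.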
