Human-robot collaboration (HRC) offers promising solutions for industrial assembly by combining human dexterity with robot efficiency. However, the close proximity of humans and robots raises safety concerns that can limit efficiency. Previous studies have typically addressed safety or efficiency in isolation, relying on incomplete models of human behavior, which has led to robot control strategies with limited adaptability. To bridge this gap, this paper proposes a hierarchical human behavior modeling framework that integrates human motion prediction (HMP) and human action segmentation. The proposed model captures both fine-grained motion dynamics and higher-level task structure, enabling a more complete and context-aware understanding of human behavior, and can enhance robot decision-making for both proactive safety mechanisms and dynamic task allocation. For HMP, three predictive models are compared: a Convolutional Neural Network - Long Short-Term Memory network (CNN-LSTM), a Spatial-Temporal Graph Convolutional Network (ST-GCN), and a Transformer. The CNN-LSTM and ST-GCN outperform the Transformer, demonstrating better short-term predictive accuracy. Human action segmentation comprises feature extraction, dimensionality reduction, clustering, and two-stage temporal segmentation. Features derived from the CNN-LSTM-based HMP achieve the highest clustering performance, and the two-stage segmentation attains high accuracy, with normalized edit distances (NED) of 0.029 and 0.07 for task-level and sub-task-level segmentation, respectively. Evaluation results show that proactive collision avoidance using predicted motions increases the safety distance (from 0.4633 m to 0.4717 m), while dynamic task allocation based on action segmentation improves robot efficiency (from 84.95% to 98.56%). These results validate the effectiveness of the proposed hierarchical human behavior modeling framework in simultaneously enhancing safety and efficiency in HRC assembly.
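The normalized edit distance (NED) reported above can be illustrated with a minimal sketch. This assumes the common formulation for temporal action segmentation: the Levenshtein distance between the predicted and ground-truth sequences of segment labels, divided by the length of the longer sequence (so 0 means a perfect match); the paper's exact normalization, and the example label names, are assumptions for illustration.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # deletions to reach an empty sequence
    for j in range(n + 1):
        dp[0][j] = j  # insertions to build b from empty
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def normalized_edit_distance(pred_segments, true_segments):
    """NED in [0, 1]; lower is better. Assumed normalization: max length."""
    if not pred_segments and not true_segments:
        return 0.0
    return levenshtein(pred_segments, true_segments) / max(
        len(pred_segments), len(true_segments))

# Hypothetical sub-task label sequences (not from the paper's dataset):
pred = ["reach", "grasp", "place", "retract"]
true = ["reach", "grasp", "screw", "place", "retract"]
print(normalized_edit_distance(pred, true))  # one missed segment out of 5 -> 0.2
```

Operating on segment label sequences rather than per-frame labels makes the metric insensitive to small boundary shifts but sensitive to over- and under-segmentation, which is why it complements frame-wise accuracy.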