Traditional human-robot collaboration research has primarily focused on single human-robot dyads, yet it struggles to address complex industrial scenarios characterized by concurrent multi-tasking, dynamic disturbances, and heterogeneous role coordination. Transitioning toward multi-human multi-robot collaboration (MHMRC) is crucial for achieving a significant leap in coordination efficiency and manufacturing flexibility. To address this, we investigate a Hybrid Cognitive Digital Twin (HCDT) framework built on generative knowledge-augmented paradigms. Our approach introduces a human-centric cognitive entity that generates task data and knowledge-driven strategies for MHMRC. This work demonstrates that integrating Large Language Models (LLMs) with Graph Neural Networks (GNNs) offers robust capabilities in comprehension and reasoning, ideally meeting MHMRC's requirements for handling unplanned operational variations and adapting to dynamic collaborative tasks. Furthermore, we demonstrate that, compared to human-engineered pre-coded strategies, the HCDT-powered MHMRC system autonomously generates collaborative strategies for unscheduled tasks under more complex dynamic conditions and mission scenarios, enabling operation in situations beyond conventional predefined patterns. The proposed methodology was validated in automotive lithium-ion battery (LIB) disassembly applications. Experimental results demonstrate its adaptability to dynamic collaborative tasks and its generalization in generating strategies for unplanned operational variations within dynamic disassembly environments. This approach effectively overcomes key technical challenges in achieving autonomous collaboration in MHMRC systems through knowledge representation, task allocation, and collaborative optimization.