A deep learning-enabled visual-inertial fusion method for human pose estimation in occluded human-robot collaborative assembly scenarios
Baicun Wang, Ci Song, Xingyu Li, Huiying Zhou, Huayong Yang, Lihui Wang
Robotics and Computer-Integrated Manufacturing, Volume 93, Article 102906 (published 2024-11-30)
DOI: 10.1016/j.rcim.2024.102906
URL: https://www.sciencedirect.com/science/article/pii/S0736584524001935
Citations: 0
Abstract
In the context of human-centric smart manufacturing, human-robot collaboration (HRC) systems leverage the strengths of both humans and machines to achieve more flexible and efficient manufacturing. In particular, estimating and monitoring human motion status determines when and how the robots cooperate. However, occlusion in industrial settings seriously degrades the performance of human pose estimation (HPE). Using more sensors can alleviate the occlusion issue, but it may incur additional computational cost and reduce workers' comfort. To address this issue, this work proposes a visual-inertial fusion-based method for HPE in HRC, aiming to achieve accurate and robust estimation while minimizing the influence on human motion. A part-specific cross-modal fusion mechanism is designed to integrate spatial information provided by a monocular camera and six Inertial Measurement Units (IMUs). A multi-scale temporal module is developed to model the motion dependence between frames at different granularities. Our approach achieves a Mean Per Joint Position Error (MPJPE) of 34.9 mm on the TotalCapture dataset and 53.9 mm on the 3DPW dataset, outperforming state-of-the-art visual-inertial fusion-based methods. Tests on a synthetic-occlusion dataset further validate the occlusion robustness of our network. Quantitative and qualitative experiments on a real assembly case verify the superiority and potential of our approach in HRC. This work is expected to serve as a reference for human motion perception in occluded HRC scenarios.
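The headline numbers above are reported as MPJPE, the standard metric for 3D human pose estimation: the Euclidean distance between predicted and ground-truth joint positions, averaged over joints and frames. The following is a minimal illustrative sketch of how MPJPE is commonly computed; the function name, array shapes, and root-alignment convention are assumptions for illustration and are not taken from the paper.

import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray, root_idx: int = 0) -> float:
    """Mean per-joint position error, in the units of the inputs (e.g. mm).

    pred, gt: arrays of shape (num_frames, num_joints, 3).
    Both skeletons are aligned to a root joint before averaging the
    joint-wise Euclidean distances, a common convention in HPE benchmarks.
    """
    pred = pred - pred[:, root_idx:root_idx + 1, :]  # root-align predictions
    gt = gt - gt[:, root_idx:root_idx + 1, :]        # root-align ground truth
    per_joint_dist = np.linalg.norm(pred - gt, axis=-1)  # (frames, joints)
    return float(per_joint_dist.mean())

# Usage example with synthetic data: 100 frames of a hypothetical
# 17-joint skeleton, values in millimetres.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(scale=500.0, size=(100, 17, 3))
    pred = gt + rng.normal(scale=30.0, size=gt.shape)  # ~30 mm joint noise
    print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")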
Journal description:
The journal, Robotics and Computer-Integrated Manufacturing, focuses on sharing research applications that contribute to the development of new or enhanced robotics, manufacturing technologies, and innovative manufacturing strategies that are relevant to industry. Papers that combine theory and experimental validation are preferred, while review papers on current robotics and manufacturing issues are also considered. However, papers on traditional machining processes, modeling and simulation, supply chain management, and resource optimization are generally not within the scope of the journal, as there are more appropriate journals for these topics. Similarly, papers that are overly theoretical or mathematical will be directed to other suitable journals. The journal welcomes original papers in areas such as industrial robotics, human-robot collaboration in manufacturing, cloud-based manufacturing, cyber-physical production systems, big data analytics in manufacturing, smart mechatronics, machine learning, adaptive and sustainable manufacturing, and other fields involving unique manufacturing technologies.