Dynamic scenario-enhanced diverse human motion prediction network for proactive human–robot collaboration in customized assembly tasks

Pengfei Ding, Jie Zhang, Pai Zheng, Peng Zhang, Bo Fei, Ziqi Xu

Journal of Intelligent Manufacturing (Q1, Computer Science, Artificial Intelligence; Impact Factor 5.9). Published 2024-07-22. DOI: 10.1007/s10845-024-02462-8

Abstract: Human motion prediction is crucial for facilitating human–robot collaboration in customized assembly tasks. However, existing research primarily focuses on predicting limited human motions from static global information, which fails to capture the highly stochastic nature of customized assembly operations within a given region. To address this, we propose a dynamic scenario-enhanced diverse human motion prediction network that extracts dynamic collaborative features to predict highly stochastic customized assembly operations. We first present a multi-level feature adaptation network that generates information about dynamically manipulated objects by extracting multi-attribute features at different levels, including multi-channel gaze tracking, multi-scale object affordance detection, and multi-modal 6-degree-of-freedom object pose estimation. Notably, we employ gaze tracking to locate the collaborative space accurately. Furthermore, we introduce a multi-step feedback-refined diffusion sampling network designed specifically for predicting highly stochastic customized assembly operations. This network refines the outcomes of our proposed multi-weight diffusion sampling strategy to better align with the target distribution. Additionally, we develop a feedback regulation mechanism that incorporates ground-truth information at each prediction step to ensure the reliability of the results. Finally, the effectiveness of the proposed method is demonstrated through comparative experiments and validation on assembly tasks in a laboratory environment.
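The multi-step, feedback-refined sampling idea described in the abstract can be sketched as a toy denoising loop: a per-step weight schedule drives the sample toward the target distribution, and a feedback term blends in a reference signal at every step. This is an illustrative sketch only; the function name, the Gaussian noise model, and the `feedback_gain` blend are assumptions, not the paper's actual implementation.

```python
import numpy as np

def multi_step_refined_sampling(x_init, reference, n_steps=10,
                                feedback_gain=0.5, rng=None):
    """Toy multi-step diffusion-style sampler with feedback refinement.

    Each step applies a weighted denoising move toward the reference,
    then a feedback blend (standing in for the paper's ground-truth
    regulation) pulls the estimate further toward it.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # per-step weights, analogous to a multi-weight sampling schedule
    weights = np.linspace(1.0, 0.1, n_steps)
    x = np.asarray(x_init, dtype=float)
    ref = np.asarray(reference, dtype=float)
    for w in weights:
        noise = rng.normal(scale=0.01, size=x.shape)
        # denoising move toward the reference, scaled by the step weight
        x = x + w * (ref - x) + noise
        # feedback regulation: re-anchor the estimate at every step
        x = (1.0 - feedback_gain) * x + feedback_gain * ref
    return x
```

Under this schedule the residual error shrinks geometrically, so even a poor initial sample converges to the reference within a few steps, which mirrors the abstract's claim that per-step feedback keeps the prediction reliable.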