{"title":"基于 MS-TCN 的时空模型与三轴触觉,用于增强柔性印刷电路组装能力","authors":"Zengxin Kang, Jing Cui, Yijie Wang, Zhikai Hu, Zhongyi Chu","doi":"10.1108/ria-10-2023-0136","DOIUrl":null,"url":null,"abstract":"Purpose\nCurrent flexible printed circuit (FPC) assembly relies heavily on manual labor, limiting capacity and increasing costs. Small FPC size makes automation challenging as terminals can be visually occluded. The purpose of this study is to use 3D tactile sensing to mimic human manual mating skills for enabling sensing offset between FPC terminals (FPC-t) and FPC mating slots (FPC-s) under visual occlusion.\n\nDesign/methodology/approach\nThe proposed model has three stages: spatial encoding, offset estimation and action strategy. The spatial encoder maps sparse 3D tactile data into a compact 1D feature capturing valid spatial assembly information to enable temporal processing. To compensate for low sensor resolution, consecutive spatial features are input to a multistage temporal convolutional network which estimates alignment offsets. The robot then performs alignment or mating actions based on the estimated offsets.\n\nFindings\nExperiments are conducted on a Redmi Note 4 smartphone assembly platform. Compared to other models, the proposed approach achieves superior offset estimation. Within limited trials, it successfully assembles FPCs under visual occlusion using three-axis tactile sensing.\n\nOriginality/value\nA spatial encoder is designed to encode three-axis tactile data into feature maps, overcoming multistage temporal convolution network’s (MS-TCN) inability to directly process such input. Modifying the output to estimate assembly offsets with related motion semantics overcame MS-TCN’s segmentation points output, unable to meet assembly monitoring needs. Training and testing the improved MS-TCN on an FPC data set demonstrated accurate monitoring of the full process. An assembly platform verified performance on automated FPC assembly.\n","PeriodicalId":501194,"journal":{"name":"Robotic Intelligence and Automation","volume":"124 37","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An MS-TCN based spatiotemporal model with three-axis tactile for enhancing flexible printed circuit assembly\",\"authors\":\"Zengxin Kang, Jing Cui, Yijie Wang, Zhikai Hu, Zhongyi Chu\",\"doi\":\"10.1108/ria-10-2023-0136\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Purpose\\nCurrent flexible printed circuit (FPC) assembly relies heavily on manual labor, limiting capacity and increasing costs. Small FPC size makes automation challenging as terminals can be visually occluded. The purpose of this study is to use 3D tactile sensing to mimic human manual mating skills for enabling sensing offset between FPC terminals (FPC-t) and FPC mating slots (FPC-s) under visual occlusion.\\n\\nDesign/methodology/approach\\nThe proposed model has three stages: spatial encoding, offset estimation and action strategy. The spatial encoder maps sparse 3D tactile data into a compact 1D feature capturing valid spatial assembly information to enable temporal processing. To compensate for low sensor resolution, consecutive spatial features are input to a multistage temporal convolutional network which estimates alignment offsets. 
The robot then performs alignment or mating actions based on the estimated offsets.\\n\\nFindings\\nExperiments are conducted on a Redmi Note 4 smartphone assembly platform. Compared to other models, the proposed approach achieves superior offset estimation. Within limited trials, it successfully assembles FPCs under visual occlusion using three-axis tactile sensing.\\n\\nOriginality/value\\nA spatial encoder is designed to encode three-axis tactile data into feature maps, overcoming multistage temporal convolution network’s (MS-TCN) inability to directly process such input. Modifying the output to estimate assembly offsets with related motion semantics overcame MS-TCN’s segmentation points output, unable to meet assembly monitoring needs. Training and testing the improved MS-TCN on an FPC data set demonstrated accurate monitoring of the full process. An assembly platform verified performance on automated FPC assembly.\\n\",\"PeriodicalId\":501194,\"journal\":{\"name\":\"Robotic Intelligence and Automation\",\"volume\":\"124 37\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Robotic Intelligence and Automation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/ria-10-2023-0136\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotic Intelligence and Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/ria-10-2023-0136","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An MS-TCN based spatiotemporal model with three-axis tactile for enhancing flexible printed circuit assembly
Purpose
Current flexible printed circuit (FPC) assembly relies heavily on manual labor, limiting capacity and increasing costs. The small size of FPCs makes automation challenging because the terminals can be visually occluded. The purpose of this study is to use 3D tactile sensing to mimic human manual mating skills, enabling the offset between FPC terminals (FPC-t) and FPC mating slots (FPC-s) to be sensed under visual occlusion.
Design/methodology/approach
The proposed model has three stages: spatial encoding, offset estimation and action strategy. The spatial encoder maps sparse 3D tactile data into a compact 1D feature that captures valid spatial assembly information, enabling temporal processing. To compensate for low sensor resolution, consecutive spatial features are input to a multistage temporal convolutional network (MS-TCN), which estimates alignment offsets. The robot then performs alignment or mating actions based on the estimated offsets.
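A minimal PyTorch sketch of the three-stage pipeline described above is given below. The taxel grid size, layer widths, stage count and alignment tolerances are illustrative assumptions, not values taken from the paper; the sketch only shows the general shape of a spatial encoder feeding a multistage dilated temporal network, followed by a simple offset-driven action rule.

```python
# Illustrative sketch only: dimensions and thresholds are assumed, not from the paper.
import torch
import torch.nn as nn

class SpatialEncoder(nn.Module):
    """Maps one frame of sparse three-axis tactile data to a compact 1D feature."""
    def __init__(self, taxels=16, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(taxels * 3, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, x):                  # x: (batch, taxels, 3)
        return self.net(x.flatten(1))      # -> (batch, feat_dim)

class DilatedStage(nn.Module):
    """One refinement stage of residual dilated temporal convolutions (MS-TCN style)."""
    def __init__(self, in_dim, hidden=64, out_dim=3, layers=6):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, hidden, 1)
        self.blocks = nn.ModuleList(
            nn.Conv1d(hidden, hidden, 3, padding=2 ** i, dilation=2 ** i)
            for i in range(layers)
        )
        self.out = nn.Conv1d(hidden, out_dim, 1)

    def forward(self, x):                  # x: (batch, in_dim, T)
        h = self.inp(x)
        for blk in self.blocks:
            h = h + torch.relu(blk(h))     # residual dilated convolution
        return self.out(h)                 # -> (batch, out_dim, T)

class OffsetEstimator(nn.Module):
    """Per-frame spatial encoding, then multistage temporal refinement of offsets."""
    def __init__(self, taxels=16, feat_dim=64, stages=3):
        super().__init__()
        self.encoder = SpatialEncoder(taxels, feat_dim)
        self.stage1 = DilatedStage(feat_dim)
        self.refine = nn.ModuleList(DilatedStage(3) for _ in range(stages - 1))

    def forward(self, frames):             # frames: (batch, T, taxels, 3)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        offsets = self.stage1(feats.transpose(1, 2))
        for stage in self.refine:
            offsets = stage(offsets)       # later stages refine earlier estimates
        return offsets.transpose(1, 2)     # (batch, T, 3) per-frame offset estimates

def choose_action(offset_xyz, tol=(0.1, 0.1, 0.05)):
    """Action strategy sketch: mate when within tolerance, otherwise correct the offset."""
    dx, dy, dz = offset_xyz
    if abs(dx) <= tol[0] and abs(dy) <= tol[1] and abs(dz) <= tol[2]:
        return "mate", (0.0, 0.0, 0.0)
    return "align", (-dx, -dy, -dz)

# Example: offsets = OffsetEstimator()(torch.randn(2, 50, 16, 3))
```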
Findings
Experiments are conducted on a Redmi Note 4 smartphone assembly platform. Compared to other models, the proposed approach achieves superior offset estimation. Within a limited number of trials, it successfully assembles FPCs under visual occlusion using three-axis tactile sensing.
Originality/value
A spatial encoder is designed to encode three-axis tactile data into feature maps, overcoming the inability of the multistage temporal convolutional network (MS-TCN) to process such input directly. Modifying the output to estimate assembly offsets with associated motion semantics overcomes the limitation of MS-TCN's original segmentation-point output, which cannot meet assembly-monitoring needs. Training and testing the improved MS-TCN on an FPC data set demonstrated accurate monitoring of the full assembly process. An assembly platform verified its performance on automated FPC assembly.
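As a hedged illustration of the modified output head, the sketch below replaces MS-TCN's frame-wise segmentation objective with per-frame offset regression plus a temporal-smoothness term; the loss weighting and the smoothness formulation are assumptions for illustration, not values reported by the authors.

```python
# Assumed training objective for the offset-regression output; weights are illustrative.
import torch
import torch.nn.functional as F

def offset_loss(pred, target, smooth_weight=0.15):
    """pred, target: (batch, T, 3) estimated vs. ground-truth offsets."""
    reg = F.mse_loss(pred, target)                    # frame-wise offset regression
    smooth = F.mse_loss(pred[:, 1:], pred[:, :-1])    # penalize jittery estimates over time
    return reg + smooth_weight * smooth
```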