A hierarchical visual model for robot automatic arc welding guidance
Chen Chen, Tingyang Chen, Zhenhua Cai, Chunnian Zeng, Xiaoyue Jin
Industrial Robot-The International Journal of Robotics Research and Application, pp. 299-313, published 2022-10-25
DOI: 10.1108/ir-05-2022-0127
Citations: 0
Abstract
Purpose
The traditional vision system cannot automatically adjust the feature point extraction method according to the type of welding seam. In addition, the robot cannot self-correct laying position errors or machining errors. To solve these problems, this paper aims to propose a hierarchical visual model to achieve automatic arc welding guidance.
Design/methodology/approach
The hierarchical visual model proposed in this paper is divided into two layers: a welding seam classification layer and a feature point extraction layer. In the welding seam classification layer, a SegNet network model is trained to identify the welding seam type, and the resulting prediction mask is used to segment the corresponding point clouds. In the feature point extraction layer, the scanning path is determined from the point cloud obtained from the upper layer to correct the laying position error, and the feature point extraction method is selected automatically according to the type of welding seam to correct the machining error. Furthermore, a specific feature point extraction method is proposed for each type of welding seam. The proposed visual model is validated experimentally, and the feature point extraction results and the seam tracking error are analyzed.
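The abstract does not include code; as an illustration only, the following minimal Python sketch shows one way such a two-layer pipeline could be organized: a segmentation network predicts the seam type and a mask, the mask selects the corresponding point-cloud region, and a per-seam-type extractor is dispatched to locate the feature points. All names (SeamType, classify_seam, EXTRACTORS and so on) are hypothetical and do not come from the paper.

```python
# Hypothetical sketch of the two-layer pipeline described above; names and
# interfaces are illustrative and not taken from the paper.
from enum import Enum
from typing import Callable, Dict, Tuple

import numpy as np


class SeamType(Enum):
    BUTT = 0
    FILLET = 1
    LAP = 2


def classify_seam(image: np.ndarray) -> Tuple[SeamType, np.ndarray]:
    """Layer 1: a SegNet-style network predicts the seam type and a pixel mask.

    Placeholder only: a real system would load trained weights and run inference.
    """
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[image.shape[0] // 2, :] = True  # pretend the seam runs across the middle row
    return SeamType.BUTT, mask


def segment_point_cloud(cloud: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only the 3-D points whose pixels fall inside the predicted mask."""
    # Assumes a one-to-one pixel/point correspondence from the scanning sensor.
    return cloud[mask.ravel()]


def extract_butt_features(points: np.ndarray) -> np.ndarray:
    """Toy extractor: take the lowest point of the profile as the seam feature point."""
    return points[np.argmin(points[:, 2])][None, :]


# Layer 2: the extractor is chosen automatically from the predicted seam type.
EXTRACTORS: Dict[SeamType, Callable[[np.ndarray], np.ndarray]] = {
    SeamType.BUTT: extract_butt_features,
    # SeamType.FILLET and SeamType.LAP would map to their own extractors.
}


def locate_feature_points(image: np.ndarray, cloud: np.ndarray) -> np.ndarray:
    seam_type, mask = classify_seam(image)          # welding seam classification layer
    seam_cloud = segment_point_cloud(cloud, mask)   # segment the corresponding point cloud
    return EXTRACTORS[seam_type](seam_cloud)        # feature point extraction layer
```

A table-driven dispatch such as EXTRACTORS is one simple way to let the extraction method adjust automatically according to the type of welding seam, as the abstract describes.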
Findings
The experimental results show that the algorithm accomplishes welding seam classification, feature point extraction and seam tracking with high precision. The prediction mask accuracy is above 90% for the three types of welding seam. The proposed feature point extraction method for each type of welding seam achieves sub-pixel accuracy. Across the three types of welding seam, the maximum seam tracking error is 0.33–0.41 mm and the average seam tracking error is 0.11–0.22 mm.
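For context on how such figures are typically obtained, the sketch below computes the maximum and average tracking error as per-point Euclidean distances between the tracked seam points and a reference trajectory. The assumption that the two point sets are given as corresponding (N, 3) arrays in millimetres is mine, not the paper's.

```python
import numpy as np


def seam_tracking_error(tracked: np.ndarray, reference: np.ndarray):
    """Maximum and mean Euclidean distance between corresponding seam points.

    Both inputs are assumed to be (N, 3) arrays of matched points in millimetres;
    this pairing is an assumption, not something stated in the abstract.
    """
    errors = np.linalg.norm(tracked - reference, axis=1)
    return errors.max(), errors.mean()


# Usage with dummy data (values are illustrative only):
tracked = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.2]])
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
max_err, mean_err = seam_tracking_error(tracked, reference)  # 0.2 mm, 0.15 mm
```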
Originality/value
The main innovation of this paper is a hierarchical visual model for robotic arc welding that is suitable for various types of welding seam. The proposed visual model achieves welding seam classification, feature point extraction and error correction, thereby improving the level of automation in robot welding.
Journal description:
Industrial Robot publishes peer reviewed research articles, technology reviews and specially commissioned case studies. Each issue includes high quality content covering all aspects of robotic technology, and reflecting the most interesting and strategically important research and development activities from around the world.
The journal's policy of not publishing work that has only been tested in simulation means that only the very best and most practical research articles are included. This ensures that the material that is published has real relevance and value for commercial manufacturing and research organizations. Industrial Robot's coverage includes, but is not restricted to:
Automatic assembly
Flexible manufacturing
Programming optimisation
Simulation and offline programming
Service robots
Autonomous robots
Swarm intelligence
Humanoid robots
Prosthetics and exoskeletons
Machine intelligence
Military robots
Underwater and aerial robots
Cooperative robots
Flexible grippers and tactile sensing
Robot vision
Teleoperation
Mobile robots
Search and rescue robots
Robot welding
Collision avoidance
Robotic machining
Surgical robots
Call for Papers 2020
AI for Autonomous Unmanned Systems
Agricultural Robot
Brain-Computer Interfaces for Human-Robot Interaction
Cooperative Robots
Robots for Environmental Monitoring
Rehabilitation Robots
Wearable Robotics/Exoskeletons.