Design and Implementation of Telemarketing Robot with Emotion Identification for Human-Robot Interaction
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00037
Diego Arce, Jose Balbuena, Daniel Menacho, Luis Caballero, Enzo Cisneros, Dario Huanca, M. Alvites, C. Beltran-Royo, F. Cuéllar
This work presents the design, development, and preliminary testing of an innovative mobile robot that interacts with humans for marketing, advertising, and customer service. The proposed robot can be used for various activities related to human-robot interaction (conferences, plant visits, marketing, advertising, supervision). The robot carries multiple sensors to evaluate the personal space of customers, and it includes touch-screen displays to achieve an empathetic interaction and provide personalized attention. An emotion classification algorithm was implemented to analyze how the customer reacts to the advertisements shown on the screens, so that the robot can modify its response accordingly. The robot's functionalities and interaction capabilities were validated using a prototype. The results demonstrate a good assessment in terms of reliability, usability, and performance, and a positive emotional response was measured from the participants.
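The abstract does not specify how the detected emotion drives the robot's response; a minimal sketch of such an adaptation policy, with hypothetical emotion labels and ad actions, might look like:

```python
# Hypothetical sketch: adapt the displayed advertisement based on a detected
# emotion label. The labels and actions are illustrative only; the paper does
# not disclose its actual adaptation policy.
from collections import Counter

AD_POLICY = {
    "happy":    "extend_current_ad",      # engagement detected, keep content
    "surprise": "show_related_product",   # curiosity detected, broaden offer
    "neutral":  "rotate_to_next_ad",      # no reaction, try different content
    "sad":      "soften_tone",            # switch to low-pressure messaging
    "angry":    "withdraw_and_greet",     # back off, return to greeting mode
}

def choose_action(emotion_history):
    """Pick an ad action from the most frequent recent emotion label."""
    dominant, _ = Counter(emotion_history).most_common(1)[0]
    return AD_POLICY.get(dominant, "rotate_to_next_ad")

print(choose_action(["neutral", "happy", "happy"]))  # -> extend_current_ad
```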
{"title":"Design and Implementation of Telemarketing Robot with Emotion Identification for Human-Robot Interaction","authors":"Diego Arce, Jose Balbuena, Daniel Menacho, Luis Caballero, Enzo Cisneros, Dario Huanca, M. Alvites, C. Beltran-Royo, F. Cuéllar","doi":"10.1109/IRC55401.2022.00037","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00037","url":null,"abstract":"This work presents the design, development and preliminary tests of an innovative mobile robot which interact with humans for marketing, advertising and customer services. The proposed robot can be use for various activities related to human-robot interaction (conferences, plant visits, marketing, advertising, supervision). The robot has multiple sensors in order to evaluate the personal space of customers. It also includes display touch screens in order to achieve an emphatic interaction and provide personalized attention. An emotion classification algorithm was implemented aiming to analyze how the customer reacts to the advertisements that appear on the screens, and modify its response accordingly. The robot functionalities and interaction capabilities were validated using a prototype. The results demonstrate a good assessment regarding reliability, usability and performance, and measured a positive emotional response from the participants.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115943396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beacon-based Indoor Fire Evacuation System using Augmented Reality and Machine Learning
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00023
Hwa-Cho Lee, Dohyun Chung, S. Kim, Jiwon Lim, Yoonha Bahng, Suhyun Park, Anthony H. Smith
Fire evacuation remains inefficient because present evacuation methods are unsuitable for complex buildings. To improve the evacuation system, this paper focuses on three main components. First, a Kalman filter and deep learning models were utilized to estimate the user's location accurately. Second, a Q-learning-based evacuation algorithm was designed to handle various fire situations. Lastly, augmented reality (AR) and a 2D map provide an effective navigation system. The proposed system offers the safest path based on an accurate location estimate, with a user-friendly visual supplement.
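The abstract names Q-learning for evacuation routing but gives no formulation; a minimal sketch, assuming a hypothetical room graph with one exit and one room on fire, could be:

```python
import random

# Minimal sketch of tabular Q-learning for evacuation routing on a
# hypothetical room-adjacency graph; the paper's state and reward design
# may differ.
GRAPH = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # room adjacency
EXIT, FIRE = 3, 1                                      # hypothetical layout
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s]}

def reward(s_next):
    if s_next == EXIT: return 100.0   # reached an exit
    if s_next == FIRE: return -100.0  # entered a room on fire
    return -1.0                       # step cost encourages short paths

for _ in range(2000):
    s = random.choice([0, 1, 2])
    while s != EXIT:
        a = (random.choice(GRAPH[s]) if random.random() < EPS
             else max(GRAPH[s], key=lambda n: Q[(s, n)]))
        r = reward(a)
        best_next = 0.0 if a == EXIT else max(Q[(a, n)] for n in GRAPH[a])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = a

# Greedy policy from room 0 avoids room 1 (fire) and goes 0 -> 2 -> 3.
print(max(GRAPH[0], key=lambda n: Q[(0, n)]))  # -> 2
```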
{"title":"Beacon-based Indoor Fire Evacuation System using Augmented Reality and Machine Learning","authors":"Hwa-Cho Lee, Dohyun Chung, S. Kim, Jiwon Lim, Yoonha Bahng, Suhyun Park, Anthony H. Smith","doi":"10.1109/IRC55401.2022.00023","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00023","url":null,"abstract":"The inefficiency of fire evacuation has been the issue since the present evacuation method is unsuitable for complex buildings. In order to improve the evacuation system, this paper aims at three main components. First, Kalman Filter and deep learning models were utilized to estimate the user’s location accurately. Second, Q-learning based evacuation algorithm was designed to deal with various fire situations. Lastly, AR and a 2D map offer effective navigation systems. The proposed system offers the safest path based on accurate location with a user-friendly visual supplement.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125782627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensor-guided motions for robot-based component testing
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00021
Julian Hanke, Christian Eymüller, Alexander Poeppel, Julia Reichmann, A. Trauth, M. Sause, W. Reif
This paper presents the use of sensor-guided motions for robot-based component testing to compensate for the robot's path deviations under load. We implemented two different sensor-guided motions: one using a 3D camera system to minimize the absolute deviation, and one using a force/torque sensor mounted directly on the robot's end effector to minimize the transverse forces and torques that occur. We evaluated both sensor-guided motions in our testing facility with a classical tensile test and a heavy-duty industrial robot. The obtained results show that both the transverse forces and the absolute deviation were significantly reduced.
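The control law behind the force/torque-guided motion is not disclosed; a minimal sketch of the underlying idea, a proportional admittance loop with illustrative gains and a simulated lateral stiffness, might be:

```python
import numpy as np

# Minimal sketch of the force-minimizing correction idea: a proportional
# admittance law nudges the end effector laterally to drive the measured
# transverse forces toward zero. The gain and the simulated stiffness are
# illustrative assumptions, not the paper's actual controller.
K_P = 0.002            # m/s per N, hypothetical admittance gain
DT = 0.01              # control period in seconds
STIFFNESS = 5000.0     # N/m, hypothetical lateral stiffness of the specimen

offset = np.zeros(2)                         # lateral (x, y) path correction
true_deviation = np.array([0.004, -0.002])   # m, unknown path error under load

for step in range(500):
    # Simulated force/torque reading: transverse force from lateral error.
    f_xy = STIFFNESS * (true_deviation - offset)
    offset += K_P * f_xy * DT  # move so as to cancel the transverse force

print(np.round(offset, 4))  # converges toward the true deviation
```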
{"title":"Sensor-guided motions for robot-based component testing","authors":"Julian Hanke, Christian Eymüller, Alexander Poeppel, Julia Reichmann, A. Trauth, M. Sause, W. Reif","doi":"10.1109/IRC55401.2022.00021","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00021","url":null,"abstract":"This paper presents the use of sensor-guided motions for robot-based component testing to compensate the robot’s path deviations under load. We implemented two different sensor-guided motions consisting of a 3D camera system to minimize the absolute deviation and a force/torque sensor mounted directly to the robot’s end effector to minimize occurring transverse forces and torques. We evaluated these two sensor-guided motions in our testing facility with a classical tensile test and a heavy-duty industrial robot. From the obtained results, it can be stated, that transverse forces as well as the absolute deviation were significantly reduced.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126000115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UAV Payload Detection Using Deep Learning and Data Augmentation
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00009
Ilmun Ku, Seungyeon Roh, Gyeong-hyeon Kim, Charles Taylor, Yaqin Wang, E. Matson
In recent years, the technology behind Unmanned Aerial Vehicles (UAVs) has continually advanced. With these developments, however, malicious activities employing UAVs have also been on the rise. In this study, Deep Learning (DL) algorithms are utilized to detect and classify UAVs transporting payloads based on the sound they emit. Training DL algorithms reliably requires a sufficient amount of audio data, so UAV sound recordings were collected and combined with data augmentation to secure a satisfactory sample size for testing purposes. Feature-based classification was then applied to the audio groups, identifying each UAV's payload (or lack thereof). Finally, a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), and a Convolutional Recurrent Neural Network (CRNN) were used to analyze the final dataset. They were evaluated on their ability to correctly categorize the unloaded, one-payload, and two-payload UAV classes, as well as a noise class, solely through audio. MFCC features showed the best performance across all three models, with accuracies of 0.9493 (CNN), 0.8133 (RNN), and 0.9174 (CRNN). The contributions of this study are a cost-efficient data collection method using laptop microphones, the application of DL to UAV payload detection where prior studies used plain neural networks, and the identification of the best audio feature for UAV payload detection with the three DL technologies. The limitation of this paper is that only two UAV models and one kind of payload were used to collect data; diverse UAVs and payloads are expected to be used in future work.
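As a rough illustration of the MFCC-plus-CNN pipeline described above (the layer sizes and hyperparameters are assumptions, not the paper's architecture):

```python
import numpy as np, librosa, torch, torch.nn as nn

# Minimal sketch of the MFCC + CNN pipeline; the 4 classes (no payload, one
# payload, two payloads, noise) follow the abstract, but the actual network
# and training setup in the paper may differ.
sr = 22050
audio = np.random.randn(sr).astype(np.float32)           # stand-in 1 s clip
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=40)   # (40, frames)
x = torch.tensor(mfcc).unsqueeze(0).unsqueeze(0)         # (1, 1, 40, frames)

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 4),                                    # 4 payload/noise classes
)
logits = model(x)
print(logits.shape)  # torch.Size([1, 4])
```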
{"title":"UAV Payload Detection Using Deep Learning and Data Augmentation","authors":"Ilmun Ku, Seungyeon Roh, Gyeong-hyeon Kim, Charles Taylor, Yaqin Wang, E. Matson","doi":"10.1109/IRC55401.2022.00009","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00009","url":null,"abstract":"In recent years, the technology behind Unmanned Aerial Vehicles (UAVs) has continually advanced. However, with these developments, malicious activities employing UAVs have also been on the rise. Within this study, Deep Learning (DL) algorithms are utilized to detect and classify UAVs transporting payload based on the sound they release. In order to exercise DL algorithms on a set of data, a sufficient amount of audio data is necessary to obtain a more reliable result. So UAV sound recordings have been collected alongside the use of data augmentation to secure a satisfactory sample size for testing purposes. Afterward, a feature-based classification was applied to the groups of audio identifying each UAV’s payload (or lack thereof). Lastly, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Convolutional Recurrent Neural Network(CRNN) are utilized in analyzing the final data-set. They are evaluated for their abilities to correctly categorize the unloaded, one payload, and two payload of UAV classes and noise class solely through audio. As a result, MFCC showed the best performance in CNN, RNN, and CRNN, which are 0.9493, 0.8133, and 0.9174 accuracies. Our contribution to this study is that a cost-efficient data collection method was applied by utilizing laptop microphones. Moreover, DL technology was used in UAV payload detection, whereas neural network was used in prior study. Also, the best feature for UAV payload detection with the three DL technologies was found. The limitation of the paper is that only two UAV models and one kind of payload were used to collect data. Diverse UAVs and payload are expected to be used to collect data in future works.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128452700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach to apply Automated Acceptance Testing for Industrial Robotic Systems
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00066
M. G. D. Santos, Fábio Petrillo, Sylvain Hallé, Yann-Gaël Guéhéneuc
Industrial robotic systems (IRS) are systems composed of industrial robots that automate industrial processes. They execute repetitive tasks with high accuracy, replacing or supporting humans in dangerous jobs. Consequently, a low failure rate is crucial in IRS. However, to the best of our knowledge, there is a lack of automated software testing for industrial robots. In this paper, we describe a test strategy implementation that applies Behavior-Driven Development (BDD) to automate acceptance testing for IRS.
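The paper's test suite is not published; a minimal sketch of BDD-style acceptance steps for a robot task, using the behave library with a hypothetical stubbed robot, might look like:

```python
# Minimal sketch of a BDD acceptance test for an industrial robot, using the
# behave library. The step names and the stubbed robot state are hypothetical.
#
# Matching feature file (features/pick.feature):
#   Feature: Pick and place
#     Scenario: Robot moves a part to the target bin
#       Given the robot is homed
#       When it picks the part at station "A"
#       Then the part is in bin "B"
from behave import given, when, then

@given("the robot is homed")
def step_homed(context):
    context.robot = {"pose": "home", "holding": None}  # stub robot state

@when('it picks the part at station "{station}"')
def step_pick(context, station):
    context.robot["holding"] = f"part@{station}"       # stub pick action

@then('the part is in bin "{bin_id}"')
def step_check(context, bin_id):
    assert context.robot["holding"] is not None        # stub acceptance check
```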
{"title":"An approach to apply Automated Acceptance Testing for Industrial Robotic Systems","authors":"M. G. D. Santos, Fábio Petrillo, Sylvain Hallé, Yann-Gaël Guéhéneuc","doi":"10.1109/IRC55401.2022.00066","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00066","url":null,"abstract":"Industrial robotic systems (IRS) are systems composed of industrial robots that automate industrial processes. They execute repetitive tasks with high accuracy, replacing or supporting dangerous jobs. Consequently, a low failure rate is crucial in IRS. However, to the best of our knowledge, there is a lack of automated software testing for industrial robots. In this paper, we describe a test strategy implementation to apply BDD to automate acceptance testing for IRS.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"5 1-2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129779235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remarks on Direct Controller using a Commutative Quaternion Neural Network
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00071
Kazuhiko Takahashi, Sung Tae Hwang, Kuya Hayashi, Masafumi Yoshida, M. Hashimoto
In this study, we investigated the capability of a high-dimensional neural network (NN) using commutative quaternion numbers in control system applications. A multilayer commutative quaternion NN was employed to develop a servo-level controller, where the network input comprised the reference output and tapped-delay inputs/outputs of the object plant, and the network output was used directly as the control input. The commutative quaternion NN in the controller was trained in an offline manner using the stochastic gradient descent method to obtain the inverse transfer function of the plant. The effectiveness of the proposed controller was evaluated in computational experiments to control a discrete-time nonlinear plant. The simulation results demonstrate the feasibility of the commutative quaternion NN for this task and the characteristics of the proposed controller.
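The paper's quaternion layers are not reproduced here; a minimal sketch of the direct-controller training idea, with a real-valued network standing in for the commutative quaternion NN and an arbitrary example plant, could be:

```python
import torch, torch.nn as nn

# Minimal sketch of the direct-controller idea: a network maps the reference
# output plus delayed plant outputs to the control input, trained offline by
# stochastic gradient descent to realize the plant's inverse. A real-valued
# NN stands in for the commutative quaternion NN, and the plant below is an
# arbitrary nonlinear example, not the paper's.
def plant(u, y_prev):
    return 0.6 * torch.sin(y_prev) + u          # hypothetical nonlinear plant

net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)

for step in range(3000):
    y_prev = torch.randn(64, 1)
    u_true = torch.randn(64, 1)                 # training control inputs
    y_next = plant(u_true, y_prev)              # resulting plant outputs
    # Inverse model: predict the u that produced y_next from (y_next, y_prev).
    u_hat = net(torch.cat([y_next, y_prev], dim=1))
    loss = ((u_hat - u_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))  # small residual => the net approximates the inverse
```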
{"title":"Remarks on Direct Controller using a Commutative Quaternion Neural Network","authors":"Kazuhiko Takahashi, Sung Tae Hwang, Kuya Hayashi, Masafumi Yoshida, M. Hashimoto","doi":"10.1109/IRC55401.2022.00071","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00071","url":null,"abstract":"In this study, we investigated the capability of a high-dimensional neural network (NN) using commutative quaternion numbers in control system applications. A multilayer commutative quaternion NN was employed to develop a servo-level controller, where the network input comprised the reference output and tapped-delay inputs/outputs of the object plant, and the network output was used directly as the control input. The commutative quaternion NN in the controller was trained in an offline manner using the stochastic gradient descent method to obtain the inverse transfer function of the plant. The effectiveness of the proposed controller was evaluated in computational experiments to control a discrete-time nonlinear plant. The simulation results demonstrate the feasibility of the commutative quaternion NN for this task and the characteristics of the proposed controller.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127939813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PrimitivePose: 3D Bounding Box Prediction of Unseen Objects via Synthetic Geometric Primitives
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00040
A. Kriegler, Csaba Beleznai, Markus Murschitz, Kai Göbel, M. Gelautz
This paper studies the challenging problem of 3D pose and size estimation for multi-object scene configurations from stereo views. Most existing methods rely on CAD models and are therefore limited to a predefined set of known object categories. This closed-set constraint limits the range of applications for robots interacting in dynamic environments where previously unseen objects may appear. To address this problem, we propose an oriented 3D bounding box detection method that does not require 3D models or semantic information of the objects and is learned entirely from the category-specific domain, relying on purely geometric cues. These geometric cues are objectness and compactness, as represented in the synthetic domain by generating a diverse set of stereo image pairs featuring pose-annotated geometric primitives. We then use stereo matching and derive three representations for 3D image content: disparity maps, surface normal images, and a novel representation of disparity-scaled surface normal images. The proposed model, PrimitivePose, is trained as a single-stage multi-task neural network using any one of those representations as input and 3D oriented bounding boxes, object centroids and object sizes as output. We evaluate PrimitivePose for 3D bounding box prediction on difficult unseen objects in a tabletop environment and compare it to the popular PoseCNN model. A video showcasing our results can be found at: https://preview.tinyurl.com/2pccumvt.
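As a hedged illustration of how a disparity-scaled surface-normal representation might be derived (the camera constants and the exact scaling are assumptions, not the paper's formulation):

```python
import numpy as np

# Minimal sketch of the disparity-scaled surface-normal idea: recover depth
# from disparity, estimate normals from depth gradients, then modulate them
# by disparity. All constants are hypothetical.
F, BASELINE = 700.0, 0.12                  # focal length (px), baseline (m)

disparity = np.random.uniform(5.0, 60.0, (120, 160)).astype(np.float32)
depth = F * BASELINE / disparity           # z = f * B / d

dzdx = np.gradient(depth, axis=1)
dzdy = np.gradient(depth, axis=0)
normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# "Disparity-scaled" variant (assumed form): weight each normal by the
# normalized disparity so that near, high-disparity structure is emphasized.
scaled = normals * (disparity / disparity.max())[..., None]
print(disparity.shape, normals.shape, scaled.shape)
```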
{"title":"PrimitivePose: 3D Bounding Box Prediction of Unseen Objects via Synthetic Geometric Primitives","authors":"A. Kriegler, Csaba Beleznai, Markus Murschitz, Kai Göbel, M. Gelautz","doi":"10.1109/IRC55401.2022.00040","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00040","url":null,"abstract":"This paper studies the challenging problem of 3D pose and size estimation for multi-object scene configurations from stereo views. Most existing methods rely on CAD models and are therefore limited to a predefined set of known object categories. This closed-set constraint limits the range of applications for robots interacting in dynamic environments where previously unseen objects may appear. To address this problem we propose an oriented 3D bounding box detection method that does not require 3D models or semantic information of the objects and is learned entirely from the category-specific domain, relying on purely geometric cues. These geometric cues are objectness and compactness, as represented in the synthetic domain by generating a diverse set of stereo image pairs featuring pose annotated geometric primitives. We then use stereo matching and derive three representations for 3D image content: disparity maps, surface normal images and a novel representation of disparity-scaled surface normal images. The proposed model, PrimitivePose, is trained as a single-stage multi-task neural network using any one of those representations as input and 3D oriented bounding boxes, object centroids and object sizes as output. We evaluate PrimitivePose for 3D bounding box prediction on difficult unseen objects in a tabletop environment and compare it to the popular PoseCNN model-a video showcasing our results can be found at: https://preview.tinyurl.com/2pccumvt.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121004177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experimental Assessment of Feature-based Lidar Odometry and Mapping
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00019
A. Khan, E. Fontana, Dario Lodi Rizzini, S. Caselli
This paper experimentally evaluates the performance of Lidar Odometry and Mapping (LOAM) algorithms based on two different features, namely edges and planar surfaces. This work substitutes LOAM's current feature extraction method with the novel SKIP-3D (SKeleton Interest Point 3D) method, which exploits the sparse point clouds obtained from a 3D Lidar to extract high-curvature points in the scan through single-point scoring. The prominent features of the proposed method are its handling of sparse, non-uniform 3D point clouds and its ability to produce repeatable key points. Careful exclusion of occluded regions, together with the reduced point cloud obtained after discarding non-significant points, enables faster processing. The original F-LOAM feature extractor and SKIP-3D were tested and compared on several benchmark datasets.
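SKIP-3D's scoring is not published in detail here; a minimal sketch of single-point curvature scoring in the spirit of LOAM's smoothness metric, applied to a synthetic scan line, might be:

```python
import numpy as np

# Minimal sketch of single-point curvature scoring as in LOAM: each point in
# a scan line is scored by how far it sits from its neighbors' centroid, so
# high scores indicate edges and low scores planar surfaces. SKIP-3D's actual
# scoring and skeleton-based selection are more involved.
def curvature_scores(scan, k=5):
    """scan: (N, 3) points ordered along one scan line."""
    n = len(scan)
    scores = np.zeros(n)
    for i in range(k, n - k):
        diffs = scan[i - k:i + k + 1] - scan[i]   # neighbors minus the point
        s = diffs.sum(axis=0)                     # cancels out on flat areas
        scores[i] = np.linalg.norm(s) / (2 * k * np.linalg.norm(scan[i]) + 1e-9)
    return scores

line = np.stack([np.linspace(0, 5, 200), np.zeros(200), np.ones(200)], axis=1)
line[100:, 2] += 0.5                              # a step edge near index 100
print(int(np.argmax(curvature_scores(line))))     # ~100, the edge point
```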
{"title":"Experimental Assessment of Feature-based Lidar Odometry and Mapping","authors":"A. Khan, E. Fontana, Dario Lodi Rizzini, S. Caselli","doi":"10.1109/IRC55401.2022.00019","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00019","url":null,"abstract":"This paper experimentally evaluates the performance of Lidar Odometry and Mapping (LOAM) algorithms based on two different features namely edges and planar surfaces. This work substitutes the LOAM current feature extraction method with novel SKIP-3D (SKeleton Interest Point 3D) which exploits the sparse Lidar point clouds obtained from 3D Lidar to extract high curvature points in the scan through single point scoring. The prominent features of the proposed method are the detection of sparse, non-uniform 3D point clouds and the ability to produce repeatable key points. Carefully excluding the occluded regions and reduced point cloud after discarding non-significant points enables faster processing. The original F-LOAM feature extractor and SKIP-3D were tested and compared in several benchmark datasets.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115365564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal Data Collection System for UAV-based Precision Agriculture Applications
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00007
Emmanuel K. Raptis, Georgios D. Karatzinis, Marios Krestenitis, Athanasios Ch. Kapoutsis, Kostantinos Z. Ioannidis, S. Vrochidis, I. Kompatsiaris, E. Kosmatopoulos
Unmanned Aerial Vehicles (UAVs) are an emerging technology with the potential for gradual adoption across various sectors, supporting a wide range of applications. In agricultural tasks, UAV-based solutions are supplanting labor- and time-intensive traditional crop management practices. In this direction, this work proposes an automated framework for efficient data collection in crops employing autonomous path-planning operational modes. The first mode assures an optimal and collision-free route for scanning the area under examination. The data collected from this overhead perspective are used for orthomosaic creation; subsequently, vegetation indices are extracted to assess the health of the crops. The second operational mode serves as an inspection extension for collecting further enriched on-site information, performing fixed-radius circles around the central points of interest. A real-world weed detection application verifies the information acquired using both operational modes. The weed detection performance was evaluated using a well-known Convolutional Neural Network (CNN), the Feature Pyramid Network (FPN), providing sufficient results in terms of Intersection over Union (IoU).
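As an illustration of the vegetation-index step (the NDVI formula is standard; the bands and threshold here are synthetic assumptions):

```python
import numpy as np

# Minimal sketch of the vegetation-index step: compute NDVI from the red and
# near-infrared bands of an orthomosaic and threshold it into a crop-health
# mask. The bands are synthetic; the paper may use additional indices.
red = np.random.uniform(0.05, 0.3, (100, 100)).astype(np.float32)
nir = np.random.uniform(0.3, 0.8, (100, 100)).astype(np.float32)

ndvi = (nir - red) / (nir + red + 1e-9)    # NDVI in [-1, 1]
healthy = ndvi > 0.6                       # hypothetical health threshold

print(float(ndvi.mean()), int(healthy.sum()))
```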
{"title":"Multimodal Data Collection System for UAV-based Precision Agriculture Applications","authors":"Emmanuel K. Raptis, Georgios D. Karatzinis, Marios Krestenitis, Athanasios Ch. Kapoutsis, Kostantinos Z. Ioannidis, S. Vrochidis, I. Kompatsiaris, E. Kosmatopoulos","doi":"10.1109/IRC55401.2022.00007","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00007","url":null,"abstract":"Unmanned Aerial Vehicles (UAVs) consist of emerging technologies that have the potential to be used gradually in various sectors providing a wide range of applications. In agricultural tasks, the UAV-based solutions are supplanting the labor and time-intensive traditional crop management practices. In this direction, this work proposes an automated framework for efficient data collection in crops employing autonomous path planning operational modes. The first method assures an optimal and collision-free path route for scanning the under examination area. The collected data from the oversight perspective are used for orthomocaic creation and subsequently, vegetation indices are extracted to assess the health levels of crops. The second operational mode is considered as an inspection extension for further on-site enriched information collection, performing fixed radius cycles around the central points of interest. A real-world weed detection application is performed verifying the acquired information using both operational modes. The weed detection performance has been evaluated utilizing a well-known Convolutional Neural Network (CNN), named Feature Pyramid Network (FPN), providing sufficient results in terms of Intersection over Union (IoU).","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117189934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CNN-based Feature Extraction for Robotic Laser Scanning of Weld Grooves in Tubular T-joints
Pub Date: 2022-12-01 DOI: 10.1109/IRC55401.2022.00063
Øyvind W. Mjølhus, Andrej Cibicik, E. B. Njaastad, O. Egeland
This paper presents an algorithm for feature point extraction from scanning data of large tubular T-joints (a subtype of TKY joints). Extracting such feature points is a vital step for robot path generation in robotic welding; therefore, fast and reliable feature point extraction is necessary for developing adaptive robotic welding solutions. The algorithm is based on a Convolutional Neural Network (CNN) that detects feature points in a scanned weld groove, where the scans are obtained using a laser profile scanner. To facilitate fast and efficient training, we propose a methodology for generating synthetic training data in the computer graphics software Blender using realistic physical properties of objects. Further, an iterative feature point correction procedure is implemented to refine the initial feature point results. The algorithm's performance was validated using a real-world dataset acquired from a large tubular T-joint.
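The paper's CNN architecture is not reproduced here; a minimal sketch of the regression idea, a 1D CNN mapping one groove profile to a fixed number of feature-point positions, with assumed layer sizes, could be:

```python
import torch, torch.nn as nn

# Minimal sketch of the detection idea: a 1D CNN takes one laser-scanner
# profile (height vs. position across the weld groove) and regresses the
# positions of a fixed number of feature points. The layer sizes and the
# choice of 4 feature points are assumptions, not the paper's architecture.
N_POINTS, PROFILE_LEN = 4, 512

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, N_POINTS),        # normalized positions along the profile
)

profile = torch.randn(1, 1, PROFILE_LEN)   # one synthetic groove profile
points = torch.sigmoid(model(profile))     # in [0, 1], fraction of profile
print(points)                              # e.g. groove edges and root
```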
{"title":"CNN-based Feature Extraction for Robotic Laser Scanning of Weld Grooves in Tubular T-joints","authors":"Øyvind W. Mjølhus, Andrej Cibicik, E. B. Njaastad, O. Egeland","doi":"10.1109/IRC55401.2022.00063","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00063","url":null,"abstract":"This paper presents an algorithm for feature point extraction from scanning data of large tubular T-joints (a subtype of a TKY joint). Extracting such feature points is a vital step for robot path generation in robotic welding. Therefore, fast and reliable feature point extraction is necessary for developing adaptive robotic welding solutions. The algorithm is based on a Convolutional Neural Network (CNN) for detecting feature points in a scanned weld groove, where the scans are done using a laser profile scanner. To facilitate fast and efficient training, we propose a methodology for generating synthetic training data in the computer graphics software Blender using realistic physical properties of objects. Further, an iterative feature point correction procedure is implemented to improve initial feature point results. The algorithm’s performance was validated using a real-world dataset acquired from a large tubular T-joint.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"32 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116406966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}