
Latest publications from the 2022 Sixth IEEE International Conference on Robotic Computing (IRC)

Design and Implementation of Telemarketing Robot with Emotion Identification for Human-Robot Interaction
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00037
Diego Arce, Jose Balbuena, Daniel Menacho, Luis Caballero, Enzo Cisneros, Dario Huanca, M. Alvites, C. Beltran-Royo, F. Cuéllar
This work presents the design, development and preliminary tests of an innovative mobile robot that interacts with humans for marketing, advertising and customer services. The proposed robot can be used for various activities related to human-robot interaction (conferences, plant visits, marketing, advertising, supervision). The robot has multiple sensors to evaluate the personal space of customers. It also includes touch-screen displays to achieve an empathic interaction and provide personalized attention. An emotion classification algorithm was implemented to analyze how the customer reacts to the advertisements that appear on the screens and to modify the robot's response accordingly. The robot's functionalities and interaction capabilities were validated using a prototype. The results show a good assessment regarding reliability, usability and performance, and a positive emotional response was measured from the participants.
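The abstract does not disclose the emotion classification model itself. As a minimal illustrative sketch only, the following small convolutional network maps grayscale face crops to one of seven basic emotion labels; the architecture, the 48x48 input size and the label set are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a small CNN that classifies 48x48 grayscale face crops into
# seven basic emotions, so a robot could adapt the displayed advertisement.
# This is NOT the paper's model; architecture and labels are assumed.
import torch
import torch.nn as nn

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
        )
        self.classifier = nn.Linear(64 * 6 * 6, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = EmotionCNN()
    face = torch.rand(1, 1, 48, 48)            # placeholder for a detected face crop
    probs = torch.softmax(model(face), dim=1)  # per-emotion probabilities
    print(EMOTIONS[int(probs.argmax())])
```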
Citations: 0
Beacon-based Indoor Fire Evacuation System using Augmented Reality and Machine Learning
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00023
Hwa-Cho Lee, Dohyun Chung, S. Kim, Jiwon Lim, Yoonha Bahng, Suhyun Park, Anthony H. Smith
Fire evacuation remains inefficient because present evacuation methods are unsuitable for complex buildings. To improve the evacuation system, this paper focuses on three main components. First, a Kalman filter and deep learning models were utilized to estimate the user's location accurately. Second, a Q-learning-based evacuation algorithm was designed to deal with various fire situations. Lastly, AR and a 2D map offer an effective navigation system. The proposed system offers the safest path based on accurate location, with a user-friendly visual supplement.
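To illustrate the Q-learning idea mentioned in the abstract, here is a minimal tabular sketch on a toy grid where cells on fire carry a large penalty and reaching the exit is rewarded. The grid layout, rewards and hyperparameters are illustrative assumptions, not the paper's evacuation algorithm.

```python
# Minimal tabular Q-learning sketch for grid evacuation: fire cells are heavily
# penalised, reaching the exit is rewarded. All values here are assumptions.
import random

ROWS, COLS = 4, 4
FIRE, EXIT = {(1, 1), (2, 3)}, (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(4)}
alpha, gamma, eps = 0.1, 0.95, 0.2

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (max(0, min(ROWS - 1, r + dr)), max(0, min(COLS - 1, c + dc)))
    if nxt == EXIT:
        return nxt, 100.0, True
    if nxt in FIRE:
        return nxt, -100.0, True
    return nxt, -1.0, False                            # small step cost

for _ in range(5000):                                  # training episodes
    s, done = (0, 0), False
    while not done:
        a = random.randrange(4) if random.random() < eps else max(range(4), key=lambda x: Q[(s, x)])
        nxt, reward, done = step(s, a)
        best_next = max(Q[(nxt, x)] for x in range(4))
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = nxt

# Following the greedy action from the start cell gives the learned evacuation route.
```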
Citations: 0
Sensor-guided motions for robot-based component testing
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00021
Julian Hanke, Christian Eymüller, Alexander Poeppel, Julia Reichmann, A. Trauth, M. Sause, W. Reif
This paper presents the use of sensor-guided motions for robot-based component testing to compensate for the robot's path deviations under load. We implemented two different sensor-guided motions: one based on a 3D camera system to minimize the absolute deviation, and one based on a force/torque sensor mounted directly on the robot's end effector to minimize the transverse forces and torques that occur. We evaluated these two sensor-guided motions in our testing facility with a classical tensile test and a heavy-duty industrial robot. The obtained results show that the transverse forces as well as the absolute deviation were significantly reduced.
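The general idea of the force/torque-guided motion can be sketched as a simple proportional correction loop: read the transverse forces at the wrist and command small Cartesian offsets that drive them toward zero. The gains, thresholds and the `robot`/`sensor` interfaces (`read_wrench`, `move_relative`, `test_running`) below are hypothetical placeholders, not the authors' controller.

```python
# Hedged sketch of a force-guided correction loop: transverse forces measured at
# the wrist are reduced with small proportional Cartesian offsets. The robot and
# sensor interfaces are hypothetical placeholders.
import time

K_P = 0.00002      # m per N, proportional gain (assumed)
MAX_STEP = 0.0005  # m, clamp on a single correction step (assumed)
DEADBAND = 2.0     # N, ignore readings below sensor noise level (assumed)

def clamp(v, limit):
    return max(-limit, min(limit, v))

def force_guided_correction(robot, sensor, period=0.01):
    """Correct in x/y (transverse to the tensile load applied along z)."""
    while robot.test_running():
        fx, fy, fz, tx, ty, tz = sensor.read_wrench()   # forces in N, torques in Nm
        dx = clamp(-K_P * fx, MAX_STEP) if abs(fx) > DEADBAND else 0.0
        dy = clamp(-K_P * fy, MAX_STEP) if abs(fy) > DEADBAND else 0.0
        if dx or dy:
            robot.move_relative(dx, dy, 0.0)            # small Cartesian offset
        time.sleep(period)
```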
Citations: 1
UAV Payload Detection Using Deep Learning and Data Augmentation
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00009
Ilmun Ku, Seungyeon Roh, Gyeong-hyeon Kim, Charles Taylor, Yaqin Wang, E. Matson
In recent years, the technology behind Unmanned Aerial Vehicles (UAVs) has continually advanced. However, with these developments, malicious activities employing UAVs have also been on the rise. In this study, Deep Learning (DL) algorithms are utilized to detect and classify UAVs transporting payloads based on the sound they emit. To exercise DL algorithms on a dataset, a sufficient amount of audio data is necessary to obtain a reliable result, so UAV sound recordings were collected and data augmentation was used to secure a satisfactory sample size for testing purposes. Afterward, feature-based classification was applied to the groups of audio, identifying each UAV's payload (or lack thereof). Lastly, a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), and a Convolutional Recurrent Neural Network (CRNN) are utilized in analyzing the final dataset. They are evaluated for their ability to correctly categorize the unloaded, one-payload, and two-payload UAV classes and a noise class solely through audio. As a result, MFCC showed the best performance with CNN, RNN, and CRNN, with accuracies of 0.9493, 0.8133, and 0.9174, respectively. Our contribution is that a cost-efficient data collection method was applied by utilizing laptop microphones. Moreover, DL technology was used for UAV payload detection, whereas a neural network was used in a prior study. Also, the best feature for UAV payload detection with the three DL technologies was identified. The limitation of the paper is that only two UAV models and one kind of payload were used to collect data. Diverse UAVs and payloads are expected to be used to collect data in future work.
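As a rough illustration of the MFCC front-end named in the abstract, the sketch below extracts an MFCC matrix from an audio clip with librosa and feeds it to a small assumed CNN over the four classes the abstract describes. The clip length, `n_mfcc`, and the network architecture are assumptions, not the authors' configuration.

```python
# Sketch of an MFCC front-end plus a small CNN classifier over four classes
# (no payload, one payload, two payloads, noise). Parameters are assumptions.
import librosa
import torch
import torch.nn as nn

CLASSES = ["no_payload", "one_payload", "two_payloads", "noise"]

def mfcc_features(wav_path: str, sr: int = 22050, n_mfcc: int = 40) -> torch.Tensor:
    y, sr = librosa.load(wav_path, sr=sr, duration=2.0)      # 2 s clip (assumed)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    mfcc = (mfcc - mfcc.mean()) / (mfcc.std() + 1e-8)        # simple normalisation
    return torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0)  # (1, n_mfcc, T)

class AudioCNN(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))
```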
Citations: 2
An approach to apply Automated Acceptance Testing for Industrial Robotic Systems
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00066
M. G. D. Santos, Fábio Petrillo, Sylvain Hallé, Yann-Gaël Guéhéneuc
Industrial robotic systems (IRS) are systems composed of industrial robots that automate industrial processes. They execute repetitive tasks with high accuracy, replacing or supporting humans in dangerous jobs. Consequently, a low failure rate is crucial in IRS. However, to the best of our knowledge, there is a lack of automated software testing for industrial robots. In this paper, we describe a test strategy implementation that applies behavior-driven development (BDD) to automate acceptance testing for IRS.
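To make the BDD idea concrete, here is an invented example of what an acceptance test for a robotic cell could look like with the Python `behave` library: a Gherkin scenario plus matching step definitions. The scenario text, the `context.robot`/`context.cell` interfaces and the timing threshold are illustrative placeholders, not the test strategy described in the paper.

```python
# Illustrative behave step definitions for a BDD-style acceptance test of an
# industrial robot task. Scenario text and the context interfaces are invented.
#
# features/pick_and_place.feature (Gherkin, assumed):
#   Scenario: Robot places the part in the target fixture
#     Given the robot is homed and the part is at the pickup station
#     When the pick-and-place program is executed
#     Then the part is detected in the target fixture within 30 seconds
from behave import given, when, then

@given("the robot is homed and the part is at the pickup station")
def step_setup(context):
    context.robot.home()
    assert context.cell.part_present("pickup_station")

@when("the pick-and-place program is executed")
def step_run(context):
    context.result = context.robot.run_program("pick_and_place")

@then("the part is detected in the target fixture within 30 seconds")
def step_check(context):
    assert context.result.success
    assert context.cell.wait_for_part("target_fixture", timeout=30.0)
```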
Citations: 0
Remarks on Direct Controller using a Commutative Quaternion Neural Network
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00071
Kazuhiko Takahashi, Sung Tae Hwang, Kuya Hayashi, Masafumi Yoshida, M. Hashimoto
In this study, we investigated the capability of a high-dimensional neural network (NN) using commutative quaternion numbers in control system applications. A multilayer commutative quaternion NN was employed to develop a servo-level controller, where the network input comprised the reference output and tapped-delay inputs/outputs of the object plant, and the network output was used directly as the control input. The commutative quaternion NN in the controller was trained in an offline manner using the stochastic gradient descent method to obtain the inverse transfer function of the plant. The effectiveness of the proposed controller was evaluated in computational experiments to control a discrete-time nonlinear plant. The simulation results demonstrate the feasibility of the commutative quaternion NN for this task and the characteristics of the proposed controller.
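The commutative quaternion algebra itself is beyond a short sketch, so the example below deliberately substitutes an ordinary real-valued MLP to illustrate only the direct inverse-controller training scheme the abstract describes: train a network offline with SGD to reproduce the plant input from the (reference) output and past data, then use the network output directly as the control input. The plant model, network size and hyperparameters are assumptions.

```python
# Sketch of offline direct inverse-controller training with SGD, using a plain
# real-valued MLP in place of the commutative quaternion network (a deliberate
# simplification). Plant model and hyperparameters are assumed.
import numpy as np
import torch
import torch.nn as nn

def plant(y, u):
    """Assumed discrete-time nonlinear plant: y_{k+1} = f(y_k, u_k)."""
    return 0.6 * np.sin(y) + 0.4 * np.tanh(u)

# Collect input/output data by exciting the plant with random inputs.
ys, us = [0.0], []
for _ in range(2000):
    u = np.random.uniform(-2, 2)
    us.append(u)
    ys.append(plant(ys[-1], u))

# Inverse model: predict u_k from (y_{k+1}, y_k). At run time the reference
# output takes the place of y_{k+1}, so the network output is used as u_k.
X = torch.tensor([[ys[k + 1], ys[k]] for k in range(len(us))], dtype=torch.float32)
U = torch.tensor(us, dtype=torch.float32).unsqueeze(1)

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for epoch in range(200):                       # offline training
    opt.zero_grad()
    loss = loss_fn(net(X), U)
    loss.backward()
    opt.step()

# Run time: feed the desired reference r_{k+1} and the measured y_k to get u_k.
u_k = net(torch.tensor([[0.5, 0.0]], dtype=torch.float32)).item()
```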
Citations: 0
PrimitivePose: 3D Bounding Box Prediction of Unseen Objects via Synthetic Geometric Primitives
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00040
A. Kriegler, Csaba Beleznai, Markus Murschitz, Kai Göbel, M. Gelautz
This paper studies the challenging problem of 3D pose and size estimation for multi-object scene configurations from stereo views. Most existing methods rely on CAD models and are therefore limited to a predefined set of known object categories. This closed-set constraint limits the range of applications for robots interacting in dynamic environments where previously unseen objects may appear. To address this problem we propose an oriented 3D bounding box detection method that does not require 3D models or semantic information of the objects and is learned entirely from the category-specific domain, relying on purely geometric cues. These geometric cues are objectness and compactness, as represented in the synthetic domain by generating a diverse set of stereo image pairs featuring pose annotated geometric primitives. We then use stereo matching and derive three representations for 3D image content: disparity maps, surface normal images and a novel representation of disparity-scaled surface normal images. The proposed model, PrimitivePose, is trained as a single-stage multi-task neural network using any one of those representations as input and 3D oriented bounding boxes, object centroids and object sizes as output. We evaluate PrimitivePose for 3D bounding box prediction on difficult unseen objects in a tabletop environment and compare it to the popular PoseCNN model. A video showcasing our results can be found at: https://preview.tinyurl.com/2pccumvt.
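One of the input representations mentioned above is a surface-normal image derived from stereo depth. A common, generic way to approximate such normals is to back-project the depth map with a pinhole model and take the cross product of local image-space gradients; the sketch below shows that generic technique, not the authors' exact pipeline, and the camera intrinsics are assumed inputs.

```python
# Generic sketch: compute a surface-normal image from a metric depth map by
# back-projecting pixels (pinhole model) and crossing local gradients.
# Not the authors' pipeline; intrinsics fx, fy, cx, cy are assumed inputs.
import numpy as np

def normals_from_depth(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Return an (H, W, 3) array of unit surface normals for a depth map."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project every pixel to a 3D point.
    X = (us - cx) * depth / fx
    Y = (vs - cy) * depth / fy
    pts = np.dstack([X, Y, depth])
    # Tangent vectors from image-space gradients; normal is their cross product.
    du = np.gradient(pts, axis=1)
    dv = np.gradient(pts, axis=0)
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-8
    return n
```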
Citations: 0
Experimental Assessment of Feature-based Lidar Odometry and Mapping
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00019
A. Khan, E. Fontana, Dario Lodi Rizzini, S. Caselli
This paper experimentally evaluates the performance of Lidar Odometry and Mapping (LOAM) algorithms based on two different features, namely edges and planar surfaces. This work substitutes the current LOAM feature extraction method with the novel SKIP-3D (SKeleton Interest Point 3D), which exploits the sparse point clouds obtained from a 3D lidar to extract high-curvature points in the scan through single-point scoring. The prominent features of the proposed method are the detection of sparse, non-uniform 3D point clouds and the ability to produce repeatable key points. Carefully excluding occluded regions and reducing the point cloud by discarding non-significant points enables faster processing. The original F-LOAM feature extractor and SKIP-3D were tested and compared on several benchmark datasets.
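For context on the baseline being compared against, the classic LOAM-style feature selection scores each point of a scan ring by a local curvature ("smoothness") value: high scores suggest edge points, low scores planar points. The sketch below shows that standard scoring with assumed window size and thresholds; SKIP-3D itself uses a different single-point scoring scheme.

```python
# Classic LOAM-style curvature score per point along one lidar scan ring.
# Window size and thresholds are assumptions; SKIP-3D scores points differently.
import numpy as np

def loam_curvature(scan_line: np.ndarray, half_window: int = 5) -> np.ndarray:
    """scan_line: (N, 3) points of one lidar ring. Returns per-point curvature."""
    n = len(scan_line)
    c = np.zeros(n)
    for i in range(half_window, n - half_window):
        neighbours = np.concatenate(
            [scan_line[i - half_window:i], scan_line[i + 1:i + 1 + half_window]]
        )
        diff = neighbours.sum(axis=0) - 2 * half_window * scan_line[i]
        c[i] = np.linalg.norm(diff) / (2 * half_window * np.linalg.norm(scan_line[i]) + 1e-8)
    return c

def split_features(scan_line, edge_thresh=0.3, planar_thresh=0.05):
    """Return indices of edge candidates and planar candidates."""
    c = loam_curvature(scan_line)
    return np.where(c > edge_thresh)[0], np.where((c > 0) & (c < planar_thresh))[0]
```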
Citations: 1
Multimodal Data Collection System for UAV-based Precision Agriculture Applications
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00007
Emmanuel K. Raptis, Georgios D. Karatzinis, Marios Krestenitis, Athanasios Ch. Kapoutsis, Kostantinos Z. Ioannidis, S. Vrochidis, I. Kompatsiaris, E. Kosmatopoulos
Unmanned Aerial Vehicles (UAVs) are an emerging technology with the potential to be gradually adopted in various sectors, providing a wide range of applications. In agricultural tasks, UAV-based solutions are supplanting labor- and time-intensive traditional crop management practices. In this direction, this work proposes an automated framework for efficient data collection in crops employing autonomous path-planning operational modes. The first mode ensures an optimal and collision-free route for scanning the area under examination. The data collected from this overview perspective are used for orthomosaic creation, and subsequently vegetation indices are extracted to assess the health of the crops. The second operational mode is an inspection extension for further on-site collection of enriched information, performing fixed-radius cycles around the central points of interest. A real-world weed detection application is performed, verifying the acquired information using both operational modes. The weed detection performance was evaluated utilizing a well-known Convolutional Neural Network (CNN), the Feature Pyramid Network (FPN), providing sufficient results in terms of Intersection over Union (IoU).
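Two of the building blocks mentioned above lend themselves to a short sketch: a standard vegetation index computed from the survey imagery and fixed-radius inspection waypoints around a point of interest. The choice of NDVI as the index and the radius, waypoint count and altitude values are assumptions; the paper does not specify these details here.

```python
# Sketch of a standard vegetation index (NDVI) and fixed-radius inspection
# waypoints around a point of interest. Index choice and parameters are assumed.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), values in [-1, 1]; higher = healthier vegetation."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-8)

def circular_inspection_waypoints(center_xy, radius_m=5.0, n_points=12, altitude_m=10.0):
    """Waypoints on a fixed-radius circle around a point of interest (local ENU frame)."""
    cx, cy = center_xy
    angles = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    return [(cx + radius_m * np.cos(a), cy + radius_m * np.sin(a), altitude_m) for a in angles]
```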
Citations: 2
CNN-based Feature Extraction for Robotic Laser Scanning of Weld Grooves in Tubular T-joints
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00063
Øyvind W. Mjølhus, Andrej Cibicik, E. B. Njaastad, O. Egeland
This paper presents an algorithm for feature point extraction from scanning data of large tubular T-joints (a subtype of a TKY joint). Extracting such feature points is a vital step for robot path generation in robotic welding. Therefore, fast and reliable feature point extraction is necessary for developing adaptive robotic welding solutions. The algorithm is based on a Convolutional Neural Network (CNN) for detecting feature points in a scanned weld groove, where the scans are done using a laser profile scanner. To facilitate fast and efficient training, we propose a methodology for generating synthetic training data in the computer graphics software Blender using realistic physical properties of objects. Further, an iterative feature point correction procedure is implemented to improve initial feature point results. The algorithm’s performance was validated using a real-world dataset acquired from a large tubular T-joint.
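As a toy analogue of the overall idea (train a CNN keypoint detector on synthetic data), the sketch below generates synthetic 1D laser profiles of a V-groove and fits a small 1D CNN that regresses the two groove-corner positions. The groove geometry, noise model and network are all assumptions; the paper instead renders realistic scans in Blender and works with a different correction procedure.

```python
# Toy analogue: synthetic 1D laser profiles of a V-groove plus a small 1D CNN
# regressing the two corner positions. Geometry, noise and network are assumed.
import numpy as np
import torch
import torch.nn as nn

N = 256  # samples per laser profile

def synth_profile():
    """Flat surface with a V-groove between two random corner positions, plus noise."""
    left = np.random.randint(60, 110)
    right = np.random.randint(150, 200)
    mid = (left + right) // 2
    depth = np.random.uniform(5.0, 15.0)
    z = np.zeros(N)
    z[left:mid] = -depth * (np.arange(mid - left) / max(mid - left, 1))
    z[mid:right] = -depth * (1 - np.arange(right - mid) / max(right - mid, 1))
    z += np.random.normal(0, 0.2, N)
    return z.astype(np.float32), np.array([left / N, right / N], dtype=np.float32)

class GrooveNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.fc = nn.Linear(32 * (N // 16), 2)   # normalised corner positions

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# One optimisation step on a synthetic batch (full training loop abbreviated).
net, loss_fn = GrooveNet(), nn.MSELoss()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
batch = [synth_profile() for _ in range(32)]
x = torch.tensor(np.stack([b[0] for b in batch])).unsqueeze(1)   # (32, 1, N)
y = torch.tensor(np.stack([b[1] for b in batch]))
opt.zero_grad()
loss = loss_fn(net(x), y)
loss.backward()
opt.step()
```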
Citations: 0