
Latest publications from the 2020 IEEE International Conference on Robotics and Automation (ICRA)

PARC: A Plan and Activity Recognition Component for Assistive Robots
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196856
Jean Massardi, Mathieu Gravel, É. Beaudry
Mobile robot assistants have many applications, such as helping people in their activities of daily living. These robots have to detect and recognize the actions and goals of the humans they are assisting. While there are several widespread plan and activity recognition solutions for controlled environments with many built-in sensors, such as smart homes, there is a lack of such systems for mobile robots operating in open settings, such as an apartment. We propose a module that lets mobile robots recognize complex activities and goals of daily living in real time. Our approach recognizes human-object interactions using an RGB-D camera to infer low-level actions, which are sent to a goal recognition algorithm. Results show that our approach both runs in real time and requires few computational resources, which facilitates its deployment on a mobile, low-cost robotics platform.
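The pipeline described above (low-level actions inferred from an RGB-D camera feeding a goal recognizer) can be sketched as a naive Bayes update over candidate goals. The goals, actions, and likelihood values below are invented placeholders for illustration, not the paper's actual model:

```python
# Hypothetical action-likelihood model P(action | goal); the real PARC
# component's probabilities would come from learned activity models.
ACTION_LIKELIHOOD = {
    "make_tea":    {"grasp_kettle": 0.5, "open_tap": 0.3, "grasp_cup": 0.2},
    "wash_dishes": {"open_tap": 0.6, "grasp_sponge": 0.3, "grasp_cup": 0.1},
}

def infer_goal(observed_actions, prior=None):
    """Posterior P(goal | observed actions) via a naive Bayes update."""
    goals = list(ACTION_LIKELIHOOD)
    post = {g: (prior or {}).get(g, 1.0 / len(goals)) for g in goals}
    for a in observed_actions:
        for g in goals:
            # Smooth actions unseen under a goal so the posterior never zeroes out.
            post[g] *= ACTION_LIKELIHOOD[g].get(a, 1e-6)
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

# Low-level actions streamed in by the vision front end:
posterior = infer_goal(["grasp_kettle", "open_tap"])
best = max(posterior, key=posterior.get)
```

Each new low-level action sharpens the posterior, which is what allows recognition to run incrementally in real time.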
Pages: 3025-3031
Citations: 6
Simultaneous Estimations of Joint Angle and Torque in Interactions with Environments using EMG
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197441
Dongwon Kim, Kyung Koh, Giovanni Oppizzi, Raziyeh Baghi, Li-Chuan Lo, Chunyang Zhang, Li-Qun Zhang
We develop a decoding technique that estimates, in real time, both the position and torque of a limb joint interacting with an environment, based on the activities of the agonist-antagonist muscle pair measured by electromyography (EMG). A long short-term memory (LSTM) network is employed as the core processor of the proposed technique, since it is capable of learning long-span time series with varying time lags. A validation conducted on the wrist joint shows that, during interactions with an environment, the decoding approach achieves agreement between the actual and estimated variables of greater than 95% for kinetics (i.e., torque) and greater than 85% for kinematics (i.e., angle). We also demonstrate that the proposed decoding method inherits the strengths of the LSTM network in learning EMG signals and their time-dependent responses.
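A minimal NumPy sketch of the core processor: one LSTM forward pass over a window of two-channel EMG samples, with a linear readout for joint angle and torque. The weights are random stand-ins and all sizes are illustrative assumptions, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single-layer LSTM with a linear readout (random, untrained weights)."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.Wo = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.n_hidden = n_hidden

    def forward(self, seq):
        H = self.n_hidden
        h, c = np.zeros(H), np.zeros(H)
        for x in seq:                      # one EMG sample per time step
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
            g = np.tanh(z[3 * H:])
            c = f * c + i * g              # cell state carries long-range memory
            h = o * np.tanh(c)
        return self.Wo @ h                 # [angle, torque] readout

model = TinyLSTM(n_in=2, n_hidden=16, n_out=2)
emg_window = rng.random((200, 2))          # 200 samples, agonist + antagonist
angle, torque = model.forward(emg_window)
```

The gated cell state is what lets the network retain EMG history over varying time lags, which the abstract identifies as the reason for choosing an LSTM.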
Pages: 3818-3824
Citations: 6
Magnetic miniature swimmers with multiple rigid flagella
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196531
Johan E. Quispe, S. Régnier
In this paper, we introduce novel miniature swimmers with multiple rigid tails based on spherical helices. The tail distribution of these prototypes enhances their swimming performance and allows them to carry objects. The proposed swimmers are actuated by a rotating magnetic field, which rotates the robot and thus produces considerable thrust for self-propulsion. The 6-mm prototypes achieved propulsion speeds of up to 6 mm/s at 3.5 Hz. We study the efficiency of different tail distributions for a 2-tailed swimmer by varying the angular position between the two tails. Moreover, we show that these swimmers are highly sensitive to changes in tail height. They also prove effective for cargo-carrying tasks, as they can displace objects up to 3.5 times their own weight. Finally, the wall effect is studied with multi-tailed swimmers in two containers, 20 and 50 mm in width. Results show speed increases of up to 59% when the swimmers are actuated in the smaller container.
Pages: 9237-9243
Citations: 4
Eciton robotica: Design and Algorithms for an Adaptive Self-Assembling Soft Robot Collective
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196565
Melinda J. D. Malley, Bahar Haghighat, Lucie Houel, R. Nagpal
Social insects successfully create bridges, rafts, nests and other structures out of their own bodies, with no centralized control system, simply by following local rules. For example, while traversing rough terrain, army ants (genus Eciton) build bridges that grow and dissolve in response to local traffic. Because these self-assembled structures incorporate smart, flexible materials (i.e., ant bodies) and emerge from local behavior, the bridges are adaptive and dynamic. With the goal of realizing robotic collectives with similar features, we designed a hardware system, Eciton robotica, consisting of flexible robots that can climb over each other to assemble compliant structures and communicate locally using vibration. In simulation, we demonstrate self-assembly of structures: using only local rules and information, robots build and dissolve bridges in response to local traffic and varying terrain. Unlike previous self-assembling robotic systems that focused on lattice-based structures and predetermined shapes, our system takes a new approach in which soft robots attach to create amorphous structures whose final self-assembled shape can adapt to the needs of the group.
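The build-and-dissolve behavior can be illustrated with a toy per-robot threshold rule: join the structure when locally observed traffic is high, leave when it drops. The thresholds and traffic trace are made-up illustrations, not the paper's algorithm:

```python
# Toy local rule for one robot (illustrative thresholds, not the paper's):
JOIN_THRESHOLD = 3    # crossings seen per window before a robot locks in
LEAVE_THRESHOLD = 1   # at or below this, a locked-in robot dissolves out

def update_role(is_bridge_part, recent_crossings):
    """One local decision step for a single robot; no global state needed."""
    if not is_bridge_part and recent_crossings >= JOIN_THRESHOLD:
        return True                    # become part of the structure
    if is_bridge_part and recent_crossings <= LEAVE_THRESHOLD:
        return False                   # dissolve back into the collective
    return is_bridge_part

# Traffic ramps up, then dies off: the "bridge" forms, persists, dissolves.
traffic = [0, 2, 4, 5, 3, 2, 1, 0]
state, history = False, []
for crossings in traffic:
    state = update_role(state, crossings)
    history.append(state)
```

Because every robot runs the same rule on purely local observations, the global structure tracks demand without any centralized controller, mirroring the army-ant behavior described above.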
Pages: 4565-4571
Citations: 16
3D Orientation Estimation and Vanishing Point Extraction from Single Panoramas Using Convolutional Neural Network
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196966
Yongjie Shi, Xin Tong, Jingsi Wen, He Zhao, Xianghua Ying, H. Zha
3D orientation estimation is a key component of many important computer vision tasks such as autonomous navigation and 3D scene understanding. This paper presents a new CNN architecture to estimate the 3D orientation of an omnidirectional camera with respect to the world coordinate system from a single spherical panorama. To train the proposed architecture, we leverage a dataset of panoramas named VOP60K, taken from Google Street View and labeled with 3D orientation, including 50 thousand panoramas for training and 10 thousand for testing. Previous approaches usually estimate 3D orientation under pinhole cameras; due to a panorama's much larger field of view, they are not directly applicable. In this paper, we propose an edge extractor layer to utilize the low-level and geometric information of the panorama, and an attention module to fuse the features generated by previous layers. A regression loss on two column vectors of the rotation matrix and a classification loss on the positions of vanishing points are added to optimize our network simultaneously. The proposed algorithm is validated on our benchmark, and experimental results clearly demonstrate that it outperforms previous methods.
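Since the network regresses only two column vectors of the rotation matrix, the full orthonormal matrix must be recovered as a post-processing step. A sketch of that standard step (Gram-Schmidt orthonormalization plus a cross product), independent of the CNN itself:

```python
import numpy as np

def rotation_from_two_columns(r1, r2):
    """Recover a proper rotation matrix from two (noisy) column estimates."""
    r1 = r1 / np.linalg.norm(r1)
    r2 = r2 - (r1 @ r2) * r1        # remove the component along r1
    r2 = r2 / np.linalg.norm(r2)
    r3 = np.cross(r1, r2)           # third column: right-handed completion
    return np.stack([r1, r2, r3], axis=1)

# Noisy network output around an identity-like rotation (illustrative values):
R = rotation_from_two_columns(np.array([1.0, 0.05, -0.02]),
                              np.array([0.03, 1.0, 0.04]))
```

The result is orthonormal with determinant +1 by construction, so downstream code can treat it as a valid camera-to-world rotation even though the raw regression outputs are not exactly orthogonal.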
Pages: 596-602
Citations: 0
Agile 3D-Navigation of a Helical Magnetic Swimmer
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197323
J. Leclerc, Haoran Zhao, Daniel Bao, Aaron T. Becker, M. Ghosn, D. Shah
Rotating miniature magnetic swimmers are devices that could navigate within the bloodstream to access remote locations of the body and perform minimally invasive procedures. The rotational movement could be used, for example, to abrade a pulmonary embolus. Some regions, such as the heart, are challenging to navigate: cardiac and respiratory motion combined with fast, variable blood flow necessitates a highly agile swimmer. The swimmer should also minimize contact with the walls of the blood vessels and the cardiac structures to mitigate the risk of complications. This paper presents experimental tests of a millimeter-scale magnetic helical swimmer navigating in a blood-mimicking solution and describes its turning capabilities. The step-out frequency and the position error were measured for different turn radii. The paper also introduces rapid movements that increase the swimmer's agility and demonstrates them experimentally on a complex 3D trajectory.
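Below the step-out frequency mentioned above, a helical swimmer advances roughly one helix pitch per field rotation; above it, the swimmer desynchronizes from the field and slows sharply. A toy corkscrew model with assumed parameters (not the paper's measured values), using a simplified decay beyond step-out:

```python
PITCH_MM = 2.0          # helix pitch, mm advanced per revolution (assumed)
F_STEP_OUT_HZ = 8.0     # step-out frequency, Hz (assumed)

def forward_speed_mm_s(f_hz):
    """Forward speed vs. field rotation frequency for a corkscrew model."""
    if f_hz <= F_STEP_OUT_HZ:
        return PITCH_MM * f_hz                       # synchronous: v = pitch * f
    # Beyond step-out the mean rotation rate falls; this 1/f roll-off is a
    # deliberate simplification of the real asynchronous dynamics.
    return PITCH_MM * (F_STEP_OUT_HZ ** 2) / f_hz

# Sweep below, at, and above step-out:
speeds = [forward_speed_mm_s(f) for f in (2.0, 8.0, 16.0)]
```

The non-monotonic speed curve is why measuring the step-out frequency, as the paper does for different turn radii, is central to planning agile maneuvers.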
Pages: 7638-7644
Citations: 2
View-Invariant Loop Closure with Oriented Semantic Landmarks
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196886
J. Li, Karim Koreitem, D. Meger, Gregory Dudek
Recent work on semantic simultaneous localization and mapping (SLAM) has shown the utility of natural objects as landmarks for improving localization accuracy and robustness. In this paper we present a monocular semantic SLAM system that uses object identity and inter-object geometry for view-invariant loop detection and drift correction. Our system's ability to recognize an area of the scene even under large changes in viewing direction allows it to surpass the mapping accuracy of ORB-SLAM, which uses only local appearance-based features that are not robust to large viewpoint changes. Experiments on real indoor scenes show that our method achieves a mean drift reduction of 70% compared directly to ORB-SLAM. Additionally, we propose a method for object orientation estimation that leverages the tracked pose of the moving camera in the SLAM setting to overcome ambiguities caused by object symmetry. This allows our SLAM system to produce geometrically detailed semantic maps with object orientation, translation, and scale.
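The view invariance exploited above comes from inter-object geometry: pairwise distances between landmark centroids are unchanged by any rigid camera motion. A brute-force matching sketch of that idea (the paper's actual detector is not specified here, and the tolerance is an assumed value):

```python
import numpy as np
from itertools import permutations

def pairwise_dists(P):
    """Distance matrix between landmark centroids, shape (n, n)."""
    return np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

def is_loop(landmarks_a, landmarks_b, tol=0.1):
    """True if some correspondence makes the two geometries agree."""
    Da = pairwise_dists(landmarks_a)
    for perm in permutations(range(len(landmarks_b))):
        Db = pairwise_dists(landmarks_b[list(perm)])
        if np.abs(Da - Db).max() < tol:
            return True
    return False

A = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0]], float)
# The same scene observed after a rotation + translation, and reordered:
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
B = (A @ Rz.T + np.array([3.0, -1.0, 0.5]))[[2, 0, 1]]
```

In practice the permutation search would be pruned by object class labels, but the invariant itself, agreement of the distance matrices, is what makes the loop test independent of viewpoint.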
Pages: 7943-7949
Citations: 14
Hand Pose Estimation for Hand-Object Interaction Cases using Augmented Autoencoder
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197299
Shile Li, Haojie Wang, Dongheui Lee
Hand pose estimation in the presence of objects is challenging due to object occlusion and the lack of large annotated datasets. To tackle these issues, we propose an Augmented Autoencoder-based deep learning method that uses augmented clean-hand data. Our method takes the 3D point cloud of a hand together with an augmented object as input and encodes it into a latent representation of the hand. From this latent representation, our method decodes the 3D hand pose, and we propose an auxiliary point cloud decoder to assist the formation of the latent space. Through quantitative and qualitative evaluation on both a synthetic dataset and real captured data containing objects, we demonstrate state-of-the-art performance for hand pose estimation with objects, even when using only a small number of annotated hand-object samples.
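The data flow can be sketched with random weights: object points are added to the hand cloud at the input, an order-invariant pooling encoder produces the latent hand code, and two heads decode the 3D pose and the auxiliary clean point cloud. All shapes, the max-pool encoder, and the point counts are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

N_POINTS, LATENT, N_JOINTS = 256, 64, 21   # assumed sizes

W1 = rng.normal(0, 0.1, (3, LATENT))               # per-point feature map
W_pose = rng.normal(0, 0.1, (LATENT, N_JOINTS * 3))  # pose decoder head
W_cloud = rng.normal(0, 0.1, (LATENT, N_POINTS * 3)) # auxiliary cloud head

def encode(points):
    """(n, 3) point cloud -> (LATENT,) code via symmetric max pooling."""
    feats = np.tanh(points @ W1)           # per-point features
    return feats.max(axis=0)               # order-invariant pooling

hand = rng.random((200, 3))                # clean hand points
object_pts = rng.random((56, 3))           # augmentation: occluding object
z = encode(np.vstack([hand, object_pts]))  # latent hand representation
pose = (z @ W_pose).reshape(N_JOINTS, 3)       # decoded 3D hand pose
cloud = (z @ W_cloud).reshape(N_POINTS, 3)     # decoded clean hand cloud
```

The key training idea is that the cloud decoder is supervised against the clean hand only, so the latent code learns to discard the object points injected at the input.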
Pages: 993-999
Citations: 6
Radar Sensors in Collaborative Robotics: Fast Simulation and Experimental Validation
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197180
Christian Stetco, Barnaba Ubezio, Stephan Mühlbacher-Karrer, H. Zangl
With the availability of small system-in-package realizations, radar systems are becoming increasingly attractive for a variety of applications in robotics, in particular collaborative robotics. As the simulation of robot systems in realistic scenarios has become an important tool, not only for design and optimization but also for machine learning approaches, realistic simulation models are needed. For radar sensors, this means providing more realistic results than simple proximity sensors, e.g., in the presence of multiple objects and/or humans, objects with different relative velocities, and differentiation between background and foreground movement. Because of the short wavelength in the millimeter range, we propose to utilize methods known from computer graphics (e.g., the z-buffer and the Lambertian reflectance model) to quickly acquire depth images and reflection estimates. This information is used to estimate the received signal of a frequency-modulated continuous-wave (FMCW) radar by superposition of the corresponding signal contributions. Due to its moderate computational complexity, the approach can be used with various simulation environments such as V-REP or Gazebo. The validity and benefits of the approach are demonstrated by comparison with experimental data obtained with a radar sensor on a UR10 arm in different scenarios.
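The superposition step can be sketched directly: each depth-image pixel at range R contributes a beat sinusoid at f_b = 2RS/c (S being the chirp slope), weighted by its estimated reflectance. The chirp parameters and the two reflecting points below are illustrative, not the paper's:

```python
import numpy as np

C = 3.0e8       # speed of light, m/s
SLOPE = 1.0e14  # chirp slope S = B/T, here 4 GHz swept over 40 us (assumed)
FS = 1.0e7      # ADC sample rate, Hz (assumed)
N = 400         # samples per chirp

def beat_signal(ranges_m, weights):
    """Superpose one beat sinusoid per reflecting point (e.g., per pixel)."""
    t = np.arange(N) / FS
    sig = np.zeros(N)
    for R, w in zip(ranges_m, weights):
        f_b = 2.0 * R * SLOPE / C           # beat frequency for range R
        sig += w * np.cos(2 * np.pi * f_b * t)
    return sig

# Two points: 1.5 m (strong return) and 3.0 m (weaker, e.g. oblique surface)
sig = beat_signal([1.5, 3.0], [1.0, 0.4])
spectrum = np.abs(np.fft.rfft(sig))
peak_bin = int(np.argmax(spectrum[1:])) + 1        # skip the DC bin
peak_range = peak_bin * FS / N * C / (2 * SLOPE)   # invert f_b -> R
```

A full simulator would draw ranges and weights from the z-buffer render and the Lambertian model; the FFT of the superposed signal then recovers the range profile, as in a real FMCW receiver.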
Pages: 10452-10458
Citations: 10
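The FMCW superposition step described in the radar-simulation abstract above can be sketched as follows. This is a minimal illustration, not the authors' simulator: the radar parameters (carrier frequency, bandwidth, chirp length) and the point-scatterer model are assumptions, and in the paper the per-scatterer amplitudes would come from z-buffer depth images and a Lambertian reflectance estimate rather than being free parameters.

```python
import numpy as np

# Assumed radar parameters (illustrative only, not taken from the paper).
C = 3e8           # speed of light [m/s]
F0 = 77e9         # carrier frequency [Hz]
B = 1e9           # sweep bandwidth [Hz]
T_CHIRP = 100e-6  # chirp duration [s]
N = 1024          # ADC samples per chirp
SLOPE = B / T_CHIRP  # chirp slope [Hz/s]

def beat_signal(ranges_m, amplitudes):
    """Superimpose the de-chirped (beat) contributions of point scatterers.

    A scatterer at range R has round-trip delay tau = 2R/c; after mixing
    with the transmitted chirp it appears as a tone at f_b = SLOPE * tau
    with phase 2*pi*F0*tau. The received signal is the superposition of
    all such contributions.
    """
    t = np.arange(N) * (T_CHIRP / N)
    sig = np.zeros(N)
    for r, a in zip(ranges_m, amplitudes):
        tau = 2.0 * r / C
        sig += a * np.cos(2 * np.pi * (SLOPE * tau * t + F0 * tau))
    return sig

def range_profile(sig):
    """Magnitude FFT of one chirp; FFT bin k maps to range k * c / (2B)."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    return spec, C / (2 * B)  # spectrum, metres per FFT bin
```

A peak in the range profile then recovers each scatterer's distance; in the paper's pipeline the amplitude of each contribution would be derived from the rendered reflectance of the corresponding visible surface patch.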
Evaluation of Perception Latencies in a Human-Robot Collaborative Environment
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197067
Atle Aalerud, G. Hovland
The latency in vision-based sensor systems used in human-robot collaborative environments is an important safety parameter that has, in most cases, been neglected by researchers. The main reason for this neglect is the lack of an accurate, minimal-delay ground-truth sensor system against which to benchmark the vision sensors. In this paper, the latencies of 3D vision-based sensors are experimentally evaluated and analyzed using an accurate laser-tracker system that communicates on a dedicated EtherCAT channel with minimal delay. The experimental results demonstrate that the latency of the vision-based sensor system is several orders of magnitude higher than that of the control and actuation system.
{"title":"Evaluation of Perception Latencies in a Human-Robot Collaborative Environment","authors":"Atle Aalerud, G. Hovland","doi":"10.1109/ICRA40945.2020.9197067","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197067","url":null,"abstract":"The latency in vision-based sensor systems used in human-robot collaborative environments is an important safety parameter which in most cases has been neglected by researchers. The main reason for this neglect is the lack of an accurate ground-truth sensor system with a minimal delay to benchmark the vision-sensors against. In this paper the latencies of 3D vision-based sensors are experimentally evaluated and analyzed using an accurate laser-tracker system which communicates on a dedicated EtherCAT channel with minimal delay. The experimental results in the paper demonstrate that the latency in the vision-based sensor system is many orders higher than the latency in the control and actuation system.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"32 1","pages":"5018-5023"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79066614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
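The benchmarking idea in the latency-evaluation abstract above — comparing a vision sensor's track of a moving target against a low-delay ground-truth trajectory — can be sketched with a cross-correlation delay estimator. This is one common way to extract the lag between two synchronized time series, shown here as a sketch; the paper's actual laser-tracker/EtherCAT setup and analysis are not reproduced.

```python
import numpy as np

def estimate_latency(ref, meas, dt):
    """Estimate the delay of `meas` relative to `ref`, both sampled at dt.

    Cross-correlates the zero-mean trajectories; the lag that maximizes
    the correlation is taken as the latency (positive result means
    `meas` lags the reference).
    """
    ref = np.asarray(ref, dtype=float) - np.mean(ref)
    meas = np.asarray(meas, dtype=float) - np.mean(meas)
    corr = np.correlate(meas, ref, mode="full")
    # In "full" mode, index (len(ref) - 1) corresponds to zero lag.
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag * dt
```

For example, feeding in a ground-truth trajectory and a vision track that is a delayed copy of it recovers the delay to within one sample period; with real data the estimate also absorbs exposure, processing, and transport delays into a single end-to-end figure.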