
Latest publications: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems

Harp plucking robotic finger
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6385720
D. Chadefaux, Jean-Loïc Le Carrou, M. Vitrani, Sylvere Billout, L. Quartier
This paper describes the development of a repeatable and configurable robotic finger designed to pluck harp strings. Ultimately, this device will serve as a tool for studying string instruments under playing conditions. We use a classical two-degree-of-freedom robot enhanced with silicone fingertips. Validation requires a comparison with a real harpist's performance. A dedicated experimental setup, combining a high-speed camera with an accelerometer, was built to capture finger and string trajectories throughout the plucking action, as well as the soundboard vibrations during the string oscillations. A set of vibrational features is then extracted from these signals to compare the robotic finger's plucking actions with the harpist's. These descriptors were analyzed for six fingertips of various shapes and hardnesses. The results allow the optimal shape and hardness to be selected among the silicone fingertips according to the vibrational features.
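As an illustration of the kind of feature extraction described above, here is a minimal Python sketch computing a few generic vibrational descriptors from a soundboard acceleration signal. The descriptors (spectral centroid, RMS level, decay slope) and all names are assumptions for illustration; the abstract does not specify the paper's actual feature set.

```python
import numpy as np

def vibrational_features(signal: np.ndarray, fs: float) -> dict:
    """Compute simple vibrational descriptors from an accelerometer trace."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))  # brightness (Hz)
    rms = float(np.sqrt(np.mean(signal ** 2)))                     # vibration level
    # Log-energy decay slope: a crude proxy for how fast the pluck dies away.
    window = max(1, int(fs // 10))
    energy = np.convolve(signal ** 2, np.ones(window) / window, mode="valid")
    decay = float(np.polyfit(np.arange(len(energy)) / fs,
                             np.log(energy + 1e-12), 1)[0])
    return {"centroid_hz": centroid, "rms": rms, "decay_slope": decay}
```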
{"title":"Harp plucking robotic finger","authors":"D. Chadefaux, Jean-Loïc Le Carrou, M. Vitrani, Sylvere Billout, L. Quartier","doi":"10.1109/IROS.2012.6385720","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385720","url":null,"abstract":"This paper describes results about the development of a repeatable and configurable robotic finger designed to pluck harp strings. Eventually, this device will be a tool to study string instruments in playing conditions. We use a classical robot with two degrees of freedom enhanced with silicone fingertips. The validation method requires a comparison with real harpist performance. A specific experimental setup using a high-speed camera combined with an accelerometer was carried out. It provides finger and string trajectories during the whole plucking action and the soundboard vibrations during the string oscillations. A set of vibrational features are then extracted from these signals to compare robotic finger to harpist plucking actions. These descriptors have been analyzed on six fingertips of various shapes and hardnesses. Results allow to select the optimal shape and hardness among the silicone fingertips according to vibrational features.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"121 1","pages":"4886-4891"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79049799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Efficient search for correct and useful topological maps
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6386155
Collin Johnson, B. Kuipers
We present an algorithm for probabilistic topological mapping that heuristically searches a tree of map hypotheses to provide a usable topological map hypothesis online, while still guaranteeing that the correct map can always be found. Our algorithm annotates each leaf of the tree with a posterior probability. When a new place is encountered, we expand hypotheses based on their posterior probability, which means only the most probable hypotheses are expanded. By focusing on the most probable hypotheses, we dramatically reduce the number of hypotheses evaluated, allowing real-time operation. Additionally, our approach never prunes consistent hypotheses from the tree, which means the correct hypothesis always remains within the tree.
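The expansion strategy lends itself to a best-first search over the hypothesis tree. Below is a minimal sketch, assuming a hypothesis object exposing a `log_posterior` attribute and an `expand()` method; it illustrates posterior-ordered expansion, not the authors' implementation.

```python
import heapq

def best_first_search(root, is_complete, max_expansions=10_000):
    """Expand the most probable map hypothesis first; consistent hypotheses
    are never discarded, so the correct one always remains reachable."""
    frontier = [(-root.log_posterior, 0, root)]  # max-heap via negated score
    counter = 1                                  # tie-breaker for equal scores
    while frontier and max_expansions > 0:
        _, _, hyp = heapq.heappop(frontier)
        if is_complete(hyp):
            return hyp                           # most probable usable map
        for child in hyp.expand():
            heapq.heappush(frontier, (-child.log_posterior, counter, child))
            counter += 1
        max_expansions -= 1
    return None
```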
{"title":"Efficient search for correct and useful topological maps","authors":"Collin Johnson, B. Kuipers","doi":"10.1109/IROS.2012.6386155","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386155","url":null,"abstract":"We present an algorithm for probabilistic topological mapping that heuristically searches a tree of map hypotheses to provide a usable topological map hypothesis online, while still guaranteeing the correct map can always be found. Our algorithm annotates each leaf of the tree with a posterior probability. When a new place is encountered, we expand hypotheses based on their posterior probability, which means only the most probable hypotheses are expanded. By focusing on the most probable hypotheses, we dramatically reduce the number of hypotheses evaluated allowing real-time operation. Additionally, our approach never prunes consistent hypotheses from the tree, which means the correct hypothesis always exists within the tree.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"16 1","pages":"5277-5282"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79233603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
A novel spring mechanism to reduce energy consumption of robotic arms
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6385488
M. Plooij, M. Wisse
Most conventional robotic arms use motors to accelerate the manipulator. This leads to unnecessarily high energy consumption when performing repetitive tasks. This paper presents an approach to reducing energy consumption in robotic arms by performing repetitive tasks with the help of a parallel spring mechanism. A special non-linear spring characteristic is achieved by attaching a spring to two connected pulleys. This parallel spring mechanism provides the accelerations of the manipulator without compromising its ability to vary the task parameters (the time per stroke, the displacement per stroke, the grasping time, and the payload). The energy consumption of the arm with the spring mechanism is compared to that of the same arm without it. Optimal control studies show that the spring mechanism lets the robotic arm use 22% less energy. On the 2-DOF prototype, we achieved an energy reduction of 20%; the difference is due to model simplifications. A spring mechanism incurs an extra energetic cost, because potential energy has to be stored in the spring during startup. This cost equals the total energy savings of the 2-DOF arm over 8 strokes. There could also be an energetic cost to holding the manipulator outside the equilibrium position, but we have designed the spring mechanism so that this holding cost is negligible for a range of start and end positions. The experiments showed that the proposed spring mechanism reduces energy consumption while the arm remains able to handle varying task parameters.
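To see why a parallel spring helps, consider a 1-DOF toy model: the motor must supply the inertial torque minus whatever the spring contributes, and resistive (Joule) losses grow with the square of motor torque. The sketch below uses an assumed inertia and a spring tuned to the stroke; it is a toy illustration, not the paper's model or numbers.

```python
import numpy as np

I = 0.5                      # link inertia (kg m^2), assumed
k = I * np.pi ** 2           # spring tuned so sqrt(k/I) matches the 1 s stroke
t = np.linspace(0.0, 1.0, 1000)
q = 0.5 * (1.0 - np.cos(np.pi * t))      # smooth 0 -> 1 rad point-to-point stroke
qdd = np.gradient(np.gradient(q, t), t)  # angular acceleration

def joule_loss(with_spring: bool) -> float:
    tau_spring = -k * (q - 0.5) if with_spring else np.zeros_like(q)
    tau_motor = I * qdd - tau_spring     # motor supplies what the spring does not
    # Joule losses scale with tau^2 (torque ~ current); integrate over the stroke.
    return float(np.sum(tau_motor ** 2) * (t[1] - t[0]))

print(joule_loss(False), joule_loss(True))  # the spring version is near zero
```

For this cosine stroke the tuned spring supplies the inertial torque almost exactly, which is the idealized version of the energy savings the paper reports.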
{"title":"A novel spring mechanism to reduce energy consumption of robotic arms","authors":"M. Plooij, M. Wisse","doi":"10.1109/IROS.2012.6385488","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385488","url":null,"abstract":"Most conventional robotic arms use motors to accelerate the manipulator. This leads to an unnecessary high energy consumption when performing repetitive tasks. This paper presents an approach to reduce energy consumption in robotic arms by performing its repetitive tasks with the help of a parallel spring mechanism. A special non-linear spring characteristic has been achieved by attaching a spring to two connected pulleys. This parallel spring mechanism provides for the accelerations of the manipulator without compromising its ability to vary the task parameters (the time per stroke, the displacement per stroke the grasping time and the payload). The energy consumption of the arm with the spring mechanism is compared to that of the same arm without the spring mechanism. Optimal control studies show that the robotic arm uses 22% less energy due to the spring mechanism. On the 2 DOF prototype, we achieved an energy reduction of 20%. The difference was due to model simplifications. With a spring mechanism, there is an extra energetic cost, because potential energy has to be stored into the spring during startup. This cost is equal to the total energy savings of the 2 DOF arm during 8 strokes. Next, there could have been an energetic cost to position the manipulator outside the equilibrium position. We have designed the spring mechanism in such a way that this holding cost is negligible for a range of start- and end positions. The performed experiments showed that the implementation of the proposed spring mechanism results in a reduction of the energy consumption while the arm is still able to handle varying task parameters.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"85 1","pages":"2901-2908"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79728512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 64
A system of automated training sample generation for visual-based car detection
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6386060
Chao Wang, Huijing Zhao, F. Davoine, H. Zha
This paper presents a system that automatically generates a car sample dataset for training visual-based car detectors. The dataset contains multi-view car samples labeled with the car's pose, so view-discriminative training and detection are also possible. The system consists of two main parts: laser-based car detection and tracking, which generates motion trajectories of on-road cars, and visual sample extraction, which fuses the detection and tracking results with visual-based detection. A multi-modal sensor system was developed for omni-directional data collection on a test-bed vehicle. By processing data from an experiment conducted on a Beijing freeway, a large number of multi-view car samples with pose information were generated. The quality of the samples is evaluated by applying them in a visual car detector's training and testing procedure.
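The fusion step can be pictured as projecting each laser-tracked car position into the camera image and cropping a pose-labeled sample. The sketch below assumes known extrinsics `R`, `t` and intrinsics `K`; the crop size, pose binning, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def extract_sample(image: np.ndarray, p_laser: np.ndarray, heading: float,
                   K: np.ndarray, R: np.ndarray, t: np.ndarray,
                   box_px: int = 96):
    """Project a laser-tracked car position into the image; return a crop
    plus a coarse view label, or None if the car is not visible."""
    p_cam = R @ p_laser + t                 # laser frame -> camera frame
    if p_cam[2] <= 0:
        return None                         # behind the camera
    uvw = K @ p_cam
    u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
    h, w = image.shape[:2]
    half = box_px // 2
    if not (half <= u < w - half and half <= v < h - half):
        return None                         # projection leaves the frame
    crop = image[v - half:v + half, u - half:u + half]
    pose_bin = int(np.degrees(heading) // 45) % 8   # coarse 8-way view label
    return crop, pose_bin
```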
{"title":"A system of automated training sample generation for visual-based car detection","authors":"Chao Wang, Huijing Zhao, F. Davoine, H. Zha","doi":"10.1109/IROS.2012.6386060","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386060","url":null,"abstract":"This paper presents a system to automatically generate car sample dataset for visual-based car detector training. The dataset contains multi-view car samples labeled with the car's pose, so that a view-discriminative training and car detection is also available. There are mainly two parts in the system: laser-based car detection and tracking generates motion trajectories of on-road cars, and then visual samples are extracted by fusing the detection and tracking results with visual-based detection. A multi-modal sensor system is developed for the omni-directional data collection on a test-bed vehicle. By processing the data of experiment conducted on the freeway of Beijing, a large number of multi-view car samples with pose information were generated. The samples' quality is evaluated by applying it in a visual car detector's training and testing procedure.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"11 1","pages":"4169-4176"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83555253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Robust and fast visual tracking using constrained sparse coding and dictionary learning
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6385459
Tianxiang Bai, Youfu Li, Xiaolong Zhou
We present a novel appearance model using sparse coding with online sparse dictionary learning techniques for robust visual tracking. In the proposed appearance model, the target appearance is modeled via an online sparse dictionary learning technique with an “elastic-net constraint”. This scheme allows us to capture the characteristics of the target's local appearance and promotes robustness against partial occlusions during tracking. Additionally, we unify sparse coding and online dictionary learning by defining a “sparsity consistency constraint” that facilitates the generative and discriminative capabilities of the appearance model. Moreover, we propose a robust similarity metric that can eliminate outliers from corrupted observations. We then integrate the proposed appearance model with the particle filter framework to form a robust visual tracking algorithm. Experiments on publicly available benchmark video sequences demonstrate that the proposed appearance model improves tracking performance compared with other state-of-the-art approaches.
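For the coding step, one plausible off-the-shelf formulation is an elastic-net regression of the candidate patch against the template dictionary. The sketch below uses scikit-learn's `ElasticNet` as a stand-in for the authors' solver; the dictionary `D`, the parameters, and the residual-based score are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def code_patch(patch_vec: np.ndarray, D: np.ndarray,
               alpha: float = 0.05, l1_ratio: float = 0.7):
    """Elastic-net sparse code of a vectorized patch against dictionary D,
    whose columns are vectorized target templates."""
    enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                      fit_intercept=False, positive=True, max_iter=2000)
    enet.fit(D, patch_vec)
    w = enet.coef_                                  # sparse coefficients
    residual = np.linalg.norm(patch_vec - D @ w)    # low residual = likely target
    return w, residual
```

Inside a particle filter, the reconstruction residual (or a robustified variant of it) would weight each candidate state.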
{"title":"Robust and fast visual tracking using constrained sparse coding and dictionary learning","authors":"Tianxiang Bai, Youfu Li, Xiaolong Zhou","doi":"10.1109/IROS.2012.6385459","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385459","url":null,"abstract":"We present a novel appearance model using sparse coding with online sparse dictionary learning techniques for robust visual tracking. In the proposed appearance model, the target appearance is modeled via online sparse dictionary learning technique with an “elastic-net constraint”. This scheme allows us to capture the characteristics of the target local appearance, and promotes the robustness against partial occlusions during tracking. Additionally, we unify the sparse coding and online dictionary learning by defining a “sparsity consistency constraint” that facilitates the generative and discriminative capabilities of the appearance model. Moreover, we propose a robust similarity metric that can eliminate the outliers from the corrupted observations. We then integrate the proposed appearance model with the particle filter framework to form a robust visual tracking algorithm. Experiments on publicly available benchmark video sequences demonstrate that the proposed appearance model improves the tracking performance compared with other state-of-the-art approaches.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"55 1","pages":"3824-3829"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81198314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Fighting fires with human robot teams
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6386269
E. Martinson, W. Lawson, Samuel Blisard, Anthony M. Harrison, J. Trafton
This video submission demonstrates cooperative human-robot firefighting. A human team leader guides the robot to the fire using a combination of speech and gesture.
{"title":"Fighting fires with human robot teams","authors":"E. Martinson, W. Lawson, Samuel Blisard, Anthony M. Harrison, J. Trafton","doi":"10.1109/IROS.2012.6386269","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386269","url":null,"abstract":"This video submission demonstrates cooperative human-robot firefighting. A human team leader guides the robot to the fire using a combination of speech and gesture.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"29 1","pages":"2682-2683"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81207661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
An integrated approach of attention control of target human by nonverbal behaviors of robots in different viewing situations
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6385480
M. M. Hoque, Dipankar Das, Tomomi Onuki, Yoshinori Kobayashi, Y. Kuno
A major challenge in HRI is to design a social robot that can attract a target human's attention and direct it toward a particular direction in various social situations. If a robot wants to initiate an interaction with a person, it may turn its gaze to him/her for eye contact. However, making eye contact is not an easy task for the robot, because such a turning action alone may not be enough to initiate an interaction in all situations, especially when the robot and the human are not facing each other or the human is intensely attending to his/her task. In this paper, we propose a conceptual model of attention control with four phases: attention attraction, eye contact, attention avoidance, and attention shift. To initiate the attention control process, the robot first tries to gain the target participant's attention through a head-turning or head-shaking action, depending on which of three viewing situations the robot occupies in the participant's field of view (central, near peripheral, or far peripheral). After gaining his/her attention, the robot makes eye contact only with the target person, showing gaze awareness by blinking its eyes, and then directs his/her attention toward an object using eye and head turning cues. Moreover, the robot can display attention-aversion behaviors if non-target persons look at it. We design a robot based on the proposed approach, and experimental evaluation confirms that it effectively controls the target participant's attention.
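The four-phase model can be sketched as a small state machine whose first action depends on the viewing situation. The mapping of situations to actions and the transition conditions below are illustrative assumptions, not the paper's exact behavior policy.

```python
from enum import Enum, auto

class Phase(Enum):
    ATTRACT = auto()
    EYE_CONTACT = auto()
    AVOID = auto()
    SHIFT = auto()

def attract_action(viewing_situation: str) -> str:
    # Assumed mapping: in the central field of view a head turn is noticeable;
    # toward the far periphery a larger head-shaking motion is needed.
    return {"central": "turn_head_to_target",
            "near_peripheral": "turn_head_to_target",
            "far_peripheral": "shake_head"}[viewing_situation]

def step(phase: Phase, target_looking: bool, nontarget_looking: bool) -> Phase:
    if nontarget_looking:
        return Phase.AVOID                 # display attention-aversion behavior
    if phase is Phase.ATTRACT and target_looking:
        return Phase.EYE_CONTACT           # blink to show gaze awareness
    if phase is Phase.EYE_CONTACT and target_looking:
        return Phase.SHIFT                 # turn eyes/head toward the object
    return Phase.ATTRACT
```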
{"title":"An integrated approach of attention control of target human by nonverbal behaviors of robots in different viewing situations","authors":"M. M. Hoque, Dipankar Das, Tomomi Onuki, Yoshinori Kobayashi, Y. Kuno","doi":"10.1109/IROS.2012.6385480","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385480","url":null,"abstract":"A major challenge in HRI is to design a social robot that can attract a target human's attention to control his/her attention toward a particular direction in various social situations. If a robot would like to initiate an interaction with a person, it may turn its gaze to him/her for eye contact. However, it is not an easy task for the robot to make eye contact because such a turning action alone may not be enough to initiate an interaction in all situations, especially when the robot and the human are not facing each other or the human intensely attends to his/her task. In this paper, we propose a conceptual model of attention control with four phases: attention attraction, eye contact, attention avoidance, and attention shift. In order to initiate an attention control process, the robot first tries to gain the target participant's attention toward it through head turning, or head shaking action depending on the three viewing situations where the robot is captured in his/her field of view (central field of view, near peripheral field of view, and far peripheral field of view). After gaining her/his attention, the robot makes eye contact only with the target person through showing gaze awareness by blinking its eyes, and directs her/his attention toward an object by turning its eyes and head cues. Moreover, the robot can show attention to aversion behaviors if non-target persons look at it. We design a robot based on the proposed approach, and it is confirmed as effective to control the target participant's attention in experimental evaluation.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"18 1","pages":"1399-1406"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88575610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Feature-based terrain classification for LittleDog
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6386042
Paul Filitchkin, Katie Byl
Recent work in terrain classification has relied largely on 3D sensing methods and color-based classification. We present an approach that works with a single, compact camera and maintains high classification rates that are robust to changes in illumination. Terrain is classified using a bag of visual words (BOVW) created from speeded-up robust features (SURF) with a support vector machine (SVM) classifier. We present several novel techniques to augment this approach. A gradient-descent-inspired algorithm adjusts the SURF Hessian threshold to reach a nominal feature density, and a sliding-window technique classifies mixed-terrain images at high resolution. We demonstrate that our approach is suitable for small legged robots by performing real-time terrain classification on LittleDog. The classifier is used to select between predetermined gaits to traverse terrain of varying difficulty. Results indicate that real-time classification in the loop is faster than using a single all-terrain gait.
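The threshold adaptation can be sketched as a proportional update driven by the feature-count error on each frame. The sketch below uses OpenCV's contrib SURF module; the target density and gain are assumed values, and the update rule illustrates the gradient-descent-inspired idea rather than the paper's exact algorithm.

```python
import cv2  # requires opencv-contrib-python for cv2.xfeatures2d

def adapt_hessian_threshold(gray, threshold: float,
                            target_count: int = 300, gain: float = 0.05) -> float:
    """Nudge the SURF Hessian threshold so the detected feature count
    tracks a nominal density on the next frame."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=threshold)
    keypoints = surf.detect(gray, None)
    # Too many features -> raise the threshold; too few -> lower it,
    # proportionally to the relative error.
    error = (len(keypoints) - target_count) / target_count
    return max(1.0, threshold * (1.0 + gain * error))
```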
{"title":"Feature-based terrain classification for LittleDog","authors":"Paul Filitchkin, Katie Byl","doi":"10.1109/IROS.2012.6386042","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386042","url":null,"abstract":"Recent work in terrain classification has relied largely on 3D sensing methods and color based classification. We present an approach that works with a single, compact camera and maintains high classification rates that are robust to changes in illumination. Terrain is classified using a bag of visual words (BOVW) created from speeded up robust features (SURF) with a support vector machine (SVM) classifier. We present several novel techniques to augment this approach. A gradient descent inspired algorithm is used to adjust the SURF Hessian threshold to reach a nominal feature density. A sliding window technique is also used to classify mixed terrain images with high resolution. We demonstrate that our approach is suitable for small legged robots by performing real-time terrain classification on LittleDog. The classifier is used to select between predetermined gaits to traverse terrain of varying difficulty. Results indicate that real-time classification in-the-loop is faster than using a single all-terrain gait.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"24 1","pages":"1387-1392"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87229969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 99
Segmentation of unknown objects in indoor environments
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6385661
A. Richtsfeld, Thomas Morwald, J. Prankl, M. Zillich, M. Vincze
We present a framework for segmenting unknown objects in RGB-D images, suitable for robotics tasks such as object search, grasping, and manipulation. While handling single objects on a table is a solved problem, complex scenes pose considerable challenges due to clutter and occlusion. After pre-segmenting the input image based on surface normals, surface patches are estimated using a mixture of planes and NURBS (non-uniform rational B-splines), and model selection is employed to find the best representation for the given data. We then construct a graph from the surface patches and the relations between pairs of patches, and perform a graph cut to arrive at object hypotheses segmented from the scene. The energy terms for patch relations are learned from user-annotated training data, where support vector machines (SVMs) are trained to classify a relation as indicating that two patches belong to the same object. We evaluate the relations and present results on a database of different test sets, demonstrating that the approach can segment objects of various shapes in cluttered tabletop scenes.
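The grouping step can be approximated as follows: score each neighboring patch pair with the trained SVM, connect pairs judged likely to belong to the same object, and read object hypotheses off the connected components. This simple thresholding is a stand-in for the paper's graph-cut formulation, and the data structures are assumed.

```python
import networkx as nx
from sklearn.svm import SVC

def group_patches(patch_ids, pair_features, svm: SVC, threshold: float = 0.5):
    """pair_features: {(i, j): feature_vector} for neighboring surface patches.
    Returns a list of patch-id sets, one per object hypothesis."""
    g = nx.Graph()
    g.add_nodes_from(patch_ids)
    for (i, j), feats in pair_features.items():
        p_same = svm.predict_proba([feats])[0, 1]   # needs SVC(probability=True)
        if p_same > threshold:
            g.add_edge(i, j, weight=p_same)
    return [set(c) for c in nx.connected_components(g)]
```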
{"title":"Segmentation of unknown objects in indoor environments","authors":"A. Richtsfeld, Thomas Morwald, J. Prankl, M. Zillich, M. Vincze","doi":"10.1109/IROS.2012.6385661","DOIUrl":"https://doi.org/10.1109/IROS.2012.6385661","url":null,"abstract":"We present a framework for segmenting unknown objects in RGB-D images suitable for robotics tasks such as object search, grasping and manipulation. While handling single objects on a table is solved, handling complex scenes poses considerable problems due to clutter and occlusion. After pre-segmentation of the input image based on surface normals, surface patches are estimated using a mixture of planes and NURBS (non-uniform rational B-splines) and model selection is employed to find the best representation for the given data. We then construct a graph from surface patches and relations between pairs of patches and perform graph cut to arrive at object hypotheses segmented from the scene. The energy terms for patch relations are learned from user annotated training data, where support vector machines (SVM) are trained to classify a relation as being indicative of two patches belonging to the same object. We show evaluation of the relations and results on a database of different test sets, demonstrating that the approach can segment objects of various shapes in cluttered table top scenes.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"os-39 1","pages":"4791-4796"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87423872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 172
Playmate robots that can act according to a child's mental state
Pub Date: 2012-12-24 | DOI: 10.1109/IROS.2012.6386037
Kasumi Abe, Akiko Iwasaki, Tomoaki Nakamura, T. Nagai, A. Yokoyama, T. Shimotomai, Hiroyuki Okada, T. Omori
We propose a playmate robot system that can play with a child. Unlike many therapeutic service robots, our playmate system is implemented as a functionality of a domestic service robot with a high degree of freedom. This means the robot can use its physical features to play high-level games with children, beyond therapeutic play. The proposed system currently consists of ten play modules, including a chatbot with eye contact, card playing, and drawing. The algorithms of these modules are briefly discussed in this paper. To sustain the child's interest in the system, we also propose an action-selection strategy based on a transition model of the child's mental state: the robot estimates the child's state and selects an appropriate action in the course of play. A portion of the proposed algorithms was implemented on a real robot platform, and experiments were carried out to design and evaluate the proposed system.
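One way to realize such an action-selection strategy is to propagate a belief over the child's mental state through a per-action transition model and pick the play module with the highest expected engagement. Everything below (states, values, transition probabilities) is an assumed toy model, not the learned model from the paper.

```python
import numpy as np

STATES = ["bored", "engaged", "excited"]
ENGAGEMENT_VALUE = np.array([0.0, 1.0, 1.5])   # assumed per-state payoff

# P[action][i, j] = P(next state j | current state i, action); assumed numbers.
P = {
    "chat":    np.array([[0.5, 0.4, 0.1], [0.2, 0.6, 0.2], [0.1, 0.6, 0.3]]),
    "cards":   np.array([[0.3, 0.5, 0.2], [0.1, 0.6, 0.3], [0.2, 0.5, 0.3]]),
    "drawing": np.array([[0.4, 0.5, 0.1], [0.2, 0.7, 0.1], [0.3, 0.5, 0.2]]),
}

def select_action(belief: np.ndarray) -> str:
    """belief: probability distribution over STATES, e.g. from observations."""
    def expected_value(action: str) -> float:
        next_belief = belief @ P[action]       # propagate the belief one step
        return float(next_belief @ ENGAGEMENT_VALUE)
    return max(P, key=expected_value)

print(select_action(np.array([0.7, 0.2, 0.1])))   # mostly bored -> "cards" here
```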
{"title":"Playmate robots that can act according to a child's mental state","authors":"Kasumi Abe, Akiko Iwasaki, Tomoaki Nakamura, T. Nagai, A. Yokoyama, T. Shimotomai, Hiroyuki Okada, T. Omori","doi":"10.1109/IROS.2012.6386037","DOIUrl":"https://doi.org/10.1109/IROS.2012.6386037","url":null,"abstract":"We propose a playmate robot system that can play with a child. Unlike many therapeutic service robots, our proposed playmate system is implemented as a functionality of the domestic service robot with a high degree of freedom. This implies that the robot can play high-level games with children, i.e., beyond therapeutic play, using its physical features. The proposed system currently consists of ten play modules, including a chatbot with eye contact, card playing, and drawing. The algorithms of these modules are briefly discussed in this paper. To sustain the player's interest in the system, we also propose an action-selection strategy based on a transition model of the child's mental state. The robot can estimate the child's state and select an appropriate action in the course of play. A portion of the proposed algorithms was implemented on a real robot platform, and experiments were carried out to design and evaluate the proposed system.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"183 1","pages":"4660-4667"},"PeriodicalIF":0.0,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88178339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17