
Latest publications: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Control method of power-assisted cart with one motor, a differential gear, and brakes based on motion state of the cart
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8206114
A. Seino, Yuta Wakabayashi, J. Kinugawa, K. Kosuge
In this study, we propose a control strategy for a power-assisted cart based on its motion state. The power-assisted cart we developed has one motor, a differential gear, and brakes. The cart uses the motor and the differential gear to move forward, and applying a brake to either wheel allows it to turn left or right. Therefore, the power-assisted cart can support the user both when going straight and when turning, despite having only one motor. Previously, we developed a control method that regulates the cart's speed around the operating point so that its magnitude stays constant when the cart starts turning. This was necessary because the characteristics of the differential gear cause a speed change during turning. However, the desired behavior when transitioning from straight motion to turning motion differs from the desired behavior when transitioning from turning motion back to straight motion. Therefore, in this paper we propose a control method that adjusts the speed along the direction of motion based on the state of the cart. We validated the effectiveness of the proposed method through experiments and discuss the results.
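To make the idea concrete, the following is a minimal sketch of a state-based assist controller: it latches the speed magnitude along the direction of motion when a brake command signals the start of a turn, and tracks the user's desired speed otherwise. The plant model, gains, and state logic are illustrative assumptions, not the controller described in the paper.

```python
# Minimal sketch of a state-based assist controller (illustrative only):
# the plant model, gains, and state thresholds are assumptions, not the paper's.

def cart_state(brake_left, brake_right):
    """Classify the cart's motion state from the brake commands."""
    return "turning" if (brake_left or brake_right) else "straight"

def assist_controller(v_measured, v_desired, state, prev_state, v_hold, kp=2.0):
    """Proportional speed regulation along the direction of motion.

    When the cart enters a turn, hold the pre-turn speed magnitude as the
    reference; when it returns to straight motion, track the user's desired
    speed again.
    """
    if state == "turning" and prev_state == "straight":
        v_hold = v_measured                  # latch the speed at the start of the turn
    v_ref = v_hold if state == "turning" else v_desired
    motor_cmd = kp * (v_ref - v_measured)    # assist command
    return motor_cmd, v_hold

# Toy simulation: braking the left wheel is assumed to add drag that would
# otherwise change the speed along the direction of motion.
dt, v, v_hold, prev = 0.01, 0.0, 0.0, "straight"
for step in range(300):
    brake_left = 100 <= step < 200           # turn left for one second
    state = cart_state(brake_left, False)
    u, v_hold = assist_controller(v, v_desired=1.0, state=state,
                                  prev_state=prev, v_hold=v_hold)
    drag = 0.5 * v + (0.8 * v if brake_left else 0.0)
    v += dt * (u - drag)                     # crude 1-D cart dynamics
    prev = state
print(f"final speed along direction of motion: {v:.2f} m/s")
```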
Citations: 3
Improved object pose estimation via deep pre-touch sensing
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8206061
Patrick E. Lancaster, Boling Yang, Joshua R. Smith
For certain manipulation tasks, object pose estimation from head-mounted cameras may not be sufficiently accurate. This is at least in part due to our inability to perfectly calibrate the coordinate frames of today's high-degree-of-freedom robot arms that link the head to the end effectors. We present a novel framework combining pre-touch sensing and deep learning to estimate pose more accurately and efficiently. The use of pre-touch sensing allows our method to localize the object directly with respect to the robot's end effector, thereby avoiding error caused by miscalibration of the arms. Instead of requiring the robot to scan the entire object with its pre-touch sensor, we use a deep neural network to detect object regions that contain distinctive geometric features. By focusing pre-touch sensing on these regions, the robot can more efficiently gather the information necessary to adjust its original pose estimate. Our region detection network was trained on a new dataset containing objects of widely varying geometries, labeled in a scalable fashion that is free from human bias. This dataset is applicable to any task that involves a pre-touch sensor gathering geometric information, and it has been made publicly available. We evaluate our framework by having the robot re-estimate the pose of a number of objects of varying geometries. Compared to two simpler region proposal methods, our deep neural network performs significantly better. In addition, we find that after a sequence of scans, objects can typically be localized to within 0.5 cm of their true position. We also observe that the original pose estimate can often be significantly improved after collecting a single quick scan.
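A rough sketch of the refinement loop is given below: candidate regions are ranked by an assumed distinctiveness score, the selected regions are "scanned", and a translation-only correction is fit to the offsets between expected and measured surface points. The score map, measurement model, and translation-only update are placeholders rather than the authors' network or sensor.

```python
import numpy as np

# Illustrative sketch (not the paper's network or sensor model): pick the
# highest-scoring regions from an assumed geometric-distinctiveness map,
# "scan" them, and refine a translation-only pose estimate.

rng = np.random.default_rng(0)

def select_regions(score_map, k=3):
    """Return the (row, col) indices of the k highest-scoring regions."""
    flat = np.argsort(score_map, axis=None)[::-1][:k]
    return np.array(np.unravel_index(flat, score_map.shape)).T

def refine_translation(expected_pts, measured_pts):
    """Least-squares translation that aligns expected points to measurements."""
    return (measured_pts - expected_pts).mean(axis=0)

# Assumed per-region scores (standing in for a detection network's output).
score_map = rng.random((8, 8))
regions = select_regions(score_map, k=4)

true_offset = np.array([0.012, -0.008, 0.005])   # metres, ground-truth pose error
expected = rng.random((len(regions), 3))         # surface points under the current pose estimate
measured = expected + true_offset + 0.001 * rng.standard_normal((len(regions), 3))

correction = refine_translation(expected, measured)
print("estimated pose correction (m):", np.round(correction, 4))
```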
Citations: 9
Efficient stochastic multicriteria arm trajectory optimization
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8206256
D. Pavlichenko, Sven Behnke
Performing manipulation with robotic arms requires a method for planning trajectories that takes multiple factors into account: collisions, joint limits, orientation constraints, torques, and the duration of the trajectory. We present an approach to efficiently optimize arm trajectories with respect to multiple criteria. Our work extends Stochastic Trajectory Optimization for Motion Planning (STOMP). We optimize trajectory duration by including velocity in the optimization. We propose an efficient cost function with normalized components, which allows components to be prioritized according to user-specified requirements. Optimization is done in two stages: first with a partial cost function, and then with the full cost function. We compare our method to state-of-the-art methods. In addition, we perform experiments on real robots: the centaur-like robot Momaro and an industrial manipulator.
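The flavour of the approach can be conveyed with a stripped-down STOMP-style update in which noisy rollouts are scored by a weighted sum of normalized cost components. The specific cost terms, weights, normalization scales, and noise schedule below are placeholders, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(traj, weights):
    """Weighted sum of normalized cost components (placeholders for
    obstacle, joint-limit, torque, and duration costs)."""
    smoothness = np.sum(np.diff(traj, axis=0) ** 2)
    limit_violation = np.sum(np.clip(np.abs(traj) - 1.0, 0.0, None))
    comps = np.array([smoothness / 10.0, limit_violation / 5.0])  # assumed scales
    return float(weights @ np.clip(comps, 0.0, 1.0))

def stomp_like_update(traj, weights, n_rollouts=20, sigma=0.05, h=10.0):
    """One iteration: sample noisy rollouts, weight them by exp(-h * cost),
    and combine the noise into an update (simplified STOMP-style step)."""
    noise = sigma * rng.standard_normal((n_rollouts,) + traj.shape)
    noise[:, 0], noise[:, -1] = 0.0, 0.0        # keep endpoints fixed
    costs = np.array([cost(traj + n, weights) for n in noise])
    probs = np.exp(-h * (costs - costs.min()))
    probs /= probs.sum()
    return traj + np.tensordot(probs, noise, axes=1)

# 2-joint trajectory from start to goal, optimized for a few iterations.
traj = np.linspace([0.0, 0.0], [0.9, -0.4], num=30)
weights = np.array([0.7, 0.3])                  # user-specified priorities
for _ in range(50):
    traj = stomp_like_update(traj, weights)
print("final cost:", round(cost(traj, weights), 4))
```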
Citations: 20
Morphological optimization for tensegrity quadruped locomotion
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8206253
Dawn M. Hustig-Schultz, Vytas SunSpiral, M. Teodorescu
The increasing complexity of soft and hybrid-soft robots highlights the need for more efficient methods of minimizing machine-learning solution spaces, as well as creative ways to ease the process of rapid prototyping. In this paper, we present an initial exploration of this process using hand-chosen morphologies. Four different choices of muscle groups are actuated on a tensegrity quadruped called MountainGoat, three for a primarily spine-driven morphology and one for a primarily leg-driven morphology, and the resulting locomotion speeds are compared. Each design iteration seeks to reduce the total number of active muscles, and consequently the dimensionality of the machine-learning problem, while still producing effective locomotion. The reduction in active muscles is intended to simplify future rapid prototyping of the robot.
Citations: 3
Dimensional inconsistencies in code and ROS messages: A study of 5.9M lines of code
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8202229
J. Ore, Sebastian G. Elbaum, Carrick Detweiler
This work presents a study of robot software using the Robot Operating System (ROS), focusing on detecting inconsistencies in physical unit manipulation. We discuss how dimensional analysis, the rules governing how physical quantities are combined, can be used to detect inconsistencies in robot software that are otherwise difficult to detect. Using a corpus of ROS software with 5.9M lines of code, we measure the frequency of these dimensional inconsistencies and find them in 6% (211 / 3,484) of repositories that use ROS. We find that the inconsistency type ‘Assigning multiple units to a variable’ accounts for 75% of inconsistencies in ROS code. We identify the ROS classes and physical units most likely to be involved with dimensional inconsistencies, and find that the ROS Message type geometry_msgs::Twist is involved in over half of all inconsistencies and is used by developers in ways contrary to Twist's intent. We further analyze the frequency of physical units used in ROS programs as a proxy for assessing how developers use ROS, and discuss the practical implications of our results including how to detect and avoid these inconsistencies.
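The kind of check involved, recording the physical unit carried by each variable and flagging variables that are assigned quantities with different units, can be sketched in a few lines. The unit representation and the flagged example are illustrative only; this is not the authors' static-analysis tool.

```python
from collections import Counter

# Minimal sketch of a dimensional-consistency check (not the authors' tool):
# represent a unit as a tuple of exponents over (m, s, kg) and record every
# unit assigned to each variable name.

Unit = tuple                       # exponents of (metre, second, kilogram)
METRE, SECOND = (1, 0, 0), (0, 1, 0)
RAD_PER_S = (0, -1, 0)             # radians treated as dimensionless here

def divide(a: Unit, b: Unit) -> Unit:
    return tuple(x - y for x, y in zip(a, b))

assignments = {}                   # variable name -> list of assigned units

def assign(var: str, unit: Unit):
    assignments.setdefault(var, []).append(unit)

# Example inspired by the 'assigning multiple units to a variable' pattern:
# a Twist-like linear.x field assigned a velocity in one place and a raw
# distance in another (hypothetical variable names).
assign("cmd.linear.x", divide(METRE, SECOND))   # m/s (intended usage)
assign("cmd.linear.x", METRE)                   # m   (inconsistent)
assign("cmd.angular.z", RAD_PER_S)

def report(assignments):
    for var, units in assignments.items():
        distinct = Counter(units)
        if len(distinct) > 1:
            print(f"inconsistency: {var} assigned units {dict(distinct)}")

report(assignments)
```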
Citations: 13
Full 3D rotation estimation in scanning electron microscope
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8202284
A. Kudryavtsev, S. Dembélé, N. L. Fort-Piat
Estimation of 3D object position is a crucial step for a variety of robotics and computer vision applications, including 3D reconstruction and object manipulation. At the microscale, new types of visual sensors are used, such as the Scanning Electron Microscope (SEM). Nowadays, micro- and nanomanipulation tasks, such as component assembly, are in most cases performed in teleoperated mode. Measuring object position and orientation is a crucial step towards automatic object handling. Current methods of pose estimation in SEM recover the full object movement using its computer-aided design (CAD) model. If the model is not known, most methods can estimate only in-plane translations and the rotation around the camera's optical axis. In the literature, the SEM is considered a camera with parallel projection, i.e., an affine camera, which implies image invariance to z-movement and the bas-relief ambiguity. In this paper, the authors address the problem of measuring the full 3D rotation of an unknown scene with an uncalibrated SEM and without additional sensors. Rotations are estimated from image triplets by solving a spherical triangle obtained from fundamental matrices only, without the need for intrinsic calibration, which avoids parallel-projection ambiguities. The presented results, obtained in simulation and on real data, validate the proposed scheme.
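The geometric core of such an approach is solving a spherical triangle. The sketch below recovers the interior angles of a spherical triangle from its three side arcs via the spherical law of cosines; how the arcs would be obtained from the pairwise fundamental matrices is specific to the paper and is not reproduced here.

```python
import numpy as np

def spherical_angles(a, b, c):
    """Interior angles (A, B, C) of a spherical triangle with side arcs
    a, b, c (in radians), from the spherical law of cosines:
        cos(a) = cos(b)cos(c) + sin(b)sin(c)cos(A).
    """
    def angle(opposite, s1, s2):
        cos_A = (np.cos(opposite) - np.cos(s1) * np.cos(s2)) / (np.sin(s1) * np.sin(s2))
        return np.arccos(np.clip(cos_A, -1.0, 1.0))
    return angle(a, b, c), angle(b, c, a), angle(c, a, b)

# Example side arcs; in the paper's setting these would come from the
# pairwise two-view geometry of the image triplet.
a, b, c = np.radians([40.0, 55.0, 70.0])
A, B, C = spherical_angles(a, b, c)
print("angles (deg):", np.degrees([A, B, C]).round(2))
# Sanity check: the angle sum of a spherical triangle exceeds 180 degrees.
assert A + B + C > np.pi
```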
Citations: 8
A multimodal execution monitor with anomaly classification for robot-assisted feeding
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8206437
Daehyung Park, Hokeun Kim, Yuuna Hoshi, Zackory M. Erickson, Ariel Kapusta, C. Kemp
Activities of daily living (ADLs) are important for quality of life. Robotic assistance offers the opportunity for people with disabilities to perform ADLs on their own. However, when a complex semi-autonomous system provides real-world assistance, occasional anomalies are likely to occur. Robots that can detect, classify, and respond appropriately to common anomalies have the potential to provide more effective and safer assistance. We introduce a multimodal execution monitor to detect and classify anomalous executions when robots operate near humans. Our system builds on our past work on multimodal anomaly detection. Our new monitor classifies the type and cause of common anomalies using an artificial neural network. We implemented and evaluated our execution monitor in the context of robot-assisted feeding with a general-purpose mobile manipulator. In our evaluations, our monitor outperformed baseline methods from the literature. It succeeded in detecting 12 common anomalies from 8 able-bodied participants with 83% accuracy and in classifying the types and causes of the detected anomalies with 90% and 81% accuracy, respectively. We then performed an in-home evaluation with Henry Evans, a person with severe quadriplegia. With our system, Henry successfully fed himself while the monitor detected anomalies, classified their types, and classified their causes with 86%, 90%, and 54% accuracy, respectively.
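The classification stage, mapping a multimodal feature vector to an anomaly type, can be illustrated with a small feed-forward network. The synthetic features, class names, and network size below are placeholders; the monitor's actual features, detector, and network are described in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic multimodal features (e.g., force, sound, and kinematic statistics)
# for three assumed anomaly types; purely illustrative data.
n_per_class, n_features = 200, 12
centers = rng.standard_normal((3, n_features)) * 2.0
X = np.vstack([c + rng.standard_normal((n_per_class, n_features)) for c in centers])
y = np.repeat(["collision", "spill", "face_occlusion"], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Small feed-forward classifier standing in for the monitor's network.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("anomaly-type accuracy:", round(clf.score(X_test, y_test), 3))
```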
Citations: 51
Object-based affordances detection with Convolutional Neural Networks and dense Conditional Random Fields
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8206484
Anh Nguyen, D. Kanoulas, D. Caldwell, N. Tsagarakis
We present a new method to detect object affordances in real-world scenes using deep Convolutional Neural Networks (CNN), an object detector and dense Conditional Random Fields (CRF). Our system first trains an object detector to generate bounding box candidates from the images. A deep CNN is then used to learn the depth features from these bounding boxes. Finally, these feature maps are post-processed with dense CRF to improve the prediction along class boundaries. The experimental results on our new challenging dataset show that the proposed approach outperforms recent state-of-the-art methods by a substantial margin. Furthermore, from the detected affordances we introduce a grasping method that is robust to noisy data. We demonstrate the effectiveness of our framework on the full-size humanoid robot WALK-MAN using different objects in real-world scenarios.
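The post-processing step, sharpening per-pixel class probabilities along boundaries, can be approximated by a mean-field update with a single Gaussian spatial kernel, as sketched below. A full dense CRF also uses an appearance (bilateral) kernel and learned compatibilities, so this is only a rough stand-in for the CRF used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def softmax(logits, axis=0):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def meanfield_spatial(unary_logits, n_iters=5, sigma=3.0, w=2.0):
    """Simplified mean-field inference with a Potts model and a single
    Gaussian spatial kernel (no bilateral/appearance term).

    unary_logits: array of shape (num_classes, H, W), e.g., CNN outputs.
    """
    q = softmax(unary_logits)
    for _ in range(n_iters):
        # Message passing: blur each class's probability map spatially.
        msg = np.stack([gaussian_filter(q[c], sigma) for c in range(q.shape[0])])
        # Potts compatibility: neighbouring mass on other classes adds cost.
        pairwise = w * (msg.sum(axis=0, keepdims=True) - msg)
        q = softmax(unary_logits - pairwise)
    return q

# Toy unaries for 3 affordance classes on a 64x64 image.
rng = np.random.default_rng(3)
unary = rng.standard_normal((3, 64, 64))
refined = meanfield_spatial(unary)
labels = refined.argmax(axis=0)
print("label counts:", np.bincount(labels.ravel(), minlength=3))
```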
Citations: 113
Contouring error vector and cross-coupled control of multi-axis servo system
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8206024
Ran Shi, Xiang Zhang, Y. Lou
The calculation of the contouring error and the cross-coupled gains has always been a critical issue in the application of cross-coupled control. Traditionally, linear and circular approximations are widely used to determine the contouring error and the cross-coupled gains. However, with these approximations the calculation of the contouring error and cross-coupled gains is complicated, especially in three-dimensional applications. In this paper, a contouring error vector is established in the task coordinate frame; the contouring error and cross-coupled gains can then be easily obtained from the magnitude and orientation of this vector. Experimental results on a three-axis CNC machine indicate that the proposed approach simplifies the calculation of the contouring error and cross-coupled gains.
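The contouring error vector itself is simple to form once the reference tangent is known: it is the component of the axis tracking error orthogonal to the path tangent, and its components set the per-axis share of the cross-coupled correction. The sketch below shows that decomposition in the task frame; the gain values are assumed placeholders, and the paper's full servo loops are not reproduced.

```python
import numpy as np

def contouring_error_vector(p_actual, p_ref, tangent):
    """Tracking error decomposed in the task frame.

    Returns (tracking_error, contour_error_vector): the contour error is the
    component of the tracking error orthogonal to the unit path tangent.
    """
    t_hat = tangent / np.linalg.norm(tangent)
    e = p_ref - p_actual                      # per-axis tracking error
    e_tangential = (e @ t_hat) * t_hat        # along the reference path
    eps = e - e_tangential                    # contouring error vector
    return e, eps

def cross_coupled_command(e, eps, kp=50.0, kc=120.0):
    """Per-axis command: axis tracking term plus a cross-coupled term whose
    per-axis share follows the orientation of the contouring error vector."""
    return kp * e + kc * eps

# Example: 3-axis reference point, tool-path tangent, and actual position.
p_ref = np.array([0.100, 0.050, 0.020])       # metres
tangent = np.array([1.0, 1.0, 0.0])
p_act = np.array([0.098, 0.052, 0.019])
e, eps = contouring_error_vector(p_act, p_ref, tangent)
u = cross_coupled_command(e, eps)
print("contour error magnitude (m):", round(float(np.linalg.norm(eps)), 6))
print("axis commands:", np.round(u, 3))
```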
Citations: 4
Influence of fingertip and object shape on the manipulation ability of underactuated hands
Pub Date : 2017-09-01 DOI: 10.1109/IROS.2017.8202176
Diego Ospina, A. Ramirez-Serrano
This paper presents a kinetostatic framework to analyze the grasping and in-hand object manipulation abilities of two-finger underactuated hands. The framework includes a procedure to compute the Grasp Matrix and the Hand Jacobian for objects and fingertips of arbitrary shape considering rolling contacts without slipping. The usefulness of the proposed approach is illustrated in a case study of a pair of underactuated fingers driven by a tendon-pulley differential transmission mechanism and capable of performing in-hand object manipulation. The manipulability region for different object and fingertip shapes is computed and the results are discussed.
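For the planar, two-contact case with point contacts, the Grasp Matrix that maps contact forces to the object wrench can be written down directly, as sketched below. The rolling-contact kinematics and the Hand Jacobian, which depend on the fingertip and object shapes, are not reproduced here.

```python
import numpy as np

def planar_grasp_matrix(contacts, obj_center):
    """Grasp matrix G for planar point contacts with friction.

    Each contact contributes a 2-D force f = (fx, fy); G maps the stacked
    contact forces to the object wrench (Fx, Fy, Mz) about obj_center:
        wrench = G @ f_stacked, with G of shape (3, 2 * n_contacts).
    """
    cols = []
    for c in contacts:
        r = np.asarray(c, dtype=float) - np.asarray(obj_center, dtype=float)
        # Unit x/y forces and their moments about the object center.
        cols.append(np.array([[1.0, 0.0],
                              [0.0, 1.0],
                              [-r[1], r[0]]]))
    return np.hstack(cols)

# Two fingertips touching a circular object of radius 0.03 m at opposite sides.
contacts = [(-0.03, 0.0), (0.03, 0.0)]
G = planar_grasp_matrix(contacts, obj_center=(0.0, 0.0))
print("G =\n", G)

# A pure squeeze (equal and opposite normal forces) produces zero net wrench.
f_squeeze = np.array([1.0, 0.0, -1.0, 0.0])
print("wrench from squeeze:", G @ f_squeeze)
```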
Citations: 1