
Latest publications from the 2020 IEEE International Conference on Robotics and Automation (ICRA)

CNN-Based Simultaneous Dehazing and Depth Estimation
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197358
Byeong-uk Lee, Kyunghyun Lee, Jean Oh, I. Kweon
It is difficult for both cameras and depth sensors to obtain reliable information in hazy scenes. Therefore, image dehazing is still one of the most challenging problems in computer vision and robotics. With the development of convolutional neural networks (CNNs), many CNN-based dehazing and depth estimation algorithms have emerged. However, very few of them try to solve these two problems at the same time. Focusing on the fact that traditional haze modeling contains depth information in its formula, we propose a CNN-based simultaneous dehazing and depth estimation network. Our network aims to estimate both a dehazed image and a fully scaled depth map from a single hazy RGB input with end-to-end training. The network contains a single dense encoder and four separate decoders; each decoder shares the encoded image representation while performing its individual task. We suggest a novel depth-transmission consistency loss in the training scheme to fully utilize the correlation between the depth information and the transmission map. To demonstrate the robustness and effectiveness of our algorithm, we performed various ablation studies and compared our results to those of state-of-the-art algorithms in dehazing and single-image depth estimation, both qualitatively and quantitatively. Furthermore, we show the generality of our network by applying it to some real-world examples.
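The "traditional haze modeling" the abstract refers to is the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = e^(−βd(x)), which is exactly where depth d enters the picture. A minimal sketch of that relationship and an illustrative depth-transmission consistency penalty (the L1 form and the `beta` coefficient here are assumptions; the paper's exact loss is not reproduced):

```python
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    """Transmission map t(x) = exp(-beta * d(x)) from the standard
    atmospheric scattering model (beta: assumed scattering coefficient)."""
    return np.exp(-beta * depth)

def hazy_image(clear, depth, airlight=0.8, beta=1.0):
    """Standard haze formation model: I = J * t + A * (1 - t)."""
    t = transmission_from_depth(depth, beta)
    return clear * t + airlight * (1.0 - t)

def depth_transmission_consistency(depth_pred, trans_pred, beta=1.0):
    """Illustrative L1 penalty tying a predicted depth map to a predicted
    transmission map, in the spirit of the paper's consistency loss."""
    return float(np.mean(np.abs(transmission_from_depth(depth_pred, beta) - trans_pred)))
```

The penalty is zero exactly when the two decoder outputs agree with t = e^(−βd), which is the correlation the loss is meant to exploit.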
Citations: 12
Vision Aided Dynamic Exploration of Unstructured Terrain with a Small-Scale Quadruped Robot
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196777
Donghyun Kim, D. Carballo, J. Carlo, Benjamin Katz, G. Bledt, Bryan Lim, Sangbae Kim
Legged robots have been highlighted as promising mobile platforms for disaster response and rescue scenarios because of their rough-terrain locomotion capability. In cluttered environments, small robots are desirable as they can maneuver through small gaps, narrow paths, or tunnels. However, small robots have their own set of difficulties, such as limited space for sensors, limited obstacle clearance, and scaled-down walking speed. In this paper, we extensively address these difficulties via effective sensor integration and exploitation of dynamic locomotion and jumping. We integrate two Intel RealSense sensors into the MIT Mini-Cheetah, a 0.3 m tall, 9 kg quadruped robot. Simple and effective filtering and evaluation algorithms are used for foothold adjustment and obstacle avoidance. We showcase the exploration of highly irregular terrain using dynamic trotting and jumping with the small-scale, fully sensorized Mini-Cheetah quadruped robot.
Citations: 65
Probabilistic Effect Prediction through Semantic Augmentation and Physical Simulation
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197477
A. Bauer, Peter Schmaus, F. Stulp, Daniel Leidner
Nowadays, robots are mechanically able to perform highly demanding tasks, where AI-based planning methods are used to schedule a sequence of actions that result in the desired effect. However, it is not always possible to know the exact outcome of an action in advance, as failure situations may occur at any time. To enhance failure tolerance, we propose to predict the effects of robot actions by augmenting collected experience with semantic knowledge and leveraging realistic physics simulations. That is, we consider semantic similarity of actions in order to predict outcome probabilities for previously unknown tasks. Furthermore, physical simulation is used to gather simulated experience that makes the approach robust even in extreme cases. We show how this concept is used to predict action success probabilities and how this information can be exploited throughout future planning trials. The concept is evaluated in a series of real world experiments conducted with the humanoid robot Rollin’ Justin.
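One way to read "semantic similarity of actions in order to predict outcome probabilities for previously unknown tasks" is as a similarity-weighted average over previously observed outcomes. The sketch below is purely illustrative: the Jaccard similarity over action descriptions, the 0.5 fallback prior, and all names are assumptions, not the paper's estimator.

```python
def jaccard(a, b):
    """Toy semantic similarity: Jaccard index over the word sets of two
    action descriptions (stand-in for a real semantic-knowledge measure)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def predicted_success(new_action, experience, similarity=jaccard):
    """Estimate the success probability of an unseen action as a
    similarity-weighted average of known success rates; fall back to an
    uninformed 0.5 prior when nothing in experience is related."""
    num = den = 0.0
    for action, rate in experience.items():
        w = similarity(new_action, action)
        num += w * rate
        den += w
    return num / den if den > 0 else 0.5
```

With `experience = {"pick cup": 0.9, "push cup": 0.5}`, the unseen action `"pick bottle"` inherits its estimate almost entirely from the semantically related `"pick cup"`.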
Citations: 11
High Resolution Soft Tactile Interface for Physical Human-Robot Interaction
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197365
Isabella Huang, R. Bajcsy
If robots and humans are to coexist and cooperate in society, it would be useful for robots to be able to engage in tactile interactions. Touch is an intuitive communication tool as well as a fundamental method by which we assist each other physically. Tactile abilities are challenging to engineer in robots, since both mechanical safety and sensory intelligence are imperative. Existing work reveals a trade-off between these principles: tactile interfaces that are high in resolution are not easily adapted to human-sized geometries, nor are they generally compliant enough to guarantee safety. On the other hand, soft tactile interfaces deliver intrinsically safe mechanical properties, but their non-linear characteristics render them difficult to use for timely sensing and control. We propose a robotic system equipped with a completely soft, and therefore safe, tactile interface that is large enough to interact with human upper limbs, while producing high-resolution tactile sensory readings via depth-camera imaging of the soft interface. We present and validate a data-driven model that maps point cloud data to contact forces, and verify its efficacy by demonstrating two real-world applications. In particular, the robot is able to react to a human finger’s pokes and change its pose based on the tactile input. In addition, we also demonstrate that the robot can act as an assistive device that dynamically supports and follows a human forearm from underneath.
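The paper learns its point-cloud-to-force mapping from data; a linear-elastic surrogate at least shows the shape of such a mapping. Everything below (mean-indentation form, the `stiffness` constant) is an assumption for illustration, not the learned model:

```python
import numpy as np

def contact_force(rest_points, deformed_points, stiffness=500.0):
    """Toy surrogate for a point-cloud-to-force model: net normal force
    proportional to the mean inward displacement of the observed point
    cloud (linear-elastic assumption; the paper learns this mapping)."""
    disp = rest_points[:, 2] - deformed_points[:, 2]  # indentation along z
    disp = np.clip(disp, 0.0, None)                   # ignore outward motion
    return stiffness * float(disp.mean())
```

A real soft interface is non-linear, which is exactly why the authors fit the mapping from data instead of fixing a stiffness constant.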
Citations: 6
A Novel Solar Tracker Driven by Waves: From Idea to Implementation*
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196998
Ruoyu Xu, Hengli Liu, Chongfeng Liu, Zhenglong Sun, Tin Lun Lam, Huihuan Qian
Traditional solar trackers often adopt motors to automatically adjust the attitude of the solar panels towards the sun for maximum power efficiency. In this paper, a novel design of solar tracker for the ocean environment is introduced. Exploiting the fluctuations induced by waves, electromagnetic brakes are used instead of motors to adjust the attitude of the solar panels. Compared with traditional solar trackers, the proposed one is simpler in hardware while its harvesting efficiency is similar. The desired attitude is computed from the local position and time. Then, based on the dynamic model of the system, the angular acceleration of the solar panels is estimated and a control algorithm is proposed to decide the release and lock states of the brakes. In this manner, the attitude of the solar panels can be adjusted using only two brakes. Experiments are conducted to validate the acceleration estimator and the dynamic model. Finally, the feasibility of the proposed solar tracker is tested on a real water surface. The results show that the system is able to adjust 40° in two dimensions within 28 seconds.
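Computing a desired attitude from local position and time reduces to standard solar-position formulas (declination plus hour angle). A sketch using the common approximations, with azimuth measured clockwise from north (the paper's exact computation is not specified, so this is a generic stand-in):

```python
import math

def sun_elevation_azimuth(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth (degrees) from latitude,
    day of year, and local solar time, via the standard declination and
    hour-angle formulas. Accurate to well under a degree for tracking."""
    lat = math.radians(lat_deg)
    # Declination: delta = -23.44 deg * cos(2*pi/365 * (N + 10))
    decl = math.radians(-23.44) * math.cos(2 * math.pi / 365 * (day_of_year + 10))
    # Hour angle: 15 degrees per hour from solar noon
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    az = math.atan2(-math.cos(decl) * math.sin(hour_angle),
                    math.cos(lat) * math.sin(decl)
                    - math.sin(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(el), math.degrees(az) % 360.0
```

At solar noon on the summer solstice at 40° N this gives an elevation near 90° − 40° + 23.44° ≈ 73.4° with the sun due south (azimuth 180°), which matches the textbook geometry.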
Citations: 2
Picking Thin Objects by Tilt-and-Pivot Manipulation and Its Application to Bin Picking
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197493
Zhekai Tong, Tierui He, Chung Hee Kim, Yu Hin Ng, Qianyi Xu, Jungwon Seo
This paper introduces the technique of tilt-and-pivot manipulation, a new method for picking thin, rigid objects lying on a flat surface through robotic dexterous in-hand manipulation. During the manipulation process, the gripper is controlled to reorient about its contact with the object so that a finger can get into the space between the object and the supporting surface, which is formed by tilting up the object, with no relative sliding motion at the contact. As a result, a pinch grasp can be obtained on the faces of the thin object with ease. We discuss issues regarding the kinematics and planning of tilt-and-pivot, effector shape design, and the overall practicality of the manipulation technique, which is general enough to be applicable to any rigid convex polygonal object. We also present a set of experiments in a range of bin picking scenarios.
Citations: 6
Optimizing performance in automation through modular robots
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196590
Stefan B. Liu, M. Althoff
Flexible manufacturing and automation require robots that can be adapted to changing tasks. We propose to use modular robots that are customized from given modules for a specific task. This work presents an algorithm for proposing a module composition that is optimal with respect to performance metrics such as cycle time and energy efficiency, while considering kinematic, dynamic, and obstacle constraints. Tasks are defined as trajectories in Cartesian space, as a list of poses for the robot to reach as fast as possible, or as dexterity in a desired workspace. In a simulated comparison with commercially available industrial robots, we demonstrate the superiority of our approach in randomly generated tasks with respect to the chosen performance metrics. We use our modular robot proModular.1 for the comparison.
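The selection problem can be pictured as a search over module compositions scored by a performance metric under feasibility constraints. A brute-force sketch over a tiny two-joint, two-link design space (all names and the reachability-only constraint are illustrative; the paper's optimizer also handles dynamics and obstacle constraints):

```python
from itertools import product

def best_composition(joint_options, link_options, task_reach, cycle_time):
    """Exhaustively score two-joint, two-link module compositions and keep
    the kinematically feasible design with the lowest cycle time."""
    best, best_time = None, float("inf")
    for joints in product(joint_options, repeat=2):
        for links in product(link_options, repeat=2):
            reach = sum(link["length"] for link in links)
            if reach < task_reach:          # reachability constraint
                continue
            t = cycle_time(joints, links)   # task-specific performance metric
            if t < best_time:
                best, best_time = (joints, links), t
    return best, best_time
```

Exhaustive enumeration only works for toy catalogs; with realistic module counts the search space grows combinatorially, which is why a dedicated optimizer is needed.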
Citations: 12
Finding Missing Skills for High-Level Behaviors
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197223
Adam Pacheck, Salar Moarref, H. Kress-Gazit
Recently, Linear Temporal Logic (LTL) has been used as a formalism for defining high-level robot tasks, and LTL synthesis has been used to automatically create correct-by-construction robot control. The underlying premise of this approach is that the robot has a set of actions, or skills, that can be composed to achieve the high-level task. In this paper we consider LTL specifications that cannot be synthesized into robot control due to a lack of appropriate skills; we present algorithms for automatically suggesting new or modified skills for the robot that will guarantee the task will be achieved. We demonstrate our approach with a physical Baxter robot and a simulated KUKA IIWA arm.
Citations: 6
End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197557
Zhenbo Song, Jianfeng Lu, Tong Zhang, Hongdong Li
Inter-vehicle distance and relative velocity estimation are two basic functions for any ADAS (advanced driver-assistance system). In this paper, we propose a monocular-camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network. The key novelty of our method is the integration of multiple visual cues provided by any two time-consecutive monocular frames: a deep feature cue, a scene geometry cue, and a temporal optical flow cue. We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field (i.e., optical flow). We implement the method with a lightweight deep neural network. Extensive experiments confirm the superior performance of our method over other state-of-the-art methods in terms of estimation accuracy, computational speed, and memory footprint.
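For context, the classical monocular baseline such a network competes with is the pinhole relation: a vehicle of known real height H spanning h pixels at focal length f (in pixels) lies at roughly d = f·H/h, and differencing two range estimates gives relative velocity. A sketch with illustrative numbers (this is the textbook geometry, not the paper's learned model):

```python
def distance_from_height(focal_px, real_height_m, bbox_height_px):
    """Pinhole-camera range estimate d = f * H / h, using a known vehicle
    height H (m), focal length f (px), and bounding-box height h (px)."""
    return focal_px * real_height_m / bbox_height_px

def relative_velocity(d_prev, d_curr, dt):
    """Finite-difference relative velocity (m/s) from two range estimates
    taken dt seconds apart; negative means the gap is closing."""
    return (d_curr - d_prev) / dt
```

The baseline's weakness is plain from the formula: any error in the assumed vehicle height H or the detected box height h propagates directly into d, which motivates learning distance and velocity end to end instead.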
Citations: 10
Helping Robots Learn: A Human-Robot Master-Apprentice Model Using Demonstrations via Virtual Reality Teleoperation
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196754
Joseph DelPreto, J. Lipton, Lindsay M. Sanneman, Aidan J. Fay, Christopher K. Fourie, Changhyun Choi, D. Rus
As artificial intelligence becomes an increasingly prevalent method of enhancing robotic capabilities, it is important to consider effective ways to train these learning pipelines and to leverage human expertise. Working towards these goals, a master-apprentice model is presented and is evaluated during a grasping task for effectiveness and human perception. The apprenticeship model augments self-supervised learning with learning by demonstration, efficiently using the human’s time and expertise while facilitating future scalability to supervision of multiple robots; the human provides demonstrations via virtual reality when the robot cannot complete the task autonomously. Experimental results indicate that the robot learns a grasping task with the apprenticeship model faster than with a solely self-supervised approach and with fewer human interventions than a solely demonstration-based approach; 100% grasping success is obtained after 150 grasps with 19 demonstrations. Preliminary user studies evaluating workload, usability, and effectiveness of the system yield promising results for system scalability and deployability. They also suggest a tendency for users to overestimate the robot’s skill and to generalize its capabilities, especially as learning improves.
{"title":"Helping Robots Learn: A Human-Robot Master-Apprentice Model Using Demonstrations via Virtual Reality Teleoperation","authors":"Joseph DelPreto, J. Lipton, Lindsay M. Sanneman, Aidan J. Fay, Christopher K. Fourie, Changhyun Choi, D. Rus","doi":"10.1109/ICRA40945.2020.9196754","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196754","url":null,"abstract":"As artificial intelligence becomes an increasingly prevalent method of enhancing robotic capabilities, it is important to consider effective ways to train these learning pipelines and to leverage human expertise. Working towards these goals, a master-apprentice model is presented and is evaluated during a grasping task for effectiveness and human perception. The apprenticeship model augments self-supervised learning with learning by demonstration, efficiently using the human’s time and expertise while facilitating future scalability to supervision of multiple robots; the human provides demonstrations via virtual reality when the robot cannot complete the task autonomously. Experimental results indicate that the robot learns a grasping task with the apprenticeship model faster than with a solely self-supervised approach and with fewer human interventions than a solely demonstration-based approach; 100% grasping success is obtained after 150 grasps with 19 demonstrations. Preliminary user studies evaluating workload, usability, and effectiveness of the system yield promising results for system scalability and deployability. 
They also suggest a tendency for users to overestimate the robot’s skill and to generalize its capabilities, especially as learning improves.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76155789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
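The master-apprentice interaction above can be sketched as a toy decision loop: the robot grasps autonomously, and only after repeated failures does it request a human VR demonstration to learn from. The failure threshold, success probability, and return format are illustrative assumptions, not the paper's actual system:

```python
import random

def apprenticeship_episode(policy_success_prob, max_failures=3, rng=None):
    """One grasp episode in a toy master-apprentice loop: the robot tries
    autonomously; after `max_failures` consecutive failures it asks for a
    human VR demonstration, which is assumed to succeed and would be
    stored as a training example. Numbers are illustrative only."""
    rng = rng or random.Random(0)
    failures = 0
    while failures < max_failures:
        if rng.random() < policy_success_prob:
            # Autonomous success: no human time spent on this episode.
            return {"success": True, "demonstration": False, "failures": failures}
        failures += 1
    # Fall back to the human "master" via VR teleoperation.
    return {"success": True, "demonstration": True, "failures": failures}

result = apprenticeship_episode(policy_success_prob=0.2)
print(result)
```

The point of this structure is the one the abstract makes: human time is spent only when the learner is stuck, so demonstrations stay rare as the policy's success rate improves, which is what makes supervising multiple robots plausible.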
Journal
2020 IEEE International Conference on Robotics and Automation (ICRA)