
Latest publications from the 2019 International Conference on Robotics and Automation (ICRA)

RESLAM: A real-time robust edge-based SLAM system
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794462
Fabian Schenk, F. Fraundorfer
Simultaneous Localization and Mapping is a key requirement for many practical applications in robotics. In this work, we present RESLAM, a novel edge-based SLAM system for RGBD sensors. Due to their sparse representation, larger convergence basin and stability under illumination changes, edges are a promising alternative to feature-based or other direct approaches. We build a complete SLAM pipeline with camera pose estimation, sliding window optimization, loop closure and relocalisation capabilities that utilizes edges throughout all steps. In our system, we additionally refine the initial depth from the sensor, the camera poses and the camera intrinsics in a sliding window to increase accuracy. Further, we introduce an edge-based verification for loop closures that can also be applied for relocalisation. We evaluate RESLAM on a wide variety of benchmark datasets that include difficult scenes and camera motions, and also present qualitative results. We show that this novel edge-based SLAM system performs comparably to state-of-the-art methods, while running in real-time on a CPU. RESLAM is available as open-source software; code is available at https://github.com/fabianschenk/RESLAM
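Edge-based tracking of this kind is often formulated as minimizing a distance-transform cost: a candidate pose scores low when the current frame's edges project onto the reference frame's edges. A minimal sketch of that residual (the brute-force distance transform and the toy 5x5 grid are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def distance_transform(edge_mask):
    """Brute-force Euclidean distance transform of a binary edge mask
    (fine for the tiny illustrative grid below; real systems use fast DTs)."""
    edges = np.argwhere(edge_mask)
    h, w = edge_mask.shape
    dt = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            dt[y, x] = np.hypot(edges[:, 0] - y, edges[:, 1] - x).min()
    return dt

def edge_alignment_cost(projected_edges, dt):
    """Sum of distance-transform values at the (u, v) pixels where the current
    frame's edges project into the keyframe: zero means perfect alignment."""
    return float(sum(dt[v, u] for u, v in projected_edges))

# Keyframe with a vertical edge along column 2 of a 5x5 image.
mask = np.zeros((5, 5), dtype=bool)
mask[:, 2] = True
dt = distance_transform(mask)

cost_aligned = edge_alignment_cost([(2, v) for v in range(5)], dt)  # on the edge
cost_shifted = edge_alignment_cost([(4, v) for v in range(5)], dt)  # 2 px off
```

A pose optimizer would then adjust the candidate camera pose to drive this cost toward zero.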
Citations: 29
Multimodal Spatio-Temporal Information in End-to-End Networks for Automotive Steering Prediction
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794410
M. Abou-Hussein, Stefan H. Müller-Weinfurtner, J. Boedecker
We study the end-to-end steering problem using visual input data from an onboard vehicle camera. An empirical comparison between spatial, spatio-temporal and multimodal models is performed, assessing each concept's performance from two points of evaluation: first, how closely the model predicts and imitates a real-life driver's behavior; second, the smoothness of the predicted steering command. The latter is a newly proposed metric. Building on our results, we propose a new recurrent multimodal model. The suggested model has been tested on a custom dataset recorded by BMW, as well as the public dataset provided by Udacity. Results show that it outperforms previously published scores. Further, a steering correction concept from off-lane driving through the inclusion of correction frames is presented. We show empirically that our suggestion leads to promising results.
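The abstract does not define the proposed smoothness metric; as a hedged illustration, one common proxy is the mean squared second difference of the steering sequence (a discrete second-derivative estimate; this particular formula is an assumption for illustration only, not the paper's metric):

```python
def steering_smoothness(steering, dt=0.05):
    """Illustrative smoothness proxy: mean squared second difference of the
    steering command sequence sampled at interval dt. Lower is smoother."""
    if len(steering) < 3:
        raise ValueError("need at least 3 samples")
    acc = [(steering[i + 1] - 2 * steering[i] + steering[i - 1]) / dt ** 2
           for i in range(1, len(steering) - 1)]
    return sum(a * a for a in acc) / len(acc)

smooth = steering_smoothness([0.0, 1.0, 2.0, 3.0, 4.0])   # constant slope
jittery = steering_smoothness([0.0, 2.0, 0.0, 2.0, 0.0])  # oscillating command
```

A constant-rate steering sequence scores zero under this proxy, while an oscillating one is penalized heavily.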
Citations: 5
Improved A-search guided tree construction for kinodynamic planning
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793705
Yebin Wang
With node selection directed by a heuristic cost [1]–[3], the A-search guided tree (AGT) is constructed on-the-fly and enables fast kinodynamic planning. This work presents two variants of AGT that improve computational efficiency. An improved AGT (i-AGT) biases node expansion by prioritizing control actions, an analogy to prioritizing nodes. Focusing on node selection, a bi-directional AGT (BAGT) introduces a second tree rooted at the goal in order to provide a better heuristic cost for the first tree. The effectiveness of BAGT pivots on the fact that the second tree encodes obstacle information near the goal. A case study demonstrates that i-AGT consistently reduces the complexity of the tree and improves computational efficiency, and that BAGT helps in most but not all cases, with no benefit observed for simple cases in particular.
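The node-selection idea the variants build on can be sketched as a tree grown by always expanding the frontier node with the lowest heuristic cost. A minimal grid-world sketch (the 4-connected successor model and Manhattan heuristic are illustrative assumptions; a kinodynamic planner would expand nodes with control actions instead):

```python
import heapq

def agt_search(start, goal, successors, heuristic, max_expansions=100000):
    """Minimal heuristic-guided tree sketch: repeatedly pop the frontier node
    with the lowest f = g + h and expand it; each new state is attached to the
    tree once. Illustrative only, not the paper's algorithm."""
    frontier = [(heuristic(start, goal), 0.0, start)]
    parent = {start: None}
    while frontier and max_expansions:
        max_expansions -= 1
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt, cost in successors(node):
            if nxt not in parent:
                parent[nxt] = node
                heapq.heappush(frontier,
                               (g + cost + heuristic(nxt, goal), g + cost, nxt))
    return None

# 4-connected 5x5 grid with unit step costs and Manhattan-distance heuristic.
def successors(p):
    x, y = p
    return [((x + dx, y + dy), 1.0)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

path = agt_search((0, 0), (4, 4), successors, manhattan)
```

The bi-directional (BAGT) variant would replace `manhattan` with a cost-to-go read off a second tree grown backward from the goal.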
Citations: 9
SEG-VoxelNet for 3D Vehicle Detection from RGB and LiDAR Data
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793492
Jian Dou, Jianru Xue, Jianwu Fang
This paper proposes SEG-VoxelNet, which takes RGB images and LiDAR point clouds as inputs for accurately detecting 3D vehicles in autonomous driving scenarios, and which for the first time introduces a semantic segmentation technique to assist 3D LiDAR point cloud based detection. Specifically, SEG-VoxelNet is composed of two sub-networks: an image semantic segmentation network (SEG-Net) and an improved-VoxelNet. The SEG-Net generates the semantic segmentation map, which represents the probability of the category for each pixel. The improved-VoxelNet is capable of effectively fusing point cloud data with image semantic features and generating accurate 3D bounding boxes of vehicles. Experiments on the KITTI 3D vehicle detection benchmark show that our approach outperforms state-of-the-art methods.
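The fusion step can be illustrated by projecting each LiDAR point into the image and appending its segmentation probability as an extra point feature. A minimal sketch under assumed shapes and names (points are taken to be already in the camera frame; this is not the authors' implementation):

```python
import numpy as np

def append_semantics(points_cam, seg_prob, K):
    """Project LiDAR points (N, 3) in the camera frame through intrinsics K
    and append the per-pixel vehicle probability from the segmentation map
    as a 4th point feature; points off-image (or behind the camera) get 0."""
    proj = (K @ points_cam.T).T                      # homogeneous pixel coords
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    h, w = seg_prob.shape
    ok = (points_cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
         & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    feat = np.zeros(len(points_cam))
    feat[ok] = seg_prob[uv[ok, 1], uv[ok, 0]]
    return np.hstack([points_cam, feat[:, None]])

K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
seg = np.zeros((100, 100))
seg[40:60, 40:60] = 0.9                  # assumed "vehicle" region

pts = np.array([[0.0, 0.0, 1.0],         # projects to (50, 50): in the region
                [2.0, 0.0, 1.0]])        # projects to (250, 50): off-image
fused = append_semantics(pts, seg, K)
```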
Citations: 36
WheeLeR: Wheel-Leg Reconfigurable Mechanism with Passive Gears for Mobile Robot Applications
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793686
Chuanqi Zheng, Kiju Lee
This paper presents a new passive wheel-leg transformation mechanism and its embodiment in a small mobile robot. The mechanism is based on a unique geared structure, allowing the wheel to transform between two modes, i.e., wheel or leg, potentially adapting to varying ground conditions. It consists of a central gear and legs with partial gears that rotate around the central gear to open or close the legs. When fully closed, the mechanism forms a seamless circular wheel; when opened, it operates in the leg mode. The central gear, actuated by the driving motor, generates opening and closing motions of the legs without using an additional actuator. The number of legs, their physical size, and the gear ratio between the central gear and the partial gears on the legs are adjustable. This design is mechanically simple, customizable, and easy to fabricate. For physical demonstration and experiments, a mobile robotic platform was built and its terrainability was tested using five different sets of the transformable wheels with varying sizes and gear ratios. For each design, wheel-leg transformation, obstacle climbing, and locomotion capabilities were tested in different ground conditions.
Citations: 28
Visual Coverage Control for Teams of Quadcopters via Control Barrier Functions
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793477
Riku Funada, María Santos, J. Yamauchi, T. Hatanaka, M. Fujita, M. Egerstedt
This paper presents a coverage control strategy for teams of quadcopters that ensures that no area is left unsurveyed in between the fields of view of the visual sensors mounted on the quadcopters. We present a locational cost that quantifies the team’s coverage performance according to the sensors’ performance function. Moreover, the cost function penalizes overlaps between the fields of view of the different sensors, with the objective of increasing the area covered by the team. A distributed control law is derived for the quadcopters so that they adjust their position and zoom according to the direction of ascent of the cost. Control barrier functions are implemented to ensure that, while executing the gradient ascent control law, no holes appear in between the fields of view of neighboring robots. The performance of the algorithm is evaluated in simulated experiments.
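In the paper, the barrier constraints for the multi-robot team are enforced alongside the gradient-ascent law (typically by solving a quadratic program). The principle can be illustrated with a scalar closed-form filter for a single-integrator state, which keeps a barrier function h(x) nonnegative while staying as close as possible to the nominal command (an illustrative 1-D reduction, not the paper's controller):

```python
def cbf_filter(u_nom, h, dh_dx, alpha=1.0):
    """Scalar control-barrier-function filter for x' = u: keep h(x) >= 0 by
    enforcing dh/dt = dh_dx * u >= -alpha * h, while modifying the nominal
    (e.g., coverage gradient-ascent) command as little as possible."""
    if dh_dx == 0:
        return u_nom                      # control cannot affect the barrier
    bound = -alpha * h / dh_dx
    if dh_dx > 0:
        return max(u_nom, bound)          # u must stay above the bound
    return min(u_nom, bound)              # u must stay below the bound

safe = cbf_filter(u_nom=-5.0, h=1.0, dh_dx=1.0)   # nominal violates: clipped
free = cbf_filter(u_nom=0.5, h=1.0, dh_dx=1.0)    # nominal already safe
```

In the multi-quadcopter setting, one such constraint per pair of neighboring fields of view (h measuring their overlap margin) enters a QP whose solution is the filtered velocity command.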
Citations: 16
Effort Estimation in Robot-aided Training with a Neural Network
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794281
A. D. Oliveira, Kevin Warburton, J. Sulzer, A. Deshpande
Robotic exoskeletons open up promising interventions during post-stroke rehabilitation by assisting individuals with sensorimotor impairments to complete therapy tasks. These devices have the ability to provide variable assistance tailored to individual-specific needs and, additionally, can measure several parameters associated with the movement execution. Metrics representative of movement quality are important to guide individualized treatment. While robots can provide data with high resolution, robustness, and consistency, the delineation of the human contribution in the presence of the kinematic guidance introduced by the robotic assistance is a significant challenge. In this paper, we propose a method for assessing voluntary effort from an individual wearing an upper-body exoskeleton called Harmony. The method separates the active torques generated by the wearer from the effects caused by unmodeled dynamics, passive neuromuscular properties, and involuntary forces. Preliminary results show that the effort estimated using the proposed method is consistent with the effort associated with muscle activity and is also sensitive to different effort levels, indicating that it can reliably evaluate the user's contribution to movement. This method has the potential to serve as a high-resolution assessment tool to monitor progress of movement quality throughout treatment and to evaluate motor recovery.
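At its core, the separation reduces to a residual: subtract the torque a learned model predicts for the passive and involuntary contributions from the measured joint torque. A toy sketch (the linear stiffness-damping passive model below stands in for the paper's neural network and is purely hypothetical):

```python
import numpy as np

def voluntary_effort(tau_measured, q, dq, passive_model):
    """Estimate the wearer's active torque as the residual between measured
    joint torque and the torque the model attributes to passive dynamics.
    `passive_model` is a hypothetical callable (q, dq) -> torque."""
    return np.asarray(tau_measured) - passive_model(q, dq)

def toy_passive(q, dq, k=2.0, b=0.5):
    """Stand-in passive model: per-joint linear stiffness plus viscous damping."""
    return k * np.asarray(q) + b * np.asarray(dq)

effort = voluntary_effort(tau_measured=[3.0, 0.5],
                          q=[1.0, 0.0], dq=[0.0, 1.0],
                          passive_model=toy_passive)
```

With this toy model the first joint retains 1.0 Nm of active torque, while the second joint's measured torque is fully explained by passive dynamics.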
Citations: 5
Inkjet Printable Actuators and Sensors for Soft-bodied Crawling Robots
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793827
Tung D. Ta, T. Umedachi, Y. Kawahara
Soft-bodied robots are attracting attention from researchers for their potential in designing compliant and adaptive robots. However, soft-bodied robots also pose many challenges, not only in non-linear control but also in design and fabrication. In particular, the incompatibility between soft materials and rigid sensors/actuators makes it difficult to design a fully compliant soft-bodied robot. In this paper, we propose all-printed sensors and actuators for designing soft-bodied robots by printing silver nano-particle ink on top of a flexible plastic film. We can print bending sensors and thermal actuators instantly with commodity home inkjet printers without any pre/post-processing. We exemplify the application of this fabrication method with an all-printed paper caterpillar robot that can inch forward and sense its body's bending angle.
Citations: 4
A Large-Deflection FBG Bending Sensor for SMA Bending Modules for Steerable Surgical Robots
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794302
Jun Sheng, N. Deaton, J. Desai
This paper presents the development of a fiber Bragg grating (FBG) bending sensor for shape memory alloy (SMA) bending modules. Due to their small form factor, low cost, and large-deflection capability, SMA bending modules can be used to construct disposable surgical robots for a variety of minimally invasive procedures. To realize closed-loop control of SMA bending modules, an intrinsic bending sensor is imperative. Due to the lack of bending sensors for SMA bending modules, we have developed an FBG bending sensor by integrating FBG fibers with a superelastic substrate using flexible adhesive. Since the substrate is ultra-thin and the adhesive is flexible, the sensor has low stiffness and can measure large curvatures. Additionally, due to the orthogonal arrangement of the sensor/actuator assembly, the influence of temperature variation caused by SMA actuation can be compensated. The working principle of the developed sensor was modeled, followed by simulations. After the developed model was evaluated experimentally, the sensor was integrated with an SMA bending module and cyclically deflected in both directions. The experimental results proved the relatively high measurement accuracy, high repeatability, and large measurable curvatures of the sensor, although hysteresis was observed due to friction.
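Curvature sensing with an FBG rests on the standard strain relation Δλ/λ0 = (1 − p_e)·ε, combined with the bending strain ε = κ·d for a fiber offset d from the neutral axis. A small sketch of the conversion (λ0, the photoelastic coefficient p_e ≈ 0.22 typical of silica fiber, and the offset d are assumed values, not taken from the paper):

```python
def curvature_from_fbg(delta_lambda_nm, lambda0_nm=1550.0,
                       photoelastic=0.22, offset_m=1e-4):
    """Convert an FBG Bragg-wavelength shift to bending curvature (1/m):
    strain = d_lambda / (lambda0 * (1 - p_e)), curvature = strain / d,
    with d the fiber's distance from the substrate's neutral axis."""
    strain = delta_lambda_nm / (lambda0_nm * (1.0 - photoelastic))
    return strain / offset_m

kappa = curvature_from_fbg(1.2)   # curvature implied by a 1.2 nm shift
```

Under these assumed parameters a 1.2 nm shift corresponds to roughly 10 m^-1 of curvature, i.e., a bend radius of about 10 cm.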
Citations: 7
Unsupervised Out-of-context Action Understanding
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793709
Hirokatsu Kataoka, Y. Satoh
The paper presents an unsupervised out-of-context action (O2CA) paradigm based on facilitating understanding by separately presenting both the human action and its context within a video sequence. As a means of generating an unsupervised label, we comprehensively evaluate responses from action-based (ActionNet) and context-based (ContextNet) convolutional neural networks (CNNs). Additionally, we have created three synthetic databases based on the human action (UCF101, HMDB51) and motion capture (SURREAL) datasets. We then conducted experimental comparisons between our approach and conventional approaches. We also compared our unsupervised learning method with supervised learning using an O2CA ground truth given by synthetic data. We achieved F-scores of 96.8 on Synth-UCF, 96.8 on Synth-HMDB, and 89.0 on SURREAL-O2CA.
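One hypothetical way to turn the two networks' responses into an unsupervised label is to flag a clip as out-of-context when the action-based and context-based predictions disagree. The decision rule below is an illustrative assumption, not the paper's exact criterion:

```python
def o2ca_label(action_probs, context_probs):
    """Hypothetical unsupervised labeling rule: a clip is out-of-context
    when ActionNet's top class differs from the class ContextNet infers
    from the background alone."""
    action_top = max(range(len(action_probs)), key=action_probs.__getitem__)
    context_top = max(range(len(context_probs)), key=context_probs.__getitem__)
    return action_top != context_top

matched = o2ca_label([0.7, 0.2, 0.1], [0.6, 0.3, 0.1])      # nets agree
mismatched = o2ca_label([0.7, 0.2, 0.1], [0.1, 0.2, 0.7])   # nets disagree
```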
{"title":"Unsupervised Out-of-context Action Understanding","authors":"Hirokatsu Kataoka, Y. Satoh","doi":"10.1109/ICRA.2019.8793709","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793709","abstract":"The paper presents an unsupervised out-of-context action (O2CA) paradigm that facilitates understanding by separately presenting human action and context within a video sequence. To generate an unsupervised label, we comprehensively evaluate the responses of action-based (ActionNet) and context-based (ContextNet) convolutional neural networks (CNNs). Additionally, we have created three synthetic databases based on the human action (UCF101, HMDB51) and motion capture (mocap) (SURREAL) datasets. We then conducted experimental comparisons between our approach and conventional approaches. We also compared our unsupervised learning method with supervised learning using an O2CA ground truth given by synthetic data. From the results obtained, we achieved F-scores of 96.8 on Synth-UCF, 96.8 on Synth-HMDB, and 89.0 on SURREAL-O2CA.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"11 1","pages":"8227-8233"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88391299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
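The abstract above describes deriving an unsupervised label from the responses of an action branch and a context branch. The toy sketch below shows one plausible labeling rule; the histogram-intersection agreement measure, the 0.5 threshold, and the function name are illustrative assumptions, not the paper's actual method.

```python
# Toy pseudo-labeling rule (an assumption, not the paper's): a clip is
# flagged "out of context" when the class distributions predicted by an
# action-based network and a context-based network disagree strongly.

def o2ca_pseudo_label(action_probs, context_probs, agree_thresh=0.5):
    """Return 1 (out-of-context) or 0 (in-context) from two softmax
    distributions over the same set of action classes."""
    # Agreement = probability mass the two distributions share
    # (histogram intersection); low overlap means the context branch
    # predicts a different action than the action branch.
    agreement = sum(min(a, c) for a, c in zip(action_probs, context_probs))
    return int(agreement < agree_thresh)
```

For example, if the action branch predicts class 0 with 0.9 while the context branch predicts class 1 with 0.9, the overlap is only 0.2 and the clip would be labeled out-of-context under this rule.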