
Latest publications from the 2020 IEEE International Conference on Robotics and Automation (ICRA)

A Bio-Signal Enhanced Adaptive Impedance Controller for Lower Limb Exoskeleton
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196774
Lin-qing Xia, Yachun Feng, Fan Chen, Xinyu Wu
The problem of human-exoskeleton interaction with uncertain dynamical parameters remains an open-ended research area. It requires an elaborate control strategy design of the exoskeleton to accommodate complex and unpredictable human body movements. In this paper, we propose a novel control approach for the lower limb exoskeleton to realize its task of assisting the human operator in walking. The main challenge of this study was to determine the human lower extremity dynamics, such as the joint torque. For this purpose, we developed a neural network-based torque estimation method that predicts human joint torques from surface electromyogram (sEMG) signals. Then a radial basis function neural network (RBF NN)-enhanced adaptive impedance controller is employed to ensure the exoskeleton tracks the desired motion trajectory of the human operator. Algorithm performance is evaluated with two healthy subjects and the rehabilitation lower-limb exoskeleton developed by the Shenzhen Institutes of Advanced Technology (SIAT).
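As an illustration of the two ingredients the abstract names, the sketch below combines a Gaussian RBF network that maps an sEMG feature vector to an estimated joint torque with a simple joint-space impedance law. The feature dimension, kernel form, and gain values are assumptions made here for illustration only; the paper's actual network structure and adaptation law are not reproduced.

```python
import numpy as np

def rbf_torque_estimate(semg_features, centers, widths, weights):
    """Estimate joint torque from an sEMG feature vector with a Gaussian RBF network.
    centers: (K, d) RBF centers, widths: (K,) kernel widths, weights: (K,) output weights."""
    phi = np.exp(-np.sum((semg_features - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return float(weights @ phi)

def impedance_torque(q, dq, q_des, dq_des, K, D, tau_human):
    """Joint-space impedance law: stiffness/damping tracking plus the estimated human torque."""
    return K * (q_des - q) + D * (dq_des - dq) + tau_human

# Toy usage with random parameters (illustration only, not trained values).
rng = np.random.default_rng(0)
centers, widths, weights = rng.normal(size=(10, 4)), np.ones(10), rng.normal(size=10)
tau_h = rbf_torque_estimate(rng.normal(size=4), centers, widths, weights)
tau_cmd = impedance_torque(q=0.30, dq=0.1, q_des=0.35, dq_des=0.0, K=60.0, D=5.0, tau_human=tau_h)
```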
{"title":"A Bio-Signal Enhanced Adaptive Impedance Controller for Lower Limb Exoskeleton","authors":"Lin-qing Xia, Yachun Feng, Fan Chen, Xinyu Wu","doi":"10.1109/ICRA40945.2020.9196774","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196774","url":null,"abstract":"The problem of human-exoskeleton interaction with uncertain dynamical parameters remains an open-ended research area. It requires an elaborate control strategy design of the exoskeleton to accommodate complex and unpredictable human body movements. In this paper, we proposed a novel control approach for the lower limb exoskeleton to realize its task of assisting the human operator walking. The main challenge of this study was to determine the human lower extremity dynamics, such as the joint torque. For this purpose, we developed a neural network-based torque estimation method. It can predict the joint torques of humans with surface electromyogram signals (sEMG). Then an radial basis function neural network (RBF NN) enhanced adaptive impedance controller is employed to ensure exoskeleton track desired motion trajectory of a human operator. Algorithm performance is evaluated with two healthy subjects and the rehabilitation lower-limb exoskeleton developed by Shenzhen Institutes of Advanced Technology (SIAT).","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"12 1","pages":"4739-4744"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84315270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
CCAN: Constraint Co-Attention Network for Instance Grasping
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197182
Junhao Cai, X. Tao, Hui Cheng, Zhanpeng Zhang
Instance grasping, in which a robot aims to grasp a specified target object in a cluttered scene, is a challenging robotic grasping task. In this paper, we propose a novel end-to-end instance grasping method using only monocular workspace and query images, where the workspace image includes several objects and the query image contains only the target object. To effectively extract discriminative features and facilitate the training process, a learning-based method, referred to as the Constraint Co-Attention Network (CCAN), is proposed, which consists of a constraint co-attention module and a grasp affordance predictor. An effective co-attention module is presented to construct the features of a workspace image from the extracted features of the query image. By introducing soft constraints into the co-attention module, it highlights the target object's features while trivializing other objects' features in the workspace image. Using the features extracted from the co-attention module, the cascaded grasp affordance interpreter network predicts the grasp configuration only for the target object. The training of the CCAN is based entirely on simulated self-supervision. Extensive qualitative and quantitative experiments show the effectiveness of our method in both simulated and real-world environments, even for entirely unseen objects.
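The co-attention idea can be illustrated with a plain NumPy sketch: workspace-image features are re-weighted by their similarity to query-image features, so that locations resembling the target are emphasised and the rest are suppressed. The dot-product similarity, softmax gating, and feature shapes below are assumptions for illustration; the actual CCAN module (with its soft constraints and cascaded affordance predictor) is more involved.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(workspace_feats, query_feats):
    """Re-weight workspace features by their similarity to query-image features.
    workspace_feats: (Nw, d) features of the cluttered scene; query_feats: (Nq, d) target-only features.
    Returns (Nw, d) features in which locations resembling the query are emphasised."""
    sim = workspace_feats @ query_feats.T / np.sqrt(workspace_feats.shape[1])  # (Nw, Nq) similarities
    attn = softmax(sim, axis=1)                      # attend over query locations
    attended_query = attn @ query_feats              # (Nw, d) query context per workspace location
    gate = softmax((workspace_feats * attended_query).sum(axis=1))  # soft gate over workspace locations
    return gate[:, None] * workspace_feats

feats = co_attention(np.random.rand(64, 32), np.random.rand(16, 32))
```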
{"title":"CCAN: Constraint Co-Attention Network for Instance Grasping","authors":"Junhao Cai, X. Tao, Hui Cheng, Zhanpeng Zhang","doi":"10.1109/ICRA40945.2020.9197182","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197182","url":null,"abstract":"Instance grasping is a challenging robotic grasping task when a robot aims to grasp a specified target object in cluttered scenes. In this paper, we propose a novel end-to-end instance grasping method using only monocular workspace and query images, where the workspace image includes several objects and the query image only contains the target object. To effectively extract discriminative features and facilitate the training process, a learning-based method, referred to as Constraint Co-Attention Network (CCAN), is proposed which consists of a constraint co-attention module and a grasp affordance predictor. An effective co-attention module is presented to construct the features of a workspace image from the extracted features of the query image. By introducing soft constraints into the co-attention module, it highlights the target object’s features while trivializes other objects’ features in the workspace image. Using the features extracted from the co-attention module, the cascaded grasp affordance interpreter network only predicts the grasp configuration for the target object. The training of the CCAN is totally based on simulated self-supervision. Extensive qualitative and quantitative experiments show the effectiveness of our method both in simulated and real-world environments even for totally unseen objects.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"8 1","pages":"8353-8359"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84972810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Hybrid Topological and 3D Dense Mapping through Autonomous Exploration for Large Indoor Environments
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197226
Clara Gómez, M. Fehr, A. Millane, A. C. Hernández, Juan I. Nieto, R. Barber, R. Siegwart
Robots require a detailed understanding of the 3D structure of the environment for autonomous navigation and path planning. A popular approach is to represent the environment using metric, dense 3D maps such as 3D occupancy grids. However, in large environments the computational power required by most state-of-the-art 3D dense mapping systems compromises precision and real-time capability. In this work, we propose a novel mapping method that is able to build and maintain 3D dense representations for large indoor environments using standard CPUs. Topological global representations and 3D dense submaps are maintained as a hybrid global map. A submap is generated for every newly visited place. A place (room) is identified as an isolated part of the environment connected to other parts through transit areas (doors). This semantic partitioning of the environment allows for more efficient mapping and path planning. We also propose a method for autonomous exploration that directly builds the hybrid representation in real time. We validate the real-time performance of our hybrid system on simulated and real environments with respect to mapping and path planning. The improvement in execution time and memory requirements supports the contribution of the proposed work.
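A minimal sketch of the topological layer is given below: rooms are graph nodes, transit areas (doors) are weighted edges, and global planning reduces to a shortest-path query over rooms, with dense planning then confined to one submap at a time. The room names, edge costs, and the omission of the dense submaps are illustrative assumptions, not the paper's data structures.

```python
import heapq

# Topological layer: rooms are nodes, transit areas (doors) are weighted edges.
# Each room would additionally carry its own dense 3D submap (omitted here).
rooms = {
    "corridor": {"office": 1.0, "lab": 2.0},
    "office":   {"corridor": 1.0},
    "lab":      {"corridor": 2.0, "storage": 1.5},
    "storage":  {"lab": 1.5},
}

def shortest_room_sequence(graph, start, goal):
    """Dijkstra over the topological graph; dense planning is then done only inside each room."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_room_sequence(rooms, "office", "storage"))  # (4.5, ['office', 'corridor', 'lab', 'storage'])
```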
{"title":"Hybrid Topological and 3D Dense Mapping through Autonomous Exploration for Large Indoor Environments","authors":"Clara Gómez, M. Fehr, A. Millane, A. C. Hernández, Juan I. Nieto, R. Barber, R. Siegwart","doi":"10.1109/ICRA40945.2020.9197226","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197226","url":null,"abstract":"Robots require a detailed understanding of the 3D structure of the environment for autonomous navigation and path planning. A popular approach is to represent the environment using metric, dense 3D maps such as 3D occupancy grids. However, in large environments the computational power required for most state-of-the-art 3D dense mapping systems is compromising precision and real-time capability. In this work, we propose a novel mapping method that is able to build and maintain 3D dense representations for large indoor environments using standard CPUs. Topological global representations and 3D dense submaps are maintained as hybrid global map. Submaps are generated for every new visited place. A place (room) is identified as an isolated part of the environment connected to other parts through transit areas (doors). This semantic partitioning of the environment allows for a more efficient mapping and path-planning. We also propose a method for autonomous exploration that directly builds the hybrid representation in real time.We validate the real-time performance of our hybrid system on simulated and real environments regarding mapping and path-planning. The improvement in execution time and memory requirements upholds the contribution of the proposed work.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"38 1","pages":"9673-9679"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85624719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Prediction of Gait Cycle Percentage Using Instrumented Shoes with Artificial Neural Networks
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196747
Antonio Prado, Xiya Cao, Xiangzhuo Ding, S. Agrawal
Gait training is widely used to treat gait abnormalities. Traditional gait measurement systems are limited to instrumented laboratories. Even though gait measurements can be made in these settings, it is challenging to estimate gait parameters robustly in real time for gait rehabilitation, especially when walking over ground. In this paper, we present a novel approach to track the continuous gait cycle during overground walking outside the laboratory. In this approach, we instrument standard footwear with a sensorized insole and an inertial measurement unit. Artificial neural networks are applied to the raw data obtained from the insoles and IMUs to compute the continuous percentage of the gait cycle for the entire walking session. We show that when tested with novel subjects, we can predict the gait cycle with a Root Mean Square Error (RMSE) of 7.2%. The onset of each cycle can be detected within an RMSE time of 41.5 ms with a 99% detection rate. The algorithm was tested with 18,840 strides collected from 24 adults. In this paper, we tested combinations of fully-connected layers, an encoder-decoder using convolutional layers, and recurrent layers to identify the architecture that provided the best performance.
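Because the gait-cycle percentage wraps around at 100%, both the prediction target and the error metric have to respect that cyclicity. The sketch below shows one common way to handle it: have the network predict the phase as a (sin, cos) pair and evaluate a wrap-aware RMSE. This encoding is an assumption made here for illustration and is not stated in the abstract.

```python
import numpy as np

def phase_to_percent(sin_cos):
    """Convert a network's (sin, cos) output back to a gait-cycle percentage in [0, 100)."""
    angle = np.arctan2(sin_cos[..., 0], sin_cos[..., 1])   # radians in (-pi, pi]
    return (np.degrees(angle) % 360.0) / 3.6

def cyclic_rmse(pred_percent, true_percent):
    """RMSE that respects the wrap-around at 100% (99% vs 1% is a 2% error, not 98%)."""
    diff = (pred_percent - true_percent + 50.0) % 100.0 - 50.0
    return float(np.sqrt(np.mean(diff ** 2)))

pred = phase_to_percent(np.array([[0.05, 1.0], [1.0, 0.0]]))   # ~0.8% and 25% of the cycle
print(cyclic_rmse(pred, np.array([99.0, 26.0])))
```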
{"title":"Prediction of Gait Cycle Percentage Using Instrumented Shoes with Artificial Neural Networks","authors":"Antonio Prado, Xiya Cao, Xiangzhuo Ding, S. Agrawal","doi":"10.1109/ICRA40945.2020.9196747","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196747","url":null,"abstract":"Gait training is widely used to treat gait abnormalities. Traditional gait measurement systems are limited to instrumented laboratories. Even though gait measurements can be made in these settings, it is challenging to estimate gait parameters robustly in real-time for gait rehabilitation, especially when walking over-ground. In this paper, we present a novel approach to track the continuous gait cycle during overground walking outside the laboratory. In this approach, we instrument standard footwear with a sensorized insole and an inertial measurement unit. Artificial neural networks are used on the raw data obtained from the insoles and IMUs to compute the continuous percentage of the gait cycle for the entire walking session. We show in this paper that when tested with novel subjects, we can predict the gait cycle with a Root Mean Square Error (RMSE) of 7.2%. The onset of each cycle can be detected within an RMSE time of 41.5 ms with a 99% detection rate. The algorithm was tested with 18840 strides collected from 24 adults. In this paper, we tested a combination of fully-connected layers, an Encoder-Decoder using convolutional layers, and recurrent layers to identify an architecture that provided the best performance.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"54 1","pages":"2834-2840"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85875753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Using multiple short hops for multicopter navigation with only inertial sensors
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196610
Xiangyu Wu, M. Mueller
In certain challenging environments, such as inside buildings on fire, the main sensors (e.g., cameras, LiDARs, and GPS systems) used for multicopter localization can become unavailable. Direct integration of the inertial navigation sensors (the accelerometer and rate gyroscope) is, however, unaffected by external disturbances, but rapid error accumulation makes a naive application of such a strategy feasible only for very short durations. In this work we propose a motion strategy for reducing the inertial navigation state estimation error of multicopters. The proposed strategy breaks a long-duration flight into multiple short-duration hops between which the vehicle remains stationary on the ground. When the vehicle is stationary, zero-velocity pseudo-measurements are introduced into an extended Kalman filter to reduce the state estimation error. We perform closed-loop control experiments with a multicopter for evaluation. The mean absolute position estimation error was 3.4% over a total flight distance of 5 m, an 80% reduction compared to standard inertial navigation without this strategy. In addition, an experiment with a total flight distance of 10 m demonstrates the ability of this method to navigate a multicopter in a real-world environment; the final trajectory tracking error was 3% of the total flight distance.
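The zero-velocity pseudo-measurement is a standard Kalman-filter measurement update in which the "measured" velocity is zero while the vehicle rests on the ground. A minimal NumPy sketch of that update is shown below; the state layout, measurement noise, and six-state example are assumptions for illustration, not the paper's actual filter.

```python
import numpy as np

def zero_velocity_update(x, P, vel_idx, meas_var=1e-4):
    """EKF measurement update with a zero-velocity pseudo-measurement.
    x: (n,) state estimate, P: (n, n) covariance, vel_idx: indices of the velocity states.
    While the vehicle sits on the ground its true velocity is zero, so z = 0 is 'measured'."""
    n = x.shape[0]
    H = np.zeros((len(vel_idx), n))
    H[np.arange(len(vel_idx)), vel_idx] = 1.0          # measurement picks out the velocity states
    R = meas_var * np.eye(len(vel_idx))
    y = np.zeros(len(vel_idx)) - H @ x                 # innovation: 0 - predicted velocity
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new

# Toy 6-state example: [px, py, pz, vx, vy, vz]
x0, P0 = np.array([1.0, 2.0, 0.0, 0.3, -0.2, 0.1]), np.eye(6)
x1, P1 = zero_velocity_update(x0, P0, vel_idx=[3, 4, 5])
```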
{"title":"Using multiple short hops for multicopter navigation with only inertial sensors","authors":"Xiangyu Wu, M. Mueller","doi":"10.1109/ICRA40945.2020.9196610","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196610","url":null,"abstract":"In certain challenging environments, such as inside buildings on fire, the main sensors (e.g. cameras, LiDARs and GPS systems) used for multicopter localization can become unavailable. Direct integration of the inertial navigation sensors (the accelerometer and rate gyroscope), is however unaffected by external disturbances, but the rapid error accumulation quickly makes a naive application of such a strategy feasible only for very short durations. In this work we propose a motion strategy for reducing the inertial navigation state estimation error of multicopters. The proposed strategy breaks a long duration flight into multiple short duration hops between which the vehicle remains stationary on the ground. When the vehicle is stationary, zero-velocity pseudo-measurements are introduced to an extended Kalman Filter to reduce the state estimation error. We perform experiments for closed-loop control of a multicopter for evaluation. The mean absolute position estimation error was 3.4% over a total flight distance of 5m in the experiments. The results showed a 80% reduction compared to the standard inertial navigation method without using this strategy. In addition, an additional experiment with total flight distance of 10m is conducted to demonstrate the ability of this method to navigate a multicopter in real-world environment. The final trajectory tracking error was 3% of the total flight distance.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"109 1","pages":"8559-8565"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77102778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Automated detection of soleus concentric contraction in variable gait conditions for improved exosuit control
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197428
R. Nuckols, K. Swaminathan, Sangjun Lee, L. Awad, C. Walsh, R. Howe
Exosuits can reduce metabolic demand and improve gait. Controllers explicitly derived from biological mechanisms that reflect the user's joint or muscle dynamics should, in theory, allow for individualized assistance and enable adaptation to changing gait. With the goal of developing an exosuit control strategy based on muscle power, we present an approach for estimating, at real-time rates, when the soleus muscle begins to generate positive power. A low-profile ultrasound system recorded B-mode images of the soleus in walking individuals. An automated routine using optical flow segmented the data to a normalized gait cycle and estimated the onset of concentric contraction at real-time rates (~130 Hz). Segmentation error was within 1% of the gait cycle compared to using ground reaction forces. Estimation of the onset of concentric contraction had a high correlation (R2 = 0.92) and an RMSE of 2.6% of the gait cycle relative to manual estimation. We demonstrated the ability to estimate the onset of concentric contraction during fixed-speed walking in healthy individuals, where it ranged from 39.3% to 45.8% of the gait cycle, and showed feasibility in two persons post-stroke walking at a comfortable speed. We also showed the ability to measure a shift in onset timing to 7% earlier when the biological system adapts from level to incline walking. Finally, we provide an initial evaluation of how the onset of concentric contraction might be used to inform exosuit control in level and incline walking.
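As a simplified stand-in for detecting the start of positive soleus power, the sketch below tracks muscle length over one normalized gait cycle and reports the first sustained period of shortening (negative length velocity). Force is ignored and the threshold parameters are assumptions; the paper's ultrasound and optical-flow pipeline is not reproduced here.

```python
import numpy as np

def concentric_onset(muscle_length, min_shortening_frames=5):
    """Return the onset (as % of a normalized gait cycle) at which the muscle starts shortening.
    muscle_length: (N,) soleus length tracked over one normalized gait cycle (e.g. via optical flow).
    Onset is taken as the first sample followed by sustained negative length velocity; this is a
    simplified stand-in for 'start of positive power' (muscle force is ignored here)."""
    vel = np.gradient(muscle_length)
    shortening = vel < 0.0
    for i in range(len(vel) - min_shortening_frames):
        if shortening[i : i + min_shortening_frames].all():
            return 100.0 * i / len(vel)
    return None

# Synthetic cycle: the muscle lengthens during early stance, then shortens near push-off.
cycle = np.concatenate([np.linspace(1.00, 1.05, 45),
                        np.linspace(1.05, 0.97, 25),
                        np.linspace(0.97, 1.00, 30)])
print(concentric_onset(cycle))  # ~45% of the gait cycle
```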
{"title":"Automated detection of soleus concentric contraction in variable gait conditions for improved exosuit control","authors":"R. Nuckols, K. Swaminathan, Sangjun Lee, L. Awad, C. Walsh, R. Howe","doi":"10.1109/ICRA40945.2020.9197428","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197428","url":null,"abstract":"Exosuits can reduce metabolic demand and improve gait. Controllers explicitly derived from biological mechanisms that reflect the user's joint or muscle dynamics should in theory allow for individualized assistance and enable adaptation to changing gait. With the goal of developing an exosuit control strategy based on muscle power, we present an approach for estimating, at real time rates, when the soleus muscle begins to generate positive power. A low-profile ultrasound system recorded B-mode images of the soleus in walking individuals. An automated routine using optical flow segmented the data to a normalized gait cycle and estimated the onset of concentric contraction at real-time rates (~130Hz). Segmentation error was within 1% of the gait cycle compared to using ground reaction forces. Estimation of onset of concentric contraction had a high correlation (R2=0.92) and an RMSE of 2.6% gait cycle relative to manual estimation. We demonstrated the ability to estimate the onset of concentric contraction during fixed speed walking in healthy individuals that ranged from 39.3% to 45.8% of the gait cycle and feasibility in two persons post-stroke walking at comfortable walking speed. We also showed the ability to measure a shift in onset timing to 7% earlier when the biological system adapts from level to incline walking. Finally, we provided an initial evaluation for how the onset of concentric contraction might be used to inform exosuit control in level and incline walking.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"21 1","pages":"4855-4862"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78557831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Enhanced Teleoperation Using Autocomplete
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197140
Mohammad Kassem Zein, Abbas Sidaoui, Daniel C. Asmar, I. Elhajj
Controlling and manning robots from a remote location is difficult because of the limitations one faces in perception and in the available degrees of actuation. Although humans can become skilled teleoperators, the amount of training time required to acquire such skills is typically very high. In this paper, we propose a novel solution (named Autocomplete) to aid novice teleoperators in manning robots adroitly. At the input side, Autocomplete relies on machine learning to detect and categorize human inputs as one of a group of motion primitives. Once a desired motion is recognized, at the actuation side an automated command replaces the human input in performing the desired action. So far, Autocomplete can recognize and synthesize lines, arcs, full circles, 3-D helices, and sine trajectories. Autocomplete was tested in simulation on the teleoperation of an unmanned aerial vehicle, and the results demonstrate the advantages of the proposed solution over manual steering.
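One concrete way to "autocomplete" a primitive is sketched below for the circle case: fit a circle to the operator's partial arc by least squares and synthesize the full trajectory that replaces the manual input. The Kasa-style fit and the sampling density are illustrative choices; the paper's learned primitive classifier is not shown.

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit (Kasa method): returns center (cx, cy) and radius r."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def autocomplete_circle(partial_points, n=100):
    """Replace a partially drawn arc with the full synthesized circle trajectory."""
    cx, cy, r = fit_circle(partial_points)
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return np.column_stack([cx + r * np.cos(t), cy + r * np.sin(t)])

# A noisy quarter arc of a unit circle centred at (2, 3), as a stand-in for operator input.
theta = np.linspace(0.0, np.pi / 2, 30)
arc = np.column_stack([2 + np.cos(theta), 3 + np.sin(theta)]) + 0.01 * np.random.randn(30, 2)
full_circle = autocomplete_circle(arc)   # the automated command that replaces the manual input
```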
{"title":"Enhanced Teleoperation Using Autocomplete","authors":"Mohammad Kassem Zein, Abbas Sidaoui, Daniel C. Asmar, I. Elhajj","doi":"10.1109/ICRA40945.2020.9197140","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197140","url":null,"abstract":"Controlling and manning robots from a remote location is difficult because of the limitations one faces in perception and available degrees of actuation. Although humans can become skilled teleoperators, the amount of training time required to acquire such skills is typically very high. In this paper, we propose a novel solution (named Autocomplete) to aid novice teleoperators in manning robots adroitly. At the input side, Autocomplete relies on machine learning to detect and categorize human inputs as one from a group of motion primitives. Once a desired motion is recognized, at the actuation side an automated command replaces the human input in performing the desired action. So far, Autocomplete can recognize and synthesize lines, arcs, full circles, 3-D helices, and sine trajectories. Autocomplete was tested in simulation on the teleoperation of an unmanned aerial vehicle, and results demonstrate the advantages of the proposed solution versus manual steering.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"126 1","pages":"9178-9184"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73603108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Nonlinear Synchronization Control for Short-Range Mobile Sensors Drifting in Geophysical Flows
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196701
Cong Wei, H. Tanner, M. A. Hsieh
This paper presents a synchronization controller for mobile sensors that are minimally actuated and can only communicate with each other over a very short range. This work is motivated by ocean monitoring applications that employ large-scale sensor networks consisting of drifters with minimal actuation capabilities, i.e., active drifters. We assume the drifters are tasked to monitor regions consisting of gyre flows, in which their trajectories are periodic. As drifters in neighboring regions move into each other's proximity, this presents an opportunity for data exchange and synchronization to ensure future rendezvous. We present a nonlinear synchronization control strategy to ensure that drifters periodically rendezvous and to maximize the time they spend in their rendezvous regions. Numerical simulations and small-scale experiments validate the efficacy of the control strategy and hint at extensions to large-scale mobile sensor networks.
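For intuition only, the sketch below uses a generic Kuramoto-style, range-limited phase-coupling rule: each drifter advances along its periodic orbit at a nominal rate and nudges that rate toward neighbours it can currently communicate with. This is a stand-in for the idea of phase synchronization under short-range communication, not the paper's nonlinear controller.

```python
import numpy as np

def phase_step(phases, positions, comm_range, omega, gain, dt):
    """One step of a simple range-limited phase-coupling rule for drifters on periodic orbits.
    Each drifter advances at its nominal orbital rate omega and is nudged toward the phase of
    neighbours that are currently within communication range."""
    n = len(phases)
    new = phases.copy()
    for i in range(n):
        coupling = 0.0
        for j in range(n):
            if i != j and np.linalg.norm(positions[i] - positions[j]) < comm_range:
                coupling += np.sin(phases[j] - phases[i])
        new[i] = (phases[i] + dt * (omega + gain * coupling)) % (2.0 * np.pi)
    return new

phases = np.array([0.1, 2.5, 4.0])
positions = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]])   # third drifter is out of range
phases = phase_step(phases, positions, comm_range=1.0, omega=0.2, gain=0.5, dt=0.1)
```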
{"title":"Nonlinear Synchronization Control for Short-Range Mobile Sensors Drifting in Geophysical Flows","authors":"Cong Wei, H. Tanner, M. A. Hsieh","doi":"10.1109/ICRA40945.2020.9196701","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196701","url":null,"abstract":"This paper presents a synchronization controller for mobile sensors that are minimally actuated and can only communicate with each other over a very short range. This work is motivated by ocean monitoring applications where large-scale sensor networks consisting of drifters with minimal actuation capabilities, i.e., active drifters, are employed. We assume drifters are tasked to monitor regions consisting of gyre flows where their trajectories are periodic. As drifters in neighboring regions move into each other's proximity, it presents an opportunity for data exchange and synchronization to ensure future rendezvous. We present a nonlinear synchronization control strategy to ensure that drifters will periodically rendezvous and maximize the time they are in their rendezvous regions. Numerical simulations and small-scale experiments validate the efficacy of the control strategy and hint at extensions to large-scale mobile sensor networks.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"907-913"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85505442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197324
A. Tanwani, P. Sermanet, Andy Yan, Raghav V. Anand, Mariano Phielipp, Ken Goldberg
Learning meaningful visual representations in an embedding space can facilitate generalization in downstream tasks such as action segmentation and imitation. In this paper, we learn a motion-centric representation of surgical video demonstrations by grouping them into action segments/subgoals/options in a semi-supervised manner. We present Motion2Vec, an algorithm that learns a deep embedding feature space from video observations by minimizing a metric learning loss in a Siamese network: images from the same action segment are pulled together while being pushed away from randomly sampled images of other segments, while respecting the temporal ordering of the images. After pre-training the Siamese network, the embeddings are iteratively segmented with a recurrent neural network for a given parametrization of the embedding space. We use only a small set of labeled video segments to semantically align the embedding space and assign pseudo-labels to the remaining unlabeled data by inference on the learned model parameters. We demonstrate the use of this representation to imitate surgical suturing kinematic motions from publicly available videos of the JIGSAWS dataset. Results give 85.5% segmentation accuracy on average, suggesting a performance improvement over several state-of-the-art baselines, while kinematic pose imitation gives a 0.94 cm position error per observation on the test set. Videos, code, and data are available at: https://sites.google.com/view/motion2vec
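The Siamese metric-learning objective described above can be illustrated with a standard triplet margin loss: embeddings from the same action segment are pulled together while embeddings from randomly sampled other segments are pushed at least a margin further away. The margin value and Euclidean distance below are assumptions; the exact loss used by Motion2Vec may differ.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Metric-learning loss of the kind used in Siamese training: (anchor, positive) come from
    the same action segment and are pulled together; `negative` comes from a randomly sampled
    other segment and is pushed at least `margin` further away than the positive."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))

rng = np.random.default_rng(1)
emb = rng.normal(size=(3, 16, 8))        # toy batch of (anchor, positive, negative) embeddings
print(triplet_margin_loss(emb[0], emb[1], emb[2]))
```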
{"title":"Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos","authors":"A. Tanwani, P. Sermanet, Andy Yan, Raghav V. Anand, Mariano Phielipp, Ken Goldberg","doi":"10.1109/ICRA40945.2020.9197324","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197324","url":null,"abstract":"Learning meaningful visual representations in an embedding space can facilitate generalization in downstream tasks such as action segmentation and imitation. In this paper, we learn a motion-centric representation of surgical video demonstrations by grouping them into action segments/subgoals/options in a semi-supervised manner. We present Motion2Vec, an algorithm that learns a deep embedding feature space from video observations by minimizing a metric learning loss in a Siamese network: images from the same action segment are pulled together while pushed away from randomly sampled images of other segments, while respecting the temporal ordering of the images. The embeddings are iteratively segmented with a recurrent neural network for a given parametrization of the embedding space after pre-training the Siamese network. We only use a small set of labeled video segments to semantically align the embedding space and assign pseudo-labels to the remaining unlabeled data by inference on the learned model parameters. We demonstrate the use of this representation to imitate surgical suturing kinematic motions from publicly available videos of the JIGSAWS dataset. Results give 85.5% segmentation accuracy on average suggesting performance improvement over several state-of-the-art baselines, while kinematic pose imitation gives 0.94 centimeter error in position per observation on the test set. Videos, code and data are available at: https://sites.google.com/view/motion2vec","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"2174-2181"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84069170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
A Feature-Based Underwater Path Planning Approach using Multiple Perspective Prior Maps
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196680
Daniel Cagara, M. Dunbabin, P. Rigby
This paper presents a path planning methodology which enables Autonomous Underwater Vehicles (AUVs) to navigate in shallow, complex environments such as coral reefs. The approach leverages prior information from an aerial photographic survey and bathymetric information derived for the corresponding area. From these prior maps, a set of features is obtained which defines an expected arrangement of objects and bathymetry likely to be perceived by the AUV when underwater. A navigation graph is then constructed by predicting the arrangement of features visible from a set of test points within the prior, which allows the calculation of the shortest path between any pair of start and destination points. A maximum likelihood function is defined which allows the AUV to match its observations to the navigation graph as it undertakes its mission. To improve robustness, the history of observed features is retained to facilitate possible recovery from non-detectable or misclassified objects. The approach is evaluated using a photo-realistic simulated environment, and the results illustrate the merits of the approach even when only a relatively small number of features can be identified from the prior map.
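A toy version of the feature-matching step is sketched below: each navigation-graph node stores the set of features expected to be visible from it, and the AUV's observed feature set is matched to the node with the highest likelihood under a simple per-feature Bernoulli detection model. The detection/false-alarm probabilities and feature names are assumptions; the paper's own maximum-likelihood function is not reproduced.

```python
import numpy as np

# Each navigation-graph node stores the set of features expected to be visible from it.
node_features = {
    "n1": {"coral_head_a", "rock_ledge", "sand_patch"},
    "n2": {"coral_head_b", "rock_ledge"},
    "n3": {"sand_patch", "seagrass"},
}

def most_likely_node(observed, nodes, p_detect=0.8, p_false=0.05):
    """Match an observed feature set to the graph node that best explains it.
    A simple per-feature Bernoulli likelihood is used purely as an illustration."""
    best, best_ll = None, -np.inf
    all_feats = set().union(*nodes.values())
    for node, expected in nodes.items():
        ll = 0.0
        for f in all_feats:
            if f in expected:
                ll += np.log(p_detect if f in observed else 1.0 - p_detect)
            else:
                ll += np.log(p_false if f in observed else 1.0 - p_false)
        if ll > best_ll:
            best, best_ll = node, ll
    return best, best_ll

print(most_likely_node({"rock_ledge", "coral_head_a"}, node_features))  # ('n1', ...)
```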
{"title":"A Feature-Based Underwater Path Planning Approach using Multiple Perspective Prior Maps","authors":"Daniel Cagara, M. Dunbabin, P. Rigby","doi":"10.1109/ICRA40945.2020.9196680","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196680","url":null,"abstract":"This paper presents a path planning methodology which enables Autonomous Underwater Vehicles (AUVs) to navigate in shallow complex environments such as coral reefs. The approach leverages prior information from an aerial photographic survey, and derived bathymetric information of the corresponding area. From these prior maps, a set of features is obtained which define an expected arrangement of objects and bathymetry likely to be perceived by the AUV when underwater. A navigation graph is then constructed by predicting the arrangement of features visible from a set of test points within the prior, which allows the calculation of the shortest paths from any pair of start and destination points. A maximum likelihood function is defined which allows the AUV to match its observations to the navigation graph as it undertakes its mission. To improve robustness, the history of observed features are retained to facilitate possible recovery from non-detectable or misclassified objects. The approach is evaluated using a photo-realistic simulated environment, and results illustrate the merits of the approach even when only a relatively small number of features can be identified from the prior map.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":"78 1","pages":"8573-8579"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84086342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1