
Autonomous Robots: Latest Articles

Towards neuromorphic FPGA-based infrastructures for a robotic arm
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-07-14 | DOI: 10.1007/s10514-023-10111-x
Salvador Canas-Moreno, Enrique Piñero-Fuentes, Antonio Rios-Navarro, Daniel Cascado-Caballero, Fernando Perez-Peña, Alejandro Linares-Barranco

Muscles are stretched with bursts of spikes that come from motor neurons connected to the cerebellum through the spinal cord. Then, alpha motor neurons directly innervate the muscles to complete the motor command coming from upper biological structures. Nevertheless, classical robotic systems usually require complex computational capabilities and relatively high power consumption to process their control algorithms, which require information from the robot’s proprioceptive sensors. The way in which information is encoded and transmitted is an important difference between biological systems and robotic machines. Neuromorphic engineering translates these behaviors found in biology into engineering solutions to produce more efficient systems and to better understand neural systems. This paper presents the application of a spike-based Proportional-Integral-Derivative controller to a 6-DoF Scorbot ER-VII robotic arm, feeding the motors with Pulse-Frequency Modulation instead of Pulse-Width Modulation, mimicking the way in which motor neurons act on muscles. The presented frameworks allow the robot to be commanded and monitored locally or remotely, from either Python software running on a computer or spike-based neuromorphic hardware. Multi-FPGA and single-PSoC solutions are compared. These frameworks are intended for experimental use by the neuromorphic community as a testbed platform and for dataset recording for machine learning purposes.
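The abstract contrasts pulse-frequency modulation (PFM) with the usual pulse-width modulation. A minimal sketch of that encoding, not the authors' implementation (the 200 Hz ceiling and 1 ms tick are illustrative assumptions), could look like:

```python
def pfm_spike_train(command, duration_s, max_rate_hz=200.0, dt=0.001):
    """Encode a normalized motor command (0..1) as a pulse-frequency-
    modulated spike train: the pulse *rate*, not the pulse width, carries
    the magnitude, mirroring how motor neurons drive muscles."""
    n_steps = int(round(duration_s / dt))
    rate = command * max_rate_hz            # pulses per second
    if rate <= 0.0:
        return [0] * n_steps
    period = 1.0 / rate
    spikes, t_next = [], 0.0
    for i in range(n_steps):
        if i * dt + 1e-12 >= t_next:        # time for the next pulse
            spikes.append(1)
            t_next += period
        else:
            spikes.append(0)
    return spikes

# A 50% command with a 200 Hz ceiling emits roughly 100 pulses per second.
train = pfm_spike_train(0.5, duration_s=1.0)
```

A motor driver integrating this train sees a mean activation proportional to the command, as with PWM, but the signal itself is a spike sequence compatible with neuromorphic hardware.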

Citations: 0
Learning rewards from exploratory demonstrations using probabilistic temporal ranking
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-07-10 | DOI: 10.1007/s10514-023-10120-w
Michael Burke, Katie Lu, Daniel Angelov, Artūras Straižys, Craig Innes, Kartic Subr, Subramanian Ramamoorthy

Informative path-planning is a well-established approach to visual servoing and active viewpoint selection in robotics, but it typically assumes that a suitable cost function or goal state is known. This work considers the inverse problem, where the goal of the task is unknown and a reward function needs to be inferred from exploratory example demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy. Unfortunately, many existing reward inference strategies are unsuited to this class of problems due to the exploratory nature of the demonstrations. In this paper, we propose an alternative approach to cope with the class of problems where these sub-optimal, exploratory demonstrations occur. We hypothesise that, in tasks which require discovery, successive states of any demonstration are progressively more likely to be associated with a higher reward, and we use this hypothesis to generate time-based binary comparison outcomes and infer reward functions that support these ranks under a probabilistic generative model. We formalise this probabilistic temporal ranking approach and show that it improves upon existing approaches to reward inference for autonomous ultrasound scanning, a novel application of learning from demonstration in medical imaging that is also of value across a broad range of goal-oriented learning-from-demonstration tasks.
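The core hypothesis, that later states in an exploratory demonstration tend to outrank earlier ones, can be sketched as a logistic (Bradley–Terry-style) ranking fit over sampled time pairs. This is a toy simplification, not the paper's probabilistic generative model; the linear reward, learning rate, and feature map below are assumptions:

```python
import numpy as np

def fit_temporal_ranking_reward(features, n_pairs=2000, lr=0.1, epochs=200, seed=0):
    """Fit a linear reward r(s) = w . phi(s) to one exploratory demonstration
    by enforcing the temporal-ranking hypothesis: for a random ordered pair
    of timesteps (i earlier, j later), the later state should win a logistic
    comparison."""
    rng = np.random.default_rng(seed)
    T, d = features.shape
    i = rng.integers(0, T - 1, size=n_pairs)   # earlier timestep
    j = rng.integers(i + 1, T)                 # strictly later timestep
    w = np.zeros(d)
    for _ in range(epochs):
        margin = (features[j] - features[i]) @ w       # r(s_j) - r(s_i)
        p = 1.0 / (1.0 + np.exp(-margin))              # P(later ranked higher)
        grad = (features[j] - features[i]).T @ (1.0 - p) / n_pairs
        w += lr * grad                                 # ascend the rank likelihood
    return w

# Toy demonstration whose single feature grows over time: the inferred
# reward weight should come out positive.
phi = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
w = fit_temporal_ranking_reward(phi)
```

The time-based binary comparisons replace explicit preference labels: the demonstration's own temporal order supervises the reward.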

Citations: 0
Inverse reinforcement learning for autonomous navigation via differentiable semantic mapping and planning
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-07-06 | DOI: 10.1007/s10514-023-10118-4
Tianyu Wang, Vikas Dhiman, Nikolay Atanasov

This paper focuses on inverse reinforcement learning for autonomous navigation using distance and semantic category observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert’s observations and state-control trajectory. We develop a map encoder that infers semantic category probabilities from the observation sequence, and a cost encoder defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the model parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. We propose a new model of expert behavior that enables error minimization using a closed-form subgradient computed, via a motion planning algorithm, only over a subset of promising states. Our approach allows generalizing the learned behavior to new environments with new spatial configurations of the semantic categories. We analyze the different components of our model in a minigrid environment. We also demonstrate that our approach learns to follow traffic rules in the autonomous driving CARLA simulator by relying on semantic observations of buildings, sidewalks, and road lanes.
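The paper's closed-form subgradient over promising states is not reproduced here; as a loose illustration of the idea of differentiating a planner-induced error with respect to cost parameters, a perceptron-style subgradient step on a toy semantic grid (the grid, classes, and learning rate are all assumptions) might look like:

```python
import heapq
import numpy as np

def shortest_path(cost, start, goal):
    """Dijkstra over a 4-connected grid; entering cell (r, c) costs cost[r, c]."""
    H, W = cost.shape
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < H and 0 <= v[1] < W and d + cost[v] < dist.get(v, np.inf):
                dist[v], prev[v] = d + cost[v], u
                heapq.heappush(pq, (dist[v], v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

def irl_subgradient_step(w, semantics, expert_path, start, goal, lr=0.5):
    """One subgradient step: per-cell cost = semantics @ w; raise weights on
    features the planner's path visits, lower them on the expert's, so the
    planned path is pulled toward the demonstration."""
    cost = np.maximum(semantics @ w, 1e-3)       # keep costs positive
    planned = shortest_path(cost, start, goal)
    grad = sum(semantics[v] for v in planned) - sum(semantics[v] for v in expert_path)
    return w + lr * grad

# Toy 3x3 grid, class 0 = road (left column + bottom row), class 1 = grass.
semantics = np.zeros((3, 3, 2)); semantics[..., 1] = 1.0
expert = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
for cell in expert:
    semantics[cell] = (1.0, 0.0)
```

Starting from uniform weights, one step already makes road cells cheaper than grass, after which the planner reproduces the expert's route.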

Citations: 3
AGRI-SLAM: a real-time stereo visual SLAM for agricultural environment
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-07-04 | DOI: 10.1007/s10514-023-10110-y
Rafiqul Islam, Habibullah Habibullah, Tagor Hossain

In this research, we propose a stereo visual simultaneous localisation and mapping (SLAM) system that works efficiently in agricultural scenarios without compromising performance or accuracy relative to other state-of-the-art methods. The proposed system is equipped with an image enhancement technique for ORB point and LSD line feature recovery, which enables it to work in broader scenarios and extracts extensive spatial information from low-light and hazy agricultural environments. Firstly, the method was tested on standard datasets, i.e., KITTI and EuRoC, to validate its localisation accuracy against other state-of-the-art methods, namely VINS-SLAM, PL-SLAM, and ORB-SLAM2. The experimental results show that the proposed method achieves better localisation and mapping accuracy than the other visual SLAM methods. Secondly, the proposed method was tested on the ROSARIO dataset, our low-light agricultural dataset, and the O-HAZE dataset to validate its performance in agricultural environments. In such cases, while other methods fail to operate in these complex agricultural environments, our method operates successfully with high localisation and mapping accuracy.
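The abstract does not detail the enhancement technique, so as a generic stand-in, a plain histogram equalization that stretches a compressed low-light intensity range before point/line feature extraction (a real pipeline would more likely use something like OpenCV's CLAHE ahead of the ORB/LSD detectors) can be sketched as:

```python
import numpy as np

def equalize_lowlight(gray):
    """Global histogram equalization for an 8-bit grayscale frame: spreads
    a compressed low-light intensity range across 0..255 so gradient-based
    detectors such as ORB (points) or LSD (lines) find more features."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # count at darkest used bin
    lut = np.clip(np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0),
                  0, 255).astype(np.uint8)
    return lut[gray]

# A dim synthetic frame confined to intensities 10..40 gets stretched
# to the full 0..255 range.
dark = np.random.default_rng(0).integers(10, 41, size=(64, 64)).astype(np.uint8)
bright = equalize_lowlight(dark)
```

The lookup-table form keeps the operation O(pixels), cheap enough to run per frame in a real-time SLAM front end.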

Citations: 1
On robot grasp learning using equivariant models
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-07-04 | DOI: 10.1007/s10514-023-10112-w
Xupeng Zhu, Dian Wang, Guanang Su, Ondrej Biza, Robin Walters, Robert Platt

Real-world grasp detection is challenging due to the stochasticity in grasp dynamics and the noise in hardware. Ideally, the system would adapt to the real world by training directly on physical systems. However, this is generally difficult due to the large amount of training data required by most grasp learning models. In this paper, we note that the planar grasp function is SE(2)-equivariant and demonstrate that this structure can be used to constrain the neural network used during learning. This creates an inductive bias that can significantly improve the sample efficiency of grasp learning and enable end-to-end training from scratch on a physical robot with as few as 600 grasp attempts. We call this method Symmetric Grasp learning (SymGrasp) and show that it can learn to grasp “from scratch” in less than 1.5 h of physical robot time. This paper represents an expanded and revised version of the conference paper Zhu et al. (2022).
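This is not the paper's SE(2)-equivariant network, but a numpy toy showing what equivariance means for a dense grasp-quality map, restricted to 90-degree rotations and a hand-picked rotation-symmetric kernel: rotating the input image rotates the output map the same way.

```python
import numpy as np

def grasp_quality(img, kernel):
    """Toy dense grasp-quality map: 'same'-padded 2-D cross-correlation of
    the image with a kernel (a stand-in for one equivariant network layer)."""
    H, W = img.shape
    k = kernel.shape[0]
    padded = np.pad(img, k // 2)
    out = np.empty((H, W), dtype=float)
    for r in range(H):
        for c in range(W):
            out[r, c] = np.sum(padded[r:r + k, c:c + k] * kernel)
    return out

# A cross-shaped kernel is invariant under 90-degree rotation, so the map
# commutes with np.rot90: q(rot(img)) == rot(q(img)).
rng = np.random.default_rng(1)
img = rng.random((8, 8))
kernel = np.array([[0., 1., 0.], [1., 2., 1.], [0., 1., 0.]])
q_of_rotated = grasp_quality(np.rot90(img), kernel)
rotated_q = np.rot90(grasp_quality(img, kernel))
```

The inductive bias in the paper generalizes this: constraining the network so the equality holds by construction means a single grasp attempt teaches the model about a whole orbit of rotated grasps, which is where the sample-efficiency gain comes from.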

Citations: 0
TNES: terrain traversability mapping, navigation and excavation system for autonomous excavators on worksite
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-07-04 | DOI: 10.1007/s10514-023-10113-9
Tianrui Guan, Zhenpeng He, Ruitao Song, Liangjun Zhang

We present a terrain traversability mapping and navigation system (TNS) for autonomous excavator applications in an unstructured environment. We use an efficient approach to extract terrain features from RGB images and 3D point clouds and incorporate them into a global map for planning and navigation. Our system can adapt to changing environments and update the terrain information in real-time. Moreover, we present a novel dataset, the Complex Worksite Terrain dataset, which consists of RGB images from construction sites with seven categories based on navigability. Our novel algorithms improve the mapping accuracy over previous methods by 4.17–30.48% and reduce MSE on the traversability map by 13.8–71.4%. We have combined our mapping approach with planning and control modules in an autonomous excavator navigation system and observe a 49.3% improvement in the overall success rate. Based on TNS, we demonstrate the first autonomous excavator that can navigate through unstructured environments consisting of deep pits, steep hills, rock piles, and other complex terrain features. In addition, we combine the proposed TNS with the autonomous excavation system (AES), and deploy the new pipeline, TNES, on a more complex construction site. With minimum human intervention, we demonstrate autonomous navigation capability with excavation tasks.
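The bridge from semantic classes to a planner-facing map can be sketched as a simple label-to-score lookup. The abstract does not name the dataset's seven categories, so the labels and scores below are assumptions for illustration:

```python
import numpy as np

# Illustrative navigability scores per terrain class (assumed labels/values).
TRAVERSABILITY = {
    "flat_ground": 1.0,
    "gravel": 0.8,
    "rock_pile": 0.3,
    "steep_hill": 0.2,
    "deep_pit": 0.0,
}

def traversability_map(class_map, table):
    """Convert a per-cell semantic label grid into a 0..1 traversability
    grid that a planner can treat as an inverse cost."""
    out = np.zeros(class_map.shape, dtype=float)
    for name, score in table.items():
        out[class_map == name] = score
    return out

grid = np.array([["flat_ground", "deep_pit"],
                 ["gravel", "rock_pile"]])
tmap = traversability_map(grid, TRAVERSABILITY)
```

In a real-time system the same lookup runs each time the semantic layer of the global map is updated, so the cost map tracks the changing worksite.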

Citations: 0
Complex environment localization system using complementary ceiling and ground map information
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-06-28 | DOI: 10.1007/s10514-023-10116-6
Chee-An Yu, Hao-Yun Chen, Chun-Chieh Wang, Li-Chen Fu

This paper proposes a robust localization system using complementary information extracted from ceiling and ground maps, particularly applicable to dynamic and complex environments. The ceiling perception provides the robot with stable, time-invariant environmental features independent of dynamic changes on the ground, whereas the ground perception allows the robot to navigate in the ground plane while avoiding stationary obstacles. We propose an architecture that fuses ground 2D LiDAR scans and ceiling 3D LiDAR scans with our enhanced mapping algorithm, associating perception from both sources efficiently. Localization and navigation performance remain reliable even in harsh environments thanks to this complementary sensed information from the ground and ceiling. The salient feature of our work is that our system can map both the ceiling and the ground plane simultaneously and efficiently, without the extra effort of deploying artificial landmarks, and can apply this hybrid information effectively, which lets the robot travel through any indoor environment with human crowds without getting lost.
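The complementarity argument can be illustrated with a standard inverse-variance fusion of two pose estimates. This is not the paper's fusion architecture, just a schematic of why a clean ceiling estimate can carry the system when crowds corrupt the ground scan:

```python
import numpy as np

def fuse_pose_estimates(pose_ground, var_ground, pose_ceiling, var_ceiling):
    """Inverse-variance fusion of two planar pose estimates (x, y, yaw).
    When dynamic crowds corrupt the ground scan (large ground variance),
    the fused estimate leans on the ceiling features, and vice versa.
    Note: yaw is averaged naively here; a real system must handle angle wrap."""
    w_g = 1.0 / np.asarray(var_ground, dtype=float)
    w_c = 1.0 / np.asarray(var_ceiling, dtype=float)
    fused = (w_g * np.asarray(pose_ground) + w_c * np.asarray(pose_ceiling)) / (w_g + w_c)
    return fused, 1.0 / (w_g + w_c)

# Crowded ground scan (variance 4.0) vs clean ceiling scan (variance 1.0):
fused, fused_var = fuse_pose_estimates([0.0, 0.0, 0.0], [4.0, 4.0, 4.0],
                                       [1.0, 1.0, 0.1], [1.0, 1.0, 1.0])
```

The fused variance is always below the smaller input variance, which is the formal sense in which the two sources are complementary rather than redundant.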

Citations: 0
Event-based neural learning for quadrotor control
IF 3.5 | CAS Tier 3, Computer Science | JCR Q1 Computer Science | Pub Date: 2023-06-23 | DOI: 10.1007/s10514-023-10115-7
Estéban Carvalho, Pierre Susbielle, Nicolas Marchand, Ahmad Hably, Jilles S. Dibangoye

The design of a simple and adaptive flight controller is a real challenge in aerial robotics. A simple flight controller often yields poor flight tracking performance; furthermore, adaptive algorithms can be costly in time and resources, and deep-learning-based methods may cause instability problems, for instance in the presence of disturbances. In this paper, we propose an event-based neural learning control strategy that combines a standard cascaded flight controller with a deep neural network that learns the disturbances in order to improve tracking performance. The strategy relies on two events: one allows the improvement of tracking errors, and the second ensures closed-loop system stability. After validation of the proposed strategy in a ROS/Gazebo simulation environment, its effectiveness is confirmed in real experiments in the presence of wind disturbance.
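The two-event gating logic can be sketched as a simple predicate deciding, each control cycle, whether the disturbance network is allowed to take a learning step. The thresholds and the notion of a scalar stability margin are illustrative assumptions, not the paper's conditions:

```python
def should_update(error, prev_error, stability_margin,
                  err_threshold=0.05, margin_threshold=0.0):
    """Gate a neural-network learning step on two events.
    Event 1 (tracking): the error is large and not already shrinking, so a
    learned disturbance correction could improve tracking.
    Event 2 (stability): an assumed closed-loop stability margin is still
    positive, so applying the update is considered safe.
    Thresholds are illustrative, not taken from the paper."""
    tracking_event = abs(error) > err_threshold and abs(error) >= abs(prev_error)
    stability_event = stability_margin > margin_threshold
    return tracking_event and stability_event

# Large, growing error with a healthy margin -> learn; negative margin -> hold.
```

Gating updates this way is what makes the scheme "event-based": the network trains only when it can help and cannot hurt, instead of at every timestep.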

Citations: 0
Learning latent representations to co-adapt to humans
IF 3.5 CAS Tier 3 Computer Science Q1 Computer Science Pub Date : 2023-06-17 DOI: 10.1007/s10514-023-10109-5
Sagar Parekh, Dylan P. Losey

When robots interact with humans in homes, roads, or factories, the human's behavior often changes in response to the robot. Non-stationary humans are challenging for robot learners: actions the robot has learned to coordinate with the original human may fail after the human adapts to the robot. In this paper we introduce an algorithmic formalism that enables robots (i.e., ego agents) to co-adapt alongside dynamic humans (i.e., other agents) using only the robot's low-level states, actions, and rewards. A core challenge is that humans not only react to the robot's behavior, but the way in which humans react inevitably changes both over time and between users. To deal with this challenge, our insight is that, instead of building an exact model of the human, robots can learn and reason over high-level representations of the human's policy and policy dynamics. Applying this insight, we develop RILI: Robustly Influencing Latent Intent. RILI first embeds low-level robot observations into predictions of the human's latent strategy and strategy dynamics. Next, RILI harnesses these predictions to select actions that influence the adaptive human towards advantageous, high-reward behaviors over repeated interactions. We demonstrate that, given RILI's measured performance with users sampled from an underlying distribution, we can probabilistically bound RILI's expected performance across new humans sampled from the same distribution. Our simulated experiments compare RILI to state-of-the-art representation and reinforcement learning baselines, and show that RILI better learns to coordinate with imperfect, noisy, and time-varying agents. Finally, we conduct two user studies where RILI co-adapts alongside actual humans in a game of tag and a tower-building task. See videos of our user studies here: https://youtu.be/WYGO5amDXbQ
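The core idea of recovering the partner's latent strategy dynamics from low-level observations can be sketched with a toy scalar partner whose strategy h drifts toward the robot's action at an unknown rate a. A least-squares estimate of a (a crude stand-in for RILI's learned embedding, not the paper's method) makes one-step strategy prediction far more accurate than assuming a stationary human. All dynamics, rates, and noise levels here are illustrative assumptions.

```python
import random

def interact(steps=300, a=0.3, noise=0.01, seed=0):
    """Toy non-stationary partner: h' = h + a*(r - h) + noise.
    The ego agent recovers the latent adaptation rate a from observed
    interaction data and uses it for one-step strategy prediction."""
    rng = random.Random(seed)
    h = 0.0                         # partner's current (hidden) strategy
    a_hat, sxx, sxy = 0.0, 1e-6, 0.0
    sq_err_latent, sq_err_naive = 0.0, 0.0
    for _ in range(steps):
        r = rng.uniform(-1.0, 1.0)            # exploratory robot action
        pred_latent = h + a_hat * (r - h)     # uses estimated latent dynamics
        pred_naive = h                        # assumes a stationary partner
        h_next = h + a * (r - h) + rng.gauss(0.0, noise)
        sq_err_latent += (pred_latent - h_next) ** 2
        sq_err_naive += (pred_naive - h_next) ** 2
        # scalar least squares on the observed transition (h_next - h) = a*(r - h)
        x, y = r - h, h_next - h
        sxx += x * x
        sxy += x * y
        a_hat = sxy / sxx
        h = h_next
    return a_hat, sq_err_latent / steps, sq_err_naive / steps
```

Running `interact()` recovers a_hat close to the true adaptation rate, and the latent-dynamics predictor's mean squared error is far below the stationary-human baseline, mirroring the abstract's claim that modeling strategy dynamics beats treating the partner as fixed.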

Citations: 4
A learning-based approach to surface vehicle dynamics modeling for robust multistep prediction
IF 3.5 CAS Tier 3 Computer Science Q1 Computer Science Pub Date : 2023-06-14 DOI: 10.1007/s10514-023-10114-8
Junwoo Jang, Changyu Lee, Jinwhan Kim

Determining the dynamics of surface vehicles and marine robots is important for developing marine autopilot and autonomous navigation systems. However, this often requires extensive experimental data and intense effort because they are highly nonlinear and involve various uncertainties in real operating conditions. Herein, we propose an efficient data-driven approach for analyzing and predicting the motion of a surface vehicle in a real environment based on deep learning techniques. The proposed multistep model is robust to measurement uncertainty and overcomes compounding errors by eliminating the correlation between the prediction results. Additionally, latent state representation and mixup augmentation are introduced to make the model more consistent and accurate. The performance analysis reveals that the proposed method outperforms conventional methods and is robust against environmental disturbances.
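The mixup augmentation mentioned in the abstract can be sketched in a few lines: training pairs are blended with a Beta-distributed weight, and for a map that is (locally) linear the blended samples remain consistent with the underlying model, which is what makes the trained predictor behave smoothly between observed states. The scalar data, the alpha value, and the y = 2x stand-in for locally linear vehicle dynamics are illustrative assumptions, not the paper's setup.

```python
import random

def mixup(xs, ys, alpha=0.2, rng=None):
    """Mixup augmentation: replace each (x, y) pair with a convex combination
    of itself and a randomly chosen partner, weighted by lam ~ Beta(alpha, alpha)."""
    rng = rng or random.Random(0)
    mixed_x, mixed_y = [], []
    for x, y in zip(xs, ys):
        j = rng.randrange(len(xs))            # random mixing partner
        lam = rng.betavariate(alpha, alpha)   # mixing weight in (0, 1)
        mixed_x.append(lam * x + (1 - lam) * xs[j])
        mixed_y.append(lam * y + (1 - lam) * ys[j])
    return mixed_x, mixed_y

# For a linear input-output map (here y = 2x), every mixed sample still lies
# exactly on the same map, so the augmentation adds data without adding bias.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 * x for x in xs]
mx, my = mixup(xs, ys)
```

With alpha small (e.g. 0.2) the Beta weights concentrate near 0 and 1, so most mixed samples stay close to real observations, which is why mixup regularizes without drowning the model in heavily interpolated data.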

Citations: 0