
arXiv - CS - Robotics: Latest Publications

Composing Option Sequences by Adaptation: Initial Results
Pub Date : 2024-09-12 DOI: arxiv-2409.08195
Charles A. Meehan, Paul Rademacher, Mark Roberts, Laura M. Hiatt
Robot manipulation in real-world settings often requires adapting the robot's behavior to the current situation, such as by changing the sequences in which policies execute to achieve the desired task. Problematically, however, we show that composing a novel sequence of five deep RL options to perform a pick-and-place task is unlikely to successfully complete, even if their initiation and termination conditions align. We propose a framework to determine whether sequences will succeed a priori, and examine three approaches that adapt options to sequence successfully if they will not. Crucially, our adaptation methods consider the actual subset of points that the option is trained from or where it ends: (1) trains the second option to start where the first ends; (2) trains the first option to reach the centroid of where the second starts; and (3) trains the first option to reach the median of where the second starts. Our results show that our framework and adaptation methods have promise in adapting options to work in novel sequences.
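The three adaptation targets are simple statistics over sampled states. A minimal sketch, assuming each option exposes its observed termination and initiation states as point arrays (the function name and interface are hypothetical, not from the paper):

```python
import numpy as np

def adaptation_targets(first_end_points, second_start_points):
    """Hypothetical helper: retraining targets for the three approaches.

    (1) retrain the second option to initiate from the first option's
        actual termination points (returned as-is);
    (2) retrain the first option toward the centroid (mean) of the
        second option's observed initiation points;
    (3) retrain it toward their coordinate-wise median instead.
    """
    first_end = np.asarray(first_end_points, dtype=float)
    second_start = np.asarray(second_start_points, dtype=float)
    return {
        "second_starts_where_first_ends": first_end,
        "centroid_of_second_start": second_start.mean(axis=0),
        "median_of_second_start": np.median(second_start, axis=0),
    }
```

The centroid and median differ exactly when the initiation set is skewed by outliers, which is presumably why the paper evaluates both.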
Citations: 0
Multi-Robot Coordination Induced in Hazardous Environments through an Adversarial Graph-Traversal Game
Pub Date : 2024-09-12 DOI: arxiv-2409.08222
James Berneburg, Xuan Wang, Xuesu Xiao, Daigo Shishika
This paper presents a game-theoretic formulation of a graph traversal problem, with applications to robots moving through hazardous environments in the presence of an adversary, as in military and security applications. The blue team of robots moves in an environment modeled by a time-varying graph, attempting to reach some goal with minimum cost, while the red team controls how the graph changes to maximize the cost. The problem is formulated as a stochastic game, so that Nash equilibrium strategies can be computed numerically. Bounds are provided for the game value, with a guarantee that it solves the original problem. Numerical simulations demonstrate the results and the effectiveness of this method, particularly showing the benefit of mixing actions for both players, as well as beneficial coordinated behavior, where blue robots split up and/or synchronize to traverse risky edges.
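In a zero-sum stochastic game of this kind, value iteration in the style of Shapley solves a small matrix game at each state, and the security value with its mixed strategy comes from a linear program. A generic sketch of that building block (not the authors' code; the graph-specific costs and transitions are omitted):

```python
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of a zero-sum matrix game for the row (maximizing) player.

    Solves: max v  s.t.  sum_i x_i * M[i][j] >= v for every column j,
    with x a probability vector over rows (the mixed strategy).
    """
    m, n = len(M), len(M[0])
    # Decision variables: [x_0 .. x_{m-1}, v]; linprog minimizes, so use -v.
    c = [0.0] * m + [-1.0]
    # v - sum_i x_i * M[i][j] <= 0 for each column j.
    A_ub = [[-M[i][j] for i in range(m)] + [1.0] for j in range(n)]
    b_ub = [0.0] * n
    A_eq = [[1.0] * m + [0.0]]  # the x_i sum to one
    b_eq = [1.0]
    bounds = [(0.0, 1.0)] * m + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]
```

For matching pennies, `[[1, -1], [-1, 1]]`, the value is 0 under the uniform mixed strategy; in the iteration, each matrix entry would be a stage cost plus the expected value of successor states under the graph's transitions.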
Citations: 0
iKalibr-RGBD: Partially-Specialized Target-Free Visual-Inertial Spatiotemporal Calibration For RGBDs via Continuous-Time Velocity Estimation
Pub Date : 2024-09-11 DOI: arxiv-2409.07116
Shuolong Chen, Xingxing Li, Shengyu Li, Yuxuan Zhou
Visual-inertial systems have been widely studied and applied in the last two decades, mainly due to their low cost and power consumption, small footprint, and high availability. This trend has simultaneously led to a large number of visual-inertial calibration methods being presented, as accurate spatiotemporal parameters between sensors are a prerequisite for visual-inertial fusion. In our previous work, iKalibr, a continuous-time-based visual-inertial calibration method was proposed as part of one-shot multi-sensor resilient spatiotemporal calibration. While requiring no artificial target brings considerable convenience, computationally expensive pose estimation is demanded in initialization and batch optimization, limiting its availability. Fortunately, this can be vastly improved for RGBDs with additional depth information, by employing mapping-free ego-velocity estimation instead of mapping-based pose estimation. In this paper, we present the continuous-time ego-velocity estimation-based RGBD-inertial spatiotemporal calibration, termed iKalibr-RGBD, which is also targetless but computationally efficient. The general pipeline of iKalibr-RGBD is inherited from iKalibr, composed of a rigorous initialization procedure and several continuous-time batch optimizations. The implementation of iKalibr-RGBD is open-sourced at https://github.com/Unsigned-Long/iKalibr to benefit the research community.
Citations: 0
Perceptive Pedipulation with Local Obstacle Avoidance
Pub Date : 2024-09-11 DOI: arxiv-2409.07195
Jonas Stolle, Philip Arm, Mayank Mittal, Marco Hutter
Pedipulation leverages the feet of legged robots for mobile manipulation, eliminating the need for dedicated robotic arms. While previous works have showcased blind and task-specific pedipulation skills, they fail to account for static and dynamic obstacles in the environment. To address this limitation, we introduce a reinforcement learning-based approach to train a whole-body obstacle-aware policy that tracks foot position commands while simultaneously avoiding obstacles. Despite training the policy in only five different static scenarios in simulation, we show that it generalizes to unknown environments with different numbers and types of obstacles. We analyze the performance of our method through a set of simulation experiments and successfully deploy the learned policy on the ANYmal quadruped, demonstrating its capability to follow foot commands while navigating around static and dynamic obstacles.
Citations: 0
FaVoR: Features via Voxel Rendering for Camera Relocalization
Pub Date : 2024-09-11 DOI: arxiv-2409.07571
Vincenzo Polizzi, Marco Cannici, Davide Scaramuzza, Jonathan Kelly
Camera relocalization methods range from dense image alignment to direct camera pose regression from a query image. Among these, sparse feature matching stands out as an efficient, versatile, and generally lightweight approach with numerous applications. However, feature-based methods often struggle with significant viewpoint and appearance changes, leading to matching failures and inaccurate pose estimates. To overcome this limitation, we propose a novel approach that leverages a globally sparse yet locally dense 3D representation of 2D features. By tracking and triangulating landmarks over a sequence of frames, we construct a sparse voxel map optimized to render image patch descriptors observed during tracking. Given an initial pose estimate, we first synthesize descriptors from the voxels using volumetric rendering and then perform feature matching to estimate the camera pose. This methodology enables the generation of descriptors for unseen views, enhancing robustness to view changes. We extensively evaluate our method on the 7-Scenes and Cambridge Landmarks datasets. Our results show that our method significantly outperforms existing state-of-the-art feature representation techniques in indoor environments, achieving up to a 39% improvement in median translation error. Additionally, our approach yields comparable results to other methods for outdoor scenarios while maintaining lower memory and computational costs.
Citations: 0
Enabling Shared-Control for A Riding Ballbot System
Pub Date : 2024-09-11 DOI: arxiv-2409.07013
Yu Chen, Mahshid Mansouri, Chenzhang Xiao, Ze Wang, Elizabeth T. Hsiao-Wecksler, William R. Norris
This study introduces a shared-control approach for collision avoidance in a self-balancing riding ballbot, called PURE, marked by its dynamic stability, omnidirectional movement, and hands-free interface. Integrated with a sensor array and a novel Passive Artificial Potential Field (PAPF) method, PURE provides intuitive navigation with deceleration assistance and haptic/audio feedback, effectively mitigating collision risks. This approach addresses the limitations of traditional APF methods, such as control oscillations and unnecessary speed reduction in challenging scenarios. A human-robot interaction experiment, with 20 manual wheelchair users and able-bodied individuals, was conducted to evaluate the performance of indoor navigation and obstacle avoidance with the proposed shared-control algorithm. Results indicated that shared-control significantly reduced collisions and cognitive load without affecting travel speed, offering intuitive and safe operation. These findings highlight the shared-control system's suitability for enhancing collision avoidance in self-balancing mobility devices, a relatively unexplored area in assistive mobility research.
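The abstract gives no equations for PAPF, but the deceleration-assistance idea can be illustrated with a generic potential-field-style velocity scaler: the rider's command is only ever attenuated as obstacles approach, never redirected, which sidesteps the oscillations of classic repulsive APFs. A hypothetical sketch, with invented parameter names and thresholds (not the paper's implementation):

```python
import math

def decelerate_command(v_cmd, obstacles, d_influence=2.0, d_stop=0.3):
    """Scale a commanded planar velocity (vx, vy) by obstacle proximity.

    obstacles: list of (x, y) positions relative to the robot, in meters.
    Beyond d_influence the command passes through untouched; inside
    d_stop it is zeroed; in between it is scaled down linearly.
    """
    if not obstacles:
        return v_cmd
    d_min = min(math.hypot(x, y) for (x, y) in obstacles)
    if d_min >= d_influence:
        return v_cmd
    if d_min <= d_stop:
        return (0.0, 0.0)
    scale = (d_min - d_stop) / (d_influence - d_stop)
    return (v_cmd[0] * scale, v_cmd[1] * scale)
```

Because the output is always a scaled copy of the input, the assist never commands motion the rider did not ask for, which is one plausible reading of "passive" here.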
Citations: 0
Invariant filtering for wheeled vehicle localization with unknown wheel radius and unknown GNSS lever arm
Pub Date : 2024-09-11 DOI: arxiv-2409.07050
Paul Chauchat (AMU SCI, AMU, LIS, DIAPRO), Silvère Bonnabel (CAOR), Axel Barrau
We consider the problem of observer design for a nonholonomic car (more generally, a wheeled robot) equipped with wheel speed sensors whose wheel radius is unknown, and whose position is measured via a GNSS antenna placed at an unknown position in the car. In a tutorial and unified exposition, we recall the recent theory of two-frame systems within the field of invariant Kalman filtering. We then show how to adapt it geometrically to address the considered problem, although it seems at first sight out of its scope. This yields an invariant extended Kalman filter having autonomous error equations and state-independent Jacobians, which is shown to work remarkably well in simulations. The proposed novel construction thus extends the application scope of invariant filtering.
Citations: 0
Dynamic Fairness Perceptions in Human-Robot Interaction
Pub Date : 2024-09-11 DOI: arxiv-2409.07560
Houston Claure, Kate Candon, Inyoung Shin, Marynel Vázquez
People deeply care about how fairly they are treated by robots. The established paradigm for probing fairness in Human-Robot Interaction (HRI) involves measuring the perception of the fairness of a robot at the conclusion of an interaction. However, such an approach is limited, as interactions vary over time, potentially causing changes in fairness perceptions as well. To validate this idea, we conducted a 2x2 user study with a mixed design (N=40) where we investigated two factors: the timing of unfair robot actions (early or late in an interaction) and the beneficiary of those actions (either another robot or the participant). Our results show that fairness judgments are not static. They can shift based on the timing of unfair robot actions. Further, we explored using perceptions of three key factors (reduced welfare, conduct, and moral transgression) proposed by a Fairness Theory from Organizational Justice to predict momentary perceptions of fairness in our study. Interestingly, we found that the reduced welfare and moral transgression factors were better predictors than all factors together. Our findings reinforce the idea that unfair robot behavior can shape perceptions of group dynamics and trust towards a robot, and pave the path to future research directions on moment-to-moment fairness perceptions.
Citations: 0
SIS: Seam-Informed Strategy for T-shirt Unfolding
Pub Date : 2024-09-11 DOI: arxiv-2409.06990
Xuzhao Huang, Akira Seino, Fuyuki Tokuda, Akinari Kobayashi, Dayuan Chen, Yasuhisa Hirata, Norman C. Tien, Kazuhiro Kosuge
Seams are information-rich components of garments. The presence of different types of seams and their combinations helps to select grasping points for garment handling. In this paper, we propose a new Seam-Informed Strategy (SIS) for finding actions for handling a garment, such as grasping and unfolding a T-shirt. Candidates for a pair of grasping points for a dual-arm manipulator system are extracted using the proposed Seam Feature Extraction Method (SFEM). A pair of grasping points for the robot system is selected by the proposed Decision Matrix Iteration Method (DMIM). The decision matrix is first computed from multiple human demonstrations and updated by the robot execution results to improve the grasping and unfolding performance of the robot. Note that the proposed scheme is trained on real data without relying on simulation. Experimental results demonstrate the effectiveness of the proposed strategy. The project video is available at https://github.com/lancexz/sis.
Citations: 0
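The demonstrate-then-refine loop the SIS abstract describes — score candidate grasp-point pairs with a decision matrix, execute the best pair, then update the matrix from the robot's result — can be sketched roughly as follows. This is a hypothetical illustration: the function names, the dict-based matrix, the seam-type keys, and the update rule are assumptions for exposition, not the authors' DMIM implementation.

```python
def select_grasp_pair(candidates, decision_matrix, seam_type):
    """Score each candidate grasp-point pair with the decision-matrix
    weight for its (seam type of point a, seam type of point b) entry
    and return the highest-scoring pair."""
    def score(pair):
        a, b = pair
        return decision_matrix[(seam_type[a], seam_type[b])]
    best = max(candidates, key=score)
    return best, score(best)

def update_decision_matrix(decision_matrix, pair, seam_type, success, lr=0.1):
    """After the robot executes the chosen pair, nudge its weight toward
    1.0 on a successful unfold and toward 0.0 on a failure, so later
    selections reflect real execution outcomes."""
    a, b = pair
    key = (seam_type[a], seam_type[b])
    target = 1.0 if success else 0.0
    decision_matrix[key] += lr * (target - decision_matrix[key])
    return decision_matrix

# Toy example: two candidate pairs over three detected grasp points.
decision_matrix = {(0, 1): 0.9, (1, 2): 0.5}   # weight per seam-type pair
seam_type = {0: 0, 1: 1, 2: 2}                 # grasp-point index -> seam type
pair, score = select_grasp_pair([(0, 1), (1, 2)], decision_matrix, seam_type)
```

Initializing the matrix from human demonstrations and then applying the update after each robot trial mirrors the loop in the abstract, where execution results improve later grasp selection.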
Single-View 3D Reconstruction via SO(2)-Equivariant Gaussian Sculpting Networks
Pub Date : 2024-09-11 DOI: arxiv-2409.07245
Ruihan Xu, Anthony Opipari, Joshua Mah, Stanley Lewis, Haoran Zhang, Hanzhe Guo, Odest Chadwicke Jenkins
This paper introduces SO(2)-Equivariant Gaussian Sculpting Networks (GSNs) as an approach for SO(2)-Equivariant 3D object reconstruction from single-view image observations. GSNs take a single observation as input to generate a Gaussian splat representation describing the observed object's geometry and texture. By using a shared feature extractor before decoding Gaussian colors, covariances, positions, and opacities, GSNs achieve extremely high throughput (>150 FPS). Experiments demonstrate that GSNs can be trained efficiently using a multi-view rendering loss and are competitive, in quality, with expensive diffusion-based reconstruction algorithms. The GSN model is validated on multiple benchmark experiments. Moreover, we demonstrate the potential for GSNs to be used within a robotic manipulation pipeline for object-centric grasping.
Citations: 0
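The shared-extractor design the GSN abstract credits for its throughput can be sketched as below: one feature pass per observation feeds four lightweight heads that decode Gaussian positions, colors, covariances, and opacities. This is a toy, pure-Python illustration — the layer sizes, head names, and single-Gaussian output are made up for clarity, whereas the real network is a learned model emitting a full splat of many Gaussians.

```python
import math
import random

random.seed(0)

def linear(x, w, b):
    """y = W x + b over plain Python lists (stand-in for a learned layer)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def make_layer(n_out, n_in):
    """Random weights, zero biases -- placeholder for trained parameters."""
    return ([[random.uniform(-0.1, 0.1) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

FEAT = 8  # toy size of the shared feature vector

# The expensive extractor runs once per observation; the four decoding
# heads are cheap -- the structural idea behind the reported >150 FPS.
extractor = make_layer(FEAT, 16)
heads = {name: make_layer(size, FEAT)
         for name, size in [("positions", 3), ("colors", 3),
                            ("covariances", 6),  # e.g. upper triangle of 3x3
                            ("opacities", 1)]}

def decode_gaussian(obs_vec):
    feats = linear(obs_vec, *extractor)                 # shared pass
    out = {name: linear(feats, *wb) for name, wb in heads.items()}
    # Squash opacity into (0, 1) with a sigmoid.
    out["opacities"] = [1.0 / (1.0 + math.exp(-v)) for v in out["opacities"]]
    return out
```

Because the heads only read the cached features, adding another output (say, per-Gaussian scale) costs one small layer rather than a second backbone pass.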