
Latest publications: 2022 Sixth IEEE International Conference on Robotic Computing (IRC)

Real-Time Learning of Wing Motion Correction in an Unconstrained Flapping-Wing Air Vehicle
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00010
J. Gallagher, E. Matson, Ryan Slater
Small Flapping-Wing Micro-Air Vehicles (FW-MAVs) can experience wing damage and wear while in service. Even small amounts of wing damage can prevent the vehicle from attaining desired waypoints without significant adaptation of onboard flight control. In previous work, we demonstrated that low-level adaptation of wing motion patterns, rather than high-level adaptation of path control, could restore acceptable performance. We further demonstrated that this low-level adaptation could be accomplished while the vehicle was in normal service and without requiring excessive amounts of flight time. Previous work, however, did not carefully consider the use of these methods when the vehicle was completely unconstrained in three-dimensional space (i.e., no mechanical safety supports) and when all vehicle degrees of freedom had to be controlled simultaneously. Also, previous work presumed that the learning algorithm could adapt wing motion patterns with minimal constraints on shape. The newest generation of FW-MAVs we consider places significant constraints on legal wing motions, which brings into question the efficacy of previous work for current vehicles. In this paper, we provide compelling evidence that learning during unconstrained flight under the newly imposed wing motion conditions is both practical and feasible. This paper constitutes the first formal report of these results and removes the final barriers that had existed to implementation in a fully-realized physical FW-MAV.
Citations: 0
Variability Analysis for Robot Operating System Applications
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00028
A. Santos, Alcino Cunha, Nuno Macedo, Sara Melo, Ricardo Pereira
Robotic applications are often designed to be reusable and configurable. Sometimes, due to the different supported software and hardware components, as well as the different implemented robot capabilities, the total number of possible configurations for a single system can be extremely large. In these scenarios, understanding how different configurations coexist and which components and capabilities are compatible with each other is a significant time sink for developers and end users alike. In this paper, we present a static analysis tool, specifically designed for robotic software developed for the Robot Operating System (ROS), that presents a graphical and interactive overview of the system's runtime variability, with the goal of simplifying the deployment of the desired robot configuration.
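As a toy illustration of the kind of compatibility reasoning such a variability analysis performs (the feature names and constraint format below are assumptions for illustration, not the tool's actual model):

```python
def compatible(config, constraints):
    """Check a configuration (a set of feature names) against
    requires/excludes constraints. Hypothetical sketch of a
    variability check, not the paper's implementation."""
    for kind, a, b in constraints:
        if kind == "requires" and a in config and b not in config:
            return False  # feature a needs feature b, which is absent
        if kind == "excludes" and a in config and b in config:
            return False  # features a and b cannot coexist
    return True

# Assumed example constraints for a ROS-like system
constraints = [
    ("requires", "navigation", "lidar_driver"),
    ("excludes", "simulation", "real_hardware"),
]
```

A front-end like the one described could then enumerate valid configurations and render them as an interactive graph.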
Citations: 1
Task Mapping for Hardware-Accelerated Robotics Applications using ReconROS
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00033
Christian Lienen, M. Platzner
Modern software architectures for robotics map tasks to heterogeneous computing platforms comprising multi-core CPUs, GPUs, and FPGAs. FPGAs promise huge potential for energy-efficient and fast computation, but their use in robotics requires profound knowledge of hardware design and is thus challenging. ReconROS, a combination of the reconfigurable operating system ReconOS and the Robot Operating System (ROS), aims to overcome this challenge with a consistent programming model across the hardware/software boundary and support for event-driven programming. In this paper, we summarize different approaches for mapping tasks to computational resources in ReconROS. These approaches include static and dynamic mappings, and the exploitation of data parallelism for single ROS nodes. Further, for dynamic mapping we propose and analyse different replacement strategies for hardware nodes to minimize reconfiguration overhead. We evaluate the presented techniques and illustrate ReconROS's capabilities through an autonomous vehicle example in a hardware-in-the-loop simulation.
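The hardware-node replacement problem mentioned above resembles cache eviction: each reconfigurable slot holds one hardware node, and a miss forces a costly partial reconfiguration. The following minimal LRU policy is an illustrative sketch under that analogy, not ReconROS's actual strategy:

```python
from collections import OrderedDict

class SlotManager:
    """LRU replacement over reconfigurable slots: a hypothetical
    stand-in for the hardware-node replacement strategies analysed
    in the paper."""

    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.slots = OrderedDict()  # node name -> slot id (LRU order)
        self.reconfigs = 0          # count of partial reconfigurations

    def request(self, node):
        if node in self.slots:
            self.slots.move_to_end(node)  # hit: no reconfiguration needed
            return self.slots[node]
        self.reconfigs += 1               # miss: reconfigure some slot
        if len(self.slots) < self.n_slots:
            slot = len(self.slots)        # a free slot remains
        else:
            _, slot = self.slots.popitem(last=False)  # evict least recent
        self.slots[node] = slot
        return slot
```

Counting `reconfigs` over a request trace lets different policies be compared by reconfiguration overhead.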
Citations: 2
Outdoor visual SLAM and Path Planning for Mobile-Robot
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00057
Seongil Heo, Jueun Mun, Jiwoong Choi, Jiwon Park, E. Matson
This paper proposes a robust visual SLAM and path planning algorithm for autonomous vehicles in outdoor environments. Consideration of outdoor characteristics is essential in both the SLAM and path planning processes. This study applies when the exact appearance of the environment must be known but cannot be observed through a satellite map, e.g., inside a forest. The visual SLAM system was developed using GPS data to compensate for the degraded camera recognition performance outdoors. The GPS data is inserted into each thread of the visual SLAM system: Camera Tracking, Local Mapping, and Loop Closing. This enhances the accuracy of the map and saves computational power by preventing useless calculations. In the path planning part, our method divides the path based on the stability of the roads. When determining the optimal path, both road stability and driving time are considered, and weights are assigned based on the GPS data.
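The weighted path selection described in the abstract might look like the following sketch; the cost function, weight values, and data layout are assumptions for illustration, not the authors' implementation:

```python
def segment_cost(time_s, stability, w_time=1.0, w_stab=5.0):
    """Cost of one road segment: lower is better. `stability` is a
    score in [0, 1]; unstable roads are penalized. Weights are
    hypothetical tuning parameters."""
    return w_time * time_s + w_stab * (1.0 - stability)

def best_path(paths):
    """Pick the path (a list of (time_s, stability) segments) with
    the lowest total cost."""
    return min(paths, key=lambda p: sum(segment_cost(t, s) for t, s in p))
```

With these weights, a slightly slower route over stable ground can beat a faster route over unstable terrain.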
Citations: 0
Smart Robot Vision System for Plant Inspection for Disaster Prevention
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00079
Saifuddin Mahmud, Justin Dannemiller, R. Sourave, Xiangxu Lin, Jong-Hoon Kim
Simulation of emergency response scenarios and routine inspections are essential means of ensuring the proper functioning and safety of power plants, oil refineries, iron works, and industrial units. By utilizing autonomous robots, moreover, the reliability and frequency of such inspections can be improved. With the exception of facilities located in hazardous areas, such as off-shore factories, where dispatching response teams might be impossible, accidents caused by human mistakes can be prevented by autonomous inspection and diagnosis of facilities (pumps, tanks, boilers, and so on). One of the primary obstacles in robot-assisted inspection operations is detecting various types of gauges, reading them, and taking appropriate action. This study describes a unique robot vision-based plant inspection system that may be used to increase the frequency of routine checks and, in turn, minimize equipment faults and accidents (explosions or fires caused by gas leaks) caused by human mistakes or natural degradation. The suggested system can conduct facility inspections by detecting and reading a variety of gauges and issuing reports upon the detection of any anomalies. Furthermore, this system is capable of responding to unforeseen anomalous events that pose potential harm to human response teams, such as the direct manipulation of valves in the presence of a gas leak.
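A common post-processing step for analog gauge reading, once a vision system has localized the needle, is linear interpolation from needle angle to value. The helper below is a hypothetical illustration of that step (the angles, ranges, and function name are assumptions, not part of the paper's system):

```python
def gauge_value(needle_deg, min_deg, max_deg, min_val, max_val):
    """Map a detected needle angle (degrees) to a gauge reading by
    linear interpolation between the gauge's calibrated endpoints."""
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    return min_val + frac * (max_val - min_val)

# Assumed gauge: needle sweeps 45°..225°, scale reads 0..10 bar
reading = gauge_value(135.0, 45.0, 225.0, 0.0, 10.0)
```

An anomaly report could then be triggered whenever `reading` leaves a configured safe range.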
Citations: 1
Patterns and tools in Robotic Systems Integration
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00065
N. Garcia, A. Wortmann
Software integration is crucial to developing robotic applications. However, there is little specific literature dedicated to the integration process in robotics. Research in this field today is mostly focused on the concrete implementation phase and on the development of new technologies, but few researchers treat integration as an application-agnostic process. To shed light on the state of robotics software integration, we conducted a survey among researchers and practitioners in the field. As part of this survey we inquired how robotics software integration is currently performed. This study allowed us to find patterns in the way this process is carried out.
Citations: 1
Mechanical Exploration of the Design of Tactile Fingertips via Finite Element Analysis
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00014
Yihua Wang, Xiao-Yan Shi, Longhui Qin
Haptic perception enables robots to interact dexterously with their surrounding environments. Embedded tactile fingers are robust and reliable, and are widely used in robots. However, little research addresses their perceptual mechanism in mechanics or provides design guidance for their structure. In this paper, a numerical model is established to address the contact process between embedded-type tactile fingertips and a rough surface via finite element analysis. Experimental and simulation results are compared during three contact processes: drop-down, sliding, and lifting-up. From the mechanical perspective, the strain and stress within the fingertip are explored, based on which several design suggestions are given for the spatial arrangement of sensing elements. In addition to explaining the perception mechanism from a mechanical viewpoint, this paper also provides a reference for the general design of tactile fingertips.
Citations: 0
Tracking Visual Landmarks of Opportunity as Rally Points for Unmanned Ground Vehicles
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00086
Martin Rebert, Gwenaél Schmitt, D. Monnin
In addition to the remote control for piloting an Unmanned Ground Vehicle (UGV), we aim to add a new high-level command mode: reaching a visual rally point selected by the operator from the camera feed. We modify the recent TransT-M single-object tracker to track the rally point as the UGV moves, and we couple it with our visual odometry. We design a visibility test to discard false positives when the rally point disappears from the field of view. We test the tracking on image sequences taken from our platform and the KITTI Vision Benchmark to demonstrate the efficiency and robustness of the proposed tracking architecture. The visibility test improves the chance of recovery after the landmark disappears and removes false positives. We show that the operator does not need to designate the landmark with high precision, as the tracker is tolerant to imprecision in the position and size of the landmark.
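A visibility test of the kind described can be sketched as a pinhole projection check: given the landmark's position in the camera frame (e.g., from visual odometry), declare it out of view if it sits behind the camera or projects outside the image. The intrinsics layout and thresholds below are assumptions, not the authors' implementation:

```python
import numpy as np

def is_visible(point_cam, K, img_w, img_h, min_depth=0.1):
    """Hypothetical visibility test: project a 3D point expressed in
    the camera frame with intrinsics K (3x3 pinhole matrix) and check
    it lands inside the image bounds at a positive depth."""
    X, Y, Z = point_cam
    if Z < min_depth:
        return False  # behind the camera (or too close): not visible
    u = K[0, 0] * X / Z + K[0, 2]
    v = K[1, 1] * Y / Z + K[1, 2]
    return 0 <= u < img_w and 0 <= v < img_h
```

A tracker output reported while `is_visible` is false can be discarded as a false positive, and tracking resumed once the landmark re-enters the field of view.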
Citations: 0
A virtual suturing task: proof of concept for awareness in autonomous camera motion
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00073
Nicolò Pasini, A. Mariani, A. Munawar, E. Momi, P. Kazanzides
Robot-assisted Minimally Invasive Surgery (MIS) requires the surgeon either to alternately control both the surgical instruments and the endoscopic camera, or to leave this burden to an assistant. This increases the cognitive load and interrupts the workflow of the operation. Camera motion automation has been examined in the literature to mitigate these aspects, but it still lacks situation awareness, a key factor for camera navigation enhancement. This paper presents the development of a phase-specific camera motion automation, implemented in Virtual Reality (VR) during a suturing task. A user study involving 10 participants was carried out using the master console of the da Vinci Research Kit. Each subject performed the suturing task under both the proposed autonomous camera motion and traditional manual camera control. Results show that the proposed system can reduce operational time, decreasing both the user's mental and physical demands. Situational awareness is shown to be fundamental in exploiting the benefits introduced by camera motion automation.
Citations: 1
NIAR: Interaction-aware Maneuver Prediction using Graph Neural Networks and Recurrent Neural Networks for Autonomous Driving
Pub Date : 2022-12-01 DOI: 10.1109/IRC55401.2022.00072
Petrit Rama, N. Bajçinca
Human driving involves an inherent discrete layer in decision-making corresponding to specific maneuvers such as overtaking, lane changing, lane keeping, etc. It is sensible to inherit this at a higher layer of a hierarchical assembly in machine driving as well, in order to enable tractable solutions for the otherwise highly complex problem of autonomous driving. This motivates the present work, which focuses on maneuver prediction for the ego-vehicle. Being inherently feedback-structured, especially in dense traffic scenarios, maneuver prediction requires modeling approaches that account for the interaction awareness of the involved traffic agents. As a direct consequence, the problem of maneuver prediction is aggravated by the uncertainty in the control policies of individual agents. The present paper tackles this difficulty by introducing three deep learning architectures for interaction-aware tactical maneuver prediction of the ego-vehicle, based on the motion dynamics of surrounding traffic agents. The traffic scenario is modeled as an interaction graph, exploiting spatial features between traffic agents via Graph Neural Networks (GNNs). Dynamic motion patterns of traffic agents are extracted via Recurrent Neural Networks (RNNs). These architectures have been trained and evaluated using the BLVD dataset. To increase model robustness and improve the learning process, the dataset is extended using data augmentation, data oversampling, and data undersampling techniques. Finally, we successfully validate the proposed learning architectures and compare the resulting trained models for ego-vehicle maneuver prediction in diverse driving scenarios with various numbers of surrounding traffic agents.
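An interaction graph over traffic agents, as used as input to GNN layers, can be illustrated with a simple distance-threshold adjacency; the radius and representation below are assumptions for illustration, not the paper's actual graph model:

```python
import numpy as np

def interaction_graph(positions, radius=30.0):
    """Build a symmetric adjacency matrix connecting traffic agents
    whose pairwise distance is below `radius` metres. `positions` is
    an (N, 2) array of agent coordinates. Toy sketch only."""
    # Pairwise Euclidean distances via broadcasting: (N, N)
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = (dist < radius).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj
```

A GNN layer would then propagate per-agent motion features (e.g., RNN-encoded trajectories) along the edges of this adjacency.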
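The abstract mentions extending the BLVD dataset with oversampling and undersampling to balance the maneuver classes. A minimal sketch of random resampling to a common per-class count follows; the helper name and the mean-class-size target are assumptions for illustration, not the authors' procedure:

```python
import random
from collections import Counter

def balance_by_resampling(samples, labels, target=None, seed=0):
    """Random over-/under-sample each class to `target` examples.

    `target` defaults to the mean class size (an illustrative choice).
    """
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    if target is None:
        target = round(sum(len(v) for v in by_class.values()) / len(by_class))
    out_s, out_y = [], []
    for y, group in by_class.items():
        if len(group) >= target:  # majority class: undersample without replacement
            picked = rng.sample(group, target)
        else:                     # minority class: oversample with replacement
            picked = group + [rng.choice(group) for _ in range(target - len(group))]
        out_s.extend(picked)
        out_y.extend([y] * target)
    return out_s, out_y

X = list(range(12))
y = ["keep"] * 8 + ["change"] * 3 + ["overtake"] * 1
Xb, yb = balance_by_resampling(X, y)
print(Counter(yb))  # every class resampled to the same count (4 each)
```

In practice one would resample only the training split, so the evaluation data keeps the natural maneuver distribution.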
Citations: 0
Journal
2022 Sixth IEEE International Conference on Robotic Computing (IRC)