
2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR): Latest Publications

Continuous Hybrid Localization in Environments with Physical and Temporal Sensor Occlusions
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597693
J. Borer, M. Pryor
The proliferation of more affordable sensor technologies has enabled a variety of localization modalities and fusion techniques for pose estimation. Using multiple localization techniques yields a more accurate pose estimate and a more robust solution, reducing the risk that any given sensor lacks data due to occlusion; here, occlusion refers to the unavailability of a sensor's data due to signal loss from physical attenuation, distance, low bandwidth, sensor limitations, or sensor failure. In large, complex environments the impact of occlusions cannot be known a priori, either temporally or spatially. Here, we present a novel heuristic-based GNSS carrier-noise re-initialization framework to manage transitions between localization modalities. Disturbance rejection is used to eliminate discrete filter jitter and to arbitrate between competing state-estimate data sources. The hybrid localization method is evaluated in a relevant environment and shown to be more effective than each individual localization modality.
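To illustrate the kind of modality transition the abstract describes, the following sketch shows a hypothetical hysteresis-plus-blend switcher between a GNSS estimate and an odometry estimate. It is not the authors' framework; the class names, thresholds, and blend window are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class PoseSource:
    """One localization modality: a quality metric and its current pose estimate."""
    name: str
    noise: float                # carrier-noise / quality metric (lower is better)
    pose: tuple                 # (x, y, yaw) from this modality


class HybridSwitcher:
    """Hysteresis-based switching between GNSS and odometry with a linear blend."""

    def __init__(self, enter_thresh=0.3, exit_thresh=0.5, blend_steps=20):
        # Hysteresis: GNSS must be clearly good to take over, clearly bad to drop.
        self.enter_thresh = enter_thresh
        self.exit_thresh = exit_thresh
        self.blend_steps = blend_steps
        self.active = "odom"
        self.blend = 0

    def update(self, gnss: PoseSource, odom: PoseSource) -> tuple:
        if self.active == "odom" and gnss.noise < self.enter_thresh:
            self.active, self.blend = "gnss", self.blend_steps
        elif self.active == "gnss" and gnss.noise > self.exit_thresh:
            self.active, self.blend = "odom", self.blend_steps
        target = gnss.pose if self.active == "gnss" else odom.pose
        previous = odom.pose if self.active == "gnss" else gnss.pose
        if self.blend > 0:      # cross-fade instead of a discrete jump in the output
            w = self.blend / self.blend_steps
            self.blend -= 1
            return tuple(w * p + (1 - w) * t for p, t in zip(previous, target))
        return target
```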
Citations: 0
Evaluating Human Understanding of a Mixed Reality Interface for Autonomous Robot-Based Change Detection
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597854
Christopher M. Reardon, Kerstin S Haring, J. Gregory, J. Rogers
Online change detection performed by mobile robots has incredible potential to impact safety and security applications. While robots are superior to humans at detecting changes, humans are still better at interpreting this information and will be responsible for making critical decisions in these contexts. For these reasons, robot-to-human communication of change detection is a fundamental requirement for successful human-robot teams operating in such scenarios. In this work we seek to improve this communication, and present the results of a study that evaluates the interpretability of autonomous robot-based change detections conveyed via mixed reality to untrained human participants. Our results show that humans are able to identify changes and understand the visualizations employed without prior training. Our analysis of the limitations of this initial study should be constructive to future work in this domain.
Citations: 3
MPDrone: FPGA-based Platform for Intelligent Real-time Autonomous Drone Operations
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597857
Bálint Kövári, E. Ebeid
AI-based autonomous onboard drone applications are evolving rapidly and demand dedicated hardware resources to perform effectively. Currently, CPUs and GPUs are commonly used to run these applications. This paper presents a novel drone platform called MPDrone based on cutting-edge MPSoC boards that combine an FPGA, a CPU, and a GPU in a single chip. The proposed platform uses the reconfigurable FPGA fabric to run compute-heavy AI algorithms and the CPU to run ROS, which handles communication with the drone's flight controller and onboard sensors. The paper introduces the design and implementation of the MPDrone platform, which is validated in simulation and real-world testing through an intelligent object detection and landing use case. The test results demonstrate the applicability of the proposed FPGA-based platform for AI applications.
Citations: 5
A Decentralized Asynchronous Collaborative Genetic Algorithm for Heterogeneous Multi-agent Search and Rescue Problems
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597856
Martin Pallin, Jayedur Rashid, Petter Ögren
In this paper we propose a version of the Genetic Algorithm (GA) for combined task assignment and path planning that is highly decentralized, in the sense that each agent only knows its own capabilities and data, plus a set of so-called handover values communicated to it from the other agents over an unreliable, low-bandwidth communication channel. These handover values are used, in combination with a local GA involving no other agents, to decide which tasks to execute and which tasks to leave to others. We compare the performance of our approach to a centralized version of the GA, and to a partly decentralized version in which computations are local but every agent needs complete information about all other agents, including position, range, battery, and local obstacle maps. We compare solution quality as well as the messages sent by the three algorithms, and conclude that the proposed algorithm suffers a small decrease in performance but requires significantly less communication.
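As a rough illustration of how communicated handover values could enter a purely local GA, the toy sketch below scores an assignment by the agent's own cost for the tasks it keeps plus the handover values for the tasks it leaves to others. This is not the authors' algorithm; the fitness function, genetic operators, and parameters are assumptions.

```python
import random


def local_fitness(assignment, own_cost, handover):
    # assignment[i] is True if this agent executes task i, False if it leaves it.
    own = sum(own_cost[i] for i, take in enumerate(assignment) if take)
    left = sum(handover[i] for i, take in enumerate(assignment) if not take)
    return own + left                      # lower is better


def local_ga(own_cost, handover, pop=30, gens=200, p_mut=0.1):
    n = len(own_cost)
    population = [[random.random() < 0.5 for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: local_fitness(a, own_cost, handover))
        parents = population[: pop // 2]   # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [not g if random.random() < p_mut else g for g in child]
            children.append(child)
        population = parents + children
    return min(population, key=lambda a: local_fitness(a, own_cost, handover))


# Example: four tasks; this agent is cheap for tasks 0 and 2, so it should keep
# them and leave tasks 1 and 3 to whichever agent sent the lower handover value.
best = local_ga(own_cost=[1.0, 9.0, 2.0, 8.0], handover=[5.0, 3.0, 6.0, 2.5])
```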
Citations: 3
A Shared Autonomy Surface Disinfection System Using a Mobile Manipulator Robot
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597678
Alana Sanchez, W. Smart
Robots are being increasingly used in the fight against highly-infectious diseases such as Ebola, MERS, and SARS-CoV-2. Many of these robots use ultraviolet lights mounted on a mobile base to inactivate the pathogens. While the lights are generally effective at irradiating open spaces and walls, they are less effective when it comes to horizontal surfaces, because of the orientation of the light sources. This can be problematic for pathogens such as Ebola, where transmission via contaminated work surfaces, which are often horizontal, is a concern. In this paper, we describe the design, implementation, and testing of an ultraviolet light disinfection system implemented on a mobile manipulator robot designed to address the problem of horizontal surface disinfection. A human supervisor designates a surface for disinfection, the robot autonomously plans and executes an end-effector trajectory to disinfect the surface to the required certainty, and then displays the results for the supervisor to verify. We also provide some background information on Ultraviolet Germicidal Irradiation (UVGI) and describe how we constructed and validated models of ultraviolet radiation propagation and accumulation in our system. Finally, we describe our implementation on a Fetch mobile manipulation platform, and discuss how the practicalities of implementation on a real robot affect our models.
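The sketch below illustrates the general idea of accumulating UV dose on a horizontal surface from a moving light source, assuming an inverse-square, cosine-weighted irradiance model. It is not the validated propagation model from the paper; the power value, time step, and disinfection-threshold notion are placeholders.

```python
import numpy as np


def accumulate_dose(surface_pts, lamp_path, power_w=10.0, dt=0.1):
    """surface_pts: (N, 3) points on a horizontal surface; lamp_path: (T, 3) lamp positions."""
    dose = np.zeros(len(surface_pts))
    normal = np.array([0.0, 0.0, 1.0])            # horizontal surface normal
    for lamp in lamp_path:
        vec = lamp - surface_pts                  # point-to-lamp vectors
        dist = np.linalg.norm(vec, axis=1)
        cos_inc = np.clip((vec @ normal) / dist, 0.0, None)
        irradiance = power_w * cos_inc / (4.0 * np.pi * dist ** 2)   # W/m^2
        dose += irradiance * dt                   # J/m^2 accumulated over this step
    return dose


# A point counts as disinfected once its dose exceeds a pathogen-specific threshold.
```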
Citations: 6
A Soft Drone with Multi-modal Mobility for the Exploration of Confined Spaces
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597683
Amedeo Fabris, Steffen Kirchgeorg, S. Mintchev
In post-disaster scenarios, rescuers are often confronted with the challenge of accessing confined and cluttered environments including long and narrow passageways, gaps in walls or ceilings. Because of their mobility and versatility, there is a growing interest in developing drones for the remote exploration of these dangerous and often difficult to access places. However, the mechanical design and locomotion strategies of current drones limit the size of the confined space that can be explored. In this work, we present a quadcopter capable of traversing long passageways 34% smaller than its nominal size. The combination of a soft morphing frame and multi-modal mobility allows the drone to exploit a new dynamic strategy for passageway traversal. The drone flies at a given speed towards the entrance of the passageway until it collides with it. The momentum and ability of the frame to soften allow the drone to passively fold and enter. Once the drone is squeezed between the walls of the passageway, it uses two tracks to crawl through. Through experiments, we characterize the main mechanical systems of the drone and study the entry into crevices of different sizes.
Citations: 4
3D position estimation of drone and object based on QR code segmentation model for inventory management automation
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597865
Bohan Yoon, Hyeonha Kim, Geonsik Youn, J. Rhee
Recently, drones have been used more widely in fields such as safety, security, and rescue. Drones can survey a wide area through an onboard camera, and research has explored their use for inventory management automation. For inventory management automation in a large warehouse, a camera mounted on the drone scans pre-placed ground QR (Quick Response) codes to determine the flight path. The drone follows the navigated path and manages the warehouse inventory by scanning the barcode or QR code attached to each product. However, unlike warehouses, which have well-defined grids and shelves, the locations where products are stored in a yard are not fixed but flexible. Thus, for efficient inventory management in the storage yard, it is also necessary to estimate the positions of the QR codes attached to the products. Therefore, in this paper, we propose a position estimation method for drones and products based on a QR code segmentation model. The segmentation model detects the region of a perspective-distorted QR code, where the distortion is caused by the angle between the camera and the code. Subsequently, shape correction and decoding of the detected QR code region determine whether it is a ground QR code, and the position of the drone is estimated. Finally, the 3D coordinates of the QR codes attached to products, rather than the ground QR codes, are calculated from images taken by the drone from two different viewpoints. In this way, the 3D positions of the drone and of the product QR codes are estimated using the ground QR codes, enabling efficient inventory management in the storage yard.
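The two-view estimation of a product QR code's 3D position described above amounts to triangulation. A minimal sketch using OpenCV's linear triangulation is shown below; the camera intrinsics are assumed values, and the drone poses for the two viewpoints are supplied from outside (e.g., from the ground-QR-based drone position estimate).

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                    # assumed camera intrinsics


def projection_matrix(R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])     # world -> image projection


def triangulate_qr_center(px1, px2, R1, t1, R2, t2):
    """px1, px2: (u, v) pixel of the QR code centre in the two drone images."""
    P1, P2 = projection_matrix(R1, t1), projection_matrix(R2, t2)
    pts1 = np.array(px1, dtype=float).reshape(2, 1)
    pts2 = np.array(px2, dtype=float).reshape(2, 1)
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous world point
    return (X[:3] / X[3]).ravel()                  # 3D coordinates of the QR code
```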
Citations: 0
Flying-Climbing Mobile Robot for Steel Bridge Inspection
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597676
A. Pham, Anh T. La, Ethan Chang, Hung M. La
Research on robots that assist people in inspecting the quality of steel bridges has attracted significant attention in recent years. However, the intricate structure of steel bridge components makes it challenging to move a robot across the bridge to perform inspections. This paper presents a new hybrid flying-climbing robotic system that can move flexibly and quickly to different positions on a steel bridge. In addition to using high-resolution cameras for an overview, the design allows the robot to adhere to steel surfaces and operate as a mobile robot for more detailed inspection with our giant magneto-resistance (GMR) sensor array system. We conduct a mechanical analysis to show the climbing capability of the mobile part. Additionally, we develop a landing algorithm that allows the robot to land on a steel surface and perform in-depth inspection safely. The designed GMR sensor array has demonstrated the ability to detect steel cracks, supporting the in-depth inspection mode. We have tested and validated the developed robot on real bridges to ensure that the design works well and is stable.
Citations: 2
Robust Multisensor Fusion for Reliable Mapping and Navigation in Degraded Visual Conditions
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597866
Moritz Torchalla, Marius Schnaubelt, Kevin Daun, O. Stryk
We address the problem of robust simultaneous mapping and localization in degraded visual conditions using low-cost off-the-shelf radars. Current methods often use high-end radar sensors or are tightly coupled to specific sensors, limiting the applicability to new robots. In contrast, we present a sensor-agnostic processing pipeline based on a novel forward sensor model to achieve accurate updates of signed distance function-based maps and robust optimization techniques to reach robust and accurate pose estimates. Our evaluation demonstrates accurate mapping and pose estimation in indoor environments under poor visual conditions and higher accuracy compared to existing methods on publicly available benchmark data.
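The signed-distance-function map updates mentioned above follow the usual weighted-average pattern. The sketch below shows a generic truncated-SDF update along a single range ray; it is not the paper's novel radar forward sensor model, and the voxel size and truncation distance are illustrative values.

```python
import numpy as np


def update_tsdf(tsdf, weights, origin, endpoint, voxel=0.05, trunc=0.3):
    """tsdf, weights: dicts keyed by voxel index; origin/endpoint: np arrays (3,)."""
    direction = endpoint - origin
    rng = float(np.linalg.norm(direction))
    direction = direction / rng
    # Sample voxels along the ray, up to one truncation distance behind the hit.
    for d in np.arange(0.0, rng + trunc, voxel):
        cell = tuple(np.floor((origin + d * direction) / voxel).astype(int))
        sdf = float(np.clip(rng - d, -trunc, trunc))   # truncated signed distance
        w = 1.0
        if cell in tsdf:                                # weighted running average
            tsdf[cell] = (weights[cell] * tsdf[cell] + w * sdf) / (weights[cell] + w)
            weights[cell] += w
        else:
            tsdf[cell], weights[cell] = sdf, w
```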
Citations: 2
Deployment of Aerial Robots after a major fire of an industrial hall with hazardous substances, a report
Pub Date : 2021-10-25 DOI: 10.1109/SSRR53300.2021.9597677
H. Surmann, Dominik Slomma, Stefan Grobelny, R. Grafe
This technical report describes the mission and the experience gained during the reconnaissance of an industrial hall containing hazardous substances after a major fire in Berlin. During this operation, only UAVs and cameras were used to obtain information about the site and the building. First, a geo-referenced 3D model of the building was created in order to plan the entry into the hall. Subsequently, the UAVs were flown through the heavily damaged interior to take pictures from inside the hall. A 360° camera mounted under the UAV was used to collect images of the surrounding area, especially from sections that were difficult to fly into. Because the collected data set contained near-duplicate as well as blurred images, non-optimal images were removed using visual SLAM, bundle adjustment, and blur detection so that a 3D model and overviews could be computed. It turned out that the emergency services were not able to extract the necessary information from the 3D model. Therefore, an interactive panorama viewer with links between 360° images was implemented; the links are derived from the semi-dense point cloud and the camera positions localized by the visual SLAM algorithm, so that the emergency forces could view the surroundings.
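The report does not state which blur detector was used; one common heuristic is the variance of the Laplacian, sketched below with an illustrative threshold.

```python
import cv2


def is_blurred(image_path, threshold=100.0):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance -> few sharp edges
    return score < threshold


# Example: keep only the sharp images for reconstruction.
# keep = [p for p in image_paths if not is_blurred(p)]
```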
Citations: 9