Learn to efficiently exploit cost maps by combining RRT* with Reinforcement Learning
Riccardo Franceschini, M. Fumagalli, J. Becerra
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018735
Safe autonomous navigation of robots in complex and cluttered environments is a crucial task and remains an open challenge even in 2D environments. Efficiently minimizing multiple constraints, such as safety or battery drain, requires the ability to understand and leverage information from different cost maps. Rapidly-exploring random tree (RRT) methods are widely used in current path planning thanks to their efficiency in quickly finding a path to the goal. However, these approaches converge slowly towards an optimal solution, especially when the planner must consider aspects such as safety or battery consumption beyond simply reaching the goal. We therefore propose a sample-efficient, cost-aware sampling RRT* method that outperforms previous methods by exploiting information gathered from map analysis. In particular, a Reinforcement Learning agent is used to guide the RRT* sampling toward a near-optimal solution. The performance of the proposed method is demonstrated against different RRT* implementations in multiple synthetic environments.
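The sampling idea described above can be illustrated with a short sketch: a policy distribution over a cost map biases where the tree draws samples, mixed with uniform sampling. This is a toy illustration, not the authors' implementation; the grid, the inverse-cost policy table (standing in for the trained RL agent), the 70/30 mixing ratio, and the bare RRT loop without rewiring are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D cost map: high cost near the centre (e.g. an unsafe region).
H, W = 50, 50
yy, xx = np.mgrid[0:H, 0:W]
cost_map = 1.0 + 4.0 * np.exp(-((xx - 25) ** 2 + (yy - 25) ** 2) / 100.0)

# Stand-in for the RL agent: a fixed sampling distribution that prefers
# low-cost cells. A trained policy would replace this table.
weights = 1.0 / cost_map
policy = (weights / weights.sum()).ravel()

def sample_biased(p_policy=0.7):
    """Draw a sample: policy-guided with probability p_policy, else uniform."""
    if rng.random() < p_policy:
        idx = rng.choice(H * W, p=policy)
    else:
        idx = rng.integers(H * W)
    return np.array(divmod(idx, W))  # (row, col)

# Plug the sampler into a bare-bones RRT loop (no RRT* rewiring shown).
tree = [np.array([0, 0])]
for _ in range(200):
    q_rand = sample_biased()
    nearest = min(tree, key=lambda q: np.linalg.norm(q - q_rand))
    step = q_rand - nearest
    n = np.linalg.norm(step)
    if n > 0:
        new = np.clip(nearest + np.round(3 * step / n), 0, [H - 1, W - 1])
        tree.append(new.astype(int))
print(f"tree size: {len(tree)}")
```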
{"title":"Learn to efficiently exploit cost maps by combining RRT* with Reinforcement Learning","authors":"Riccardo Franceschini, M. Fumagalli, J. Becerra","doi":"10.1109/SSRR56537.2022.10018735","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018735","url":null,"abstract":"Safe autonomous navigation of robots in complex and cluttered environments is a crucial task and is still an open challenge even in 2D environments. Being able to efficiently minimize multiple constraints such as safety or battery drain requires the ability to understand and leverage information from different cost maps. Rapid-exploring random trees (RRT) methods are often used in current path planning methods, thanks to their efficiency in finding a quick path to the goal. However, these approaches suffer from a slow convergence towards an optimal solution, especially when the planner's goal must consider other aspects like safety or battery consumption besides simply achieving the goal. Therefore, it is proposed a sample-efficient and cost-aware sampling RRT* method that can overcome previous methods by exploiting the information gathered from map analysis. In particular, the use of a Reinforcement Learning agent is leveraged to guide the RRT* sampling toward an almost optimal solution. The performance of the proposed method is demonstrated against different RRT* implementations in multiple synthetic environments.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132358068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Adaptive, Reconfigurable, Tethered Aerial Grasping System for Reliable Caging and Transportation of Packages
Shaoqian Lin, Joao Buzzatto, Junbang Liang, Minas Liarokapis
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018625
Aerial robot development has gained momentum in recent years, in both academia and industry, for applications such as package delivery and the transportation of arbitrary payloads. However, current solutions for Unmanned Aerial Vehicle (UAV) based transportation of large objects and parcels rely on some form of packaging standardization. This design constraint greatly limits the applicability of autonomous package-delivery drone concepts. In this paper, we propose a reconfigurable, tethered aerial gripping system that allows autonomous aerial robots to execute a more diverse range of package handling and transportation tasks. The system combines a reconfigurable, telescopic, rectangular frame that conforms to the parcel geometry and lifts it, and a net system that secures the parcel from the bottom, facilitating the execution of caging grasps. This combination provides reliable aerial grasping and transportation capabilities to the package delivery UAV. The grasping and transportation process can be divided into three stages: i) the reconfigurable, telescopic frame conforms to the parcel geometry, securing it; ii) the package is lifted or tilted by the frame's lifting mechanism, exposing its bottom part; and iii) the net is closed, caging and securing the package for transportation. A series of airborne gripping and transportation trials has experimentally validated the system's effectiveness, confirming the viability and usefulness of the proposed concept. Results demonstrate that the prototype can successfully secure and transport a package box. Furthermore, the complete system can be tethered to any type of aerial robotic vehicle.
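The three-stage process maps naturally onto a small state machine. The sketch below is a hypothetical control skeleton under that reading; the state names and the completion-feedback signal are assumptions, not the authors' firmware.

```python
from enum import Enum, auto

class GraspStage(Enum):
    CONFORM = auto()   # telescopic frame conforms to the parcel geometry
    LIFT = auto()      # lifting mechanism raises/tilts the parcel
    CAGE = auto()      # net closes underneath, caging the parcel
    SECURED = auto()

def advance(stage: GraspStage, stage_complete: bool) -> GraspStage:
    """Advance through the three grasp stages described in the abstract."""
    if not stage_complete:
        return stage
    order = [GraspStage.CONFORM, GraspStage.LIFT, GraspStage.CAGE, GraspStage.SECURED]
    return order[min(order.index(stage) + 1, len(order) - 1)]

stage = GraspStage.CONFORM
for feedback in [True, True, True]:  # e.g. limit-switch or force feedback
    stage = advance(stage, feedback)
print(stage)  # GraspStage.SECURED
```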
{"title":"An Adaptive, Reconfigurable, Tethered Aerial Grasping System for Reliable Caging and Transportation of Packages","authors":"Shaoqian Lin, Joao Buzzatto, Junbang Liang, Minas Liarokapis","doi":"10.1109/SSRR56537.2022.10018625","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018625","url":null,"abstract":"Aerial robot development has gathered steam in recent years for applications such as package delivery and transportation of arbitrary payloads, both in academia and business. However, current solutions for Unmanned Aerial Vehicles (UAVs) based transportation of large objects and/or parcels rely on some form of standardization of packaging. This design constraint greatly limits the applicability of the autonomous package delivery drone concepts. In this paper, we propose a reconfigurable, tethered aerial gripping system that can allow for the execution of a more diverse range of package handling and transportation tasks, employing autonomous aerial robots. The system combines a reconfigurable, telescopic, rectangular frame that is used to conform to the parcel geometry and lift it, and a net system that is used to secure the parcel from the bottom, facilitating the execution of caging grasps. This combination provides reliable aerial grasping and transportation capabilities to the package delivery UAV. The grasping and transportation process used by the proposed concept system can be divided into three stages: i) the reconfigurable, telescopic frame conforms to the parcel geometry securing it, ii) the package is lifted or tilted by the frame's lifting mechanism, exposing its bottom part, and iii) the net is closed, caging and securing the package for transportation. A series of airborne gripping and transportation trials have experimentally validated the system's effectiveness, confirming the viability and usefulness of the proposed concept. Results demonstrate that the prototype can successfully secure and transport a package box. Furthermore, the complete system can be tethered to any type of aerial robotic vehicle.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126137390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Docking Using Learning-Based Scene Segmentation in Underground Mine Environments
Abhinav Rajvanshi, Alex Krasner, Mikhail Sizintsev, Han-Pang Chiu, Joseph Sottile, Z. Agioutantis, S. Schafrik, Jimmy Rose
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018611
This paper describes a vision-based autonomous docking solution that moves a coal-mine shuttle car to the continuous miner in GPS-denied underground environments. The solution adapts and improves state-of-the-art autonomous docking techniques using an RGB-D camera specifically for underground mine environments. It comprises five processing modules: scene segmentation, segmented point-cloud generation, occupancy grid estimation, a path planner, and a controller. A two-stage approach is developed to train the scene segmentation network to adapt from normal environments to dark mines. The resulting network accurately detects both the continuous miner and other objects in mines. Based upon these recognized objects, a path is planned for moving the shuttle car from its initial position to the continuous miner while avoiding obstacles and other workers. Experiments are conducted using the system in a 1/6th-scale lab environment and on data collected in a full-scale, realistic mine environment with full-size equipment. The results show the potential of this solution, which can significantly enhance the safety of workers in mining operations.
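The five-module structure can be sketched as a simple pipeline skeleton. All module bodies below are stubs standing in for the real components; the class name, method signatures, and grid size are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DockingPipeline:
    """Skeleton of the five modules named in the abstract; every stage
    here is a stub standing in for the real component."""

    def segment(self, rgbd):
        # Scene segmentation: label each pixel (0 = free, 1 = miner, 2 = obstacle).
        return np.zeros(rgbd.shape[:2], dtype=int)

    def to_pointcloud(self, rgbd, labels):
        # Segmented point-cloud generation from depth plus labels.
        return np.empty((0, 4))  # columns: x, y, z, label

    def occupancy_grid(self, cloud, size=(100, 100)):
        # Project labelled points into a 2D occupancy grid.
        return np.zeros(size)

    def plan(self, grid, start, goal):
        # Path planner stub: straight line through free space.
        return [start, goal]

    def control(self, path):
        # Controller stub: emit (v, w) commands along the path.
        return [(0.5, 0.0) for _ in path]

pipeline = DockingPipeline()
rgbd = np.zeros((480, 640, 4))  # one RGB-D frame
labels = pipeline.segment(rgbd)
cloud = pipeline.to_pointcloud(rgbd, labels)
grid = pipeline.occupancy_grid(cloud)
cmds = pipeline.control(pipeline.plan(grid, (0, 0), (99, 99)))
```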
{"title":"Autonomous Docking Using Learning-Based Scene Segmentation in Underground Mine Environments","authors":"Abhinav Rajvanshi, Alex Krasner, Mikhail Sizintsev, Han-Pang Chiu, Joseph Sottile, Z. Agioutantis, S. Schafrik, Jimmy Rose","doi":"10.1109/SSRR56537.2022.10018611","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018611","url":null,"abstract":"This paper describes a vision-based autonomous docking solution that moves a coalmine shuttle car to the continuous miner in GPS-denied underground environments. The solution adapts and improves state-of-the-art autonomous docking techniques using a RGBD camera specifically in under-ground mine environments. It includes five processing modules: scene segmentation, segmented point-cloud generation, occupancy grid estimation, path planner, and controller. A two-stage approach is developed to train the scene segmentation network for adapting to the changes from normal environments to dark mines. The resulting network detects both the continuous miner and other objects accurately in mines. Based upon these recognized objects, a path is planned for moving the shuttle car from its initial position to the continuous miner, while avoiding obstacles and other workers. Experiments are conducted using the system in a 1/6th-scale lab environment and data collected in a full-scale realistic mine environment with full-size equipment. The results show the potential of this solution, which can significantly enhance the safety of workers in mining operations.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125540957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating ROS and Android for Rescuers in a Cloud Robotics Architecture: Application to a Casualty Evacuation Exercise
Manuel Toscano-Moreno, Juan Bravo-Arrabal, Manuel Sánchez-Montero, Javier Serón Barba, R. Vázquez-Martín, J. Fernandez-Lozano, A. Mandow, A. García-Cerezo
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018629
Cloud robotics and the Internet of Robotic Things (IoRT) can boost the performance of human-robot cooperative teams in demanding environments (e.g., disaster response, mining, demolition, and nuclear sites) by allowing timely information sharing between agents in the field (both human and robotic) and the mission control center. In previous work, we defined an Edge/Cloud-based IoRT and communications architecture for heterogeneous multi-agent systems applied to search and rescue missions (SAR-IoCA). In this paper, we address the integration of a remote mission control center, which performs path planning, teleoperation, and mission supervision, into a ROS network. Furthermore, we present the UMA-ROS-Android app, which publishes smartphone sensor data, including audio and high-definition images from the rear camera, and can be used by responders to request a robot from the control center at a geolocalized field position. The app works up to API level 32 and has been shared with the ROS community. The paper offers a case study in which the proposed framework was applied to a cooperative casualty evacuation mission with professional responders and an unmanned rover with two detachable stretchers, in a high-fidelity exercise performed in Málaga, Spain, in June 2022.
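The publishing pattern the app implements on Android can be illustrated, for readers on the ROS side, with a minimal rospy node. The topic names and rates are assumptions; the actual UMA-ROS-Android app runs on Android rather than rospy.

```python
#!/usr/bin/env python
# Minimal sketch of the pattern described above: a node that pushes
# phone-style IMU and GNSS data into a ROS network. Topic names and
# rates are assumptions, not those of the UMA-ROS-Android app.
import rospy
from sensor_msgs.msg import Imu, NavSatFix

def main():
    rospy.init_node("smartphone_bridge")
    imu_pub = rospy.Publisher("phone/imu", Imu, queue_size=10)
    fix_pub = rospy.Publisher("phone/fix", NavSatFix, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        imu_pub.publish(Imu())        # would be filled from phone sensors
        fix_pub.publish(NavSatFix())  # geolocalized field position
        rate.sleep()

if __name__ == "__main__":
    main()
```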
{"title":"Integrating ROS and Android for Rescuers in a Cloud Robotics Architecture: Application to a Casualty Evacuation Exercise","authors":"Manuel Toscano-Moreno, Juan Bravo-Arrabal, Manuel Sánchez-Montero, Javier Serón Barba, R. Vázquez-Martín, J. Fernandez-Lozano, A. Mandow, A. García-Cerezo","doi":"10.1109/SSRR56537.2022.10018629","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018629","url":null,"abstract":"Cloud robotics and the Internet of robotic things (IoRT) can boost the performance of human-robot cooperative teams in demanding environments (e.g., disaster response, mining, demolition, and nuclear sites) by allowing timely information sharing between agents on the field (both human and robotic) and the mission control center. In previous works, we defined an Edge/Cloud-based IoRT and communications architecture for heterogeneous multi-agent systems that was applied to search and rescue missions (SAR-IoCA). In this paper, we address the integration of a remote mission control center, which performs path planning, teleoperation and mission supervision, into a ROS network. Furthermore, we present the UMA-ROS-Android app, which allows publishing smartphone sensor data, including audio and high definition images from the rear camera, and can be used by responders for requesting a robot to the control center from a geolocalized field position. The app works up to API 32 and has been shared for the ROS community. The paper offers a case study where the proposed framework was applied to a cooperative casualty evacuation mission with professional responders and an unmanned rover with two detachable stretchers in a high-fidelity exercise performed in Malaga (Spain) in June 2022.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131373925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation of Mobile Robots in Human Crowds Based on Automatically Generated Maps
J. Weber, M. Schmidt
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018685
Mobile robots are increasingly used in human environments, which means they have to navigate near the walking paths of humans. Navigation in crowds is difficult for autonomous mobile robots because humans are unpredictable mobile obstacles that hamper localization by obscuring the sensors' fields of view. Especially when there are many dynamic obstacles around the robot, localization is disturbed and navigation may fail. Another challenge is that the robot has to pay special attention to humans for safety reasons. For mobile robots to be used safely and reliably in the vicinity of humans in the future, new algorithms need to be developed and extensively tested. In practice, these tests are very time-consuming and expensive, especially if they are done in many different environments with a large number of humans. To reduce this workload and enable extensive testing in many different environments, we present a new co-simulation in this paper. It makes it possible to simulate crowds in the vicinity of navigating mobile robots. For this, 3D apartments in which robots and humans can navigate are automatically generated from over 80k residential drawings. The simulation thus allows tests to be performed in many generated environments, supporting conclusions that are less dependent on any single environment. In simulated experiments with up to 15 humans in an apartment, the influence of the number of humans on the localization error and on navigation is investigated, and the simulation results are evaluated.
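The kind of sweep this co-simulation enables, measuring localization error as the crowd grows, can be expressed as a short experiment loop. The error model below is a toy stand-in with assumed numbers, not the paper's simulator.

```python
import random

def run_trial(n_humans, rng):
    """Toy stand-in for one simulated navigation run: localization error
    grows with the number of humans occluding the sensor field of view."""
    base_error = 0.05  # metres in an uncluttered apartment (assumed value)
    return base_error + sum(0.02 * rng.random() for _ in range(n_humans))

rng = random.Random(42)
for n_humans in range(0, 16, 5):  # the abstract reports up to 15 humans
    errors = [run_trial(n_humans, rng) for _ in range(100)]
    print(f"{n_humans:2d} humans: mean error {sum(errors) / len(errors):.3f} m")
```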
{"title":"Simulation of Mobile Robots in Human Crowds Based on Automatically Generated Maps","authors":"J. Weber, M. Schmidt","doi":"10.1109/SSRR56537.2022.10018685","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018685","url":null,"abstract":"Ahstract- Mobile robots are more and more used in human environments, which means they have to navigate near the walking path’ of humans. Navigation in crowds is difficult for autonomous mobile robots because humans are unpredictable mobile obstacles that make localization difficult by obscuring sensor fields of view. Especially when there are many dynamic obstacles around the robot, localization is disturbed and navigation may fail. Another challenge is that the robot has to pay special attention to humans for safety reasons. In order for mobile robots to be used safely and reliably in the vicinity of humans in the future, new algorithms need to be developed and extensively tested. In practice, these tests are very time-consuming and expensive, especially if they are done in many different environments with a large number of humans. To reduce this workload and enable extensive testing in many different environments, we present a new cosimulation in this paper. It allows to simulate crowds in the vicinity of navigating mobile robots. For this, 3D apartments are automatically generated from over 80k residential drawings, in which robots and humans can navigate. Thus, this simulation allows to perform tests in many generated environments and thus to make statements that are less dependent on the environment. In simulated experiments with up to 15 humans in an apartment, the influence of the number of humans on the localization error as well as on the navigation is investigated and the simulation results are evaluated.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132559449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hardware Design and Tests of Two-Wheeled Robot Platform for Searching Survivors in Debris Cones
M. Watanabe, Yuuki Ozawa, Kenichi Takahashi, Tetsuya Kimura, K. Tadakuma, G. Marafioti, S. Tadokoro
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018621
Response and recovery are critical during and soon after large-scale disasters such as earthquakes, floods, landslides, strong winds, explosions, and structural failures. In the CURSOR project, we have been developing a search and rescue kit to grasp the situation and efficiently find victims trapped under debris while securing the safety of first responders. In this paper, the hardware design and tests of SMURF, a small two-wheeled robot platform, are presented. SMURF aims to search for victims under the rubble efficiently through large-scale deployment using transport drones. While descending the rubble pile, the robots search for victims using cameras and gas sensors. To evaluate the performance and verify the effectiveness of the mobility system under actual conditions, ruggedization tests, mobility tests, and field tests were conducted. The reliability and mobility results show the potential of the developed two-wheeled robots for large-scale disaster response.
{"title":"Hardware Design and Tests of Two-Wheeled Robot Platform for Searching Survivors in Debris Cones","authors":"M. Watanabe, Yuuki Ozawa, Kenichi Takahashi, Tetsuya Kimura, K. Tadakuma, G. Marafioti, S. Tadokoro","doi":"10.1109/SSRR56537.2022.10018621","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018621","url":null,"abstract":"Response and recovery are critical soon after or during a large-scale disaster such as earthquakes, floods, landslides, strong winds, explosions, structure failures, and so on. In the project called CURSOR, we have been developing a search and rescue kit to grasp the situation and find trapped victims efficiently under the debris while securing the safety of the first responders. In this paper, hardware design and tests of a small two-wheeled robot platform SMURF are shown. SMURF aims to search for victims under the rubble efficiently by large-scale deployment using transport drones. While descending the rubble pile, they search for victims using cameras and gas sensors. To evaluate the performance and verify the effectiveness of the mobility system in actual conditions, ruggedization tests, mobility tests, and field tests were conducted. The reliability and mobility performance results show the potential of the developed two-wheeled robots to carry out large-scale disaster responses.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130570359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SMURF software architecture for low power mobile robots: experience in search and rescue operations
F. Py, Giulia Robbiani, G. Marafioti, Yuuki Ozawa, M. Watanabe, Kenichi Takahashi, S. Tadokoro
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018809
Search and rescue personnel face many challenges when deployed in the field after a natural or man-made disaster. In some cases they are exposed to safety risks, for instance when searching for trapped victims under a partially collapsed building after an earthquake. Robots can serve as a tool for search and rescue teams to explore areas that are too dangerous or too difficult to reach. In this paper, part of the effort made by the CURSOR project is described. In particular, we present a software architecture designed and developed for the Soft Miniaturised Underground Robotic Finder (SMURF), a robotic platform built to assist search and rescue teams during their operations. Finally, we describe the main components of the SMURFs and share the findings and experience we acquired while developing and testing them in realistic environments.
{"title":"SMURF software architecture for low power mobile robots: experience in search and rescue operations","authors":"F. Py, Giulia Robbiani, G. Marafioti, Yuuki Ozawa, M. Watanabe, Kenichi Takahashi, S. Tadokoro","doi":"10.1109/SSRR56537.2022.10018809","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018809","url":null,"abstract":"Search and rescue personnel is facing many challenges when deployed in the field after a natural or man-made disaster. In some cases they are exposed to safety risks, for instance when searching for trapped victims under a partially collapsed building after an earthquake. Robots could be a tool that the search and rescue teams could use to search in areas that are too dangerous or too difficult to reach. In this paper, part of the effort made by the CURSOR project is described. In particular, we present a software architecture designed and developed for the Soft Miniaturised Underground Robotic Finder (SMURF). The SMURF is a robotic platform designed and built to assist the search and rescue teams during their operations. Finally, we describe the main components of the SMURFs and share our findings and our acquired experience when developing and testing the SMURFs in realistic environments.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132616118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning to Encode Vision on the Fly in Unknown Environments: A Continual Learning SLAM Approach for Drones
A. Safa, Tim Verbelen, I. Ocket, A. Bourdoux, Hichem Sahli, F. Catthoor, G. Gielen
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018713
Learning to safely navigate in unknown environments is an important task for autonomous drones used in surveillance and rescue operations. In recent years, a number of learning-based Simultaneous Localisation and Mapping (SLAM) systems relying on deep neural networks (DNNs) have been proposed for applications where conventional feature descriptors do not perform well. However, such learning-based SLAM systems rely on DNN feature encoders trained offline in typical deep learning settings. This makes them less suited for drones deployed in environments unseen during training, where continual adaptation is paramount. In this paper, we present a new method for learning to SLAM on the fly in unknown environments, by modulating a low-complexity Dictionary Learning and Sparse Coding (DLSC) pipeline with a newly proposed Quadratic Bayesian Surprise (QBS) factor. We experimentally validate our approach on data collected by a drone in a challenging warehouse scenario, where the high number of ambiguous scenes makes visual disambiguation hard.
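A rough sketch of the DLSC-with-surprise idea: sparse-code each incoming feature vector, then let a surprise measure gate how strongly the dictionary adapts. The quadratic residual term below merely stands in for the paper's QBS factor, and the coding rule, learning rate, and dimensions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 32))  # dictionary: 64-dim features, 32 atoms
D /= np.linalg.norm(D, axis=0)

def sparse_code(x, D, k=4):
    """Crude sparse code: keep only the k strongest atom responses."""
    a = D.T @ x
    mask = np.zeros_like(a)
    mask[np.argsort(np.abs(a))[-k:]] = 1.0
    return a * mask

for _ in range(500):                    # stream of incoming feature vectors
    x = rng.standard_normal(64)
    a = sparse_code(x, D)
    residual = x - D @ a
    # Stand-in for the paper's Quadratic Bayesian Surprise: a quadratic
    # function of the residual, gating how strongly the dictionary adapts.
    surprise = residual @ residual / len(residual)
    lr = 0.01 * min(surprise, 1.0)      # adapt more when surprised
    D += lr * np.outer(residual, a)     # gradient step on ||x - Da||^2
    D /= np.linalg.norm(D, axis=0)      # keep atoms unit-norm
```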
{"title":"Learning to Encode Vision on the Fly in Unknown Environments: A Continual Learning SLAM Approach for Drones","authors":"A. Safa, Tim Verbelen, I. Ocket, A. Bourdoux, Hichem Sahli, F. Catthoor, G. Gielen","doi":"10.1109/SSRR56537.2022.10018713","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018713","url":null,"abstract":"Learning to safely navigate in unknown environ-ments is an important task for autonomous drones used in surveillance and rescue operations. In recent years, a number of learning-based Simultaneous Localisation and Mapping (SLAM) systems relying on deep neural networks (DNNs) have been proposed for applications where conventional feature descriptors do not perform well. However, such learning-based SLAM systems rely on DNN feature encoders trained offline in typical deep learning settings. This makes them less suited for drones deployed in environments unseen during training, where continual adaptation is paramount. In this paper, we present a new method for learning to SLAM on the fly in unknown environments, by modulating a low-complexity Dictionary Learning and Sparse Coding (DLSC) pipeline with a newly proposed Quadratic Bayesian Surprise (QBS) factor. We experimentally validate our approach with data collected by a drone in a challenging warehouse scenario, where the high number of ambiguous scenes makes visual disambiguation hard.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133319297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Robot System for Autonomous Cooperative Counter-UAS Missions: Design, Integration, and Field Testing
A. Barišić, Marlan Ball, Noah Jackson, Riley McCarthy, Nasib Naimi, Luca Strässle, Jonathan Becker, Maurice Brunner, Julius Fricke, Lovro Markovic, Isaac Seslar, D. Novick, J. Salton, R. Siegwart, S. Bogdan, R. Fierro
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018733
With the rapid development of technology and the proliferation of uncrewed aerial systems (UAS), there is an immediate need for security solutions. Toward this end, we propose the use of a multi-robot system for autonomous and cooperative counter-UAS missions. In this paper, we present the design of the hardware and software components of complementary robotic platforms: a mobile uncrewed ground vehicle (UGV) equipped with a LiDAR sensor, an uncrewed aerial vehicle (UAV) with a gimbal-mounted stereo camera for air-to-air inspections, and a UAV with a capture mechanism equipped with radars and a camera. Our proposed system features 1) scalability to larger areas thanks to the distributed approach and online processing, 2) long-term cooperative missions, and 3) complementary multimodal perception for the detection of multirotor UAVs. In field experiments, we demonstrate the integration of all subsystems in accomplishing a counter-UAS task within an unstructured environment. The obtained results confirm the promise of using multi-robot, multi-modal systems for counter-UAS operations.
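Point 3), complementary multimodal perception, can be illustrated with a naive fusion rule: confirm a track only when radar and camera detections agree spatially. The gating threshold and midpoint fusion below are assumptions, not the system's actual tracker.

```python
import numpy as np

def fuse_detections(radar_xy, camera_xy, gate=2.0):
    """Naive complementary fusion: confirm a UAV track when a radar return
    and a camera detection fall within `gate` metres of each other."""
    confirmed = []
    for r in radar_xy:
        d = np.linalg.norm(camera_xy - r, axis=1)
        if len(d) and d.min() < gate:
            j = int(d.argmin())
            confirmed.append((r + camera_xy[j]) / 2.0)  # midpoint estimate
    return np.array(confirmed)

radar = np.array([[10.0, 5.0], [40.0, -3.0]])   # two radar returns
camera = np.array([[10.6, 5.4]])                # one camera detection
print(fuse_detections(radar, camera))           # one confirmed multirotor track
```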
{"title":"Multi-Robot System for Autonomous Cooperative Counter-UAS Missions: Design, Integration, and Field Testing","authors":"A. Barišić, Marlan Ball, Noah Jackson, Riley McCarthy, Nasib Naimi, Luca Strässle, Jonathan Becker, Maurice Brunner, Julius Fricke, Lovro Markovic, Isaac Seslar, D. Novick, J. Salton, R. Siegwart, S. Bogdan, R. Fierro","doi":"10.1109/SSRR56537.2022.10018733","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018733","url":null,"abstract":"With the rapid development of technology and the proliferation of uncrewed aerial systems (UAS), there is an immediate need for security solutions. Toward this end, we propose the use of a multi-robot system for autonomous and cooperative counter-UAS missions. In this paper, we present the design of the hardware and software components of different complementary robotic platforms: a mobile uncrewed ground vehicle (UGV) equipped with a LiDAR sensor, an uncrewed aerial vehicle (UAV) with a gimbal-mounted stereo camera for air-to-air inspections, and a UAV with a capture mechanism equipped with radars and camera. Our proposed system features 1) scalability to larger areas due to the distributed approach and online processing, 2) long-term cooperative missions, and 3) complementary multimodal perception for the detection of multirotor UAVs. In field experiments, we demonstrate the integration of all subsystems in accomplishing a counter-UAS task within an unstructured environment. The obtained results confirm the promising direction of using multi-robot and multi-modal systems for C-UAS.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115519847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scene Recognition for Urban Search and Rescue using Global Description and Semi-Supervised Labelling
J. Sanchez-Diaz, Francisco Javier Gañán, R. Tapia, J. R. M. Dios, A. Ollero
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018660
Autonomous aerial robots for urban search and rescue (USAR) operations require robust perception systems for localization and mapping. While local feature description is widely used for geometric map construction, global image descriptors leverage scene-level information to perform semantic localization, allowing topological maps to capture relations between places and elements in the scenario. This paper proposes a scene recognition method for USAR operations using a collaborative human-robot approach. The proposed method uses global image description to train an SVM-based classification model with semi-supervised labelled data. It has been experimentally validated in several indoor scenarios on board a multirotor robot.
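A minimal sketch of the descriptor-plus-SVM stage (the semi-supervised labelling step is not shown): compute a global descriptor per image and fit an SVM on the labelled portion. The grey-level histogram descriptor and the synthetic data are assumptions standing in for the paper's descriptor and real scenes.

```python
import numpy as np
from sklearn.svm import SVC

def global_descriptor(image, bins=32):
    """Toy global descriptor: a normalised grey-level histogram of the
    whole image (stands in for whichever descriptor the paper uses)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 1), density=True)
    return hist / bins

rng = np.random.default_rng(1)
# Synthetic 'scenes': dark corridors vs bright rooms, two pseudo-classes.
images = [rng.random((64, 64)) * (0.4 if i % 2 else 1.0) for i in range(40)]
X = np.stack([global_descriptor(im) for im in images])
y = np.array([i % 2 for i in range(40)])

clf = SVC(kernel="rbf").fit(X[:30], y[:30])  # labelled portion only
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```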
{"title":"Scene Recognition for Urban Search and Rescue using Global Description and Semi-Supervised Labelling","authors":"J. Sanchez-Diaz, Francisco Javier Gañán, R. Tapia, J. R. M. Dios, A. Ollero","doi":"10.1109/SSRR56537.2022.10018660","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018660","url":null,"abstract":"Autonomous aerial robots for urban search and rescue (USAR) operations require robust perception systems for localization and mapping. Although local feature description is widely used for geometric map construction, global image descriptors leverage scene information to perform semantic localization, allowing topological maps to consider relations between places and elements in the scenario. This paper proposes a scene recognition method for USAR operations using a collaborative human-robot approach. The proposed method uses global image description to train an SVM-based classification model with semi-supervised labeled data. It has been experimentally validated in several indoor scenarios on board a multirotor robot.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"86 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126282816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}