Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018686
W. Smith, Yongming Qin, T. Furukawa, G. Dissanayake
This paper presents a multistage approach to refining the map of an environment with an autonomous mobile robot so that the map satisfies a targeted resolution and local accuracy. The proposed approach consists of two steps. Given a globally accurate coarse map of the environment developed using a conventional technique such as SLAM or SfM with bundle adjustment, the first step plans a path for the robot to revisit the environment while maintaining a desired distance to all occupied regions of interest, since the resolution and the local accuracy of the map typically depend on the distance from which objects in the environment are observed. An Unoccupancy Distance Map (UDM) and a reduced-order Travelling Salesman Problem (TSP) technique are newly proposed to solve this class of problems. In the second step, an online path replanning and map refinement technique is proposed to achieve the targeted resolution and local accuracy of the map. Parametric studies first validated the effectiveness of the two proposed steps. The autonomous capability of the proposed approach was then demonstrated successfully in a practical mission.
Title: Autonomous Robotic Map Refinement for Targeted Resolution and Local Accuracy
Published in: 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)
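The abstract does not specify the UDM construction or the reduced-order TSP, but the underlying idea (compute each free cell's distance to the nearest occupied cell, pick viewpoints at the desired stand-off distance, then order them into a short revisit tour) can be sketched. The BFS distance transform and nearest-neighbour tour below are illustrative assumptions, not the authors' formulation.

```python
from collections import deque
from math import dist

def unoccupancy_distance_map(grid):
    """Distance (in cells, 4-connected) from every cell to the nearest
    occupied cell, via multi-source BFS; occupied cells get 0."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dmap = [[INF] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:  # 1 = occupied, 0 = free
                dmap[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dmap[nr][nc] == INF:
                dmap[nr][nc] = dmap[r][c] + 1
                queue.append((nr, nc))
    return dmap

def viewpoints_at_distance(dmap, target):
    """Cells whose clearance equals the desired observation distance."""
    return [(r, c) for r, row in enumerate(dmap)
            for c, v in enumerate(row) if v == target]

def greedy_tour(points, start=(0, 0)):
    """Nearest-neighbour ordering of the viewpoints: a cheap stand-in
    for a reduced-order TSP solution."""
    tour, current, todo = [], start, set(points)
    while todo:
        nearest = min(todo, key=lambda p: dist(current, p))
        todo.remove(nearest)
        tour.append(nearest)
        current = nearest
    return tour
```

On a toy 3x3 grid with one occupied centre cell, the four corners sit at clearance 2 and would be the candidate viewpoints for a stand-off distance of two cells.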
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018638
Nathan L. Schomer, J. Adams
Mountain search and rescue in North America relies on limited funding and teams of volunteers to assist people in remote environments. The search and rescue process is often difficult and dangerous. Unmanned aerial vehicles can decrease rescuer workload and increase safety. However, commercially available products do not meet the unique needs of mountain search and rescue, resulting in slow adoption within this community. A survey of mountain rescue teams was conducted to collect financial and unmanned aerial vehicle use information, which will inform development to meet the financial and technology needs of mountain search and rescue.
Title: Unmanned Aerial Vehicle Usage and Affordability Among Mountain Search and Rescue Teams
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018630
M. Sobhani, Alex Smith, M. Giuliani, A. Pipe
The usability of a novel triple-arm mixed-reality robot teleoperation system is investigated. The system is developed to provide a sense of remote presence for the operator. Different types of interfaces and camera setups have been proposed previously; our novel approach is a moving stereo vision camera mounted on a robotic arm in the remote scene and controlled with a virtual reality (VR) headset. By streaming live stereo video into the VR headset in a video see-through configuration, the operator experiences a sense of remote presence. The teleoperation task is performed using two further robotic arms, set up in a mirror teleoperation configuration so that the remote (follower) arm copies the movements of the control (leader) arm. To investigate the effect of latency on the operator, a within-subject usability study of the system with 20 participants was conducted. Participants completed a pick-and-place task, sorting objects into marked containers, under two conditions. In one condition, the camera robot arm was controlled by a joint position controller with low latency but jittery robot motion. In the other, it was controlled by a joint velocity controller with higher latency but smooth motion. Participants completed the System Usability Scale questionnaire after each trial; task completion time and participants' head movement were also recorded as objective measures. The study did not show a significant difference in any of the objective or subjective measures, although the position controller scored higher overall. This could be due to the number of participants or to participants' ability to adapt to the latency in the system; further analysis is required in future work.
Title: Usability Study of a Novel Triple-arm Mixed-Reality Robot Teleoperation System
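The System Usability Scale questionnaire used above has a standard scoring rule: ten items answered on a 1-5 Likert scale, with odd-numbered items positively worded and even-numbered items negatively worded, rescaled to a 0-100 score. A minimal implementation of that rule:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten Likert
    responses, each 1 (strongly disagree) to 5 (strongly agree).
    Odd items contribute (r - 1), even items (5 - r); sum times 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1..5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, the best possible answer pattern (5 on odd items, 1 on even items) scores 100, and all-neutral answers (3 throughout) score 50.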
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018756
Friedrich Steinhäusler, Harris V. Georgiou
Finding victims after devastating earthquakes is a challenging task. In response to these challenges, a European-Japanese consortium initiated the EU Horizon-2020 project CURSOR (*). Its primary objective is to develop an innovative Search and Rescue (SAR) Kit that is mobile, quick to deploy and easy to operate. An important component of the SAR Kit is the CURSOR Drone Fleet (CDF). This paper describes the advantages and limitations of the CDF, consisting of a Tethered Mothership Drone (MD), a Ground Penetrating Radar Drone (GPRD), an Advanced Situational Awareness Drone (ASAD), a Transport Drone (TD) and Modelling Drones (MOD). More specifically, the inherent technical and logistical limitations of the CDF are analyzed in relation to current state-of-the-art (SOTA) technology, namely: (1) drone take-off weight optimization, (2) the impact of no-fly zones and electronic stray signals, and (3) meteorological conditions (ambient air temperature, wind speed, gustiness). The CDF provides First Responders (FR) on scene with (a) continuous drone-based aerial surveillance up to 100 m above ground; (b) aerial photos, HD video and thermal images of the disaster area; (c) radar-based information on potential survivors buried under rubble; and (d) a 3D model of the disaster area at low or high resolution. The proposed CDF in the CURSOR Kit is demonstrated as a key component in improving the operational effectiveness of field teams in future SAR missions.
Title: Detection of victims with UAVs during wide area Search and Rescue operations
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018668
David De Schepper, Ivo Dekker, Mattias Simons, Lowie Brabants, W. Schroeyers, E. Demeester
Over the last decades, the (partial) automation of tasks in the dismantling and decommissioning of potentially nuclear-contaminated environments has become of growing interest for, e.g., homeland security, disaster response, continuous maintenance, and dismantling and decommissioning activities. Nowadays, the nuclear scene is still dominated by manual labour. Radiation protection officers have the task of characterising an environment, which is often unknown a priori, before any dismantling and decommissioning activity can take place. Besides the potential health risks involved, ranging from radiation sickness to an increased risk of cancer, this important preliminary task is very time-consuming and prone to errors in the measurements taken and in the storage and post-processing of the recorded data. To mitigate these disadvantages, this paper presents the design and development of a proof-of-concept semi-autonomous, ground-based mobile manipulator robot, ARCHER (Autonomous Robot platform for CHaractERization), suited for radiological monitoring purposes. Besides a mechanical and electrical overview of the design of the mobile manipulator, this paper describes the software tools used to build and deploy the robot. In addition, it reports the results of several in-situ laboratory experiments in which the mobile manipulator platform performs a radiological scanning task on a wall.
Title: Towards a Semi-Autonomous Robot Platform for the Characterisation of Radiological Environments
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018710
Hsuan-Chi Chang, K. Tseng
Quadrupedal robots are designed to walk over complex terrains (e.g., hills, rubble, and deformable terrains). However, training quadruped robots to walk on complex terrains is a challenge. One difficulty stems from the sensors. Exteroceptive sensors such as cameras are cheap and convenient, but cameras are limited in some environments (e.g., sewers without lights). Training a legged robot using proprioceptive sensing alone avoids this limitation. This research proposes a method combining a terrain curriculum with adaptive submodularity. The legged robot is able to adaptively select actions over complex terrains without exteroceptive sensors. Adaptive submodularity is utilized to predict the terrain and take sequential actions with theoretical guarantees. The experiments demonstrate that the proposed approach makes fewer prediction errors than a random approach.
Title: Localizing Complex Terrains through Adaptive Submodularity
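The abstract does not detail the adaptive-submodularity machinery, but the classic non-adaptive special case, greedy maximization of a monotone submodular utility, illustrates where the theoretical guarantee comes from. The coverage utility in the usage example is a hypothetical stand-in for the paper's terrain-prediction objective, not its actual formulation.

```python
def greedy_submodular(ground_set, utility, budget):
    """Greedily pick up to `budget` elements, each round taking the
    largest marginal gain; for monotone submodular utilities the result
    is within a (1 - 1/e) factor of the best set of that size."""
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for element in sorted(ground_set - chosen):  # sorted: deterministic ties
            gain = utility(chosen | {element}) - utility(chosen)
            if gain > best_gain:
                best, best_gain = element, gain
        if best is None:  # no element adds value; stop early
            break
        chosen.add(best)
    return chosen
```

As a usage sketch, suppose each candidate action observes a set of terrain hypotheses; with utility = size of the union of observed sets, two greedy picks suffice to cover all hypotheses in a small example.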
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018667
Gong Chen, Duong Nguyen-Nam, Malika Meghjani, Phan Minh Tri, Marcel Bartholomeus Prasetyo, Mohammad Alif Daffa, Tony Q. S. Quek
We introduce the Astralis simulator, a high-fidelity robot simulation platform for the development of multi-robot and human-robot coordination algorithms that can be seamlessly translated to real-world environments. The simulator provides the novel features of dynamically initializing the virtual environment with real-world 3D point cloud data and a uniformly random arrangement of static and dynamic obstacles. This allows the user to generate several variants of a base scenario for strategic planning and algorithm validation. The simulator can receive high-level command inputs to control a team of Unmanned Aerial Vehicles (UAVs), Unmanned Ground Vehicles (UGVs), and human avatars. The simulated robot models are built with high-fidelity control and navigation capabilities that can be readily deployed on real robot platforms. We use the Astralis simulator to analyze human-robot coordination algorithms for tracking, following and leading targets in a search and rescue mission. The algorithm is validated using a UAV and a UGV in simulation and on physical platforms. Our simulator provides results comparable to the real-world experiments in terms of the trajectories executed by the robots.
Title: Astralis: A High-Fidelity Simulator for Heterogeneous Robot and Human-Robot Teaming
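One of the listed features, generating several variants of a base scenario via uniformly random obstacle placement, can be sketched in a few lines. The function name, arguments, and dictionary layout below are assumptions for illustration and are not Astralis' actual API.

```python
import random

def scenario_variants(base_map, n_obstacles, bounds, n_variants, seed=0):
    """Build reproducible variants of a base scenario by scattering
    obstacle positions uniformly at random inside workspace bounds."""
    (xmin, xmax), (ymin, ymax) = bounds
    rng = random.Random(seed)  # seeded so trials can be replayed exactly
    variants = []
    for _ in range(n_variants):
        obstacles = [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
                     for _ in range(n_obstacles)]
        variants.append({"map": base_map, "obstacles": obstacles})
    return variants
```

Seeding the generator matters for validation: the same seed reproduces the same obstacle layout, so an algorithm change can be compared on identical scenario variants.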
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018645
Tomoki Sakaue, Toru Nagakita, T. Kaneda, Y. Yamashita, Koju Nishizawa, Kenshi Kanbara, Hajime Hanaoka, S. Shirai, Shiro Kikuchi, Daisaku Uchijima
A vast amount of radioactive water was generated as a result of the nuclear incident at the Fukushima Daiichi Nuclear Power Station. Underground floors of the radioactive waste management facility were utilized as temporary storage for the contaminated water, and hundreds of sandbags containing zeolite particles were placed on the floors to remove radioactivity from the water. As the decommissioning work has now been ongoing for over a decade, all zeolite sandbags must be removed and the floors dried up. A survey was conducted in this highly irradiated environment to investigate the unknown present condition of the zeolite and floor structures. A remotely operated USV (Unmanned Surface Vehicle) named “ROV-boat”, modified from a reliable commercial off-the-shelf ROV (Remotely Operated Vehicle), was developed for this purpose. The ROV-boat was equipped with two cameras and a dosimeter installed on the body, with an external float for increased buoyancy, allowing it to move on the water surface and conduct a detailed survey of the environment. The prototype was evaluated and improved iteratively following an agile development approach. The ROV-boat was successfully deployed by TEPCO (Tokyo Electric Power Company) several times in the summer of 2021, attaining valuable data for the future dry-up process.
Title: Development of USV Used in Underground Floors Surveying of the Contaminated Buildings at Fukushima Daiichi NPS
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018705
E. Scorsone, Jocelyn Boutzen, David Fras, Khasim Cali, G. Marafioti, T. Mugaas, Krishna Persaud
A miniaturized gas-sensing module combining commercial off-the-shelf gas sensors with a novel sensor array of Quartz Crystal Microbalance transducers coated with Odorant Binding Proteins (OBP) was developed. The so-called “sniffer” module was integrated into a search and rescue robot with the ultimate goal of improving the efficiency of Urban Search and Rescue (USaR) operations on disaster sites, typically following the collapse of buildings. The selected sensors, combined with a fuzzy logic algorithm and supported by other sensing technologies including visible/infrared video cameras and a microphone, were aimed at localizing human victims and at discriminating between alive and deceased victims, in order to prioritize the USaR operations.
Title: Sniffer: a protein-based advanced VOC sensor for victim search
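The fuzzy logic fusion of chemical and acoustic cues described above can be illustrated with a toy rule base. The membership thresholds and the (VOC AND CO2) OR audio rule below are invented for illustration only; they are not the paper's calibrated system.

```python
def ramp(x, lo, hi):
    """Piecewise-linear membership: 0 at or below lo, 1 at or above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def victim_likelihood(voc_ppb, co2_ppm, audio_conf):
    """Toy fuzzy rule: (VOC AND CO2) OR audio, with min as fuzzy AND
    and max as fuzzy OR. All thresholds are hypothetical."""
    m_voc = ramp(voc_ppb, 50, 200)     # assumed VOC calibration, ppb
    m_co2 = ramp(co2_ppm, 450, 1500)   # assumed CO2 calibration, ppm
    chemical = min(m_voc, m_co2)       # both chemical cues must agree
    return max(chemical, audio_conf)   # any strong cue raises the alarm
```

The min/max pair is the standard Zadeh conjunction/disjunction; a real system would tune the membership functions against field data and add cues such as thermal imagery.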
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018759
R. Edlinger, Christoph Föls, Ulrich Mitterhuber, A. Nüchter
Dexterous mobile manipulators that are capable of performing a wide array of tasks are essential for unstructured human-centered environments. Especially in rescue scenarios where time and resources are limited, a gripping system should be as versatile but at the same time as efficient as possible. In addition, situational awareness in accidents and impaired visibility is essential for safely performing missions or automating behaviors, and thus supporting operators. The main contribution of this work is a multi-functional gripping system with technologies and methods for manipulation in rescue and recovery operations as well as for handling hazardous materials. The gripping system is supplemented by an RGB and thermal camera and object detection algorithms, which run in real-time on an embedded device for robust recognition in harsh and dynamic environments. The proposed multi-functional gripping system has been thoroughly evaluated and tested in laboratory experiments and real field facilities.
Title: AI supported Multi-Functional Gripping System for Dexterous Manipulation Tasks