A Comparison of Point Cloud Registration Techniques for on-site Disaster Data from the Surfside Structural Collapse
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018779
Ananya Bal, Robert Ladig, Pranav Goyal, J. Galeotti, H. Choset, David F. Merrick, Robin R. Murphy
3D representations of geographical surfaces in the form of dense point clouds can be a valuable tool for documenting and reconstructing a structural collapse, such as the 2021 Champlain Towers Condominium collapse in Surfside, Florida. Point cloud data reconstructed from aerial footage taken by uncrewed aerial systems at frequent intervals over a dynamic search and rescue scene poses significant challenges. Properly aligning, or registering, large point clouds in this context is difficult because they capture multiple regions whose geometries change over time. These regions correspond to dynamic features such as excavation machinery, cones marking boundaries, and the structural collapse rubble itself. In this paper, the performance of commonly used point cloud registration methods on the dynamic scenes present in the raw data is studied. The use of Iterative Closest Point (ICP), rigid Coherent Point Drift (CPD), and PointNetLK for registering dense point clouds, reconstructed sequentially over a time frame of five days, is evaluated. All methods are compared on registration error, execution time, and robustness, and a concluding analysis with a judgement of the method best suited to the specific data at hand is provided.
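Of the three methods compared, ICP is the most widely available in open-source tooling. As a point of reference only (this is not the authors' pipeline, and the file paths, voxel size, and correspondence threshold below are placeholder assumptions), a minimal pairwise ICP alignment of two such reconstructions could be sketched with Open3D as follows:

```python
# Illustrative only: a minimal pairwise ICP alignment with Open3D, not the
# authors' pipeline. File names, voxel size, and thresholds are placeholders.
import numpy as np
import open3d as o3d

def register_pair(source_path, target_path, voxel_size=0.5, max_corr_dist=2.0):
    # Load two dense reconstructions captured on consecutive days.
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)

    # Downsample and estimate normals so point-to-plane ICP stays tractable.
    src_down = source.voxel_down_sample(voxel_size)
    tgt_down = target.voxel_down_sample(voxel_size)
    for pcd in (src_down, tgt_down):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 4, max_nn=30))

    # Point-to-plane ICP starting from an identity initialization.
    result = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=100))
    return result.transformation, result.inlier_rmse
```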
{"title":"A Comparison of Point Cloud Registration Techniques for on-site Disaster Data from the Surfside Structural Collapse","authors":"Ananya Bal, Robert Ladig, Pranav Goyal, J. Galeotti, H. Choset, David F. Merrick, Robin R. Murphy","doi":"10.1109/SSRR56537.2022.10018779","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018779","url":null,"abstract":"3D representations of geographical surfaces in the form of dense point clouds can be a valuable tool for documenting and reconstructing a structural collapse, such as the 2021 Champlain Towers Condominium collapse in Surfside, Florida. Point cloud data reconstructed from aerial footage taken by uncrewed aerial systems at frequent intervals from a dynamic search and rescue scene poses significant challenges. Properly aligning large point clouds in this context, or registering them, poses noteworthy issues as they capture multiple regions whose geometries change over time. These regions denote dynamic features such as excavation machinery, cones marking boundaries and the structural collapse rubble itself. In this paper, the performances of commonly used point cloud registration methods for dynamic scenes present in the raw data are studied. The use of Iterative Closest Point (ICP), Rigid - Coherent Point Drift (CPD) and PointNetLK for registering dense point clouds, reconstructed sequentially over a time-frame of five days, is studied and evaluated. All methods are compared by error in performance, execution time, and robustness with a concluding analysis and a judgement of the preeminent method for the specific data at hand is provided.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126965255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Model-Based Nondestructive Scanning of Reactor Pressure Vessels with 6DoF Robotic Arms
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018751
Goran Vasiljević, Vedran Brkić, Zeljko Postruzin, Z. Kovačić
In this paper, we present a method for scan planning for a system for nondestructive testing of reactor pressure vessels (RPVs). The system consists of a central nacelle suspended on cables from the top of the RPV. The central nacelle supports two robotic arms with 6 degrees of freedom (6-DoF), which can be equipped with various tools for nondestructive testing. We present a method for calculating a 2-parameter scan path based on the known 3D model of the RPV, as well as a method for calculating a feasible trajectory in the joint space of a robotic arm based on the calculated path. The method is tested both in simulation and on the mockup system.
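As a rough illustration of what a 2-parameter scan path looks like, the sketch below sweeps a (height, azimuth) boustrophedon pattern over an RPV wall approximated as a cylinder; the paper instead derives the path from the known 3D model, and all dimensions here are made-up placeholders.

```python
# A minimal sketch of a 2-parameter (azimuth, height) scan path, assuming the
# RPV wall is approximated by a cylinder of radius R; dimensions are placeholders.
import numpy as np

def cylinder_scan_path(radius=2.0, z_min=0.0, z_max=10.0,
                       n_rings=40, pts_per_ring=180):
    """Boustrophedon path over (theta, z): sweep a full ring, step up, reverse."""
    waypoints = []
    for i, z in enumerate(np.linspace(z_min, z_max, n_rings)):
        theta = np.linspace(0.0, 2.0 * np.pi, pts_per_ring, endpoint=False)
        if i % 2 == 1:                      # alternate sweep direction per ring
            theta = theta[::-1]
        ring = np.stack([radius * np.cos(theta),
                         radius * np.sin(theta),
                         np.full_like(theta, z)], axis=1)
        waypoints.append(ring)
    return np.concatenate(waypoints, axis=0)   # (N, 3) tool positions on the wall

path = cylinder_scan_path()
print(path.shape)   # (7200, 3)
```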
{"title":"3D Model-Based Nondestructive Scanning of Reactor Pressure Vessels with 6DoF Robotic Arms","authors":"Goran Vasiljević, Vedran Brkić, Zeljko Postruzin, Z. Kovačić","doi":"10.1109/SSRR56537.2022.10018751","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018751","url":null,"abstract":"In this paper, we present a method for scan planning of a system for nondestructive testing of reactor pressure vessels (RPVs). The system consists of a central nacelle hanging on the cables from the top of the RPV. The central nacelle supports two robotic arms with 6 degrees of freedom (6-DoF), which can be equipped with various tools for nondestructive testing. We present a method for calculating a 2-parameter scan path based on the known 3D model of the RPV. We also present a method for calculating a feasible trajectory in the joint space of a robotic arm based on the calculated path. The method is tested both in simulation and on the mockup system.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127646039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thermal-to-Color Image Translation for Enhancing Visual Odometry of Thermal Vision
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018810
Liyun Zhang, P. Ratsamee, Yuuki Uranishi, Manabu Higashida, H. Takemura
A panoptic perception-based generative adversarial network for thermal-to-color image translation is proposed to demonstrate its potential as an image sequence enhancement for monocular visual odometry in blurry and low-resolution thermal vision. A pre-trained panoptic segmentation model is utilized to obtain the panoptic perception (i.e., bounding boxes, categories, and masks) of the image scene to guide the alignment between the object content codes of the original thermal domain and panoptic-level style codes sampled from the target color style space. A feature masking module further refines the style-aligned object representations to sharpen object boundaries and synthesize higher-fidelity translated color image sequences. An extensive experimental evaluation shows that our method outperforms other thermal-to-color image translation methods in the quality of the translated color images. We demonstrate that the enhanced image sequences significantly improve the performance of monocular visual odometry compared with different competing methods, including the original thermal image sequences.
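The panoptic-guided style alignment can be pictured as a per-object feature re-normalization. The sketch below shows one assumption-laden reading of that idea (an AdaIN-style modulation applied only under an object mask); it is an illustration of the concept, not the paper's network.

```python
# Assumption-laden sketch: re-normalize the features inside one object mask
# toward per-object style statistics (AdaIN-style). Not the paper's architecture.
import numpy as np

def masked_adain(content_feat, mask, style_mean, style_std, eps=1e-5):
    """content_feat: (C, H, W); mask: (H, W) boolean; style_mean/std: (C,)."""
    out = content_feat.copy()
    region = content_feat[:, mask]                       # (C, N) features under the mask
    mu = region.mean(axis=1, keepdims=True)
    sigma = region.std(axis=1, keepdims=True) + eps
    normalized = (region - mu) / sigma                   # whiten the object content
    out[:, mask] = normalized * style_std[:, None] + style_mean[:, None]
    return out
```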
{"title":"Thermal-to-Color Image Translation for Enhancing Visual Odometry of Thermal Vision","authors":"Liyun Zhang, P. Ratsamee, Yuuki Uranishi, Manabu Higashida, H. Takemura","doi":"10.1109/SSRR56537.2022.10018810","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018810","url":null,"abstract":"A panoptic perception-based generative adversarial network for thermal-to-color image translation is proposed to demonstrate its potential as an image sequence enhancement for monocular visual odometry in blurry and low-resolution thermal vision. The pre-trained panoptic segmentation model is utilized to obtain the panoptic perception (i.e., bounding boxes, categories, and masks) of the image scene to guide the alignment between the object content codes of the original thermal domain and panoptic-level style codes sampled from the target color style space. A feature masking module further refines the style-aligned object representations for sharpening object boundaries to synthesize higher fidelity translated color image sequences. The extensive experimental evaluation shows that our method outperforms other thermal-to-color image translation methods in the image quality of translated color images. We demonstrate that the enhanced image sequences significantly improve the performance of monocular visual odometry compared with dif-ferent competing methods including thermal image sequences.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126418902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DynaBARN: Benchmarking Metric Ground Navigation in Dynamic Environments
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018758
Anirudh Nair, Fulin Jiang, K. Hou, Zifan Xu, Shuo Li, Xuesu Xiao, P. Stone
Safely avoiding dynamic obstacles while moving toward a goal is a fundamental capability of autonomous mobile robots. Current benchmarks for dynamic obstacle avoidance do not provide a way to alter how obstacles move and instead use only a single method to uniquely determine the movement of obstacles, e.g., constant velocity, the social force model, or Optimal Reciprocal Collision Avoidance (ORCA). Using a single method in this way restricts the variety of scenarios in which the robot navigation system is trained and/or evaluated, thus limiting its robustness to dynamic obstacles of different speeds, trajectory smoothness, acceleration/deceleration, etc., which we call motion profiles. In this paper, we present a simulation testbed, DynaBARN, to evaluate a robot navigation system's ability to navigate in environments with obstacles with different motion profiles, which are systematically generated by a set of difficulty metrics. Additionally, we provide a demonstration collection pipeline that records robot navigation trials controlled by human users to compare with autonomous navigation performance and to develop navigation systems using learning from demonstration. Finally, we provide results of four classical and learning-based navigation systems in DynaBARN, which can serve as baselines for future studies. We release DynaBARN open source as a standardized benchmark for future autonomous navigation research in environments with different dynamic obstacles. The code and environments are released at https://github.com/aninair1905/DynaBARN.
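To make the notion of a motion profile concrete, the sketch below samples 1D obstacle velocity profiles of varying smoothness and scores them with a simple difficulty metric (mean absolute acceleration). It is illustrative only and is not the DynaBARN generator; the parameters and the metric are assumptions.

```python
# Illustrative only (not the DynaBARN generator): sample obstacle velocity
# profiles of varying smoothness and score them with a toy difficulty metric.
import numpy as np

def sample_profile(duration=10.0, dt=0.1, base_speed=1.0, jerkiness=0.5, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    # Smooth profiles stay near base_speed; jerky ones accumulate random accelerations.
    accel = jerkiness * rng.standard_normal(t.size)
    vel = base_speed + np.cumsum(accel) * dt
    pos = np.cumsum(vel) * dt
    return t, pos, vel

def difficulty(vel, dt=0.1):
    return float(np.mean(np.abs(np.diff(vel) / dt)))   # mean |acceleration|

for jerk in (0.0, 0.5, 2.0):
    _, _, v = sample_profile(jerkiness=jerk)
    print(f"jerkiness={jerk:.1f} -> difficulty={difficulty(v):.2f}")
```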
{"title":"DynaBARN: Benchmarking Metric Ground Navigation in Dynamic Environments","authors":"Anirudh Nair, Fulin Jiang, K. Hou, Zifan Xu, Shuo Li, Xuesu Xiao, P. Stone","doi":"10.1109/SSRR56537.2022.10018758","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018758","url":null,"abstract":"Safely avoiding dynamic obstacles while moving toward a goal is a fundamental capability of autonomous mobile robots. Current benchmarks for dynamic obstacle avoidance do not provide a way to alter how obstacles move and instead use only a single method to uniquely determine the movement of obstacles, e.g., constant velocity, the social force model, or Optimal Reciprocal Collision Avoidance (ORCA). Using a single method in this way restricts the variety of scenarios in which the robot navigation system is trained and/or evaluated, thus limiting its robustness to dynamic obstacles of different speeds, trajectory smoothness, acceleration/deceleration, etc., which we call motion profiles. In this paper, we present a simulation testbed, DynaBARN, to evaluate a robot navigation system's ability to navigate in environments with obstacles with different motion profiles, which are systematically generated by a set of difficulty metrics. Additionally, we provide a demonstration collection pipeline that records robot navigation trials controlled by human users to compare with autonomous navigation performance and to develop navigation systems using learning from demonstration. Finally, we provide results of four classical and learning-based navigation systems in DynaBARN, which can serve as baselines for future studies. We release DynaBARN open source as a standardized benchmark for future autonomous navigation research in environments with different dynamic obstacles. The code and environments are released at https://github.com/aninair1905/DynaBARN.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115110835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Mission Control for Decentralized Mobile Robot Swarms
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018776
Alexander Puzicha, P. Buchholz
Planning missions for truly autonomous robots is a challenge. This paper presents a novel approach to designing mission functions for optimization-based controllers that generate trajectories without explicit goal specifications for each robot. Potential fields are used to implicitly describe the goal of a mission. This allows one to model a great variety of missions, including nonlinearities, discontinuities, and discrete parts. The proposed control algorithm is designed for swarms in which the communication is based on unreliable mesh networks, requiring completely decentralized control. The missions presented in this paper are primarily those requested for disaster areas, defense and security operations, and logistics. Furthermore, experiments demonstrate the functionality of the chosen mission functions and the performance of the entire approach.
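A minimal decentralized potential-field step might look like the sketch below, where each robot descends the gradient of an attractive goal potential plus repulsive terms from nearby robots. This only illustrates the underlying idea; the paper's mission functions are richer (nonlinear, discontinuous, and discrete terms), and the gains and influence radius here are placeholders.

```python
# A minimal potential-field velocity command, assuming a quadratic attractive
# potential and Khatib-style repulsive potentials. Gains are placeholders.
import numpy as np

def velocity_command(p, goal, neighbors, k_att=1.0, k_rep=0.5, d0=2.0):
    """p, goal: (2,) positions; neighbors: list of (2,) positions of other robots."""
    # Attractive term: negative gradient of 0.5 * k_att * ||p - goal||^2.
    v = -k_att * (p - goal)
    # Repulsive terms: only active inside the influence radius d0.
    for q in neighbors:
        d = np.linalg.norm(p - q)
        if 1e-6 < d < d0:
            v += k_rep * (1.0 / d - 1.0 / d0) * (p - q) / d**3
    return v

cmd = velocity_command(np.array([0.0, 0.0]),
                       goal=np.array([5.0, 3.0]),
                       neighbors=[np.array([1.0, 0.5])])
print(cmd)
```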
{"title":"Dynamic Mission Control for Decentralized Mobile Robot Swarms","authors":"Alexander Puzicha, P. Buchholz","doi":"10.1109/SSRR56537.2022.10018776","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018776","url":null,"abstract":"Planning missions by truly autonomous robots is a challenge. This paper presents a novel approach to design mission functions for optimization-based controllers that generate trajectories without explicit goal specifications for each robot. Potential fields are used to implicitly describe the goal of a mission. This allows one to model a great variety of missions including nonlinearities, discontinuities, and discrete parts. The proposed control algorithm is designed for swarms in which the communication is based on unreliable mesh networks requiring a completely decentralized control. The selection of the missions presented in this paper is mostly requested for disaster areas, defense and security operations, and logistics. Furthermore, experiments express the functionality of the chosen mission functions and the performance of the entire approach.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124230988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LIDAR SLAM-Based Dense Reconstructions of Natural Environments: Field Evaluations
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018616
Arjun Kumar, A. Davatzes, T. Shipley, M. A. Hsieh
The generation of accurate, dense 3D maps of natural environments is an important problem relevant to geological survey and wilderness search and rescue. In the search and rescue context it is imperative for map generation to occur in a timely manner. LIDAR-SLAM-based mapping algorithms run in real time and are resilient to the varying lighting conditions that cause issues in visual mapping systems. We evaluate LeGO-LOAM and UPSLAM for their trajectory and reconstruction accuracy in the Mecca Hills, as well as the efficacy of visual fiducial markers for drift correction in natural environments. We find that visual landmarks provide significant drift reduction even at the 100 m scale and that the LIDAR reconstructions from UPSLAM perform comparably to the Metashape baseline 3D reconstruction.
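The fiducial-based drift correction can be thought of as estimating a rigid transform that aligns SLAM-estimated marker positions to their surveyed positions and then applying it to the trajectory or map. A hedged sketch of that alignment step (a Kabsch solution without scale, not necessarily the exact correction used in the paper) is shown below.

```python
# Hedged sketch of fiducial-based drift correction: rigid (Kabsch) alignment of
# SLAM-estimated marker positions to surveyed positions. Illustration only.
import numpy as np

def rigid_align(est, ref):
    """est, ref: (N, 3) corresponding marker positions. Returns R (3x3), t (3,)."""
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    H = (est - mu_e).T @ (ref - mu_r)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_r - R @ mu_e
    return R, t                                   # apply as: corrected = (R @ p) + t
```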
{"title":"LIDAR SLAM-Based Dense Reconstructions of Natural Environments: Field Evaluations","authors":"Arjun Kumar, A. Davatzes, T. Shipley, M. A. Hsieh","doi":"10.1109/SSRR56537.2022.10018616","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018616","url":null,"abstract":"The generation of accurate, dense 3D maps of natural environments is an important problem relevant to geological survey and wilderness search and rescue. In the search and rescue context it is imperative for map generation to occur in a timely manner. LIDAR SLAM based mapping algorithms run in real time and are resilient to the varying lighting conditions that cause issues in visual mapping systems. We evaluate LeGO-LOAM and UPSLAM for their trajectory and reconstruction accuracy in the Mecca Hills as well as the efficacy of visual fiducial markers for drift correction in natural environments. We find visual landmarks to provide significant drift improvements even on the 100 m scale and the LIDAR reconstructions of UPSLAM to perform comparably to the Metashape baseline 3D reconstruction.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129565231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Interior Rubble Void Spaces at Champlain Towers South Collapse
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018792
Ananya Rao, Robin R. Murphy, David F. Merrick, H. Choset
The 2021 Champlain Towers South Condominiums collapse in Surfside, Florida, resulted in 98 deaths. Nine people are thought to have survived the initial collapse and might have been rescued if rescue workers could have located them. Perhaps, if rescue workers had been able to use robots to search the interior of the rubble pile, outcomes might have been better. An improved understanding of the environment in which a robot would have to operate to search the interior of a rubble pile would help roboticists develop better-suited robotic platforms and control strategies. To this end, this work offers an approach to characterize and visualize the interior of a rubble pile and conducts a preliminary analysis of the occurrence of voids. Specifically, the analysis makes opportunistic use of four days of aerial imagery gathered from responders at Surfside to create a 3D volumetric aggregated model of the collapse in order to identify and characterize void spaces in the interior of the rubble. The preliminary results confirm expectations of the small number and scale of these interior voids. The results can inform better selection and control of existing robots for disaster response, aid in determining design specifications (specifically scale and form factor), and improve control of future robotic platforms developed for search operations in rubble.
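A simplified version of the void-finding idea can be sketched as voxelizing the aggregated point cloud, flagging empty voxels below the column-wise rubble surface, and grouping them into connected components. The voxel size and the column-wise surface heuristic below are assumptions for illustration, not the paper's exact procedure.

```python
# Simplified sketch of interior-void detection from an aggregated point cloud.
# Voxel size and the column-wise surface heuristic are illustrative assumptions.
import numpy as np
from scipy import ndimage

def find_voids(points, voxel=0.25):
    """points: (N, 3) array from the aggregated 3D model."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel).astype(int)
    shape = idx.max(axis=0) + 1
    occupied = np.zeros(shape, dtype=bool)
    occupied[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    # Column-wise rubble surface: highest occupied voxel in each (x, y) column.
    surface = np.where(occupied.any(axis=2),
                       shape[2] - 1 - np.argmax(occupied[:, :, ::-1], axis=2),
                       -1)
    below_surface = np.arange(shape[2])[None, None, :] < surface[:, :, None]
    candidate = below_surface & ~occupied          # empty space under the surface

    labels, n = ndimage.label(candidate)           # 3D connected components
    volumes = ndimage.sum(candidate, labels, range(1, n + 1)) * voxel**3
    return labels, volumes                         # component map, volumes in m^3
```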
{"title":"Analysis of Interior Rubble Void Spaces at Champlain Towers South Collapse","authors":"Ananya Rao, Robin R. Murphy, David F. Merrick, H. Choset","doi":"10.1109/SSRR56537.2022.10018792","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018792","url":null,"abstract":"The 2021 Champlain Towers South Condominiums collapse in Surfside, Florida, resulted 98 deaths. Nine people are thought to have survived the initial collapse, and might have been rescued if rescue workers could have located them. Perhaps, if rescue workers had been able to use robots to search the interior of the rubble pile, outcomes might have been better. An improved understanding of the environment in which a robot would have to operate to be able to search the interior of a rubble pile would help roboticists develop better suited robotic platforms and control strategies. To this end, this work offers an approach to characterize and visualize the interior of a rubble pile and conduct a preliminary analysis of the occurrence of voids. Specifically, the analysis makes opportunistic use of four days of aerial imagery gathered from responders at Surfside to create a 3D volumetric aggregated model of the collapse in order to identify and characterize void spaces in the interior of the rubble. The preliminary results confirm expectations of small number and scale of these interior voids. The results can inform better selection and control of existing robots for disaster response, aid in determining the design specifications (specifically scale and form factor), and improve control of future robotic platforms developed for search operations in rubble.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125224678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of the Use of Robots for the Second Year of the COVID-19 Pandemic
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018671
Robin R. Murphy, Amrita Kathasagaram, Truitt Millican, A. Clendenin, P.J.A.R. Dewitte, Jason B. Moats
This article examines 152 reports of robots used explicitly because of the COVID-19 pandemic, documented in the scientific, trade, and popular press from 24 Jan 2021 to 23 Jan 2022 (Year 2), and compares them with the previously published uses from 24 Jan 2020 to 23 Jan 2021 (Year 1). Of these 152 reports, 80 were new unique instances documented in 25 countries, bringing the total to 420 instances in 52 countries since 2020. The instances did not add new work domains or use cases, though they changed the relative ranking of three use cases. The most notable trend in Year 2 was the shift from a) government or institutional use of robots to protect healthcare workers and the public to b) personal and business use to enable the continuity of work and education. In Year 1, Public Safety, Clinical Care, and Continuity of Work and Education were the three highest work domains, but in Year 2, Continuity of Work and Education had the highest number of instances.
{"title":"Analysis of the Use of Robots for the Second Year of the COVID-19 Pandemic","authors":"Robin R. Murphy, Amrita Kathasagaram, Truitt Millican, A. Clendenin, P.J.A.R. Dewitte, Jason B. Moats","doi":"10.1109/SSRR56537.2022.10018671","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018671","url":null,"abstract":"This article examines 152 reports the use of robots explicitly due to the COVID-19 pandemic reported in the science, trade, and press from 24 Jan 2021 to 23 Jan 2022 (Year 2) and compares with the previously published uses from 24 Jan 2020 to 23 Jan 2021 (Year 1). Of these 152 reports, 80 were new unique instances documented in 25 countries, bringing the total to 420 instances in 52 countries since 2020. The instances did not add new work domains or use cases, though they changed the relative ranking of three use cases. The most notable trend in Year was the shift from a) government or institutional use of robots to protect healthcare workers and the Public to b) personal and business use to enable the continuity of work and education. In Year 1, Public Safety, Clinical Care, and Continuity of Work and Education were the three highest work domains but in Year 2, Continuity of Work and Education had the highest number of instances.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127300251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PatchMatch-Stereo-Panorama, a fast dense reconstruction from 360° video images
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018698
H. Surmann, Marchell E. Thurow, Dominik Slomma
This work proposes a new method for real-time dense 3D reconstruction for common 360° action cams, which can be mounted on small scouting UAVs during USAR missions. The proposed method extends a feature-based visual monocular SLAM (OpenVSLAM, based on the popular ORB-SLAM) for robust long-term localization on equirectangular video input by adding an additional densification thread that computes dense correspondences for any given keyframe with respect to a local keyframe neighborhood using a PatchMatch-Stereo approach. While PatchMatch-Stereo-type algorithms are considered state of the art for large-scale Multi-View Stereo, they had not been adapted so far for real-time dense 3D reconstruction tasks. This work describes a new massively parallel variant of the PatchMatch-Stereo algorithm that differs from current approaches in two ways: First, it supports the equirectangular camera model, while other solutions are limited to the pinhole camera model. Second, it is optimized for low latency while keeping a high level of completeness and accuracy. To achieve this, it operates only on small sequences of keyframes, but employs techniques to compensate for the potential loss of accuracy due to the limited number of frames. Results demonstrate that dense 3D reconstruction is possible on a consumer-grade laptop with a recent mobile GPU, with improved accuracy and completeness over common offline MVS solutions at comparable quality settings.
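For reference, the equirectangular camera model mentioned above maps each pixel to a viewing ray on the unit sphere. The sketch below shows one common convention (image origin at the top-left, y pointing down); the exact axis conventions are an assumption and vary between implementations.

```python
# Minimal sketch of the equirectangular camera model: map a pixel (u, v) in a
# W x H panorama to a unit viewing ray. Axis conventions are an assumption.
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    lon = (u / width) * 2.0 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi     # latitude in [-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = -np.sin(lat)                             # y down, consistent with image rows
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])                   # unit-norm direction

print(equirect_pixel_to_ray(960, 480, 1920, 960))  # center pixel -> forward ray
```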
{"title":"PatchMatch-Stereo-Panorama, a fast dense reconstruction from 360° video images","authors":"H. Surmann, Marchell E. Thurow, Dominik Slomma","doi":"10.1109/SSRR56537.2022.10018698","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018698","url":null,"abstract":"This work proposes a new method for real-time dense 3d reconstruction for common 360° action cams, which can be mounted on small scouting UAVs during USAR missions. The proposed method extends a feature based Visual monocular SLAM (OpenVSLAM, based on the popular ORB-SLAM) for robust long-term localization on equirectangular video input by adding an additional densification thread that computes dense correspondences for any given keyframe with respect to a local keyframe-neighboorhood using a PatchMatch-Stereo-approach. While PatchMatch-Stereo-types of algorithms are considered state of the art for large scale Mutli-View-Stereo they had not been adapted so far for real-time dense 3d reconstruction tasks. This work describes a new massively parallel variant of the PatchMatch-Stereo-algorithm that differs from current approaches in two ways: First it supports the equirectangular camera model while other solutions are limited to the pinhole camera model. Second it is optimized for low latency while keeping a high level of completeness and accuracy. To achieve this it operates only on small sequences of keyframes, but employs techniques to compensate for the potential loss of accuracy due to the limited number of frames. Results demonstrate that dense 3d reconstruction is possible on a consumer grade laptop with a recent mobile GPU and that it is possible with improved accuracy and completeness over common offline-MVS solutions with comparable quality settings.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134228139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Threat modeling for robotic-based production plants
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018641
F. J. Lera, Miguel Ángel González Santamarta, Gonzalo Esteban-Costales, Unay Ayucar, E. Gil-Uriarte, Alfonso Glera-Picón, V. Vilches
The benefits of applying and integrating robotics and automation machinery in production plants are being accompanied by a rise in the cybersecurity issues associated with them. This study presents a threat model for a production plant integrating different components such as PLCs, machine tools, sensors, actuators, and robots. Given the heterogeneity of components, protocols, and devices, this paper represents the possible threats that could affect the factory and proposes a set of changes and mitigations that would increase its cybersecurity and resilience.
{"title":"Threat modeling for robotic-based production plants","authors":"F. J. Lera, Miguel Ángel González Santamarta, Gonzalo Esteban-Costales, Unay Ayucar, E. Gil-Uriarte, Alfonso Glera-Picón, V. Vilches","doi":"10.1109/SSRR56537.2022.10018641","DOIUrl":"https://doi.org/10.1109/SSRR56537.2022.10018641","url":null,"abstract":"The benefits of applying and integrating robotics and automation machinery in production plans are being followed by the peak of cybersecurity issues associated with them. This study presents the threat model for a production plant integrated with different components such as PLCs, machine tools, sensors, actuators, and robots. Attending to the heterogeneity of components, protocols, and devices, this paper tries to represent the possible threats that would be affecting the factory and proposes a set of changes and mitigations that would increase their cybersecurity and resilience.","PeriodicalId":272862,"journal":{"name":"2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133411546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}