Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018826
M. Salathe, B. Quiter, M. Bandstra, Xin Chen, V. Negut, Micah Folsom, G. Weber, Christopher Greulich, M. Swinney, N. Prins, D. Archer
A handheld system developed to digitize a contextual understanding of the scene at chemical, biological, radiological, nuclear, and/or explosives (CBRNE) events is described. The system uses LiDAR and cameras to create a colorized 3D model of the environment, which helps domain experts who are supporting responders in the field. To generate the digitized model, a responder scans any suspicious objects and their surroundings by carrying the system through the scene. The scanning system provides a real-time user interface that informs the user about scanning progress and indicates any areas that may have been missed by either the LiDAR sensors or the cameras. Currently, the collected data are post-processed on a separate device, which builds a colorized triangular mesh of the encountered scene; the intention is to move this pipeline onto the scanner at a later stage. The mesh is sufficiently compressed to be sent over a reduced-bandwidth connection to a remote analyst. Furthermore, the system tracks fiducial markers attached to diagnostic equipment placed around the suspicious object. The resulting tracking information can be transmitted to remote analysts to further facilitate their supporting efforts. The paper discusses the system's design, software components, the user interface used for scanning a scene, the procedures necessary for sensor calibration, and the processing steps applied to the resulting data. The discussion closes by evaluating the system's performance on 11 scenes.
Title: A multi-modal scanning system to digitize CBRNE emergency response scenes
Published in: 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)
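The abstract above mentions transmitting fiducial tracking information over a reduced-bandwidth connection. A minimal sketch of one way to do that is a fixed-size binary pose message; the wire format and field choices here are assumptions for illustration, not the system's actual protocol.

```python
import struct

# Hypothetical compact wire format for fiducial tracking updates:
# one byte marker ID, three float32 positions (m), three float32
# Euler angles (rad) -> 25 bytes per marker per update.
POSE_FMT = "<B3f3f"

def pack_marker_pose(marker_id, position, orientation):
    """Serialize one fiducial pose for a reduced-bandwidth link."""
    return struct.pack(POSE_FMT, marker_id, *position, *orientation)

def unpack_marker_pose(payload):
    """Inverse of pack_marker_pose: (id, position, orientation)."""
    fields = struct.unpack(POSE_FMT, payload)
    return fields[0], fields[1:4], fields[4:7]

msg = pack_marker_pose(7, (1.0, 2.0, 0.5), (0.0, 0.0, 1.57))
mid, pos, rot = unpack_marker_pose(msg)
```

At 25 bytes per marker, even a few dozen markers updated several times a second stay well within a constrained link's budget.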
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018773
Manuel Patchou, J. Tiemann, Christian Arendt, Stefan Böeker, C. Wietfeld
The evaluation of remotely supervised robotic systems must include exposure to probable network disturbances in order to assess and tune their behavior for such situations. This paper presents the vSTING module, a solution for evaluating a system's network resilience with minimal installation overhead. It can subject a network link to constraints provided in various ways: user input, recorded network traces, or location-based constraints. In the first evaluation step, the general ability to constrain a network link is confirmed. Next, the teleoperation performance of multiple robots is challenged. Finally, a challenging network environment recorded during a mission is replayed. The evaluation validates vSTING as a useful tool for assessing and developing the resilience of systems against the network disturbances expected with real-world wireless connectivity.
Title: Realtime Wireless Network Emulation for Evaluation of Teleoperated Mobile Robots
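The trace-replay idea described above can be sketched in a few lines: hold the most recent bandwidth sample from a recorded trace as the active link limit, and integrate how long a payload takes under the varying limit. The trace format and function names are illustrative assumptions, not the actual vSTING interface.

```python
# Minimal sketch of trace-driven link constraining, assuming a recorded
# trace of (timestamp_s, bandwidth_kbps) samples. Illustrative only.

def constraint_at(trace, t):
    """Return the bandwidth limit (kbps) active at time t (step-wise hold)."""
    active = trace[0][1]
    for ts, kbps in trace:
        if ts <= t:
            active = kbps
        else:
            break
    return active

def transfer_time(trace, payload_kbit, t0=0.0, dt=0.01):
    """Integrate how long a payload takes under the time-varying limit."""
    t, remaining = t0, payload_kbit
    while remaining > 0:
        remaining -= constraint_at(trace, t) * dt  # kbit sent during dt
        t += dt
    return t - t0

trace = [(0.0, 1000.0), (1.0, 100.0)]  # link drops from 1 Mbps to 100 kbps at t=1 s
```

Replaying a mission recording this way exposes a teleoperation stack to the same bandwidth collapse it would see in the field, but repeatably on a bench.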
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018691
W. Tang, Chaoyu Xue, C. Li, Qiuguo Zhu
Advances in robot locomotion algorithms, sensor technologies, planning frameworks, and hardware designs have motivated the deployment of robot groups in unstructured environments. One of the most common tasks is autonomous exploration. During a multi-robot autonomous exploration task, it is desirable for the robots to coordinate so that they are steered to different areas, increasing overall efficiency. Such coordination requires communication among peer robots. In some environments, such as tunnels, natural caves, or urban underground spaces, it is not always feasible to maintain a stable communication network between robots. Thus, a robust coordinated exploration solution must operate in unstable or constrained settings. In this work, we propose a decentralized, bandwidth-efficient algorithm that robustly coordinates robot groups exploring unknown unstructured environments. The proposed method's performance is evaluated in typical simulation environments, and the experimental results show much lower average bandwidth consumption compared with baseline algorithms. In addition, the proposed algorithms are implemented on a group of quadruped robots to demonstrate their effectiveness in a real-world autonomous exploration task.
Title: Towards Coordinated Multi-Robot Exploration under Bandwidth-constrained Conditions
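One common way to keep coordination bandwidth low, in the spirit of the abstract above, is for each robot to broadcast only a claimed frontier ID (a few bytes) rather than its map. The following greedy claim rule is an illustrative sketch, not the paper's actual protocol.

```python
import math

# Decentralized frontier claiming: each robot picks the nearest frontier
# not already claimed by a peer, then broadcasts only the claimed ID.
# Frontier IDs and coordinates here are toy values for illustration.

def claim_frontier(robot_pos, frontiers, claimed_ids):
    """Pick the nearest frontier (by ID) not already claimed by a peer."""
    best_id, best_dist = None, math.inf
    for fid, (fx, fy) in frontiers.items():
        if fid in claimed_ids:
            continue
        d = math.hypot(fx - robot_pos[0], fy - robot_pos[1])
        if d < best_dist:
            best_id, best_dist = fid, d
    return best_id

frontiers = {0: (0.0, 5.0), 1: (4.0, 0.0), 2: (10.0, 10.0)}
a = claim_frontier((0.0, 0.0), frontiers, set())  # first robot claims nearest
b = claim_frontier((1.0, 0.0), frontiers, {a})    # second robot avoids it
```

Because only IDs are exchanged, the per-robot message size is constant regardless of map size, which is what makes such schemes attractive under constrained links.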
Small unmanned aerial vehicles (UAVs) are widely used for tasks such as pesticide spraying, infrastructure inspection, and photography. Because of their widespread usage, accidents are also increasing. Focusing on the risk posed by the propellers of small UAVs, this study investigated a quantitative evaluation method for propeller risk through experiments with various types of protective gear, such as cut-resistant gloves, protective eyewear, and safety nets. Preliminary experiments were conducted in which different types of small UAV propellers, 15–20 inches in diameter, collided with the protective gear. For the cut-resistant gloves, the results confirmed their effectiveness according to the protection levels of the EN388:2016 standard. For the other types of gear, high-speed recordings of the experiments and the post-collision condition of the objects were examined. The results show that the protective gear, while not risk-free, improved safety during UAV operation and should be selected carefully. A new UAV-specific safety standard for nets is also needed to ensure safety while UAVs are operated.
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018628
The Duc Luong, Hiroki Igarashi, Kosuke Yoshizaki, Ayumu Miyahara, Sota Okoshi, Telmen Batkhishig, Uuganbayar Saikhanbayar, Tomoya Noake, Toshiroh Houshi, Tetsuya Kimura
Title: Experimental evaluation of personal protective equipment for small UAVs and related issues for safety standards
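A common first-order way to reason about the propeller hazard studied above is tip speed and rotational kinetic energy; this sketch uses that textbook model (thin uniform rod) with illustrative numbers, and is not the paper's evaluation method.

```python
import math

# First-order propeller hazard estimates. The rod model and the example
# mass/rpm values are assumptions for illustration only.

def tip_speed(diameter_in, rpm):
    """Tip speed in m/s for a propeller of given diameter (inches)."""
    radius_m = (diameter_in * 0.0254) / 2.0
    omega = rpm * 2.0 * math.pi / 60.0  # angular velocity, rad/s
    return omega * radius_m

def rotational_energy(mass_kg, diameter_in, rpm):
    """Kinetic energy (J), modeling the blade as a thin uniform rod."""
    radius_m = (diameter_in * 0.0254) / 2.0
    inertia = mass_kg * (2.0 * radius_m) ** 2 / 12.0  # rod about its center
    omega = rpm * 2.0 * math.pi / 60.0
    return 0.5 * inertia * omega ** 2

v = tip_speed(18, 6000)  # an 18-inch propeller at 6000 rpm: ~144 m/s at the tip
```

Even mid-sized propellers in the 15–20 inch range reach tip speeds above 100 m/s at typical operating rpm, which is why cut resistance and containment both matter.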
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018655
Francisco Javier Gañán, J. Sanchez-Diaz, R. Tapia, J. R. M. Dios, A. Ollero
Autonomous intrusion monitoring in unstructured, complex scenarios using aerial robots requires perception systems capable of dealing with problems such as motion blur or changing lighting conditions, among others. Event cameras are neuromorphic sensors that capture per-pixel illumination changes, providing low latency and high dynamic range. This paper presents an efficient event-based processing scheme for intrusion detection and tracking onboard strictly resource-constrained robots. The method tracks moving objects using a probabilistic distribution that is updated event by event, yet the processing of each event involves only a few low-cost operations, enabling online execution on resource-constrained onboard computers. The method has been experimentally validated in several real scenarios under different lighting conditions, evidencing its accurate performance.
Title: Efficient Event-based Intrusion Monitoring using Probabilistic Distributions
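The key property claimed above — a per-event update with only a few low-cost operations — can be illustrated with an exponential moving average of a tracked position: each event nudges the estimate in O(1). This is a simplified stand-in for the paper's probabilistic distribution, and the parameter names are assumptions.

```python
# Toy per-event tracker: each event (x, y) updates the position estimate
# with two multiply-adds, cheap enough to run per event on a small CPU.

class EventTracker:
    def __init__(self, x0, y0, alpha=0.05):
        self.x, self.y = x0, y0   # current mean position (pixels)
        self.alpha = alpha        # per-event learning rate (assumed value)

    def update(self, ex, ey):
        """O(1) update per event: exponential moving average of the mean."""
        self.x += self.alpha * (ex - self.x)
        self.y += self.alpha * (ey - self.y)

tracker = EventTracker(0.0, 0.0)
for _ in range(200):              # a burst of events around pixel (10, 5)
    tracker.update(10.0, 5.0)
```

Because the cost is constant per event, throughput scales with the event rate alone, which is what makes such schemes viable on resource-constrained onboard computers.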
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018818
R. Edlinger, Christoph Föls, A. Nüchter
This paper addresses the Casualty Evacuation (CasEvac) search and rescue scenario at the European Land Robot Trial (ELROB). A disaster response robot can be sent to rescue victims in areas that are too dangerous for human rescuers due to environmental hazards such as the danger of collapse or radioactivity. If injured persons are no longer able to move, the robot must be able to rescue them from the danger zone. This paper addresses this scenario and describes our system design, the manipulator tool, and the innovative control mechanism for transporting victims. The system was tested at the competition and compared with the solutions of the other participating teams and with currently implemented developments.
Title: An innovative pick-up and transport robot system for casualty evacuation
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018662
Peter Mitchell, Reuben O'Brien, Minas Liarokapis
Autonomous boats have a plethora of applications related to sealife and pollution monitoring, search and rescue, border patrol, and the inspection of internal waterways and the open ocean, among others. Moreover, the design, development, and control of such platforms pose excellent engineering challenges related to mechanical design, autonomy, robustness, and the ability to perceive and navigate the highly dynamic, unstructured, turbulent water environment. This paper focuses on the design, development, and experimental validation of open-source, low-cost, waterjet-powered robotic speedboats for education and research. The proposed speedboats are built around a modular hull and a waterjet propulsion system that are both 3D printed. The speedboat design is easy to replicate and maintain, and it can accommodate all the sensors needed for autonomous navigation, such as LiDAR, monocular vision, GPS, and more. Waterjets allow the platform to: i) operate in shallow waters, ii) reduce the risk of entanglement, and iii) reduce any risk of injury to users or sealife. The efficiency of the speedboats has been experimentally validated through velocity, thrust, and efficiency testing, as well as real-world deployment. The designs are disseminated in an open-source manner and are accompanied by a speedboat racing competition that involves both dynamic and static events. These resources are expected to be valuable for robotics researchers and for lecturers who want to introduce hands-on assignments in courses related to robotics and autonomous systems.
Title: On the Development of Waterjet-Powered Robotic Speedboats: An Open-Source, Low-Cost Platform for Education and Research
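The thrust testing mentioned above can be reasoned about with the standard first-order waterjet model, T = mdot * (v_jet - v_boat): thrust is the momentum change imparted to the water. The nozzle area and speeds below are illustrative numbers, not measurements from the paper.

```python
# First-order waterjet thrust model for sizing sketches.
# T = mdot * (v_jet - v_boat), with mdot = rho * A_nozzle * v_jet.

RHO_WATER = 1000.0  # kg/m^3, fresh water

def waterjet_thrust(nozzle_area_m2, v_jet, v_boat):
    """Steady-state thrust (N) from the momentum change of the jet."""
    mdot = RHO_WATER * nozzle_area_m2 * v_jet  # mass flow through the nozzle
    return mdot * (v_jet - v_boat)

t_static = waterjet_thrust(0.0005, 8.0, 0.0)  # bollard (static) thrust
```

The model also shows why thrust falls off as boat speed approaches jet speed, a basic trade-off when choosing nozzle size for a small 3D-printed propulsion unit.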
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018673
Nils Mandischer, Marius Gürtler, Sebastian Döbler, M. Hüsing, B. Corves
Many recently designed exploration algorithms for search and rescue are based on extending existing exploration algorithms to multiple simultaneously operating agents. These algorithms are quite useful in extensive search and rescue operations but neglect the adaptability necessary in small-scale environments. Therefore, this paper proposes a novel modular multi-layer approach that combines conventional Next-Best-View exploration with predefined boundary conditions to enable a multi-goal-driven search for victims and operators. The boundary conditions are mapped onto individual cost maps and fused dynamically in a common weighting matrix. Exemplary conditions are the last known operator pose or the estimated positions of fire sources. The exploration algorithm compares nearby points of interest with regard to their weights and chooses an appropriate navigation goal. The method is evaluated for use in the context of firefighting operations with teams of humans and robots.
Title: Finding Moving Operators in Firefighting Operations Based on Multi-Goal Next-Best-View Exploration
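The multi-layer fusion described above — per-condition cost maps combined by a weighted sum, with the lowest-cost point of interest chosen as the next goal — can be sketched as follows. The layer contents and weights are toy assumptions, not the paper's actual values.

```python
# Weighted fusion of cost-map layers (dicts: point -> cost), then goal
# selection as the point of interest with the lowest fused cost.

def fuse_costs(layers, weights):
    """Weighted sum of cost-map layers."""
    fused = {}
    for layer, w in zip(layers, weights):
        for point, cost in layer.items():
            fused[point] = fused.get(point, 0.0) + w * cost
    return fused

def next_goal(points_of_interest, fused):
    """Choose the candidate with the lowest fused cost."""
    return min(points_of_interest, key=lambda p: fused[p])

operator_layer = {(1, 1): 0.2, (5, 5): 0.9}  # low cost near last known pose
fire_layer     = {(1, 1): 0.8, (5, 5): 0.1}  # low cost away from fire sources
fused = fuse_costs([operator_layer, fire_layer], [0.7, 0.3])
goal = next_goal([(1, 1), (5, 5)], fused)
```

Because each condition lives in its own layer, conditions can be added, removed, or reweighted at runtime without touching the exploration logic — the modularity the abstract emphasizes.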
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018780
Trey Smith, Sounak Mukhopadhyay, Robin R. Murphy, Thomas Manzini, Patricia Itzel Rodriguez
Uncrewed marine surface vehicles (USVs) with side scan sonar are increasingly being used to locate submerged victims who have drowned in open water. This work demonstrates a novel algorithm that automates path planning by optimizing the transect orientation of a Boustrophedon path through a convex polygon for sonar quality. The chosen orientation maximizes the length of the transects while minimizing the variation in their lengths. The algorithm uses a weighted sum to score possible paths. The weightings are explored through simulation with four convex polygons of different sizes, representing locations in Texas and Washington where marine search and recovery exercises have been conducted or planned. The overall weighting (0.5, 0.5) was demonstrated using the Hydronalix AMY USV at Lake Sahuarita, Arizona, confirming that the best-scored orientation does produce a more favorable path for sonar than the worst-scored orientation. In addition, the path for the worst-scored orientation was more difficult to execute.
Title: Path Coverage Optimization for USV with Side Scan Sonar for Victim Recovery
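The weighted-sum scoring idea above can be sketched as: for each candidate orientation, reward long transects and penalize variation in their lengths, with the (0.5, 0.5) weighting from the abstract. The normalization and the toy transect lengths below are assumptions, not the paper's exact formula.

```python
import math

# Score a candidate transect orientation from its transect lengths:
# long, uniform transects score highest under the (0.5, 0.5) weighting.

def score_orientation(lengths, w_len=0.5, w_var=0.5):
    """Weighted sum: normalized mean length minus normalized std deviation."""
    mean = sum(lengths) / len(lengths)
    std = math.sqrt(sum((l - mean) ** 2 for l in lengths) / len(lengths))
    longest = max(lengths)
    return w_len * (mean / longest) - w_var * (std / longest)

# Toy rectangle: transects along the long axis are long and uniform,
# while an unfavorable orientation yields short, uneven transects.
along = [100.0, 100.0, 100.0, 100.0]
uneven = [20.0, 60.0, 60.0, 20.0]
best = max([along, uneven], key=score_orientation)
```

Long, uniform transects matter for side scan sonar because each turn interrupts the sonar image, so an orientation with fewer, longer passes yields better coverage quality.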
Pub Date: 2022-11-08 | DOI: 10.1109/SSRR56537.2022.10018708
S. K. Nayak, K. Ohno, Ranulfo Bezerra, M. Konyo, S. Tadokoro
Light-projection-based visual feedback navigation systems are advantageous in conveying direction easily. They also shift the user's attention from the screen to the environment, which increases safety. However, building an autonomous navigation system for fast navigation using light projection faces the challenges of sensitivity to human motion and of tuning navigation parameters to human needs. In this paper, we present an autonomous wearable multiple laser projection stimulus (mLPS) based navigation system. The mLPS consists of three discrete laser lights that represent directional cues, enabling very quick switching for high-speed movements and precise navigation by visual cues. We designed a chest-mounted wearable suit to minimize the sensitivity of the light projection to human motion. Furthermore, we present a waypoint-based human navigation system and the tuning of its navigation parameters based on both navigation performance and perceived human stress. Finally, a successful demonstration of the concept is presented.
Title: Autonomous Human Navigation Using Wearable Multiple Laser Projection Suit
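With three discrete laser cues as described above, the core decision per waypoint is mapping the bearing to the next waypoint onto left / forward / right. This sketch assumes a heading-relative bearing and a 20-degree dead band; the thresholds and names are illustrative, not the mLPS implementation.

```python
import math

# Map the bearing to the next waypoint onto one of three laser cues.
# Convention: heading 0 points along +x; positive bearing means the
# waypoint is to the user's left.

def select_laser(user_pos, user_heading_rad, waypoint):
    """Return 'left', 'forward', or 'right' for the next directional cue."""
    dx = waypoint[0] - user_pos[0]
    dy = waypoint[1] - user_pos[1]
    bearing = math.atan2(dy, dx) - user_heading_rad
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    if bearing > math.radians(20):
        return "left"
    if bearing < -math.radians(20):
        return "right"
    return "forward"

cue = select_laser((0.0, 0.0), 0.0, (5.0, 0.1))  # nearly straight ahead
```

Because only one of three lasers is lit at a time, switching cues is a single GPIO toggle, which is what makes very quick updates during high-speed movement feasible.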