Coverage Path Planning and Precise Localization for Autonomous Lawn Mowers
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00046
Maria Höffmann, J. Clemens, David Stronzek-Pfeifer, Ruggero Simonelli, Andreas Serov, Sven Schettino, Margareta Runge, K. Schill, C. Büskens
In this paper, we present a concept for automatic path planning and high-precision localization for autonomous lawn mowers. Two aspects in particular make the presented approach more efficient than classical automatic lawn mowing techniques. First, the standard chaotic control of the mower is replaced by an efficient planning strategy for traversing the area without gaps and with as few overlaps as possible. Second, conventional boundary wires become unnecessary, as high-precision localization based on multi-sensor fusion allows the mower to stay within virtual boundaries. The whole concept is implemented and tested on an industrial-grade lawn mower. The advantages of intelligent path planning over chaotic strategies are shown, and the localization performance is validated using real-world data.
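The paper gives no implementation details here; as a rough illustration of how a virtual boundary might replace a physical wire once a high-precision fused pose is available, the sketch below checks a mower position against a user-defined boundary polygon with a standard ray-casting point-in-polygon test. The polygon and positions are made up, not taken from the paper.

```python
# Minimal sketch (not from the paper): ray-casting point-in-polygon test that a
# wire-free mower could use to check whether its fused pose lies inside a
# user-defined virtual boundary. Polygon and pose values are hypothetical.

def inside_boundary(x, y, polygon):
    """Return True if point (x, y) lies inside the closed polygon
    given as a list of (x, y) vertices (ray-casting algorithm)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    lawn = [(0.0, 0.0), (20.0, 0.0), (20.0, 12.0), (0.0, 12.0)]  # hypothetical boundary
    print(inside_boundary(5.0, 3.0, lawn))   # True  -> keep mowing
    print(inside_boundary(25.0, 3.0, lawn))  # False -> stop or replan
```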
{"title":"Coverage Path Planning and Precise Localization for Autonomous Lawn Mowers","authors":"Maria Höffmann, J. Clemens, David Stronzek-Pfeifer, Ruggero Simonelli, Andreas Serov, Sven Schettino, Margareta Runge, K. Schill, C. Büskens","doi":"10.1109/IRC55401.2022.00046","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00046","url":null,"abstract":"In this paper, we present a concept for automatic path planning and high-precision localization for autonomous lawn mowers. In particular, two objectives contribute to the increased efficiency of the presented approach compared to classical automatic lawn mowing techniques. First, the standard chaotic control of the mower is replaced by an efficient planning strategy for traversing the area without gaps and with as few overlaps as possible. Second, the conventional boundary wires become unnecessary as high-precision localization based on multi-sensor fusion allows for keeping the virtual boundaries. The whole concept is implemented and tested on an industrial-grade lawn mower. The advantages of intelligent path planning over chaotic strategies are shown, and the localization performance is validated using real-world data.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133962390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UAV Velocity Prediction Using Audio data
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00062
E. Bang, Y. Seo, Jeongyoun Seo, Raymond Zeng, A. Niang, Yaqin Wang, E. Matson
The Federal Aviation Administration (FAA) sets the speed limit for Unmanned Aerial Vehicles (UAVs) at 100 mph. This research focuses on detecting when a UAV exceeds a speed limit and on using audio data to predict its velocity. Detecting a malicious UAV directly is difficult, but a UAV flying faster than 100 mph can be assumed to be malicious. Recordings are collected in a controlled indoor environment, and the dataset is divided into two classes: slow (0-9 mph) and fast (over 10 mph). Support Vector Machine (SVM), Random Forest, and Light Gradient Boosting Machine (LGBM) are used as the machine learning models, and a Convolutional Neural Network (CNN) as the deep learning model. The results show that the CNN achieves the highest performance (F1 score: 1.0, accuracy: 1.0, recall: 1.0, precision: 1.0) for classifying UAV speed from sound.
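As a loose sketch of the kind of audio classification pipeline described above (not the authors' code), the snippet below extracts time-averaged MFCC features from UAV recordings and trains one of the named models, an SVM, to separate the slow and fast classes. File paths, labels, and hyper-parameters are placeholders.

```python
# Rough sketch of an audio-based slow/fast UAV classifier: MFCC features per
# clip, then an SVM. Paths and parameters below are assumptions, not the paper's.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def mfcc_features(wav_path, sr=22050, n_mfcc=13):
    """Load a clip and return its time-averaged MFCC vector."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_speed_classifier(clips):
    """clips: list of (wav_path, label), 0 = slow (0-9 mph), 1 = fast (>= 10 mph)."""
    X = np.array([mfcc_features(p) for p, _ in clips])
    y = np.array([lbl for _, lbl in clips])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    return clf

# Hypothetical usage:
# clf = train_speed_classifier([("data/slow_001.wav", 0), ("data/fast_001.wav", 1), ...])
```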
{"title":"UAV Velocity Prediction Using Audio data","authors":"E. Bang, Y. Seo, Jeongyoun Seo, Raymond Zeng, A. Niang, Yaqin Wang, E. Matson","doi":"10.1109/IRC55401.2022.00062","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00062","url":null,"abstract":"The Federal Aviation Administration (FAA) set the Unmanned Aerial Vehicles (UAV) speed limit at 100 mph. This research focused on detecting when the UAV exceeds a speed limit for an experiment and using the sound dataset to predict the velocity of a UAV. It is hard to detect a malicious UAV, but we can assume that a UAV over 100 mph is most likely malicious. An indoor environment will be used as a controlled environment and the dataset is divided into two classes: slow (0- 9mph) and fast (over 10mph). Support Vector Machine (SVM), Random Forest, and Light Gradient Boosting Machine (LGBM) were the Machine Learning models used for this research, and Convolutional Neural Network (CNN) was the Deep Learning model used for this research. The result shows that the CNN model has the highest performance (F-1 score: 1.0, Accuracy: 1.0, Recall: 1.0, Precision: 1.0) for classifying the sound of the UAV speed.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125831696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Labeling Custom Indoor Point Clouds Through 2D Semantic Image Segmentation
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00050
Shayan Ahmed, J. Gedschold, Tim Erich Wegner, Adrian Sode, J. Trabert, G. D. Galdo
For effective Computer Vision (CV) applications, one of the difficult challenges service robots face is complete scene understanding. Various strategies are therefore employed for point-level separation of the 3D scene, such as semantic segmentation. Currently, Deep Learning (DL) based algorithms are popular in this domain, but they require precisely labeled ground-truth data. Generating this data is a lengthy and expensive procedure, resulting in a limited variety of available data. In contrast, the 2D image domain offers labeled data in abundance. This study therefore explores how accurate labels for the 3D domain can be obtained by applying semantic segmentation to 2D images and projecting the estimated labels into 3D space via the depth channel. The labeled data may then be used for vision-related tasks such as robot navigation or localization.
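The projection step the study relies on can be made concrete with a short sketch: given a per-pixel label image from a 2D segmentation network and an aligned metric depth image, each labeled pixel is back-projected into the camera frame with the pinhole model. The intrinsic values in the usage note are placeholders, not the authors' calibration.

```python
# Illustrative sketch (not the paper's implementation): lift per-pixel semantic
# labels into a labeled 3D point cloud using the depth channel and pinhole
# camera intrinsics.
import numpy as np

def label_point_cloud(depth, labels, fx, fy, cx, cy):
    """depth: HxW metric depth image; labels: HxW per-pixel class ids
    (e.g. from a 2D semantic segmentation network).
    Returns (N, 3) points in the camera frame and their (N,) labels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels without a depth reading
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # back-project with the pinhole model
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    return points, labels[valid]

# Usage with made-up intrinsics:
# pts, lbls = label_point_cloud(depth_img, seg_img, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```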
{"title":"Labeling Custom Indoor Point Clouds Through 2D Semantic Image Segmentation","authors":"Shayan Ahmed, J. Gedschold, Tim Erich Wegner, Adrian Sode, J. Trabert, G. D. Galdo","doi":"10.1109/IRC55401.2022.00050","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00050","url":null,"abstract":"For effective Computer Vision (CV) applications, one of the difficult challenges service robots have to face concerns with complete scene understanding. Therefore, various strategies are employed for point-level segregation of the 3D scene, such as semantic segmentation. Currently Deep Learning (DL) based algorithms are popular in this domain. However, they require precisely labeled ground truth data. Generating this data is a lengthy and expensive procedure, resulting in a limited variety of available data. On the contrary, the 2D image domain offers labeled data in abundance. Therefore, this study explores how we can achieve accurate labels for the 3D domain by utilizing semantic segmentation on 2D images and projecting the estimated labels to the 3D space via the depth channel. The labeled data may then be used for vision related tasks such as robot navigation or localization.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133011587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teleoperation of an Industrial Robot using Public Networks and 5G SA Campus Networks
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00012
J. Rohde, Olga Meyer, Quy Luu Duc, C. Jürgenhake, Talib Sankal, R. Dumitrescu, R. H. Schmitt
The next generation of the cellular mobile communication standard, 5G, is expected to provide more stable and secure communication links, low latency, and higher flexibility, which can enable various new scenarios and applications. Teleoperation is a field that can benefit greatly from 5G technology. Demanding and mission-critical use cases such as telesurgery have challenging requirements, such as low latency and high reliability of the communication channel, which may be met by 5G in the future. This work presents a 5G architecture for the teleoperation of an industrial robot over long distances. For this purpose, three Fraunhofer Institutes were connected over the public internet and 5G campus networks. Furthermore, this work characterizes the latency of the investigated teleoperation use case. The measurements are performed with two 5G standalone (SA) networks and an edge cloud.
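The paper's own measurement setup (two 5G SA campus networks, the public internet, and an edge cloud) cannot be reproduced here; as a minimal stand-in, the sketch below times small UDP probes against an echo service, which is the simplest form of the round-trip latency characterization such a study needs. Host, port, and sample count are placeholders.

```python
# Not from the paper: a minimal round-trip-time probe for a teleoperation link.
# The target is assumed to run a simple UDP echo service.
import socket
import time

def measure_rtt(host="192.0.2.10", port=9000, samples=100):
    """Send small UDP probes and return the round-trip times in milliseconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for i in range(samples):
        payload = f"probe-{i}".encode()
        t0 = time.perf_counter()
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(1024)                              # wait for the echo
            rtts.append((time.perf_counter() - t0) * 1000.0)
        except socket.timeout:
            pass                                             # lost probe, skip it
    return rtts

# Hypothetical usage:
# rtts = measure_rtt()
# print(f"mean RTT: {sum(rtts) / len(rtts):.1f} ms over {len(rtts)} replies")
```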
{"title":"Teleoperation of an Industrial Robot using Public Networks and 5G SA Campus Networks","authors":"J. Rohde, Olga Meyer, Quy Luu Duc, C. Jürgenhake, Talib Sankal, R. Dumitrescu, R. H. Schmitt","doi":"10.1109/IRC55401.2022.00012","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00012","url":null,"abstract":"The next generation of cellular mobile communication standard 5G is considered to have more stable and secure communication links, low latency and higher flexibility which can be enablers for various new scenarios and applications. Teleoperation is the field that can benefit greatly from 5G technology. Demanding and mission-critical use cases such as telesurgery have challenging requirements like low latency and high reliability of the communication channel, which can be possibly met by 5G in the future. This work presents a 5G architecture for the teleoperation of an industrial robot over long distance. For this purpose three Fraunhofer Institutes were connected over public internet and 5G campus networks. Furthermore, this work describes the latency performance of the investigated teleoperation use case. The measurements in the experiment are performed with two 5G standalone (SA) networks and an edge cloud.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125736431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-inspired Video Imitation Learning on Humanoid Model
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00068
Chun Hei Lee, Nicole Chee Lin Yueh, K. Woo
Generating good, human-like locomotion and other legged motions for bipedal robots has always been challenging. One emerging solution is imitation learning. Since the sources for imitation are mostly state-only demonstrations, state-of-the-art Generative Adversarial Imitation Learning (GAIL) with Imitation from Observation (IfO) capability is an ideal framework for this problem. However, new or complicated movements are often difficult to support, because the common data sources for these frameworks are either expensive to set up or, due to accuracy problems, hard to turn into satisfactory results without computationally expensive preprocessing. Inspired by how people learn advanced knowledge after acquiring a basic understanding of a subject, this paper proposes a Motion-capture-aided Video Imitation (MoVI) learning framework based on Adversarial Motion Priors (AMP) that combines motion capture data of primary actions, like walking, with video clips of a target motion, like running, aiming to produce smooth and natural imitations of the target motion. The framework can produce various human-like locomotion from the most common and abundant motion capture data together with arbitrary video clips of motion, without the need for expensive datasets or sophisticated preprocessing.
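Purely as an illustration of the adversarial-motion-prior idea the framework builds on, and not MoVI's implementation, the sketch below shows the bounded style reward commonly used in AMP-style methods, where a least-squares discriminator score d is mapped into [0, 1]. The discriminator, policy, and training data are omitted.

```python
# Hedged sketch: the style-reward shape typically used in AMP-style imitation,
# mapping a least-squares discriminator output d to a bounded reward.
import numpy as np

def amp_style_reward(d):
    """Map discriminator scores d (target +1 on reference motion, -1 on policy
    motion under an LSGAN-style objective) to a reward in [0, 1]."""
    return np.maximum(0.0, 1.0 - 0.25 * (d - 1.0) ** 2)

# A transition the discriminator rates close to the motion-capture prior gets a
# reward near 1, an off-manifold transition gets roughly 0:
print(amp_style_reward(np.array([0.9, 0.0, -1.0])))  # approx. [0.9975, 0.75, 0.0]
```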
{"title":"Human-inspired Video Imitation Learning on Humanoid Model","authors":"Chun Hei Lee, Nicole Chee Lin Yueh, K. Woo","doi":"10.1109/IRC55401.2022.00068","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00068","url":null,"abstract":"Generating good and human-like locomotion or other legged motions for bipedal robots has always been challenging. One of the emerging solutions to this challenge is to use imitation learning. The sources for imitation are mostly state-only demonstrations, so using state-of-the-art Generative Adversarial Imitation Learning (GAIL) with Imitation from Observation (IfO) ability will be an ideal frameworks to use in solving this problem. However, it is often difficult to allow new or complicated movements as the common sources for these frameworks are either expensive to set up or hard to produce satisfactory results without computationally expensive preprocessing, due to accuracy problems. Inspired by how people learn advanced knowledge after acquiring basic understandings of specific subjects, this paper proposes a Motion capture-aided Video Imitation (MoVI) learning framework based on Adversarial Motion Priors (AMP) by combining motion capture data of primary actions like walking with video clips of target motion like running, aiming to create smooth and natural imitation results of the target motion. This framework is able to produce various human-like locomotion by taking the most common and abundant motion capture data with any video clips of motion without the need for expensive datasets or sophisticated preprocessing.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124793211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ZigZag Algorithm: Scanning an Unknown Maze by an Autonomous Drone
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00080
Jeryes Danial, Y. Ben-Asher
We consider the problem of a drone (quadcopter) that must autonomously scan or search an unknown maze of walls and obstacles (no GPS and no communication). This ability (navigating in an unknown indoor environment) is a fundamental problem for drones, and for robotics in general, with applications in military, security, search & rescue, and surveillance tasks. Previous works typically proposed systems that construct a 3D map (via camera images or distance sensors) of the drone's surroundings; this map is then analyzed to determine the drone's location and an obstacle-free path. The algorithm proposed here skips the 3D map and the computation of an obstacle-free path by using random "blind" billiard zig-zag movements to scan the maze. The drone simply bounces off walls and obstacles, removing the need to find an obstacle-free path in a 3D map. Thus the algorithm requires only a simple form of obstacle detection, one that alerts the drone that there is a close obstacle in its direction of flight. Zigzag movements alone were not enough to obtain efficient cover of the maze, where "efficient" cover means the drone performs no more than one pass per corridor/room (OPTtime). Hence, a more complex algorithm was developed on top of these random zigzag movements. Experimental results using a realistic flight simulation in a random maze showed about 95% cover in OPTtime.
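The full algorithm layered on top of the zigzag behaviour is not reproduced here; the sketch below only illustrates the core "blind billiard" step the abstract describes: fly straight until the simple proximity alert fires, then bounce to a new random heading instead of planning an obstacle-free path. The sensor callback and simulation loop are hypothetical.

```python
# Illustrative sketch (not the authors' full algorithm): one step of the random
# "billiard" zig-zag behaviour in the plane.
import math
import random

def zigzag_step(x, y, heading, obstacle_ahead, step=0.2):
    """Advance one step; if something is close in the flight direction,
    pick a new random heading instead of planning around it."""
    if obstacle_ahead(x, y, heading):
        heading = random.uniform(0.0, 2.0 * math.pi)   # "bounce" off the wall
        return x, y, heading
    return x + step * math.cos(heading), y + step * math.sin(heading), heading

# Hypothetical usage inside a simulation loop:
# x, y, heading = 1.0, 1.0, 0.0
# for _ in range(10_000):
#     x, y, heading = zigzag_step(x, y, heading, maze.obstacle_ahead)
```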
{"title":"ZigZag Algorithm: Scanning an Unknown Maze by an Autonomous Drone","authors":"Jeryes Danial, Y. Ben-Asher","doi":"10.1109/IRC55401.2022.00080","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00080","url":null,"abstract":"We consider the problem of a drone (quadcopter) that autonomously needs to scan or search an unknown maze of walls and obstacles (no GPS and no communication). This ability (navigating in an unknown indoor environment) is a fundamental problem in the area of drones (even in general robotics) and has applications in military, security, search & rescue and surveillance tasks. Typically, previous works proposed systems that construct a 3D map (via camera images or distance sensors) of the drone’s surroundings. This 3D map is then analyzed to determine the drone’s location and an obstacle-free path. The algorithm proposed here skips over the 3D map and the computation of the obstacle-free path by using random “blind” billiard zig-zag movements to scan the maze. This way, the drone simply bounces from walls and obstacles disregarding the need to find an obstacle-free path in a 3D map. Thus the algorithm requires only a simple form of obstacle detection, one that alerts the drone that there is a close obstacle in its direction of flight. Just using zigzag movements was not enough to obtain efficient cover of the maze were “efficient” cover is when the drone performs no more than one pass per corridor/room (OPTtime). Hence, a more complex algorithm was developed on top of these random zigzag movements. Experimental results using a realistic flight simulation in a random maze showed about 95% cover in OPTtime.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124009032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object pose estimation in industrial environments using a synthetic data generation pipeline
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00084
Manuel Belke, P. Blanke, S. Storms, W. Herfs
Object handling is a crucial robotic skill for automating the production industry. The trend toward using machine learning to estimate the 6D pose of objects is driven by higher robustness and faster processing times. Machine-learning-based 6D pose estimation algorithms are available with varying estimation performance, robustness, and flexibility, and suitable algorithms have to be selected based on use-case-specific production requirements. A concept for evaluating these algorithms is presented: synthetic data is generated from the production requirements, and the algorithms are then evaluated to assess how well they generalize from generic benchmark datasets to custom industrial datasets. The overall pipeline is presented, realized, and discussed.
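The abstract does not fix a particular evaluation metric; as one illustrative possibility (not necessarily the paper's choice), the sketch below computes the widely used ADD score for comparing an estimated 6D pose against ground truth on a synthetic or real test set. All inputs are placeholders.

```python
# Sketch of one common way to score a 6D pose estimate against ground truth:
# the ADD metric over the object's model points.
import numpy as np

def add_metric(model_points, R_gt, t_gt, R_est, t_est):
    """Average distance between model points transformed by the ground-truth
    pose and by the estimated pose (3x3 rotations R, 3-vector translations t).
    A pose is often accepted if this is below 10% of the object diameter."""
    pts_gt = model_points @ R_gt.T + t_gt
    pts_est = model_points @ R_est.T + t_est
    return np.linalg.norm(pts_gt - pts_est, axis=1).mean()

# Usage with made-up data:
# err = add_metric(cad_points, R_gt, t_gt, R_est, t_est)
# accepted = err < 0.1 * object_diameter
```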
{"title":"Object pose estimation in industrial environments using a synthetic data generation pipeline","authors":"Manuel Belke, P. Blanke, S. Storms, W. Herfs","doi":"10.1109/IRC55401.2022.00084","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00084","url":null,"abstract":"The handling of objects is a crucial robotic skill for the automation of the production industry. The trend to use machine learning to estimate the 6D pose of objects is driven by higher robustness and faster processing times. Machine-learning based 6D pose estimation algorithms are available with varying estimation performance, robustness and flexibility. Suitable algorithms have to be selected based on use-case specific production requirements. A concept to evaluate these algorithms is presented. The generation of synthetic data based on the production requirements is proposed, followed by an evaluation of the algorithms to assess the generalization performance from generic benchmark datasets to custom industrial datasets. The overall pipeline is presented, realized and discussed.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121325497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Path Smoothing with Deterministic Shortcuts
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00078
Maryam Khazaei Pool, Carlos Diaz Alvarenga, Marcelo Kallmann
Path smoothing is an important operation in many path planning applications. While several approaches have been proposed in the literature, simple and effective methods with quality-based termination conditions are lacking. In this paper we propose a deterministic shortcut-based smoothing method that is simple to implement and meets user-specified termination conditions based on solution quality, overcoming one of the main limitations of traditional random-based approaches. We present several benchmarks demonstrating that our method produces higher-quality results than the traditional random shortcuts approach.
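The authors' deterministic method is not reproduced here; to make the underlying idea of shortcut-based smoothing concrete, the sketch below shows a generic greedy shortcut pass over a waypoint path, with `line_of_sight` standing in for whatever collision checker the planner provides.

```python
# Not the paper's algorithm: a generic greedy shortcut pass over a polygonal
# path, shown only to illustrate shortcut-based smoothing.
def shortcut_path(path, line_of_sight):
    """path: list of waypoints. Greedily jump from each kept waypoint to the
    farthest later waypoint that is directly reachable."""
    smoothed = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j]):
            j -= 1                      # fall back toward the next waypoint
        smoothed.append(path[j])
        i = j
    return smoothed

# Hypothetical usage: smoothed = shortcut_path(waypoints, env.line_of_sight)
```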
{"title":"Path Smoothing with Deterministic Shortcuts","authors":"Maryam Khazaei Pool, Carlos Diaz Alvarenga, Marcelo Kallmann","doi":"10.1109/IRC55401.2022.00078","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00078","url":null,"abstract":"Path smoothing is an important operation in a number of path planning applications. While several approaches have been proposed in the literature, a lack of simple and effective methods with quality-based termination conditions can be observed. In this paper we propose a deterministic shortcut-based smoothing method that is simple to be implemented and achieves user-specified termination conditions based on solution quality, overcoming one of the main limitations observed in traditional random-based approaches. We present several benchmarks demonstrating that our method produces higher-quality results when compared to the traditional random shortcuts approach.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129373764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Terrain Dependent Power Estimation for Legged Robots in Unstructured Environments
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00064
Christopher Allred, Huzeyfe Kocabas, Mario Harper, J. Pusey
Gait-based legged robots offer substantial advantages for traversing complicated, unstructured, or discontinuous terrain, which is increasing their use in many real-world applications. However, they are also challenging to deploy because their complex locomotion and power needs limit operation time, range, and payload capability. Anticipating the impact of terrain transitions on range and average power consumption is crucial for understanding operational limits in autonomous and teleoperated missions. This study examines strategies for forecasting terrain-dependent energy costs on five distinct surfaces (asphalt, concrete, grass, brush, and snow). Field experiments demonstrate the effectiveness of our combined proprioception and vision approach, called MEP-VP. This hybrid framework requires only two seconds of motion data before returning actionable power estimates. Validation is conducted on physical hardware in field demonstrations.
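MEP-VP itself is not described in enough detail here to reproduce; as a loose illustration of mapping a two-second motion window plus a vision-derived terrain class to an average power estimate, the sketch below fits a simple ridge regressor on made-up features and wattages.

```python
# Hedged sketch, not MEP-VP: a minimal regressor from a short proprioceptive
# window plus a terrain class to average power draw. All data are invented.
import numpy as np
from sklearn.linear_model import Ridge

def window_features(speeds, terrain_id):
    """Summarize a 2 s window of body speeds (m/s) and append the terrain class."""
    return np.array([speeds.mean(), speeds.std(), terrain_id], dtype=float)

# Hypothetical training rows: [mean speed, speed std, terrain id] -> watts
X_train = np.array([[0.8, 0.1, 0], [0.7, 0.2, 1], [0.5, 0.3, 4]])
y_train = np.array([210.0, 260.0, 340.0])
model = Ridge().fit(X_train, y_train)

# Estimate for a new 2 s window on terrain id 1 (e.g. concrete):
print(model.predict([window_features(np.array([0.75, 0.72, 0.78]), 1)]))
```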
{"title":"Terrain Dependent Power Estimation for Legged Robots in Unstructured Environments","authors":"Christopher Allred, Huzeyfe Kocabas, Mario Harper, J. Pusey","doi":"10.1109/IRC55401.2022.00064","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00064","url":null,"abstract":"Gait-based legged robots offer substantial advantages for traversing complicated, unstructured, or discontinuous terrain. Thus increasing their use in many real-world applications. However, they are also challenging to deploy due to limitations in operation time, range, and payload capabilities due to their complex locomotion and power needs. Anticipating the impact of terrain transitions on the range and average power consumption is crucial for understanding operational limits in autonomous and teleoperated missions. This study examines strategies for forecasting terrain-dependent energy costs on five unique surfaces (asphalt, concrete, grass, brush, and snow). The field experiments demonstrate the effectiveness of our combined proprioception and vision approach called MEP-VP. This hybrid framework only requires two seconds of motion data before returning actionable power estimates. Validation is conducted on physical hardware in field demonstration.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133333770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Validation of Autonomous Source Localization with Ground Robots
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00048
Marcus Dorau, M. Alpen, J. Horn
A source localization experiment with a group of ground robots is presented in this paper. The process implemented on the robots is described together with its building blocks, which include methods from robotics and control. The experimental results show that source localization succeeds in the presented environment. The workings of the process are explained, and the dependence of performance on the number of robots used is discussed. A way to reuse measurements from the mapping phase for source localization is presented, which speeds up the localization process, and the effect of the tuning parameters is investigated.
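The abstract does not name the estimator used; purely as a stand-in, the sketch below fits a source position to scalar intensity measurements gathered along the robots' paths, assuming an inverse-square decay model. Both the model and the interfaces are assumptions, not the paper's method.

```python
# Illustrative stand-in: least-squares fit of a source position and strength
# from intensity readings taken at known robot positions, assuming I = s / r^2.
import numpy as np
from scipy.optimize import least_squares

def fit_source(positions, readings):
    """positions: (N, 2) measurement locations, readings: (N,) intensities.
    Returns the estimated (x, y, strength)."""
    def residuals(p):
        src, s = p[:2], p[2]
        r2 = np.sum((positions - src) ** 2, axis=1) + 1e-6  # avoid division by zero
        return s / r2 - readings
    x0 = np.array([positions[:, 0].mean(), positions[:, 1].mean(), readings.max()])
    return least_squares(residuals, x0).x

# Hypothetical usage with measurements reused from the mapping phase:
# est_x, est_y, est_s = fit_source(robot_positions, sensor_readings)
```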
{"title":"Practical Validation of Autonomous Source Localization with Ground Robots","authors":"Marcus Dorau, M. Alpen, J. Horn","doi":"10.1109/IRC55401.2022.00048","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00048","url":null,"abstract":"A source localization experiment with a group of ground robots is presented in this paper. The process implemented on the robots is shown together with its building blocks which include methods from robotics and control. The results of the experiments show that the source localization is successful in the presented environment. The workings of the process are explained and an implication about the performance in dependence on the number of robots used is given. A way to use measurements from the mapping phase for source localization is presented which speeds up the localization process and the effect of the tuning parameters is investigated.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131723105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}