A data-driven Sensor Model for LIDAR Range Measurements used for Mobile Robot Navigation
Florian Spiess, Norbert Strobel, Tobias Kaupp, Samuel Kounev
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00020
In this paper, an analysis of the precision of LIDAR range measurements is presented. LIDAR data from two different sensors (HLS-LFCD-LDS and SICK TIM561) were analyzed regarding the influence of range, incident angle to the surface, and material. Based on the results, a data-driven model of LIDAR precision behavior was developed and compared against standard-deviation models derived from the vendor-provided specifications. Our model can be used to create realistic sensor simulations and to develop robot navigation algorithms that weight sensor range readings according to their precision.
{"title":"A data-driven Sensor Model for LIDAR Range Measurements used for Mobile Robot Navigation","authors":"Florian Spiess, Norbert Strobel, Tobias Kaupp, Samuel Kounev","doi":"10.1109/IRC55401.2022.00020","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00020","url":null,"abstract":"In this paper, an analysis of the precision of LIDAR range measurements is presented. LIDAR data from two different sensors (HLS-LFCD-LDS and SICK TIM561) were analyzed regarding the influence of range, incident angle to the surface, and material. Based on the results, a data-driven model for LIDAR precision behavior was developed, and a comparison with standard deviation models based on the vendor-provided specifications was presented. Our model can be used to create realistic sensor simulations and to develop robot navigation algorithms weighing sensor range readings based on the precision.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130617452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Distributed Deep Learning Approach for A Team of Unmanned Aerial Vehicles for Wildfire Tracking and Coverage
Kripash Shrestha, Hung M. La, Hyung-Jin Yoon
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00061
Recent large wildfires in the United States and the damage they have caused have increased the importance of wildfire monitoring and tracking. However, human monitoring on the ground or in the air can be too dangerous, so alternatives are needed. Unmanned Aerial Vehicles (UAVs) have previously been used in this problem domain to track and monitor wildfires with approaches such as artificial potential fields and reinforcement learning. Our work considers a team of UAVs operating over an area in a distributed fashion to maximize sensor coverage in dynamic wildfire environments. We proposed and implemented a Deep Q-Network (DQN) with a state estimator (an auto-encoder) and compared it to existing methods: Q-learning, Q-learning with experience replay, and a plain DQN. The proposed DQN with a state estimator outperformed the existing deep learning methods in terms of reward maximization and convergence.
{"title":"A Distributed Deep Learning Approach for A Team of Unmanned Aerial Vehicles for Wildfire Tracking and Coverage","authors":"Kripash Shrestha, Hung M. La, Hyung-Jin Yoon","doi":"10.1109/IRC55401.2022.00061","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00061","url":null,"abstract":"Recent large wildfires in the United States and the subsequent damage that they have caused have increased the importance of wildfire monitoring and tracking. However, human monitoring on the ground or in the air may be too dangerous and therefore, there need to be alternatives to monitoring wildfires. Unmanned Aerial Vehicles (UAVs) have been previously used in this problem domain to track and monitor wildfires with approaches such as artificial potential fields and reinforcement learning. Our work aims to look at a team of UAVs, in a distributed approach, over an area to maximize the sensor coverage in dynamic wildfire environments. We proposed and implemented the Deep Q-Network (DQN) with a state estimator (auto-encoder), then compared it to existing methods including a Q-learning, a Q-learning with experience replay, and a DQN. The proposed DQN with a state estimator outperformed existing deep learning methods in terms of reward maximization and convergence.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"132 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132477656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Voluntary Interaction Detection for Safe Human-Robot Collaboration
Francesco Grella, A. Albini, G. Cannata
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00069
In this paper we propose an adaptive algorithm for safe physical human-robot collaboration using admittance control. Our approach adopts tactile sensors as a physical communication channel through which a human can express their intention to the robot. Distributed tactile sensors provide a rich geometric representation of unpredictable contact events, which can be used to reconstruct a footprint of the external environment. In particular, the shape of a human hand can be retrieved whenever a person touches or grasps a surface covered with tactile sensors. We use hand-shape detection to discriminate between voluntary and involuntary interaction, classifying whether the human is deliberately making contact with the robot or a collision is accidental. This allows robot motion to be enabled only when the operator intentionally decides to move the robot, avoiding unpredictable behavior in case of accidental collisions. The detection information is used to tune the gains of an admittance controller online in order to enforce safety in manual guidance applications. We validate our approach on a Franka Emika 7-DoF manipulator, evaluating the algorithm in scenarios where both voluntary and undesired contacts can occur and comparing the proposed method against a basic admittance controller. The experiments show how voluntary interaction detection can mitigate the effects of undesired collisions with any body part and could potentially limit harmful situations. A comprehensive video of the experiments is available at the following link: https://youtu.be/C0UeTFudy3M.
{"title":"Voluntary Interaction Detection for Safe Human-Robot Collaboration","authors":"Francesco Grella, A. Albini, G. Cannata","doi":"10.1109/IRC55401.2022.00069","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00069","url":null,"abstract":"In this paper we propose an adaptive algorithm for safe physical human-robot collaboration using admittance control. Our approach adopts tactile sensors as a physical communication channel through which a human can express its intention to the robot. The use of distributed tactile sensors allows to retrieve a rich geometric representation of unpredictable contact events, useful to reconstruct a footprint of the external environment. In particular the shape of a human hand can be retrieved whenever a person touches or grasps a surface covered with tactile sensors. We use hand shape detection to discriminate between voluntary and non-voluntary interaction, thus classifying situations in which the human is deliberately making contact with the robot or an eventual collision is unintended. This method allows to enable robot motion only when the operator intentionally decides to move it, thus avoiding unpredictable behaviors in case of accidental collisions. For this purpose, detection information is used to perform online gain tuning of an admittance controller in order to enforce safety in manual guidance applications. We validate our approach on a Franka Emika 7-dof manipulator, evaluating the algorithm in scenarios where both voluntary and undesired contacts can occur, comparing the proposed method with respect to a basic admittance controller. Through experiments we show how voluntary interaction detection can mitigate the effects of undesired collisions with any of the body parts and could potentially limit harmful situations. A comprehensive video of the experiments is available at the following link: https://youtu.be/C0UeTFudy3M.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124230929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time Series Classification of IMU Data for Point of Impact Localization
Richard Krieg, M. Ebner
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00025
Collision detection is a crucial part of every mobile robot system and has received a lot of attention in recent years. Proper handling of a collision event involves many challenges. Once a collision has occurred, the robot needs to decide how to proceed; however, prior to taking action it is important to localize the point of impact. This can be done efficiently and accurately using machine learning methods. We show how the recent method FRUITS can be used for point-of-impact localization from IMU data on a mobile robot, and we compare it with the very efficient algorithm ROCKET. Our results show that both methods accurately identify discrete points of impact, but FRUITS has a quicker response time.
{"title":"Time Series Classification of IMU Data for Point of Impact Localization","authors":"Richard Krieg, M. Ebner","doi":"10.1109/IRC55401.2022.00025","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00025","url":null,"abstract":"Collision detection is a crucial part of every mobile robot system. The field of collision detection has received a lot of attention in recent years. Proper handling of a collision event involves many challenges. Once a collision has occurred, the robot needs to decide on how to proceed. However, prior to taking action it is important to localize the point of impact. This can be done efficiently and accurately using machine learning methods. We show how the recent method FRUITS can be used for point of impact localization using IMU data on a mobile robot. We also compare it with the very efficient algorithm ROCKET. Our results show that both methods are able to accurately identify discrete points of impact but FRUITS has a quicker response time.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129102204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Improved Approach to 6D Object Pose Tracking in Fast Motion Scenarios
Yanming Wu, P. Vandewalle, P. Slaets, E. Demeester
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00045
Tracking the 6D poses of objects in video sequences is important for many applications such as robot manipulation and augmented reality. End-to-end deep-learning-based 6D pose tracking methods have achieved notable accuracy and speed on standard benchmarks characterized by slowly varying poses. However, these methods fail in fast-motion scenarios: the performance of temporal trackers degrades significantly and tracking failures occur frequently. In this work, we propose a framework that makes end-to-end 6D pose trackers work better in fast-motion scenarios. We integrate the “Relative Pose Estimation Network” from an end-to-end 6D pose tracker into an EKF framework. The EKF adopts a constant-velocity motion model, and its measurement is computed from the output of the “Relative Pose Estimation Network”. The proposed method is evaluated on challenging hand-object interaction sequences from the Laval dataset and compared against the original end-to-end pose tracker, referred to as the baseline. Experiments show that the EKF integration significantly improves tracking performance, achieving a pose detection rate of 85.23% compared to the baseline's 61.32%. The proposed framework exceeds the real-time performance requirement of 30 fps.
{"title":"An Improved Approach to 6D Object Pose Tracking in Fast Motion Scenarios","authors":"Yanming Wu, P. Vandewalle, P. Slaets, E. Demeester","doi":"10.1109/IRC55401.2022.00045","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00045","url":null,"abstract":"Tracking 6D poses of objects in video sequences is important for many applications such as robot manipulation and augmented reality. End-to-end deep learning based 6D pose tracking methods have achieved notable performance both in terms of accuracy and speed on standard benchmarks characterized by slowly varying poses. However, these methods fail to address a key challenge for using 6D pose trackers in fast motion scenarios. The performance of temporal trackers degrades significantly in fast motion scenarios and tracking failures occur frequently. In this work, we propose a framework to make end-to-end 6D pose trackers work better for fast motion scenarios. We integrate the “Relative Pose Estimation Network” from an end-to-end 6D pose tracker into an EKF framework. The EKF adopts a constant velocity motion model and its measurement is computed from the output of the “Relative Pose Estimation Network”. The proposed method is evaluated on challenging hand-object interaction sequences from the Laval dataset and compared against the original end-to-end pose tracker, referred to as the baseline. Experiments show that integration with EKF significantly improves the tracking performance, achieving a pose detection rate of 85.23% compared to 61.32% achieved by the baseline. The proposed framework exceeds the real-time performance requirement of 30 fps.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114729149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Network Control of Industrial Robots Using ROS
Minh Trinh, C. Brecher
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00083
Neural networks (NNs) are able to model nonlinear systems with increasing accuracy. Developments towards explainable artificial intelligence and the integration of existing physical knowledge promote their acceptance and transparency. For these reasons, they are suitable for application in real systems, especially for modeling highly dynamic relationships. One possible application of NNs is the accuracy optimization of robot-based machining processes. Thanks to their flexibility and comparatively low investment costs, industrial robots (IRs) are suitable for machining large components. However, due to their design, IRs lack stiffness compared to traditional machine tools. One way to counteract this problem is to compensate for the compliance by means of model-based control. For this purpose, NNs can be used to predict the drive torques required in the axes. Compared to conventional analytical dynamics models, no complex identification of model parameters is necessary. In addition, NNs can take complex, nonlinear influences such as friction into account. In this work, NNs are applied to real-time model-based control of an IR using the Robot Operating System (ROS).
Synchronisation in Extended Robot State Automata
Lukas Sauer, D. Henrich
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00070
Making automation with robots more viable in smaller enterprises requires programming methods aimed at non-experts. In this work, we extend an automata-based programming approach from our previous research to multiple robot arms. This adds the challenge of synchronisation between the robots (to avoid conflicts or deadlocks during execution). The basic process consists of kinesthetically guiding the robot and programming step by step, without a graphical representation of the program or an editor. We present the developed formalism and the corresponding programming method. In a user study, we evaluated the resulting system with regard to usability by experts and non-experts. The experiments suggest that both expert and non-expert users were able to solve small tasks with the system. Non-experts were less successful on average than experts, but deemed the system less complex.
{"title":"Synchronisation in Extended Robot State Automata","authors":"Lukas Sauer, D. Henrich","doi":"10.1109/IRC55401.2022.00070","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00070","url":null,"abstract":"Making automation with robots more viable in smaller enterprises requires programming methods aimed at non-experts. In this work, we expand an automata-based programming approach from our previous research to multiple robot arms. This adds the challenge of synchronisation between the robots (to avoid conflicts or deadlocks during execution). The basic process consists of kinesthetically guiding the robot and programming step by step, without a graphical representation of the program or editor. The developed formalism and the corresponding programming method are presented. In a user study, we evaluated the resulting system with regards to usability by experts and non-experts. The experiments suggest that both expert and non-expert users were able solve small tasks with the system. Non-experts were less successful on average than experts, but deemed the system less complex.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127269183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Localization in Seemingly Sensory-Denied Environments through Spatio-Temporal Varying Fields
Jose Fuentes, Leonardo Bobadilla, Ryan N. Smith
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00032
Localization in underwater environments is a fundamental problem for autonomous vehicles, with important applications such as underwater ecology monitoring, infrastructure maintenance, and conservation of marine species. However, several sensing modalities traditionally used for localization in outdoor robotics (e.g., GPS, compasses, LIDAR, and vision) are compromised in underwater scenarios. In addition, aliasing, drift, and dynamic changes in the environment also affect state estimation in aquatic settings. Motivated by these issues, we propose novel state-estimation algorithms for underwater vehicles that read noisy sensor observations of spatio-temporally varying fields in water (e.g., temperature, pH, chlorophyll-A, and dissolved oxygen) and have access to a model of the fields' evolution as a set of partial differential equations. We frame underwater robot localization in an optimization framework and formulate, study, and solve the state-estimation problem. We find the most likely position given a sequence of observations, and we prove upper and lower bounds on the estimation error given information about the error and the fields. Our methodology finds the actual location within a 95% confidence interval around the median in over 90% of cases across different conditions and extensions.
{"title":"Localization in Seemingly Sensory-Denied Environments through Spatio-Temporal Varying Fields","authors":"Jose Fuentes, Leonardo Bobadilla, Ryan N. Smith","doi":"10.1109/IRC55401.2022.00032","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00032","url":null,"abstract":"Localization in underwater environments is a fundamental problem for autonomous vehicles with important applications such as underwater ecology monitoring, infrastructure maintenance, and conservation of marine species. However, several traditional sensing modalities used for localization in outdoor robotics (e.g., GPS, compasses, LIDAR, and Vision) are compromised in underwater scenarios. In addition, other problems such as aliasing, drifting, and dynamic changes in the environment also affect state estimation in aquatic environments. Motivated by these issues, we propose novel state estimation algorithms for underwater vehicles that can read noisy sensor observations in spatio-temporal varying fields in water (e.g., temperature, pH, chlorophyll-A, and dissolved oxygen) and have access to a model of the evolution of the fields as a set of partial differential equations. We frame the underwater robot localization in an optimization framework and formulate, study, and solve the state-estimation problem. First, we find the most likely position given a sequence of observations, and we prove upper and lower bounds for the estimation error given information about the error and the fields. Our methodology can find the actual location within a 95% confidence interval around the median in over 90% of the cases in different conditions and extensions.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121576007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DVF-RRT: Randomized Path Planning on Predictive Vector Fields
Tauhidul Alam, Fabian Okafor, Ankit Patel, Abdullah Al Redwan Newaz
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00077
Autonomous surface vehicle (ASV) navigation in marine environments is challenging due to disturbances caused by water currents and their spatiotemporal variations. Existing methods take into account only spatial variations of vector fields measured through vehicle sensors, neglecting temporal variations. Effective path planning for ASVs also requires reasoning about the prediction of spatiotemporally varying water currents in marine environments. Therefore, this paper presents a method that integrates the prediction of water vector fields with a randomized path planner. We model the water flow of an area of interest as an unknown vector field and train a Long Short-Term Memory (LSTM) neural network to learn this field accurately and effectively from real ocean-current data. This allows the generation of a randomized path that moves along the vector field in a continuous space. To generate such a path on the predicted vector field, we present a Deep Vector Field Rapidly-exploring Random Tree (DVF-RRT) algorithm that leverages the strength of the RRT algorithm to reach a goal configuration from an initial configuration. The algorithm is validated through simulated randomized paths on predictive vector fields and benchmarked against an existing VF-RRT method.
{"title":"DVF-RRT: Randomized Path Planning on Predictive Vector Fields","authors":"Tauhidul Alam, Fabian Okafor, Ankit Patel, Abdullah Al Redwan Newaz","doi":"10.1109/IRC55401.2022.00077","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00077","url":null,"abstract":"Autonomous surface vehicle (ASV) navigation in marine environments is challenging due to the disturbances caused by water currents and their spatiotemporal variations. Existing methods take into account only spatial variations of vector fields that are measured through vehicle sensors, but neglect temporal variations of vector fields. Effective path planning for ASVs also requires critical reasoning about the prediction of spatiotemporally varying water currents in marine environments. Therefore, this paper presents a method that integrates the prediction of water vector fields with a randomized path planner. We model the water flow of an area of interest as an unknown vector field and then train a Long-Short Term Memory (LSTM) neural network to learn such an unknown vector field accurately and effectively from real ocean current data. This allows the generation of a randomized path that moves along the vector field in a continuous space. To generate a randomized path on the predicted vector field, we present a Deep Vector Field - Rapidly-exploring Random Tree (DVF-RRT) algorithm for reaching a goal configuration starting from an initial configuration that leverages the strength of the RRT algorithm. The algorithm is validated through simulated randomized paths on predictive vector fields and benchmarking with regard to an existing VF-RRT method.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"363 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123556675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Large-Scale UAV Audio Dataset and Audio-Based UAV Classification Using CNN
Yaqin Wang, Zhiwei Chu, Ilmun Ku, E. C. Smith, E. Matson
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00039
The increased popularity and accessibility of UAVs may create potential threats. Researchers have been developing UAV detection and classification systems using different methods, including audio-based approaches. However, the number of publicly available UAV audio datasets is limited. To fill this gap, we selected 10 different UAVs, ranging from toy hand drones to Class I drones, and recorded a total of 5215 seconds of audio data generated by the flying UAVs. To the best of our knowledge, the proposed dataset is the largest UAV audio dataset so far. We further implemented a convolutional neural network (CNN) model for 10-class UAV classification and trained it with the collected data. The overall test accuracy of the trained model is 97.7% and the test loss is 0.085.
{"title":"A Large-Scale UAV Audio Dataset and Audio-Based UAV Classification Using CNN","authors":"Yaqin Wang, Zhiwei Chu, Ilmun Ku, E. C. Smith, E. Matson","doi":"10.1109/IRC55401.2022.00039","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00039","url":null,"abstract":"The increased popularity and accessibility of UAVs may create potential threats. Researchers have been developing UAV detection and classification systems with different methods, including audio-based approach. However, the number of publicly available UAV audio datasets is limited. To fill this gap, we selected 10 different UAVs, ranging from toy hand drones to Class I drones, and recorded a total of 5215 seconds length of audio data generated from the flying UAVs. To the best of our knowledge, the proposed dataset is the largest audio dataset for UAVs so far. We further implemented a convolutional neural network (CNN) model for 10-class UAV classification and trained the model with the collected data. The overall test accuracy of the trained model is 97.7% and the test loss is 0.085.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124636217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}