Real-Time Learning of Wing Motion Correction in an Unconstrained Flapping-Wing Air Vehicle
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00010
J. Gallagher, E. Matson, Ryan Slater
Small Flapping-Wing Micro-Air Vehicles (FW-MAVs) can experience wing damage and wear while in service. Even small amounts of wing damage can prevent the vehicle from attaining desired waypoints without significant adaptation of the onboard flight control. In previous work, we demonstrated that low-level adaptation of wing motion patterns, rather than high-level adaptation of path control, could restore acceptable performance. We further demonstrated that this low-level adaptation could be accomplished while the vehicle was in normal service and without requiring excessive amounts of flight time. Previous work, however, did not carefully consider the use of these methods when the vehicle was completely unconstrained in three-dimensional space (i.e., no mechanical safety supports) and when all vehicle degrees of freedom had to be controlled simultaneously. Also, previous work presumed that the learning algorithm could adapt wing motion patterns with minimal constraints on their shape. The newest generation of FW-MAVs we consider places significant constraints on legal wing motions, which calls into question the efficacy of previous methods for current vehicles. In this paper, we provide compelling evidence that learning during unconstrained flight under the newly imposed wing motion constraints is both practical and feasible. This paper constitutes the first formal report of these results and removes the final barriers to implementation in a fully realized physical FW-MAV.
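As a rough illustration of what in-service, low-level wing-motion adaptation can look like, the sketch below hill-climbs a small set of wing parameters under hard bounds that stand in for the "legal wing motion" constraints. The parameter names, bounds, and error function are invented for illustration; they are not the paper's oscillator model or evaluation procedure.

```python
import random

# Hypothetical wing-motion parameters; the bounds stand in for the
# "legal wing motion" constraints imposed by the newest vehicles.
BOUNDS = {"amp_left": (0.6, 1.0), "amp_right": (0.6, 1.0),
          "offset_left": (-0.2, 0.2), "offset_right": (-0.2, 0.2)}

def clamp(name, value):
    lo, hi = BOUNDS[name]
    return max(lo, min(hi, value))

def tracking_error(params):
    """Placeholder for an in-flight evaluation: fly a short segment and
    measure waypoint-tracking error. Here it is a made-up quadratic bowl."""
    target = {"amp_left": 0.8, "amp_right": 0.9,
              "offset_left": 0.0, "offset_right": 0.1}
    return sum((params[k] - target[k]) ** 2 for k in params)

def adapt(params, iterations=200, step=0.05):
    """Hill-climb within the legal bounds, keeping only improvements so the
    vehicle never adopts a worse wing motion while in normal service."""
    best, best_err = dict(params), tracking_error(params)
    for _ in range(iterations):
        cand = {k: clamp(k, v + random.uniform(-step, step))
                for k, v in best.items()}
        err = tracking_error(cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

start = {"amp_left": 0.7, "amp_right": 0.7, "offset_left": -0.1, "offset_right": 0.0}
tuned, err = adapt(start)
```

Accepting only strictly improving candidates mirrors the safety requirement that learning must not degrade flight while the vehicle remains in service.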
{"title":"Real-Time Learning of Wing Motion Correction in an Unconstrained Flapping-Wing Air Vehicle","authors":"J. Gallagher, E. Matson, Ryan Slater","doi":"10.1109/IRC55401.2022.00010","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00010","url":null,"abstract":"Small Flapping-Wing Micro-Air Vehicles (FW-MAVs) can experience wing damage and wear while in service. Even small amounts of wing can prevent the vehicle from attaining desired waypoints without significant adaptation to onboard flight control. In previous work, we demonstrated that low-level adaptation of wing motion patterns, rather than high-level adaptation of path control, could restore acceptable performance. We further demonstrated that this low-level adaptation could be accomplished while the vehicle was in normal service and without requiring excessive amounts of flight time. Previous work, however, did not carefully consider the use of these methods when the vehicle was completely unconstrained in three-dimensional space (I.E. no mechanical safety supports) and when all vehicle degrees of freedom had to be simultaneously controlled. Also, previous work presumed that the learning algorithm could adapt wing motion patterns with minimal constraints on shape. The newest generation of FW-MAVs we consider place some significant constraints on legal wing motions which brings into question the efficacy of previous work for current vehicles. In this paper, we will provide compelling evidence that learning during unconstrained flight under the newly imposed wing motion conditions is both practical and feasible. This paper constitutes the first formal report of these results and removes the final barriers that had existed to implementation in a fully-realized physical FW-MAV.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133240360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variability Analysis for Robot Operating System Applications
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00028
A. Santos, Alcino Cunha, Nuno Macedo, Sara Melo, Ricardo Pereira
Robotic applications are often designed to be reusable and configurable. Sometimes, due to the different supported software and hardware components, as well as the different implemented robot capabilities, the total number of possible configurations for a single system can be extremely large. In these scenarios, understanding how different configurations coexist and which components and capabilities are compatible with each other is a significant time sink for developers and end users alike. In this paper, we present a static analysis tool, specifically designed for robotic software developed for the Robot Operating System (ROS), that is capable of presenting a graphical and interactive overview of a system's runtime variability, with the goal of simplifying the deployment of the desired robot configuration.
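The configuration blow-up the abstract describes is easy to reproduce: each boolean launch argument doubles the number of candidate configurations. The toy sketch below (not the authors' tool, which targets full ROS systems) enumerates node activations over the boolean arguments of a simplified launch file; real launch files gate nodes via `$(arg ...)` substitution, whereas this treats the `if` attribute as a bare flag name.

```python
import itertools
import xml.etree.ElementTree as ET

def launch_variability(launch_xml):
    """Enumerate which nodes are active under every assignment of the
    boolean <arg> flags of a (simplified) ROS launch file."""
    root = ET.fromstring(launch_xml)
    flags = [a.get("name") for a in root.iter("arg")
             if a.get("default") in ("true", "false")]
    for values in itertools.product([True, False], repeat=len(flags)):
        env = dict(zip(flags, values))
        active = [n.get("name") for n in root.iter("node")
                  if env.get(n.get("if"), True)]  # nodes without 'if' always run
        yield env, active

example = """
<launch>
  <arg name="use_lidar" default="true"/>
  <node name="driver" pkg="demo" type="driver"/>
  <node name="lidar" pkg="demo" type="lidar" if="use_lidar"/>
</launch>"""
for env, active in launch_variability(example):
    print(env, active)
```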
{"title":"Variability Analysis for Robot Operating System Applications","authors":"A. Santos, Alcino Cunha, Nuno Macedo, Sara Melo, Ricardo Pereira","doi":"10.1109/IRC55401.2022.00028","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00028","url":null,"abstract":"Robotic applications are often designed to be reusable and configurable. Sometimes, due to the different supported software and hardware components, as well as the different implemented robot capabilities, the total number of possible configurations for a single system can be extremely large. In these scenarios, understanding how different configurations coexist and which components and capabilities are compatible with each other is a significant time sink both for developers and end users alike. In this paper, we present a static analysis tool, specifically designed for robotic software developed for the Robot Operating System (ROS), that is capable of presenting a graphical and interactive overview of the system’s runtime variability, with the goal of simplifying the deployment of the desired robot configuration.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116442844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task Mapping for Hardware-Accelerated Robotics Applications using ReconROS
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00033
Christian Lienen, M. Platzner
Modern software architectures for robotics map tasks to heterogeneous computing platforms comprising multi-core CPUs, GPUs, and FPGAs. FPGAs promise huge potential for energy-efficient and fast computation, but their use in robotics requires profound knowledge of hardware design and is thus challenging. ReconROS, a combination of the reconfigurable operating system ReconOS and the Robot Operating System (ROS), aims to overcome this challenge with a consistent programming model across the hardware/software boundary and support for event-driven programming. In this paper, we summarize different approaches for mapping tasks to computational resources in ReconROS. These approaches include static and dynamic mappings, as well as the exploitation of data parallelism for single ROS nodes. Further, for dynamic mapping, we propose and analyse different replacement strategies for hardware nodes to minimize reconfiguration overhead. We evaluate the presented techniques and illustrate ReconROS's capabilities through an autonomous vehicle example in a hardware-in-the-loop simulation.
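One plausible instance of a hardware-node replacement strategy for dynamic mapping is LRU eviction over a fixed pool of reconfigurable slots; the sketch below counts partial reconfigurations for a request trace. LRU here is an illustrative assumption, not necessarily the policy the paper finds best.

```python
from collections import OrderedDict

class SlotManager:
    """Toy model of dynamic hardware-node mapping: a fixed number of FPGA
    slots, with LRU eviction when a requested accelerator is not resident."""
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.resident = OrderedDict()   # node_id -> slot index
        self.reconfigurations = 0

    def request(self, node_id):
        if node_id in self.resident:            # hit: no reconfiguration
            self.resident.move_to_end(node_id)
            return self.resident[node_id]
        self.reconfigurations += 1              # miss: partial reconfiguration
        if len(self.resident) < self.num_slots:
            slot = len(self.resident)
        else:
            _, slot = self.resident.popitem(last=False)  # evict least recently used
        self.resident[node_id] = slot
        return slot

mgr = SlotManager(num_slots=2)
for node in ["filter", "planner", "filter", "vision", "planner"]:
    mgr.request(node)
print(mgr.reconfigurations)  # 4 reconfigurations for this trace
```

Comparing reconfiguration counts across such policies on recorded node-request traces is one simple way to quantify the overhead the paper sets out to minimize.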
{"title":"Task Mapping for Hardware-Accelerated Robotics Applications using ReconROS","authors":"Christian Lienen, M. Platzner","doi":"10.1109/IRC55401.2022.00033","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00033","url":null,"abstract":"Modern software architectures for robotics map tasks to heterogeneous computing platforms comprising multi-core CPUs, GPUs, and FPGAs. FPGAs promise huge potential for energy efficient and fast computation, but their use in robotics requires profound knowledge of hardware design and is thus challenging. ReconROS, a combination of the reconfigurable operating system ReconOS and the robot operating system (ROS) aims to overcome this challenge with a consistent programming model across the hardware/software boundary and support of event-driven programming. In this paper, we summarize different approaches for mapping tasks to computational resources in ReconROS. These approaches include static and dynamic mappings, and the exploitation of data parallelism for single ROS nodes. Further, for dynamic mapping we propose and analyse different replacement strategies for hardware nodes to minimize reconfiguration overhead. We evaluate the presented techniques and illustrate ReconROS’ capabilites through an autonomous vehicle example in a hardware-in-the-loop simulation.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133451948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Outdoor visual SLAM and Path Planning for Mobile-Robot
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00057
Seongil Heo, Jueun Mun, Jiwoong Choi, Jiwon Park, E. Matson
This paper proposes a robust visual SLAM and path planning algorithm for autonomous vehicles in outdoor environments. Consideration of outdoor characteristics was essential in both the SLAM and path planning processes. This study is useful when the exact appearance of the environment must be known but cannot be observed from a satellite map, e.g., inside a forest. The visual SLAM system was developed using GPS data to compensate for the deterioration of camera recognition performance outdoors. The GPS data was inserted into each of the visual SLAM threads: Camera Tracking, Local Mapping, and Loop Closing. This enhanced the accuracy of the map and saved computational power by preventing useless calculations. In the path planning part, our method divided the path based on the stability of the roads. When determining the optimal path, both road stability and driving time were considered, and weights were assigned based on the GPS data.
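The weighted trade-off between road stability and driving time can be sketched as a scalarized edge cost fed to Dijkstra's algorithm. The cost form and the `alpha` weight below are illustrative stand-ins for the paper's GPS-derived weighting, not its actual formula.

```python
import heapq

def plan(graph, start, goal, alpha=0.7):
    """Dijkstra over edges annotated with (travel_time, instability),
    scalarized as alpha * time + (1 - alpha) * instability."""
    dist, queue = {start: 0.0}, [(0.0, start, [start])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == goal:
            return path, d
        for nxt, (time, instability) in graph.get(node, {}).items():
            nd = d + alpha * time + (1 - alpha) * instability
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None, float("inf")

roads = {"A": {"B": (5.0, 0.2), "C": (3.0, 0.9)},
         "B": {"D": (4.0, 0.1)},
         "C": {"D": (2.0, 0.8)}}
print(plan(roads, "A", "D", alpha=0.7))  # picks A-C-D despite its instability
```

Raising `alpha` favors the fastest route; lowering it steers the planner toward stabler roads, which is the kind of trade-off the abstract describes.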
{"title":"Outdoor visual SLAM and Path Planning for Mobile-Robot","authors":"Seongil Heo, Jueun Mun, Jiwoong Choi, Jiwon Park, E. Matson","doi":"10.1109/IRC55401.2022.00057","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00057","url":null,"abstract":"This paper proposes a robust visual SLAM and a path planning algorithm for autonomous vehicles in the outdoor environment. The consideration of the outdoor characteristics was essential in both SLAM and path planning processes. This study can be used when it is necessary to know the exact appearance of the environment due to the impossibility of observing the environment through a satellite map, e.g., inside a forest. The visual SLAM system was developed using GPS data in consideration of the deterioration of camera recognition performance outdoors. The GPS data was inserted into every multi-thread of visual SLAM, which are Camera Tracking, Local Mapping, and Loop Closing. It enhanced the accuracy of the map and saved computational power by preventing useless calculations. In the path planning part, our method divided the path based on the stability of the roads. When determining the optimal path, the stability of the road and the driving time were considered, and the weight was assigned based on the GPS data.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128823864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart Robot Vision System for Plant Inspection for Disaster Prevention
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00079
Saifuddin Mahmud, Justin Dannemiller, R. Sourave, Xiangxu Lin, Jong-Hoon Kim
Simulation of emergency response scenarios and routine inspections are imperative means of ensuring the proper functioning and safety of power plants, oil refineries, ironworks, and industrial units. By utilizing autonomous robots, moreover, the reliability and frequency of such inspections can be improved. Beyond facilities located in hazardous areas, such as off-shore factories, where dispatching response teams might be impossible, autonomous inspection and diagnosis of facilities (pumps, tanks, boilers, and so on) can prevent accidents caused by human mistakes. One of the primary obstacles in robot-assisted inspection operations is detecting various types of gauges, reading them, and taking appropriate action. This study describes a unique robot vision-based plant inspection system that may be used to increase the frequency of routine checks and, in turn, minimize equipment faults and accidents (explosions or fires caused by gas leaks) resulting from human mistakes or natural degradation. The proposed system can conduct facility inspections by detecting and reading a variety of gauges and issuing reports upon the detection of any anomalies. Furthermore, it is capable of responding to unforeseen anomalous events that pose potential harm to human response teams, such as directly manipulating valves in the presence of a gas leak.
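Once a needle angle has been detected on an analog gauge face, mapping it to a physical reading is a linear interpolation over the dial's calibrated sweep. The sketch below shows that final step plus a simple anomaly gate; all angles, ranges, and thresholds are hypothetical calibration values, not the paper's.

```python
def needle_to_value(needle_angle_deg, min_angle=-45.0, max_angle=225.0,
                    min_value=0.0, max_value=10.0):
    """Map a detected needle angle to a gauge reading by linear interpolation
    over the dial's sweep; a real system would calibrate these per gauge model."""
    frac = (needle_angle_deg - min_angle) / (max_angle - min_angle)
    return min_value + max(0.0, min(1.0, frac)) * (max_value - min_value)

def check_anomaly(value, low=1.0, high=8.0):
    """Flag readings outside a safe operating band for the inspection report."""
    return value < low or value > high

reading = needle_to_value(180.0)     # 8.33 on a 0-10 dial
print(reading, check_anomaly(reading))  # -> flagged as anomalous
```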
{"title":"Smart Robot Vision System for Plant Inspection for Disaster Prevention","authors":"Saifuddin Mahmud, Justin Dannemiller, R. Sourave, Xiangxu Lin, Jong-Hoon Kim","doi":"10.1109/IRC55401.2022.00079","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00079","url":null,"abstract":"Simulation of emergency response scenarios and routine inspections are imperative means in ensuring the proper functioning and safety of power plants, oil refineries, iron works, and industrial units. By utilizing autonomous robots, moreover, the reliability and frequency of such inspections can be improved. With the exception of facilities located in hazardous areas, such as off-shore factories, where dispatching response teams might be impossible, accidents caused by human mistakes can be prevented by autonomous inspections and diagnosis of facilities (pumps, tanks, boilers, and so on). One of the primary obstacles in robot-assisted inspection operations is detecting various types of gauges, reading them, and taking appropriate action. This study describes a unique robot vision-based plant inspection system that may be used to enhance the frequency of routine checks and, in turn, minimize equipment faults and accidents (explosions or fires caused by gas leaks) caused by human mistakes or natural degradation. This suggested system can conduct facility inspections by detecting and reading a variety of gauges and issuing reports upon the detection of any anomalies. Furthermore, this system is capable of responding to unforeseen anomalous events that pose potential harm to human response teams, such as the direct manipulation of valves in the presence of a gas leak.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129208348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patterns and tools in Robotic Systems Integration
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00065
N. Garcia, A. Wortmann
Software integration is a crucial activity in developing robotic applications. However, there is little literature specifically dedicated to the integration process in robotics. Research in this field today is mostly focused on the concrete implementation phase and on the development of new technologies, but few researchers treat integration as an application-agnostic process. To shed light on the state of robotics software integration, we conducted a survey among researchers and practitioners in the field. As part of this survey, we inquired how robotics software integration is currently performed. This study allowed us to identify patterns in the way this process is carried out.
{"title":"Patterns and tools in Robotic Systems Integration","authors":"N. Garcia, A. Wortmann","doi":"10.1109/IRC55401.2022.00065","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00065","url":null,"abstract":"Software integration activity is crucial to develop robotic applications. However, there is little specific literature dedicated to the integration process in robotics. The research in this field today is mostly focused on a concrete implementation phase and on the development of new technologies, but few researchers target their integration as an application-agnostic process. To shed light on the state of robotics software integration, we conducted a survey among researchers and practitioners in the field. As part of this survey we inquired how robotics software integration is currently performed. This study allowed us to find patterns in the way this process is carried out.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125375235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mechanical Exploration of the Design of Tactile Fingertips via Finite Element Analysis
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00014
Yihua Wang, Xiao-Yan Shi, Longhui Qin
Haptic perception enables robots to interact dexterously with their surrounding environments. Embedded tactile fingers are robust and reliable, and are widely used in robots. However, little research can be found on their perceptual mechanism in terms of mechanics, or on design guidance for their structure. In this paper, a numerical model is established to address the contact process between embedded-type tactile fingertips and a rough surface via finite element analysis. Experimental and simulation results are compared for three contact processes: drop-down, sliding, and lifting-up. From the mechanical perspective, the strain and stress within the fingertip are explored, based on which several design suggestions are given for the spatial arrangement of sensing elements. In addition to explaining the perception mechanism from a mechanical point of view, this paper also provides a reference for the general design of tactile fingertips.
{"title":"Mechanical Exploration of the Design of Tactile Fingertips via Finite Element Analysis","authors":"Yihua Wang, Xiao-Yan Shi, Longhui Qin","doi":"10.1109/IRC55401.2022.00014","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00014","url":null,"abstract":"Haptic perception facilitates robots to interact with surrounding environments dexterously. Embedded tactile fingers behave robust and reliable, and is wildly used in robots. However, seldom research can be found on its perceptual mechanism in mechanics and the design guidance of its structure. In this paper, a numerical model is established to address the contact process between embedded-type tactile fingertips and a roughness surface via finite element analysis. Experimental and simulation results are compared during three contact processes: Drop-down, sliding and lifting-up. From the mechanical perspective, the strain and stress within the fingertip are explored, based on which several design suggestions are given on the spatial arrangement of sensing elements. In addition to explaining the perception mechanism in mechanical view, this paper also provides a reference for the general design of tactile fingertips.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115078591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking Visual Landmarks of Opportunity as Rally Points for Unmanned Ground Vehicles
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00086
Martin Rebert, Gwenaél Schmitt, D. Monnin
In addition to remote control for piloting an Unmanned Ground Vehicle (UGV), we aim to add a new high-level command mode: reaching a visual rally point selected by the operator from the camera feed. We modify the recent TransT-M single-object tracker to track the rally point as the UGV moves, and we couple it with our visual odometry. We design a visibility test to discard false positives when the rally point disappears from the field of view. We test the tracking on image sequences taken from our platform and from the KITTI Vision Benchmark to demonstrate the efficiency and robustness of the proposed tracking architecture. The visibility test improves the chance of recovery after the landmark disappears and removes false positives. We show that the operator does not need to designate the landmark with high precision, as the tracker is tolerant to imprecision in the position and size of the landmark.
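A visibility test of this kind can be sketched as a geometric gate: project the landmark through the current visual-odometry pose, require it to lie in front of the camera and inside the image, and combine that with the tracker's confidence. The sketch below is an assumption about the general mechanism, not the paper's exact test; the threshold is illustrative.

```python
import numpy as np

def landmark_visible(point_w, R, t, K, image_size, appearance_score, score_min=0.5):
    """Gate a tracked rally point: transform the world-frame landmark into the
    camera frame (pose R, t from visual odometry), check the pinhole projection
    lands inside the image, then require sufficient tracker confidence."""
    p_c = R @ point_w + t              # world -> camera frame
    if p_c[2] <= 0:                    # behind the camera: not visible
        return False
    u, v, w = K @ p_c
    u, v = u / w, v / w                # pinhole projection to pixels
    in_image = 0 <= u < image_size[0] and 0 <= v < image_size[1]
    return in_image and appearance_score >= score_min

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
print(landmark_visible(np.array([0.0, 0.0, 5.0]), np.eye(3), np.zeros(3),
                       K, (640, 480), appearance_score=0.8))  # True
```

Reporting "not visible" rather than a low-confidence match is what lets the system recover cleanly once the landmark re-enters the field of view.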
{"title":"Tracking Visual Landmarks of Opportunity as Rally Points for Unmanned Ground Vehicles","authors":"Martin Rebert, Gwenaél Schmitt, D. Monnin","doi":"10.1109/IRC55401.2022.00086","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00086","url":null,"abstract":"In addition to the remote-control for piloting an Unmanned Ground Vehicle (UGV), we aim at adding a new high level command mode: reaching a visual rally point selected by the operator from the camera feed. We modify the recent TransT-M single object tracker to track the rally point as the UGV moves and we couple it with our visual odometry. We design a visibility test to discard false positives, when the rally point disappears from the field of view. We test the tracking on image sequences taken from our platform and the KITTI Vision Benchmark to demonstrate the efficiency and the robustness of the proposed tracking architecture. The visibility test improves the chance of recovery after the landmark disappears and removes false positives. We show that the operator does not need to designate the landmark with high precision as the tracker is tolerant to imprecision on the position and the size of the landmark.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115337239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A virtual suturing task: proof of concept for awareness in autonomous camera motion
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00073
Nicolò Pasini, A. Mariani, A. Munawar, E. Momi, P. Kazanzides
Robot-assisted Minimally Invasive Surgery (MIS) requires the surgeon either to alternately control the surgical instruments and the endoscopic camera, or to leave this burden to an assistant. This increases the cognitive load and interrupts the workflow of the operation. Camera motion automation has been examined in the literature to mitigate these issues, but it still lacks situation awareness, a key factor for enhancing camera navigation. This paper presents the development of phase-specific camera motion automation, implemented in Virtual Reality (VR) during a suturing task. A user study involving 10 users was carried out using the master console of the da Vinci Research Kit. Each subject performed the suturing task under both the proposed autonomous camera motion and traditional manual camera control. Results show that the proposed system can reduce operational time, decreasing both the user's mental and physical demand. Situational awareness is shown to be fundamental in exploiting the benefits introduced by camera motion automation.
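Phase-specific automation can be pictured as a small policy that selects a camera target per surgical phase. The sketch below is purely illustrative: the phase names and framing rules are invented, not the study's actual policy.

```python
# Hypothetical phase-specific camera policy: each phase of the suturing task
# selects a different 3D target for the autonomous endoscope to center on.
def camera_target(phase, left_tip, right_tip, needle_pos):
    midpoint = tuple((l + r) / 2 for l, r in zip(left_tip, right_tip))
    if phase == "approach":        # keep both instruments in view
        return midpoint
    if phase in ("bite", "pull"):  # frame the needle insertion site closely
        return needle_pos
    return midpoint                # default: frame both tools

print(camera_target("bite", (0.0, 0.0, 0.10), (0.02, 0.0, 0.10),
                    (0.01, 0.01, 0.12)))  # -> the needle site
```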
{"title":"A virtual suturing task: proof of concept for awareness in autonomous camera motion","authors":"Nicolò Pasini, A. Mariani, A. Munawar, E. Momi, P. Kazanzides","doi":"10.1109/IRC55401.2022.00073","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00073","url":null,"abstract":"Robot-assisted Minimally Invasive Surgery (MIS) requires the surgeon to alternatively control both the surgical instruments and the endoscopic camera, or to leave this burden to an assistant. This increases the cognitive load and interrupts the workflow of the operation. Camera motion automation has been examined in the literature to mitigate these aspects, but still lacks situation awareness, a key factor for camera navigation enhancement. This paper presents the development of a phase-specific camera motion automation, implemented in Virtual Reality (VR) during a suturing task. A user study involving 10 users was carried out using the master console of the da Vinci Research Kit. Each subject performed the suturing task undergoing both the proposed autonomous camera motion and the traditional manual camera control. Results show that the proposed system can reduce operational time, decreasing both the user's mental and physical demand. Situational awareness is shown to be fundamental in exploiting the benefits introduced by camera motion automation.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130013168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NIAR: Interaction-aware Maneuver Prediction using Graph Neural Networks and Recurrent Neural Networks for Autonomous Driving
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00072
Petrit Rama, N. Bajçinca
Human driving involves an inherent discrete layer in decision-making corresponding to specific maneuvers such as overtaking, lane changing, and lane keeping. It is sensible to inherit this layer at a higher level of a hierarchical assembly in machine driving too, in order to enable tractable solutions for the otherwise highly complex problem of autonomous driving. This has been the motivation for this work, which focuses on maneuver prediction for the ego-vehicle. Being inherently feedback-structured, especially in dense traffic scenarios, maneuver prediction requires modeling approaches that account for the interaction awareness of the involved traffic agents. As a direct consequence, the problem of maneuver prediction is aggravated by the uncertainty in the control policies of individual agents. The present paper tackles this difficulty by introducing three deep learning architectures for interaction-aware tactical maneuver prediction of the ego-vehicle, based on the motion dynamics of surrounding traffic agents. The traffic scenario is modeled as an interaction graph, exploiting spatial features between traffic agents via Graph Neural Networks (GNNs), while dynamic motion patterns of traffic agents are extracted via Recurrent Neural Networks (RNNs). These architectures have been trained and evaluated on the BLVD dataset. To increase model robustness and improve the learning process, the dataset is extended using data augmentation, oversampling, and undersampling techniques. Finally, we validate the proposed learning architectures and compare the resulting maneuver prediction models in diverse driving scenarios with varying numbers of surrounding traffic agents.
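A minimal PyTorch sketch of the GNN-plus-RNN combination the abstract describes: a GRU encodes each agent's motion history, one round of mean message passing over the interaction graph mixes neighbor states, and a linear head scores ego-vehicle maneuvers. This is a generic reduction of the idea, not any of the paper's three NIAR architectures; feature and maneuver dimensions are placeholders.

```python
import torch
import torch.nn as nn

class ManeuverPredictor(nn.Module):
    """Interaction-aware maneuver prediction sketch: RNN per agent,
    one graph message-passing round, classifier over ego maneuvers."""
    def __init__(self, feat_dim=4, hidden=64, num_maneuvers=5):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.message = nn.Linear(2 * hidden, hidden)
        self.head = nn.Linear(hidden, num_maneuvers)

    def forward(self, tracks, adj):
        # tracks: (num_agents, T, feat_dim) motion histories
        # adj:    (num_agents, num_agents) 0/1 interaction graph
        _, h = self.encoder(tracks)            # final GRU state per agent
        h = h.squeeze(0)                       # (num_agents, hidden)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        neigh = adj @ h / deg                  # mean over graph neighbors
        h = torch.relu(self.message(torch.cat([h, neigh], dim=-1)))
        return self.head(h[0])                 # logits for agent 0 (the ego-vehicle)

logits = ManeuverPredictor()(torch.randn(3, 10, 4), torch.ones(3, 3))
```

Treating agent 0 as the ego-vehicle and averaging neighbor states is the simplest interaction-aware aggregation; attention-weighted variants are a natural refinement.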
{"title":"NIAR: Interaction-aware Maneuver Prediction using Graph Neural Networks and Recurrent Neural Networks for Autonomous Driving","authors":"Petrit Rama, N. Bajçinca","doi":"10.1109/IRC55401.2022.00072","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00072","url":null,"abstract":"Human driving involves an inherent discrete layer in decision-making corresponding to specific maneuvers such as overtaking, lane changing, lane keeping, etc. This is sensible to inherit at a higher layer of a hierarchical assembly in machine driving too, in order to enable tractable solutions for the otherwise highly complex problem of autonomous driving. This has been the motivation for this work that focuses on maneuver prediction for the ego-vehicle. Being inherently feedback structured, especially in dense traffic scenarios, maneuver prediction requires modeling approaches that account for the interaction awareness of the involved traffic agents. As a direct consequence, the problem of maneuver prediction is aggravated by the uncertainty in control policies of individual agents. The present paper tackles this difficulty by introducing three deep learning architectures for interaction-aware tactical maneuver prediction of the ego-vehicle, based on motion dynamics of surrounding traffic agents. Thus, the traffic scenario is modeled as an interaction graph, exploiting spatial features between traffic agents via Graph Neural Networks (GNNs). Dynamic motion patterns of traffic agents are extracted via Recurrent Neural Networks (RNNs). These architectures have been trained and evaluated using the BLVD dataset. To increase the model robustness and improve the learning process, the dataset is extended by making use of data augmentation, data oversampling, and data undersampling techniques. Finally, we successfully validate the proposed learning architectures and compare the trained models for maneuver prediction of the ego-vehicle obtained thereof in diverse driving scenarios with various numbers of surrounding traffic agents.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124616982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}