Lucas Agostinho, Diogo Pereira, Antoine Hiolle, Andry Pinto
DOI: 10.1016/j.robot.2024.104700
Journal: Robotics and Autonomous Systems (Q1, Automation & Control Systems; Impact Factor 4.3)
Published: 2024-04-17 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0921889024000836
TEFu-Net: A time-aware late fusion architecture for robust multi-modal ego-motion estimation
Ego-motion estimation plays a critical role in autonomous driving systems by providing accurate and timely information about the vehicle’s position and orientation. To achieve high levels of accuracy and robustness, it is essential to leverage a range of sensor modalities to account for highly dynamic and diverse scenes, and consequent sensor limitations.
In this work, we introduce TEFu-Net, a Deep-Learning-based late fusion architecture that combines multiple ego-motion estimates from diverse data modalities, including stereo RGB, LiDAR point clouds and GNSS/IMU measurements. Our approach is non-parametric and scalable, making it adaptable to different sensor set configurations. By leveraging a Long Short-Term Memory (LSTM), TEFu-Net produces reliable and robust spatiotemporal ego-motion estimates. This capability allows it to filter out erroneous input measurements, ensuring the accuracy of the vehicle's motion calculations over time. Extensive experiments show an average accuracy increase of 63% over TEFu-Net's input estimators and results on par with the state of the art in real-world driving scenarios. We also demonstrate that our solution can achieve accurate estimates under sensor or input failure. Therefore, TEFu-Net enhances the accuracy and robustness of ego-motion estimation in real-world driving scenarios, particularly in challenging conditions such as cluttered environments, tunnels, dense vegetation, and unstructured scenes. As a result of these enhancements, it bolsters the reliability of autonomous driving functions.
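The core idea described above (an LSTM that late-fuses per-modality ego-motion estimates over time) can be illustrated with a minimal sketch. This is a hypothetical reconstruction of the general technique, not the authors' TEFu-Net implementation; the module name, layer sizes, and the assumption of 6-DoF pose inputs are all illustrative.

```python
# Minimal sketch of LSTM-based late fusion for ego-motion estimates.
# Assumption (not from the paper): each sensor pipeline emits a 6-DoF
# relative-pose estimate per timestep, and the fused output is also 6-DoF.
import torch
import torch.nn as nn


class LateFusionLSTM(nn.Module):
    def __init__(self, num_modalities: int = 3, dof: int = 6, hidden: int = 64):
        super().__init__()
        # The per-modality estimates are concatenated into one input vector
        # per timestep, so the LSTM sees all modalities jointly over time.
        self.lstm = nn.LSTM(num_modalities * dof, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dof)  # regress the fused 6-DoF pose

    def forward(self, estimates: torch.Tensor) -> torch.Tensor:
        # estimates: (batch, time, num_modalities, dof)
        b, t, m, d = estimates.shape
        out, _ = self.lstm(estimates.reshape(b, t, m * d))
        return self.head(out)  # (batch, time, dof)


# Three modality estimates (e.g. stereo RGB, LiDAR, GNSS/IMU) over 10 steps.
x = torch.randn(2, 10, 3, 6)
fused = LateFusionLSTM()(x)
print(fused.shape)  # torch.Size([2, 10, 6])
```

Because the LSTM conditions on the temporal history, a transiently erroneous estimate from one modality (e.g. a LiDAR dropout in a tunnel) can be down-weighted at that timestep, which is the filtering behavior the abstract attributes to the time-aware design.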
Journal scope:
Robotics and Autonomous Systems carries articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems. The journal also carries articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.