Pub Date: 2024-03-25. DOI: 10.1109/TIV.2024.3380074
Sifan Wu;Daxin Tian;Xuting Duan;Jianshan Zhou;Dezong Zhao;Dongpu Cao
Reinforcement learning methods have shown the ability to solve challenging scenarios in unmanned systems. However, solving long-horizon decision-making sequences in highly complex environments, such as continuous lane changing and overtaking in dense traffic, remains challenging. Although existing unmanned vehicle systems have made considerable progress, minimizing driving risk remains the primary consideration. Risk-aware reinforcement learning is crucial for addressing potential driving risks. However, existing reinforcement learning algorithms applied to unmanned vehicles do not account for the variability of the risks posed by different risk sources. Based on this analysis, this study proposes a risk-aware reinforcement learning method with driving-task decomposition to minimize risk from multiple sources. Specifically, risk potential fields are constructed and combined with reinforcement learning to decompose the driving task. The proposed framework uses separate risk-branching networks to learn the decomposed tasks. Furthermore, a low-risk episodic sampling augmentation method for the different risk branches is proposed to address the shortage of high-quality samples and further improve sampling efficiency. In addition, an intervention training strategy is employed in which the artificial potential field (APF) is combined with reinforcement learning to speed up training and further ensure safety. Finally, the complete intervention risk-classification twin delayed deep deterministic policy gradient with task decomposition (IDRCTD3-TD) algorithm is proposed. Two scenarios of different difficulty are designed to validate the superiority of this framework. Results show that the proposed framework achieves remarkable performance improvements.
Title: "Continuous Decision-Making in Lane Changing and Overtaking Maneuvers for Unmanned Vehicles: A Risk-Aware Reinforcement Learning Approach With Task Decomposition". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 4, pp. 4657-4674.
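As a rough illustration of how a risk potential field might feed into reward shaping for the driving task, the sketch below assigns each surrounding vehicle a Gaussian-shaped risk contribution and subtracts the aggregate risk from the task reward. The Gaussian form, the `sigma_long`/`sigma_lat` parameters, and the weight `w_risk` are illustrative assumptions; the abstract does not specify the field's actual form.

```python
import numpy as np

def risk_potential(ego_pos, obstacles, sigma_long=8.0, sigma_lat=1.5):
    """Hypothetical Gaussian risk field: each surrounding vehicle contributes
    a potential that decays with longitudinal and lateral distance."""
    risk = 0.0
    for ox, oy in obstacles:
        dx, dy = ego_pos[0] - ox, ego_pos[1] - oy
        risk += np.exp(-(dx**2 / (2 * sigma_long**2) + dy**2 / (2 * sigma_lat**2)))
    return risk

def shaped_reward(base_reward, ego_pos, obstacles, w_risk=0.5):
    # Penalize the task reward by the aggregate risk at the ego position.
    return base_reward - w_risk * risk_potential(ego_pos, obstacles)
```

In a risk-branching setup, each branch could be shaped with a field built from only its own risk sources (e.g., one branch for vehicles, one for road boundaries).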
Pub Date: 2024-03-21. DOI: 10.1109/TIV.2024.3380244
Lili Fan;Junhao Wang;Yuanmeng Chang;Yuke Li;Yutong Wang;Dongpu Cao
The rapid development of autonomous driving technology has driven continuous innovation in perception systems, with 4D millimeter-wave (mmWave) radar being one of the key sensing devices. Leveraging its all-weather operational characteristics and robust perception capabilities in challenging environments, 4D mmWave radar plays a crucial role in achieving highly automated driving. This review systematically summarizes the latest advancements and key applications of 4D mmWave radar in the field of autonomous driving. To begin with, we introduce the fundamental principles and technical features of 4D mmWave radar, delving into its comprehensive perception capabilities across distance, speed, angle, and time dimensions. Subsequently, we provide a detailed analysis of the performance advantages of 4D mmWave radar compared to other sensors in complex environments. We then discuss the latest developments in target detection and tracking using 4D mmWave radar, along with existing datasets in this domain. Finally, we explore the current technological challenges and future directions. This review offers researchers and engineers a comprehensive understanding of the cutting-edge technology and future development directions of 4D mmWave radar in the context of autonomous driving perception.
Title: "4D mmWave Radar for Autonomous Driving Perception: A Comprehensive Survey". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 4, pp. 4606-4620.
Pub Date: 2024-03-21. DOI: 10.1109/TIV.2024.3380083
Weixin Ma;Huan Yin;Lei Yao;Yuxiang Sun;Zhongqing Su
Place recognition is a critical capability for autonomous vehicles. It matches current sensor data against a pre-built database to provide coarse localization results. However, the effectiveness of long-term place recognition may be degraded by environmental changes, such as seasonal or weather variations. To gain a deeper understanding of this issue, we conduct a comprehensive evaluation of several state-of-the-art range sensing-based (i.e., LiDAR and radar) place recognition methods on the Boreas dataset, which encapsulates long-term localization scenarios with stark seasonal variations and adverse weather conditions. In addition, we design a novel metric to evaluate the influence of matching thresholds on place recognition performance for long-term localization. Our results and findings provide fresh insights to the community and potential directions for future study.
Title: "Evaluation of Range Sensing-Based Place Recognition for Long-Term Urban Localization". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 5, pp. 4905-4916.
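Threshold-dependent place recognition evaluation of the kind described above can be sketched as follows: a query is accepted when the descriptor distance to its best database match falls below a threshold, and recall is computed over the queries that truly have a match. This is a generic recall-at-threshold sketch, not the paper's actual metric; the function name and acceptance rule are assumptions.

```python
import numpy as np

def recall_at_threshold(dists, is_true_match, tau):
    """dists: best-match descriptor distance per query.
    is_true_match: whether that best match is geographically correct.
    A query is accepted when its distance is below the threshold tau."""
    accepted = dists < tau
    tp = np.sum(accepted & is_true_match)        # correct and accepted
    return tp / max(np.sum(is_true_match), 1)    # recall over true matches
```

Sweeping `tau` over a range of values yields a recall curve, which is one natural way to study how sensitive a method is to the matching threshold.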
Pub Date: 2024-03-20. DOI: 10.1109/TIV.2024.3379928
Amin Foshati;Alireza Ejlali
Modern vehicles contain hundreds of sensors, many of them safety-critical, meaning that a malfunction can have catastrophic consequences. The conventional approach to fault tolerance for these sensors is to use redundant physical sensors, which inevitably increases cost and overhead. To address this challenge, we propose a new perspective on sensor redundancy, which we refer to as cyber-approximate sensors. The idea is that, instead of relying solely on physical redundancy, we leverage existing cyber facilities to create redundancy. Furthermore, recognizing that redundant sensors need not be as accurate as the primary ones, we exploit an approximation-based model that incurs low overhead. To this end, our sensors exploit the inherent dependencies among vehicle sensors in two steps: i) identifying the relevant dependencies and ii) designing a regression model. As a case study, we applied the cyber redundancy approach to a fuel control system and conducted fault injection experiments on a Hardware-in-the-Loop platform to analyze fault tolerance. Since the performability metric, unlike reliability, can capture performance degradation, we used performability to evaluate fault tolerance. Reliability is binary: a system is either correct or failed. Vehicle sensors, however, can exhibit varying degrees of functionality between perfect operation and complete failure; they may experience partial degradation that is still acceptable. Our experiments show that the proposed cyber redundancy approach not only reduces high-cost physical overhead (by roughly 50%) but also enhances performability (by approximately 7%).
Title: "Enhancing Sensor Fault Tolerance in Automotive Systems With Cost-Effective Cyber Redundancy". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 4, pp. 4794-4803.
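A cyber-approximate redundant sensor built from a regression over correlated sensors might look like the following minimal sketch. The `CyberSensor` class, its least-squares form, and the fallback usage are illustrative assumptions, not the paper's implementation.

```python
import math
import numpy as np

class CyberSensor:
    """Sketch of a cyber-approximate sensor: a least-squares model predicts
    the primary sensor's value from correlated sensor readings."""

    def fit(self, X, y):
        # Append a bias column and solve the normal equations via lstsq.
        A = np.c_[X, np.ones(len(X))]
        self.w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return self

    def predict(self, x):
        return float(np.r_[x, 1.0] @ self.w)

def read_with_fallback(primary_value, cyber, correlated):
    # Use the approximate cyber reading only when the primary sensor faults.
    if primary_value is None or math.isnan(primary_value):
        return cyber.predict(correlated)
    return primary_value
```

The approximate reading is less accurate than a physical spare, which is exactly the trade-off the performability metric is meant to capture: degraded but acceptable operation instead of outright failure.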
Pub Date: 2024-03-18. DOI: 10.1109/TIV.2024.3377163
Letian Gao;Xin Xia;Zhaoliang Zheng;Hao Xiang;Zonglin Meng;Xu Han;Zewei Zhou;Yi He;Yutong Wang;Zhaojian Li;Yubiao Zhang;Jiaqi Ma
In the era of future mobility within Transportation 5.0, autonomy and cooperation across all road users and smart infrastructure stand as the key features to enhance transportation safety, efficiency, and sustainability, supported by cooperative perception, decision-making and planning, and control. An accurate and robust localization system plays a vital role in enabling these modules for future mobility and is constrained by environmental uncertainties and sensing limitations. To achieve precise and resilient localization in this new era, this letter introduces emerging technologies, including edge computing, hybrid data-driven and physical model approaches, foundation models, and parallel intelligence, that are beneficial for next-generation localization systems. On top of these key technologies, by integrating real-world testing and digital twin technology, we further put forward a Decentralized Autonomous Service (DAS)-based cooperative localization framework for future mobility systems to enhance the resilience, robustness, and safety of transportation systems.
Title: "Cooperative Localization in Transportation 5.0". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 3, pp. 4259-4264.
Pub Date: 2024-03-18. DOI: 10.1109/TIV.2024.3378579
Vitor Furlan de Oliveira;Guilherme Matiolli;Cláudio José Bordin Júnior;Ricardo Gaspar;Romulo Gonçalves Lins
Digital Twins (DTs) and Cyber-Physical Systems (CPSs) have the potential to play a crucial role in creating intelligent, connected, and efficient commercial vehicles (buses and trucks). A systematic literature review was conducted to analyze the current state of knowledge in this area. The review identifies successful applications of these technologies, but it also reveals the lack of a clear consensus on the definitions of DT and CPS, which creates conceptual challenges. Furthermore, the analysis shows that most studies consider only a single perspective on the physical assets in DTs and CPSs, indicating the need to explore multiple dimensions of these assets. This study also emphasizes the potential of Industry 4.0 (I4.0) and its standards as possible solutions to the identified gaps. The pursuit of integration and interoperability is highlighted as a promising direction for advancing the representation and effective use of physical assets. This work provides a comprehensive overview of the opportunities and challenges related to DTs and CPSs in commercial vehicles, highlighting the continued need for research and development in this evolving field.
Title: "Digital Twin and Cyber-Physical System Integration in Commercial Vehicles: Latest Concepts, Challenges and Opportunities". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 4, pp. 4804-4819.
Pub Date: 2024-03-18. DOI: 10.1109/TIV.2024.3376074
Haicheng Liao;Yongkang Li;Zhenning Li;Chengyue Wang;Zhiyong Cui;Shengbo Eben Li;Chengzhong Xu
In autonomous vehicle (AV) technology, the ability to accurately predict the movements of surrounding vehicles is paramount for ensuring safety and operational efficiency. Incorporating human decision-making insights enables AVs to more effectively anticipate the potential actions of other vehicles, significantly improving prediction accuracy and responsiveness in dynamic environments. This paper introduces the Human-Like Trajectory Prediction (HLTP) model, which adopts a teacher-student knowledge distillation framework inspired by human cognitive processes. The "teacher" model, equipped with an adaptive visual sector, mimics the visual processing of the human brain, particularly the functions of the occipital and temporal lobes. The "student" model focuses on real-time interaction and decision-making, drawing parallels to prefrontal and parietal cortex functions. This approach allows for dynamic adaptation to changing driving scenarios, capturing essential perceptual cues for accurate prediction. Evaluated on the Macao Connected and Autonomous Driving (MoCAD) dataset, along with the NGSIM and HighD benchmarks, HLTP demonstrates superior performance compared to existing models, particularly in challenging environments with incomplete data.
Title: "A Cognitive-Based Trajectory Prediction Approach for Autonomous Driving". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 4, pp. 4632-4643.
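Teacher-student distillation objectives of the kind described above are commonly a weighted sum of an imitation term (match the teacher's output) and a task term (match the ground truth). The function below is a generic sketch under that common formulation; the MSE terms and the weight `alpha` are assumptions, not the HLTP paper's actual loss.

```python
import numpy as np

def distillation_loss(student_traj, teacher_traj, gt_traj, alpha=0.5):
    """Generic trajectory-distillation loss sketch: blend an imitation
    term against the teacher with a task term against the ground truth."""
    imitate = np.mean((student_traj - teacher_traj) ** 2)  # mimic teacher
    task = np.mean((student_traj - gt_traj) ** 2)          # fit ground truth
    return alpha * imitate + (1 - alpha) * task
```

At inference time only the lightweight student runs, which is the usual motivation for distillation in real-time prediction settings.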
Pub Date: 2024-03-18. DOI: 10.1109/TIV.2024.3376534
Zhen Feng;Yanning Guo;Yuxiang Sun
Segmentation of road negative obstacles (i.e., potholes and cracks) is important to the safety of autonomous driving. Although existing RGB-D fusion networks could achieve acceptable performance, most of them only conduct binary segmentation for negative obstacles, which does not distinguish potholes and cracks. Moreover, their performance is susceptible to depth noises, in which case the fluctuations of depth data caused by the noises may make the networks mistakenly treat the area as a negative obstacle. To provide a solution to the above issues, we design a novel RGB-D semantic segmentation network with dual semantic-feature complementary fusion for road negative obstacle segmentation. We also re-label an RGB-D dataset for this task, which distinguishes road potholes and cracks as two different classes. Experimental results show that our network achieves state-of-the-art performance compared to existing well-known networks.
Title: "Segmentation of Road Negative Obstacles Based on Dual Semantic-Feature Complementary Fusion for Autonomous Driving". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 4, pp. 4687-4697.
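One generic way to realize complementary RGB-D feature fusion is an element-wise gate that weights the two modalities, so that noisy depth values contribute less where RGB evidence dominates. The sigmoid gate below is an illustrative assumption and not the paper's network design.

```python
import numpy as np

def complementary_fusion(rgb_feat, depth_feat):
    """Sketch of gated complementary fusion: a sigmoid gate computed from
    the difference of the two feature maps selects, per element, how much
    of each modality to keep in the fused representation."""
    gate = 1.0 / (1.0 + np.exp(-(rgb_feat - depth_feat)))
    return gate * rgb_feat + (1.0 - gate) * depth_feat
```

In a trained network the gate would be produced by learned layers rather than a fixed difference, but the per-element weighting idea is the same.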
Pub Date: 2024-03-14. DOI: 10.1109/TIV.2024.3375273
Long Chen;Yuchen Li;Luxi Li;Shuangying Qi;Jian Zhou;Youchen Tang;Jianjian Yang;Jingmin Xin
Autonomous driving technology has achieved significant breakthroughs in open scenarios, enabling the deployment of excellent positioning, detection, and navigation algorithms on passenger vehicles. However, limited research attention has been devoted to autonomous driving for specialized vehicles in non-open scenarios. This manuscript introduces a perception system designed for heavy-duty mining transportation trucks operating in open-pit mines, which are typical non-open scenarios. The system comprises four independent components: high-precision fusion positioning, multi-task 2D detection, a 9-degrees-of-freedom (9-DoF) 3D detection head, and autonomous navigation. Experimental verification demonstrates the effectiveness of these methods in addressing the challenges posed by mining environments, ultimately improving truck safety and efficiency. Through this comprehensive examination of positioning, detection, and navigation, the work addresses the challenges mining trucks encounter during operations. Its significance lies in enhancing automation levels in mining scenarios.
Title: "High-Precision Positioning, Perception and Safe Navigation for Automated Heavy-Duty Mining Trucks". IEEE Transactions on Intelligent Vehicles, vol. 9, no. 4, pp. 4644-4656.
Pub Date: 2024-03-08. DOI: 10.1109/TIV.2024.3373696
Yang Liu;Zhihao Sun;Xueyi Wang;Zheng Fan;Xiangyang Wang;Lele Zhang;Hailing Fu;Fang Deng
Long-range target geolocalization in outdoor complex environments has been a long-standing challenge in intelligent transportation and autonomous vehicles, with broad interest in vehicle detection, monitoring, and security. However, traditional monocular or binocular geolocalization methods are typically implemented via depth estimation or parallax computation; they suffer from large errors when targets are far away and are thus hard to apply directly in outdoor environments. In this paper, we propose a visual servo-based global geolocalization system, namely VSG, which takes the target position in the binocular camera images as the control signal and automatically solves for the global position from the gimbal rotation angles. This system solves the problem of long-range static and dynamic target geolocalization (ranging from 220 m to 1200 m), and localizes the farthest target at 1223.8 m with only 3.5%
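Solving a global position from gimbal angles can be sketched, under a flat-ground assumption, as intersecting the camera's pointing ray with the ground plane once the visual servo has centered the target. The geometry below is illustrative only and is not the paper's actual solver; the parameter names are assumptions.

```python
import math

def geolocate(cam_xy, cam_height, pan_rad, tilt_rad):
    """Flat-ground geolocalization sketch: with the gimbal centered on the
    target, ground range follows from camera height and depression angle,
    and the pan angle gives the bearing in the local frame."""
    rng = cam_height / math.tan(tilt_rad)       # ground range to target
    x = cam_xy[0] + rng * math.cos(pan_rad)     # local east/north offsets
    y = cam_xy[1] + rng * math.sin(pan_rad)
    return x, y
```

At kilometer-scale ranges a small tilt error translates into a large range error, which illustrates why precise gimbal control matters for this kind of system.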