
Latest Publications in IEEE Transactions on Intelligent Vehicles

Enhancement Technology for Perception in Smart Mining Vehicles: 4D Millimeter-Wave Radar and Multi-Sensor Fusion
IF 14 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-01 DOI: 10.1109/TIV.2024.3427718
Jianjian Yang;Tianmu Gui;Yuyuan Zhang;Shirong Ge;Qiankun Huang;Guanghui Zhao
Advancements in 4D mmWave radar with multi-sensor fusion have significantly enhanced the robustness of autonomous driving systems. In the context of “Mining 5.0” based on parallel intelligence theory, autonomous haulage needs to achieve full autonomy in open-pit mines. Current systems use 3D mmWave radar, LiDAR, and cameras but have made limited progress toward automation. This perspective discusses the limitations of these systems and how integrating 4D mmWave radar can improve mining autonomy. It results from discussions at several recent Distributed/Decentralized Hybrid Workshops on Autonomous Mining (DHW-AM) and aims to enhance the intelligence of future mining operations.
Citations: 0
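For readers unfamiliar with what the fourth dimension adds, the sketch below illustrates, in generic terms, how a 4D mmWave radar return (range, azimuth, elevation, Doppler velocity) can be converted to a Cartesian point and associated with camera detections in one simple fusion step. It is a minimal illustration only, not the method of the paper above; the intrinsics `K`, the extrinsics `T_cam_radar`, and all names are assumptions made up for this example.

```python
# Minimal, illustrative sketch (not taken from the paper above) of fusing 4D
# mmWave radar returns with 2D camera detections.
import numpy as np

def radar_to_cartesian(r, az, el, v_r):
    """Convert one 4D radar return to (x, y, z, v_r) in the radar frame."""
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)          # the elevation channel is what 3D radar lacks
    return np.array([x, y, z, v_r])

def project_to_image(p_radar, T_cam_radar, K):
    """Project a radar point into the image plane of a pinhole camera."""
    p_h = np.append(p_radar[:3], 1.0)       # homogeneous coordinates
    p_cam = (T_cam_radar @ p_h)[:3]         # radar frame -> camera frame
    u, v = (K @ p_cam)[:2] / p_cam[2]       # pinhole projection
    return u, v, p_cam[2]

def attach_doppler_to_boxes(radar_points, boxes, T_cam_radar, K):
    """Assign each return's radial velocity to the 2D detection box it falls into."""
    fused = []
    for (x1, y1, x2, y2, label) in boxes:
        speeds = []
        for p in radar_points:
            u, v, depth = project_to_image(p, T_cam_radar, K)
            if depth > 0 and x1 <= u <= x2 and y1 <= v <= y2:
                speeds.append(p[3])
        # camera box enriched with a radar radial-velocity estimate (median over hits)
        fused.append((label, float(np.median(speeds)) if speeds else None))
    return fused
```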
IEEE Transactions on Intelligent Vehicles Publication Information
IF 14 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-01 DOI: 10.1109/TIV.2024.3430209
{"title":"IEEE Transactions on Intelligent Vehicles Publication Information","authors":"","doi":"10.1109/TIV.2024.3430209","DOIUrl":"https://doi.org/10.1109/TIV.2024.3430209","url":null,"abstract":"","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 6","pages":"C2-C2"},"PeriodicalIF":14.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10631778","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TechRxiv: Share Your Preprint Research with the World!
IF 14 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-01 DOI: 10.1109/TIV.2024.3437221
{"title":"TechRxiv: Share Your Preprint Research with the World!","authors":"","doi":"10.1109/TIV.2024.3437221","DOIUrl":"https://doi.org/10.1109/TIV.2024.3437221","url":null,"abstract":"","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 6","pages":"5118-5118"},"PeriodicalIF":14.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10631816","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Transactions on Intelligent Vehicles Information
IF 14 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-01 DOI: 10.1109/TIV.2024.3435289
{"title":"The Transactions on Intelligent Vehicles Information","authors":"","doi":"10.1109/TIV.2024.3435289","DOIUrl":"https://doi.org/10.1109/TIV.2024.3435289","url":null,"abstract":"","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 6","pages":"C4-C4"},"PeriodicalIF":14.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10631780","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Imaginative Intelligence for Intelligent Vehicles: Sora Inspired New Directions for New Mobility and Vehicle Intelligence
IF 8.2 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-04-29 DOI: 10.1109/TIV.2024.3393638
Fei-Yue Wang
The current issue includes 3 perspectives, 2 letters, and 17 regular papers. The perspectives explore critical issues within the field of IVs and propose prospective research directions based on the evolution of foundation models. After Scanning the Issue, I would like to share insights on how Sora-based imaginative intelligence could propel the future development of IVs.
Citations: 0
Sora for Smart Mining: Towards Sustainability With Imaginative Intelligence and Parallel Intelligence
IF 8.2 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-04-29 DOI: 10.1109/TIV.2024.3394520
Yuting Xie;Cong Wang;Kunhua Liu;Zhe Xuanyuan;Yuhang He;Hui Cheng;Andreas Nüchter;Lingxi Li;Rouxing Huai;Shuming Tang;Siji Ma;Long Chen
This letter summarizes discussions from IEEE TIV's Autonomous Mining Workshop, emphasizing the potential of video generation models in advancing smart mining.
Citations: 0
TUMTraf Event: Calibration and Fusion Resulting in a Dataset for Roadside Event-Based and RGB Cameras
IF 14 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-04-25 DOI: 10.1109/TIV.2024.3393749
Christian Creß;Walter Zimmer;Nils Purschke;Bach Ngoc Doan;Sven Kirchner;Venkatnarayanan Lakshminarasimhan;Leah Strand;Alois C. Knoll
Event-based cameras are well suited to Intelligent Transportation Systems (ITS). They provide very high temporal resolution and dynamic range, which can eliminate motion blur and improve detection performance at night. However, event-based images lack color and texture compared to images from a conventional RGB camera. Given this, data fusion between event-based and conventional cameras can combine the strengths of both modalities. For this purpose, extrinsic calibration is necessary. To the best of our knowledge, no targetless calibration between event-based and RGB cameras can handle multiple moving objects, nor does data fusion optimized for the domain of roadside ITS exist. Furthermore, synchronized event-based and RGB camera datasets captured from a roadside perspective have not yet been published. To fill these research gaps, based on our previous work, we extended our targetless calibration approach with clustering methods to handle multiple moving objects. Furthermore, we developed Early Fusion, Simple Late Fusion, and novel Spatiotemporal Late Fusion methods. Lastly, we published the TUMTraf Event Dataset, which contains more than 4,111 synchronized event-based and RGB images with 50,496 labeled 2D boxes. During our extensive experiments, we verified the effectiveness of our calibration method with multiple moving objects. Furthermore, compared to a single RGB camera, our event-based sensor fusion methods increased detection performance by up to +9% mAP during the day and up to +13% mAP during the challenging night.
Citations: 0
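As a rough illustration of the "Simple Late Fusion" idea named in the abstract, the sketch below merges per-camera detections by IoU matching with confidence-weighted box averaging. It is a generic late-fusion baseline under assumed thresholds and weighting, not the authors' implementation, and it presumes both detection sets have already been projected into a common image plane via the extrinsic calibration the paper discusses.

```python
# Generic late-fusion sketch for RGB and event-camera detections; thresholds
# and weighting are assumptions, not the paper's method.
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def simple_late_fusion(rgb_dets, event_dets, iou_thr=0.5):
    """Merge two lists of (box, score) detections from the two cameras."""
    fused, used = [], set()
    for box_r, score_r in rgb_dets:
        best_j, best_iou = None, iou_thr
        for j, (box_e, _) in enumerate(event_dets):
            if j not in used and iou(box_r, box_e) >= best_iou:
                best_j, best_iou = j, iou(box_r, box_e)
        if best_j is None:
            fused.append((box_r, score_r))                    # RGB-only detection
        else:
            box_e, score_e = event_dets[best_j]
            used.add(best_j)
            w_r, w_e = score_r, score_e                       # confidence-weighted average
            box = tuple((w_r * np.array(box_r) + w_e * np.array(box_e)) / (w_r + w_e))
            fused.append((box, max(score_r, score_e)))        # keep the higher confidence
    # detections seen only by the event camera (e.g., at night) are kept as-is
    fused += [d for j, d in enumerate(event_dets) if j not in used]
    return fused
```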
Perception and Planning of Intelligent Vehicles Based on BEV in Extreme Off-Road Scenarios
IF 8.2 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-04-23 DOI: 10.1109/TIV.2024.3392753
Jingjing Fan;Lili Fan;Qinghua Ni;Junhao Wang;Yi Liu;Ren Li;Yutong Wang;Sanjin Wang
In extreme off-road scenarios, autonomous driving technology holds strategic significance for enhancing emergency rescue capabilities, reducing labor intensity, and mitigating safety risks. Adverse conditions, complex terrain, unstable satellite signals, and the absence of roads pose serious safety challenges for autonomous driving. This perspective first delves into a Bird's Eye View (BEV)-based perception-planning framework, aiming to enhance the adaptability of intelligent vehicles to their environment. It then discusses key issues such as Cyber-Physical-Social Systems (CPSS), foundation models, and technologies like Sora for off-road scenario generation, vehicle cognitive enhancement, and autonomous decision-making. Ultimately, the discussed framework is poised to endow intelligent vehicles with the capability to perform challenging tasks in complex off-road scenarios, realizing a more efficient, safer, and sustainable transportation system, thereby providing better support for the low-altitude economy.
Citations: 0
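To make the BEV notion concrete, the following sketch rasterises 3D points into a bird's-eye-view grid and derives a coarse traversability cost from per-cell height spread, a common building block for off-road planning. Grid extents, resolution, and the step-height threshold are illustrative assumptions, not values from the perspective above.

```python
# Illustrative BEV rasterisation and traversability costing; all parameters
# are assumptions for this example.
import numpy as np

def points_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), res=0.5):
    """Accumulate 3D points into per-cell minimum and maximum height maps."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    zmin = np.full((nx, ny), np.inf)
    zmax = np.full((nx, ny), -np.inf)
    for x, y, z in points:
        i = int((x - x_range[0]) / res)
        j = int((y - y_range[0]) / res)
        if 0 <= i < nx and 0 <= j < ny:
            zmin[i, j] = min(zmin[i, j], z)
            zmax[i, j] = max(zmax[i, j], z)
    return zmin, zmax

def traversability_cost(zmin, zmax, max_step=0.3):
    """Cells whose height spread exceeds max_step become hard obstacles."""
    seen = np.isfinite(zmax)
    spread = np.where(seen, zmax - zmin, 0.0)   # unobserved cells treated as flat here
    cost = np.clip(spread / max_step, 0.0, 1.0)
    cost[spread > max_step] = np.inf            # hard obstacle
    return cost
```

A grid-based planner (e.g., A*) can then search this cost map for a traversable path, which is the "planning" half of the perception-planning framework the perspective describes.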
RoadFormer: Duplex Transformer for RGB-Normal Semantic Road Scene Parsing
IF 14 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-04-17 DOI: 10.1109/TIV.2024.3388726
Jiahang Li;Yikang Zhang;Peng Yun;Guangliang Zhou;Qijun Chen;Rui Fan
The recent advancements in deep convolutional neural networks have shown significant promise in the domain of road scene parsing. Nevertheless, the existing works focus primarily on freespace detection, with little attention given to hazardous road defects that could compromise both driving safety and comfort. In this article, we introduce RoadFormer, a novel Transformer-based data-fusion network developed for road scene parsing. RoadFormer utilizes a duplex encoder architecture to extract heterogeneous features from both RGB images and surface normal information. The encoded features are subsequently fed into a novel heterogeneous feature synergy block for effective feature fusion and recalibration. The pixel decoder then learns multi-scale long-range dependencies from the fused and recalibrated heterogeneous features, which are subsequently processed by a Transformer decoder to produce the final semantic prediction. Additionally, we release SYN-UDTIRI, the first large-scale road scene parsing dataset that contains over 10,407 RGB images, dense depth images, and the corresponding pixel-level annotations for both freespace and road defects of different shapes and sizes. Extensive experimental evaluations conducted on our SYN-UDTIRI dataset, as well as on three public datasets, including KITTI road, CityScapes, and ORFD, demonstrate that RoadFormer outperforms all other state-of-the-art networks for road scene parsing. Specifically, RoadFormer ranks first on the KITTI road benchmark.
Citations: 0
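The duplex-encoder idea, one branch for RGB and one for surface normals whose features are fused before decoding, can be illustrated with the toy PyTorch module below. It is a deliberately small stand-in with arbitrary layer sizes, not RoadFormer itself (no Transformer decoder, no heterogeneous feature synergy block).

```python
# Toy duplex-encoder segmentation network; layer sizes are arbitrary
# assumptions and this is not the RoadFormer architecture.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class DuplexSegNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.rgb_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))    # RGB branch
        self.nrm_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))    # surface-normal branch
        self.fuse = nn.Sequential(nn.Conv2d(128, 64, kernel_size=1), nn.ReLU(inplace=True))
        self.head = nn.Sequential(                                             # back to input resolution
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, rgb, normals):
        feats = torch.cat([self.rgb_enc(rgb), self.nrm_enc(normals)], dim=1)   # duplex features
        return self.head(self.fuse(feats))

# logits = DuplexSegNet()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))  # -> (1, 2, 256, 256)
```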
eVTOL Performance Analysis: A Review From Control Perspectives
IF 14 CAS Tier 1 (Engineering & Technology) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-04-11 DOI: 10.1109/TIV.2024.3387405
Jiangcheng Su;Hailong Huang;Hong Zhang;Yutong Wang;Fei-Yue Wang
Electric Vertical Takeoff and Landing (eVTOL) aircraft have gained significant attention as a basic element of urban air mobility (UAM), a potential solution for urban transportation challenges using low-altitude urban airspace. Ensuring the safe operation of eVTOL is crucial for UAM applications, which involve various professional fields such as aerodynamics, control, structures, and power systems. This article systematically analyzes the characteristics of different design configurations, including multi-rotor, lift+cruise, and tilt-rotor types of eVTOL, and examines the advantages and limitations of each type. It then analyzes overall design problems and discusses the challenges of eVTOL control system design in terms of the overall control structure and its subsystems, such as the controller, sensors, actuators, and command generator. This article aims to fill the gap in eVTOL design from a control perspective and provides some solutions for eVTOL applications.
Citations: 0
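As a minimal illustration of the cascaded loop structure such reviews typically discuss for multi-rotor configurations, the sketch below closes an outer altitude loop around an inner climb-rate loop on a point-mass model. All gains, limits, the mass, and the model itself are assumptions chosen for illustration, not content from the article.

```python
# Illustrative cascaded altitude controller for a point-mass vehicle; gains,
# limits, and dynamics are assumptions, not taken from the review.

class PID:
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate_altitude_hold(z_ref=10.0, mass=2.0, g=9.81, dt=0.01, steps=2000):
    """Cascaded altitude hold: position PID -> climb-rate PID -> thrust command."""
    outer, inner = PID(kp=1.0), PID(kp=4.0, ki=0.5)
    z, vz = 0.0, 0.0
    for _ in range(steps):
        vz_cmd = outer.step(z_ref - z, dt)                 # altitude error -> climb-rate command
        thrust = mass * g + inner.step(vz_cmd - vz, dt)    # climb-rate error -> thrust (hover feedforward)
        thrust = max(0.0, min(thrust, 2.0 * mass * g))     # actuator saturation
        az = thrust / mass - g                             # point-mass vertical dynamics
        vz += az * dt
        z += vz * dt
    return z

# print(simulate_altitude_hold())   # settles close to the 10 m reference after 20 s of simulated flight
```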