
Latest Publications in IEEE Transactions on Intelligent Transportation Systems

IEEE Intelligent Transportation Systems Society Information
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2025-01-14 | DOI: 10.1109/TITS.2024.3518293 | Vol. 26, No. 1, p. C3
{"title":"IEEE Intelligent Transportation Systems Society Information","authors":"","doi":"10.1109/TITS.2024.3518293","DOIUrl":"https://doi.org/10.1109/TITS.2024.3518293","url":null,"abstract":"","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 1","pages":"C3-C3"},"PeriodicalIF":7.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10841913","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142976043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE INTELLIGENT TRANSPORTATION SYSTEMS SOCIETY
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2025-01-14 | DOI: 10.1109/TITS.2024.3518292 | Vol. 26, No. 1, p. C2
{"title":"IEEE INTELLIGENT TRANSPORTATION SYSTEMS SOCIETY","authors":"","doi":"10.1109/TITS.2024.3518292","DOIUrl":"https://doi.org/10.1109/TITS.2024.3518292","url":null,"abstract":"","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 1","pages":"C2-C2"},"PeriodicalIF":7.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10841923","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scanning the Issue
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2025-01-14 | DOI: 10.1109/TITS.2024.3518135 | Vol. 26, No. 1, pp. 3-21
Simona Sacone
“Scanning the Issue.”
Citations: 0
Predicting Motion Incongruence Ratings in Closed- and Open-Loop Urban Driving Simulation
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2025-01-14 | DOI: 10.1109/TITS.2024.3503496 | Vol. 26, No. 1, pp. 517-528
Maurice Kolff;Joost Venrooij;Elena Arcidiacono;Daan M. Pool;Max Mulder
This paper presents a three-step validation approach for subjective rating predictions of driving simulator motion incongruences based on objective mismatches between reference vehicle and simulator motion. This approach relies on using high-resolution rating predictions of open-loop driving (participants being driven) for ratings of motion in closed-loop driving (participants driving themselves). A driving simulator experiment in an urban scenario is described, in which the rating data of 36 participants were recorded and analyzed. In the experiment’s first phase, participants actively drove themselves (i.e., closed-loop). By recording the drives of the participants and playing these back to them (open-loop) in the second phase, participants experienced the same motion in both phases. Participants rated the motion after each maneuver and at the end of each drive. In the third phase they again drove open-loop, but rated the motion continuously, which is only possible in open-loop driving. Results show that a rating model, acquired through a different experiment, can predict the measured continuous ratings well. Second, the maximum of the measured continuous ratings correlates with both the maneuver-based ($\rho = 0.94$) and overall ($\rho = 0.69$) ratings, allowing for predictions of both rating types based on the continuous rating model. Third, using Bayesian statistics, it is shown that both the maneuver-based and overall ratings are equivalent between the closed-loop and open-loop drives. This allows for predictions of maneuver-based and overall ratings using the high-resolution continuous rating models. These predictions can be used as an accurate trade-off method for the motion cueing settings of future closed-loop driving simulator experiments.
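As a rough illustration of the correlation analysis summarized above, the sketch below computes a rank correlation between the per-maneuver maxima of a continuous rating trace and post-maneuver ratings. The data, the variable names, and the choice of Spearman's rho are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch only: correlating per-maneuver maxima of a continuous
# rating trace with maneuver-based ratings (names and data are made up).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical continuous ratings sampled during 10 maneuvers (one row each).
continuous_ratings = rng.uniform(0, 10, size=(10, 50))

# Hypothetical maneuver-based ratings given after each maneuver.
maneuver_ratings = continuous_ratings.max(axis=1) + rng.normal(0, 0.5, size=10)

# Correlate the per-maneuver maximum of the continuous trace with the ratings.
rho, p_value = spearmanr(continuous_ratings.max(axis=1), maneuver_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```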
Citations: 0
Predicting Asphalt Pavement Deterioration Under Climate Change Uncertainty Using Bayesian Neural Network
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2024-12-16 | DOI: 10.1109/TITS.2024.3505237 | Vol. 26, No. 1, pp. 785-797
Bingyan Cui;Hao Wang
Uncertainty in climate change poses challenges in obtaining accurate and reliable prediction models for future pavement performance. This study aimed to develop an advanced prediction model specifically for flexible pavements, incorporating uncertainty quantification through a Bayesian Neural Network (BNN). Focusing on predicting the International Roughness Index (IRI) and rut depth of asphalt pavement, the BNN model was applied to different climate regions, using long-term pavement performance (LTPP) data from 1989 to 2021. The Tree-structured Parzen Estimator (TPE) algorithm was used to optimize model hyperparameters. The impact of climate change on IRI and rut depth was analyzed. Results showed that the proposed BNN model surpasses the Artificial Neural Network (ANN), providing predictions with confidence intervals that account for uncertainty in climate data and model parameters. Compared to historical climate data, increases in IRI and rut depth were more significant when based on projected climate data. Relying only on historical climate data would underestimate pavement deterioration. Climate change appeared to have a more significant impact on rut depth than on IRI. Rut depth was particularly sensitive to climate change, increasing by more than 40%. Considering the uncertainty, rut depth could increase by up to 85.6%. This highlights the importance of considering regional differences in climate change when developing reliable prediction models. The main contributions of this study include the quantification of uncertainty, the impact analysis of climate change, and regional sensitivity analysis. The study helps adapt to future climate change and supports informed decision-making in transportation infrastructure management.
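A minimal sketch of the central idea, a network that returns a predictive distribution with an uncertainty band rather than a point estimate, is shown below. It uses Monte Carlo dropout in PyTorch as a common stand-in for a full Bayesian Neural Network; the feature set, layer sizes, and the TPE-tuned hyperparameters are placeholders, not values from the paper.

```python
# Minimal MC-dropout sketch: keep dropout active at inference and average
# repeated stochastic forward passes to get a mean prediction and an
# uncertainty estimate. Data and layer sizes are placeholders.
import torch
import torch.nn as nn

class MCDropoutRegressor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples: int = 100):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

# Placeholder features (e.g., pavement age, traffic, temperature, precipitation).
x = torch.randn(8, 4)
model = MCDropoutRegressor(n_features=4)
mean_iri, std_iri = predict_with_uncertainty(model, x)
print(mean_iri.squeeze(), std_iri.squeeze())
```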
Citations: 0
2024 Index IEEE Transactions on Intelligent Transportation Systems Vol. 25
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2024-12-13 | DOI: 10.1109/TITS.2024.3516892 | Vol. 25, No. 12, pp. 1-312
{"title":"2024 Index IEEE Transactions on Intelligent Transportation Systems Vol. 25","authors":"","doi":"10.1109/TITS.2024.3516892","DOIUrl":"https://doi.org/10.1109/TITS.2024.3516892","url":null,"abstract":"","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"25 12","pages":"1-312"},"PeriodicalIF":7.9,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10798999","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142870213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AGSENet: A Robust Road Ponding Detection Method for Proactive Traffic Safety
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2024-12-09 | DOI: 10.1109/TITS.2024.3506659 | Vol. 26, No. 1, pp. 497-516
Ronghui Zhang;Shangyu Yang;Dakang Lyu;Zihan Wang;Junzhou Chen;Yilong Ren;Bolin Gao;Zhihan Lv
Road ponding, a prevalent traffic hazard, poses a serious threat to road safety by causing vehicles to lose control and leading to accidents ranging from minor fender benders to severe collisions. Existing technologies struggle to accurately identify road ponding due to complex road textures and variable ponding coloration influenced by reflection characteristics. To address this challenge, we propose a novel approach called the Self-Attention-based Global Saliency-Enhanced Network (AGSENet) for proactive road ponding detection and traffic safety improvement. AGSENet incorporates saliency detection techniques through the Channel Saliency Information Focus (CSIF) and Spatial Saliency Information Enhancement (SSIE) modules. The CSIF module, integrated into the encoder, employs self-attention to highlight similar features by fusing spatial and channel information. The SSIE module, embedded in the decoder, refines edge features and reduces noise by leveraging correlations across different feature levels. To ensure accurate and reliable evaluation, we corrected significant mislabeling and missing annotations in the Puddle-1000 dataset. Additionally, we constructed the Foggy-Puddle and Night-Puddle datasets for road ponding detection in low-light and foggy conditions, respectively. Experimental results demonstrate that AGSENet outperforms existing methods, achieving IoU improvements of 2.03%, 0.62%, and 1.06% on the Puddle-1000, Foggy-Puddle, and Night-Puddle datasets, respectively, setting a new state of the art in this field. Finally, we verified the algorithm’s reliability on edge computing devices. This work provides a valuable reference for proactive warning research in road traffic safety. The source code and datasets are available at https://github.com/Lyu-Dakang/AGSENet.
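The CSIF module is described as using self-attention over fused spatial and channel information. The block below is a generic channel self-attention layer written to illustrate that general idea; it is not a reconstruction of the authors' CSIF module, and all layer choices and sizes are assumptions.

```python
# Generic channel self-attention sketch (illustrative, not the paper's CSIF).
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Attend over channels: each channel is re-weighted by its similarity
    to every other channel, computed from spatially flattened features."""
    def __init__(self, channels: int):
        super().__init__()
        self.scale = channels ** -0.5
        self.out_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.flatten(2)                      # (B, C, H*W)
        attn = torch.softmax(flat @ flat.transpose(1, 2) * self.scale, dim=-1)  # (B, C, C)
        out = (attn @ flat).view(b, c, h, w)     # re-weighted channel maps
        return self.out_proj(out) + x            # residual connection

feats = torch.randn(2, 32, 64, 64)
print(ChannelSelfAttention(32)(feats).shape)     # torch.Size([2, 32, 64, 64])
```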
Citations: 0
Ultra-Fast Deraining Plugin for Vision-Based Perception of Autonomous Driving
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2024-12-09 | DOI: 10.1109/TITS.2024.3503556 | Vol. 26, No. 1, pp. 1227-1240
Jihao Li;Jincheng Hu;Pengyu Fu;Jun Yang;Jingjing Jiang;Yuanjian Zhang
Rain shifts the distribution of rainy images away from the clean, rain-free data typically used to train perception models. This out-of-distribution (OOD) issue makes it difficult for models to generalize effectively in rainy scenarios, degrading the performance of autonomous perception systems in visual tasks such as lane detection and depth estimation and posing serious safety risks. To address this issue, we propose the Ultra-Fast Deraining Plugin (UFDP), a model-efficient deraining solution specifically designed to realign the distribution of rainy images with their rain-free counterparts. UFDP not only effectively removes rain from images but also seamlessly integrates into existing visual perception models, significantly enhancing their robustness and stability under rainy conditions. Through a detailed analysis of single-image color histograms and dataset-level distributions, we demonstrate how UFDP improves the similarity between rainy and non-rainy image distributions. Additionally, qualitative and quantitative results highlight UFDP’s superiority over state-of-the-art (SOTA) methods, showing a 5.4% improvement in SSIM and 8.1% in PSNR. UFDP also excels in terms of efficiency, achieving 7 times higher FPS than the slowest method, reducing FLOPs by 53.7 times, and using 28.8 times fewer MACs, with 6.2 times fewer parameters. This makes UFDP an ideal solution for ensuring reliable performance in autonomous driving visual perception systems, particularly in challenging rainy environments.
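The reported SSIM and PSNR gains are standard full-reference image-quality metrics. The sketch below shows how such an evaluation is typically run with scikit-image; the images are random placeholders and the paper's actual evaluation protocol is not reproduced here.

```python
# Illustrative PSNR/SSIM evaluation of a derained image against ground truth.
# Random grayscale arrays stand in for a derained output and the clean reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)                 # rain-free reference
derained = np.clip(clean + rng.normal(0, 5, clean.shape), 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(clean, derained)
ssim = structural_similarity(clean, derained)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```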
Citations: 0
POSEIDON-SAT: Data Enhancement for Optical Fishing Vessel Detection From Low-Cost Satellites
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2024-12-09 | DOI: 10.1109/TITS.2024.3506748 | Vol. 26, No. 1, pp. 1113-1122
Kyler Nelson;Mario Harper
This paper presents POSEIDON-SAT, a novel dataset augmentation method designed to enhance the detection of fishing vessels using optical remote sensing technologies. Illegal fishing poses a significant threat to conservation and economic fishing zones, and its detection is often hindered by tactics such as the disabling or manipulation of Automatic Identification System (AIS) transponders. While convolutional neural networks (CNNs) have shown promise in ship detection from optical imagery, the fine-grained classification of fishing vessels is limited by the scarcity of detailed datasets, as these vessels are often underrepresented in existing databases. POSEIDON-SAT addresses this gap by augmenting datasets with synthesized fishing vessel instances, improving the performance of ship detection models, particularly in low-resource scenarios. This approach is tailored for use on low-power, edge computing platforms aboard small satellites, such as CubeSats, where computational resources are highly constrained. By comparing POSEIDON-SAT to traditional class-weighting techniques, we evaluate its impact on lightweight YOLO models optimized for real-time detection aboard such satellites. Our experimental results demonstrate that POSEIDON-SAT significantly improves detection accuracy while reducing false positives, making it an effective tool for enhancing the capabilities of remote sensing platforms in monitoring illegal fishing. This method holds promise for addressing the global challenge of illegal fishing through scalable, efficient satellite-based monitoring systems.
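The core idea, enriching a scarce class by pasting synthesized vessel instances into scenes, is closely related to copy-paste augmentation. The sketch below illustrates that general technique with NumPy; it is not the POSEIDON-SAT pipeline, and the crop source, blending, and label format are all assumptions.

```python
# Generic copy-paste augmentation sketch: paste a vessel crop into an ocean
# tile and emit a matching bounding box. Not the POSEIDON-SAT method itself.
import numpy as np

def paste_instance(scene: np.ndarray, crop: np.ndarray, rng: np.random.Generator):
    """Paste `crop` (h, w, 3) at a random location in `scene` (H, W, 3) and
    return the augmented scene plus an (x_min, y_min, x_max, y_max) box."""
    sh, sw, _ = scene.shape
    ch, cw, _ = crop.shape
    y = rng.integers(0, sh - ch)
    x = rng.integers(0, sw - cw)
    out = scene.copy()
    out[y:y + ch, x:x + cw] = crop               # hard paste; real pipelines blend
    return out, (x, y, x + cw, y + ch)

rng = np.random.default_rng(42)
ocean_tile = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
vessel_crop = rng.integers(0, 256, size=(24, 48, 3), dtype=np.uint8)
augmented, box = paste_instance(ocean_tile, vessel_crop, rng)
print("pasted vessel at box:", box)
```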
Citations: 0
Infrastructure-Side Point Cloud Object Detection via Multi-Frame Aggregation and Multi-Scale Fusion
IF 7.9 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, CIVIL | Pub Date: 2024-12-04 | DOI: 10.1109/TITS.2024.3491784 | Vol. 26, No. 1, pp. 703-713
Ye Yue;Honggang Qi;Yongqiang Deng;Juanjuan Li;Hao Liang;Jun Miao
In recent years, with the advancement of artificial intelligence technology, autonomous driving technologies have gradually emerged. 3D object detection using point clouds has become a key technology in this field. Multi-frame fusion of point clouds is a promising technique to enhance 3D object detection for autonomous driving systems. However, most existing multi-frame detection methods focus primarily on utilizing vehicle-side lidar data. Infrastructure-side detection remains relatively unexplored, yet it can enhance vital vehicle-road coordination capabilities. To help with this coordination, we propose an efficient multi-frame aggregation, multi-scale fusion network specifically for infrastructure-side 3D object detection. First, our key innovation is a novel multi-frame feature aggregation module that effectively integrates information from multiple past point cloud frames to improve detection accuracy. This module comprises a feature pyramid network to fuse multi-scale features, as well as a cross-attention mechanism to learn semantic correlations between different frames over time. Next, we incorporate deformable attention, which reduces the computational overhead of aggregation by sampling locations. We designed multi-frame and multi-scale modules, and accordingly named the model MAMF-Net. Finally, through extensive experiments on two infrastructure-side datasets, including the V2X-Seq-SPD dataset released by Baidu, we demonstrate that MAMF-Net delivers consistent accuracy improvements over single-frame detectors such as PointPillars, PV-RCNN, and TED-S, especially boosting pedestrian detection by 5%. Our approach also surpasses other multi-frame methods designed for vehicle-side point clouds, such as MPPNet.
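A minimal sketch of cross-attention between current-frame and past-frame features is shown below. The use of torch.nn.MultiheadAttention, the BEV-token framing, and all shapes are assumptions made for illustration, not the MAMF-Net implementation.

```python
# Illustrative cross-frame attention: current-frame BEV tokens query features
# gathered from past frames. Shapes and layers are placeholders.
import torch
import torch.nn as nn

embed_dim, num_tokens, num_past = 128, 400, 3    # e.g., 20x20 BEV grid, 3 past frames

cross_attn = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)

current = torch.randn(1, num_tokens, embed_dim)              # current-frame tokens
past = torch.randn(1, num_tokens * num_past, embed_dim)      # stacked past-frame tokens

# Queries come from the current frame; keys/values come from past frames, so
# current features are enriched with temporally correlated context.
fused, attn_weights = cross_attn(query=current, key=past, value=past)
print(fused.shape)        # torch.Size([1, 400, 128])
```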
Citations: 0