Enhancing 3D object detection in autonomous vehicles based on synthetic virtual environment analysis

IF 4.2 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Image and Vision Computing | Pub Date: 2025-02-01 | Epub Date: 2024-12-18 | DOI: 10.1016/j.imavis.2024.105385
Vladislav Li , Ilias Siniosoglou , Thomai Karamitsou , Anastasios Lytos , Ioannis D. Moscholios , Sotirios K. Goudos , Jyoti S. Banerjee , Panagiotis Sarigiannidis , Vasileios Argyriou
Image and Vision Computing, Volume 154, Article 105385. URL: https://www.sciencedirect.com/science/article/pii/S0262885624004906
Citations: 0

Abstract

Autonomous Vehicles (AVs) rely on real-time processing of natural images and videos for scene understanding and safety assurance through proactive object detection. Traditional methods have primarily focused on 2D object detection, limiting their spatial understanding. This study introduces a novel approach by leveraging 3D object detection in conjunction with augmented reality (AR) ecosystems for enhanced real-time scene analysis. Our approach pioneers the integration of a synthetic dataset, designed to simulate various environmental, lighting, and spatiotemporal conditions, to train and evaluate an AI model capable of deducing 3D bounding boxes. This dataset, with its diverse weather conditions and varying camera settings, allows us to explore detection performance in highly challenging scenarios. The proposed method also significantly improves processing times while maintaining accuracy, offering competitive results in conditions previously considered difficult for object recognition. The combination of 3D detection within the AR framework and the use of synthetic data to tackle environmental complexity marks a notable contribution to the field of AV scene analysis.
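The abstract describes a model that deduces 3D bounding boxes from imagery. As background, the sketch below shows one common 3D box parameterisation (center, dimensions, yaw) and how to recover its eight corners; this KITTI-style encoding is an assumption for illustration, not necessarily the exact representation used by the authors.

```python
import numpy as np

def box3d_corners(center, size, yaw):
    """Return the 8 corners (8x3 array) of a 3D bounding box.

    center: (x, y, z) of the box center.
    size:   (length, width, height) along the box's local axes.
    yaw:    rotation about the vertical (z) axis, in radians.
    """
    l, w, h = size
    # Corner offsets in the box's local frame, centered at the origin.
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2.0
    y = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2.0
    z = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2.0
    corners = np.stack([x, y, z])  # shape (3, 8)
    # Rotate about z by the yaw angle, then translate to the center.
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return (rot @ corners).T + np.asarray(center, dtype=float)
```

Once corners are available, detection quality is typically scored with 3D intersection-over-union against ground-truth boxes, which is the standard way results like those reported here are compared.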

Source Journal

Image and Vision Computing
Category: Engineering Technology - Engineering: Electrical & Electronic
CiteScore: 8.50
Self-citation rate: 8.50%
Annual articles: 143
Review time: 7.8 months

Aims & Scope: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.