Deep Encoder-Decoder Network for Lane-Following on Autonomous Vehicle

Abida Khanum, Chao-Yang Lee, Chu-Sing Yang
{"title":"Deep Encoder-Decoder Network for Lane-Following on Autonomous Vehicle","authors":"Abida Khanum, Chao-Yang Lee, Chu-Sing Yang","doi":"10.1109/ICCE-Taiwan55306.2022.9869205","DOIUrl":null,"url":null,"abstract":"Nowadays there is a vast interest in a self-driving car from both academia and industry. The main reason behind recently enormous progress in deep learning approaches for an autonomous vehicle. The main objective of this research is to propose a deep hybrid encoder-decoder network with input multi-modal data to predict the decision-making task. Therefore, the proposed approaches are tested by both real and simulation data but in the real data single camera image and simulator data three-camera image data. The proposed method analyzes the effects of input data. The experiment results in analyses in terms of Computational time as-well-as parameters in which values of the steering wheel and brake both real and simulated data are (6ms and 9ms) respectively. The analysis shows that our method performs well in driving action prediction.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Consumer Electronics - Taiwan","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869205","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Interest in self-driving cars is growing rapidly in both academia and industry, driven largely by recent progress in deep learning approaches for autonomous vehicles. The main objective of this research is to propose a deep hybrid encoder-decoder network that takes multi-modal input data and predicts driving decisions. The proposed approach is evaluated on both real and simulated data: the real data consist of single-camera images, while the simulator data consist of three-camera images. The method is analyzed with respect to the effect of the input data, computational time, and model parameters; predicting the steering-wheel and brake values takes 6 ms and 9 ms for the real and simulated data, respectively. The analysis shows that our method performs well at predicting driving actions.
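The abstract does not include an implementation, but the described idea (an encoder-decoder network mapping camera images to steering-wheel and brake values) can be sketched as follows. This is a minimal sketch under assumptions not stated in the paper: a PyTorch CNN encoder feeding an LSTM decoder with two regression heads; the class name LaneFollowingNet, all layer sizes, and the 66x200 input resolution are hypothetical choices for illustration only.

```python
# Hypothetical sketch of an encoder-decoder network for steering/brake prediction.
# Assumptions (not from the paper): PyTorch, a CNN frame encoder, an LSTM decoder
# over the encoded frame sequence, and multi-camera input stacked channel-wise.
import torch
import torch.nn as nn


class LaneFollowingNet(nn.Module):
    def __init__(self, num_cameras=1, hidden_size=128):
        super().__init__()
        # CNN encoder: reduces each (stacked) camera frame to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * num_cameras, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((2, 4)),
            nn.Flatten(),
            nn.Linear(64 * 2 * 4, hidden_size), nn.ReLU(),
        )
        # LSTM decoder: consumes the per-frame features as a sequence.
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        # Two regression heads: steering-wheel value and brake value.
        self.steering_head = nn.Linear(hidden_size, 1)
        self.brake_head = nn.Linear(hidden_size, 1)

    def forward(self, frames):
        # frames: (batch, seq_len, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.decoder(feats)
        last = out[:, -1]  # predict from the last time step
        return self.steering_head(last), self.brake_head(last)


# Example: a batch of 2 sequences of 5 single-camera RGB frames at 66x200.
model = LaneFollowingNet(num_cameras=1)
dummy = torch.randn(2, 5, 3, 66, 200)
steering, brake = model(dummy)
print(steering.shape, brake.shape)  # torch.Size([2, 1]) torch.Size([2, 1])
```

In this sketch the three-camera simulator setting would be handled by setting num_cameras=3 and concatenating the views along the channel dimension; that is one simple way to accept both the single-camera real data and the three-camera simulated data with the same network, not necessarily the paper's approach.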