Lane and Vehicle Detection Using Hough Transform and YOLOv3
Subash Kumar, Kartikeya, S. Sushanth Kumar, Nikhil Gupta, Agrima Yadav
2022 2nd International Conference on Intelligent Technologies (CONIT), published 2022-06-24
DOI: 10.1109/CONIT55038.2022.9847985
Citations: 1
Abstract
Object detection in the dark is critical to minimizing the number of nighttime traffic crashes. This paper presents a deep convolutional neural network, dubbed M-YOLO, that improves the accuracy of nighttime object recognition while remaining suitable for resource-constrained environments, including in-vehicle microcontrollers. First, lane-line images are divided into S × 2S grid cells to account for their uneven spatial and temporal distribution densities. Second, detection is performed at four measurement scales, making the network better suited to small-target localization, such as lateral distance measurement. Third, to streamline the network, the layers of the basic YOLOv3 backbone are reduced from 53 to 49. Finally, parameters such as the cluster-center radius and the backpropagation scheme are optimized.
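The abstract stays at a high level, but the title names a familiar two-stage pipeline: Hough-transform lane detection alongside a YOLOv3-based vehicle detector. The snippet below is a minimal, illustrative OpenCV sketch of the Hough-transform lane-detection stage only; the function name detect_lane_lines, the input path road_frame.jpg, and all thresholds are assumptions for illustration, not values taken from the paper.

```python
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Classic Hough-transform lane detection on a single road frame.

    Returns the input frame with detected line segments drawn on it.
    All thresholds are illustrative defaults, not the paper's values.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region of interest in front of the vehicle.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform: each returned entry is (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)

    output = frame_bgr.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(output, (x1, y1), (x2, y2), (0, 255, 0), 3)
    return output

if __name__ == "__main__":
    # "road_frame.jpg" is a placeholder path for a dash-cam frame.
    frame = cv2.imread("road_frame.jpg")
    if frame is None:
        raise FileNotFoundError("road_frame.jpg not found; supply your own frame")
    cv2.imwrite("lanes_detected.jpg", detect_lane_lines(frame))
```

The probabilistic variant (cv2.HoughLinesP) returns line segments directly, which makes it convenient for drawing lane markings compared with the accumulator output of cv2.HoughLines.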
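The final step in the abstract, optimizing the cluster-center radius, plausibly refers to re-estimating anchor-box shapes for the detector. The standard approach in the YOLO family is k-means clustering of ground-truth box sizes under a 1 − IoU distance; the sketch below shows that generic technique, not the authors' exact procedure, and the synthetic box sizes are placeholders for a real labelled vehicle dataset.

```python
import numpy as np

def iou_wh(box_wh, centers_wh):
    """IoU between one (w, h) box and each cluster center, with all boxes
    assumed to share the same top-left corner (the usual YOLO anchor trick)."""
    inter = (np.minimum(box_wh[0], centers_wh[:, 0]) *
             np.minimum(box_wh[1], centers_wh[:, 1]))
    union = box_wh[0] * box_wh[1] + centers_wh[:, 0] * centers_wh[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchor shapes using 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    centers = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)]
    for _ in range(iters):
        # Assign every box to the cluster center with the highest IoU.
        assign = np.array([np.argmax(iou_wh(b, centers)) for b in boxes_wh])
        new_centers = np.array([boxes_wh[assign == j].mean(axis=0)
                                if np.any(assign == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Return anchors sorted by area, smallest first.
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]

# Synthetic (width, height) pairs standing in for labelled vehicle boxes.
boxes = np.abs(np.random.default_rng(1).normal(loc=[80, 60], scale=[30, 20],
                                               size=(500, 2)))
print(kmeans_anchors(boxes, k=9))
```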