Machine Learning Based Performance Analysis of Video Object Detection and Classification Using Modified YOLOv3 and MobileNet Algorithm

Mohandoss T, Rangaraj J
{"title":"Machine Learning Based Performance Analysis of Video Object Detection and Classification Using Modified Yolov3 and Mobilenet Algorithm","authors":"Mohandoss T, Rangaraj J","doi":"10.53759/7669/jmc202303025","DOIUrl":null,"url":null,"abstract":"Detecting foreground objects in video is crucial in various machine vision applications and computerized video surveillance technologies. Object tracking and detection are essential in object identification, surveillance, and navigation approaches. Object detection is the technique of differentiating between background and foreground features in a photograph. Recent improvements in vision systems, including distributed smart cameras, have inspired researchers to develop enhanced machine vision applications for embedded systems. The efficiency of featured object detection algorithms declines as dynamic video data increases as contrasted to conventional object detection methods. Moving subjects that are blurred, fast-moving objects, backdrop occlusion, or dynamic background shifts within the foreground area of a video frame can all cause problems. These challenges result in insufficient prominence detection. This work develops a deep-learning model to overcome this issue. For object detection, a novel method utilizing YOLOv3 and MobileNet was built. First, rather than picking predefined feature maps in the conventional YOLOv3 architecture, the technique for determining feature maps in the MobileNet is optimized based on examining the receptive fields. This work focuses on three primary processes: object detection, recognition, and classification, to classify moving objects before shared features. Compared to existing algorithms, experimental findings on public datasets and our dataset reveal that the suggested approach achieves 99% correct classification accuracy for urban settings with moving objects. Experiments reveal that the suggested model beats existing cutting-edge models by speed and computation.","PeriodicalId":91709,"journal":{"name":"International journal of machine learning and computing","volume":"23 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of machine learning and computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.53759/7669/jmc202303025","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Detecting foreground objects in video is crucial in various machine vision applications and computerized video surveillance technologies. Object tracking and detection are essential in object identification, surveillance, and navigation approaches. Object detection is the technique of differentiating between background and foreground features in an image. Recent improvements in vision systems, including distributed smart cameras, have inspired researchers to develop enhanced machine vision applications for embedded systems. The efficiency of feature-based object detection algorithms declines as dynamic video data increases, compared with conventional object detection methods. Blurred moving subjects, fast-moving objects, background occlusion, or dynamic background shifts within the foreground area of a video frame can all cause problems. These challenges result in insufficient prominence detection. This work develops a deep-learning model to overcome this issue. For object detection, a novel method combining YOLOv3 and MobileNet was built. First, rather than using the predefined feature maps of the conventional YOLOv3 architecture, the feature maps drawn from MobileNet are selected by examining their receptive fields. This work focuses on three primary processes, object detection, recognition, and classification, to classify moving objects based on shared features. Compared to existing algorithms, experimental findings on public datasets and our dataset reveal that the suggested approach achieves 99% classification accuracy for urban settings with moving objects. Experiments reveal that the suggested model outperforms existing state-of-the-art models in speed and computational cost.
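The following is a minimal sketch, not the authors' implementation, of the general idea described above: replacing the Darknet-53 backbone of a YOLOv3-style detector with MobileNet and tapping feature maps at three strides whose receptive fields roughly match small, medium, and large objects. The tap indices, channel counts, and head design are assumptions based on torchvision's MobileNetV2, not details taken from the paper.

```python
# Sketch of a YOLOv3-style detector on a MobileNetV2 backbone (assumed design,
# not the paper's exact architecture). Feature maps are tapped at strides 8/16/32.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class MobileNetYoloBackbone(nn.Module):
    """Extract multi-scale feature maps from MobileNetV2 for a YOLOv3-style head."""

    # (block index in .features, output channels) -- assumed tap points:
    # index 6 -> stride 8, index 13 -> stride 16, index 18 -> stride 32.
    TAPS = [(6, 32), (13, 96), (18, 1280)]

    def __init__(self):
        super().__init__()
        # weights=None keeps the example self-contained; pass "IMAGENET1K_V1"
        # to start from ImageNet-pretrained weights.
        self.features = mobilenet_v2(weights=None).features

    def forward(self, x):
        maps = []
        tap_indices = {i for i, _ in self.TAPS}
        for idx, block in enumerate(self.features):
            x = block(x)
            if idx in tap_indices:
                maps.append(x)
        return maps  # [stride-8, stride-16, stride-32] feature maps


class YoloHead(nn.Module):
    """1x1 conv per scale predicting (x, y, w, h, objectness, classes) per anchor."""

    def __init__(self, in_channels, num_classes, num_anchors=3):
        super().__init__()
        self.pred = nn.Conv2d(in_channels, num_anchors * (5 + num_classes), kernel_size=1)

    def forward(self, x):
        return self.pred(x)


class MobileNetYolo(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        self.backbone = MobileNetYoloBackbone()
        self.heads = nn.ModuleList(
            YoloHead(ch, num_classes) for _, ch in MobileNetYoloBackbone.TAPS
        )

    def forward(self, x):
        return [head(fmap) for head, fmap in zip(self.heads, self.backbone(x))]


if __name__ == "__main__":
    model = MobileNetYolo(num_classes=20)
    outs = model(torch.randn(1, 3, 416, 416))
    for o in outs:
        print(o.shape)  # [1, 75, 52, 52], [1, 75, 26, 26], [1, 75, 13, 13]
```

In a full detector, the raw head outputs would still be decoded against anchor boxes and filtered with non-maximum suppression; the sketch only illustrates how multi-scale MobileNet feature maps can stand in for YOLOv3's predefined ones.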