TOD: Transprecise Object Detection to Maximise Real-Time Accuracy on the Edge

JunKyu Lee, B. Varghese, Roger Francis Woods, H. Vandierendonck
{"title":"TOD: Transprecise Object Detection to Maximise Real-Time Accuracy on the Edge","authors":"JunKyu Lee, B. Varghese, Roger Francis Woods, H. Vandierendonck","doi":"10.1109/ICFEC51620.2021.00015","DOIUrl":null,"url":null,"abstract":"Real-time video analytics on the edge is challenging as the computationally constrained resources typically cannot analyse video streams at full fidelity and frame rate, which results in loss of accuracy. This paper proposes a Transprecise Object Detector (TOD) which maximises the real-time object detection accuracy on an edge device by selecting an appropriate Deep Neural Network (DNN) on the fly with negligible computational overhead. TOD makes two key contributions over the state of the art: (1) TOD leverages characteristics of the video stream such as object size and speed of movement to identify networks with high prediction accuracy for the current frames; (2) it selects the best-performing network based on projected accuracy and computational demand using an effective and low-overhead decision mechanism. Experimental evaluation on a Jetson Nano demonstrates that TOD improves the average object detection precision by 34.7% over the YOLOv4-tiny-288 model on average over the MOT17Det dataset. In the MOT17-05 test dataset, TOD utilises only 45.1% of GPU resource and 62.7% of the GPU board power without losing accuracy, compared to YOLOv4-416 model. We expect that TOD will maximise the application of edge devices to real-time object detection, since TOD maximises real-time object detection accuracy given edge devices according to dynamic input features without increasing inference latency in practice.","PeriodicalId":436220,"journal":{"name":"2021 IEEE 5th International Conference on Fog and Edge Computing (ICFEC)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 5th International Conference on Fog and Edge Computing (ICFEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICFEC51620.2021.00015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Real-time video analytics on the edge is challenging as the computationally constrained resources typically cannot analyse video streams at full fidelity and frame rate, which results in loss of accuracy. This paper proposes a Transprecise Object Detector (TOD) which maximises real-time object detection accuracy on an edge device by selecting an appropriate Deep Neural Network (DNN) on the fly with negligible computational overhead. TOD makes two key contributions over the state of the art: (1) it leverages characteristics of the video stream, such as object size and speed of movement, to identify networks with high prediction accuracy for the current frames; (2) it selects the best-performing network based on projected accuracy and computational demand using an effective, low-overhead decision mechanism. Experimental evaluation on a Jetson Nano demonstrates that TOD improves average object detection precision by 34.7% over the YOLOv4-tiny-288 model on the MOT17Det dataset. On the MOT17-05 test dataset, TOD uses only 45.1% of the GPU resources and 62.7% of the GPU board power of the YOLOv4-416 model without losing accuracy. We expect that TOD will maximise the applicability of edge devices to real-time object detection, since it maximises real-time detection accuracy on a given edge device, adapting to dynamic input features without increasing inference latency in practice.
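The abstract describes TOD's mechanism only at a high level: profile several DNN variants, project their accuracy from simple stream features (object size, speed of movement), and run the most accurate model that still fits the per-frame latency budget. The sketch below is a minimal illustration of that idea in Python and is not the authors' implementation; the candidate models, latency and accuracy figures, and penalty heuristics are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' implementation): choose the DNN variant
# with the highest projected accuracy among those whose measured latency fits
# the per-frame budget. All numeric profiles below are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    input_size: int        # square input resolution in pixels
    latency_ms: float      # profiled per-frame inference latency on the device
    base_accuracy: float   # profiled average precision on a validation set

CANDIDATES = [
    Candidate("yolov4-tiny-288", 288, 18.0, 0.35),
    Candidate("yolov4-tiny-416", 416, 30.0, 0.45),
    Candidate("yolov4-416",      416, 95.0, 0.60),
]

def projected_accuracy(c: Candidate, mean_obj_size_px: float, mean_obj_speed_px: float) -> float:
    """Crude projection: small or fast-moving objects penalise low-resolution models."""
    size_penalty = max(0.0, (64.0 - mean_obj_size_px) / 64.0) * (288.0 / c.input_size)
    blur_penalty = min(mean_obj_speed_px / 50.0, 1.0) * 0.1
    return c.base_accuracy - 0.2 * size_penalty - blur_penalty

def select_model(frame_budget_ms: float, mean_obj_size_px: float, mean_obj_speed_px: float) -> Candidate:
    """Pick the highest projected-accuracy candidate that meets the frame budget."""
    feasible = [c for c in CANDIDATES if c.latency_ms <= frame_budget_ms]
    pool = feasible or [min(CANDIDATES, key=lambda c: c.latency_ms)]  # fall back to the fastest model
    return max(pool, key=lambda c: projected_accuracy(c, mean_obj_size_px, mean_obj_speed_px))

if __name__ == "__main__":
    # e.g. a 30 FPS stream (~33 ms budget) with small, fast-moving pedestrians
    choice = select_model(frame_budget_ms=33.0, mean_obj_size_px=40.0, mean_obj_speed_px=25.0)
    print("Selected:", choice.name)
```

In practice the latency and accuracy profiles would come from offline benchmarking on the target device (a Jetson Nano in the paper), and the object-size and speed estimates from lightweight analysis of recent frames, so that the per-frame selection itself adds negligible overhead.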