SwitchingNet: Edge-Assisted Model Switching for Accurate Video Recognition Over Best-Effort Networks

Florian Beye, Yasunori Babazaki, Ryuhei Ando, Takashi Oshiba, Koichi Nihei, Katsuhiko Takahashi
Published in: 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC), pp. 37–43. Publication date: 2024-01-06. DOI: 10.1109/CCNC51664.2024.10454650

Abstract

Despite the remarkable success of deep learning in image and video recognition, constructing real-time recognition systems for computationally intensive tasks such as spatio-temporal human action localization remains challenging. As the computational complexity of these tasks can easily exceed the capacity of edge devices, inference must be performed in remote (cloud) environments. In best-effort networks, however, recognition accuracy then becomes subject to fluctuating network conditions, since low-bitrate video streaming introduces compression artefacts. To improve overall recognition accuracy under varying network conditions, we propose SwitchingNet, an edge-assisted inference model switching method. In SwitchingNet, we train multiple recognition models, each specialized for a different level of image quality, together with a neural switching model that dynamically chooses among the specialized recognition models during system operation. Switching decisions are made at the edge, given an image quality vector calculated from compressed and uncompressed frames. In our experiments, we show that this approach sustains, on average, higher recognition accuracy than plain recognition systems under heavily fluctuating network conditions. Moreover, our switching-based recognition approach is far less computationally intensive than competing ensemble methods and significantly reduces cloud computing costs.
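To make the switching idea concrete, here is a minimal sketch (not the authors' code) of the edge-side decision: an image-quality vector is computed from a compressed frame and its uncompressed reference, and one of several quality-specialized recognition models is selected. The MSE/PSNR features and the threshold-based selector are hypothetical stand-ins for the paper's learned quality vector and neural switching model.

```python
import numpy as np


def quality_vector(reference: np.ndarray, compressed: np.ndarray) -> np.ndarray:
    """Return a small feature vector describing compression damage
    between an uncompressed reference frame and its compressed version."""
    diff = reference.astype(np.float64) - compressed.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    # PSNR in dB for 8-bit frames; cap at 99 dB for identical frames.
    psnr = 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else 99.0
    return np.array([mse, psnr])


def switch_model(qvec: np.ndarray, psnr_thresholds=(35.0, 25.0)) -> str:
    """Pick a quality-specialized recognition model from the quality vector.
    (A stand-in for the paper's neural switching model.)"""
    psnr = qvec[1]
    if psnr >= psnr_thresholds[0]:
        return "model_high_quality"
    if psnr >= psnr_thresholds[1]:
        return "model_mid_quality"
    return "model_low_quality"


# Example: a heavily degraded frame routes to the low-quality model.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-60, 61, size=ref.shape)
degraded = np.clip(ref.astype(int) + noise, 0, 255).astype(np.uint8)
print(switch_model(quality_vector(ref, degraded)))
```

In the actual system, only the switching decision runs at the edge; the selected specialized model performs inference in the cloud, which is what keeps the per-frame overhead low compared to ensemble methods that run all models.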