SwitchingNet: Edge-Assisted Model Switching for Accurate Video Recognition Over Best-Effort Networks
Florian Beye, Yasunori Babazaki, Ryuhei Ando, Takashi Oshiba, Koichi Nihei, Katsuhiko Takahashi
2024 IEEE 21st Consumer Communications & Networking Conference (CCNC), pp. 37-43
Published: 2024-01-06
DOI: 10.1109/CCNC51664.2024.10454650
Citations: 0
Abstract
Despite the remarkable success of deep learning in image and video recognition, constructing real-time recognition systems for computationally intensive tasks such as spatio-temporal human action localization remains challenging. As the computational complexity of these tasks can easily exceed the capacity of edge devices, inference must be performed in remote (cloud) environments. Recognition accuracy then becomes subject to fluctuating networking conditions in best-effort networks, due to compression artefacts incurred by low-bitrate video streaming. To improve overall recognition accuracy under varying networking conditions, we propose SwitchingNet, an edge-assisted inference model switching method. In SwitchingNet, we train multiple recognition models, each specialized for a different level of image quality, together with a neural switching model that dynamically chooses among the specialized recognition models during system operation. Switching decisions are made at the edge, based on an image quality vector computed from compressed and uncompressed frames. In our experiments, we show that this approach sustains, on average, higher recognition accuracy than plain recognition systems under heavily fluctuating networking conditions. Moreover, our switching-based recognition approach is far less computationally intensive than competing ensemble methods and significantly reduces cloud computing costs.
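The abstract describes an edge-side pipeline: compute an image quality vector from each uncompressed frame and its compressed counterpart, then let a switching model pick the recognition model specialized for that quality level. The following minimal sketch illustrates this control flow under stated assumptions: the quality features (MSE and PSNR) and the simple threshold-based switch are illustrative stand-ins for the paper's learned quality vector and neural switching model, and all names are hypothetical.

```python
import math

def quality_vector(uncompressed, compressed):
    # Hypothetical quality features computed at the edge from an
    # uncompressed frame and its decoded (compressed) counterpart,
    # given as flat sequences of 8-bit pixel values:
    # mean squared error and the derived PSNR in dB.
    n = len(uncompressed)
    mse = sum((u - c) ** 2 for u, c in zip(uncompressed, compressed)) / n
    psnr = 10.0 * math.log10(255.0 ** 2 / mse) if mse > 0 else math.inf
    return (mse, psnr)

def switch_model(q, thresholds=(35.0, 28.0)):
    # Toy threshold rule standing in for the paper's neural switching
    # model: choose the recognition model specialized for the observed
    # quality level (model names and PSNR cut-offs are illustrative).
    _, psnr = q
    if psnr >= thresholds[0]:
        return "recognizer_high_quality"
    if psnr >= thresholds[1]:
        return "recognizer_medium_quality"
    return "recognizer_low_quality"
```

In a deployed system the chosen identifier would select which cloud-hosted recognition model processes the streamed frame; the paper's point is that only one specialized model runs per frame, unlike ensemble methods that evaluate all of them.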