Low-latency speculative inference on distributed multi-modal data streams

Tianxing Li, Jin Huang, Erik Risinger, Deepak Ganesan
{"title":"Low-latency speculative inference on distributed multi-modal data streams","authors":"Tianxing Li, Jin Huang, Erik Risinger, Deepak Ganesan","doi":"10.1145/3458864.3467884","DOIUrl":null,"url":null,"abstract":"While multi-modal deep learning is useful in distributed sensing tasks like human tracking, activity recognition, and audio and video analysis, deploying state-of-the-art multi-modal models in a wirelessly networked sensor system poses unique challenges. The data sizes for different modalities can be highly asymmetric (e.g., video vs. audio), and these differences can lead to significant delays between streams in the presence of wireless dynamics. Therefore, a slow stream can significantly slow down a multi-modal inference system in the cloud, leading to either increased latency (when blocked by the slow stream) or degradation in inference accuracy (if inference proceeds without waiting). In this paper, we introduce speculative inference on multi-modal data streams to adapt to these asymmetries across modalities. Rather than blocking inference until all sensor streams have arrived and been temporally aligned, we impute any missing, corrupt, or partially-available sensor data, then generate a speculative inference using the learned models and imputed data. A rollback module looks at the class output of speculative inference and determines whether the class is sufficiently robust to incomplete data to accept the result; if not, we roll back the inference and update the model's output. We implement the system in three multi-modal application scenarios using public datasets. The experimental results show that our system achieves 7 -- 128× latency speedup with the same accuracy as six state-of-the-art methods.","PeriodicalId":153361,"journal":{"name":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","volume":"50 S5","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3458864.3467884","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

While multi-modal deep learning is useful in distributed sensing tasks like human tracking, activity recognition, and audio and video analysis, deploying state-of-the-art multi-modal models in a wirelessly networked sensor system poses unique challenges. The data sizes for different modalities can be highly asymmetric (e.g., video vs. audio), and these differences can lead to significant delays between streams in the presence of wireless dynamics. Therefore, a slow stream can significantly slow down a multi-modal inference system in the cloud, leading to either increased latency (when blocked by the slow stream) or degradation in inference accuracy (if inference proceeds without waiting). In this paper, we introduce speculative inference on multi-modal data streams to adapt to these asymmetries across modalities. Rather than blocking inference until all sensor streams have arrived and been temporally aligned, we impute any missing, corrupt, or partially-available sensor data, then generate a speculative inference using the learned models and imputed data. A rollback module looks at the class output of speculative inference and determines whether the class is sufficiently robust to incomplete data to accept the result; if not, we roll back the inference and update the model's output. We implement the system in three multi-modal application scenarios using public datasets. The experimental results show that our system achieves 7 -- 128× latency speedup with the same accuracy as six state-of-the-art methods.
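To make the speculate-then-rollback loop described above concrete, here is a minimal Python sketch. It is an illustration only, not the authors' implementation: the names `speculative_infer`, `imputers`, and `robust_classes` are hypothetical, and the per-class robustness set stands in for whatever rollback criterion the paper actually learns from data.

```python
from typing import Callable, Dict, Optional, Set, Tuple

import numpy as np


def speculative_infer(
    streams: Dict[str, Optional[np.ndarray]],
    model: Callable[[Dict[str, np.ndarray]], Tuple[int, float]],
    imputers: Dict[str, Callable[[], np.ndarray]],
    robust_classes: Set[int],
) -> Tuple[int, bool]:
    """Return (predicted_class, needs_rollback).

    Missing, corrupt, or partially-available modalities are imputed so that
    inference is not blocked on the slowest stream. The result is flagged for
    rollback unless the predicted class is known to be robust to imputed data.
    """
    completed: Dict[str, np.ndarray] = {}
    any_imputed = False
    for modality, data in streams.items():
        if data is None:  # stream has not arrived yet, or arrived corrupt
            completed[modality] = imputers[modality]()
            any_imputed = True
        else:
            completed[modality] = data

    pred_class, _confidence = model(completed)

    # Rollback check: accept the speculative result only if this class is
    # robust to incomplete inputs; otherwise the caller should wait for the
    # slow stream to arrive and re-run inference with the real data.
    accept = (not any_imputed) or (pred_class in robust_classes)
    return pred_class, not accept
```

In a real deployment the per-modality imputers would likely be learned generators rather than simple fill-ins, and a rollback would emit a corrected output once the delayed stream arrives and is temporally aligned.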