
Latest publications: Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services

Low-latency speculative inference on distributed multi-modal data streams
Tianxing Li, Jin Huang, Erik Risinger, Deepak Ganesan
While multi-modal deep learning is useful in distributed sensing tasks like human tracking, activity recognition, and audio and video analysis, deploying state-of-the-art multi-modal models in a wirelessly networked sensor system poses unique challenges. The data sizes for different modalities can be highly asymmetric (e.g., video vs. audio), and these differences can lead to significant delays between streams in the presence of wireless dynamics. Therefore, a slow stream can significantly slow down a multi-modal inference system in the cloud, leading to either increased latency (when blocked by the slow stream) or degradation in inference accuracy (if inference proceeds without waiting). In this paper, we introduce speculative inference on multi-modal data streams to adapt to these asymmetries across modalities. Rather than blocking inference until all sensor streams have arrived and been temporally aligned, we impute any missing, corrupt, or partially-available sensor data, then generate a speculative inference using the learned models and imputed data. A rollback module looks at the class output of speculative inference and determines whether the class is sufficiently robust to incomplete data to accept the result; if not, we roll back the inference and update the model's output. We implement the system in three multi-modal application scenarios using public datasets. The experimental results show that our system achieves 7--128× latency speedup with the same accuracy as six state-of-the-art methods.
DOI: 10.1145/3458864.3467884 (published 2021-06-24)
Citations: 1
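The speculate-then-rollback loop the abstract describes can be sketched roughly as follows; the last-value imputation rule, toy fusion model, and robust-class check are hypothetical stand-ins for the paper's learned components, not its implementation.

```python
# Minimal sketch of speculative inference with rollback over two modalities.
def impute(stream, last_seen):
    """Last-value imputation for a missing or late modality (illustrative)."""
    return stream if stream is not None else last_seen

def speculative_infer(fast, slow, last_slow, model, robust_classes):
    slow_hat = impute(slow, last_slow)
    cls = model(fast, slow_hat)
    if slow is not None or cls in robust_classes:
        return cls, False   # accept the speculative result
    return cls, True        # roll back and re-infer once `slow` arrives

# Toy fusion model: detects "speech" if either modality shows energy > 0.5.
model = lambda audio, video: "speech" if max(audio, video) > 0.5 else "silence"
cls, needs_rollback = speculative_infer(0.9, None, 0.0, model, {"speech"})
```

Here the slow stream is absent, but the predicted class is in the robust set, so the speculative result is accepted without waiting.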
ITrackU
Yifeng Cao, Ashutosh Dhekne, M. Ammar
High-precision tracking of a pen-like instrument's movements is desirable in a wide range of fields spanning education, robotics, and art, to name a few. The key challenge in doing so stems from the impracticality of embedding electronics in the tip of such instruments (a pen, marker, scalpel, etc.) as well as the difficulties in instrumenting the surface that it works on. In this paper, we present ITrackU, a movement digitization system that does not require modifications to the surface or the tracked instrument's tip. ITrackU fuses locations obtained using ultra-wideband (UWB) radios with an inertial and magnetic unit (IMU) and a pressure sensor, yielding multidimensional improvements in accuracy, range, cost, and robustness over existing works. ITrackU embeds a micro-transmitter at the base of a pen, which creates a trackable beacon that is localized from the corners of a writing surface. Fused with an inertial motion sensor and a pressure sensor, ITrackU enables accurate tracking. Our prototype of ITrackU covers a large 2.5m × 2m area, while obtaining around 2.9mm median error. We demonstrate the accuracy of our system by drawing numerous shapes and characters on a whiteboard, and compare them against a touchscreen and a camera-based ground-truthing system. Finally, the produced stream of digitized data is minuscule in volume when compared with a video of the whiteboard, which saves both network bandwidth and storage space.
DOI: 10.1145/3458864.3467885 (published 2021-06-24)
Citations: 16
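The core UWB step (localizing the pen's beacon from anchors at the corners of the writing surface) amounts to multilateration. A minimal 2-D least-squares sketch, ignoring the IMU and pressure fusion and assuming ideal ranges:

```python
import numpy as np

# Hypothetical 2-D multilateration: recover a beacon position from ranges to
# UWB anchors placed at the corners of a 2.5 m x 2 m surface. (ITrackU also
# fuses IMU and pressure data; that fusion is omitted here.)
def multilaterate(anchors, ranges):
    # Linearize ||p - a_i||^2 = r_i^2 by subtracting the first anchor's
    # equation from the rest, then solve the linear system in p.
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = r0**2 - ranges[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [2.5, 0.0], [0.0, 2.0], [2.5, 2.0]])
true = np.array([1.0, 0.7])
ranges = np.linalg.norm(anchors - true, axis=1)  # noise-free ranges
est = multilaterate(anchors, ranges)
```

With four anchors and noise-free ranges the system is overdetermined and the least-squares solution recovers the position exactly; in practice range noise makes the residual nonzero.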
Measuring forest carbon with mobile phones
Amelia Holcomb, Bill Tong, Megan Penny, Srinivasan Keshav
Tree trunk diameter, currently measured during manual forest inventories, is a key input to tree carbon storage calculations. We design an app running on a smartphone equipped with a time-of-flight sensor that allows efficient, low-cost, and accurate measurement of trunk diameter, even in the face of natural leaf and branch occlusion. The algorithm runs in near real-time on the phone, allowing user interaction to improve the quality of the results. We evaluate the app in realistic settings and find that in a corpus of 55 sample tree images, it estimates trunk diameter with mean error of 7.8%.
DOI: 10.1145/3458864.3466916 (published 2021-06-24)
Citations: 1
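A hedged geometric sketch of how a depth reading plus the trunk's angular width in the frame could yield a diameter estimate under a pinhole-camera model; the function, parameters, and the face-on-cylinder assumption are illustrative, and none of the paper's occlusion handling is modelled:

```python
import math

# Hypothetical estimate: diameter ~ chord subtended by the trunk at the
# measured depth, given its pixel width and the camera's horizontal FOV.
def trunk_diameter(depth_m, px_width, img_width_px, hfov_deg):
    # Angular width subtended by the trunk, in radians.
    theta = (px_width / img_width_px) * math.radians(hfov_deg)
    # Chord length at distance depth_m for that angular width.
    return 2.0 * depth_m * math.tan(theta / 2.0)

# Illustrative numbers: trunk 120 px wide in a 1440 px frame, 60 deg FOV,
# 3 m away -> roughly a 26 cm diameter.
d = trunk_diameter(depth_m=3.0, px_width=120, img_width_px=1440, hfov_deg=60)
```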
Rushmore
Chang Min Park, Donghwi Kim, D. Sidhwani, Andrew Fuchs, Arnob Paul, Sung-ju Lee, Karthik Dantu, Steven Y. Ko
We present Rushmore, a system that securely displays static or animated images using TrustZone. The core functionality of Rushmore is to securely decrypt and display encrypted images (sent by a trusted party) on a mobile device. Although previous approaches have shown that it is possible to securely display encrypted images using TrustZone, they exhibit a critical limitation that significantly hampers the applicability of using TrustZone for display security. The limitation is that, when the trusted domain of TrustZone (the secure world) takes control of the display, the untrusted domain (the normal world) cannot display anything simultaneously. This limitation comes from the fact that previous approaches give the secure world exclusive access to the display hardware to preserve security. With Rushmore, we overcome this limitation by leveraging a well-known, yet overlooked hardware feature called an IPU (Image Processing Unit) that provides multiple display channels. By partitioning these channels across the normal world and the secure world, we enable the two worlds to simultaneously display pixels on the screen without sacrificing security. Furthermore, we show that with the right type of cryptographic method, we can decrypt and display encrypted animated images at 30 FPS or higher for medium-to-small images and at around 30 FPS for large images. One notable cryptographic method we adapt for Rushmore is visual cryptography, and we demonstrate that it is a light-weight alternative to other cryptographic methods for certain use cases. Our evaluation shows that in addition to providing usable frame rates, Rushmore incurs less than 5% overhead to the applications running in the normal world.
DOI: 10.1145/3458864.3467887 (published 2021-06-24)
Citations: 1
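The appeal of visual cryptography as a light-weight alternative can be seen in a minimal XOR-based two-share sketch (a generic construction, not Rushmore's TrustZone/IPU pipeline): decryption is a single XOR pass, yet either share alone is uniformly random.

```python
import secrets

# Split an image into a random share and a ciphertext share; XOR-combining
# the two shares recovers the pixels.
def split(image: bytes) -> tuple[bytes, bytes]:
    share1 = secrets.token_bytes(len(image))            # one-time random pad
    share2 = bytes(a ^ b for a, b in zip(image, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

pixels = bytes(range(16))        # toy 16-pixel grayscale "image"
s1, s2 = split(pixels)
recovered = combine(s1, s2)      # a single cheap XOR pass per frame
```

The per-pixel cost is one XOR, which is why such schemes can sustain high frame rates on constrained secure-world code.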
A smart agent guided contactless data collection system amid a pandemic
Murtadha M. N. Aldeer, Justin Yu, Tahiya Chowdhury, Joseph Florentine, Jakub Kolodziejski, R. Howard, R. Martin, Jorge Ortiz
The COVID-19 pandemic has impacted academic life in different ways. In the mobile and pervasive computing community, collecting data for the evaluation of human-sensing systems became a struggle. An automated, contactless solution for collecting data from users at home is one way to help user-centric studies continue. In this poster, we present a portable system for remote, in-home data collection. The system is powered by a Raspberry Pi and input peripherals (a camera, a microphone, and a wireless receiver). Our system uses a speech interface for text-to-speech and speech-to-text conversions. The system acts as a voice-based "smart agent" that guides the user during an experiment session. We aim to use our system to collect data from a set of smart pill bottles that we previously designed for medication adherence monitoring [1] and user identification [3].
DOI: 10.1145/3458864.3466908 (published 2021-06-24)
Citations: 0
How much battery does dark mode save?: an accurate OLED display power profiler for modern smartphones
Pranab Dash, Y. C. Hu
By omitting external lighting, OLED displays significantly reduce power draw compared to their predecessor, LCD, and have gained wide adoption in modern smartphones. The real potential of OLED in saving phone battery drain lies in exploiting app UI color design, i.e., designing app UIs to use pixel colors that result in low OLED display power draw. In this paper, we design and implement an accurate per-frame OLED display power profiler, PFOP, that helps developers gain insight into the impact of different app UI designs on OLED power draw, and an enhanced Android Battery that helps phone users understand and manage phone display energy drain, for example, from different app and display configurations such as dark mode and screen brightness. A major challenge in designing both tools is to develop an accurate and robust OLED display power model. We experimentally show that the linear-regression-based OLED power models developed in the past decade cannot capture the unique behavior of OLED display hardware in modern smartphones, which have a large color space, and propose a new piecewise power model that achieves much better modeling accuracy than the prior art by applying linear regression in each small region of the vast color space. Using the two tools, we performed, to our knowledge, the first power-saving measurement of the emerging dark mode for a set of popular Google Android apps.
DOI: 10.1145/3458864.3467682 (published 2021-06-24)
Citations: 14
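The piecewise idea (a separate linear fit per small region of the color cube instead of one global regression) can be sketched as follows; the 4×4×4 granularity and the synthetic "measurements" are illustrative, not the paper's data or region sizes.

```python
import numpy as np

BINS = 4  # partition the RGB cube into 4 x 4 x 4 = 64 regions (illustrative)

def region_of(rgb):
    r, g, b = (min(int(c * BINS / 256), BINS - 1) for c in rgb)
    return r * BINS * BINS + g * BINS + b

def fit_piecewise(samples, powers):
    # Fit an independent linear model (power ~ 1 + R + G + B) in each region.
    models = {}
    for reg in set(map(region_of, samples)):
        idx = [i for i, s in enumerate(samples) if region_of(s) == reg]
        X = np.array([[1.0, *samples[i]] for i in idx])
        y = np.array([powers[i] for i in idx])
        models[reg], *_ = np.linalg.lstsq(X, y, rcond=None)
    return models

def predict(models, rgb):
    w = models[region_of(rgb)]
    return float(w @ np.array([1.0, *rgb]))

# Exactly linear synthetic "measurements" inside one region of the cube.
samples = [(10, 10, 10), (50, 20, 30), (30, 60, 5), (5, 5, 60), (40, 40, 40)]
powers = [0.1 * r + 0.2 * g + 0.3 * b for r, g, b in samples]
models = fit_piecewise(samples, powers)
```

The gain over a single global model comes from letting each region pick its own coefficients, which accommodates the non-linear behavior the paper observes across the large color space.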
AR game traffic characterization: a case of Pokémon Go in a flash crowd event
Hsi Chen, Ruey-Tzer Hsu, Ying-Chiao Chen, Wei-Chen Hsu, Polly Huang
Latency is a major obstacle to the practical use of augmented reality (AR) in mobile apps such as navigation and gaming. A string of recent work proposes to offload part of the AR-related processing pipeline to the edge [8]. One pitfall in these studies is the (simplified) assumption about the network delay. As a reality check, and to gather insights toward realizing AR in real time, we seek in this work a better understanding of how a popular AR game, Pokémon Go, delivers its data in situ.
DOI: 10.1145/3458864.3466914 (published 2021-06-24)
Citations: 5
ClusterFL
Xiaomin Ouyang, Zhiyuan Xie, Jiayu Zhou, Jianwei Huang, Guoliang Xing
Federated Learning (FL) has recently received significant interest thanks to its capability of protecting data privacy. However, existing FL paradigms yield unsatisfactory performance for a wide class of human activity recognition (HAR) applications since they are oblivious to the intrinsic relationship between the data of different users. We propose ClusterFL, a similarity-aware federated learning system that can provide high model accuracy and low communication overhead for HAR applications. ClusterFL features a novel clustered multi-task federated learning framework that maximizes the training accuracy of multiple learned models while automatically capturing the intrinsic clustering relationship among the data of different nodes. Based on the learned cluster relationship, ClusterFL can efficiently drop out the nodes that converge slower or have little correlation with other nodes in each cluster, significantly speeding up convergence while maintaining accuracy. We evaluate the performance of ClusterFL on an NVIDIA edge testbed using four new HAR datasets collected from a total of 145 users. The results show that ClusterFL outperforms several state-of-the-art FL paradigms in terms of overall accuracy, and saves more than 50% of communication overhead at the expense of negligible accuracy degradation.
DOI: 10.1145/3458864.3467681 (published 2021-06-24)
Citations: 3
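The clustered-aggregation intuition (average model updates within similar groups rather than globally, as FedAvg would) can be sketched with a greedy cosine-similarity grouping; ClusterFL actually learns the clustering jointly with training, so this grouping rule is a simplified stand-in.

```python
import numpy as np

# Group client updates by cosine similarity to the first member of each
# cluster, then average within each cluster (instead of one global average).
def cluster_and_average(updates, threshold=0.9):
    unit = [u / np.linalg.norm(u) for u in updates]
    clusters = []                       # each cluster: list of client indices
    for i, u in enumerate(unit):
        for c in clusters:
            if float(u @ unit[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return [np.mean([updates[i] for i in c], axis=0) for c in clusters]

# Two similar clients and one dissimilar one -> two cluster models.
ups = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
models = cluster_and_average(ups)
```

The third client, whose update is orthogonal to the first two, ends up with its own model rather than diluting theirs.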
OESense: employing occlusion effect for in-ear human sensing
Dong Ma, Andrea Ferlini, C. Mascolo
Smart earbuds are recognized as a new wearable platform for personal-scale human motion sensing. However, due to the interference from head movement or background noise, commonly-used modalities (e.g. accelerometer and microphone) fail to reliably detect both intense and light motions. To obviate this, we propose OESense, an acoustic-based in-ear system for general human motion sensing. The core idea behind OESense is the joint use of the occlusion effect (i.e., the enhancement of low-frequency components of bone-conducted sounds in an occluded ear canal) and an inward-facing microphone, which naturally boosts the sensing signal and suppresses external interference. We prototype OESense as an earbud and evaluate its performance on three representative applications, i.e., step counting, activity recognition, and hand-to-face gesture interaction. With data collected from 31 subjects, we show that OESense achieves 99.3% step counting recall, 98.3% recognition recall for 5 activities, and 97.0% recall for five tapping gestures on the human face. We also demonstrate that OESense is compatible with earbuds' fundamental functionalities (e.g. music playback and phone calls). In terms of energy, OESense consumes 746 mW during data recording and recognition, and it has a response latency of 40.85 ms for gesture recognition. Our analysis indicates such overhead is acceptable, and OESense has the potential to be integrated into future earbuds.
DOI: 10.1145/3458864.3467680 (published 2021-06-16)
Citations: 28
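The occlusion-effect idea behind OESense suggests a simple signal chain for step counting: low-pass filter the inward-facing microphone signal (the occlusion effect concentrates bone-conducted energy at low frequencies), then count peaks in the filtered envelope. The sketch below is a minimal illustration of that chain, not OESense's actual algorithm — the filter constant, threshold, refractory window, and synthetic trace are all illustrative assumptions:

```python
import math

def lowpass(samples, alpha=0.1):
    """Single-pole IIR low-pass filter: keeps the low-frequency band that
    the occlusion effect boosts, attenuating higher-frequency interference."""
    out, y = [], 0.0
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

def count_steps(samples, threshold=0.5, refractory=50):
    """Count peaks in the low-passed envelope; `refractory` (in samples)
    suppresses double-counting within a single step period."""
    filtered = lowpass([abs(s) for s in samples])
    steps, last = 0, -refractory
    for i, v in enumerate(filtered):
        if v > threshold and i - last >= refractory:
            steps += 1
            last = i
    return steps

# Synthetic trace: five 5 Hz bursts (simulated footfalls) separated by quiet gaps,
# at an assumed 100 Hz sampling rate.
fs = 100
trace = []
for step in range(5):
    trace += [math.sin(2 * math.pi * 5 * t / fs) for t in range(20)]  # one burst
    trace += [0.0] * 80                                               # quiet gap
print(count_steps(trace))  # → 5
```

In a real earbud the thresholds would be calibrated per user and per sampling rate; the point is only that occlusion-boosted low-frequency energy makes even this naive pipeline separable from background interference.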
FastZIP: faster and more secure zero-interaction pairing FastZIP:更快、更安全的零交互配对
Mikhail Fomichev, Julia Hesse, Lars Almon, Timm Lippert, Jun Han, M. Hollick
With the advent of the Internet of Things (IoT), establishing a secure channel between smart devices becomes crucial. Recent research proposes zero-interaction pairing (ZIP), which enables pairing without user assistance by utilizing devices' physical context (e.g., ambient audio) to obtain a shared secret key. The state-of-the-art ZIP schemes suffer from three limitations: (1) prolonged pairing time (i.e., minutes or hours), (2) vulnerability to brute-force offline attacks on a shared key, and (3) susceptibility to attacks caused by predictable context (e.g., replay attack) because they rely on limited entropy of physical context to protect a shared key. We address these limitations, proposing FastZIP, a novel ZIP scheme that significantly reduces pairing time while preventing offline and predictable context attacks. In particular, we adapt a recently introduced Fuzzy Password-Authenticated Key Exchange (fPAKE) protocol and utilize sensor fusion, maximizing their advantages. We instantiate FastZIP for intra-car device pairing to demonstrate its feasibility and show how the design of FastZIP can be adapted to other ZIP use cases. We implement FastZIP and evaluate it by driving four cars for a total of 800 km. We achieve up to three times shorter pairing time compared to the state-of-the-art ZIP schemes while assuring robust security with adversarial error rates below 0.5%.
DOI: 10.1145/3458864.3467883 · Published: 2021-06-09
Citations: 15
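ZIP schemes like FastZIP's derive a key from a context fingerprint that each device quantizes independently from its own sensor stream; fPAKE then lets pairing succeed even when a few fingerprint bits disagree. The sketch below illustrates only the quantize-and-compare idea — the windowed-median quantizer, the Hamming-distance check standing in for fPAKE, and the error budget are illustrative assumptions, not FastZIP's actual design:

```python
import statistics

def quantize(samples, window=4):
    """Turn a sensor trace into a binary fingerprint: one bit per window,
    set when the window mean exceeds the trace median (a toy scheme;
    FastZIP's quantization and the fPAKE protocol are more involved)."""
    med = statistics.median(samples)
    bits = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        bits.append(1 if sum(w) / window > med else 0)
    return bits

def similar_enough(fp_a, fp_b, max_mismatch=2):
    """fPAKE succeeds when fingerprints agree within an error budget;
    here modeled as a plain Hamming-distance threshold."""
    dist = sum(a != b for a, b in zip(fp_a, fp_b))
    return dist <= max_mismatch

# Two devices observing the same physical context (e.g., car vibration),
# one copy perturbed by sensor noise.
trace_a = [0, 0, 0, 0, 5, 5, 5, 5, 0, 0, 0, 0, 6, 6, 6, 6]
trace_b = [0, 1, 0, 0, 5, 4, 5, 5, 0, 0, 1, 0, 6, 6, 5, 6]  # noisy copy
fp_a, fp_b = quantize(trace_a), quantize(trace_b)
print(fp_a, fp_b, similar_enough(fp_a, fp_b))  # → [0, 1, 0, 1] [0, 1, 0, 1] True
```

The security-relevant part FastZIP actually contributes — preventing the comparison from leaking the fingerprint to an offline attacker — lives inside fPAKE, which a plain Hamming check does not provide.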
Journal
Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services