
Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services: Latest Publications

Commodity-level BLE backscatter
Ma Zhang, Si Chen, Jia Zhao, Wei Gong
The communication reliability of state-of-the-art Bluetooth Low Energy (BLE) backscatter systems is fundamentally limited by their modulation schemes because the Binary Frequency Shift Keying (BFSK) modulation of the tag does not exactly match commodity BLE receivers designed for Gaussian Frequency Shift Keying (GFSK) modulated signals with high bandwidth efficiency. Gaussian pulse shaping is a missing piece in state-of-the-art BLE backscatter systems. Inspired by active BLE and applying calculus, we present IBLE, a BLE backscatter communication system that achieves full compatibility with commodity BLE devices. IBLE leverages the fact that phase shift is the integral of frequency over time to build a reliable physical layer for BLE backscatter. IBLE uses instantaneous phase shift (IPS) modulation, GFSK modulation, and optional FEC coding to improve the reliability of BLE backscatter communication to the commodity level. We prototype IBLE using various commodity BLE devices and a customized tag with an FPGA. Empirical results demonstrate that IBLE achieves PERs of 0.04% and 0.68% at uplink distances of 2 m and 14 m, respectively, which are 280x and 70x lower than the PERs of the state-of-the-art system RBLE. On the premise of meeting the BER requirements of the BLE specification, the uplink range of IBLE is 20 m. Since BLE devices are everywhere, IBLE is readily deployable in everyday IoT applications.
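The phase-integration property that IBLE builds on can be illustrated with a short sketch. This is not IBLE's implementation — the parameters below (samples per bit, filter span, the BT product) are illustrative — it only shows how a Gaussian-shaped frequency trajectory is accumulated (integrated) into a phase trajectory, the relationship underlying GFSK and IPS modulation:

```python
import math

def gaussian_taps(bt=0.5, sps=8, span=3):
    """Normalized Gaussian pulse-shaping filter for a given BT product."""
    # standard deviation (in samples) of the Gaussian impulse response
    sigma = math.sqrt(math.log(2)) / (2 * math.pi * bt) * sps
    n = span * sps
    taps = [math.exp(-0.5 * ((i - n / 2) / sigma) ** 2) for i in range(n + 1)]
    total = sum(taps)
    return [t / total for t in taps]

def gfsk_phase(bits, sps=8, h=0.5, bt=0.5):
    """Phase trajectory of a GFSK signal: shape the NRZ frequency deviation
    with a Gaussian filter, then integrate frequency over time."""
    nrz = [1.0 if b else -1.0 for b in bits for _ in range(sps)]
    taps = gaussian_taps(bt, sps)
    # full convolution: Gaussian pulse shaping of the frequency deviation
    shaped = [sum(taps[k] * nrz[i - k]
                  for k in range(len(taps)) if 0 <= i - k < len(nrz))
              for i in range(len(nrz) + len(taps) - 1)]
    # phase shift is the integral of frequency over time (cumulative sum)
    phase, traj = 0.0, []
    for f in shaped:
        phase += math.pi * h * f / sps   # h = 0.5 is the BLE modulation index
        traj.append(phase)
    return traj
```

Each bit contributes a net phase shift of ±π·h, so a run of identical bits accumulates phase linearly — e.g. four 1-bits end at 4·π·0.5 = 2π, and a 1-bit followed by a 0-bit cancels back to zero.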
DOI: 10.1145/3458864.3466865 · Published: 2021-06-24
Citations: 22
Lost and Found!: associating target persons in camera surveillance footage with smartphone identifiers
Hansi Liu, Abrar Alali, Mohamed Ibrahim, Hongyu Li, M. Gruteser, Shubham Jain, Kristin J. Dana, A. Ashok, Bin Cheng, Hongsheng Lu
We demonstrate an application of finding target persons on a surveillance video. Each visually detected participant is tagged with a smartphone ID and the target person with the query ID is highlighted. This work is motivated by the fact that establishing associations between subjects observed in camera images and messages transmitted from their wireless devices can enable fast and reliable tagging. This is particularly helpful when target pedestrians need to be found on public surveillance footage, without the reliance on facial recognition. The underlying system uses a multi-modal approach that leverages WiFi Fine Timing Measurements (FTM) and inertial sensor (IMU) data to associate each visually detected individual with a corresponding smartphone identifier. These smartphone measurements are combined strategically with RGB-D information from the camera, to learn affinity matrices using a multi-modal deep learning network.
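The affinity-matrix-plus-assignment structure can be sketched minimally. The paper learns affinities with a multi-modal deep network; the hand-crafted score below (agreement between a camera-estimated depth trace and an FTM range trace, both hypothetical inputs) is only a stand-in to show the association step:

```python
def affinity(cam_trace, ftm_trace):
    """Higher when a camera depth trace and an FTM range trace agree."""
    diffs = [abs(c - f) for c, f in zip(cam_trace, ftm_trace)]
    return -sum(diffs) / len(diffs)

def associate(cam_traces, ftm_traces):
    """Greedy one-to-one assignment on the affinity matrix:
    repeatedly take the highest remaining entry."""
    A = [[affinity(c, f) for f in ftm_traces] for c in cam_traces]
    cells = sorted(((A[i][j], i, j)
                    for i in range(len(cam_traces))
                    for j in range(len(ftm_traces))), reverse=True)
    pairs, used_c, used_f = {}, set(), set()
    for a, i, j in cells:
        if i not in used_c and j not in used_f:
            pairs[i] = j
            used_c.add(i)
            used_f.add(j)
    return pairs  # maps camera-track index -> smartphone index
```

A proper system would use optimal (Hungarian) assignment rather than this greedy pass, but the matrix-then-match shape is the same.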
DOI: 10.1145/3458864.3466904 · Published: 2021-06-24
Citations: 2
SCOPE
Leonardo Bonati, Salvatore D’oro, S. Basagni, T. Melodia
Published: 2021-06-24
Citations: 0
A do-it-yourself computer vision based robotic ball throw trainer
Bronson Tharpe, A. Bourgeois, A. Ashok
We demonstrate a self-training system for sports that involve throwing a ball. We design do-it-yourself (DIY) machinery that can be assembled from off-the-shelf items and integrates computer vision to visually track ball-throw accuracy. In this work, we demonstrate a system that can identify whether the ball went through the hoop and, approximately, which of the hoop's inner regions it passed through. We envision that this preliminary design sets the foundation for a complete DIY sports IoT system that involves a hula hoop, a Raspberry Pi, a PiCamera, and an LED strip, along with advanced ball placement and dynamics tracking.
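The make/miss and inner-region check reduces to simple geometry once the ball's crossing point and the hoop circle are available in image coordinates. A minimal sketch under that assumption (the demo's actual detection pipeline is not described in the abstract, so this is illustrative only):

```python
import math

def hoop_region(ball_xy, hoop_center, hoop_radius):
    """Classify a ball crossing: None if it missed the hoop circle,
    otherwise 'center' or the quadrant of the inner region it passed through."""
    dx = ball_xy[0] - hoop_center[0]
    dy = ball_xy[1] - hoop_center[1]
    r = math.hypot(dx, dy)
    if r > hoop_radius:
        return None                       # missed the hoop entirely
    if r <= hoop_radius / 3:
        return "center"                   # inner third counts as a clean make
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "top" if dy > 0 else "bottom"
```

The one-third threshold and the five-region split are arbitrary choices for the sketch; any partition of the hoop's interior would slot in the same way.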
DOI: 10.1145/3458864.3466909 · Published: 2021-06-24
Citations: 0
Encrypted cloud photo storage using Google Photos
John S. Koh, Jason Nieh, S. Bellovin
Cloud photo services are widely used for persistent, convenient, and often free photo storage, which is especially useful for mobile devices. As users store more and more photos in the cloud, significant privacy concerns arise because even a single compromise of a user's credentials gives attackers unfettered access to all of the user's photos. We have created Easy Secure Photos (ESP) to enable users to protect their photos on cloud photo services such as Google Photos. ESP introduces a new client-side encryption architecture that includes a novel format-preserving image encryption algorithm, an encrypted thumbnail display mechanism, and a usable key management system. ESP encrypts image data such that the result is still a standard-format image like JPEG that is compatible with cloud photo services. ESP efficiently generates and displays encrypted thumbnails for fast and easy browsing of photo galleries from trusted user devices. ESP's key management makes it simple to authorize multiple user devices to view encrypted image content via a process similar to device pairing, but using the cloud photo service as a QR code communication channel. We have implemented ESP in a popular Android photos app for use with Google Photos and demonstrate that it is easy to use and provides encryption functionality transparently to users, maintains good interactive performance and image quality while providing strong privacy guarantees, and retains the sharing and storage benefits of Google Photos without any changes to the cloud service.
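The format-preserving property can be illustrated with a toy keyed permutation of image blocks. This is emphatically not ESP's algorithm — it only demonstrates the idea that the ciphertext remains a valid sequence of blocks, so a standard image encoder/decoder still accepts it:

```python
import random

def _perm(n, key):
    """Deterministic keyed permutation of n block indices."""
    rng = random.Random(key)
    p = list(range(n))
    rng.shuffle(p)
    return p

def encrypt_blocks(blocks, key):
    """Shuffle image blocks with a keyed permutation; the result is still a
    well-formed block sequence, so it survives standard re-encoding."""
    p = _perm(len(blocks), key)
    return [blocks[i] for i in p]

def decrypt_blocks(blocks, key):
    """Invert the keyed permutation to recover the original block order."""
    p = _perm(len(blocks), key)
    out = [None] * len(blocks)
    for dst, src in enumerate(p):
        out[src] = blocks[dst]
    return out
```

A real format-preserving image cipher must also survive lossy JPEG re-compression by the cloud service, which a naive permutation alone does not address; that robustness is part of what ESP's actual algorithm provides.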
DOI: 10.1145/3458864.3468220 · Published: 2021-06-24
Citations: 5
LATTE: online MU-MIMO grouping for video streaming over commodity wifi
H. Pasandi, T. Nadeem
In this paper, we present LATTE, a novel framework that proposes MU-MIMO group selection optimization for multi-user video streaming over IEEE 802.11ac. Taking a cross-layer approach, LATTE first optimizes the MU-MIMO user group selection for the users with the same characteristics in the PHY/MAC layer. It then optimizes the video bitrate for each group accordingly. We present our design and its evaluation on smartphones over 802.11ac WiFi.
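The group-then-adapt structure can be sketched with a hypothetical greedy grouping on per-user achievable PHY rates, standing in for LATTE's actual PHY/MAC-layer MU-MIMO optimization (the tolerance and overhead factors below are invented for illustration):

```python
def group_users(rates, tol=0.2):
    """Greedily group users whose achievable PHY rates (Mbps) lie within a
    fractional tolerance of the group's slowest member."""
    groups = []
    for uid, r in sorted(rates.items(), key=lambda kv: kv[1]):
        if groups and r <= groups[-1][0][1] * (1 + tol):
            groups[-1].append((uid, r))   # similar enough: join last group
        else:
            groups.append([(uid, r)])     # too different: start a new group
    return groups

def group_bitrate(group, overhead=0.8):
    """Video bitrate for a group, limited by its slowest member and scaled
    by an assumed MAC-overhead factor."""
    return min(r for _, r in group) * overhead
```

Grouping users with similar characteristics before picking a shared bitrate avoids the slowest user in a heterogeneous group dragging down everyone's video quality, which is the cross-layer intuition the abstract describes.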
DOI: 10.1145/3458864.3466913 · Published: 2021-06-24
Citations: 10
nn-Meter: towards accurate latency prediction of deep-learning model inference on diverse edge devices
L. Zhang, S. Han, Jianyu Wei, Ningxin Zheng, Ting Cao, Yuqing Yang, Yunxin Liu
With the recent trend of on-device deep learning, inference latency has become a crucial metric in running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN model inference is highly desirable for many tasks where measuring the latency on real devices is infeasible or too costly, such as searching for efficient DNN models with latency constraints from a huge model-design space. Yet it is very challenging and existing approaches fail to achieve a high accuracy of prediction, due to the varying model-inference latency caused by the runtime optimizations on diverse edge devices. In this paper, we propose and develop nn-Meter, a novel and efficient system to accurately predict the inference latency of DNN models on diverse edge devices. The key idea of nn-Meter is dividing a whole model inference into kernels, i.e., the execution units on a device, and conducting kernel-level prediction. nn-Meter builds atop two key techniques: (i) kernel detection to automatically detect the execution unit of model inference via a set of well-designed test cases; and (ii) adaptive sampling to efficiently sample the most beneficial configurations from a large space to build accurate kernel-level latency predictors. Implemented on three popular platforms of edge hardware (mobile CPU, mobile GPU, and Intel VPU) and evaluated using a large dataset of 26,000 models, nn-Meter significantly outperforms the prior state-of-the-art.
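The kernel-level prediction idea reduces to: split the model into kernels, predict each kernel's latency on the target device, and sum. A sketch with made-up linear cost models standing in for nn-Meter's learned per-kernel predictors (all names and coefficients here are hypothetical):

```python
def predict_model_latency(kernels, predictors):
    """Kernel-level latency prediction: a model is a sequence of
    (kernel_name, config) pairs; total latency is the sum of the
    per-kernel predictions for the target device."""
    return sum(predictors[name](config) for name, config in kernels)

# Toy per-kernel predictors for one hypothetical device. nn-Meter would fit
# these from adaptively sampled on-device measurements; linear cost models
# are used here purely for illustration.
PREDICTORS = {
    "conv": lambda c: 0.01 * c["flops"],
    "relu": lambda c: 0.001 * c["elements"],
    "pool": lambda c: 0.002 * c["elements"],
}
```

The point of predicting at kernel granularity rather than per operator is that runtime optimizations (e.g. operator fusion) change what actually executes; the detected kernels are the real execution units whose latencies compose additively.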
DOI: 10.1145/3458864.3467882 · Published: 2021-06-24
Citations: 71
MotionCompass
Yan He, Qiuye He, Song Fang, Yao Liu
Wireless security cameras are integral components of security systems used by military installations, corporations, and, due to their increased affordability, many private homes. These cameras commonly employ motion sensors to identify that something is occurring in their fields of vision before starting to record and notifying the property owner of the activity. In this paper, we discover that the motion sensing action can disclose the location of the camera through a novel wireless camera localization technique we call MotionCompass. In short, a user who aims to avoid surveillance can find a hidden camera by creating motion stimuli and sniffing wireless traffic for a response to those stimuli. With the motion trajectories within the motion detection zone, the exact location of the camera can then be computed. We develop an Android app to implement MotionCompass. Our extensive experiments using the developed app and 18 popular wireless security cameras demonstrate that for cameras with one motion sensor, MotionCompass can attain a mean localization error of around 5 cm in less than 140 seconds. This localization technique builds upon existing work that detects the existence of hidden cameras, to pinpoint their exact location and area of surveillance.
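The final geometric step — fixing the camera where bearing lines derived from motion trajectories cross — can be sketched as plain line intersection. The geometry below is illustrative only, not MotionCompass's full procedure for extracting those bearings from the motion-detection zone:

```python
import math

def intersect_bearings(p1, a1, p2, a2):
    """Intersect two bearing rays, each given as (point, angle in radians).
    Returns the crossing point, or None if the bearings are parallel."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product
    if abs(denom) < 1e-12:
        return None                          # parallel bearings: no fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom   # distance along the first ray
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

For example, bearings of 45° from the origin and 135° from (2, 0) cross at (1, 1); with noisy real trajectories, one would intersect many bearing pairs and average or least-squares the result.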
DOI: 10.1145/3458864.3467683 · Published: 2021-06-24
Citations: 9
Microstructure-guided spatial sensing for low-power IoT
Nakul Garg, Yang Bai, Nirupam Roy
This demonstration presents a working prototype of Owlet, an alternative design for spatial sensing of acoustic signals. To overcome the fundamental limitations in form factor, power consumption, and hardware requirements of array-based techniques, Owlet explores the interaction of sound waves with acoustic structures for sensing. By combining passive acoustic microstructures with microphones, we envision achieving the same functionalities as microphone and speaker arrays with less power consumption and in a smaller form factor. Our design uses a 3D-printed metamaterial structure over a microphone to introduce a carefully designed spatial signature into the recorded signal. The Owlet prototype shows 3.6° median error in Direction-of-Arrival (DoA) estimation and 10 cm median error in source localization while using a 1.5cm × 1.3cm acoustic structure for sensing.
DOI: 10.1145/3458864.3466906 · Published: 2021-06-24
Citations: 5
Owlet: enabling spatial information in ubiquitous acoustic devices
Nakul Garg, Yang Bai, Nirupam Roy
This paper presents a low-power and miniaturized design for acoustic direction-of-arrival (DoA) estimation and source localization, called Owlet. The required aperture, power consumption, and hardware complexity of traditional array-based spatial sensing techniques make them unsuitable for small and power-constrained IoT devices. Aiming to overcome these fundamental limitations, Owlet explores acoustic microstructures for extracting spatial information. It uses a carefully designed 3D-printed metamaterial structure that covers the microphone. The structure embeds a direction-specific signature in the recorded sounds. The Owlet system learns the directional signatures through a one-time in-lab calibration. The system uses an additional microphone as a reference channel and develops techniques that eliminate environmental variation, making the design robust to noise and multipath in arbitrary operating locations. The Owlet prototype shows 3.6° median error in DoA estimation and 10 cm median error in source localization while using a 1.5cm × 1.3cm acoustic structure for sensing. The prototype consumes less than one-hundredth of the energy required by a traditional microphone array to achieve similar DoA estimation accuracy. Owlet opens up possibilities of low-power sensing through 3D-printed passive structures.
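The calibrate-then-match step can be sketched as nearest-signature classification: each calibrated direction stores a spectral signature, and an observed (reference-normalized) spectrum is assigned to the best-matching one. Cosine similarity is an assumed metric here, not necessarily Owlet's:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length spectra."""
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return num / den if den else 0.0

def estimate_doa(spectrum, signatures):
    """Pick the calibrated direction (degrees) whose signature best matches
    the observed spectrum. `signatures` maps angle -> calibration spectrum."""
    return max(signatures, key=lambda ang: cosine(spectrum, signatures[ang]))
```

In the real system, the calibration spectra come from the one-time in-lab procedure and the observation is divided by the reference microphone's spectrum to cancel source and environment effects; the matching itself stays this simple.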
DOI: 10.1145/3458864.3467880 · Published: 2021-06-24
Citations: 20