Title: Commodity-level BLE backscatter
Authors: Ma Zhang, Si Chen, Jia Zhao, Wei Gong
DOI: 10.1145/3458864.3466865
The communication reliability of state-of-the-art Bluetooth Low Energy (BLE) backscatter systems is fundamentally limited by their modulation schemes: the Binary Frequency Shift Keying (BFSK) modulation used by the tag does not exactly match commodity BLE receivers, which are designed for bandwidth-efficient Gaussian Frequency Shift Keying (GFSK) signals. Gaussian pulse shaping is the missing piece in state-of-the-art BLE backscatter systems. Inspired by active BLE radios and a basic fact from calculus, we present IBLE, a BLE backscatter communication system that achieves full compatibility with commodity BLE devices. IBLE leverages the fact that phase shift is the integral of frequency over time to build a reliable physical layer for BLE backscatter. IBLE uses instantaneous phase shift (IPS) modulation, GFSK modulation, and optional FEC coding to raise the reliability of BLE backscatter communication to the commodity level. We prototype IBLE using various commodity BLE devices and a customized FPGA-based tag. Empirical results demonstrate that IBLE achieves PERs of 0.04% and 0.68% at uplink distances of 2 m and 14 m, respectively, which are 280x and 70x lower than the PERs of the state-of-the-art system RBLE. While meeting the BER requirements of the BLE specification, IBLE achieves an uplink range of 20 m. Since BLE devices are everywhere, IBLE is readily deployable in everyday IoT applications.
Title: Lost and Found!: associating target persons in camera surveillance footage with smartphone identifiers
Authors: Hansi Liu, Abrar Alali, Mohamed Ibrahim, Hongyu Li, M. Gruteser, Shubham Jain, Kristin J. Dana, A. Ashok, Bin Cheng, Hongsheng Lu
DOI: 10.1145/3458864.3466904
We demonstrate an application that finds target persons in surveillance video. Each visually detected participant is tagged with a smartphone ID, and the target person carrying the queried ID is highlighted. This work is motivated by the fact that establishing associations between subjects observed in camera images and messages transmitted from their wireless devices can enable fast and reliable tagging. This is particularly helpful when target pedestrians need to be found in public surveillance footage without relying on facial recognition. The underlying system uses a multi-modal approach that leverages WiFi Fine Timing Measurements (FTM) and inertial sensor (IMU) data to associate each visually detected individual with a corresponding smartphone identifier. These smartphone measurements are strategically combined with RGB-D information from the camera to learn affinity matrices using a multi-modal deep learning network.
Title: SCOPE
Authors: Leonardo Bonati, Salvatore D’oro, S. Basagni, T. Melodia
Title: A do-it-yourself computer vision based robotic ball throw trainer
Authors: Bronson Tharpe, A. Bourgeois, A. Ashok
DOI: 10.1145/3458864.3466909
We demonstrate a self-training system for sports that involve throwing a ball. We design do-it-yourself (DIY) machinery that can be assembled from off-the-shelf items and integrates computer vision to visually track ball-throw accuracy. In this work, we demonstrate a system that can identify whether the ball went through the hoop and, approximately, which of the hoop's inner regions it passed through. We envision that this preliminary design sets the foundation for a complete DIY sports IoT system built from a hula hoop, a Raspberry Pi, a PiCamera, and an LED strip, along with advanced ball placement and dynamics tracking.
Title: Encrypted cloud photo storage using Google photos
Authors: John S. Koh, Jason Nieh, S. Bellovin
DOI: 10.1145/3458864.3468220
Cloud photo services are widely used for persistent, convenient, and often free photo storage, which is especially useful for mobile devices. As users store more and more photos in the cloud, significant privacy concerns arise because even a single compromise of a user's credentials gives attackers unfettered access to all of the user's photos. We have created Easy Secure Photos (ESP) to enable users to protect their photos on cloud photo services such as Google Photos. ESP introduces a new client-side encryption architecture that includes a novel format-preserving image encryption algorithm, an encrypted thumbnail display mechanism, and a usable key management system. ESP encrypts image data such that the result is still a standard-format image, such as JPEG, that remains compatible with cloud photo services. ESP efficiently generates and displays encrypted thumbnails for fast and easy browsing of photo galleries from trusted user devices. ESP's key management makes it simple to authorize multiple user devices to view encrypted image content via a process similar to device pairing, using the cloud photo service as a QR-code communication channel. We have implemented ESP in a popular Android photos app for use with Google Photos and demonstrate that it is easy to use, provides encryption functionality transparently to users, maintains good interactive performance and image quality while providing strong privacy guarantees, and retains the sharing and storage benefits of Google Photos without any changes to the cloud service.
Title: LATTE: online MU-MIMO grouping for video streaming over commodity wifi
Authors: H. Pasandi, T. Nadeem
DOI: 10.1145/3458864.3466913
In this paper, we present LATTE, a novel framework that optimizes MU-MIMO group selection for multi-user video streaming over IEEE 802.11ac. Taking a cross-layer approach, LATTE first optimizes MU-MIMO user group selection, grouping users with similar PHY/MAC-layer characteristics. It then optimizes the video bitrate for each group accordingly. We present our design and its evaluation on smartphones over 802.11ac WiFi.
Title: nn-Meter: towards accurate latency prediction of deep-learning model inference on diverse edge devices
Authors: L. Zhang, S. Han, Jianyu Wei, Ningxin Zheng, Ting Cao, Yuqing Yang, Yunxin Liu
DOI: 10.1145/3458864.3467882
With the recent trend of on-device deep learning, inference latency has become a crucial metric for running Deep Neural Network (DNN) models on various mobile and edge devices. To this end, latency prediction of DNN model inference is highly desirable for many tasks where measuring the latency on real devices is infeasible or too costly, such as searching for efficient DNN models under latency constraints in a huge model-design space. Yet this is very challenging, and existing approaches fail to achieve high prediction accuracy because of the varying model-inference latency caused by runtime optimizations on diverse edge devices. In this paper, we propose and develop nn-Meter, a novel and efficient system to accurately predict the inference latency of DNN models on diverse edge devices. The key idea of nn-Meter is to divide a whole model inference into kernels, i.e., the execution units on a device, and conduct kernel-level prediction. nn-Meter builds on two key techniques: (i) kernel detection, which automatically detects the execution units of model inference via a set of well-designed test cases; and (ii) adaptive sampling, which efficiently samples the most beneficial configurations from a large space to build accurate kernel-level latency predictors. Implemented on three popular edge hardware platforms (mobile CPU, mobile GPU, and Intel VPU) and evaluated using a large dataset of 26,000 models, nn-Meter significantly outperforms the prior state of the art.
Title: MotionCompass
Authors: Yan He, Qiuye He, Song Fang, Yao Liu
DOI: 10.1145/3458864.3467683
Wireless security cameras are integral components of security systems used by military installations, corporations, and, due to their increased affordability, many private homes. These cameras commonly employ motion sensors to identify that something is occurring in their field of vision before they start recording and notify the property owner of the activity. In this paper, we discover that this motion-sensing action can disclose the location of the camera through a novel wireless camera localization technique we call MotionCompass. In short, a user who aims to avoid surveillance can find a hidden camera by creating motion stimuli and sniffing wireless traffic for a response to those stimuli. With the motion trajectories within the motion detection zone, the exact location of the camera can then be computed. We develop an Android app to implement MotionCompass. Our extensive experiments with the developed app and 18 popular wireless security cameras demonstrate that, for cameras with one motion sensor, MotionCompass can attain a mean localization error of around 5 cm in less than 140 seconds. This localization technique builds upon existing work that detects the existence of hidden cameras, pinpointing their exact location and area of surveillance.
Title: Microstructure-guided spatial sensing for low-power IoT
Authors: Nakul Garg, Yang Bai, Nirupam Roy
DOI: 10.1145/3458864.3466906
This demonstration presents a working prototype of Owlet, an alternative design for spatial sensing of acoustic signals. To overcome the fundamental limitations of array-based techniques in form factor, power consumption, and hardware requirements, Owlet explores the interaction of waves with acoustic structures for sensing. By combining passive acoustic microstructures with microphones, we envision achieving the same functionality as microphone and speaker arrays with less power consumption and in a smaller form factor. Our design places a 3D-printed metamaterial structure over a microphone to introduce a carefully designed spatial signature into the recorded signal. The Owlet prototype shows a 3.6° median error in direction-of-arrival (DoA) estimation and a 10 cm median error in source localization while using a 1.5 cm × 1.3 cm acoustic structure for sensing.
Title: Owlet: enabling spatial information in ubiquitous acoustic devices
Authors: Nakul Garg, Yang Bai, Nirupam Roy
DOI: 10.1145/3458864.3467880
This paper presents Owlet, a low-power and miniaturized design for acoustic direction-of-arrival (DoA) estimation and source localization. The aperture, power consumption, and hardware complexity required by traditional array-based spatial sensing techniques make them unsuitable for small, power-constrained IoT devices. Aiming to overcome these fundamental limitations, Owlet explores acoustic microstructures for extracting spatial information. It uses a carefully designed 3D-printed metamaterial structure that covers the microphone. The structure embeds a direction-specific signature in the recorded sounds, and Owlet learns these directional signatures through a one-time in-lab calibration. The system uses an additional microphone as a reference channel and develops techniques that eliminate environmental variation, making the design robust to noise and multipath at arbitrary operating locations. The Owlet prototype shows a 3.6° median error in DoA estimation and a 10 cm median error in source localization while using a 1.5 cm × 1.3 cm acoustic structure for sensing. The prototype consumes less than one hundredth of the energy required by a traditional microphone array to achieve similar DoA estimation accuracy. Owlet opens up possibilities for low-power sensing through 3D-printed passive structures.