Retro-VLC: Enabling Battery-free Duplex Visible Light Communication for Mobile and IoT Applications
Jiangtao Li, Angli Liu, G. Shen, Liqun Li, Chao Sun, Feng Zhao. DOI: 10.1145/2699343.2699354

The ubiquity of the lighting infrastructure makes visible light communication (VLC) well suited for mobile and Internet of Things (IoT) applications in indoor environments. However, existing VLC systems have primarily focused on one-way communication from the illumination infrastructure to the mobile device; they are power-demanding and do not support communication in the opposite direction. In this paper, we present Retro-VLC, a duplex VLC system that enables a battery-free device to communicate bi-directionally over a light carrier shared by the uplink and the downlink. The design features a retro-reflector fabric that backscatters light, an LCD modulator, and several low-power optimization techniques. We have prototyped a working system consisting of a credit-card-sized battery-free tag and an illuminating LED reader. Experimental results show that the tag achieves a 10 kbps downlink and a 0.5 kbps uplink over a distance of 2.4 m. We outline several potential applications and limitations of the system.
Can Deep Learning Revolutionize Mobile Sensing?
N. Lane, Petko Georgiev. DOI: 10.1145/2699343.2699349

Sensor-equipped smartphones and wearables are transforming a variety of mobile apps, ranging from health monitoring to digital assistants. However, reliably inferring user behavior and context from noisy, complex sensor data collected under mobile-device constraints remains an open problem, and a key bottleneck in sensor app development. In recent years, advances in deep learning have yielded nearly unprecedented gains in related inference tasks such as speech and object recognition. Yet although mobile sensing shares many of the same data modeling challenges, deep learning has yet to be systematically studied within the sensing domain. If deep learning could deliver significantly more robust and efficient mobile sensor inference, it would revolutionize the field by rapidly expanding the number of sensor apps ready for mainstream usage. In this paper, we provide preliminary answers to this potentially game-changing question by prototyping a low-power Deep Neural Network (DNN) inference engine that exploits both the CPU and the DSP of a mobile device SoC. We use this engine to study typical mobile sensing tasks (e.g., activity recognition) with DNNs and compare the results to learning techniques in more common usage. Our early findings provide illustrative examples of DNN usage that does not overburden modern mobile hardware, while also indicating how DNNs can improve inference accuracy. Moreover, we show that DNNs can gracefully scale to larger numbers of inference classes and can be flexibly partitioned across mobile and remote resources. Collectively, these results highlight the critical need for further exploration of how the field of mobile sensing can best exploit advances in deep learning toward robust and efficient sensor inference.
Mobile Touch-Free Interaction for Global Health
Nicola Dell, Krittika D'Silva, G. Borriello. DOI: 10.1145/2699343.2699355

Health workers in remote settings increasingly use mobile devices to assist with medical tasks that may require handling potentially infectious biological material, making it undesirable or potentially harmful to touch the device. To overcome this challenge, we present Maestro, a software-only gesture detection system that enables touch-free interaction on commodity mobile devices. Maestro uses the device's built-in, forward-facing camera and computer vision to recognize users' in-air gestures. Our key design criteria are high gesture recognition rates and low power consumption. We describe Maestro's design and implementation and show that the system detects and responds to users' gestures in real time with acceptable energy consumption and memory overhead. We also evaluate Maestro through a controlled user study of touch-free interaction performance, finding that participants were able to make gestures quickly and accurately enough to be useful for a variety of motivating global health applications. Finally, we describe the programming effort required to integrate touch-free interaction into several open-source mobile applications so that it can be used on commodity devices without changes to the operating system. Taken together, our findings suggest that Maestro is a simple, practical tool that could allow health workers to interact with their devices touch-free in demanding settings.
Indoor Person Identification through Footstep-Induced Structural Vibration
Shijia Pan, Ningning Wang, Yuqiu Qian, Irem Velibeyoglu, H. Noh, Pei Zhang. DOI: 10.1145/2699343.2699364

Person identification is crucial in various smart building applications, including customer behavior analysis and patient monitoring. Prior work on person identification has mainly focused on access-control applications, achieving identification by sensing particular biometrics with dedicated sensors. However, these methods and apparatuses can be intrusive and hard to scale because of instrumentation and sensing limitations. In this paper, we introduce an indoor person identification system that utilizes footstep-induced structural vibration. Because structural vibration can be measured without interrupting human activities, our system is suitable for many ubiquitous sensing applications. The system senses floor vibration, detects the signals induced by footsteps, and extracts features that characterize each person's gait pattern. With the extracted features, the system conducts hierarchical classification, first at the individual-step level and then at the trace level (a trace being a collection of consecutive steps). Our system achieves over 83% identification accuracy on average. Furthermore, when an application requires higher accuracy, the system can raise a confidence threshold to discard uncertain traces; for example, at a threshold that admits only the most certain 50% of traces for classification, identification accuracy increases to 96.5%.
{"title":"Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications","authors":"","doi":"10.1145/2699343","DOIUrl":"https://doi.org/10.1145/2699343","url":null,"abstract":"","PeriodicalId":252231,"journal":{"name":"Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124946396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}