The availability of Internet of Things (IoT)-enabled devices has led to a rapid increase in the number of devices a user carries. To support collecting data from a user's set of devices, as well as interacting with the environment around them, we present MiHub, a system architecture that dynamically manages limited resources and redundant services based on resource availability and the dynamically changing set of a user's personal devices. For any given set of co-located user devices, a MiHub is elected, which then configures the set of services on the devices in its proximity. To provide more reliable data access, MiHub is designed around more expensive primary data sources, supported by low-cost secondary services that can quickly take over upon failure of the primary source. We evaluate the basic components of MiHub, highlighting its energy-efficient mechanisms for ensuring IoT-enabled service availability in the face of dynamically changing groups of devices in the user's proximity.
MiHub: Wearable Management for IoT. Kirill Varshavskiy, A. Harris, R. Kravets. WearSys '16, June 30, 2016. DOI: 10.1145/2935643.2935646.
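The two mechanisms the MiHub abstract names, electing a hub among co-located devices and failing over from an expensive primary source to a cheap secondary one, can be sketched as follows. All names, the battery-based scoring rule, and the `Service` shape are illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch: hub election plus primary/secondary failover.
# The scoring rule (highest remaining battery) is an assumption for
# illustration; the paper's election criteria may differ.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    battery_pct: float   # remaining energy, used as the election score
    alive: bool = True

def elect_hub(devices):
    """Pick the co-located device best able to host coordination duties:
    here, the reachable device with the most remaining battery."""
    return max((d for d in devices if d.alive), key=lambda d: d.battery_pct)

@dataclass
class Service:
    primary: Device      # expensive, higher-fidelity source
    secondary: Device    # low-cost standby

    def read(self):
        # Fall back to the secondary when the primary disappears.
        source = self.primary if self.primary.alive else self.secondary
        return source.name

devices = [Device("watch", 40), Device("phone", 85), Device("earbud", 15)]
hub = elect_hub(devices)                          # phone wins on battery
hr = Service(primary=devices[0], secondary=devices[2])
devices[0].alive = False                          # primary leaves the group
```

After the watch leaves, `hr.read()` transparently returns the secondary ("earbud") instead of failing.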
Cecil D'silva, Vickram Parthasarathy, Sethuraman N. Rao
A smartphone is a combination of a cell phone, a personal digital assistant (PDA), a media player, a GPS navigation unit, and much more. For a blind and visually impaired (BVI) user, a smartphone can assist with and help connect to our modern cyber-social world. The main inconvenience for the BVI user is its user interface, a touch-screen display. In this project, an economical, user-friendly, compact text-input device for BVI smartphone users is developed. The device is designed for daily use and supports Braille text. It connects to the smartphone over a Bluetooth interface. A smartphone app pairs the phone with the device and switches the default keyboard service to the device keyboard. The smartphone operating system runs the screen reader while the keyboard is in use, giving real-time audio feedback to the user. Thus, a setup for nearly seamless text input is provided to the BVI user. A low-cost Arduino-based prototype has been developed as a proof of concept. As future work, the prototype keyboard will be evaluated by BVI users and the results compared with other existing text-input methods for BVI users.
Wireless Smartphone Keyboard for Visually Challenged Users. Cecil D'silva, Vickram Parthasarathy, Sethuraman N. Rao. WearSys '16, June 30, 2016. DOI: 10.1145/2935643.2935648.
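The core translation step in such a Braille keyboard, turning a chord of simultaneously pressed dot keys into a character before sending it over Bluetooth, can be sketched as below. The dot numbering follows standard Braille (dots 1-3 in the left column top to bottom, 4-6 in the right column) and the table covers letters a-j only; the function and table names are assumptions, not the prototype's firmware.

```python
# Decode a six-key Braille chord into a character, as a keyboard's
# firmware might do before emitting the keystroke over Bluetooth.

# Each letter maps to the set of raised dots in its Braille cell
# (standard Braille, letters a-j; k-t would add dot 3, and so on).
BRAILLE_AJ = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5},
    "i": {2, 4}, "j": {2, 4, 5},
}
DOTS_TO_CHAR = {frozenset(dots): ch for ch, dots in BRAILLE_AJ.items()}

def decode_chord(pressed_keys):
    """Translate the set of simultaneously pressed dot keys (1-6)
    into a character, or None for an unmapped chord."""
    return DOTS_TO_CHAR.get(frozenset(pressed_keys))

# Typing "bad" as three successive chords:
word = "".join(decode_chord(c) for c in [{1, 2}, {1}, {1, 4, 5}])
```

With the screen reader speaking each decoded character back, this gives the real-time audio feedback loop the abstract describes.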
Recently, a branch of machine learning called deep learning has gained great attention for boosting the accuracy of a variety of sensing applications. However, executing deep learning algorithms such as convolutional neural networks on mobile processors is non-trivial due to their intensive computational requirements. In this paper, we present our early design of DeepSense, a mobile GPU-based deep convolutional neural network (CNN) framework. For its design, we first explored the differences between server-class and mobile-class GPUs and studied the effectiveness of various optimization strategies such as branch divergence elimination and memory vectorization. Our results show that DeepSense can execute a variety of CNN models for image recognition, object detection, and face recognition in soft real time with no or marginal accuracy tradeoffs. Experiments also show that our framework scales across devices with different GPU architectures (e.g., Adreno and Mali).
DeepSense: A GPU-based Deep Convolutional Neural Network Framework on Commodity Mobile Devices. Huynh Nguyen Loc, R. Balan, Youngki Lee. WearSys '16, June 30, 2016. DOI: 10.1145/2935643.2935650.
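Branch divergence elimination, one of the optimizations the DeepSense abstract names, can be illustrated in plain Python. On a GPU, threads in a warp that take different sides of an `if` execute serially; rewriting the branch as arithmetic keeps every thread on the same instruction stream. ReLU is the classic CNN case; the two functions here are a generic illustration of the technique, not DeepSense's kernels.

```python
# Branch divergence elimination, sketched per-element in plain Python.

def relu_branchy(xs):
    # Divergent form: each element conditionally takes a different path,
    # which serializes a GPU warp whose threads disagree on the branch.
    return [x if x > 0 else 0.0 for x in xs]

def relu_branchless(xs):
    # Branch-free form: (x > 0) evaluates to 0 or 1, so the select
    # becomes a multiply that every "thread" executes identically.
    return [x * (x > 0) for x in xs]

xs = [-2.0, -0.5, 0.0, 1.5, 3.0]
```

Both produce identical ReLU outputs; only the branchless form maps to a uniform instruction stream on SIMT hardware.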
As people use and interact with more and more wearables and IoT-enabled devices, their private information is being exposed without any privacy protections. However, the limited capabilities of IoT devices make implementing robust privacy protections challenging. In response, we present CryptoCoP, an energy-efficient, content-agnostic privacy and encryption protocol for IoT devices. Eavesdroppers cannot snoop on data protected by CryptoCoP or track users via their IoT devices. We evaluate CryptoCoP and show that its performance and energy overheads are viable in a wide variety of situations and can be tuned to trade off forward secrecy and energy consumption against required key storage on the device.
CryptoCoP: Lightweight, Energy-efficient Encryption and Privacy for Wearable Devices. Robin Snader, R. Kravets, A. Harris. WearSys '16, June 30, 2016. DOI: 10.1145/2935643.2935647.
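The forward-secrecy/key-storage tradeoff the abstract mentions is commonly realized with a hash ratchet. The sketch below is the generic textbook construction, not CryptoCoP's actual protocol: each chain key is hashed forward and the old one erased, so compromising the device at step n reveals nothing about earlier keys; storing more intermediate keys (checkpoints) costs storage but saves ratcheting work.

```python
# Minimal hash-ratchet sketch of forward secrecy (generic construction,
# labels and domain-separation strings are illustrative assumptions).

import hashlib

def ratchet(key: bytes) -> bytes:
    """Derive the next chain key; the caller discards the old one,
    making past keys unrecoverable from the current state."""
    return hashlib.sha256(b"chain" + key).digest()

def message_key(chain_key: bytes) -> bytes:
    """Derive a per-message key without advancing the chain."""
    return hashlib.sha256(b"msg" + chain_key).digest()

k = b"\x00" * 32          # shared secret (illustrative placeholder)
keys = []
for _ in range(3):
    keys.append(message_key(k))
    k = ratchet(k)         # overwrite: old k is gone after this line
```

Ratcheting after every message maximizes forward secrecy at the cost of a hash per message; ratcheting less often, or keeping checkpoints, is the storage-versus-secrecy knob.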
Anh Nguyen, Raghda Alqurashi, Zohreh Raghebi, F. Kashani, A. Halbower, Thang N. Dinh, Tam N. Vu
In this work, we present a low-cost, lightweight wearable sensing system that can monitor bioelectrical signals generated by electrically active tissues across the brain, the eyes, and the facial muscles from inside the human ear. Our work addresses two key aspects of the sensing: the construction of the electrodes and the extraction of these biosignals using a supervised non-negative matrix factorization learning algorithm. To illustrate the usefulness of the system, we developed an automatic sleep staging system on top of the output of our proposed in-ear sensing system. We prototyped the device and evaluated its sleep-stage classification performance on 8 participants over a period of one month. With 94% accuracy on average, the evaluation results show that our wearable sensing system is a promising way to monitor brain, eye, and facial-muscle signals with reasonable fidelity from the human ear canal.
In-ear Biosignal Recording System: A Wearable For Automatic Whole-night Sleep Staging. Anh Nguyen, Raghda Alqurashi, Zohreh Raghebi, F. Kashani, A. Halbower, Thang N. Dinh, Tam N. Vu. WearSys '16, June 30, 2016. DOI: 10.1145/2935643.2935649.
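The decomposition underlying the signal-extraction step is non-negative matrix factorization. The dependency-free rank-1 sketch below shows the classic multiplicative-update rule on a toy matrix; it is only the unsupervised core, not the paper's supervised NMF variant, and all names are illustrative.

```python
# Rank-1 NMF via multiplicative updates: factor V ~= outer(w, h)
# with w, h >= 0. Toy illustration of the NMF machinery, in pure Python.

def nmf_rank1(V, iters=200, eps=1e-12):
    m, n = len(V), len(V[0])
    w = [1.0] * m
    h = [1.0] * n
    for _ in range(iters):
        # h_j *= (V^T w)_j / ((w.w) h_j); eps avoids division by zero.
        ww = sum(wi * wi for wi in w)
        h = [hj * sum(V[i][j] * w[i] for i in range(m)) / (ww * hj + eps)
             for j, hj in enumerate(h)]
        # w_i *= (V h)_i / ((h.h) w_i)
        hh = sum(hj * hj for hj in h)
        w = [wi * sum(V[i][j] * h[j] for j in range(n)) / (hh * wi + eps)
             for i, wi in enumerate(w)]
    return w, h

def reconstruction_error(V, w, h):
    return sum((V[i][j] - w[i] * h[j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

# Exactly rank-1 data: V = outer([1, 2], [3, 4])
V = [[3.0, 4.0], [6.0, 8.0]]
w, h = nmf_rank1(V)
```

Because the updates only multiply by non-negative ratios, the factors stay non-negative, which is what makes the recovered components physically interpretable as additive signal sources.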