Hanbin Zhang, Gabriel Guo, Emery Comstock, Baicheng Chen, Xingyu Chen, Chen Song, J. Ajay, Jeanne Langan, Sutanuka Bhattacharjya, L. Cavuoto, Wenyao Xu
Approximately 7 million stroke survivors reside in the United States. Over half of these individuals have residual deficits, making stroke one of the leading causes of disability. Long-term rehabilitation opportunities are critical for millions of individuals with chronic upper-limb motor deficits due to stroke. Traditional in-home rehabilitation is reported to be dull, boring, and unengaging. Moreover, existing rehabilitation technologies are not user-friendly and cannot adapt to the different and ever-changing demands of individual stroke survivors. In this work, we present RehabPhone, a highly usable software-defined stroke rehabilitation paradigm built on smartphone and 3D printing technologies. This software definition is twofold. First, RehabPhone leverages cost-effective 3D printing to augment ordinary smartphones into customized rehabilitation tools. The size, weight, and shape of the rehabilitation tools are software-defined according to individual rehabilitation needs and goals. Second, RehabPhone integrates 13 functional rehabilitation activities, co-designed with stroke professionals, into a smartphone app. The software utilizes built-in smartphone sensors to analyze rehabilitation activities and provides real-time feedback to coach and engage stroke users. We perform in-lab usability optimization of the RehabPhone prototype with 16 healthy adults and 4 stroke survivors. After that, we conduct a 6-week unattended intervention study in the homes of 12 stroke survivors. Over the course of the clinical study, more than 32,000 samples of physical rehabilitation activities are collected and evaluated. Results indicate that stroke users with RehabPhone demonstrate high adherence and clinical efficacy in a self-managed home-based rehabilitation course. To the best of our knowledge, this is the first exploratory clinical study using mobile health technologies in real-world stroke rehabilitation.
RehabPhone. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3389028
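The abstract describes rehabilitation tools whose size, weight, and shape are software-defined from individual needs before 3D printing. As an illustration only (not the authors' implementation; the parameter names, formulas, and clamping thresholds below are all hypothetical), such a parameterization step might look like:

```python
from dataclasses import dataclass

@dataclass
class ToolSpec:
    """Hypothetical software-defined 3D-print parameters for a grip tool."""
    grip_diameter_mm: float
    weight_g: float
    shape: str

def make_tool_spec(hand_width_mm: float, grip_strength_kg: float) -> ToolSpec:
    # Wider hands get a thicker grip; weaker grip strength gets a lighter tool.
    diameter = max(25.0, min(60.0, hand_width_mm * 0.45))
    weight = max(50.0, min(400.0, grip_strength_kg * 15.0))
    # A simpler cylindrical shape for users with low grip strength.
    shape = "cylinder" if grip_strength_kg < 10 else "ball"
    return ToolSpec(round(diameter, 1), round(weight, 1), shape)

spec = make_tool_spec(hand_width_mm=90, grip_strength_kg=8)
print(spec)  # ToolSpec(grip_diameter_mm=40.5, weight_g=120.0, shape='cylinder')
```

A spec like this could then be fed into a parametric CAD model to generate the printable geometry.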
Localization in urban environments is becoming increasingly important and used in tools such as ARCore [11], ARKit [27] and others. One popular mechanism to achieve accurate indoor localization as well as a map of the space is using Visual Simultaneous Localization and Mapping (Visual-SLAM). However, Visual-SLAM is known to be resource-intensive in memory and processing time. Further, some of the operations grow in complexity over time, making it challenging to run on mobile devices continuously. Edge computing provides additional compute and memory resources to mobile devices to allow offloading of some tasks without the large latencies seen when offloading to the cloud. In this paper, we present Edge-SLAM, a system that uses edge computing resources to offload parts of Visual-SLAM. We use ORB-SLAM2 as a prototypical Visual-SLAM system and modify it to a split architecture between the edge and the mobile device. We keep the tracking computation on the mobile device and move the rest of the computation, i.e., local mapping and loop closure, to the edge. We describe the design choices in this effort and implement them in our prototype. Our results show that our split architecture can allow the functioning of the Visual-SLAM system long-term with limited resources without affecting the accuracy of operation. It also keeps the computation and memory cost on the mobile device constant which would allow for deployment of other end applications that use Visual-SLAM.
Edge-SLAM: edge-assisted visual simultaneous localization and mapping. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3389033
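Edge-SLAM keeps tracking on the mobile device and moves local mapping and loop closure to the edge. A minimal sketch of that split architecture (toy classes, not ORB-SLAM2's actual interfaces; the keyframe-selection rule is a stand-in):

```python
from collections import deque

class EdgeMapper:
    """Edge side: receives keyframes for heavy local mapping / loop closure."""
    def __init__(self):
        self.keyframes = deque()

    def submit_keyframe(self, frame_id, pose):
        # In a real system this would trigger bundle adjustment, map updates, etc.
        self.keyframes.append((frame_id, pose))

class MobileTracker:
    """Device side: cheap frame-to-frame tracking; ships keyframes to the edge."""
    def __init__(self, edge, keyframe_every=5):
        self.edge = edge
        self.keyframe_every = keyframe_every
        self.pose = 0.0      # 1-D pose as a toy stand-in for a 6-DoF pose
        self.frames = 0

    def track(self, motion_delta):
        self.pose += motion_delta          # lightweight incremental pose update
        self.frames += 1
        if self.frames % self.keyframe_every == 0:
            self.edge.submit_keyframe(self.frames, self.pose)
        return self.pose

edge = EdgeMapper()
tracker = MobileTracker(edge)
for _ in range(12):
    tracker.track(0.1)
print(len(edge.keyframes))  # 2 keyframes (frames 5 and 10) offloaded to the edge
```

The point of the split is visible even in this toy: per-frame work on the device stays constant, while the growing map state lives only on the edge.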
Xinyu Lei, Guan-Hua Tu, Chi-Yu Li, Tian Xie, Mi Zhang
Smart-home Wi-Fi IoT devices are prevalent nowadays and potentially bring significant improvements to daily life. However, they pose an attractive target for adversaries seeking to launch attacks. Since secure IoT communication is the foundation of secure IoT devices, this study commences by examining the extent to which mainstream security protocols are supported by 40 of the best-selling Wi-Fi smart-home IoT devices on the Amazon platform. It is shown that 29 of these devices either have no security protocols deployed or have problematic security protocol implementations. Seemingly, these vulnerabilities could be easily fixed by installing security patches; however, many IoT devices lack the requisite software/hardware resources to do so. To address this problem, the present study proposes SecWIR (Secure Wi-Fi IoT communication Router), a framework designed for implementation on top of users' existing home Wi-Fi routers to provide IoT devices with a secure IoT communication capability. However, it is challenging for SecWIR to function effectively on all home Wi-Fi routers since some routers are resource-constrained. Thus, several novel techniques for resolving this implementation issue are additionally proposed. The experimental results show that SecWIR performs well on a variety of commercial off-the-shelf (COTS) Wi-Fi routers at the expense of only a small reduction in non-IoT data service throughput (less than 8%) and small increases in CPU usage (4.5%~7%), RAM usage (1.9 MB~2.2 MB), and IoT device access delay (24 ms~154 ms) while securing 250 IoT devices.
SecWIR. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3388941
Xiao Zhu, Jiachen Sun, Xumiao Zhang, Y. Guo, Feng Qian, Z. Morley Mao
We demo MPBond, a novel multipath transport system allowing multiple personal mobile devices to collaboratively fetch content from the Internet. Inspired by the success of MPTCP, MPBond applies the concept of distributed multipath transport where multiple subflows can traverse different devices. Other key design aspects of MPBond include a device/connection management scheme, a buffering strategy, a packet scheduling algorithm, and a policy framework tailored to MPBond's architecture. We install MPBond on commodity mobile devices and show how easy it is to configure the usage of MPBond for unmodified apps. We visualize the runtime behavior of MPBond to further illustrate its design. We also demonstrate the download time and energy reduction of file download, as well as the video streaming QoE improvement with MPBond.
MPBond: efficient network-level collaboration among personal mobile devices. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3396600
Anlan Zhang, Chendong Wang, Xing Liu, Bo Han, Feng Qian
Volumetric videos allow viewers to exercise 6-DoF (degrees-of-freedom) movement while watching them. Due to their true 3D nature, streaming volumetric videos is highly bandwidth-demanding. In this work, we present, to our knowledge, the first volumetric video streaming system that leverages deep super resolution (SR) to boost video quality on commodity mobile devices. We propose a series of judicious optimizations to make SR efficient on mobile devices.
Mobile Volumetric Video Streaming Enhanced by Super Resolution. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3396598
Xiao Zhu, Jiachen Sun, Xumiao Zhang, Y. E. Guo, Fengqi Qian, Z. Mao
MPBond is an efficient system allowing multiple personal mobile devices to collaboratively fetch content from the Internet. For example, a smartwatch can assist its paired smartphone with downloading data. Inspired by the success of MPTCP, MPBond applies the concept of distributed multipath transport where multiple subflows can traverse different devices. We develop a cross-device connection management scheme, a buffering strategy, a packet scheduling algorithm, and a policy framework tailored to MPBond's architecture. We implement MPBond on commodity mobile devices such as Android smartphones and smartwatches. Our real-world evaluations using different workloads under various network conditions demonstrate the efficiency of MPBond. Compared to state-of-the-art collaboration frameworks, MPBond reduces file download time by 5% to 46%, and improves the video streaming bitrate by 2% to 118%. Meanwhile, it improves the energy efficiency by 10% to 57%.
MPBond. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3388943
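MPBond schedules packets across subflows that traverse different personal devices. One common way such a scheduler can work (an illustrative assumption here, not necessarily MPBond's exact algorithm) is to assign each packet to the subflow with the earliest estimated delivery time, accounting for each path's RTT, rate, and queued bytes:

```python
def pick_subflow(subflows, pkt_bytes):
    """Assign the packet to the subflow with the earliest estimated delivery time."""
    def eta_ms(sf):
        queued = sf["inflight_bytes"] + pkt_bytes
        # one-way delay + transmission time of everything queued ahead
        return sf["rtt_ms"] / 2 + queued * 8 / sf["rate_kbps"]
    best = min(subflows, key=eta_ms)
    best["inflight_bytes"] += pkt_bytes
    return best["name"]

# Hypothetical two-device setup: a fast Wi-Fi path with a backlog, and a
# slower but idle Bluetooth path via a paired watch.
subflows = [
    {"name": "phone-wifi", "rtt_ms": 40, "rate_kbps": 8000, "inflight_bytes": 12000},
    {"name": "watch-bt",   "rtt_ms": 20, "rate_kbps": 1000, "inflight_bytes": 0},
]
schedule = [pick_subflow(subflows, 1500) for _ in range(4)]
print(schedule)  # ['watch-bt', 'phone-wifi', 'watch-bt', 'phone-wifi']
```

Note how the idle watch path absorbs the first packet despite its lower rate, and the scheduler then alternates as each path's backlog grows.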
Songtao He, F. Bastani, Arjun Balasingam, Karthik Gopalakrishna, Ziwen Jiang, Mohammad Alizadeh, Harinarayanan Balakrishnan, Michael J. Cafarella, T. Kraska, S. Madden
The rapid development of small aerial drones has enabled numerous drone-based applications, e.g., geographic mapping, air pollution sensing, and search and rescue. To assist the development of these applications, we propose BeeCluster, a drone orchestration system that manages a fleet of drones. BeeCluster provides a virtual drone abstraction that enables developers to express a sequence of geographical sensing tasks, and determines how to map these tasks to the fleet efficiently. BeeCluster's core contribution is predictive optimization, in which an inferred model of the future tasks of the application is used to generate an optimized flight and sensing schedule for the drones that aims to minimize the total expected execution time. We built a prototype of BeeCluster and evaluated it on five real-world case studies with drones in outdoor environments, measuring speedups from 11.6% to 23.9%.
BeeCluster. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3388912
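BeeCluster's predictive optimization turns inferred future sensing tasks into an optimized flight schedule. As a greatly simplified stand-in for that planning step (greedy nearest-neighbor routing over predicted task locations; BeeCluster's actual optimizer is more sophisticated), one could sketch:

```python
import math

def plan_route(start, tasks):
    """Order predicted sensing tasks by repeatedly flying to the nearest one,
    a greedy heuristic for minimizing total flight time."""
    route, pos, remaining = [], start, list(tasks)
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(pos, t))
        route.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return route

# Hypothetical predicted task coordinates (in arbitrary map units).
tasks = [(0, 5), (4, 0), (1, 1)]
print(plan_route((0, 0), tasks))  # [(1, 1), (4, 0), (0, 5)]
```

Because the tasks are predicted rather than already submitted, a planner like this can position drones ahead of demand, which is where the paper's reported 11.6%-23.9% speedups come from.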
As we look to use Wear OS (formerly known as Android Wear) devices for fitness and health monitoring, it is important to evaluate the reliability of their ecosystem. The goal of this paper is to understand the reliability weak spots in the Wear OS ecosystem. We develop a state-aware fuzzing tool, Vulcan, which requires no elevated privileges, to uncover these weak spots by fuzzing Wear OS apps. We evaluate the outcomes of these weak spots by fuzzing 100 popular apps downloaded from the Google Play Store. The outcomes include causing specific apps to crash, causing the running app to become unresponsive, and causing the device to reboot. We finally propose a proof-of-concept mitigation solution to address the system reboot issue.
E. Yi, Heng Zhang, A. Maji, Kefan Xu, S. Bagchi. Vulcan: lessons on reliability of wearables through state-aware fuzzing. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3388916
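A state-aware fuzzer like Vulcan drives an app with random events while tracking which app states and transitions it has already exercised, logging crashes as it goes. A toy sketch of that loop (the two-state app model and its injected crash are invented for illustration; a real tool injects UI events into actual Wear OS apps):

```python
import random

def fuzz(app_step, events, rounds=200, seed=7):
    """State-aware fuzzing loop (sketch): prefer unexplored (state, event)
    transitions, and record any crash the app raises."""
    rng = random.Random(seed)
    state, tried, failures = "init", set(), {}
    for _ in range(rounds):
        novel = [e for e in events if (state, e) not in tried]
        event = rng.choice(novel or events)   # bias toward unexplored transitions
        tried.add((state, event))
        try:
            state = app_step(state, event)
        except RuntimeError as err:           # stands in for an app crash / ANR
            failures[(state, event)] = str(err)
            state = "init"                    # restart the app after a crash
    return failures

def toy_app(state, event):
    """Invented two-state app with an injected reliability bug."""
    if state == "playing" and event == "rotate":
        raise RuntimeError("crash: null surface")
    return {"tap": "playing", "back": "init", "rotate": state}[event]

print(fuzz(toy_app, ["tap", "back", "rotate"]))
# {('playing', 'rotate'): 'crash: null surface'}
```

Tracking (state, event) coverage is what makes the fuzzer "state-aware": it spends its budget on transitions it has not yet seen instead of replaying the same ones.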
Hao Wu, Jinghao Feng, Xuejin Tian, Edward Sun, Yunxin Liu, Bo Dong, Fengyuan Xu, Sheng Zhong
Real-time user emotion recognition is highly desirable for many applications on eyewear devices like smart glasses. However, it is very challenging to enable this capability on such devices due to the tightly constrained image contents (only eye-area images are available from the on-device eye-tracking camera) and the limited computing resources of the embedded system. In this paper, we propose and develop a novel system called EMO that can recognize, on top of a resource-limited eyewear device, real-time emotions of the user who wears it. Unlike most existing solutions that require whole-face images to recognize emotions, EMO only utilizes the single-eye-area images captured by the eye-tracking camera of the eyewear. To achieve this, we design a customized deep-learning network to effectively extract emotional features from input single-eye images and a personalized feature classifier to accurately identify a user's emotions. EMO also exploits the temporal locality and feature similarity among consecutive video frames of the eye-tracking camera to further reduce recognition latency and system resource usage. We implement EMO on two hardware platforms and conduct comprehensive experimental evaluations. Our results demonstrate that EMO can continuously recognize seven types of emotions at 12.8 frames per second with a mean accuracy of 72.2%, significantly outperforming the state-of-the-art approach while consuming far fewer system resources.
EMO: real-time emotion recognition from single-eye images for resource-constrained eyewear devices. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3388917
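EMO exploits temporal locality and feature similarity among consecutive eye-tracking frames to skip redundant inference. A sketch of that gating idea (the similarity metric, threshold, and toy two-value "frames" are illustrative assumptions, not EMO's actual pipeline):

```python
def classify_stream(frames, classify, sim_threshold=0.9):
    """Run the expensive classifier only when the frame differs enough from the
    last classified one; otherwise reuse the cached label."""
    def similarity(a, b):
        # Toy similarity: 1 minus mean absolute difference of feature values.
        return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    labels, last_frame, last_label, calls = [], None, None, 0
    for f in frames:
        if last_frame is None or similarity(f, last_frame) < sim_threshold:
            last_label = classify(f)   # expensive model invocation
            calls += 1
            last_frame = f
        labels.append(last_label)      # cheap cache hit for similar frames
    return labels, calls

# Four frames, two of which are near-duplicates of their predecessors.
frames = [[0.1, 0.1], [0.1, 0.12], [0.9, 0.9], [0.9, 0.88]]
labels, calls = classify_stream(frames, lambda f: "happy" if f[0] > 0.5 else "neutral")
print(labels, calls)  # ['neutral', 'neutral', 'happy', 'happy'] 2
```

Here every frame gets a label, but the classifier runs only twice, which is the kind of saving that lets a system like EMO sustain real-time rates on embedded hardware.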
Yihao Liu, Kai Huang, Xingzhe Song, Boyuan Yang, Wei Gao
Stylus pens are widely used with today's mobile devices to provide a convenient handwriting input method, but they also bring a unique security vulnerability that may unveil the user's handwriting contents to a nearby eavesdropper. In this paper, we present MagHacker, a new sensing system that realizes such an eavesdropping attack using commodity mobile devices, which monitor and analyze the magnetic field produced by the stylus pen's internal magnet. MagHacker divides the continuous magnetometer readings into small segments that represent individual letters, and then translates these readings into writing trajectories for letter recognition. Experimental results over realistic handwriting from multiple human subjects demonstrate that MagHacker can accurately eavesdrop on more than 80% of handwriting with stylus pens from a distance of 10 cm. Only slight degradation in accuracy occurs when the eavesdropping distance or the handwriting speed increases. MagHacker is highly energy-efficient and adapts well to different stylus pen models and environmental contexts.
MagHacker. Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15, 2020. https://doi.org/10.1145/3386901.3389030
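MagHacker first segments the continuous magnetometer stream into per-letter chunks before recovering writing trajectories. A minimal sketch of such segmentation (the idle-gap heuristic, threshold, and gap length are assumptions, not the paper's exact method):

```python
def segment_strokes(samples, idle_threshold=0.05, min_gap=3):
    """Split a magnetometer magnitude stream into per-letter segments:
    a run of >= min_gap near-idle samples (pen lifted or paused between
    letters) closes the current segment."""
    segments, current, idle_run = [], [], 0
    for s in samples:
        if abs(s) < idle_threshold:
            idle_run += 1
            if idle_run >= min_gap and current:
                segments.append(current)
                current = []
        else:
            idle_run = 0
            current.append(s)
    if current:                      # flush the trailing segment, if any
        segments.append(current)
    return segments

# Synthetic magnitude stream: three "letters" separated by idle gaps.
stream = [0.3, 0.4, 0.2, 0.0, 0.0, 0.0, 0.5, 0.6, 0.0, 0.0, 0.0, 0.1, 0.2]
print(segment_strokes(stream))  # [[0.3, 0.4, 0.2], [0.5, 0.6], [0.1, 0.2]]
```

Each recovered segment would then be fed to the trajectory-reconstruction and letter-recognition stages the abstract describes.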