Proceedings of the ... annual International Conference on Mobile Computing and Networking — Latest Publications
Pub Date: 2024-11-01 | Epub Date: 2024-12-04 | DOI: 10.1145/3636534.3698866
Md Sabbir Ahmed, Arafat Rahman, Zhiyuan Wang, Mark Rucker, Laura E Barnes
While audio data shows promise in addressing various health challenges, there is a lack of research on on-device audio processing for smartwatches. Privacy concerns make storing raw audio and performing post-hoc analysis undesirable for many users. Additionally, current on-device audio processing systems for smartwatches are limited in their feature extraction capabilities, restricting their potential for understanding user behavior and health. We developed a real-time system for on-device audio processing on smartwatches, which takes an average of 1.78 minutes (SD = 0.07 min) to extract 22 spectral and rhythmic features from a 1-minute audio sample, using a small window size of 25 milliseconds. Using these extracted audio features on a public dataset, we developed and incorporated models into a watch to classify foreground and background speech in real time. Our Random Forest-based model classifies speech with a balanced accuracy of 80.3%.
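The windowed spectral-feature pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's 22 features are not enumerated here, and the 16 kHz sample rate, the test tone, and the function names are assumptions. The sketch frames a mono signal into 25 ms windows and computes one representative spectral feature (the spectral centroid) per window.

```python
import numpy as np

def frame_signal(x, sr, win_ms=25):
    """Split a mono signal into non-overlapping windows of win_ms milliseconds."""
    hop = int(sr * win_ms / 1000)
    n = len(x) // hop
    return x[: n * hop].reshape(n, hop)

def spectral_centroid(frames, sr):
    """Per-frame spectral centroid in Hz (one example of a spectral feature)."""
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (spec * freqs).sum(axis=1) / np.maximum(spec.sum(axis=1), 1e-12)

sr = 16_000                        # assumed sample rate
t = np.arange(sr) / sr             # 1 second of audio
x = np.sin(2 * np.pi * 1000 * t)   # synthetic 1 kHz test tone
frames = frame_signal(x, sr)       # 40 windows of 25 ms each
cent = spectral_centroid(frames, sr)
```

Per-window features like these would then be aggregated over the 1-minute sample and fed to a classifier such as the paper's Random Forest model.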
"A Resource Efficient System for On-Smartwatch Audio Processing." Proceedings of the ... annual International Conference on Mobile Computing and Networking, 2024, pp. 1805-1807. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12126283/pdf/
Pub Date: 2024-11-01 | Epub Date: 2024-12-04 | DOI: 10.1145/3636534.3698115
Md Touhiduzzaman, Jane Chung, Ingrid Pretzer-Aboff, Eyuphan Bulut
Maintaining independence in daily activities and mobility is critical for healthy aging. Older adults who are losing the ability to care for themselves or ambulate are at high risk of adverse health outcomes and decreased quality of life. It is essential to monitor daily activities and mobility routinely and to capture early decline before clinical symptoms arise. Existing approaches rely on self-reports or on technology-based solutions that use cameras or wearables to track daily activities; however, each has drawbacks (e.g., recall bias, privacy concerns, the burden of carrying and recharging devices) and fits older adults poorly. In this study, we discuss a non-invasive, low-cost wireless sensing-based solution to track the daily activities of low-income older adults. The proposed solution relies on deep learning-based fine-grained analysis of ambient WiFi signals and is non-invasive compared to existing video- or wearable-based solutions. We deployed this system in real senior housing settings for a week and evaluated its performance. Our initial results show that this low-cost system can detect a variety of participants' daily activities with an accuracy of up to 76.90%.
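A sensing pipeline like the one described typically segments the continuous WiFi channel state information (CSI) stream into fixed-length windows before classification. The sketch below is a hypothetical illustration, not the authors' system: the window length, stride, subcarrier count, and the simple motion-energy feature are all assumptions, and the deep model itself is omitted.

```python
import numpy as np

def segment_csi(amplitudes, win=200, stride=100):
    """Slide a fixed-length window over a CSI amplitude stream.

    amplitudes: (time, subcarriers) array of WiFi CSI amplitudes.
    Returns a (num_windows, win, subcarriers) array of segments,
    ready to be fed to an activity classifier.
    """
    t = amplitudes.shape[0]
    starts = range(0, t - win + 1, stride)
    return np.stack([amplitudes[s : s + win] for s in starts])

rng = np.random.default_rng(0)
stream = rng.normal(size=(1000, 30))   # synthetic: 1000 samples x 30 subcarriers
segs = segment_csi(stream)             # 9 overlapping windows
feat = segs.std(axis=1).mean(axis=1)   # crude per-window motion-energy feature
```

In a deployment, each window (or a learned representation of it) would be classified into a daily-activity label; human motion perturbs the multipath environment and shows up as increased variance across subcarriers.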
"Wireless Sensing-based Daily Activity Tracking System Deployment in Low-Income Senior Housing Environments." Proceedings of the ... annual International Conference on Mobile Computing and Networking, 2024, pp. 2260-2267. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12674459/pdf/
Pub Date: 2024-11-01 | Epub Date: 2024-12-04 | DOI: 10.1145/3636534.3690698
Kai Huang, Xiangyu Yin, Tao Gu, Wei Gao
Image super-resolution (SR) is widely used on mobile devices to enhance user experience. However, the neural networks used for SR are computationally expensive, posing challenges for mobile devices with limited computing power. A viable solution is to use the heterogeneous processors on mobile devices, especially specialized hardware AI accelerators, for SR computations, but the reduced arithmetic precision on AI accelerators can lead to degraded perceptual quality in upscaled images. To address this limitation, in this paper we present SR For Your Eyes (FYE-SR), a novel image SR technique that enhances the perceptual quality of upscaled images when using heterogeneous processors for SR computations. FYE-SR strategically splits the SR model and dispatches different layers to heterogeneous processors, meeting the time constraint of SR computations while minimizing the impact of AI accelerators on image quality. Experimental results show that FYE-SR outperforms the best baselines, improving perceptual image quality by up to 2×, or reducing SR computing latency by up to 5.6× with on-par image quality.
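The core scheduling idea, dispatching as few layers as possible to the low-precision accelerator while still meeting a latency budget, can be sketched as a simple feasibility search. FYE-SR's actual partitioning algorithm is not reproduced here; the prefix-split assumption and the per-layer latencies below are hypothetical.

```python
def best_split(lat_fast, lat_precise, budget_ms):
    """Choose a split index k: layers [0:k] run on the low-precision AI
    accelerator, layers [k:] on the full-precision processor.

    Among feasible splits (total latency within budget), prefer the
    smallest k, i.e., the fewest low-precision layers and hence the
    least impact on perceptual quality. Returns (k, total_latency),
    or None if no split meets the budget.
    """
    n = len(lat_fast)
    for k in range(n + 1):
        total = sum(lat_fast[:k]) + sum(lat_precise[k:])
        if total <= budget_ms:
            return k, total
    return None

lat_fast = [2, 2, 2, 2]     # hypothetical per-layer ms on the accelerator
lat_precise = [5, 5, 5, 5]  # hypothetical per-layer ms on the GPU/CPU
k, total = best_split(lat_fast, lat_precise, budget_ms=14)
```

With these numbers, two layers must be offloaded to the accelerator (k = 2, total = 14 ms) before the budget is met; a real system would also account for data-transfer cost between processors and per-layer sensitivity to reduced precision.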
"Perceptual-Centric Image Super-Resolution using Heterogeneous Processors on Mobile Devices." Proceedings of the ... annual International Conference on Mobile Computing and Networking, 2024, pp. 1361-1376. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11931654/pdf/
Pub Date: 2019-08-01 | Epub Date: 2019-10-11 | DOI: 10.1145/3300061.3345432
George Boateng, Vivian Genaro Motti, Varun Mishra, John A Batsis, Josiah Hester, David Kotz
Wrist-worn devices hold great potential as a platform for mobile health (mHealth) applications because they have a familiar, convenient form factor and can embed sensors in proximity to the human body. Despite this potential, however, they are severely limited in battery life, storage, bandwidth, computing power, and screen size. In this paper, we describe the experience of the research and development team designing, implementing, and evaluating Amulet, an open-hardware, open-software wrist-worn computing device, and the team's experience using Amulet to deploy mHealth apps in the field. In the past five years the team conducted 11 studies in the lab and in the field, involving 204 participants and collecting over 77,780 hours of sensor data. We describe the technical issues the team encountered and the lessons they learned, and conclude with a set of recommendations. We anticipate the experience described herein will be useful for the development of other research-oriented computing platforms. It should also be useful for researchers interested in developing and deploying mHealth applications, whether with the Amulet system or with other wearable platforms.
"Experience: Design, Development and Evaluation of a Wearable Device for mHealth Applications." Proceedings of the ... annual International Conference on Mobile Computing and Networking, 2019. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8276769/pdf/nihms-1045926.pdf