Spontaneous selection of IoT devices from a head-mounted device is key for user-centered pervasive interaction. BLEselect enables users to select an unmodified Bluetooth 5.1 compatible IoT device by nodding at it, pointing at it, or drawing a circle in the air around it. We designed a compact antenna array that fits on a pair of smart glasses to estimate the Angle of Arrival (AoA) of advertising signals from IoT and wrist-worn devices. We then developed a sensing pipeline that supports all three selection gestures with lightweight machine learning models, which are trained in real time for the two hand gestures. Extensive characterizations and evaluations show that our system is accurate, natural, low-power, and privacy-preserving. Despite the small effective size of the antenna array, our system achieves higher than 90% selection accuracy within 3 meters in front of the user. In a user study that mimics real-life usage, the overall selection accuracy is 96.7% for a set of 22 participants diverse in age, technology savviness, and body structure.
{"title":"BLEselect: Gestural IoT Device Selection via Bluetooth Angle of Arrival Estimation from Smart Glasses","authors":"Tengxiang Zhang, Zitong Lan, Chenren Xu, Yanrong Li, Yiqiang Chen","doi":"10.1145/3569482","DOIUrl":"https://doi.org/10.1145/3569482","url":null,"abstract":"Spontaneous selection of IoT devices from the head-mounted device is key for user-centered pervasive interaction. BLEselect enables users to select an unmodified Bluetooth 5.1 compatible IoT device by nodding at, pointing at, or drawing a circle in the air around it. We designed a compact antenna array that fits on a pair of smart glasses to estimate the Angle of Arrival (AoA) of IoT and wrist-worn devices’ advertising signals. We then developed a sensing pipeline that supports all three selection gestures with lightweight machine learning models, which are trained in real-time for both hand gestures. Extensive characterizations and evaluations show that our system is accurate, natural, low-power, and privacy-preserving. Despite the small effective size of the antenna array, our system achieves a higher than 90% selection accuracy within a 3 meters distance in front of the user. In a user study that mimics real-life usage cases, the overall selection accuracy is 96.7% for a diverse set of 22 participants in terms of age, technology savviness, and body structures.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"280 1","pages":"198:1-198:28"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80136760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhigang Yin, M. Liyanage, Abdul-Rasheed Ottun, Souvik Paul, Agustin Zuniga, P. Nurmi, Huber Flores
Hand-grip strength is widely used to estimate muscle strength and serves as a general indicator of a person's overall health, particularly in aging adults. It is typically estimated using dynamometers or specialized force-resistive pressure sensors embedded in objects. Both solutions require the user to interact with a dedicated measurement device, which unnecessarily restricts the contexts in which estimates can be acquired. We contribute HIPPO, a novel non-intrusive and opportunistic method for estimating hand-grip strength from everyday interactions with objects. HIPPO re-purposes light sensors available in wearables (e.g., rings or gloves) to capture changes in light reflectivity when people interact with objects. This allows HIPPO to non-intrusively piggyback on everyday interactions for health information without affecting the user's routines. We present two prototypes integrating HIPPO: an early smart-glove proof-of-concept, and a further optimized solution that uses sensors integrated into a ring. We validate HIPPO through extensive experiments and compare it against three baselines, including a clinical dynamometer. Our results show that HIPPO operates robustly across a wide range of everyday objects and participants. The force estimates correlate with those produced by pressure-based devices and can determine the correct hand-grip strength category with up to 86% accuracy. Our findings also suggest that users prefer our approach to existing solutions because HIPPO blends the estimation into everyday interactions.
{"title":"HIPPO: Pervasive Hand-Grip Estimation from Everyday Interactions","authors":"Zhigang Yin, M. Liyanage, Abdul-Rasheed Ottun, Souvik Paul, Agustin Zuniga, P. Nurmi, Huber Flores","doi":"10.1145/3570344","DOIUrl":"https://doi.org/10.1145/3570344","url":null,"abstract":"Hand-grip strength is widely used to estimate muscle strength and it serves as a general indicator of the overall health of a person, particularly in aging adults. Hand-grip strength is typically estimated using dynamometers or specialized force resistant pressure sensors embedded onto objects. Both of these solutions require the user to interact with a dedicated measurement device which unnecessarily restricts the contexts where estimates are acquired. We contribute HIPPO, a novel non-intrusive and opportunistic method for estimating hand-grip strength from everyday interactions with objects. HIPPO re-purposes light sensors available in wearables (e.g., rings or gloves) to capture changes in light reflectivity when people interact with objects. This allows HIPPO to non-intrusively piggyback everyday interactions for health information without affecting the user’s everyday routines. We present two prototypes integrating HIPPO, an early smart glove proof-of-concept, and a further optimized solution that uses sensors integrated onto a ring. We validate HIPPO through extensive experiments and compare HIPPO against three baselines, including a clinical dynamometer. Our results show that HIPPO operates robustly across a wide range of everyday objects, and participants. The force strength estimates correlate with estimates produced by pressure-based devices, and can also determine the correct hand grip strength category with up to 86% accuracy. Our findings also suggest that users prefer our approach to existing solutions as HIPPO blends the estimation with everyday interactions.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"61 1","pages":"209:1-209:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74486798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spirometry is the gold standard for evaluating lung function. Recent research has proposed that mobile devices can measure lung function indices cost-efficiently. However, these designs fall short in two aspects. First, they cannot provide the flow-volume (F-V) curve, which is more informative than lung function indices alone. Second, these solutions lack inspiratory measurement, which is sensitive to lung diseases such as variable extrathoracic obstruction. In this paper, we present EarSpiro, an earphone-based solution that interprets the recorded airflow sound during a spirometry test into an F-V curve, including both expiratory and inspiratory measurements. EarSpiro leverages a convolutional neural network (CNN) and a recurrent neural network (RNN) to capture the complex correlation between airflow sound and airflow speed. Meanwhile, EarSpiro adopts a clustering-based segmentation algorithm to track the weak inspiratory signals in the raw audio recording to enable inspiratory measurement. We also enable EarSpiro to work with everyday mouthpiece-like objects, such as a funnel, using transfer learning and a decoder network, with the help of only a few true lung function indices from the user. Extensive experiments with 60 subjects show that EarSpiro achieves mean errors of 0.20 L/s and 0.42 L/s for expiratory and inspiratory flow rate estimation, and 0.61 L/s and 0.83 L/s for expiratory and inspiratory F-V curve estimation. The mean correlation coefficient between the estimated F-V curve and the ground-truth curve is 0.94. The mean estimation error for four common lung function indices is 7.3%.
{"title":"EarSpiro: Earphone-based Spirometry for Lung Function Assessment","authors":"Wentao Xie, Qing Hu, Jin Zhang, Qian Zhang","doi":"10.1145/3569480","DOIUrl":"https://doi.org/10.1145/3569480","url":null,"abstract":"Spirometry is the gold standard for evaluating lung functions. Recent research has proposed that mobile devices can measure lung function indices cost-efficiently. However, these designs fall short in two aspects. First, they cannot provide the flow-volume (F-V) curve, which is more informative than lung function indices. Secondly, these solutions lack inspiratory measurement, which is sensitive to lung diseases such as variable extrathoracic obstruction. In this paper, we present EarSpiro, an earphone-based solution that interprets the recorded airflow sound during a spirometry test into an F-V curve, including both the expiratory and inspiratory measurements. EarSpiro leverages a convolutional neural network (CNN) and a recurrent neural network (RNN) to capture the complex correlation between airflow sound and airflow speed. Meanwhile, EarSpiro adopts a clustering-based segmentation algorithm to track the weak inspiratory signals from the raw audio recording to enable inspiratory measurement. We also enable EarSpiro with daily mouthpiece-like objects such as a funnel using transfer learning and a decoder network with the help of only a few true lung function indices from the user. Extensive experiments with 60 subjects show that EarSpiro achieves mean errors of 0 . 20 𝐿 / 𝑠 and 0 . 42 𝐿 / 𝑠 for expiratory and inspiratory flow rate estimation, and 0 . 61 𝐿 / 𝑠 and 0 . 83 𝐿 / 𝑠 for expiratory and inspiratory F-V curve estimation. The mean correlation coefficient between the estimated F-V curve and the true one is 0 . 94. The mean estimation error for four common lung function indices is 7 . 3%.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"111 1","pages":"188:1-188:27"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77870686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abhijeet Mishra, Piyush Kumar, Jainendra Shukla, Aman Parnami
We presently rely on mechanical approaches to produce drag (friction) effects as haptic feedback on real surfaces for digital interaction. Unfortunately, due to their mechanical nature, such methods are inconvenient, difficult to scale, and raise object-deployment issues. Accordingly, we present HaptiDrag, a thin (1 mm) and lightweight (2 gram) device that can reliably produce various intensities of on-surface drag effects through the electroadhesion phenomenon. We first performed a design evaluation to determine the minimal size (5 cm x 5 cm) of HaptiDrag needed to enable the drag effect. Then, on eight distinct surfaces, we report the technical performance of two sizes of HaptiDrag under real-world conditions. We further conducted two user studies: the first to discover absolute detection threshold friction points of varying intensities common to all surfaces under test, and the second to validate the noticeability of these threshold points across all sizes of HaptiDrag. Finally, we demonstrate the device's utility in different scenarios.
{"title":"HaptiDrag: A Device with the Ability to Generate Varying Levels of Drag (Friction) Effects on Real Surfaces","authors":"Abhijeet Mishra, Piyush Kumar, Jainendra Shukla, Aman Parnami","doi":"10.1145/3550310","DOIUrl":"https://doi.org/10.1145/3550310","url":null,"abstract":"We presently rely on mechanical approaches to leverage drag (friction) effects for digital interaction as haptic feedback over real surfaces. Unfortunately, due to their mechanical nature, such methods are inconvenient, difficult to scale, and include object deployment issues. Accordingly, we present HaptiDrag, a thin (1 mm) and lightweight (2 gram) device that can reliably produce various intensities of on-surface drag effects through electroadhesion phenomenon. We first performed design evaluation to determine minimal size (5 cm x 5 cm) of HaptiDrag to enable drag effect. Further, with reference to eight distinct surfaces, we present technical performance of 2 sizes of HaptiDrag in real environment conditions. Later, we conducted two user studies; the first to discover absolute detection threshold friction spots of varying intensities common to all surfaces under test and the second to validate the absolute detection threshold points for noticeability with all sizes of HaptiDrag. Finally, we demonstrate device’s utility in different scenarios.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"23 1","pages":"131:1-131:26"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72774743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent studies on intermittent computing target single-core processors and overlook the efficient parallel execution of highly parallelizable machine learning tasks. Even though general-purpose multicore processors provide a high degree of parallelism and programming flexibility, intermittent computing has not yet exploited them. Filling this gap, we introduce the AdaMICA (Adaptive Multicore Intermittent Computing) runtime, which supports parallel intermittent computing for the first time and provides the highest degree of flexibility of programmable general-purpose multiple cores. AdaMICA is adaptive: it responds to changes in environmental power availability by dynamically reconfiguring the underlying multicore architecture to use the available power optimally. Our results demonstrate that AdaMICA significantly increases throughput (52% on average) and decreases latency (31% on average) by dynamically scaling the underlying architecture in response to variations in the unpredictable harvested energy.
{"title":"AdaMICA: Adaptive Multicore Intermittent Computing","authors":"K. Akhunov, K. Yıldırım","doi":"10.1145/3550304","DOIUrl":"https://doi.org/10.1145/3550304","url":null,"abstract":"Recent studies on intermittent computing target single-core processors and underestimate the efficient parallel execution of highly-parallelizable machine learning tasks. Even though general-purpose multicore processors provide a high degree of parallelism and programming flexibility, intermittent computing has not exploited them yet. Filling this gap, we introduce AdaMICA (Adaptive Multicore Intermittent Computing) runtime that supports, for the first time, parallel intermittent computing and provides the highest degree of flexibility of programmable general-purpose multiple cores. AdaMICA is adaptive since it responds to the changes in the environmental power availability by dynamically reconfiguring the underlying multicore architecture to use the power most optimally. Our results demonstrate that AdaMICA significantly increases the throughput (52% on average) and decreases the latency (31% on average) by dynamically scaling the underlying architecture, considering the variations in the unpredictable harvested energy.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"349 1","pages":"98:1-98:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78081842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ke Li, Ruidong Zhang, Bo Li, François Guimbretière, Cheng Zhang
This paper presents EarIO, an AI-powered acoustic sensing technology that allows an earable (e.g., an earphone) to continuously track facial expressions using two microphone-speaker pairs (one on each side), which are widely available in commodity earphones. It emits acoustic signals from a speaker on the earable towards the face. Depending on the facial expression, the muscles, tissues, and skin around the ear deform differently, resulting in unique echo profiles in the reflected signals captured by an on-device microphone. These received acoustic signals are processed and learned by a customized deep learning pipeline to continuously infer full facial expressions, represented by the 52 parameters captured by a TrueDepth camera. Compared to similar technologies, EarIO has significantly lower power consumption: it can sample at 86 Hz with a power signature of 154 mW. A user study with 16 participants under three different scenarios showed that EarIO can reliably estimate detailed facial movements while participants were sitting or walking, and after remounting the device. Based on these encouraging results, we further discuss the potential opportunities and challenges of applying EarIO to future ear-mounted wearables.
{"title":"EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements","authors":"Ke Li, Ruidong Zhang, Bo Li, François Guimbretière, Cheng Zhang","doi":"10.1145/3534621","DOIUrl":"https://doi.org/10.1145/3534621","url":null,"abstract":"This paper presents EarIO, an AI-powered acoustic sensing technology that allows an earable (e.g., earphone) to continuously track facial expressions using two pairs of microphone and speaker (one on each side), which are widely available in commodity earphones. It emits acoustic signals from a speaker on an earable towards the face. Depending on facial expressions, the muscles, tissues, and skin around the ear would deform differently, resulting in unique echo profiles in the reflected signals captured by an on-device microphone. These received acoustic signals are processed and learned by a customized deep learning pipeline to continuously infer the full facial expressions represented by 52 parameters captured using a TruthDepth camera. Compared to similar technologies, it has significantly lower power consumption, as it can sample at 86 Hz with a power signature of 154 mW. A user study with 16 participants under three different scenarios, showed that EarIO can reliably estimate the detailed facial movements when the participants were sitting, walking or after remounting the device. Based on the encouraging results, we further discuss the potential opportunities and challenges on applying EarIO on future ear-mounted wearables.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"2013 1","pages":"62:1-62:24"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86279527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Typing while wearing a standalone Head-Mounted Display (HMD)—a system without external input devices or sensors to support text entry—is hard. To address this issue, prior work has used external trackers to monitor finger movements and support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD-based tracking systems or otherwise awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining, and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight target sites. The second study collects performance data for taps on pairs of these targets, both to inform the design of a target selection procedure and to support a computational design process for selecting a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving final entry rates of 27.1 and 13.73 WPM. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, compared to the larger-scale finger and hand motions used in a baseline design from prior work, lead to reduced levels of perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.
{"title":"ThumbAir: In-Air Typing for Head Mounted Displays","authors":"Hyunjae Gil, Ian Oakley","doi":"10.1145/3569474","DOIUrl":"https://doi.org/10.1145/3569474","url":null,"abstract":"Typing while wearing a standalone Head Mounted Display (HMD)—systems without external input devices or sensors to support text entry—is hard. To address this issue, prior work has used external trackers to monitor finger movements to support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD based tracking systems or, otherwise, awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight targets sites. The second study collects performance data for taps on pairs of these targets to both inform the design of a target selection procedure and also support a computational design process to select a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving final WPMs of 27.1 and 13.73. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, in comparison to the larger scale finger and hand motions used in a baseline design from prior work, lead to reduced levels of perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"21 1","pages":"164:1-164:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79256287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amanda Watson, Claire Kendell, Anush Lingamoorthy, Insup Lee, James Weimer
Spectroscopy, the study of the interaction between electromagnetic radiation and matter, is a vital technique in many disciplines. However, it is largely confined to lab settings, so sensing is isolated and infrequent and can provide only a brief snapshot of the monitored parameter. Wearable technology brings sensing and tracking out into everyday life, creating longitudinal datasets that provide more insight into the monitored parameter. In this paper, we describe Lumos, an open-source device for wearable spectroscopy research. Lumos can facilitate on-body spectroscopy research in health monitoring, athletics, rehabilitation, and more. We developed an algorithm that determines the spectral response of a medium with a mean absolute error of 13 nm. From this, researchers can determine the optimal spectrum and create customized sensors for their target application. We show the utility of Lumos in a pilot study on sensing prediabetes, where we determine the relevant spectrum for glucose and create and evaluate a targeted tracking device.
{"title":"Lumos: An Open-Source Device for Wearable Spectroscopy Research","authors":"Amanda Watson, Claire Kendell, Anush Lingamoorthy, Insup Lee, James Weimer","doi":"10.1145/3569502","DOIUrl":"https://doi.org/10.1145/3569502","url":null,"abstract":"Spectroscopy, the study of the interaction between electromagnetic radiation and matter, is a vital technique in many disciplines. This technique is limited to lab settings, and, as such, sensing is isolated and infrequent. Thus, it can only provide a brief snapshot of the monitored parameter. Wearable technology brings sensing and tracking technologies out into everyday life, creating longitudinal datasets that provide more insight into the monitored parameter. In this paper, we describe Lumos, an open-source device for wearable spectroscopy research. Lumos can facilitate on-body spectroscopy research in health monitoring, athletics, rehabilitation, and more. We developed an algorithm to determine the spectral response of a medium with a mean absolute error of 13nm. From this, researchers can determine the optimal spectrum and create customized sensors for their target application. We show the utility of Lumos in a pilot study, sensing of prediabetes, where we determine the relevant spectrum for glucose and create and evaluate a targeted tracking device.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"114 1","pages":"187:1-187:24"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76723311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyunchul Lim, Yaxuan Li, Matthew Dressa, Fangwei Hu, Jae Hoon Kim, Ruidong Zhang, Cheng Zhang
In this paper, we present BodyTrak, an intelligent sensing technology that can estimate full-body poses using a miniature camera on a wristband.
{"title":"BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband","authors":"Hyunchul Lim, Yaxuan Li, Matthew Dressa, Fangwei Hu, Jae Hoon Kim, Ruidong Zhang, Cheng Zhang","doi":"10.1145/3552312","DOIUrl":"https://doi.org/10.1145/3552312","url":null,"abstract":"In this paper, we present BodyTrak, an intelligent sensing technology that can estimate full body poses on a wristband.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"56 1 1","pages":"154:1-154:21"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83378125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}