Yang Chen, Katherine Fennedy, A. Fogel, Shengdong Zhao, Chaoyang Zhang, Lijuan Liu, C. Yen
One key strategy for combating obesity is to slow down eating; however, this is difficult to achieve because of people's habitual nature. In this paper, we explored the feasibility of incorporating a shape-changing interface into an eating spoon to directly intervene in undesirable eating behaviour. First, we investigated the optimal dimension (i.e., Z-depth) and ideal range of spoon transformation for different food forms that could affect bite size while maintaining usability. These findings informed the development of the SSpoon prototype through a series of design explorations optimised for user adoption. We then applied two shape-changing strategies (instant transformations based on food form and subtle transformations based on food intake) and examined them in two comparative studies involving a full-course meal using a Wizard-of-Oz approach. The results indicated that SSpoon could achieve effects comparable to a small spoon (5 ml) in reducing eating rate by 13.7-16.1% and food consumption by 4.4-4.6%, while retaining user satisfaction similar to a normal eating spoon (10 ml). These results demonstrate the feasibility of a shape-changing eating utensil as a promising alternative for combating the growing concern of obesity.
{"title":"SSpoon: A Shape-changing Spoon That Optimizes Bite Size for Eating Rate Regulation","authors":"Yang Chen, Katherine Fennedy, A. Fogel, Shengdong Zhao, Chaoyang Zhang, Lijuan Liu, C. Yen","doi":"10.1145/3550312","DOIUrl":"https://doi.org/10.1145/3550312","url":null,"abstract":"One key strategy of combating obesity is to slow down eating; however, this is difficult to achieve due to people’s habitual nature. In this paper, we explored the feasibility of incorporating shape-changing interface into an eating spoon to directly intervene in undesirable eating behaviour. First, we investigated the optimal dimension (i.e., Z-depth) and ideal range of spoon transformation for different food forms that could affect bite size while maintaining usability. Those findings allowed the development of SSpoon prototype through a series of design explorations that are optimised for user’s adoption. Then, we applied two shape-changing strategies: instant transformations based on food forms and subtle transformations based on food intake) and examined in two comparative studies involving a full course meal using Wizard-of-Oz approach. The results indicated that SSpoon could achieve comparable effects to a small spoon (5ml) in reducing eating rate by 13.7-16.1% and food consumption by 4.4-4.6%, while retaining similar user satisfaction as a normal eating spoon (10ml). These results demonstrate the feasibility of a shape-changing eating utensil as a promising alternative to combat the growing concern of obesity. . These provide initial to RQ4 , suggesting that SSpoon may not influence the perceived despite the overall of in a standardized","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"434 1","pages":"105:1-105:32"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79599477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sign language translation (SLT) is considered the core technology for breaking the communication barrier between deaf and hearing people. However, most studies focus only on recognizing the sequence of sign gestures (sign language recognition, SLR), ignoring the significant differences in linguistic structure between sign language and spoken language. In this paper, we approach SLT as a spatio-temporal machine translation task and propose a wearable-based system, WearSign, to enable direct translation from sign-induced sensory signals into spoken-language text. WearSign leverages a smartwatch and an armband of electromyography (EMG) sensors to capture the sophisticated sign gestures. In the design of the translation network, considering the significant modality and linguistic gap between sensory signals and spoken language, we design a multi-task encoder-decoder framework that uses sign glosses (sign gesture labels) as intermediate supervision to guide end-to-end training. In addition, because of the lack of sufficient training data, the performance of prior systems usually degrades drastically on sentences with complex structures or sentences unseen in the training set. To tackle this, we borrow the idea of back-translation and leverage the far more readily available spoken language data to synthesize paired sign language data. We include the synthetic pairs in the training process, which enables the network to learn a better sequence-to-sequence mapping and to generate more fluent spoken language sentences. We construct an American Sign Language (ASL) dataset consisting of 250 commonly used sentences gathered from 15 volunteers. WearSign achieves 4.7% and 8.6% word error rate (WER) in user-independent tests and unseen-sentence tests, respectively. We also implement a real-time version of WearSign that runs fully on a smartphone with low latency and energy overhead.
{"title":"WearSign: Pushing the Limit of Sign Language Translation Using Inertial and EMG Wearables","authors":"Qian Zhang, JiaZhen Jing, Dong Wang, Run Zhao","doi":"10.1145/3517257","DOIUrl":"https://doi.org/10.1145/3517257","url":null,"abstract":"Sign language translation (SLT) is considered as the core technology to break the communication barrier between the deaf and hearing people. However, most studies only focus on recognizing the sequence of sign gestures (sign language recognition (SLR)), ignoring the significant difference of linguistic structures between sign language and spoken language. In this paper, we approach SLT as a spatio-temporal machine translation task and propose a wearable-based system, WearSign, to enable direct translation from the sign-induced sensory signals into spoken texts. WearSign leverages a smartwatch and an armband of ElectroMyoGraphy (EMG) sensors to capture the sophisticated sign gestures. In the design of the translation network, considering the significant modality and linguistic gap between sensory signals and spoken language, we design a multi-task encoder-decoder framework which uses sign glosses (sign gesture labels) for intermediate supervision to guide the end-to-end training. In addition, due to the lack of sufficient training data, the performance of prior studies usually degrades drastically when it comes to sentences with complex structures or unseen in the training set. To tackle this, we borrow the idea of back-translation and leverage the much more available spoken language data to synthesize the paired sign language data. We include the synthetic pairs into the training process, which enables the network to learn better sequence-to-sequence mapping as well as generate more fluent spoken language sentences. We construct an American sign language (ASL) dataset consisting of 250 commonly used sentences gathered from 15 volunteers. WearSign achieves 4.7% and 8.6% word error rate (WER) in user-independent tests and unseen sentence tests respectively. We also implement a real-time version of WearSign which runs fully on the smartphone with a low latency and energy overhead. CCS Concepts:","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"22 1","pages":"35:1-35:27"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78004612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spirometry is the gold standard for evaluating lung function. Recent research has proposed that mobile devices can measure lung function indices cost-efficiently. However, these designs fall short in two aspects. First, they cannot provide the flow-volume (F-V) curve, which is more informative than lung function indices alone. Second, these solutions lack inspiratory measurement, which is sensitive to lung diseases such as variable extrathoracic obstruction. In this paper, we present EarSpiro, an earphone-based solution that interprets the airflow sound recorded during a spirometry test into an F-V curve, including both expiratory and inspiratory measurements. EarSpiro leverages a convolutional neural network (CNN) and a recurrent neural network (RNN) to capture the complex correlation between airflow sound and airflow speed. Meanwhile, EarSpiro adopts a clustering-based segmentation algorithm to track the weak inspiratory signals in the raw audio recording to enable inspiratory measurement. We also enable EarSpiro to work with everyday mouthpiece-like objects such as a funnel, using transfer learning and a decoder network with the help of only a few true lung function indices from the user. Extensive experiments with 60 subjects show that EarSpiro achieves mean errors of 0.20 L/s and 0.42 L/s for expiratory and inspiratory flow rate estimation, and 0.61 L/s and 0.83 L/s for expiratory and inspiratory F-V curve estimation. The mean correlation coefficient between the estimated F-V curve and the true one is 0.94. The mean estimation error for four common lung function indices is 7.3%.
{"title":"EarSpiro: Earphone-based Spirometry for Lung Function Assessment","authors":"Wentao Xie, Qing Hu, Jin Zhang, Qian Zhang","doi":"10.1145/3569480","DOIUrl":"https://doi.org/10.1145/3569480","url":null,"abstract":"Spirometry is the gold standard for evaluating lung functions. Recent research has proposed that mobile devices can measure lung function indices cost-efficiently. However, these designs fall short in two aspects. First, they cannot provide the flow-volume (F-V) curve, which is more informative than lung function indices. Secondly, these solutions lack inspiratory measurement, which is sensitive to lung diseases such as variable extrathoracic obstruction. In this paper, we present EarSpiro, an earphone-based solution that interprets the recorded airflow sound during a spirometry test into an F-V curve, including both the expiratory and inspiratory measurements. EarSpiro leverages a convolutional neural network (CNN) and a recurrent neural network (RNN) to capture the complex correlation between airflow sound and airflow speed. Meanwhile, EarSpiro adopts a clustering-based segmentation algorithm to track the weak inspiratory signals from the raw audio recording to enable inspiratory measurement. We also enable EarSpiro with daily mouthpiece-like objects such as a funnel using transfer learning and a decoder network with the help of only a few true lung function indices from the user. Extensive experiments with 60 subjects show that EarSpiro achieves mean errors of 0 . 20 𝐿 / 𝑠 and 0 . 42 𝐿 / 𝑠 for expiratory and inspiratory flow rate estimation, and 0 . 61 𝐿 / 𝑠 and 0 . 83 𝐿 / 𝑠 for expiratory and inspiratory F-V curve estimation. The mean correlation coefficient between the estimated F-V curve and the true one is 0 . 94. The mean estimation error for four common lung function indices is 7 . 3%.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"111 1","pages":"188:1-188:27"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77870686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abhijeet Mishra, Piyush Kumar, Jainendra Shukla, Aman Parnami
We presently rely on mechanical approaches to leverage drag (friction) effects as haptic feedback for digital interaction on real surfaces. Unfortunately, due to their mechanical nature, such methods are inconvenient, difficult to scale, and suffer from object deployment issues. Accordingly, we present HaptiDrag, a thin (1 mm) and lightweight (2 g) device that can reliably produce various intensities of on-surface drag effects through the electroadhesion phenomenon. We first performed a design evaluation to determine the minimal size of HaptiDrag (5 cm x 5 cm) that enables a drag effect. Further, with reference to eight distinct surfaces, we present the technical performance of two sizes of HaptiDrag under real environmental conditions. We then conducted two user studies: the first to discover absolute-detection-threshold friction spots of varying intensities common to all surfaces under test, and the second to validate the noticeability of these absolute detection threshold points with all sizes of HaptiDrag. Finally, we demonstrate the device's utility in different scenarios.
{"title":"HaptiDrag: A Device with the Ability to Generate Varying Levels of Drag (Friction) Effects on Real Surfaces","authors":"Abhijeet Mishra, Piyush Kumar, Jainendra Shukla, Aman Parnami","doi":"10.1145/3550310","DOIUrl":"https://doi.org/10.1145/3550310","url":null,"abstract":"We presently rely on mechanical approaches to leverage drag (friction) effects for digital interaction as haptic feedback over real surfaces. Unfortunately, due to their mechanical nature, such methods are inconvenient, difficult to scale, and include object deployment issues. Accordingly, we present HaptiDrag, a thin (1 mm) and lightweight (2 gram) device that can reliably produce various intensities of on-surface drag effects through electroadhesion phenomenon. We first performed design evaluation to determine minimal size (5 cm x 5 cm) of HaptiDrag to enable drag effect. Further, with reference to eight distinct surfaces, we present technical performance of 2 sizes of HaptiDrag in real environment conditions. Later, we conducted two user studies; the first to discover absolute detection threshold friction spots of varying intensities common to all surfaces under test and the second to validate the absolute detection threshold points for noticeability with all sizes of HaptiDrag. Finally, we demonstrate device’s utility in different scenarios.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"23 1","pages":"131:1-131:26"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72774743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent studies on intermittent computing target single-core processors and overlook the efficient parallel execution of highly parallelizable machine learning tasks. Even though general-purpose multicore processors provide a high degree of parallelism and programming flexibility, intermittent computing has not yet exploited them. Filling this gap, we introduce the AdaMICA (Adaptive Multicore Intermittent Computing) runtime, which, for the first time, supports parallel intermittent computing and provides the full flexibility of programmable general-purpose multiple cores. AdaMICA is adaptive in that it responds to changes in environmental power availability by dynamically reconfiguring the underlying multicore architecture to use the available power optimally. Our results demonstrate that AdaMICA significantly increases throughput (52% on average) and decreases latency (31% on average) by dynamically scaling the underlying architecture in response to variations in the unpredictable harvested energy.
{"title":"AdaMICA: Adaptive Multicore Intermittent Computing","authors":"K. Akhunov, K. Yıldırım","doi":"10.1145/3550304","DOIUrl":"https://doi.org/10.1145/3550304","url":null,"abstract":"Recent studies on intermittent computing target single-core processors and underestimate the efficient parallel execution of highly-parallelizable machine learning tasks. Even though general-purpose multicore processors provide a high degree of parallelism and programming flexibility, intermittent computing has not exploited them yet. Filling this gap, we introduce AdaMICA (Adaptive Multicore Intermittent Computing) runtime that supports, for the first time, parallel intermittent computing and provides the highest degree of flexibility of programmable general-purpose multiple cores. AdaMICA is adaptive since it responds to the changes in the environmental power availability by dynamically reconfiguring the underlying multicore architecture to use the power most optimally. Our results demonstrate that AdaMICA significantly increases the throughput (52% on average) and decreases the latency (31% on average) by dynamically scaling the underlying architecture, considering the variations in the unpredictable harvested energy.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"349 1","pages":"98:1-98:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78081842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ke Li, Ruidong Zhang, Bo Li, François Guimbretière, Cheng Zhang
This paper presents EarIO, an AI-powered acoustic sensing technology that allows an earable (e.g., an earphone) to continuously track facial expressions using two microphone-speaker pairs (one on each side), which are widely available in commodity earphones. It emits acoustic signals from a speaker on the earable towards the face. Depending on the facial expression, the muscles, tissues, and skin around the ear deform differently, resulting in unique echo profiles in the reflected signals captured by an on-device microphone. These received acoustic signals are processed and learned by a customized deep learning pipeline to continuously infer full facial expressions, represented by 52 parameters captured using a TrueDepth camera. Compared to similar technologies, it has significantly lower power consumption, as it can sample at 86 Hz with a power signature of 154 mW. A user study with 16 participants under three different scenarios showed that EarIO can reliably estimate detailed facial movements while participants were sitting, walking, or after remounting the device. Based on these encouraging results, we further discuss the potential opportunities and challenges of applying EarIO to future ear-mounted wearables.
{"title":"EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements","authors":"Ke Li, Ruidong Zhang, Bo Li, François Guimbretière, Cheng Zhang","doi":"10.1145/3534621","DOIUrl":"https://doi.org/10.1145/3534621","url":null,"abstract":"This paper presents EarIO, an AI-powered acoustic sensing technology that allows an earable (e.g., earphone) to continuously track facial expressions using two pairs of microphone and speaker (one on each side), which are widely available in commodity earphones. It emits acoustic signals from a speaker on an earable towards the face. Depending on facial expressions, the muscles, tissues, and skin around the ear would deform differently, resulting in unique echo profiles in the reflected signals captured by an on-device microphone. These received acoustic signals are processed and learned by a customized deep learning pipeline to continuously infer the full facial expressions represented by 52 parameters captured using a TruthDepth camera. Compared to similar technologies, it has significantly lower power consumption, as it can sample at 86 Hz with a power signature of 154 mW. A user study with 16 participants under three different scenarios, showed that EarIO can reliably estimate the detailed facial movements when the participants were sitting, walking or after remounting the device. Based on the encouraging results, we further discuss the potential opportunities and challenges on applying EarIO on future ear-mounted wearables.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"2013 1","pages":"62:1-62:24"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86279527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Typing while wearing a standalone Head Mounted Display (HMD), a system without external input devices or sensors to support text entry, is hard. To address this issue, prior work has used external trackers to monitor finger movements and support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD-based tracking systems or otherwise awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining, and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight target sites. The second study collects performance data for taps on pairs of these targets, both to inform the design of a target selection procedure and to support a computational design process that selects a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving 27.1 and 13.73 words per minute (WPM), respectively. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, compared to the larger-scale finger and hand motions used in a baseline design from prior work, lead to lower perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.
{"title":"ThumbAir: In-Air Typing for Head Mounted Displays","authors":"Hyunjae Gil, Ian Oakley","doi":"10.1145/3569474","DOIUrl":"https://doi.org/10.1145/3569474","url":null,"abstract":"Typing while wearing a standalone Head Mounted Display (HMD)—systems without external input devices or sensors to support text entry—is hard. To address this issue, prior work has used external trackers to monitor finger movements to support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD based tracking systems or, otherwise, awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight targets sites. The second study collects performance data for taps on pairs of these targets to both inform the design of a target selection procedure and also support a computational design process to select a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving final WPMs of 27.1 and 13.73. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, in comparison to the larger scale finger and hand motions used in a baseline design from prior work, lead to reduced levels of perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"21 1","pages":"164:1-164:30"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79256287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amanda Watson, Claire Kendell, Anush Lingamoorthy, Insup Lee, James Weimer
Spectroscopy, the study of the interaction between electromagnetic radiation and matter, is a vital technique in many disciplines. However, it is typically confined to lab settings, so sensing is isolated and infrequent and can only provide a brief snapshot of the monitored parameter. Wearable technology brings sensing and tracking out into everyday life, creating longitudinal datasets that provide more insight into the monitored parameter. In this paper, we describe Lumos, an open-source device for wearable spectroscopy research. Lumos can facilitate on-body spectroscopy research in health monitoring, athletics, rehabilitation, and more. We developed an algorithm to determine the spectral response of a medium with a mean absolute error of 13 nm. From this, researchers can determine the optimal spectrum and create customized sensors for their target application. We show the utility of Lumos in a pilot study on prediabetes sensing, where we determine the relevant spectrum for glucose and create and evaluate a targeted tracking device.
{"title":"Lumos: An Open-Source Device for Wearable Spectroscopy Research","authors":"Amanda Watson, Claire Kendell, Anush Lingamoorthy, Insup Lee, James Weimer","doi":"10.1145/3569502","DOIUrl":"https://doi.org/10.1145/3569502","url":null,"abstract":"Spectroscopy, the study of the interaction between electromagnetic radiation and matter, is a vital technique in many disciplines. This technique is limited to lab settings, and, as such, sensing is isolated and infrequent. Thus, it can only provide a brief snapshot of the monitored parameter. Wearable technology brings sensing and tracking technologies out into everyday life, creating longitudinal datasets that provide more insight into the monitored parameter. In this paper, we describe Lumos, an open-source device for wearable spectroscopy research. Lumos can facilitate on-body spectroscopy research in health monitoring, athletics, rehabilitation, and more. We developed an algorithm to determine the spectral response of a medium with a mean absolute error of 13nm. From this, researchers can determine the optimal spectrum and create customized sensors for their target application. We show the utility of Lumos in a pilot study, sensing of prediabetes, where we determine the relevant spectrum for glucose and create and evaluate a targeted tracking device.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"114 1","pages":"187:1-187:24"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76723311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyunchul Lim, Yaxuan Li, Matthew Dressa, Fangwei Hu, Jae Hoon Kim, Ruidong Zhang, Cheng Zhang
In this paper, we present BodyTrak, an intelligent sensing technology that can estimate full-body poses using a miniature camera on a wristband.
{"title":"BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband","authors":"Hyunchul Lim, Yaxuan Li, Matthew Dressa, Fangwei Hu, Jae Hoon Kim, Ruidong Zhang, Cheng Zhang","doi":"10.1145/3552312","DOIUrl":"https://doi.org/10.1145/3552312","url":null,"abstract":"In this paper, we present BodyTrak, an intelligent sensing technology that can estimate full body poses on a wristband.","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"56 1 1","pages":"154:1-154:21"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83378125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}