
Latest Publications: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.

SSpoon: A Shape-changing Spoon That Optimizes Bite Size for Eating Rate Regulation
Pub Date: 2022-01-01 DOI: 10.1145/3550312
Yang Chen, Katherine Fennedy, A. Fogel, Shengdong Zhao, Chaoyang Zhang, Lijuan Liu, C. Yen
One key strategy for combating obesity is to slow down eating; however, this is difficult to achieve because eating is habitual. In this paper, we explored the feasibility of incorporating a shape-changing interface into an eating spoon to directly intervene in undesirable eating behaviour. First, we investigated the optimal dimension (i.e., Z-depth) and ideal range of spoon transformation for different food forms that could affect bite size while maintaining usability. These findings enabled the development of the SSpoon prototype through a series of design explorations optimised for user adoption. We then applied two shape-changing strategies (instant transformations based on food form and subtle transformations based on food intake) and examined them in two comparative studies involving a full-course meal using a Wizard-of-Oz approach. The results indicated that SSpoon could achieve effects comparable to a small spoon (5 ml), reducing eating rate by 13.7-16.1% and food consumption by 4.4-4.6%, while retaining user satisfaction similar to a normal eating spoon (10 ml). These results demonstrate the feasibility of a shape-changing eating utensil as a promising alternative for combating the growing concern of obesity.
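To make the two transformation strategies concrete, here is a minimal Python sketch of how a controller might map them to bowl depth. The depth values, step size, and function names are illustrative assumptions, not parameters from the paper.

```python
# Hypothetical controller for SSpoon's two strategies; all numbers are
# assumed for illustration, not taken from the paper.
FOOD_FORM_DEPTH_MM = {"liquid": 4.0, "semi-solid": 5.5, "solid": 7.0}

def instant_transform(food_form: str) -> float:
    """Strategy 1: set bowl depth as soon as the food form is detected."""
    return FOOD_FORM_DEPTH_MM.get(food_form, 7.0)

def subtle_transform(start_depth_mm: float, bites_taken: int,
                     step_mm: float = 0.2, min_depth_mm: float = 4.0) -> float:
    """Strategy 2: shrink the bowl gradually as intake accumulates,
    nudging bite size (and thus eating rate) downward."""
    return max(min_depth_mm, start_depth_mm - step_mm * bites_taken)
```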
Citations: 1
WearSign: Pushing the Limit of Sign Language Translation Using Inertial and EMG Wearables
Pub Date: 2022-01-01 DOI: 10.1145/3517257
Qian Zhang, JiaZhen Jing, Dong Wang, Run Zhao
Sign language translation (SLT) is considered the core technology for breaking the communication barrier between deaf and hearing people. However, most studies focus only on recognizing the sequence of sign gestures (sign language recognition (SLR)), ignoring the significant difference in linguistic structure between sign language and spoken language. In this paper, we approach SLT as a spatio-temporal machine translation task and propose a wearable-based system, WearSign, to enable direct translation from sign-induced sensory signals into spoken texts. WearSign leverages a smartwatch and an armband of electromyography (EMG) sensors to capture sophisticated sign gestures. In the design of the translation network, considering the significant modality and linguistic gap between sensory signals and spoken language, we design a multi-task encoder-decoder framework that uses sign glosses (sign gesture labels) for intermediate supervision to guide the end-to-end training. In addition, due to the lack of sufficient training data, the performance of prior approaches usually degrades drastically on sentences with complex structures or sentences unseen in the training set. To tackle this, we borrow the idea of back-translation and leverage the much more abundant spoken language data to synthesize paired sign language data. We include the synthetic pairs in the training process, which enables the network to learn better sequence-to-sequence mappings and to generate more fluent spoken language sentences. We construct an American Sign Language (ASL) dataset consisting of 250 commonly used sentences gathered from 15 volunteers. WearSign achieves 4.7% and 8.6% word error rate (WER) in user-independent tests and unseen sentence tests respectively. We also implement a real-time version of WearSign that runs fully on a smartphone with low latency and energy overhead.
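A minimal sketch of the multi-task encoder-decoder idea follows, assuming a PyTorch implementation; the layer sizes, vocabularies, and the pairing of a CTC gloss loss with a cross-entropy word loss are my assumptions about one plausible realization, not the paper's exact network.

```python
# One plausible multi-task encoder-decoder with gloss supervision; sizes,
# vocabularies, and loss pairing are assumptions, not WearSign's exact design.
import torch.nn as nn

class SLTNetwork(nn.Module):
    def __init__(self, sensor_dim=64, hidden=256, gloss_vocab=120, word_vocab=1000):
        super().__init__()
        self.encoder = nn.GRU(sensor_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.gloss_head = nn.Linear(2 * hidden, gloss_vocab)  # CTC on sign glosses
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.word_head = nn.Linear(hidden, word_vocab)        # cross-entropy on text

    def forward(self, imu_emg):                # (batch, time, sensor_dim)
        feats, _ = self.encoder(imu_emg)
        gloss_logits = self.gloss_head(feats)  # intermediate supervision signal
        dec, _ = self.decoder(feats)
        word_logits = self.word_head(dec)
        return gloss_logits, word_logits
```

Training would minimize word_loss + lambda * gloss_loss (lambda is an assumed weight), with back-translated synthetic (sensor, text) pairs simply appended to the training set.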
Citations: 16
EarSpiro: Earphone-based Spirometry for Lung Function Assessment
Pub Date: 2022-01-01 DOI: 10.1145/3569480
Wentao Xie, Qing Hu, Jin Zhang, Qian Zhang
Spirometry is the gold standard for evaluating lung function. Recent research has proposed that mobile devices can measure lung function indices cost-efficiently. However, these designs fall short in two aspects. First, they cannot provide the flow-volume (F-V) curve, which is more informative than lung function indices. Second, these solutions lack inspiratory measurement, which is sensitive to lung diseases such as variable extrathoracic obstruction. In this paper, we present EarSpiro, an earphone-based solution that interprets the airflow sound recorded during a spirometry test into an F-V curve, including both expiratory and inspiratory measurements. EarSpiro leverages a convolutional neural network (CNN) and a recurrent neural network (RNN) to capture the complex correlation between airflow sound and airflow speed. Meanwhile, EarSpiro adopts a clustering-based segmentation algorithm to track the weak inspiratory signals in the raw audio recording and enable inspiratory measurement. We also enable EarSpiro to work with everyday mouthpiece-like objects such as a funnel, using transfer learning and a decoder network guided by only a few true lung function indices from the user. Extensive experiments with 60 subjects show that EarSpiro achieves mean errors of 0.20 L/s and 0.42 L/s for expiratory and inspiratory flow rate estimation, and 0.61 L/s and 0.83 L/s for expiratory and inspiratory F-V curve estimation. The mean correlation coefficient between the estimated F-V curve and the true one is 0.94. The mean estimation error for four common lung function indices is 7.3%.
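The CNN-plus-RNN mapping from airflow sound to airflow speed could look like the sketch below; the shapes and hyperparameters are assumptions, not the paper's architecture.

```python
# Illustrative CNN+RNN regressor from airflow-sound features to airflow speed;
# all shapes and hyperparameters are assumptions, not EarSpiro's exact network.
import torch.nn as nn

class FlowEstimator(nn.Module):
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # flow rate (L/s) per audio frame

    def forward(self, spec):                # spec: (batch, n_mels, time)
        x = self.cnn(spec).transpose(1, 2)  # (batch, time, 128)
        x, _ = self.rnn(x)
        return self.head(x).squeeze(-1)     # (batch, time) flow-rate curve
```

Cumulatively summing the per-frame flow gives exhaled or inhaled volume, so plotting flow against that cumulative volume traces the F-V curve.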
Citations: 0
HaptiDrag: A Device with the Ability to Generate Varying Levels of Drag (Friction) Effects on Real Surfaces
Pub Date: 2022-01-01 DOI: 10.1145/3550310
Abhijeet Mishra, Piyush Kumar, Jainendra Shukla, Aman Parnami
We presently rely on mechanical approaches to produce drag (friction) effects as haptic feedback over real surfaces for digital interaction. Unfortunately, due to their mechanical nature, such methods are inconvenient, difficult to scale, and raise object-deployment issues. Accordingly, we present HaptiDrag, a thin (1 mm) and lightweight (2 g) device that can reliably produce various intensities of on-surface drag effects through the electroadhesion phenomenon. We first performed a design evaluation to determine the minimal size (5 cm x 5 cm) of HaptiDrag that enables a drag effect. Further, with reference to eight distinct surfaces, we present the technical performance of two sizes of HaptiDrag under real environmental conditions. We then conducted two user studies: the first to discover absolute detection threshold friction spots of varying intensities common to all surfaces under test, and the second to validate the absolute detection threshold points for noticeability with all sizes of HaptiDrag. Finally, we demonstrate the device's utility in different scenarios.
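For intuition on why voltage controls drag intensity, here is a back-of-the-envelope model using the standard parallel-plate electroadhesion approximation; the geometry, permittivity, and friction values are assumed and are not taken from the paper.

```python
# Parallel-plate electroadhesion estimate (textbook approximation, not the
# paper's model); geometry, permittivity, and friction values are assumed.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electroadhesion_force_n(voltage_v: float, area_m2: float = 0.05 * 0.05,
                            eps_r: float = 3.0, gap_m: float = 50e-6) -> float:
    """F = eps0 * eps_r * A * V^2 / (2 * d^2), here for a 5 cm x 5 cm pad."""
    return EPS0 * eps_r * area_m2 * voltage_v ** 2 / (2.0 * gap_m ** 2)

def drag_force_n(voltage_v: float, finger_load_n: float = 0.5,
                 mu: float = 0.4) -> float:
    """Perceived drag grows with the electroadhesive clamping force."""
    return mu * (finger_load_n + electroadhesion_force_n(voltage_v))
```

Under this approximation the clamping force grows with the square of the applied voltage, which is what lets a single pad render multiple drag intensities.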
Citations: 1
StudentSADD: Rapid Mobile Depression and Suicidal Ideation Screening of College Students during the Coronavirus Pandemic
Pub Date: 2022-01-01 DOI: 10.1145/3534604
M. L. Tlachac, Ricardo Flores, Miranda Reisch, Rimsha Kayastha, N. Taurich, V. Melican, Connor Bruneau, H. Caouette, E. Toto, E. Rundensteiner
The growing prevalence of depression and suicidal ideation among college students, further exacerbated by the coronavirus pandemic, is alarming and highlights the need for universal mental illness screening technology. With traditional screening questionnaires too burdensome to achieve universal screening in this population, data collected through mobile applications has the potential to rapidly identify at-risk students. While prior research has mostly focused on collecting passive smartphone modalities from students, smartphone sensors can also capture active modalities. The general public has demonstrated more willingness to share active than passive modalities through an app, yet no dataset of active mobile modalities for mental illness screening exists for students. Knowing which active modalities hold strong screening capabilities for student populations is critical for developing targeted mental illness screening technology. Thus, we deployed a mobile application to over 300 students during the COVID-19 pandemic to collect the Student Suicidal Ideation and Depression Detection (StudentSADD) dataset. We report on a rich variety of machine learning models, including cutting-edge multimodal pretrained deep learning classifiers, applied to active text and voice replies to screen for depression and suicidal ideation. This unique StudentSADD dataset is a valuable resource for the community for developing mobile mental illness screening tools.
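As a hedged illustration of the multimodal screening setup, the sketch below fuses per-participant text and audio features into one binary screener; the feature dimensions and the logistic-regression choice are placeholders for the much richer model family the paper evaluates.

```python
# Placeholder multimodal fusion screener; feature dimensions and model choice
# are assumptions standing in for the paper's larger model family.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse(text_emb: np.ndarray, audio_feats: np.ndarray) -> np.ndarray:
    """Late fusion: concatenate per-participant text and audio vectors,
    e.g. (n, 384) sentence embeddings with (n, 40) mean MFCCs."""
    return np.concatenate([text_emb, audio_feats], axis=1)

def train_screener(X_text, X_audio, y):
    """y[i] = 1 if participant i exceeds the screening-questionnaire cutoff."""
    X = fuse(X_text, X_audio)
    return LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
```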
Citations: 4
AdaMICA: Adaptive Multicore Intermittent Computing
Pub Date: 2022-01-01 DOI: 10.1145/3550304
K. Akhunov, K. Yıldırım
Recent studies on intermittent computing target single-core processors and overlook the efficient parallel execution of highly parallelizable machine learning tasks. Even though general-purpose multicore processors provide a high degree of parallelism and programming flexibility, intermittent computing has not yet exploited them. Filling this gap, we introduce the AdaMICA (Adaptive Multicore Intermittent Computing) runtime, which supports parallel intermittent computing for the first time and provides the highest degree of flexibility of programmable general-purpose multiple cores. AdaMICA is adaptive: it responds to changes in environmental power availability by dynamically reconfiguring the underlying multicore architecture to use the available power most optimally. Our results demonstrate that AdaMICA significantly increases throughput (52% on average) and decreases latency (31% on average) by dynamically scaling the underlying architecture in response to variations in the unpredictable harvested energy.
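A toy version of such an adaptive policy is sketched below: choose how many cores to power from the harvested budget. The per-core and base power costs and the decision rule are invented for illustration only, not drawn from the paper.

```python
# Toy power-budget policy; all power figures are invented for illustration.
CORE_COST_MW = 1.2   # assumed marginal power per additional active core
BASE_COST_MW = 0.8   # assumed always-on cost (wakeup, state restore, memory)
MAX_CORES = 4

def cores_for_budget(harvested_mw: float) -> int:
    """Return how many cores to power up given the harvested power budget;
    0 means sleep and wait for the energy buffer to recharge."""
    if harvested_mw < BASE_COST_MW + CORE_COST_MW:
        return 0
    usable = harvested_mw - BASE_COST_MW
    return min(MAX_CORES, int(usable // CORE_COST_MW))
```

Rerunning this decision whenever the measured input power changes gives the dynamic scaling behaviour the abstract describes.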
Citations: 6
EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements
Pub Date: 2022-01-01 DOI: 10.1145/3534621
Ke Li, Ruidong Zhang, Bo Li, François Guimbretière, Cheng Zhang
This paper presents EarIO, an AI-powered acoustic sensing technology that allows an earable (e.g., an earphone) to continuously track facial expressions using two microphone-speaker pairs (one on each side), which are widely available in commodity earphones. It emits acoustic signals from a speaker on the earable towards the face. Depending on the facial expression, the muscles, tissues, and skin around the ear deform differently, resulting in unique echo profiles in the reflected signals captured by an on-device microphone. These received acoustic signals are processed and learned by a customized deep learning pipeline to continuously infer full facial expressions, represented by 52 parameters captured using a TrueDepth camera. Compared to similar technologies, it has significantly lower power consumption, as it can sample at 86 Hz with a power signature of 154 mW. A user study with 16 participants under three different scenarios showed that EarIO can reliably estimate detailed facial movements while participants are sitting or walking, and after remounting the device. Based on these encouraging results, we further discuss the potential opportunities and challenges of applying EarIO to future ear-mounted wearables.
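The echo-profile idea can be illustrated with a short cross-correlation sketch; the framing and normalization below are assumptions, and the real pipeline feeds such profiles into the deep learning model described above.

```python
# Cross-correlation sketch of an "echo profile"; frame length and
# normalization are assumptions for illustration.
import numpy as np

def echo_profile(transmitted: np.ndarray, received: np.ndarray) -> np.ndarray:
    """Correlate the known probe signal against a (longer) microphone frame;
    peaks mark reflection paths, which shift as skin and tissue deform."""
    corr = np.correlate(received, transmitted, mode="valid")
    return corr / (np.linalg.norm(transmitted) + 1e-9)

# Differences between consecutive profiles isolate expression-driven changes,
# which a learning model would map to the 52 expression parameters.
```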
Citations: 14
ThumbAir: In-Air Typing for Head Mounted Displays
Pub Date: 2022-01-01 DOI: 10.1145/3569474
Hyunjae Gil, Ian Oakley
Typing while wearing a standalone head-mounted display (HMD), a system without external input devices or sensors to support text entry, is hard. To address this issue, prior work has used external trackers to monitor finger movements and support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD-based tracking systems, or are otherwise awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining, and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight target sites. The second study collects performance data for taps on pairs of these targets, both to inform the design of a target selection procedure and to support a computational design process that selects a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving final WPMs of 27.1 and 13.73. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, compared to the larger-scale finger and hand motions used in a baseline design from prior work, reduce perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.
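The computational design step can be illustrated by scoring candidate letter-to-target assignments with measured pair tap times weighted by digraph frequency; this brute-force sketch is an assumption about the flavor of the optimization, feasible only for small letter sets, not the paper's actual procedure.

```python
# Brute-force layout scoring sketch (an assumed simplification): expected
# entry time = sum over letter pairs of digraph frequency x measured tap time.
from itertools import permutations

def layout_cost(layout, digraph_freq, pair_time):
    """layout[i] is the target id assigned to letter i; digraph_freq maps
    letter-index pairs (a, b) to corpus frequency; pair_time maps target-id
    pairs to measured seconds from the pairwise tapping study."""
    return sum(freq * pair_time[(layout[a], layout[b])]
               for (a, b), freq in digraph_freq.items())

def best_layout(targets, n_letters, digraph_freq, pair_time):
    # Exhaustive search is only feasible when n_letters <= len(targets) and
    # both are small (e.g., grouping letters onto the eight target sites).
    return min(permutations(targets, n_letters),
               key=lambda lay: layout_cost(lay, digraph_freq, pair_time))
```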
Citations: 1
Lumos: An Open-Source Device for Wearable Spectroscopy Research
Pub Date: 2022-01-01 DOI: 10.1145/3569502
Amanda Watson, Claire Kendell, Anush Lingamoorthy, Insup Lee, James Weimer
Spectroscopy, the study of the interaction between electromagnetic radiation and matter, is a vital technique in many disciplines. The technique has largely been limited to lab settings, and, as such, sensing is isolated and infrequent: it can only provide a brief snapshot of the monitored parameter. Wearable technology brings sensing and tracking into everyday life, creating longitudinal datasets that provide more insight into the monitored parameter. In this paper, we describe Lumos, an open-source device for wearable spectroscopy research. Lumos can facilitate on-body spectroscopy research in health monitoring, athletics, rehabilitation, and more. We developed an algorithm to determine the spectral response of a medium with a mean absolute error of 13 nm. From this, researchers can determine the optimal spectrum and create customized sensors for their target application. We show the utility of Lumos in a pilot study on sensing prediabetes, where we determine the relevant spectrum for glucose and create and evaluate a targeted tracking device.
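One plausible shape for the spectral-response algorithm is to fit a smooth curve to absorbance measured at each LED wavelength and report its peak; the Gaussian model below is an assumption for illustration, while the paper reports a 13 nm mean absolute error for its own method.

```python
# Assumed Gaussian-fit sketch for recovering a medium's spectral-response peak
# from absorbance sampled at a few discrete LED wavelengths.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amp, center, width):
    return amp * np.exp(-((wl - center) ** 2) / (2.0 * width ** 2))

def peak_wavelength(led_wavelengths_nm: np.ndarray,
                    absorbance: np.ndarray) -> float:
    """Fit a single Gaussian and return its center (nm) as the response peak."""
    p0 = [absorbance.max(), led_wavelengths_nm[np.argmax(absorbance)], 50.0]
    params, _ = curve_fit(gaussian, led_wavelengths_nm, absorbance, p0=p0)
    return float(params[1])
```

Comparing the estimated peak against a benchtop spectrometer's reading is what a 13 nm mean absolute error would refer to.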
Citations: 1
BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband
Pub Date: 2022-01-01 DOI: 10.1145/3552312
Hyunchul Lim, Yaxuan Li, Matthew Dressa, Fangwei Hu, Jae Hoon Kim, Ruidong Zhang, Cheng Zhang
In this paper, we present BodyTrak, an intelligent sensing technology that can estimate full-body poses using a miniature camera on a wristband.
Citations: 4