
Latest Publications in Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.

Shoes++: A Smart Detachable Sole for Social Foot-to-foot Interaction
Pub Date: 2022-01-01 DOI: 10.1145/3534620
Zihan Yan, Jiayi Zhou, Yufei Wu, Guanhong Liu, Danli Luo, Zi Zhou, Haipeng Mi, Lingyun Sun, Xiang 'Anthony' Chen, Yang Zhang, Guanyun Wang
Feet are the foundation of our bodies: they not only perform locomotion but also participate in intent and emotion expression. Thus, foot gestures are an intuitive and natural form of expression for interpersonal interaction. Recent studies have mostly introduced smart shoes as personal gadgets, while foot gestures used in multi-person foot interactions in social scenarios remain largely unexplored. We present Shoes++, which includes an inertial measurement unit (IMU)-mounted sole and an input vocabulary of social foot-to-foot gestures to support foot-based interaction. The gesture vocabulary is derived and condensed from a set of gestures elicited in a participatory design session with 12 users. We implement a machine learning model in Shoes++ that can recognize two-person and three-person social foot-to-foot gestures with 94.3% and 96.6% accuracy (N=18). In addition, the sole is designed to easily attach to and detach from various daily shoes, supporting comfortable social foot interaction without taking off the shoes. Based on users' qualitative feedback, we also found that Shoes++ can support team collaboration and enhance emotion expression, making social interactions and interpersonal dynamics more engaging in an expanded design space.
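The abstract does not specify the recognition model; as a rough sketch of how an IMU-instrumented sole could classify foot gestures, the following assumes windowed accelerometer/gyroscope data and a simple feature-based classifier (the window shape, features, and random-forest choice are illustrative, not the authors' pipeline):

```python
# Hypothetical IMU gesture-recognition sketch; not the Shoes++ model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 6) array of accelerometer + gyroscope axes."""
    return np.concatenate([
        window.mean(axis=0),                           # static posture component
        window.std(axis=0),                            # motion intensity
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # jerkiness
    ])

def train_gesture_model(windows, labels):
    """windows: list of IMU windows; labels: elicited gesture classes."""
    X = np.stack([extract_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```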
Citations: 5
BLEselect: Gestural IoT Device Selection via Bluetooth Angle of Arrival Estimation from Smart Glasses
Pub Date: 2022-01-01 DOI: 10.1145/3569482
Tengxiang Zhang, Zitong Lan, Chenren Xu, Yanrong Li, Yiqiang Chen
Spontaneous selection of IoT devices from a head-mounted device is key for user-centered pervasive interaction. BLEselect enables users to select an unmodified Bluetooth 5.1-compatible IoT device by nodding at it, pointing at it, or drawing a circle in the air around it. We designed a compact antenna array that fits on a pair of smart glasses to estimate the Angle of Arrival (AoA) of the advertising signals of IoT and wrist-worn devices. We then developed a sensing pipeline that supports all three selection gestures with lightweight machine learning models, which are trained in real time for both hand gestures. Extensive characterizations and evaluations show that our system is accurate, natural, low-power, and privacy-preserving. Despite the small effective size of the antenna array, our system achieves higher than 90% selection accuracy within 3 meters in front of the user. In a user study that mimics real-life usage, the overall selection accuracy is 96.7% across 22 participants diverse in age, technology savviness, and body structure.
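As background for the AoA idea, a textbook two-element phase-difference estimate is sketched below; BLEselect's actual multi-element array and Bluetooth 5.1 constant-tone-extension processing are more involved, and the element spacing here is an assumption:

```python
# Narrowband two-element AoA estimate; an illustration of the principle,
# not the BLEselect pipeline.
import numpy as np

BLE_WAVELENGTH = 0.125  # meters, ~2.4 GHz carrier

def estimate_aoa(phase_diff_rad: float, spacing_m: float = 0.0625) -> float:
    """Angle of arrival in degrees from the array broadside, given the
    phase difference between two elements spaced spacing_m apart."""
    s = phase_diff_rad * BLE_WAVELENGTH / (2 * np.pi * spacing_m)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# Example: a quarter-cycle phase lead over a half-wavelength baseline
print(estimate_aoa(np.pi / 2))  # ~30 degrees
```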
Citations: 1
HIPPO: Pervasive Hand-Grip Estimation from Everyday Interactions
Pub Date: 2022-01-01 DOI: 10.1145/3570344
Zhigang Yin, M. Liyanage, Abdul-Rasheed Ottun, Souvik Paul, Agustin Zuniga, P. Nurmi, Huber Flores
Hand-grip strength is widely used to estimate muscle strength, and it serves as a general indicator of the overall health of a person, particularly in aging adults. Hand-grip strength is typically estimated using dynamometers or specialized force-resistant pressure sensors embedded in objects. Both of these solutions require the user to interact with a dedicated measurement device, which unnecessarily restricts the contexts in which estimates can be acquired. We contribute HIPPO, a novel non-intrusive and opportunistic method for estimating hand-grip strength from everyday interactions with objects. HIPPO re-purposes light sensors available in wearables (e.g., rings or gloves) to capture changes in light reflectivity when people interact with objects. This allows HIPPO to non-intrusively piggyback on everyday interactions to gather health information without affecting the user's everyday routines. We present two prototypes integrating HIPPO: an early smart-glove proof of concept, and a further optimized solution that uses sensors integrated onto a ring. We validate HIPPO through extensive experiments and compare it against three baselines, including a clinical dynamometer. Our results show that HIPPO operates robustly across a wide range of everyday objects and participants. The force estimates correlate with those produced by pressure-based devices, and HIPPO can also determine the correct hand-grip strength category with up to 86% accuracy. Our findings also suggest that users prefer our approach to existing solutions, as HIPPO blends the estimation into everyday interactions.
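The abstract does not detail the mapping from light readings to grip force; one plausible minimal sketch is a feature-based regressor over the photodiode trace (the features and ridge regressor below are assumptions, not HIPPO's method):

```python
# Hypothetical reflectivity-to-grip-force regression sketch.
import numpy as np
from sklearn.linear_model import Ridge

def reflectivity_features(signal: np.ndarray) -> np.ndarray:
    """signal: 1-D photodiode trace recorded during an object interaction."""
    return np.array([
        signal.mean(),                # baseline reflectivity
        signal.max() - signal.min(),  # deformation-induced swing
        np.gradient(signal).std(),    # rate of reflectivity change
    ])

def train_grip_model(traces, grip_forces_newton):
    """traces: list of photodiode traces; targets from a dynamometer."""
    X = np.stack([reflectivity_features(t) for t in traces])
    return Ridge(alpha=1.0).fit(X, grip_forces_newton)
```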
Citations: 3
EarSpiro: Earphone-based Spirometry for Lung Function Assessment
Pub Date: 2022-01-01 DOI: 10.1145/3569480
Wentao Xie, Qing Hu, Jin Zhang, Qian Zhang
Spirometry is the gold standard for evaluating lung function. Recent research has proposed that mobile devices can measure lung function indices cost-efficiently. However, these designs fall short in two aspects. First, they cannot provide the flow-volume (F-V) curve, which is more informative than lung function indices. Second, these solutions lack inspiratory measurement, which is sensitive to lung diseases such as variable extrathoracic obstruction. In this paper, we present EarSpiro, an earphone-based solution that interprets the airflow sound recorded during a spirometry test into an F-V curve, including both the expiratory and inspiratory measurements. EarSpiro leverages a convolutional neural network (CNN) and a recurrent neural network (RNN) to capture the complex correlation between airflow sound and airflow speed. Meanwhile, EarSpiro adopts a clustering-based segmentation algorithm to track the weak inspiratory signals in the raw audio recording to enable inspiratory measurement. We also enable EarSpiro to work with everyday mouthpiece-like objects such as a funnel, using transfer learning and a decoder network, with the help of only a few true lung function indices from the user. Extensive experiments with 60 subjects show that EarSpiro achieves mean errors of 0.20 L/s and 0.42 L/s for expiratory and inspiratory flow rate estimation, and 0.61 L/s and 0.83 L/s for expiratory and inspiratory F-V curve estimation. The mean correlation coefficient between the estimated F-V curve and the true one is 0.94. The mean estimation error over four common lung function indices is 7.3%.
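The exact architecture is not given in the abstract; a minimal CNN-plus-RNN sketch of the sound-to-flow idea, with invented layer sizes and a mel-spectrogram front end assumed, might look like this:

```python
# Hypothetical CNN + GRU regressor from audio spectrogram to flow rate;
# the layer sizes and front end are guesses, not EarSpiro's architecture.
import torch
import torch.nn as nn

class AirflowNet(nn.Module):
    def __init__(self, n_mels: int = 64, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(           # per-frame spectral features
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # flow rate (L/s) per time step

    def forward(self, spec):                # spec: (batch, n_mels, time)
        h = self.cnn(spec).transpose(1, 2)  # -> (batch, time, 64)
        out, _ = self.rnn(h)
        return self.head(out).squeeze(-1)   # -> (batch, time) flow curve

flow = AirflowNet()(torch.randn(2, 64, 100))  # two 100-frame recordings
```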
Citations: 0
HaptiDrag: A Device with the Ability to Generate Varying Levels of Drag (Friction) Effects on Real Surfaces
Pub Date: 2022-01-01 DOI: 10.1145/3550310
Abhijeet Mishra, Piyush Kumar, Jainendra Shukla, Aman Parnami
We presently rely on mechanical approaches to leverage drag (friction) effects as haptic feedback over real surfaces for digital interaction. Unfortunately, due to their mechanical nature, such methods are inconvenient, difficult to scale, and raise object-deployment issues. Accordingly, we present HaptiDrag, a thin (1 mm) and lightweight (2 g) device that can reliably produce various intensities of on-surface drag effects through the electroadhesion phenomenon. We first performed a design evaluation to determine the minimal size of HaptiDrag (5 cm x 5 cm) that enables a drag effect. Further, with reference to eight distinct surfaces, we present the technical performance of two sizes of HaptiDrag under real environment conditions. Later, we conducted two user studies: the first to discover absolute detection threshold friction spots of varying intensities common to all surfaces under test, and the second to validate the absolute detection threshold points for noticeability with all sizes of HaptiDrag. Finally, we demonstrate the device's utility in different scenarios.
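To see why a drive voltage can modulate on-surface drag, a back-of-the-envelope parallel-plate electroadhesion model is sketched below; all constants (gap, permittivity, friction coefficient) are illustrative and not taken from the paper:

```python
# Idealized electroadhesive drag estimate; numbers are assumptions,
# not HaptiDrag's measured characteristics.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electroadhesive_drag(voltage_v, area_m2=0.05 * 0.05, gap_m=50e-6,
                         eps_r=3.0, mu=0.4):
    """Drag force in newtons: mu times the electrostatic normal force
    of an idealized dielectric-coated capacitor pad on a surface."""
    normal_force = EPS0 * eps_r * area_m2 * voltage_v**2 / (2 * gap_m**2)
    return mu * normal_force

for v in (200, 500, 1000):  # typical electroadhesion drive voltages
    print(v, "V ->", round(electroadhesive_drag(v), 3), "N drag")
```

The quadratic dependence on voltage is what allows a single pad to render varying drag intensities.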
Citations: 1
StudentSADD: Rapid Mobile Depression and Suicidal Ideation Screening of College Students during the Coronavirus Pandemic
Pub Date: 2022-01-01 DOI: 10.1145/3534604
M. L. Tlachac, Ricardo Flores, Miranda Reisch, Rimsha Kayastha, N. Taurich, V. Melican, Connor Bruneau, H. Caouette, E. Toto, E. Rundensteiner
The growing prevalence of depression and suicidal ideation among college students, further exacerbated by the Coronavirus pandemic, is alarming, highlighting the need for universal mental illness screening technology. With traditional screening questionnaires too burdensome to achieve universal screening in this population, data collected through mobile applications has the potential to rapidly identify at-risk students. While prior research has mostly focused on collecting passive smartphone modalities from students, smartphone sensors are also capable of capturing active modalities. The general public has demonstrated more willingness to share active than passive modalities through an app, yet no such dataset of active mobile modalities for mental illness screening exists for students. Knowing which active modalities hold strong screening capabilities for student populations is critical for developing targeted mental illness screening technology. Thus, we deployed a mobile application to over 300 students during the COVID-19 pandemic to collect the Student Suicidal Ideation and Depression Detection (StudentSADD) dataset. We report on a rich variety of machine learning models, including cutting-edge multimodal pretrained deep learning classifiers, on active text and voice replies to screen for depression and suicidal ideation. This unique StudentSADD dataset is a valuable resource for the community for developing mobile mental illness screening tools.
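The paper benchmarks pretrained multimodal deep models; as a much simpler stand-in that conveys the screening setup on the active text replies, a TF-IDF plus logistic-regression baseline could be evaluated like this (the data fields are hypothetical):

```python
# Minimal text-screening baseline sketch; not the paper's deep models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def text_screening_baseline(replies, labels):
    """replies: list of transcribed text responses; labels: 1 if the
    participant screened positive on a standard instrument, else 0."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    f1 = cross_val_score(model, replies, labels, cv=5, scoring="f1").mean()
    return model.fit(replies, labels), f1
```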
Citations: 4
AdaMICA: Adaptive Multicore Intermittent Computing
Pub Date: 2022-01-01 DOI: 10.1145/3550304
K. Akhunov, K. Yıldırım
Recent studies on intermittent computing target single-core processors and underestimate the efficient parallel execution of highly parallelizable machine learning tasks. Even though general-purpose multicore processors provide a high degree of parallelism and programming flexibility, intermittent computing has not exploited them yet. Filling this gap, we introduce the AdaMICA (Adaptive Multicore Intermittent Computing) runtime, which supports, for the first time, parallel intermittent computing and provides the highest degree of flexibility of programmable general-purpose multiple cores. AdaMICA is adaptive: it responds to changes in environmental power availability by dynamically reconfiguring the underlying multicore architecture to use the available power most optimally. Our results demonstrate that AdaMICA significantly increases throughput (52% on average) and decreases latency (31% on average) by dynamically scaling the underlying architecture in response to variations in the unpredictable harvested energy.
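A toy sketch of the adaptive idea is shown below: choose how many cores to power up from the currently harvested budget. The power numbers and threshold policy are invented for illustration; AdaMICA's actual runtime is far richer:

```python
# Hypothetical core-count policy for an energy-harvesting multicore.
PER_CORE_MW = 5.0   # assumed active-core power cost
BASE_MW = 2.0       # assumed uncore/baseline power cost
MAX_CORES = 4

def cores_for_budget(harvested_mw: float) -> int:
    """Largest core count whose total draw fits the harvested power."""
    usable = harvested_mw - BASE_MW
    return max(0, min(MAX_CORES, int(usable // PER_CORE_MW)))

for p in (4.0, 9.0, 15.0, 30.0):  # a fluctuating harvesting trace
    print(f"{p:5.1f} mW -> run with {cores_for_budget(p)} core(s)")
```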
Citations: 6
EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements
Pub Date: 2022-01-01 DOI: 10.1145/3534621
Ke Li, Ruidong Zhang, Bo Li, François Guimbretière, Cheng Zhang
This paper presents EarIO, an AI-powered acoustic sensing technology that allows an earable (e.g., an earphone) to continuously track facial expressions using two microphone-speaker pairs (one on each side), components widely available in commodity earphones. It emits acoustic signals from a speaker on the earable towards the face. Depending on the facial expression, the muscles, tissues, and skin around the ear deform differently, resulting in unique echo profiles in the reflected signals captured by an on-device microphone. These received acoustic signals are processed and learned by a customized deep learning pipeline to continuously infer full facial expressions, represented by 52 parameters captured using a TrueDepth camera. Compared to similar technologies, it has significantly lower power consumption, as it can sample at 86 Hz with a power signature of 154 mW. A user study with 16 participants under three different scenarios showed that EarIO can reliably estimate detailed facial movements when the participants were sitting, walking, or after remounting the device. Based on the encouraging results, we further discuss potential opportunities and challenges in applying EarIO to future ear-mounted wearables.
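The core acoustic primitive is to correlate the transmitted probe with the microphone recording to obtain an echo profile whose shape shifts as facial tissue deforms; the chirp parameters below are assumptions, not EarIO's actual transmit signal:

```python
# Matched-filter echo-profile sketch; signal design is hypothetical.
import numpy as np

FS = 48_000  # sample rate, Hz

def chirp_probe(dur_s=0.01, f0=16_000, f1=20_000):
    """Short linear chirp in the near-ultrasonic band."""
    t = np.arange(int(FS * dur_s)) / FS
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur_s)))

def echo_profile(recorded: np.ndarray, probe: np.ndarray) -> np.ndarray:
    """Cross-correlate recording with probe: peaks index the delay
    (hence path length) of each reflection from skin and tissue."""
    corr = np.correlate(recorded, probe, mode="valid")
    return np.abs(corr) / np.max(np.abs(corr))

probe = chirp_probe()
fake_echo = np.concatenate([np.zeros(100), 0.3 * probe, np.zeros(400)])
print(np.argmax(echo_profile(fake_echo, probe)))  # -> 100 (delay, samples)
```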
Citations: 14
ThumbAir: In-Air Typing for Head Mounted Displays
Pub Date: 2022-01-01 DOI: 10.1145/3569474
Hyunjae Gil, Ian Oakley
Typing while wearing a standalone Head Mounted Display (HMD), that is, a system without external input devices or sensors to support text entry, is hard. To address this issue, prior work has used external trackers to monitor finger movements and support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD-based tracking systems or are otherwise awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining, and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight target sites. The second study collects performance data for taps on pairs of these targets, both to inform the design of a target selection procedure and to support a computational design process that selects a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving final speeds of 27.1 and 13.73 WPM. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, in comparison to the larger-scale finger and hand motions used in a baseline design from prior work, lead to reduced levels of perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.
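As a simplified view of the computational design step, the sketch below greedily places frequent letters on the targets with the lowest measured tap cost; the costs, grouping, and greedy policy are illustrative, not the paper's optimization:

```python
# Hypothetical frequency-to-cost layout assignment over 8 thumb targets.
ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def assign_layout(target_costs: dict[str, float], letters_per_target: int = 4):
    """Greedily fill the cheapest targets with the most frequent letters
    (the costliest targets may end up with fewer letters)."""
    targets = sorted(target_costs, key=target_costs.get)  # cheapest first
    layout = {t: [] for t in targets}
    for i, letter in enumerate(ENGLISH_FREQ_ORDER):
        layout[targets[i // letters_per_target]].append(letter)
    return layout

costs = {"T1": 0.21, "T2": 0.24, "T3": 0.26, "T4": 0.29,
         "T5": 0.31, "T6": 0.33, "T7": 0.36, "T8": 0.40}  # hypothetical taps
print(assign_layout(costs))
```

A real computational design process would instead minimize expected entry time over bigram statistics, but the greedy version conveys the cost-driven idea.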
Citations: 1
Lumos: An Open-Source Device for Wearable Spectroscopy Research
Pub Date: 2022-01-01 DOI: 10.1145/3569502
Amanda Watson, Claire Kendell, Anush Lingamoorthy, Insup Lee, James Weimer
Spectroscopy, the study of the interaction between electromagnetic radiation and matter, is a vital technique in many disciplines. The technique has been limited to lab settings, and, as such, sensing is isolated and infrequent; it can only provide a brief snapshot of the monitored parameter. Wearable technology brings sensing and tracking out into everyday life, creating longitudinal datasets that provide more insight into the monitored parameter. In this paper, we describe Lumos, an open-source device for wearable spectroscopy research. Lumos can facilitate on-body spectroscopy research in health monitoring, athletics, rehabilitation, and more. We developed an algorithm that determines the spectral response of a medium with a mean absolute error of 13 nm. From this, researchers can determine the optimal spectrum and create customized sensors for their target application. We show the utility of Lumos in a pilot study on sensing prediabetes, where we determine the relevant spectrum for glucose and create and evaluate a targeted tracking device.
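One way such a device could recover a medium's spectral response is least-squares unmixing of readings taken under several known LED spectra; the sketch below is an assumption-laden illustration, not Lumos's published algorithm:

```python
# Hypothetical spectral-response recovery by least squares.
import numpy as np

def estimate_response(led_spectra: np.ndarray, readings: np.ndarray):
    """led_spectra: (n_leds, n_wavelengths) emission of each source;
    readings: (n_leds,) photodiode value through the medium per source.
    Returns a per-wavelength transmission estimate."""
    response, *_ = np.linalg.lstsq(led_spectra, readings, rcond=None)
    return response

wavelengths = np.arange(400, 701, 10)           # nm grid
spectra = np.random.rand(40, wavelengths.size)  # 40 hypothetical LEDs
true = np.exp(-((wavelengths - 550) / 60.0) ** 2)
est = estimate_response(spectra, spectra @ true)
print(wavelengths[np.argmax(est)])              # peak near 550 nm
```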
Citations: 1