
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

Laser-Powered Vibrotactile Rendering
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631449
Yuning Su, Yuhua Jin, Zhengqing Wang, Yonghao Shi, Da-Yuan Huang, Teng Han, Xing-Dong Yang
We investigate the feasibility of a vibrotactile device that is both battery-free and electronic-free. Our approach leverages lasers as a wireless power transfer and haptic control mechanism, which can drive small actuators commonly used in AR/VR and mobile applications with DC or AC signals. To validate the feasibility of our method, we developed a proof-of-concept prototype that includes low-cost eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs) connected to photovoltaic (PV) cells. This prototype enabled us to capture laser energy from any distance across a room and analyze the impact of critical parameters on the effectiveness of our approach. Through a user study, testing 16 different vibration patterns rendered using either a single motor or two motors, we demonstrate the effectiveness of our approach in generating vibration patterns of comparable quality to a baseline, which rendered the patterns using a signal generator.
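To make the power-transfer idea above concrete, here is a minimal illustrative sketch (not the authors' implementation) of how a vibration pattern could be turned into a laser-power waveform that drives an LRA through a PV cell; the sample rate, resonant frequency, and PV gain are assumed values.

```python
import numpy as np

# Minimal illustrative sketch (not the authors' implementation): an AC drive
# for an LRA is approximated by modulating laser intensity so that the PV
# cell's output voltage follows a sinusoid at the actuator's resonant
# frequency. FS, LRA_RESONANCE_HZ, and PV_GAIN_V_PER_W are assumed values.
FS = 10_000             # sample rate of the drive waveform (Hz)
LRA_RESONANCE_HZ = 170  # assumed LRA resonant frequency
PV_GAIN_V_PER_W = 2.0   # assumed PV conversion: volts out per watt of laser power

def laser_power_for_pattern(envelope, duration_s):
    """Map a 0..1 vibration-strength envelope to a laser power waveform (W)."""
    t = np.linspace(0.0, duration_s, int(FS * duration_s), endpoint=False)
    # Resample the coarse envelope onto the waveform's time base.
    env = np.interp(t, np.linspace(0.0, duration_s, len(envelope)), envelope)
    # AC drive: a resonant sinusoid amplitude-modulated by the envelope,
    # shifted and scaled so the commanded laser power is never negative.
    ac = env * np.sin(2.0 * np.pi * LRA_RESONANCE_HZ * t)
    target_voltage = 1.5 * (1.0 + ac) / 2.0      # drive target in the 0..1.5 V range
    return target_voltage / PV_GAIN_V_PER_W      # laser power to command (W)

# Example: a short two-pulse vibration pattern lasting 0.5 s.
pattern = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
power = laser_power_for_pattern(pattern, duration_s=0.5)
print(len(power), "samples, peak laser power:", round(float(power.max()), 2), "W")
```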
Citations: 0
Effects of Uncertain Trajectory Prediction Visualization in Highly Automated Vehicles on Trust, Situation Awareness, and Cognitive Load
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631408
Mark Colley, Oliver Speidel, Jan Strohbeck, J. Rixen, Janina Belz, Enrico Rukzio
Automated vehicles are expected to improve safety, mobility, and inclusion. User acceptance is required for the successful introduction of this technology. One essential prerequisite for acceptance is appropriately trusting the vehicle's capabilities. System transparency via visualizing internal information could calibrate this trust by enabling the surveillance of the vehicle's detection and prediction capabilities, including its failures. Additionally, concurrently increased situation awareness could improve take-overs in case of emergency. This work reports the results of two online comparative video-based studies on visualizing prediction and maneuver-planning information. Effects on trust, cognitive load, and situation awareness were measured using a simulation (N=280) and state-of-the-art road user prediction and maneuver planning on a pre-recorded real-world video using a real prototype (N=238). Results show that color conveys uncertainty best, that the planned trajectory increased trust, and that the visualization of other predicted trajectories improved perceived safety.
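As a rough illustration of the paper's main visualization finding (that color conveys uncertainty best), the sketch below maps per-point trajectory uncertainty to a colormap; the colormap choice and normalization range are assumptions, not the study's actual design.

```python
import numpy as np
from matplotlib import cm

# Illustrative mapping from per-point trajectory uncertainty (here a standard
# deviation in meters) to color. The viridis colormap and the normalization
# ceiling sigma_max are assumptions, not the visualization used in the study.
def uncertainty_to_rgba(sigma, sigma_max=2.0):
    """Map per-point standard deviations (m) to RGBA colors."""
    norm = np.clip(np.asarray(sigma, dtype=float) / sigma_max, 0.0, 1.0)
    return cm.viridis(norm)   # low uncertainty -> dark, high uncertainty -> bright

trajectory_sigma = [0.1, 0.3, 0.7, 1.2, 1.9]   # uncertainty grows with the horizon
for s, rgba in zip(trajectory_sigma, uncertainty_to_rgba(trajectory_sigma)):
    print(f"sigma={s:.1f} m -> rgba=({rgba[0]:.2f}, {rgba[1]:.2f}, {rgba[2]:.2f}, {rgba[3]:.2f})")
```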
Citations: 0
Bias Mitigation in Federated Learning for Edge Computing
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631455
Yasmine Djebrouni, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova, V. Schiavoni
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
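The abstract describes constrained aggregation-weight selection at a high level; the toy sketch below illustrates that idea with a grid search over convex client weights that maximizes an accuracy proxy subject to a bias threshold. The linear proxies and client statistics are illustrative assumptions, not Astral's actual optimizer.

```python
import itertools
import numpy as np

# Toy sketch of the aggregation idea described in the abstract: choose client
# aggregation weights that keep a bias metric under a threshold while keeping
# accuracy as high as possible. The grid search and the per-client statistics
# below are illustrative assumptions, not the paper's optimizer.
def select_weights(client_acc, client_bias, bias_threshold, step=0.1):
    """Grid-search convex weights over clients: max accuracy s.t. bias <= threshold."""
    n = len(client_acc)
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    best_w, best_acc = None, -1.0
    for combo in itertools.product(grid, repeat=n):
        if abs(sum(combo) - 1.0) > 1e-9:
            continue                               # keep only convex combinations
        acc = float(np.dot(combo, client_acc))     # linear proxy for global accuracy
        bias = float(np.dot(combo, client_bias))   # linear proxy for global bias
        if bias <= bias_threshold and acc > best_acc:
            best_w, best_acc = combo, acc
    return best_w, best_acc

# Example: three clients with different estimated accuracy/bias contributions.
weights, acc = select_weights(client_acc=[0.80, 0.86, 0.91],
                              client_bias=[0.02, 0.08, 0.15],
                              bias_threshold=0.06)
print("weights:", weights, "expected accuracy:", round(acc, 3))
```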
Citations: 0
Semantic Loss
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631407
Luca Arrotta, Gabriele Civitarese, Claudio Bettini
Deep Learning models are a standard solution for sensor-based Human Activity Recognition (HAR), but their deployment is often limited by labeled data scarcity and models' opacity. Neuro-Symbolic AI (NeSy) provides an interesting research direction to mitigate these issues by infusing knowledge about context information into HAR deep learning classifiers. However, existing NeSy methods for context-aware HAR require computationally expensive symbolic reasoners during classification, making them less suitable for deployment on resource-constrained devices (e.g., mobile devices). Additionally, NeSy approaches for context-aware HAR have never been evaluated on in-the-wild datasets, and their generalization capabilities in real-world scenarios are questionable. In this work, we propose a novel approach based on a semantic loss function that infuses knowledge constraints in the HAR model during the training phase, avoiding symbolic reasoning during classification. Our results on scripted and in-the-wild datasets show the impact of different semantic loss functions in outperforming a purely data-driven model. We also compare our solution with existing NeSy methods and analyze each approach's strengths and weaknesses. Our semantic loss remains the only NeSy solution that can be deployed as a single DNN without the need for symbolic reasoning modules, reaching recognition rates close (and better in some cases) to existing approaches.
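A minimal sketch of what a semantic loss of this flavor could look like: a cross-entropy term plus a penalty on probability mass assigned to activities that a context-compatibility matrix marks as impossible. The matrix, weighting, and shapes are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch (assumptions, not the paper's exact formulation): add a
# penalty to the standard cross-entropy that discourages probability mass on
# activities incompatible with the known context (e.g. "cycling" while the
# location context is "indoors"). `compatibility` is a hypothetical 0/1 matrix
# of shape [n_contexts, n_activities].
def semantic_loss(logits, labels, context_ids, compatibility, weight=1.0):
    probs = F.softmax(logits, dim=-1)                  # [batch, n_activities]
    allowed = compatibility[context_ids]               # [batch, n_activities]
    violation = (probs * (1.0 - allowed)).sum(dim=-1)  # mass on impossible activities
    ce = F.cross_entropy(logits, labels)
    return ce + weight * violation.mean()

# Example with 4 activities and 2 contexts.
compatibility = torch.tensor([[1., 1., 0., 1.],   # context 0 forbids activity 2
                              [1., 0., 1., 1.]])  # context 1 forbids activity 1
logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
contexts = torch.randint(0, 2, (8,))
print(semantic_loss(logits, labels, contexts, compatibility).item())
```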
Citations: 0
HyperTracking
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631434
Xiaoqiang Xu, Xuanqi Meng, Xinyu Tong, Xiulong Liu, Xin Xie, Wenyu Qu
Wireless sensing technology allows for non-intrusive sensing without the need for physical sensors worn by the target, enabling a wide range of applications, such as indoor tracking and activity identification. To theoretically reveal the fundamental principles of wireless sensing, the Fresnel zone model has been introduced in the field of Wi-Fi sensing. While the Fresnel zone model is effective in explaining the sensing mechanism in line-of-sight (LoS) scenarios, achieving accurate sensing in non-line-of-sight (NLoS) situations continues to pose a significant challenge. In this paper, we propose a novel theoretical model called the Hyperbolic zone to reveal the fundamental sensing mechanism in NLoS scenarios. The main principle is to eliminate the complex NLoS path shared among different transmitter-receiver pairs, which allows us to obtain a series of simple "virtual" reflection paths among receivers. Since these "virtual" reflection paths satisfy the properties of the hyperbola, we propose the hyperbolic tracking model. Based on the proposed model, we implement the HyperTracking system using commercial Wi-Fi devices. The experimental results show that the proposed hyperbolic model is suitable for accurate tracking in both LoS and NLoS scenarios. Our method reduces the tracking error by 0.36 m compared with the Fresnel zone model in NLoS scenarios. When we utilize the proposed hyperbolic model to train a typical LSTM neural network, we are able to further reduce the tracking error by 0.13 m and save the execution time by 281% with the same data. As a whole, our method can reduce the tracking error by 54% in NLoS scenarios compared with the Fresnel zone model.
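The geometric core of the hyperbolic model can be illustrated with a short sketch: the difference of path lengths seen at two receivers removes their shared segment, and points with a constant range difference to two receivers lie on a hyperbola, so intersecting several such hyperbolas localizes the target. The receiver layout, target, and grid search below are illustrative stand-ins for the paper's pipeline.

```python
import numpy as np

# Geometric sketch of the idea in the abstract: subtracting the reflection-path
# lengths observed at two receivers cancels the segment they share, leaving a
# range difference to the two receivers. The set of points with a constant
# range difference to two fixed points is a hyperbola with those points as
# foci, so intersecting the hyperbolas of several receiver pairs localizes the
# target. Receiver positions, the target, and the grid are illustrative values.
rx = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # receiver positions (m)
target = np.array([1.2, 2.1])                          # ground-truth position (m)

def range_diff(p, a, b):
    """Difference of distances from point p to receivers a and b."""
    return np.linalg.norm(p - a) - np.linalg.norm(p - b)

pairs = [(0, 1), (0, 2), (1, 2)]
measured = [range_diff(target, rx[a], rx[b]) for a, b in pairs]   # "observed" differences

# Grid search for the point whose range differences best match the measurements.
xs = ys = np.linspace(0.0, 3.0, 151)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        p = np.array([x, y])
        err = sum((range_diff(p, rx[a], rx[b]) - m) ** 2
                  for (a, b), m in zip(pairs, measured))
        if err < best_err:
            best, best_err = p, err
print("estimated position:", best)   # recovers the target up to the grid resolution
```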
Citations: 0
ToothFairy
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631412
Yang Wang, Feng Hong, Yufei Jiang, Chenyu Bao, Chao Liu, Zhongwen Guo
Tooth brushing monitors have the potential to enhance oral hygiene and encourage the development of healthy brushing habits. However, previous studies fall short of recognizing each tooth due to limitations in external sensors and variations among users. To address these challenges, we present ToothFairy, a real-time tooth-by-tooth brushing monitor that uses earphone reverse signals captured within the oral cavity to identify each tooth during brushing. The key component of ToothFairy is a novel bone-conducted acoustic attenuation model, which quantifies sound propagation within the oral cavity. This model eliminates the need for machine learning and can be calibrated with just one second of brushing data for each tooth by a new user. ToothFairy also addresses practical issues such as brushing detection and tooth region determination. Results from extensive experiments, involving 10 volunteers and 25 combinations of five commercial off-the-shelf toothbrush models and five earphone models, show that ToothFairy achieves tooth recognition with an average accuracy of 90.5%.
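A minimal calibrate-then-match sketch consistent with the description above (one second of data per tooth, no machine learning): each tooth gets a spectral template from its calibration clip, and live frames are assigned to the nearest template. The band-energy feature and distance metric are assumptions; the paper's actual attenuation model is not reproduced here.

```python
import numpy as np

# Illustrative calibration-then-match workflow consistent with the abstract
# (one second of brushing per tooth, no machine learning): each tooth gets a
# spectral "attenuation template" from its calibration second, and live frames
# are matched to the nearest template. The band-energy feature and Euclidean
# distance are assumptions for illustration only.
FS = 16_000  # assumed in-ear microphone sample rate (Hz)

def band_energies(frame, n_bands=16):
    """Log energy in n_bands equal-width frequency bands of one audio frame."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-9)

def calibrate(calib_clips):
    """calib_clips: {tooth_id: 1-second ndarray} -> {tooth_id: template vector}."""
    return {tooth: band_energies(clip) for tooth, clip in calib_clips.items()}

def identify_tooth(frame, templates):
    feat = band_energies(frame)
    return min(templates, key=lambda t: np.linalg.norm(feat - templates[t]))

# Example with synthetic clips standing in for real in-ear recordings.
rng = np.random.default_rng(0)
calib = {f"tooth_{i}": rng.normal(scale=1.0 + 0.1 * i, size=FS) for i in range(4)}
templates = calibrate(calib)
live_frame = rng.normal(scale=1.32, size=FS)   # closest to tooth_3's template
print(identify_tooth(live_frame, templates))
```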
Citations: 0
DIPA2
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631439
Anran Xu, Zhongyi Zhou, Kakeru Miyazaki, Ryo Yoshikawa, S. Hosio, Koji Yatani
The world today is increasingly visual. Many of the most popular online social networking services are largely powered by images, making image privacy protection a critical research topic in the fields of ubiquitous computing, usable security, and human-computer interaction (HCI). One topical issue is understanding privacy-threatening content in images that are shared online. This dataset article introduces DIPA2, an open-sourced image dataset that offers object-level annotations with high-level reasoning properties to show perceptions of privacy among different cultures. DIPA2 provides 5,897 annotations describing perceived privacy risks of 3,347 objects in 1,304 images. The annotations contain the type of the object and four additional privacy metrics: 1) information type indicating what kind of information may leak if the image containing the object is shared, 2) a 7-point Likert item estimating the perceived severity of privacy leakages, and 3) intended recipient scopes when annotators assume they are either image owners or allowing others to repost the image. Our dataset contains unique data from two cultures: We recruited annotators from both Japan and the U.K. to demonstrate the impact of culture on object-level privacy perceptions. In this paper, we first illustrate how we designed and performed the construction of DIPA2, along with data analysis of the collected annotations. Second, we provide two machine-learning baselines to demonstrate how DIPA2 challenges the current image privacy recognition task. DIPA2 facilitates various types of research on image privacy, including machine learning methods inferring privacy threats in complex scenarios, quantitative analysis of cultural influences on privacy preferences, understanding of image sharing behaviors, and promotion of cyber hygiene for general user populations.
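To make the annotation structure easier to picture, the sketch below defines a hypothetical record type mirroring the fields described above (object type, leaked-information type, 7-point severity, recipient scope, annotator culture); the field names and example values are illustrative and may not match the released schema.

```python
from dataclasses import dataclass
from typing import List

# A hypothetical record structure mirroring the annotation fields described in
# the abstract. Field names and example values are illustrative; the released
# dataset's actual schema may differ.
@dataclass
class PrivacyAnnotation:
    image_id: str
    object_type: str              # e.g. "face", "license plate"
    information_types: List[str]  # what could leak if the image is shared
    severity: int                 # 7-point Likert item, 1 (minor) .. 7 (severe)
    recipient_scope: str          # e.g. "family", "colleagues", "public"
    annotator_culture: str        # "JP" or "UK" for DIPA2's two annotator pools

annotations = [
    PrivacyAnnotation("img_0001", "face", ["identity"], 6, "family", "JP"),
    PrivacyAnnotation("img_0001", "laptop screen", ["work content"], 4, "colleagues", "UK"),
]

# Simple aggregate: mean perceived severity per annotator culture.
by_culture = {}
for a in annotations:
    by_culture.setdefault(a.annotator_culture, []).append(a.severity)
print({c: sum(v) / len(v) for c, v in by_culture.items()})
```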
Citations: 0
Unobtrusive Air Leakage Estimation for Earables with In-ear Microphones
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631405
B. U. Demirel, Ting Dang, Khaldoon Al-Naimi, F. Kawsar, A. Montanari
Earables (in-ear wearables) are gaining increasing attention for sensing applications and healthcare research thanks to their ergonomy and non-invasive nature. However, air leakages between the device and the user's ear, resulting from daily activities or wearing variabilities, can decrease the performance of applications, interfere with calibrations, and reduce the robustness of the overall system. Existing literature lacks established methods for estimating the degree of air leaks (i.e., seal integrity) to provide information for earable applications. In this work, we propose a novel unobtrusive method for estimating the air leakage level of earbuds based on an in-ear microphone. The proposed method aims to estimate the magnitude of distortions, reflections, and external noise in the ear canal while excluding the speaker output by learning the speaker-to-microphone transfer function, which allows us to perform the task unobtrusively. Using the obtained residual signal in the ear canal, we extract three features and deploy a machine-learning model for estimating the air leakage level. We investigated our system under various conditions to validate its robustness and resilience against motion and other artefacts. Our extensive experimental evaluation shows that the proposed method can track air leakage levels under different daily activities. "The best computer is a quiet, invisible servant." ~Mark Weiser
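A sketch of the pipeline the abstract outlines, with every signal detail assumed: estimate the speaker-to-microphone transfer function from the known playback, subtract the predicted speaker component to obtain a residual, and summarize the residual with scalar features that a regressor could map to a leakage level.

```python
import numpy as np

# Sketch of the described pipeline with assumed signal details: (1) estimate
# the speaker-to-in-ear-microphone transfer function H from the known playback,
# (2) subtract the speaker component predicted through H to obtain a residual,
# (3) summarize the residual with a few scalar features that a regressor could
# map to an air-leakage level.
FS = 16_000
FRAME = 1024
rng = np.random.default_rng(1)

played = rng.normal(size=FS)                       # known speaker signal (1 s)
h_true = np.array([0.6, 0.25, 0.1])                # unknown ear-canal response
leak = 0.2 * rng.normal(size=FS)                   # sound let in by a poor seal
recorded = np.convolve(played, h_true)[:FS] + leak

# (1) Frame-averaged transfer-function estimate (cross-spectrum over power
# spectrum), so H captures the stable ear response rather than the noise.
n_frames = FS // FRAME
num = np.zeros(FRAME // 2 + 1, dtype=complex)
den = np.zeros(FRAME // 2 + 1)
for i in range(n_frames):
    p = np.fft.rfft(played[i * FRAME:(i + 1) * FRAME])
    r = np.fft.rfft(recorded[i * FRAME:(i + 1) * FRAME])
    num += r * np.conj(p)
    den += np.abs(p) ** 2
H = num / (den + 1e-9)

# (2) Residual = recording minus the predicted speaker component, frame by frame.
residual = np.zeros(n_frames * FRAME)
for i in range(n_frames):
    p = np.fft.rfft(played[i * FRAME:(i + 1) * FRAME])
    predicted = np.fft.irfft(H * p, n=FRAME)
    residual[i * FRAME:(i + 1) * FRAME] = recorded[i * FRAME:(i + 1) * FRAME] - predicted

# (3) Illustrative features for a leakage-level regressor.
rec = recorded[:n_frames * FRAME]
features = {
    "residual_rms": float(np.sqrt(np.mean(residual ** 2))),
    "residual_energy_ratio": float(np.sum(residual ** 2) / np.sum(rec ** 2)),
}
print(features)
```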
Citations: 0
KeyStub
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631442
John Nolan, Kun Qian, Xinyu Zhang
The proliferation of the Internet of Things is calling for new modalities that enable human interaction with smart objects. Recent research has explored RFID tags as passive sensors to detect finger touch. However, existing approaches either rely on custom-built RFID readers or are limited to pre-trained finger-swiping gestures. In this paper, we introduce KeyStub, which can discriminate multiple discrete keystrokes on an RFID tag. KeyStub interfaces with commodity RFID ICs with multiple microwave-band resonant stubs as keys. Each stub's geometry is designed to create a predefined impedance mismatch to the RFID IC upon a keystroke, which in turn translates into a known amplitude and phase shift, remotely detectable by an RFID reader. KeyStub combines two ICs' signals through a single common-mode antenna and performs differential detection to evade the need for calibration and ensure reliability in heavy multi-path environments. Our experiments using a commercial-off-the-shelf RFID reader and ICs show that up to 8 buttons can be detected and decoded with accuracy greater than 95%. KeyStub points towards a novel way of using resonant stubs to augment RF antenna structures, thus enabling new passive wireless interaction modalities.
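The differential-detection idea can be illustrated with a small sketch: each pressed stub is assumed to produce a characteristic amplitude/phase shift at the reader, and subtracting the two ICs' readings cancels common-mode channel effects before nearest-signature matching. The signature table and noise levels are made-up values for illustration.

```python
import numpy as np

# Sketch of the decoding idea described in the abstract: each resonant stub,
# when pressed, detunes the tag and produces a characteristic amplitude/phase
# shift at the reader; taking the difference between two ICs' readings removes
# common-mode channel effects. The signature table and noise levels below are
# illustrative assumptions, not measured values.
rng = np.random.default_rng(2)

# Hypothetical differential (amplitude_shift_dB, phase_shift_rad) per button.
signatures = {
    "button_1": np.array([1.0, 0.2]),
    "button_2": np.array([2.5, 0.6]),
    "button_3": np.array([0.5, 1.1]),
    "button_4": np.array([3.0, 1.5]),
}

def decode(ic_a_reading, ic_b_reading):
    """Differential detection: classify the keystroke from the reading difference."""
    diff = np.asarray(ic_a_reading) - np.asarray(ic_b_reading)
    return min(signatures, key=lambda b: np.linalg.norm(diff - signatures[b]))

# Simulated keystroke on button_3 with a common-mode channel offset plus noise.
common_mode = np.array([4.0, -0.8])            # cancels out in the difference
ic_a = signatures["button_3"] + common_mode + 0.05 * rng.normal(size=2)
ic_b = common_mode + 0.05 * rng.normal(size=2)
print(decode(ic_a, ic_b))   # expected: button_3
```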
Citations: 0
BodyTouch
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631426
Wen-Wei Cheng, Liwei Chan
This paper presents a study on the touch precision of an eye-free, body-based interface using on-body and near-body touch methods with and without skin contact. We evaluate user touch accuracy on four different button layouts. These layouts progressively increase the number of buttons between adjacent body joints, resulting in 12, 20, 28, and 36 touch buttons distributed across the body. Our study indicates that the on-body method achieved an accuracy beyond 95% for the 12- and 20-button layouts, whereas the near-body method only for the 12-button layout. Investigating user touch patterns, we applied SVM classifiers, which boost both the on-body and near-body methods to support up to the 28-button layouts by learning individual touch patterns. However, using generalized touch patterns did not significantly improve accuracy for more complex layouts, highlighting considerable differences in individual touch habits. When evaluating user experience metrics such as workload perception, confidence, convenience, and willingness-to-use, users consistently favored the 20-button layout regardless of the touch technique used. Remarkably, the 20-button layout, when applied to on-body touch methods, does not necessitate personal touch patterns, showcasing an optimal balance of practicality, effectiveness, and user experience without the need for trained models. In contrast, the near-body touch targeting the 20-button layout needs a personalized model; otherwise, the 12-button layout offers the best immediate practicality.
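The personalization step mentioned above (an SVM trained on an individual's touch samples) can be sketched as follows; the 2-D touch features and synthetic calibration data are stand-ins for whatever features the study actually used.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the personalization step the abstract mentions: fit an SVM on one
# user's touch samples (feature vectors per touch, labeled with the intended
# button) and use it to disambiguate touches on a dense layout. The 2-D "touch
# position" features and the synthetic data are illustrative assumptions.
rng = np.random.default_rng(3)
n_buttons, samples_per_button = 28, 20

# Synthetic per-user calibration data: each button is a noisy cluster.
centers = rng.uniform(0, 10, size=(n_buttons, 2))
X = np.vstack([c + 0.3 * rng.normal(size=(samples_per_button, 2)) for c in centers])
y = np.repeat(np.arange(n_buttons), samples_per_button)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

# Classify a new touch near button 5's center.
new_touch = centers[5] + 0.2 * rng.normal(size=2)
print("predicted button:", clf.predict([new_touch])[0])
```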
Citations: 0