
Latest publications: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

SweatSkin
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631425
Chi-Jung Lee, David Yang, P. Ku, Hsin-Liu (Cindy) Kao
Sweat sensing affords monitoring of essential bio-signals tailored for various well-being inspections. We present SweatSkin, a fabrication approach for customizable sweat-sensing on-skin interfaces. SweatSkin is unique in exploiting on-skin microfluidic channels to access bio-fluid secretions within the skin for personalized health monitoring. To lower the barrier to creating skin-conformable microfluidics capable of collecting and analyzing sweat, four fabrication methods utilizing accessible materials are proposed. Technical characterizations of paper- and polymer-based devices indicate that colorimetric analysis can effectively visualize sweat loss, chloride, glucose, and pH values. To support general to extreme sweating scenarios, we consulted five athletic experts on the SweatSkin devices' customization guidelines, application potential, and envisioned usages. A two-session fabrication workshop study with ten participants verified that the four fabrication methods are easy to learn and carry out. Overall, SweatSkin is an extensible and user-friendly platform for designing and creating customizable on-skin sweat-sensing interfaces for UbiComp and HCI, affording ubiquitous personalized health sensing.
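The colorimetric readout described in the abstract amounts to a calibration-curve lookup: compare the color of an assay spot against a series of reference colors with known concentrations. The sketch below is a generic illustration, not SweatSkin's analysis code; the calibration colors and concentration values are hypothetical placeholders.

```python
import numpy as np

def concentration_from_color(rgb, calib_rgbs, calib_concs):
    """Estimate an analyte concentration from the RGB color of a
    colorimetric assay spot by inverse-distance interpolation between the
    two closest calibration references. Illustrative only; a real assay
    needs its own measured calibration curve."""
    rgb = np.asarray(rgb, dtype=float)
    calib = np.asarray(calib_rgbs, dtype=float)
    dists = np.linalg.norm(calib - rgb, axis=1)   # color distance per reference
    order = np.argsort(dists)[:2]                 # two closest references
    d0, d1 = dists[order]
    if d0 + d1 == 0:                              # duplicate calibration colors
        return float(calib_concs[order[0]])
    w0 = d1 / (d0 + d1)                           # inverse-distance weight
    return float(w0 * calib_concs[order[0]] + (1 - w0) * calib_concs[order[1]])
```

An exact match with a calibration color returns that reference's concentration; colors between references interpolate between the two nearest.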
Citations: 0
Driver Maneuver Interaction Identification with Anomaly-Aware Federated Learning on Heterogeneous Feature Representations
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631421
Mahan Tabatabaie, Suining He
Driver maneuver interaction learning (DMIL) refers to the classification task of identifying different driver-vehicle maneuver interactions (e.g., left/right turns). Existing studies have largely focused on centralized collection of sensor data from drivers' smartphones (e.g., inertial measurement units, or IMUs, comprising the accelerometer and gyroscope). Such a centralized mechanism may be precluded by data regulatory constraints. Furthermore, enabling an adaptive and accurate DMIL framework remains challenging due to (i) the complexity of heterogeneous driver maneuver patterns, and (ii) the impacts of anomalous driver maneuvers caused by, for instance, aggressive driving styles and behaviors. To overcome the above challenges, we propose AF-DMIL, an Anomaly-aware Federated Driver Maneuver Interaction Learning system. We focus on real-world IMU sensor datasets (e.g., collected by smartphones) for our pilot case study. In particular, we have designed three heterogeneous representations for AF-DMIL covering spectral, time-series, and statistical features derived from the IMU sensor readings. We have designed a novel heterogeneous representation attention network (HetRANet) based on spectral channel attention, temporal sequence attention, and statistical feature learning mechanisms, jointly capturing and identifying the complex patterns within driver maneuver behaviors. Furthermore, we have designed a densely-connected convolutional neural network in HetRANet to enable complex feature extraction and enhance HetRANet's computational efficiency. In addition, we have designed within AF-DMIL a novel anomaly-aware federated learning approach for decentralized DMIL in response to anomalous maneuver data.
To ease extraction of the maneuver patterns and evaluation of their mutual differences, we have designed an embedding projection network that projects the high-dimensional driver maneuver features into low-dimensional space, and further derives the exemplars that represent the driver maneuver patterns for mutual comparison. Then, AF-DMIL further leverages the mutual differences of the exemplars to identify those that exhibit anomalous patterns and deviate from others, and mitigates their impacts upon the federated DMIL. We have conducted extensive driver data analytics and experimental studies on three real-world datasets (one is harvested on our own) to evaluate the prototype of AF-DMIL, demonstrating AF-DMIL's accuracy and effectiveness compared to the state-of-the-art DMIL baselines (on average by more than 13% improvement in terms of DMIL accuracy), as well as fewer communication rounds (on average 29.20% fewer than existing distributed learning mechanisms).
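The general idea of anomaly-aware federated aggregation, limiting the influence of clients whose updates deviate strongly from the rest, can be sketched in a few lines. This is an illustrative robust-aggregation stand-in, not AF-DMIL's exemplar-projection method; the function name and the z-score threshold are hypothetical.

```python
import numpy as np

def anomaly_aware_aggregate(client_updates, z_thresh=2.5):
    """Aggregate flattened client model updates, excluding anomalous clients.

    A client is flagged anomalous when its distance to the element-wise
    median update exceeds `z_thresh` robust z-scores (scaled median absolute
    deviation). Illustrative only: AF-DMIL's exemplar projection and
    mitigation strategy are more involved than this sketch.
    """
    W = np.stack(client_updates)                  # (n_clients, n_params)
    center = np.median(W, axis=0)                 # robust central update
    dists = np.linalg.norm(W - center, axis=1)    # deviation per client
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12  # robust spread estimate
    z = (dists - med) / (1.4826 * mad)            # robust z-scores
    keep = z <= z_thresh                          # drop clear outliers
    return W[keep].mean(axis=0)                   # average the inliers
```

With three benign clients near the same update and one adversarial outlier, the aggregate stays close to the benign consensus.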
Citations: 0
SurfShare
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631418
Xincheng Huang, Robert Xiao
Shared Mixed Reality experiences allow two co-located users to collaborate on both physical and digital tasks with familiar social protocols. However, extending the same to remote collaboration is limited by cumbersome setups for aligning distinct physical environments and the lack of access to remote physical artifacts. We present SurfShare, a general-purpose symmetric remote collaboration system with mixed-reality head-mounted displays (HMDs). Our system shares a spatially consistent physical-virtual workspace between two remote users, anchored on a physical plane in each environment (e.g., a desk or wall). The video feed of each user's physical surface is overlaid virtually on the other side, creating a shared view of the physical space. We integrate the physical and virtual workspace through virtual replication. Users can transmute physical objects to the virtual space as virtual replicas. Our system is lightweight, implemented using only the capabilities of the headset, without requiring any modifications to the environment (e.g. cameras or motion tracking hardware). We discuss the design, implementation, and interaction capabilities of our prototype, and demonstrate the utility of SurfShare through four example applications. In a user experiment with a comprehensive prototyping task, we found that SurfShare provides a physical-virtual workspace that supports low-fi prototyping with flexible proxemics and fluid collaboration dynamics.
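The plane-anchored workspace sharing can be illustrated with a simple coordinate mapping: express a point relative to the local anchor plane's pose, then re-apply the remote anchor's pose. The code below is a hypothetical 2-D simplification of that idea (the actual system works with 3-D HMD poses), purely for illustration.

```python
import numpy as np

def make_anchor_transform(local_origin, local_angle, remote_origin, remote_angle):
    """Map 2-D points on the local anchor plane (e.g., a desk) into the
    remote anchor's frame. Hypothetical 2-D simplification of keeping a
    plane-anchored shared workspace spatially consistent across sites."""
    local_origin = np.asarray(local_origin, dtype=float)
    remote_origin = np.asarray(remote_origin, dtype=float)

    def rot(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s], [s, c]])

    def transform(p):
        # Undo the local anchor's pose, then apply the remote anchor's pose.
        rel = rot(-local_angle) @ (np.asarray(p, dtype=float) - local_origin)
        return rot(remote_angle) @ rel + remote_origin

    return transform
```

When both anchors share the same pose, the mapping reduces to the identity, i.e., the two workspaces coincide.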
Citations: 0
Wi-Painter
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3633809
Dawei Yan, Panlong Yang, Fei Shang, Weiwei Jiang, Xiang-Yang Li
WiFi has gradually developed into one of the main candidate technologies for indoor environment sensing. In this paper, we are interested in using COTS WiFi devices to identify material details, including the location, material type, and shape, of stationary objects in the surrounding environment, which may open up new opportunities for many applications. Specifically, we present Wi-Painter, a model-driven system that accurately detects smooth-surfaced material types and their edges using unmodified COTS WiFi devices. Different from previous approaches to material identification, Wi-Painter subdivides the target into individual 2D pixels and simultaneously forms a 2D image by identifying the material type of each pixel. The key idea of Wi-Painter is to exploit the complex permittivity of the object surface, which can be estimated from the differing reflectivity of signals with different polarization directions. In particular, we construct a multi-incident-angle model to characterize the material, using only the power ratios of the vertically and horizontally polarized signals measured at several different incident angles, which avoids the use of inaccurate WiFi signal phases. We implement and evaluate Wi-Painter in the real world, showing an average classification accuracy of 93.4% for different material types including metal, wood, rubber, and plastic of different sizes and thicknesses, and across different environments.
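The polarization-dependent reflectivity that Wi-Painter exploits follows from the textbook Fresnel equations: for a given complex permittivity, the vertically (p) and horizontally (s) polarized reflection coefficients diverge as the incident angle grows. The sketch below shows that underlying physics for a planar air-material interface; it is not the paper's estimation pipeline.

```python
import numpy as np

def fresnel_power_ratio(eps_r, theta_deg):
    """Reflected-power ratio of vertically (p) to horizontally (s) polarized
    signals at a planar air-material interface, from the Fresnel equations.

    eps_r: complex relative permittivity of the material.
    theta_deg: incident angle in degrees, measured from the surface normal.
    """
    eps_r = complex(eps_r)
    theta = np.deg2rad(theta_deg)
    cos_i = np.cos(theta)
    n2 = np.sqrt(eps_r)                               # complex refractive index
    cos_t = np.sqrt(eps_r - np.sin(theta) ** 2) / n2  # transmission angle (Snell)
    r_s = (cos_i - n2 * cos_t) / (cos_i + n2 * cos_t)  # horizontal polarization
    r_p = (n2 * cos_i - cos_t) / (n2 * cos_i + cos_t)  # vertical polarization
    return abs(r_p) ** 2 / abs(r_s) ** 2
```

At normal incidence the two polarizations reflect identically (ratio 1); near the Brewster angle the p-polarized reflection collapses, so the ratio measured across several incident angles carries a material signature.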
Citations: 0
Bias Mitigation in Federated Learning for Edge Computing
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631455
Yasmine Djebrouni, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova, V. Schiavoni
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing, including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a model aggregation approach that selects the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles bias over single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of both bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
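Astral's core idea, choosing aggregation weights that keep bias under a threshold while maximizing accuracy, can be caricatured as a constrained search over the weight simplex. The sketch below assumes (unrealistically) that the aggregate's accuracy and bias are linear in the weights, purely for illustration; it is not Astral's actual algorithm.

```python
import itertools
import numpy as np

def select_aggregation_weights(client_accs, client_biases, bias_threshold,
                               step=0.1):
    """Grid-search client aggregation weights that maximize a (linearized)
    accuracy estimate subject to an upper bound on a (linearized) bias
    estimate. Illustrative toy only: real FL models are not linear in the
    aggregation weights."""
    accs = np.asarray(client_accs, dtype=float)
    biases = np.asarray(client_biases, dtype=float)
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    best_w, best_acc = None, -1.0
    for combo in itertools.product(grid, repeat=len(accs)):
        total = sum(combo)
        if total == 0:
            continue
        w = np.array(combo) / total               # normalize onto the simplex
        if w @ biases <= bias_threshold + 1e-9 and w @ accs > best_acc:
            best_w, best_acc = w, float(w @ accs)
    return best_w, best_acc
```

With one accurate-but-biased client and one fair-but-weaker client, the search shifts weight toward the fair client until the bias bound is met.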
Citations: 0
Semantic Loss
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631407
Luca Arrotta, Gabriele Civitarese, Claudio Bettini
Deep Learning models are a standard solution for sensor-based Human Activity Recognition (HAR), but their deployment is often limited by labeled data scarcity and the models' opacity. Neuro-Symbolic AI (NeSy) provides an interesting research direction for mitigating these issues by infusing knowledge about context information into HAR deep learning classifiers. However, existing NeSy methods for context-aware HAR require computationally expensive symbolic reasoners during classification, making them less suitable for deployment on resource-constrained devices (e.g., mobile devices). Additionally, NeSy approaches for context-aware HAR have never been evaluated on in-the-wild datasets, and their generalization capabilities in real-world scenarios are questionable. In this work, we propose a novel approach based on a semantic loss function that infuses knowledge constraints into the HAR model during the training phase, avoiding symbolic reasoning during classification. Our results on scripted and in-the-wild datasets show that models trained with different semantic loss functions outperform a purely data-driven model. We also compare our solution with existing NeSy methods and analyze each approach's strengths and weaknesses. Our semantic loss remains the only NeSy solution that can be deployed as a single DNN without the need for symbolic reasoning modules, reaching recognition rates close to (and in some cases better than) existing approaches.
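A generic form of such a semantic loss penalizes probability mass assigned to context-inconsistent activities, e.g., "cycling" should receive little mass when the context says the user is indoors. The sketch below illustrates the idea (a knowledge constraint applied at training time); it is not the paper's exact formulation.

```python
import numpy as np

def semantic_loss(probs, allowed_mask, eps=1e-12):
    """Penalty that grows as probability mass falls on context-inconsistent
    classes: -log(total probability of the allowed classes).

    probs: (n_classes,) softmax output of the HAR classifier.
    allowed_mask: boolean array, True for activities consistent with the
        current context. Generic sketch, not the paper's formulation.
    """
    consistent_mass = float(np.sum(probs[allowed_mask]))
    return -np.log(consistent_mass + eps)
```

In training, this term would be added to the usual cross-entropy (e.g., `total = ce + lam * semantic_loss(probs, mask)`), so the classifier learns to respect the constraints without any symbolic reasoner at inference time.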
Citations: 0
HyperTracking
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-01-12 DOI: 10.1145/3631434
Xiaoqiang Xu, Xuanqi Meng, Xinyu Tong, Xiulong Liu, Xin Xie, Wenyu Qu
Wireless sensing technology allows for non-intrusive sensing without the need for physical sensors worn by the target, enabling a wide range of applications such as indoor tracking and activity identification. To theoretically reveal the fundamental principles of wireless sensing, the Fresnel zone model has been introduced in the field of Wi-Fi sensing. While the Fresnel zone model is effective in explaining the sensing mechanism in line-of-sight (LoS) scenarios, achieving accurate sensing in non-line-of-sight (NLoS) situations continues to pose a significant challenge. In this paper, we propose a novel theoretical model called the hyperbolic zone to reveal the fundamental sensing mechanism in NLoS scenarios. The main principle is to eliminate the complex NLoS path shared among different transmitter-receiver pairs, which allows us to obtain a series of simple "virtual" reflection paths among receivers. Since these "virtual" reflection paths satisfy the properties of the hyperbola, we propose the hyperbolic tracking model. Based on the proposed model, we implement the HyperTracking system using commercial Wi-Fi devices. The experimental results show that the proposed hyperbolic model is suitable for accurate tracking in both LoS and NLoS scenarios, reducing tracking error by 0.36 m relative to the Fresnel zone model in NLoS scenarios. When we utilize the proposed hyperbolic model to train a typical LSTM neural network, we further reduce the tracking error by 0.13 m and cut the execution time by 281% on the same data. As a whole, our method reduces the tracking error by 54% in NLoS scenarios compared with the Fresnel zone model.
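The hyperbolic locus behind the model is classic TDoA geometry: a fixed difference of path lengths to two receivers constrains the target to a hyperbola with the receivers as foci, and intersecting several such loci yields a position. A small numeric sketch of that geometry (not the paper's full pipeline; positions and grid are made up for illustration):

```python
import numpy as np

def hyperbola_residual(p, f1, f2, path_diff):
    """Signed residual of point p against the hyperbola whose foci are the
    receivers f1, f2 and whose constant range difference is `path_diff`;
    zero means p lies on the locus."""
    p, f1, f2 = (np.asarray(v, dtype=float) for v in (p, f1, f2))
    return (np.linalg.norm(p - f1) - np.linalg.norm(p - f2)) - path_diff

def locate_on_grid(receiver_pairs, path_diffs, xs, ys):
    """Brute-force the grid point minimizing the summed absolute residual
    over several receiver pairs, i.e., intersect the hyperbolas numerically."""
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            cost = sum(abs(hyperbola_residual((x, y), f1, f2, d))
                       for (f1, f2), d in zip(receiver_pairs, path_diffs))
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best
```

With path differences computed from a known target, the grid search recovers that target as the intersection of the hyperbolas.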
Citations: 0
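The hyperbolic principle in the HyperTracking abstract — a fixed path-length difference between a target and a pair of receivers constrains the target to a hyperbola with those receivers as foci, and intersecting several such hyperbolas pins down the position — can be illustrated with a toy 2D localizer. This is a minimal geometric sketch, not the authors' system: it assumes the "virtual" path-length differences have already been extracted from the Wi-Fi signal, and it uses a brute-force grid search rather than any optimized solver.

```python
import numpy as np

def range_differences(p, receivers):
    """Path-length differences from point p to each receiver, relative to
    receiver 0. Holding one of these differences fixed constrains p to a
    hyperbola whose foci are the corresponding receiver pair."""
    d = np.linalg.norm(receivers - p, axis=1)
    return d - d[0]

def locate(receivers, deltas, bounds=(-5.0, 5.0), n=201):
    """Grid-search the point whose range differences best match `deltas`,
    i.e. (approximately) the intersection of the hyperbolas."""
    xs = np.linspace(bounds[0], bounds[1], n)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)                  # (n*n, 2)
    d = np.linalg.norm(pts[:, None, :] - receivers[None, :, :], axis=2)
    dd = d - d[:, :1]                                               # diffs vs receiver 0
    err = np.sum((dd - deltas) ** 2, axis=1)
    return pts[np.argmin(err)]

if __name__ == "__main__":
    receivers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
    target = np.array([1.0, 2.0])
    deltas = range_differences(target, receivers)
    print(locate(receivers, deltas))   # approximately [1. 2.]
```

With three receivers there are two independent differences, hence two hyperbolas, which is the minimum needed for an unambiguous 2D fix on this grid.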
ToothFairy
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631412
Yang Wang, Feng Hong, Yufei Jiang, Chenyu Bao, Chao Liu, Zhongwen Guo
Tooth brushing monitors have the potential to enhance oral hygiene and encourage the development of healthy brushing habits. However, previous studies fall short of recognizing each tooth due to limitations in external sensors and variations among users. To address these challenges, we present ToothFairy, a real-time tooth-by-tooth brushing monitor that uses earphone reverse signals captured within the oral cavity to identify each tooth during brushing. The key component of ToothFairy is a novel bone-conducted acoustic attenuation model, which quantifies sound propagation within the oral cavity. This model eliminates the need for machine learning and can be calibrated with just one second of brushing data for each tooth by a new user. ToothFairy also addresses practical issues such as brushing detection and tooth region determination. Results from extensive experiments, involving 10 volunteers and 25 combinations of five commercial off-the-shelf toothbrush and earphone models each, show that ToothFairy achieves tooth recognition with an average accuracy of 90.5%.
{"title":"ToothFairy","authors":"Yang Wang, Feng Hong, Yufei Jiang, Chenyu Bao, Chao Liu, Zhongwen Guo","doi":"10.1145/3631412","DOIUrl":"https://doi.org/10.1145/3631412","url":null,"abstract":"Tooth brushing monitors have the potential to enhance oral hygiene and encourage the development of healthy brushing habits. However, previous studies fall short of recognizing each tooth due to limitations in external sensors and variations among users. To address these challenges, we present ToothFairy, a real-time tooth-by-tooth brushing monitor that uses earphone reverse signals captured within the oral cavity to identify each tooth during brushing. The key component of ToothFairy is a novel bone-conducted acoustic attenuation model, which quantifies sound propagation within the oral cavity. This model eliminates the need for machine learning and can be calibrated with just one second of brushing data for each tooth by a new user. ToothFairy also addresses practical issues such as brushing detection and tooth region determination. Results from extensive experiments, involving 10 volunteers and 25 combinations of five commercial off-the-shelf toothbrush and earphone models each, show that ToothFairy achieves tooth recognition with an average accuracy of 90.5%.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"12 50","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
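The calibration scheme in the ToothFairy abstract — one second of brushing data per tooth, no machine learning — suggests a simple template-matching step: store one acoustic feature vector per tooth at calibration time, then assign each brushing frame to the nearest template. The sketch below is a hypothetical illustration of that nearest-template idea, not the paper's bone-conducted attenuation model; the feature vectors are assumed to be already extracted from the in-ear microphone signal.

```python
import numpy as np

def calibrate(frames_by_tooth):
    """Average the short calibration recording for each tooth into one
    feature template (a per-user, per-tooth acoustic signature)."""
    return {tooth: np.mean(np.asarray(frames), axis=0)
            for tooth, frames in frames_by_tooth.items()}

def identify_tooth(frame, templates):
    """Assign a brushing frame to the tooth whose template is closest
    in Euclidean distance."""
    return min(templates, key=lambda t: np.linalg.norm(frame - templates[t]))
```

Because matching is a single distance computation per tooth, this kind of scheme runs in real time and needs no training data beyond the per-user calibration, consistent with the abstract's claim.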
DIPA2
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631439
Anran Xu, Zhongyi Zhou, Kakeru Miyazaki, Ryo Yoshikawa, S. Hosio, Koji Yatani
The world today is increasingly visual. Many of the most popular online social networking services are largely powered by images, making image privacy protection a critical research topic in the fields of ubiquitous computing, usable security, and human-computer interaction (HCI). One topical issue is understanding privacy-threatening content in images that are shared online. This dataset article introduces DIPA2, an open-sourced image dataset that offers object-level annotations with high-level reasoning properties to show perceptions of privacy among different cultures. DIPA2 provides 5,897 annotations describing perceived privacy risks of 3,347 objects in 1,304 images. The annotations contain the type of the object and four additional privacy metrics: 1) information type indicating what kind of information may leak if the image containing the object is shared, 2) a 7-point Likert item estimating the perceived severity of privacy leakages, and 3) intended recipient scopes when annotators assume they are either image owners or allowing others to repost the image. Our dataset contains unique data from two cultures: We recruited annotators from both Japan and the U.K. to demonstrate the impact of culture on object-level privacy perceptions. In this paper, we first illustrate how we designed and performed the construction of DIPA2, along with data analysis of the collected annotations. Second, we provide two machine-learning baselines to demonstrate how DIPA2 challenges the current image privacy recognition task. DIPA2 facilitates various types of research on image privacy, including machine learning methods inferring privacy threats in complex scenarios, quantitative analysis of cultural influences on privacy preferences, understanding of image sharing behaviors, and promotion of cyber hygiene for general user populations.
{"title":"DIPA2","authors":"Anran Xu, Zhongyi Zhou, Kakeru Miyazaki, Ryo Yoshikawa, S. Hosio, Koji Yatani","doi":"10.1145/3631439","DOIUrl":"https://doi.org/10.1145/3631439","url":null,"abstract":"The world today is increasingly visual. Many of the most popular online social networking services are largely powered by images, making image privacy protection a critical research topic in the fields of ubiquitous computing, usable security, and human-computer interaction (HCI). One topical issue is understanding privacy-threatening content in images that are shared online. This dataset article introduces DIPA2, an open-sourced image dataset that offers object-level annotations with high-level reasoning properties to show perceptions of privacy among different cultures. DIPA2 provides 5,897 annotations describing perceived privacy risks of 3,347 objects in 1,304 images. The annotations contain the type of the object and four additional privacy metrics: 1) information type indicating what kind of information may leak if the image containing the object is shared, 2) a 7-point Likert item estimating the perceived severity of privacy leakages, and 3) intended recipient scopes when annotators assume they are either image owners or allowing others to repost the image. Our dataset contains unique data from two cultures: We recruited annotators from both Japan and the U.K. to demonstrate the impact of culture on object-level privacy perceptions. In this paper, we first illustrate how we designed and performed the construction of DIPA2, along with data analysis of the collected annotations. Second, we provide two machine-learning baselines to demonstrate how DIPA2 challenges the current image privacy recognition task. DIPA2 facilitates various types of research on image privacy, including machine learning methods inferring privacy threats in complex scenarios, quantitative analysis of cultural influences on privacy preferences, understanding of image sharing behaviors, and promotion of cyber hygiene for general user populations.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"11 3","pages":"1 - 30"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
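The annotation schema described in the DIPA2 abstract (object type, information type, a 7-point Likert severity, and an intended recipient scope) maps naturally onto a small record type with range validation. The sketch below is a hypothetical representation with illustrative field names — the dataset's actual file format and schema may differ.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyAnnotation:
    object_type: str        # e.g. the annotated object's category
    information_type: str   # what kind of information could leak if shared
    severity: int           # perceived severity on a 7-point Likert scale
    recipient_scope: str    # intended audience when sharing or reposting
    culture: str            # annotator culture, e.g. "JP" or "UK"

    def __post_init__(self):
        if not 1 <= self.severity <= 7:
            raise ValueError("severity must be on the 7-point Likert scale (1-7)")
```

Validating at construction time keeps downstream analysis (e.g. comparing severity distributions between the two annotator cultures) free of out-of-range values.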
Unobtrusive Air Leakage Estimation for Earables with In-ear Microphones
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-01-12 DOI: 10.1145/3631405
B. U. Demirel, Ting Dang, Khaldoon Al-Naimi, F. Kawsar, A. Montanari
Earables (in-ear wearables) are gaining increasing attention for sensing applications and healthcare research thanks to their ergonomy and non-invasive nature. However, air leakages between the device and the user's ear, resulting from daily activities or wearing variabilities, can decrease the performance of applications, interfere with calibrations, and reduce the robustness of the overall system. Existing literature lacks established methods for estimating the degree of air leaks (i.e., seal integrity) to provide information for the earable applications. In this work, we proposed a novel unobtrusive method for estimating the air leakage level of earbuds based on an in-ear microphone. The proposed method aims to estimate the magnitude of distortions, reflections, and external noise in the ear canal while excluding the speaker output by learning the speaker-to-microphone transfer function which allows us to perform the task unobtrusively. Using the obtained residual signal in the ear canal, we extract three features and deploy a machine-learning model for estimating the air leakage level. We investigated our system under various conditions to validate its robustness and resilience against the motion and other artefacts. Our extensive experimental evaluation shows that the proposed method can track air leakage levels under different daily activities. "The best computer is a quiet, invisible servant." ~Mark Weiser
{"title":"Unobtrusive Air Leakage Estimation for Earables with In-ear Microphones","authors":"B. U. Demirel, Ting Dang, Khaldoon Al-Naimi, F. Kawsar, A. Montanari","doi":"10.1145/3631405","DOIUrl":"https://doi.org/10.1145/3631405","url":null,"abstract":"Earables (in-ear wearables) are gaining increasing attention for sensing applications and healthcare research thanks to their ergonomy and non-invasive nature. However, air leakages between the device and the user's ear, resulting from daily activities or wearing variabilities, can decrease the performance of applications, interfere with calibrations, and reduce the robustness of the overall system. Existing literature lacks established methods for estimating the degree of air leaks (i.e., seal integrity) to provide information for the earable applications. In this work, we proposed a novel unobtrusive method for estimating the air leakage level of earbuds based on an in-ear microphone. The proposed method aims to estimate the magnitude of distortions, reflections, and external noise in the ear canal while excluding the speaker output by learning the speaker-to-microphone transfer function which allows us to perform the task unobtrusively. Using the obtained residual signal in the ear canal, we extract three features and deploy a machine-learning model for estimating the air leakage level. We investigated our system under various conditions to validate its robustness and resilience against the motion and other artefacts. Our extensive experimental evaluation shows that the proposed method can track air leakage levels under different daily activities. \"The best computer is a quiet, invisible servant.\" ~Mark Weiser","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"13 6","pages":"1 - 29"},"PeriodicalIF":0.0,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
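The core step in the air-leakage abstract — learn the speaker-to-microphone transfer function, subtract the predicted speaker component, and work with the residual left in the ear canal — can be sketched with an ordinary least-squares FIR fit. This is a simplified illustration under a linear time-invariant channel assumption, not the paper's method; the three residual features and the machine-learning regressor are omitted, and `taps` is an illustrative parameter.

```python
import numpy as np

def fir_design_matrix(x, taps):
    """Delayed copies of the speaker signal x, one column per FIR tap."""
    n = len(x)
    X = np.zeros((n, taps))
    for k in range(taps):
        X[k:, k] = x[:n - k]
    return X

def residual_energy(x, y, taps=8):
    """Fit y ~= X @ h (speaker-to-mic transfer function) by least squares,
    then return the energy of the residual y - X @ h: the part of the
    in-ear recording not explained by the speaker output."""
    X = fir_design_matrix(x, taps)
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ h
    return float(np.sum(r ** 2))
```

A well-sealed ear leaves almost no residual once the speaker path is modeled; leakage-induced reflections and external noise show up as extra residual energy, which is the quantity a downstream model would summarize.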
Journal
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies