
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies — Latest Publications

Investigating Generalizability of Speech-based Suicidal Ideation Detection Using Mobile Phones
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631452
Arvind Pillai, Trevor Cohen, Dror Ben-Zeev, Subigya Nepal, Weichen Wang, M. Nemesure, Michael Heinz, George Price, D. Lekkas, Amanda C. Collins, Tess Z. Griffin, Benjamin Buck, S. Preum, Nicholas Jacobson
Speech-based diaries from mobile phones can capture paralinguistic patterns that help detect mental illness symptoms such as suicidal ideation. However, previous studies have primarily evaluated machine learning models on a single dataset, leaving their performance under distribution shifts unknown. In this paper, we investigate the generalizability of speech-based suicidal ideation detection using mobile phones through cross-dataset experiments on four datasets comprising N=786 individuals experiencing major depressive disorder, auditory verbal hallucinations, or persecutory thoughts, as well as students with suicidal thoughts. Our results show that machine and deep learning methods generalize poorly in many cases. We therefore evaluate unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA) to mitigate the performance decreases caused by distribution shifts. While SSDA approaches showed superior performance, they are often impractical, requiring large target datasets with limited labels for adversarial and contrastive training. Therefore, we propose sinusoidal similarity sub-sampling (S3), a method that selects optimal source subsets for the target domain by computing pair-wise scores using sinusoids. Unlike prior approaches, S3 uses neither labeled target data nor feature transformations. Fine-tuning with S3 improves the cross-dataset performance of deep models across the datasets, with implications for ubiquitous technology, mental health, and machine learning.
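To make the S3 idea concrete, the sketch below scores each source sample by a sinusoid-based (cosine) similarity to the unlabeled target samples and keeps the best-matching subset for fine-tuning. The function name `s3_select`, the use of mean cosine similarity, and the `keep_ratio` parameter are assumptions for illustration; the paper's exact S3 scoring rule is not reproduced here.

```python
import numpy as np

def s3_select(source_feats, target_feats, keep_ratio=0.5):
    """Pick the source subset most similar to the target domain.

    Each source sample is scored by its mean cosine similarity (an
    angular, sinusoid-based score) to the unlabeled target samples;
    the top keep_ratio fraction is kept. Illustrative sketch only:
    the paper's S3 scoring rule may differ.
    """
    src = source_feats / np.linalg.norm(source_feats, axis=1, keepdims=True)
    tgt = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sims = src @ tgt.T                      # (n_source, n_target) pairwise scores
    scores = sims.mean(axis=1)              # how target-like each source sample is
    k = max(1, int(keep_ratio * len(src)))
    return np.argsort(scores)[::-1][:k]     # indices of the best-matching samples

# Toy usage: 200 source and 50 target feature vectors of dimension 16.
rng = np.random.default_rng(0)
source = rng.normal(size=(200, 16))
target = rng.normal(loc=0.5, size=(50, 16))
print(s3_select(source, target, keep_ratio=0.3).shape)  # (60,)
```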
Citations: 0
EarSE
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631447
Di Duan, Yongliang Chen, Weitao Xu, Tianxing Li
Speech enhancement is regarded as key to the quality of digital communication and is gaining increasing attention in audio processing research. In this paper, we present EarSE, the first robust, hands-free, multi-modal speech enhancement solution using commercial off-the-shelf headphones. The key idea of EarSE is a novel hardware setup: leveraging the form factor of headphones equipped with a boom microphone to establish a stable acoustic sensing field across the user's face. We further design a sensing methodology based on Frequency-Modulated Continuous-Wave (FMCW) ultrasound, a modality sensitive enough to capture the subtle facial articulatory gestures users make when speaking. Moreover, we design a fully attention-based deep neural network that self-adaptively addresses the user-diversity problem by introducing the Vision Transformer network. We enhance the collaboration between the speech and ultrasonic modalities using a multi-head attention mechanism and a Factorized Bilinear Pooling gate. Extensive experiments demonstrate that EarSE achieves remarkable performance, increasing SiSDR by 14.61 dB and reducing the word error rate of user speech recognition by 22.45--66.41% in real-world applications. EarSE not only outperforms seven baselines by 38.0% in SiSNR, 12.4% in STOI, and 20.5% in PESQ on average but also remains practical.
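As a rough illustration of the FMCW principle EarSE builds on, the sketch below dechirps a simulated ultrasonic echo to recover a reflector's range. All parameters (18-22 kHz chirp, 10 ms sweep, 48 kHz sampling, 5 cm reflector) are assumptions for the example, not EarSE's actual configuration.

```python
import numpy as np

# Illustrative FMCW dechirp; parameters are assumptions, not EarSE's.
fs = 48_000                        # audio sampling rate (Hz)
f0, bw, T = 18_000, 4_000, 0.01    # 18-22 kHz chirp swept over 10 ms
t = np.arange(int(fs * T)) / fs
tx = np.cos(2 * np.pi * (f0 * t + 0.5 * (bw / T) * t ** 2))

# Echo from a reflector 5 cm away (round trip at 343 m/s).
delay = 2 * 0.05 / 343
rx = np.cos(2 * np.pi * (f0 * (t - delay) + 0.5 * (bw / T) * (t - delay) ** 2))

# Mixing TX with RX yields a beat tone whose frequency encodes range.
beat = tx * rx
spec = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
max_bin = int(2_000 * len(beat) / fs)          # beat tone sits well below 2 kHz
peak = np.argmax(spec[1:max_bin]) + 1          # skip the DC bin
beat_hz = peak * fs / len(beat)
range_m = beat_hz * 343 * T / (2 * bw)
print(f"estimated range ~ {range_m:.3f} m")    # ~0.04 m; resolution is c/(2*bw) ~ 4.3 cm
```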
Citations: 0
Laser-Powered Vibrotactile Rendering
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631449
Yuning Su, Yuhua Jin, Zhengqing Wang, Yonghao Shi, Da-Yuan Huang, Teng Han, Xing-Dong Yang
We investigate the feasibility of a vibrotactile device that is both battery-free and electronics-free. Our approach leverages lasers as a wireless power-transfer and haptic-control mechanism that can drive small actuators commonly used in AR/VR and mobile applications with DC or AC signals. To validate the feasibility of our method, we developed a proof-of-concept prototype that includes low-cost eccentric rotating mass (ERM) motors and linear resonant actuators (LRAs) connected to photovoltaic (PV) cells. This prototype enabled us to capture laser energy from any distance across a room and to analyze the impact of critical parameters on the effectiveness of our approach. Through a user study testing 16 different vibration patterns rendered using either a single motor or two motors, we demonstrate that our approach generates vibration patterns comparable in quality to a baseline that rendered the patterns using a signal generator.
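A minimal sketch of the control side of such a system appears below: it synthesizes an AC drive waveform at an LRA's resonant frequency with an on/off amplitude envelope, which a laser modulator could carry to the PV cell. The 170 Hz resonance and the pattern timings are hypothetical values for illustration, not the prototype's measured parameters.

```python
import numpy as np

# Sketch: an LRA drive waveform a modulated laser could deliver via a PV cell.
# Resonant frequency and pattern timing below are hypothetical.
fs = 10_000                     # waveform sampling rate (Hz)
resonance_hz = 170              # typical LRA resonance (assumed)
pattern = [(0.2, 1.0), (0.1, 0.0), (0.2, 0.5)]   # (duration s, relative amplitude)

chunks = []
for dur, amp in pattern:
    t = np.arange(int(fs * dur)) / fs
    chunks.append(amp * np.sin(2 * np.pi * resonance_hz * t))
drive = np.concatenate(chunks)

# Shift into [0, 1] so laser power stays non-negative: the PV cell then
# reproduces the AC component on top of a DC bias.
laser_power = (drive + 1.0) / 2.0
print(len(drive), round(laser_power.min(), 2), round(laser_power.max(), 2))
```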
Citations: 0
Effects of Uncertain Trajectory Prediction Visualization in Highly Automated Vehicles on Trust, Situation Awareness, and Cognitive Load
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631408
Mark Colley, Oliver Speidel, Jan Strohbeck, J. Rixen, Janina Belz, Enrico Rukzio
Automated vehicles are expected to improve safety, mobility, and inclusion, and user acceptance is required for the successful introduction of this technology. One essential prerequisite for acceptance is appropriately trusting the vehicle's capabilities. System transparency via visualization of internal information could calibrate this trust by letting users monitor the vehicle's detection and prediction capabilities, including its failures. Additionally, the concurrently increased situation awareness could improve take-overs in emergencies. This work reports the results of two online comparative video-based studies on visualizing prediction and maneuver-planning information. Effects on trust, cognitive load, and situation awareness were measured using a simulation (N=280) and state-of-the-art road-user prediction and maneuver planning overlaid on a pre-recorded real-world video from a real prototype (N=238). Results show that color conveys uncertainty best, that visualizing the planned trajectory increased trust, and that visualizing other predicted trajectories improved perceived safety.
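Since the studies found color to be the best carrier of uncertainty, the sketch below shows one plausible encoding: coloring a predicted trajectory by its per-point standard deviation. The toy path, the growth of uncertainty with the prediction horizon, and the colormap are assumptions, not the studies' actual stimuli.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy predicted trajectory with uncertainty growing along the horizon.
t = np.linspace(0, 1, 50)
x, y = 10 * t, 2 * np.sin(2 * t)        # hypothetical predicted path (m)
sigma = 0.2 + 1.5 * t                   # hypothetical std-dev per point (m)

plt.scatter(x, y, c=sigma, cmap="RdYlGn_r", s=25)   # green = certain, red = uncertain
plt.colorbar(label="prediction std-dev (m)")
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.title("Predicted trajectory colored by uncertainty")
plt.savefig("uncertainty_trajectory.png")
```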
Citations: 0
LoCal
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631436
Duo Zhang, Xusheng Zhang, Yaxiong Xie, Fusang Zhang, Xuanzhi Wang, Yang Li, Daqing Zhang
Millimeter-wave (mmWave) radar excels at accurately estimating the distance, speed, and angle of signal reflectors relative to the radar. However, for the diverse sensing applications that rely on the radar's tracking capability, these estimates must be transformed from radar coordinates to room coordinates. This transformation hinges on the mmWave radar's location attribute, i.e., its position and orientation in room coordinates. Traditional outdoor calibration solutions for autonomous driving use corner reflectors as static reference points to derive this location attribute. Indoors, however, it is challenging, even for a mmWave radar with GHz bandwidth and a large antenna array, to separate the static reference points from other multipath reflectors. To tackle static multipath, we propose deploying a moving reference point (a moving robot) to fully harness the velocity resolution of mmWave radar. Specifically, we select a SLAM-capable robot that accurately obtains its own location in room coordinates during motion, without requiring human intervention. Accurately pairing the robot's locations under the two coordinate systems would require tight synchronization between the mmWave radar and the robot. We therefore propose a novel trajectory-correspondence-based calibration algorithm that takes the estimated trajectories of the two systems as input, decoupling their operation as much as possible. Extensive experimental results demonstrate that the proposed calibration solution is highly accurate (1.74 cm in location and 0.43° in orientation) and ensures outstanding performance in three representative applications: fall detection, point-cloud fusion, and long-distance human tracking.
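The core of such a calibration is estimating the rigid transform between radar and room coordinates from the two trajectories. The sketch below fits a rotation and translation with a standard Kabsch/Procrustes least-squares step over corresponded 2D trajectory points; LoCal's actual algorithm relaxes this strict point pairing, so treat this as an illustrative baseline.

```python
import numpy as np

def rigid_transform_2d(radar_pts, room_pts):
    """Least-squares R and t such that room ~ R @ radar + t.

    A standard Kabsch/Procrustes fit over corresponded trajectory
    points; illustrative baseline, not LoCal's full algorithm.
    """
    pr, pm = radar_pts.mean(0), room_pts.mean(0)
    H = (radar_pts - pr).T @ (room_pts - pm)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = pm - R @ pr
    return R, t

# Toy check: recover a known 30-degree rotation plus offset from noisy points.
rng = np.random.default_rng(1)
radar = rng.uniform(0, 5, size=(100, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
room = radar @ R_true.T + np.array([2.0, -1.0]) + rng.normal(0, 0.01, (100, 2))
R_est, t_est = rigid_transform_2d(radar, room)
print(np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])))  # ~30
```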
Citations: 0
SweatSkin
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631425
Chi-Jung Lee, David Yang, P. Ku, Hsin-Liu (Cindy) Kao
Sweat sensing affords the monitoring of essential bio-signals for a variety of well-being assessments. We present SweatSkin, a fabrication approach for customizable sweat-sensing on-skin interfaces. SweatSkin is unique in exploiting on-skin microfluidic channels to access bio-fluids secreted within the skin for personalized health monitoring. To lower the barrier to creating skin-conformable microfluidics capable of collecting and analyzing sweat, we propose four fabrication methods that use accessible materials. Technical characterizations of paper- and polymer-based devices indicate that colorimetric analysis can effectively visualize sweat loss, chloride, glucose, and pH values. To support scenarios from everyday to extreme sweating, we consulted five athletic experts on the SweatSkin devices' customization guidelines, application potential, and envisioned usages. A two-session fabrication workshop study with ten participants verified that the four fabrication methods are easy to learn and easy to carry out. Overall, SweatSkin is an extensible and user-friendly platform for designing and creating customizable on-skin sweat-sensing interfaces for UbiComp and HCI, affording ubiquitous personalized health sensing.
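For the colorimetric readout, one simple software path is to convert an assay patch's RGB to hue and interpolate against a calibration curve, as sketched below. The calibration pairs are made-up placeholders; a real device would be calibrated against its own reference chart.

```python
import colorsys
import numpy as np

# Hypothetical hue-to-pH calibration pairs (placeholders, not real assay data).
calib_hue = np.array([20.0, 45.0, 80.0, 120.0, 160.0])   # hue in degrees
calib_ph = np.array([4.0, 5.0, 6.0, 7.0, 8.0])

def ph_from_rgb(r, g, b):
    """Estimate pH from a patch's mean RGB (components in [0, 1])."""
    hue_deg = colorsys.rgb_to_hsv(r, g, b)[0] * 360.0
    return float(np.interp(hue_deg, calib_hue, calib_ph))

print(ph_from_rgb(0.55, 0.75, 0.30))  # yellow-green patch -> mid-range pH (~6.2)
```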
Citations: 0
Driver Maneuver Interaction Identification with Anomaly-Aware Federated Learning on Heterogeneous Feature Representations
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631421
Mahan Tabatabaie, Suining He
Driver maneuver interaction learning (DMIL) refers to the classification task of identifying different driver-vehicle maneuver interactions (e.g., left/right turns). Existing studies have largely focused on the centralized collection of sensor data from drivers' smartphones (e.g., inertial measurement units, or IMUs, such as the accelerometer and gyroscope). Such a centralized mechanism may be precluded by data-regulatory constraints. Furthermore, enabling an adaptive and accurate DMIL framework remains challenging due to (i) the complexity of heterogeneous driver maneuver patterns, and (ii) the impact of anomalous driver maneuvers caused by, for instance, aggressive driving styles and behaviors. To overcome these challenges, we propose AF-DMIL, an Anomaly-aware Federated Driver Maneuver Interaction Learning system. We focus on real-world IMU sensor datasets (e.g., collected by smartphones) for our pilot case study. In particular, we design three heterogeneous representations for AF-DMIL based on spectral, time-series, and statistical features derived from the IMU sensor readings. We design a novel heterogeneous representation attention network (HetRANet) built on spectral channel attention, temporal sequence attention, and statistical feature learning mechanisms that jointly capture and identify the complex patterns within driver maneuver behaviors. Furthermore, we design a densely-connected convolutional neural network in HetRANet to enable complex feature extraction and enhance its computational efficiency. In addition, within AF-DMIL we design a novel anomaly-aware federated learning approach for decentralized DMIL that responds to anomalous maneuver data. To ease extraction of the maneuver patterns and evaluation of their mutual differences, we design an embedding projection network that projects the high-dimensional driver maneuver features into a low-dimensional space and further derives exemplars that represent the driver maneuver patterns for mutual comparison. AF-DMIL then leverages the mutual differences among the exemplars to identify those that exhibit anomalous patterns and deviate from the others, and mitigates their impact on the federated DMIL. We have conducted extensive driver data analytics and experimental studies on three real-world datasets (one harvested on our own) to evaluate the AF-DMIL prototype, demonstrating its accuracy and effectiveness compared with state-of-the-art DMIL baselines (on average, more than a 13% improvement in DMIL accuracy) as well as fewer communication rounds (on average 29.20% fewer than existing distributed learning mechanisms).
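A minimal sketch of anomaly-aware aggregation appears below: client updates far from the robust consensus are flagged and excluded before averaging. The median/MAD scoring used here is a common stand-in chosen for illustration; AF-DMIL's actual exemplar-based identification differs.

```python
import numpy as np

def anomaly_aware_fedavg(client_updates, z_thresh=2.0):
    """Aggregate client model updates, excluding anomalous ones.

    Flattens each update, scores its distance to the coordinate-wise
    median update, and drops clients beyond z_thresh robust z-scores.
    Illustrative stand-in for AF-DMIL's exemplar-based scheme.
    """
    U = np.stack([u.ravel() for u in client_updates])
    med = np.median(U, axis=0)
    dists = np.linalg.norm(U - med, axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-9
    z = (dists - np.median(dists)) / (1.4826 * mad)   # robust z-scores
    keep = z < z_thresh
    agg = U[keep].mean(axis=0)
    return agg.reshape(client_updates[0].shape), keep

# Toy usage: nine benign updates and one aggressive outlier.
rng = np.random.default_rng(2)
updates = [rng.normal(0, 0.1, size=(4, 4)) for _ in range(9)]
updates.append(np.full((4, 4), 5.0))          # anomalous client
agg, kept = anomaly_aware_fedavg(updates)
print(kept)  # the last client should be flagged False
```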
Citations: 0
SurfShare
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631418
Xincheng Huang, Robert Xiao
Shared Mixed Reality experiences allow two co-located users to collaborate on both physical and digital tasks with familiar social protocols. However, extending the same to remote collaboration is limited by cumbersome setups for aligning distinct physical environments and the lack of access to remote physical artifacts. We present SurfShare, a general-purpose symmetric remote collaboration system with mixed-reality head-mounted displays (HMDs). Our system shares a spatially consistent physical-virtual workspace between two remote users, anchored on a physical plane in each environment (e.g., a desk or wall). The video feed of each user's physical surface is overlaid virtually on the other side, creating a shared view of the physical space. We integrate the physical and virtual workspace through virtual replication. Users can transmute physical objects to the virtual space as virtual replicas. Our system is lightweight, implemented using only the capabilities of the headset, without requiring any modifications to the environment (e.g. cameras or motion tracking hardware). We discuss the design, implementation, and interaction capabilities of our prototype, and demonstrate the utility of SurfShare through four example applications. In a user experiment with a comprehensive prototyping task, we found that SurfShare provides a physical-virtual workspace that supports low-fi prototyping with flexible proxemics and fluid collaboration dynamics.
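A simplified sketch of the plane-anchored video overlay follows: a homography maps one user's surface frame onto the remote plane's corners in view. OpenCV is used here as a desktop stand-in, and the corner coordinates are placeholders for what SurfShare would obtain from the HMD's plane anchoring.

```python
import numpy as np
import cv2

# Placeholder "local desk" frame standing in for the headset's surface video.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.putText(frame, "local desk", (180, 240),
            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)

# Frame corners and the remote plane's corners in the viewer's image (assumed).
src = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
dst = np.float32([[100, 80], [560, 60], [600, 420], [80, 440]])

# Homography that warps the local surface onto the remote anchored plane.
H, _ = cv2.findHomography(src, dst)
remote_view = cv2.warpPerspective(frame, H, (720, 480))
print(H.shape)  # (3, 3)
```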
Citations: 0
Wi-Painter
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3633809
Dawei Yan, Panlong Yang, Fei Shang, Weiwei Jiang, Xiang-Yang Li
WiFi has gradually developed into one of the main candidate technologies for indoor environment sensing. In this paper, we are interested in using COTS WiFi devices to identify material details (location, material type, and shape) of stationary objects in the surrounding environment, which may open up new opportunities for many applications. Specifically, we present Wi-Painter, a model-driven system that can accurately detect smooth-surfaced material types and their edges using unmodified COTS WiFi devices. Different from previous work on material identification, Wi-Painter subdivides the target into individual 2D pixels and forms a 2D image by identifying the material type of each pixel. The key idea of Wi-Painter is to exploit the complex permittivity of the object surface, which can be estimated from the differing reflectivity of signals with different polarization directions. In particular, we construct a multi-incident-angle model to characterize the material using only the power ratios of the vertically and horizontally polarized signals measured at several different incident angles, which avoids the use of inaccurate WiFi signal phases. We implement and evaluate Wi-Painter in the real world, showing an average classification accuracy of 93.4% across material types including metal, wood, rubber, and plastic of different sizes and thicknesses, and across different environments. In addition, Wi-Painter can accurately detect the material type and edges of the word "LOVE" spliced together from different materials, with an average size of 60 cm × 80 cm, and material edges with different orientations.
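The physics behind the power-ratio measurement can be sketched with the Fresnel equations: the ratio of s- to p-polarized reflected power at a given incidence angle is a function of the surface's complex permittivity, so ratios measured at several angles pin down the material. The candidate permittivities and angles below are illustrative, not Wi-Painter's calibrated values.

```python
import numpy as np

def fresnel_power_ratio(eps_r, theta_deg):
    """Power ratio of s- to p-polarized reflection off a surface with
    relative permittivity eps_r, for incidence from air (Fresnel equations)."""
    th = np.deg2rad(theta_deg)
    root = np.sqrt(eps_r - np.sin(th) ** 2 + 0j)
    r_s = (np.cos(th) - root) / (np.cos(th) + root)                   # s-polarization
    r_p = (eps_r * np.cos(th) - root) / (eps_r * np.cos(th) + root)   # p-polarization
    return np.abs(r_s) ** 2 / (np.abs(r_p) ** 2 + 1e-12)

angles = np.array([20, 35, 50, 65])
measured = fresnel_power_ratio(2.5 - 0.1j, angles)   # stand-in for a measurement

# Grid-search the candidate permittivity that best explains the ratios.
cands = [1.5 - 0.05j, 2.5 - 0.1j, 4.0 - 0.5j, 15.0 - 3.0j]
errs = [np.sum((fresnel_power_ratio(e, angles) - measured) ** 2) for e in cands]
print(cands[int(np.argmin(errs))])  # (2.5-0.1j)
```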
Citations: 0
HyperTracking
Q1 Computer Science | Pub Date: 2024-01-12 | DOI: 10.1145/3631434
Xiaoqiang Xu, Xuanqi Meng, Xinyu Tong, Xiulong Liu, Xin Xie, Wenyu Qu
Wireless sensing technology allows for non-intrusive sensing without physical sensors worn by the target, enabling a wide range of applications such as indoor tracking and activity identification. To theoretically reveal the fundamental principles of wireless sensing, the Fresnel zone model has been introduced in the field of Wi-Fi sensing. While the Fresnel zone model is effective in explaining the sensing mechanism in line-of-sight (LoS) scenarios, achieving accurate sensing in non-line-of-sight (NLoS) situations continues to pose a significant challenge. In this paper, we propose a novel theoretical model, the Hyperbolic zone model, to reveal the fundamental sensing mechanism in NLoS scenarios. The main principle is to eliminate the complex NLoS path shared among different transmitter-receiver pairs, which yields a series of simple "virtual" reflection paths among receivers. Since these "virtual" reflection paths satisfy the properties of the hyperbola, we propose a hyperbolic tracking model. Based on the proposed model, we implement the HyperTracking system using commercial Wi-Fi devices. Experimental results show that the proposed hyperbolic model is suitable for accurate tracking in both LoS and NLoS scenarios, reducing tracking error by 0.36 m compared with the Fresnel zone model in NLoS scenarios. When we use the proposed hyperbolic model to train a typical LSTM neural network, we further reduce the tracking error by 0.13 m and speed up execution by 281% on the same data. Overall, our method reduces tracking error by 54% in NLoS scenarios compared with the Fresnel zone model.
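To illustrate the hyperbolic constraint, the sketch below localizes a target from differences in "virtual" reflection path lengths at three receivers: each pairwise difference confines the target to a hyperbola whose foci are the two receivers, and a grid search finds their intersection. Receiver positions and the noiseless measurements are toy assumptions.

```python
import numpy as np

# Toy setup: three receivers and a target; path-length differences between
# receiver pairs define hyperbolas that intersect at the target.
rx = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])  # receiver positions (m)
target = np.array([1.5, 1.2])

d = np.linalg.norm(rx - target, axis=1)
diffs = np.array([d[1] - d[0], d[2] - d[0]])          # "measured" path differences

# Grid-search the point whose range differences best match the measurements.
xs, ys = np.meshgrid(np.linspace(-1, 5, 400), np.linspace(-1, 4, 400))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
r = np.linalg.norm(pts[:, None, :] - rx[None, :, :], axis=2)
err = (r[:, 1] - r[:, 0] - diffs[0]) ** 2 + (r[:, 2] - r[:, 0] - diffs[1]) ** 2
print(pts[np.argmin(err)])  # ~[1.5, 1.2]
```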
Citations: 0