
Latest Publications in Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.

HyperHAR: Inter-sensing Device Bilateral Correlations and Hyper-correlations Learning Approach for Wearable Sensing Device Based Human Activity Recognition
Pub Date: 2024-03-06 DOI: 10.1145/3643511
Nafees Ahmad, Ho-fung Leung
Human activity recognition (HAR) has emerged as a prominent research field in recent years. Current HAR models can only model bilateral correlations between two sensing devices for feature extraction. However, for some activities, exploiting correlations among more than two sensing devices, which we call hyper-correlations in this paper, is essential for extracting discriminative features. In this work, we propose a novel HyperHAR framework that automatically models both bilateral and hyper-correlations among sensing devices. HyperHAR consists of three modules. The Intra-sensing Device Feature Extraction Module generates a latent representation of each sensing device's data, based on which the Inter-sensing Device Multi-order Correlations Learning Module simultaneously learns both bilateral correlations and hyper-correlations. Lastly, the Information Aggregation Module generates a representation for an individual sensing device by aggregating the bilateral correlations and hyper-correlations it is involved in. It also generates a representation for a pair of sensing devices by aggregating the hyper-correlations between that pair and the other individual sensing devices. We also propose HyperHAR-Lite, a computationally more efficient and lightweight variant of the HyperHAR framework, at a small cost in accuracy. Both HyperHAR and HyperHAR-Lite outperform SOTA models by significant margins across three commonly used benchmark datasets. We validate the efficiency and effectiveness of the proposed frameworks through an ablation study and quantitative and qualitative analyses.
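To make the bilateral/hyper-correlation distinction concrete, the following is a minimal Python sketch, not the authors' implementation: given one feature vector per sensing device (roughly what the Intra-sensing Device Feature Extraction Module would output), it computes pairwise (bilateral) and triple-wise (hyper) correlation scores and aggregates, per device, everything that device is involved in. Cosine similarity, np.corrcoef over triples, and mean aggregation are illustrative assumptions.

```python
# Illustrative sketch of bilateral vs. hyper-correlation aggregation (assumed design).
import itertools
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def correlation_features(device_feats):
    """device_feats: dict name -> 1D feature vector from a per-device encoder."""
    names = list(device_feats)
    # Bilateral correlations: one score per device pair.
    bilateral = {pair: cosine(device_feats[pair[0]], device_feats[pair[1]])
                 for pair in itertools.combinations(names, 2)}
    # Hyper-correlations: one score per device triple (more than two devices).
    hyper = {}
    for trio in itertools.combinations(names, 3):
        stacked = np.stack([device_feats[n] for n in trio])
        hyper[trio] = float(np.mean(np.corrcoef(stacked)))
    # Per-device aggregation over every correlation the device participates in.
    merged = {**bilateral, **hyper}
    per_device = {n: float(np.mean([v for k, v in merged.items() if n in k]))
                  for n in names}
    return bilateral, hyper, per_device

feats = {d: np.random.randn(16) for d in ["wrist", "ankle", "chest", "pocket"]}
print(correlation_features(feats)[2])
```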
Citations: 0
Privacy-Preserving and Cross-Domain Human Sensing by Federated Domain Adaptation with Semantic Knowledge Correction
Pub Date: 2024-03-06 DOI: 10.1145/3643503
Kaijie Gong, Yi Gao, Wei Dong
Federated Learning (FL) enables distributed training of human sensing models in a privacy-preserving manner. While promising, federated global models suffer from cross-domain accuracy degradation when the labeled source domains statistically differ from the unlabeled target domain. To tackle this problem, recent methods perform pairwise computation on the source and target domains to minimize the domain discrepancy with an adversarial strategy. However, these methods are limited by the fact that pairwise source-target adversarial alignment alone only achieves domain-level alignment, which entails aligning environment-dependent features along with domain-invariant ones. The misalignment of environment-dependent features may negatively impact the performance of the federated global model. In this paper, we introduce FDAS, a Federated adversarial Domain Adaptation method with Semantic knowledge correction. FDAS achieves concurrent alignment at both the domain and semantic levels to improve the semantic quality of the aligned features, thereby reducing the misalignment of environment-dependent features. Moreover, we design a cross-domain semantic similarity metric and further devise feature selection and feature refinement mechanisms to enhance the two-level alignment. In addition, we propose a similarity-aware model fine-tuning strategy to further improve the target model performance. We evaluate the performance of FDAS extensively on four public datasets and a real-world human sensing dataset. Extensive experiments demonstrate the superior effectiveness of FDAS and its potential in real-world ubiquitous computing scenarios.
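As an illustration of what a cross-domain semantic similarity metric could look like (an assumption on our part, not the paper's definition), the sketch below compares per-class feature prototypes of a labeled source client with prototypes built from pseudo-labeled target features; prototype averaging and cosine similarity are illustrative choices, and every class is assumed to have at least one sample.

```python
# Illustrative cross-domain semantic similarity via class prototypes (assumed design).
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class; assumes every class appears at least once."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def semantic_similarity(src_feats, src_labels, tgt_feats, tgt_pseudo_labels, num_classes):
    src_proto = class_prototypes(src_feats, src_labels, num_classes)
    tgt_proto = class_prototypes(tgt_feats, tgt_pseudo_labels, num_classes)
    src_proto /= np.linalg.norm(src_proto, axis=1, keepdims=True) + 1e-8
    tgt_proto /= np.linalg.norm(tgt_proto, axis=1, keepdims=True) + 1e-8
    # Mean per-class cosine similarity between source and target prototypes.
    return float(np.mean(np.sum(src_proto * tgt_proto, axis=1)))

rng = np.random.default_rng(0)
src_f, tgt_f = rng.standard_normal((200, 32)), rng.standard_normal((150, 32))
src_y, tgt_y = rng.integers(0, 5, 200), rng.integers(0, 5, 150)
print(semantic_similarity(src_f, src_y, tgt_f, tgt_y, num_classes=5))
```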
Citations: 0
UFace: Your Smartphone Can "Hear" Your Facial Expression!
Pub Date: 2024-03-06 DOI: 10.1145/3643546
Shuning Wang, Linghui Zhong, Yongjian Fu, Lili Chen, Ju Ren, Yaoxue Zhang
Facial expression recognition (FER) is a crucial task for human-computer interaction and a multitude of multimedia applications that typically call for friendly, unobtrusive, ubiquitous, and even long-term monitoring. Achieving a FER system that meets these requirements faces critical challenges, mainly including the tiny, irregular, non-periodic deformations of emotional movements, high variability in facial positions, and severe self-interference caused by users' own other behaviors. In this work, we present UFace, a long-term, unobtrusive, and reliable FER system for daily life that uses acoustic signals generated by a portable smartphone. We design an innovative network model with dual-stream input based on the attention mechanism, which leverages distance-time profile features from various viewpoints to extract fine-grained emotion-related signal changes, thus enabling accurate identification of many kinds of expressions. Meanwhile, we propose effective mechanisms to deal with a series of interference issues during actual use. We implement a UFace prototype with a daily-used smartphone and conduct extensive experiments in various real-world environments. The results demonstrate that UFace can successfully recognize 7 typical facial expressions with an average accuracy of 87.8% across 20 participants. Besides, evaluations at different distances, angles, and interference levels prove the great potential of the proposed system to be employed in practical scenarios.
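The distance-time profile mentioned above can be illustrated with a simple sketch (an assumption about the general acoustic-sensing approach, not UFace's code): each received microphone frame is correlated with the emitted probe waveform, and the stacked correlation magnitudes form a distance-time map whose variations reflect facial movement. The 48 kHz sample rate, 50 ms frame length, probe waveform, and normalization are illustrative.

```python
# Illustrative distance-time profile from acoustic frames (assumed pipeline step).
import numpy as np

FS = 48_000                      # assumed speaker/mic sample rate (Hz)
FRAME = 2_400                    # 50 ms frames

def distance_time_profile(received, probe):
    """received: 1D mic samples; probe: 1D transmitted waveform (shorter than FRAME)."""
    n_frames = len(received) // FRAME
    rows = []
    for i in range(n_frames):
        frame = received[i * FRAME:(i + 1) * FRAME]
        # Correlation lag is proportional to round-trip delay, i.e., reflector distance.
        corr = np.abs(np.correlate(frame, probe, mode="valid"))
        rows.append(corr / (corr.max() + 1e-8))
    return np.stack(rows)        # shape: (time frames, distance bins)

rng = np.random.default_rng(0)
probe = np.sin(2 * np.pi * 18_000 * np.arange(480) / FS)   # 10 ms near-ultrasonic tone
received = rng.standard_normal(FS) * 0.1                   # 1 s of toy mic data
print(distance_time_profile(received, probe).shape)
```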
Citations: 0
UHead: Driver Attention Monitoring System Using UWB Radar
Pub Date: 2024-03-06 DOI: 10.1145/3643551
Chongzhi Xu, Xiaolong Zheng, Z. Ren, Liang Liu, Huadong Ma
The focus of advanced driver-assistance systems (ADAS) is extending from the vehicle and road conditions to the driver, because the driver's attention is critical to driving safety. Although existing sensor- and camera-based methods can monitor driver attention, they rely on specialised hardware and environmental conditions. In this paper, we aim to develop an effective and easy-to-use driver attention monitoring system based on UWB radar. We exploit the strong association between head motion and driver attention and propose UHead, which infers driver attention by monitoring the direction and angle of the driver's head rotation. The core idea is to extract a rotational time-frequency representation from reflected signals and to estimate head rotation angles from complex head reflections. To eliminate the dynamic noise generated by other body parts, UHead leverages the large magnitude and high velocity of head rotation to extract head motion information from the dynamically coupled information. UHead uses a bilinear joint time-frequency representation to avoid the loss of time and frequency resolution caused by the windowing of traditional methods. We also design a head-structure-based rotation angle estimation algorithm to accurately estimate the rotation angle from the time-varying rotation information of multiple reflection points on the head. Experimental results show a median error of 12.96° for 3D head rotation angle estimation in real vehicle scenes.
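A bilinear joint time-frequency representation is, in textbook form, a Wigner-Ville-style distribution. The sketch below (a simplified illustration, not UHead's implementation) computes a discrete Wigner-Ville distribution as the FFT of the instantaneous autocorrelation of a complex slow-time radar signal; normalization and the exact frequency scaling are simplified, and the input chirp is a toy signal.

```python
# Illustrative discrete Wigner-Ville distribution (bilinear, window-free) sketch.
import numpy as np

def wigner_ville(x):
    """x: 1D complex slow-time signal (e.g., one UWB range bin over time)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):                        # one spectrum per time instant, no window
        m_max = min(n, N - 1 - n)             # largest symmetric lag available at time n
        r = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])   # instantaneous autocorrelation
        W[n] = np.real(np.fft.fft(r))
    return W                                  # rows: time, columns: (scaled) frequency

t = np.arange(256) / 256.0
chirp = np.exp(1j * 2 * np.pi * (20 * t + 40 * t ** 2))  # toy rotation-like chirp
print(wigner_ville(chirp).shape)
```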
Citations: 0
DeltaLCA: Comparative Life-Cycle Assessment for Electronics Design
Pub Date: 2024-03-06 DOI: 10.1145/3643561
Zhihang Zhang, Felix Hähnlein, Yuxuan Mei, Zachary Englhardt, Shwetak Patel, Adriana Schulz, Vikram Iyer
Reducing the environmental footprint of electronics and computing devices requires new tools that empower designers to make informed decisions about sustainability during the design process itself. This is not possible with current tools for life cycle assessment (LCA), which require substantial domain expertise and time to evaluate the numerous chips and other components that make up a device. We observe first that informed decision-making does not require absolute metrics and can instead be done by comparing designs. Second, we can use domain-specific heuristics to perform these comparisons. We combine these insights to develop DeltaLCA, an open-source interactive design tool that addresses the dual challenges of automating life cycle inventory generation and data availability by performing comparative analyses of electronics designs. Users can upload standard design files from Electronic Design Automation (EDA) software and the tool will guide them through determining which one has the greater carbon footprint. DeltaLCA leverages electronics-specific LCA datasets and heuristics and tries to automatically rank the two designs, prompting users to provide additional information only when necessary. We show through case studies that DeltaLCA achieves the same result as evaluating full LCAs, and that it accelerates LCA comparisons from eight expert-hours to a single click for devices with ~30 components, and to 15 minutes for more complex devices with ~100 components.
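The comparative (rather than absolute) idea can be illustrated with a small sketch, which is not DeltaLCA's implementation: each design's bill of materials is mapped to a carbon-footprint interval using whatever per-component data exists, unknown parts fall back to a broad assumed range, and a verdict is returned only when the two intervals do not overlap. The footprint database values and the unknown-part range below are made-up numbers.

```python
# Illustrative interval-based comparison of two bills of materials (assumed data).
FOOTPRINT_DB = {"MCU": 0.9, "LDO": 0.05, "ACCEL": 0.3}   # kg CO2e per part, assumed
UNKNOWN_RANGE = (0.01, 1.5)                               # assumed bounds for missing parts

def footprint_interval(bom):
    """bom: dict part name -> quantity; returns (low, high) kg CO2e bounds."""
    low = high = 0.0
    for part, qty in bom.items():
        if part in FOOTPRINT_DB:
            low += qty * FOOTPRINT_DB[part]
            high += qty * FOOTPRINT_DB[part]
        else:
            low += qty * UNKNOWN_RANGE[0]
            high += qty * UNKNOWN_RANGE[1]
    return low, high

def compare_designs(bom_a, bom_b):
    a_low, a_high = footprint_interval(bom_a)
    b_low, b_high = footprint_interval(bom_b)
    if a_high < b_low:
        return "A has the smaller footprint"
    if b_high < a_low:
        return "B has the smaller footprint"
    return "Inconclusive: more component data needed"

print(compare_designs({"MCU": 1, "LDO": 2}, {"MCU": 2, "ACCEL": 1, "XTAL": 1}))
```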
Citations: 0
Multimodal Daily-Life Logging in Free-living Environment Using Non-Visual Egocentric Sensors on a Smartphone
Pub Date: 2024-03-06 DOI: 10.1145/3643553
Ke Sun, Chunyu Xia, Xinyu Zhang, Hao Chen, C. Zhang
Egocentric, non-intrusive sensing of human activities of daily living (ADL) in free-living environments represents a holy grail in ubiquitous computing. Existing approaches, such as egocentric vision and wearable motion sensors, can either be intrusive or have limitations in capturing non-ambulatory actions. To address these challenges, we propose EgoADL, the first egocentric ADL sensing system that uses an in-pocket smartphone as a multi-modal sensor hub to capture body motion and interactions with the physical environment and daily objects using non-visual sensors (audio, wireless sensing, and motion sensors). We collected a 120-hour multimodal dataset and annotated 20 hours of data into 221 ADLs, 70 object interactions, and 91 actions. EgoADL proposes multi-modal frame-wise slow-fast encoders to learn feature representations of multi-sensory data that exploit the complementary advantages of different modalities, and adapts a transformer-based sequence-to-sequence model to decode the time-series sensor signals into a sequence of words that represent ADL. In addition, we introduce a self-supervised learning framework that extracts intrinsic supervisory signals from the multi-modal sensing data to overcome the lack of labeled data and achieve better generalization and extensibility. Our experiments in free-living environments demonstrate that EgoADL can achieve performance comparable to video-based approaches, bringing the vision of ambient intelligence closer to reality.
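The slow-fast framing can be pictured with the following sketch (an assumed framing step, not EgoADL's code): one pathway slices the sensor stream into dense short frames, the other into sparse long frames, giving two complementary views that a frame-wise encoder could consume. Window sizes, hop sizes, and the slow-down factor are illustrative.

```python
# Illustrative slow-fast framing of a single-modality sensor stream (assumed parameters).
import numpy as np

def slow_fast_frames(stream, fast_win=64, fast_hop=32, slow_factor=4):
    """stream: (samples, channels) array from one modality."""
    fast = [stream[i:i + fast_win]
            for i in range(0, len(stream) - fast_win + 1, fast_hop)]
    slow_win, slow_hop = fast_win * slow_factor, fast_hop * slow_factor
    slow = [stream[i:i + slow_win]
            for i in range(0, len(stream) - slow_win + 1, slow_hop)]
    return np.stack(fast), np.stack(slow)

audio_like = np.random.randn(4096, 2)
fast, slow = slow_fast_frames(audio_like)
print(fast.shape, slow.shape)   # (127, 64, 2) (31, 256, 2)
```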
Citations: 0
LiquImager: Fine-grained Liquid Identification and Container Imaging System with COTS WiFi Devices
Pub Date: 2024-03-06 DOI: 10.1145/3643509
Fei Shang, Panlong Yang, Dawei Yan, Sijia Zhang, Xiang-Yang Li
WiFi has gradually developed into one of the main candidate technologies for ubiquitous sensing. Based on commercial off-the-shelf (COTS) WiFi devices, this paper proposes LiquImager, which can simultaneously identify liquids and image containers regardless of container shape and position. Since the container size is close to the wavelength, diffraction makes the effect of the liquid on the signal difficult to approximate with a simple geometric model (such as ray tracing). Based on Maxwell's equations, we construct an electric field scattering sensing model. Using the few measurements provided by COTS WiFi devices, we solve the scattering model to obtain the medium distribution of the sensing domain, which is used for identifying and imaging liquids. To suppress signal noise, we propose LiqU-Net for image enhancement. For a centimeter-scale container randomly placed in an area of 25 cm × 25 cm, LiquImager identifies the liquid with more than 90% accuracy. In terms of container imaging, LiquImager can accurately find the edge of the container for 4 types of containers with a volume of less than 500 ml.
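A heavily simplified view of recovering a medium distribution from few measurements is sketched below (a generic linearized inverse-problem sketch, not the paper's Maxwell-based solver): under a Born-style approximation the measurements y relate to the discretized medium distribution x through a sensing matrix A, and x is recovered by Tikhonov-regularized least squares. The matrix, its dimensions, and the regularization weight are illustrative assumptions.

```python
# Illustrative linearized inverse scattering via Tikhonov regularization (assumed model).
import numpy as np

def reconstruct_medium(A, y, lam=1e-2):
    """A: (measurements, pixels) complex sensing matrix; y: (measurements,) data."""
    AhA = A.conj().T @ A
    # Solve (A^H A + lam I) x = A^H y for the per-pixel medium contrast.
    x = np.linalg.solve(AhA + lam * np.eye(A.shape[1]), A.conj().T @ y)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) + 1j * rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[40:45] = 1.0                     # a small "liquid" region in a toy 1D domain
y = A @ x_true
print(np.abs(reconstruct_medium(A, y)).round(2)[:50])
```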
Citations: 0
SpeciFingers: Finger Identification and Error Correction on Capacitive Touchscreens
Pub Date: 2024-03-06 DOI: 10.1145/3643559
Zeyuan Huang, Cangjun Gao, Haiyan Wang, Xiaoming Deng, Yu-Kun Lai, Cuixia Ma, Sheng-feng Qin, Yong-Jin Liu, Hongan Wang
The inadequate use of finger properties has limited the input space of touch interaction. By leveraging the category of the contacting finger, finger-specific interaction can expand the input vocabulary. However, accurate finger identification remains challenging: previous works require either additional sensors or limited sets of identifiable fingers to achieve ideal accuracy. We introduce SpeciFingers, a novel approach to identifying fingers from the raw capacitive data on touchscreens. We apply a neural network with an encoder-decoder architecture, which captures the spatio-temporal features in capacitive image sequences. To assist users in recovering from misidentification, we propose a correction mechanism to replace the existing undo-redo process. We also present a design space of finger-specific interaction with example interaction techniques. In particular, we designed and implemented a use case of optimizing pointing performance on small targets. We evaluated our identification model and error correction mechanism in this use case.
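The correction mechanism can be read as replacing undo-redo with a candidate re-ranking step. The sketch below is one illustrative reading, not the authors' implementation: the per-finger confidences for the last touch are kept in ranked order, and a "correct" gesture re-dispatches the same touch under the next most likely finger. The finger labels and callback interface are assumptions.

```python
# Illustrative misidentification-correction flow (assumed interface).
class FingerCorrector:
    def __init__(self, on_finger_action):
        self.on_finger_action = on_finger_action   # callback: (finger, touch) -> None
        self.ranked = []                           # candidate fingers for the last touch
        self.pos = 0
        self.touch = None

    def new_touch(self, probs, touch):
        """probs: dict finger -> model confidence for this touch."""
        self.ranked = sorted(probs, key=probs.get, reverse=True)
        self.pos, self.touch = 0, touch
        self.on_finger_action(self.ranked[0], touch)

    def correct(self):
        """User signals a misidentification: re-dispatch under the next candidate."""
        if not self.ranked:
            return
        self.pos = (self.pos + 1) % len(self.ranked)
        self.on_finger_action(self.ranked[self.pos], self.touch)

corrector = FingerCorrector(lambda f, t: print(f"apply action of {f} at {t}"))
corrector.new_touch({"index": 0.55, "middle": 0.40, "thumb": 0.05}, touch=(120, 300))
corrector.correct()   # switches the same touch from "index" to "middle"
```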
Citations: 0
Multi-Subject 3D Human Mesh Construction Using Commodity WiFi
Pub Date: 2024-03-06 DOI: 10.1145/3643504
Yichao Wang, Yili Ren, Jie Yang
This paper introduces MultiMesh, a multi-subject 3D human mesh construction system based on commodity WiFi. Our system can reuse commodity WiFi devices in the environment and, compared with traditional computer vision-based approaches, is capable of working in non-line-of-sight (NLoS) conditions. Specifically, we leverage an L-shaped antenna array to generate the two-dimensional angle of arrival (2D AoA) of reflected signals for subject separation in physical space. We further leverage the angle of departure and the time of flight of the signal to enhance resolvability for precise separation of closely spaced subjects. We then exploit information from various signal dimensions to mitigate the interference of indirect reflections according to different signal propagation paths. Moreover, we employ the continuity of human movement in the spatial-temporal domain to track weak reflected signals from faraway subjects. Finally, we utilize a deep learning model to digitize the 2D AoA images of each subject into a 3D human mesh. We conducted extensive experiments in real-world multi-subject scenarios under various environments to evaluate the performance of our system. For example, we conduct experiments with occlusion and perform human mesh construction for different distances between two subjects and different distances between subjects and WiFi devices. The results show that MultiMesh can accurately construct 3D human meshes for multiple users with an average vertex error of 4 cm. The evaluations also demonstrate that our system achieves comparable performance for unseen environments and people. Moreover, we also evaluate the accuracy of spatial information extraction and the performance of subject detection. These evaluations demonstrate the robustness and effectiveness of our system.
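For the 2D AoA step, a textbook conventional-beamforming sketch over an L-shaped array is shown below (not MultiMesh's estimator): CSI snapshots are turned into a spatial covariance matrix and scanned over an azimuth-elevation grid, with the steering vectors of the two arms sharing the corner element. Half-wavelength spacing, the grid resolution, and the toy CSI are illustrative assumptions.

```python
# Illustrative 2D AoA spectrum for an L-shaped array via conventional beamforming.
import numpy as np

def steering_vector(az, el, n_per_arm, spacing_wl=0.5):
    n = np.arange(n_per_arm)
    phase_x = np.exp(-2j * np.pi * spacing_wl * n * np.sin(el) * np.cos(az))  # arm along x
    phase_y = np.exp(-2j * np.pi * spacing_wl * n * np.sin(el) * np.sin(az))  # arm along y
    return np.concatenate([phase_x, phase_y[1:]])            # corner element shared once

def aoa_spectrum(snapshots, n_per_arm, grid=90):
    """snapshots: (elements, time) complex CSI; returns (azimuth, elevation) power map."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # spatial covariance
    azs = np.linspace(-np.pi / 2, np.pi / 2, grid)
    els = np.linspace(0, np.pi / 2, grid)
    P = np.zeros((grid, grid))
    for i, az in enumerate(azs):
        for j, el in enumerate(els):
            a = steering_vector(az, el, n_per_arm)
            P[i, j] = np.real(a.conj() @ R @ a)               # Bartlett beamformer power
    return P

rng = np.random.default_rng(0)
csi = rng.standard_normal((7, 50)) + 1j * rng.standard_normal((7, 50))  # 4 antennas per arm
print(aoa_spectrum(csi, n_per_arm=4).shape)
```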
Citations: 0
IOTeeth: Intra-Oral Teeth Sensing System for Dental Occlusal Diseases Recognition
Pub Date: 2024-03-06 DOI: 10.1145/3643516
Zhizhang Hu, Amir Radmehr, Yue Zhang, Shijia Pan, Phuc Nguyen
While occlusal diseases, the main cause of tooth loss, significantly impact patients' teeth and well-being, they are among the most underdiagnosed dental diseases today. Occlusal diseases can result in difficulties in eating and speaking and in chronic headaches, ultimately impacting patients' quality of life. Although attempts have been made to develop sensing systems for teeth activity monitoring, solutions that provide sufficient sensing resolution for occlusal monitoring are missing. To fill that gap, this paper presents IOTeeth, a cost-effective and automated intra-oral sensing system for continuous and fine-grained monitoring of occlusal diseases. The IOTeeth system includes an intra-oral piezoelectric-based sensing array integrated into a dental retainer platform to support reliable occlusal disease recognition. IOTeeth focuses on biting and grinding activities from the canines and front teeth, which contain essential information about occlusion. IOTeeth's intra-oral wearable collects signals from the sensors and feeds them into a lightweight and robust deep learning model called the Physioaware Attention Network (PAN Net) for occlusal disease recognition. We evaluate IOTeeth with 12 articulator teeth models from dental clinic patients. Evaluation results show an F1 score of 0.97 for activity recognition and an average F1 score of 0.92 for dental disease recognition across different activities, both with leave-one-out validation.
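A minimal pre-processing sketch for the piezoelectric array is given below (illustrative only, not the IOTeeth pipeline): the multi-channel piezo signal is cut into sliding windows and reduced to simple per-channel statistics that a downstream classifier such as the paper's PAN Net could consume. Window length, hop size, and the chosen statistics are assumptions.

```python
# Illustrative windowing and per-channel features for a piezo sensing array (assumed setup).
import numpy as np

def piezo_windows(signal, win=200, hop=100):
    """signal: (samples, channels) array from the piezo sensing array."""
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, hop)])

def window_features(windows):
    rms = np.sqrt(np.mean(windows ** 2, axis=1))              # bite/grind intensity per channel
    peak = np.max(np.abs(windows), axis=1)                    # impact-like peaks
    zcr = np.mean(np.abs(np.diff(np.sign(windows), axis=1)) > 0, axis=1)  # oscillation rate
    return np.concatenate([rms, peak, zcr], axis=1)           # (windows, 3 * channels)

x = np.random.randn(2000, 6)          # toy recording from 6 piezo channels
print(window_features(piezo_windows(x)).shape)
```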
Citations: 0