
Latest publications in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

PulmoListener
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610889
Sejal Bhalla, Salaar Liaqat, Robert Wu, Andrea S. Gershon, Eyal de Lara, Alex Mariakakis
Prior work has shown the utility of acoustic analysis in controlled settings for assessing chronic obstructive pulmonary disease (COPD) --- one of the most common respiratory diseases that impacts millions of people worldwide. However, such assessments require active user input and may not represent the true characteristics of a patient's voice. We propose PulmoListener, an end-to-end speech processing pipeline that identifies segments of the patient's speech from smartwatch audio collected during daily living and analyzes them to classify COPD symptom severity. To evaluate our approach, we conducted a study with 8 COPD patients over 164 ± 92 days on average. We found that PulmoListener achieved an average sensitivity of 0.79 ± 0.03 and a specificity of 0.83 ± 0.05 per patient when classifying their symptom severity on the same day. PulmoListener can also predict the severity level up to 4 days in advance with an average sensitivity of 0.75 ± 0.02 and a specificity of 0.74 ± 0.07. The results of our study demonstrate the feasibility of leveraging natural speech for monitoring COPD in real-world settings, offering a promising solution for disease management and even diagnosis.
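The evaluation above reports per-patient sensitivity and specificity averaged across patients. The following minimal Python sketch illustrates that protocol on made-up day-level labels and same-day predictions; the helper function and the patient data are hypothetical, not taken from the paper.

```python
# Per-patient sensitivity/specificity on day-level labels (1 = severe symptoms,
# 0 = not severe) versus same-day predictions. All data below are made up.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Hypothetical patients: patient id -> (daily labels, daily predictions).
patients = {
    "P1": ([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 0]),
    "P2": ([0, 0, 1, 0, 1, 1], [0, 1, 1, 0, 1, 1]),
}

per_patient = [sensitivity_specificity(y, p) for y, p in patients.values()]
avg_sens = sum(s for s, _ in per_patient) / len(per_patient)
avg_spec = sum(s for _, s in per_patient) / len(per_patient)
print(f"average sensitivity = {avg_sens:.2f}, average specificity = {avg_spec:.2f}")
```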
{"title":"PulmoListener","authors":"Sejal Bhalla, Salaar Liaqat, Robert Wu, Andrea S. Gershon, Eyal de Lara, Alex Mariakakis","doi":"10.1145/3610889","DOIUrl":"https://doi.org/10.1145/3610889","url":null,"abstract":"Prior work has shown the utility of acoustic analysis in controlled settings for assessing chronic obstructive pulmonary disease (COPD) --- one of the most common respiratory diseases that impacts millions of people worldwide. However, such assessments require active user input and may not represent the true characteristics of a patient's voice. We propose PulmoListener, an end-to-end speech processing pipeline that identifies segments of the patient's speech from smartwatch audio collected during daily living and analyzes them to classify COPD symptom severity. To evaluate our approach, we conducted a study with 8 COPD patients over 164 ± 92 days on average. We found that PulmoListener achieved an average sensitivity of 0.79 ± 0.03 and a specificity of 0.83 ± 0.05 per patient when classifying their symptom severity on the same day. PulmoListener can also predict the severity level up to 4 days in advance with an average sensitivity of 0.75 ± 0.02 and a specificity of 0.74 ± 0.07. The results of our study demonstrate the feasibility of leveraging natural speech for monitoring COPD in real-world settings, offering a promising solution for disease management and even diagnosis.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast Radio Map Construction with Domain Disentangled Learning for Wireless Localization
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610922
Weina Jiang, Lin Shi, Qun Niu, Ning Liu
The accuracy of wireless fingerprint-based indoor localization largely depends on the precision and density of radio maps. Although many research efforts have been devoted to incremental updating of radio maps, few consider the laborious initial construction of a new site. In this work, we propose an accurate and generalizable framework for efficient radio map construction, which takes advantage of readily-available fine-grained radio maps and constructs fine-grained radio maps of a new site with a small proportion of measurements in it. Specifically, we regard radio maps as domains and propose a Radio Map construction approach based on Domain Adaptation (RMDA). We first employ the domain disentanglement feature extractor to learn domain-invariant features for aligning the source domains (available radio maps) with the target domain (initial radio map) in the domain-invariant latent space. Furthermore, we propose a dynamic weighting strategy, which learns the relevancy of the source and target domain in the domain adaptation. Then, we extract the domain-specific features based on the site's floorplan and use them to constrain the super-resolution of the domain-invariant features. Experimental results demonstrate that RMDA constructs a fine-grained initial radio map of a target site efficiently with a limited number of measurements. Meanwhile, the localization accuracy of the refined radio map with RMDA significantly improved by about 41.35% after construction and is comparable with the dense surveyed radio map (the reduction is less than 8%).
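To make the transfer setting concrete, the sketch below trains a small feature extractor on labeled source-site fingerprints while roughly aligning its feature statistics with a handful of target-site measurements. It is an illustrative stand-in with made-up coordinates and RSS values; the paper's domain disentanglement, dynamic weighting, and floorplan-constrained super-resolution are not reproduced here.

```python
import torch
import torch.nn as nn

# (x, y) location -> shared features; features -> received signal strength estimate.
feature_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 32))
rss_head = nn.Linear(32, 1)

opt = torch.optim.Adam(
    list(feature_net.parameters()) + list(rss_head.parameters()), lr=1e-3
)

# Hypothetical data: dense fingerprints from a source site, a few target-site samples.
src_xy, src_rss = torch.rand(256, 2), torch.randn(256, 1)
tgt_xy = torch.rand(32, 2)

for step in range(200):
    f_src, f_tgt = feature_net(src_xy), feature_net(tgt_xy)
    loss_sup = nn.functional.mse_loss(rss_head(f_src), src_rss)        # supervised on source
    loss_align = (f_src.mean(dim=0) - f_tgt.mean(dim=0)).pow(2).sum()  # crude feature alignment
    loss = loss_sup + 0.1 * loss_align
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained head can then be queried on a dense grid of target coordinates to
# produce an initial fine-grained radio map for the new site.
```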
{"title":"Fast Radio Map Construction with Domain Disentangled Learning for Wireless Localization","authors":"Weina Jiang, Lin Shi, Qun Niu, Ning Liu","doi":"10.1145/3610922","DOIUrl":"https://doi.org/10.1145/3610922","url":null,"abstract":"The accuracy of wireless fingerprint-based indoor localization largely depends on the precision and density of radio maps. Although many research efforts have been devoted to incremental updating of radio maps, few consider the laborious initial construction of a new site. In this work, we propose an accurate and generalizable framework for efficient radio map construction, which takes advantage of readily-available fine-grained radio maps and constructs fine-grained radio maps of a new site with a small proportion of measurements in it. Specifically, we regard radio maps as domains and propose a Radio Map construction approach based on Domain Adaptation (RMDA). We first employ the domain disentanglement feature extractor to learn domain-invariant features for aligning the source domains (available radio maps) with the target domain (initial radio map) in the domain-invariant latent space. Furthermore, we propose a dynamic weighting strategy, which learns the relevancy of the source and target domain in the domain adaptation. Then, we extract the domain-specific features based on the site's floorplan and use them to constrain the super-resolution of the domain-invariant features. Experimental results demonstrate that RMDA constructs a fine-grained initial radio map of a target site efficiently with a limited number of measurements. Meanwhile, the localization accuracy of the refined radio map with RMDA significantly improved by about 41.35% after construction and is comparable with the dense surveyed radio map (the reduction is less than 8%).","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interaction Harvesting
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610880
John Mamish, Amy Guo, Thomas Cohen, Julian Richey, Yang Zhang, Josiah Hester
Whenever a user interacts with a device, mechanical work is performed to actuate the user interface elements; the resulting energy is typically wasted, dissipated as sound and heat. Previous work has shown that many devices can be powered entirely from this otherwise wasted user interface energy. For these devices, wires and batteries, along with the related hassles of replacement and charging, become unnecessary and onerous. So far, these works have been restricted to proof-of-concept demonstrations; a specific bespoke harvesting and sensing circuit is constructed for the application at hand. The challenge of harvesting energy while simultaneously sensing fine-grained input signals from diverse modalities makes prototyping new devices difficult. To fill this gap, we present a hardware toolkit which provides a common electrical interface for harvesting energy from user interface elements. This facilitates exploring the composability, utility, and breadth of enabled applications of interaction-powered smart devices. We design a set of "energy as input" harvesting circuits, a standard connective interface with 3D printed enclosures, and software libraries to enable the exploration of devices where the user action generates the energy needed to perform the device's primary function. This exploration culminated in a demonstration campaign where we prototype several exemplar popular toys and gadgets, including battery-free Bop-It--- a popular 90s rhythm game, an electronic Etch-a-sketch, a "Simon-Says"-style memory game, and a service rating device. We run exploratory user studies to understand how generativity, creativity, and composability are hampered or facilitated by these devices. These demonstrations, user study takeaways, and the toolkit itself provide a foundation for building interactive and user-focused gadgets whose usability is not affected by battery charge and whose service lifetime is not limited by battery wear.
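As a rough sense of scale for the "energy as input" idea, the arithmetic below compares the energy an interface action might yield with the cost of acting on it. Every number is an assumption made for this sketch, not a measurement from the paper or its toolkit.

```python
# Back-of-the-envelope budget: does one button press power one sensing/reporting event?
harvest_per_press_uj = 300.0   # assumed energy harvested per button press (µJ)
converter_efficiency = 0.6     # assumed harvesting/regulation efficiency
ble_adv_cost_uj = 50.0         # assumed cost of one BLE advertisement (µJ)
mcu_wakeup_cost_uj = 20.0      # assumed cost of waking the MCU and sampling the input (µJ)

usable_uj = harvest_per_press_uj * converter_efficiency
cost_uj = ble_adv_cost_uj + mcu_wakeup_cost_uj
print(f"usable energy per press: {usable_uj:.0f} µJ, cost per event: {cost_uj:.0f} µJ")
print("self-powered per press" if usable_uj >= cost_uj
      else f"need {cost_uj / usable_uj:.1f} presses per event")
```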
{"title":"Interaction Harvesting","authors":"John Mamish, Amy Guo, Thomas Cohen, Julian Richey, Yang Zhang, Josiah Hester","doi":"10.1145/3610880","DOIUrl":"https://doi.org/10.1145/3610880","url":null,"abstract":"Whenever a user interacts with a device, mechanical work is performed to actuate the user interface elements; the resulting energy is typically wasted, dissipated as sound and heat. Previous work has shown that many devices can be powered entirely from this otherwise wasted user interface energy. For these devices, wires and batteries, along with the related hassles of replacement and charging, become unnecessary and onerous. So far, these works have been restricted to proof-of-concept demonstrations; a specific bespoke harvesting and sensing circuit is constructed for the application at hand. The challenge of harvesting energy while simultaneously sensing fine-grained input signals from diverse modalities makes prototyping new devices difficult. To fill this gap, we present a hardware toolkit which provides a common electrical interface for harvesting energy from user interface elements. This facilitates exploring the composability, utility, and breadth of enabled applications of interaction-powered smart devices. We design a set of \"energy as input\" harvesting circuits, a standard connective interface with 3D printed enclosures, and software libraries to enable the exploration of devices where the user action generates the energy needed to perform the device's primary function. This exploration culminated in a demonstration campaign where we prototype several exemplar popular toys and gadgets, including battery-free Bop-It--- a popular 90s rhythm game, an electronic Etch-a-sketch, a \"Simon-Says\"-style memory game, and a service rating device. We run exploratory user studies to understand how generativity, creativity, and composability are hampered or facilitated by these devices. These demonstrations, user study takeaways, and the toolkit itself provide a foundation for building interactive and user-focused gadgets whose usability is not affected by battery charge and whose service lifetime is not limited by battery wear.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
I Know Your Intent
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610906
Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Zixuan Weng, Ruoyu Li, Yong Jiang
With the booming of the smart home market, intelligent Internet of Things (IoT) devices have been increasingly involved in home life. To improve the user experience of smart homes, some prior works have explored how to use machine learning for predicting interactions between users and devices. However, the existing solutions have inferior User Device Interaction (UDI) prediction accuracy, as they ignore three key factors: routine, intent, and multi-level periodicity of human behaviors. In this paper, we present SmartUDI, a novel, accurate UDI prediction approach for smart homes. First, we propose a Message-Passing-based Routine Extraction (MPRE) algorithm to mine routine behaviors; a contrastive loss is then applied to draw together representations of behaviors from the same routines and push apart representations of behaviors from different routines. Second, we propose an Intent-aware Capsule Graph Attention Network (ICGAT) to encode multiple intents of users while considering complex transitions between different behaviors. Third, we design a Cluster-based Historical Attention Mechanism (CHAM) to capture the multi-level periodicity by aggregating the current sequence and the semantically nearest historical sequence representations through the attention mechanism. SmartUDI can be seamlessly deployed on the cloud infrastructures of IoT device vendors and on edge nodes, enabling the delivery of personalized device service recommendations to users. Comprehensive experiments on four real-world datasets show that SmartUDI consistently outperforms state-of-the-art baselines with more accurate and highly interpretable results.
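The routine-related contrastive objective can be illustrated with a generic supervised contrastive loss that pulls embeddings of behaviors from the same routine together and pushes different routines apart. The function below is a sketch under that reading, with random tensors standing in for behavior embeddings and routine assignments; it is not the SmartUDI implementation.

```python
import torch
import torch.nn.functional as F

def routine_contrastive_loss(z, routine_ids, temperature=0.1):
    """Pull same-routine behavior embeddings together, push different routines apart."""
    z = F.normalize(z, dim=1)                        # (N, d) unit-norm embeddings
    sim = z @ z.t() / temperature                    # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (routine_ids.unsqueeze(0) == routine_ids.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))  # never contrast a sample with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)   # sum over positives only
    has_pos = pos_mask.any(dim=1)                    # anchors with at least one same-routine peer
    return (-pos_log_prob[has_pos] / pos_mask.sum(dim=1)[has_pos]).mean()

z = torch.randn(16, 32)                  # hypothetical behavior embeddings
routines = torch.randint(0, 4, (16,))    # hypothetical routine assignments
print(routine_contrastive_loss(z, routines))
```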
{"title":"I Know Your Intent","authors":"Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Zixuan Weng, Ruoyu Li, Yong Jiang","doi":"10.1145/3610906","DOIUrl":"https://doi.org/10.1145/3610906","url":null,"abstract":"With the booming of smart home market, intelligent Internet of Things (IoT) devices have been increasingly involved in home life. To improve the user experience of smart homes, some prior works have explored how to use machine learning for predicting interactions between users and devices. However, the existing solutions have inferior User Device Interaction (UDI) prediction accuracy, as they ignore three key factors: routine, intent and multi-level periodicity of human behaviors. In this paper, we present SmartUDI, a novel accurate UDI prediction approach for smart homes. First, we propose a Message-Passing-based Routine Extraction (MPRE) algorithm to mine routine behaviors, then the contrastive loss is applied to narrow representations among behaviors from the same routines and alienate representations among behaviors from different routines. Second, we propose an Intent-aware Capsule Graph Attention Network (ICGAT) to encode multiple intents of users while considering complex transitions between different behaviors. Third, we design a Cluster-based Historical Attention Mechanism (CHAM) to capture the multi-level periodicity by aggregating the current sequence and the semantically nearest historical sequence representations through the attention mechanism. SmartUDI can be seamlessly deployed on cloud infrastructures of IoT device vendors and edge nodes, enabling the delivery of personalized device service recommendations to users. Comprehensive experiments on four real-world datasets show that SmartUDI consistently outperforms the state-of-the-art baselines with more accurate and highly interpretable results.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Modality Graph-based Language and Sensor Data Co-Learning of Human-Mobility Interaction
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610904
Mahan Tabatabaie, Suining He, Kang G. Shin
Learning the human--mobility interaction (HMI) on interactive scenes (e.g., how a vehicle turns at an intersection in response to traffic lights and other oncoming vehicles) can enhance the safety, efficiency, and resilience of smart mobility systems (e.g., autonomous vehicles) and many other ubiquitous computing applications. Towards the ubiquitous and understandable HMI learning, this paper considers both "spoken language" (e.g., human textual annotations) and "unspoken language" (e.g., visual and sensor-based behavioral mobility information related to the HMI scenes) in terms of information modalities from the real-world HMI scenarios. We aim to extract the important but possibly implicit HMI concepts (as the named entities) from the textual annotations (provided by human annotators) through a novel human language and sensor data co-learning design. To this end, we propose CG-HMI, a novel Cross-modality Graph fusion approach for extracting important Human-Mobility Interaction concepts from co-learning of textual annotations as well as the visual and behavioral sensor data. In order to fuse both unspoken and spoken "languages", we have designed a unified representation called the human--mobility interaction graph (HMIG) for each modality related to the HMI scenes, i.e., textual annotations, visual video frames, and behavioral sensor time-series (e.g., from the on-board or smartphone inertial measurement units). The nodes of the HMIG in these modalities correspond to the textual words (tokenized for ease of processing) related to HMI concepts, the detected traffic participant/environment categories, and the vehicle maneuver behavior types determined from the behavioral sensor time-series. To extract the inter- and intra-modality semantic correspondences and interactions in the HMIG, we have designed a novel graph interaction fusion approach with differentiable pooling-based graph attention. The resulting graph embeddings are then processed to identify and retrieve the HMI concepts within the annotations, which can benefit the downstream human-computer interaction and ubiquitous computing applications. We have developed and implemented CG-HMI into a system prototype, and performed extensive studies upon three real-world HMI datasets (two on car driving and the third one on e-scooter riding). We have corroborated the excellent performance (on average 13.11% higher accuracy than the other baselines in terms of precision, recall, and F1 measure) and effectiveness of CG-HMI in recognizing and extracting the important HMI concepts through cross-modality learning. Our CG-HMI studies also provide real-world implications (e.g., road safety and driving behaviors) about the interactions between the drivers and other traffic participants.
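The core graph-fusion idea can be sketched with a single attention step over a toy human-mobility interaction graph whose nodes come from the three modalities. The code below is a generic, illustrative graph-attention pass with invented node features and edges; it does not reproduce CG-HMI's differentiable-pooling design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical HMIG: 3 annotation-token nodes, 2 visual nodes, 1 behavior node.
feat = torch.randn(6, 16)
edges = torch.tensor([[0, 1], [1, 2], [0, 3], [3, 4], [2, 5], [4, 5]])  # cross/intra-modality links

W = nn.Linear(16, 16, bias=False)   # shared node projection
a = nn.Linear(32, 1, bias=False)    # attention scorer over concatenated node pairs

h = W(feat)

# Dense adjacency with self-loops (fine for a toy graph of six nodes).
adj = torch.eye(6, dtype=torch.bool)
adj[edges[:, 0], edges[:, 1]] = True
adj[edges[:, 1], edges[:, 0]] = True

# Attention logits e_ij = a([h_i || h_j]), restricted to connected pairs.
pairs = torch.cat(
    [h.unsqueeze(1).expand(6, 6, 16), h.unsqueeze(0).expand(6, 6, 16)], dim=-1
)
logits = F.leaky_relu(a(pairs).squeeze(-1)).masked_fill(~adj, float("-inf"))
alpha = torch.softmax(logits, dim=1)   # per-node attention over its neighbors
fused = alpha @ h                      # attended node embeddings, shape (6, 16)
print(fused.shape)
```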
{"title":"Cross-Modality Graph-based Language and Sensor Data Co-Learning of Human-Mobility Interaction","authors":"Mahan Tabatabaie, Suining He, Kang G. Shin","doi":"10.1145/3610904","DOIUrl":"https://doi.org/10.1145/3610904","url":null,"abstract":"Learning the human--mobility interaction (HMI) on interactive scenes (e.g., how a vehicle turns at an intersection in response to traffic lights and other oncoming vehicles) can enhance the safety, efficiency, and resilience of smart mobility systems (e.g., autonomous vehicles) and many other ubiquitous computing applications. Towards the ubiquitous and understandable HMI learning, this paper considers both \"spoken language\" (e.g., human textual annotations) and \"unspoken language\" (e.g., visual and sensor-based behavioral mobility information related to the HMI scenes) in terms of information modalities from the real-world HMI scenarios. We aim to extract the important but possibly implicit HMI concepts (as the named entities) from the textual annotations (provided by human annotators) through a novel human language and sensor data co-learning design. To this end, we propose CG-HMI, a novel Cross-modality Graph fusion approach for extracting important Human-Mobility Interaction concepts from co-learning of textual annotations as well as the visual and behavioral sensor data. In order to fuse both unspoken and spoken \"languages\", we have designed a unified representation called the human--mobility interaction graph (HMIG) for each modality related to the HMI scenes, i.e., textual annotations, visual video frames, and behavioral sensor time-series (e.g., from the on-board or smartphone inertial measurement units). The nodes of the HMIG in these modalities correspond to the textual words (tokenized for ease of processing) related to HMI concepts, the detected traffic participant/environment categories, and the vehicle maneuver behavior types determined from the behavioral sensor time-series. To extract the inter- and intra-modality semantic correspondences and interactions in the HMIG, we have designed a novel graph interaction fusion approach with differentiable pooling-based graph attention. The resulting graph embeddings are then processed to identify and retrieve the HMI concepts within the annotations, which can benefit the downstream human-computer interaction and ubiquitous computing applications. We have developed and implemented CG-HMI into a system prototype, and performed extensive studies upon three real-world HMI datasets (two on car driving and the third one on e-scooter riding). We have corroborated the excellent performance (on average 13.11% higher accuracy than the other baselines in terms of precision, recall, and F1 measure) and effectiveness of CG-HMI in recognizing and extracting the important HMI concepts through cross-modality learning. 
Our CG-HMI studies also provide real-world implications (e.g., road safety and driving behaviors) about the interactions between the drivers and other traffic participants.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TAO
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610896
Sudershan Boovaraghavan, Prasoon Patidar, Yuvraj Agarwal
Translating fine-grained activity detection (e.g., phone ring, talking interspersed with silence and walking) into semantically meaningful and richer contextual information (e.g., on a phone call for 20 minutes while exercising) is essential towards enabling a range of healthcare and human-computer interaction applications. Prior work has proposed building ontologies or temporal analysis of activity patterns with limited success in capturing complex real-world context patterns. We present TAO, a hybrid system that leverages OWL-based ontologies and temporal clustering approaches to detect high-level contexts from human activities. TAO can characterize sequential activities that happen one after the other and activities that are interleaved or occur in parallel to detect a richer set of contexts more accurately than prior work. We evaluate TAO on real-world activity datasets (Casas and Extrasensory) and show that our system achieves, on average, 87% and 80% accuracy for context detection, respectively. We deploy and evaluate TAO in a real-world setting with eight participants using our system for three hours each, demonstrating TAO's ability to capture semantically meaningful contexts in the real world. Finally, to showcase the usefulness of contexts, we prototype wellness applications that assess productivity and stress and show that the wellness metrics calculated using contexts provided by TAO are much closer to the ground truth (on average within 1.1%), as compared to the baseline approach (on average within 30%).
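A small piece of the temporal reasoning described above can be illustrated by classifying pairs of timestamped activity intervals as sequential or parallel/interleaved. The activities and times below are invented, and the logic is a plain-Python stand-in rather than TAO's ontology- and clustering-based pipeline.

```python
# Hypothetical fine-grained activities as (name, start_s, end_s) intervals.
activities = [
    ("phone_ring",    0,   10),
    ("talking",       8, 1200),   # overlaps the ring, then continues
    ("walking",     300, 1500),   # runs in parallel with talking
    ("silence",    1500, 1600),   # strictly after walking
]

def relation(a, b):
    """'sequential' if the intervals do not overlap, otherwise 'parallel'."""
    _, a_start, a_end = a
    _, b_start, b_end = b
    if a_end <= b_start or b_end <= a_start:
        return "sequential"
    return "parallel"

for i in range(len(activities)):
    for j in range(i + 1, len(activities)):
        print(activities[i][0], "-", activities[j][0], ":",
              relation(activities[i], activities[j]))

# Higher-level contexts (e.g., "on a phone call while exercising") could then be
# asserted when particular activity types stay in the "parallel" relation long enough.
```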
{"title":"TAO","authors":"Sudershan Boovaraghavan, Prasoon Patidar, Yuvraj Agarwal","doi":"10.1145/3610896","DOIUrl":"https://doi.org/10.1145/3610896","url":null,"abstract":"Translating fine-grained activity detection (e.g., phone ring, talking interspersed with silence and walking) into semantically meaningful and richer contextual information (e.g., on a phone call for 20 minutes while exercising) is essential towards enabling a range of healthcare and human-computer interaction applications. Prior work has proposed building ontologies or temporal analysis of activity patterns with limited success in capturing complex real-world context patterns. We present TAO, a hybrid system that leverages OWL-based ontologies and temporal clustering approaches to detect high-level contexts from human activities. TAO can characterize sequential activities that happen one after the other and activities that are interleaved or occur in parallel to detect a richer set of contexts more accurately than prior work. We evaluate TAO on real-world activity datasets (Casas and Extrasensory) and show that our system achieves, on average, 87% and 80% accuracy for context detection, respectively. We deploy and evaluate TAO in a real-world setting with eight participants using our system for three hours each, demonstrating TAO's ability to capture semantically meaningful contexts in the real world. Finally, to showcase the usefulness of contexts, we prototype wellness applications that assess productivity and stress and show that the wellness metrics calculated using contexts provided by TAO are much closer to the ground truth (on average within 1.1%), as compared to the baseline approach (on average within 30%).","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
E3D
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610897
Abul Al Arabi, Xue Wang, Yang Zhang, Jeeeun Kim
The increase of distributed embedded systems has enabled pervasive sensing, actuation, and information displays across buildings and surrounding environments, yet it also entails a large expenditure of energy and human labor for maintenance. Our daily interactions, from opening a window to closing a drawer to twisting a doorknob, are great potential sources of energy but are often neglected. Existing commercial devices that harvest energy from these ambient sources are unaffordable, and DIY solutions remain inaccessible to non-experts, preventing end-users from fully realizing everyday innovations. We present E3D, an end-to-end fabrication toolkit to customize self-powered smart devices at low cost. We contribute a taxonomy of everyday kinetic activities that are potential sources of energy, a library of parametric mechanisms to harvest energy from manual operations of kinetic objects, and a holistic design system for end-user developers to capture design requirements by demonstration and then customize augmentation devices that harvest energy in ways that fit their unique lifestyles.
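For a sense of the energy available from a manually operated kinetic object, the arithmetic below estimates the harvest from one drawer opening through a parametric mechanism. The force, travel, and efficiency figures are assumptions for this sketch, not values reported for E3D.

```python
# Rough, illustrative energy estimate for one drawer opening.
pull_force_n = 5.0            # assumed average force to open a drawer (N)
travel_m = 0.30               # assumed drawer travel (m)
mechanism_efficiency = 0.25   # assumed end-to-end mechanical + electrical efficiency

mechanical_work_j = pull_force_n * travel_m
harvested_j = mechanical_work_j * mechanism_efficiency
print(f"mechanical work: {mechanical_work_j:.2f} J, "
      f"harvested: {harvested_j:.3f} J ({harvested_j * 1e3:.0f} mJ per open)")
```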
{"title":"E3D","authors":"Abul Al Arabi, Xue Wang, Yang Zhang, Jeeeun Kim","doi":"10.1145/3610897","DOIUrl":"https://doi.org/10.1145/3610897","url":null,"abstract":"The increase of distributed embedded systems has enabled pervasive sensing, actuation, and information displays across buildings and surrounding environments, yet also entreats huge cost expenditure for energy and human labor for maintenance. Our daily interactions, from opening a window to closing a drawer to twisting a doorknob, are great potential sources of energy but are often neglected. Existing commercial devices to harvest energy from these ambient sources are unaffordable, and DIY solutions are left with inaccessibility for non-experts preventing fully imbuing daily innovations in end-users. We present E3D, an end-to-end fabrication toolkit to customize self-powered smart devices at low cost. We contribute to a taxonomy of everyday kinetic activities that are potential sources of energy, a library of parametric mechanisms to harvest energy from manual operations of kinetic objects, and a holistic design system for end-user developers to capture design requirements by demonstrations then customize augmentation devices to harvest energy that meets unique lifestyle.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
"It's Not an Issue of Malice, but of Ignorance" 这不是恶意的问题,而是无知的问题
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610901
Josh Urban Davis, Hongwei Wang, Parmit K. Chilana, Xing-Dong Yang
As video conferencing (VC) has become necessary for many professional, educational, and social tasks, people who are d/Deaf and hard of hearing (DHH) face distinct accessibility barriers. We conducted studies to understand the challenges faced by DHH people during VCs and found that they struggled to present easily or communicate effectively due to accessibility limitations of VC platforms. These limitations include the lack of tools for DHH speakers to discreetly communicate their accommodation needs to the group. Based on these findings, we prototyped a suite of tools, called Erato, that enables DHH speakers to be aware of their performance while speaking and reminds participants of proper etiquette. We evaluated Erato by running a mock classroom case study over VC for three sessions. All participants felt more confident in their speaking ability and paid closer attention to making the classroom more inclusive while using our tool. We share implications of these results for the design of VC interfaces and human-in-the-loop assistive systems that can support users who are DHH in communicating effectively and advocating for their accessibility needs.
{"title":"\"It's Not an Issue of Malice, but of Ignorance\"","authors":"Josh Urban Davis, Hongwei Wang, Parmit K. Chilana, Xing-Dong Yang","doi":"10.1145/3610901","DOIUrl":"https://doi.org/10.1145/3610901","url":null,"abstract":"As video conferencing (VC) has become necessary for many professional, educational, and social tasks, people who are d/Deaf and hard of hearing (DHH) face distinct accessibility barriers. We conducted studies to understand the challenges faced by DHH people during VCs and found that they struggled to easily present or communicate effectively due to accessibility limitations of VC platforms. These limitations include the lack of tools for DHH speakers to discreetly communicate their accommodation needs to the group. Based on these findings, we prototyped a suite of tools, called Erato that enables DHH speakers to be aware of their performance while speaking and remind participants of proper etiquette. We evaluated Erato by running a mock classroom case study over VC for three sessions. All participants felt more confident in their speaking ability and paid closer attention to making the classroom more inclusive while using our tool. We share implications of these results for the design of VC interfaces and human-the-the-loop assistive systems that can support users who are DHH to communicate effectively and advocate for their accessibility needs.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LapTouch
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610878
Tzu-Wei Mi, Jia-Jun Wang, Liwei Chan
Use of virtual reality while seated is common, but studies on seated interaction beyond the use of controllers or hand gestures have been sparse. This work presents LapTouch, which makes use of the lap as a touch interface and includes two user studies to inform the design of direct and indirect touch interaction using the lap with visual feedback that guides the user's touch, as well as eye-free interaction in which users are not provided with such visual feedback. The first study suggests that direct interaction can provide effective layouts with 95% accuracy with up to a 4×4 layout and a shorter completion time, while indirect interaction can provide effective layouts with up to a 4×5 layout but a longer completion time. Because the user experience findings revealed that 4-row and 5-column layouts are not preferred, a maximum of a 3×4 layout is recommended for both direct and indirect interaction. According to the second study, augmenting the eye-free interaction with a support vector machine (SVM) allows for a 2×2 layout with a generalized model and 2×2, 2×3, and 3×2 layouts with personalized models.
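The SVM-based eye-free step can be pictured as a classifier that maps raw lap-touch coordinates to cells of a small layout. The sketch below trains such a classifier on synthetic touches around assumed cell centers; the data and parameters are stand-ins, not the LapTouch study's models.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Assumed cell centers of a 2x2 layout in normalized lap coordinates.
cell_centers = {0: (0.25, 0.25), 1: (0.75, 0.25), 2: (0.25, 0.75), 3: (0.75, 0.75)}

# Simulate noisy eye-free touches around each cell center.
X, y = [], []
for label, (cx, cy) in cell_centers.items():
    pts = rng.normal(loc=(cx, cy), scale=0.12, size=(40, 2))
    X.append(pts)
    y.extend([label] * 40)
X, y = np.vstack(X), np.array(y)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("predicted cell for touch (0.7, 0.2):", clf.predict([[0.7, 0.2]])[0])
```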
{"title":"LapTouch","authors":"Tzu-Wei Mi, Jia-Jun Wang, Liwei Chan","doi":"10.1145/3610878","DOIUrl":"https://doi.org/10.1145/3610878","url":null,"abstract":"Use of virtual reality while seated is common, but studies on seated interaction beyond the use of controllers or hand gestures have been sparse. This work present LapTouch, which makes use of the lap as a touch interface and includes two user studies to inform the design of direct and indirect touch interaction using the lap with visual feedback that guides the user touch, as well as eye-free interaction in which users are not provided with such visual feedback. The first study suggests that direct interaction can provide effective layouts with 95% accuracy with up to a 4×4 layout and a shorter completion time, while indirect interaction can provide effective layouts with up to a 4×5 layout but a longer completion time. Considering user experience, which revealed that 4-row and 5-column layouts are not preferred, it is recommended to use both direct and indirect interaction with a maximum of a 3×4 layout. According to the second study, increasing the eye-free interaction with support vector machine (SVM) allows for a 2×2 layout with a generalized model and 2×2, 2×3 and 3×2 layouts with personalized models.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MR Object Identification and Interaction
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610879
Jannis Strecker, Khakim Akhunov, Federico Carbone, Kimberly García, Kenan Bektaş, Andres Gomez, Simon Mayer, Kasim Sinan Yildirim
The increasing number of objects in ubiquitous computing environments creates a need for effective object detection and identification mechanisms that permit users to intuitively initiate interactions with these objects. While multiple approaches to such object detection -- including through visual object detection, fiducial markers, relative localization, or absolute spatial referencing -- are available, each of these suffers from drawbacks that limit their applicability. In this paper, we propose ODIF, an architecture that permits the fusion of object situation information from such heterogeneous sources and that remains vertically and horizontally modular to allow extending and upgrading systems that are constructed accordingly. We furthermore present BLEARVIS, a prototype system that builds on the proposed architecture and integrates computer-vision (CV) based object detection with radio-frequency (RF) angle of arrival (AoA) estimation to identify BLE-tagged objects. In our system, the front camera of a Mixed Reality (MR) head-mounted display (HMD) provides a live image stream to a vision-based object detection module, while an antenna array that is mounted on the HMD collects AoA information from ambient devices. In this way, BLEARVIS is able to differentiate between visually identical objects in the same environment and can provide an MR overlay of information (data and controls) that relates to them. We include experimental evaluations of both, the CV-based object detection and the RF-based AoA estimation, and discuss the applicability of the combined RF and CV pipelines in different ubiquitous computing scenarios. This research can form a starting point to spawn the integration of diverse object detection, identification, and interaction approaches that function across the electromagnetic spectrum, and beyond.
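A simplified version of the CV/RF matching step is sketched below: each detected bounding box is converted to an azimuth using an assumed camera field of view and then paired with the BLE tag whose estimated angle of arrival is closest. The camera parameters, detections, and AoA values are illustrative assumptions, not BLEARVIS internals.

```python
# Match visually detected objects to BLE-tagged objects by angular proximity.
IMAGE_WIDTH_PX = 1280
HORIZONTAL_FOV_DEG = 90.0  # assumed HMD front-camera horizontal field of view

def box_to_azimuth(x_center_px):
    """Azimuth in degrees: 0 = straight ahead, negative = left of center."""
    return (x_center_px / IMAGE_WIDTH_PX - 0.5) * HORIZONTAL_FOV_DEG

detections = {"mug_a": 300, "mug_b": 980}       # object label -> bounding-box x-center (px)
ble_aoa = {"tag_42": -21.0, "tag_17": 26.5}     # BLE tag -> estimated AoA (deg)

for obj, x in detections.items():
    az = box_to_azimuth(x)
    tag, err = min(((t, abs(az - a)) for t, a in ble_aoa.items()), key=lambda p: p[1])
    print(f"{obj}: azimuth {az:+.1f} deg -> {tag} (difference {err:.1f} deg)")
```

A greedy nearest-angle assignment like this is enough to tell apart two visually identical objects as long as their tags' AoA estimates are separated by more than the combined camera and RF angular error.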
{"title":"MR Object Identification and Interaction","authors":"Jannis Strecker, Khakim Akhunov, Federico Carbone, Kimberly García, Kenan Bektaş, Andres Gomez, Simon Mayer, Kasim Sinan Yildirim","doi":"10.1145/3610879","DOIUrl":"https://doi.org/10.1145/3610879","url":null,"abstract":"The increasing number of objects in ubiquitous computing environments creates a need for effective object detection and identification mechanisms that permit users to intuitively initiate interactions with these objects. While multiple approaches to such object detection -- including through visual object detection, fiducial markers, relative localization, or absolute spatial referencing -- are available, each of these suffers from drawbacks that limit their applicability. In this paper, we propose ODIF, an architecture that permits the fusion of object situation information from such heterogeneous sources and that remains vertically and horizontally modular to allow extending and upgrading systems that are constructed accordingly. We furthermore present BLEARVIS, a prototype system that builds on the proposed architecture and integrates computer-vision (CV) based object detection with radio-frequency (RF) angle of arrival (AoA) estimation to identify BLE-tagged objects. In our system, the front camera of a Mixed Reality (MR) head-mounted display (HMD) provides a live image stream to a vision-based object detection module, while an antenna array that is mounted on the HMD collects AoA information from ambient devices. In this way, BLEARVIS is able to differentiate between visually identical objects in the same environment and can provide an MR overlay of information (data and controls) that relates to them. We include experimental evaluations of both, the CV-based object detection and the RF-based AoA estimation, and discuss the applicability of the combined RF and CV pipelines in different ubiquitous computing scenarios. This research can form a starting point to spawn the integration of diverse object detection, identification, and interaction approaches that function across the electromagnetic spectrum, and beyond.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0