
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

A Digital Companion Architecture for Ambient Intelligence
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659610
Kimberly García, Jonathan Vontobel, Simon Mayer
Ambient Intelligence (AmI) focuses on creating environments capable of proactively and transparently adapting to users and their activities. Traditionally, AmI focused on the availability of computational devices, the pervasiveness of networked environments, and means to interact with users. In this paper, we propose a renewed AmI architecture that takes into account current technological advancements while focusing on proactive adaptation for assisting and protecting users. This architecture consists of four phases: Perceive, Interpret, Decide, and Interact. The AmI systems we propose, called Digital Companions (DC), can be embodied in a variety of ways (e.g., through physical robots or virtual agents) and are structured according to these phases to assist and protect their users. We further categorize DCs into Expert DCs and Personal DCs, and show that this induces a favorable separation of concerns in AmI systems, where user concerns (including personal user data and preferences) are handled by Personal DCs and environment concerns (including interfacing with environmental artifacts) are assigned to Expert DCs; this separation has favorable privacy implications as well. Herein, we introduce this architecture and validate it through a prototype in an industrial scenario where robots and humans collaborate to perform a task.
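As an illustration of the four-phase structure and the Personal/Expert separation described above, here is a minimal Python sketch; the class and method names, and the `artifact` object with its `read_state()` call, are assumptions for illustration rather than the authors' implementation.

```python
from abc import ABC, abstractmethod

class DigitalCompanion(ABC):
    """Runs one Perceive -> Interpret -> Decide -> Interact cycle."""
    def step(self):
        percepts = self.perceive()
        situation = self.interpret(percepts)
        action = self.decide(situation)
        self.interact(action)

    @abstractmethod
    def perceive(self): ...
    @abstractmethod
    def interpret(self, percepts): ...
    @abstractmethod
    def decide(self, situation): ...
    @abstractmethod
    def interact(self, action): ...

class ExpertDC(DigitalCompanion):
    """Environment concerns: wraps one environmental artifact (hypothetical API)."""
    def __init__(self, artifact):
        self.artifact = artifact
    def perceive(self):
        return [self.artifact.read_state()]          # e.g. a robot's state dict
    def interpret(self, percepts):
        state = percepts[0]
        return {"artifact_state": state, "hazard": state.get("moving", False)}
    def decide(self, situation):
        return "publish_state"
    def interact(self, action):
        pass  # Expert DCs talk to artifacts and Personal DCs, not directly to the user

class PersonalDC(DigitalCompanion):
    """User concerns: preferences and personal data never leave this companion."""
    def __init__(self, preferences, experts):
        self.preferences = preferences               # private to the user side
        self.experts = experts                       # Expert DCs it may consult
    def perceive(self):
        # ask each Expert DC for its current view of the environment
        return [e.interpret(e.perceive()) for e in self.experts]
    def interpret(self, percepts):
        return {"hazards": [p for p in percepts if p.get("hazard")]}
    def decide(self, situation):
        protect = self.preferences.get("protective_mode", True)
        return "warn_user" if situation["hazards"] and protect else "assist_user"
    def interact(self, action):
        print("PersonalDC ->", action)
```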
Citations: 0
SmallMap: Low-cost Community Road Map Sensing with Uncertain Delivery Behavior
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659596
Zhiqing Hong, Haotian Wang, Yi Ding, Guang Wang, Tian He, Desheng Zhang
Accurate road networks play a crucial role in modern mobile applications such as navigation and last-mile delivery. Most existing studies primarily focus on generating road networks in open areas like main roads and avenues, but little attention has been given to the generation of community road networks in closed areas such as residential areas, which is becoming increasingly significant due to the growing demand for door-to-door services such as food delivery. This lack of research is primarily attributed to challenges related to sensing data availability and quality. In this paper, we design a novel framework called SmallMap that leverages ubiquitous multi-modal sensing data from last-mile delivery to automatically generate community road networks at low cost. Our SmallMap consists of two key modules: (1) a Trajectory of Interest Detection module enhanced by exploiting multi-modal sensing data collected from the delivery process; and (2) a Dual Spatio-temporal Generative Adversarial Network module that incorporates Trajectory of Interest by unsupervised road network adaptation to generate road networks automatically. To evaluate the effectiveness of SmallMap, we utilize a two-month dataset from one of the largest logistics companies in China. The extensive evaluation results demonstrate that our framework significantly outperforms state-of-the-art baselines, achieving a precision of 90.5%, a recall of 87.5%, and an F1-score of 88.9%. Moreover, we conduct three case studies in Beijing City for courier workload estimation, Estimated Time of Arrival (ETA) in last-mile delivery, and fine-grained order assignment.
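A minimal sketch of the two-stage idea described above, under assumed inputs: a simple trajectory-of-interest filter followed by rasterization and a placeholder road-mask generator standing in for the paper's dual spatio-temporal GAN. All thresholds and helper names are invented.

```python
import numpy as np

def trajectory_of_interest(traj, speed_max=3.0, has_delivery_event=True):
    """Stage 1: keep GPS traces that look like on-foot, in-community delivery.
    `traj` is an (N, 3) array of (lat, lon, speed_m_s); thresholds are invented."""
    walking = np.median(traj[:, 2]) < speed_max
    return walking and has_delivery_event

def rasterize(trajectories, bbox, size=256):
    """Accumulate selected trajectories into a (size, size) occupancy grid."""
    grid = np.zeros((size, size), dtype=np.float32)
    (lat0, lon0), (lat1, lon1) = bbox
    for traj in trajectories:
        rows = ((traj[:, 0] - lat0) / (lat1 - lat0) * (size - 1)).astype(int)
        cols = ((traj[:, 1] - lon0) / (lon1 - lon0) * (size - 1)).astype(int)
        ok = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
        grid[rows[ok], cols[ok]] += 1.0
    return np.clip(grid, 0, 1)

def generate_road_mask(grid, generator=None):
    """Stage 2 placeholder: a trained generator (the paper's GAN) would map the
    noisy occupancy grid to a clean road mask. A simple threshold keeps the
    sketch runnable end to end."""
    return generator(grid) if generator else (grid > 0.2).astype(np.float32)
```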
Citations: 0
Digital Forms for All: A Holistic Multimodal Large Language Model Agent for Health Data Entry
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659624
Andrea Cuadra, Justine Breuch, Samantha Estrada, David Ihim, Isabelle Hung, Derek Askaryar, Marwan Hassanien, Kristen L. Fessele, James A. Landay
Digital forms help us access services and opportunities, but they are not equally accessible to everyone, such as older adults or those with sensory impairments. Large language models (LLMs) and multimodal interfaces offer a unique opportunity to increase form accessibility. Informed by prior literature and needfinding, we built a holistic multimodal LLM agent for health data entry. We describe the process of designing and building our system, and the results of a study with older adults (N =10). All participants, regardless of age or disability status, were able to complete a standard 47-question form independently using our system---one blind participant said it was "a prayer answered." Our video analysis revealed how different modalities provided alternative interaction paths in complementary ways (e.g., the buttons helped resolve transcription errors and speech helped provide more options when the pre-canned answer choices were insufficient). We highlight key design guidelines, such as designing systems that dynamically adapt to individual needs.
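To make the complementary-modality idea concrete, here is a hypothetical sketch of a speech-first form-filling loop with buttons as the fallback path; `transcribe`, `llm`, `show_buttons`, and `speak` are stand-in callables, not the authors' system.

```python
def fill_form(questions, transcribe, llm, show_buttons, speak):
    """Walk a list of form questions, preferring speech and falling back to buttons."""
    answers = {}
    for q in questions:
        speak(q["prompt"])                         # read the question aloud
        heard = transcribe()                       # speech modality
        choice = llm(
            f"Question: {q['prompt']}\nOptions: {q['options']}\n"
            f"User said: {heard}\nReturn the single best matching option, "
            f"or NONE if nothing matches."
        ).strip()
        if choice == "NONE" or choice not in q["options"]:
            # complementary path: buttons resolve ASR errors or missing options
            choice = show_buttons(q["options"] + ["Other (free text)"])
        answers[q["id"]] = choice
    return answers
```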
Citations: 0
EarSleep: In-ear Acoustic-based Physical and Physiological Activity Recognition for Sleep Stage Detection
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659595
Feiyu Han, Panlong Yang, Yuanhao Feng, Weiwei Jiang, Youwei Zhang, Xiang-Yang Li
Since sleep plays an important role in people's daily lives, sleep monitoring has attracted the attention of many researchers. Physical and physiological activities occurring in sleep exhibit unique patterns in different sleep stages. This indicates that recognizing a wide range of sleep activities (events) can provide more fine-grained information for sleep stage detection. However, most of the prior works are designed to capture limited sleep events and coarse-grained information, which cannot meet the needs of fine-grained sleep monitoring. In our work, we leverage ubiquitous in-ear microphones on sleep earbuds to design a sleep monitoring system, named EarSleep, which interprets in-ear body sounds induced by various representative sleep events into sleep stages. Based on differences among physical occurrence mechanisms of sleep activities, EarSleep extracts unique acoustic response patterns from in-ear body sounds to recognize a wide range of sleep events, including body movements, sound activities, heartbeat, and respiration. With the help of sleep medicine knowledge, interpretable acoustic features are derived from these representative sleep activities. EarSleep leverages a carefully designed deep learning model to establish the complex correlation between acoustic features and sleep stages. We conduct extensive experiments with 48 nights of data from 18 participants over three months to validate the performance of our system. The experimental results show that our system can accurately detect a rich set of sleep activities. Furthermore, in terms of sleep stage detection, EarSleep outperforms state-of-the-art solutions by 7.12% and 9.32% in average precision and average recall, respectively.
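A minimal sketch of how in-ear audio might be summarized per sleep epoch before classification, assuming a generic band-limiting and spectral-feature pipeline; the sampling rate, bands, and features are illustrative guesses, not EarSleep's actual front end.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

def epoch_features(audio, fs=4000, epoch_s=30, band=(20, 1000)):
    """Slice in-ear audio into 30 s epochs, band-limit it, and return per-epoch
    spectral summaries that a downstream sleep-stage classifier could consume."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    feats = []
    samples_per_epoch = fs * epoch_s
    for start in range(0, len(audio) - samples_per_epoch + 1, samples_per_epoch):
        seg = filtfilt(b, a, audio[start:start + samples_per_epoch])
        f, t, Z = stft(seg, fs=fs, nperseg=1024)
        power = np.abs(Z) ** 2
        feats.append([
            power.mean(),                            # overall body-sound energy
            power[(f >= 20) & (f < 100)].mean(),     # heartbeat / respiration band
            power[(f >= 100) & (f < 1000)].mean(),   # movement / sound-activity band
        ])
    return np.asarray(feats)   # shape: (n_epochs, 3)
```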
Citations: 0
PRECYSE: Predicting Cybersickness using Transformer for Multimodal Time-Series Sensor Data
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659594
Dayoung Jeong, Kyungsik Han
Cybersickness, a factor that hinders user immersion in VR, has been the subject of ongoing attempts at AI-based prediction. Previous studies have used CNN and LSTM for prediction models and used attention mechanisms and XAI for data analysis, yet none explored a transformer that can better reflect the spatial and temporal characteristics of the data, beneficial for enhancing prediction and feature importance analysis. In this paper, we propose cybersickness prediction models using multimodal time-series sensor data (i.e., eye movement, head movement, and physiological signals) based on a transformer algorithm, considering sensor data pre-processing and multimodal data fusion methods. We constructed the MSCVR dataset consisting of normalized sensor data, spectrogram-formatted sensor data, and cybersickness levels collected from 45 participants through a user study. We proposed two methods for embedding multimodal time-series sensor data into the transformer: modality-specific spatial and temporal transformer encoders for normalized sensor data (MS-STTN) and modality-specific spatial-temporal transformer encoder for spectrogram (MS-STTS). MS-STTN yielded the highest performance in the ablation study and the comparison of the existing models. Furthermore, by analyzing the importance of data features, we determined their relevance to cybersickness over time, especially the salience of eye movement features. Our results and insights derived from multimodal time-series sensor data and the transformer model provide a comprehensive understanding of cybersickness and its association with sensor data. Our MSCVR dataset and code are publicly available: https://github.com/dayoung-jeong/PRECYSE.git.
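A minimal sketch of a modality-specific temporal transformer encoder of the kind described above (one encoder per sensor stream, pooled embeddings concatenated for classification); the feature dimensions and model sizes are assumptions, and this is not the released PRECYSE code.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Encodes one normalized sensor stream (batch, time, features) over time."""
    def __init__(self, in_dim, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):
        h = self.encoder(self.proj(x))
        return h.mean(dim=1)          # temporal average pooling -> (batch, d_model)

class MultimodalCybersicknessModel(nn.Module):
    """One encoder per modality; concatenated embeddings feed a classifier head."""
    def __init__(self, dims=None, num_classes=3, d_model=64):
        super().__init__()
        dims = dims or {"eye": 4, "head": 6, "phys": 3}   # feature dims are assumptions
        self.encoders = nn.ModuleDict(
            {m: ModalityEncoder(d, d_model) for m, d in dims.items()})
        self.head = nn.Linear(d_model * len(dims), num_classes)

    def forward(self, batch):         # batch: dict of modality -> (B, T, dim) tensors
        z = torch.cat([self.encoders[m](x) for m, x in batch.items()], dim=-1)
        return self.head(z)
```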
Citations: 0
The Personality Dimensions GPT-3 Expresses During Human-Chatbot Interactions
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659626
N. Kovačević, Christian Holz, M. Gross, Rafael Wampfler
Large language models such as GPT-3 and ChatGPT can mimic human-to-human conversation with unprecedented fidelity, which enables many applications such as conversational agents for education and non-player characters in video games. In this work, we investigate the underlying personality structure that a GPT-3-based chatbot expresses during conversations with a human. We conducted a user study to collect 147 chatbot personality descriptors from 86 participants while they interacted with the GPT-3-based chatbot for three weeks. Then, 425 new participants rated the 147 personality descriptors in an online survey. We conducted an exploratory factor analysis on the collected descriptors and show that, though overlapping, human personality models do not fully transfer to the chatbot's personality as perceived by humans. We also show that the perceived personality is significantly different from that of virtual personal assistants, where users focus more on serviceability and functionality. We discuss the implications of ever-evolving large language models and the changes they bring about in users' perception of agent personalities.
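For readers unfamiliar with the analysis step, the following is a generic exploratory-factor-analysis sketch over a participant-by-descriptor rating matrix, using scikit-learn's FactorAnalysis as a stand-in for the authors' statistical tooling.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def top_descriptors_per_factor(ratings, descriptor_names, n_factors=5, top_k=5):
    """ratings: (n_raters, n_descriptors) array of Likert scores.
    Returns, per factor, the descriptors with the largest absolute loadings."""
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    fa.fit(ratings)
    loadings = fa.components_                      # (n_factors, n_descriptors)
    factors = []
    for k in range(n_factors):
        order = np.argsort(-np.abs(loadings[k]))[:top_k]
        factors.append([(descriptor_names[i], float(loadings[k, i])) for i in order])
    return factors
```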
Citations: 0
CrossHAR: Generalizing Cross-dataset Human Activity Recognition via Hierarchical Self-Supervised Pretraining
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659597
Zhiqing Hong, Zelong Li, Shuxin Zhong, Wenjun Lyu, Haotian Wang, Yi Ding, Tian He, Desheng Zhang
The increasing availability of low-cost wearable devices and smartphones has significantly advanced the field of sensor-based human activity recognition (HAR), attracting considerable research interest. One of the major challenges in HAR is the domain shift problem in cross-dataset activity recognition, which occurs due to variations in users, device types, and sensor placements between the source dataset and the target dataset. Although domain adaptation methods have shown promise, they typically require access to the target dataset during the training process, which might not be practical in some scenarios. To address these issues, we introduce CrossHAR, a new HAR model designed to improve model performance on unseen target datasets. CrossHAR involves three main steps: (i) CrossHAR explores the sensor data generation principle to diversify the data distribution and augment the raw sensor data. (ii) CrossHAR then employs a hierarchical self-supervised pretraining approach with the augmented data to develop a generalizable representation. (iii) Finally, CrossHAR fine-tunes the pretrained model with a small set of labeled data in the source dataset, enhancing its performance in cross-dataset HAR. Our extensive experiments across multiple real-world HAR datasets demonstrate that CrossHAR outperforms current state-of-the-art methods by 10.83% in accuracy, demonstrating its effectiveness in generalizing to unseen target datasets.
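A minimal sketch of the three steps named above, with placeholder bodies: the augmentation transforms, the hypothetical `encoder.update()`/`encoder.embed()` calls, and the classifier head are all illustrative assumptions rather than CrossHAR's implementation.

```python
import numpy as np

def augment(window, rng=None):
    """Step 1: distribution-diversifying transforms on a (T, channels) IMU window."""
    if rng is None:
        rng = np.random.default_rng(0)
    jittered = window + rng.normal(0, 0.05, window.shape)              # sensor noise
    scaled = jittered * rng.uniform(0.8, 1.2, (1, window.shape[1]))    # per-axis gain
    return scaled

def pretrain(encoder, unlabeled_windows):
    """Step 2: hierarchical self-supervised pretraining on augmented, unlabeled data,
    e.g. reconstructing masked segments so the encoder learns user/device-invariant
    structure. The update rule is left abstract here."""
    for w in unlabeled_windows:
        encoder.update(augment(w))        # hypothetical self-supervised update
    return encoder

def fine_tune(encoder, labeled_windows, labels, classifier):
    """Step 3: fit a lightweight head on the small labeled source set."""
    feats = [encoder.embed(w) for w in labeled_windows]   # hypothetical embed()
    classifier.fit(feats, labels)
    return classifier
```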
Citations: 0
Hey, What's Going On?
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659618
Luca-Maxim Meinhardt, Maximilian Rück, Julian Zähnle, Maryam Elhaidary, Mark Colley, Michael Rietzler, Enrico Rukzio
Highly Automated Vehicles offer a new level of independence to people who are blind or visually impaired. However, because of their limited vision, it can be challenging for them to gain knowledge of the surrounding traffic. To address this issue, we conducted an interactive, participatory workshop (N=4) to develop an auditory interface and OnBoard, a tactile interface with expandable elements, to convey traffic information to visually impaired people. In a user study with N=14 participants, we explored usability, situation awareness, predictability, and engagement with OnBoard and the auditory interface. Our qualitative and quantitative results show that tactile cues, similar to auditory cues, are able to convey traffic information to users. In particular, there is a trend that participants with reduced visual acuity showed increased engagement with both interfaces. However, the diversity of visual impairments and individual information needs underscores the importance of a highly tailored multimodal approach as the ideal solution.
Citations: 0
ShuffleFL: Addressing Heterogeneity in Multi-Device Federated Learning
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659621
Ran Zhu, Mingkun Yang, Qing Wang
Federated Learning (FL) has emerged as a privacy-preserving paradigm for collaborative deep learning model training across distributed data silos. Despite its importance, FL faces challenges such as high latency and less effective global models. In this paper, we propose ShuffleFL, an innovative framework stemming from hierarchical FL, which introduces a user layer between the FL devices and the FL server. ShuffleFL naturally groups devices based on their affiliations, e.g., belonging to the same user, to ease the strict privacy restriction that data at the FL devices cannot be shared with others, thereby enabling the exchange of local samples among them. The user layer assumes a multi-faceted role, not just aggregating local updates but also coordinating data shuffling within affiliated devices. We formulate this data shuffling as an optimization problem, detailing our objectives to align local data closely with device computing capabilities and to ensure a more balanced data distribution across the intra-user devices. Through extensive experiments using realistic device profiles and five non-IID datasets, we demonstrate that ShuffleFL can improve inference accuracy by 2.81% to 7.85% and speed up the convergence by 4.11x to 36.56x when reaching the target accuracy.
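A minimal sketch of the intra-user shuffling idea, assuming the objective is simply to make each device's shard proportional to its compute capability; the paper formulates this as an optimization problem, so this greedy reassignment is only an illustration.

```python
import numpy as np

def shuffle_within_user(device_samples, capabilities, rng=None):
    """device_samples: list of per-device sample lists (same user, so sharing is allowed).
    capabilities: relative compute capability of each device (e.g. measured throughput).
    Returns new per-device shards whose sizes are proportional to capability."""
    if rng is None:
        rng = np.random.default_rng(0)
    pooled = [s for samples in device_samples for s in samples]
    rng.shuffle(pooled)                              # mix samples across the user's devices
    weights = np.asarray(capabilities, dtype=float)
    targets = (weights / weights.sum() * len(pooled)).astype(int)
    targets[-1] = len(pooled) - targets[:-1].sum()   # give the remainder to the last device
    shards, start = [], 0
    for t in targets:
        shards.append(pooled[start:start + t])
        start += t
    return shards
```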
Citations: 0
Lipwatch: Enabling Silent Speech Recognition on Smartwatches using Acoustic Sensing
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659614
Qian Zhang, Yubin Lan, Kaiyi Guo, Dong Wang
Silent Speech Interfaces (SSI) on mobile devices offer a privacy-friendly alternative to conventional voice input methods. Previous research has primarily focused on smartphones. In this paper, we introduce Lipwatch, a novel system that utilizes acoustic sensing techniques to enable SSI on smartwatches. Lipwatch leverages the inaudible waves emitted by the watch's speaker to capture lip movements and then analyzes the echo to enable SSI. In contrast to acoustic sensing-based SSI on smartphones, our development of Lipwatch takes into full consideration the specific scenarios and requirements associated with smartwatches. Firstly, we develop a wake-up-free mechanism, allowing users to interact without the need for a wake-up phrase or button presses. The mechanism utilizes the inertial sensors on the smartwatch to detect gestures, in combination with acoustic signals that detect lip movements, to determine whether SSI should be activated. Secondly, we design a flexible silent speech recognition mechanism that explores limited vocabulary recognition to comprehend a broader range of user commands, even those not present in the training dataset, relieving users from strict adherence to predefined commands. We evaluate Lipwatch on 15 individuals using a set of the 80 most common interaction commands on smartwatches. The system achieves a Word Error Rate (WER) of 13.7% in a user-independent test. Even when users utter commands containing words absent from the training set, Lipwatch still demonstrates a remarkable 88.7% top-3 accuracy. We implement a real-time version of Lipwatch on a commercial smartwatch. The user study shows that Lipwatch can be a practical and promising option to enable SSI on smartwatches.
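A minimal sketch of the wake-up-free gate described above, combining an IMU gesture test with an acoustic lip-motion test; the thresholds and helper functions are invented for illustration and are not Lipwatch's detectors.

```python
import numpy as np

def raise_to_mouth(accel, gyro, pitch_thresh=0.8, still_thresh=0.3):
    """accel/gyro: (T, 3) windows. Crude gesture test: wrist rotates up, then holds."""
    rotated_up = np.ptp(gyro[:, 0]) > pitch_thresh                 # rotation around wrist axis
    held_still = np.linalg.norm(accel[-20:].std(axis=0)) < still_thresh
    return rotated_up and held_still

def lips_moving(echo_frames, energy_thresh=1e-3):
    """echo_frames: (frames, bins) magnitudes of the reflected inaudible chirps.
    Frame-to-frame change in the echo profile is a proxy for lip articulation."""
    diffs = np.abs(np.diff(echo_frames, axis=0)).mean()
    return diffs > energy_thresh

def should_activate_ssi(accel, gyro, echo_frames):
    """Activate silent speech recognition only when both modalities agree."""
    return raise_to_mouth(accel, gyro) and lips_moving(echo_frames)
```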
Citations: 1