
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

Can Large Language Models Be Good Companions?
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659600
Zhenyu Xu, Hailin Xu, Zhouyang Lu, Yingying Zhao, Rui Zhu, Yujiang Wang, Mingzhi Dong, Yuhu Chang, Qin Lv, Robert P. Dick, Fan Yang, Tun Lu, Ning Gu, L. Shang
Developing chatbots as personal companions has long been a goal of artificial intelligence researchers. Recent advances in Large Language Models (LLMs) have delivered a practical solution for endowing chatbots with anthropomorphic language capabilities. However, it takes more than LLMs to enable chatbots that can act as companions. Humans use their understanding of individual personalities to drive conversations. Chatbots also require this capability to enable human-like companionship. They should act based on personalized, real-time, and time-evolving knowledge of their users. We define such essential knowledge as the common ground between chatbots and their users, and we propose to build a common-ground-aware dialogue system from an LLM-based module, named OS-1, to enable chatbot companionship. Hosted by eyewear, OS-1 can sense the visual and audio signals the user receives and extract real-time contextual semantics. Those semantics are categorized and recorded to formulate historical contexts from which the user's profile is distilled and evolves over time, i.e., OS-1 gradually learns about its user. OS-1 combines knowledge from real-time semantics, historical contexts, and user-specific profiles to produce a common-ground-aware prompt that is fed to the LLM module. The LLM's output is converted to audio and spoken to the wearer when appropriate. We conduct laboratory and in-field studies to assess OS-1's ability to build common ground between the chatbot and its user. The technical feasibility and capabilities of the system are also evaluated. Our results show that by utilizing personal context, OS-1 progressively develops a better understanding of its users. This enhances user satisfaction and potentially leads to various personal service scenarios, such as emotional support and assistance.
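As a rough illustration of the common-ground-aware prompting idea described in the abstract, the following minimal Python sketch composes a prompt from real-time semantics, recent history, and a user profile. All names (UserProfile, build_prompt) and the prompt wording are hypothetical; the paper does not specify its prompt format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    """Slowly evolving, user-specific knowledge distilled from history (hypothetical)."""
    interests: List[str] = field(default_factory=list)
    habits: List[str] = field(default_factory=list)

def build_prompt(realtime_semantics: List[str],
                 historical_context: List[str],
                 profile: UserProfile,
                 user_utterance: str) -> str:
    """Combine the three knowledge sources into one prompt for the LLM module."""
    sections = [
        "You are a companion chatbot. Ground your reply in the shared context below.",
        "Current scene: " + "; ".join(realtime_semantics),
        "Recent history: " + "; ".join(historical_context[-5:]),  # keep the last few events
        "User profile: interests=" + ", ".join(profile.interests)
        + " | habits=" + ", ".join(profile.habits),
        "User says: " + user_utterance,
    ]
    return "\n".join(sections)

if __name__ == "__main__":
    profile = UserProfile(interests=["jazz", "hiking"], habits=["morning runs"])
    prompt = build_prompt(
        realtime_semantics=["outdoor park", "birds chirping"],
        historical_context=["visited the library yesterday", "skipped lunch today"],
        profile=profile,
        user_utterance="It's such a nice afternoon.",
    )
    print(prompt)
```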
Citations: 0
Push the Limit of Highly Accurate Ranging on Commercial UWB Devices
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659602
Junqi Ma, Fusang Zhang, Beihong Jin, C. Su, Siheng Li, Zhi Wang, Jiazhi Ni
Ranging plays a crucial role in many wireless sensing applications. Among the wireless techniques employed for ranging, Ultra-Wideband (UWB) has received much attention due to its excellent performance and widespread integration into consumer-level electronics. However, the ranging accuracy of the current UWB systems is limited to the centimeter level due to bandwidth limitation, hindering their use for applications that require a very high resolution. This paper proposes a novel system that achieves sub-millimeter-level ranging accuracy on commercial UWB devices for the first time. Our approach leverages the fine-grained phase information of commercial UWB devices. To eliminate the phase drift, we design a fine-grained phase recovery method by utilizing the bi-directional messages in UWB two-way ranging. We further present a dual-frequency switching method to resolve phase ambiguity. Building upon this, we design and implement the ranging system on commercial UWB modules. Extensive experiments demonstrate that our system achieves a median ranging error of just 0.77 mm, reducing the error by 96.54% compared to the state-of-the-art method. We also present three real-life applications to showcase the fine-grained sensing capabilities of our system, including i) smart speaker control, ii) free-style user handwriting, and iii) 3D tracking for virtual-reality (VR) controllers.
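The abstract does not give implementation details, but the general idea of phase-based ranging with dual-frequency ambiguity resolution can be sketched as follows. This is a toy, noise-free simulation, not the authors' actual method; the chosen frequencies and all function names are illustrative, and it assumes the true range is shorter than the synthetic wavelength defined by the frequency difference.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad: float, freq_hz: float, n_cycles: int) -> float:
    """Distance implied by a carrier phase once the integer cycle count is known."""
    wavelength = C / freq_hz
    return (n_cycles + phase_rad / (2 * np.pi)) * wavelength

def resolve_ambiguity(phase1: float, phase2: float, f1: float, f2: float) -> float:
    """Toy dual-frequency resolution (assumes f1 > f2): the phase difference behaves
    like a measurement at the synthetic frequency f1 - f2, whose long wavelength is
    unambiguous over short ranges and fixes the integer cycle count at f1."""
    synthetic_wavelength = C / (f1 - f2)
    coarse = ((phase1 - phase2) % (2 * np.pi)) / (2 * np.pi) * synthetic_wavelength
    wavelength1 = C / f1
    n1 = round(coarse / wavelength1 - phase1 / (2 * np.pi))
    return phase_to_distance(phase1, f1, n1)

if __name__ == "__main__":
    # Simulate a 1.2345 m true range observed at two nearby carrier frequencies.
    true_d = 1.2345
    f1, f2 = 6.6e9, 6.5e9
    p1 = (2 * np.pi * true_d / (C / f1)) % (2 * np.pi)
    p2 = (2 * np.pi * true_d / (C / f2)) % (2 * np.pi)
    print(f"estimated distance: {resolve_ambiguity(p1, p2, f1, f2):.4f} m")
```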
Citations: 0
User-directed Assembly Code Transformations Enabling Efficient Batteryless Arduino Applications
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659590
Christopher Kraemer, William Gelder, Josiah D. Hester
The time for battery-free computing is now. Lithium mining depletes and pollutes local water supplies, and dead batteries in landfills leak toxic metals into the ground [20][12]. Battery-free devices represent a probable future for sustainable ubiquitous computing, and we will need many more new devices and programmers to bring that future into reality. Yet, energy harvesting and battery-free devices that frequently fail are challenging to program. The maker movement has organically developed a considerable variety of platforms to prototype and program ubiquitous sensing and computing devices, but only a few have been modified to be usable with energy harvesting and to hide those pesky power failures that are the norm under variable energy availability (platforms like Microsoft's Makecode and AdaFruit's CircuitPython). Many platforms, especially Arduino (the first and most famous maker platform), do not support energy harvesting devices and intermittent computing. To bridge this gap and lay a strong foundation for potential new platforms for maker programming, we build a tool called BOOTHAMMER: a lightweight assembly re-writer for ARM Thumb. BOOTHAMMER analyzes and rewrites the low-level assembly to insert careful checkpoint and restore operations to enable programs to persist through power failures. The approach is easily insertable in existing toolchains and is general-purpose enough to be resilient to future platforms and devices/chipsets. We close the loop with the user by designing a small set of program annotations that any maker coder can use to provide extra information to this low-level tool, significantly increasing checkpoint efficiency and resolution. These optional extensions represent a way to include the user in decision-making about energy harvesting while ensuring the tool supports existing platforms. We conduct an extensive evaluation using various program benchmarks with Arduino as our chosen evaluation platform. We also demonstrate the usability of this approach by evaluating BOOTHAMMER with a user study and show that makers feel very confident in their ability to write intermittent computing programs using this tool. With this new tool, we enable maker hardware and software for sustainable, energy-harvesting-based computing for all.
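As a hedged illustration of what an assembly-rewriting checkpoint pass can look like, here is a toy Python sketch that inserts a call to a hypothetical __checkpoint runtime routine after every function label in textual ARM Thumb assembly. The real BOOTHAMMER operates at a much finer grain and is guided by user annotations; none of the symbol names below come from the paper.

```python
import re
from typing import List

# All symbol names here (e.g. __checkpoint) are hypothetical; the abstract does not
# describe BOOTHAMMER's runtime interface or placement policy at this level of detail.
CHECKPOINT_CALL = "\tpush {lr}\n\tbl __checkpoint\n\tpop {lr}"
FUNC_LABEL = re.compile(r"^\w+:\s*$")  # a label that starts a (toy) function

def insert_checkpoints(asm_lines: List[str]) -> List[str]:
    """Toy rewrite pass: emit a checkpoint call right after every function label."""
    out: List[str] = []
    for line in asm_lines:
        out.append(line)
        if FUNC_LABEL.match(line):
            out.append(CHECKPOINT_CALL)
    return out

if __name__ == "__main__":
    sample = [
        "\t.text",
        "blink_loop:",
        "\tmovs r0, #1",
        "\tbl set_led",
        "\tb blink_loop",
    ]
    print("\n".join(insert_checkpoints(sample)))
```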
Citations: 0
Seeing through the Tactile
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659612
Ziyu Wu, Fangting Xie, Yiran Fang, Zhen Liang, Quan Wan, Yufan Xiong, Xiaohui Cai
Humans spend about one-third of their lives resting. Reconstructing human dynamics in in-bed scenarios is of considerable significance in sleep studies, bedsore monitoring, and biomedical factor extraction. However, the mainstream human pose and shape estimation methods mainly focus on visual cues, facing serious issues in non-line-of-sight environments. Since in-bed scenarios contain complicated human-environment contact, pressure-sensing bedsheets provide a non-invasive and privacy-preserving approach to capture the pressure distribution on the contact surface, and have shown prospects in many downstream tasks. However, few studies focus on in-bed human mesh recovery. To explore the potential of reconstructing human meshes from the sensed pressure distribution, we first build a high-quality temporal human in-bed pose dataset, TIP, with 152K multi-modality synchronized images. We then propose a label generation pipeline for in-bed scenarios to generate reliable 3D mesh labels with a SMPLify-based optimizer. Finally, we present PIMesh, a simple yet effective temporal human shape estimator that directly generates human meshes from pressure image sequences. We conduct various experiments to evaluate PIMesh's performance, showing that PIMesh achieves a joint position error of 79.17 mm on our TIP dataset. The results demonstrate that the pressure-sensing bedsheet could be a promising alternative for long-term in-bed human shape estimation.
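The abstract describes a temporal estimator mapping pressure image sequences to SMPL-style body parameters. A minimal PyTorch sketch of such a regressor is given below; the layer sizes and the 72-D pose / 10-D shape heads follow the common SMPL convention and are assumptions, not the published PIMesh architecture.

```python
import torch
import torch.nn as nn

class PressureToMesh(nn.Module):
    """Illustrative temporal regressor: pressure frames -> SMPL-style parameters."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.GRU(32, hidden, batch_first=True)
        self.pose_head = nn.Linear(hidden, 72)   # 24 joints x 3 axis-angle parameters
        self.shape_head = nn.Linear(hidden, 10)  # SMPL shape coefficients

    def forward(self, pressure_seq: torch.Tensor):
        # pressure_seq: (batch, time, H, W) single-channel pressure images
        b, t, h, w = pressure_seq.shape
        feats = self.frame_encoder(pressure_seq.reshape(b * t, 1, h, w))
        feats = feats.reshape(b, t, -1)
        hidden, _ = self.temporal(feats)
        return self.pose_head(hidden), self.shape_head(hidden)

if __name__ == "__main__":
    model = PressureToMesh()
    dummy = torch.randn(2, 8, 64, 32)   # 2 sequences of 8 pressure frames
    pose, shape = model(dummy)
    print(pose.shape, shape.shape)       # (2, 8, 72) and (2, 8, 10)
```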
Citations: 0
Detecting Users' Emotional States during Passive Social Media Use
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659606
Christoph Gebhardt, Andreas Brombach, Tiffany Luong, Otmar Hilliges, Christian Holz
The widespread use of social media significantly impacts users' emotions. Negative emotions, in particular, are frequently produced, which can drastically affect mental health. Recognizing these emotional states is essential for implementing effective warning systems for social networks. However, detecting emotions during passive social media use, the predominant mode of engagement, is challenging. We introduce the first predictive model that estimates user emotions from passive social media consumption alone. We conducted a study with 29 participants who interacted with a controlled social media feed. Our apparatus captured participants' behavior and their physiological signals while they browsed the feed and filled out self-reports from two validated emotion models. Using this data for supervised training, our emotion classifier robustly detected up to 8 emotional states and achieved a peak accuracy of 83% in classifying affect. Our analysis shows that behavioral features were sufficient to robustly recognize participants' emotions. It further highlights that within 8 seconds following a change in media content, objective features reveal a participant's new emotional state. We show that grounding labels in a componential emotion model outperforms dimensional models in higher-resolution state detection. Our findings also demonstrate that using emotional properties of images, predicted by a deep learning model, further improves emotion recognition.
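As a sketch of the supervised-training setup described above, the following Python snippet trains a classifier on synthetic behavioral features and reports cross-validated accuracy. The feature names, label set, and data are placeholders, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature set: per-post behavioral measurements during passive browsing.
# The paper's actual feature extraction and label space are richer than this sketch.
FEATURES = ["dwell_time_s", "scroll_speed_px_s", "scroll_reversals", "pause_count"]
EMOTIONS = ["joy", "sadness", "anger", "surprise"]  # illustrative subset of 8 states

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))      # stand-in behavioral features
y = rng.integers(0, len(EMOTIONS), size=200)   # stand-in self-report labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy on synthetic data: {scores.mean():.2f}")
```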
Citations: 0
Changing Your Tune: Lessons for Using Music to Encourage Physical Activity
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659611
Matthew Clark, Afsaneh Doryab
Our research investigated whether music can communicate physical activity levels in daily life. Past studies have shown that simple musical tunes can provide wellness information, but no study has examined whether musical feedback can affect daily behavior or lead to healthier habits. We conducted a within-subject study with 62 participants over a period of 76 days, providing either musical or text-based feedback on their daily physical activity. The music was built and personalized based on participants' step counts and baseline wellness perceptions. Results showed that participants were marginally more active during the music feedback compared to their baseline period, and significantly more active compared to the text-based feedback (p = 0.000). We also find that a participant's average activity may influence the musical features they find most inspiring within a song. Finally, context influenced how musical feedback was interpreted, and specific musical features correlated with higher activity levels regardless of baseline perceptions. We discuss lessons learned for designing music-based feedback systems for health communication.
Citations: 0
WatchCap: Improving Scanning Efficiency in People with Low Vision through Compensatory Head Movement Stimulation
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659592
Taewoo Jo, Dohyeon Yeo, Gwangbin Kim, Seokhyun Hwang, SeungJun Kim
Individuals with low vision (LV) frequently face challenges in scanning performance, which in turn complicates daily activities requiring visual recognition. Although those with LV can theoretically compensate for these scanning deficiencies through the use of active head movements, few practical applications have sought to capitalize on this potential, especially during visual recognition tasks. In this paper, we present WatchCap, a novel device that leverages the hanger reflex phenomenon to naturally elicit head movements through stimulation feedback. Our user studies, conducted with both sighted individuals in a simulated environment and people with glaucoma-related LV, demonstrated that WatchCap's scanning-contingent stimulation enhances visual exploration. This improvement is evidenced by fixation- and saccade-related features and by positive feedback from participants, and the stimulation did not cause discomfort to the users. This study highlights the promise of facilitating head movements to aid those with LV in visual recognition tasks. Critically, since WatchCap functions independently of predefined or task-specific cues, it has a wide scope of applicability, even in ambient task situations. This independence positions WatchCap to complement existing tools aimed at detailed visual information acquisition, allowing integration with existing tools and facilitating a comprehensive approach to assisting individuals with LV.
Citations: 0
AutoAugHAR: Automated Data Augmentation for Sensor-based Human Activity Recognition
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659589
Yexu Zhou, Hai-qiang Zhao, Yiran Huang, Tobias Röddiger, Murat Kurnaz, T. Riedel, M. Beigl
Sensor-based HAR models face challenges in cross-subject generalization due to the complexities of data collection and annotation, impacting the size and representativeness of datasets. While data augmentation has been successfully employed in domains like natural language and image processing, its application in HAR remains underexplored. This study presents AutoAugHAR, an innovative two-stage gradient-based data augmentation optimization framework. AutoAugHAR is designed to take into account the unique attributes of candidate augmentation operations and the unique nature and challenges of HAR tasks. Notably, it optimizes the augmentation pipeline during HAR model training without substantially extending the training duration. In evaluations on eight inertial-measurement-unit-based benchmark datasets using five HAR models, AutoAugHAR has demonstrated superior robustness and effectiveness compared to other leading data augmentation frameworks. A salient feature of AutoAugHAR is its model-agnostic design, allowing for its seamless integration with any HAR model without the need for structural modifications. Furthermore, we also demonstrate the generalizability and flexible extensibility of AutoAugHAR on four datasets from other adjacent domains. We strongly recommend its integration as a standard protocol in HAR model training and will release it as an open-source tool.
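A minimal sketch of the gradient-searchable augmentation idea is shown below: candidate operations are mixed with learnable softmax weights, so the training loss can backpropagate into the choice of augmentation. This is a simplification for illustration; the actual two-stage AutoAugHAR procedure is more involved, and all operation names are assumptions.

```python
import torch
import torch.nn as nn

# Candidate augmentations for an IMU window of shape (batch, time, channels). Illustrative only.
def jitter(x):   return x + 0.01 * torch.randn_like(x)
def scale(x):    return x * (1.0 + 0.1 * torch.randn(x.size(0), 1, 1))
def identity(x): return x

class SoftAugment(nn.Module):
    """Mix candidate augmentations with learnable weights so that the downstream
    training loss can upweight the operations that help generalization."""
    def __init__(self):
        super().__init__()
        self.ops = [identity, jitter, scale]
        self.logits = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = torch.softmax(self.logits, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

if __name__ == "__main__":
    aug = SoftAugment()
    window = torch.randn(8, 100, 6)   # 8 IMU windows, 100 samples, 6 channels
    out = aug(window)
    out.mean().backward()             # gradients reach the mixing weights
    print(aug.logits.grad)
```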
Citations: 0
Waving Hand as Infrared Source for Ubiquitous Gas Sensing
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659605
Zimo Liao, Meng Jin, Shun An, Chaoyue Niu, Fan Wu, Tao Deng, Guihai Chen
Gases in the environment can significantly affect our health and safety. As mobile devices gain popularity, we explore a human-centered gas detection system that can be integrated into commercial mobile devices to realize ubiquitous gas detection. However, existing gas sensors either have response delays that are too long or are too cumbersome. This paper shows the feasibility of performing gas sensing by shining infrared (IR) signals emitted from our hands through the gas, allowing the system to rely on a single IR detector. The core opportunity arises from the fact that the human hand can provide stable, broadband, and omnidirectional IR radiation. Considering that IR signals experience distinct attenuation when passing through different gases or gases with different concentrations, we can integrate the human hand into the gas sensing system to enable extremely low-power and sustainable gas sensing. Yet, it is challenging to build a robust system that directly utilizes the hand's IR radiation. Practical issues include low IR radiation from the hand, an unstable optical path, the impact of environmental factors such as ambient temperature, etc. To tackle these issues, we on the one hand modulate the IR radiation from the hand, leveraging the controllability of the human hand to improve its IR radiation. On the other hand, we provide a dual-channel IR detector design to filter out the impact of environmental factors and gases in the environment. Extensive experiments show that our system can realize ethanol, gaseous water, and CO2 detection with accuracies of 96.7%, 92.1%, and 94.2%, respectively.
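The attenuation principle behind the system can be illustrated with the Beer-Lambert relation and a dual-channel normalization step, as in the Python sketch below. The absorptivity constant, path length, and function names are made-up values for illustration, not calibrated parameters from the paper.

```python
import numpy as np

def transmitted_fraction(concentration_ppm: float, absorptivity: float, path_m: float) -> float:
    """Beer-Lambert attenuation: fraction of IR intensity that survives the gas path."""
    return np.exp(-absorptivity * concentration_ppm * path_m)

def estimate_concentration(active_ratio: float, reference_ratio: float,
                           absorptivity: float, path_m: float) -> float:
    """Dual-channel idea: divide the gas-absorbing channel by a non-absorbing reference
    channel so source strength and ambient drift cancel, then invert Beer-Lambert."""
    normalized = active_ratio / reference_ratio
    return -np.log(normalized) / (absorptivity * path_m)

if __name__ == "__main__":
    k, L = 1e-4, 0.3            # illustrative absorptivity and a 30 cm hand-to-detector path
    true_ppm = 800.0
    active = transmitted_fraction(true_ppm, k, L)   # channel inside the absorption band
    reference = 1.0                                  # channel outside the band (no absorption)
    print(f"recovered concentration: {estimate_concentration(active, reference, k, L):.0f} ppm")
```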
Citations: 0
G-VOILA: Gaze-Facilitated Information Querying in Daily Scenarios
Q1 Computer Science | Pub Date: 2024-05-13 | DOI: 10.1145/3659623
Zeyu Wang, Yuanchun Shi, Yuntao Wang, Yuchen Yao, Kun Yan, Yuhan Wang, Lei Ji, Xuhai Xu, Chun Yu
Modern information querying systems are progressively incorporating multimodal inputs like vision and audio. However, the integration of gaze, a modality deeply linked to user intent and increasingly accessible via gaze-tracking wearables, remains underexplored. This paper introduces a novel gaze-facilitated information querying paradigm, named G-VOILA, which synergizes users' gaze, visual field, and voice-based natural language queries to facilitate a more intuitive querying process. In a user-enactment study involving 21 participants in 3 daily scenarios (p = 21, scene = 3), we revealed the ambiguity in users' query language and a gaze-voice coordination pattern in users' natural query behaviors with G-VOILA. Based on the quantitative and qualitative findings, we developed a design framework for the G-VOILA paradigm, which effectively integrates the gaze data with the in-situ querying context. Then we implemented a G-VOILA proof-of-concept using cutting-edge deep learning techniques. A follow-up user study (p = 16, scene = 2) demonstrates its effectiveness, achieving both a higher objective score and a higher subjective score compared to a baseline without gaze data. We further conducted interviews and provided insights for future gaze-facilitated information querying systems.
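As a rough sketch of how a gaze fixation might select visual context to accompany an ambiguous spoken query, consider the following Python snippet. The cropping heuristic and all names are hypothetical; the paper's actual pipeline and its downstream vision-language model call are not reproduced here.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GazeSample:
    x: float  # normalized [0, 1] coordinates in the egocentric camera frame
    y: float

def crop_around_gaze(frame_size: Tuple[int, int], gaze: GazeSample,
                     crop: int = 224) -> Tuple[int, int, int, int]:
    """Return a crop box (left, top, right, bottom) centered on the fixation point,
    clamped to the frame; the crop plus the spoken query would then be sent to a
    vision-language model (omitted here)."""
    w, h = frame_size
    cx, cy = int(gaze.x * w), int(gaze.y * h)
    left = max(0, min(w - crop, cx - crop // 2))
    top = max(0, min(h - crop, cy - crop // 2))
    return left, top, left + crop, top + crop

def build_query(transcript: str, crop_box: Tuple[int, int, int, int]) -> dict:
    """Package the (possibly ambiguous) spoken query with gaze-selected visual context."""
    return {"question": transcript, "visual_region": crop_box}

if __name__ == "__main__":
    box = crop_around_gaze((1280, 720), GazeSample(x=0.62, y=0.40))
    print(build_query("what plant is that?", box))
```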
Citations: 0