
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

Surveying the Social Comfort of Body, Device, and Environment-Based Augmented Reality Interactions in Confined Passenger Spaces Using Mixed Reality Composite Videos
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610923
Daniel Medeiros, Romane Dubus, Julie Williamson, Graham Wilson, Katharina Pöhlmann, Mark McGill
Augmented Reality (AR) headsets could significantly improve the passenger experience, freeing users from the restrictions of physical smartphones, tablets and seatback displays. However, the confined space of public transport and the varying proximity to other passengers may restrict what interaction techniques are deemed socially acceptable for AR users - particularly considering current reliance on mid-air interactions in consumer headsets. We contribute and utilize a novel approach to social acceptability video surveys, employing mixed reality composited videos to present a real user performing interactions across different virtual transport environments. This approach allows for controlled evaluation of perceived social acceptability whilst freeing researchers to present interactions in any simulated context. Our resulting survey (N=131) explores the social comfort of body, device, and environment-based interactions across seven transit seating arrangements. We reflect on the advantages of discreet inputs over mid-air and the unique challenges of face-to-face seating for passenger AR.
Citations: 0
What and When to Explain?
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610886
Gwangbin Kim, Dohyeon Yeo, Taewoo Jo, Daniela Rus, SeungJun Kim
Explanations in automated vehicles help passengers understand the vehicle's state and capabilities, leading to increased trust in the technology. Specifically, for passengers of SAE Level 4 and 5 vehicles who are not engaged in the driving process, the enhanced sense of control provided by explanations reduces potential anxieties, enabling them to fully leverage the benefits of automation. To construct explanations that enhance trust and situational awareness without disturbing passengers, we suggest testing with people who ultimately employ such explanations, ideally under real-world driving conditions. In this study, we examined the impact of various visual explanation types (perception, attention, perception+attention) and timing mechanisms (constantly provided or only under risky scenarios) on passenger experience under naturalistic driving scenarios using actual vehicles with mixed-reality support. Our findings indicate that visualizing the vehicle's perception state improves the perceived usability, trust, safety, and situational awareness without adding cognitive burden, even without explaining the underlying causes. We also demonstrate that the traffic risk probability could be used to control the timing of an explanation delivery, particularly when passengers are overwhelmed with information. Our study's on-road evaluation method offers a safe and reliable testing environment and can be easily customized for other AI models and explanation modalities.
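As a rough illustration of the timing mechanism this abstract describes, the sketch below gates explanation delivery on an estimated traffic risk probability, with an always-on mode for the "constantly provided" condition. The class name, threshold value, and message are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of risk-gated explanation timing: an explanation is shown
# either constantly or only when estimated traffic risk is high. All names
# (RiskGatedExplainer, risk_threshold) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Explanation:
    kind: str      # "perception", "attention", or "perception+attention"
    message: str

class RiskGatedExplainer:
    def __init__(self, risk_threshold: float = 0.7, always_on: bool = False):
        self.risk_threshold = risk_threshold  # deliver only above this risk
        self.always_on = always_on            # "constantly provided" condition

    def maybe_explain(self, risk_probability: float, explanation: Explanation):
        """Return the explanation if it should be shown now, else None."""
        if self.always_on or risk_probability >= self.risk_threshold:
            return explanation
        return None

# Example: a perception explanation is delivered only in a risky scenario.
explainer = RiskGatedExplainer(risk_threshold=0.7)
msg = Explanation("perception", "Pedestrian detected ahead; slowing down.")
print(explainer.maybe_explain(0.85, msg))  # delivered
print(explainer.maybe_explain(0.20, msg))  # suppressed -> None
```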
Citations: 0
GlassMessaging
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610931
Nuwan Janaka, Jie Gao, Lin Zhu, Shengdong Zhao, Lan Lyu, Peisen Xu, Maximilian Nabokow, Silang Wang, Yanch Ong
Communicating with others while engaging in simple daily activities is both common and natural for people. However, due to the hands- and eyes-busy nature of existing digital messaging applications, it is challenging to message someone while performing such activities. We present GlassMessaging, a messaging application for Optical See-Through Head-Mounted Displays (OHMDs), to support messaging with voice and manual inputs in hands- and eyes-busy scenarios. GlassMessaging was iteratively developed through a formative study identifying current messaging behaviors and the challenges of multitasking while messaging. We then evaluated the application against the mobile phone platform across varying texting complexities in eating and walking scenarios. Our results showed that, compared to phone-based messaging, GlassMessaging increased messaging opportunities during multitasking due to its hands-free, wearable nature and multimodal input capabilities. GlassMessaging also affords users easier access to voice input than the phone, reducing response time by 33.1% and increasing texting speed by 40.3%, at a cost of 2.5% in texting accuracy, particularly as texting complexity increases. Lastly, we discuss trade-offs and insights that lay a foundation for future OHMD-based messaging applications.
Citations: 2
ProxiFit
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610920
Jiha Kim, Younho Nam, Jungeun Lee, Young-Joo Suh, Inseok Hwang
Although many works bring exercise monitoring to smartphones and smartwatches, the inertial sensors used in such systems require the device to be in motion to detect exercises. We introduce ProxiFit, a highly practical on-device exercise monitoring system capable of classifying and counting exercises even if the device stays still. Utilizing novel proximity sensing of the natural magnetism in exercise equipment, ProxiFit brings (1) a new category of exercise not involving device motion, such as lower-body machine exercise, and (2) a new off-body exercise monitoring mode where a smartphone can be conveniently viewed in front of the user during workouts. ProxiFit addresses the common issues of faint magnetic sensing by choosing appropriate preprocessing, negating adversarial motion artifacts, and designing a lightweight yet noise-tolerant classifier. Application-specific challenges, such as the wide variety of equipment and the impracticality of obtaining large datasets, are overcome by devising a unique yet challenging training policy. We evaluate ProxiFit on up to 10 weight machines (5 lower- and 5 upper-body) and 4 free-weight exercises, in both wearable and signage modes, with 19 users at 3 gyms over 14 months, and verify robustness against user and weather variations, spatial and rotational deviations in device placement, and interference from neighboring machines.
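The magnetism-based sensing idea can be sketched in a few lines: remove the static geomagnetic field with a high-pass filter, then count threshold crossings of the residual as repetitions. This is a minimal sketch under assumed filter and threshold values; ProxiFit's real pipeline additionally negates motion artifacts and runs a noise-tolerant classifier.

```python
# Sketch of magnetism-based rep counting in the spirit of ProxiFit.
# The filter constant and threshold below are illustrative guesses.
import numpy as np

def highpass(signal: np.ndarray, alpha: float = 0.95) -> np.ndarray:
    """One-pole high-pass filter: removes the static geomagnetic field so
    only the field of nearby moving ferromagnetic equipment remains."""
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = alpha * (out[i - 1] + signal[i] - signal[i - 1])
    return out

def count_reps(magnitude: np.ndarray, threshold: float = 2.0) -> int:
    """Count rising threshold crossings of the filtered field magnitude;
    each rising crossing is treated as one repetition of the exercise."""
    filtered = highpass(magnitude)
    above = filtered > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# Example: a synthetic 1 Hz 'exercise' oscillation on a 40 uT baseline,
# sampled at 100 Hz for 10 seconds.
t = np.linspace(0, 10, 1000)
mag = 40.0 + 5.0 * np.sin(2 * np.pi * 1.0 * t)
print(count_reps(mag))  # roughly 10 repetitions
```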
Citations: 0
DYPA
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610908
Shuhan Zhong, Sizhe Song, Tianhao Tang, Fei Nie, Xinrui Zhou, Yankun Zhao, Yizhe Zhao, Kuen Fung Sin, S.-H. Gary Chan
Early identification of dyslexia, a learning disorder affecting reading and writing, is critical for effective treatment. As accredited specialists for the clinical diagnosis of dyslexia are costly and undersupplied, we research and develop a computer-assisted approach to efficiently prescreen dyslexic Chinese children so that timely resources can be channelled to those at higher risk. Previous works in this area mostly target English and other alphabetic languages, are tailored narrowly to the reading disorder, or require costly specialized equipment. To overcome this, we present DYPA, a novel DYslexia Prescreening mobile Application for Chinese children. DYPA collects multimodal data from children through a set of specially designed interactive reading and writing tests in Chinese, and comprehensively analyzes their cognitive-linguistic skills with machine learning. To better account for the dyslexia-associated features in handwritten characters, DYPA employs a deep learning based multilevel Chinese handwriting analysis framework to extract features across the stroke, radical and character levels. We have implemented and installed DYPA on tablets, and our extensive trials with more than 200 pupils in Hong Kong validate its high predictive accuracy (81.14%), sensitivity (74.27%) and specificity (82.71%).
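To make the three reported numbers concrete, the sketch below computes accuracy, sensitivity, and specificity from a binary confusion matrix. The counts are invented solely so the outputs land near the reported values; they are not DYPA's actual results breakdown.

```python
# How the three screening metrics relate to a binary confusion matrix.
def screening_metrics(tp: int, fn: int, tn: int, fp: int):
    sensitivity = tp / (tp + fn)            # dyslexic children correctly flagged
    specificity = tn / (tn + fp)            # non-dyslexic children correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical outcome over 200 pupils, chosen to land near the paper's numbers.
acc, sens, spec = screening_metrics(tp=38, fn=13, tn=124, fp=25)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
# -> accuracy=81.00% sensitivity=74.51% specificity=83.22%
```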
Citations: 0
Detecting Social Contexts from Mobile Sensing Indicators in Virtual Interactions with Socially Anxious Individuals
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610916
Zhiyuan Wang, Maria A. Larrazabal, Mark Rucker, Emma R. Toner, Katharine E. Daniel, Shashwat Kumar, Mehdi Boukhechba, Bethany A. Teachman, Laura E. Barnes
Mobile sensing is a ubiquitous and useful tool for making inferences about individuals' mental health based on physiology and behavior patterns. Along with sensing features directly associated with mental health, it can be valuable to detect different features of social contexts to learn about social interaction patterns over time and across different environments. This can provide insight into diverse communities' academic, work and social lives, and their social networks. We posit that passively detecting social contexts can be particularly useful for social anxiety research, as it may ultimately help identify changes in social anxiety status and patterns of social avoidance and withdrawal. To this end, we recruited a sample of highly socially anxious undergraduate students (N=46) to examine whether we could detect the presence of experimentally manipulated virtual social contexts via wristband sensors. Using a multitask machine learning pipeline, we leveraged passively sensed biobehavioral streams to detect contexts relevant to social anxiety, including (1) whether people were in a social situation, (2) size of the social group, (3) degree of social evaluation, and (4) phase of the social situation (anticipating, actively experiencing, or having just experienced it). Results demonstrated the feasibility of detecting most virtual social contexts, with stronger predictive accuracy when detecting whether individuals were in a social situation and the phase of the situation, and weaker predictive accuracy when detecting the level of social evaluation. They also indicated that sensing streams are differentially important to prediction depending on the context being predicted. Our findings also provide useful information regarding design elements relevant to passive context detection, including optimal sensing duration, the utility of different sensing modalities, and the need for personalization. We discuss implications of these findings for future work on context detection (e.g., just-in-time adaptive intervention development).
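A multitask pipeline of this shape is often implemented as a shared encoder with one classification head per context label. The sketch below shows that structure for the four labels above; the feature dimension, layer sizes, and class counts are assumptions, not the authors' model.

```python
# Hedged sketch of a multitask context detector: one shared encoder over
# wristband feature windows, one head per social-context label.
import torch
import torch.nn as nn

class MultitaskContextModel(nn.Module):
    def __init__(self, n_features: int = 32, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared biobehavioral encoder
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({           # class counts are illustrative
            "in_social_situation": nn.Linear(hidden, 2),  # yes / no
            "group_size": nn.Linear(hidden, 3),           # e.g. none/small/large
            "evaluation_degree": nn.Linear(hidden, 3),    # low/medium/high
            "phase": nn.Linear(hidden, 3),                # anticipate/during/after
        })

    def forward(self, x: torch.Tensor) -> dict:
        z = self.encoder(x)
        return {task: head(z) for task, head in self.heads.items()}

# Example: per-task logits for a batch of 8 feature windows.
model = MultitaskContextModel()
outputs = model(torch.randn(8, 32))
print({task: tuple(logits.shape) for task, logits in outputs.items()})
```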
Citations: 0
Predicting Symptom Improvement During Depression Treatment Using Sleep Sensory Data
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610932
Chinmaey Shende, Soumyashree Sahoo, Stephen Sam, Parit Patel, Reynaldo Morillo, Xinyu Wang, Shweta Ware, Jinbo Bi, Jayesh Kamath, Alexander Russell, Dongjin Song, Bing Wang
Depression is a serious mental illness. The current best guideline in depression treatment is to closely monitor patients and adjust treatment as needed. Close monitoring of patients through physician-administered follow-ups or self-administered questionnaires, however, is difficult in clinical settings due to high cost, a lack of trained professionals, and the burden on patients. Sensory data collected from mobile devices has been shown to provide a promising direction for the long-term monitoring of depression symptoms. Most existing studies in this direction, however, focus on depression detection; the few that predict changes in depression are not set in clinical contexts. In this paper, we investigate using one type of sensory data, sleep data collected from wearables, to predict the improvement of depression symptoms over time after a patient initiates a new pharmacological treatment. We apply sleep trend filtering to noisy sleep sensory data to extract high-level sleep characteristics and develop a family of machine learning models that use simple sleep features (mean and variation of sleep duration) to predict symptom improvement. Our results show that even such simple sleep features can achieve a validation F1 score of up to 0.68, indicating that using sensory data to predict depression improvement during treatment is a promising direction.
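The feature construction is simple enough to sketch. Below, a moving average stands in for the paper's sleep trend filtering, and the mean and variation of the smoothed nightly durations are the two features; the window size and data are illustrative only.

```python
# Sketch of the feature idea: smooth noisy nightly sleep durations into a
# trend, then use the trend's mean and variation as the two predictors.
import numpy as np

def sleep_features(nightly_hours: np.ndarray, window: int = 7):
    """Return (mean, std) of the smoothed sleep-duration trend."""
    kernel = np.ones(window) / window
    trend = np.convolve(nightly_hours, kernel, mode="valid")  # weekly smoothing
    return float(trend.mean()), float(trend.std())

# Example: 8 weeks of noisy nightly sleep after starting a new treatment.
rng = np.random.default_rng(0)
nights = 7.0 + 0.2 * np.arange(56) / 56 + rng.normal(0, 0.8, 56)
mean_sleep, sleep_var = sleep_features(nights)
print(f"trend mean={mean_sleep:.2f} h, variation={sleep_var:.2f} h")
# These two per-patient numbers would feed a simple classifier predicting
# whether depression symptoms improve over the treatment course.
```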
Citations: 0
Investigating Passive Haptic Learning of Piano Songs Using Three Tactile Sensations of Vibration, Stroking and Tapping
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610899
Likun Fang, Timo Müller, Erik Pescara, Nikola Fischer, Yiran Huang, Michael Beigl
Passive Haptic Learning (PHL) is a method by which users learn motor skills without paying active attention. In past research, vibration has been widely applied in PHL as the signal delivered to the participant's skin. The human somatosensory system provides the brain not only with discriminative input (the perception of pressure, vibration, slip, texture, etc.) but also with affective input (sliding, tapping, stroking, etc.). The former is often described as being mediated by low-threshold mechanosensitive (LTM) units with rapidly conducting, large myelinated (Aβ) afferents, while the latter is mediated by a class of LTM afferents called C-tactile afferents (CTs). In this work, we investigated whether different tactile sensations (tapping, light stroking, and vibration) influence the learning effect of PHL. We built three wearable systems, one for each sensation. 17 participants were invited to passively learn to play three different note sequences via the three systems. The subjects were then tested on their recall of the note sequences after each learning session. Our results indicate that the sensations of tapping or stroking are as effective as vibration in the passive haptic learning of piano songs, providing viable alternatives to the vibration sensations used so far. We also found that participants made on average 1.06 fewer errors when using affective inputs, namely tapping or stroking. As the first work exploring the differences between multiple types of tactile sensations in PHL, we offer our design to readers in the hope that they may employ it for further PHL research.
Citations: 0
Combining Smart Speaker and Smart Meter to Infer Your Residential Power Usage by Self-supervised Cross-modal Learning
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610905
Guanzhou Zhu, Dong Zhao, Kuo Tian, Zhengyuan Zhang, Rui Yuan, Huadong Ma
Energy disaggregation is a key enabling technology for residential power usage monitoring, which benefits various applications such as carbon emission monitoring and human activity recognition. However, existing methods struggle to balance accuracy against usage burden (device costs, data labeling, and prior knowledge). As the high penetration of smart speakers offers a low-cost way to perform sound-assisted residential power usage monitoring, this work aims to combine a smart speaker and a smart meter in a house to liberate the system from a high usage burden. However, it is still challenging to extract and leverage the consistent/complementary information (two types of relationships between acoustic and power features) from acoustic and power data without data labeling or prior knowledge. To this end, we design COMFORT, a cross-modality system for self-supervised power usage monitoring, including (i) a cross-modality learning component to automatically learn the consistent and complementary information, and (ii) a cross-modality inference component to utilize that information. We implement and evaluate COMFORT on a self-collected dataset from six houses over 14 days, demonstrating that COMFORT finds the most appliances (98%), improves appliance recognition performance in F-measure by at least 41.1%, and reduces the Mean Absolute Error (MAE) of energy disaggregation by at least 30.4% over alternative solutions.
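Cross-modality learning of consistent information is commonly realized as a contrastive objective over time-aligned windows from the two modalities. The sketch below shows one such formulation in that spirit; the encoders, feature dimensions, and InfoNCE-style loss are assumptions, not COMFORT's actual components.

```python
# Loose sketch of cross-modal alignment: embed time-aligned acoustic and
# smart-meter windows, pull matching pairs together (consistent information)
# and push mismatched pairs apart. No appliance labels are needed.
import torch
import torch.nn as nn
import torch.nn.functional as F

audio_enc = nn.Linear(128, 32)   # stand-in acoustic encoder
power_enc = nn.Linear(16, 32)    # stand-in power-reading encoder

def cross_modal_loss(audio_feats, power_feats, temperature=0.1):
    a = F.normalize(audio_enc(audio_feats), dim=1)
    p = F.normalize(power_enc(power_feats), dim=1)
    logits = a @ p.t() / temperature        # similarity of every audio/power pair
    targets = torch.arange(len(a))          # i-th audio matches i-th power window
    # Symmetric contrastive loss over both modality directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example: a batch of 16 time-aligned acoustic / power feature windows.
loss = cross_modal_loss(torch.randn(16, 128), torch.randn(16, 16))
loss.backward()  # self-supervised training step
print(float(loss))
```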
Citations: 0
Abacus Gestures
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-09-27 · DOI: 10.1145/3610898
Md Ehtesham-Ul-Haque, Syed Masum Billah
Designing an extensive set of mid-air gestures that are both easy to learn and perform quickly presents a significant challenge. Further complicating this challenge is achieving high-accuracy detection of such gestures using commonly available hardware, like a 2D commodity camera. Previous work often proposed smaller, application-specific gesture sets, requiring specialized hardware and struggling with adaptability across diverse environments. Addressing these limitations, this paper introduces Abacus Gestures, a comprehensive collection of 100 mid-air gestures. Drawing on the metaphor of Finger Abacus counting, gestures are formed from various combinations of open and closed fingers, each assigned different values. We developed an algorithm using an off-the-shelf computer vision library capable of detecting these gestures from a 2D commodity camera feed with an accuracy exceeding 98% for palms facing the camera and 95% for palms facing the body. We assessed the detection accuracy, ease of learning, and usability of these gestures in a user study involving 20 participants. The study found that participants could learn Abacus Gestures within five minutes after executing just 15 gestures and could recall them after a four-month interval. Additionally, most participants developed motor memory for these gestures after performing 100 gestures. Most of the gestures were easy to execute with the designated finger combinations, and the flexibility in executing the gestures using multiple finger combinations further enhanced the usability. Based on these findings, we created a taxonomy that categorizes Abacus Gestures into five groups based on motor memory development and three difficulty levels according to their ease of execution. Finally, we provided design guidelines and proposed potential use cases for Abacus Gestures in the realm of mid-air interaction.
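The finger-abacus encoding can be illustrated independently of any specific hand tracker. The sketch below maps per-finger open/closed states (which in practice would come from a 2D hand-landmark detector) to digit values under one common finger-abacus convention, open fingers counting 1 and an open thumb counting 5; the paper's exact value assignment may differ.

```python
# Sketch of the finger-abacus metaphor: each hand is read as a digit.
# The value convention here is an assumption, not necessarily the paper's.

def hand_value(thumb: bool, index: bool, middle: bool, ring: bool, pinky: bool) -> int:
    """Value of one hand, 0-9, abacus style: thumb = 5, each finger = 1."""
    return 5 * thumb + sum((index, middle, ring, pinky))

def gesture_value(right_hand: tuple, left_hand: tuple) -> int:
    """Two hands read as a two-digit number: right = ones, left = tens."""
    return 10 * hand_value(*left_hand) + hand_value(*right_hand)

# Example: left hand shows 3 (three fingers), right shows 7 (thumb + two).
left = (False, True, True, True, False)
right = (True, True, True, False, False)
print(gesture_value(right, left))  # -> 37
```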
Citations: 0