
Latest Publications: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

DYPA
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610908
Shuhan Zhong, Sizhe Song, Tianhao Tang, Fei Nie, Xinrui Zhou, Yankun Zhao, Yizhe Zhao, Kuen Fung Sin, S.-H. Gary Chan
Early identification of a person with dyslexia, a learning disorder affecting reading and writing, is critical for effective treatment. As accredited specialists for clinical diagnosis of dyslexia are costly and undersupplied, we research and develop a computer-assisted approach to efficiently prescreen dyslexic Chinese children so that timely resources can be channelled to those at higher risk. Previous works in this area are mostly for English and other alphabetic languages, tailored narrowly to the reading disorder, or require costly specialized equipment. To overcome that, we present DYPA, a novel DYslexia Prescreening mobile Application for Chinese children. DYPA collects multimodal data from children through a set of specially designed interactive reading and writing tests in Chinese, and comprehensively analyzes their cognitive-linguistic skills with machine learning. To better account for the dyslexia-associated features in handwritten characters, DYPA employs a deep learning based multilevel Chinese handwriting analysis framework to extract features across the stroke, radical and character levels. We have implemented and installed DYPA on tablets, and our extensive trials with more than 200 pupils in Hong Kong validate its high predictive accuracy (81.14%), sensitivity (74.27%) and specificity (82.71%).
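For orientation, the reported figures map onto a standard binary-screening evaluation. Below is a minimal sketch (not the authors' code; the labels, toy data, and scikit-learn usage are illustrative assumptions) of how accuracy, sensitivity, and specificity are computed for such a prescreening model:

```python
from sklearn.metrics import confusion_matrix

def prescreening_metrics(y_true, y_pred):
    """y_true/y_pred: 1 = flagged as at risk of dyslexia, 0 = not at risk."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # share of truly at-risk children caught
    specificity = tn / (tn + fp)  # share of not-at-risk children cleared
    return accuracy, sensitivity, specificity

# Toy labels only, to show the mechanics:
acc, sens, spec = prescreening_metrics([0, 1, 1, 0, 1], [0, 1, 0, 0, 1])
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```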
Citations: 0
Detecting Social Contexts from Mobile Sensing Indicators in Virtual Interactions with Socially Anxious Individuals
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610916
Zhiyuan Wang, Maria A. Larrazabal, Mark Rucker, Emma R. Toner, Katharine E. Daniel, Shashwat Kumar, Mehdi Boukhechba, Bethany A. Teachman, Laura E. Barnes
Mobile sensing is a ubiquitous and useful tool to make inferences about individuals' mental health based on physiology and behavior patterns. Along with sensing features directly associated with mental health, it can be valuable to detect different features of social contexts to learn about social interaction patterns over time and across different environments. This can provide insight into diverse communities' academic, work and social lives, and their social networks. We posit that passively detecting social contexts can be particularly useful for social anxiety research, as it may ultimately help identify changes in social anxiety status and patterns of social avoidance and withdrawal. To this end, we recruited a sample of highly socially anxious undergraduate students (N=46) to examine whether we could detect the presence of experimentally manipulated virtual social contexts via wristband sensors. Using a multitask machine learning pipeline, we leveraged passively sensed biobehavioral streams to detect contexts relevant to social anxiety, including (1) whether people were in a social situation, (2) size of the social group, (3) degree of social evaluation, and (4) phase of social situation (anticipating, actively experiencing, or had just participated in an experience). Results demonstrated the feasibility of detecting most virtual social contexts, with stronger predictive accuracy when detecting whether individuals were in a social situation or not and the phase of the situation, and weaker predictive accuracy when detecting the level of social evaluation. They also indicated that sensing streams are differentially important to prediction based on the context being predicted. Our findings also provide useful information regarding design elements relevant to passive context detection, including optimal sensing duration, the utility of different sensing modalities, and the need for personalization. We discuss implications of these findings for future work on context detection (e.g., just-in-time adaptive intervention development).
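As a rough illustration of the multitask setup described above, the following sketch wires one shared encoder to four heads matching the paper's four context targets; the layer sizes, class counts, and framework choice (PyTorch) are assumptions, not the authors' pipeline:

```python
import torch
import torch.nn as nn

class MultitaskContextNet(nn.Module):
    def __init__(self, n_features=32):
        super().__init__()
        # Shared representation over windowed wristband features.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # One head per social-context task (class counts are assumed).
        self.heads = nn.ModuleDict({
            "in_social_situation": nn.Linear(32, 2),
            "group_size": nn.Linear(32, 3),
            "evaluation_level": nn.Linear(32, 3),
            "phase": nn.Linear(32, 3),  # anticipating / experiencing / post
        })

    def forward(self, x):
        z = self.encoder(x)
        return {task: head(z) for task, head in self.heads.items()}

model = MultitaskContextNet()
outputs = model(torch.randn(8, 32))  # batch of 8 windowed feature vectors
loss = sum(nn.functional.cross_entropy(logits, torch.randint(0, logits.shape[1], (8,)))
           for logits in outputs.values())  # joint loss across the four tasks
```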
Citations: 0
Predicting Symptom Improvement During Depression Treatment Using Sleep Sensory Data
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610932
Chinmaey Shende, Soumyashree Sahoo, Stephen Sam, Parit Patel, Reynaldo Morillo, Xinyu Wang, Shweta Ware, Jinbo Bi, Jayesh Kamath, Alexander Russell, Dongjin Song, Bing Wang
Depression is a serious mental illness. The current best guideline in depression treatment is closely monitoring patients and adjusting treatment as needed. Close monitoring of patients through physician-administered follow-ups or self-administered questionnaires, however, is difficult in clinical settings due to high cost, lack of trained professionals, and burden to the patients. Sensory data collected from mobile devices has been shown to provide a promising direction for long-term monitoring of depression symptoms. Most existing studies in this direction, however, focus on depression detection; the few studies that are on predicting changes in depression are not in clinical settings. In this paper, we investigate using one type of sensory data, sleep data, collected from wearables to predict improvement of depression symptoms over time after a patient initiates a new pharmacological treatment. We apply sleep trend filtering to noisy sleep sensory data to extract high-level sleep characteristics and develop a family of machine learning models that use simple sleep features (mean and variation of sleep duration) to predict symptom improvement. Our results show that using such simple sleep features can already lead to validation F1 score up to 0.68, indicating that using sensory data for predicting depression improvement during treatment is a promising direction.
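To make the feature pipeline concrete, here is a hedged sketch of the two simple sleep features named in the abstract (mean and variation of sleep duration) feeding a classifier scored with F1; the model choice and toy data are assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def sleep_features(nightly_durations_hours):
    """Mean and variation of sleep duration over an observation window."""
    d = np.asarray(nightly_durations_hours, dtype=float)
    return np.array([d.mean(), d.std()])

rng = np.random.default_rng(0)
# Toy data: 14 nights of wearable-derived sleep durations per patient,
# label 1 = depressive symptoms improved under the new treatment.
X = np.array([sleep_features(rng.uniform(4, 9, size=14)) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = LogisticRegression().fit(X[:30], y[:30])
print("validation F1:", f1_score(y[30:], clf.predict(X[30:])))
```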
Citations: 0
Investigating Passive Haptic Learning of Piano Songs Using Three Tactile Sensations of Vibration, Stroking and Tapping
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610899
Likun Fang, Timo Müller, Erik Pescara, Nikola Fischer, Yiran Huang, Michael Beigl
Passive Haptic Learning (PHL) is a method by which users are able to learn motor skills without paying active attention. In past research, vibration is widely applied in PHL as the signal delivered on the participant's skin. The human somatosensory system provides not only discriminative input (the perception of pressure, vibration, slip, and texture, etc.) to the brain but also an affective input (sliding, tapping and stroking, etc.). The former is often described as being mediated by low-threshold mechanosensitive (LTM) units with rapidly conducting large myelinated (Aβ) afferents, while the latter is mediated by a class of LTM afferents called C-tactile afferents (CTs). We investigated whether different tactile sensations (tapping, light stroking, and vibration) influence the learning effect of PHL in this work. We built three wearable systems corresponding to the three sensations respectively. 17 participants were invited to learn to play three different note sequences passively via three different systems. The subjects were then tested on their remembered note sequences after each learning session. Our results indicate that the sensations of tapping or stroking are as effective as the vibration system in passive haptic learning of piano songs, providing viable alternatives to the vibration sensations that have been used so far. We also found that participants made up to 1.06 fewer errors on average when using affective inputs, namely tapping or stroking. As the first work exploring the differences in multiple types of tactile sensations in PHL, we offer our design to the readers and hope they may employ our works for further research of PHL.
Citations: 0
Combining Smart Speaker and Smart Meter to Infer Your Residential Power Usage by Self-supervised Cross-modal Learning
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610905
Guanzhou Zhu, Dong Zhao, Kuo Tian, Zhengyuan Zhang, Rui Yuan, Huadong Ma
Energy disaggregation is a key enabling technology for residential power usage monitoring, which benefits various applications such as carbon emission monitoring and human activity recognition. However, existing methods struggle to balance accuracy against usage burden (device costs, data labeling, and prior knowledge). As the high penetration of smart speakers offers a low-cost way for sound-assisted residential power usage monitoring, this work aims to combine a smart speaker and a smart meter in a house to liberate the system from a high usage burden. However, it is still challenging to extract and leverage the consistent/complementary information (two types of relationships between acoustic and power features) from acoustic and power data without data labeling or prior knowledge. To this end, we design COMFORT, a cross-modality system for self-supervised power usage monitoring, including (i) a cross-modality learning component to automatically learn the consistent and complementary information, and (ii) a cross-modality inference component to utilize the consistent and complementary information. We implement and evaluate COMFORT with a self-collected dataset from six houses in 14 days, demonstrating that COMFORT finds the most appliances (98%), improves the appliance recognition performance in F-measure by at least 41.1%, and reduces the Mean Absolute Error (MAE) of energy disaggregation by at least 30.4% over other alternative solutions.
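The "consistent information" idea (paired acoustic and power observations of the same moment should agree) is commonly realized with a contrastive objective; the sketch below shows one such formulation under that assumption, not the COMFORT code:

```python
import torch
import torch.nn.functional as F

def cross_modal_consistency_loss(acoustic_emb, power_emb, temperature=0.1):
    """InfoNCE-style loss; row i of each modality comes from the same moment."""
    a = F.normalize(acoustic_emb, dim=1)
    p = F.normalize(power_emb, dim=1)
    logits = a @ p.t() / temperature    # similarity of every cross-modal pair
    targets = torch.arange(a.shape[0])  # true pairs sit on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# 16 time-aligned (acoustic, power) embedding pairs of width 64:
loss = cross_modal_consistency_loss(torch.randn(16, 64), torch.randn(16, 64))
```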
Citations: 0
Abacus Gestures
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610898
Md Ehtesham-Ul-Haque, Syed Masum Billah
Designing an extensive set of mid-air gestures that are both easy to learn and perform quickly presents a significant challenge. Further complicating this challenge is achieving high-accuracy detection of such gestures using commonly available hardware, like a 2D commodity camera. Previous work often proposed smaller, application-specific gesture sets, requiring specialized hardware and struggling with adaptability across diverse environments. Addressing these limitations, this paper introduces Abacus Gestures, a comprehensive collection of 100 mid-air gestures. Drawing on the metaphor of Finger Abacus counting, gestures are formed from various combinations of open and closed fingers, each assigned different values. We developed an algorithm using an off-the-shelf computer vision library capable of detecting these gestures from a 2D commodity camera feed with an accuracy exceeding 98% for palms facing the camera and 95% for palms facing the body. We assessed the detection accuracy, ease of learning, and usability of these gestures in a user study involving 20 participants. The study found that participants could learn Abacus Gestures within five minutes after executing just 15 gestures and could recall them after a four-month interval. Additionally, most participants developed motor memory for these gestures after performing 100 gestures. Most of the gestures were easy to execute with the designated finger combinations, and the flexibility in executing the gestures using multiple finger combinations further enhanced the usability. Based on these findings, we created a taxonomy that categorizes Abacus Gestures into five groups based on motor memory development and three difficulty levels according to their ease of execution. Finally, we provided design guidelines and proposed potential use cases for Abacus Gestures in the realm of mid-air interaction.
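One plausible reading of the finger-abacus metaphor, offered here as an assumption rather than the paper's published encoding, is that the thumb counts five and each other open finger counts one, so two hands enumerate exactly 100 values:

```python
# Per-hand values under the finger-abacus reading (assumed, see note above).
FINGER_VALUES = {"thumb": 5, "index": 1, "middle": 1, "ring": 1, "pinky": 1}

def hand_value(open_fingers):
    """open_fingers: names of fingers detected as open; yields 0..9 per hand."""
    return sum(FINGER_VALUES[f] for f in open_fingers)

def abacus_gesture(tens_hand, ones_hand):
    """Combine two hands into one of 100 gesture codes (0..99)."""
    return 10 * hand_value(tens_hand) + hand_value(ones_hand)

# Thumb+index open on the tens hand (6), index+middle on the ones hand (2):
print(abacus_gesture({"thumb", "index"}, {"index", "middle"}))  # -> 62
```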
Citations: 0
MicroCam
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610921
Yongquan Hu, Hui-Shyong Yeo, Mingyue Yuan, Haoran Fan, Don Samitha Elvitigala, Wen Hu, Aaron Quigley
The primary focus of this research is the discreet and subtle everyday contact interactions between mobile phones and their surrounding surfaces. Such interactions are anticipated to facilitate mobile context awareness, encompassing aspects such as dispensing medication updates, intelligently switching modes (e.g., silent mode), or initiating commands (e.g., deactivating an alarm). We introduce MicroCam, a contact-based sensing system that employs smartphone IMU data to detect the routine state of phone placement and utilizes a built-in microscope camera to capture intricate surface details. In particular, a natural dataset is collected to acquire authentic surface textures in situ for training and testing. Moreover, we optimize the deep neural network component of the algorithm, based on continual learning, to accurately discriminate between object categories (e.g., tables) and material constituents (e.g., wood). Experimental results highlight the superior accuracy, robustness and generalization of the proposed method. Lastly, we conducted a comprehensive discussion centered on our prototype, encompassing topics such as system performance and potential applications and scenarios.
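A hedged sketch of a plausible first step in such a system: deciding from IMU data that the phone is resting on a surface before the microscope camera is sampled. The window length and variance threshold below are assumptions:

```python
import numpy as np

def is_stationary(accel_window, threshold=0.05):
    """accel_window: (N, 3) accelerometer samples in g over ~1 s.
    Low variance on every axis suggests the phone is lying on a surface."""
    return bool(np.all(np.var(accel_window, axis=0) < threshold))

# Simulated still phone, screen up: gravity on z, tiny sensor noise.
window = np.random.normal([0.0, 0.0, 1.0], 0.01, size=(100, 3))
print(is_stationary(window))  # True; a handled phone would print False
```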
Citations: 1
PATCH
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610885
Juexing Wang, Guangjing Wang, Xiao Zhang, Li Liu, Huacheng Zeng, Li Xiao, Zhichao Cao, Lin Gu, Tianxing Li
Recent advancements in deep learning have shown that multimodal inference can be particularly useful in tasks like autonomous driving, human health, and production line monitoring. However, deploying state-of-the-art multimodal models in distributed IoT systems poses unique challenges since the sensor data from low-cost edge devices can get corrupted, lost, or delayed before reaching the cloud. These problems are magnified in the presence of asymmetric data generation rates from different sensor modalities, wireless network dynamics, or unpredictable sensor behavior, leading to either increased latency or degradation in inference accuracy, which could affect the normal operation of the system with severe consequences like human injury or car accident. In this paper, we propose PATCH, a framework of speculative inference to adapt to these complex scenarios. PATCH serves as a plug-in module in the existing multimodal models, and it enables speculative inference of these off-the-shelf deep learning models. PATCH consists of 1) a Masked-AutoEncoder-based cross-modality imputation module to impute missing data using partially-available sensor data, 2) a lightweight feature pair ranking module that effectively limits the searching space for the optimal imputation configuration with low computation overhead, and 3) a data alignment module that aligns multimodal heterogeneous data streams without using accurate timestamp or external synchronization mechanisms. We implement PATCH in nine popular multimodal models using five public datasets and one self-collected dataset. The experimental results show that PATCH achieves up to 13% mean accuracy improvement over the state-of-art method while only using 10% of training data and reducing the training overhead by 73% compared to the original cost of retraining the model.
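The masked-autoencoder imputation idea can be sketched in a few lines: zero out the features of a lost modality and train a network to reconstruct them from the modalities that survived. The architecture below is an illustrative assumption, not PATCH itself:

```python
import torch
import torch.nn as nn

class MaskedImputer(nn.Module):
    """Reconstructs a full multimodal feature vector from a masked one."""
    def __init__(self, dim=48):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 96), nn.ReLU(), nn.Linear(96, dim))

    def forward(self, x, observed_mask):
        # Zero out missing entries, then predict the complete vector.
        return self.net(x * observed_mask)

model = MaskedImputer()
x = torch.randn(4, 48)   # 4 samples, modality features concatenated
mask = torch.ones_like(x)
mask[:, 32:] = 0         # pretend the last modality was corrupted or lost
x_hat = model(x, mask)
# Training minimizes reconstruction error on the masked (missing) entries:
loss = ((x_hat - x)[mask == 0] ** 2).mean()
```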
Citations: 0
Can You Ear Me?
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610925
Dennis Stanke, Tim Duente, Kerem Can Demir, Michael Rohs
The earlobe is a well-known location for wearing jewelry, but might also be promising for electronic output, such as presenting notifications. This work elaborates the pros and cons of different notification channels for the earlobe. Notifications on the earlobe can be private (only noticeable by the wearer) as well as public (noticeable in the immediate vicinity in a given social situation). A user study with 18 participants showed that the reaction times for the private channels (Poke, Vibration, Private Sound, Electrotactile) were on average less than 1 s with an error rate (missed notifications) of less than 1 %. Thermal Warm and Cold took significantly longer and Cold was least reliable (26 % error rate). The participants preferred Electrotactile and Vibration. Among the public channels the recognition time did not differ significantly between Sound (738 ms) and LED (828 ms), but Display took much longer (3175 ms). At 22 % the error rate of Display was highest. The participants generally felt comfortable wearing notification devices on their earlobe. The results show that the earlobe indeed is a suitable location for wearable technology, if properly miniaturized, which is possible for Electrotactile and LED. We present application scenarios and discuss design considerations. A small field study in a fitness center demonstrates the suitability of the earlobe notification concept in a sports context.
Citations: 1
Headar
Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1145/3610900
Xiaoying Yang, Xue Wang, Gaofeng Dong, Zihan Yan, Mani Srivastava, Eiji Hayashi, Yang Zhang
Nodding and shaking one's head are intuitive and universal gestures in communication. As smartwatches become increasingly intelligent through advances in user activity sensing technologies, many use scenarios of smartwatches demand quick responses from users in confirmation dialogs, to accept or dismiss proposed actions. Such proposed actions include making emergency calls, taking service recommendations, and starting or stopping exercise timers. Head gestures in these scenarios could be preferable to touch interactions for being hands-free and easy to perform. We propose Headar to recognize these gestures on smartwatches using wearable millimeter wave sensing. We first surveyed head gestures to understand how they are performed in conversational settings. We then investigated positions and orientations to which users raise their smartwatches. Insights from these studies guided the implementation of Headar. Additionally, we conducted modeling and simulation to verify our sensing principle. We developed a real-time sensing and inference pipeline using contemporary deep learning techniques, and proved the feasibility of our proposed approach with a user study (n=15) and a live test (n=8). Our evaluation yielded an average accuracy of 84.0% in the user study across 9 classes including nod and shake as well as seven other signals -- still, speech, touch interaction, and four non-gestural head motions (i.e., head up, left, right, and down). Furthermore, we obtained an accuracy of 72.6% in the live test which reveals rich insights into the performance of our approach in various realistic conditions.
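As a back-of-envelope check on the sensing principle, the Doppler shift a millimeter-wave radar observes from motion at radial velocity v is 2v/λ; the 60 GHz carrier below is an assumed value for illustration:

```python
# Back-of-envelope sketch (assumed carrier frequency): the Doppler shift a
# 60 GHz radar would observe for a head moving toward the watch.
C = 3e8           # speed of light, m/s
F_CARRIER = 60e9  # assumed mmWave carrier frequency, Hz

def doppler_shift_hz(radial_velocity_mps):
    wavelength = C / F_CARRIER  # ~5 mm at 60 GHz
    return 2 * radial_velocity_mps / wavelength

print(doppler_shift_hz(0.2))  # a 0.2 m/s nod shifts the return by ~80 Hz
```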
Citations: 0