
Latest Publications from Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.

SeRaNDiP: Leveraging Inherent Sensor Random Noise for Differential Privacy Preservation in Wearable Community Sensing Applications
Pub Date : 2023-01-01 DOI: 10.1145/3596252
Ayanga Imesha Kumari Kalupahana, A. N. Balaji, X. Xiao, L. Peh
{"title":"SeRaNDiP: Leveraging Inherent Sensor Random Noise for Differential Privacy Preservation in Wearable Community Sensing Applications","authors":"Ayanga Imesha Kumari Kalupahana, A. N. Balaji, X. Xiao, L. Peh","doi":"10.1145/3596252","DOIUrl":"https://doi.org/10.1145/3596252","url":null,"abstract":"","PeriodicalId":20463,"journal":{"name":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75906840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TwinkleTwinkle: Interacting with Your Smart Devices by Eye Blink
Pub Date : 2023-01-01 DOI: 10.1145/3596238
Haiming Cheng, W. Lou, Yanni Yang, Yi-pu Chen, Xinyu Zhang
Recent years have witnessed a rapid boom of mobile devices, interwoven with the changes the epidemic has made to people's lives. Though a tremendous number of novel human-device interaction techniques have been put forward to serve various audiences and scenarios, limitations and inconveniences remain for people who have difficulty speaking or using their fingers/hands/arms, or who wear masks/glasses/gloves. To fill the gap of such interaction contexts beyond hands, voice, face, or mouth, in this work we take the first step and propose a novel Human-Computer Interaction (HCI) system, TwinkleTwinkle, which senses and recognizes eye blink patterns in a contact-free and training-free manner, leveraging ultrasound signals on commercial devices. TwinkleTwinkle first applies a phase-difference based approach to depict candidate eye blink motion profiles without removing any noise, then models intrinsic characteristics of blink motions through adaptive constraints to separate tiny patterns from interference in conditions where blink habits and involuntary movements vary between individuals. We propose a vote-based approach to obtain final patterns, designed to map to number combinations that are either self-defined or based on carriers such as ASCII code and Morse code, so that the interaction embeds seamlessly into normal and well-known language systems. We implement TwinkleTwinkle on smartphones with all methods realized in the time domain and conduct extensive evaluations in various settings. Results show that TwinkleTwinkle achieves about 91% accuracy in recognizing 23 blink patterns across different people.
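To illustrate the phase-difference idea in the abstract (this is a minimal sketch, not the authors' code), the Python snippet below recovers a motion profile from a reflected ultrasound tone via coherent I/Q demodulation; the carrier frequency, sampling rate, and filter window are assumed values.

```python
# Minimal sketch: motion profile from the phase of a reflected ultrasound tone.
# Assumes the speaker plays a continuous 20 kHz tone and `rx` is the mic signal.
import numpy as np

FS = 48_000      # assumed sampling rate (Hz)
F_TONE = 20_000  # assumed near-inaudible carrier (Hz)

def motion_profile(rx: np.ndarray) -> np.ndarray:
    """Demodulate rx coherently and return the per-sample phase difference,
    which tracks tiny path-length changes such as eyelid motion."""
    t = np.arange(len(rx)) / FS
    i = rx * np.cos(2 * np.pi * F_TONE * t)    # in-phase component
    q = rx * -np.sin(2 * np.pi * F_TONE * t)   # quadrature component
    win = np.ones(240) / 240                   # ~5 ms moving average removes 2*F_TONE term
    i_lp = np.convolve(i, win, mode="same")
    q_lp = np.convolve(q, win, mode="same")
    phase = np.unwrap(np.arctan2(q_lp, i_lp))
    return np.diff(phase)                      # phase difference per sample

# Synthetic check: a reflection whose path length oscillates at 2 Hz.
t = np.arange(FS) / FS
rx = np.cos(2 * np.pi * F_TONE * t + 0.3 * np.sin(2 * np.pi * 2 * t))
profile = motion_profile(rx)
```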
Citations: 0
VoiceCloak: Adversarial Example Enabled Voice De-Identification with Balanced Privacy and Utility
Pub Date : 2023-01-01 DOI: 10.1145/3596266
Meng Chen, Liwang Lu, Junhao Wang, Jiadi Yu, Ying Chen, Zhibo Wang, Zhongjie Ba, Feng Lin, Kui Ren
Faced with the threat of identity leakage during voice data publishing, users are caught in a privacy-utility dilemma when enjoying the utility of voice services. Existing machine-centric studies employ direct modification or text-based re-synthesis to de-identify users' voices, but cause inconsistent audibility for human participants in emerging online communication scenarios such as virtual meetings. In this paper, we propose a human-centric voice de-identification system, VoiceCloak, which uses adversarial examples to balance the privacy and utility of voice services. Instead of typical additive examples that induce perceivable distortions, we design a novel convolutional adversarial example that modulates perturbations into real-world room impulse responses. Benefiting from this, VoiceCloak can keep user identity from exposure by Automatic Speaker Identification (ASI) while retaining voice perceptual quality for non-intrusive de-identification. Moreover, VoiceCloak learns a compact speaker distribution through a conditional variational auto-encoder to synthesize diverse targets on demand. Guided by these pseudo targets, VoiceCloak constructs adversarial examples in an input-specific manner, enabling any-to-any identity transformation for robust de-identification. Experimental results show that VoiceCloak achieves over 92% and 84% successful de-identification on mainstream ASIs and commercial systems, respectively, with excellent voiceprint consistency, speech integrity, and audio quality.
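One way to picture a "convolutional adversarial example" is to optimize an impulse response rather than an additive waveform. The PyTorch sketch below is a minimal illustration under that reading, not the paper's implementation; `speaker_model` (any differentiable speaker-ID network taking a raw waveform), `rir` (a measured room impulse response), and the loss weight are hypothetical stand-ins.

```python
# Minimal sketch: craft an RIR-like impulse response h so that x convolved
# with h is misclassified by a speaker-ID model while h stays close to a
# real room impulse response (so it sounds like natural reverberation).
import torch
import torch.nn.functional as F

def craft_rir_adversarial(x, rir, speaker_model, true_id, steps=100, lr=1e-3):
    """x: 1-D waveform tensor; rir: 1-D impulse response tensor;
    true_id: speaker label as a LongTensor of shape [1]."""
    h = rir.clone().requires_grad_(True)
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(steps):
        # True 1-D convolution of the waveform with the candidate response.
        adv = F.conv1d(x.view(1, 1, -1), h.flip(0).view(1, 1, -1),
                       padding=h.numel() - 1)[0, 0, :x.numel()]
        logits = speaker_model(adv)
        # Push away from the true identity, stay near the measured RIR.
        loss = -F.cross_entropy(logits.unsqueeze(0), true_id) \
               + 10.0 * F.mse_loss(h, rir)   # illustrative weight
        opt.zero_grad(); loss.backward(); opt.step()
    return h.detach()
```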
Citations: 1
UQRCom: Underwater Wireless Communication Based on QR Code
Pub Date : 2022-12-21 DOI: 10.1145/3571588
Xinyang Liu, Lei Wang, Jie Xiong, Chi Lin, Xinhua Gao, Jiale Li, Yibo Wang
While communication in the air has become the norm with the pervasiveness of WiFi and LTE infrastructure, underwater communication still faces many challenges. Even today, the main communication method for divers underwater is hand gestures. Gesture-based communication suffers from multiple issues, including the limited amount of information it conveys and its ambiguity. On the other hand, traditional RF-based wireless communication technologies, which have achieved great success in the air, can hardly work underwater due to extremely severe attenuation. In this paper, we propose UQRCom, an underwater wireless communication system designed for divers. We design a UQR code which stems from the QR code and addresses the unique challenges of underwater environments, such as color cast, contrast reduction, and light interference. With both real-world experiments and simulation, we show that the proposed system achieves robust real-time communication underwater. For UQR codes with a size of 19.8 cm x 19.8 cm, the communication distance can reach 11.2 m, and the achieved data rate (6.9 kbps ~ 13.6 kbps) is high enough for voice communication between divers.
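For intuition, the sketch below applies two standard corrections for the underwater artifacts named in the abstract (gray-world white balance for color cast, CLAHE for contrast loss) before handing the frame to OpenCV's stock QR decoder. This is an assumed baseline pipeline, not the paper's custom UQR code or decoder.

```python
# Minimal sketch: compensate underwater color cast and contrast reduction,
# then decode with a standard QR detector.
import cv2
import numpy as np

def decode_underwater_qr(frame_bgr: np.ndarray) -> str:
    # Gray-world white balance counters the blue/green color cast.
    means = frame_bgr.reshape(-1, 3).mean(axis=0)
    balanced = np.clip(frame_bgr * (means.mean() / means), 0, 255).astype(np.uint8)
    # CLAHE on the luminance channel restores local contrast.
    lab = cv2.cvtColor(balanced, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    text, _, _ = cv2.QRCodeDetector().detectAndDecode(enhanced)
    return text
```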
Citations: 3
SmokeMon: Unobtrusive Extraction of Smoking Topography Using Wearable Energy-Efficient Thermal.
Pub Date : 2022-12-01 Epub Date: 2023-01-11 DOI: 10.1145/3569460
Rawan Alharbi, Soroush Shahi, Stefany Cruz, Lingfeng Li, Sougata Sen, Mahdi Pedram, Christopher Romano, Josiah Hester, Aggelos K Katsaggelos, Nabil Alshurafa

Smoking is the leading cause of preventable death worldwide. Cigarette smoke includes thousands of chemicals that are harmful and cause tobacco-related diseases. To date, the causality between human exposure to specific compounds and the harmful effects is unknown. A first step in closing the gap in knowledge has been measuring smoking topography, or how the smoker smokes the cigarette (puffs, puff volume, and duration). However, current gold-standard approaches to smoking topography involve expensive, bulky, and obtrusive sensor devices, creating unnatural smoking behavior and preventing their potential for real-time interventions in the wild. Although motion-based wearable sensors and their corresponding machine-learned models have shown promise in unobtrusively tracking smoking gestures, they are notorious for confounding smoking with other similar hand-to-mouth gestures such as eating and drinking. In this paper, we present SmokeMon, a chest-worn thermal-sensing wearable system that can capture spatial, temporal, and thermal information around the wearer and cigarette all day to unobtrusively and passively detect smoking events. We also developed a deep learning-based framework to extract puffs and smoking topography. We evaluate SmokeMon in both controlled and free-living experiments with a total of 19 participants, more than 110 hours of data, and 115 smoking sessions achieving an F1-score of 0.9 for puff detection in the laboratory and 0.8 in the wild. By providing SmokeMon as an open platform, we provide measurement of smoking topography in free-living settings to enable testing of smoking topography in the real world, with potential to facilitate timely smoking cessation interventions.
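The paper's pipeline is deep-learning based; as a simpler stand-in for the topography measures it names (puff count, duration, inter-puff interval), the sketch below extracts them from a hypothetical 1-D thermal intensity trace with peak detection. The frame rate and thresholds are assumed values.

```python
# Minimal sketch: puff topography from a per-frame thermal reading near the
# cigarette coal, sampled at FPS frames per second. A puff appears as a
# temperature surge; thresholds here are illustrative, not the paper's.
import numpy as np
from scipy.signal import find_peaks

FPS = 8  # assumed thermal camera frame rate

def puff_topography(thermal: np.ndarray) -> dict:
    baseline = np.median(thermal)
    peaks, props = find_peaks(thermal,
                              height=baseline + 2.0,  # surge above baseline
                              prominence=1.0,
                              distance=2 * FPS,       # >= 2 s between puffs
                              width=1)                # ask for peak widths
    return {
        "puff_count": len(peaks),
        "puff_durations_s": props["widths"] / FPS,
        "inter_puff_intervals_s": np.diff(peaks) / FPS,
    }
```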

Citations: 2
MotorBeat: Acoustic Communication for Home Appliances via Variable Pulse Width Modulation
Pub Date : 2022-09-30 DOI: 10.1145/3517255
Weiguo Wang, Jinming Li, Yuan He, Xiuzhen Guo, Yunhao Liu
More and more home appliances are now connected to the Internet, enabling various smart home applications. However, a critical problem that may impede the further development of the smart home is overlooked: small appliances account for the majority of home appliances, yet they receive little attention and most of them are cut off from the Internet. To fill this gap, we propose MotorBeat, an acoustic communication approach that connects small appliances to a smart speaker. Our key idea is to exploit direct current (DC) motors, which are common components of small appliances, to transmit acoustic messages. We design a novel scheme named Variable Pulse Width Modulation (V-PWM) to drive DC motors. MotorBeat achieves the following 3C goals: (1) Comfortable to hear, (2) Compatible with multiple motor modes, and (3) Concurrent transmission. We implement MotorBeat with commercial devices and evaluate its performance on three small appliances and ten DC motors. The results show that the communication range can be up to 10 m.
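A minimal sketch of the V-PWM idea follows, with illustrative parameters rather than the paper's: each data bit selects the duty cycle of the motor drive waveform, which in turn shifts the acoustic signature the spinning motor emits.

```python
# Minimal sketch: generate a PWM motor-drive signal whose duty cycle encodes
# data bits. All rates and duty cycles below are assumptions for illustration.
import numpy as np

FS = 48_000          # drive-signal sample rate (Hz)
PWM_FREQ = 200       # PWM period rate (Hz)
SYMBOL_PERIODS = 20  # PWM periods per data bit (100 ms per bit)

def vpwm_signal(bits, duty0=0.4, duty1=0.6) -> np.ndarray:
    """Return a 0/1 drive waveform; bit value selects the pulse width."""
    period = FS // PWM_FREQ
    out = []
    for b in bits:
        duty = duty1 if b else duty0
        high = int(period * duty)
        cycle = np.concatenate([np.ones(high), np.zeros(period - high)])
        out.append(np.tile(cycle, SYMBOL_PERIODS))
    return np.concatenate(out)

drive = vpwm_signal([1, 0, 1, 1, 0])  # feed to the motor driver / H-bridge
```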
Citations: 1
MuteIt: Jaw Motion Based Unvoiced Command Recognition Using Earable
Pub Date : 2022-09-06 DOI: 10.1145/3550281
Tanmay Srivastava, Prerna Khanna, Shijia Pan, Phuc Nguyen, S. Jain
In this paper, we present MuteIt, an ear-worn system for recognizing unvoiced human commands. MuteIt presents an intuitive alternative to voice-based interactions, which can be unreliable in noisy environments, disruptive to those around us, and compromising to our privacy. We propose a twin-IMU setup to track the user's jaw motion and cancel motion artifacts caused by head and body movements. MuteIt processes jaw motion during word articulation to break each word signal into its constituent syllables, and further each syllable into phonemes (vowels, visemes, and plosives). Recognizing unvoiced commands by only tracking jaw motion is challenging: as a secondary articulator, jaw motion alone is not distinctive enough for unvoiced speech recognition. MuteIt combines IMU data with the anatomy of jaw movement as well as principles from linguistics to model word recognition as an estimation problem. Rather than employing machine learning to train a word classifier, we reconstruct each word as a sequence of phonemes using a bi-directional particle filter, enabling the system to be easily scaled to a large set of words. We validate MuteIt with 20 subjects with diverse speech accents recognizing 100 common command words. MuteIt achieves a mean word recognition accuracy of 94.8% in noise-free conditions. When compared with common voice assistants, MuteIt outperforms them in noisy acoustic environments, achieving higher than 90% recognition accuracy. Even in the presence of motion artifacts, such as head movement, walking, and riding in a moving vehicle, MuteIt achieves a mean word recognition accuracy of 91% over all scenarios.
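A minimal sketch of the twin-IMU cancellation idea, assuming pre-aligned axes and lockstep sampling (both assumptions, as are all names): the head/body motion captured by a reference IMU is removed from the jaw-side stream, here with a per-axis least-squares gain to absorb mounting differences.

```python
# Minimal sketch: subtract reference-IMU motion from the jaw-side IMU so that
# only jaw articulation remains. Inputs are (T, 3) gyroscope streams.
import numpy as np

def cancel_motion_artifacts(jaw_imu: np.ndarray,
                            ref_imu: np.ndarray) -> np.ndarray:
    cleaned = np.empty_like(jaw_imu)
    for ax in range(3):
        # Least-squares gain projecting reference motion onto the jaw stream.
        g = np.dot(ref_imu[:, ax], jaw_imu[:, ax]) / \
            (np.dot(ref_imu[:, ax], ref_imu[:, ax]) + 1e-9)
        cleaned[:, ax] = jaw_imu[:, ax] - g * ref_imu[:, ax]
    return cleaned
```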
Citations: 10
TransRisk: Mobility Privacy Risk Prediction based on Transferred Knowledge
Pub Date : 2022-07-04 DOI: 10.1145/3534581
Xiaoyang Xie, Zhiqing Hong, Zhou Qin, Zhihan Fang, Yuan Tian, Desheng Zhang
Human mobility data may lead to privacy concerns because a resident can be re-identified from these data by malicious attacks, even with anonymized user IDs. For an urban service collecting mobility data, an efficient privacy risk assessment is essential for the privacy protection of its users. Existing methods enable efficient privacy risk assessments for service operators, who can quickly adjust the quality of sensing data to lower privacy risk by using prediction models. However, most of these prediction models require massive training data, which has to be collected and stored first. Such large-scale, long-term training data collection contradicts the purpose of privacy risk prediction for new urban services, which is to ensure that the quality of high-risk human mobility data is adjusted to low privacy risk within a short time. To solve this problem, we present a privacy risk prediction model based on transfer learning, i.e., TransRisk, which predicts the privacy risk for a new target urban service from (1) small-scale, short-term data of its own, and (2) knowledge learned from data of other existing urban services. We envision the application of TransRisk on a traffic camera surveillance system and evaluate it with real-world mobility datasets already collected in the Chinese city of Shenzhen, including four source datasets, i.e., (i) a call detail record dataset (CDR) with 1.2 million users; (ii) a cellphone connection dataset (CONN) with 1.2 million users; (iii) a vehicular GPS dataset (Vehicles) with 10 thousand vehicles; and (iv) an electronic toll collection transaction dataset (ETC) with 156 thousand users, plus a target dataset, i.e., a camera dataset (Camera) with 248 cameras. The results show that our model outperforms state-of-the-art methods in terms of RMSE and MAE. Our work also provides valuable insights and implications for mobility data privacy risk assessment for both current and future large-scale services.
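A minimal transfer-learning sketch in the spirit of the approach (not the paper's architecture): pretrain a risk regressor on abundant source-service data, then fine-tune only the output layer on the small target-service sample. All tensors, dimensions, and epoch counts are placeholders.

```python
# Minimal sketch: pretrain on a large source mobility dataset, fine-tune the
# head on small-scale target data. Random tensors stand in for real features.
import torch
import torch.nn as nn

def make_model(in_dim=16):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

def fit(model, x, y, params, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Pretrain on abundant source-service data (e.g., CDR features -> risk score).
src_x, src_y = torch.randn(5000, 16), torch.rand(5000, 1)   # placeholders
model = make_model()
fit(model, src_x, src_y, model.parameters())

# Fine-tune only the last layer on the small target-service sample.
tgt_x, tgt_y = torch.randn(200, 16), torch.rand(200, 1)     # placeholders
fit(model, tgt_x, tgt_y, model[-1].parameters(), epochs=200)
```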
Citations: 0
FocalPoint: Adaptive Direct Manipulation for Selecting Small 3D Virtual Objects
Pub Date : 2022-03-27 DOI: 10.1145/3580856
Jiaju Ma, Jing Qian, Tongyu Zhou, Jeffson Huang
We propose FocalPoint, a direct manipulation technique in smartphone augmented reality (AR) for selecting small densely-packed objects within reach, a fundamental yet challenging task in AR due to the required accuracy and precision. FocalPoint adaptively and continuously updates a cylindrical geometry for selection disambiguation based on the user's selection history and hand movements. This design is informed by a preliminary study which revealed that participants preferred selecting objects appearing in particular regions of the screen. We evaluate FocalPoint against a baseline direct manipulation technique in a 12-participant study with two tasks: selecting a 3 mm wide target from a pile of cubes and virtually decorating a house with LEGO pieces. FocalPoint was three times as accurate for selecting the correct object and 5.5 seconds faster on average; participants using FocalPoint decorated their houses more and were more satisfied with the result. We further demonstrate the finer control enabled by FocalPoint in example applications of robot repair, 3D modeling, and neural network visualizations.
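A minimal sketch of cylinder-based disambiguation as described above: candidates are scored by distance to a selection cylinder whose radius adapts to the user's recent selection error. The scoring rule, EMA adaptation, and constants are assumptions, not the paper's actual model.

```python
# Minimal sketch: pick the candidate nearest the axis of an adaptive
# selection cylinder anchored at the fingertip ray.
import numpy as np

class SelectionCylinder:
    def __init__(self, radius=0.02, alpha=0.3):
        self.radius = radius  # meters (illustrative starting value)
        self.alpha = alpha    # EMA rate for adaptation

    def pick(self, origin, direction, candidates):
        """origin/direction define the cylinder axis; candidates is an
        (N, 3) array of object centers. Returns an index or None."""
        d = direction / np.linalg.norm(direction)
        rel = candidates - origin
        along = rel @ d                                   # axial distance
        radial = np.linalg.norm(rel - np.outer(along, d), axis=1)
        inside = (radial < self.radius) & (along > 0)
        if not inside.any():
            return None
        return int(np.argmin(np.where(inside, radial, np.inf)))

    def update(self, miss_distance):
        # Grow/shrink the radius toward the user's recent selection error.
        self.radius = (1 - self.alpha) * self.radius + self.alpha * miss_distance
```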
Citations: 1
ForceSticker: Wireless, Batteryless, Thin & Flexible Force Sensors
Pub Date : 2022-03-27 DOI: 10.1145/3580793
Agrim Gupta, D. Park, Shayaun Bashar, C. Girerd, Nagarjun Bhat, Siddhi Mundhra, Tania. K. Morimoto, Dinesh Bharadia
Any two objects in contact with each other exert a force that could be simply due to gravity or mechanical contact, such as a ubiquitous object exerting weight on a platform or the contact between two bones at our knee joints. The ideal way of capturing these contact forces is a flexible force sensor that conforms well to the contact surface. Further, the sensor should be thin enough not to affect the contact physics between the two objects. In this paper, we showcase the design of such thin, flexible, sticker-like force sensors, dubbed 'ForceStickers', ushering in a new era of miniaturized force sensors. ForceSticker achieves this miniaturization by creating a new class of capacitive force sensors that avoid both batteries and wires. The wireless and batteryless readout is enabled via hybrid analog-digital backscatter, by piggybacking analog sensor data onto a digitally identified RFID link. Hence, ForceSticker finds natural applications in space- and battery-constrained in-vivo use cases, like force-sensor-backed orthopaedic implants and surgical robots. Further, ForceSticker finds applications in ubiquity-constrained scenarios. For example, these force-stickers enable cheap, digitally readable barcodes that can provide weight information, with possible use cases in warehouse integrity checks. To meet these varied application scenarios, we showcase the general framework behind the design of ForceSticker. With the ForceSticker framework, we design 4mm*2mm sensor prototypes with two different polymer layers, ecoflex and neoprene rubber, having force ranges of 0-6N and 0-40N respectively, with readout errors of 0.25 N and 1.6 N (<5% of max force). Further, we stress test ForceSticker with >10,000 force applications without significant error degradation. We also showcase two case studies of possible ForceSticker applications: sensing forces from a toy knee-joint model and integrity checks of warehouse packaging.
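For intuition about the capacitive sensing principle, the sketch below models the sensor as a parallel-plate capacitor with a compressible polymer dielectric: applied force thins the gap and raises capacitance, so force can be inverted from a capacitance reading. The material constants are illustrative placeholders, not the paper's calibration.

```python
# Minimal sketch: parallel-plate capacitance under load and its inverse.
# A linear spring model for the polymer is an assumption for illustration.
EPS0 = 8.854e-12      # vacuum permittivity (F/m)
EPS_R = 2.8           # assumed relative permittivity of the polymer
AREA = 4e-3 * 2e-3    # 4 mm x 2 mm plate area (m^2)
D0 = 0.5e-3           # rest dielectric thickness (m), assumed
K = 2.0e5             # assumed polymer stiffness (N/m)

def capacitance(force_n: float) -> float:
    d = D0 - force_n / K              # plate gap shrinks under load
    return EPS0 * EPS_R * AREA / d

def force_from_capacitance(c_farads: float) -> float:
    d = EPS0 * EPS_R * AREA / c_farads
    return K * (D0 - d)

print(force_from_capacitance(capacitance(3.0)))  # round-trips to ~3.0 N
```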
Citations: 0