
Latest Publications in ACM Transactions on Internet of Things

MFD: Multi-object Frequency Feature Recognition and State Detection Based on RFID-single Tag
IF 2.7 Pub Date: 2023-08-24 DOI: 10.1145/3615665
Biaokai Zhu, Zejiao Yang, Yupeng Jia, Shengxin Chen, Jie Song, Sanman Liu, P. Li, Feng Li, Deng-ao Li
Vibration is a normal phenomenon during the operation of machinery and is very common in industrial systems. How to turn fine-grained vibration perception into a visual representation, and how to use that information to predict mechanical failures and reduce property losses, motivates this work. In this paper, the phase information generated by an RFID tag is processed and analyzed, and MFD, a real-time vibration monitoring and fault discrimination system, is proposed. MFD extracts phase information from the raw RF signal and converts it into a Markov transition map, introducing white Gaussian noise and a low-pass filter for denoising. To accurately predict machinery failures, deep learning and machine learning models are introduced to compute the accuracy of failure analysis, realizing real-time monitoring and fault judgment. The test results show that the average recognition accuracy of vibration reaches 96.07%, and the average recognition accuracies for forward rotation, reverse rotation, oil spill, and screw loosening of motor equipment during long-term operation reach 98.53%, 99.44%, 97.87%, and 99.91%, respectively, with high robustness.
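The pipeline the abstract describes — denoise the tag phase, then encode it as a Markov transition map — can be pictured with a short sketch. This is not the authors' code; the sampling rate, cutoff frequency, and bin count are illustrative assumptions.

```python
# A minimal sketch of turning an RFID tag phase series into a Markov transition
# map after low-pass denoising, as the abstract describes. The sampling rate,
# cutoff frequency, and number of bins are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def phase_to_markov_map(phase, fs=200.0, cutoff=20.0, n_bins=16):
    """Denoise a phase time series and build an n_bins x n_bins transition matrix."""
    # Low-pass filter to suppress high-frequency noise in the unwrapped phase.
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    smooth = filtfilt(b, a, np.unwrap(phase))
    # Quantize each sample into one of n_bins amplitude states.
    edges = np.linspace(smooth.min(), smooth.max(), n_bins + 1)
    states = np.clip(np.digitize(smooth, edges) - 1, 0, n_bins - 1)
    # Count state-to-state transitions and normalize rows into probabilities.
    M = np.zeros((n_bins, n_bins))
    for s, t in zip(states[:-1], states[1:]):
        M[s, t] += 1
    row_sums = M.sum(axis=1, keepdims=True)
    return np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)

# Example: a synthetic 5 Hz vibration component embedded in noise.
t = np.arange(0, 2, 1 / 200.0)
demo_phase = 0.5 * np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)
mtm = phase_to_markov_map(demo_phase)  # image-like input for a downstream classifier
```

The resulting matrix can then serve as an image-like input to the deep/machine learning classifiers the abstract mentions.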
Citations: 0
mmHSV: In-Air Handwritten Signature Verification via Millimeter-wave Radar
IF 2.7 Pub Date: 2023-08-12 DOI: 10.1145/3614443
Wanqing Li, Tongtong He, Nan Jing, Lin Wang
Electronic signatures are widely used in financial services, telecommuting, and identity authentication. Offline electronic signatures are vulnerable to copy or replay attacks. Contact-based online electronic signatures are constrained by intermediary devices such as handwriting pads and may threaten users' health. Considering the combination of hand-shape features and writing-process features to form electronic signatures, this paper proposes an in-air handwritten signature verification system based on millimeter-wave (mmWave) radar, named mmHSV. First, the biometrics of the handwritten signature process are modeled, and phase-dependent biometric and behavioral features are extracted from the mmWave radar mixture signal. Second, a handwritten feature recognition network based on few-sample learning is presented to fuse multi-dimensional features and determine user legitimacy. Finally, mmHSV is implemented and evaluated with commercial mmWave devices under different scenarios and attack modes. Experimental results show that mmHSV achieves accurate, efficient, robust, and scalable handwritten signature verification: the area under the curve (AUC) is 98.96%, the false acceptance rate (FAR) at a fixed threshold is 5.1%, and the AUC for untrained users is 97.79%.
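The "few-sample learning" verification step can be pictured as prototype matching: a handful of enrollment samples define a user template, and a query signature is accepted if its feature vector lies close enough. The sketch below is a simplified stand-in, assuming generic fused feature vectors and an illustrative threshold, not mmHSV's actual network.

```python
# A minimal sketch of few-sample verification: average a few enrollment feature
# vectors into a per-user prototype and accept a query by cosine similarity.
# The feature dimensionality and threshold are illustrative assumptions.
import numpy as np

def enroll(support_features):
    """Average a few enrollment feature vectors into a user prototype."""
    X = np.asarray(support_features, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # L2-normalize each sample
    return X.mean(axis=0)

def verify(prototype, query, threshold=0.85):
    """Accept the signature if cosine similarity to the prototype is high enough."""
    q = query / np.linalg.norm(query)
    p = prototype / np.linalg.norm(prototype)
    return float(q @ p) >= threshold

# Example with random 128-d vectors standing in for fused radar features.
rng = np.random.default_rng(0)
proto = enroll(rng.normal(size=(5, 128)))
print(verify(proto, rng.normal(size=128)))
```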
Citations: 0
mmDrive: Fine-Grained Fatigue Driving Detection Using mmWave Radar
IF 2.7 Pub Date: 2023-08-10 DOI: 10.1145/3614437
Juncen Zhu, Jiannong Cao, Yanni Yang, Wei Ren, Huizi Han
Early detection of fatigue driving is pivotal for the safety of drivers and pedestrians. Traditional approaches mainly employ cameras and wearable sensors to detect fatigue features, which are intrusive to drivers. Recent advances in radio frequency (RF) sensing enable non-intrusive fatigue feature detection from the signal reflected by the driver's body. However, existing RF-based solutions only detect partial or coarse-grained fatigue features, which reduces detection accuracy. To tackle these limitations, we propose a mmWave-based fatigue driving detection system, called mmDrive, which can detect multiple fine-grained fatigue features from different body parts. However, accurately detecting various fatigue features during driving poses practical challenges. Specifically, normal driving activities and the driver's involuntary facial movements inevitably interfere with fatigue features. Thus, we exploit the unique geometric and behavioral characteristics of fatigue features and design effective signal processing methods to remove noise from fatigue-irrelevant activities. Based on the detected fatigue features, we further develop a fatigue determination algorithm to decide the driver's fatigue state. Extensive experiments in both simulated and real driving environments show that the average accuracy for detecting nodding and yawning features is about 96%, and the average errors for estimating eye blink, respiration, and heartbeat rates are around 2.21 bpm, 0.54 bpm, and 2.52 bpm, respectively. The accuracy of the proposed fatigue determination algorithm reaches 97.63%.
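A fatigue determination step of the kind described — fusing several detected cues into a binary decision — might look like the following sketch; the cue thresholds and majority-vote rule are illustrative assumptions, not mmDrive's actual algorithm.

```python
# A minimal sketch of a rule-based fatigue decision over per-minute cues
# (nod count, yawn count, blink rate); all thresholds are assumed values.
from dataclasses import dataclass

@dataclass
class FatigueFeatures:
    nods_per_min: float
    yawns_per_min: float
    blinks_per_min: float

def is_fatigued(f: FatigueFeatures) -> bool:
    score = 0
    if f.nods_per_min >= 2:    score += 1   # frequent nodding
    if f.yawns_per_min >= 1:   score += 1   # yawning observed
    if f.blinks_per_min >= 25: score += 1   # elevated blink rate
    return score >= 2                        # majority vote over the cues

print(is_fatigued(FatigueFeatures(3, 1, 18)))  # -> True
```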
Citations: 0
ViWise: Fusing Visual and Wireless Sensing Data for Trajectory Relationship Recognition
IF 2.7 Pub Date: 2023-08-10 DOI: 10.1145/3614441
Fang-Jing Wu, Sheng-Wun Lai, Sok-Ian Sou
People usually form a social structure (e.g., a leader-follower, companion, or independent group) for better interaction and thus share similar perceptions of the visible scenes and invisible wireless signals encountered while moving. Many mobility-driven applications pay close attention to recognizing trajectory relationships among people. This work models visual and wireless data to quantify the trajectory similarity between a pair of users. We design a visual and wireless sensor fusion system, called ViWise, which incorporates first-person video frames collected by a wearable visual device and wireless packets broadcast by a personal mobile device to recognize finer-grained trajectory relationships within a mobility group. When people take similar trajectories, they usually share similar visual scenes, and their wireless packets observed by ambient wireless base stations (called wireless scanners in this work) usually contain similar patterns. We model the visual characteristics of the physical objects seen by a user from two perspectives: micro-scale image structure with pixel-wise features and macro-scale semantic context. On the other hand, we model the characteristics of wireless packets based on the wireless scanners encountered along the user's trajectory. Given two users' trajectories, the trajectory characteristics behind the visible video frames and invisible wireless packets are fused to compute a visual-wireless data similarity that quantifies the correlation between the trajectories. We exploit the modeled visual-wireless data similarity to recognize the social structure within user trajectories. Comprehensive experimental results in indoor and outdoor environments show that ViWise is robust in trajectory relationship recognition, with an accuracy above 90%.
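The core idea of fusing visual and wireless similarity can be sketched as a weighted combination of an embedding-based visual score and a Jaccard overlap of the wireless scanners observed along two trajectories; the fusion weight and feature representation below are illustrative assumptions rather than ViWise's trained model.

```python
# A minimal sketch of visual-wireless trajectory similarity: cosine similarity
# of per-trajectory visual embeddings fused with Jaccard overlap of observed
# scanner sets. The weight alpha and the embeddings are assumed for illustration.
import numpy as np

def wireless_similarity(scanners_a, scanners_b):
    a, b = set(scanners_a), set(scanners_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def visual_similarity(feats_a, feats_b):
    """Cosine similarity of per-trajectory visual embeddings (e.g., averaged frame features)."""
    va, vb = np.asarray(feats_a, float), np.asarray(feats_b, float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def trajectory_similarity(feats_a, feats_b, scanners_a, scanners_b, alpha=0.6):
    return alpha * visual_similarity(feats_a, feats_b) + \
           (1 - alpha) * wireless_similarity(scanners_a, scanners_b)

sim = trajectory_similarity([0.2, 0.9, 0.1], [0.25, 0.85, 0.05],
                            ["ap1", "ap2", "ap3"], ["ap2", "ap3", "ap4"])
print(sim)  # high values suggest a companion or leader-follower relationship
```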
Citations: 0
UltraSnoop: Placement-agnostic Keystroke Snooping via Smartphone-based Ultrasonic Sonar
IF 2.7 Pub Date: 2023-08-10 DOI: 10.1145/3614440
Yanchao Zhao, Yiming Zhao, Si Li, Hao Han, Linfu Xie
Keystroke snooping is an effective way to steal sensitive information from victims. Recent research on acoustic-emanation-based techniques has made such attacks far more accessible to non-professional adversaries. However, existing approaches either require multiple smartphones or require specific placement of the smartphone relative to the keyboard, which tremendously restricts the application scenarios. In this paper, we propose UltraSnoop, a training-free, transferable, and placement-agnostic scheme that infers a user's input using a single smartphone placed anywhere within the range covered by its microphone and speaker. The innovation of UltraSnoop is an ultrasonic anchor-keystroke positioning method and an MFCC clustering algorithm, whose combination can infer the relative position between the smartphone and the keyboard. Together with the keystroke TDoA, our method can infer keystrokes and even gradually improves in accuracy as the snooping proceeds. Our real-world experiments show that UltraSnoop achieves more than 85% top-3 snooping accuracy when the smartphone is placed within 30-60 cm of the keyboard.
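The keystroke TDoA mentioned above is typically obtained from the lag of a cross-correlation peak between two recorded channels. The sketch below shows that generic estimator; the two-channel setup and sampling rate are illustrative assumptions and do not reproduce UltraSnoop's ultrasonic sonar pipeline.

```python
# A minimal sketch of estimating a time difference of arrival (TDoA) between two
# audio channels via the cross-correlation peak; the sampling rate and the
# two-channel setup are assumed for illustration only.
import numpy as np

def tdoa_seconds(ch1, ch2, fs=48000):
    """Return the lag (s) of ch2 relative to ch1 at the cross-correlation peak."""
    ch1 = ch1 - np.mean(ch1)
    ch2 = ch2 - np.mean(ch2)
    corr = np.correlate(ch2, ch1, mode="full")
    lag = np.argmax(corr) - (len(ch1) - 1)      # in samples; positive => ch2 lags ch1
    return lag / fs

# Example: the same keystroke click observed 20 samples later on the second channel.
fs = 48000
click = np.exp(-np.arange(200) / 20.0) * np.random.default_rng(1).normal(size=200)
ch1 = np.concatenate([np.zeros(100), click, np.zeros(300)])
ch2 = np.concatenate([np.zeros(120), click, np.zeros(280)])
print(tdoa_seconds(ch1, ch2, fs))  # ~20/48000 s
```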
Citations: 0
I am an Earphone and I can Hear my User's Face: Facial Landmark Tracking using Smart Earphones
IF 2.7 Pub Date: 2023-08-09 DOI: 10.1145/3614438
Shijia Zhang, Taiting Lu, Hao Zhou, Yilin Liu, Runze Liu, Mahanth K. Gowda
This paper presents EARFace, a system that shows the feasibility of tracking facial landmarks for 3D facial reconstruction using in-ear acoustic sensors embedded within smart earphones. This enables a number of applications in facial expression tracking, user interfaces, AR/VR, affective computing, accessibility, and beyond. While conventional vision-based solutions break down under poor lighting and occlusions and also raise privacy concerns, earphone platforms are robust to ambient conditions while preserving privacy. In contrast to prior work on earable platforms that performs outer-ear sensing for facial motion tracking, EARFace shows the feasibility of completely in-ear sensing with a natural earphone form factor, thus enhancing wearing comfort. The core intuition exploited by EARFace is that the shape of the ear canal changes as facial muscles move during facial motion. EARFace tracks these changes in ear canal shape by measuring the ultrasonic channel frequency response (CFR) of the inner ear, ultimately tracking the facial motion. A transformer-based machine learning (ML) model is designed to exploit spectral and temporal relationships in the ultrasonic CFR data and predicts the user's facial landmarks with an accuracy of 1.83 mm. Using these predicted landmarks, a 3D graphical model of the face that replicates the user's precise facial motion is then reconstructed. Domain adaptation is further performed by adapting layer weights with a group-wise, differential learning rate, which decreases the training overhead of EARFace. The transformer-based ML model runs on smartphone devices with a processing latency of 13 ms and an overall low power consumption profile. Finally, usability studies indicate higher wearing comfort for EARFace's earphone platform compared with alternative form factors.
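Measuring an ultrasonic channel frequency response amounts to comparing the spectrum of a received probe frame with the transmitted one. The following sketch illustrates that idea with an assumed chirp probe, sampling rate, and band; it is not EARFace's implementation.

```python
# A minimal sketch of estimating a channel frequency response (CFR) for an
# ultrasonic probe: divide the received spectrum by the transmitted spectrum in
# the probing band. Probe design, frame length, and band edges are assumptions.
import numpy as np

def channel_frequency_response(tx_frame, rx_frame, fs=48000, band=(16000, 22000)):
    """Return (frequencies, |CFR|) restricted to the ultrasonic probing band."""
    n = len(tx_frame)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    Tx = np.fft.rfft(tx_frame)
    Rx = np.fft.rfft(rx_frame)
    eps = 1e-12                                  # avoid division by near-zero bins
    cfr = np.abs(Rx) / (np.abs(Tx) + eps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask], cfr[mask]

# Example: a 16-22 kHz chirp attenuated and delayed by a toy "ear canal" channel.
fs, dur = 48000, 0.02
t = np.arange(int(fs * dur)) / fs
tx = np.sin(2 * np.pi * (16000 * t + (22000 - 16000) / (2 * dur) * t**2))
rx = 0.6 * np.roll(tx, 5) + 0.01 * np.random.default_rng(2).normal(size=t.size)
f, h = channel_frequency_response(tx, rx, fs)  # CFR samples fed to a classifier
```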
Citations: 0
airBP: Monitor Your Blood Pressure with Millimeter-Wave in the Air
IF 2.7 Pub Date: 2023-08-09 DOI: 10.1145/3614439
Yumeng Liang, Anfu Zhou, Xinzhe Wen, Wei Huang, Pu Shi, Lingyu Pu, Huanhuan Zhang, Huadong Ma
Blood pressure (BP), an important vital sign for assessing human health, should ideally be monitored conveniently. Existing BP monitoring methods, whether traditional cuff-based or newly emerging wearable-based, all require skin contact, which may cause an unpleasant user experience and can even be injurious to certain users. In this paper, we explore contact-less BP monitoring and propose airBP, which emits millimeter-wave signals toward a user's wrist and captures the signal reflected from the pulsating artery underlying the wrist. By analyzing the strength of the reflected signal, airBP recovers the arterial pulse and further estimates BP by exploiting the relationship between the arterial pulse and BP. To realize airBP, we design a new beamforming method that keeps focusing on the tiny and hidden wrist artery by leveraging the inherent periodicity of the arterial pulse. Moreover, we custom-design a pre-training scheme and neural network architecture to combat the challenges of arterial pulse sparsity and ambiguity, so as to estimate BP accurately. We prototype airBP using a coin-sized COTS mmWave radar and perform extensive experiments on 41 subjects. The results demonstrate that airBP accurately estimates systolic and diastolic BP, with mean errors of -0.30 mmHg and -0.23 mmHg and standard deviations of 4.80 mmHg and 3.79 mmHg (within the acceptable range regulated by the FDA's AAMI protocol), respectively, at distances up to 26 cm.
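The "inherent periodicity of the arterial pulse" that guides airBP's beam selection can be illustrated by scoring each candidate radar bin by its energy in the heart-rate band across slow time. The sketch below uses an assumed frame rate and band limits and is not the authors' beamforming method.

```python
# A minimal sketch of picking the radar bin whose slow-time signal carries the
# most energy in the heart-rate band, i.e. the bin most likely to cover the
# pulsating artery. Frame rate and band limits are illustrative assumptions.
import numpy as np

def best_pulsating_bin(slow_time, frame_rate=100.0, band_hz=(0.8, 3.0)):
    """slow_time: (n_bins, n_frames) array of per-bin signals across frames."""
    n_frames = slow_time.shape[1]
    freqs = np.fft.rfftfreq(n_frames, d=1 / frame_rate)
    spectra = np.abs(np.fft.rfft(slow_time - slow_time.mean(axis=1, keepdims=True), axis=1))
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    band_energy = spectra[:, in_band].sum(axis=1)       # pulse-band energy per bin
    total = spectra.sum(axis=1) + 1e-12
    return int(np.argmax(band_energy / total))          # most "pulse-like" bin

# Example: bin 3 carries a 1.2 Hz (72 bpm) pulse, the others only noise.
rng = np.random.default_rng(3)
frames = np.arange(1000) / 100.0
data = 0.1 * rng.normal(size=(8, 1000))
data[3] += 0.5 * np.sin(2 * np.pi * 1.2 * frames)
print(best_pulsating_bin(data))  # -> 3
```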
Citations: 0
Query Interface for Smart City Internet of Things Data Marketplaces: A Case Study
IF 2.7 Pub Date: 2023-07-18 DOI: 10.1145/3609336
Naeima Hamed, A. Gaglione, A. Gluhak, Omer F. Rana, Charith Perera
Cities are increasingly becoming augmented with sensors through public, private, and academic sector initiatives. Most of the time, these sensors are deployed by a sensor owner (i.e., the organization that invests in the sensing hardware, e.g., a city council) with a primary purpose in mind (e.g., understanding noise pollution). Over the past few years, communities undertaking smart city development projects have understood the importance of making sensor data available to a wider community, beyond its primary usage. Different business models have been proposed to achieve this, including creating data marketplaces. The vision is to encourage new startups and small and medium-scale businesses to create novel products and services using sensor data to generate additional economic value. Currently, data are sold as pre-defined independent datasets (e.g., noise level and parking status data may be sold separately). This approach creates several challenges, such as (i) difficulties in pricing, which lead to higher prices (per dataset); (ii) higher network communication and bandwidth requirements; and (iii) information overload for data consumers (i.e., those who purchase data). We investigate the benefit of semantic representation and its reasoning capabilities for creating a business model that offers data on demand within smart city Internet of Things data marketplaces. The objective is to help data consumers (i.e., small and medium enterprises) acquire the most relevant data they need. We demonstrate the utility of our approach by integrating it into a real-world IoT data marketplace (developed by the synchronicity-iot.eu project). We discuss design decisions and their consequences (i.e., tradeoffs) for the choice and selection of datasets. Subsequently, we present a series of data modeling principles and recommendations for implementing IoT data marketplaces.
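To make the "data on demand" idea concrete, the sketch below shows how a semantically annotated catalogue can be queried so a consumer retrieves only the datasets matching their need; the SOSA-style annotations, example.org URIs, and rdflib usage are illustrative assumptions, not the project's actual schema or query interface.

```python
# A minimal sketch of querying a semantically annotated marketplace catalogue:
# retrieve only datasets that observe noise levels in a given district.
# All URIs and properties under the ex: namespace are invented for illustration.
from rdflib import Graph

catalogue_ttl = """
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ex:   <http://example.org/marketplace#> .

ex:ds1 a ex:Dataset ;
    sosa:observedProperty ex:NoiseLevel ;
    ex:coversDistrict ex:CityCentre .

ex:ds2 a ex:Dataset ;
    sosa:observedProperty ex:ParkingStatus ;
    ex:coversDistrict ex:CityCentre .
"""

g = Graph()
g.parse(data=catalogue_ttl, format="turtle")

query = """
PREFIX sosa: <http://www.w3.org/ns/sosa/>
PREFIX ex:   <http://example.org/marketplace#>
SELECT ?ds WHERE {
    ?ds a ex:Dataset ;
        sosa:observedProperty ex:NoiseLevel ;
        ex:coversDistrict ex:CityCentre .
}
"""
for row in g.query(query):
    print(row.ds)   # -> http://example.org/marketplace#ds1 only
```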
Citations: 0
FL4IoT: IoT Device Fingerprinting and Identification Using Federated Learning
IF 2.7 Pub Date: 2023-06-09 DOI: 10.1145/3603257
Han Wang, David Eklund, Alina Oprea, S. Raza
Unidentified devices in a network can have devastating consequences. It is therefore necessary to fingerprint and identify IoT devices connected to private or critical networks. With the proliferation of massive numbers of heterogeneous IoT devices, detecting vulnerable devices connected to networks is becoming challenging. Current machine learning-based techniques for fingerprinting and identifying devices require a significant amount of data gathered from IoT networks to be transmitted to a central cloud. Nevertheless, private IoT data cannot be shared with the central cloud in numerous sensitive scenarios. Federated learning (FL) has been regarded as a promising paradigm for decentralized learning and has been applied in many different use cases. It enables machine learning models to be trained in a privacy-preserving way. In this article, we propose a privacy-preserving IoT device fingerprinting and identification mechanism using FL, which we call FL4IoT. FL4IoT is a two-phase system combining unsupervised-learning-based device fingerprinting and supervised-learning-based device identification. FL4IoT demonstrates its practicality across different performance metrics in both federated and centralized setups. For instance, in the best cases, empirical results show that FL4IoT achieves ∼99% accuracy and F1-score in identifying IoT devices using a federated setup without exposing any private data to a centralized cloud entity. In addition, FL4IoT can detect spoofed devices with over 99% accuracy.
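The federated training FL4IoT relies on can be pictured with a FedAvg-style aggregation step, in which a coordinator averages client model weights in proportion to their local data sizes; the layer shapes and client sizes below are illustrative assumptions, not FL4IoT's actual models.

```python
# A minimal sketch of FedAvg-style weight aggregation: each client's weights
# contribute in proportion to its local data size. Shapes and sizes are assumed.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model weights (each a list of layer arrays)."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for weights, size in zip(client_weights, client_sizes):
            acc += (size / total) * weights[layer]
        averaged.append(acc)
    return averaged

# Example: three clients sharing a toy two-layer model.
rng = np.random.default_rng(4)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
global_model = federated_average(clients, client_sizes=[120, 80, 200])
```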
Citations: 1
Interactive Privacy Management: Toward Enhancing Privacy Awareness and Control in the Internet of Things
IF 2.7 Pub Date: 2023-06-07 DOI: 10.1145/3600096
Bayan AL MUHANDER, Jason Wiese, Omer F. Rana, Charith Perera
Balancing the protection of user privacy with the provision of cost-effective devices that are functional and usable is a key challenge in the burgeoning Internet of Things (IoT). In traditional desktop and mobile contexts, the primary user interface is a screen; however, in IoT devices, screens are rare or very small, invalidating many existing approaches to protecting user privacy. Privacy visualizations are a common approach for helping users understand the privacy implications of web and mobile services. To gain a thorough understanding of IoT privacy, we examine existing web, mobile, and IoT visualization approaches. Following that, we define five major privacy factors in the IoT context: type, usage, storage, retention period, and access. We then describe notification methods used in various contexts as reported in the literature. We aim to highlight key approaches that developers and researchers can use to create effective IoT privacy notices that improve user privacy management (awareness and control). Using a toolkit, a use case scenario, and two examples from the literature, we demonstrate how privacy visualization approaches can be supported in practice.
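The five privacy factors defined above can be made concrete as a small machine-readable notice structure that a device or hub could expose to users; the fields and example values below are illustrative assumptions rather than a proposed standard.

```python
# A minimal sketch of encoding the five privacy factors (type, usage, storage,
# retention period, access) as a machine-readable notice; values are invented.
from dataclasses import dataclass, asdict

@dataclass
class PrivacyNotice:
    data_type: str        # what is collected
    usage: str            # why it is collected
    storage: str          # where it is kept
    retention_days: int   # how long it is kept
    access: list          # who can access it

notice = PrivacyNotice(
    data_type="indoor temperature",
    usage="heating optimisation",
    storage="on-device, aggregated to vendor cloud",
    retention_days=30,
    access=["device owner", "vendor analytics"],
)
print(asdict(notice))  # could be rendered by a companion app or hub display
```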
Citations: 0