Yuchen Su, Shiyue Huang, Hongbo Liu, Yuefeng Chen, Yicong Du, Yan Wang, Yanzhi Ren, Yingying Chen
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, published 2024-05-13. DOI: 10.1145/3659603
PPG-Hear: A Practical Eavesdropping Attack with Photoplethysmography Sensors
Photoplethysmography (PPG) sensors have become integral components of wearable and portable health devices. These sensors offer easy access to heart rate and blood oxygenation, facilitating continuous long-term health monitoring in clinical and non-clinical environments. While people understand that the health-related information provided by PPG is private, no existing research has demonstrated that PPG sensors can also capture sensitive information beyond health-related data. This work introduces PPG-Hear, a novel side-channel attack that exploits PPG sensors to covertly intercept nearby audio. Specifically, PPG-Hear exploits low-frequency PPG measurements to discern and reconstruct human speech emitted from nearby speakers, allowing attackers to eavesdrop on sensitive conversations (e.g., audio passwords, business decisions, or intellectual property) without being noticed. To achieve this non-trivial attack on commodity PPG-enabled devices, we employ differentiation and filtering techniques to mitigate the impact of temperature drift and user-specific gestures. We develop the first PPG-based speech reconstructor, which identifies speech patterns in the PPG spectrogram and establishes the correlation between PPG and speech spectrograms. By integrating a MiniRocket-based classifier with a PixelGAN model, PPG-Hear can reconstruct human speech from low-sampling-rate PPG measurements. Through an array of real-world experiments covering common eavesdropping scenarios, such as surrounding speakers and the device's own speakers, we show that PPG-Hear achieves a remarkable 90% accuracy in recognizing human speech, surpassing state-of-the-art side-channel eavesdropping attacks that use motion sensors operating at equivalent sampling rates (i.e., 50 Hz to 500 Hz).
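The preprocessing step described in the abstract (differentiation plus filtering to suppress temperature drift and body-motion artifacts, followed by a spectrogram for the speech reconstructor) can be illustrated with standard signal-processing tools. The sketch below is not the authors' implementation; the function name, band edges, and spectrogram parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, spectrogram


def preprocess_ppg(ppg, fs=500):
    """Illustrative PPG preprocessing: differentiate, band-pass, spectrogram.

    The sampling rate and band edges are assumptions for this sketch,
    not values from the paper.
    """
    # First-order differentiation suppresses slow components such as
    # temperature drift (a near-constant slope differentiates to ~0 AC).
    d = np.diff(ppg)

    # Band-pass filter: remove residual low-frequency motion/gesture
    # artifacts and keep the vibration-induced band (cutoffs illustrative).
    sos = butter(4, [20, 200], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, d)

    # Spectrogram serves as the input representation for a downstream
    # speech reconstructor (classifier + generative model in the paper).
    f, t, Sxx = spectrogram(filtered, fs=fs, nperseg=128, noverlap=96)
    return f, t, Sxx
```

For example, feeding in a signal composed of a linear drift plus a 100 Hz tone should yield a spectrogram whose energy peaks near 100 Hz, with the drift removed by the differentiation and band-pass stages.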