MFD: Multi-object Frequency Feature Recognition and State Detection Based on RFID-single Tag
Biaokai Zhu, Zejiao Yang, Yupeng Jia, Shengxin Chen, Jie Song, Sanman Liu, P. Li, Feng Li, Deng-ao Li
Vibration is a normal reaction that occurs during the operation of machinery and is very common in industrial systems. How to turn fine-grained vibration perception into a visual representation, and how to use that visual vibration information to predict mechanical failures and reduce property losses, are the questions that motivated this work. In this paper, the phase information generated by an RFID tag is processed and analyzed, and MFD is proposed: a real-time vibration monitoring and fault-sensing discrimination system. MFD extracts phase information from the original RF signal, denoises it by introducing white Gaussian noise and a low-pass filter, and converts it into a Markov transition map. To accurately predict machinery failures, deep learning and machine learning models are introduced to evaluate fault-classification accuracy, realizing real-time monitoring and fault judgment. The test results show that the average recognition accuracy of vibration reaches 96.07%, and the average recognition accuracy for forward rotation, reverse rotation, oil spill, and screw loosening of motor equipment during long-term operation reaches 98.53%, 99.44%, 97.87%, and 99.91%, respectively, with high robustness.
{"title":"MFD: Multi-object Frequency Feature Recognition and State Detection Based on RFID-single Tag","authors":"Biaokai Zhu, Zejiao Yang, Yupeng Jia, Shengxin Chen, Jie Song, Sanman Liu, P. Li, Feng Li, Deng-ao Li","doi":"10.1145/3615665","DOIUrl":"https://doi.org/10.1145/3615665","url":null,"abstract":"Vibration is a normal reaction that occurs during the operation of machinery and is very common in industrial systems. How to turn fine-grained vibration perception into visualization, and further predict mechanical failures and reduce property losses based on visual vibration information, which has aroused our thinking. In this paper, the phase information generated by the tag is processed and analyzed, and MFD is proposed, a real-time vibration monitoring and fault-sensing discrimination system. MFD extracts phase information from the original RF signal and converts it into a markov transition map by introducing White Gaussian Noise and a low-pass filter for denoising. To accurately predict the failure of machinery, a deep and machine learning model is introduced to calculate the accuracy of failure analysis, realizing real-time monitoring and fault judgment. The test results show that the average recognition accuracy of vibration can reach 96.07%, and the average recognition accuracy of forward rotation, reverse rotation, oil spill, and screw loosening of motor equipment during long-term operation can reach 98.53%, 99.44%, 97.87%, and 99.91%, respectively, with high robustness.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75478093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
mmHSV: In-Air Handwritten Signature Verification via Millimeter-wave Radar
Wanqing Li, Tongtong He, Nan Jing, Lin Wang
Electronic signatures are widely used in financial business, telecommuting, and identity authentication. Offline electronic signatures are vulnerable to copy or replay attacks. Contact-based online electronic signatures are limited by indirect contact, such as handwriting pads, and may threaten the health of users. Considering the combination of hand-shape features and writing-process features to form electronic signatures, this paper proposes an in-air handwritten signature verification system based on millimeter-wave (mmWave) radar, namely mmHSV. First, the biometrics of the handwritten signature process are modeled, and phase-dependent biometric and behavioral features are extracted from the mmWave radar mixture signal. Second, a handwritten feature recognition network based on few-sample learning is presented to fuse multi-dimensional features and determine user legitimacy. Finally, mmHSV is implemented and evaluated with commercial mmWave devices in different scenarios and attack modes. Experimental results show that mmHSV achieves accurate, efficient, robust, and scalable handwritten signature verification: the area under the curve (AUC) is 98.96%, the false acceptance rate (FAR) is 5.1% at a fixed threshold, and the AUC is 97.79% for untrained users.
{"title":"mmHSV: In-Air Handwritten Signature Verification via Millimeter-wave Radar","authors":"Wanqing Li, Tongtong He, Nan Jing, Lin Wang","doi":"10.1145/3614443","DOIUrl":"https://doi.org/10.1145/3614443","url":null,"abstract":"Electronic signatures are widely used in financial business, telecommuting and identity authentication. Offline electronic signatures are vulnerable to copy or replay attacks. Contact-based online electronic signatures are limited by indirect contact such as handwriting pads and may threaten the health of users. Consider combining hand shape features and writing process features to form electronic signatures, the paper proposes an in-air handwritten signature verification system with millimeter-wave(mmWave) radar, namely mmHSV. First, the biometrics of the handwritten signature process are modeled, and phase-dependent biometrics and behavioral features are extracted from the mmWave radar mixture signal. Secondly, a handwritten feature recognition network based on few-sample learning is presented to fuse multi-dimensional features and determine user legitimacy. Finally, mmHSV is implemented and evaluated with commercial mmWave devices in different scenarios and attack mode conditions. Experimental results show that the mmHSV can achieve accurate, efficient, robust and scalable handwritten signature verification. Area Under Curve (AUC) is 98.96 (% ) , False Acceptance Rate (FAR) is 5.1 (% ) at the fixed threshold, AUC is 97.79 (% ) for untrained users.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82573824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
mmDrive: Fine-Grained Fatigue Driving Detection Using mmWave Radar
Juncen Zhu, Jiannong Cao, Yanni Yang, Wei Ren, Huizi Han
Early detection of fatigue driving is pivotal for the safety of drivers and pedestrians. Traditional approaches mainly employ cameras and wearable sensors to detect fatigue features, which are intrusive to drivers. Recent advances in radio frequency (RF) sensing enable non-intrusive fatigue feature detection from the signal reflected by the driver’s body. However, existing RF-based solutions only detect partial or coarse-grained fatigue features, which reduces detection accuracy. To tackle these limitations, we propose a mmWave-based fatigue driving detection system, called mmDrive, which can detect multiple fine-grained fatigue features from different body parts. Achieving accurate detection of various fatigue features during driving, however, encounters practical challenges. Specifically, normal driving activities and the driver’s involuntary facial movements inevitably interfere with fatigue features. Thus, we exploit unique geometric and behavioral characteristics of fatigue features and design effective signal processing methods to remove noise from fatigue-irrelevant activities. Based on the detected fatigue features, we further develop a fatigue determination algorithm to decide the driver’s fatigue state. Extensive experimental results from both simulated and real driving environments show that the average accuracy for detecting nodding and yawning features is about 96%, and the average errors for estimating eye blink, respiration, and heartbeat rates are around 2.21 bpm, 0.54 bpm, and 2.52 bpm, respectively. The proposed fatigue determination algorithm reaches an accuracy of 97.63%.
{"title":"mmDrive: Fine-Grained Fatigue Driving Detection Using mmWave Radar","authors":"Juncen Zhu, Jiannong Cao, Yanni Yang, Wei Ren, Huizi Han","doi":"10.1145/3614437","DOIUrl":"https://doi.org/10.1145/3614437","url":null,"abstract":"Early detection of fatigue driving is pivotal for safety of drivers and pedestrians. Traditional approaches mainly employ cameras and wearable sensors to detect fatigue features, which are intrusive to drivers. Recent advances in radio frequency (RF) sensing enable non-intrusive fatigue feature detection from the signal reflected by driver’s body. However, existing RF-based solutions only detect partial or coarse-grained fatigue features, which reduces the detection accuracy. To tackle above limitations, we propose a mmWave-based fatigue driving detection system, called mmDrive, which can detect multiple fine-grained fatigue features from different body parts. However, achieving accurate detection of various fatigue features during driving encounters practical challenges. Specifically, normal driving activities and driver’s involuntary facial movements inevitably cause interference to fatigue features. Thus, we exploit unique geometric and behavioral characteristics of fatigue features and design effective signal processing methods to remove noises from fatigue-irrelevant activities. Based on the detected fatigue features, we further develop a fatigue determination algorithm to decide driver’s fatigue state. Extensive experiment results from both simulated and real driving environments show that the average accuracy for detecting nodding and yawning features is about (96% ) , and the average errors for estimating eye blink, respiration, and heartbeat rates are around 2.21bpm, 0.54bpm, and 2.52bpm, respectively. And the accuracy of the fatigue detection algorithm we proposed reached (97.63% ) .","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84647069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ViWise: Fusing Visual and Wireless Sensing Data for Trajectory Relationship Recognition
Fang-Jing Wu, Sheng-Wun Lai, Sok-Ian Sou
People usually form a social structure (e.g., a leader-follower, companion, or independent group) for better interactions among them and thus share similar perceptions of visible scenes and invisible wireless signals encountered while moving. Many mobility-driven applications have paid much attention to recognizing trajectory relationships among people. This work models visual and wireless data to quantify the trajectory similarity between a pair of users. We design a visual and wireless sensor fusion system, called ViWise, which incorporates the first-person video frames collected by a wearable visual device and the wireless packets broadcast by a personal mobile device for recognizing finer-grained trajectory relationships within a mobility group. When people take similar trajectories, they usually share similar visual scenes. Their wireless packets observed by ambient wireless base stations (called wireless scanners in this work) usually contain similar patterns. We model the visual characteristics of physical objects seen by a user from two perspectives: micro-scale image structure with pixel-wise features and macro-scale semantic context. On the other hand, we model characteristics of wireless packets based on the encountered wireless scanners along the user’s trajectory. Given two users’ trajectories, their trajectory characteristics behind the visible video frames and invisible wireless packets are fused together to compute the visual-wireless data similarity that quantifies the correlation between trajectories taken by them. We exploit the modeled visual-wireless data similarity to recognize the social structure within user trajectories. Comprehensive experimental results in indoor and outdoor environments show that the proposed ViWise is robust in trajectory relationship recognition, with an accuracy above 90%.
{"title":"ViWise: Fusing Visual and Wireless Sensing Data for Trajectory Relationship Recognition","authors":"Fang-Jing Wu, Sheng-Wun Lai, Sok-Ian Sou","doi":"10.1145/3614441","DOIUrl":"https://doi.org/10.1145/3614441","url":null,"abstract":"People usually form a social structure (e.g., a leader-follower, companion, or independent group) for better interactions among them and thus share similar perceptions of visible scenes and invisible wireless signals encountered while moving. Many mobility-driven applications have paid much attention to recognizing trajectory relationships among people. This work models visual and wireless data to quantify the trajectory similarity between a pair of users. We design a visual and wireless sensor fusion system, called ViWise, which incorporates the first-person video frames collected by a wearable visual device and the wireless packets broadcast by a personal mobile device for recognizing finer-grained trajectory relationships within a mobility group. When people take similar trajectories, they usually share similar visual scenes. Their wireless packets observed by ambient wireless base stations (called wireless scanners in this work) usually contain similar patterns. We model the visual characteristics of physical objects seen by a user from two perspectives: micro-scale image structure with pixel-wise features and macro-scale semantic context. On the other hand, we model characteristics of wireless packets based on the encountered wireless scanners along the user’s trajectory. Given two users’ trajectories, their trajectory characteristics behind the visible video frames and invisible wireless packets are fused together to compute the visual-wireless data similarity that quantifies the correlation between trajectories taken by them. We exploit modeled visual-wireless data similarity to recognize the social structure within user trajectories. Comprehensive experimental results in indoor and outdoor environments show that the proposed ViWise is robust in trajectory relationship recognition with an accuracy of above 90%.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78465506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UltraSnoop: Placement-agnostic Keystroke Snooping via Smartphone-based Ultrasonic Sonar
Yanchao Zhao, Yiming Zhao, Si Li, Hao Han, Linfu Xie
Keystroke snooping is an effective way to steal sensitive information from victims. Recent research on acoustic-emanation-based techniques has made such attacks far more accessible to non-professional adversaries. However, these approaches either require multiple smartphones or require a specific placement of the smartphone relative to the keyboard, which tremendously restricts the applicable scenarios. In this paper, we propose UltraSnoop, a training-free, transferable, and placement-agnostic scheme that infers a user’s input using a single smartphone placed within the range covered by its microphone and speaker. The innovation of UltraSnoop is an ultrasonic anchor-keystroke positioning method and an MFCC clustering algorithm, whose combination infers the relative position between the smartphone and the keyboard. Together with the keystroke time difference of arrival (TDoA), our method infers the keystrokes and even gradually improves in accuracy as the snooping proceeds. Our real-world experiments show that UltraSnoop achieves more than 85% top-3 snooping accuracy when the smartphone is placed within 30-60 cm of the keyboard.
{"title":"UltraSnoop: Placement-agnostic Keystroke Snooping via Smartphone-based Ultrasonic Sonar","authors":"Yanchao Zhao, Yiming Zhao, Si Li, Hao Han, Linfu Xie","doi":"10.1145/3614440","DOIUrl":"https://doi.org/10.1145/3614440","url":null,"abstract":"Keystroke snooping is an effective way to steal sensitive information from the victims. Recent research on acoustic emanation based techniques has greatly improved the accessibility by non-professional adversaries. However, these approaches either require multiple smartphones or require specific placement of the smartphone relative to the keyboards, which tremendously restrict the application scenarios. In this paper, we propose UltraSnoop, a training-free, transferable, and placement-agnostic scheme, which manages to infer user’s input using a single smartphone placed within the range covered by a microphone and speaker. The innovation of Ultrasnoop is that we propose an ultrasonic anchor-keystroke positioning method and an MFCCs clustering algorithm, synthesis of which could infer the relative position between the smartphone and the keyboard. Along with the keystroke TDoA, our method could infer the keystrokes and even gradually improve the accuracy as the snooping proceeds. Our real-world experiments show that UltraSnoop could achieve more than 85% top-3 snooping accuracy when the smartphone is placed within the range of 30-60cm from the keyboard.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78737930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I am an Earphone and I can Hear my Users Face: Facial Landmark Tracking using Smart Earphones
Shijia Zhang, Taiting Lu, Hao Zhou, Yilin Liu, Runze Liu, Mahanth K. Gowda
This paper presents EARFace, a system that shows the feasibility of tracking facial landmarks for 3D facial reconstruction using in-ear acoustic sensors embedded within smart earphones. This enables a number of applications in facial expression tracking, user interfaces, AR/VR, affective computing, and accessibility. While conventional vision-based solutions break down under poor lighting and occlusions, and also suffer from privacy concerns, earphone platforms are robust to ambient conditions while being privacy-preserving. In contrast to prior work on earable platforms that performs outer-ear sensing for facial motion tracking, EARFace shows the feasibility of completely in-ear sensing with a natural earphone form factor, thus enhancing wearing comfort. The core intuition exploited by EARFace is that the shape of the ear canal changes due to the movement of facial muscles during facial motion. EARFace tracks these shape changes by measuring the ultrasonic channel frequency response (CFR) of the inner ear, ultimately tracking the facial motion. A transformer-based machine learning (ML) model is designed to exploit spectral and temporal relationships in the ultrasonic CFR data to predict the facial landmarks of the user with an accuracy of 1.83 mm. Using these predicted landmarks, a 3D graphical model of the face that replicates the precise facial motion of the user is then reconstructed. Domain adaptation is further performed by adapting layer weights with group-wise and differential learning rates, which decreases the training overhead of EARFace. The transformer-based ML model runs on smartphone devices with a processing latency of 13 ms and a low overall power consumption profile. Finally, usability studies indicate higher levels of comfort when wearing EARFace’s earphone platform in comparison with alternative form factors.
{"title":"I am an Earphone and I can Hear my Users Face: Facial Landmark Tracking using Smart Earphones","authors":"Shijia Zhang, Taiting Lu, Hao Zhou, Yilin Liu, Runze Liu, Mahanth K. Gowda","doi":"10.1145/3614438","DOIUrl":"https://doi.org/10.1145/3614438","url":null,"abstract":"This paper presents EARFace, a system that shows the feasibility of tracking facial landmarks for 3D facial reconstruction using in-ear acoustic sensors embedded within smart earphones. This enables a number of applications in the areas of facial expression tracking, user-interfaces, AR/VR applications, affective computing, accessibility, etc. While conventional vision-based solutions break down under poor lighting, occlusions, and also suffer from privacy concerns, earphone platforms are robust to ambient conditions, while being privacy-preserving. In contrast to prior work on earable platforms that perform outer-ear sensing for facial motion tracking, EARFace shows the feasibility of completely in-ear sensing with a natural earphone form-factor, thus enhancing the comfort levels of wearing. The core intuition exploited by EARFace is that the shape of the ear canal changes due to the movement of facial muscles during facial motion. EARFace tracks the changes in shape of the ear canal by measuring ultrasonic channel frequency response (CFR) of the inner ear, ultimately resulting in tracking of the facial motion. A transformer based machine learning (ML) model is designed to exploit spectral and temporal relationships in the ultrasonic CFR data to predict the facial landmarks of the user with an accuracy of 1.83 mm. Using these predicted landmarks, a 3D graphical model of the face that replicates the precise facial motion of the user is then reconstructed. Domain adaptation is further performed by adapting the weights of layers using a group-wise and differential learning rate. This decreases the training overhead in EARFace. The transformer based ML model runs on smartphone devices with a processing latency of 13 ms and an overall low power consumption profile. Finally, usability studies indicate higher levels of comforts of wearing EARFace’s earphone platform in comparison with alternative form-factors.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85993332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
airBP: Monitor Your Blood Pressure with Millimeter-Wave in the Air
Yumeng Liang, Anfu Zhou, Xinzhe Wen, Wei Huang, Pu Shi, Lingyu Pu, Huanhuan Zhang, Huadong Ma
Blood pressure (BP), an important vital sign for assessing human health, is expected to be monitored conveniently. Existing BP monitoring methods, whether traditional cuff-based or newly emerging wearable-based, all require skin contact, which may cause an unpleasant user experience and can even be injurious to certain users. In this paper, we explore contact-less BP monitoring and propose airBP, which emits millimeter-wave signals toward a user’s wrist and captures the signal bounced off the pulsating artery beneath the wrist. By analyzing the strength of the reflected signal, airBP recovers the arterial pulse and further estimates BP by exploiting the relationship between the arterial pulse and BP. To realize airBP, we design a new beamforming method that keeps focusing on the tiny and hidden wrist artery by leveraging the inherent periodicity of the arterial pulse. Moreover, we custom-design a pre-training scheme and neural network architecture to combat the challenges of arterial pulse sparsity and ambiguity, so as to estimate BP accurately. We prototype airBP using a coin-sized COTS mmWave radar and perform extensive experiments on 41 subjects. The results demonstrate that airBP accurately estimates systolic and diastolic BP, with mean errors of -0.30 mmHg and -0.23 mmHg and standard deviations of 4.80 mmHg and 3.79 mmHg (within the acceptable range regulated by the FDA’s AAMI protocol), respectively, at distances up to 26 cm.
{"title":"airBP: Monitor Your Blood Pressure with Millimeter-Wave in the Air","authors":"Yumeng Liang, Anfu Zhou, Xinzhe Wen, Wei Huang, Pu Shi, Lingyu Pu, Huanhuan Zhang, Huadong Ma","doi":"10.1145/3614439","DOIUrl":"https://doi.org/10.1145/3614439","url":null,"abstract":"Blood pressure (BP), an important vital sign to assess human health, is expected to be monitored conveniently. The existing BP monitoring methods, either traditional cuff-based or newly-emerging wearable-based, all require skin contact, which may cause unpleasant user experience and is even injurious to certain users. In this paper, we explore contact-less BP monitoring and propose airBP, which emits millimeter-wave signals toward a user’s wrist, and captures the reflected signal bounded off from the pulsating artery underlying the wrist. By analyzing the reflected signal strength of the signal, airBP generates arterial pulse and further estimates BP by exploiting the relationship between the arterial pulse and BP. To realize airBP, we design a new beam-forming method to keep focusing on the tiny and hidden wrist artery, by leveraging the inherent periodicity of the arterial pulse. Moreover, we custom-design a pre-training and neural network architecture, to combat the challenges from the arterial pulse sparsity and ambiguity, so as to estimate BP accurately. We prototype airBP using a coin-size COTS mmWave radar and perform extensive experiments on 41 subjects. The results demonstrate that airBP accurately estimates systolic and diastolic BP, with the mean error of -0.30 mmHg and -0.23 mmHg, as well as the standard deviation error of 4.80 mmHg and 3.79 mmHg (within the acceptable range regulated by the FDA’s AAMI protocol), respectively, at a distance up to 26 cm.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90865251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Query Interface for Smart City Internet of Things Data Marketplaces: A Case Study
Naeima Hamed, A. Gaglione, A. Gluhak, Omer F. Rana, Charith Perera
Cities are increasingly being augmented with sensors through public, private, and academic sector initiatives. Most of the time, these sensors are deployed by a sensor owner (i.e., the organization that invests in the sensing hardware, e.g., a city council) with a primary purpose (objective) in mind (e.g., deploying sensors to understand noise pollution). Over the past few years, communities undertaking smart city development projects have understood the importance of making sensor data available to a wider community, beyond its primary usage. Different business models have been proposed to achieve this, including creating data marketplaces. The vision is to encourage new startups and small and medium-scale businesses to create novel products and services using sensor data, generating additional economic value. Currently, data are sold as pre-defined independent datasets (e.g., noise level and parking status data may be sold separately). This approach creates several challenges, such as (i) difficulties in pricing, which leads to higher prices (per dataset); (ii) higher network communication and bandwidth requirements; and (iii) information overload for data consumers (i.e., those who purchase data). We investigate the benefit of semantic representation and its reasoning capabilities toward creating a business model that offers data on demand within smart city Internet of Things data marketplaces. The objective is to help data consumers (i.e., small and medium enterprises) acquire the most relevant data they need. We demonstrate the utility of our approach by integrating it into a real-world IoT data marketplace (developed by the synchronicity-iot.eu project). We discuss design decisions and their consequences (i.e., tradeoffs) on the choice and selection of datasets. Subsequently, we present a series of data modeling principles and recommendations for implementing IoT data marketplaces.
{"title":"Query Interface for Smart City Internet of Things Data Marketplaces: A Case Study","authors":"Naeima Hamed, A. Gaglione, A. Gluhak, Omer F. Rana, Charith Perera","doi":"10.1145/3609336","DOIUrl":"https://doi.org/10.1145/3609336","url":null,"abstract":"Cities are increasingly becoming augmented with sensors through public, private, and academic sector initiatives. Most of the time, these sensors are deployed with a primary purpose (objective) in mind (e.g., deploy sensors to understand noise pollution) by a sensor owner (i.e., the organization that invests in sensing hardware, e.g., a city council). Over the past few years, communities undertaking smart city development projects have understood the importance of making the sensor data available to a wider community—beyond their primary usage. Different business models have been proposed to achieve this, including creating data marketplaces. The vision is to encourage new startups and small and medium-scale businesses to create novel products and services using sensor data to generate additional economic value. Currently, data are sold as pre-defined independent datasets (e.g., noise level and parking status data may be sold separately). This approach creates several challenges, such as (i) difficulties in pricing, which leads to higher prices (per dataset); (ii) higher network communication and bandwidth requirements; and (iii) information overload for data consumers (i.e., those who purchase data). We investigate the benefit of semantic representation and its reasoning capabilities toward creating a business model that offers data on demand within smart city Internet of Things data marketplaces. The objective is to help data consumers (i.e., small and medium enterprises) acquire the most relevant data they need. We demonstrate the utility of our approach by integrating it into a real-world IoT data marketplace (developed by the synchronicity-iot.eu project). We discuss design decisions and their consequences (i.e., tradeoffs) on the choice and selection of datasets. Subsequently, we present a series of data modeling principles and recommendations for implementing IoT data marketplaces.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85147445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FL4IoT: IoT Device Fingerprinting and Identification Using Federated Learning
Han Wang, David Eklund, Alina Oprea, S. Raza
Unidentified devices in a network can result in devastating consequences. It is, therefore, necessary to fingerprint and identify IoT devices connected to private or critical networks. With the proliferation of massive but heterogeneous IoT devices, it is increasingly challenging to detect vulnerable devices connected to networks. Current machine learning-based techniques for fingerprinting and identifying devices necessitate a significant amount of data gathered from IoT networks that must be transmitted to a central cloud. Nevertheless, private IoT data cannot be shared with the central cloud in numerous sensitive scenarios. Federated learning (FL) has been regarded as a promising paradigm for decentralized learning and has been applied in many different use cases. It enables machine learning models to be trained in a privacy-preserving way. In this article, we propose a privacy-preserving IoT device fingerprinting and identification mechanism using FL; we call it FL4IoT. FL4IoT is a two-phased system combining unsupervised-learning-based device fingerprinting and supervised-learning-based device identification. FL4IoT demonstrates its practicality across several performance metrics in both federated and centralized setups. For instance, in the best cases, empirical results show that FL4IoT achieves ∼99% accuracy and F1-score in identifying IoT devices using a federated setup without exposing any private data to a centralized cloud entity. In addition, FL4IoT can detect spoofed devices with over 99% accuracy.
{"title":"FL4IoT: IoT Device Fingerprinting and Identification Using Federated Learning","authors":"Han Wang, David Eklund, Alina Oprea, S. Raza","doi":"10.1145/3603257","DOIUrl":"https://doi.org/10.1145/3603257","url":null,"abstract":"Unidentified devices in a network can result in devastating consequences. It is, therefore, necessary to fingerprint and identify IoT devices connected to private or critical networks. With the proliferation of massive but heterogeneous IoT devices, it is getting challenging to detect vulnerable devices connected to networks. Current machine learning-based techniques for fingerprinting and identifying devices necessitate a significant amount of data gathered from IoT networks that must be transmitted to a central cloud. Nevertheless, private IoT data cannot be shared with the central cloud in numerous sensitive scenarios. Federated learning (FL) has been regarded as a promising paradigm for decentralized learning and has been applied in many different use cases. It enables machine learning models to be trained in a privacy-preserving way. In this article, we propose a privacy-preserved IoT device fingerprinting and identification mechanisms using FL; we call it FL4IoT. FL4IoT is a two-phased system combining unsupervised-learning-based device fingerprinting and supervised-learning-based device identification. FL4IoT shows its practicality in different performance metrics in a federated and centralized setup. For instance, in the best cases, empirical results show that FL4IoT achieves ∼99% accuracy and F1-Score in identifying IoT devices using a federated setup without exposing any private data to a centralized cloud entity. In addition, FL4IoT can detect spoofed devices with over 99% accuracy.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79886241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive Privacy Management: Toward Enhancing Privacy Awareness and Control in the Internet of Things
Bayan AL MUHANDER, Jason Wiese, Omer F. Rana, Charith Perera
Balancing the protection of user privacy against providing cost-effective devices that are functional and usable is a key challenge in the burgeoning Internet of Things (IoT). In traditional desktop and mobile contexts, the primary user interface is a screen; in IoT devices, however, screens are rare or very small, invalidating many existing approaches to protecting user privacy. Privacy visualizations are a common approach for helping users understand the privacy implications of web and mobile services. To gain a thorough understanding of IoT privacy, we examine existing web, mobile, and IoT visualization approaches. Following that, we define five major privacy factors in the IoT context: type, usage, storage, retention period, and access. We then describe notification methods used in various contexts as reported in the literature. We aim to highlight key approaches that developers and researchers can use to create effective IoT privacy notices that improve user privacy management (awareness and control). Using a toolkit, a use-case scenario, and two examples from the literature, we demonstrate how privacy visualization approaches can be supported in practice.
{"title":"Interactive Privacy Management: Toward Enhancing Privacy Awareness and Control in the Internet of Things","authors":"Bayan AL MUHANDER, Jason Wiese, Omer F. Rana, Charith Perera","doi":"10.1145/3600096","DOIUrl":"https://doi.org/10.1145/3600096","url":null,"abstract":"The balance between protecting user privacy while providing cost-effective devices that are functional and usable is a key challenge in the burgeoning Internet of Things (IoT). In traditional desktop and mobile contexts, the primary user interface is a screen; however, in IoT devices, screens are rare or very small, invalidating many existing approaches to protecting user privacy. Privacy visualizations are a common approach for assisting users in understanding the privacy implications of web and mobile services. To gain a thorough understanding of IoT privacy, we examine existing web, mobile, and IoT visualization approaches. Following that, we define five major privacy factors in the IoT context: type, usage, storage, retention period, and access. We then describe notification methods used in various contexts as reported in the literature. We aim to highlight key approaches that developers and researchers can use for creating effective IoT privacy notices that improve user privacy management (awareness and control). Using a toolkit, a use case scenario, and two examples from the literature, we demonstrate how privacy visualization approaches can be supported in practice.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84464387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}