UniTS: Short-Time Fourier Inspired Neural Networks for Sensory Time Series Classification
Shuheng Li, Ranak Roy Chowdhury, Jingbo Shang, Rajesh K. Gupta, Dezhi Hong
DOI: https://doi.org/10.1145/3485730.3485942
Discovering patterns in time series data is essential to many key tasks in intelligent sensing systems, such as human activity recognition and event detection. These tasks involve classifying sensory information derived from physical measurements, such as inertial or temperature readings. Due to differences in the underlying physics, existing classification methods either use handcrafted features combined with traditional learning algorithms, or employ distinct deep neural models that learn directly from raw data. We propose a unified neural architecture, UniTS, for sensory time series classification across various tasks, which obviates the need for domain-specific feature engineering, model customization, or extensive hyper-parameter tuning. This is possible because we believe that discriminative patterns in sensory measurements manifest when information from both the time and frequency domains is combined. In particular, to reveal the commonality of sensory signals, we integrate the Short-Time Fourier Transform (STFT) into neural networks by initializing convolutional filter weights as the Fourier coefficients. Instead of treating the STFT as a static linear transform with fixed coefficients, we make these weights optimizable during network training, which essentially learns to weight each frequency channel. Recognizing that time-domain signals might represent intuitive physics such as temperature and acceleration, we combine linearly transformed time-domain hidden features with the frequency components within each time chunk. We further extend our model to multiple branches with different time-frequency resolutions to avoid the need for hyper-parameter search. We conducted experiments on four public datasets containing time-series data from various IoT systems, including motion, WiFi, EEG, and air quality, and compared UniTS with numerous recent models. Results demonstrate that our proposed method achieves an average F1 score of 91.85%, a 2.3-point improvement over the state of the art. We also verified the efficacy of the STFT-inspired structures through extensive quantitative studies.
Can Image Style Transfer Save Automotive Radar?
Jianning Deng, Kaiwen Cai, Chris Xiaoxuan Lu
DOI: https://doi.org/10.1145/3485730.3492888
Compared to RGB cameras and lidar, single-chip automotive radar is a promising alternative sensor that is robust to adverse weather. However, the sparsity of radar output drastically hinders its usefulness for autonomous driving tasks. Up-sampling via image style transfer could be a cure for sparse measurements, but it remains unknown whether style transfer is an effective solution for automotive radar, which suffers from unique sparsity and noise issues. In this paper, we evaluate a variety of predominant image style transfer methods on a typical ego-vehicle pose estimation task using the public nuScenes dataset, and find that although these methods can improve the visual quality of automotive radar measurements, they contribute little to the utility of radar for downstream tasks.
RT-mDL: Supporting Real-Time Mixed Deep Learning Tasks on Edge Platforms
Neiwen Ling, Kai Wang, Yuze He, G. Xing, Daqi Xie
DOI: https://doi.org/10.1145/3485730.3485938
Recent years have witnessed an emerging class of real-time applications, e.g., autonomous driving, in which resource-constrained edge platforms must execute a set of mixed real-time Deep Learning (DL) tasks concurrently. This application paradigm poses major challenges due to the huge compute workload of deep neural network models, the diverse performance requirements of different tasks, and the lack of real-time support in existing DL frameworks. In this paper, we present RT-mDL, a novel framework to support mixed real-time DL tasks on edge platforms with heterogeneous CPU and GPU resources. RT-mDL optimizes mixed DL task execution to meet diverse real-time/accuracy requirements by exploiting the unique compute characteristics of DL tasks. It employs a novel storage-bounded model scaling method to generate a series of model variants, and systematically optimizes task execution through joint model variant selection and task priority assignment. To improve the CPU/GPU utilization of mixed DL tasks, RT-mDL also includes a new priority-based scheduler that employs a GPU packing mechanism and executes CPU and GPU tasks independently. Our implementation on an F1/10 autonomous driving testbed shows that RT-mDL enables multiple concurrent DL tasks to achieve satisfactory real-time performance in traffic light detection and sign recognition. Moreover, compared to state-of-the-art baselines, RT-mDL reduces the deadline miss rate by 40.12% while sacrificing only 1.7% of model accuracy.
MultiScatter
Mohamad Katanbaf, Ali Saffari, Joshua R. Smith
DOI: https://doi.org/10.1145/3485730.3485939
Realizing the vision of ubiquitous battery-free sensing has proven to be challenging, mainly due to the practical energy and range limitations of current wireless communication systems. To address this, we design the first wide-area and scalable backscatter network with multiple receiver (RX) and transmitter (TX) base units that communicate with battery-free sensor nodes. Our system circumvents the inherent limitations of backscatter systems, including the limited coverage area, frequency-dependent operability, and the sensor nodes' limited ability to handle network tasks, by introducing several coordination techniques between the base units, scaling from a single RX-TX pair to networks with many RX and TX units. We build low-cost RX and TX base units and battery-free sensor nodes with multiple sensing modalities, and evaluate the performance of the MultiScatter system in various deployments. Our evaluation shows that we can successfully communicate with battery-free sensor nodes across 23,400 ft² of a two-floor educational complex using 5 RX and 20 TX units, at a cost of $569. We also show that the aggregate throughput of the backscatter network increases linearly as the number of RX units and the network coverage grow.
Person Re-ID Testbed with Multi-Modal Sensors
Guangliang Zhao, Guy Ben-Yosef, Jianwei Qiu, Yang Zhao, Prabhu Janakaraj, S. Boppana, A. R. Schnore
DOI: https://doi.org/10.1145/3485730.3494113
Person re-identification (Re-ID) is a challenging problem that is gaining attention due to demands in security, intelligent systems, and other applications. Most person Re-ID work is vision-based, using image, video, or, broadly speaking, face recognition-based techniques. Recently, several multi-modal person Re-ID datasets were released, including RGB+IR, RGB+text, and RGB+WiFi, which show the potential of multi-modal, sensor-based person Re-ID. However, public datasets share several common issues, such as short time duration, lack of appearance change, and limited activities, resulting in models that are not robust. For example, vision-based Re-ID models are sensitive to appearance change. In this work, we create a person Re-ID testbed with multi-modal sensors, allowing the collection of RGB, IR, depth, WiFi, radar, and audio. This novel dataset will cover normal daily office activities over a large time span across multiple seasons. We obtain initial analytic results evaluating different person Re-ID models on small datasets collected in this testbed.
SnapperGPS
J. Beuchert, A. Rogers
DOI: https://doi.org/10.1145/3485730.3485931
Snapshot GNSS is a more energy-efficient approach to location estimation than traditional GNSS positioning methods. This is beneficial for applications with long battery-powered deployments, such as wildlife tracking. However, only a few snapshot GNSS implementations have been presented so far, and all have disadvantages. Most significantly, they typically require GNSS signals to be captured at a certain minimum resolution, which demands complex receiver hardware capable of capturing multi-bit data at sampling rates of 16 MHz or more. By contrast, we develop fast algorithms that reliably estimate locations from twelve-millisecond signals sampled at just 4 MHz and quantised with only a single bit per sample. This allows us to build a snapshot receiver at an unmatched low cost of $14 that can acquire one position per hour for a year. On a challenging public dataset with thousands of snapshots from real-world scenarios, our system achieves 97% reliability and 11 m median accuracy, comparable to existing solutions with more complex and expensive hardware and higher energy consumption. We provide an open implementation of the algorithms as well as a public web service for cloud-based location estimation from low-quality GNSS signal snapshots.
Footstep-Induced Floor Vibration Dataset: Reusability and Transferability Analysis
Zhizhang Hu, Yue Zhang, Shijia Pan
DOI: https://doi.org/10.1145/3485730.3494117
Footstep-induced floor vibration sensing has been used in many smart home applications, such as elderly/patient monitoring. These systems often leverage data-driven models to infer human information, so characterizing datasets is crucial for the generalization of this new modality. This dataset contains 144 minutes of floor vibration signals from two pedestrians in eight environments. We analyze the reusability of the dataset in three research areas: vibration-based information inference, knowledge transfer, and multimodal learning. We further characterize the dataset's transferability on the occupant identification task to provide quantitative insights for transfer learning in real-world floor vibration sensing applications. The characterization uses three metrics: distribution distance, information dependency, and influencing factor bias. The analysis shows that the dataset covers different levels of transferability caused by multiple influencing factors, opening multiple future directions in which the dataset can be reused.
OntoAugment
Fabio Maresca, Gürkan Solmaz, Flavio Cirillo
DOI: https://doi.org/10.1145/3485730.3493445
Ontology matching enables harmonizing heterogeneous data models. Existing ontology matching approaches include machine learning. In particular, recent works leverage weak supervision (WS) through programmatic labeling to avoid intensive hand-labeling for large ontologies. Programmatic labeling relies on heuristics and rules, called Labeling Functions (LFs), that generate noisy and incomplete labels. However, to cover a reasonable portion of the dataset, programmatic labeling may require a significant number of LFs, which can be time-consuming and not always straightforward to program. This paper proposes a novel system, OntoAugment, that augments LF labels for the ontology matching problem, starting from the outcomes of the LFs. Our solution leverages the "similarity of similarities" between ontology concept bipairs, i.e., two pairs of concepts. OntoAugment projects a label yielded by an LF for a concept pair onto a similar pair that the same LF does not label. Thus, a wider portion of the dataset is covered even with a limited set of LFs. Experimental results show that OntoAugment provides significant improvements (up to 11 F1 points) over the state-of-the-art WS approach when fewer LFs are used, while maintaining performance without introducing additional noise when a larger number of LFs already achieves high performance.
A Wearable-based Distracted Driving Detection Leveraging BLE
Travis Mewborne, Linghan Zhang, Sheng Tan
DOI: https://doi.org/10.1145/3485730.3492872
Distracted driving has become a serious traffic safety problem, with fatalities increasing every year. Existing systems have shortcomings, such as requiring additional hardware or the separation of vehicle motion data. Moreover, excessive use of motion sensors can cause fast battery drain, which is impractical for everyday use. In this work, we present a wearable-based distracted driving detection system that leverages Bluetooth. The proposed system exploits BLE-compatible devices already in the vehicle to track the driver's hand position and infer potentially unsafe driving behaviors. A preliminary study shows our system can achieve over 95% detection accuracy for various distracted driving behaviors.
Decentralised and Scalable Security for IoT Devices
Munkenyi Mukhandi
DOI: https://doi.org/10.1145/3485730.3492901
Advancements in IoT technology have provided great benefits; unfortunately, IoT adoption also increases the attack surface and thereby intensifies security risks. As a consequence, many types of smart IoT devices have become the main targets of high-profile cyber-attacks. To safeguard against threats such as the introduction of fake IoT nodes and identity theft, the IoT needs scalable and resilient device authentication management. In contrast to existing mechanisms for IoT device authentication, which are unsuitable for huge numbers of devices, my research focuses on decentralised and distributed security mechanisms that will improve current protocols such as OAuth2, GDOI, and GNAP.