UniTS: Short-Time Fourier Inspired Neural Networks for Sensory Time Series Classification
Shuheng Li, Ranak Roy Chowdhury, Jingbo Shang, Rajesh K. Gupta, Dezhi Hong
DOI: 10.1145/3485730.3485942. Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems (SenSys 2021), November 15, 2021.
Discovering patterns in time series data is essential to many key tasks in intelligent sensing systems, such as human activity recognition and event detection. These tasks classify sensory information derived from physical measurements such as inertial or temperature readings. Because the underlying physics differ, existing classification methods either use handcrafted features combined with traditional learning algorithms or employ distinct deep neural models that learn directly from raw data. We propose a unified neural architecture, UniTS, for sensory time series classification across tasks, which obviates the need for domain-specific features, model customization, or extensive hyper-parameter tuning. This is possible because discriminative patterns in sensory measurements manifest when information from the time and frequency domains is combined. In particular, to reveal the commonality of sensory signals, we integrate the Short-Time Fourier Transform (STFT) into neural networks by initializing convolutional filter weights with the Fourier coefficients. Instead of treating the STFT as a static linear transform with fixed coefficients, we make these weights trainable, which in effect learns to weight each frequency channel. Recognizing that time-domain signals can represent intuitive physics such as temperature and acceleration, we combine linearly transformed time-domain hidden features with the frequency components within each time chunk. We further extend the model to multiple branches with different time-frequency resolutions, avoiding the need for hyper-parameter search. We conducted experiments on four public datasets containing time series from various IoT systems, including motion, WiFi, EEG, and air quality, and compared UniTS with numerous recent models. Results show that our method achieves an average F1 score of 91.85%, a 2.3-point improvement over the state of the art. We also verified the efficacy of the STFT-inspired structures through extensive quantitative studies.
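As a rough illustration of the STFT-as-convolution idea, the sketch below (pure Python, all names hypothetical, not the authors' code) builds real and imaginary Fourier filter banks and applies them to one time chunk; in UniTS these coefficients would initialize convolutional filter weights that are then further trained.

```python
import math

def fourier_filters(n):
    # Real and imaginary DFT filter banks for a window of length n.
    # In UniTS these values would initialize conv-filter weights.
    real = [[math.cos(2 * math.pi * k * t / n) for t in range(n)] for k in range(n // 2 + 1)]
    imag = [[-math.sin(2 * math.pi * k * t / n) for t in range(n)] for k in range(n // 2 + 1)]
    return real, imag

def stft_frame(chunk, real, imag):
    # One "convolution" step: dot product of each filter with the chunk,
    # yielding the complex spectrum of this time chunk.
    spec = []
    for re_f, im_f in zip(real, imag):
        re = sum(w * x for w, x in zip(re_f, chunk))
        im = sum(w * x for w, x in zip(im_f, chunk))
        spec.append((re, im))
    return spec

# A one-cycle cosine over an 8-sample window concentrates energy in bin 1.
n = 8
real, imag = fourier_filters(n)
chunk = [math.cos(2 * math.pi * t / n) for t in range(n)]
mags = [math.hypot(re, im) for re, im in stft_frame(chunk, real, imag)]
assert max(range(len(mags)), key=mags.__getitem__) == 1
```

Making `real` and `imag` trainable parameters instead of constants is what turns this fixed transform into the learnable frequency weighting the paper describes.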
Coordinating a Swarm of Micro-Robots Under Lossy Communication
Razanne Abu-Aisheh, F. Bronzino, M. Rifai, Lou Salaun, T. Watteyne
DOI: 10.1145/3485730.3494040. SenSys 2021, November 15, 2021.
We envision swarms of mm-scale micro-robots carrying out critical missions such as exploration and mapping for hazard detection and search and rescue. These missions share the need to cover the entire explorable space and build a complete map of the environment. To minimize completion time, robots in the swarm must be able to exchange information about the environment with each other. However, communication between swarm members is often assumed to be perfect, an assumption that does not reflect real-world conditions, where impairments reduce the Packet Delivery Ratio (PDR) of the wireless links. This paper studies how communication impairments can drastically affect the performance of a robotic swarm. We present Atlas 2.0, an exploration algorithm that natively takes packet loss into account. We simulate the effect of various PDRs on robotic swarm exploration and mapping in three different scenarios. Our results show that the time to complete the mapping mission increases significantly as the PDR decreases: on average, halving the PDR triples the time it takes to complete mapping. We emphasise the importance of compensating for the delay caused by lossy communication when designing and implementing algorithms for robotic swarm coordination.
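The "halving the PDR triples mapping time" finding has an intuitive first-order component: per-packet delivery attempts under independent losses follow a geometric distribution, so halving the PDR already doubles the per-packet cost before any mission-level compounding. A small simulation (hypothetical code, not from the paper) illustrates this:

```python
import random

def attempts_until_delivery(pdr, rng):
    # Each transmission attempt succeeds independently with probability PDR.
    n = 1
    while rng.random() > pdr:
        n += 1
    return n

def mean_attempts(pdr, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(attempts_until_delivery(pdr, rng) for _ in range(trials)) / trials

# Geometric distribution: expected attempts = 1/PDR, so halving the PDR
# doubles per-packet cost; at the mission level the slowdown compounds
# further (the paper observes roughly 3x mapping time per halving).
assert abs(mean_attempts(1.0) - 1.0) < 1e-9
assert abs(mean_attempts(0.5) - 2.0) < 0.1
```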
SnapperGPS
J. Beuchert, A. Rogers
DOI: 10.1145/3485730.3485931. SenSys 2021, November 15, 2021.
Snapshot GNSS is a more energy-efficient approach to location estimation than traditional GNSS positioning. This benefits applications with long battery-powered deployments, such as wildlife tracking. However, only a few snapshot GNSS implementations have been presented so far, and all have disadvantages. Most significantly, they typically require the GNSS signals to be captured at a certain minimum resolution, which demands complex receiver hardware capable of capturing multi-bit data at sampling rates of 16 MHz and above. By contrast, we develop fast algorithms that reliably estimate locations from twelve-millisecond signals sampled at just 4 MHz and quantised with only a single bit per sample. This allows us to build a snapshot receiver at an unmatched low cost of $14 that can acquire one position per hour for a year. On a challenging public dataset with thousands of snapshots from real-world scenarios, our system achieves 97% reliability and 11 m median accuracy, comparable to existing solutions with more complex and expensive hardware and higher energy consumption. We provide an open implementation of the algorithms as well as a public web service for cloud-based location estimation from low-quality GNSS signal snapshots.
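The core trick, that a correlation peak survives aggressive single-bit quantisation, can be sketched as follows. This is a toy illustration with a random stand-in for a satellite spreading code, not the actual SnapperGPS algorithm:

```python
import random

def one_bit(x):
    # Single-bit quantisation: keep only the sign of each sample.
    return [1 if v >= 0 else -1 for v in x]

def correlate_delay(snapshot, replica):
    # Brute-force correlation over candidate delays; the argmax survives
    # 1-bit quantisation because only relative sign agreement matters.
    best, best_d = None, 0
    for d in range(len(snapshot) - len(replica) + 1):
        c = sum(s * r for s, r in zip(snapshot[d:], replica))
        if best is None or c > best:
            best, best_d = c, d
    return best_d

rng = random.Random(0)
code = [rng.choice([-1.0, 1.0]) for _ in range(256)]  # stand-in for a PRN code
true_delay = 37
rx = [0.3 * rng.gauss(0, 1) for _ in range(true_delay)] + \
     [c + 0.3 * rng.gauss(0, 1) for c in code] + \
     [0.3 * rng.gauss(0, 1) for _ in range(64)]
assert correlate_delay(one_bit(rx), code) == true_delay
```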
Decentralised and Scalable Security for IoT Devices
Munkenyi Mukhandi
DOI: 10.1145/3485730.3492901. SenSys 2021, November 15, 2021.
Advancements in IoT technology have provided great benefits; unfortunately, IoT adoption also enlarges the attack surface, which intensifies security risks. As a consequence, many types of smart IoT devices have become prime targets of high-profile cyber-attacks. To safeguard against threats such as the introduction of fake IoT nodes and identity theft, the IoT needs scalable and resilient device authentication management. In contrast to existing mechanisms for IoT device authentication, which are unsuitable for huge numbers of devices, my research focuses on decentralised and distributed security mechanisms that improve on current protocols such as OAuth2, GDOI, and GNAP.
A Wearable-based Distracted Driving Detection Leveraging BLE
Travis Mewborne, Linghan Zhang, Sheng Tan
DOI: 10.1145/3485730.3492872. SenSys 2021, November 15, 2021.
Distracted driving has become a serious traffic-safety problem, with the number of fatalities increasing every year. Existing systems either require additional hardware or must separate out vehicle motion data. Moreover, heavy use of motion sensors drains the battery quickly, which is impractical for everyday use. In this work, we present a wearable-based distracted driving detection system that leverages Bluetooth. The proposed system exploits BLE-compatible devices already in the vehicle to track the driver's hand position and infer potentially unsafe driving behaviors. A preliminary study shows our system can achieve over 95% detection accuracy for various distracted driving behaviors.
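The abstract does not detail the tracking method; a common BLE building block is converting RSSI to an approximate distance with a log-distance path-loss model. A minimal sketch, with assumed calibration values and a made-up proximity rule:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    # Log-distance path-loss model: RSSI = TxPower - 10*n*log10(d).
    # tx_power_dbm is the calibrated RSSI at 1 m (assumed value).
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def hand_near_device(rssi_dbm, threshold_m=0.3):
    # Crude proximity rule (hypothetical): flag when the wearable is within
    # ~30 cm of an in-vehicle BLE device such as a phone in a mount.
    return rssi_to_distance(rssi_dbm) < threshold_m

assert abs(rssi_to_distance(-59.0) - 1.0) < 1e-9  # calibrated point: 1 m
assert hand_near_device(-45.0)                    # strong signal => close
```

Real systems would smooth RSSI over time and calibrate per device; raw BLE RSSI is far too noisy for single-sample decisions.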
Person Re-ID Testbed with Multi-Modal Sensors
Guangliang Zhao, Guy Ben-Yosef, Jianwei Qiu, Yang Zhao, Prabhu Janakaraj, S. Boppana, A. R. Schnore
DOI: 10.1145/3485730.3494113. SenSys 2021, November 15, 2021.
Person re-identification (Re-ID) is a challenging problem that is gaining attention due to demands in security, intelligent systems, and other applications. Most person Re-ID work is vision-based, relying on image, video, or, broadly speaking, face-recognition techniques. Recently, several multi-modal person Re-ID datasets were released, including RGB+IR, RGB+text, and RGB+WiFi, showing the potential of multi-modal sensor-based approaches. However, public datasets share several common issues, such as short duration, lack of appearance change, and limited activities, resulting in models that are not robust; vision-based Re-ID models, for example, are sensitive to appearance change. In this work, we create a person Re-ID testbed with multi-modal sensors, allowing the collection of sensing modalities including RGB, IR, depth, WiFi, radar, and audio. This novel dataset will cover normal daily office activities over a long time span across multiple seasons. We report initial analytic results evaluating different person Re-ID models on small datasets collected in this testbed.
RT-mDL: Supporting Real-Time Mixed Deep Learning Tasks on Edge Platforms
Neiwen Ling, Kai Wang, Yuze He, G. Xing, Daqi Xie
DOI: 10.1145/3485730.3485938. SenSys 2021, November 15, 2021.
Recent years have witnessed an emerging class of real-time applications, e.g., autonomous driving, in which resource-constrained edge platforms need to execute a mix of real-time Deep Learning (DL) tasks concurrently. This application paradigm poses major challenges due to the huge compute workload of deep neural network models, the diverse performance requirements of different tasks, and the lack of real-time support in existing DL frameworks. In this paper, we present RT-mDL, a novel framework to support mixed real-time DL tasks on edge platforms with heterogeneous CPU and GPU resources. RT-mDL optimizes mixed DL task execution to meet diverse real-time/accuracy requirements by exploiting the unique compute characteristics of DL tasks. It employs a novel storage-bounded model scaling method to generate a series of model variants, and systematically optimizes DL task execution through joint model-variant selection and task priority assignment. To improve the CPU/GPU utilization of mixed DL tasks, RT-mDL also includes a new priority-based scheduler that employs a GPU packing mechanism and executes CPU and GPU tasks independently. Our implementation on an F1/10 autonomous driving testbed shows that RT-mDL enables multiple concurrent DL tasks to achieve satisfactory real-time performance in traffic light detection and sign recognition. Moreover, compared to state-of-the-art baselines, RT-mDL reduces the deadline miss rate by 40.12% while sacrificing only 1.7% model accuracy.
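Joint model-variant selection can be pictured as a search over per-task (latency, accuracy) variants under a shared deadline budget. The brute-force sketch below uses made-up numbers and is only a caricature of RT-mDL's optimizer, which additionally assigns priorities and packs GPU work:

```python
from itertools import product

# Hypothetical model variants per task: (latency_ms, accuracy).
# RT-mDL generates such variants via storage-bounded model scaling.
variants = {
    "traffic_light": [(12, 0.90), (20, 0.94), (35, 0.97)],
    "sign_recog":    [(10, 0.88), (18, 0.93), (30, 0.96)],
}

def select_variants(variants, budget_ms):
    # Exhaustively pick one variant per task, maximizing mean accuracy
    # subject to the total latency fitting the frame budget.
    tasks = list(variants)
    best, best_choice = -1.0, None
    for choice in product(*(variants[t] for t in tasks)):
        lat = sum(c[0] for c in choice)
        acc = sum(c[1] for c in choice) / len(choice)
        if lat <= budget_ms and acc > best:
            best, best_choice = acc, dict(zip(tasks, choice))
    return best_choice

# With a 40 ms budget the cheapest pair is not optimal: the search upgrades
# both tasks while still meeting the deadline (total latency 38 ms).
choice = select_variants(variants, budget_ms=40)
assert sum(v[0] for v in choice.values()) <= 40
```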
MultiScatter
Mohamad Katanbaf, Ali Saffari, Joshua R. Smith
DOI: 10.1145/3485730.3485939. SenSys 2021, November 15, 2021.
Realizing the vision of ubiquitous battery-free sensing has proven challenging, mainly due to the practical energy and range limitations of current wireless communication systems. To address this, we design the first wide-area, scalable backscatter network with multiple receiver (RX) and transmitter (TX) base units that communicate with battery-free sensor nodes. Our system circumvents the inherent limitations of backscatter systems (the limited coverage area, frequency-dependent operability, and the sensor nodes' limited ability to handle network tasks) by introducing several coordination techniques between the base units, scaling from a single RX-TX pair to networks with many RX and TX units. We build low-cost RX and TX base units and battery-free sensor nodes with multiple sensing modalities, and evaluate the performance of the MultiScatter system in various deployments. Our evaluation shows that we can successfully communicate with battery-free sensor nodes across 23,400 sq ft of a two-floor educational complex using 5 RX and 20 TX units costing $569. We also show that the aggregated throughput of the backscatter network increases linearly as the number of RX units and the network coverage grow.
Enabling Passive Backscatter Tag Localization Without Active Receivers
A. Ahmad, Xiao Sha, M. Stanaćević, A. Athalye, P. Djurić, Samir R Das
DOI: 10.1145/3485730.3485950. SenSys 2021, November 15, 2021.
Backscatter tags transmit passively, without an on-board active radio transmitter. Almost all present-day backscatter systems, however, rely on active radio receivers. This presents a significant scalability, power, and cost challenge for backscatter systems. To overcome this barrier, recent research has empowered these passive tags with the ability to reliably receive backscatter signals from other tags. This forms the building block of passive networks in which tags talk to each other without an active radio on either the transmit or receive side. For wider functionality, accurate localization of such tags is critical. All known backscatter tag localization techniques rely on active receivers to measure and characterize the received signal, so they cannot be directly applied to passive tag-to-tag networks. This paper closes the gap by developing a localization technique for such passive networks based on a novel method for phase-based ranging in passive receivers. The method allows pairs of passive tags to collaboratively determine the inter-tag channel phase while effectively minimizing the effects of multipath and noise in the surrounding environment. Building on this, we develop a localization technique that benefits from the large link diversity uniquely available in a passive tag-to-tag network. We evaluate the performance of our techniques with extensive micro-benchmarking experiments in an indoor environment using fabricated prototypes of the tag hardware. We show that our phase-based ranging performs similarly to active receivers, providing a median 1D ranging error below 1 cm and a median localization error also below 1 cm. Benefiting from the large-scale link diversity, our localization technique outperforms several state-of-the-art techniques that use active receivers.
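The basic principle behind phase-based ranging, setting aside the paper's multipath mitigation and collaborative estimation, is that distance is proportional to the slope of the channel phase across frequency. A noise-free sketch (all function names hypothetical):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def channel_phase(freq_hz, dist_m):
    # One-way propagation phase of the inter-tag channel (noise-free model).
    return (2 * math.pi * freq_hz * dist_m / C) % (2 * math.pi)

def range_from_phase_slope(f1, f2, phi1, phi2):
    # Distance from the phase difference across two frequencies:
    # d = c * dphi / (2*pi*df). Unambiguous while d < c/df.
    dphi = (phi2 - phi1) % (2 * math.pi)
    return C * dphi / (2 * math.pi * (f2 - f1))

f1, f2 = 915e6, 916e6  # 1 MHz apart -> ~300 m unambiguous range
d_true = 4.2
d_est = range_from_phase_slope(f1, f2, channel_phase(f1, d_true),
                               channel_phase(f2, d_true))
assert abs(d_est - d_true) < 1e-6
```

In practice the measured phase is corrupted by multipath and noise, which is exactly what the paper's collaborative tag-pair estimation is designed to suppress.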
OntoAugment
Fabio Maresca, Gürkan Solmaz, Flavio Cirillo
DOI: 10.1145/3485730.3493445. SenSys 2021, November 15, 2021.
Ontology matching enables harmonizing heterogeneous data models. Existing ontology matching approaches include machine learning; in particular, recent works leverage weak supervision (WS) through programmatic labeling to avoid intensive hand-labeling for large ontologies. Programmatic labeling relies on heuristics and rules, called Labeling Functions (LFs), that generate noisy and incomplete labels. However, covering a reasonable portion of the dataset can require a significant number of LFs, which are time-consuming and not always straightforward to program. This paper proposes a novel system, OntoAugment, that augments LF labels for the ontology matching problem, starting from the outcomes of the LFs. Our solution leverages the "similarity of similarities" between ontology concept bipairs, i.e., pairs of concept pairs. OntoAugment projects a label yielded by an LF for one concept pair onto a similar pair that the same LF does not label, so a wider portion of the dataset is covered even with a limited set of LFs. Experimental results show that OntoAugment provides significant improvements (up to 11 F1 points) over the state-of-the-art WS approach when few LFs are used, while maintaining performance without adding noise when a larger number of LFs already achieves high performance.
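The label-projection idea can be sketched as nearest-neighbour propagation over pair feature vectors. This is a simplified stand-in for the bipair "similarity of similarities"; the names, vectors, and threshold are all made up:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def augment_labels(pair_vecs, lf_labels, threshold=0.9):
    # Project each LF label onto unlabeled pairs whose feature vectors are
    # close enough to a labeled pair, widening dataset coverage.
    augmented = dict(lf_labels)
    for pid, vec in pair_vecs.items():
        if pid in augmented:
            continue
        for lid, label in lf_labels.items():
            if cosine(vec, pair_vecs[lid]) >= threshold:
                augmented[pid] = label
                break
    return augmented

pair_vecs = {
    "temp~temperature": [0.9, 0.1, 0.0],
    "temp~temp_sensor": [0.88, 0.12, 0.05],  # similar to the labeled pair
    "temp~humidity":    [0.1, 0.9, 0.2],
}
lf_labels = {"temp~temperature": 1}  # one LF fired on one pair only
out = augment_labels(pair_vecs, lf_labels)
assert out["temp~temp_sensor"] == 1
assert "temp~humidity" not in out
```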