Juan Ye, Pakawat Nakwijit, Martin Schiemer, Saurav Jha, F. Zambonelli
Continual learning is an emerging research challenge in human activity recognition (HAR). As an increasing number of HAR applications are deployed in real-world environments, it is essential to extend the activity model to adapt to changes in people's activity routines; otherwise, HAR applications become obsolete and fail to deliver activity-aware services. Existing research in HAR has focused on detecting abnormal sensor events or new activities, but extending the activity model remains under-explored. To tackle this challenge directly, we build on recent advances in lifelong machine learning and design a continual activity recognition system, called HAR-GAN, that grows the activity model over time. HAR-GAN requires neither prior knowledge of what the new activity classes might be nor storage of historical data: it leverages Generative Adversarial Networks (GANs) to generate sensor data for the previously learned activities. We have evaluated HAR-GAN on four third-party, public datasets collected with binary sensors and accelerometers. Our extensive empirical results demonstrate the effectiveness of HAR-GAN in continual activity recognition and shed light on future challenges.
{"title":"Continual Activity Recognition with Generative Adversarial Networks","authors":"Juan Ye, Pakawat Nakwijit, Martin Schiemer, Saurav Jha, F. Zambonelli","doi":"10.1145/3440036","DOIUrl":"https://doi.org/10.1145/3440036","url":null,"abstract":"Continual learning is an emerging research challenge in human activity recognition (HAR). As an increasing number of HAR applications are deployed in real-world environments, it is important and essential to extend the activity model to adapt to the change in people’s activity routine. Otherwise, HAR applications can become obsolete and fail to deliver activity-aware services. The existing research in HAR has focused on detecting abnormal sensor events or new activities, however, extending the activity model is currently under-explored. To directly tackle this challenge, we build on the recent advance in the area of lifelong machine learning and design a continual activity recognition system, called HAR-GAN, to grow the activity model over time. HAR-GAN does not require a prior knowledge on what new activity classes might be and it does not require to store historical data by leveraging the use of Generative Adversarial Networks (GAN) to generate sensor data on the previously learned activities. We have evaluated HAR-GAN on four third-party, public datasets collected on binary sensors and accelerometers. 
Our extensive empirical results demonstrate the effectiveness of HAR-GAN in continual activity recognition and shed insight on the future challenges.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87077087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
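The generative-replay mechanism described in the HAR-GAN abstract above can be illustrated with a minimal sketch. This is not the authors' implementation: a per-class Gaussian sampler stands in for the GAN generator, a nearest-centroid rule stands in for the activity classifier, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReplayHAR:
    """Continual activity model with generative replay.

    A per-class Gaussian sampler stands in for the GAN generator:
    it lets us regenerate pseudo-data for old classes instead of
    storing their raw sensor readings.
    """

    def __init__(self):
        self.samplers = {}   # class -> (mean, std) fitted on its data
        self.centroids = {}  # nearest-centroid classifier

    def learn(self, label, data, n_replay=100):
        # Replay: generate pseudo-samples for every previously learned class.
        replay = {c: m + s * rng.standard_normal((n_replay, data.shape[1]))
                  for c, (m, s) in self.samplers.items()}
        replay[label] = data
        # Refit the classifier on new data plus replayed data.
        self.centroids = {c: x.mean(axis=0) for c, x in replay.items()}
        # Fit a sampler for the new class so it can be replayed later.
        self.samplers[label] = (data.mean(axis=0), data.std(axis=0))

    def predict(self, x):
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

Each call to `learn()` regenerates pseudo-samples of the earlier classes, so the classifier is refit without ever storing their raw data, which is the key idea the abstract describes.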
Hossam ElHussini, C. Assi, Bassam Moussa, Ribal Atallah, A. Ghrayeb
With the growing market for Electric Vehicles (EVs), the deployment of their charging infrastructure plays a crucial role in their adoption. With the Internet of Things revolution, the EV charging infrastructure is coming on board through the introduction of smart Electric Vehicle Charging Stations (EVCSs), a myriad of communication protocols, and different entities. In this article, we provide an overview of this infrastructure, detailing the participating entities and the communication protocols. Further, we contextualize the current deployment of EVCSs using available public data. In light of this survey, we identify two key concerns, the lack of standardization and multiple points of failure, which render the current deployment of the EV charging infrastructure vulnerable to an array of attacks. Moreover, we propose a novel attack scenario that exploits the unique characteristics of EVCSs and their protocols (such as high power wattage and support for reverse power flow) to cause disturbances to the power grid. We investigate three attack variations: a sudden surge in power demand, a sudden surge in power supply, and a switching attack. To support our claims, we showcase, using a real-world example, how an adversary can compromise an EVCS and create a traffic bottleneck by tampering with the charging schedules of EVs. Further, we perform a simulation-based study of the impact of our proposed attack variations on the WSCC 9-bus system. Our simulations show that an adversary can cause devastating effects on the power grid, which might result in blackouts and cascading failures, by compromising a small number of EVCSs.
{"title":"A Tale of Two Entities","authors":"Hossam ElHussini, C. Assi, Bassam Moussa, Ribal Atallah, A. Ghrayeb","doi":"10.1145/3437258","DOIUrl":"https://doi.org/10.1145/3437258","url":null,"abstract":"With the growing market of Electric Vehicles (EV), the procurement of their charging infrastructure plays a crucial role in their adoption. Within the revolution of Internet of Things, the EV charging infrastructure is getting on board with the introduction of smart Electric Vehicle Charging Stations (EVCS), a myriad set of communication protocols, and different entities. We provide in this article an overview of this infrastructure detailing the participating entities and the communication protocols. Further, we contextualize the current deployment of EVCSs through the use of available public data. In the light of such a survey, we identify two key concerns, the lack of standardization and multiple points of failures, which renders the current deployment of EV charging infrastructure vulnerable to an array of different attacks. Moreover, we propose a novel attack scenario that exploits the unique characteristics of the EVCSs and their protocol (such as high power wattage and support for reverse power flow) to cause disturbances to the power grid. We investigate three different attack variations; sudden surge in power demand, sudden surge in power supply, and a switching attack. To support our claims, we showcase using a real-world example how an adversary can compromise an EVCS and create a traffic bottleneck by tampering with the charging schedules of EVs. Further, we perform a simulation-based study of the impact of our proposed attack variations on the WSCC 9 bus system. 
Our simulations show that an adversary can cause devastating effects on the power grid, which might result in blackout and cascading failure by comprising a small number of EVCSs.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82548225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
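The grid-level effect of a coordinated demand-surge attack can be illustrated with a single-machine frequency model. This is a toy sketch, not the WSCC 9-bus simulation from the article; the inertia and damping constants are illustrative assumptions.

```python
def freq_deviation(p_attack_pu, h=5.0, d=1.0, dt=0.01, t_end=60.0):
    """Final per-unit frequency deviation after a sustained demand step.

    Euler-integrates a single-machine swing model,
        2h * df/dt = -p_attack_pu - d * f,
    where p_attack_pu is the net load step injected by compromised
    EVCSs, h is an (illustrative) inertia constant, and d a damping
    coefficient. The steady state is -p_attack_pu / d.
    """
    f = 0.0
    for _ in range(int(t_end / dt)):
        f += dt * (-p_attack_pu - d * f) / (2.0 * h)
    return f
```

In this model, doubling the compromised load doubles the steady-state frequency dip, which is why a small number of high-wattage EVCSs, as the abstract argues, is enough to matter.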
Source Location Privacy (SLP) is an important property for monitoring assets in privacy-critical sensor network and Internet of Things applications. Many SLP-aware routing techniques exist, with most striking a tradeoff between SLP and other key metrics such as energy (due to battery power). Typically, the number of messages sent has been used as a proxy for the energy consumed. Existing work (for SLP against a local attacker) does not consider the impact of sleeping via duty cycling to reduce the energy cost of an SLP-aware routing protocol. Therefore, two main challenges exist: (i) how to achieve a low duty cycle without loss of control messages that configure the SLP protocol and (ii) how to achieve high SLP without requiring a long time spent awake. In this article, we present a novel formalisation of a duty cycling protocol as a transformation process. Using derived transformation rules, we present the first duty cycling protocol for an SLP-aware routing protocol for a local eavesdropping attacker. Simulation results on grids demonstrate a duty cycle of 10%, while only increasing the capture ratio of the source by 3 percentage points, and testbed experiments on FlockLab demonstrate an 80% reduction in the average current draw.
{"title":"A Spatial Source Location Privacy-aware Duty Cycle for Internet of Things Sensor Networks","authors":"M. Bradbury, A. Jhumka, C. Maple","doi":"10.1145/3430379","DOIUrl":"https://doi.org/10.1145/3430379","url":null,"abstract":"Source Location Privacy (SLP) is an important property for monitoring assets in privacy-critical sensor network and Internet of Things applications. Many SLP-aware routing techniques exist, with most striking a tradeoff between SLP and other key metrics such as energy (due to battery power). Typically, the number of messages sent has been used as a proxy for the energy consumed. Existing work (for SLP against a local attacker) does not consider the impact of sleeping via duty cycling to reduce the energy cost of an SLP-aware routing protocol. Therefore, two main challenges exist: (i) how to achieve a low duty cycle without loss of control messages that configure the SLP protocol and (ii) how to achieve high SLP without requiring a long time spent awake. In this article, we present a novel formalisation of a duty cycling protocol as a transformation process. Using derived transformation rules, we present the first duty cycling protocol for an SLP-aware routing protocol for a local eavesdropping attacker. 
Simulation results on grids demonstrate a duty cycle of 10%, while only increasing the capture ratio of the source by 3 percentage points, and testbed experiments on FlockLab demonstrate an 80% reduction in the average current draw.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75074523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
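The reported energy savings follow directly from duty-cycle arithmetic: average current is the awake-time fraction of the active draw plus the sleep draw. A sketch with illustrative current figures (not FlockLab measurements):

```python
def avg_current_ma(duty, i_active_ma=20.0, i_sleep_ma=0.02):
    """Average current draw of a duty-cycled node.

    The radio is awake for a fraction `duty` of the time at
    i_active_ma and asleep otherwise at i_sleep_ma (both figures
    are illustrative assumptions, not values from the article).
    """
    return duty * i_active_ma + (1.0 - duty) * i_sleep_ma
```

With these figures, a 10% duty cycle draws about 2.0 mA versus 20 mA always-on, the same order of reduction as the 80% drop in average current reported on FlockLab.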
As Electric Vehicles (EVs) become increasingly popular, their battery-related problems (e.g., short driving range and heavy battery weight) must be resolved. Optimizing the velocity of EVs to minimize energy consumption while driving is an effective way to mitigate these problems. However, previous velocity optimization methods assume that vehicles pass through traffic lights immediately when the signal is green. In reality, a vehicle may still be delayed at a green light by the queue of vehicles waiting in front of it. Also, because velocity optimization is performed for individual vehicles, previous methods cannot avoid rear-end collisions: a vehicle following its optimal velocity profile may collide with the vehicle ahead of it. In this article, for the first time, we propose a velocity optimization system that enables EVs to pass green traffic lights without delay and to avoid rear-end collisions, ensuring driving safety when EVs follow optimal velocity profiles on the road. We collected real driving data on sections of the US-25 highway (with two driving lanes in each direction and relatively low traffic volume) to conduct extensive trace-driven simulation studies. Results show that our velocity optimization system reduces energy consumption by up to 17.5% compared with real driving patterns, without increasing trip time. It also helps EVs avoid possible collisions, compared with existing collision avoidance methods.
{"title":"Velocity Optimization of Pure Electric Vehicles with Traffic Dynamics and Driving Safety Considerations","authors":"Liuwang Kang, Ankur Sarker, Haiying Shen","doi":"10.1145/3433678","DOIUrl":"https://doi.org/10.1145/3433678","url":null,"abstract":"As Electric Vehicles (EVs) become increasingly popular, their battery-related problems (e.g., short driving range and heavy battery weight) must be resolved as soon as possible. Velocity optimization of EVs to minimize energy consumption in driving is an effective alternative to handle these problems. However, previous velocity optimization methods assume that vehicles will pass through traffic lights immediately at green traffic signals. Actually, a vehicle may still experience a delay to pass a green traffic light due to a vehicle waiting queue in front of the traffic light. Also, as velocity optimization is for individual vehicles, previous methods cannot avoid rear-end collisions. That is, a vehicle following its optimal velocity profile may experience rear-end collisions with its frontal vehicle on the road. In this article, for the first time, we propose a velocity optimization system that enables EVs to immediately pass green traffic lights without delay and to avoid rear-end collisions to ensure driving safety when EVs follow optimal velocity profiles on the road. We collected real driving data on road sections of US-25 highway (with two driving lanes in each direction and relatively low traffic volume) to conduct extensive trace-driven simulation studies. Results show that our velocity optimization system reduces energy consumption by up to 17.5% compared with real driving patterns without increasing trip time. 
Also, it helps EVs to avoid possible collisions compared with existing collision avoidance methods.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82491871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
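The queueing delay that the abstract says earlier methods ignore can be modelled with a simple saturation-flow rule: after the light turns green, each queued vehicle adds roughly one saturation headway before a following vehicle can cross. A sketch with illustrative parameters (startup_lag_s and headway_s are assumptions, not values from the article):

```python
def earliest_pass_s(queue_len, green_start_s, startup_lag_s=2.0, headway_s=2.0):
    """Earliest time a vehicle behind `queue_len` stopped cars can
    cross the stop line once the light turns green.

    Simple saturation-flow model: the first queued car moves after a
    startup lag, and each subsequent car discharges one headway apart.
    """
    if queue_len == 0:
        return green_start_s  # empty queue: pass as soon as it is green
    return green_start_s + startup_lag_s + headway_s * queue_len
```

A velocity planner would then target arrival at `earliest_pass_s(...)` rather than at the moment the light turns green, avoiding a stop behind the discharging queue.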
The design and development process for Internet of Things (IoT) applications is more complicated than that for desktop, mobile, or web applications. First, IoT applications require both software and hardware to work together across many different types of nodes with different capabilities under different conditions. Second, IoT application development requires different types of software engineers, such as desktop, web, embedded, and mobile developers, to work together; furthermore, non-software engineering personnel, such as business analysts, are also involved in the design process. In addition to the complexity of having multiple software engineering specialists cooperate to merge different hardware and software components, the development process requires different software and hardware stacks to be integrated (e.g., stacks from different companies such as Microsoft Azure and IBM Bluemix). Due to these complexities, non-functional requirements such as security and privacy, which are highly important in the context of the IoT, tend to be ignored or treated as less important in the IoT application development process. This article reviews techniques, methods, and tools that support security and privacy requirements in existing non-IoT application designs, enabling their use in and integration into IoT applications. It focuses primarily on design notations, models, and languages that facilitate capturing non-functional requirements (i.e., security and privacy). Our goal is not only to analyse, compare, and consolidate the empirical research but also to appreciate their findings and discuss their applicability to the IoT.
{"title":"Security and Privacy Requirements for the Internet of Things","authors":"Nada Alhirabi, O. Rana, Charith Perera","doi":"10.1145/3437537","DOIUrl":"https://doi.org/10.1145/3437537","url":null,"abstract":"The design and development process for internet of things (IoT) applications is more complicated than that for desktop, mobile, or web applications. First, IoT applications require both software and hardware to work together across many different types of nodes with different capabilities under different conditions. Second, IoT application development involves different types of software engineers such as desktop, web, embedded, and mobile to work together. Furthermore, non-software engineering personnel such as business analysts are also involved in the design process. In addition to the complexity of having multiple software engineering specialists cooperating to merge different hardware and software components together, the development process requires different software and hardware stacks to be integrated together (e.g., different stacks from different companies such as Microsoft Azure and IBM Bluemix). Due to the above complexities, non-functional requirements (such as security and privacy, which are highly important in the context of the IoT) tend to be ignored or treated as though they are less important in the IoT application development process. This article reviews techniques, methods, and tools to support security and privacy requirements in existing non-IoT application designs, enabling their use and integration into IoT applications. This article primarily focuses on design notations, models, and languages that facilitate capturing non-functional requirements (i.e., security and privacy). 
Our goal is not only to analyse, compare, and consolidate the empirical research but also to appreciate their findings and discuss their applicability for the IoT.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83751973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yongsen Ma, S. Arshad, Swetha Muniraju, E. Torkildson, Enrico Rantala, K. Doppler, Gang Zhou
In recent years, Channel State Information (CSI) measured by WiFi has been widely used for human activity recognition. In this article, we propose a deep learning design for location- and person-independent activity recognition with WiFi. The proposed design consists of three Deep Neural Networks (DNNs): a 2D Convolutional Neural Network (CNN) as the recognition algorithm, a 1D CNN as the state machine, and a reinforcement learning agent for neural architecture search. The recognition algorithm learns location- and person-independent features from different perspectives of the CSI data. The state machine learns temporal dependency information from historical classification results. The reinforcement learning agent optimizes the neural architecture of the recognition algorithm using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The proposed design is evaluated in a lab environment with different WiFi device locations, antenna orientations, sitting/standing/walking locations and orientations, and multiple persons. It achieves 97% average accuracy when the testing devices and persons are not seen during training, and 80% and 83% accuracy on two public datasets. The proposed design requires very little human effort for ground truth labeling, feature engineering, signal processing, and tuning of learning parameters and hyperparameters.
{"title":"Location- and Person-Independent Activity Recognition with WiFi, Deep Neural Networks, and Reinforcement Learning","authors":"Yongsen Ma, S. Arshad, Swetha Muniraju, E. Torkildson, Enrico Rantala, K. Doppler, Gang Zhou","doi":"10.1145/3424739","DOIUrl":"https://doi.org/10.1145/3424739","url":null,"abstract":"In recent years, Channel State Information (CSI) measured by WiFi is widely used for human activity recognition. In this article, we propose a deep learning design for location- and person-independent activity recognition with WiFi. The proposed design consists of three Deep Neural Networks (DNNs): a 2D Convolutional Neural Network (CNN) as the recognition algorithm, a 1D CNN as the state machine, and a reinforcement learning agent for neural architecture search. The recognition algorithm learns location- and person-independent features from different perspectives of CSI data. The state machine learns temporal dependency information from history classification results. The reinforcement learning agent optimizes the neural architecture of the recognition algorithm using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The proposed design is evaluated in a lab environment with different WiFi device locations, antenna orientations, sitting/standing/walking locations/orientations, and multiple persons. The proposed design has 97% average accuracy when testing devices and persons are not seen during training. The proposed design is also evaluated by two public datasets with accuracy of 80% and 83%. 
The proposed design needs very little human efforts for ground truth labeling, feature engineering, signal processing, and tuning of learning parameters and hyperparameters.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78014146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
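The role of the "state machine" in the design above, smoothing per-window classifier outputs using their history, can be illustrated with a much simpler stand-in: a majority vote over recent predictions. The article uses a 1D CNN for this; the class below and its names are hypothetical.

```python
from collections import Counter, deque

class HistorySmoother:
    """Majority vote over the last `window` per-window predictions.

    A simple stand-in for the article's 1D-CNN state machine: it
    suppresses isolated misclassifications by exploiting temporal
    dependency in the label sequence.
    """

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, label):
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]
```

A lone outlier prediction is voted down by its neighbours, while a genuine activity change wins the vote once it persists for a few windows.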
Network bootstrapping is one of the initial tasks executed in any wireless network, such as the Industrial Internet of Things (IIoT). Fast formation of an IIoT network helps in resource conservation and efficient data collection. Our probabilistic analysis reveals that the performance of 6TiSCH-based IIoT network formation degrades over time for the following reasons: (i) the IETF 6TiSCH Minimal Configuration (6TiSCH-MC) standard gives the beacon frame the highest priority over all other control packets, (ii) 6TiSCH-MC provides minimal routing information during network formation, and (iii) sometimes a joined node cannot transmit control packets due to high congestion in the shared slots. To deal with these problems, this article proposes two schemes: opportunistic priority alternation and rate control (OPR), and opportunistic channel access (OCA). OPR dynamically adjusts the priority of control packets and provides sufficient routing information during network bootstrapping, whereas OCA allows nodes with urgent packets to transmit them sooner. Along with a theoretical analysis of the proposed schemes, we provide comparison-based simulation and real-testbed experiment results to validate them jointly.
{"title":"Opportunistic Transmission of Control Packets for Faster Formation of 6TiSCH Network","authors":"Alakesh Kalita, M. Khatua","doi":"10.1145/3430380","DOIUrl":"https://doi.org/10.1145/3430380","url":null,"abstract":"Network bootstrapping is one of the initial tasks executed in any wireless network such as Industrial Internet of Things (IIoT). Fast formation of IIoT network helps in resource conservation and efficient data collection. Our probabilistic analysis reveals that the performance of 6TiSCH based IIoT network formation degrades with time because of the following reasons: (i) IETF 6TiSCH Minimal Configuration (6TiSCH-MC) standard considered that beacon frame has the highest priority over all other control packets, (ii) 6TiSCH-MC provides minimal routing information during network formation, and (iii) sometimes, joined node can not transmit control packets due to high congestion in shared slots. To deal with these problems, this article proposes two schemes—opportunistic priority alternation and rate control (OPR) and opportunistic channel access (OCA). OPR dynamically adjusts the priority of control packets and provides sufficient routing information during network bootstrapping, whereas OCA allows the nodes having urgent packet to transmit it in less time. Along with the theoretical analysis of the proposed schemes, we also provide comparison-based simulation and real testbed experiment results to validate the proposed schemes together. 
The received results show significant performance improvements in terms of joining time and energy consumption.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86121200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
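OPR's core idea, re-weighting the transmission priority of control packets instead of always sending beacons first, can be sketched as a priority queue whose priorities are chosen at run time. This is a hypothetical illustration, not the authors' protocol code.

```python
import heapq

class ControlQueue:
    """Control-packet queue with run-time priorities (lower = first).

    Sketch of OPR-style dynamic prioritisation: instead of beacons
    always winning, the caller picks each packet's priority when it
    is enqueued. Ties are served in FIFO order via a sequence number.
    """

    def __init__(self):
        self._heap, self._seq = [], 0

    def push(self, packet, priority):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]
```

During bootstrapping, a node could push routing packets with a lower (more urgent) priority value than enhanced beacons, reversing the fixed ordering that 6TiSCH-MC prescribes.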
The Internet of Things (IoT) will be a main data generation infrastructure for achieving better system intelligence. This article considers the design and implementation of a practical privacy-preserving collaborative learning scheme, in which a curious learning coordinator trains a better machine learning model based on the data samples contributed by a number of IoT objects, while the confidentiality of the raw forms of the training data is protected against the coordinator. Existing distributed machine learning and data encryption approaches incur significant computation and communication overhead, rendering them ill-suited for resource-constrained IoT objects. We study an approach that applies independent random projection at each IoT object to obfuscate data and trains a deep neural network at the coordinator based on the projected data from the IoT objects. This approach introduces light computation overhead to the IoT objects and moves most workload to the coordinator that can have sufficient computing resources. Although the independent projections performed by the IoT objects address the potential collusion between the curious coordinator and some compromised IoT objects, they significantly increase the complexity of the projected data. In this article, we leverage the superior learning capability of deep learning in capturing sophisticated patterns to maintain good learning performance. Extensive comparative evaluation shows that this approach outperforms other lightweight approaches that apply additive noisification for differential privacy and/or support vector machines for learning in the applications with light to moderate data pattern complexities.
{"title":"On Lightweight Privacy-preserving Collaborative Learning for Internet of Things by Independent Random Projections","authors":"Linshan Jiang, Rui Tan, Xin Lou, Guosheng Lin","doi":"10.1145/3441303","DOIUrl":"https://doi.org/10.1145/3441303","url":null,"abstract":"The Internet of Things (IoT) will be a main data generation infrastructure for achieving better system intelligence. This article considers the design and implementation of a practical privacy-preserving collaborative learning scheme, in which a curious learning coordinator trains a better machine learning model based on the data samples contributed by a number of IoT objects, while the confidentiality of the raw forms of the training data is protected against the coordinator. Existing distributed machine learning and data encryption approaches incur significant computation and communication overhead, rendering them ill-suited for resource-constrained IoT objects. We study an approach that applies independent random projection at each IoT object to obfuscate data and trains a deep neural network at the coordinator based on the projected data from the IoT objects. This approach introduces light computation overhead to the IoT objects and moves most workload to the coordinator that can have sufficient computing resources. Although the independent projections performed by the IoT objects address the potential collusion between the curious coordinator and some compromised IoT objects, they significantly increase the complexity of the projected data. In this article, we leverage the superior learning capability of deep learning in capturing sophisticated patterns to maintain good learning performance. 
Extensive comparative evaluation shows that this approach outperforms other lightweight approaches that apply additive noisification for differential privacy and/or support vector machines for learning in the applications with light to moderate data pattern complexities.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81054621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
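The obfuscation step the abstract describes is plain random projection: each object keeps a private random matrix and transmits only the projected sample, while the coordinator trains its deep model on the projected data. A minimal numpy sketch (function names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_projector(d_in, d_out):
    """Each IoT object draws its own private random matrix.

    The matrix is never shared, so the coordinator (or colluding
    objects) cannot invert another object's projection.
    """
    return rng.standard_normal((d_out, d_in)) / np.sqrt(d_out)

def obfuscate(projector, x):
    """Only the projected vector leaves the object, never the raw x."""
    return projector @ x
```

Because every object projects independently, identical raw samples from two objects look different to the coordinator, which is exactly why the projected data's pattern complexity increases and a deep network is needed at the coordinator.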
Internet of Things (IoT) sensors in smart buildings are becoming increasingly ubiquitous, making buildings more livable, energy efficient, and sustainable. These devices sense the environment and generate multivariate temporal data of paramount importance for detecting anomalies and improving the prediction of energy usage in smart buildings. However, detecting these anomalies in centralized systems often incurs long response delays. To overcome this issue, we formulate the anomaly detection problem in a federated learning setting by leveraging the multi-task learning paradigm, which aims at solving multiple tasks simultaneously while taking advantage of the similarities and differences across tasks. We propose a novel privacy-by-design federated learning model using a stacked long short-term memory (LSTM) model, and we demonstrate that it converges more than twice as fast during training as the centralized LSTM. The effectiveness of our federated learning approach is demonstrated on three real-world datasets generated by the IoT production system at the General Electric Current smart building, achieving state-of-the-art performance compared to baseline methods in both classification and regression tasks. Our experimental results demonstrate the effectiveness of the proposed framework in reducing the overall training cost without compromising prediction performance.
"A Federated Learning Approach to Anomaly Detection in Smart Buildings." Raed Abdel Sater and A. Hamza. DOI: 10.1145/3467981. ACM Transactions on Internet of Things (impact factor 2.7), published 2020-10-20.
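Federated settings like the one described above typically aggregate locally trained model parameters with federated averaging (FedAvg), weighting each client by its local sample count; the paper's exact aggregation scheme may differ. A minimal NumPy sketch with hypothetical per-building clients:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client model parameters into a
    global model, weighting each client by its local sample count."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Three hypothetical buildings, each contributing one weight matrix
# and one bias vector from its locally trained model.
clients = [[np.full((2, 2), k), np.full(2, k)] for k in (1.0, 2.0, 3.0)]
sizes = [100, 100, 200]  # local dataset sizes
global_weights = fedavg(clients, sizes)
```

Because only parameters (not raw sensor readings) leave each building, this style of aggregation underpins the privacy-by-design claim in the abstract.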
A. Anzanpour, Delaram Amiri, I. Azimi, M. Levorato, N. Dutt, P. Liljeberg, A. Rahmani
Recent advances in pervasive Internet of Things technologies and edge computing have opened new avenues for the development of ubiquitous health monitoring applications. Delivering an acceptable level of usability and accuracy for these healthcare Internet of Things applications requires optimization of both system-driven and data-driven aspects, which are typically done in a disjoint manner. Although decoupled optimization of these processes yields local optima at each level, synergistic coupling of the system and data levels can lead to a holistic solution, opening new opportunities for optimization. In this article, we present an edge-assisted resource manager that dynamically controls the fidelity and duration of sensing with respect to changes in the patient's activity and health state, thus fine-tuning the trade-off between energy efficiency and measurement accuracy. The cornerstone of our proposed solution is an intelligent low-latency real-time controller implemented at the edge layer that detects abnormalities in the patient's condition and accordingly adjusts the sensing parameters of a reconfigurable wireless sensor node. We assess the efficiency of our proposed system via a case study of the photoplethysmography-based medical early warning score system. Our experiments on a full hardware-software implementation of the early warning score system reveal up to 49% power savings while maintaining the accuracy of the sensory data.
"Edge-Assisted Control for Healthcare Internet of Things." A. Anzanpour, Delaram Amiri, I. Azimi, M. Levorato, N. Dutt, P. Liljeberg, and A. Rahmani. DOI: 10.1145/3407091. ACM Transactions on Internet of Things (impact factor 2.7), published 2020-10-19.
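The resource manager described in this abstract adapts sensing fidelity to the patient's state. A toy policy in this spirit maps the early warning score to a sampling interval; the thresholds and intervals below are invented for illustration and are not taken from the paper:

```python
def sensing_interval(ews: int) -> float:
    """Map a medical early warning score (EWS) to a sensing interval
    in seconds: sample more often as the patient's risk increases."""
    if ews >= 7:       # high risk: near-continuous monitoring
        return 1.0
    if ews >= 4:       # medium risk: frequent checks
        return 30.0
    return 300.0       # low risk: relax sensing to save energy
```

Running such a policy at the edge, close to the sensor node, is what keeps the control loop low-latency while trading energy for accuracy only when the patient's condition warrants it.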