The design and development process for Internet of Things (IoT) applications is more complicated than that for desktop, mobile, or web applications. First, IoT applications require software and hardware to work together across many different types of nodes with different capabilities, under different conditions. Second, IoT application development requires different types of software engineers (desktop, web, embedded, and mobile) to work together. Furthermore, non-software-engineering personnel, such as business analysts, are also involved in the design process. In addition to the complexity of having multiple software engineering specialists cooperating to merge different hardware and software components, the development process requires different software and hardware stacks to be integrated (e.g., stacks from different companies, such as Microsoft Azure and IBM Bluemix). Due to these complexities, non-functional requirements such as security and privacy, which are highly important in the context of the IoT, tend to be ignored or treated as less important in the IoT application development process. This article reviews techniques, methods, and tools that support security and privacy requirements in existing non-IoT application designs, enabling their use in and integration into IoT applications. It focuses primarily on design notations, models, and languages that facilitate capturing non-functional requirements (i.e., security and privacy). Our goal is not only to analyse, compare, and consolidate the empirical research but also to appreciate its findings and discuss its applicability to the IoT.
Nada Alhirabi, O. Rana, Charith Perera. "Security and Privacy Requirements for the Internet of Things." ACM Transactions on Internet of Things, pp. 1–37, published 2021-02-01. DOI: https://doi.org/10.1145/3437537
Yongsen Ma, S. Arshad, Swetha Muniraju, E. Torkildson, Enrico Rantala, K. Doppler, Gang Zhou
In recent years, Channel State Information (CSI) measured by WiFi has been widely used for human activity recognition. In this article, we propose a deep learning design for location- and person-independent activity recognition with WiFi. The proposed design consists of three Deep Neural Networks (DNNs): a 2D Convolutional Neural Network (CNN) as the recognition algorithm, a 1D CNN as the state machine, and a reinforcement learning agent for neural architecture search. The recognition algorithm learns location- and person-independent features from different perspectives of the CSI data. The state machine learns temporal dependency information from past classification results. The reinforcement learning agent optimizes the neural architecture of the recognition algorithm using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The proposed design is evaluated in a lab environment with different WiFi device locations, antenna orientations, sitting/standing/walking locations and orientations, and multiple persons. It achieves 97% average accuracy when the test devices and persons are not seen during training, and accuracies of 80% and 83% on two public datasets. The design requires very little human effort for ground-truth labeling, feature engineering, signal processing, and tuning of learning parameters and hyperparameters.
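The learned 1D-CNN state machine above conditions on past classification results; a hand-coded stand-in for that idea (our illustration, not the authors' code) is majority-vote smoothing over a sliding history window:

```python
from collections import Counter, deque

def smooth_predictions(frame_preds, window=5):
    """Majority vote over a sliding window of per-frame activity labels --
    a simplified stand-in for a state machine that learns temporal
    dependencies from history classification results."""
    history = deque(maxlen=window)
    smoothed = []
    for label in frame_preds:
        history.append(label)
        # The most common label in the recent history wins.
        smoothed.append(Counter(history).most_common(1)[0][0])
    return smoothed

# A spurious one-frame "run" inside a walking sequence is filtered out:
print(smooth_predictions(["walk", "walk", "run", "walk", "walk"]))
# → ['walk', 'walk', 'walk', 'walk', 'walk']
```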
Yongsen Ma, S. Arshad, Swetha Muniraju, E. Torkildson, Enrico Rantala, K. Doppler, Gang Zhou. "Location- and Person-Independent Activity Recognition with WiFi, Deep Neural Networks, and Reinforcement Learning." ACM Transactions on Internet of Things, pp. 1–25, published 2021-01-21. DOI: https://doi.org/10.1145/3424739
Network bootstrapping is one of the initial tasks executed in any wireless network, such as the Industrial Internet of Things (IIoT). Fast formation of an IIoT network helps in resource conservation and efficient data collection. Our probabilistic analysis reveals that the performance of 6TiSCH-based IIoT network formation degrades with time for the following reasons: (i) the IETF 6TiSCH Minimal Configuration (6TiSCH-MC) standard gives the beacon frame the highest priority over all other control packets, (ii) 6TiSCH-MC provides minimal routing information during network formation, and (iii) a joined node sometimes cannot transmit control packets due to high congestion in shared slots. To address these problems, this article proposes two schemes: opportunistic priority alternation and rate control (OPR), and opportunistic channel access (OCA). OPR dynamically adjusts the priority of control packets and provides sufficient routing information during network bootstrapping, whereas OCA allows nodes with urgent packets to transmit them in less time. Along with a theoretical analysis of the proposed schemes, we provide comparison-based simulation and real-testbed experiment results to validate them. The results show significant performance improvements in terms of joining time and energy consumption.
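A toy sketch of the priority-alternation idea behind OPR (our reading of the scheme, with made-up packet names; not the authors' implementation): instead of always letting Enhanced Beacons win the shared slot, the top priority alternates between beacon and routing (DIO) packets so routing information is also disseminated during joining:

```python
def opr_schedule(eb_queue, dio_queue, slots):
    """Toy model of OPR-style priority alternation: 6TiSCH-MC always favours
    Enhanced Beacons (EBs) in shared slots, starving routing (DIO) packets;
    alternating the top priority lets routing information through as well."""
    sent = []
    favour_eb = True
    for _ in range(slots):
        primary, secondary = (eb_queue, dio_queue) if favour_eb else (dio_queue, eb_queue)
        if primary:
            sent.append(primary.pop(0))
        elif secondary:
            sent.append(secondary.pop(0))
        favour_eb = not favour_eb  # alternate priority every shared slot
    return sent

# Interleaved dissemination instead of EB-only until the EB queue drains:
print(opr_schedule(["EB1", "EB2"], ["DIO1", "DIO2"], 4))
# → ['EB1', 'DIO1', 'EB2', 'DIO2']
```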
Alakesh Kalita, M. Khatua. "Opportunistic Transmission of Control Packets for Faster Formation of 6TiSCH Network." ACM Transactions on Internet of Things, pp. 1–29, published 2021-01-02. DOI: https://doi.org/10.1145/3430380
The Internet of Things (IoT) will be a main data-generation infrastructure for achieving better system intelligence. This article considers the design and implementation of a practical privacy-preserving collaborative learning scheme, in which a curious learning coordinator trains a better machine learning model based on data samples contributed by a number of IoT objects, while the confidentiality of the raw forms of the training data is protected against the coordinator. Existing distributed machine learning and data encryption approaches incur significant computation and communication overhead, rendering them ill-suited for resource-constrained IoT objects. We study an approach that applies an independent random projection at each IoT object to obfuscate the data and trains a deep neural network at the coordinator based on the projected data from the IoT objects. This approach introduces light computation overhead on the IoT objects and moves most of the workload to the coordinator, which can have sufficient computing resources. Although the independent projections performed by the IoT objects address potential collusion between the curious coordinator and some compromised IoT objects, they significantly increase the complexity of the projected data. We leverage the superior capability of deep learning in capturing sophisticated patterns to maintain good learning performance. Extensive comparative evaluation shows that this approach outperforms other lightweight approaches that apply additive noisification for differential privacy and/or support vector machines for learning, in applications with light to moderate data pattern complexities.
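A minimal sketch of the per-object independent random projection (dimensions and seeds are illustrative; the coordinator-side deep neural network is omitted):

```python
import random

def make_projection(d_in, d_out, seed):
    """Each IoT object draws its own Gaussian projection matrix from a
    private seed, so projections are independent across objects; this is
    what blunts collusion between the coordinator and compromised objects."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0 / d_out ** 0.5) for _ in range(d_in)]
            for _ in range(d_out)]

def project(sample, matrix):
    """Obfuscate one raw sample x into y = Rx; only y leaves the device."""
    return [sum(r * x for r, x in zip(row, sample)) for row in matrix]

# Two objects project the same raw sample with different private matrices;
# the coordinator then trains a deep neural network on the projected data.
x = [0.5] * 8
y_a = project(x, make_projection(8, 4, seed=11))
y_b = project(x, make_projection(8, 4, seed=42))
print(len(y_a), y_a != y_b)  # 4 projected features; the outputs differ
```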
Linshan Jiang, Rui Tan, Xin Lou, Guosheng Lin. "On Lightweight Privacy-preserving Collaborative Learning for Internet of Things by Independent Random Projections." ACM Transactions on Internet of Things, pp. 1–32, published 2020-12-11. DOI: https://doi.org/10.1145/3441303
Internet of Things (IoT) sensors in smart buildings are becoming increasingly ubiquitous, making buildings more livable, energy-efficient, and sustainable. These devices sense the environment and generate multivariate temporal data of paramount importance for detecting anomalies and improving the prediction of energy usage in smart buildings. However, detecting these anomalies in centralized systems is often plagued by large delays in response time. To overcome this issue, we formulate the anomaly detection problem in a federated learning setting by leveraging the multi-task learning paradigm, which aims to solve multiple tasks simultaneously while exploiting the similarities and differences across tasks. We propose a novel privacy-by-design federated learning model using a stacked long short-term memory (LSTM) model, and we demonstrate that it converges more than twice as fast during training compared to a centralized LSTM. The effectiveness of our federated learning approach is demonstrated on three real-world datasets generated by the IoT production system of a General Electric Current smart building, achieving state-of-the-art performance compared to baseline methods on both classification and regression tasks. Our experimental results demonstrate the effectiveness of the proposed framework in reducing the overall training cost without compromising prediction performance.
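Assuming the standard federated-averaging aggregation step (a sketch of the idea with flattened parameter lists, not the authors' code), each round the server combines client updates weighted by local sample counts:

```python
def fedavg(client_params, client_sizes):
    """Sample-size-weighted average of client model parameters: the server
    never sees raw building data, only parameter updates. Parameters are
    given as flat lists (a stacked LSTM's weights would be flattened)."""
    total = sum(client_sizes)
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(len(client_params[0]))]

# Two buildings with 100 and 300 local samples; the larger weighs more.
print(fedavg([[1.0, 2.0], [3.0, 4.0]], [100, 300]))  # → [2.5, 3.5]
```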
Raed Abdel Sater, A. Hamza. "A Federated Learning Approach to Anomaly Detection in Smart Buildings." ACM Transactions on Internet of Things, pp. 1–23, published 2020-10-20. DOI: https://doi.org/10.1145/3467981
A. Anzanpour, Delaram Amiri, I. Azimi, M. Levorato, N. Dutt, P. Liljeberg, A. Rahmani
Recent advances in pervasive Internet of Things technologies and edge computing have opened new avenues for the development of ubiquitous health monitoring applications. Delivering an acceptable level of usability and accuracy for these healthcare Internet of Things applications requires optimization of both system-driven and data-driven aspects, which are typically handled in a disjoint manner. Although decoupled optimization of these processes yields local optima at each level, synergistic coupling of the system and data levels can lead to a holistic solution that opens new opportunities for optimization. In this article, we present an edge-assisted resource manager that dynamically controls the fidelity and duration of sensing with respect to changes in the patient’s activity and health state, thus fine-tuning the trade-off between energy efficiency and measurement accuracy. The cornerstone of our proposed solution is an intelligent low-latency real-time controller, implemented at the edge layer, that detects abnormalities in the patient’s condition and accordingly adjusts the sensing parameters of a reconfigurable wireless sensor node. We assess the efficiency of our proposed system via a case study of a photoplethysmography-based medical early warning score system. Our experiments on a real, full hardware-software early warning score system reveal up to 49% power savings while maintaining the accuracy of the sensory data.
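An illustrative policy of the kind described (hypothetical thresholds, not the authors' controller): the edge controller maps the patient's early warning score and activity to a sensing interval, trading energy for measurement density:

```python
def sensing_interval(ews, active):
    """Map the patient's early warning score (EWS) and activity level to a
    sensing interval in minutes: higher risk or motion means denser sensing,
    while a stable resting patient is sampled sparsely to save energy."""
    if ews >= 5:            # illustrative clinical-alert threshold
        return 1            # sample every minute
    if ews >= 2 or active:  # mild abnormality, or patient in motion
        return 5
    return 30               # stable and at rest

print(sensing_interval(ews=6, active=False))  # → 1
print(sensing_interval(ews=0, active=False))  # → 30
```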
A. Anzanpour, Delaram Amiri, I. Azimi, M. Levorato, N. Dutt, P. Liljeberg, A. Rahmani. "Edge-Assisted Control for Healthcare Internet of Things." ACM Transactions on Internet of Things, pp. 1–21, published 2020-10-19. DOI: https://doi.org/10.1145/3407091
Yuri Murillo, A. Chiumento, B. Reynders, S. Pollin
The Internet of Things (IoT) paradigm combines the interconnection of massive numbers of battery-constrained and low-computational-power devices with low-latency and high-reliability network requirements. Additionally, diverse end-to-end services and applications with different Quality of Service (QoS) requirements are expected to coexist in the same network infrastructure. Software-Defined Networking (SDN) is a paradigm designed to solve these problems, but its implementation in wireless networks, and especially in resource-constrained IoT systems, is extremely challenging and has seen very limited adoption, since it requires isolation of the data and control plane information flows and a reliable and scalable control plane. In this work, Bluetooth Low Energy (BLE) mesh, a technology that has become a de-facto standard for the IoT, is introduced as a suitable basis for an all-wireless SDN implementation. The proposed SDN-BLE framework uses a routing network slice for the data plane information flow and a flooding network slice for the control plane information flow, ensuring their isolation while both are still transmitted over the wireless medium. The design and implementation of all the classical SDN layers on a hybrid BLE mesh testbed is given, where the data plane is formed by the BLE nodes and the control plane can be centralized on a server or distributed over several WiFi gateways. Several controllers are described and implemented, allowing the framework to obtain end-to-end network knowledge, manage individual nodes over the air, and configure their behavior to meet application requirements. An experimental characterization of the SDN-BLE framework is given, studying the impact of the different system parameters on network reliability, overhead, and energy consumption. Additionally, the distributed and centralized control plane operation modes are experimentally characterized, and it is shown that the distributed approach can match the performance of the centralized one when the system is carefully designed. Finally, a proof of concept is presented in which network congestion is automatically detected and the nodes responsible for it are identified and reconfigured over the air, bypassing the congested links to restore regular network performance.
Yuri Murillo, A. Chiumento, B. Reynders, S. Pollin. "An All-wireless SDN Framework for BLE Mesh." ACM Transactions on Internet of Things, pp. 1–30, published 2020-08-04. DOI: https://doi.org/10.1145/3403581
Nowadays, most Internet of Things devices in smart homes rely on radio frequency channels for communication, leaving them exposed to various attacks such as spoofing and eavesdropping. Existing methods based on encryption keys may be inapplicable on these resource-constrained devices, which cannot afford computationally expensive encryption operations. Thus, in this article, we design a key-free communication method for such devices in a smart home. In particular, we introduce the Home-limited Channel (HLC), which can be accessed only within a house yet is inaccessible to outside-house attackers. Utilizing HLCs, we propose HlcAuth, a challenge-response mechanism to authenticate the communications between smart devices without keys. The advantages of HlcAuth are that it is low-cost, lightweight, and key-free, and requires no human intervention. According to our security analysis, HlcAuth can defeat replay attacks, message-forgery attacks, and man-in-the-middle (MitM) attacks, among others. We further evaluate HlcAuth in four different physical scenarios; the results show that HlcAuth achieves a 100% true positive rate (TPR) within 4.2 m for in-house devices and a 0% false positive rate (FPR) for outside attackers, guaranteeing both high usability and security for in-house communications. Finally, we implement HlcAuth in both single-room and multi-room scenarios.
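A minimal rendering of the key-free challenge-response idea (our sketch, not the HlcAuth protocol itself): the security rests on the assumption that only in-house devices can receive the nonce sent over the home-limited channel:

```python
import hashlib
import os

class Gateway:
    """Verifier side: issues a fresh random nonce over the home-limited
    channel (HLC) each round, which defeats replay of earlier responses."""
    def challenge(self):
        self.nonce = os.urandom(16)
        return self.nonce  # transmitted only over the HLC

    def verify(self, response):
        return response == hashlib.sha256(self.nonce).digest()

def device_respond(nonce):
    # No pre-shared key: being able to receive the nonce at all is the
    # proof of in-house presence, since the HLC does not reach outdoors.
    return hashlib.sha256(nonce).digest()

gw = Gateway()
print(gw.verify(device_respond(gw.challenge())))  # in-house device: True
print(gw.verify(b"replayed or forged response"))  # outsider guess: False
```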
Xiaoyu Ji, Chaohao Li, Xinyan Zhou, Juchuan Zhang, Yanmiao Zhang, Wenyuan Xu. "Authenticating Smart Home Devices via Home Limited Channels." ACM Transactions on Internet of Things, pp. 1–24, published 2020-08-04. DOI: https://doi.org/10.1145/3399432
F. Righetti, C. Vallati, Sajal K. Das, G. Anastasi
The IETF is currently defining the 6TiSCH architecture for the Industrial Internet of Things to ensure reliable and timely communication. 6TiSCH relies on the IEEE 802.15.4 TSCH MAC protocol and defines different approaches for managing TSCH cells, including a distributed (neighbor-to-neighbor) scheduling scheme in which cells are allocated cooperatively by nodes. Each node leverages a Scheduling Function (SF) to compute the required number of cells and the 6top Protocol (6P) to negotiate them with its neighbors. Currently, the Minimal Scheduling Function (MSF) is under consideration for standardization; however, multiple SFs are expected to coexist in real deployments to accommodate the requirements of different use cases. In this article, we carry out a comprehensive analysis of 6TiSCH distributed scheduling to assess its performance under realistic conditions. First, we derive an analytical model of the 6P protocol and show that 6P transactions can take a long time to complete and may also fail. Then, we evaluate the performance of MSF and other distributed SFs through simulations and real experiments. The results show that their performance is affected by the failure of 6P transactions and by the instability of the routing protocol, which may lead to congestion from which the network is unable to recover. Finally, we propose a new SF (E-OTF) and show, through simulations and real experiments, that it effectively improves overall performance by allowing nodes to recover quickly from congestion.
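An MSF-style scheduling function boils down to an adaptation rule: a node tracks what fraction of its negotiated cells it actually used over a window and triggers a 6P ADD or DELETE transaction when usage crosses a high or low threshold. The sketch below assumes the commonly used 75%/25% thresholds; the function name and return convention are illustrative, not taken from the specification.

```python
# Assumed high/low usage thresholds (fractions of scheduled cells used).
MAX_THRESH = 0.75   # usage above this -> request one more cell (6P ADD)
MIN_THRESH = 0.25   # usage below this -> release one cell (6P DELETE)


def msf_decision(num_cells, cells_used):
    """Return +1 (issue 6P ADD), -1 (issue 6P DELETE), or 0 (do nothing).

    num_cells: cells currently negotiated with the neighbor.
    cells_used: how many of those cells carried traffic in the last window.
    """
    if num_cells == 0:
        return +1                      # always keep at least one negotiated cell
    usage = cells_used / num_cells
    if usage > MAX_THRESH:
        return +1                      # schedule is nearly saturated
    if usage < MIN_THRESH and num_cells > 1:
        return -1                      # over-provisioned; free a cell
    return 0                           # usage within the comfort band
```

Each nonzero decision would start a 6P transaction with the neighbor; as the article's analysis shows, those transactions can be slow or fail outright, which is exactly where this simple rule's assumptions break down.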
F. Righetti, C. Vallati, Sajal K. Das, G. Anastasi. "An Evaluation of the 6TiSCH Distributed Resource Management Mode." ACM Transactions on Internet of Things, pp. 1-31, published 2020-07-10. DOI: 10.1145/3395927.
The continuous monitoring of crop growth is crucial for site-specific and sustainable farm management in precision agriculture. With precise in situ information, agricultural practices such as irrigation, fertilization, and plant protection can be dynamically adapted to the changing needs of individual sites, supporting yield increases and resource optimization. IoT technology with networked sensors deployed in greenhouses and farmland already contributes such in situ information. In addition to existing soil sensors for moisture or nutrient monitoring, there are also (mainly optical) sensors that assess the growth and vital condition of crops. This article presents a novel and complementary approach to low-cost crop sensing based on temporal variations in the signal strength of low-power IoT radio communication. To this end, the relationship between crop growth, represented by the leaf area index (LAI), and the attenuation of the signal propagation of low-cost radio transceivers is investigated. Real-world experiments in wheat fields show a significant correlation between LAI and received signal strength indicator (RSSI) time series. Moreover, influencing meteorological factors are identified and their effects are analyzed. Including these factors, a multiple linear model is developed that enables a promising RSSI-based LAI estimation.
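The kind of multiple linear model described above can be sketched as ordinary least squares with RSSI plus meteorological covariates as predictors of LAI. The synthetic data, the choice of temperature and humidity as the meteorological factors, and all coefficient values below are assumptions for illustration; the article's actual predictors and fitted values are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Illustrative measurements: link RSSI (dBm) plus two assumed
# meteorological covariates.
rssi = rng.uniform(-90, -40, n)   # dBm
temp = rng.uniform(5, 30, n)      # deg C
hum = rng.uniform(30, 95, n)      # % relative humidity

# Synthetic ground truth: growing canopy attenuates the link, so a more
# negative RSSI corresponds to a higher LAI (plus small measurement noise).
lai = (6.0 - 0.08 * (rssi + 90) + 0.01 * temp - 0.005 * hum
       + rng.normal(0.0, 0.2, n))

# Fit LAI ~ b0 + b1*RSSI + b2*temp + b3*hum by ordinary least squares.
X = np.column_stack([np.ones(n), rssi, temp, hum])
coef, *_ = np.linalg.lstsq(X, lai, rcond=None)

# Coefficient of determination of the fit.
pred = X @ coef
r2 = 1.0 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
```

On this synthetic data the fitted RSSI coefficient comes out negative, matching the physical expectation that denser foliage attenuates the signal; with real field measurements the meteorological terms are what correct for weather-driven RSSI fluctuations that have nothing to do with crop growth.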
Jan Bauer, N. Aschenbruck. "Towards a Low-cost RSSI-based Crop Monitoring." ACM Transactions on Internet of Things, pp. 1-26, published 2020-06-19. DOI: 10.1145/3393667.