Tiago C. S. Xavier, Flávia Coimbra Delicato, Paulo F. Pires, Cláudio L. Amorim, Wei Li, Albert Y. Zomaya
In the Internet of Things (IoT) environment, the computing resources available in the cloud are often unable to meet the latency constraints of time-critical applications due to the large distance between the cloud and the data sources (IoT devices). Adopting edge computing can help the cloud deliver services that meet time-critical application requirements. However, it is challenging to meet IoT application demands while using resources smartly to reduce energy consumption at the edge of the network. In this context, we propose a fully distributed resource allocation algorithm for the IoT-edge-cloud environment, which (i) increases infrastructure resource usage by promoting collaboration between edge nodes, (ii) supports the heterogeneity and generic requirements of applications, and (iii) reduces application latency and increases the energy efficiency of the edge. We compare our algorithm with a non-collaborative vertical offloading approach and with a horizontal approach based on edge collaboration. Simulation results show that the proposed algorithm reduces the end-to-end latency of IoT application requests by 49.95%, increases edge node utilization by 95.35%, and improves energy efficiency, in terms of edge node power consumption, by 92.63% compared with the best performances of the vertical and collaborative approaches.
Published as: "Managing Heterogeneous and Time-Sensitive IoT Applications through Collaborative and Energy-Aware Resource Allocation," ACM Transactions on Internet of Things, pp. 1-28, 15 February 2022. DOI: 10.1145/3488248.
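The collaborative offloading policy the abstract describes can be illustrated with a small sketch: serve a request at the local edge node when capacity allows, offload horizontally to a neighboring edge node otherwise, and fall back vertically to the distant cloud as a last resort. The function, field names, and numbers below are invented for illustration and are not the authors' actual algorithm.

```python
def allocate(demand, local, neighbors, cloud_latency):
    """Place one IoT request; return (placement, latency in ms).

    `local` and each neighbor are dicts with 'free' CPU units and the
    network 'latency' (ms) to reach them from the data source.
    """
    if local["free"] >= demand:
        local["free"] -= demand
        return "local", local["latency"]
    # Horizontal collaboration: pick the least-loaded neighbor that fits.
    candidates = [n for n in neighbors if n["free"] >= demand]
    if candidates:
        best = max(candidates, key=lambda n: n["free"])
        best["free"] -= demand
        return best["name"], best["latency"]
    # Vertical offloading to the distant cloud.
    return "cloud", cloud_latency

local = {"free": 1, "latency": 2}
neighbors = [{"name": "edge-B", "free": 4, "latency": 5},
             {"name": "edge-C", "free": 2, "latency": 4}]
print(allocate(3, local, neighbors, cloud_latency=50))  # ('edge-B', 5)
```

Preferring a nearby collaborating edge node over the cloud is what keeps latency low while spreading load, which is the intuition behind the reported latency and utilization gains.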
In many real-world scenarios, machine learning models fall short in prediction performance because data characteristics change between training on a source domain and testing on a target domain. There has been extensive research addressing this problem with Domain Adaptation (DA) for learning domain-invariant features. However, for time series, those methods remain limited to hard parameter sharing (HPS) between source and target models and to the use of a domain adaptation objective function. To address these challenges, we propose a soft parameter sharing (SPS) DA architecture with representation learning that models the relation between the parameters of the source and target models as non-linear and uses the squared Maximum Mean Discrepancy (MMD) as the adaptation loss function. The proposed architecture advances the state of the art for time series in the context of activity recognition, and in fields with other modalities, where SPS has been limited to a linear relation. An additional contribution of our work is a study that demonstrates the strengths and limitations of HPS versus SPS. Experimental results show the success of the method in three domain adaptation cases of multivariate time series activity recognition with different users and sensors.
A. Hussein, Hazem Hajj. Published as: "Domain Adaptation with Representation Learning and Nonlinear Relation for Time Series," ACM Transactions on Internet of Things, pp. 1-26, 15 February 2022. DOI: 10.1145/3502905.
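The squared MMD adaptation loss mentioned above can be made concrete with a small numerical sketch: a (biased) estimator of squared MMD under a Gaussian kernel between source and target feature batches. The kernel bandwidth and the synthetic data are invented for illustration.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased V-statistic estimate of squared Maximum Mean Discrepancy.
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 3))        # source features
tgt_same = rng.normal(0.0, 1.0, size=(200, 3))   # same distribution
tgt_shift = rng.normal(2.0, 1.0, size=(200, 3))  # shifted distribution
# A shifted target distribution yields a clearly larger discrepancy.
print(mmd2(src, tgt_same) < mmd2(src, tgt_shift))  # True
```

Minimizing this quantity over learned representations pushes source and target feature distributions together, which is the role the adaptation loss plays in the architecture.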
Mikhail Fomichev, L. F. Abanto-Leon, Maximilian Stiegler, Alejandro Molina, Jakob Link, M. Hollick
Context-based copresence detection schemes are a necessary prerequisite for building secure and usable authentication systems in the Internet of Things (IoT). Such schemes allow one device to verify proximity of another device without user assistance by utilizing their physical context (e.g., audio). State-of-the-art copresence detection schemes suffer from two major limitations: (1) they cannot accurately detect copresence in low-entropy contexts (e.g., an empty room with few events occurring) and insufficiently separated environments (e.g., adjacent rooms); (2) they require devices to have common sensors (e.g., microphones) to capture context, making them impractical on devices with heterogeneous sensors. We address these limitations, proposing Next2You, a novel copresence detection scheme utilizing channel state information (CSI). In particular, we leverage magnitude and phase values from a range of subcarriers specifying a Wi-Fi channel to capture a robust wireless context created when devices communicate. We implement Next2You on off-the-shelf smartphones relying only on ubiquitous Wi-Fi chipsets and evaluate it on over 95 hours of CSI measurements that we collect in five real-world scenarios. Next2You achieves error rates below 4%, maintaining accurate copresence detection both in low-entropy contexts and insufficiently separated environments. We also demonstrate the capability of Next2You to work reliably in real time and its robustness to various attacks.
Published as: "Next2You: Robust Copresence Detection Based on Channel State Information," ACM Transactions on Internet of Things, pp. 1-31, 9 November 2021. DOI: 10.1145/3491244.
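The core feature idea, per-subcarrier magnitude and phase of complex CSI as a shared wireless context, can be sketched as follows. The feature construction, the correlation-based comparison, and the threshold are all invented for illustration; the paper's actual pipeline differs.

```python
import numpy as np

def csi_features(csi):
    """csi: complex array of per-subcarrier channel estimates."""
    magnitude = np.abs(csi)
    phase = np.unwrap(np.angle(csi))  # remove 2*pi jumps across subcarriers
    return np.concatenate([magnitude, phase])

def copresent(csi_a, csi_b, threshold=0.9):
    # Two devices sharing the same wireless context see highly correlated
    # magnitude/phase profiles; remote devices do not.
    fa, fb = csi_features(csi_a), csi_features(csi_b)
    similarity = np.corrcoef(fa, fb)[0, 1]
    return similarity >= threshold

subcarriers = np.linspace(0, 4 * np.pi, 64)
csi_room = np.exp(1j * subcarriers) * (1.0 + 0.1 * np.sin(subcarriers))
noise = 0.01 * np.random.default_rng(1).normal(size=64)
csi_nearby = csi_room * (1 + noise)              # same room, slight noise
csi_far = np.exp(1j * subcarriers[::-1]) * 2.0   # a different channel
print(copresent(csi_room, csi_nearby), copresent(csi_room, csi_far))
```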
Chang-Yang Lin, Hamzeh Khazaei, Andrew Walenstein, A. Malton
Embedded sensors and smart devices have turned the environments around us into smart spaces that can automatically evolve, depending on the needs of users, and adapt to new conditions. While smart spaces are beneficial and desirable in many respects, they can be compromised, exposing privacy and security risks, or rendering the whole environment a hostile space in which regular tasks can no longer be accomplished. In fact, ensuring the security of smart spaces is very challenging due to the heterogeneity of devices, the vast attack surface, and device resource limitations. The key objective of this study is to minimize the manual work of enforcing the security of smart spaces by leveraging the autonomic computing paradigm in the management of IoT environments. More specifically, we strive to build an autonomic manager that can monitor the smart space continuously, analyze the context, plan and execute countermeasures to maintain the desired level of security, and reduce the liability and risk of security breaches. We follow the microservice architecture pattern and propose a generic ontology named Secure Smart Space Ontology (SSSO) for describing dynamic contextual information in security-enhanced smart spaces. Based on SSSO, we build a four-layer autonomic security manager that continuously monitors the managed spaces, analyzes contextual information and events, and automatically plans and implements adaptive security policies. As the evaluation, focusing on a current BlackBerry customer problem, we deployed the proposed autonomic security manager to maintain the security of a smart conference room with 32 devices and 66 services. The high performance of the proposed solution was also evaluated on a large-scale deployment with over 1.8 million triples.
Published as: "Autonomic Security Management for IoT Smart Spaces," ACM Transactions on Internet of Things, pp. 1-20, 16 August 2021. DOI: 10.1145/3466696.
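The monitor-analyze-plan-execute control loop of an autonomic manager can be sketched in a few lines. Event types, the "quarantine" countermeasure, and the function names below are made up for the example; the paper's manager reasons over the SSSO ontology rather than plain dicts.

```python
def analyze(events):
    # Analyze monitored events and return devices that pose a threat.
    return [e["device"] for e in events if e["type"] == "unauthorized_access"]

def plan(threats):
    # Plan one countermeasure per detected threat.
    return [{"action": "quarantine", "device": d} for d in threats]

def autonomic_cycle(events, execute):
    """One pass of the loop: monitored events in, countermeasures executed."""
    actions = plan(analyze(events))
    for action in actions:
        execute(action)
    return actions

log = []
events = [{"device": "camera-1", "type": "heartbeat"},
          {"device": "lock-2", "type": "unauthorized_access"}]
autonomic_cycle(events, log.append)
print(log)  # [{'action': 'quarantine', 'device': 'lock-2'}]
```

Running this cycle continuously, with richer analysis and policy-driven planning, is what removes the manual work from maintaining the desired security level.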
Emotional cognitive ability is a key technical indicator of the friendliness of interaction. This research therefore aims to explore robots with human-like emotional cognition. After discussing the prospects of 5G technology and cognitive robots, the study focuses on cognitive robots. Because analysis logic similar to that of humans is difficult to imitate, this study divides robot information processing into three levels, mirroring human information processing: cognitive algorithms, feature extraction, and information collection. In addition, a multi-scale rectangular histogram of oriented gradients and a robust principal component analysis algorithm are used for facial expression recognition. For pictures in which humans intuitively perceive smiles within sad emotions, the method in this study yields the following emotion proportions: calmness 0%, sadness 15.78%, fear 0%, happiness 76.53%, disgust 7.69%, anger 0%, and astonishment 0%. For micro-expressions in which humans intuitively perceive negative emotions such as surprise and fear, the method yields: calmness 32.34%, sadness 34.07%, fear 6.79%, happiness 0%, disgust 0%, anger 13.91%, and astonishment 15.89%. The algorithm explored in this study can therefore recognize emotions accurately. These results show that the method intuitively reflects the proportions of human expressions, and that the recognition methods based on facial expressions and micro-expressions achieve good recognition performance, in line with human intuitive experience.
Zhihan Lv, Liang Qiao, Qingjun Wang. Published as: "Cognitive Robotics on 5G Networks," ACM Transactions on Internet of Things, pp. 1-18, 16 July 2021. DOI: 10.1145/3414842.
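The gradient-orientation histogram underlying the feature extraction can be shown in a toy form: bin the gradient orientations of an image patch, weighted by gradient magnitude. The real method adds multi-scale rectangular cells, block normalization, and the robust PCA stage, none of which are shown here.

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    # Per-pixel gradients, then magnitude-weighted orientation histogram.
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(angle, bins=bins, range=(0, np.pi),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total else hist

# A patch whose intensity changes only horizontally has gradients along x,
# so nearly all mass falls into the first (0-radian) orientation bin.
patch = np.tile(np.arange(16.0), (16, 1))
hist = orientation_histogram(patch)
print(hist.argmax())  # 0
```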
Peining Zhen, Hai-Bao Chen, Yuan Cheng, Zhigang Ji, Bin Liu, Hao Yu
Mobile devices usually suffer from limited computation and storage resources, which seriously hinders them from running deep neural network applications. In this article, we introduce a deeply tensor-compressed long short-term memory (LSTM) neural network for fast video-based facial expression recognition on mobile devices. First, a spatio-temporal facial expression recognition LSTM model is built by extracting time-series feature maps from facial clips. The LSTM-based spatio-temporal model is then deeply compressed by means of quantization and tensorization for mobile device implementation. On the Extended Cohn-Kanade (CK+), MMI, and Acted Facial Expressions in the Wild 7.0 datasets, experimental results show that the proposed method achieves 97.96%, 97.33%, and 55.60% classification accuracy, respectively, while compressing the network model by up to 221× and reducing training time per epoch by 60%. Our work is further implemented on the RK3399Pro mobile device with a Neural Process Engine. With the leveraged compression methods, the on-board latency of the feature extractor and the LSTM predictor can be reduced by 30.20× and 6.62×, respectively. Furthermore, the spatio-temporal model costs only 57.19 MB of DRAM and 5.67 W of power when running on the board.
Published as: "Fast Video Facial Expression Recognition by a Deeply Tensor-Compressed LSTM Neural Network for Mobile Devices," ACM Transactions on Internet of Things, pp. 1-26, 15 July 2021. DOI: 10.1145/3464941.
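The quantization step alone is easy to illustrate: symmetric linear quantization of float32 weights to int8 immediately gives a 4× raw-size saving with bounded error. The 221× figure in the paper also relies on tensor decomposition, which this sketch does not attempt.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric linear quantization: map [-max|w|, max|w|] onto [-127, 127].
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
ratio = w.nbytes / q.nbytes                     # float32 -> int8
error = np.abs(dequantize(q, scale) - w).max()  # bounded by scale / 2
print(ratio, error < scale)  # 4.0 True
```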
In this article, we first investigate the quality of aerial air pollution measurements and characterize the main error sources of drone-mounted gas sensors. To that end, we build ASTRO+, an aerial-ground pollution monitoring platform, and use it to collect a comprehensive dataset of both aerial and reference air pollution measurements. We show that the dynamic airflow caused by drones affects the temperature and humidity levels of the ambient air, which in turn affect the measurement quality of gas sensors. In the second part of this article, we then leverage the effects of weather conditions on measurement quality to design an unmanned aerial vehicle mission planning algorithm that adapts the trajectory of the drones while taking into account the quality of aerial measurements. We evaluate our mission planning approach on a Volatile Organic Compound pollution dataset and show a substantial performance improvement that is maintained even when pollution dynamics are high.
Ahmed Boubrima, E. Knightly. Published as: "Robust Environmental Sensing Using UAVs," ACM Transactions on Internet of Things, pp. 1-20, 15 July 2021. DOI: 10.1145/3464943.
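Quality-aware mission planning can be sketched as a greedy selection: each candidate waypoint carries an expected measurement-quality weight and a flight-time cost, and the planner picks waypoints by quality-to-cost ratio under a budget. The weights, costs, and greedy rule below are invented for the example and stand in for the paper's trajectory adaptation.

```python
def plan_mission(waypoints, budget):
    """waypoints: list of (name, quality, cost); greedy knapsack-style pick."""
    ranked = sorted(waypoints, key=lambda w: w[1] / w[2], reverse=True)
    chosen, spent = [], 0
    for name, quality, cost in ranked:
        if spent + cost <= budget:  # take it only if the budget allows
            chosen.append(name)
            spent += cost
    return chosen

# (name, expected measurement quality, flight-time cost)
waypoints = [("A", 0.9, 3), ("B", 0.4, 1), ("C", 0.8, 4), ("D", 0.2, 2)]
print(plan_mission(waypoints, budget=6))  # ['B', 'A', 'D']
```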
Riccardo Petrolo, Zhambyl Shaikhanov, Yingyan Lin, E. Knightly
We present the design, implementation, and experimental evaluation of ASTRO, a modular end-to-end system for distributed sensing missions with autonomous networked drones. We introduce the fundamental system architecture features that enable agnostic sensing missions on top of the ASTRO drones. We demonstrate the key principles of ASTRO by using on-board software-defined radios to find and track a mobile radio target. We show how simple distributed on-board machine learning methods can be used to find and track a mobile target, even if all drones lose contact with ground control. We also show that ASTRO is able to find the target even if it is hiding under a three-ton concrete slab, representing a highly irregular propagation environment. Our findings reveal that, despite no prior training and noisy sensory measurements, ASTRO drones are able to learn the propagation environment on the scale of seconds and localize a target with a mean accuracy of 8 m. Moreover, ASTRO drones are able to track the target with a relatively constant error over time, even as it moves at a speed close to the maximum drone speed.
Published as: "ASTRO," ACM Transactions on Internet of Things, pp. 1-22, 15 July 2021. DOI: 10.1145/3464942.
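A simplified stand-in for the distributed localization idea: each drone reports its position and a received signal strength for the target, and a weighted centroid of those readings estimates the target location. The dBm-to-linear weighting and the scenario are illustrative only; ASTRO's on-board learning is considerably richer.

```python
import numpy as np

def weighted_centroid(positions, rssi_dbm):
    # Convert dBm readings to linear power so strong readings dominate,
    # then average drone positions with those weights.
    weights = 10 ** (np.asarray(rssi_dbm) / 10.0)
    weights /= weights.sum()
    return weights @ np.asarray(positions)

positions = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-40.0, -60.0, -60.0]  # target closest to the first drone
estimate = weighted_centroid(positions, rssi)
print(estimate)  # near the first drone, pulled slightly toward the others
```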
Morshed U. Chowdhury, B. Ray, Sujan Chowdhury, S. Rajasegarar
Due to widespread functional benefits, such as supporting internet connectivity, high visibility, and easy connectivity between sensors, the Internet of Things (IoT) has become popular and is used in many applications, such as smart city, smart health, smart home, and smart vehicle realizations. These IoT-based systems contribute to both daily life and business, including sensitive and emergency situations. In general, the devices or sensors used in the IoT have very limited computational power, storage capacity, and communication capabilities, but they help to collect large amounts of data and maintain communication with the other devices in the network. Since most IoT devices have no physical security and are often open to everyone via radio communication and the internet, they are highly vulnerable to existing and emerging security attacks. Further, IoT devices are usually integrated with corporate networks; in this case, the impact of attacks is much more significant than when operating in isolation. Due to the constraints of IoT devices and the nature of their operation, existing security mechanisms are less effective against attacks specific to IoT-based systems. This article presents a new insider attack, named the loophole attack, that exploits vulnerabilities in RPL, the IPv6 Routing Protocol for Low-Power and Lossy Networks that is widely used in IoT-based systems. To protect the IoT system from this insider attack, a machine learning based security mechanism is presented.
{"title":"A Novel Insider Attack and Machine Learning Based Detection for the Internet of Things","authors":"Morshed U. Chowdhury, B. Ray, Sujan Chowdhury, S. Rajasegarar","doi":"10.1145/3466721","DOIUrl":"https://doi.org/10.1145/3466721","url":null,"abstract":"Due to the widespread functional benefits, such as supporting internet connectivity, having high visibility and enabling easy connectivity between sensors, the Internet of Things (IoT) has become popular and used in many applications, such as for smart city, smart health, smart home, and smart vehicle realizations. These IoT-based systems contribute to both daily life and business, including sensitive and emergency situations. In general, the devices or sensors used in the IoT have very limited computational power, storage capacity, and communication capabilities, but they help to collect a large amount of data as well as maintain communication with the other devices in the network. Since most of the IoT devices have no physical security, and often are open to everyone via radio communication and via the internet, they are highly vulnerable to existing and emerging novel security attacks. Further, the IoT devices are usually integrated with the corporate networks; in this case, the impact of attacks will be much more significant than operating in isolation. Due to the constraints of the IoT devices, and the nature of their operation, existing security mechanisms are less effective for countering the attacks that are specific to the IoT-based systems. This article presents a new insider attack, named loophole attack, that exploits the vulnerabilities present in a widely used IPv6 routing protocol in IoT-based systems, called RPL (Routing over Low Power and Lossy Networks). To protect the IoT system from this insider attack, a machine learning based security mechanism is presented. 
The proposed attack has been implemented using a Contiki IoT operating system that runs on the Cooja simulator, and the impacts of the attack are analyzed. Evaluation on the collected network traffic data demonstrates that the machine learning based approaches, along with the proposed features, help to accurately detect the insider attack from the network traffic data.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":"34 1","pages":"1 - 23"},"PeriodicalIF":2.7,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89893050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
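The abstract above describes detecting the insider attack by training a classifier on features extracted from network traffic. The paper's actual feature set and models are not given here, so the following is only an illustrative sketch: a tiny nearest-neighbor classifier over hypothetical per-node RPL traffic features (DIO message rate, parent-switch frequency, average hop count), where an attacking node manipulating routing state would stand out as anomalously chatty and unstable.

```python
import math

# Hypothetical per-node traffic features (illustrative only, not the
# paper's feature set): [DIO messages/min, parent switches/hour, avg hops].
TRAIN = [
    ([4.0, 0.5, 3.0], "benign"),
    ([5.0, 1.0, 3.5], "benign"),
    ([3.5, 0.0, 2.5], "benign"),
    ([20.0, 8.0, 6.0], "attack"),   # loophole-style routing disruption
    ([18.0, 6.5, 5.5], "attack"),
    ([25.0, 9.0, 7.0], "attack"),
]

def classify(sample, k=3):
    """k-nearest-neighbor majority vote using Euclidean distance."""
    dists = sorted((math.dist(sample, feats), label) for feats, label in TRAIN)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

print(classify([4.2, 0.4, 3.1]))   # a normal-looking node -> "benign"
print(classify([22.0, 7.0, 6.2]))  # anomalously chatty node -> "attack"
```

In practice one would replace the toy k-NN with the trained models and features the article evaluates; the point of the sketch is only the pipeline shape: per-node feature vectors in, benign/attack label out.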
Sauptik Dhar, Junyao Guo, Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah
The predominant paradigm for using machine learning models on a device is to train a model in the cloud and perform inference using the trained model on the device. However, with increasing numbers of smart devices and improved hardware, there is interest in performing model training on the device. Given this surge in interest, a comprehensive survey of the field from a device-agnostic perspective sets the stage both for understanding the state of the art and for identifying open challenges and future avenues of research. However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, one/few-shot learning, etc.). Hence, covering such a large number of topics in a single survey is impractical. This survey finds a middle ground by reformulating the problem of on-device learning as resource-constrained learning, where the resources are compute and memory. This reformulation allows tools, techniques, and algorithms from a wide variety of research areas to be compared equitably. In addition to summarizing the state of the art, the survey also identifies a number of challenges and next steps for both the algorithmic and theoretical aspects of on-device learning.
{"title":"A Survey of On-Device Machine Learning","authors":"Sauptik Dhar, Junyao Guo, Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah","doi":"10.1145/3450494","DOIUrl":"https://doi.org/10.1145/3450494","url":null,"abstract":"The predominant paradigm for using machine learning models on a device is to train a model in the cloud and perform inference using the trained model on the device. However, with increasing numbers of smart devices and improved hardware, there is interest in performing model training on the device. Given this surge in interest, a comprehensive survey of the field from a device-agnostic perspective sets the stage for both understanding the state of the art and for identifying open challenges and future avenues of research. However, on-device learning is an expansive field with connections to a large number of related topics in AI and machine learning (including online learning, model adaptation, one/few-shot learning, etc.). Hence, covering such a large number of topics in a single survey is impractical. This survey finds a middle ground by reformulating the problem of on-device learning as resource constrained learning where the resources are compute and memory. This reformulation allows tools, techniques, and algorithms from a wide variety of research areas to be compared equitably. 
In addition to summarizing the state of the art, the survey also identifies a number of challenges and next steps for both the algorithmic and theoretical aspects of on-device learning.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":"4 1","pages":"1 - 49"},"PeriodicalIF":2.7,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85209834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
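The reformulation of on-device learning as resource-constrained learning can be made concrete with a back-of-envelope memory check: training needs room not just for the weights but also for gradients, optimizer state, and activations, which is why a model that runs inference comfortably on a device may not be trainable there. The numbers and the simple counting rule below are illustrative assumptions, not figures from the survey.

```python
# Rough memory footprint of training vs. a device budget (illustrative).

def training_memory_bytes(n_params, bytes_per_value=4,
                          optimizer_slots=2, activation_bytes=0):
    """Weights + gradients + optimizer state (+ activations).

    optimizer_slots=2 models Adam-style first/second moment buffers;
    plain SGD without momentum would use 0.
    """
    copies = 1 + 1 + optimizer_slots  # weights, gradients, optimizer state
    return n_params * bytes_per_value * copies + activation_bytes

budget = 512 * 1024 * 1024            # e.g., 512 MiB of free RAM on the device
need = training_memory_bytes(
    5_000_000,                        # a 5M-parameter model, float32
    optimizer_slots=2,                # Adam-style optimizer
    activation_bytes=64 * 1024 * 1024,
)
print(f"need {need / 2**20:.0f} MiB, fits: {need <= budget}")
```

The same model needs only one copy of the weights for inference (20 MB here), but roughly four copies plus activations to train with an Adam-style optimizer, which is the kind of compute/memory accounting the resource-constrained framing enables across otherwise dissimilar techniques.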