An Efficient Strategy with High Availability for Dynamic Provisioning of Access Points in Large-Scale Wireless Networks
Matheus B. de A. Rodrigues, Ana Carolina R. Mendes, Marcos Paulo C. de Mendonça, G. R. Carrara, Luiz Claudio S. Magalhães, C. Albuquerque, Dianne S. V. Medeiros, D. M. F. Mattos
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766688
The dynamic association of users with wireless access points, combined with the requirement for maximum network coverage, makes it challenging to provide energy efficiency alongside network availability in large-scale wireless networks. This paper proposes an access-point provisioning strategy based on a multi-objective optimization heuristic. The heuristic aims to maximize coverage, ensure high network availability, and minimize the number of active access points, thereby improving energy efficiency. We evaluate our proposal by simulating a connected component of the Universidade Federal Fluminense (UFF, Brazil) wireless network, comprising 363 access points on a university campus. The simulation considers actual traffic flows and user-association characteristics of the network. The results show that the best-performing strategy is a greedy heuristic that activates the access points with the largest number of inactive potential neighbors. Our proposal leaves only 2% of users unserved while activating only 23% of the access points, ensuring high availability and energy efficiency.
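The greedy strategy described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the neighbor graph, per-AP coverage sets, and the stopping rule (all users covered) are assumptions chosen to make the idea concrete.

```python
def greedy_activate(neighbors, coverage, users):
    """neighbors: AP -> set of neighboring APs
       coverage:  AP -> set of users the AP can serve
       users:     set of all users that need coverage"""
    active, covered = set(), set()
    while covered != users:
        inactive = [ap for ap in neighbors if ap not in active]
        # keep only candidates that would actually extend coverage
        useful = [ap for ap in inactive if coverage[ap] - covered]
        if not useful:
            break  # remaining users cannot be served
        # pick the AP with the most inactive potential neighbors
        best = max(useful, key=lambda ap: len(neighbors[ap] - active))
        active.add(best)
        covered |= coverage[best]
    return active, users - covered

# toy topology: AP "a" has two inactive neighbors, so it is activated first
active, unserved = greedy_activate(
    {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}},
    {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}},
    {1, 2, 3, 4})
```

In a real deployment the loop would stop once a target coverage fraction is reached, which is how activating only a subset of APs can still leave few users unserved.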
Cloud Computing Predictive Resource Management Framework Using Hidden Markov Model
A. Adel, Amr H. El Mougy
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766809
Volunteer and cloud computing are heterogeneous environments that aggregate the capabilities of their resources to solve large-scale, computationally intensive problems and to provide various services to users. Due to the dynamic nature of these environments, the performance states of resources change rapidly, making elasticity and task allocation very challenging. To implement a scalable elastic mechanism while utilizing resources efficiently and maintaining the overall balance of these systems, real-time performance data need to be collected periodically. However, data collection may significantly increase the communication overhead in the cloud and volunteer network and consume the limited processing power, energy, and bandwidth of resources. Accordingly, this paper proposes solutions for balancing the load while reducing the communication overhead. Reactive and proactive resource auto-scaling task-allocation algorithms are proposed; the proactive algorithm is based on a Hidden Markov Model (HMM). Performance evaluation using computer simulations shows that the proposed algorithm achieves high prediction accuracy, enhances overall system utilization, and significantly decreases the communication overhead.
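A minimal sketch of the HMM-style proactive idea, assuming a two-state load model: the transition probabilities, state names, and 0.5 decision threshold below are illustrative values, not taken from the paper, and a full HMM would also learn emission probabilities from observed metrics.

```python
STATES = ["Low", "High"]
# hypothetical transition probabilities P(next state | current state)
TRANS = {"Low":  {"Low": 0.8, "High": 0.2},
         "High": {"Low": 0.3, "High": 0.7}}

def predict_next(belief):
    """belief: state -> probability of the current hidden load state.
       Returns the one-step-ahead state distribution."""
    nxt = {s: 0.0 for s in STATES}
    for cur, p in belief.items():
        for s, t in TRANS[cur].items():
            nxt[s] += p * t
    return nxt

def scaling_decision(belief, threshold=0.5):
    """Scale out proactively when high load is the most likely next state."""
    return "scale-out" if predict_next(belief)["High"] >= threshold else "hold"
```

Predicting the next state from the current belief, rather than reacting after the load spike, is what lets the framework provision resources before demand arrives.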
Recent review of Distributed Denial of Service Attacks in the Internet of Things
Hubert Djuitcheu, Maik Debes, Matthias Aumüller, J. Seitz
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766655
Its use in almost all domains nowadays makes the Internet of Things (IoT) the network of the future. Owing to the attention it has attracted since its creation, this network is the target of numerous attacks of different purposes and natures, among which one of the most frequently perpetrated and virulent is the distributed denial-of-service (DDoS) attack. This article reviews the security requirements of IoT networks and some of the attacks against them. It then focuses on DDoS attacks on the IoT and summarizes countermeasures against this attack, from the oldest to the most recent. Based on this study, machine learning (ML) and deep learning (DL), combined with other technologies such as software-defined networking (SDN), appear to be very promising approaches against DDoS attacks.
Attack Graph-based Solution for Vulnerabilities Impact Assessment in Dynamic Environment
Antoine Boudermine, R. Khatoun, Jean-Henri Choyer
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766588
Nowadays, networks are exposed to a set of risks and threats that can cause damage and losses for companies. Network security must be assessed to measure the effectiveness of the protective measures that have been implemented. However, the impact of the dynamic behavior of these systems on the attacker's strategy is rarely considered. In this paper, we propose an attack-graph-based solution that considers the evolution of system properties such as network topology changes, vulnerability discovery and patching, as well as attack detection and the wiping of some system components. The topology of the attack graph evolves over time according to the evolution of the system state. To assess the security of the system, several simulations of attacker infiltration are performed by following the attack paths present in the graph. The proposed solution has been tested on a use case in which a user works remotely. By considering changes in the network topology, new attack paths can be identified.
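The core idea of a time-varying attack graph can be illustrated with a toy reachability check: the edge set (available exploits) changes per time step as hosts are patched or the topology changes, and the attacker advances at most one edge per step. The data structures and step granularity here are assumptions for illustration, not the paper's model.

```python
def reachable(snapshots, start, target):
    """snapshots: list of edge sets, one per time step, e.g. {("web", "db")}.
       Returns True if the attacker can reach `target` from `start` while
       moving along edges that exist at the time of the move."""
    frontier = {start}
    for edges in snapshots:
        nxt = set(frontier)  # the attacker may also stay put
        for u, v in edges:
            if u in frontier:
                nxt.add(v)
        frontier = nxt
        if target in frontier:
            return True
    return target in frontier
```

Note that the same edges in a different temporal order can change the outcome, which is exactly why a static attack graph can miss (or over-report) attack paths in a dynamic environment.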
Energy and Delay Aware Computation Offloading Scheme in MCC Environment
Farhan Sufyan, Mohd Sameen Chishti, Amit Banerjee
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766509
Computation offloading is a technique that utilizes cloud resources to maintain the QoS of computation-intensive applications executed on resource-constrained smart devices (SDs). Researchers have proposed various profiling-based offloading frameworks to minimize execution delay and extend the battery lifetime of SDs. Most of these offloading strategies rely on the availability of infinite cloud resources to spin up independent VMs for profiling the SDs, which may not be an efficient way to handle their increasing application demands. To address this, we investigate a generic mobile cloud computing (MCC) computation-offloading framework for handling the computational demands generated by a large number of SDs. The framework uses appropriate queuing models to simulate the traffic generated by the SDs and formulates a non-linear multi-objective optimization problem to minimize their energy consumption and execution delay. Finally, we propose a stochastic gradient descent (SGD) solution that jointly optimizes the offloading probability and the transmission power to find the optimal trade-off between the offloading objectives. Simulation results show the proposed system's effectiveness and efficiency for an increasing number of SDs.
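The joint optimization can be sketched as projected gradient descent on a toy energy-plus-delay cost over the offloading probability p ∈ [0, 1] and transmit power w. The cost function, power bounds, and step size below are made-up stand-ins for the paper's queuing-model objective.

```python
import math

def cost(p, w):
    """Toy weighted energy+delay cost (illustrative only)."""
    local = (1 - p) * 5.0               # cost of executing locally
    tx = p * (1.0 / math.log2(1 + w))   # offload delay shrinks with power
    energy = p * 0.5 * w                # transmission energy grows with power
    return local + tx + energy

def optimize(p=0.5, w=1.0, lr=0.05, steps=2000, eps=1e-4):
    """Projected gradient descent; numerical gradients keep it self-contained."""
    for _ in range(steps):
        gp = (cost(p + eps, w) - cost(p - eps, w)) / (2 * eps)
        gw = (cost(p, w + eps) - cost(p, w - eps)) / (2 * eps)
        p = min(1.0, max(0.0, p - lr * gp))   # project onto [0, 1]
        w = min(4.0, max(0.1, w - lr * gw))   # assumed power bounds
    return p, w
```

With these toy numbers, offloading is much cheaper than local execution, so p is driven toward 1 while w settles at the point where the delay reduction no longer pays for the extra transmission energy, which is the trade-off the SGD solution searches for.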
Simulating Distributed Wireless Sensor Networks for Edge-AI
Ambar Prajapati, Bonny Banerjee
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766520
This paper presents the simulation of distributed wireless sensor networks (WSNs) consisting of autonomous mobile nodes that communicate, with or without a central/root node, as desired for edge artificial intelligence (edge-AI). We harness the high-resolution, multidimensional sensing characteristics of the IEEE 802.15.4 standard and the Routing Protocol for Low-Power and Lossy Networks (RPL) to implement dynamic, asynchronous, event-driven, targeted communication in distributed WSNs. We use Contiki-NG/Cooja to simulate two WSNs, one with and one without a root node. The simulations are assessed on network Quality of Service (QoS) parameters such as throughput, network lifetime, power consumption, and packet delivery ratio. The outputs show that the sensor nodes at the edge successfully communicate with specific targets in response to particular events, in an autonomous and asynchronous manner. Performance is slightly degraded for the RPL WSN with a root node. This work shows how to simulate and evaluate distributed WSNs using the Cooja simulator, which would be useful for designing such networks for edge-AI applications such as visual surveillance, monitoring in assisted-living facilities, intelligent transportation with connected vehicles, automated factory floors, and immersive social media experiences.
Cache Optimization Strategy for Mobile Edge Computing in Maritime IoT
Hailong Feng, Zhengqi Cui, Tingting Yang
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766604
With the increasing storage capacity of Internet of Things (IoT) mobile devices, cache-enabled device-to-device (D2D) networks enable efficient information sharing, thereby increasing the transmission efficiency of the entire network. Efficiency is further improved by rationally deploying caching strategies on mobile devices in combination with traditional base-station transmission. In this paper, the mobility-aware caching strategy is divided into two problems. The first is the cache-placement problem of minimizing user latency. We transform it into a decision problem, propose a low-complexity algorithm that approximates the optimal solution, and justify the method using the properties of submodular functions. The second problem addresses external restriction parameters such as the cache file type, the cache upper limit, and the deadline. We find through simulation that the performance improvement of the whole system hits a bottleneck as the external parameters change. A suitable choice of these parameters keeps the system in the range where input and output are most effective, further maximizing the performance of the optimization method. We introduce the concept of marginal efficiency and use Bayesian optimization to select these parameters. The approach is finally validated by simulation with real data.
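The submodularity argument mentioned above typically justifies a greedy placement rule: when the objective (here modeled as the set of user requests a cached file can serve) has diminishing marginal gains, picking the item with the largest marginal gain at each step is within a (1 − 1/e) factor of optimal. The demand sets and unit file sizes below are illustrative assumptions, not the paper's data.

```python
def greedy_cache(demand, capacity):
    """demand:   file -> set of users whose requests that file would serve
       capacity: number of (unit-size) files the device can cache.
       Marginal gain = newly served users, a submodular coverage objective."""
    cached, served = set(), set()
    for _ in range(capacity):
        gains = {f: len(u - served) for f, u in demand.items() if f not in cached}
        if not gains or max(gains.values()) == 0:
            break  # nothing left that helps
        best = max(gains, key=gains.get)
        cached.add(best)
        served |= demand[best]
    return cached, served
```

Note that after caching "f1" below, "f3" has zero marginal gain even though it serves two users in isolation; that diminishing-returns behavior is exactly what makes the greedy choice safe.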
3rd Cloudification of the Internet of Things Conference
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766728
Investigating the Robustness of IoT Security Cameras against Cyber Attacks*
Z. Trabelsi
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766814
In recent years, the Internet of Things (IoT) has become widely used in various domains. In particular, consumers are increasingly using IoT devices to build smart homes. These devices collect data and enable users to manage and secure their smart-home environment. However, IoT devices are the target of malicious users and activities, so security is particularly important for IoT-based smart-home devices. This paper experimentally evaluates the robustness and resilience of one particular type of IoT device, smart-home security cameras, against several common cyber attacks. The attack platform is the Kali Linux operating system, equipped with various penetration-testing and attack tools. The experimental results clearly demonstrate that the evaluated cameras are very vulnerable to the tested attacks and lack efficient built-in security features. This investigation thus supports the view that most current IoT-based smart-home devices are designed and built without sufficient security considerations and solutions, and may not be reliable in untrusted and unsafe environments.
Transforming Deep Learning Models for Resource-Efficient Activity Recognition on Mobile Devices
Sevda Ozge Bursa, Özlem Durmaz Incel, G. Alptekin
2022 5th Conference on Cloud and Internet of Things (CIoT) · Pub Date: 2022-03-28 · DOI: 10.1109/ciot53061.2022.9766512
Mobile and wearable sensor technologies have gradually extended their usability into a wide range of applications, from well-being to healthcare. The amount of collected data can quickly become too large to process, and these time- and resource-consuming computations require efficient methods of classification and analysis, for which deep learning is a promising technique. However, it is challenging to train and run deep learning algorithms on mobile devices because of resource constraints such as limited battery power, memory, and computation units. In this paper, we evaluate the performance of four deep architectures for human activity recognition when optimized with the TensorFlow Lite platform for deployment on mobile devices. We trained the algorithms on two datasets from the literature (WISDM and MobiAct) and compared the original models with their optimized versions in terms of accuracy, model size, and resource usage (CPU, memory, and energy). The experiments show that model sizes and resource consumption are significantly reduced when the models are optimized.
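A large part of TensorFlow Lite's default size reduction comes from post-training quantization, which stores float32 weights in int8. The mechanism can be illustrated without TensorFlow by affine-quantizing a synthetic weight array; the single-scale scheme and random weights below are simplifying assumptions (real converters quantize per tensor or per channel).

```python
import array, random

def quantize(weights):
    """Affine-quantize float weights to int8 with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = array.array('b', (round(w / scale) for w in weights))  # int8 storage
    return q, scale

random.seed(0)
w = array.array('f', (random.uniform(-1, 1) for _ in range(1024)))  # float32
q, scale = quantize(w)

# 4 bytes per float32 weight vs 1 byte per int8 weight
ratio = (len(w) * w.itemsize) / (len(q) * q.itemsize)  # -> 4.0
```

This 4x storage saving (before any pruning or operator fusion) is consistent with the substantial model-size reductions the paper reports for the optimized models, at the price of a bounded rounding error per weight.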