Edge Computing for Real Time Botnet Propagation Detection
Pub Date: 2022-10-10 | DOI: 10.1109/RTC56148.2022.9945060
M. Gromov, David Arnold, J. Saniie
Continued growth and adoption of the Internet of Things (IoT) have greatly increased the number of dispersed resources within both corporate and private networks. IoT devices benefit the user by providing more local access to computation and observation than dedicated servers within a centralized data center. However, years of lax or nonexistent cybersecurity standards leave IoT devices as easy prey for attackers seeking soft targets. Further, IoT devices normally operate at the edge of the network, far from sophisticated cyberattack detection and network monitoring tools. Once compromised, IoT devices can be used as a launching point to attack more sensitive targets or can be conscripted into a larger botnet. These botnets are frequently utilized for targeted Distributed Denial of Service (DDoS) attacks against service providers and servers, degrading response times or overwhelming the system entirely. To protect these vulnerable resources, we propose an edge computing system for detecting active threats against local IoT devices. Our system utilizes deep learning, specifically a Convolutional Neural Network (CNN), for detecting attacks. Incoming network traffic is converted into an image before being supplied to the CNN for classification. The network is trained using the N-BaIoT dataset. Since the system is designed to operate at the edge of the network, it runs on the Jetson Nano for real-time attack detection.
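The abstract does not spell out how traffic becomes an image, so the sketch below is one plausible reading: N-BaIoT's 115 statistical features per traffic window are zero-padded to 121 values and reshaped into an 11x11 single-channel image for a small CNN. The padding scheme, layer sizes, and class count are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch: reshape N-BaIoT feature vectors into 2-D "images"
# and classify them with a small CNN. All architecture choices here are
# assumptions for illustration, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

def features_to_image(x: torch.Tensor) -> torch.Tensor:
    """Zero-pad a batch of 115-dim N-BaIoT feature vectors to 121 values
    and reshape into 1-channel 11x11 images."""
    b = x.shape[0]
    x = F.pad(x, (0, 121 - x.shape[1]))          # 115 -> 121
    return x.view(b, 1, 11, 11)                  # (batch, channel, H, W)

class TrafficCNN(nn.Module):
    def __init__(self, n_classes: int = 11):     # e.g. benign + 10 attack types
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 11 * 11, n_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.fc(x.flatten(1))

# Usage: classify a batch of (synthetic) traffic feature vectors.
model = TrafficCNN()
batch = torch.randn(8, 115)                      # stand-in for real N-BaIoT rows
logits = model(features_to_image(batch))
print(logits.argmax(dim=1))                      # predicted class per traffic window
```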
{"title":"Edge Computing for Real Time Botnet Propagation Detection","authors":"M. Gromov, David Arnold, J. Saniie","doi":"10.1109/RTC56148.2022.9945060","DOIUrl":"https://doi.org/10.1109/RTC56148.2022.9945060","url":null,"abstract":"Continued growth and adoption of the Internet of Things (IoT) has greatly increased the number of dispersed resources within both corporate and private networks. IoT devices benefit the user by providing more local access to computation and observation compared to dedicated servers within a centralized data center. However, years of lax or nonexistent cybersecurity standards leave IoT devices as easy prey for hackers looking for easy targets. Further, IoT devices normally operate at the edge of the network, far from sophisticated cyberattack detection and network monitoring tools. When hacked, IoT can be used as a launching point to attack more sensitive targets or can be collected into a larger botnet. These botnets are frequently utilized for targeted Distributed Denial of Service (DDoS) attacks against service providers and servers, decreasing response time or overwhelming the system. In order to protect these vulnerable resources, we propose an edge computing system for detecting active threats against local IoT devices. Our system will utilize deep learning, specifically a Convolutional Neural Network (CNN) for detecting attacks. Incoming network traffic will be converted into an image before beings supplied to the CNN for classification. The network will be trained using the N-BaIoT dataset. Since the system is designed to operate at the edge of the network, it will run on the Jetson Nano for real-time attack detection.","PeriodicalId":437897,"journal":{"name":"2022 IEEE International Conference and Expo on Real Time Communications at IIT (RTC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128298282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maximizing Stable Throughput in Age of Information-Based Cognitive Radio
Pub Date: 2022-10-10 | DOI: 10.1109/RTC56148.2022.9945059
Ali Gaber Mohamed, Mahmoud Gamal
Cognitive radio is a viable spectrum access framework that overcomes the disadvantages of the license-based transmission procedure by allowing secondary users to access the spectrum to transmit their data. However, with the evolution of latency-sensitive real-time communication applications such as gaming and extended reality, it has become vital to study cognitive radio networks under latency requirements on data transmission. Recently, the Age of Information has emerged as an important metric for evaluating the freshness of transmitted data. In this paper, we analyze the latency and stability of a two-user cognitive radio network consisting of one primary user, one secondary user, and their destinations. The latency requirements of the transmitted data packets are captured by imposing Age of Information constraints on the users' transmissions. We present two optimization problems: in the first, the secondary user's stable throughput is maximized under an Age of Information constraint on the secondary user's transmissions; in the second, we maximize the secondary user's stable throughput under Age of Information constraints on the transmissions of both the primary and secondary users. Both problems turn out to be non-linear programs, which we solve numerically with an appropriate algorithm. Our results characterize the impact of Age of Information constraints on the stability region of the network; we demonstrate that, in certain cases, the stability region shrinks by only 11% under strict Age of Information restrictions compared to the scenario with no latency requirements. Our results also demonstrate the accuracy of the algorithm adopted in this paper for solving the formulated optimization problems.
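The paper's two-user formulation is not reproduced in this abstract, so the following toy is only a stand-in that shows the general shape of such a problem: maximizing the arrival rate of a single M/M/1 status-update queue subject to an average Age of Information ceiling, solved with SciPy's SLSQP. The closed-form average-AoI expression for FCFS M/M/1 is a known result (Kaul, Yates, and Gruteser, 2012); the service rate and AoI ceiling are made-up parameters.

```python
# Illustrative toy (not the paper's two-user cognitive radio model):
# maximize throughput lambda subject to an average AoI constraint.
import numpy as np
from scipy.optimize import minimize

MU = 1.0        # service rate (packets/slot), assumed
AOI_MAX = 5.0   # average AoI ceiling, assumed

def avg_aoi(lam: float) -> float:
    """Average AoI of an FCFS M/M/1 queue: (1/mu)(1 + 1/rho + rho^2/(1-rho))."""
    rho = lam / MU
    return (1.0 / MU) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))

res = minimize(
    lambda x: -x[0],                              # maximize throughput lambda
    x0=[0.5],
    bounds=[(1e-3, 0.99 * MU)],                   # keep the queue stable
    constraints=[{"type": "ineq",
                  "fun": lambda x: AOI_MAX - avg_aoi(x[0])}],
    method="SLSQP",
)
print(f"max throughput under AoI <= {AOI_MAX}: lambda = {res.x[0]:.3f}")
```

Note the trade-off this captures: average AoI blows up both as the queue empties (stale updates) and as it saturates (queueing delay), so the AoI constraint carves out an interior feasible region, mirroring how AoI constraints shrink the stability region in the paper.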
{"title":"Maximizing Stable Throughput in Age of Information-Based Cognitive Radio","authors":"Ali Gaber Mohamed, Mahmoud Gamal","doi":"10.1109/RTC56148.2022.9945059","DOIUrl":"https://doi.org/10.1109/RTC56148.2022.9945059","url":null,"abstract":"Cognitive radio can be considered as a viable frequency access framework that overcomes the disadvantages of the licensed-based transmission procedure by allowing the secondary users to access the spectrum to transmit their data. However, due to the evolution of latency-sensitive real-time communication applications such as gaming and extended reality, it becomes more vital that cognitive radio networks should be studied with latency requirements on the data transmission. Recently, Age of Information has introduced itself as a important metric for evaluating the freshness of the transmitted data. In this paper, we investigate the latency and stability analysis of a two-user cognitive radio network that consists of one primary user, one secondary user and their destinations. The latency requirements of the transmitted data packets are taken into consideration by imposing Age of Information constraints on the data transmission of the users. We present two optimization problems, in the first problem, the secondary user stable throughput is maximized under an Age of Information constraint imposed on the data transmission of the secondary user. While, in the second problem, we maximize the stable throughput of the secondary user with respect to Age of Information constraints set on the data transmission of both the primary and secondary users. The resultant problems are found to be non-linear programming optimization problems. An appropriate algorithm is used to solve the problems and provide the numerical solutions. Our results characterize the impact of setting Age of Information constraints on the stability region of the network; we demonstrate that the stability region, in certain cases, is reduced by only 11% with strict Age of Information restrictions if compared to the scenario where no latency requirements is considered. Our results also show the potential accuracy of the algorithm adopted in this paper to solve the formulated optimization problems.","PeriodicalId":437897,"journal":{"name":"2022 IEEE International Conference and Expo on Real Time Communications at IIT (RTC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128851881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Processing for Detecting Botnet Attacks: A Novel Approach for Flexibility and Scalability
Pub Date: 2022-10-10 | DOI: 10.1109/RTC56148.2022.9945055
Aurelien Agniel, David Arnold, J. Saniie
Continued adoption of the Internet of Things (IoT) redefines the paradigm of network architectures. Historically, network architectures relied on centralized resources and data centers. The introduction of the IoT challenges this notion by placing computing resources and observation at the edge of the network. As a result, decentralized approaches for information processing and gathering can be adopted and explored. However, this shift greatly expands the network footprint and moves traffic away from the center of the network, where observation and cybersecurity monitoring tools are frequently located. Further, IoT devices are often computationally constrained, limiting their ability to defend against cyber-threats. These security vulnerabilities make the IoT an easy target for hacking groups and lead to the proliferation of zombie networks of compromised devices. Frequently, zombie networks, otherwise known as botnets, are coordinated to attack targets and overload network resources through a Distributed Denial of Service (DDoS) attack. To crack down on these botnets, it is essential to develop new methods for quickly and efficiently detecting botnet activity. This study proposes a novel botnet detection technique that first pre-processes network data through computer vision and image processing. The processed dataset is then sent to a neural network for final classification. Two neural networks are explored: a sequential model and an auto-encoder model. The application of image processing has two advantages over current methods. First, the image processing is simple enough to be completed at the edge of the network by the IoT devices themselves. Second, preprocessing the data allows us to use a shallower network, decreasing detection time further. We utilize the N-BaIoT dataset and compare our findings against its published results.
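The specific image-processing step is not described in the abstract, so this minimal sketch substitutes a cheap 3x3 mean blur as the edge-side preprocessing and pairs it with a small fully-connected auto-encoder scored by reconstruction error; the layer widths, 11x11 image geometry, and anomaly threshold are all invented for illustration.

```python
# Minimal sketch of the auto-encoder variant under stated assumptions:
# preprocessing is a mean blur (stand-in for the unspecified step), and
# high reconstruction error flags a traffic window as likely botnet activity.
import torch
import torch.nn as nn
import torch.nn.functional as F

def blur_image(img: torch.Tensor) -> torch.Tensor:
    """Cheap 3x3 mean filter, light enough to run on an IoT device."""
    kernel = torch.full((1, 1, 3, 3), 1.0 / 9.0)
    return F.conv2d(img, kernel, padding=1)

class AutoEncoder(nn.Module):
    def __init__(self, dim: int = 121):           # flattened 11x11 image
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 8))
        self.dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

# Usage: after training on benign traffic only, reconstruction error
# above a validation-tuned threshold marks a window as suspicious.
ae = AutoEncoder()
imgs = blur_image(torch.randn(8, 1, 11, 11))      # stand-in preprocessed windows
flat = imgs.flatten(1)
err = F.mse_loss(ae(flat), flat, reduction="none").mean(dim=1)
THRESHOLD = 0.5                                   # assumed, tuned on validation data
print(err > THRESHOLD)                            # True -> flag as suspicious
```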
{"title":"Image Processing for Detecting Botnet Attacks: A Novel Approach for Flexibility and Scalability","authors":"Aurelien Agniel, David Arnold, J. Saniie","doi":"10.1109/RTC56148.2022.9945055","DOIUrl":"https://doi.org/10.1109/RTC56148.2022.9945055","url":null,"abstract":"Continued adoption of the Internet of Things (IoT) redefines the paradigm of network architectures. Historically, network architectures relied on centralized resources and data centers. The introduction of the IoT challenges this notion by placing computing resources and observation at the edge of the network. As a result, decentralized approaches for information processing and gathering can be adopted and explored. However, this shift greatly expands the network footprint and shifts traffic away from the center of the network, where observation and cybersecurity monitoring tools are frequently located. Further, IoT devices are often computationally constrained, limiting their readiness to deal with cyber-threats. These security vulnerabilities make the IoT an easy target for hacking groups and lead to the proliferation of zombie networks of compromised devices. Frequently, zombie networks, otherwise known as botnets, are coordinated to attack targets and overload network resources through a Distributed Denial of Service (DDoS) attack. In order to crack down on these botnets, it is essential to develop new methods for quickly and efficiently detecting botnet activity. This study proposes a novel botnet detection technique that first pre-processes network data through computer vision and image processing. The processed dataset is then sent to a neural network for final classification. Two neural networks will be explored, a sequential model and an auto-encoder model. The application of image processing has two advantages over current methods. First, the image processing is simple enough to be completed at the edge of the network by the IoT devices. Second, preprocessing the data allows us to use a shallower network, decreasing detection time further. We will utilize the N-BaIoT dataset and compare our findings to their results.","PeriodicalId":437897,"journal":{"name":"2022 IEEE International Conference and Expo on Real Time Communications at IIT (RTC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116986024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}