Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.04
Jaydip Kumar, Hemant Kumar, K. Singh, Vipin Saxena
Information security in cloud computing refers to the protection of data items such as text, images, audio, and video files. In the modern era, data volume is growing rapidly from gigabytes to terabytes or even petabytes due to the generation of large amounts of real-time data. The majority of this data is stored in cloud computing environments and is sent or received over the internet. Because cloud computing offers internet-based services, attackers and illegitimate users on the internet constantly try to gain access to users' private data without permission, and hackers frequently replace genuine data with fake data. As a result, data security has recently attracted considerable attention. To overcome these security threats, a security model is proposed that enhances the security of cloud data through fingerprint authentication for access control and a genetic algorithm for encryption and decryption of cloud data, so that file access rights are granted only to authorized users. To search for desired data in the cloud, a fuzzy encrypted keyword search technique is used, with the encrypted keywords stored in cloud storage using SHA-256 hashing. The proposed model minimizes computation time and strengthens security against threats in the cloud. The computed results are presented in the form of figures and tables.
Title: Secure Data Storage and Retrieval over the Encrypted Cloud Computing
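The fuzzy encrypted keyword search idea can be sketched as follows: keywords and their single-character-substitution wildcard variants are hashed with SHA-256, so the server can match a slightly misspelled query against stored hashes without ever seeing a keyword in plaintext. This is an illustrative sketch under assumed details (the variant construction and function names are inventions here, not the authors' exact scheme).

```python
import hashlib

def sha256_hex(word: str) -> str:
    # Hash a keyword so the server never stores or sees it in plaintext.
    return hashlib.sha256(word.encode("utf-8")).hexdigest()

def wildcard_variants(word: str):
    # Edit-distance-1 "fuzzy set": the word itself plus each single
    # position replaced by a wildcard character.
    variants = {word}
    for i in range(len(word)):
        variants.add(word[:i] + "*" + word[i + 1:])
    return variants

def build_index(keywords):
    # Server-side index: hash of each fuzzy variant -> hash of the exact keyword.
    index = {}
    for kw in keywords:
        target = sha256_hex(kw)
        for v in wildcard_variants(kw):
            index.setdefault(sha256_hex(v), set()).add(target)
    return index

def fuzzy_search(index, query):
    # A query matches if any of its fuzzy variants hashes into the index.
    hits = set()
    for v in wildcard_variants(query):
        hits |= index.get(sha256_hex(v), set())
    return hits

index = build_index(["invoice", "report"])
print(fuzzy_search(index, "invoise"))  # one typo still finds "invoice"
```

A real deployment would add trapdoor keys so only authorized users can form the hashes; here the point is only that matching happens entirely on SHA-256 digests.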
Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.09
P. S., S. Kuzhalvaimozhi, Bhuvan K., Ramitha R., Tanisha Machaiah M.
Multi-access edge computing (MEC) can provide high bandwidth and low latency, ensuring high efficiency in network operations, which makes it a promising technology. MEC allows data to be processed and analyzed at the network edge, but it has a finite pool of usable resources. To overcome this restriction, an orchestrator can use a scheduling algorithm to deliver high-quality services by choosing when and where each process should be executed. The scheduling algorithm must meet the expected outcome while utilizing fewer resources. This paper provides a scheduling algorithm with two cooperative levels and an orchestrator layer acting at the center. The first level schedules local processes on the MEC servers, and the second level, representing the orchestrator, allocates processes to nearby stations or the cloud. Depending on latency and throughput, processes are executed according to their priority. A resource optimization algorithm is also proposed for additional performance. This offers a cost-efficient solution with good service availability. The proposed algorithm achieves an average wait time of 2.37 ms and an average blocking percentage of 0.4. The blocking percentage is 1.65 times better than Shortest Job First Scheduling (SJFS) and 1.3 times better than Earliest Deadline First Scheduling (EDFS). The optimization algorithm can work with many kinds of network traffic models, such as uniformly distributed loads and base stations with unbalanced loads.
Title: An Enhanced Process Scheduler Using Multi-Access Edge Computing in An IoT Network
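The two cooperative scheduling levels can be sketched as below. The capacities, task fields, and the deadline-based priority rule are illustrative assumptions, not the paper's exact algorithm: level 1 keeps a process on the local MEC server while capacity remains, and level 2 (the orchestrator) offloads the overflow to a nearby station or, as a last resort, to the cloud.

```python
def schedule(tasks, local_capacity, neighbor_capacity):
    # Level 1 priority rule: tighter deadline runs first (illustrative choice).
    tasks = sorted(tasks, key=lambda t: t["deadline_ms"])
    placement, local_used, neighbor_used = {}, 0, 0
    for t in tasks:
        if local_used + t["load"] <= local_capacity:
            # Level 1: schedule on the local MEC server while capacity remains.
            placement[t["name"]] = "local"
            local_used += t["load"]
        elif neighbor_used + t["load"] <= neighbor_capacity:
            # Level 2: the orchestrator offloads to a nearby base station...
            placement[t["name"]] = "neighbor"
            neighbor_used += t["load"]
        else:
            # ...or falls back to the cloud, trading latency for no blocking.
            placement[t["name"]] = "cloud"
    return placement

tasks = [
    {"name": "video", "load": 4, "deadline_ms": 20},
    {"name": "sensor", "load": 2, "deadline_ms": 5},
    {"name": "backup", "load": 6, "deadline_ms": 500},
]
plan = schedule(tasks, local_capacity=5, neighbor_capacity=4)
print(plan)  # {'sensor': 'local', 'video': 'neighbor', 'backup': 'cloud'}
```

Because the cloud branch never rejects a task, blocking only occurs if one adds admission control; the structure above is what keeps the blocking percentage low.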
Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.05
Serhii Vladov, Ruslan Yakovliev, Victoria Vysotska, Dmytro Uhryn, Yuriy Ushenko
This work focuses on developing a universal onboard neural network system for restoring information when helicopter turboshaft engine sensors fail. A mathematical task was formulated to determine the occurrence and location of these sensor failures using a multi-class Bayesian classification model that incorporates prior knowledge and updates probabilities with new data. The Bayesian approach was employed to identify and localize sensor failures, with a Bayesian neural network of 4–6–3 structure serving as the core of the developed system. A training algorithm for the Bayesian neural network was created: it estimates the prior distribution of network parameters through variational approximation, maximizes the evidence lower bound instead of the direct likelihood, and updates parameters by calculating gradients of the log-likelihood and the evidence lower bound, while adding regularization terms and producing distributions and uncertainty estimates to interpret the results. This approach ensures balanced data handling, effective training (achieving nearly 100% accuracy on both training and validation sets), and improved model understanding (with training losses not exceeding 2.5%). An example is provided that demonstrates solving the information restoration task in the event of a gas-generator rotor r.p.m. sensor failure in the TV3-117 helicopter turboshaft engine. The feasibility of implementing the developed onboard neural network system on a helicopter using the Intel Neural Compute Stick 2 neuro-processor has been analytically proven.
Title: Universal On-board Neural Network System for Restoring Information in Case of Helicopter Turboshaft Engine Sensor Failure
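The variational objective described above (maximizing the evidence lower bound rather than the direct likelihood) can be illustrated numerically on a toy one-weight model with a Gaussian posterior and a standard-normal prior. Everything below is an assumption for illustration, not the paper's 4–6–3 network: the negative ELBO is the Monte Carlo expected data misfit plus the closed-form KL regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gaussian(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over weights (closed form).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def neg_elbo(x, y, mu, log_var, n_samples=10):
    # Monte Carlo negative ELBO for a one-weight model y = w * x:
    # summed squared-error likelihood term plus the KL regularizer.
    sigma = np.exp(0.5 * log_var)
    nll = 0.0
    for _ in range(n_samples):
        w = mu + sigma * rng.standard_normal()  # reparameterization trick
        nll += np.sum((y - w * x) ** 2)
    return nll / n_samples + kl_gaussian(np.array([mu]), np.array([log_var]))

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x
# A posterior centred on the true weight scores a lower loss than a poor one.
good = neg_elbo(x, y, mu=2.0, log_var=-4.0)
bad = neg_elbo(x, y, mu=0.0, log_var=-4.0)
print(good < bad)
```

The sampled weight also yields the distributions and uncertainty estimates the abstract mentions: repeated forward passes with resampled `w` give a predictive spread rather than a point estimate.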
Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.06
Santhosh Kumar Medishetti, Ganesh Reddy
Cloud-fog computing frameworks are innovative frameworks designed to improve present Internet of Things (IoT) infrastructures. The major limitation for IoT applications is the availability of ongoing energy sources for fog computing servers, because transmitting the enormous amount of data generated by IoT devices increases network bandwidth overhead and slows response time. Therefore, in this paper, the Butterfly Spotted Hyena Optimization algorithm (BSHOA) is proposed as an energy-aware task scheduling technique for IoT requests in a cloud-fog environment. In this hybrid BSHOA algorithm, the Butterfly Optimization Algorithm (BOA) is combined with Spotted Hyena Optimization (SHO) to enhance the global and local search behavior of BOA while finding the optimal solution to the problem under consideration. To show the applicability and efficiency of the presented BSHOA approach, experiments were conducted on real workloads taken from the Parallel Workload Archive, comprising the NASA Ames iPSC/860 and HPC2N (High-Performance Computing Center North) workloads. The findings indicate that BSHOA has a strong capacity for dealing with the task scheduling issue and outperforms other approaches in terms of performance parameters including throughput, energy usage, and makespan.
Title: BSHOA: Energy Efficient Task Scheduling in Cloud-fog Environment
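A loose sketch of how a BOA/SHO hybrid might combine moves on a toy objective: each dimension either takes a butterfly-style fragrance-weighted drift toward the best solution or an SHO-style encircling step whose radius decays over the iterations. The update rules, constants, and greedy acceptance here are simplified assumptions, not the authors' BSHOA.

```python
import random

def sphere(x):
    # Toy objective: sum of squares, minimum 0 at the origin.
    return sum(v * v for v in x)

def bshoa_sketch(fitness, dim=3, pop=20, iters=300, seed=1):
    rng = random.Random(seed)
    swarm = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(swarm, key=fitness)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)  # decays 2 -> 0, as in SHO's control parameter
        for i, x in enumerate(swarm):
            cand = []
            for d in range(dim):
                if rng.random() < 0.5:
                    # BOA-style move: random drift toward the best solution.
                    cand.append(x[d] + rng.random() * (best[d] - x[d]))
                else:
                    # SHO-style encircling of the current best ("prey").
                    dist = abs(2 * rng.random() * best[d] - x[d])
                    cand.append(best[d] - a * (2 * rng.random() - 1) * dist)
            if fitness(cand) < fitness(x):  # greedy acceptance
                swarm[i] = cand
        best = min(swarm + [best], key=fitness)
    return best

best = bshoa_sketch(sphere)
print(sphere(best))  # close to 0
```

In the scheduling setting, the position vector would encode task-to-server assignments and the fitness would combine energy, throughput, and makespan rather than this toy sphere function.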
Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.10
Z. Avkurova, Sergiy Gnatyuk, Bayan Abduraimova, Kaiyrbek Makulov
The number of new cybersecurity threats is increasing over time, as is the amount of information generated, processed, stored, and transmitted using ICT. Particularly sensitive are the critical infrastructure objects of the state, which include the mining industry, transport, telecommunications, the banking system, etc. From this perspective, the development of systems for detecting attacks and identifying intruders (including for the critical infrastructure of the state) is an important and relevant scientific task, which determined the objectives of this article. The paper identifies the main factors influencing the choice of the most effective method for calculating importance coefficients, in order to increase the objectivity and simplicity of expert assessment of security events in cyberspace. A methodology for the experimental study was also developed, defining the goals and objectives of the experiment, the input and output parameters, the hypothesis and research criteria, the sufficiency of experimental objects, and the sequence of necessary actions. The experimental study confirmed the adequacy of the proposed models, as well as the ability of the method and system built on them to detect targeted attacks and identify intruders in cyberspace at an early stage, a capability not included in the functionality of modern intrusion detection and prevention systems.
Title: Targeted Attacks Detection and Security Intruders Identification in the Cyber Space
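One simple way to turn expert ratings of security events into importance coefficients is to average each event's ratings and normalize so the coefficients sum to one. This is a hedged sketch of the general idea only; the paper's comparison of calculation methods is not reproduced here, and the event names and ratings are invented.

```python
def importance_coefficients(scores):
    # scores[event] -> list of expert ratings (e.g. on a 1..10 scale).
    # Average each event's ratings, then normalise so coefficients sum to 1.
    means = {e: sum(r) / len(r) for e, r in scores.items()}
    total = sum(means.values())
    return {e: m / total for e, m in means.items()}

ratings = {
    "port_scan": [4, 5, 4],
    "privilege_escalation": [9, 8, 9],
    "dns_anomaly": [3, 3, 2],
}
coeffs = importance_coefficients(ratings)
print(max(coeffs, key=coeffs.get))  # privilege_escalation
```

More elaborate methods (pairwise comparison, rank-based weighting) differ in how `means` is formed, but all end with a normalization step like the one above so that events can be compared on a common scale.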
Augmented Reality (AR) and Virtual Reality (VR) are innovative technologies experiencing widespread recognition. These technologies possess the capability to transform and redefine our interactions with the surrounding environment. However, as they spread, they also introduce new security challenges. In this paper, we discuss the security challenges posed by AR and VR and propose a Machine Learning-based approach to address them. We also discuss how Machine Learning can be used to detect and prevent attacks in AR and VR. By leveraging the power of Machine Learning algorithms, we aim to bolster the security defences of AR and VR systems. To accomplish this, we have conducted a comprehensive evaluation of various Machine Learning algorithms, meticulously analysing their performance and efficacy in enhancing security. Our results show that Machine Learning can be an effective way to improve the security of AR and VR systems.
Title: Attack Modeling and Security Analysis Using Machine Learning Algorithms Enabled with Augmented Reality and Virtual Reality
Authors: Momina Mushtaq, Rakesh Kumar Jha, Manish Sabraj, Shubha Jain
Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.08
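A toy example of ML-based attack detection on hypothetical AR/VR session features (the feature names "packet rate" and "head-pose jitter" are invented for illustration): a nearest-centroid classifier learns one prototype per class and labels new telemetry by the closest prototype. Real systems would use richer features and stronger models; the mechanism is the same.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_centroids(samples):
    # samples: list of (feature_vector, label); compute one centroid per class.
    sums, counts = {}, {}
    for vec, label in samples:
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + v for s, v in zip(sums.get(label, [0] * len(vec)), vec)]
    return {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def classify(centroids, vec):
    # Label a session by its nearest class prototype.
    return min(centroids, key=lambda lbl: euclid(centroids[lbl], vec))

# Hypothetical per-session features: [packet_rate, head_pose_jitter].
train = [
    ([10, 0.1], "benign"), ([12, 0.2], "benign"),
    ([90, 2.5], "attack"), ([80, 2.0], "attack"),
]
model = train_centroids(train)
print(classify(model, [85, 2.2]))  # attack
```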
Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.07
Vishwas C. G. M., R. Kunte
Image encryption is an efficient mechanism for securing digital images during transmission over communication channels, in which key sequence generation plays a vital role. The proposed system consists of stages such as the generation of four chaotic maps, conversion of the generated maps to binary vectors, rotation of a Linear Feedback Shift Register (LFSR), and selection of binary chaotic key sequences from the generated key pool. The novelty of this implementation is that binary sequences are generated by selecting among all four chaotic maps, viz. the Tent, Logistic, Henon, and Arnold Cat map (ACM). The LFSR selects which chaotic map produces each random key sequence. Five primitive polynomials of degrees 5, 6, 7, and 8 are considered for the generation of key sequences. Each primitive polynomial generates 61 binary key sequences, which are stored in a binary key pool. All 61 generated binary key sequences are submitted to the NIST and FIPS tests, and a performance analysis of the generated sequences is carried out. From the obtained results, it can be concluded that the binary key sequences are random and unpredictable and have a large key space, based on individual key sequences and their combinations. The generated binary key sequences can therefore be efficiently utilized for the encryption of digital images.
Title: Chaotic Map based Random Binary Key Sequence Generation
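The map-selection pipeline can be sketched as below using two of the four maps (Logistic and Tent) and the degree-5 primitive polynomial x^5 + x^2 + 1: the LFSR output bit decides which chaotic map advances, and that map's sample is thresholded at 0.5 to give a key bit. Seeds, thresholds, and parameter values are illustrative assumptions, not the paper's exact configuration.

```python
def logistic(x, r=3.99):
    # Logistic map, chaotic for r near 4; stays in (0, 1).
    return r * x * (1 - x)

def tent(x, mu=1.99):
    # Tent map, chaotic for mu near 2; stays in (0, 1).
    return mu * x if x < 0.5 else mu * (1 - x)

def lfsr_bits(seed, taps, n):
    # Fibonacci LFSR; taps (5, 2) correspond to x^5 + x^2 + 1 (primitive).
    state, width = seed, max(taps)
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        yield state & 1
        state = (state >> 1) | (bit << (width - 1))

def chaotic_key(n, x0=0.3141, y0=0.2718, seed=0b10101, taps=(5, 2)):
    # Each LFSR output bit selects which chaotic map produces the next key bit;
    # the map sample is thresholded at 0.5 to yield a binary digit.
    x, y, bits = x0, y0, []
    for sel in lfsr_bits(seed, taps, n):
        if sel:
            x = logistic(x)
            bits.append(1 if x > 0.5 else 0)
        else:
            y = tent(y)
            bits.append(1 if y > 0.5 else 0)
    return bits

key = chaotic_key(128)
print(len(key), sum(key))
```

The full system would rotate among all four maps and feed the resulting pool through the NIST/FIPS batteries; the sketch shows only the selection mechanism.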
Pub Date: 2024-08-08 | DOI: 10.5815/ijcnis.2024.04.01
Marouane Myyara, Oussama Lagnfdi, A. Darif, Abderrazak Farchane
Multi-access Edge Computing optimizes computation in proximity to smart mobile devices, addressing the limitations of devices with insufficient capabilities. In scenarios featuring multiple compute-intensive and delay-sensitive applications, computation offloading becomes essential. The objective of this research is to enhance user experience, minimize service time, and balance workloads while optimizing computation offloading and resource utilization. In this study, we introduce dynamic computation offloading algorithms that concurrently minimize service time and maximize quality of experience. These algorithms take task and resource characteristics into account to determine the optimal execution location based on evaluated metrics. To assess the impact of the proposed algorithms, we employed the EdgeCloudSim simulator, which offers a realistic assessment of a Multi-access Edge Computing system. Simulation results show the superiority of our dynamic computation offloading algorithm over alternatives, achieving enhanced quality of experience and minimal service time. The findings underscore the effectiveness of the proposed algorithm and its potential to enhance mobile application performance. The comprehensive evaluation provides insights into the robustness and practical applicability of the proposed approach, positioning it as a valuable solution in the context of MEC networks. This research contributes to ongoing efforts to advance computation offloading strategies for improved performance in edge computing environments.
Title: Quality of Experience Improvement and Service Time Optimization through Dynamic Computation Offloading Algorithms in Multi-access Edge Computing Networks
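A minimal estimate-and-choose sketch of dynamic offloading: service time is modeled as transfer delay plus execution time, and the execution location (device, edge, cloud) with the lowest estimate wins. The site parameters and the simple additive cost model are invented for illustration; real offloading algorithms also weigh queueing, energy, and QoE metrics.

```python
def service_time(task_mi, data_mb, cpu_mips, bw_mbps, rtt_ms):
    # Estimated service time (ms) = transfer delay + execution time.
    transfer = data_mb * 8 / bw_mbps * 1000 + rtt_ms
    execute = task_mi / cpu_mips * 1000
    return transfer + execute

def choose_location(task_mi, data_mb, sites):
    # Pick the site with the lowest estimated service time.
    return min(sites, key=lambda s: service_time(task_mi, data_mb, **sites[s]))

sites = {
    # Local execution: effectively no transfer, but a slow CPU.
    "device": {"cpu_mips": 1000, "bw_mbps": 10**9, "rtt_ms": 0},
    "edge":   {"cpu_mips": 8000, "bw_mbps": 100, "rtt_ms": 5},
    "cloud":  {"cpu_mips": 40000, "bw_mbps": 20, "rtt_ms": 60},
}
# A heavy task with modest data favours the edge server.
print(choose_location(task_mi=4000, data_mb=2, sites=sites))  # edge
```

Running the same decision for a tiny task keeps it on the device, which is exactly the dynamic behaviour the abstract describes: the optimal location shifts with task and resource characteristics.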
Cybersecurity has received significant attention globally with the ever-continuing expansion of internet usage, due to the growing trends and adverse impacts of cybercrime, which include disrupting businesses, corrupting or altering sensitive data, stealing or exposing information, and illegally accessing computer networks. Various firewalls, antivirus systems, and Intrusion Detection Systems (IDS) have been introduced to protect networks from such attacks. Recently, Machine Learning (ML), including Deep Learning (DL)-based autonomous systems, has become state-of-the-art in cybersecurity, owing to its rapid growth and superior performance. This study aims to develop a novel IDS that focuses on classifying attack cases correctly and categorizing attacks into subclass levels, proposing a two-step process with a cascaded framework. The proposed framework recognizes attacks using one ML model and classifies them into subclass levels using another ML model in successive operations. The most challenging part is training both models on datasets with unbalanced attack and non-attack cases, which is overcome by a proposed data augmentation technique. Specifically, the limited attack samples of the dataset are augmented in the training set so that the attack cases are learned properly. Finally, the proposed framework is implemented with a neural network (NN), the most popular ML model, and evaluated on the NSL-KDD dataset through a rigorous analysis of each subclass, emphasizing the major attack classes. The proficiency of the proposed cascaded approach with data augmentation is compared with three other models: the cascaded model without data augmentation and the standard single NN model with and without the data augmentation technique. Experimental results on the NSL-KDD dataset reveal the proposed method to be an effective IDS that outperforms existing state-of-the-art ML models.
{"title":"Cascaded Machine Learning Approach with Data Augmentation for Intrusion Detection System","authors":"Argha Chandra Dhar, Arna Roy, M. Akhand, Md Abdus Samad Kamal, Kou Yamada","doi":"10.5815/ijcnis.2024.04.02","DOIUrl":"https://doi.org/10.5815/ijcnis.2024.04.02","url":null,"abstract":"Cybersecurity has received significant attention globally with the ever-continuing expansion of internet usage, owing to the growing trends and adverse impacts of cybercrimes, which include disrupting businesses, corrupting or altering sensitive data, stealing or exposing information, and illegally accessing a computer network. Different kinds of firewalls, antivirus systems, and Intrusion Detection Systems (IDS) have been introduced to protect networks from such attacks. Recently, Machine Learning (ML), including Deep Learning (DL) based autonomous systems, has become the state of the art in cybersecurity, owing to its rapid growth and superior performance. This study aims to develop a novel IDS that pays particular attention to classifying attack cases correctly and categorizes attacks into subclass levels by proposing a two-step process with a cascaded framework. The proposed framework recognizes attacks using one ML model and classifies them into subclass levels using another ML model in successive operations. The most challenging part is training both models on datasets with unbalanced attack and non-attack cases, which is overcome with a proposed data augmentation technique: the limited attack samples in the dataset are augmented in the training set so that attack cases are learned properly. Finally, the proposed framework is implemented with a Neural Network (NN), the most popular ML model, and evaluated on the NSL-KDD dataset by conducting a rigorous analysis of each subclass, emphasizing the major attack classes. The proficiency of the proposed cascaded approach with data augmentation is compared with three other models: the cascaded model without data augmentation, and the standard single NN model with and without the data augmentation technique. Experimental results on the NSL-KDD dataset reveal the proposed method to be an effective IDS that outperforms existing state-of-the-art ML models.","PeriodicalId":36488,"journal":{"name":"International Journal of Computer Network and Information Security","volume":"59 47","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141929042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
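The cascaded idea the abstract describes, one model separating attacks from normal traffic and a second assigning attack subclasses, with minority attack samples oversampled before training, can be sketched in a few lines. This is a hypothetical stdlib-only illustration, not the paper's implementation: a nearest-centroid classifier stands in for the neural networks, and the toy features, class names, and jitter-based augmentation are invented for demonstration.

```python
import random
from statistics import mean

def augment(samples, target_n, jitter=0.05, seed=0):
    """Oversample a minority class by duplicating points with small noise."""
    rng = random.Random(seed)
    out = list(samples)
    while len(out) < target_n:
        base = rng.choice(samples)
        out.append([x + rng.uniform(-jitter, jitter) for x in base])
    return out

class NearestCentroid:
    """Trivial stand-in classifier: predict the class of the nearest centroid."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            pts = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = [mean(col) for col in zip(*pts)]
        return self

    def predict(self, x):
        return min(self.centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(x, self.centroids[lab])))

class CascadedIDS:
    """Stage 1: attack vs. normal. Stage 2: subclass, trained on attacks only."""
    def fit(self, X, y_binary, y_subclass):
        self.stage1 = NearestCentroid().fit(X, y_binary)
        attacks = [(x, s) for x, b, s in zip(X, y_binary, y_subclass)
                   if b == "attack"]
        self.stage2 = NearestCentroid().fit([x for x, _ in attacks],
                                            [s for _, s in attacks])
        return self

    def predict(self, x):
        if self.stage1.predict(x) == "normal":
            return "normal"
        return self.stage2.predict(x)

# Toy data: 2-D features, with "DoS" as the minority attack class.
normal = [[0.0, 0.1], [0.1, -0.1], [-0.1, 0.0]]
dos    = [[5.0, 0.0], [5.2, 0.1]]
probe  = [[0.0, 5.0], [0.1, 5.1], [-0.2, 4.9]]
dos = augment(dos, 3)  # balance the minority class before training
X  = normal + dos + probe
yb = ["normal"] * 3 + ["attack"] * (len(dos) + 3)
ys = ["normal"] * 3 + ["DoS"] * len(dos) + ["Probe"] * 3
ids = CascadedIDS().fit(X, yb, ys)
```

Training the second stage only on attack rows mirrors the cascade: the subclass model never has to model normal traffic, which is what lets each stage focus on its own, smaller decision problem.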
Pub Date : 2024-06-08 DOI: 10.5815/ijcnis.2024.03.06
J. Isabona, Sayo A. Akinwumi, Theophilus E. Arijaje, Odesanya Ituabhor, A. Imoize
Model-based parameter estimation, identification, and optimisation play a dominant role in many aspects of physical and operational processes in applied sciences, engineering, and other related disciplines. The intricate task involves fitting the most appropriate parametric model, with nonlinear or linear features, to experimental field datasets prior to selecting the best optimisation algorithm with the best configuration. Thus, the task is usually geared towards solving a clear optimisation problem. In this paper, a systematic, stepwise approach is employed to review and benchmark six numerical optimisation algorithms in the MATLAB computational environment: Gradient Descent (GRA), Levenberg-Marquardt (LEM), Quasi-Newton (QAN), Gauss-Newton (GUN), Nelder-Mead (NEM), and Trust-Region-Dogleg (TRD). This has been accomplished by engaging them to solve an intricate radio-frequency propagation modelling and parametric estimation problem involving practical spatial signal data. The spatial signal data were obtained via real-time field drive tests conducted around six eNodeB transmitters, with case studies taken from different terrains where 4G LTE transmitters are operational. Accordingly, three criteria connected with the rate of convergence were used for the assessment. Results show that the approximate Hessian-based QAN algorithm, followed by the LEM algorithm, yielded the best results in optimising and estimating the RF propagation model parameters. The resultant approach and output of this paper will be of considerable assistance to end-users in selecting the most suitable optimisation algorithm for their respective intricate problems.
{"title":"Parameter Estimation of Cellular Communication Systems Models in Computational MATLAB Environment: A Systematic Solver-based Numerical Optimization Approaches","authors":"J. Isabona, Sayo A. Akinwumi, Theophilus E. Arijaje, Odesanya Ituabhor, A. Imoize","doi":"10.5815/ijcnis.2024.03.06","DOIUrl":"https://doi.org/10.5815/ijcnis.2024.03.06","url":null,"abstract":"Model-based parameter estimation, identification, and optimisation play a dominant role in many aspects of physical and operational processes in applied sciences, engineering, and other related disciplines. The intricate task involves fitting the most appropriate parametric model, with nonlinear or linear features, to experimental field datasets prior to selecting the best optimisation algorithm with the best configuration. Thus, the task is usually geared towards solving a clear optimisation problem. In this paper, a systematic, stepwise approach is employed to review and benchmark six numerical optimisation algorithms in the MATLAB computational environment: Gradient Descent (GRA), Levenberg-Marquardt (LEM), Quasi-Newton (QAN), Gauss-Newton (GUN), Nelder-Mead (NEM), and Trust-Region-Dogleg (TRD). This has been accomplished by engaging them to solve an intricate radio-frequency propagation modelling and parametric estimation problem involving practical spatial signal data. The spatial signal data were obtained via real-time field drive tests conducted around six eNodeB transmitters, with case studies taken from different terrains where 4G LTE transmitters are operational. Accordingly, three criteria connected with the rate of convergence were used for the assessment. Results show that the approximate Hessian-based QAN algorithm, followed by the LEM algorithm, yielded the best results in optimising and estimating the RF propagation model parameters. The resultant approach and output of this paper will be of considerable assistance to end-users in selecting the most suitable optimisation algorithm for their respective intricate problems.","PeriodicalId":36488,"journal":{"name":"International Journal of Computer Network and Information Security","volume":" 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141370390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
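The parametric estimation task behind this benchmark can be made concrete with the classic log-distance path-loss model, PL(d) = PL0 + 10·n·log10(d/d0). The stdlib-only Python sketch below, offered in place of the MATLAB solvers the paper actually benchmarks, fits (PL0, n) to synthetic noisy drive-test samples by ordinary least squares; the true parameter values, noise level, and distance grid are invented for illustration.

```python
import math
import random

def fit_log_distance(distances, losses, d0=1.0):
    """Least-squares estimate of (PL0, n) in PL = PL0 + 10*n*log10(d/d0).

    The model is linear in PL0 and n once distances are mapped to
    x = 10*log10(d/d0), so a closed-form regression suffices here.
    """
    xs = [10 * math.log10(d / d0) for d in distances]
    mx = sum(xs) / len(xs)
    my = sum(losses) / len(losses)
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, losses))
         / sum((x - mx) ** 2 for x in xs))
    pl0 = my - n * mx
    return pl0, n

# Synthetic "drive-test" measurements: true PL0 = 40 dB, exponent n = 3.2,
# with 1 dB Gaussian measurement noise (all values are illustrative).
rng = random.Random(1)
dists = [10 + 5 * i for i in range(50)]                    # metres
true_pl = [40 + 10 * 3.2 * math.log10(d) for d in dists]   # dB
meas = [pl + rng.gauss(0, 1.0) for pl in true_pl]
pl0_hat, n_hat = fit_log_distance(dists, meas)
```

For this linear-in-parameters model the closed form and an iterative solver agree at convergence; the solver comparison in the paper matters when the propagation model is genuinely nonlinear in its parameters, which is where Levenberg-Marquardt, Quasi-Newton, and the other benchmarked methods differ in convergence behaviour.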