Orchestration in the Cloud-to-Things compute continuum: taxonomy, survey and future directions
Pub Date: 2023-09-27 | DOI: 10.1186/s13677-023-00516-5
Amjad Ullah, Tamas Kiss, József Kovács, Francesco Tusa, James Deslauriers, Huseyin Dagdeviren, Resmi Arjun, Hamed Hamzeh
Abstract IoT systems are becoming an essential part of our environment. Smart cities, smart manufacturing, augmented reality, and self-driving cars are just some examples of the wide range of domains in which the applicability of such systems has been increasing rapidly. These IoT use cases often require simultaneous access to geographically distributed arrays of sensors and to heterogeneous remote, local and multi-cloud computational resources. This gives birth to the extended Cloud-to-Things computing paradigm. The emergence of this new paradigm has raised the need to extend the orchestration requirements (i.e., the automated deployment and run-time management) of applications from the centralised cloud-only environment to the entire spectrum of resources in the Cloud-to-Things continuum. To cope with this requirement, the development of orchestration systems has received considerable attention in both industry and academia over the last few years. This paper gathers the research conducted on orchestration for the Cloud-to-Things continuum, proposes a detailed taxonomy, and uses that taxonomy to critically review the landscape of existing research work. Finally, we discuss the key challenges that require further attention and present a conceptual framework based on the conducted analysis.
{"title":"Orchestration in the Cloud-to-Things compute continuum: taxonomy, survey and future directions","authors":"Amjad Ullah, Tamas Kiss, József Kovács, Francesco Tusa, James Deslauriers, Huseyin Dagdeviren, Resmi Arjun, Hamed Hamzeh","doi":"10.1186/s13677-023-00516-5","DOIUrl":"https://doi.org/10.1186/s13677-023-00516-5","url":null,"abstract":"Abstract IoT systems are becoming an essential part of our environment. Smart cities, smart manufacturing, augmented reality, and self-driving cars are just some examples of the wide range of domains, where the applicability of such systems have been increasing rapidly. These IoT use cases often require simultaneous access to geographically distributed arrays of sensors, heterogeneous remote, local as well as multi-cloud computational resources. This gives birth to the extended Cloud-to-Things computing paradigm. The emergence of this new paradigm raised the quintessential need to extend the orchestration requirements (i.e., the automated deployment and run-time management) of applications from the centralised cloud-only environment to the entire spectrum of resources in the Cloud-to-Things continuum. In order to cope with this requirement, in the last few years, there has been a lot of attention to the development of orchestration systems in both industry and academic environments. This paper is an attempt to gather the research conducted in the orchestration for the Cloud-to-Things continuum landscape and to propose a detailed taxonomy, which is then used to critically review the landscape of existing research work. We finally discuss the key challenges that require further attention and also present a conceptual framework based on the conducted analysis.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent intrusion detection framework for multi-clouds – IoT environment using swarm-based deep learning classifier
Pub Date: 2023-09-22 | DOI: 10.1186/s13677-023-00509-4
Syed Mohamed Thameem Nizamudeen
Abstract In the current era, a tremendous volume of data is generated through web technologies. The associations between different devices and services have also been explored to make wide and effective use of recent technologies. Due to restrictions on the available resources, the chance of security violations on constrained devices is increasing sharply. IoT backends rely on multi-cloud infrastructure to extend public services with better scalability and reliability. Several users may access the multi-cloud resources concurrently, which leads to data threats while handling user requests for IoT services. This poses a new challenge in proposing new functional elements and security schemes. This paper introduces an intelligent Intrusion Detection Framework (IDF) to detect network- and application-based attacks. The proposed framework has three phases: data pre-processing, feature selection and classification. Initially, the collected datasets are pre-processed using the Integer-Grading Normalization (I-GN) technique, which ensures a fairly scaled data transformation process. Secondly, the Opposition-based Learning Rat-Inspired Optimizer (OBL-RIO) is designed for the feature selection phase: the progressive nature of the rats drives the selection of the significant features, and the fittest value ensures the stability of the features selected by OBL-RIO. Finally, a 2D-Array-based Convolutional Neural Network (2D-ACNN) is proposed as the binary classifier; the input features are arranged in a 2D array so they can be processed by the convolutional layers, and the network classifies traffic as normal or abnormal. The proposed framework is trained and tested on NetFlow-based datasets and yields 95.20% accuracy, a 2.5% false-positive rate and a 97.24% detection rate.
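As an illustration of the classification phase described above, the following is a minimal sketch of a 2D-array-style convolutional binary classifier in PyTorch: flat, pre-scaled flow features are reshaped into a small square grid before two convolutional layers and a single-logit output. The layer sizes, the 8x8 reshaping and the random stand-in data are assumptions for illustration, not the authors' exact 2D-ACNN (nor their I-GN or OBL-RIO steps).

```python
import torch
import torch.nn as nn

class Simple2DArrayCNN(nn.Module):
    """Toy stand-in for a 2D-array CNN classifier: flat flow features are
    reshaped into a side x side grid and classified as normal/abnormal."""
    def __init__(self, side: int = 8):
        super().__init__()
        self.side = side
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit: normal vs. abnormal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, side*side) scaled features -> (batch, 1, side, side)
        grid = x.view(-1, 1, self.side, self.side)
        h = self.features(grid).flatten(1)
        return self.classifier(h)

# Minimal usage with random stand-in data (64 features per flow record).
model = Simple2DArrayCNN(side=8)
flows = torch.rand(4, 64)         # assume features already scaled to [0, 1]
logits = model(flows)
probs = torch.sigmoid(logits)     # probability of "abnormal" traffic
```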
{"title":"Intelligent intrusion detection framework for multi-clouds – IoT environment using swarm-based deep learning classifier","authors":"Syed Mohamed Thameem Nizamudeen","doi":"10.1186/s13677-023-00509-4","DOIUrl":"https://doi.org/10.1186/s13677-023-00509-4","url":null,"abstract":"Abstract In the current era, a tremendous volume of data has been generated by using web technologies. The association between different devices and services have also been explored to wisely and widely use recent technologies. Due to the restriction in the available resources, the chance of security violation is increasing highly on the constrained devices. IoT backend with the multi-cloud infrastructure to extend the public services in terms of better scalability and reliability. Several users might access the multi-cloud resources that lead to data threats while handling user requests for IoT services. It poses a new challenge in proposing new functional elements and security schemes. This paper introduces an intelligent Intrusion Detection Framework (IDF) to detect network and application-based attacks. The proposed framework has three phases: data pre-processing, feature selection and classification. Initially, the collected datasets are pre-processed using Integer- Grading Normalization (I-GN) technique that ensures a fair-scaled data transformation process. Secondly, Opposition-based Learning- Rat Inspired Optimizer (OBL-RIO) is designed for the feature selection phase. The progressive nature of rats chooses the significant features. The fittest value ensures the stability of the features from OBL-RIO. Finally, a 2D-Array-based Convolutional Neural Network (2D-ACNN) is proposed as the binary class classifier. The input features are preserved in a 2D-array model to perform on the complex layers. It detects normal (or) abnormal traffic. The proposed framework is trained and tested on the Netflow-based datasets. The proposed framework yields 95.20% accuracy, 2.5% false positive rate and 97.24% detection rate.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136011513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simcan2Cloud: a discrete-event-based simulator for modelling and simulating cloud computing infrastructures
Pub Date: 2023-09-18 | DOI: 10.1186/s13677-023-00511-w
Pablo C. Cañizares, Alberto Núñez, Adrián Bernal, M. Emilia Cambronero, Adam Barker
Abstract Cloud computing is an evolving paradigm whose adoption has been increasing over the last few years. This has led to growth of the cloud computing market, an increasing number of cloud service providers, and fierce competition for the leading market share. Novel techniques are continuously being proposed to increase the cloud service provider’s profitability; however, only those techniques that are proven not to hinder the service agreements are considered for production clouds. Analysing the expected behaviour and performance of the cloud infrastructure is challenging, as the repeatability and reproducibility of experiments on these systems are made difficult by the large number of users concurrently accessing the infrastructure. To this must be added the complications of using different provisioning policies, managing several workloads, and applying different resource configurations. To alleviate these issues, we present Simcan2Cloud, a discrete-event-based simulator for modelling and simulating cloud computing environments. Simcan2Cloud models the behaviour of the cloud provider with a high level of detail, integrating both the cloud infrastructure and the interactions of the users with the cloud into the simulated scenarios. For this purpose, Simcan2Cloud supports different resource allocation policies and service level agreements (SLAs), and provides an intuitive and complete API for adding new management policies. Finally, a thorough experimental study measuring the suitability and applicability of Simcan2Cloud, using both real-world traces and synthetic workloads, is presented.
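To make the discrete-event approach concrete, here is a minimal, self-contained sketch of the event loop such simulators are built around: a priority queue of timestamped events and a handler that may schedule further events. The event names (VM_REQUEST, VM_RELEASE) and the toy scenario are hypothetical and are not Simcan2Cloud's actual API.

```python
import heapq
import itertools

class DiscreteEventSimulator:
    """Minimal discrete-event core: events are (time, seq, name, payload)
    tuples popped in timestamp order; handlers may schedule new events."""
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self._seq = itertools.count()   # tie-breaker for equal timestamps

    def schedule(self, delay, name, payload=None):
        heapq.heappush(self.queue, (self.now + delay, next(self._seq), name, payload))

    def run(self, handler, until=float("inf")):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, name, payload = heapq.heappop(self.queue)
            handler(self, name, payload)

# Toy cloud-provider scenario: users request a VM, hold it, then release it.
def handler(sim, name, payload):
    if name == "VM_REQUEST":
        print(f"t={sim.now:5.1f}  user {payload} gets a VM")
        sim.schedule(10.0, "VM_RELEASE", payload)
    elif name == "VM_RELEASE":
        print(f"t={sim.now:5.1f}  user {payload} releases its VM")

sim = DiscreteEventSimulator()
for user, arrival in enumerate([0.0, 2.5, 4.0]):
    sim.schedule(arrival, "VM_REQUEST", user)
sim.run(handler)
```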
{"title":"Simcan2Cloud: a discrete-event-based simulator for modelling and simulating cloud computing infrastructures","authors":"Pablo C. Cañizares, Alberto Núñez, Adrián Bernal, M. Emilia Cambronero, Adam Barker","doi":"10.1186/s13677-023-00511-w","DOIUrl":"https://doi.org/10.1186/s13677-023-00511-w","url":null,"abstract":"Abstract Cloud computing is an evolving paradigm whose adoption has been increasing over the last few years. This fact has led to the growth of the cloud computing market, together with fierce competition for the leading market share, with an increase in the number of cloud service providers. Novel techniques are continuously being proposed to increase the cloud service provider’s profitability. However, only those techniques that are proven not to hinder the service agreements are considered for production clouds. Analysing the expected behaviour and performance of the cloud infrastructure is challenging, as the repeatability and reproducibility of experiments on these systems are made difficult by the large number of users concurrently accessing the infrastructure. To this, must be added the complications of using different provisioning policies, managing several workloads, and applying different resource configurations. Therefore, in order to alleviate these issues, we present Simcan2Cloud, a discrete-event-based simulator for modelling and simulating cloud computing environments. Simcan2Cloud focuses on modelling and simulating the behaviour of the cloud provider with a high level of detail, where both the cloud infrastructure and the interactions of the users with the cloud are integrated in the simulated scenarios. For this purpose, Simcan2Cloud supports different resource allocation policies, service level agreements (SLAs), and an intuitive and complete API for including new management policies. Finally, a thorough experimental study to measure the suitability and applicability of Simcan2Cloud, using both real-world traces and synthetic workloads, is presented.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135202803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stateless Q-learning algorithm for service caching in resource constrained edge environment
Pub Date: 2023-09-13 | DOI: 10.1186/s13677-023-00506-7
Binbin Huang, Ziqi Ran, Dongjin Yu, Yuanyuan Xiang, Xiaoying Shi, Zhongjin Li, Zhengqian Xu
Abstract In resource-constrained edge environments, multiple service providers compete to rent the limited resources and cache their service instances on edge servers close to end users, thereby significantly reducing service delay and improving quality of service (QoS). However, renting the resources of different edge servers to deploy service instances incurs different resource usage costs and service delays. To make full use of the limited resources of the edge servers and further reduce resource usage costs, the service providers on an edge server can form a coalition and share that server's limited resources. In this paper, we investigate the service caching problem of multiple service providers in a resource-constrained edge environment and propose an independent-learners-based service caching scheme (ILSCS), which adopts stateless Q-learning to learn an optimal service caching configuration. To verify the effectiveness of ILSCS, we implement four baseline algorithms (COALITION, RANDOM, MDU, and MCS) and compare the total collaboration cost and service latency of ILSCS with those of these baselines under different experimental parameter settings. The extensive experimental results show that ILSCS achieves lower total collaboration cost and service latency.
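The stateless Q-learning at the core of ILSCS reduces, for each independent learner, to keeping one Q-value per caching action and updating it from the observed outcome. The sketch below illustrates that update with a made-up per-action cost model and epsilon-greedy exploration; the reward definition and parameters are assumptions, not the paper's exact formulation.

```python
import random

class StatelessQLearner:
    """One independent learner (service provider): a single Q-value per
    action, updated from the negative of the observed collaboration cost."""
    def __init__(self, n_actions, alpha=0.1, epsilon=0.2):
        self.q = [0.0] * n_actions
        self.alpha = alpha          # learning rate
        self.epsilon = epsilon      # exploration probability

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))                  # explore
        return max(range(len(self.q)), key=self.q.__getitem__)    # exploit

    def update(self, action, reward):
        # Stateless Q-learning: Q(a) <- Q(a) + alpha * (r - Q(a))
        self.q[action] += self.alpha * (reward - self.q[action])

# Toy cost model (assumed): each caching action has an unknown mean cost.
mean_cost = [5.0, 2.0, 3.5]
learner = StatelessQLearner(n_actions=3)
for _ in range(2000):
    a = learner.choose()
    cost = random.gauss(mean_cost[a], 0.5)
    learner.update(a, reward=-cost)   # lower cost => higher reward
print("learned Q-values:", [round(v, 2) for v in learner.q])
```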
{"title":"Stateless Q-learning algorithm for service caching in resource constrained edge environment","authors":"Binbin Huang, Ziqi Ran, Dongjin Yu, Yuanyuan Xiang, Xiaoying Shi, Zhongjin Li, Zhengqian Xu","doi":"10.1186/s13677-023-00506-7","DOIUrl":"https://doi.org/10.1186/s13677-023-00506-7","url":null,"abstract":"Abstract In resource constrained edge environment, multiple service providers can compete to rent the limited resources to cache their service instances on edge servers close to end users, thereby significantly reducing the service delay and improving quality of service (QoS). However, service providers renting the resources of different edge servers to deploy their service instances can incur different resource usage costs and service delay. To make full use of the limited resources of the edge servers to further reduce resource usage costs, multiple service providers on an edge server can form a coalition and share the limited resource of an edge server. In this paper, we investigate the service caching problem of multiple service providers in resource constrained edge environment, and propose an independent learners-based services caching scheme (ILSCS) which adopts a stateless Q-learning to learn an optimal service caching scheme. To verify the effectiveness of ILSCS scheme, we implement COALITION, RANDOM, MDU, and MCS four baseline algorithms, and compare the total collaboration cost and service latency of ILSCS scheme with these of these four baseline algorithms under different experimental parameter settings. The extensive experimental results show that the ILSCS scheme can achieve lower total collaboration cost and service latency.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135741335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep reinforcement learning assisted task offloading and resource allocation approach towards self-driving object detection
Pub Date: 2023-09-12 | DOI: 10.1186/s13677-023-00503-w
Lili Nie, Huiqiang Wang, Guangsheng Feng, Jiayu Sun, Hongwu Lv, Hang Cui
Abstract With the development of communication technology and mobile edge computing (MEC), self-driving has attracted increasing research interest. However, most object detection tasks for self-driving vehicles are still performed at the vehicle terminals, which often requires a trade-off between detection accuracy and speed. To achieve efficient object detection without sacrificing accuracy, we propose an end-edge collaborative object detection approach based on Deep Reinforcement Learning (DRL) with a task prioritization mechanism. We use a time utility function to measure the efficiency of an object detection task and aim to provide an online approach that maximizes the average sum of the time utilities over all slots. Since this is an NP-hard mixed-integer nonlinear programming (MINLP) problem, we propose an online approach for task offloading and resource allocation based on Deep Reinforcement Learning and Piecewise Linearization (DRPL). A deep neural network (DNN) is implemented as a flexible solution for learning offloading strategies based on road traffic conditions and the wireless network environment, which significantly reduces computational complexity. In addition, to accelerate DRPL network convergence, DNN outputs are grouped by in-vehicle cameras to form offloading strategies via permutation. Numerical results show that the DRPL scheme outperforms several representative offloading schemes by at least 10% in terms of time utility across various vehicle local computing resource scenarios.
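The notion of a time utility function can be made concrete with a small example: the sketch below scores a task by how early it completes and uses that score to choose between local execution and offloading to an edge server. The piecewise-linear utility shape, the two-option decision and all numeric parameters are illustrative assumptions, not the paper's DRPL model.

```python
def time_utility(completion_time, soft_deadline, hard_deadline):
    """Assumed piecewise-linear time utility: full value before the soft
    deadline, linearly decaying to zero at the hard deadline."""
    if completion_time <= soft_deadline:
        return 1.0
    if completion_time >= hard_deadline:
        return 0.0
    return (hard_deadline - completion_time) / (hard_deadline - soft_deadline)

def best_offloading_choice(task_cycles, data_bits, local_cps, edge_cps, uplink_bps,
                           soft_deadline, hard_deadline):
    """Compare local execution against offloading to an edge server and pick
    the option with the higher time utility (toy two-option decision)."""
    t_local = task_cycles / local_cps
    t_edge = data_bits / uplink_bps + task_cycles / edge_cps
    options = {"local": time_utility(t_local, soft_deadline, hard_deadline),
               "edge": time_utility(t_edge, soft_deadline, hard_deadline)}
    return max(options, key=options.get), options

choice, utilities = best_offloading_choice(
    task_cycles=2e9, data_bits=8e6, local_cps=1e9, edge_cps=8e9,
    uplink_bps=20e6, soft_deadline=0.5, hard_deadline=1.5)
print(choice, utilities)
```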
{"title":"A deep reinforcement learning assisted task offloading and resource allocation approach towards self-driving object detection","authors":"Lili Nie, Huiqiang Wang, Guangsheng Feng, Jiayu Sun, Hongwu Lv, Hang Cui","doi":"10.1186/s13677-023-00503-w","DOIUrl":"https://doi.org/10.1186/s13677-023-00503-w","url":null,"abstract":"Abstract With the development of communication technology and mobile edge computing (MEC), self-driving has received more and more research interests. However, most object detection tasks for self-driving vehicles are still performed at vehicle terminals, which often requires a trade-off between detection accuracy and speed. To achieve efficient object detection without sacrificing accuracy, we propose an end–edge collaboration object detection approach based on Deep Reinforcement Learning (DRL) with a task prioritization mechanism. We use a time utility function to measure the efficiency of object detection task and aim to provide an online approach to maximize the average sum of the time utilities in all slots. Since this is an NP-hard mixed-integer nonlinear programming (MINLP) problem, we propose an online approach for task offloading and resource allocation based on Deep Reinforcement learning and Piecewise Linearization (DRPL). A deep neural network (DNN) is implemented as a flexible solution for learning offloading strategies based on road traffic conditions and wireless network environment, which can significantly reduce computational complexity. In addition, to accelerate DRPL network convergence, DNN outputs are grouped by in-vehicle cameras to form offloading strategies via permutation. Numerical results show that the DRPL scheme is at least 10% more effective and superior in terms of time utility compared to several representative offloading schemes for various vehicle local computing resource scenarios.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135878616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incentive Aware Computation Resource Sharing and Partition in Pervasive Mobile Cloud
Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00084
Jigang Wen, Yuxiang Chen, Chuda Liu
Cloud computing is a promising technique to overcome the resource limitations of a single mobile device. To relieve the workload of mobile users, computation-intensive tasks can be offloaded to the remote cloud or to a local cloudlet. However, these solutions face some challenges: it is difficult to support data-intensive and delay-sensitive applications in the remote cloud, while local cloudlets often have limited coverage. When neither of these methods is available, another option is to relieve the load of a single device by taking advantage of the resources of surrounding smartphones or other wireless devices. To facilitate the efficient operation of this third option, we propose a novel pervasive mobile cloud framework that provides an incentive mechanism to motivate mobile users to contribute their resources for others to borrow, together with an efficient mechanism for multi-site computation partitioning. More specifically, we formulate the problem as a Stackelberg game and prove that the game has a unique Nash Equilibrium. Based on this unique Nash Equilibrium, we propose an offloading protocol to derive the mobile users’ strategies. Through extensive simulations, we evaluate the performance and validate the theoretical properties of the proposed economy-based incentive mechanism.
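To illustrate the Stackelberg structure (a leader who announces a price and followers who best-respond), here is a minimal numerical sketch with a hypothetical quadratic-cost model: each follower's closed-form best response is anticipated by the leader, which then searches a price grid. The utility functions and constants are assumptions for illustration only and do not reproduce the paper's game.

```python
import numpy as np

# Hypothetical Stackelberg model: a leader announces a unit price p for borrowed
# computation; each follower i with cost coefficient c_i contributes the amount
# x_i that maximizes its own utility  u_i(x) = p*x - c_i*x**2.
def follower_best_response(price, cost_coeffs):
    # Closed form from d(u_i)/dx = 0  =>  x_i = p / (2*c_i)
    return price / (2.0 * cost_coeffs)

def leader_utility(price, cost_coeffs, benefit_per_unit=10.0):
    contributed = follower_best_response(price, cost_coeffs).sum()
    return (benefit_per_unit - price) * contributed

cost_coeffs = np.array([1.0, 2.0, 4.0])       # heterogeneous follower costs
prices = np.linspace(0.01, 10.0, 1000)        # leader's strategy grid
utilities = [leader_utility(p, cost_coeffs) for p in prices]
best_price = prices[int(np.argmax(utilities))]
print("Stackelberg price:", round(best_price, 2),
      "follower contributions:", follower_best_response(best_price, cost_coeffs))
```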
{"title":"Incentive Aware Computation Resource Sharing and Partition in Pervasive Mobile Cloud","authors":"Jigang Wen, Yuxiang Chen, Chuda Liu","doi":"10.1109/CSCloud-EdgeCom58631.2023.00084","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00084","url":null,"abstract":"Cloud computing is a promising technique to conquer the resource limitations of a single mobile device. To relieve the work load of mobile users, computation-intensive tasks are proposed to be offloaded to the remote cloud or local cloudlet. However, these solutions also face some challenges. It is difficult to support data intensive and delay-sensitive applications in the remote cloud, while the local cloudlets often have limited coverage. When both of these methods cannot be supported, another option is to relieve the load of a single device by taking advantage of resources of surrounding smart-phones or other wireless devices. To facilitate the efficient operation of the third option, we propose a novel pervasive mobile cloud framework to provide an incentive mechanism to motivate mobile users to contribute their sources for others to borrow and an efficient mechanism to enable multi-site computation partition. More specifically, we formulate the problem as a Stackelberg game, and prove that there exists a unique Nash Equilibrium for the game. Based on the unique Nash Equilibrium, we propose an offloading protocol to derive the mobile users’ strategies. Through extensive simulations, we evaluate the performance and validate the theoretical properties of the proposed economy-based incentive mechanism.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"63 1","pages":"458-463"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74649457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing the Length Divergence Bias for Textual Matching Models via Alternating Adversarial Training
Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00040
Lantao Zheng, Wenxin Kuang, Qizhuang Liang, Wei Liang, Qiao Hu, Wei Fu, Xiashu Ding, Bijiang Xu, Yupeng Hu
Although deep learning has made remarkable achievements in natural language processing tasks, many researchers have recently shown that models achieve high performance by exploiting statistical biases in datasets. However, once models trained on statistically biased datasets are applied in scenarios where such bias does not exist, their accuracy drops significantly. In this work, we focus on the length divergence bias, which makes language models tend to classify samples with high length divergence as negative and vice versa. We propose a solution that makes the model pay more attention to semantics and remain unaffected by the bias. First, we construct an adversarial test set to magnify the effect of the bias on models. Then, we introduce some novel techniques to demote the length divergence bias. Finally, we conduct experiments on two textual matching corpora, and the results show that our approach effectively improves the generalization and robustness of the model, although the degree of bias of the two corpora is not the same.
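As an illustration of the adversarial test-set idea, the sketch below keeps only text pairs whose gold label contradicts a simple length-divergence heuristic, so that a model relying on length alone would be penalized. The token-count divergence measure, the threshold and the field names are assumptions and not the authors' construction procedure.

```python
def length_divergence(pair):
    """Absolute difference in token counts between the two texts of a pair."""
    a, b = pair["text_a"].split(), pair["text_b"].split()
    return abs(len(a) - len(b))

def build_adversarial_test_set(pairs, divergence_threshold=10):
    """Keep only pairs whose label contradicts the length-divergence heuristic
    (high divergence but matching, or low divergence but non-matching), so a
    model relying on length alone is forced to fail."""
    adversarial = []
    for p in pairs:
        high_div = length_divergence(p) >= divergence_threshold
        if (high_div and p["label"] == 1) or (not high_div and p["label"] == 0):
            adversarial.append(p)
    return adversarial

pairs = [
    {"text_a": "how do I reset my password", "text_b": "password reset steps", "label": 1},
    {"text_a": "short question", "text_b": "a much longer unrelated passage " * 5, "label": 0},
    {"text_a": "long matching question about resetting a forgotten account password today " * 2,
     "text_b": "reset password", "label": 1},
]
print(len(build_adversarial_test_set(pairs)), "adversarial pairs kept")
```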
{"title":"Reducing the Length Divergence Bias for Textual Matching Models via Alternating Adversarial Training","authors":"Lantao Zheng, Wenxin Kuang, Qizhuang Liang, Wei Liang, Qiao Hu, Wei Fu, Xiashu Ding, Bijiang Xu, Yupeng Hu","doi":"10.1109/CSCloud-EdgeCom58631.2023.00040","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00040","url":null,"abstract":"Although deep learning has made remarkable achievements in natural language processing tasks, many researchers have recently indicated that models achieve high performance by exploiting statistical bias in datasets. However, once such models obtained on statistically biased datasets are applied in scenarios where statistical bias does not exist, they show a significant decrease in accuracy. In this work, we focus on the length divergence bias, which makes language models tend to classify samples with high length divergence as negative and vice versa. We propose a solution to make the model pay more attention to semantics and not be affected by bias. First, we propose constructing an adversarial test set to magnify the effect of bias on models. Then, we introduce some novel techniques to demote length divergence bias. Finally, we conduct our experiments on two textual matching corpora, and the results show that our approach effectively improves the generalization and robustness of the model, although the degree of bias of the two corpora is not the same.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"4 1","pages":"186-191"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74171432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly Detection Based on Deep Learning: Insights and Opportunities
Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00015
Huan Zhang, Ru Xie, Kuan-Ching Li, Weihong Huang, Chaoyi Yang, Jingnian Liu
With the advent of 5G/6G and Big Data, networks have become indispensable in people’s lives, and cyber security has become a topic of widespread attention. In cyber security, anomaly detection, a.k.a. outlier detection or novelty detection, is a key technique widely used in financial fraud detection, medical diagnosis, network security, and other areas. As a hot topic, deep learning-based anomaly detection is being studied by more and more researchers. To this end, this article classifies deep learning-based anomaly detection approaches, pointing out the problem addressed, the principle, the advantages and disadvantages, and the application scenarios of each method, and describes possible future opportunities for addressing the remaining challenges.
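For readers unfamiliar with the family of methods such surveys classify, the sketch below shows one representative pattern, reconstruction-based detection with a small autoencoder trained on normal data only, where high reconstruction error flags an anomaly. The synthetic data, network sizes and threshold rule are illustrative assumptions, not taken from the article.

```python
import torch
import torch.nn as nn

# Reconstruction-based detector: an autoencoder trained on normal data only;
# samples with high reconstruction error are flagged as anomalies.
class TinyAutoencoder(nn.Module):
    def __init__(self, dim=16, bottleneck=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
normal = torch.randn(512, 16) * 0.1            # assumed "normal" traffic features
model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                           # short training loop on normal data
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

test = torch.cat([torch.randn(4, 16) * 0.1, torch.randn(4, 16) * 2.0])  # normal + anomalous
errors = ((model(test) - test) ** 2).mean(dim=1)
threshold = errors[:4].max() * 1.5             # naive threshold from normal samples
print("anomaly flags:", (errors > threshold).tolist())
```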
{"title":"Anomaly Detection Based on Deep Learning: Insights and Opportunities","authors":"Huan Zhang, Ru Xie, Kuan-Ching Li, Weihong Huang, Chaoyi Yang, Jingnian Liu","doi":"10.1109/CSCloud-EdgeCom58631.2023.00015","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00015","url":null,"abstract":"With the advent of the 5G/6G and Big Data, the network has become indispensable in people’s lives, and Cyber security has turned a relevant topic that people pay attention to. For Cyber security, anomaly detection, a.k.a. outlier detection or novelty detection, is one of the key points widely used in financial fraud detection, medical diagnosis, network security, and other aspects. As a hot topic, deep learning-based anomaly detection has been studied by more and more researchers. For such an objective, this article aims to classify anomaly detection based on deep learning, pointing out the problem and the principle, advantages, disadvantages, and application scenarios of each method, and describe possible future opportunities to address challenges.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"16 1","pages":"30-36"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73891081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint Task Offloading and Scheduling Algorithm in Vehicular Edge Computing Networks
Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00061
Chongjing Huang, Q. Fu, Chaoliang Wang, Zhaohui Li
The rapid development of in-vehicle intelligent applications brings difficulties to traditional cloud computing in vehicular networks: the long transmission distance between vehicles and cloud centers and the instability of communication links easily lead to high latency and low reliability. Vehicular edge computing (VEC), as a new computing paradigm, can improve vehicles' quality of service by offloading tasks to edge servers with abundant computational resources. This paper studies a task offloading algorithm that efficiently optimizes the delay cost and operating cost in a multi-user, multi-server VEC scenario. The algorithm decides both where computational tasks are executed and in what order they are executed on the servers. We simulate a realistic scenario in which vehicles generate tasks over time and the task set is unknown in advance. The task set is preprocessed using a greedy algorithm, and the offloading decision is further optimized using an optimization algorithm based on simulated annealing and heuristic rules. The simulation results show that, compared with the traditional baseline algorithm, our algorithm effectively improves the task offloading utility of the VEC system.
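To show the simulated-annealing step in concrete terms, here is a minimal sketch that searches over a task-to-server assignment vector with a single-task reassignment move and a geometric cooling schedule. The cost function (completion delay plus a linear operating cost) and all parameters are made-up assumptions, not the paper's model or its greedy preprocessing.

```python
import math
import random

def total_cost(assignment, task_load, server_speed, unit_price):
    """Assumed cost: per-server completion delay (load/speed) plus a linear
    operating cost for the total load placed on each server."""
    load = [0.0] * len(server_speed)
    for task, server in enumerate(assignment):
        load[server] += task_load[task]
    delay = sum(l / s for l, s in zip(load, server_speed))
    operating = sum(l * p for l, p in zip(load, unit_price))
    return delay + operating

def simulated_annealing(task_load, server_speed, unit_price,
                        t_start=10.0, t_end=1e-3, cooling=0.995):
    n_tasks, n_servers = len(task_load), len(server_speed)
    current = [random.randrange(n_servers) for _ in range(n_tasks)]
    best = current[:]
    temp = t_start
    while temp > t_end:
        # Neighbour move: reassign one random task to a random server.
        neighbour = current[:]
        neighbour[random.randrange(n_tasks)] = random.randrange(n_servers)
        delta = (total_cost(neighbour, task_load, server_speed, unit_price)
                 - total_cost(current, task_load, server_speed, unit_price))
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbour
            if total_cost(current, task_load, server_speed, unit_price) < \
               total_cost(best, task_load, server_speed, unit_price):
                best = current[:]
        temp *= cooling
    return best

best = simulated_annealing(task_load=[3, 1, 4, 2, 5],
                           server_speed=[2.0, 4.0], unit_price=[0.1, 0.3])
print("task -> server assignment:", best)
```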
{"title":"Joint Task Offloading and Scheduling Algorithm in Vehicular Edge Computing Networks","authors":"Chongjing Huang, Q. Fu, Chaoliang Wang, Zhaohui Li","doi":"10.1109/CSCloud-EdgeCom58631.2023.00061","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00061","url":null,"abstract":"The rapid development of in-vehicle intelligent applications brings difficulties to traditional cloud computing in vehicular networks. Due to the long transmission distance between vehicles and cloud centers and the instability of communication links easily lead to high latency and low reliability. Vehicle edge computing (VEC), as a new computing paradigm, can improve vehicle quality of service by offloading tasks to edge servers with abundant computational resources. This paper studied a task offloading algorithm that efficiently optimize the delay cost and operating cost in a multi-user, multi-server VEC scenario. The algorithm solves the problem of execution location of computational tasks and execution order on the servers. In this paper, we simulate a real scenario where vehicles generate tasks through time lapse and the set of tasks is unknown in advance. The task set is preprocessed using a greedy algorithm and the offloading decision is further optimized using an optimization algorithm based on simulated annealing algorithm and heuristic rules. The simulation results show that compared with the traditional baseline algorithm, our algorithm effectively improves the task offloading utility of the VEC system.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"1 1","pages":"318-323"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90091349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Placement Strategy of Data-Intensive Workflows in Collaborative Cloud-Edge Environment
Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00045
Yang Liang, Changsong Ding, Zhi-gang Hu
With the continuous development and integration of mobile communication and cloud computing technology, cloud-edge collaboration has emerged as a promising distributed paradigm for running data-intensive workflow applications. How to improve the execution performance of data-intensive workflows has become one of the key issues in the collaborative cloud-edge environment. To address this issue, this paper builds a data placement model with multiple constraints. Taking the deadline and the execution budget as the core constraints, the model is solved by minimizing the data access cost of the workflow across the cloud-edge clusters. Subsequently, an immune genetic-particle swarm hybrid optimization algorithm (IGPSHO) is proposed to find the optimal replica placement scheme. Simulations show that, compared with the classical immune genetic algorithm (IGA) and particle swarm optimization (PSO), IGPSHO has clear advantages in terms of workflow default rate, time-consumption ratio, and average execution cost when the workflow scale is large.
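To illustrate the particle-swarm half of such a hybrid, the sketch below minimizes a made-up data access cost over candidate replica placements by letting continuous particle positions be rounded to node indices. The cost matrix, the rounding trick and the swarm parameters are illustrative assumptions; the paper's IGPSHO additionally incorporates immune genetic operators and the deadline/budget constraints, which are not modelled here.

```python
import random

# Hypothetical inputs: access_cost[d][n] is the cost of serving dataset d
# from node n (edge or cloud); a placement maps each dataset to one node.
access_cost = [[4, 1, 7], [2, 6, 3], [5, 2, 2], [1, 8, 4]]
n_datasets, n_nodes = len(access_cost), len(access_cost[0])

def placement_cost(placement):
    return sum(access_cost[d][n] for d, n in enumerate(placement))

def pso_placement(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Continuous PSO whose positions are rounded to node indices; only the
    swarm half of an IGA+PSO hybrid is sketched here."""
    pos = [[random.uniform(0, n_nodes - 1) for _ in range(n_datasets)] for _ in range(n_particles)]
    vel = [[0.0] * n_datasets for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: placement_cost([int(round(x)) for x in p]))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_datasets):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), n_nodes - 1)
            if placement_cost([int(round(x)) for x in pos[i]]) < \
               placement_cost([int(round(x)) for x in pbest[i]]):
                pbest[i] = pos[i][:]
                if placement_cost([int(round(x)) for x in pbest[i]]) < \
                   placement_cost([int(round(x)) for x in gbest]):
                    gbest = pbest[i][:]
    return [int(round(x)) for x in gbest], placement_cost([int(round(x)) for x in gbest])

print(pso_placement())   # (placement per dataset, total access cost)
```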
{"title":"Data Placement Strategy of Data-Intensive Workflows in Collaborative Cloud-Edge Environment","authors":"Yang Liang, Changsong Ding, Zhi-gang Hu","doi":"10.1109/CSCloud-EdgeCom58631.2023.00045","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00045","url":null,"abstract":"With the continuous development and integration of mobile communication and cloud computing technology, cloud-edge collaboration has emerged as a promising distributed paradigm to solve data-intensive workflow applications. How to improve the execution performance of data-intensive workflows has become one of the key issues in the collaborative cloud-edge environment. To address this issue, this paper built a data placement model with multiple constraints. Taking deadline and execution budget as the core constraints, the model is solved by minimizing the data access cost of workflow in the cloud-edge clusters. Subsequently, an immune genetic-particle swarm hybrid optimization algorithm (IGPSHO) is proposed to find the optimal replica placement scheme. Through simulation, compared with the classical immune genetic algorithm (IGA) and particle swarm optimization (PSO), the IGPSHO has obvious advantages in terms of workflow default rate, time-consuming ratio, and average execution cost when the workflow scale is large.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"27 1","pages":"217-222"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81597575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}