
Journal of Cloud Computing-Advances Systems and Applications: latest publications

Orchestration in the Cloud-to-Things compute continuum: taxonomy, survey and future directions
CAS Tier 3 | Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-27 | DOI: 10.1186/s13677-023-00516-5
Amjad Ullah, Tamas Kiss, József Kovács, Francesco Tusa, James Deslauriers, Huseyin Dagdeviren, Resmi Arjun, Hamed Hamzeh
Abstract IoT systems are becoming an essential part of our environment. Smart cities, smart manufacturing, augmented reality, and self-driving cars are just some examples of the wide range of domains where the applicability of such systems has been increasing rapidly. These IoT use cases often require simultaneous access to geographically distributed arrays of sensors and to heterogeneous remote, local, and multi-cloud computational resources. This gives rise to the extended Cloud-to-Things computing paradigm. The emergence of this new paradigm raises the essential need to extend the orchestration requirements (i.e., the automated deployment and run-time management) of applications from the centralised cloud-only environment to the entire spectrum of resources in the Cloud-to-Things continuum. To cope with this requirement, the development of orchestration systems has attracted considerable attention in both industry and academia over the last few years. This paper gathers the research conducted on orchestration for the Cloud-to-Things continuum and proposes a detailed taxonomy, which is then used to critically review the landscape of existing research. We finally discuss the key challenges that require further attention and present a conceptual framework based on the conducted analysis.
Citations: 0
Intelligent intrusion detection framework for multi-clouds – IoT environment using swarm-based deep learning classifier
CAS Tier 3 | Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-22 | DOI: 10.1186/s13677-023-00509-4
Syed Mohamed Thameem Nizamudeen
Abstract In the current era, a tremendous volume of data has been generated by web technologies. The associations between different devices and services have also been explored to use recent technologies wisely and widely. Due to restrictions on the available resources, the chance of a security violation is increasing rapidly on constrained devices. IoT backends rely on multi-cloud infrastructure to extend public services with better scalability and reliability. Several users might access the multi-cloud resources, which can lead to data threats while handling user requests for IoT services. This poses the new challenge of proposing new functional elements and security schemes. This paper introduces an intelligent Intrusion Detection Framework (IDF) to detect network- and application-based attacks. The proposed framework has three phases: data pre-processing, feature selection, and classification. Initially, the collected datasets are pre-processed using an Integer-Grading Normalization (I-GN) technique that ensures a fair-scaled data transformation process. Secondly, an Opposition-based Learning Rat-Inspired Optimizer (OBL-RIO) is designed for the feature selection phase: the progressive nature of the rats chooses the significant features, and the fittest value ensures the stability of the features selected by OBL-RIO. Finally, a 2D-Array-based Convolutional Neural Network (2D-ACNN) is proposed as the binary-class classifier. The input features are preserved in a 2D-array model to be processed by the complex layers, and the network detects normal or abnormal traffic. The proposed framework is trained and tested on NetFlow-based datasets and yields 95.20% accuracy, a 2.5% false positive rate, and a 97.24% detection rate.
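The abstract does not spell out the I-GN transformation. A minimal sketch, assuming it behaves like a per-feature min-max scaling bucketed into a common integer grade range (the function name, grade count, and exact rule are illustrative, not the paper's definition):

```python
def integer_grading_normalize(rows, grades=10):
    """Column-wise min-max scale, then bucket into integer grades 0..grades-1,
    so every feature contributes on the same scale regardless of its range."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    spans = [(max(c) - m) or 1.0 for c, m in zip(cols, mins)]  # guard constant columns
    graded = []
    for row in rows:
        graded.append([min(int((v - m) / s * grades), grades - 1)
                       for v, m, s in zip(row, mins, spans)])
    return graded
```

Features with very different ranges (e.g. packet counts vs. byte volumes) then land in the same 0..9 grade scale before classification.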
Citations: 2
Simcan2Cloud: a discrete-event-based simulator for modelling and simulating cloud computing infrastructures
CAS Tier 3 | Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-18 | DOI: 10.1186/s13677-023-00511-w
Pablo C. Cañizares, Alberto Núñez, Adrián Bernal, M. Emilia Cambronero, Adam Barker
Abstract Cloud computing is an evolving paradigm whose adoption has been increasing over the last few years. This has led to the growth of the cloud computing market, together with fierce competition for the leading market share and an increase in the number of cloud service providers. Novel techniques are continuously being proposed to increase the cloud service provider's profitability. However, only techniques that are proven not to hinder the service agreements are considered for production clouds. Analysing the expected behaviour and performance of a cloud infrastructure is challenging, as the repeatability and reproducibility of experiments on these systems are made difficult by the large number of users concurrently accessing the infrastructure. To this must be added the complications of using different provisioning policies, managing several workloads, and applying different resource configurations. To alleviate these issues, we present Simcan2Cloud, a discrete-event-based simulator for modelling and simulating cloud computing environments. Simcan2Cloud focuses on modelling and simulating the behaviour of the cloud provider at a high level of detail, where both the cloud infrastructure and the interactions of users with the cloud are integrated into the simulated scenarios. For this purpose, Simcan2Cloud supports different resource allocation policies, service level agreements (SLAs), and an intuitive and complete API for including new management policies. Finally, a thorough experimental study measuring the suitability and applicability of Simcan2Cloud, using both real-world traces and synthetic workloads, is presented.
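Simcan2Cloud itself models full cloud infrastructures; the discrete-event core that such simulators are built on can be sketched as a time-ordered event heap. All names below are illustrative, not Simcan2Cloud's actual API:

```python
import heapq

class Simulator:
    """Minimal discrete-event engine: pop the earliest event, advance the
    clock to its timestamp, run its callback (which may schedule more)."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []   # heap of (fire_time, seq, callback)
        self._seq = 0      # tie-breaker so equal-time events keep insertion order

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.clock, _, callback = heapq.heappop(self._queue)
            callback(self)

# Toy scenario: a VM request arrives at t=1 and is provisioned 2s later.
log = []

def arrive(sim):
    log.append(('arrive', sim.clock))
    sim.schedule(2.0, provision)

def provision(sim):
    log.append(('provisioned', sim.clock))

sim = Simulator()
sim.schedule(1.0, arrive)
sim.run()
```

Because simulated time jumps directly from event to event, experiments are repeatable: the same event schedule always yields the same trace, which is exactly the reproducibility property the abstract says is hard to obtain on a live infrastructure.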
Citations: 0
Stateless Q-learning algorithm for service caching in resource constrained edge environment
CAS Tier 3 | Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-13 | DOI: 10.1186/s13677-023-00506-7
Binbin Huang, Ziqi Ran, Dongjin Yu, Yuanyuan Xiang, Xiaoying Shi, Zhongjin Li, Zhengqian Xu
Abstract In a resource-constrained edge environment, multiple service providers can compete to rent the limited resources and cache their service instances on edge servers close to end users, thereby significantly reducing service delay and improving quality of service (QoS). However, renting the resources of different edge servers to deploy service instances can incur different resource usage costs and service delays. To make full use of the limited resources of the edge servers and further reduce resource usage costs, multiple service providers on an edge server can form a coalition and share its limited resources. In this paper, we investigate the service caching problem of multiple service providers in a resource-constrained edge environment and propose an independent learners-based service caching scheme (ILSCS), which adopts stateless Q-learning to learn an optimal service caching scheme. To verify the effectiveness of ILSCS, we implement four baseline algorithms (COALITION, RANDOM, MDU, and MCS) and compare the total collaboration cost and service latency of ILSCS with those of the four baselines under different experimental parameter settings. The extensive experimental results show that the ILSCS scheme achieves lower total collaboration cost and service latency.
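Stateless Q-learning keeps one Q-value per action with no state variable in the update, so each independent learner behaves like a multi-armed bandit. A minimal sketch of that update rule, with a hypothetical reward function standing in for the paper's cost-and-latency signal:

```python
import random

def stateless_q_learning(reward_fn, actions, episodes=500,
                         alpha=0.1, epsilon=0.1, seed=0):
    """One Q-value per action; epsilon-greedy selection; the TD update
    Q(a) += alpha * (r - Q(a)) has no successor-state term."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        if rng.random() < epsilon:       # explore
            a = rng.choice(actions)
        else:                            # exploit current estimates
            a = max(q, key=q.get)
        r = reward_fn(a)
        q[a] += alpha * (r - q[a])       # stateless update
    return q

# Toy example: caching on edge server 1 pays off more than on server 0.
q = stateless_q_learning(lambda a: 1.0 if a == 1 else 0.2, actions=[0, 1])
```

In the paper's setting each service provider would run such a learner over its own caching actions; the coalition effect enters through the reward, which here is only a toy stand-in.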
Citations: 0
A deep reinforcement learning assisted task offloading and resource allocation approach towards self-driving object detection
CAS Tier 3 | Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-09-12 | DOI: 10.1186/s13677-023-00503-w
Lili Nie, Huiqiang Wang, Guangsheng Feng, Jiayu Sun, Hongwu Lv, Hang Cui
Abstract With the development of communication technology and mobile edge computing (MEC), self-driving has received increasing research interest. However, most object detection tasks for self-driving vehicles are still performed at vehicle terminals, which often requires a trade-off between detection accuracy and speed. To achieve efficient object detection without sacrificing accuracy, we propose an end-edge collaboration object detection approach based on Deep Reinforcement Learning (DRL) with a task prioritization mechanism. We use a time utility function to measure the efficiency of an object detection task and aim to provide an online approach that maximizes the average sum of time utilities across all slots. Since this is an NP-hard mixed-integer nonlinear programming (MINLP) problem, we propose an online approach for task offloading and resource allocation based on Deep Reinforcement learning and Piecewise Linearization (DRPL). A deep neural network (DNN) is implemented as a flexible solution for learning offloading strategies based on road traffic conditions and the wireless network environment, which can significantly reduce computational complexity. In addition, to accelerate DRPL network convergence, DNN outputs are grouped by in-vehicle cameras to form offloading strategies via permutation. Numerical results show that the DRPL scheme is at least 10% more effective in terms of time utility compared to several representative offloading schemes across various vehicle local computing resource scenarios.
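The abstract does not give the exact form of its time utility function. One common illustrative choice (an assumption, not the paper's definition) is a value that decays linearly from task release to its deadline, with the objective averaging the summed utilities over slots:

```python
def time_utility(completion_time, deadline, max_utility=1.0):
    """Hypothetical time utility: full value for instant completion,
    decaying linearly to zero at the deadline, zero afterwards."""
    if completion_time >= deadline:
        return 0.0
    return max_utility * (1.0 - completion_time / deadline)

def average_sum_utility(slots):
    """Objective sketch: average over slots of the summed utilities of the
    detection tasks finished in each slot; slots is a list of lists of
    (completion_time, deadline) pairs."""
    sums = [sum(time_utility(t, d) for t, d in slot) for slot in slots]
    return sum(sums) / len(sums)
```

Under such a utility, an offloading policy is rewarded both for finishing detections and for finishing them early, which captures the accuracy-versus-speed trade-off the abstract describes.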
Citations: 0
Incentive Aware Computation Resource Sharing and Partition in Pervasive Mobile Cloud
IF 4.0 | CAS Tier 3 | Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00084
Jigang Wen, Yuxiang Chen, Chuda Liu
Cloud computing is a promising technique to overcome the resource limitations of a single mobile device. To relieve the workload of mobile users, computation-intensive tasks can be offloaded to the remote cloud or to a local cloudlet. However, these solutions also face challenges: it is difficult to support data-intensive and delay-sensitive applications in the remote cloud, while local cloudlets often have limited coverage. When neither method is available, another option is to relieve the load of a single device by taking advantage of the resources of surrounding smartphones or other wireless devices. To facilitate the efficient operation of this third option, we propose a novel pervasive mobile cloud framework that provides an incentive mechanism to motivate mobile users to contribute their resources for others to borrow, together with an efficient mechanism to enable multi-site computation partition. More specifically, we formulate the problem as a Stackelberg game and prove that the game has a unique Nash Equilibrium. Based on this unique Nash Equilibrium, we propose an offloading protocol to derive the mobile users' strategies. Through extensive simulations, we evaluate the performance and validate the theoretical properties of the proposed economy-based incentive mechanism.
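In a Stackelberg game the leader moves first and anticipates the followers' best responses; solving it is a backward-induction step. A toy sketch with one follower and a closed-form response (the linear demand model and the price grid are assumptions for illustration, not the paper's formulation):

```python
def follower_best_response(price, benefit=1.0):
    """Hypothetical follower: share compute resources until the marginal
    benefit of borrowing equals the posted price."""
    return max(0.0, benefit - price)

def leader_best_price(prices):
    """Leader's backward induction: for each candidate price, predict the
    follower's response, then pick the revenue-maximising price."""
    return max(prices, key=lambda p: p * follower_best_response(p))

# Grid-search the leader's price over [0, 1] in steps of 0.01.
best = leader_best_price([i / 100 for i in range(101)])
```

With demand 1 - p the revenue p(1 - p) peaks at p = 0.5, and the resulting (price, response) pair is the game's equilibrium point in this toy model.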
Citations: 0
Reducing the Length Divergence Bias for Textual Matching Models via Alternating Adversarial Training
IF 4.0 | CAS Tier 3 | Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00040
Lantao Zheng, Wenxin Kuang, Qizhuang Liang, Wei Liang, Qiao Hu, Wei Fu, Xiashu Ding, Bijiang Xu, Yupeng Hu
Although deep learning has made remarkable achievements in natural language processing tasks, many researchers have recently indicated that models achieve high performance by exploiting statistical biases in datasets. However, once such models, obtained on statistically biased datasets, are applied in scenarios where the statistical bias does not exist, they show a significant decrease in accuracy. In this work, we focus on the length divergence bias, which makes language models tend to classify sample pairs with high length divergence as negative, and vice versa. We propose a solution that makes the model pay more attention to semantics and remain unaffected by the bias. First, we construct an adversarial test set to magnify the effect of the bias on models. Then, we introduce novel techniques to mitigate the length divergence bias. Finally, we conduct experiments on two textual matching corpora; the results show that our approach effectively improves the generalization and robustness of the model, although the degrees of bias of the two corpora are not the same.
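An adversarial test set for this bias can be built by keeping exactly the pairs where a length heuristic points the wrong way, so a model that reads length instead of semantics should perform badly on it. A sketch (the divergence measure, selection rule, and threshold are illustrative; the paper's exact construction may differ):

```python
def length_divergence(a, b):
    """Relative gap between the token counts of two texts, in [0, 1]."""
    la, lb = len(a.split()), len(b.split())
    return abs(la - lb) / max(la, lb)

def adversarial_split(pairs, threshold=0.5):
    """Keep pairs where the length heuristic contradicts the gold label:
    matching pairs (label 1) with HIGH divergence, and non-matching pairs
    (label 0) with LOW divergence."""
    adv = []
    for a, b, label in pairs:
        d = length_divergence(a, b)
        if (label == 1 and d >= threshold) or (label == 0 and d < threshold):
            adv.append((a, b, label))
    return adv
```

Evaluating a textual matching model on this subset magnifies the bias: accuracy close to its regular-test level indicates the model relies on semantics rather than length.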
引用次数: 0
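The adversarial test set idea can be illustrated with a toy sketch. This is not the paper's actual construction; the tokenizer (whitespace split), the `threshold` parameter, and the sample pairs are all invented for illustration. The idea: keep matching pairs whose lengths diverge strongly and non-matching pairs whose lengths barely diverge, so a model that leans on length divergence is systematically wrong.

```python
def length_divergence(a: str, b: str) -> int:
    """Absolute difference in whitespace-token counts of a sentence pair."""
    return abs(len(a.split()) - len(b.split()))

def adversarial_split(pairs, threshold):
    """Keep matching pairs (label 1) with HIGH length divergence and
    non-matching pairs (label 0) with LOW divergence, inverting the
    usual direction of the bias."""
    kept = []
    for a, b, label in pairs:
        d = length_divergence(a, b)
        if (label == 1 and d >= threshold) or (label == 0 and d < threshold):
            kept.append((a, b, label))
    return kept

pairs = [
    ("the cat sat", "a cat was sitting on the mat over there today", 1),
    ("rain tomorrow", "rain expected", 0),
    ("short query", "short reply", 1),  # dropped: matching pair, but lengths agree
]
print(adversarial_split(pairs, threshold=3))  # keeps only the first two pairs
```

On such a split, accuracy directly rewards semantic matching rather than the length shortcut.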
Anomaly Detection Based on Deep Learning: Insights and Opportunities
IF 4 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00015 | Pages: 30-36
Huan Zhang, Ru Xie, Kuan-Ching Li, Weihong Huang, Chaoyi Yang, Jingnian Liu
With the advent of 5G/6G and Big Data, networks have become indispensable in people's lives, and cyber security has become a topic of widespread attention. Within cyber security, anomaly detection, also known as outlier detection or novelty detection, is a key technique widely used in financial fraud detection, medical diagnosis, network security, and other areas. As a hot topic, deep learning-based anomaly detection has been studied by more and more researchers. To that end, this article classifies deep learning-based anomaly detection methods, pointing out the problem each method addresses along with its principle, advantages, disadvantages, and application scenarios, and describes possible future opportunities for addressing open challenges.
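As a minimal illustration of one family such surveys cover, reconstruction-based detectors, the sketch below uses PCA as a dependency-light stand-in for an autoencoder's encoder/decoder bottleneck: fit a low-rank model on normal data, then flag points with large reconstruction error. The data, latent dimension, and 95th-percentile threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "normal" data with correlated features.
normal = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))

# Fit a low-rank linear model on normal data (PCA plays the role of the
# encoder/decoder bottleneck in an autoencoder).
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # 2-dimensional "latent" space

def reconstruction_error(x):
    z = (x - mean) @ components.T   # encode
    x_hat = z @ components + mean   # decode
    return float(np.sum((x - x_hat) ** 2))

# Flag points whose error exceeds the 95th percentile of errors on normal data.
threshold = np.quantile([reconstruction_error(x) for x in normal], 0.95)
outlier = np.full(5, 25.0)          # far from the training distribution
print(reconstruction_error(outlier) > threshold)
```

Deep variants replace the linear encode/decode steps with neural networks but keep the same decision rule: large reconstruction error means anomalous.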
{"title":"Anomaly Detection Based on Deep Learning: Insights and Opportunities","authors":"Huan Zhang, Ru Xie, Kuan-Ching Li, Weihong Huang, Chaoyi Yang, Jingnian Liu","doi":"10.1109/CSCloud-EdgeCom58631.2023.00015","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00015","url":null,"abstract":"With the advent of the 5G/6G and Big Data, the network has become indispensable in people’s lives, and Cyber security has turned a relevant topic that people pay attention to. For Cyber security, anomaly detection, a.k.a. outlier detection or novelty detection, is one of the key points widely used in financial fraud detection, medical diagnosis, network security, and other aspects. As a hot topic, deep learning-based anomaly detection has been studied by more and more researchers. For such an objective, this article aims to classify anomaly detection based on deep learning, pointing out the problem and the principle, advantages, disadvantages, and application scenarios of each method, and describe possible future opportunities to address challenges.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"16 1","pages":"30-36"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73891081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Joint Task Offloading and Scheduling Algorithm in Vehicular Edge Computing Networks
IF 4 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00061 | Pages: 318-323
Chongjing Huang, Q. Fu, Chaoliang Wang, Zhaohui Li
The rapid development of in-vehicle intelligent applications brings difficulties to traditional cloud computing in vehicular networks: the long transmission distance between vehicles and cloud centers and the instability of communication links easily lead to high latency and low reliability. Vehicular edge computing (VEC), as a new computing paradigm, can improve vehicles' quality of service by offloading tasks to edge servers with abundant computational resources. This paper studies a task offloading algorithm that efficiently optimizes the delay cost and operating cost in a multi-user, multi-server VEC scenario. The algorithm decides the execution location of computational tasks and their execution order on the servers. We simulate a realistic scenario in which vehicles generate tasks over time and the set of tasks is not known in advance. The task set is preprocessed using a greedy algorithm, and the offloading decision is further refined using an optimization algorithm based on simulated annealing and heuristic rules. Simulation results show that, compared with a traditional baseline algorithm, our algorithm effectively improves the task offloading utility of the VEC system.
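The simulated-annealing component can be sketched as below. This is not the paper's algorithm (which also involves greedy preprocessing and heuristic rules); the task sizes, server speeds, prices, objective weights, and cooling schedule are all invented for illustration. The sketch anneals over task-to-server assignments, trading off delay against operating cost.

```python
import math
import random

random.seed(1)
task_size = [4.0, 2.0, 6.0, 3.0]   # required CPU cycles (arbitrary units)
server_speed = [5.0, 2.0]          # cycles per unit time
server_price = [3.0, 1.0]          # monetary cost per cycle
W_DELAY, W_COST = 1.0, 0.5         # objective weights

def total_cost(assign):
    """Weighted sum of delay and operating cost for a task->server assignment.
    Tasks on the same server are assumed to run sequentially."""
    load = [0.0] * len(server_speed)
    money = 0.0
    for t, s in enumerate(assign):
        load[s] += task_size[t]
        money += task_size[t] * server_price[s]
    delay = sum(l / server_speed[s] for s, l in enumerate(load))
    return W_DELAY * delay + W_COST * money

def anneal(steps=2000, temp=10.0, cooling=0.995):
    assign = [random.randrange(len(server_speed)) for _ in task_size]
    cur_cost = total_cost(assign)
    best, best_cost = assign[:], cur_cost
    for _ in range(steps):
        # Neighbor: move one random task to a random server.
        cand = assign[:]
        cand[random.randrange(len(cand))] = random.randrange(len(server_speed))
        c = total_cost(cand)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature cools.
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / temp):
            assign, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        temp *= cooling
    return best, best_cost

best, cost = anneal()
print(best, round(cost, 2))
```

The accept-worse-moves rule is what lets annealing escape local minima that a pure greedy search would get stuck in.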
{"title":"Joint Task Offloading and Scheduling Algorithm in Vehicular Edge Computing Networks","authors":"Chongjing Huang, Q. Fu, Chaoliang Wang, Zhaohui Li","doi":"10.1109/CSCloud-EdgeCom58631.2023.00061","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00061","url":null,"abstract":"The rapid development of in-vehicle intelligent applications brings difficulties to traditional cloud computing in vehicular networks. Due to the long transmission distance between vehicles and cloud centers and the instability of communication links easily lead to high latency and low reliability. Vehicle edge computing (VEC), as a new computing paradigm, can improve vehicle quality of service by offloading tasks to edge servers with abundant computational resources. This paper studied a task offloading algorithm that efficiently optimize the delay cost and operating cost in a multi-user, multi-server VEC scenario. The algorithm solves the problem of execution location of computational tasks and execution order on the servers. In this paper, we simulate a real scenario where vehicles generate tasks through time lapse and the set of tasks is unknown in advance. The task set is preprocessed using a greedy algorithm and the offloading decision is further optimized using an optimization algorithm based on simulated annealing algorithm and heuristic rules. 
The simulation results show that compared with the traditional baseline algorithm, our algorithm effectively improves the task offloading utility of the VEC system.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"1 1","pages":"318-323"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90091349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data Placement Strategy of Data-Intensive Workflows in Collaborative Cloud-Edge Environment
IF 4 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-01 | DOI: 10.1109/CSCloud-EdgeCom58631.2023.00045 | Pages: 217-222
Yang Liang, Changsong Ding, Zhi-gang Hu
With the continuous development and integration of mobile communication and cloud computing technology, cloud-edge collaboration has emerged as a promising distributed paradigm for data-intensive workflow applications. How to improve the execution performance of data-intensive workflows has become one of the key issues in the collaborative cloud-edge environment. To address this issue, this paper builds a data placement model with multiple constraints. Taking the deadline and execution budget as the core constraints, the model is solved by minimizing the data access cost of workflows in cloud-edge clusters. An immune genetic-particle swarm hybrid optimization algorithm (IGPSHO) is then proposed to find the optimal replica placement scheme. Simulations show that, compared with the classical immune genetic algorithm (IGA) and particle swarm optimization (PSO), IGPSHO has clear advantages in workflow default rate, time-consumption ratio, and average execution cost when the workflow scale is large.
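A heavily simplified stand-in for this kind of placement search is sketched below: a plain genetic algorithm minimizing total data-access cost under per-node capacity constraints. The immune and particle-swarm components of IGPSHO are omitted, and all access costs, capacities, and GA hyperparameters are invented for illustration.

```python
import random

random.seed(7)
N_DATASETS, NODES = 6, 3
# access_cost[d][n]: cost for workflow tasks to read dataset d from node n.
access_cost = [[random.uniform(1, 10) for _ in range(NODES)] for _ in range(N_DATASETS)]
capacity = [3, 3, 2]   # max datasets each edge/cloud node can host

def fitness(placement):
    """Total access cost, with a large penalty for capacity violations
    (penalizing instead of repairing keeps the operators simple)."""
    cost = sum(access_cost[d][n] for d, n in enumerate(placement))
    for n in range(NODES):
        over = placement.count(n) - capacity[n]
        if over > 0:
            cost += 100 * over
    return cost

def evolve(pop_size=30, gens=100):
    pop = [[random.randrange(NODES) for _ in range(N_DATASETS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_DATASETS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation
                child[random.randrange(N_DATASETS)] = random.randrange(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

IGPSHO-style hybrids layer additional mechanisms on top of this loop, such as immune-inspired diversity maintenance and particle-swarm velocity updates, to avoid the premature convergence a plain GA can suffer from.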
{"title":"Data Placement Strategy of Data-Intensive Workflows in Collaborative Cloud-Edge Environment","authors":"Yang Liang, Changsong Ding, Zhi-gang Hu","doi":"10.1109/CSCloud-EdgeCom58631.2023.00045","DOIUrl":"https://doi.org/10.1109/CSCloud-EdgeCom58631.2023.00045","url":null,"abstract":"With the continuous development and integration of mobile communication and cloud computing technology, cloud-edge collaboration has emerged as a promising distributed paradigm to solve data-intensive workflow applications. How to improve the execution performance of data-intensive workflows has become one of the key issues in the collaborative cloud-edge environment. To address this issue, this paper built a data placement model with multiple constraints. Taking deadline and execution budget as the core constraints, the model is solved by minimizing the data access cost of workflow in the cloud-edge clusters. Subsequently, an immune genetic-particle swarm hybrid optimization algorithm (IGPSHO) is proposed to find the optimal replica placement scheme. Through simulation, compared with the classical immune genetic algorithm (IGA) and particle swarm optimization (PSO), the IGPSHO has obvious advantages in terms of workflow default rate, time-consuming ratio, and average execution cost when the workflow scale is large.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"27 1","pages":"217-222"},"PeriodicalIF":4.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81597575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0