
Journal of Grid Computing: Latest Publications

Evaluation of Storage Placement in Computing Continuum for a Robotic Application
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-04 | DOI: 10.1007/s10723-024-09758-2
Zeinab Bakhshi, Guillermo Rodriguez-Navas, Hans Hansson, Radu Prodan

This paper analyzes the timing performance of a persistent storage solution designed for distributed container-based architectures in industrial control applications. The timing analysis is conducted with an in-house simulator that mirrors our testbed specifications. The storage ensures data availability and consistency even in the presence of faults. The analysis considers four aspects: (1) placement strategy, (2) design options, (3) data size, and (4) evaluation under faulty conditions. Experimental results that account for the timing constraints of industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Comparison results also reveal that, while the method may underperform current centralized solutions in fault-free conditions, it outperforms them in failure scenarios. Moreover, the evaluation method used is applicable to assessing other container-based critical applications with timing constraints that require persistent storage.
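A toy illustration of this style of timing evaluation (not the authors' in-house simulator; the latencies, fault probabilities, and failover costs below are invented for illustration) can compare deadline-miss ratios of a centralized placement against a replicated placement:

```python
import random

def simulate_requests(n, base_latency_ms, fault_prob, failover_penalty_ms, seed=0):
    """Draw per-request storage latencies; a fault adds a failover penalty."""
    rng = random.Random(seed)
    latencies = []
    for _ in range(n):
        latency = base_latency_ms * rng.uniform(0.8, 1.2)  # jitter around the baseline
        if rng.random() < fault_prob:
            latency += failover_penalty_ms                  # recovery cost on failure
        latencies.append(latency)
    return latencies

def deadline_miss_ratio(latencies, deadline_ms):
    return sum(l > deadline_ms for l in latencies) / len(latencies)

# Centralized storage: fast when healthy, but an expensive failover on a fault.
central = simulate_requests(10_000, base_latency_ms=5, fault_prob=0.02, failover_penalty_ms=200)
# Replicated placement: slower baseline, but a nearby replica keeps failover cheap.
replicated = simulate_requests(10_000, base_latency_ms=8, fault_prob=0.02, failover_penalty_ms=10)
```

Under these assumed numbers the comparison has the same shape as the abstract's finding: the centralized variant is faster fault-free but misses a 50 ms deadline whenever a fault triggers failover, while the replicated variant never does.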

Citations: 0
An Effective Prediction of Resource Using Machine Learning in Edge Environments for the Smart Healthcare Industry
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-05-30 | DOI: 10.1007/s10723-024-09768-0
Guangyu Xu, Mingde Xu

Recent advances in computing and digital transformation enable smart healthcare systems that predict diseases at an early stage. In healthcare services, Internet of Things (IoT) based models play a vital role in enhancing data processing and detection. As the IoT grows, processing its data requires more space, and transferring patient reports consumes considerable time and energy, causing high latency and energy consumption. Edge computing addresses this: data is analysed in the edge layer to improve utilization. This paper proposes resource-allocation and prediction models using IoT and edge computing that are suitable for healthcare applications. The proposed system consists of three modules: data preprocessing using filtering approaches, resource allocation using a Deep Q-network, and a prediction phase using an optimised deep learning model, DBN-LSTM, tuned with frog-leap optimization. The deep learning model is trained on a health dataset to predict the target field. It has been tested on sensed data from the IoT layer, and the predicted patient health status is used to take appropriate actions. With timely prediction on edge devices, doctors and patients can conveniently take necessary actions. The primary objective of the system is to secure low latency by improving quality of service (QoS) metrics such as makespan, ARU, LBL, TAT, and accuracy. Deep reinforcement learning is employed owing to its wide acceptance for resource allocation. Compared to state-of-the-art approaches, the proposed system reduces makespan by increasing average resource utilization and load balancing, making it suitable for accurate real-time analysis of patient health status.
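As a much-simplified stand-in for the paper's Deep Q-network allocator (a tabular Q update rather than a neural network; the load levels, reward shape, and node count are illustrative assumptions), the core value-learning step of reinforcement-learning-based resource allocation looks like:

```python
import random

def train_allocator(episodes=20_000, n_nodes=3, alpha=0.3, seed=1):
    """Learn Q(state, action): state = per-node load levels, action = node choice.
    The reward favors sending work to the least-loaded node."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state = tuple(rng.randrange(4) for _ in range(n_nodes))  # load level 0..3 per node
        action = rng.randrange(n_nodes)                          # uniform exploration
        reward = -state[action]                                  # busy node => lower reward
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward - old)        # incremental Q update
    return Q

def allocate(Q, state, n_nodes=3):
    """Greedy policy: route the task to the node with the highest learned value."""
    return max(range(n_nodes), key=lambda a: Q.get((state, a), 0.0))
```

After training, the greedy policy sends work to the least-loaded node; a Deep Q-network replaces the lookup table with a neural network so the same idea scales to continuous, high-dimensional states.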

Citations: 0
A Hybrid Discrete Grey Wolf Optimization Algorithm Imbalance-ness Aware for Solving Two-dimensional Bin-packing Problems
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-05-10 | DOI: 10.1007/s10723-024-09761-7
Saeed Kosari, Mirsaeid Hosseini Shirvani, Navid Khaledian, Danial Javaheri

Across industries, miscellaneous applications require multi-dimensional resources, needing all resource dimensions at the same time. Since resources are typically scarce, expensive, or polluting, efficient resource allocation is a very favorable approach to reducing overall cost. On the other hand, applications' requirements on the different resource dimensions vary, so allocations often suffer a high rate of wastage owing to the resource skewness phenomenon. For instance, micro-service allocation in Internet of Things (IoT) applications and Virtual Machine Placement (VMP) in a cloud context are challenging tasks because they require all resource dimensions, such as CPU and memory bandwidth, in diverse and imbalanced proportions, so inefficient allocation raises issues. In the special case studied here, the two-dimensional resource allocation of distributed applications is modeled as a two-dimensional bin-packing problem, which is NP-hard. Several approaches have been proposed in the literature, but most are unaware of skewness and dimensional imbalance in the list of requested resources, which incurs additional cost. To solve this combinatorial problem, a novel hybrid discrete grey wolf optimization algorithm (HD-GWO) is presented. It combines strong global search operators with several novel walking-around procedures, each aware of resource-dimensional skewness, and explores the discrete search space with efficient permutations. HD-GWO was verified under miscellaneous conditions covering different correlation coefficients (CC) between resource dimensions. Simulation results show that HD-GWO significantly outperforms other state-of-the-art algorithms on the relevant evaluation metrics while scaling well.
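The flavor of the underlying problem can be seen in a plain first-fit-decreasing baseline for two-dimensional (CPU, memory) vector bin packing; this is only a classical heuristic shown for contrast, not the paper's HD-GWO algorithm, and sorting by the larger (more skewed) dimension is an illustrative choice:

```python
def first_fit_2d(items, cap=(1.0, 1.0)):
    """Pack (cpu, mem) demand vectors into bins of capacity `cap`,
    placing the largest/most skewed items first (first-fit decreasing)."""
    bins = []  # each bin tracks [cpu_used, mem_used]
    for cpu, mem in sorted(items, key=max, reverse=True):
        for b in bins:
            if b[0] + cpu <= cap[0] and b[1] + mem <= cap[1]:
                b[0] += cpu
                b[1] += mem
                break
        else:  # no open bin fits: open a new one
            bins.append([cpu, mem])
    return bins
```

Skew-complementary items (a CPU-heavy demand paired with a memory-heavy one) can share a bin; failing to exploit that pairing is exactly the wastage a skewness-aware search targets.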

Citations: 0
Enabling Configurable Workflows in Smart Environments with Knowledge-based Process Fragment Reuse
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-05-03 | DOI: 10.1007/s10723-024-09763-5
Mouhamed Gaith Ayadi, Haithem Mezni

In today’s smart environments, the service-ization of various resources has produced a tremendous number of IoT- and cloud-based smart services. Thanks to the pivotal role of pillar paradigms such as edge/cloud computing, the Internet of Things, and business process management, it is now possible to combine and translate these service-like resources into configurable workflows that cope with users’ complex needs. Examples include treatment workflows in smart healthcare, delivery plans in drone-based missions, and transportation plans in smart urban networks. Rather than composing atomic services to obtain these workflows, reusing existing process fragments has several advantages, mainly fast, secure, and configurable composition. However, reuse of smart process fragments has not yet been addressed in the context of smart environments. In addition, existing solutions in smart environments suffer from the complexity (e.g., multi-modal transportation in smart mobility) and privacy issues caused by the heterogeneity (e.g., package delivery in the smart economy) of aggregated services. Moreover, these services may conflict in specific domains (e.g., medication/treatment workflows in smart healthcare) and may affect user experience. To solve these issues, the present paper aims to accelerate the generation of configurable workflows with respect to users’ requirements and the specificity of their smart environment. We exploit software-reuse principles to map each sub-request to smart process fragments, which we combine using the Cocke-Kasami-Younger (CKY) method to obtain the suitable workflow. This contribution is preceded by knowledge-graph modeling of smart environments in terms of available services, process fragments, and their dependencies. The resulting information network is then managed with a graph representation learning method to facilitate processing and the composition of high-quality smart services. Experimental results on a real-world dataset demonstrate the effectiveness of our approach compared to existing solutions.
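A minimal sketch of CKY-style dynamic programming over process fragments (the fragment names, sub-requests, and composition rules below are invented for illustration; the paper's method additionally draws on a knowledge graph of services):

```python
from itertools import product

def cky_compose(sub_requests, fragments, rules):
    """table[i][j] holds the fragment types able to serve sub-requests i..j;
    adjacent spans merge via `rules`, which maps a type pair to a composite type."""
    n = len(sub_requests)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, req in enumerate(sub_requests):  # length-1 spans: direct fragment matches
        table[i][i] = {f for f, served in fragments.items() if req in served}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # try every split point, as in CKY parsing
                for left, right in product(table[i][k], table[k + 1][j]):
                    if (left, right) in rules:
                        table[i][j].add(rules[(left, right)])
    return table[0][n - 1]  # composite types covering the whole request
```

A request split into "diagnose", "treat", "monitor" sub-requests then resolves bottom-up: fragments cover the single steps, and the rule table licenses which adjacent fragments may be fused into a larger workflow.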

Citations: 0
Exploring the Synergy of Blockchain, IoT, and Edge Computing in Smart Traffic Management across Urban Landscapes
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-04-17 | DOI: 10.1007/s10723-024-09762-6
Yu Chen, Yilun Qiu, Zhenyu Tang, Shuling Long, Lingfeng Zhao, Zhong Tang

In the ever-evolving landscape of smart city transportation, effective traffic management remains a critical challenge. To address this, we propose a novel Smart Traffic Management System (STMS) architecture that combines cutting-edge technologies, including Blockchain, IoT, edge computing, and reinforcement learning. STMS aims to optimize traffic flow, minimize congestion, and enhance transportation efficiency while ensuring data integrity, security, and decentralized decision-making. STMS integrates the Twin Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning algorithm with Blockchain technology to enable secure and transparent data sharing among traffic-related entities. Smart contracts are deployed on the Blockchain to automate the execution of predefined traffic rules, ensuring compliance and accountability. Integrating IoT sensors on vehicles, roadways, and traffic signals provides real-time traffic data, while edge nodes perform local traffic analysis and contribute to optimization. The architecture's decentralized decision-making empowers edge devices, traffic signals, and vehicles to interact autonomously, making informed decisions based on local data and predefined rules stored on the Blockchain. TD3 optimizes traffic signal timings, route suggestions, and traffic flow control, ensuring smooth transportation operations. STMS's holistic approach addresses traffic-management challenges in smart cities by combining advanced technologies. By leveraging Blockchain's immutability, IoT's real-time insights, edge computing's local intelligence, and TD3's reinforcement learning capabilities, STMS presents a robust solution for achieving efficient and secure transportation systems. This research underscores the potential of innovative algorithms to revolutionize urban mobility, ushering in a new era of smart and sustainable transportation networks.
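The signal-timing decision an edge node makes can be caricatured without any learning at all; the queue-driven rule below is a deliberately simple stand-in for the TD3 policy (the approach names and the minimum-green constraint are assumptions of this sketch):

```python
def choose_phase(queues, current, elapsed_s, min_green_s=10):
    """Hold the current green until the minimum green time has elapsed,
    then serve the approach with the longest queue."""
    if elapsed_s < min_green_s:
        return current                 # safety constraint: no rapid phase flapping
    return max(queues, key=queues.get) # greedy: longest queue gets the green
```

A learned policy such as TD3 replaces the greedy `max` rule with a trained actor network fed by the same sensed state, while constraints like minimum green time stay enforced outside the learner.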

Citations: 0
Micro Frontend Based Performance Improvement and Prediction for Microservices Using Machine Learning
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-04-16 | DOI: 10.1007/s10723-024-09760-8
Neha Kaushik, Harish Kumar, Vinay Raj

Microservices have become a buzzword in industry: many large IT giants such as Amazon, Twitter, and Uber have started migrating their existing applications to this style, and a few have begun building new applications with it. Owing to growing user requirements and the need to add more business functionality to existing applications, web applications designed in the microservices style also face performance challenges. Although the style has been successfully adopted in the design of large enterprise applications, such applications still face performance-related issues. It is clear from the literature that most articles focus only on backend microservices; to the best of our knowledge, no solution has been proposed that considers micro frontends along with backend microservices. To improve the performance of microservices-based web applications, this paper presents a new framework for designing web applications with micro frontends on the frontend and microservices on the backend. To assess the proposed framework, an empirical investigation of performance was performed, finding that applications designed with micro frontends and microservices outperform applications with monolithic frontends. Additionally, to predict the performance of microservices-based applications, a machine learning model is proposed, as machine learning has wide application in software engineering activities. The accuracy of the proposed model under different metrics is also presented.
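The prediction side can be illustrated with the simplest possible model: an ordinary least-squares line relating an observed load metric to a performance metric. The abstract does not specify the model's features, so the request-rate/latency pairing and all numbers here are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b, e.g. latency vs. request rate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def predict(model, x):
    a, b = model
    return a * x + b

# Hypothetical measurements: requests/s vs. observed latency in ms.
model = fit_line([100, 200, 300, 400], [12.0, 14.0, 16.0, 18.0])
```

A production predictor would use a richer learner and more features (payload size, frontend bundle served, service hop count), but the fit/predict split shown here is the interface any such model exposes.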

Citations: 0
CIA Security for Internet of Vehicles and Blockchain-AI Integration
IF 5.5 | CAS Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-04-02 | DOI: 10.1007/s10723-024-09757-3
Tao Hai, Muammer Aksoy, Celestine Iwendi, Ebuka Ibeke, Senthilkumar Mohan

The lack of data security and the hazardous nature of the Internet of Vehicles (IoV), in the absence of networking safeguards, have prevented the openness and self-organization of IoV vehicle networks. Lapses in Confidentiality, Integrity, and Authenticity (CIA) have also increased the possibility of malicious attacks. To overcome these challenges, this paper proposes an updated game-based CIA security mechanism that secures IoVs using Blockchain and Artificial Intelligence (AI) technology. The proposed framework is a trustworthy authorization solution with three layers: authentication of vehicles using Physical Unclonable Functions (PUFs), a flexible Proof-of-Work (dPOW) consensus framework, and AI-enhanced duel gaming. The credibility of the framework is validated by several security analyses, showcasing its superiority over existing systems in security, functionality, computation, and transaction overhead. Additionally, the proposed solution effectively handles challenges such as side-channel and physical cloning attacks, which many existing frameworks fail to address. The mechanism is implemented with a reduced, lightweight blockchain coupled with AI-based authentication through duel gaming, showcasing its efficiency and physical-level support, a feature absent from most existing blockchain-based IoV verification frameworks.
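A runnable sketch of PUF-style challenge-response enrollment and authentication: since a real PUF response arises from physical device variation, an HMAC stands in for it here, and the one-time-use challenge-response-pair store is an assumption of this toy, not the paper's protocol:

```python
import hmac
import hashlib
import secrets

def puf_response(device_secret, challenge):
    """Stand-in for a hardware PUF: deterministic per device, unpredictable to others."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

class Verifier:
    def __init__(self):
        self.crp_store = {}  # vehicle id -> list of enrolled challenge/response pairs

    def enroll(self, vehicle_id, device_secret, n=4):
        """Enrollment phase: record n challenge-response pairs for later checks."""
        pairs = []
        for _ in range(n):
            challenge = secrets.token_bytes(16)
            pairs.append((challenge, puf_response(device_secret, challenge)))
        self.crp_store[vehicle_id] = pairs

    def authenticate(self, vehicle_id, respond):
        """Issue a fresh, never-reused challenge and check the device's answer."""
        challenge, expected = self.crp_store[vehicle_id].pop()
        return hmac.compare_digest(respond(challenge), expected)
```

Because each enrolled challenge is popped on use, a replayed response cannot answer a later challenge, which is the property that makes PUF challenge-response resistant to cloning of recorded traffic.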

Citations: 0
On the Joint Design of Microservice Deployment and Routing in Cloud Data Centers
IF 5.5 CAS Zone 2 Computer Science Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-26 DOI: 10.1007/s10723-024-09759-1

Abstract

In recent years, internet enterprises have transitioned from traditional monolithic services to microservice architectures to better meet evolving business requirements. However, this shift also brings great challenges to the resource management of service providers. Existing research has not fully considered the request characteristics of internet application scenarios. Some studies apply traditional task scheduling models and strategies to microservice scheduling scenarios, while others optimize microservice deployment and request routing separately. In this paper, we propose a microservice instance deployment algorithm based on a genetic algorithm and local search, and a request routing algorithm based on probabilistic forwarding. The service graph with complex dependencies is decomposed into multiple service chains, and an open Jackson queuing network is applied to analyze the performance of the microservice system. Evaluation results demonstrate that our scheme significantly outperforms the benchmark strategy. Our algorithm reduces average response latency by 37%-67% and improves the request success rate by 8%-115% compared to the baseline algorithms.
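The abstract analyzes the microservice system as an open Jackson queuing network. As a minimal sketch of how such a network yields a mean response time, the snippet below treats each node as an M/M/1 queue, solves the standard traffic equations, and applies Little's law; the routing matrix and rates are made-up values, not the paper's:

```python
import numpy as np

def jackson_mean_response(gamma, mu, P):
    """Mean end-to-end response time of an open Jackson network.
    gamma: external arrival rate into each node,
    mu:    service rate of each node,
    P:     routing matrix, P[i][j] = probability a job leaving i goes to j.
    Each node is modeled as an M/M/1 queue."""
    gamma = np.asarray(gamma, float)
    mu = np.asarray(mu, float)
    P = np.asarray(P, float)
    n = len(gamma)
    # Traffic equations: lam = gamma + P^T @ lam  =>  (I - P^T) lam = gamma
    lam = np.linalg.solve(np.eye(n) - P.T, gamma)
    rho = lam / mu
    assert np.all(rho < 1), "network must be stable"
    L = rho / (1 - rho)            # mean number of jobs at each M/M/1 node
    return L.sum() / gamma.sum()   # Little's law over the whole network

# A two-service chain: every request enters node 0, then visits node 1, then leaves.
P = [[0.0, 1.0],
     [0.0, 0.0]]
T = jackson_mean_response(gamma=[5.0, 0.0], mu=[10.0, 8.0], P=P)
```

For this tandem chain the result matches summing the two M/M/1 response times, 1/(10-5) + 1/(8-5); the matrix form generalises to service graphs with branching and feedback.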

Citations: 0
Improving Performance of Smart Education Systems by Integrating Machine Learning on Edge Devices and Cloud in Educational Institutions
IF 5.5 CAS Zone 2 Computer Science Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-14 DOI: 10.1007/s10723-024-09755-5
Shujie Qiu

Educational institutions today are embracing technology to enhance education quality through intelligent systems. This study introduces an innovative strategy to boost the performance of such systems by seamlessly integrating machine learning on edge devices and cloud infrastructure. The proposed framework harnesses the capabilities of a Hybrid 1D Convolutional Neural Network (CNN) and Long Short-Term Memory Network (LSTM) architecture, offering profound insights into intelligent education. Operating at the crossroads of localised and centralised analyses, the Hybrid 1D CNN-LSTM architecture represents a significant advancement. It directly engages edge devices used by students and educators, laying the groundwork for personalised learning experiences. This architecture adeptly captures the intricacies of various modalities, including text, images, and videos, by harmonising 1D CNN layers and LSTM modules. This approach facilitates the extraction of tailored features from each modality and the exploration of temporal intricacies. Consequently, the architecture provides a holistic comprehension of student engagement and comprehension dynamics, unveiling individual learning preferences. Moreover, the framework seamlessly integrates data from edge devices into the cloud infrastructure, allowing insights from both domains to merge. Educators benefit from attention-enhanced feature maps that encapsulate personalised insights, empowering them to customise content and strategies according to student learning preferences. The approach bridges real-time, localised analysis with comprehensive cloud-mediated insights, paving the path for transformative educational experiences. Empirical validation reinforces the effectiveness of the Hybrid 1D CNN-LSTM architecture, cementing its potential to revolutionise intelligent education within academic institutions. This fusion of machine learning across edge devices and cloud architecture can reshape the educational landscape, ushering in a more innovative and more responsive learning environment that caters to the diverse needs of students and educators alike.
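As a rough illustration of the Hybrid 1D CNN-LSTM forward pass described above — the paper's actual layer sizes, attention mechanism, and modality fusion are not specified here, so every shape below is an assumption — a minimal NumPy sketch runs a valid 1D convolution with ReLU, then a single-layer LSTM whose final hidden state summarises the sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1D convolution with ReLU. x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)."""
    T, _ = x.shape
    K, _, C_out = w.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        # Each output step contracts a K x C_in window against all filters.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def lstm(x, Wx, Wh, b):
    """Plain LSTM over sequence x: (T, D); gates stacked in order i, f, g, o."""
    T, D = x.shape
    H = Wh.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(T):
        z = x[t] @ Wx + h @ Wh + b          # (4H,)
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h  # final hidden state summarises the whole sequence

# Toy shapes: 20 time steps, 3 input channels, 8 conv filters, 16 LSTM units.
T, C_in, C_out, H, K = 20, 3, 8, 16, 5
x = rng.normal(size=(T, C_in))
w = rng.normal(scale=0.1, size=(K, C_in, C_out))
b = np.zeros(C_out)
Wx = rng.normal(scale=0.1, size=(C_out, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
bl = np.zeros(4 * H)
features = lstm(conv1d(x, w, b), Wx, Wh, bl)   # shape (16,)
```

The conv stage extracts local patterns per modality; the LSTM then models how those patterns evolve over time, which is the division of labour the abstract attributes to the hybrid design.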

Citations: 0
Cost-efficient Workflow as a Service using Containers
IF 5.5 CAS Zone 2 Computer Science Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-11 DOI: 10.1007/s10723-024-09745-7
Kamalesh Karmakar, Anurina Tarafdar, Rajib K. Das, Sunirmal Khatua

Workflows are special applications used to solve complex scientific problems. The emerging Workflow as a Service (WaaS) model provides scientists with an effective way of deploying their workflow applications in Cloud environments. The WaaS model can execute multiple workflows in a multi-tenant Cloud environment. Scheduling the tasks of the workflows in the WaaS model has several challenges. The scheduling approach must properly utilize the underlying Cloud resources and satisfy the users' Quality of Service (QoS) requirements for all the workflows. In this work, we have proposed a heuristic approach for scheduling deadline-sensitive workflows in a containerized Cloud environment for the WaaS model. We formulated the problem of minimizing the MIPS (million instructions per second) requirement of tasks while satisfying the deadline of the workflows as a non-linear optimization problem and applied the Lagrange multiplier method to solve it. It allows us to configure/scale the containers' resources and reduce costs. We also ensure maximum utilization of VMs' resources while allocating containers to VMs. Furthermore, we have proposed an approach to effectively scale containers and VMs to improve the schedulability of the workflows at runtime and deal with the dynamic arrival of workflows. Extensive experiments and comparisons with other state-of-the-art works show that the proposed approach can significantly improve resource utilization, prevent deadline violations, and reduce the cost of renting Cloud resources for the WaaS model.
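The abstract states that MIPS minimisation under a workflow deadline is cast as a non-linear program solved with the Lagrange multiplier method. One plausible instance of such a formulation — an assumption for illustration, since the paper's exact objective and constraints are not given here — is to minimise the total MIPS of a task chain subject to its makespan meeting the deadline, which admits a closed-form Lagrange solution:

```python
import math

def min_mips_allocation(workloads, deadline):
    """Closed-form Lagrange solution to the illustrative problem:
        minimize   sum(m_i)             (total MIPS rented)
        subject to sum(w_i / m_i) = D   (chain finishes exactly at the deadline)
    Stationarity of the Lagrangian gives m_i = sqrt(lambda * w_i); substituting
    into the constraint yields m_i = sqrt(w_i) * sum_j sqrt(w_j) / D."""
    s = sum(math.sqrt(w) for w in workloads)
    return [math.sqrt(w) * s / deadline for w in workloads]

w = [4e6, 1e6, 9e6]   # instructions per task (hypothetical numbers)
D = 10.0              # workflow deadline in seconds
m = min_mips_allocation(w, D)
makespan = sum(wi / mi for wi, mi in zip(w, m))
# The makespan equals D by construction, and the total MIPS is lower than
# giving every task the same uniform rate that meets the deadline.
```

Heavier tasks receive proportionally more than their fair share of MIPS (scaling with the square root of the workload), which is what lets the Lagrange allocation beat a uniform split in total rented capacity.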

Citations: 0