Evaluation of Storage Placement in Computing Continuum for a Robotic Application
Pub Date: 2024-06-04 | DOI: 10.1007/s10723-024-09758-2
Zeinab Bakhshi, Guillermo Rodriguez-Navas, Hans Hansson, Radu Prodan
This paper analyzes the timing performance of a persistent storage solution designed for distributed container-based architectures in industrial control applications. The timing analysis is conducted using an in-house simulator that mirrors our testbed specifications. The storage ensures data availability and consistency even in the presence of faults. The analysis considers four aspects: (1) placement strategy, (2) design options, (3) data size, and (4) behavior under faulty conditions. Experimental results that account for the timing constraints of industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Comparison results also reveal that, while the method may underperform current centralized solutions in fault-free conditions, it outperforms them in failure scenarios. Moreover, the evaluation method used here is applicable to assessing other container-based critical applications with timing constraints that require persistent storage.
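A minimal sketch of the kind of deadline-hit comparison such a simulator performs, under purely hypothetical numbers: a replicated edge store is slightly slower than a centralized store when nothing fails, but keeps meeting the deadline when nodes drop out. None of the latencies, failure rates, or the deadline below come from the paper.

```python
import random

# Toy comparison (not the paper's simulator): read latency of a replicated
# edge store vs. a single centralized store under independent node failures.
EDGE_REPLICAS_MS = [4.0, 6.0, 9.0]   # hypothetical per-replica latency (ms)
CENTRAL_MS = 3.0                     # hypothetical centralized latency (ms)
FAILOVER_PENALTY_MS = 50.0           # hypothetical rerouting cost on failure
P_FAIL = 0.1                         # hypothetical failure probability
DEADLINE_MS = 20.0

def edge_latency():
    """Latency of the fastest replica that survives this trial."""
    alive = [t for t in EDGE_REPLICAS_MS if random.random() > P_FAIL]
    return min(alive) if alive else float("inf")

def centralized_latency():
    ok = random.random() > P_FAIL
    return CENTRAL_MS if ok else CENTRAL_MS + FAILOVER_PENALTY_MS

random.seed(1)
trials = 100_000
edge_hits = sum(edge_latency() <= DEADLINE_MS for _ in range(trials))
cent_hits = sum(centralized_latency() <= DEADLINE_MS for _ in range(trials))
print(f"edge deadline-hit rate:        {edge_hits / trials:.4f}")
print(f"centralized deadline-hit rate: {cent_hits / trials:.4f}")
```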
{"title":"Evaluation of Storage Placement in Computing Continuum for a Robotic Application","authors":"Zeinab Bakhshi, Guillermo Rodriguez-Navas, Hans Hansson, Radu Prodan","doi":"10.1007/s10723-024-09758-2","DOIUrl":"https://doi.org/10.1007/s10723-024-09758-2","url":null,"abstract":"<p>This paper analyzes the timing performance of a persistent storage designed for distributed container-based architectures in industrial control applications. The timing performance analysis is conducted using an in-house simulator, which mirrors our testbed specifications. The storage ensures data availability and consistency even in presence of faults. The analysis considers four aspects: 1. placement strategy, 2. design options, 3. data size, and 4. evaluation under faulty conditions. Experimental results considering the timing constraints in industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Comparison results also reveal that, while the method may underperform current centralized solutions in fault-free conditions, it outperforms the centralized solutions in failure scenario. Moreover, the used evaluation method is applicable for assessing other container-based critical applications with timing constraints that require persistent storage.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"1 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141254522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Effective Prediction of Resource Using Machine Learning in Edge Environments for the Smart Healthcare Industry
Pub Date: 2024-05-30 | DOI: 10.1007/s10723-024-09768-0
Guangyu Xu, Mingde Xu
Recent advances in computing and digital transformation enable smart healthcare systems that predict diseases at an early stage. In healthcare services, Internet of Things (IoT) based models play a vital role in enhancing data processing and detection. As the IoT grows, processing its data requires more space, and transferring patient reports consumes considerable time and energy, causing high latency and energy usage. Edge computing overcomes this: data is analysed in the edge layer to improve utilization. This paper proposes resource allocation and prediction models using IoT and edge computing that are suitable for healthcare applications. The proposed system consists of three modules: data preprocessing using filtering approaches, resource allocation using a deep Q-network, and a prediction phase using an optimised deep learning model, DBN-LSTM, tuned with frog-leap optimization. The DL model is trained on a health dataset to predict the target field; it is then tested on sensed data from the IoT layer, and the predicted patient health status lets doctors and patients take appropriate actions in time. The primary objective of the system is to achieve low latency while improving quality of service (QoS) metrics such as makespan, average resource utilization (ARU), load balancing level (LBL), turnaround time (TAT), and accuracy. Deep reinforcement learning is employed owing to its wide acceptance for resource allocation. Compared with state-of-the-art approaches, the proposed system reduces makespan by increasing average resource utilization and load balancing, making it suitable for accurate real-time analysis of patient health status.
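To make the resource-allocation module concrete, here is a tabular Q-learning toy standing in for the paper's deep Q-network allocator; the state, actions, rewards, and environment dynamics are all assumptions for illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

# Tabular Q-learning stand-in for a DQN allocator. State: free edge slots
# (0-4); actions: which task queue to serve. All numbers are hypothetical.
ACTIONS = [0, 1]             # 0 = serve urgent queue, 1 = serve batch queue
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)       # Q[(state, action)] -> value

def step(free_slots, action):
    """Toy environment: urgent tasks pay more but need two slots."""
    need, reward = (2, 5.0) if action == 0 else (1, 1.0)
    if free_slots < need:
        nxt, r = free_slots, -2.0              # allocation failed
    else:
        nxt, r = free_slots - need, reward
    nxt = min(nxt + random.randint(0, 1), 4)   # finished tasks free a slot
    return nxt, r

random.seed(0)
state = 4
for _ in range(20_000):
    a = random.choice(ACTIONS) if random.random() < EPS \
        else max(ACTIONS, key=lambda x: Q[(state, x)])
    nxt, r = step(state, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
    state = nxt

# Learned policy per state (which queue to serve given free slots)
print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(5)})
```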
{"title":"An Effective Prediction of Resource Using Machine Learning in Edge Environments for the Smart Healthcare Industry","authors":"Guangyu Xu, Mingde Xu","doi":"10.1007/s10723-024-09768-0","DOIUrl":"https://doi.org/10.1007/s10723-024-09768-0","url":null,"abstract":"<p>Recent modern computing and trends in digital transformation provide a smart healthcare system for predicting diseases at an early stage. In healthcare services, Internet of Things (IoT) based models play a vital role in enhancing data processing and detection. As IoT grows, processing data requires more space. Transferring the patient reports takes too much time and energy, which causes high latency and energy. To overcome this, Edge computing is the solution. The data is analysed in the edge layer to improve the utilization. This paper proposed effective prediction of resource allocation and prediction models using IoT and Edge, which are suitable for healthcare applications. The proposed system consists of three modules: data preprocessing using filtering approaches, Resource allocation using the Deep Q network, and prediction phase using an optimised DL model called DBN-LSTM with frog leap optimization. The DL model is trained using the training health dataset, and the target field is predicted. It has been tested using the sensed data from the IoT layer, and the patient health status is expected to take appropriate actions. With timely prediction using edge devices, doctors and patients conveniently take necessary actions. The primary objective of this system is to secure low latency by improving the quality of service (QoS) metrics such as makespan, ARU, LBL, TAT, and accuracy. The deep reinforcement learning approach is employed due to its considerable acceptance for resource allocation. Compared to the state-of-the-art approaches, the proposed system obtained reduced makespan by increasing the average resource utilization and load balancing, which is suitable for accurate real-time analysis of patient health status.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"61 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141191589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hybrid Discrete Grey Wolf Optimization Algorithm Imbalance-ness Aware for Solving Two-dimensional Bin-packing Problems
Pub Date: 2024-05-10 | DOI: 10.1007/s10723-024-09761-7
Saeed Kosari, Mirsaeid Hosseini Shirvani, Navid Khaledian, Danial Javaheri
Many applications across different industries require multi-dimensional resources, needing all resource dimensions at the same time. Since resources are typically scarce, expensive, or polluting, efficient resource allocation is a highly favorable approach to reducing overall cost. On the other hand, applications' requirements vary across resource dimensions, so resource allocations usually suffer a high rate of wastage owing to the resource skewness phenomenon. For instance, micro-service allocation in Internet of Things (IoT) applications and Virtual Machine Placement (VMP) in a cloud context are challenging tasks because they demand all resource dimensions, such as CPU and memory bandwidth, in diverse and imbalanced proportions, so inefficient resource allocation raises issues. As a special case, the problem under study, the two-dimensional resource allocation of distributed applications, is modeled as the two-dimensional bin-packing problem, which is NP-hard. Several approaches have been proposed in the literature, but most are unaware of the skewness and dimensional imbalance in the list of requested resources, which incurs additional cost. To solve this combinatorial problem, a novel hybrid discrete grey wolf optimization algorithm (HD-GWO) is presented. It utilizes strong global search operators along with several novel walking-around procedures, each aware of resource dimensional skewness, and explores the discrete search space with efficient permutations. HD-GWO was verified under miscellaneous conditions with different correlation coefficients (CC) between resource dimensions. Simulation results prove that HD-GWO significantly outperforms other state-of-the-art methods in terms of the relevant evaluation metrics, along with a high potential for scalability.
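As a rough illustration of the search space HD-GWO explores, the sketch below solves a tiny two-dimensional bin-packing instance with first-fit over an item permutation, improved by random swap moves. The grey-wolf social operators of the actual algorithm are omitted, and the items and capacities are hypothetical.

```python
import random

# Permutation-based 2D bin packing: first-fit over an item order, improved
# by swap moves (a much-simplified stand-in for HD-GWO's discrete search).
random.seed(7)
CAP = (1.0, 1.0)                       # (CPU, memory) capacity per bin
ITEMS = [(random.random() * 0.6, random.random() * 0.6) for _ in range(40)]

def bins_used(order):
    """First-fit: place items in the given order, return bin count."""
    bins = []                          # each bin: [cpu_left, mem_left]
    for i in order:
        cpu, mem = ITEMS[i]
        for b in bins:
            if b[0] >= cpu and b[1] >= mem:
                b[0] -= cpu
                b[1] -= mem
                break
        else:
            bins.append([CAP[0] - cpu, CAP[1] - mem])
    return len(bins)

best = list(range(len(ITEMS)))
random.shuffle(best)
best_cost = bins_used(best)
for _ in range(5_000):                 # swap-based local search
    cand = best[:]
    i, j = random.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]
    cost = bins_used(cand)
    if cost <= best_cost:              # accept ties to keep moving
        best, best_cost = cand, cost
print("bins used:", best_cost)
```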
{"title":"A Hybrid Discrete Grey Wolf Optimization Algorithm Imbalance-ness Aware for Solving Two-dimensional Bin-packing Problems","authors":"Saeed Kosari, Mirsaeid Hosseini Shirvani, Navid Khaledian, Danial Javaheri","doi":"10.1007/s10723-024-09761-7","DOIUrl":"https://doi.org/10.1007/s10723-024-09761-7","url":null,"abstract":"<p>In different industries, there are miscellaneous applications that require multi-dimensional resources. These kinds of applications need all of the resource dimensions at the same time. Since the resources are typically scarce/expensive/pollutant, presenting an efficient resource allocation is a very favorable approach to reducing overall cost. On the other hand, the requirement of the applications on different dimensions of the resources is variable, usually, resource allocations have a high rate of wastage owing to the unpleasant resource skew-ness phenomenon. For instance, micro-service allocation in the Internet of Things (IoT) applications and Virtual Machine Placement (VMP) in a cloud context are challenging tasks because they diversely require imbalanced all resource dimensions such as CPU and Memory bandwidths, so inefficient resource allocation raises issues. In a special case, the problem under study associated with the two-dimensional resource allocation of distributed applications is modeled to the two-dimensional bin-packing problems which are categorized as the famous NP-Hard. Several approaches were proposed in the literature, but the majority of them are not aware of skew-ness and dimensional imbalances in the list of requested resources which incurs additional costs. To solve this combinatorial problem, a novel hybrid discrete gray wolf optimization algorithm (<i>HD</i>-<i>GWO</i>) is presented. It utilizes strong global search operators along with several novel walking-around procedures each of which is aware of resource dimensional skew-ness and explores discrete search space with efficient permutations. To verify <i>HD</i>-<i>GWO</i>, it was tested in miscellaneous conditions considering different correlation coefficients (<i>CC</i>) of resource dimensions. Simulation results prove that <i>HD</i>-<i>GWO</i> significantly outperforms other state-of-the-art in terms of relevant evaluation metrics along with a high potential of scalability.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"309 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140940510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Configurable Workflows in Smart Environments with Knowledge-based Process Fragment Reuse
Pub Date: 2024-05-03 | DOI: 10.1007/s10723-024-09763-5
Mouhamed Gaith Ayadi, Haithem Mezni
In today’s smart environments, the servicization of various resources has produced a tremendous number of IoT- and cloud-based smart services. Thanks to the pivotal role of pillar paradigms such as edge/cloud computing, the Internet of Things, and business process management, it is now possible to combine and translate these service-like resources into configurable workflows that cope with users’ complex needs. Examples include treatment workflows in smart healthcare, delivery plans in drone-based missions, and transportation plans in smart urban networks. Rather than composing atomic services to obtain these workflows, reusing existing process fragments offers several advantages, mainly fast, secure, and configurable composition. However, reusing smart process fragments has not yet been addressed in the context of smart environments. In addition, existing solutions in smart environments suffer from complexity (e.g., multi-modal transportation in smart mobility) and privacy issues caused by the heterogeneity of aggregated services (e.g., package delivery in the smart economy). Moreover, these services may conflict in specific domains (e.g., medication/treatment workflows in smart healthcare) and may degrade the user experience. To solve these issues, the present paper aims to accelerate the generation of configurable workflows with respect to users’ requirements and the specificity of their smart environment. We exploit software reuse principles to map each sub-request to smart process fragments, which we combine using the Cocke-Kasami-Younger (CKY) method to obtain the final workflow. This contribution is preceded by a knowledge graph model of smart environments covering available services, process fragments, and their dependencies. The resulting information network is then managed with a graph representation learning method to facilitate its processing and the composition of high-quality smart services. Experimental results on a real-world dataset prove the effectiveness of our approach compared to existing solutions.
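The CKY-style composition step can be pictured with a toy grammar over fragment types: binary rules state which adjacent fragments compose into larger ones, and the classic CKY table tells whether the whole request assembles into a single workflow. The fragment types, rules, and request below are invented for illustration only.

```python
from itertools import product

# Toy CKY composition (a sketch of the idea, not the paper's system).
RULES = {                          # (left, right) -> composed fragment type
    ("Diagnose", "Prescribe"): "Treat",
    ("Treat", "Monitor"): "CarePlan",
    ("Book", "CarePlan"): "Workflow",
}
request = ["Book", "Diagnose", "Prescribe", "Monitor"]  # sub-request types

n = len(request)
# table[i][j] holds fragment types spanning request[i : i + j + 1]
table = [[set() for _ in range(n)] for _ in range(n)]
for i, frag in enumerate(request):
    table[i][0].add(frag)
for span in range(1, n):                     # span length minus one
    for i in range(n - span):
        for split in range(span):            # left part covers split+1 items
            lefts = table[i][split]
            rights = table[i + split + 1][span - split - 1]
            for l, r in product(lefts, rights):
                if (l, r) in RULES:
                    table[i][span].add(RULES[(l, r)])
print("composable into:", table[0][n - 1])   # {'Workflow'} if a plan exists
```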
{"title":"Enabling Configurable Workflows in Smart Environments with Knowledge-based Process Fragment Reuse","authors":"Mouhamed Gaith Ayadi, Haithem Mezni","doi":"10.1007/s10723-024-09763-5","DOIUrl":"https://doi.org/10.1007/s10723-024-09763-5","url":null,"abstract":"<p>In today’s smart environments, the serviceli-zation of various resources has produced a tremendous number of IoT- and cloud-based smart services. Thanks to the pivotal role of pillar paradigms, such as edge/cloud computing, Internet of Things, and business process management, it is now possible to combine and translate these service-like resources into configurable workflows, to cope with users’ complex needs. Examples include treatment workflows in smart healthcare, delivery plans in drone-based missions, transportation plans in smart urban networks, etc. Rather than composing atomic services to obtain these workflows, reusing existing process fragments has several advantages, mainly the fast, secure, and configurable compositions. However, reusing smart process fragments has not yet been addressed in the context of smart environments. In addition, existing solutions in smart environments suffer from the complexity (e.g., multi-modal transportation in smart mobility) and privacy issues caused by the heterogeneity (e.g., package delivery in smart economy) of aggregated services. Moreover, these services may be conflicting in specific domains (e.g. medication/treatment workflows in smart healthcare), and may affect user experience. To solve the above issues, the present paper aims to accelerate the process of generating configurable treatment workflows w.r.t. the users’ requirements and their smart environment specificity. We exploit the principles of software reuse to map each sub-request into smart process fragments, which we combine using Cocke-Kasami-Younger (CKY) method, to finally obtain the suitable workflow. This contribution is preceded by a knowledge graph modeling of smart environments in terms of available services, process fragments, as well as their dependencies. The built information network is, then, managed using a graph representation learning method, in order to facilitate its processing and composing high-quality smart services. Experimental results on a real-world dataset proved the effectiveness of our approach, compared to existing solutions.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"5 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Synergy of Blockchain, IoT, and Edge Computing in Smart Traffic Management across Urban Landscapes
Pub Date: 2024-04-17 | DOI: 10.1007/s10723-024-09762-6
Yu Chen, Yilun Qiu, Zhenyu Tang, Shuling Long, Lingfeng Zhao, Zhong Tang
In the ever-evolving landscape of smart city transportation, effective traffic management remains a critical challenge. To address this, we propose a novel Smart Traffic Management System (STMS) architecture that combines cutting-edge technologies, including Blockchain, IoT, edge computing, and reinforcement learning. STMS aims to optimize traffic flow, minimize congestion, and enhance transportation efficiency while ensuring data integrity, security, and decentralized decision-making. STMS integrates the Twin Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning algorithm with Blockchain technology to enable secure and transparent data sharing among traffic-related entities. Smart contracts deployed on the Blockchain automate the execution of predefined traffic rules, ensuring compliance and accountability. IoT sensors on vehicles, roadways, and traffic signals provide real-time traffic data, while edge nodes perform local traffic analysis and contribute to optimization. Decentralized decision-making empowers edge devices, traffic signals, and vehicles to interact autonomously, making informed decisions based on local data and the predefined rules stored on the Blockchain. TD3 optimizes traffic signal timings, route suggestions, and traffic flow control, ensuring smooth transportation operations. STMS's holistic approach addresses traffic management challenges in smart cities by combining these technologies: leveraging Blockchain's immutability, the IoT's real-time insights, edge computing's local intelligence, and TD3's reinforcement learning capabilities, STMS presents a robust solution for efficient and secure transportation systems. This research underscores the potential of innovative algorithms to revolutionize urban mobility, ushering in a new era of smart and sustainable transportation networks.
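A minimal hash-chained ledger illustrates the data-integrity side of such a design; this is a single-node sketch with no consensus, and a plain predicate stands in for a smart contract, so none of it is the paper's implementation.

```python
import hashlib
import json
import time

# Minimal hash-chained ledger of traffic events (illustrative only).
def h(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def speed_rule_ok(event):
    """Predefined traffic rule standing in for a smart contract."""
    return event["speed_kmh"] <= event["limit_kmh"]

chain = [{"index": 0, "prev": "0" * 64, "event": None, "ts": 0.0}]

def append_event(event):
    event = dict(event, violation=not speed_rule_ok(event))  # contract outcome
    chain.append({"index": len(chain), "prev": h(chain[-1]),
                  "event": event, "ts": time.time()})

def verify(chain):
    return all(chain[i]["prev"] == h(chain[i - 1]) for i in range(1, len(chain)))

append_event({"vehicle": "V42", "speed_kmh": 48, "limit_kmh": 50})
append_event({"vehicle": "V17", "speed_kmh": 30, "limit_kmh": 50})
print("chain valid:", verify(chain))        # True
chain[1]["event"]["speed_kmh"] = 80         # tampering with a recorded event
print("after tampering:", verify(chain))    # False: the next link breaks
```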
{"title":"Exploring the Synergy of Blockchain, IoT, and Edge Computing in Smart Traffic Management across Urban Landscapes","authors":"Yu Chen, Yilun Qiu, Zhenyu Tang, Shuling Long, Lingfeng Zhao, Zhong Tang","doi":"10.1007/s10723-024-09762-6","DOIUrl":"https://doi.org/10.1007/s10723-024-09762-6","url":null,"abstract":"<p>In the ever-evolving landscape of smart city transportation, effective traffic management remains a critical challenge. To address this, we propose a novel Smart Traffic Management System (STMS) Architecture algorithm that combines cutting-edge technologies, including Blockchain, IoT, edge computing, and reinforcement learning. STMS aims to optimize traffic flow, minimize congestion, and enhance transportation efficiency while ensuring data integrity, security, and decentralized decision-making. STMS integrates the Twin Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning algorithm with Blockchain technology to enable secure and transparent data sharing among traffic-related entities. Smart contracts are deployed on the Blockchain to automate the execution of predefined traffic rules, ensuring compliance and accountability. Integrating IoT sensors on vehicles, roadways, and traffic signals provides real-time traffic data, while edge nodes perform local traffic analysis and contribute to optimization. The algorithm’s decentralized decision-making empowers edge devices, traffic signals, and vehicles to interact autonomously, making informed decisions based on local data and predefined rules stored on the Blockchain. TD3 optimizes traffic signal timings, route suggestions, and traffic flow control, ensuring smooth transportation operations. STMSs holistic approach addresses traffic management challenges in smart cities by combining advanced technologies. By leveraging Blockchain’s immutability, IoT’s real-time insights, edge computing’s local intelligence, and TD3’s reinforcement learning capabilities, STMS presents a robust solution for achieving efficient and secure transportation systems. This research underscores the potential for innovative algorithms to revolutionize urban mobility, ushering in a new era of smart and sustainable transportation networks.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"228 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140615758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Micro Frontend Based Performance Improvement and Prediction for Microservices Using Machine Learning
Pub Date: 2024-04-16 | DOI: 10.1007/s10723-024-09760-8
Neha Kaushik, Harish Kumar, Vinay Raj
Microservices have become a buzzword in industry as many large IT companies, such as Amazon, Twitter, and Uber, have migrated their existing applications to this style, and some have started building new applications with it. Owing to growing user requirements and the need to add more business functionality to existing applications, web applications designed in the microservices style also face performance challenges. Although the style has been successfully adopted in the design of large enterprise applications, these applications still face performance-related issues. It is clear from the literature that most articles focus only on backend microservices; to the best of our knowledge, no solution has been proposed that considers micro frontends together with backend microservices. To improve the performance of microservices-based web applications, this paper presents a new framework for designing web applications with micro frontends on the frontend and microservices in the backend. To assess the proposed framework, an empirical investigation of performance was conducted, finding that applications designed with micro frontends and microservices outperform applications with monolithic frontends. Additionally, a machine learning model is proposed to predict the performance of microservices-based applications, since machine learning is widely applied in software engineering activities. The accuracy of the proposed model under different metrics is also reported.
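As an illustration of the prediction idea, the sketch below fits a regressor to synthetic latency data. The feature set (concurrent load, call-path length, frontend style), the synthetic ground truth, and the model choice are assumptions for demonstration, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Predict response latency (ms) from simple request/deployment features.
rng = np.random.default_rng(0)
n = 2_000
X = np.column_stack([
    rng.integers(1, 20, n),   # concurrent requests
    rng.integers(1, 6, n),    # microservices on the call path
    rng.integers(0, 2, n),    # 1 = micro frontend, 0 = monolithic frontend
])
# Synthetic ground truth: hops and load dominate; micro frontends shave latency.
y = 10 * X[:, 1] + 2 * X[:, 0] - 15 * X[:, 2] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE (ms):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```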
{"title":"Micro Frontend Based Performance Improvement and Prediction for Microservices Using Machine Learning","authors":"Neha Kaushik, Harish Kumar, Vinay Raj","doi":"10.1007/s10723-024-09760-8","DOIUrl":"https://doi.org/10.1007/s10723-024-09760-8","url":null,"abstract":"<p>Microservices has become a buzzword in industry as many large IT giants such as Amazon, Twitter, Uber, etc have started migrating their existing applications to this new style and few of them have started building their new applications with this style. Due to increasing user requirements and the need to add more business functionalities to the existing applications, the web applications designed using the microservices style also face a few performance challenges. Though this style has been successfully adopted in the design of large enterprise applications, still the applications face performance related issues. It is clear from the literature that most of the articles focus only on the backend microservices. To the best of our knowledge, there has been no solution proposed considering micro frontends along with the backend microservices. To improve the performance of the microservices based web applications, in this paper, a new framework for the design of web applications with micro frontends for frontend and microservices in the backend of the application is presented. To assess the proposed framework, an empirical investigation is performed to analyze the performance and it is found that the applications designed with micro frontends with microservices have performed better than the applications with monolithic frontends. Additionally, to predict the performance of microservices based applications, a machine learning model is proposed as machine learning has wide applications in software engineering related activities. The accuracy of the proposed model using different metrics is also presented.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"47 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CIA Security for Internet of Vehicles and Blockchain-AI Integration
Pub Date: 2024-04-02 | DOI: 10.1007/s10723-024-09757-3
Tao Hai, Muammer Aksoy, Celestine Iwendi, Ebuka Ibeke, Senthilkumar Mohan
The lack of data security and the hazardous nature of the Internet of Vehicles (IoV), in the absence of networking safeguards, have prevented the openness and self-organization of IoV vehicle networks. Lapses in Confidentiality, Integrity, and Authenticity (CIA) have also increased the possibility of malicious attacks. To overcome these challenges, this paper proposes an updated game-based CIA security mechanism that secures IoVs using Blockchain and Artificial Intelligence (AI) technology. The proposed framework is a trustworthy authorization solution with three layers: authentication of vehicles using Physical Unclonable Functions (PUFs), a flexible Proof-of-Work (dPOW) consensus framework, and AI-enhanced duel gaming. The credibility of the framework is validated by several security analyses, showcasing its superiority over existing systems in terms of security, functionality, computation, and transaction overhead. Additionally, the proposed solution effectively handles challenges such as side-channel and physical cloning attacks, which many existing frameworks fail to address. The implementation uses a lightweight, reduced-overhead blockchain coupled with AI-based authentication through duel gaming, demonstrating its efficiency and physical-level support, a feature absent from most existing blockchain-based IoV verification frameworks.
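The PUF authentication layer can be sketched as classic challenge-response with pre-enrolled challenge-response pairs (CRPs). In the sketch below, a keyed hash stands in for the physical PUF, purely as an illustrative assumption: a real PUF derives its responses from uncloneable silicon variation rather than a stored key.

```python
import hashlib
import hmac
import secrets

# Simplified challenge-response sketch of PUF-style vehicle authentication.
class SimulatedPUF:
    def __init__(self):
        self._secret = secrets.token_bytes(32)  # stands in for silicon variation
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

vehicle_puf = SimulatedPUF()

# Enrollment: the authority stores CRPs; the device's "secret" never leaves it.
crp_store = {}
for _ in range(4):
    c = secrets.token_bytes(16)
    crp_store[c] = vehicle_puf.respond(c)

def authenticate(puf) -> bool:
    challenge, expected = crp_store.popitem()   # each CRP is used only once
    return hmac.compare_digest(puf.respond(challenge), expected)

print("genuine vehicle:", authenticate(vehicle_puf))     # True
print("cloned device:  ", authenticate(SimulatedPUF()))  # False
```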
{"title":"CIA Security for Internet of Vehicles and Blockchain-AI Integration","authors":"Tao Hai, Muammer Aksoy, Celestine Iwendi, Ebuka Ibeke, Senthilkumar Mohan","doi":"10.1007/s10723-024-09757-3","DOIUrl":"https://doi.org/10.1007/s10723-024-09757-3","url":null,"abstract":"<p>The lack of data security and the hazardous nature of the Internet of Vehicles (IoV), in the absence of networking settings, have prevented the openness and self-organization of the vehicle networks of IoV cars. The lapses originating in the areas of Confidentiality, Integrity, and Authenticity (CIA) have also increased the possibility of malicious attacks. To overcome these challenges, this paper proposes an updated Games-based CIA security mechanism to secure IoVs using Blockchain and Artificial Intelligence (AI) technology. The proposed framework consists of a trustworthy authorization solution three layers, including the authentication of vehicles using Physical Unclonable Functions (PUFs), a flexible Proof-of-Work (dPOW) consensus framework, and AI-enhanced duel gaming. The credibility of the framework is validated by different security analyses, showcasing its superiority over existing systems in terms of security, functionality, computation, and transaction overhead. Additionally, the proposed solution effectively handles challenges like side channel and physical cloning attacks, which many existing frameworks fail to address. The implementation of this mechanism involves the use of a reduced encumbered blockchain, coupled with AI-based authentication through duel gaming, showcasing its efficiency and physical-level support, a feature not present in most existing blockchain-based IoV verification frameworks.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"45 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Joint Design of Microservice Deployment and Routing in Cloud Data Centers
Pub Date: 2024-03-26 | DOI: 10.1007/s10723-024-09759-1
In recent years, internet enterprises have transitioned from traditional monolithic services to microservice architectures to better meet evolving business requirements. However, this also brings great challenges to service providers' resource management. Existing research has not fully considered the request characteristics of internet application scenarios: some studies apply traditional task-scheduling models and strategies to microservice scheduling, while others optimize microservice deployment and request routing separately. In this paper, we propose a microservice instance deployment algorithm based on genetic and local search, and a request routing algorithm based on probabilistic forwarding. The service graph with complex dependencies is decomposed into multiple service chains, and an open Jackson queuing network is applied to analyze the performance of the microservice system. Evaluation results demonstrate that our scheme significantly outperforms the benchmark strategy, reducing average response latency by 37-67% and improving the request success rate by 8-115% compared to the baseline algorithms.
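The open Jackson network analysis is a standard calculation and can be shown concretely: solve the traffic equations for per-service arrival rates, then apply M/M/1 formulas and Little's law. The three-service chain, rates, and routing matrix below are hypothetical, not taken from the paper.

```python
import numpy as np

# Open Jackson network: traffic equations + per-node M/M/1 analysis.
gamma = np.array([5.0, 0.0, 0.0])  # external arrivals/s (enter at service 0)
P = np.array([                     # P[i, j]: prob. a job moves from i to j
    [0.0, 0.8, 0.2],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],               # service 2 exits the system
])
mu = np.array([12.0, 10.0, 9.0])   # service rates/s

# Traffic equations: lambda = gamma + P^T lambda  =>  (I - P^T) lambda = gamma
lam = np.linalg.solve(np.eye(3) - P.T, gamma)
rho = lam / mu                     # utilization; each must be < 1 for stability
T = 1.0 / (mu - lam)               # mean sojourn time per node (M/M/1)
N = lam * T                        # mean jobs per node (Little's law)
print("lambda:", lam.round(2), "rho:", rho.round(2))
# End-to-end mean response time: total jobs / total external arrival rate
print("mean end-to-end response (s):", (N.sum() / gamma.sum()).round(3))
```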
{"title":"On the Joint Design of Microservice Deployment and Routing in Cloud Data Centers","authors":"","doi":"10.1007/s10723-024-09759-1","DOIUrl":"https://doi.org/10.1007/s10723-024-09759-1","url":null,"abstract":"<h3>Abstract</h3> <p>In recent years, internet enterprises have transitioned from traditional monolithic service to microservice architecture to better meet evolving business requirements. However, it also brings great challenges to the resource management of service providers. Existing research has not fully considered the request characteristics of internet application scenarios. Some studies apply traditional task scheduling models and strategies to microservice scheduling scenarios, while others optimize microservice deployment and request routing separately. In this paper, we propose a microservice instance deployment algorithm based on genetic and local search, and a request routing algorithm based on probabilistic forwarding. The service graph with complex dependencies is decomposed into multiple service chains, and the open Jackson queuing network is applied to analyze the performance of the microservice system. Data evaluation results demonstrate that our scheme significantly outperforms the benchmark strategy. Our algorithm has reduced the average response latency by 37%-67% and enhanced request success rate by 8%-115% compared to other baseline algorithms.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"8 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140302885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Performance of Smart Education Systems by Integrating Machine Learning on Edge Devices and Cloud in Educational Institutions
Pub Date: 2024-03-14 | DOI: 10.1007/s10723-024-09755-5
Shujie Qiu
Educational institutions today are embracing technology to enhance education quality through intelligent systems. This study introduces an innovative strategy to boost the performance of such systems by seamlessly integrating machine learning on edge devices and cloud infrastructure. The proposed framework harnesses a Hybrid 1D Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) architecture, offering profound insights into intelligent education. Operating at the crossroads of localised and centralised analyses, the Hybrid 1D CNN-LSTM architecture marks a significant advancement. It directly engages the edge devices used by students and educators, laying the groundwork for personalised learning experiences. By harmonising 1D CNN layers and LSTM modules, the architecture adeptly captures the intricacies of various modalities, including text, images, and videos, facilitating the extraction of tailored features from each modality and the exploration of temporal dynamics. Consequently, the architecture provides a holistic comprehension of student engagement and comprehension dynamics, unveiling individual learning preferences. Moreover, the framework seamlessly integrates data from edge devices into the cloud infrastructure, allowing insights from both domains to merge. Educators benefit from attention-enhanced feature maps that encapsulate personalised insights, empowering them to customise content and strategies according to student learning preferences. The approach bridges real-time, localised analysis with comprehensive cloud-mediated insights, paving the way for transformative educational experiences. Empirical validation reinforces the effectiveness of the Hybrid 1D CNN-LSTM architecture, cementing its potential to revolutionise intelligent education within academic institutions. This fusion of machine learning across edge devices and cloud architecture can reshape the educational landscape, ushering in a smarter, more responsive learning environment that caters to the diverse needs of students and educators alike.
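A hybrid 1D CNN-LSTM of the kind described is straightforward to express in Keras. The sketch below is an illustrative architecture only: the layer sizes, sequence length, feature count, and the three engagement classes are assumptions, not the paper's configuration.

```python
import tensorflow as tf

# Illustrative hybrid 1D CNN-LSTM: convolutions extract local patterns from
# per-student interaction sequences; the LSTM models temporal dynamics.
SEQ_LEN, N_FEATURES, N_CLASSES = 128, 16, 3   # hypothetical dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.LSTM(64),                 # temporal dynamics over the sequence
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),  # engagement level
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```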
{"title":"Improving Performance of Smart Education Systems by Integrating Machine Learning on Edge Devices and Cloud in Educational Institutions","authors":"Shujie Qiu","doi":"10.1007/s10723-024-09755-5","DOIUrl":"https://doi.org/10.1007/s10723-024-09755-5","url":null,"abstract":"<p>Educational institutions today are embracing technology to enhance education quality through intelligent systems. This study introduces an innovative strategy to boost the performance of such procedures by seamlessly integrating machine learning on edge devices and cloud infrastructure. The proposed framework harnesses the capabilities of a Hybrid 1D Convolutional Neural Network (CNN) and Long Short-Term Memory Network (LSTM) architecture, offering profound insights into intelligent education. Operating at the crossroads of localised and centralised analyses, the Hybrid 1D CNN-LSTM architecture signifies a significant advancement. It directly engages edge devices used by students and educators, laying the groundwork for personalised learning experiences. This architecture adeptly captures the intricacies of various modalities, including text, images, and videos, by harmonising 1D CNN layers and LSTM modules. This approach facilitates the extraction of tailored features from each modality and the exploration of temporal intricacies. Consequently, the architecture provides a holistic comprehension of student engagement and comprehension dynamics, unveiling individual learning preferences. Moreover, the framework seamlessly integrates data from edge devices into the cloud infrastructure, allowing insights from both domains to merge. Educators benefit from attention-enhanced feature maps that encapsulate personalised insights, empowering them to customise content and strategies according to student learning preferences. The approach bridges real-time, localised analysis with comprehensive cloud-mediated insights, paving the path for transformative educational experiences. Empirical validation reinforces the effectiveness of the Hybrid 1D CNN-LSTM architecture, cementing its potential to revolutionise intelligent education within academic institutions. This fusion of machine learning across edge devices and cloud architecture can reshape the educational landscape, ushering in a more innovative and more responsive learning environment that caters to the diverse needs of students and educators alike.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"2 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140147452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost-efficient Workflow as a Service using Containers
Pub Date: 2024-03-11 | DOI: 10.1007/s10723-024-09745-7
Kamalesh Karmakar, Anurina Tarafdar, Rajib K. Das, Sunirmal Khatua
Workflows are special applications used to solve complex scientific problems. The emerging Workflow as a Service (WaaS) model provides scientists with an effective way of deploying their workflow applications in Cloud environments, executing multiple workflows in a multi-tenant Cloud. Scheduling the workflows' tasks in the WaaS model poses several challenges: the scheduling approach must properly utilize the underlying Cloud resources and satisfy the users' Quality of Service (QoS) requirements for all workflows. In this work, we have proposed a heuristic approach for scheduling deadline-sensitive workflows in a containerized Cloud environment under the WaaS model. We formulated the problem of minimizing the MIPS (million instructions per second) requirement of tasks while satisfying the workflows' deadlines as a non-linear optimization problem and applied the Lagrange multiplier method to solve it. This allows us to configure and scale the containers' resources and reduce costs, while ensuring maximum utilization of VM resources when allocating containers to VMs. Furthermore, we have proposed an approach to scale containers and VMs effectively at runtime, improving the schedulability of dynamically arriving workflows. Extensive experiments and comparisons with other state-of-the-art works show that the proposed approach can significantly improve resource utilization, prevent deadline violations, and reduce the cost of renting Cloud resources for the WaaS model.
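For a single chain of tasks, the Lagrange-multiplier step admits a closed form worth seeing: minimizing total MIPS sum(m_i) subject to sum(l_i / m_i) <= D makes each m_i proportional to the square root of its task length. This is a simplified version of the paper's formulation, and the task lengths and deadline below are hypothetical.

```python
import numpy as np

# Minimize sum(m_i) s.t. sum(l_i / m_i) <= D. Stationarity of the Lagrangian
# gives m_i = sqrt(lambda * l_i); the active constraint then fixes
# m_i = sqrt(l_i) * sum_j sqrt(l_j) / D.
l = np.array([400.0, 900.0, 1600.0])   # task lengths (million instructions)
D = 10.0                               # workflow deadline (seconds)

m = np.sqrt(l) * np.sqrt(l).sum() / D  # optimal MIPS per container
print("MIPS per task:", m)             # [180. 270. 360.]
print("deadline used:", (l / m).sum()) # exactly D = 10.0
print("total MIPS   :", m.sum())       # 810.0

# Naive equal time share per task needs more total capacity for the same D:
m_eq = 3.0 * l / D
print("equal-share total MIPS:", m_eq.sum())   # 870.0
```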
{"title":"Cost-efficient Workflow as a Service using Containers","authors":"Kamalesh Karmakar, Anurina Tarafdar, Rajib K. Das, Sunirmal Khatua","doi":"10.1007/s10723-024-09745-7","DOIUrl":"https://doi.org/10.1007/s10723-024-09745-7","url":null,"abstract":"<p>Workflows are special applications used to solve complex scientific problems. The emerging Workflow as a Service (WaaS) model provides scientists with an effective way of deploying their workflow applications in Cloud environments. The WaaS model can execute multiple workflows in a multi-tenant Cloud environment. Scheduling the tasks of the workflows in the WaaS model has several challenges. The scheduling approach must properly utilize the underlying Cloud resources and satisfy the users’ Quality of Service (QoS) requirements for all the workflows. In this work, we have proposed a heurisine-sensitive workflows in a containerized Cloud environment for the WaaS model. We formulated the problem of minimizing the MIPS (million instructions per second) requirement of tasks while satisfying the deadline of the workflows as a non-linear optimization problem and applied the Lagranges multiplier method to solve it. It allows us to configure/scale the containers’ resources and reduce costs. We also ensure maximum utilization of VM’s resources while allocating containers to VMs. Furthermore, we have proposed an approach to effectively scale containers and VMs to improve the schedulability of the workflows at runtime to deal with the dynamic arrival of the workflows. Extensive experiments and comparisons with other state-of-the-art works show that the proposed approach can significantly improve resource utilization, prevent deadline violation, and reduce the cost of renting Cloud resources for the WaaS model.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"34 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140097911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}