Journal of Grid Computing: Latest Publications

Exploring the Synergy of Blockchain, IoT, and Edge Computing in Smart Traffic Management across Urban Landscapes
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-04-17 · DOI: 10.1007/s10723-024-09762-6
Yu Chen, Yilun Qiu, Zhenyu Tang, Shuling Long, Lingfeng Zhao, Zhong Tang

In the ever-evolving landscape of smart city transportation, effective traffic management remains a critical challenge. To address this, we propose a novel Smart Traffic Management System (STMS) Architecture algorithm that combines cutting-edge technologies, including Blockchain, IoT, edge computing, and reinforcement learning. STMS aims to optimize traffic flow, minimize congestion, and enhance transportation efficiency while ensuring data integrity, security, and decentralized decision-making. STMS integrates the Twin Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning algorithm with Blockchain technology to enable secure and transparent data sharing among traffic-related entities. Smart contracts are deployed on the Blockchain to automate the execution of predefined traffic rules, ensuring compliance and accountability. Integrating IoT sensors on vehicles, roadways, and traffic signals provides real-time traffic data, while edge nodes perform local traffic analysis and contribute to optimization. The algorithm's decentralized decision-making empowers edge devices, traffic signals, and vehicles to interact autonomously, making informed decisions based on local data and predefined rules stored on the Blockchain. TD3 optimizes traffic signal timings, route suggestions, and traffic flow control, ensuring smooth transportation operations. STMS's holistic approach addresses traffic management challenges in smart cities by combining advanced technologies. By leveraging Blockchain's immutability, IoT's real-time insights, edge computing's local intelligence, and TD3's reinforcement learning capabilities, STMS presents a robust solution for achieving efficient and secure transportation systems. This research underscores the potential for innovative algorithms to revolutionize urban mobility, ushering in a new era of smart and sustainable transportation networks.
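
To make the decentralized decision loop concrete, here is a minimal Python sketch (not from the paper): an edge node reads a simulated per-cycle vehicle count from IoT sensors, looks up predefined rules from a stand-in for the Blockchain rule store, and adjusts a signal's green time locally. The RuleStore class, thresholds, and timing values are illustrative assumptions.

```python
# Hypothetical sketch: edge-node decision using rules from a blockchain stand-in.
# The dict-based RuleStore is a stub for smart-contract state; a real system
# would query a ledger node instead.

class RuleStore:
    """Stand-in for predefined traffic rules stored on a blockchain."""
    def __init__(self):
        self.rules = {"congestion_threshold": 40,   # vehicles per cycle
                      "green_step_s": 10,           # seconds to add/remove
                      "green_bounds_s": (15, 90)}   # min/max green time

    def get(self, key):
        return self.rules[key]

def adjust_green_time(vehicle_count, current_green_s, store):
    """Local decision: lengthen green when congested, shorten otherwise."""
    lo, hi = store.get("green_bounds_s")
    step = store.get("green_step_s")
    if vehicle_count > store.get("congestion_threshold"):
        return min(hi, current_green_s + step)
    return max(lo, current_green_s - step)

if __name__ == "__main__":
    store = RuleStore()
    green = 30
    for count in [25, 45, 60, 38, 20]:      # simulated per-cycle sensor counts
        green = adjust_green_time(count, green, store)
        print(f"vehicles={count:3d} -> green={green}s")
```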

Citations: 0
Micro Frontend Based Performance Improvement and Prediction for Microservices Using Machine Learning
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-04-16 · DOI: 10.1007/s10723-024-09760-8
Neha Kaushik, Harish Kumar, Vinay Raj

Microservices have become a buzzword in industry, as many large IT giants such as Amazon, Twitter, and Uber have started migrating their existing applications to this new style, and a few of them have started building their new applications with this style. Due to increasing user requirements and the need to add more business functionalities to existing applications, web applications designed using the microservices style also face a few performance challenges. Though this style has been successfully adopted in the design of large enterprise applications, the applications still face performance-related issues. It is clear from the literature that most articles focus only on the backend microservices. To the best of our knowledge, no solution has been proposed that considers micro frontends along with the backend microservices. To improve the performance of microservices-based web applications, this paper presents a new framework for the design of web applications with micro frontends in the frontend and microservices in the backend of the application. To assess the proposed framework, an empirical investigation is performed to analyze its performance, and it is found that applications designed with micro frontends and microservices perform better than applications with monolithic frontends. Additionally, to predict the performance of microservices-based applications, a machine learning model is proposed, as machine learning has wide applications in software-engineering-related activities. The accuracy of the proposed model using different metrics is also presented.
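
The abstract does not name the machine learning model; as an illustration of the prediction step, the following sketch fits a random-forest regressor to synthetic request-level features and reports mean absolute error. The feature set and the synthetic latency function are assumptions.

```python
# Hypothetical sketch: predicting microservice response time from runtime
# features with a regression model. The features and the random-forest choice
# are illustrative; the paper evaluates its own model and metrics.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(1, 500, n),      # requests per second
    rng.uniform(0.1, 1.0, n),     # CPU utilisation of the service
    rng.integers(1, 10, n),       # number of downstream calls
])
# Synthetic ground truth: latency grows with load and fan-out.
y = 5 + 0.02 * X[:, 0] + 40 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE (ms):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```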

Citations: 0
CIA Security for Internet of Vehicles and Blockchain-AI Integration
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-04-02 · DOI: 10.1007/s10723-024-09757-3
Tao Hai, Muammer Aksoy, Celestine Iwendi, Ebuka Ibeke, Senthilkumar Mohan

The lack of data security and the hazardous nature of the Internet of Vehicles (IoV), in the absence of networking settings, have prevented the openness and self-organization of the vehicle networks of IoV cars. The lapses originating in the areas of Confidentiality, Integrity, and Authenticity (CIA) have also increased the possibility of malicious attacks. To overcome these challenges, this paper proposes an updated Games-based CIA security mechanism to secure IoVs using Blockchain and Artificial Intelligence (AI) technology. The proposed framework consists of a trustworthy three-layer authorization solution, including the authentication of vehicles using Physical Unclonable Functions (PUFs), a flexible Proof-of-Work (dPOW) consensus framework, and AI-enhanced duel gaming. The credibility of the framework is validated by different security analyses, showcasing its superiority over existing systems in terms of security, functionality, computation, and transaction overhead. Additionally, the proposed solution effectively handles challenges like side-channel and physical cloning attacks, which many existing frameworks fail to address. The implementation of this mechanism involves the use of a reduced encumbered blockchain, coupled with AI-based authentication through duel gaming, showcasing its efficiency and physical-level support, a feature not present in most existing blockchain-based IoV verification frameworks.
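
A rough picture of the PUF-based authentication layer, under heavy simplification: a real PUF derives its response from physical chip variation, so the per-device secret key below is only a software stand-in, and the enrollment/challenge flow shows the generic protocol shape rather than the paper's exact scheme.

```python
# Hypothetical sketch of PUF-style challenge-response authentication.
# A real PUF derives its response from physical chip variation; here a
# per-device secret key stands in for that, so this is protocol shape only.
import hmac, hashlib, os

class SimulatedPUF:
    def __init__(self):
        self._secret = os.urandom(32)   # stand-in for physical uniqueness
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

# Enrollment: the verifier records challenge/response pairs in advance.
device = SimulatedPUF()
challenge = os.urandom(16)
enrolled_response = device.respond(challenge)

# Authentication: re-issue the challenge and compare responses.
claimed = device.respond(challenge)
print("authenticated:", hmac.compare_digest(claimed, enrolled_response))
```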

Citations: 0
On the Joint Design of Microservice Deployment and Routing in Cloud Data Centers
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-03-26 · DOI: 10.1007/s10723-024-09759-1

In recent years, internet enterprises have transitioned from traditional monolithic services to microservice architectures to better meet evolving business requirements. However, this shift also brings great challenges to the resource management of service providers. Existing research has not fully considered the request characteristics of internet application scenarios. Some studies apply traditional task-scheduling models and strategies to microservice scheduling scenarios, while others optimize microservice deployment and request routing separately. In this paper, we propose a microservice instance deployment algorithm based on genetic and local search, and a request routing algorithm based on probabilistic forwarding. The service graph with complex dependencies is decomposed into multiple service chains, and an open Jackson queueing network is applied to analyze the performance of the microservice system. Data evaluation results demonstrate that our scheme significantly outperforms the benchmark strategy. Our algorithm reduces average response latency by 37%-67% and improves the request success rate by 8%-115% compared to other baseline algorithms.
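
As a worked example of the open Jackson queueing analysis, the sketch below solves the traffic equations λ = γ + Pᵀλ for a three-service chain, then computes per-node utilisation and the mean end-to-end response time via Little's law. The routing matrix and rates are invented numbers, not the paper's data.

```python
# Illustrative open Jackson network analysis for a 3-service chain.
import numpy as np

gamma = np.array([5.0, 0.0, 0.0])        # external arrivals (req/s) per service
P = np.array([[0.0, 0.8, 0.0],           # P[i][j]: prob. a job goes i -> j
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])          # remaining probability exits
mu = np.array([10.0, 9.0, 6.0])          # service rates (req/s)

# Traffic equations: lambda = gamma + P^T lambda  =>  (I - P^T) lambda = gamma
lam = np.linalg.solve(np.eye(3) - P.T, gamma)
rho = lam / mu                            # utilisation per node (must be < 1)
T = 1.0 / (mu - lam)                      # mean sojourn time per M/M/1 node
N = lam * T                               # mean jobs per node (Little's law)

print("lambda:", lam.round(3))
print("rho   :", rho.round(3))
print("mean end-to-end time (s):", (N.sum() / gamma.sum()).round(4))
```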

Citations: 0
Improving Performance of Smart Education Systems by Integrating Machine Learning on Edge Devices and Cloud in Educational Institutions
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-03-14 · DOI: 10.1007/s10723-024-09755-5
Shujie Qiu

Educational institutions today are embracing technology to enhance education quality through intelligent systems. This study introduces an innovative strategy to boost the performance of such systems by seamlessly integrating machine learning on edge devices and cloud infrastructure. The proposed framework harnesses the capabilities of a Hybrid 1D Convolutional Neural Network (CNN) and Long Short-Term Memory Network (LSTM) architecture, offering profound insights into intelligent education. Operating at the crossroads of localised and centralised analyses, the Hybrid 1D CNN-LSTM architecture marks a significant advancement. It directly engages edge devices used by students and educators, laying the groundwork for personalised learning experiences. This architecture adeptly captures the intricacies of various modalities, including text, images, and videos, by harmonising 1D CNN layers and LSTM modules. This approach facilitates the extraction of tailored features from each modality and the exploration of temporal intricacies. Consequently, the architecture provides a holistic comprehension of student engagement and comprehension dynamics, unveiling individual learning preferences. Moreover, the framework seamlessly integrates data from edge devices into the cloud infrastructure, allowing insights from both domains to merge. Educators benefit from attention-enhanced feature maps that encapsulate personalised insights, empowering them to customise content and strategies according to student learning preferences. The approach bridges real-time, localised analysis with comprehensive cloud-mediated insights, paving the path for transformative educational experiences. Empirical validation reinforces the effectiveness of the Hybrid 1D CNN-LSTM architecture, cementing its potential to revolutionise intelligent education within academic institutions. This fusion of machine learning across edge devices and cloud architecture can reshape the educational landscape, ushering in a more innovative and more responsive learning environment that caters to the diverse needs of students and educators alike.
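
A minimal sketch of a hybrid 1D CNN + LSTM in PyTorch, assuming fixed-length feature sequences; the layer sizes and single-modality input are illustrative simplifications of the multimodal architecture described above.

```python
# Minimal hybrid 1D CNN + LSTM classifier; dimensions are illustrative,
# not the paper's configuration.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=16, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))  # -> (batch, channels, time/2)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1])      # last time step -> class scores

x = torch.randn(8, 50, 16)                # 8 sequences, 50 steps, 16 features
print(CNNLSTM()(x).shape)                 # torch.Size([8, 3])
```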

Citations: 0
Cost-efficient Workflow as a Service using Containers
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-03-11 · DOI: 10.1007/s10723-024-09745-7
Kamalesh Karmakar, Anurina Tarafdar, Rajib K. Das, Sunirmal Khatua

Workflows are special applications used to solve complex scientific problems. The emerging Workflow as a Service (WaaS) model provides scientists with an effective way of deploying their workflow applications in Cloud environments. The WaaS model can execute multiple workflows in a multi-tenant Cloud environment. Scheduling the tasks of the workflows in the WaaS model presents several challenges. The scheduling approach must properly utilize the underlying Cloud resources and satisfy the users' Quality of Service (QoS) requirements for all the workflows. In this work, we have proposed a heuristic approach for scheduling deadline-sensitive workflows in a containerized Cloud environment for the WaaS model. We formulated the problem of minimizing the MIPS (million instructions per second) requirement of tasks while satisfying the deadline of the workflows as a non-linear optimization problem and applied the Lagrange multiplier method to solve it. This allows us to configure/scale the containers' resources and reduce costs. We also ensure maximum utilization of VMs' resources while allocating containers to VMs. Furthermore, we have proposed an approach to effectively scale containers and VMs to improve the schedulability of the workflows at runtime and deal with the dynamic arrival of workflows. Extensive experiments and comparisons with other state-of-the-art works show that the proposed approach can significantly improve resource utilization, prevent deadline violations, and reduce the cost of renting Cloud resources for the WaaS model.
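
Assuming the non-linear program takes the simple form "minimise Σ mᵢ subject to Σ wᵢ/mᵢ = D" (task lengths wᵢ in million instructions, deadline D seconds), the Lagrange conditions give mᵢ = √λwᵢ, and the constraint fixes the multiplier, yielding the closed form mᵢ = √wᵢ · Σⱼ√wⱼ / D sketched below; the paper's actual formulation may carry further constraints.

```python
# Lagrange-multiplier step under an assumed simple form of the problem:
# minimise total MIPS sum(m_i) subject to the deadline sum(w_i / m_i) = D.
# Stationarity: 1 - lambda * w_i / m_i**2 = 0  =>  m_i = sqrt(lambda * w_i);
# plugging into the constraint gives m_i = sqrt(w_i) * sum_j sqrt(w_j) / D.
import math

def min_mips(workloads_mi, deadline_s):
    s = sum(math.sqrt(w) for w in workloads_mi)
    return [math.sqrt(w) * s / deadline_s for w in workloads_mi]

w = [4000, 9000, 1000]                      # task lengths (million instructions)
m = min_mips(w, deadline_s=10.0)
print([round(x, 1) for x in m])
print("makespan:", sum(wi / mi for wi, mi in zip(w, m)))  # equals the deadline
```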

Citations: 0
Adaptive Scheduling Framework of Streaming Applications based on Resource Demand Prediction with Hybrid Algorithms
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-03-09 · DOI: 10.1007/s10723-024-09756-4
Hongjian Li, Wei Luo, Wenbin Xie, Huaqing Ye, Xiaolin Duan
{"title":"Adaptive Scheduling Framework of Streaming Applications based on Resource Demand Prediction with Hybrid Algorithms","authors":"Hongjian Li, Wei Luo, Wenbin Xie, Huaqing Ye, Xiaolin Duan","doi":"10.1007/s10723-024-09756-4","DOIUrl":"https://doi.org/10.1007/s10723-024-09756-4","url":null,"abstract":"","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140077034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Agent Systems for Collaborative Inference Based on Deep Policy Q-Inference Network
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-02-29 · DOI: 10.1007/s10723-024-09750-w
Shangshang Wang, Yuqin Jing, Kezhu Wang, Xue Wang

This study tackles the problem of increasing efficiency and scalability in deep neural network (DNN) systems by employing collaborative inference, an approach that is gaining popularity owing to its ability to maximize computational resources. It involves splitting a pre-trained DNN model into two parts and running them separately on user equipment (UE) and edge servers. This approach is advantageous because it results in faster and more energy-efficient inference, as computation can be offloaded to edge servers rather than relying solely on UEs. However, a significant challenge of collaborative inference is the dynamic coupling of DNN layers, which makes it difficult to separate and run the layers independently. To address this challenge, we propose a novel approach to optimize collaborative inference in a multi-agent scenario where a single edge server coordinates the inference of multiple UEs. Our proposed method uses an autoencoder-based technique to reduce the size of intermediary features and constructs tasks using the Deep Policy Inference Q-Network (DPIQN). To optimize the collaborative inference, we employ the Deep Recurrent Policy Inference Q-Network (DRPIQN) technique, which allows for a hybrid action space. The test results demonstrate that this approach can significantly reduce inference latency by up to 56% and energy usage by up to 72% on various networks. Overall, the proposed approach provides an efficient and effective method for implementing collaborative inference in multi-agent scenarios, which could have significant implications for developing DNN systems.
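
A hypothetical sketch of the autoencoder idea: compress an intermediate feature map on the UE, ship the small code to the edge server, and reconstruct it there. Dimensions and training details are assumptions; with real, structured features the reconstruction error would be far lower than on the random stand-in data used here.

```python
# Hypothetical PyTorch sketch: compress intermediate DNN features before
# transfer to the edge server, then reconstruct them there.
import torch
import torch.nn as nn

feature_dim, code_dim = 256, 32           # 8x reduction in transferred floats

encoder = nn.Sequential(nn.Linear(feature_dim, code_dim), nn.ReLU())
decoder = nn.Sequential(nn.Linear(code_dim, feature_dim))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

features = torch.randn(512, feature_dim)  # stand-in for real intermediate features
for step in range(200):                   # train to reconstruct the features
    recon = decoder(encoder(features))
    loss = nn.functional.mse_loss(recon, features)
    opt.zero_grad(); loss.backward(); opt.step()

print("reconstruction MSE:", loss.item())
print("bytes per sample:", code_dim * 4, "instead of", feature_dim * 4)
```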

Citations: 0
Dueling Double Deep Q Network Strategy in MEC for Smart Internet of Vehicles Edge Computing Networks
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-02-29 · DOI: 10.1007/s10723-024-09752-8
Haotian Pang, Zhanwei Wang

Advances in communication systems require nearby devices to act as network nodes when those devices are not otherwise in use. One such technology is mobile edge computing (MEC), which provides extensive communication services in the network. In this research, we explore a multiuser smart Internet of Vehicles (IoV) network with MEC assistance, where an edge server can assist in completing compute-intensive jobs from the vehicular users. Many existing works on MEC networks primarily concentrate on minimising system latency to ensure users' quality of service (QoS) by designing offloading strategies; still, they fail to account for the retail prices charged by the server and, as a result, the budgetary constraints of the users. To solve this problem, we present a Dueling Double Deep Q Network (D3QN) with an Optimal Stopping Theory (OST) strategy that helps solve the multi-task joint edge problems and minimises the offloading problems in MEC-based IoV networks. The multi-task offloading model aims to increase the likelihood of offloading to the ideal servers by utilising the OST characteristics. Lastly, simulations show that the proposed methods perform better than traditional ones. The findings demonstrate that the suggested offloading techniques may be successfully applied in mobile nodes and significantly cut the anticipated time required to process the workloads.
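
To illustrate the optimal-stopping ingredient, here is the classic secretary-style rule in Python: observe candidate servers one by one, skip roughly the first n/e, then take the first candidate better than everything seen so far. The uniform utility model is made up, and the paper's OST formulation may differ.

```python
# Secretary-style optimal-stopping rule for picking an offloading server.
import math, random

def choose_server(utilities):
    n = len(utilities)
    k = max(1, int(n / math.e))            # observation phase length
    best_seen = max(utilities[:k])
    for u in utilities[k:]:
        if u > best_seen:                  # first candidate beating the phase
            return u
    return utilities[-1]                   # forced to take the last one

random.seed(1)
trials, hits = 10000, 0
for _ in range(trials):
    utils = [random.random() for _ in range(20)]   # 20 candidate edge servers
    hits += choose_server(utils) == max(utils)
print("picked the best server in", 100 * hits / trials, "% of trials")
```

For twenty candidates this rule recovers the single best server in roughly 37% of trials, the well-known 1/e bound, which is why stopping-theory rules are attractive when servers can only be probed sequentially.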

Citations: 0
Work Scheduling in Cloud Network Based on Deep Q-LSTM Models for Efficient Resource Utilization
IF 5.5 · CAS Tier 2, Computer Science · Q1 Computer Science · Pub Date: 2024-02-28 · DOI: 10.1007/s10723-024-09746-6
Yanli Xing

Edge computing has emerged as an innovative paradigm, bringing cloud service resources closer to mobile consumers at the network's edge. This proximity enables efficient processing of computationally demanding and time-sensitive tasks. However, the dynamic nature of the edge network, characterized by a high density of devices, diverse mobile usage patterns, a wide range of applications, and sporadic traffic, often leads to uneven resource distribution. This imbalance hampers system efficiency and contributes to task failures. To overcome these challenges, we propose a novel approach known as the DRL-LSTM approach, which combines Deep Reinforcement Learning (DRL) with a Long Short-Term Memory (LSTM) architecture. The primary objective of the DRL-LSTM approach is to optimize workload planning in edge computing environments. Leveraging the capabilities of DRL, this approach effectively handles complex and multidimensional workload-planning problems. By incorporating LSTM as a recurrent neural network, it captures and models temporal dependencies in sequential data, enabling efficient workload management, reduced service time, and enhanced task completion rates. Additionally, the DRL-LSTM approach integrates Deep Q-Network (DQN) algorithms to address the complexity and high dimensionality of workload scheduling problems. Through simulations, we demonstrate that the DRL-LSTM approach outperforms alternative approaches in terms of service time, virtual machine (VM) utilization, and the rate of failed tasks. The integration of DRL and LSTM enables the approach to effectively tackle the challenges associated with workload planning in edge computing, leading to improved system performance. The proposed DRL-LSTM approach offers a promising solution for optimizing workload planning in edge computing environments. Combining the power of Deep Reinforcement Learning, the Long Short-Term Memory architecture, and Deep Q-Network algorithms facilitates efficient resource allocation, reduces service time, and increases task completion rates. It holds significant potential for enhancing the overall performance and effectiveness of edge computing systems.
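
As a shape-of-the-loop illustration only, the sketch below replaces the paper's DQN-with-LSTM scheduler with a tiny epsilon-greedy contextual bandit that learns which of three VMs minimises service time per task class; the states, VM speeds, and reward are invented.

```python
# Simplified stand-in for the DRL scheduler: an epsilon-greedy value learner
# assigns each incoming task to one of three VMs and is rewarded for short
# service time. The paper uses a DQN with an LSTM; this only shows the loop.
import random

random.seed(0)
speeds = [1.0, 2.0, 4.0]                   # VM speeds (work units per second)
n_vms = len(speeds)
Q = [[0.0] * n_vms for _ in range(2)]      # states: 0 = light task, 1 = heavy
alpha, eps = 0.1, 0.1

for step in range(5000):
    work = random.choice([1.0, 8.0])       # incoming task size
    state = 0 if work < 4 else 1
    if random.random() < eps:              # explore
        vm = random.randrange(n_vms)
    else:                                  # exploit current value estimates
        vm = max(range(n_vms), key=lambda a: Q[state][a])
    reward = -work / speeds[vm]            # negative service time
    Q[state][vm] += alpha * (reward - Q[state][vm])

for s, name in enumerate(["light", "heavy"]):
    best = max(range(n_vms), key=lambda a: Q[s][a])
    print(f"{name} tasks -> VM {best} (speed {speeds[best]})")
```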

Citations: 0