
Latest Publications from the Journal of Grid Computing

Deep Learning-Based Multi-Domain Framework for End-to-End Services in 5G Networks
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-04 DOI: 10.1007/s10723-023-09714-6
Yanjia Tian, Yan Dong, Xiang Feng

Over the past few years, network slicing has emerged as a pivotal component of 5G technology. It plays a critical role in delineating network services according to a wide range of performance and operational requirements, all drawing from a shared pool of common resources. A core objective of 5G is to support simultaneous network slicing, enabling the creation of multiple distinct end-to-end networks so that the traffic within one slice does not impede or adversely affect the traffic within another. This paper therefore proposes a deep learning-based multi-domain framework for end-to-end network slicing with traffic-aware prediction. The proposed method uses Deep Reinforcement Learning (DRL) for in-depth resource allocation analysis and improves Quality of Service (QoS). The DRL-based multi-domain framework provides traffic-aware prediction and enhances flexibility. The study results demonstrate that the suggested approach outperforms conventional, heuristic, and randomized methods and improves resource utilization while maintaining QoS.
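
The paper's implementation is not included in this listing; the sketch below only illustrates the general DRL pattern the abstract describes — a value-based agent mapping predicted per-slice traffic to a discrete bandwidth-allocation action. The linear Q-approximation, state layout, and all hyperparameters are assumptions for illustration, not the authors' model.

```python
import random
import numpy as np

class SliceAllocatorDQN:
    """Toy DQN-style agent: maps predicted per-slice traffic to a discrete
    bandwidth-allocation action. Illustrative only -- not the paper's model."""

    def __init__(self, n_slices=3, n_actions=5, lr=0.01, gamma=0.9, eps=0.1):
        self.n_slices, self.n_actions = n_slices, n_actions
        self.gamma, self.eps, self.lr = gamma, eps, lr
        # A linear Q-approximation keeps the sketch dependency-free.
        self.W = np.zeros((n_actions, n_slices))

    def q_values(self, traffic):
        return self.W @ traffic                      # one Q-value per action

    def act(self, traffic):
        if random.random() < self.eps:               # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        return int(np.argmax(self.q_values(traffic)))

    def update(self, traffic, action, reward, next_traffic):
        target = reward + self.gamma * np.max(self.q_values(next_traffic))
        td_error = target - self.q_values(traffic)[action]
        self.W[action] += self.lr * td_error * traffic   # SGD on the TD error

agent = SliceAllocatorDQN()
state = np.array([0.6, 0.3, 0.1])                    # predicted load per slice
a = agent.act(state)
agent.update(state, a, reward=1.0, next_traffic=np.array([0.5, 0.4, 0.1]))
```

A production version would replace the linear table with a deep network and derive the reward from measured per-slice QoS rather than a fixed scalar.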

Citations: 0
A Bibliometric Analysis of Convergence of Artificial Intelligence and Blockchain for Edge of Things
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-04 DOI: 10.1007/s10723-023-09716-4
Deepak Sharma, Rajeev Kumar, Ki-Hyun Jung

The convergence of Artificial Intelligence (AI) and Blockchain technologies has emerged as a powerful paradigm for addressing the challenges of data management, security, and privacy in the Edge of Things (EoTs) environment. This bibliometric analysis explores the research landscape and trends surrounding the convergence of AI and Blockchain for EoTs to gain insights into its development and potential implications. Because this is a young field, only research published during the past six years (2018-2023) in Web of Science-indexed sources was considered. A VoSViewer-based full counting methodology was used to analyze citation, co-citation, and co-authorship collaborations among authors, organizations, countries, sources, and documents. The full counting method in VoSViewer assigns equal weight to all authors or sources when calculating bibliometric indicators. Co-occurrence, timeline, and burst detection analyses of keywords and published articles were also carried out to unravel significant research trends on the convergence of AI and Blockchain for EoTs. Our findings reveal steady growth in research output, indicating the increasing importance of and interest in AI-enabled Blockchain solutions for EoTs. The analysis also uncovered key influential researchers and institutions driving advancements in this domain, shedding light on potential collaborative networks and knowledge hubs. Additionally, the study examines the evolution of research themes over time, offering insights into emerging areas and future research directions. This bibliometric analysis contributes to the understanding of the state of the art in the convergence of AI and Blockchain for EoTs, highlighting the most influential works and identifying knowledge gaps. Researchers, industry practitioners, and policymakers can leverage these findings to inform their research strategies and decision-making processes, fostering innovation in this cutting-edge interdisciplinary field.
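
As a small illustration of the full counting rule described above — every co-author of a paper receives a full unit of credit, unlike fractional counting, which splits the unit — the sketch below contrasts the two on invented paper records.

```python
from collections import Counter

# Hypothetical records: each paper lists its authors.
papers = [
    {"authors": ["A", "B", "C"]},
    {"authors": ["A", "B"]},
    {"authors": ["C"]},
]

full = Counter()        # full counting: every co-author gets weight 1
fractional = Counter()  # fractional counting, shown only for contrast

for p in papers:
    for a in p["authors"]:
        full[a] += 1
        fractional[a] += 1 / len(p["authors"])

print(full)        # Counter({'A': 2, 'B': 2, 'C': 2})
print(fractional)  # A: 0.833..., B: 0.833..., C: 1.333...
```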

Citations: 0
Smart Financial Investor’s Risk Prediction System Using Mobile Edge Computing
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-04 DOI: 10.1007/s10723-023-09710-w
Caijun Cheng, Huazhen Huang

Economic and social growth has propelled the financial system into a new era. Public and corporate financial investment operations have risen significantly in this climate, and they now play a substantial role in the efficient use of market capital. Because risk and speculation coexist, the finance sector is exposed to high-risk events that destabilize market order and cause definite financial losses. An organization’s operational risk is a significant barrier to its growth: a small act of negligence can cause a business’s standing to erode rapidly. Strengthening funding management and risk forecasting is essential for the successful development of companies, enhancing their competitiveness in the marketplace and minimizing negative effects. This study therefore adopts the idea of mobile edge computing and builds an intelligent system that can forecast risks throughout the financial investment process based on the operational knowledge of major investment platforms. A CNN-LSTM approach based on knowledge graphs is then used to forecast financial risks. The results are thoroughly examined through tests, demonstrating that the methodology can accurately estimate the risk associated with financial investments. Finally, a plan for improving the financial risk prediction system is put forward.
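
The abstract names a knowledge-graph-driven CNN-LSTM but gives no architectural details; the following Keras sketch shows only the generic CNN-LSTM stack such a predictor could use. Input shape, layer sizes, and the random placeholder data are all assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Shapes are illustrative: 30 time steps, 8 features per step
# (e.g., knowledge-graph-derived indicators), binary risk label.
model = keras.Sequential([
    layers.Input(shape=(30, 8)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                      # temporal dependencies
    layers.Dense(1, activation="sigmoid"),                # risk probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(256, 30, 8).astype("float32")          # placeholder data
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```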

Citations: 0
Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-12-04 DOI: 10.1007/s10723-023-09708-4
Xiaohu Gao, Mei Choo Ang, Sara A. Althubiti

Mobile Edge Computing (MEC) offers cloud-like capabilities to mobile users, making it a promising method for advancing the Internet of Things (IoT). However, current approaches are limited by factors such as network latency, bandwidth, energy consumption, task characteristics, and edge server overload. To address these limitations, this research proposes a novel approach that integrates Deep Reinforcement Learning (DRL) with the Deep Deterministic Policy Gradient (DDPG) and a Markov Decision Problem formulation for task offloading in MEC. Among DRL algorithms, the ITODDPG algorithm, based on DDPG and the MDP formulation, is a popular choice for task offloading in MEC. First, the ITODDPG algorithm formulates the task offloading problem in MEC as an MDP, which enables the agent to learn a policy that maximizes the expected cumulative reward. Second, ITODDPG employs a deep neural network to approximate the Q-function, which maps state-action pairs to their expected cumulative rewards. Finally, the experimental results demonstrate that the ITODDPG algorithm outperforms the baseline algorithms in average reward and convergence speed. In addition to its superior performance, the proposed approach can learn complex non-linear policies using a DNN and an information-theoretic objective function to improve task offloading performance in MEC. Compared to traditional methods, it delivers improved performance, making it highly effective for IoT environments. Experimental trials indicate that the suggested approach outperforms the other three baseline methods. It is highly scalable, capable of handling large and complex environments, and suitable for deployment in real-world scenarios, ensuring broad applicability to a diverse range of task offloading and MEC applications.
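
The exact MDP used by ITODDPG is not given in the abstract; the snippet below sketches one common way such an offloading MDP is set up — a state of task and channel features, a continuous offload-fraction action (matching DDPG's continuous action space), and a reward penalizing a weighted sum of delay and energy. Every constant and the dynamics are illustrative assumptions.

```python
import numpy as np

# Illustrative MDP for task offloading (not the paper's exact formulation).
# State: [task_bits, cpu_cycles, channel_gain, server_load]; action in [0, 1]
# is the fraction of the task offloaded to the edge server.

W_DELAY, W_ENERGY = 0.7, 0.3   # assumed reward weights
F_LOCAL, F_EDGE = 1e9, 8e9     # CPU cycles/s for device and edge, assumed
P_TX, RATE = 0.5, 2e7          # transmit power (W) and uplink rate (bits/s)
KAPPA = 1e-27                  # effective switched capacitance, assumed

def step(state, action):
    bits, cycles, gain, load = state
    local_cycles = (1 - action) * cycles
    edge_cycles = action * cycles
    t_local = local_cycles / F_LOCAL
    t_tx = action * bits / (RATE * gain)            # upload time
    t_edge = edge_cycles / (F_EDGE * (1 - load))    # load-adjusted service time
    delay = max(t_local, t_tx + t_edge)             # local and edge run in parallel
    energy = KAPPA * F_LOCAL**2 * local_cycles + P_TX * t_tx
    reward = -(W_DELAY * delay + W_ENERGY * energy)
    next_state = np.array([bits, cycles, gain,
                           np.clip(load + 0.05 * action, 0.0, 0.9)])
    return next_state, reward

s = np.array([2e6, 5e8, 0.8, 0.3])
s2, r = step(s, action=0.6)
print(round(r, 4))
```

A DDPG actor-critic pair would then be trained against this `step` function, with the actor emitting the offload fraction and the critic approximating the Q-function described above.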

Citations: 0
Efficient Prediction of Makespan Matrix Workflow Scheduling Algorithm for Heterogeneous Cloud Environments
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-28 DOI: 10.1007/s10723-023-09711-9
Longxin Zhang, Minghui Ai, Runti Tan, Junfeng Man, Xiaojun Deng, Keqin Li

Leveraging a cloud computing environment to execute workflow applications offers high flexibility and strong scalability, thereby significantly improving resource utilization. Current research focuses heavily on reducing the scheduling length (makespan) of parallel task sets and improving the efficiency of large workflow applications in cloud computing environments. Effectively managing task dependencies and execution sequences is crucial to designing efficient workflow scheduling algorithms. This study proposes a high-efficiency workflow scheduling algorithm based on a predict makespan matrix (PMMS) for heterogeneous cloud computing environments. First, PMMS calculates the priority of each task based on the predict makespan (PM) matrix and obtains the task scheduling list. Second, the optimistic scheduling length (OSL) value of each task is calculated based on the PM matrix and the earliest finish time. Third, the best virtual machine is selected for each task according to the minimum OSL value. Extensive experiments show that, compared with the state-of-the-art HEFT, PEFT, and PPTS algorithms, PMMS reduces the workflow scheduling length by 6.84%–15.17%, 5.47%–11.39%, and 4.74%–17.27%, respectively, while preserving precedence constraints and without increasing time complexity.
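
The abstract outlines the three PMMS steps but not the PM-matrix or OSL formulas, so the list-scheduling skeleton below only mirrors their structure: rank tasks by a precomputed predicted-makespan lookahead, then place each task on the VM minimizing an earliest-finish-time-plus-lookahead score. The pm, cost, and deps tables are invented toy values.

```python
# List-scheduling skeleton in the spirit of PMMS (assumed definitions: the
# abstract does not give the exact PM-matrix or OSL formulas). pm[t][v] is a
# precomputed predicted-makespan lookahead for task t on VM v; cost[t][v] is
# t's execution time on v; deps maps each task to its predecessors.

cost = {"A": [3, 5], "B": [4, 4], "C": [6, 3], "D": [2, 2]}
pm   = {"A": [9, 10], "B": [7, 8], "C": [6, 4], "D": [2, 2]}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Rank by PM lookahead; these toy ranks happen to respect dependencies,
# but a full implementation would use an upward-rank-style priority.
priority = {t: max(pm[t]) for t in cost}
order = sorted(cost, key=lambda t: -priority[t])

vm_free = [0.0, 0.0]          # time when each VM is next idle
finish = {}
for t in order:
    ready = max((finish[p] for p in deps[t]), default=0.0)
    # OSL-style score: earliest finish time plus the PM lookahead on that VM.
    best = min(range(2),
               key=lambda v: max(ready, vm_free[v]) + cost[t][v] + pm[t][v])
    start = max(ready, vm_free[best])
    finish[t] = start + cost[t][best]
    vm_free[best] = finish[t]

print(finish)   # task -> completion time; makespan = max(finish.values())
```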

Citations: 0
Sustainable Environmental Design Using Green IOT with Hybrid Deep Learning and Building Algorithm for Smart City
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-27 DOI: 10.1007/s10723-023-09704-8
Yuting Zhong, Zesheng Qin, Abdulmajeed Alqhatani, Ahmed Sayed M. Metwally, Ashit Kumar Dutta, Joel J. P. C. Rodrigues

Smart cities and urbanization rely on enormous numbers of IoT devices to transfer data for analysis and information processing. These deployments can involve billions of devices that transfer essential data from their surroundings, and the tremendous data exchange among them creates a massive demand for energy. Green IoT aims to make the environment a better place while lowering the power usage of IoT devices. In this work, a hybrid deep learning method, Green Energy-Efficient Routing (GEER) with a long short-term memory deep Q-network (LSTM DQN), is used to minimize device energy consumption. First, GEER with Ant Colony Optimization (ACO) and an AutoEncoder (AE) provides efficient routing between devices in the network. Next, the LSTM DQN-based Reinforcement Learning (RL) method reduces the energy consumption of IoT devices. This hybrid approach leverages the strengths of each technique to address different aspects of energy-efficient routing: ACO and AE contribute to efficient routing decisions, while the LSTM DQN optimizes energy consumption, resulting in a well-rounded solution. Finally, the proposed GELSDQN-ACO method is compared with previous methods such as RNN-LSTM, DPC-DBN, and LSTM-DQN. Moreover, we critically analyze green IoT and perform implementation and evaluation.
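
Of the three GEER components, only the ACO routing stage lends itself to a compact sketch; the snippet below shows plain pheromone-biased next-hop selection over edge energy costs. The autoencoder and LSTM DQN stages are omitted, and the graph and ACO constants are assumptions.

```python
import random

# Minimal ACO next-hop sketch for energy-aware routing (illustrative only).
# graph[u] = list of (neighbor, energy_cost); tau holds pheromone per edge.

graph = {"S": [("A", 2.0), ("B", 1.5)], "A": [("T", 1.0)],
         "B": [("T", 2.5)], "T": []}
tau = {(u, v): 1.0 for u in graph for v, _ in graph[u]}
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.1, 1.0   # assumed ACO constants

def choose_next(u):
    opts = graph[u]
    weights = [tau[(u, v)] ** ALPHA * (1.0 / c) ** BETA for v, c in opts]
    return random.choices(opts, weights=weights)[0]

for _ in range(200):                        # one ant per iteration
    node, path, path_cost = "S", [], 0.0
    while node != "T":
        nxt, c = choose_next(node)
        path.append((node, nxt))
        path_cost += c
        node = nxt
    for e in tau:                           # evaporate, then deposit
        tau[e] *= (1 - RHO)
    for e in path:
        tau[e] += Q / path_cost

best = max(graph["S"], key=lambda vc: tau[("S", vc[0])])
print("preferred first hop:", best[0])      # converges to the cheaper route S->A->T
```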

Citations: 0
An Auto-Scaling Approach for Microservices in Cloud Computing Environments
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-27 DOI: 10.1007/s10723-023-09713-7
Matineh ZargarAzad, Mehrdad Ashtiani

Recently, microservices have become a commonly used architectural pattern for building cloud-native applications. Cloud computing provides flexibility for service providers, allowing them to remove or add resources depending on the workload of their web applications. If the resources allocated to a service are not aligned with its requirements, failures and delayed responses increase, resulting in customer dissatisfaction. This problem is a significant challenge in microservices-based applications, because thousands of microservices in a system may have complex interactions. Auto-scaling is a feature of cloud computing that enables resource scalability on demand, allowing service providers to deliver resources to their applications without human intervention under a dynamic workload, minimizing resource cost and latency while maintaining quality-of-service requirements. In this research, we aimed to establish a computational model for analyzing the workload of all microservices. To this end, the overall workload entering the system was considered, along with the relationships and function calls between microservices, because accurately monitoring thousands of microservices and gathering precise performance metrics in a large-scale application is usually difficult. We then developed a multi-criteria decision-making method to select candidate microservices for scaling. We tested the proposed approach on three datasets. The experiments show that input load toward microservices is detected with an average accuracy of about 99%, a notable result. Furthermore, the proposed approach substantially enhances resource utilization, achieving average improvements of 40.74%, 20.28%, and 28.85% across the three datasets compared with existing methods. This is achieved through a notable reduction in the number of scaling operations, lowering the count by 54.40%, 55.52%, and 69.82%, respectively. Consequently, this optimization translates into a decrease in required resources, reducing costs by 1.64%, 1.89%, and 1.67%, respectively.
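
The paper's criteria and weights are not listed in the abstract; as a hedged illustration of a multi-criteria selection step, the sketch below scores microservices by simple additive weighting over made-up CPU, latency, and request-rate metrics and ranks them as scale-out candidates.

```python
# Simple additive weighting (SAW) sketch for picking scaling candidates.
# Metrics and weights below are illustrative, not the paper's values.

services = {
    "auth":    {"cpu": 0.85, "p95_latency_ms": 240, "req_rate": 900},
    "catalog": {"cpu": 0.55, "p95_latency_ms": 120, "req_rate": 400},
    "orders":  {"cpu": 0.92, "p95_latency_ms": 310, "req_rate": 700},
}
weights = {"cpu": 0.4, "p95_latency_ms": 0.4, "req_rate": 0.2}

def normalize(metric):
    """Min-max normalize one metric across all services."""
    vals = [m[metric] for m in services.values()]
    lo, hi = min(vals), max(vals)
    return {s: (m[metric] - lo) / (hi - lo or 1) for s, m in services.items()}

norm = {metric: normalize(metric) for metric in weights}
score = {s: sum(weights[m] * norm[m][s] for m in weights) for s in services}

# Higher score = more pressure = stronger scale-out candidate.
for s in sorted(score, key=score.get, reverse=True):
    print(f"{s}: {score[s]:.2f}")
```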

Citations: 0
AI and Blockchain Assisted Framework for Offloading and Resource Allocation in Fog Computing
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-27 DOI: 10.1007/s10723-023-09694-7
Mohammad Aknan, Maheshwari Prasad Singh, Rajeev Arya

The role of Internet of Things (IoT) applications has grown tremendously in areas such as healthcare, agriculture, academia, industry, transportation, and smart cities, improving human life. The number of IoT devices is increasing exponentially, generating huge amounts of data that IoT nodes cannot handle. A centralized cloud architecture can process this enormous IoT data but fails to offer quality of service (QoS) due to high transmission latency, network congestion, and limited bandwidth. The fog paradigm has evolved to bring computing resources to the network edge, offering services to latency-sensitive IoT applications. Still, offloading decisions, heterogeneous fog networks, diverse workloads, security issues, energy consumption, and expected QoS remain significant challenges in this area. Hence, we propose a Blockchain-enabled intelligent framework to tackle these issues and allocate optimal resources for incoming IoT requests in a collaborative cloud-fog environment. The proposed framework integrates an Artificial Intelligence (AI)-based meta-heuristic algorithm that has a high convergence rate and can make offloading decisions at run time, leading to improved result quality. Blockchain technology secures IoT applications and their data from modern attacks. Experimental results show that the proposed framework improves execution time and cost by up to 20% and energy consumption by up to 18% over other meta-heuristic approaches under similar experimental environments.
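
The specific meta-heuristic is not identified in the abstract; a plain genetic algorithm over a binary offload vector is shown below purely to illustrate the fitness-driven offloading decision, with all task parameters and weights invented.

```python
import random

# Toy meta-heuristic (a plain genetic algorithm) for binary offload decisions:
# gene i = 1 offloads task i to fog, 0 keeps it on the IoT device.

N, POP, GENS = 8, 20, 50
local_t = [random.uniform(2, 6) for _ in range(N)]   # seconds on device
fog_t   = [random.uniform(1, 3) for _ in range(N)]   # seconds on fog, incl. transfer
local_e = [t * 0.8 for t in local_t]                 # joules, assumed power model
fog_e   = [t * 0.3 for t in fog_t]

def fitness(ind):  # lower is better: weighted completion time + energy
    t = sum(fog_t[i] if g else local_t[i] for i, g in enumerate(ind))
    e = sum(fog_e[i] if g else local_e[i] for i, g in enumerate(ind))
    return 0.6 * t + 0.4 * e

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]                          # keep the better half
    children = []
    while len(children) < POP - len(elite):
        a, b = random.sample(elite, 2)               # one-point crossover
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                    # bit-flip mutation
            j = random.randrange(N)
            child[j] ^= 1
        children.append(child)
    pop = elite + children

pop.sort(key=fitness)
print("best offload plan:", pop[0], "fitness:", round(fitness(pop[0]), 2))
```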

Citations: 0
Secured SDN Based Task Scheduling in Edge Computing for Smart City Health Monitoring Operation Management System
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-22 DOI: 10.1007/s10723-023-09707-5
Shuangshuang Zhang, Yue Tang, Dinghui Wang, Noorliza Karia, Chenguang Wang

Health monitoring systems (HMS) built on wearable IoT devices are constantly being developed and improved, but most of these devices have limited energy and processing power due to resource constraints. Mobile edge computing (MEC) must be used to analyze HMS information to decrease bandwidth usage and improve reaction times for latency-dependent, computation-intensive applications. To achieve these needs while accounting for emergencies in an HMS, this work offers an effective task planning and resource allocation mechanism for MEC. Utilizing a Software-Defined Network (SDN) framework, we provide a priority-aware semi-greedy with genetic algorithm (PSG-GA) method. It assigns tasks different priorities based on their urgency, computed from the data collected by a patient’s smart wearable devices. The process can determine whether a job must be completed locally at the hospital workstations (HW) or in the cloud. The goal is to minimize both the bandwidth cost and the overall task processing time. Existing techniques were compared with the proposed SD-PSGA in terms of average latency, job scheduling effectiveness, execution duration, bandwidth consumption, CPU utilization, and power usage. The testing results are encouraging: SD-PSGA can handle emergencies and fulfill latency-sensitive task requirements at a lower bandwidth cost, and the testing model achieves 97–98% accuracy on nearly 200 tasks.
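
The PSG-GA pseudocode is not reproduced in this listing; the fragment below sketches just the priority-aware semi-greedy placement idea: tasks sorted by an emergency score are placed via a restricted candidate list over the cheapest feasible nodes (hospital workstations vs. cloud). Scores, demands, and node figures are hypothetical, and the genetic refinement stage is omitted.

```python
import random

# Priority-aware semi-greedy placement sketch. Each task carries an emergency
# score derived from wearable readings; higher scores schedule first, and a
# restricted candidate list (RCL) adds the "semi-greedy" randomization.

tasks = [  # (name, emergency score in [0, 1], cpu demand)
    ("ecg_alert", 0.95, 2.0),
    ("daily_report", 0.10, 1.0),
    ("fall_check", 0.70, 1.5),
]
nodes = {
    "HW1":   {"free": 3.0,   "latency": 5},    # hospital workstations
    "HW2":   {"free": 2.0,   "latency": 7},
    "cloud": {"free": 100.0, "latency": 40},   # ample capacity, high latency
}
RCL_SIZE = 2

for name, urgency, cpu in sorted(tasks, key=lambda t: -t[1]):
    fits = [(n, d["latency"]) for n, d in nodes.items() if d["free"] >= cpu]
    fits.sort(key=lambda x: x[1])              # greedy: lowest latency first
    pick, _ = random.choice(fits[:RCL_SIZE])   # semi-greedy: sample the RCL
    nodes[pick]["free"] -= cpu
    print(f"{name} (urgency {urgency}) -> {pick}")
```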

Citations: 0
Hybrid Immune Whale Differential Evolution Optimization (HIWDEO) Based Computation Offloading in MEC for IoT
IF 5.5 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-11-21 DOI: 10.1007/s10723-023-09705-7
Jizhou Li, Qi Wang, Shuai Hu, Ling Li

The adoption of User Equipment (UE) is on the rise, driven by advancements in Mobile Cloud Computing (MCC), Mobile Edge Computing (MEC), the Internet of Things (IoT), and Artificial Intelligence (AI). Among these, MEC stands out as a pivotal aspect of the 5G network. A critical challenge within MEC is task offloading, which involves balancing conflicting factors such as execution time, energy usage, and computation duration. Offloading interdependent tasks poses another significant hurdle. Existing models are single-objective, do not handle task dependencies, and are computationally expensive. Consequently, the Hybrid Immune Whale Differential Evolution Optimization (HIWDEO) algorithm is proposed to offload dependent tasks to the MEC with three objectives: minimizing execution delay and reducing the energy and cost of MEC resources. The standard whale optimization algorithm is combined with Differential Evolution (DE), customized mutation operations, and an immune mechanism to enhance its search strategy. The proposed HIWDEO reduces the energy consumption and overhead incurred by the UE in executing its tasks. Comparison with other optimization approaches shows the superiority of HIWDEO.
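
HIWDEO's exact operators are not spelled out in the abstract; the loop below shows the generic WOA-plus-DE hybrid pattern it names — whale encircling/spiral moves with a DE/rand/1 mutation as a diversity injector — minimizing a stand-in sphere function. Constants and the blending rule are assumptions, not the paper's exact design.

```python
import numpy as np

# Sketch of a whale optimization loop with a DE-style mutation step mixed in.
# The sphere function stands in for the real offloading cost model.

def cost(x):
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
POP, DIM, ITERS, F, CR = 20, 5, 100, 0.5, 0.9
X = rng.uniform(-5, 5, (POP, DIM))
best = X[np.argmin([cost(x) for x in X])].copy()

for it in range(ITERS):
    a = 2 - 2 * it / ITERS                      # WOA's shrinking coefficient
    for i in range(POP):
        r = rng.random(DIM)
        A, C = 2 * a * r - a, 2 * rng.random(DIM)
        if rng.random() < 0.5:                  # encircling / exploring move
            X[i] = best - A * np.abs(C * best - X[i])
        else:                                   # spiral move toward the best
            l = rng.uniform(-1, 1)
            X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        # DE/rand/1 mutation + crossover as the hybrid's diversity injector.
        p, q, s = X[rng.choice(POP, 3, replace=False)]
        trial = np.where(rng.random(DIM) < CR, p + F * (q - s), X[i])
        if cost(trial) < cost(X[i]):            # greedy selection
            X[i] = trial
        if cost(X[i]) < cost(best):
            best = X[i].copy()

print("best cost:", round(cost(best), 6))
```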

Citations: 0